[
{
"msg_contents": "While doing more testing of [1], I realised that it has a bug, which\nreveals a pre-existing problem in transformLockingClause():\n\nCREATE TABLE t1(a int);\nCREATE TABLE t2(a int);\nCREATE TABLE t3(a int);\n\nSELECT 1\nFROM t1 JOIN t2 ON t1.a = t2.a,\n t3 AS unnamed_join\nFOR UPDATE OF unnamed_join;\n\nERROR: FOR UPDATE cannot be applied to a join\n\nwhich is wrong, because it should lock t3.\n\nSimilarly:\n\nSELECT foo.*\nFROM t1 JOIN t2 USING (a) AS foo,\n t3 AS unnamed_join\nFOR UPDATE OF unnamed_join;\n\nERROR: FOR UPDATE cannot be applied to a join\n\n\nThe problem is that the parser has generated a join rte with\neref->aliasname = \"unnamed_join\", and then transformLockingClause()\nfinds that before finding the relation rte for t3 whose user-supplied\nalias is also \"unnamed_join\".\n\nI think the answer is that transformLockingClause() should ignore join\nrtes that don't have a user-supplied alias, since they are not visible\nas relation names in the query (and then [1] will want to do the same\nfor subquery and values rtes without aliases).\n\nExcept, if the rte has a join_using_alias (and no regular alias), I\nthink transformLockingClause() should actually be matching on that and\nthen throwing the above error. So for the following:\n\nSELECT foo.*\nFROM t1 JOIN t2 USING (a) AS foo,\n t3 AS unnamed_join\nFOR UPDATE OF foo;\n\nERROR: relation \"foo\" in FOR UPDATE clause not found in FROM clause\n\nthe error should actually be\n\nERROR: FOR UPDATE cannot be applied to a join\n\n\nSo something like the attached.\n\nThoughts?\n\nRegards,\nDean\n\n[1] https://www.postgresql.org/message-id/flat/CAEZATCUCGCf82=hxd9N5n6xGHPyYpQnxW8HneeH+uP7yNALkWA@mail.gmail.com",
"msg_date": "Wed, 6 Jul 2022 15:12:08 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "transformLockingClause() bug"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> The problem is that the parser has generated a join rte with\n> eref->aliasname = \"unnamed_join\", and then transformLockingClause()\n> finds that before finding the relation rte for t3 whose user-supplied\n> alias is also \"unnamed_join\".\n\n> I think the answer is that transformLockingClause() should ignore join\n> rtes that don't have a user-supplied alias, since they are not visible\n> as relation names in the query (and then [1] will want to do the same\n> for subquery and values rtes without aliases).\n\nAgreed.\n\n> Except, if the rte has a join_using_alias (and no regular alias), I\n> think transformLockingClause() should actually be matching on that and\n> then throwing the above error. So for the following:\n\nYeah, that's clearly an oversight in the join_using_alias patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 06 Jul 2022 10:30:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: transformLockingClause() bug"
}
]
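The fix Dean proposes in the thread above — have transformLockingClause() skip join RTEs whose eref alias was only parser-generated, but still match (and reject) joins that carry a JOIN USING alias — can be sketched as a toy model. All names here (ModelRte, find_lockable_rte, the flag fields) are invented for illustration; this is not PostgreSQL source code.

```c
#include <assert.h>
#include <string.h>

/* Illustrative model only -- not PostgreSQL code. */
typedef enum { RTE_RELATION, RTE_JOIN } RteKind;

typedef struct {
    RteKind kind;
    const char *eref_aliasname;  /* always set; may be parser-generated, e.g. "unnamed_join" */
    int has_user_alias;          /* 1 if the query supplied an alias for this entry */
    int has_join_using_alias;    /* 1 if this is JOIN ... USING (...) AS alias */
} ModelRte;

/* Returns the index of the RTE that FOR UPDATE OF <name> should match,
 * -1 if no match, or -2 when the match is a join (the error case). */
static int
find_lockable_rte(const ModelRte *rtes, int n, const char *name)
{
    for (int i = 0; i < n; i++)
    {
        const ModelRte *rte = &rtes[i];

        /* The proposed fix: ignore join RTEs whose alias was only
         * generated by the parser -- they are not visible as relation
         * names in the query -- UNLESS they carry a JOIN USING alias,
         * which *is* visible and should yield the
         * "FOR UPDATE cannot be applied to a join" error. */
        if (rte->kind == RTE_JOIN &&
            !rte->has_user_alias && !rte->has_join_using_alias)
            continue;

        if (strcmp(rte->eref_aliasname, name) == 0)
            return (rte->kind == RTE_JOIN) ? -2 : i;
    }
    return -1;
}
```

With this skip rule, the first example in the thread (`FOR UPDATE OF unnamed_join` alongside a parser-named join) resolves to the t3 relation RTE instead of erroring, while `FOR UPDATE OF foo` against a `USING (a) AS foo` join still reports that FOR UPDATE cannot be applied to a join.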
[
{
"msg_contents": "Hi hackers,\n\nI found that as of a0ffa88, it's possible to set a PGC_SUSET GUC defined by\na trusted extension as a non-superuser. I've confirmed that this only\naffects v15 and later versions.\n\n\tpostgres=# CREATE ROLE testuser;\n\tCREATE ROLE\n\tpostgres=# GRANT CREATE ON DATABASE postgres TO testuser;\n\tGRANT\n\tpostgres=# SET ROLE testuser;\n\tSET\n\tpostgres=> SET plperl.on_plperl_init = 'test';\n\tSET\n\tpostgres=> CREATE EXTENSION plperl;\n\tCREATE EXTENSION\n\tpostgres=> SELECT setting FROM pg_settings WHERE name = 'plperl.on_plperl_init';\n\t setting \n\t---------\n\t test\n\t(1 row)\n\nOn previous versions, the CREATE EXTENSION command emits the following\nWARNING, and the setting does not take effect:\n\n\tWARNING: permission denied to set parameter \"plperl.on_plperl_init\"\n\nI think the call to superuser_arg() in pg_parameter_aclmask() is causing\nset_config_option() to bypass the normal privilege checks, as\nexecute_extension_script() will have set the user ID to the bootstrap\nsuperuser for trusted extensions like plperl. I don't have a patch or a\nproposal at the moment, but I thought it was worth starting the discussion.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 6 Jul 2022 15:47:27 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_parameter_aclcheck() and trusted extensions"
},
{
"msg_contents": "On Wed, Jul 06, 2022 at 03:47:27PM -0700, Nathan Bossart wrote:\n> I think the call to superuser_arg() in pg_parameter_aclmask() is causing\n> set_config_option() to bypass the normal privilege checks, as\n> execute_extension_script() will have set the user ID to the bootstrap\n> superuser for trusted extensions like plperl. I don't have a patch or a\n> proposal at the moment, but I thought it was worth starting the discussion.\n\nLooks like a bug to me, so I have added an open item assigned to Tom.\n--\nMichael",
"msg_date": "Thu, 7 Jul 2022 10:04:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_parameter_aclcheck() and trusted extensions"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Jul 06, 2022 at 03:47:27PM -0700, Nathan Bossart wrote:\n>> I think the call to superuser_arg() in pg_parameter_aclmask() is causing\n>> set_config_option() to bypass the normal privilege checks, as\n>> execute_extension_script() will have set the user ID to the bootstrap\n>> superuser for trusted extensions like plperl. I don't have a patch or a\n>> proposal at the moment, but I thought it was worth starting the discussion.\n\n> Looks like a bug to me, so I have added an open item assigned to Tom.\n\nYeah. So the fix here seems pretty obvious: rather than applying the\npermissions check using bare GetUserId(), we need to remember the role\nOID that originally applied the setting, and use that.\n\nThe problem with this sketch is that\n\n(1) we need an OID field in struct config_generic, as well as GucStack,\nwhich means an ABI break for any extensions that look directly at GUC\nrecords. There probably aren't many, but ...\n\n(2) we need an additional parameter to set_config_option, which\nagain is a compatibility break for anything calling that directly.\nThere surely are such callers --- our own extensions do it.\n\nCan we get away with doing these things in beta3? We could avoid\nbreaking (2) in the v15 branch by making set_config_option into\na wrapper around set_config_option_ext, or something like that;\nbut the problem with struct config_generic seems inescapable.\n(Putting the new field at the end would solve nothing, since\nconfig_generic is embedded into larger structs.)\n\nThe alternative to API/ABI breaks seems to be to revert the\nfeature, which would be sad.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 07 Jul 2022 12:41:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_parameter_aclcheck() and trusted extensions"
},
{
"msg_contents": "On Thu, Jul 07, 2022 at 10:04:18AM +0900, Michael Paquier wrote:\n> On Wed, Jul 06, 2022 at 03:47:27PM -0700, Nathan Bossart wrote:\n>> I think the call to superuser_arg() in pg_parameter_aclmask() is causing\n>> set_config_option() to bypass the normal privilege checks, as\n>> execute_extension_script() will have set the user ID to the bootstrap\n>> superuser for trusted extensions like plperl. I don't have a patch or a\n>> proposal at the moment, but I thought it was worth starting the discussion.\n> \n> Looks like a bug to me, so I have added an open item assigned to Tom.\n\nThanks. I've been thinking about this one a bit. For simple cases like\nplperl, it would be easy enough to temporarily revert the superuser switch\nwhen calling _PG_init() or one of the DefineCustomXXXVariable functions.\nUnfortunately, I think there are more complicated scenarios. For example,\nwhat role should pg_parameter_aclmask() use when a trusted extension script\nloads a library after SET ROLE? The original user might not ordinarily be\nable to assume this role, so the trusted extension script could still be a\nway to set parameters you don't have privileges for. Should we just always\nuse the role that's calling CREATE EXTENSION?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 7 Jul 2022 09:49:21 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_parameter_aclcheck() and trusted extensions"
},
{
"msg_contents": "On Thu, Jul 07, 2022 at 12:41:00PM -0400, Tom Lane wrote:\n> Yeah. So the fix here seems pretty obvious: rather than applying the\n> permissions check using bare GetUserId(), we need to remember the role\n> OID that originally applied the setting, and use that.\n\nPlease ignore my previous message. This makes sense.\n\n> The problem with this sketch is that\n> \n> (1) we need an OID field in struct config_generic, as well as GucStack,\n> which means an ABI break for any extensions that look directly at GUC\n> records. There probably aren't many, but ...\n> \n> (2) we need an additional parameter to set_config_option, which\n> again is a compatibility break for anything calling that directly.\n> There surely are such callers --- our own extensions do it.\n> \n> Can we get away with doing these things in beta3? We could avoid\n> breaking (2) in the v15 branch by making set_config_option into\n> a wrapper around set_config_option_ext, or something like that;\n> but the problem with struct config_generic seems inescapable.\n> (Putting the new field at the end would solve nothing, since\n> config_generic is embedded into larger structs.)\n> \n> The alternative to API/ABI breaks seems to be to revert the\n> feature, which would be sad.\n\nI personally lean more towards the compatibility break than reverting the\nfeature. There are still a couple of months before 15.0, and I suspect it\nwon't be too difficult to fix any extensions that break because of this.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 7 Jul 2022 11:40:01 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_parameter_aclcheck() and trusted extensions"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Thu, Jul 07, 2022 at 12:41:00PM -0400, Tom Lane wrote:\n>> Can we get away with doing these things in beta3? We could avoid\n>> breaking (2) in the v15 branch by making set_config_option into\n>> a wrapper around set_config_option_ext, or something like that;\n>> but the problem with struct config_generic seems inescapable.\n\n> I personally lean more towards the compatibility break than reverting the\n> feature. There are still a couple of months before 15.0, and I suspect it\n> won't be too difficult to fix any extensions that break because of this.\n\nI checked http://codesearch.debian.net and found only a couple of\nextensions that #include guc_tables.h at all, so I'm satisfied\nthat the struct config_generic ABI issue is tolerable. Recompiling\nafter beta3 would be enough to fix any problem there, and it's\nhard to believe that anyone is trying to ship production-ready\nv15 extensions already.\n\nThe aspect that is a bit more debatable is whether to trouble with\na set_config_option() wrapper to avoid the API break in v15.\nI think we'd still be making people deal with an API break in v16,\nso making them do it this year rather than next doesn't seem like\na big deal ... but maybe someone wants to argue it's too late\nfor API breaks in v15?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 07 Jul 2022 15:00:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_parameter_aclcheck() and trusted extensions"
},
{
"msg_contents": "On 7/7/22 15:00, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> On Thu, Jul 07, 2022 at 12:41:00PM -0400, Tom Lane wrote:\n>>> Can we get away with doing these things in beta3? We could avoid\n>>> breaking (2) in the v15 branch by making set_config_option into\n>>> a wrapper around set_config_option_ext, or something like that;\n>>> but the problem with struct config_generic seems inescapable.\n> \n>> I personally lean more towards the compatibility break than reverting the\n>> feature. There are still a couple of months before 15.0, and I suspect it\n>> won't be too difficult to fix any extensions that break because of this.\n> \n> I checked http://codesearch.debian.net and found only a couple of\n> extensions that #include guc_tables.h at all, so I'm satisfied\n> that the struct config_generic ABI issue is tolerable. Recompiling\n> after beta3 would be enough to fix any problem there, and it's\n> hard to believe that anyone is trying to ship production-ready\n> v15 extensions already.\n\nThere are a handful here as well:\n\nhttps://github.com/search?q=guc_tables.h+and+PG_MODULE_MAGIC&type=Code&ref=advsearch&l=&l=\n\nBut as one of the affected authors I would say recompiling after beta3 \nis fine.\n\n> The aspect that is a bit more debatable is whether to trouble with\n> a set_config_option() wrapper to avoid the API break in v15.\n> I think we'd still be making people deal with an API break in v16,\n> so making them do it this year rather than next doesn't seem like\n> a big deal ... but maybe someone wants to argue it's too late\n> for API breaks in v15?\n\nWell there are other API breaks that affect me in v15, and to be honest \nI have done little except keep an eye out for the ones likely to affect \nextensions I maintain so far, so may as well inflict the pain now as \nlater ¯\\_(ツ)_/¯\n\nJoe\n-- \nJoe Conway\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 7 Jul 2022 15:43:03 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_parameter_aclcheck() and trusted extensions"
},
{
"msg_contents": "On Thu, Jul 07, 2022 at 03:43:03PM -0400, Joe Conway wrote:\n> On 7/7/22 15:00, Tom Lane wrote:\n>> The aspect that is a bit more debatable is whether to trouble with\n>> a set_config_option() wrapper to avoid the API break in v15.\n>> I think we'd still be making people deal with an API break in v16,\n>> so making them do it this year rather than next doesn't seem like\n>> a big deal ... but maybe someone wants to argue it's too late\n>> for API breaks in v15?\n> \n> Well there are other API breaks that affect me in v15, and to be honest I\n> have done little except keep an eye out for the ones likely to affect\n> extensions I maintain so far, so may as well inflict the pain now as later\n> ¯\\_(ツ)_/¯\n\nWith my RMT and hacker hat on, I see no reason to not break ABI or\nAPIs while we are still in beta, as long as the GA result is as best\nas we can make it. I have not looked at the reasoning behind the\nissue, but if you think that this feature will work better in the long\nterm by having an extra field to track the role OID in one of the GUC\nstructs or in one of its API arguments, that's fine by me.\n\nIf this requires more work, a revert can of course be discussed, but I\nam not getting that this is really necessary here. This would be the\nlast option to consider.\n--\nMichael",
"msg_date": "Fri, 8 Jul 2022 15:09:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_parameter_aclcheck() and trusted extensions"
},
{
"msg_contents": "On Fri, Jul 8, 2022 at 1:09 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Jul 07, 2022 at 03:43:03PM -0400, Joe Conway wrote:\n> > On 7/7/22 15:00, Tom Lane wrote:\n> >> The aspect that is a bit more debatable is whether to trouble with\n> >> a set_config_option() wrapper to avoid the API break in v15.\n> >> I think we'd still be making people deal with an API break in v16,\n> >> so making them do it this year rather than next doesn't seem like\n> >> a big deal ... but maybe someone wants to argue it's too late\n> >> for API breaks in v15?\n> >\n> > Well there are other API breaks that affect me in v15, and to be honest I\n> > have done little except keep an eye out for the ones likely to affect\n> > extensions I maintain so far, so may as well inflict the pain now as later\n> > ¯\\_(ツ)_/¯\n>\n> With my RMT and hacker hat on, I see no reason to not break ABI or\n> APIs while we are still in beta, as long as the GA result is as best\n> as we can make it. I have not looked at the reasoning behind the\n> issue, but if you think that this feature will work better in the long\n> term by having an extra field to track the role OID in one of the GUC\n> structs or in one of its API arguments, that's fine by me.\n>\n> If this requires more work, a revert can of course be discussed, but I\n> am not getting that this is really necessary here. This would be the\n> last option to consider.\n\nThe RMT has discussed this item further, and we agree an ABI break is\nacceptable for resolving this issue.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 14 Jul 2022 11:48:35 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_parameter_aclcheck() and trusted extensions"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> The RMT has discussed this item further, and we agree an ABI break is\n> acceptable for resolving this issue.\n\nCool, I'll produce a patch soon.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Jul 2022 00:50:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_parameter_aclcheck() and trusted extensions"
},
{
"msg_contents": "I wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> Looks like a bug to me, so I have added an open item assigned to Tom.\n\n> Yeah. So the fix here seems pretty obvious: rather than applying the\n> permissions check using bare GetUserId(), we need to remember the role\n> OID that originally applied the setting, and use that.\n\nHere's a draft patch for that. I initially ran around and changed all\nthe set_config_option callers as I threatened before, but as I did it\nI could not help observing that they were all changing in exactly the\nsame way: basically, they were passing GetUserId() if the GucContext\nis PGC_S_SESSION and BOOTSTRAP_SUPERUSERID otherwise. Not counting\nguc.c internal call sites, there is a grand total of one caller that\nfails to fit the pattern. So that brought me around to liking the idea\nof keeping set_config_option's API stable by making it a thin wrapper\naround another function with an explicit role argument. The result,\nattached, poses far less of an API/ABI hazard than I was anticipating.\nIf you're not poking into the GUC tables you have little to fear.\n\nMost of the bulk of this is mechanical changes to pass the source\nrole around properly in guc.c's data structures. That's all basically\ncopy-and-paste from the code to track the source context (scontext).\n\nI noted something that ought to be looked at separately:\nvalidate_option_array_item() seems like it needs to be taught about\ngrantable permissions on GUCs. I think that right now it may report\npermissions failures in some cases where it should succeed.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 14 Jul 2022 16:02:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_parameter_aclcheck() and trusted extensions"
},
{
"msg_contents": "On Thu, Jul 14, 2022 at 04:02:30PM -0400, Tom Lane wrote:\n> Here's a draft patch for that. I initially ran around and changed all\n> the set_config_option callers as I threatened before, but as I did it\n> I could not help observing that they were all changing in exactly the\n> same way: basically, they were passing GetUserId() if the GucContext\n> is PGC_S_SESSION and BOOTSTRAP_SUPERUSERID otherwise. Not counting\n> guc.c internal call sites, there is a grand total of one caller that\n> fails to fit the pattern. So that brought me around to liking the idea\n> of keeping set_config_option's API stable by making it a thin wrapper\n> around another function with an explicit role argument. The result,\n> attached, poses far less of an API/ABI hazard than I was anticipating.\n> If you're not poking into the GUC tables you have little to fear.\n> \n> Most of the bulk of this is mechanical changes to pass the source\n> role around properly in guc.c's data structures. That's all basically\n> copy-and-paste from the code to track the source context (scontext).\n\nAt first glance, this looks pretty reasonable to me. \n\n> I noted something that ought to be looked at separately:\n> validate_option_array_item() seems like it needs to be taught about\n> grantable permissions on GUCs. I think that right now it may report\n> permissions failures in some cases where it should succeed.\n\nWhich cases do you think might be inappropriately reporting permissions\nfailures? It looked to me like this stuff was mostly used for\npg_db_role_setting, which wouldn't be impacted by the current set of\ngrantable GUC permissions. Is the idea that you should be able to do ALTER\nROLE SET for GUCs that you have SET permissions for?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 14 Jul 2022 14:52:25 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_parameter_aclcheck() and trusted extensions"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Thu, Jul 14, 2022 at 04:02:30PM -0400, Tom Lane wrote:\n>> I noted something that ought to be looked at separately:\n>> validate_option_array_item() seems like it needs to be taught about\n>> grantable permissions on GUCs. I think that right now it may report\n>> permissions failures in some cases where it should succeed.\n\n> Which cases do you think might be inappropriately reporting permissions\n> failures? It looked to me like this stuff was mostly used for\n> pg_db_role_setting, which wouldn't be impacted by the current set of\n> grantable GUC permissions. Is the idea that you should be able to do ALTER\n> ROLE SET for GUCs that you have SET permissions for?\n\nWell, that's what I'm wondering. Obviously that wouldn't *alone* be\nenough permissions, but it seems like it could be a component of it.\nSpecifically, this bit:\n\n\t/* manual permissions check so we can avoid an error being thrown */\n\tif (gconf->context == PGC_USERSET)\n\t\t /* ok */ ;\n\telse if (gconf->context == PGC_SUSET && superuser())\n\t\t /* ok */ ;\n\telse if (skipIfNoPermissions)\n\t\treturn false;\n\nseems like it's trying to duplicate what set_config_option would do,\nand it's now missing a component of that. If it shouldn't check\nper-GUC permissions along with superuser(), we should add a comment\nexplaining why not.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Jul 2022 18:03:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_parameter_aclcheck() and trusted extensions"
},
{
"msg_contents": "On Thu, Jul 14, 2022 at 06:03:45PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> On Thu, Jul 14, 2022 at 04:02:30PM -0400, Tom Lane wrote:\n>>> I noted something that ought to be looked at separately:\n>>> validate_option_array_item() seems like it needs to be taught about\n>>> grantable permissions on GUCs. I think that right now it may report\n>>> permissions failures in some cases where it should succeed.\n> \n>> Which cases do you think might be inappropriately reporting permissions\n>> failures? It looked to me like this stuff was mostly used for\n>> pg_db_role_setting, which wouldn't be impacted by the current set of\n>> grantable GUC permissions. Is the idea that you should be able to do ALTER\n>> ROLE SET for GUCs that you have SET permissions for?\n> \n> Well, that's what I'm wondering. Obviously that wouldn't *alone* be\n> enough permissions, but it seems like it could be a component of it.\n> Specifically, this bit:\n> \n> \t/* manual permissions check so we can avoid an error being thrown */\n> \tif (gconf->context == PGC_USERSET)\n> \t\t /* ok */ ;\n> \telse if (gconf->context == PGC_SUSET && superuser())\n> \t\t /* ok */ ;\n> \telse if (skipIfNoPermissions)\n> \t\treturn false;\n> \n> seems like it's trying to duplicate what set_config_option would do,\n> and it's now missing a component of that. If it shouldn't check\n> per-GUC permissions along with superuser(), we should add a comment\n> explaining why not.\n\nI looked into this a bit closer. I found that having SET permissions on a\nGUC seems to allow you to ALTER ROLE SET it to others.\n\n\tpostgres=# CREATE ROLE test CREATEROLE;\n\tCREATE ROLE\n\tpostgres=# CREATE ROLE other;\n\tCREATE ROLE\n\tpostgres=# GRANT SET ON PARAMETER zero_damaged_pages TO test;\n\tGRANT\n\tpostgres=# SET ROLE test;\n\tSET\n\tpostgres=> ALTER ROLE other SET zero_damaged_pages = 'on';\n\tALTER ROLE\n\tpostgres=> SELECT * FROM pg_db_role_setting;\n\t setdatabase | setrole | setconfig \n\t-------------+---------+-------------------------\n\t 0 | 16385 | {zero_damaged_pages=on}\n\t(1 row)\n\nHowever, ALTER ROLE RESET ALL will be blocked, while resetting only the\nindividual GUC will go through.\n\n\tpostgres=> ALTER ROLE other RESET ALL;\n\tALTER ROLE\n\tpostgres=> SELECT * FROM pg_db_role_setting;\n\t setdatabase | setrole | setconfig \n\t-------------+---------+-------------------------\n\t 0 | 16385 | {zero_damaged_pages=on}\n\t(1 row)\n\n\tpostgres=> ALTER ROLE other RESET zero_damaged_pages;\n\tALTER ROLE\n\tpostgres=> SELECT * FROM pg_db_role_setting;\n\t setdatabase | setrole | setconfig \n\t-------------+---------+-----------\n\t(0 rows)\n\nI think this is because GUCArrayReset() is the only caller of\nvalidate_option_array_item() that sets skipIfNoPermissions to true. The\nothers fall through to set_config_option(), which does a\npg_parameter_aclcheck(). So, you are right.\n\nRegarding whether SET privileges should be enough to allow ALTER ROLE SET,\nI'm not sure I have an opinion yet. You would need WITH GRANT OPTION to be\nable to grant SET to that role, but that's a bit different than altering\nthe setting for the role. You'll already have privileges to alter the role\n(e.g., CREATEROLE), so requiring extra permissions to set GUCs on roles\nfeels like it might be excessive. But there might be a good argument for\nit.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 14 Jul 2022 15:57:35 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_parameter_aclcheck() and trusted extensions"
},
{
"msg_contents": "On Thu, Jul 14, 2022 at 03:57:35PM -0700, Nathan Bossart wrote:\n> However, ALTER ROLE RESET ALL will be blocked, while resetting only the\n> individual GUC will go through.\n> \n> \tpostgres=> ALTER ROLE other RESET ALL;\n> \tALTER ROLE\n> \tpostgres=> SELECT * FROM pg_db_role_setting;\n> \t setdatabase | setrole | setconfig \n> \t-------------+---------+-------------------------\n> \t 0 | 16385 | {zero_damaged_pages=on}\n> \t(1 row)\n> \n> \tpostgres=> ALTER ROLE other RESET zero_damaged_pages;\n> \tALTER ROLE\n> \tpostgres=> SELECT * FROM pg_db_role_setting;\n> \t setdatabase | setrole | setconfig \n> \t-------------+---------+-----------\n> \t(0 rows)\n> \n> I think this is because GUCArrayReset() is the only caller of\n> validate_option_array_item() that sets skipIfNoPermissions to true. The\n> others fall through to set_config_option(), which does a\n> pg_parameter_aclcheck(). So, you are right.\n\nHere's a small patch that seems to fix this case. However, I wonder if a\nbetter way to fix this is to provide a way to stop set_config_option() from\nthrowing errors (e.g., setting elevel to -1). That way, we could remove\nthe manual permissions checks in favor of always using the real ones, which\nmight help prevent similar bugs in the future.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 18 Jul 2022 14:56:49 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_parameter_aclcheck() and trusted extensions"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n>> I think this is because GUCArrayReset() is the only caller of\n>> validate_option_array_item() that sets skipIfNoPermissions to true. The\n>> others fall through to set_config_option(), which does a\n>> pg_parameter_aclcheck(). So, you are right.\n\n> Here's a small patch that seems to fix this case.\n\nYeah, this is more or less what I was thinking of.\n\n> However, I wonder if a\n> better way to fix this is to provide a way to stop set_config_option() from\n> throwing errors (e.g., setting elevel to -1). That way, we could remove\n> the manual permissions checks in favor of always using the real ones, which\n> might help prevent similar bugs in the future.\n\nI thought about that for a bit. You could almost do it today if you\npassed elevel == DEBUG5; the ensuing log chatter for failures would be\ndown in the noise compared to everything else you would see with\nmin_messages cranked down that far. However,\n\n(1) As things stand, set_config_option()'s result does not distinguish\nno-permissions failures from other problems, so we'd need some rejiggering\nof its API anyway.\n\n(2) As you mused upthread, it's possible that ACL_SET isn't what we should\nbe checking here, but some more-specific privilege. So I'd just as soon\nkeep this privilege check separate from set_config_option's.\n\nI'll push ahead with fixing it like this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Jul 2022 16:27:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_parameter_aclcheck() and trusted extensions"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 04:27:08PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> However, I wonder if a\n>> better way to fix this is to provide a way to stop set_config_option() from\n>> throwing errors (e.g., setting elevel to -1). That way, we could remove\n>> the manual permissions checks in favor of always using the real ones, which\n>> might help prevent similar bugs in the future.\n> \n> I thought about that for a bit. You could almost do it today if you\n> passed elevel == DEBUG5; the ensuing log chatter for failures would be\n> down in the noise compared to everything else you would see with\n> min_messages cranked down that far. However,\n> \n> (1) As things stand, set_config_option()'s result does not distinguish\n> no-permissions failures from other problems, so we'd need some rejiggering\n> of its API anyway.\n> \n> (2) As you mused upthread, it's possible that ACL_SET isn't what we should\n> be checking here, but some more-specific privilege. So I'd just as soon\n> keep this privilege check separate from set_config_option's.\n\nI think we'd also need to keep the manual permissions checks for\nplaceholders, so it wouldn't save much, anyway.\n\n> I'll push ahead with fixing it like this.\n\nSounds good.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 19 Jul 2022 14:41:42 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_parameter_aclcheck() and trusted extensions"
}
] |
[
{
"msg_contents": "Hi.\n\nIMO the comment (\"If no parameter given, ...\") is a bit misleading:\n\n====\n\n(gdb) list\n108 defGetBoolean(DefElem *def)\n109 {\n110 /*\n111 * If no parameter given, assume \"true\" is meant.\n112 */\n113 if (def->arg == NULL)\n114 return true;\n115\n116 /*\n117 * Allow 0, 1, \"true\", \"false\", \"on\", \"off\"\n(gdb) p *def\n$9 = {type = T_DefElem, defnamespace = 0x0, defname = 0x1c177a8\n\"copy_data\", arg = 0x0,\n defaction = DEFELEM_UNSPEC, location = 93}\n(gdb)\n\n\n====\n\nReally this code is for the case when there *was* a parameter given\n(e.g. \"copy_data\" in my example above) but when there is no parameter\n*value* given.\n\nSuggested comment fix:\nBEFORE\nIf no parameter given, assume \"true\" is meant.\nAFTER\nIf no parameter value given, assume \"true\" is meant.\n\n~~\n\nAlthough it seems a trivial point, the motivation is that the above\ncode has been adapted (cut/paste) by multiple other WIP patches [1][2]\nthat I'm aware of, and so this original (misleading) comment is now\nspreading to other places.\n\nPSA patch to tweak both the original comment and another place it\nalready spread to.\n\n------\n[1] https://www.postgresql.org/message-id/flat/CALDaNm0gwjY_4HFxvvty01BOT01q_fJLKQ3pWP9%3D9orqubhjcQ%40mail.gmail.com\n+static char\n+defGetStreamingMode(DefElem *def)\n\n[2] https://www.postgresql.org/message-id/flat/CAHut%2BPs%2B4iLzJGkPFEatv%3D%2Baa6NUB38-WT050RFKeJqhdcLaGA%40mail.gmail.com#6d43277cbb074071b8e9602ff8be7e41\n+static CopyData\n+DefGetCopyData(DefElem *def)\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Thu, 7 Jul 2022 09:53:01 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "defGetBoolean - Fix comment"
},
{
"msg_contents": "On Thu, Jul 07, 2022 at 09:53:01AM +1000, Peter Smith wrote:\n> Really this code is for the case when there *was* a parameter given\n> (e.g. \"copy_data\" in my example above) but when there is no parameter\n> *value* given.\n> \n> Suggested comment fix:\n> BEFORE\n> If no parameter given, assume \"true\" is meant.\n> AFTER\n> If no parameter value given, assume \"true\" is meant.\n\nStill, I think that your adjustment is right, as the check is, indeed,\non the DefElem's value*. Or you could just say \"If no value given\".\n--\nMichael",
"msg_date": "Thu, 7 Jul 2022 09:54:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: defGetBoolean - Fix comment"
},
{
"msg_contents": "On Thu, Jul 07, 2022 at 09:54:24AM +0900, Michael Paquier wrote:\n> Still, I think that your adjustment is right, as the check is, indeed,\n> on the DefElem's value*. Or you could just say \"If no value given\".\n\nPeter, I have applied something using your original wording. This is\na minor issue, but I did not see my suggestion as being better than\nyours.\n--\nMichael",
"msg_date": "Mon, 11 Jul 2022 11:09:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: defGetBoolean - Fix comment"
},
{
"msg_contents": "On Mon, Jul 11, 2022 at 12:09 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Jul 07, 2022 at 09:54:24AM +0900, Michael Paquier wrote:\n> > Still, I think that your adjustment is right, as the check is, indeed,\n> > on the DefElem's value*. Or you could just say \"If no value given\".\n>\n> Peter, I have applied something using your original wording. This is\n> a minor issue, but I did not see my suggestion as being better than\n> yours.\n\nThanks for pushing it.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 11 Jul 2022 12:17:31 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: defGetBoolean - Fix comment"
}
] |
[
{
"msg_contents": "Hello.\n\nWhile working on a patch, I met a function with the signature of:\n\n> DropRelFileLocatorBuffers(SMgrRelation smgr_reln, ForkNumber *forkNum,\n> \t\t\t\t\t\t int nforks, BlockNumber *firstDelBlock)\n\nIt was DropRelFileNodeBuffers(), which means \"Drop buffers for a\nRelFileNode\", where RelFileNode means a storage or a (set of) file(s).\nIn that sense, \"Drop buffers for a RelFile*Locator*\" sounds a bit off\nto me. Isn't it better change the name? RelFileLocator doesn't look\nto be fit here.\n\n\"DropRelFileBuffers\" works better at least for me.. If it does, some\nother functions need the same amendment.\n\nThought?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 07 Jul 2022 17:44:36 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "DropRelFileLocatorBuffers"
},
{
"msg_contents": "On Thu, Jul 7, 2022 at 4:44 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> While working on a patch, I met a function with the signature of:\n>\n> > DropRelFileLocatorBuffers(SMgrRelation smgr_reln, ForkNumber *forkNum,\n> > int nforks, BlockNumber *firstDelBlock)\n>\n> It was DropRelFileNodeBuffers(), which means \"Drop buffers for a\n> RelFileNode\", where RelFileNode means a storage or a (set of) file(s).\n> In that sense, \"Drop buffers for a RelFile*Locator*\" sounds a bit off\n> to me. Isn't it better change the name? RelFileLocator doesn't look\n> to be fit here.\n>\n> \"DropRelFileBuffers\" works better at least for me.. If it does, some\n> other functions need the same amendment.\n\nHave you looked at the commit message for\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=b0a55e43299c4ea2a9a8c757f9c26352407d0ccc\n?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 Jul 2022 08:36:14 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DropRelFileLocatorBuffers"
},
{
"msg_contents": "At Thu, 7 Jul 2022 08:36:14 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Thu, Jul 7, 2022 at 4:44 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > While working on a patch, I met a function with the signature of:\n> >\n> > > DropRelFileLocatorBuffers(SMgrRelation smgr_reln, ForkNumber *forkNum,\n> > > int nforks, BlockNumber *firstDelBlock)\n> >\n> > It was DropRelFileNodeBuffers(), which means \"Drop buffers for a\n> > RelFileNode\", where RelFileNode means a storage or a (set of) file(s).\n> > In that sense, \"Drop buffers for a RelFile*Locator*\" sounds a bit off\n> > to me. Isn't it better change the name? RelFileLocator doesn't look\n> > to be fit here.\n> >\n> > \"DropRelFileBuffers\" works better at least for me.. If it does, some\n> > other functions need the same amendment.\n> \n> Have you looked at the commit message for\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=b0a55e43299c4ea2a9a8c757f9c26352407d0ccc\n> ?\n\nThanks for the reply.\n\nYes if it is \"RelFileLocator when we're talking about all the things\nthat are needed to locate a relation's files on disk,\". I read this as\nRelFileLocator is a kind of pointer to files. I thought RelFileNode\nas a pointer as well as the storage itself. The difference of the two\nfor me could be analogized as the difference between \"DropFileBuffers\"\nand \"DropFileNameBuffers\". I think the latter is usually spelled as\n\"DropBuffersByFileNames\" or such.\n\nThough, I don't want to keep fighting any further if others don't feel\nit uneasy ;)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 08 Jul 2022 09:22:38 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DropRelFileLocatorBuffers"
},
{
"msg_contents": "On Thu, Jul 7, 2022 at 8:22 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> Thanks for the reply.\n>\n> Yes if it is \"RelFileLocator when we're talking about all the things\n> that are needed to locate a relation's files on disk,\". I read this as\n> RelFileLocator is a kind of pointer to files. I thought RelFileNode\n> as a pointer as well as the storage itself. The difference of the two\n> for me could be analogized as the difference between \"DropFileBuffers\"\n> and \"DropFileNameBuffers\". I think the latter is usually spelled as\n> \"DropBuffersByFileNames\" or such.\n>\n> Though, I don't want to keep fighting any further if others don't feel\n> it uneasy ;)\n\nI wouldn't mind if we took \"Locator\" out of the name of that function\nand just called it DropRelFileBuffers or DropRelationBuffers or\nsomething. That would be shorter, and maybe more intuitive.\n\nI wasn't quite able to understand whether your original question was\nprompted by having missed the commit in question, or whether you\ndisagreed with it, so that's why I asked whether you had seen the\ncommit message.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 Jul 2022 21:13:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DropRelFileLocatorBuffers"
},
{
"msg_contents": "At Thu, 7 Jul 2022 21:13:59 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Thu, Jul 7, 2022 at 8:22 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > Thanks for the reply.\n> >\n> > Yes if it is \"RelFileLocator when we're talking about all the things\n> > that are needed to locate a relation's files on disk,\". I read this as\n> > RelFileLocator is a kind of pointer to files. I thought RelFileNode\n> > as a pointer as well as the storage itself. The difference of the two\n> > for me could be analogized as the difference between \"DropFileBuffers\"\n> > and \"DropFileNameBuffers\". I think the latter is usually spelled as\n> > \"DropBuffersByFileNames\" or such.\n> >\n> > Though, I don't want to keep fighting any further if others don't feel\n> > it uneasy ;)\n> \n> I wouldn't mind if we took \"Locator\" out of the name of that function\n> and just called it DropRelFileBuffers or DropRelationBuffers or\n> something. That would be shorter, and maybe more intuitive.\n\nThanks. Will propose that.\n\n> I wasn't quite able to understand whether your original question was\n> prompted by having missed the commit in question, or whether you\n> disagreed with it, so that's why I asked whether you had seen the\n> commit message.\n\nThe commit message is very helpful to understand the aim of the patch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 08 Jul 2022 11:52:45 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DropRelFileLocatorBuffers"
},
{
"msg_contents": "At Fri, 08 Jul 2022 11:52:45 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > I wouldn't mind if we took \"Locator\" out of the name of that function\n> > and just called it DropRelFileBuffers or DropRelationBuffers or\n> > something. That would be shorter, and maybe more intuitive.\n> \n> Thanks. Will propose that.\n\nI thought for a moment that \"Relation\" sounded better but that naming\nis confusing in bufmgr.c, where functions take Relation and those take\nRelFileLocator exist together. So the (second) attached introduces\n\"RelFile\" to represent RelFileNode excluding RelFileLocator.\n\nThe function CreateAndCopyRelationData exists since before b0a55e4329\nbut renamed since it takes RelFileLocators.\n\nWhile working on this, I found that the following coment is wrong.\n\n *\t\tFlushRelationsAllBuffers\n *\n *\t\tThis function flushes out of the buffer pool all the pages of all\n *\t\tforks of the specified smgr relations. It's equivalent to calling\n *\t\tFlushRelationBuffers once per fork per relation. The relations are\n *\t\tassumed not to use local buffers.\n\nIt is equivalent to calling FlushRelationBuffers \"per relation\". This\nis attached as the first patch, which could be thought as a separate\npatch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 08 Jul 2022 14:59:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DropRelFileLocatorBuffers"
},
{
"msg_contents": "On Fri, Jul 8, 2022 at 1:59 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> I thought for a moment that \"Relation\" sounded better but that naming\n> is confusing in bufmgr.c, where functions take Relation and those take\n> RelFileLocator exist together. So the (second) attached introduces\n> \"RelFile\" to represent RelFileNode excluding RelFileLocator.\n>\n> The function CreateAndCopyRelationData exists since before b0a55e4329\n> but renamed since it takes RelFileLocators.\n\nI'm not very sold on this. I think that the places where you've\nreplaced RelFileLocator with just RelFile in various functions might\nbe an improvement, but the places where you've replaced Relation with\nRelFile seem to me to be worse. I don't really see that there's\nanything wrong with names like CreateAndCopyRelationData or\nFlushRelationsAllBuffers, and in general I prefer function names that\nare made up of whole words rather than parts of words.\n\n> While working on this, I found that the following coment is wrong.\n>\n> * FlushRelationsAllBuffers\n> *\n> * This function flushes out of the buffer pool all the pages of all\n> * forks of the specified smgr relations. It's equivalent to calling\n> * FlushRelationBuffers once per fork per relation. The relations are\n> * assumed not to use local buffers.\n>\n> It is equivalent to calling FlushRelationBuffers \"per relation\". This\n> is attached as the first patch, which could be thought as a separate\n> patch.\n\nI committed this.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 11 Jul 2022 13:51:12 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DropRelFileLocatorBuffers"
},
{
"msg_contents": "At Mon, 11 Jul 2022 13:51:12 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Fri, Jul 8, 2022 at 1:59 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > The function CreateAndCopyRelationData exists since before b0a55e4329\n> > but renamed since it takes RelFileLocators.\n> \n> I'm not very sold on this. I think that the places where you've\n> replaced RelFileLocator with just RelFile in various functions might\n> be an improvement, but the places where you've replaced Relation with\n> RelFile seem to me to be worse. I don't really see that there's\n> anything wrong with names like CreateAndCopyRelationData or\n> FlushRelationsAllBuffers, and in general I prefer function names that\n> are made up of whole words rather than parts of words.\n\nFair enough. My first thought was that Relation can represent both\nRelation and \"RelFile\" but in the patch I choosed to make distinction\nbetween them by associating respectively to the types of the primary\nparameter (Relation or RelFileLocator). So I'm fine with Relation\ninstead since I see it more intuitive than RelFileLocator in the\nfunction names.\n\nThe attached is that.\n\n> > While working on this, I found that the following coment is wrong.\n..\n> I committed this.\n\nThanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 12 Jul 2022 11:25:31 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DropRelFileLocatorBuffers"
},
{
"msg_contents": "On Tue, Jul 12, 2022 at 7:55 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 11 Jul 2022 13:51:12 -0400, Robert Haas <robertmhaas@gmail.com> wrote in\n> > On Fri, Jul 8, 2022 at 1:59 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > The function CreateAndCopyRelationData exists since before b0a55e4329\n> > > but renamed since it takes RelFileLocators.\n> >\n> > I'm not very sold on this. I think that the places where you've\n> > replaced RelFileLocator with just RelFile in various functions might\n> > be an improvement, but the places where you've replaced Relation with\n> > RelFile seem to me to be worse. I don't really see that there's\n> > anything wrong with names like CreateAndCopyRelationData or\n> > FlushRelationsAllBuffers, and in general I prefer function names that\n> > are made up of whole words rather than parts of words.\n>\n> Fair enough. My first thought was that Relation can represent both\n> Relation and \"RelFile\" but in the patch I choosed to make distinction\n> between them by associating respectively to the types of the primary\n> parameter (Relation or RelFileLocator). So I'm fine with Relation\n> instead since I see it more intuitive than RelFileLocator in the\n> function names.\n>\n> The attached is that.\n\nI think the naming used in your patch looks better to me. So +1 for the change.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 12 Jul 2022 09:37:29 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DropRelFileLocatorBuffers"
},
{
"msg_contents": "On Tue, Jul 12, 2022 at 12:07 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I think the naming used in your patch looks better to me. So +1 for the change.\n\nCommitted.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 12 Jul 2022 10:30:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DropRelFileLocatorBuffers"
},
{
"msg_contents": "At Tue, 12 Jul 2022 10:30:20 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Tue, Jul 12, 2022 at 12:07 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I think the naming used in your patch looks better to me. So +1 for the change.\n> \n> Committed.\n\nThank you, Robert and Dilip.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 13 Jul 2022 10:35:24 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DropRelFileLocatorBuffers"
}
] |
[
{
"msg_contents": "Hi,\n\nSince an exclusive backup method was dropped in v15, in v15 or later, we need to create backup_label and tablespace_map files from the result of pg_backup_stop() when taking a base backup using low level backup API. One issue when doing this is that; there is no simple way to create those files from two columns \"labelfile\" and \"spcmapfile\" that pg_backup_stop() returns if we execute it via psql. Probaby we need to store those columns in a temporary file and run some OS commands or script to separate that file into backup_label and tablespace_map. This is not simple way, and which would prevent users from migrating their backup scripts using psql from an exclusive backup method to non-exclusive one, I'm afraid.\n\nTo enable us to do that more easily, how about adding the pg_backup_label() function that returns backup_label and tablespace_map? I'm thinking to make this function available just after pg_backup_start() finishes, also even after pg_backup_stop() finishes. For example, this function allows us to take a backup using the following psql script file.\n\n------------------------------\nSELECT * FROM pg_backup_start('test');\n\\! cp -a $PGDATA /backup\nSELECT * FROM pg_backup_stop();\n\n\\pset tuples_only on\n\\pset format unaligned\n\n\\o /backup/data/backup_label\nSELECT labelfile FROM pg_backup_label();\n\n\\o /backup/data/tablespace_map\nSELECT spcmapfile FROM pg_backup_label();\n------------------------------\n\nAttached is the WIP patch to add pg_backup_label function. No tests nor docs have been added yet, but if we can successfully reach the consensus for adding the function, I will update the patch.\n\nThought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Fri, 8 Jul 2022 01:43:49 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Add function to return backup_label and tablespace_map"
},
{
"msg_contents": "On Thu, Jul 7, 2022 at 10:14 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> Hi,\n>\n> Since an exclusive backup method was dropped in v15, in v15 or later, we need to create backup_label and tablespace_map files from the result of pg_backup_stop() when taking a base backup using low level backup API. One issue when doing this is that; there is no simple way to create those files from two columns \"labelfile\" and \"spcmapfile\" that pg_backup_stop() returns if we execute it via psql. Probaby we need to store those columns in a temporary file and run some OS commands or script to separate that file into backup_label and tablespace_map. This is not simple way, and which would prevent users from migrating their backup scripts using psql from an exclusive backup method to non-exclusive one, I'm afraid.\n>\n> To enable us to do that more easily, how about adding the pg_backup_label() function that returns backup_label and tablespace_map? I'm thinking to make this function available just after pg_backup_start() finishes, also even after pg_backup_stop() finishes. For example, this function allows us to take a backup using the following psql script file.\n>\n> ------------------------------\n> SELECT * FROM pg_backup_start('test');\n> \\! cp -a $PGDATA /backup\n> SELECT * FROM pg_backup_stop();\n>\n> \\pset tuples_only on\n> \\pset format unaligned\n>\n> \\o /backup/data/backup_label\n> SELECT labelfile FROM pg_backup_label();\n>\n> \\o /backup/data/tablespace_map\n> SELECT spcmapfile FROM pg_backup_label();\n> ------------------------------\n>\n> Attached is the WIP patch to add pg_backup_label function. No tests nor docs have been added yet, but if we can successfully reach the consensus for adding the function, I will update the patch.\n>\n> Thought?\n\n+1 for making it easy for the user to create backup_label and\ntablespace_map files. 
With the patch, the label_file and\ntblspc_map_file contents are preserved until the lifecycle of the\nsession or the next run of pg_backup_start, I'm not sure if we need to\nworry more about it.\n\nWhy can't we have functions like pg_create_backup_label() and\npg_create_tablespace_map() which create the 'backup_label' and\n'tablespace_map' files respectively in the data directory and also\nreturn the contents as output columns?\n\nAlso, we can let users run these create functions only once (perhaps\nafter the pg_backup_stop is called which is when the contents will be\nconsistent). If we allow these functions to read the label_file or\ntblspc_map_file contents during the backup before stop backup, they\nmay not be consistent. We can have a new sessionBackupState something\nlike SESSION_BACKUP_READY_TO_COLLECT_INFO or SESSION_BACKUP_DONE and\nafter the new function calls sessionBackupState goes to\nSESSION_BACKUP_NONE) and the contents of label_file and\ntblspc_map_file are freed up.\n\nIn the docs, it's good if we can clearly specify the steps to use all\nof these functions.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 8 Jul 2022 15:31:07 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add function to return backup_label and tablespace_map"
},
{
"msg_contents": "On Fri, Jul 8, 2022 at 3:31 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Jul 7, 2022 at 10:14 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >\n> > Hi,\n> >\n> > Since an exclusive backup method was dropped in v15, in v15 or later, we need to create backup_label and tablespace_map files from the result of pg_backup_stop() when taking a base backup using low level backup API. One issue when doing this is that; there is no simple way to create those files from two columns \"labelfile\" and \"spcmapfile\" that pg_backup_stop() returns if we execute it via psql. Probaby we need to store those columns in a temporary file and run some OS commands or script to separate that file into backup_label and tablespace_map. This is not simple way, and which would prevent users from migrating their backup scripts using psql from an exclusive backup method to non-exclusive one, I'm afraid.\n> >\n> > To enable us to do that more easily, how about adding the pg_backup_label() function that returns backup_label and tablespace_map? I'm thinking to make this function available just after pg_backup_start() finishes, also even after pg_backup_stop() finishes. For example, this function allows us to take a backup using the following psql script file.\n> >\n> > ------------------------------\n> > SELECT * FROM pg_backup_start('test');\n> > \\! cp -a $PGDATA /backup\n> > SELECT * FROM pg_backup_stop();\n> >\n> > \\pset tuples_only on\n> > \\pset format unaligned\n> >\n> > \\o /backup/data/backup_label\n> > SELECT labelfile FROM pg_backup_label();\n> >\n> > \\o /backup/data/tablespace_map\n> > SELECT spcmapfile FROM pg_backup_label();\n> > ------------------------------\n> >\n> > Attached is the WIP patch to add pg_backup_label function. 
No tests nor docs have been added yet, but if we can successfully reach the consensus for adding the function, I will update the patch.\n> >\n> > Thought?\n>\n> +1 for making it easy for the user to create backup_label and\n> tablespace_map files. With the patch, the label_file and\n> tblspc_map_file contents are preserved until the lifecycle of the\n> session or the next run of pg_backup_start, I'm not sure if we need to\n> worry more about it.\n>\n> Why can't we have functions like pg_create_backup_label() and\n> pg_create_tablespace_map() which create the 'backup_label' and\n> 'tablespace_map' files respectively in the data directory and also\n> return the contents as output columns?\n>\n> Also, we can let users run these create functions only once (perhaps\n> after the pg_backup_stop is called which is when the contents will be\n> consistent). If we allow these functions to read the label_file or\n> tblspc_map_file contents during the backup before stop backup, they\n> may not be consistent. We can have a new sessionBackupState something\n> like SESSION_BACKUP_READY_TO_COLLECT_INFO or SESSION_BACKUP_DONE and\n> after the new function calls sessionBackupState goes to\n> SESSION_BACKUP_NONE) and the contents of label_file and\n> tblspc_map_file are freed up.\n>\n> In the docs, it's good if we can clearly specify the steps to use all\n> of these functions.\n\nForgot to mention a comment on the v1 patch: we'll need to revoke\npermissions from the public for pg_backup_label (or whatever the new\nfunction(s) that'll be introduced) as well similar to pg_backup_start\nand pg_backup_stop.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 8 Jul 2022 15:39:14 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add function to return backup_label and tablespace_map"
},
{
"msg_contents": "On 7/7/22 12:43, Fujii Masao wrote:\n\n> Since an exclusive backup method was dropped in v15, in v15 or later, we \n> need to create backup_label and tablespace_map files from the result of \n> pg_backup_stop() when taking a base backup using low level backup API. \n> One issue when doing this is that; there is no simple way to create \n> those files from two columns \"labelfile\" and \"spcmapfile\" that \n> pg_backup_stop() returns if we execute it via psql. Probaby we need to \n> store those columns in a temporary file and run some OS commands or \n> script to separate that file into backup_label and tablespace_map. \n\nWhy not just select these columns into a temp table:\n\ncreate temp table backup_result as select * from pg_backup_stop(...);\n\nThen they can be easily dumped with \\o by selecting from the temp table.\n\n> To enable us to do that more easily, how about adding the \n> pg_backup_label() function that returns backup_label and tablespace_map? \n> I'm thinking to make this function available just after \n> pg_backup_start() finishes\n\nThis makes me nervous as I'm sure users will immediately start writing \nbackup_label into PGDATA to make their lives easier. Having backup_label \nin PGDATA for a running cluster causes problems and is the major reason \nwe deprecated and then removed the exclusive method. In addition, what \nlittle protection we had from this condition has been removed.\n\nRegards,\n-David\n\n\n",
"msg_date": "Fri, 8 Jul 2022 07:41:52 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Add function to return backup_label and tablespace_map"
},
{
"msg_contents": "On Fri, Jul 8, 2022 at 5:12 PM David Steele <david@pgmasters.net> wrote:\n>\n> > To enable us to do that more easily, how about adding the\n> > pg_backup_label() function that returns backup_label and tablespace_map?\n> > I'm thinking to make this function available just after\n> > pg_backup_start() finishes\n>\n> This makes me nervous as I'm sure users will immediately start writing\n> backup_label into PGDATA to make their lives easier. Having backup_label\n> in PGDATA for a running cluster causes problems and is the major reason\n> we deprecated and then removed the exclusive method. In addition, what\n> little protection we had from this condition has been removed.\n\nIIUC, with the new mechanism, we don't need a backup_label file to be\npresent in the data directory after pg_backup_stop? If yes, where will\nthe postgres recover from if it crashes after pg_backup_stop before\nthe next checkpoint? I'm trying to understand the significance of the\nbackup_label and tablespace_map contents after the removal of\nexclusive backup.\n\nAlso, do we need the read_backup_label part of the code [1]?\n\n\n[1]\n if (read_backup_label(&CheckPointLoc, &CheckPointTLI, &backupEndRequired,\n &backupFromStandby))\n {\n List *tablespaces = NIL;\n\n /*\n * Archive recovery was requested, and thanks to the backup label\n * file, we know how far we need to replay to reach consistency. Enter\n * archive recovery directly.\n */\n InArchiveRecovery = true;\n if (StandbyModeRequested)\n StandbyMode = true;\n\n /*\n * When a backup_label file is present, we want to roll forward from\n * the checkpoint it identifies, rather than using pg_control.\n */\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 8 Jul 2022 17:23:17 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add function to return backup_label and tablespace_map"
},
{
"msg_contents": "On 7/8/22 07:53, Bharath Rupireddy wrote:\n> On Fri, Jul 8, 2022 at 5:12 PM David Steele <david@pgmasters.net> wrote:\n>>\n>>> To enable us to do that more easily, how about adding the\n>>> pg_backup_label() function that returns backup_label and tablespace_map?\n>>> I'm thinking to make this function available just after\n>>> pg_backup_start() finishes\n>>\n>> This makes me nervous as I'm sure users will immediately start writing\n>> backup_label into PGDATA to make their lives easier. Having backup_label\n>> in PGDATA for a running cluster causes problems and is the major reason\n>> we deprecated and then removed the exclusive method. In addition, what\n>> little protection we had from this condition has been removed.\n> \n> IIUC, with the new mechanism, we don't need a backup_label file to be\n> present in the data directory after pg_backup_stop? If yes, where will\n> the postgres recover from if it crashes after pg_backup_stop before\n> the next checkpoint? I'm trying to understand the significance of the\n> backup_label and tablespace_map contents after the removal of\n> exclusive backup.\n\nbackup_label should be written directly into the backup and should be \npresent when the backup is restored and before recovery begins. It \nshould not be present in a normally operating cluster or it will cause \nproblems after crashes and restarts.\n\n> Also, do we need the read_backup_label part of the code [1]?\n\nYes, since the backup_label is required for recovery.\n\nRegards,\n-David\n\n\n",
"msg_date": "Fri, 8 Jul 2022 08:01:06 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Add function to return backup_label and tablespace_map"
},
{
"msg_contents": "On Fri, Jul 8, 2022 at 7:42 PM David Steele <david@pgmasters.net> wrote:\n>\n> On 7/7/22 12:43, Fujii Masao wrote:\n>\n> > Since an exclusive backup method was dropped in v15, in v15 or later, we\n> > need to create backup_label and tablespace_map files from the result of\n> > pg_backup_stop() when taking a base backup using low level backup API.\n> > One issue when doing this is that; there is no simple way to create\n> > those files from two columns \"labelfile\" and \"spcmapfile\" that\n> > pg_backup_stop() returns if we execute it via psql. Probably we need to\n> > store those columns in a temporary file and run some OS commands or\n> > script to separate that file into backup_label and tablespace_map.\n>\n> Why not just select these columns into a temp table:\n>\n> create temp table backup_result as select * from pg_backup_stop(...);\n>\n> Then they can be easily dumped with \\o by selecting from the temp table.\n\nThat wouldn't help people making backups from standby servers.\n\n\n",
"msg_date": "Fri, 8 Jul 2022 20:22:50 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add function to return backup_label and tablespace_map"
},
{
"msg_contents": "On 7/8/22 08:22, Julien Rouhaud wrote:\n> On Fri, Jul 8, 2022 at 7:42 PM David Steele <david@pgmasters.net> wrote:\n>>\n>> On 7/7/22 12:43, Fujii Masao wrote:\n>>\n>>> Since an exclusive backup method was dropped in v15, in v15 or later, we\n>>> need to create backup_label and tablespace_map files from the result of\n>>> pg_backup_stop() when taking a base backup using low level backup API.\n>>> One issue when doing this is that; there is no simple way to create\n>>> those files from two columns \"labelfile\" and \"spcmapfile\" that\n>>> pg_backup_stop() returns if we execute it via psql. Probably we need to\n>>> store those columns in a temporary file and run some OS commands or\n>>> script to separate that file into backup_label and tablespace_map.\n>>\n>> Why not just select these columns into a temp table:\n>>\n>> create temp table backup_result as select * from pg_backup_stop(...);\n>>\n>> Then they can be easily dumped with \\o by selecting from the temp table.\n>\n> That wouldn't help people making backups from standby servers.\n\nAh, yes, good point. This should work on a standby, though:\n\nselect quote_literal(labelfile) as backup_label from pg_backup_stop(...) \n\\gset\n\\pset tuples_only on\n\\pset format unaligned\n\\o /backup_path/backup_label\nselect :backup_label;\n\nRegards,\n-David\n\n\n",
"msg_date": "Fri, 8 Jul 2022 09:09:27 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Add function to return backup_label and tablespace_map"
},
{
"msg_contents": "Re: David Steele\n> > To enable us to do that more easily, how about adding the\n> > pg_backup_label() function that returns backup_label and tablespace_map?\n> > I'm thinking to make this function available just after\n> > pg_backup_start() finishes\n\nI was just wondering: Why is \"labelfile\" only returned by\npg_backup_stop()? All the info in there is already available at\npg_backup_start() time. Having the output available earlier would\nallow writing the backup_label into the backup directory, or store it\nalong some filesystem snapshot that is already immutable by the time\npg_backup_stop is called.\n\nIf we rename all functions anyway for PG15, we could move the info\nfrom stop to start.\n\n> This makes me nervous as I'm sure users will immediately start writing\n> backup_label into PGDATA to make their lives easier. Having backup_label in\n> PGDATA for a running cluster causes problems and is the major reason we\n> deprecated and then removed the exclusive method. In addition, what little\n> protection we had from this condition has been removed.\n\nIs that really an argument for making the life of everyone else\nharder?\n\nChristoph\n\n\n",
"msg_date": "Fri, 8 Jul 2022 15:10:46 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: Add function to return backup_label and tablespace_map"
},
{
"msg_contents": "\n\nOn 7/8/22 09:10, Christoph Berg wrote:\n> Re: David Steele\n>>> To enable us to do that more easily, how about adding the\n>>> pg_backup_label() function that returns backup_label and tablespace_map?\n>>> I'm thinking to make this function available just after\n>>> pg_backup_start() finishes\n> \n> I was just wondering: Why is \"labelfile\" only returned by\n> pg_backup_stop()? All the info in there is already available at\n> pg_backup_start() time. \n\nNot sure exactly why this decision was made in 9.6 (might be because \ntablespace_map does need to be generated at stop time), but I'm planning \nto add data to this file in PG16 that is only available at stop time. In \nparticular, backup software would like to know the earliest possible \ntime that can be used for PITR and right now this needs to be \napproximated. Would be good to have that in backup_label along with \nstart time. Min recovery xid would also be very useful.\n\n> Having the output available earlier would\n> allow writing the backup_label into the backup directory, or store it\n> along some filesystem snapshot that is already immutable by the time\n> pg_backup_stop is called.\n\nWhat is precluded by getting the backup label after pg_backup_stop()? \nPerhaps a more detailed example here would be helpful.\n\n> If we rename all functions anyway for PG15, we could move the info\n> from stop to start.\n> \n>> This makes me nervous as I'm sure users will immediately start writing\n>> backup_label into PGDATA to make their lives easier. Having backup_label in\n>> PGDATA for a running cluster causes problems and is the major reason we\n>> deprecated and then removed the exclusive method. 
In addition, what little\n>> protection we had from this condition has been removed.\n> \n> Is that really an argument for making the life of everyone else\n> harder?\n\nI don't see how anyone's life is made harder unless the plan is to write \nbackup_label into PGDATA, which should not be done.\n\nAs we've noted before, there's no point in pretending that doing backup \ncorrectly is easy because it is definitely not.\n\nRegards,\n-David\n\n\n",
"msg_date": "Fri, 8 Jul 2022 09:53:52 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Add function to return backup_label and tablespace_map"
},
{
"msg_contents": "On 7/8/22 09:09, David Steele wrote:\n> On 7/8/22 08:22, Julien Rouhaud wrote:\n>> On Fri, Jul 8, 2022 at 7:42 PM David Steele <david@pgmasters.net> wrote:\n>>>\n>>> On 7/7/22 12:43, Fujii Masao wrote:\n>>>\n>>>> Since an exclusive backup method was dropped in v15, in v15 or \n>>>> later, we\n>>>> need to create backup_label and tablespace_map files from the result of\n>>>> pg_backup_stop() when taking a base backup using low level backup API.\n>>>> One issue when doing this is that; there is no simple way to create\n>>>> those files from two columns \"labelfile\" and \"spcmapfile\" that\n>>>> pg_backup_stop() returns if we execute it via psql. Probaby we need to\n>>>> store those columns in a temporary file and run some OS commands or\n>>>> script to separate that file into backup_label and tablespace_map.\n>>>\n>>> Why not just select these columns into a temp table:\n>>>\n>>> create temp table backup_result as select * from pg_backup_stop(...);\n>>>\n>>> Then they can be easily dumped with \\o by selecting from the temp table.\n>>\n>> That wouldn't help people making backups from standby servers.\n> \n> Ah, yes, good point. This should work on a standby, though:\n> \n> select quote_literal(labelfile) as backup_label from pg_backup_stop(...) \n> \\gset\n> \\pset tuples_only on\n> \\pset format unaligned\n> \\o /backup_path/backup_label\n> select :backup_label;\n\nLooks like I made that more complicated than it needed to be:\n\nselect * from pg_backup_stop(...) \\gset\n\\pset tuples_only on\n\\pset format unaligned\n\\o /backup_path/backup_label\nselect :'labelfile';\n\nRegards,\n-David\n\n\n",
"msg_date": "Fri, 8 Jul 2022 10:11:02 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Add function to return backup_label and tablespace_map"
},
{
"msg_contents": "At Fri, 8 Jul 2022 01:43:49 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> finishes. For example, this function allows us to take a backup using\n> the following psql script file.\n> \n> ------------------------------\n> SELECT * FROM pg_backup_start('test');\n> \\! cp -a $PGDATA /backup\n> SELECT * FROM pg_backup_stop();\n> \n> \\pset tuples_only on\n> \\pset format unaligned\n> \n> \\o /backup/data/backup_label\n> SELECT labelfile FROM pg_backup_label();\n> \n> \\o /backup/data/tablespace_map\n> SELECT spcmapfile FROM pg_backup_label();\n> ------------------------------\n\nAs David mentioned, we can do the same thing now by using \\gset, when\nwe want to save the files on the client side. (File copy is done on\nthe server side by the steps, though.)\n\nThinking about another scenario of generating those files server-side\n(this is safer than the client-side method regarding\nline-separators and the pset settings, I think). We can do that by\nusing adminpack instead, with simpler steps.\n\nSELECT lsn, labelfile, spcmapfile,\n pg_file_write('/tmp/backup_label', labelfile, false),\n pg_file_write('/tmp/tablespace_map', spcmapfile, false)\nFROM pg_backup_stop();\n\nHowever, if pg_file_write() fails, the data are gone. But \\gset also\nworks here.\n\nselect pg_backup_start('s1');\nSELECT * FROM pg_backup_stop() \\gset\nSELECT pg_file_write('/tmp/backup_label', :'labelfile', false);\nSELECT pg_file_write('/tmp/tablespace_map', :'spcmapfile', false);\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 11 Jul 2022 11:59:26 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add function to return backup_label and tablespace_map"
},
{
"msg_contents": "\n\nOn 2022/07/08 23:11, David Steele wrote:\n> Looks like I made that more complicated than it needed to be:\n> \n> select * from pg_backup_stop(...) \\gset\n> \\pset tuples_only on\n> \\pset format unaligned\n> \\o /backup_path/backup_label\n> select :'labelfile';\n\nThanks! I had completely forgotten \\gset command.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 15 Jul 2022 17:11:01 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Add function to return backup_label and tablespace_map"
},
{
"msg_contents": "Greetings,\n\n* Fujii Masao (masao.fujii@oss.nttdata.com) wrote:\n> On 2022/07/08 23:11, David Steele wrote:\n> >Looks like I made that more complicated than it needed to be:\n> >\n> >select * from pg_backup_stop(...) \\gset\n> >\\pset tuples_only on\n> >\\pset format unaligned\n> >\\o /backup_path/backup_label\n> >select :'labelfile';\n> \n> Thanks! I had completely forgotten \\gset command.\n\nSeems like it might make sense to consider using a better format for\nthese files and also to allow us to more easily add things in the future\n(ending LSN, ending time, etc) for backup tools to be able to leverage.\nPerhaps we should change this to having just a single file returned,\ninstead of two, and use JSON for it, as a more general and extensible\nformat that we've already got code to work with..?\n\nThanks,\n\nStephen",
"msg_date": "Fri, 15 Jul 2022 11:17:24 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add function to return backup_label and tablespace_map"
}
] |
[
{
"msg_contents": "Hi,\nI was able to create gin index on inet column in PG.\n\nGIN is good with points/elements in sets. Is gin a good index for inet\ncolumn ?\nIt seems gist index would be better.\n\nComments are welcome.",
"msg_date": "Thu, 7 Jul 2022 14:52:07 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "index for inet column"
},
{
"msg_contents": "## Zhihong Yu (zyu@yugabyte.com):\n\n> I was able to create gin index on inet column in PG.\n> \n> GIN is good with points/elements in sets. Is gin a good index for inet\n> column ?\n> It seems gist index would be better.\n\nWhy not use btree? The common operations are quite supported with that.\n(Common operations being equality and subnet/CIDR matching, the latter\nbeing a glorified less/greater than operation. If you are using non-\ncontinuous netmasks, you are already in a rather painful situation\nnetworkwise and it will not get better in the database, so don't).\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n",
"msg_date": "Fri, 8 Jul 2022 00:06:11 +0200",
"msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>",
"msg_from_op": false,
"msg_subject": "Re: index for inet column"
},
{
"msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> I was able to create gin index on inet column in PG.\n> GIN is good with points/elements in sets. Is gin a good index for inet\n> column ?\n\nAs far as Postgres is concerned, inet is a scalar type: it has a\nlinear sort order, and there aren't many operations on it that are\nconcerned with identifiable sub-objects. That means btree is a\nperfectly fine index type for it, while GIN (which lives and dies by\nsub-objects) is pretty off-point. I suppose you used btree_gin for\nyour index, because there are no other GIN opclasses that would take\ninet. As the name implies, that's a poor man's substitute for btree;\nthere is nothing it does that btree doesn't do better.\n\nGenerally speaking, the use-case for btree_gin is where you want to\nmake a single, multi-column index in which one column is a collection\ntype (that is well-suited for GIN) but another is just a scalar type.\nIf you're making a one-column index with btree_gin, you're doing it\nwrong.\n\n> It seems gist index would be better.\n\nLargely the same comments apply to GiST: it's not really meant for\nscalar types either.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 07 Jul 2022 18:42:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: index for inet column"
}
] |
[
{
"msg_contents": "This patch causes wrong index scan plan with RLS. Maybe function restriction_is_securely_promotable is too strict?\r\n\r\n\r\nYou can reproduce in this way:\r\n\r\n\r\ncreate table abc (a integer, b text);\r\ninsert into abc select (random()*(10^4))::integer, (random()*(10^4))::text from generate_series(1,100000);\r\n\r\ncreate index on abc(a, lower(b));\r\n\r\n\r\nALTER TABLE abc enable ROW LEVEL SECURITY;\r\nALTER TABLE abc FORCE ROW LEVEL SECURITY;\r\n\r\nCREATE POLICY abc_id_iso_ply on abc to CURRENT_USER USING (a = (current_setting('app.a'::text))::int);\r\n\r\n\r\n\r\n# for bypass user, index scan works fine\r\nexplain analyse select * from abc where a=1 and lower(b)='1234';\r\n Index Scan using abc_a_lower_idx on abc\r\n\r\n Index Cond: ((a = 1) AND (lower(b) = '1234'::text))\r\n\r\n\r\n\r\n# for RLS user, index scan can only use column a, and filter by lower(b)\r\nset app.a=1;\r\nexplain analyse select * from abc where a=1 and lower(b)='1234';\r\n Index Scan using abc_a_lower_idx on abc\r\n\r\n Index Cond: (a = 1)\r\n Filter: (lower(b) = '1234'::text)\r\n\r\n\r\n\r\nThis only occurs when using non-leak-proof functional index. Everything works fine in following way: \r\ncreate index on abc(a, b);\r\nexplain analyse select * from abc where a=1 and b='1234';\r\n\r\n\r\nI think crucial function is restriction_is_securely_promotable. Maybe it is too strict to reject normal clause match. \r\nCould you please recheck RLS with functional index?\r\n\r\n\r\nregards, \r\nMark Zhao\r\n\r\n\r\n\r\n\r\n------------------ Original ------------------\r\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>;\r\nDate: Wed, Oct 26, 2016 05:58 AM\r\nTo: \"pgsql-hackers\"<pgsql-hackers@postgreSQL.org>;\r\n\r\nSubject: Improving RLS planning\r\n\r\n\r\n\r\nCurrently, we don't produce very good plans when row-level security\r\nis enabled. 
An example is that, given\r\n\r\n\tcreate table t1 (pk1 int primary key, label text);\r\n\tcreate table t2 (pk2 int primary key, fk int references t1);\r\n\r\nthen for\r\n\r\n\tselect * from t1, t2 where pk1 = fk and pk2 = 42;\r\n\r\nyou would ordinarily get a cheap plan like\r\n\r\n Nested Loop\r\n -> Index Scan using t2_pkey on t2\r\n Index Cond: (pk2 = 42)\r\n -> Index Scan using t1_pkey on t1\r\n Index Cond: (pk1 = t2.fk)\r\n\r\nBut stick an RLS policy on t1, and that degrades to a seqscan, eg\r\n\r\n Nested Loop\r\n Join Filter: (t1.pk1 = t2.fk)\r\n -> Index Scan using t2_pkey on t2\r\n Index Cond: (pk2 = 42)\r\n -> Seq Scan on t1\r\n Filter: (label = 'public'::text)\r\n\r\nThe reason for this is that we implement RLS by turning the reference\r\nto t1 into a sub-SELECT, and the planner's recursive invocation of\r\nsubquery_planner produces only a seqscan path for t1, there not being\r\nany reason visible in the subquery for it to do differently.\r\n\r\nI have been thinking about improving this by allowing subquery_planner\r\nto generate parameterized paths; but the more I think about that the\r\nless satisfied I am with it. It will be quite expensive and probably\r\nwill still fail to find desirable plans in many cases. (I've not given\r\nup on parameterized subquery paths altogether --- I just feel it'd be a\r\nbrute-force and not very effective way of dealing with RLS.)\r\n\r\nThe alternative I'm now thinking about pursuing is to get rid of the\r\nconversion of RLS quals to subqueries. Instead, we can label individual\r\nqual clauses with security precedence markings. Concretely, suppose we\r\nadd an \"int security_level\" field to struct RestrictInfo. The semantics\r\nof this would be that a qual with a lower security_level value must be\r\nevaluated before a qual with a higher security_level value, unless the\r\nlatter qual is leakproof. 
(It would likely also behoove us to add a\r\n\"leakproof\" bool field to struct RestrictInfo, to avoid duplicate\r\nleakproof-ness checks on quals. But that's just an optimization.)\r\n\r\nIn the initial implementation, quals coming from a RangeTblEntry's\r\nsecurityQuals field would have security_level 0, quals coming from\r\nanywhere else would have security_level 1; except that if we know\r\nthere are no security quals anywhere (ie not Query->hasRowSecurity),\r\nwe could give all quals security_level 0. (I think this exception\r\nmay be worth making because there's no need to test leakproofness\r\nfor a qual with security level 0; it could never be a candidate\r\nfor security delay anyway.)\r\n\r\nHaving done that much, I think all we need in order to get rid of\r\nRLS subqueries, and just stick RLS quals into their relation's\r\nbaserestrictinfo list, are two rules:\r\n\r\n1. When selecting potential indexquals, a RestrictInfo can be considered\r\nfor indexqual use only if it is leakproof or has security_level <= the\r\nminimum among the table's baserestrictinfo clauses.\r\n\r\n2. In order_qual_clauses, sort first by security_level and second by cost.\r\n\r\nThis would already be enough of a win to be worth doing. I think though\r\nthat this mechanism can be extended to also allow getting rid of the\r\nrestriction that security-barrier views can't be flattened. The idea\r\nwould be to make sure that quals coming from above the SB view are given\r\nhigher security_level values than quals within the SB view. We'd need\r\nsome extra mechanism to make that possible --- perhaps an additional kind\r\nof node within jointree nests to show where there had been a\r\nsecurity-barrier boundary, and then some smarts in distribute_qual_to_rels\r\nto prevent pushing upper quals down past a lower qual of strictly lesser\r\nsecurity level. But that can come later. 
(We do not need such smarts\r\nto fix the RLS problem, because in the initial version, quals with lower\r\nsecurity level than another qual could only exist at the baserel level.)\r\n\r\nIn short, I'm proposing to throw away the entire existing implementation\r\nfor planning of RLS and SB views, and start over.\r\n\r\nThere are some corner cases I've not entirely worked out, in particular\r\nwhat security_level to assign to quals generated from EquivalenceClasses.\r\nA safe but not optimal answer would be to assign them the maximum\r\nsecurity_level of any source clause of the EC.  Maybe it's not worth\r\nworking harder than that, because most equality operators are leakproof\r\nanyway, so that it wouldn't matter what level we assigned them.\r\n\r\nBefore I start implementing this, can anyone see a fatal flaw in the\r\ndesign?\r\n\r\n\t\t\tregards, tom lane\r\n\r\n\r\n-- \r\nSent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-hackers",
"msg_date": "Fri, 8 Jul 2022 10:51:16 +0800",
"msg_from": "\"=?ISO-8859-1?B?WmhhbyBSdWk=?=\" <875941708@qq.com>",
"msg_from_op": true,
"msg_subject": "Re:Improving RLS planning"
}
] |
[
{
"msg_contents": "Hi,\n\nComparison of 2 values of type jsonb is allowed.\n\nComparison of 2 values of type json gives an error.\n\nThat seems like an oversight -- or is it deliberate?\n\nExample:\n\nselect '42'::json = '{}'::json;\n--> ERROR: operator does not exist: json = json\n\n(of course, easily 'solved' by casting but that's not really the point)\n\n\nThanks,\n\nErik Rijkers\n\n\n\n\n\n\n",
"msg_date": "Fri, 8 Jul 2022 08:54:53 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "ERROR: operator does not exist: json = json"
},
{
"msg_contents": "Erik Rijkers <er@xs4all.nl> writes:\n\n> Hi,\n>\n> Comparison of 2 values of type jsonb is allowed.\n>\n> Comparison of 2 values of type json gives an error.\n>\n> That seems like an oversight -- or is it deliberate?\n\nThis is because json is just a textual representation, and different\nJSON strings can be semantically equal because e.g. whitespace and\nobject key order is not significant.\n\n> Example:\n>\n> select '42'::json = '{}'::json;\n> --> ERROR: operator does not exist: json = json\n>\n> (of course, easily 'solved' by casting but that's not really the\n> point)\n\nTo do a proper comparison you have to parse it into a semantic form,\nwhich is what casting to jsonb does.\n\n> Thanks,\n>\n> Erik Rijkers\n\n- ilmari\n\n\n",
"msg_date": "Fri, 08 Jul 2022 12:57:19 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: operator does not exist: json = json"
},
{
"msg_contents": "\nOn 2022-07-08 Fr 07:57, Dagfinn Ilmari Mannsåker wrote:\n> Erik Rijkers <er@xs4all.nl> writes:\n>\n>> Hi,\n>>\n>> Comparison of 2 values of type jsonb is allowed.\n>>\n>> Comparison of 2 values of type json gives an error.\n>>\n>> That seems like an oversight -- or is it deliberate?\n> This is because json is just a textual representation, and different\n> JSON strings can be semantically equal because e.g. whitespace and\n> object key order is not significant.\n>\n>> Example:\n>>\n>> select '42'::json = '{}'::json;\n>> --> ERROR: operator does not exist: json = json\n>>\n>> (of course, easily 'solved' by casting but that's not really the\n>> point)\n> To do a proper comparison you have to parse it into a semantic form,\n> which is what casting to jsonb does.\n\n\nAlternatively, if you really need something like this, try\n<https://bitbucket.org/adunstan/jsoncmp/src/master/>\n\n\n(I should probably update it to mark the functions as parallel safe)\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 8 Jul 2022 11:00:09 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: operator does not exist: json = json"
}
] |
[
{
"msg_contents": "Hi, community.\n\nRecently, when I was developing some function about INSERT ... ON CONFLICT,\nI used test cases in `src/test/regress/sql/insert_conflict.sql` to evaluate\nmy function. When I copy the CREATE TABLE from this case alone, and paste\nit to psql, I got a syntax error. As I go through the case carefully, I\nfound the CREATE TABLE uses two tabs to separate column name and column\ntype, and this two tabs are regarded as an auto completion instruction by\npsql, causing no separation between column name and column type anymore.\n\nIt may not be a problem since this case has passed the regression, but\nwould it be better to use space here to avoid this confusing situation?\nSince other part of this case are all using spaces.\n\nNever mind my opinion as a beginner, thanks.\n\n\nJingtang Zhang",
"msg_date": "Fri, 8 Jul 2022 23:31:20 +0800",
"msg_from": "Jingtang Zhang <mrdrivingduck@gmail.com>",
"msg_from_op": true,
"msg_subject": "Two successive tabs in test case are causing syntax error in psql"
},
{
"msg_contents": "Jingtang Zhang <mrdrivingduck@gmail.com> writes:\n> Recently, when I was developing some function about INSERT ... ON CONFLICT,\n> I used test cases in `src/test/regress/sql/insert_conflict.sql` to evaluate\n> my function. When I copy the CREATE TABLE from this case alone, and paste\n> it to psql, I got a syntax error. As I go through the case carefully, I\n> found the CREATE TABLE uses two tabs to separate column name and column\n> type, and this two tabs are regarded as an auto completion instruction by\n> psql, causing no separation between column name and column type anymore.\n\n> It may not be a problem since this case has passed the regression, but\n> would it be better to use space here to avoid this confusing situation?\n\nThere are tabs all through the regression test files, and we're certainly\nnot going to remove them all. (If we did, we'd lose test coverage of\nwhether the parser accepts tabs as whitespace.) So I can't get excited\nabout removing one or two.\n\nThe usual recommendation for pasting text into psql when it contains\ntabs is to start psql with the -n switch to disable tab completion.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 08 Jul 2022 15:35:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Two successive tabs in test case are causing syntax error in psql"
},
{
"msg_contents": "I see, thank you.\n\nTom Lane <tgl@sss.pgh.pa.us> 于2022年7月9日周六 03:35写道:\n\n> Jingtang Zhang <mrdrivingduck@gmail.com> writes:\n> > Recently, when I was developing some function about INSERT ... ON\n> CONFLICT,\n> > I used test cases in `src/test/regress/sql/insert_conflict.sql` to\n> evaluate\n> > my function. When I copy the CREATE TABLE from this case alone, and paste\n> > it to psql, I got a syntax error. As I go through the case carefully, I\n> > found the CREATE TABLE uses two tabs to separate column name and column\n> > type, and this two tabs are regarded as an auto completion instruction by\n> > psql, causing no separation between column name and column type anymore.\n>\n> > It may not be a problem since this case has passed the regression, but\n> > would it be better to use space here to avoid this confusing situation?\n>\n> There are tabs all through the regression test files, and we're certainly\n> not going to remove them all. (If we did, we'd lose test coverage of\n> whether the parser accepts tabs as whitespace.) So I can't get excited\n> about removing one or two.\n>\n> The usual recommendation for pasting text into psql when it contains\n> tabs is to start psql with the -n switch to disable tab completion.\n>\n> regards, tom lane\n>\n",
"msg_date": "Sat, 9 Jul 2022 10:18:45 +0800",
"msg_from": "Jingtang Zhang <mrdrivingduck@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Two successive tabs in test case are causing syntax error in psql"
},
{
"msg_contents": "On 2022-Jul-08, Tom Lane wrote:\n\n> The usual recommendation for pasting text into psql when it contains\n> tabs is to start psql with the -n switch to disable tab completion.\n\n\"Bracketed paste\" also solves this problem. To enable this feature,\njust edit your $HOME/.inputrc file to have the line\n set enable-bracketed-paste on\n(then restart psql) which will cause the text passed to be used\nliterally, so the tabs won't invoke tab-completion. There are other\nside-effects: if you paste a multi-command string, the whole string is\nadded as a single entry in the history rather than being separate\nentries. I find this extremely useful; there are also claims of this\nbeing more secure.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Sun, 10 Jul 2022 17:26:18 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Two successive tabs in test case are causing syntax error in psql"
}
] |
[
{
"msg_contents": "Hi,\nHere is the query which involves aggregate on a single column:\n\nhttps://dbfiddle.uk/?rdbms=postgres_13&fiddle=44bfd8f6b6b5aad34d00d449c04c5a96\n\nAs you can see from `Output:`, there are many columns added which are not\nneeded by the query executor.\n\nI wonder if someone has noticed this in the past.\nIf so, what was the discussion around this topic ?\n\nThanks\n",
"msg_date": "Fri, 8 Jul 2022 09:40:25 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Aggregate leads to superfluous projection from the scan"
},
{
"msg_contents": "On Fri, Jul 8, 2022 at 9:40 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n> Hi,\n> Here is the query which involves aggregate on a single column:\n>\n>\n> https://dbfiddle.uk/?rdbms=postgres_13&fiddle=44bfd8f6b6b5aad34d00d449c04c5a96\n>\n> As you can see from `Output:`, there are many columns added which are not\n> needed by the query executor.\n>\n> I wonder if someone has noticed this in the past.\n> If so, what was the discussion around this topic ?\n>\n> Thanks\n>\nHi,\nWith the patch, I was able to get the following output:\n\n explain (analyze, verbose) /*+ IndexScan(t) */select count(fire_year) from\nfires t where objectid <= 2000000;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=119.00..119.01 rows=1 width=8) (actual time=9.453..9.453\nrows=1 loops=1)\n Output: count(fire_year)\n -> Index Scan using fires_pkey on public.fires t (cost=0.00..116.50\nrows=1000 width=4) (actual time=9.432..9.432 rows=0 loops=1)\n Output: fire_year\n Index Cond: (t.objectid <= 2000000)\n Planning Time: 52.598 ms\n Execution Time: 13.082 ms\n\nPlease pay attention to the column list after `Output:`\n\nTom:\nCan you take a look and let me know what I may have missed ?\n\nThanks",
"msg_date": "Fri, 8 Jul 2022 10:38:32 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Aggregate leads to superfluous projection from the scan"
},
{
"msg_contents": "On Fri, Jul 8, 2022 at 10:32 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Fri, Jul 8, 2022 at 9:40 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n>> Hi,\n>> Here is the query which involves aggregate on a single column:\n>>\n>>\n>> https://dbfiddle.uk/?rdbms=postgres_13&fiddle=44bfd8f6b6b5aad34d00d449c04c5a96\n>>\n>> As you can see from `Output:`, there are many columns added which are not\n>> needed by the query executor.\n>>\n>> I wonder if someone has noticed this in the past.\n>> If so, what was the discussion around this topic ?\n>>\n>> Thanks\n>>\n> Hi,\n> With the patch, I was able to get the following output:\n>\n> explain (analyze, verbose) /*+ IndexScan(t) */select count(fire_year)\n> from fires t where objectid <= 2000000;\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=119.00..119.01 rows=1 width=8) (actual time=9.453..9.453\n> rows=1 loops=1)\n> Output: count(fire_year)\n> -> Index Scan using fires_pkey on public.fires t (cost=0.00..116.50\n> rows=1000 width=4) (actual time=9.432..9.432 rows=0 loops=1)\n> Output: fire_year\n> Index Cond: (t.objectid <= 2000000)\n> Planning Time: 52.598 ms\n> Execution Time: 13.082 ms\n>\n> Please pay attention to the column list after `Output:`\n>\n> Tom:\n> Can you take a look and let me know what I may have missed ?\n>\n> Thanks\n>\nI give a quick look and I think in case whenever data is extracted from the\nheap it shows all the columns. 
Therefore when columns are extracted from\nthe index only it shows the indexed column only.\n\npostgres=# explain (analyze, verbose) /*+ IndexScan(idx) */select\ncount(fire_year) from fires t where objectid = 20;\n\n\n QUERY PLAN\n\n\n\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n------------------------------------\n\n Aggregate (cost=8.31..8.32 rows=1 width=8) (actual time=0.029..0.030\nrows=1 loops=1)\n\n Output: count(fire_year)\n\n -> Index Scan using fires_pkey on public.fires t (cost=0.29..8.31\nrows=1 width=4) (actual time=0.022..0.023 rows=1 loops=1)\n\n Output: objectid, fire_name, fire_year, discovery_date,\ndiscovery_time, stat_cause_descr, fire_size, fire_size_class, latitude,\nlongitude, state, county,\n\n discovery_date_j, discovery_date_d\n\n Index Cond: (t.objectid = 20)\n\n Planning Time: 0.076 ms\n\n Execution Time: 0.059 ms\n\n(7 rows)\n\n\n\nIndex-only.\n\n\npostgres=# explain (analyze, verbose) /*+ IndexScan(idx) */select\ncount(fire_year) from fires t where fire_year = 20;\n\n QUERY PLAN\n\n\n-------------------------------------------------------------------------------------------------------------------------------\n\n Aggregate (cost=8.31..8.32 rows=1 width=8) (actual time=0.026..0.027\nrows=1 loops=1)\n\n Output: count(fire_year)\n\n -> Index Only Scan using idx on public.fires t (cost=0.29..8.31 rows=1\nwidth=4) (actual time=0.023..0.024 rows=0 loops=1)\n\n Output: fire_year\n\n Index Cond: (t.fire_year = 20)\n\n Heap Fetches: 0\n\n Planning Time: 0.140 ms\n\n Execution Time: 0.052 ms\n\n(8 rows)\n\n\n\nIndex Scans\n\n------------\n\npostgres=# explain (analyze, verbose) select count(fire_year) from fires t\nwhere objectid = 20;\n\n Aggregate (cost=8.31..8.32 rows=1 width=8) (actual time=0.030..0.031\nrows=1 loops=1)\n\n Output: count(fire_year)\n\n -> Index Scan using fires_pkey on public.fires t 
(cost=0.29..8.31\nrows=1 width=4) (actual time=0.021..0.023 rows=1 loops=1)\n\n Output: objectid, fire_name, fire_year, discovery_date,\ndiscovery_time, stat_cause_descr, fire_size, fire_size_class, latitude,\nlongitude, state, county,\n\n discovery_date_j, discovery_date_d\n\n Index Cond: (t.objectid = 20)\n\n Planning Time: 0.204 ms\n\n Execution Time: 0.072 ms\n\n(7 rows)\n\n\n\nSeq scans.\n\n----------\n\n\npostgres=# explain (analyze, verbose) select count(fire_year) from fires t;\n\n\n\n\n\n Aggregate (cost=1791.00..1791.01 rows=1 width=8) (actual\ntime=13.172..13.174 rows=1 loops=1)\n\n Output: count(fire_year)\n\n -> Seq Scan on public.fires t (cost=0.00..1541.00 rows=100000 width=4)\n(actual time=0.007..6.500 rows=100000 loops=1)\n\n Output: objectid, fire_name, fire_year, discovery_date,\ndiscovery_time, stat_cause_descr, fire_size, fire_size_class, latitude,\nlongitude, state, county,\n\n discovery_date_j, discovery_date_d\n\n Planning Time: 0.094 ms\n\n Execution Time: 13.201 ms\n\n(6 rows)\n\n\n-- \nIbrar Ahmed\n",
"msg_date": "Fri, 8 Jul 2022 23:41:34 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Aggregate leads to superfluous projection from the scan"
},
{
"msg_contents": "Ibrar Ahmed <ibrar.ahmad@gmail.com> writes:\n> I give a quick look and I think in case whenever data is extracted from the\n> heap it shows all the columns. Therefore when columns are extracted from\n> the index only it shows the indexed column only.\n\nThis is operating as designed, and I don't think that the proposed\npatch is an improvement. The point of use_physical_tlist() is that\nreturning all the columns is cheaper because it avoids a projection\nstep. That's true for any case where we have to fetch the heap\ntuple, so IndexScan is included though IndexOnlyScan is not.\n\nNow, that's something that was true a decade or more ago.\nThere's been considerable discussion recently about cases where\nit's not true anymore, for example with columnar storage or FDWs,\nand so we ought to invent a way to prevent createplan.c from\ndoing it when it would be counterproductive. But just summarily\nturning it off is not an improvement.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 08 Jul 2022 15:30:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Aggregate leads to superfluous projection from the scan"
},
{
"msg_contents": "On Fri, Jul 8, 2022 at 12:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Ibrar Ahmed <ibrar.ahmad@gmail.com> writes:\n> > I give a quick look and I think in case whenever data is extracted from\n> the\n> > heap it shows all the columns. Therefore when columns are extracted from\n> > the index only it shows the indexed column only.\n>\n> This is operating as designed, and I don't think that the proposed\n> patch is an improvement. The point of use_physical_tlist() is that\n> returning all the columns is cheaper because it avoids a projection\n> step. That's true for any case where we have to fetch the heap\n> tuple, so IndexScan is included though IndexOnlyScan is not.\n>\n> Now, that's something that was true a decade or more ago.\n> There's been considerable discussion recently about cases where\n> it's not true anymore, for example with columnar storage or FDWs,\n> and so we ought to invent a way to prevent createplan.c from\n> doing it when it would be counterproductive. But just summarily\n> turning it off is not an improvement.\n>\n> regards, tom lane\n>\nHi,\nIn createplan.c, there is `change_plan_targetlist`\n\nPlan *\nchange_plan_targetlist(Plan *subplan, List *tlist, bool\ntlist_parallel_safe)\n\nBut it doesn't have `Path` as parameter.\nSo I am not sure whether the check of non-returnable columns should be\ndone in change_plan_targetlist().\n\nbq. for example with columnar storage or FDWs,\n\nYeah. The above is the case where I want to optimize.\n\nCheers\n",
"msg_date": "Fri, 8 Jul 2022 12:48:06 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Aggregate leads to superfluous projection from the scan"
},
{
"msg_contents": "On Fri, Jul 8, 2022 at 12:48 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Fri, Jul 8, 2022 at 12:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Ibrar Ahmed <ibrar.ahmad@gmail.com> writes:\n>> > I give a quick look and I think in case whenever data is extracted from\n>> the\n>> > heap it shows all the columns. Therefore when columns are extracted from\n>> > the index only it shows the indexed column only.\n>>\n>> This is operating as designed, and I don't think that the proposed\n>> patch is an improvement. The point of use_physical_tlist() is that\n>> returning all the columns is cheaper because it avoids a projection\n>> step. That's true for any case where we have to fetch the heap\n>> tuple, so IndexScan is included though IndexOnlyScan is not.\n>>\n>> Now, that's something that was true a decade or more ago.\n>> There's been considerable discussion recently about cases where\n>> it's not true anymore, for example with columnar storage or FDWs,\n>> and so we ought to invent a way to prevent createplan.c from\n>> doing it when it would be counterproductive. But just summarily\n>> turning it off is not an improvement.\n>>\n>> regards, tom lane\n>>\n> Hi,\n> In createplan.c, there is `change_plan_targetlist`\n>\n> Plan *\n> change_plan_targetlist(Plan *subplan, List *tlist, bool\n> tlist_parallel_safe)\n>\n> But it doesn't have `Path` as parameter.\n> So I am not sure whether the check of non-returnable columns should be\n> done in change_plan_targetlist().\n>\n> bq. for example with columnar storage or FDWs,\n>\n> Yeah. The above is the case where I want to optimize.\n>\n> Cheers\n>\nHi, Tom:\nI was looking at the following comment in createplan.c :\n\n * For table scans, rather than using the relation targetlist (which is\n * only those Vars actually needed by the query), we prefer to generate\na\n * tlist containing all Vars in order. 
This will allow the executor to\n * optimize away projection of the table tuples, if possible.\n\nMaybe you can give me some background on the above decision.\n\nThanks\n",
"msg_date": "Fri, 8 Jul 2022 14:40:43 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Aggregate leads to superfluous projection from the scan"
},
{
"msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> I was looking at the following comment in createplan.c :\n\n> * For table scans, rather than using the relation targetlist (which is\n> * only those Vars actually needed by the query), we prefer to generate\n> a\n> * tlist containing all Vars in order. This will allow the executor to\n> * optimize away projection of the table tuples, if possible.\n\n> Maybe you can give me some background on the above decision.\n\nLook into execScan.c and note that it skips doing ExecProject() if the\nscan node's targetlist exactly matches the table's tuple descriptor.\nAnd particularly this comment:\n\n * We can avoid a projection step if the requested tlist exactly matches\n * the underlying tuple type. If so, we just set ps_ProjInfo to NULL.\n * Note that this case occurs not only for simple \"SELECT * FROM ...\", but\n * also in most cases where there are joins or other processing nodes above\n * the scan node, because the planner will preferentially generate a matching\n * tlist.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 08 Jul 2022 19:19:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Aggregate leads to superfluous projection from the scan"
}
] |
[
{
"msg_contents": "Hi all, apologies if this is the wrong list to use, but I figured this is a\nlow-level enough problem that it might be the best to gain some\nunderstanding.\n\nIn PGDB 13.4 I have a simple (obscured) table;\n\nCREATE SEQUENCE tbl_id_seq START 1;\nCREATE TABLE tbl (\n a BIGINT UNIQUE NOT NULL DEFAULT nextval('tbl_id_seq'),\n b BIGINT NOT NULL,\n c TIMESTAMP WITHOUT TIME ZONE NOT NULL,\n d INT NOT NULL,\n e INT NULL DEFAULT 0,\n f INT NULL DEFAULT 1,\n g INT NULL DEFAULT 0,\n h BIGINT ARRAY,\n PRIMARY KEY (a)\n);\n\nPrior to introducing the ARRAY as field h, everything was working fine\nusing a binary mode COPY via libpq;\nCOPY tbl (b,c,d,e,f,g,h) FROM STDIN WITH (FORMAT binary, FREEZE ON)\nPQputCopyData()\nPQputCopyEnd()\netc.\n\nNow this is where the problem becomes peculiar. I read all the Interwebs\nhas to offer on the efforts required to encode an array\nin binary mode and I've achieved that just fine... but it only works *if* I\nremove column g from the COPY statement and data (it can remain in table\ndefinition and be filled in with a default). It's most odd. I've\nselectively gone through the table adding/removing fields until I get to\nthis. It doesn't appear to be the array copy itself - it succeeds with 6\ncolumns (b .. f plus h) but fails with the full complement of 7 (noting\nthat 'a' is a generative sequence). The error in the PG logs is this;\n\nERROR: syntax error at end of input at character 255\n\nIt does seem to smell of an alignment, padding, buffer overrun, parsing\nkind of error. I tried reintroducing column g as a larger integer or\nsmaller field and the problem persists (and curiously the input character\nerror remains at 255).\n\nAlso, if I remove the array from the COPY or replace it with a simple\n(series of) int, then the problem also goes away. The size of the array\nappears to have no relevance - whether its just a single item or 10, for\nexample, the same problem remains and the same parse error at character\n255. 
Finally, the definition order of the columns/fields also makes no\ndifference - I can sandwich the array in the middle of the table and the\nCOPY listing and the upload still succeeds so long as I keep the column\ncount down at 6, essentially omitting 'g' again in this case.\n\nI've read the array_send/recv functions in arrayfuncs.c and pretty sure I\ngot that right (otherwise the array copy wouldn't work at all, right!?) ...\nits this odd combination of array+field lengths I can't figure!? I couldn't\nfind the protocol receive code where array_recv is called - that might\nprovide a clue.\n\nAnyway, I appreciate I've sent this off without code or an MRE - I'll work\non getting something isolated. Until then I wanted to get the ball rolling,\nin case anyone has any clues or can suggest what I'm either doing wrong or\nwhere the problem might be in PG!? In the meantime, to confirm the PG array\nformat in binary its (inc overall field size for wire transfer);\n\nhtobe32(total_array_bytes_inc_header);\n/* begin header */\nhtobe32(1); // single dimension\nhtobe32(0); // flags\nhtobe32(20); // array of bigint (it's OID)\nhtobe32(2); // 2 items, as an example\nhtobe32(1); // offset to first dimension\n/* end header */\nfor (int i = 0 ; i < 2 ; ++i) {\n htobe32(sizeof(int8));\n htobe64(some_int8_val + i);\n}\n\nCheers,\n\nJim\n\n-- \nJim Vanns\nPrincipal Production Engineer\nIndustrial Light & Magic, London\n",
"msg_date": "Fri, 8 Jul 2022 18:08:24 +0100",
"msg_from": "James Vanns <jvanns@ilm.com>",
"msg_from_op": true,
"msg_subject": "Weird behaviour with binary copy, arrays and column count"
},
{
"msg_contents": "On Fri, 8 Jul 2022 at 13:09, James Vanns <jvanns@ilm.com> wrote:\n>\n> It does seem to smell of an alignment, padding, buffer overrun, parsing kind of error.\n\nIt does.... I think you may need to bust out a debugger and see what\narray_recv is actually seeing...\n\n-- \ngreg\n\n\n",
"msg_date": "Mon, 11 Jul 2022 16:58:25 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Weird behaviour with binary copy, arrays and column count"
},
{
"msg_contents": "That's a good tip! Can't believe I hadn't even thought of that! :/\n\nCheers\n\nJim\n\nOn Mon, 11 Jul 2022 at 21:59, Greg Stark <stark@mit.edu> wrote:\n>\n> On Fri, 8 Jul 2022 at 13:09, James Vanns <jvanns@ilm.com> wrote:\n> >\n> > It does seem to smell of an alignment, padding, buffer overrun, parsing kind of error.\n>\n> It does.... I think you may need to bust out a debugger and see what\n> array_recv is actually seeing...\n>\n> --\n> greg\n\n\n\n-- \nJim Vanns\nPrincipal Production Engineer\nIndustrial Light & Magic, London\n\n\n",
"msg_date": "Tue, 12 Jul 2022 09:23:22 +0100",
"msg_from": "James Vanns <jvanns@ilm.com>",
"msg_from_op": true,
"msg_subject": "Re: Weird behaviour with binary copy, arrays and column count"
}
] |
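The array wire format Jim spells out at the end of the thread above can be checked end-to-end outside C. Below is a hypothetical Python reconstruction using struct.pack in place of his htobe32/htobe64 calls; the element OID 20 (bigint) and the single-dimension header come from his message, while the outer 4-byte per-field length prefix is an assumption based on how binary COPY frames each field of a tuple.

```python
import struct

def encode_int8_array(values, elem_oid=20):
    """Build the binary payload of a one-dimensional bigint[] as sent
    in a COPY ... (FORMAT binary) field (big-endian throughout)."""
    ndim, flags, lower_bound = 1, 0, 1
    # Array header: ndim, has-null flags, element OID,
    # then, per dimension: element count and lower bound.
    payload = struct.pack("!iiiii", ndim, flags, elem_oid,
                          len(values), lower_bound)
    for v in values:
        # Each element: 4-byte length, then the element bytes.
        payload += struct.pack("!iq", 8, v)
    return payload

def copy_field(payload):
    # In binary COPY, each field of a tuple is itself prefixed
    # with its byte length (-1 would mean NULL).
    return struct.pack("!i", len(payload)) + payload

arr = encode_int8_array([41, 42])
print(len(arr))              # 20-byte header + 2 * 12-byte elements = 44
print(len(copy_field(arr)))  # plus the 4-byte field length = 48
```

Writing it out this way makes it easy to check that the per-field length covers the whole array payload, header included - exactly the kind of framing that is easy to get off by a few bytes when hand-encoding in C.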
[
{
"msg_contents": "Hi,\n\nAttached are a few small changes to the JSON_TABLE section in func.sgml.\n\nThe first two changes are simple typos.\n\nThen there was this line:\n\n----\ncontext_item, path_expression [ AS json_path_name ] [ PASSING { value AS \nvarname } [, ...]]\n----\n\nthose are the parameters to JSON_TABLE() so I changed that line to:\n\n----\nJSON_TABLE(context_item, path_expression [ AS json_path_name ] [ PASSING \n{ value AS varname } [, ...]])\n----\n\nSome parts of the JSON_TABLE text strike me as opaque. For instance, \nthere are paragraphs that more than once use the term:\n json_api_common_syntax\n\n'json_api_common_syntax' is not explained. It turns out it's a relic \nfrom Nikita's original docs. I dug up a 2018 patch where the term is \nused as:\n\n---- 2018:\nJSON_TABLE (\n json_api_common_syntax [ AS path_name ]\n COLUMNS ( json_table_column [, ...] )\n (etc...)\n----\n\nwith explanation:\n\n---- 2018:\njson_api_common_syntax:\n The input data to query, the JSON path expression defining the \nquery, and an optional PASSING clause.\n----\n\nSo that made sense then (input+jsonpath+params=api), but it doesn't now \nfit as such in the current docs.\n\nI think it would be best to remove all uses of that compound term, and \nrewrite the explanations using only the current parameter names \n(context_item, path_expression, etc).\n\nBut I wasn't sure and I haven't done any such changes in the attached.\n\nPerhaps I'll give it a try during the weekend.\n\n\nErik Rijkers",
"msg_date": "Fri, 8 Jul 2022 22:03:00 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "SQL/JSON documentation JSON_TABLE"
},
{
"msg_contents": "\nOn 2022-07-08 Fr 16:03, Erik Rijkers wrote:\n> Hi,\n>\n> Attached are a few small changes to the JSON_TABLE section in func.sgml.\n>\n> The first two changes are simple typos.\n>\n> Then there was this line:\n>\n> ----\n> context_item, path_expression [ AS json_path_name ] [ PASSING { value\n> AS varname } [, ...]]\n> ----\n>\n> those are the parameters to JSON_TABLE() so I changed that line to:\n>\n> ----\n> JSON_TABLE(context_item, path_expression [ AS json_path_name ] [\n> PASSING { value AS varname } [, ...]])\n> ----\n>\n> Some parts of the JSON_TABLE text strike me as opaque. For instance,\n> there are paragraphs that more than once use the term:\n> json_api_common_syntax\n>\n> 'json_api_common_syntax' is not explained. It turns out it's a relic\n> from Nikita's original docs. I dug up a 2018 patch where the term is\n> used as:\n>\n> ---- 2018:\n> JSON_TABLE (\n> json_api_common_syntax [ AS path_name ]\n> COLUMNS ( json_table_column [, ...] )\n> (etc...)\n> ----\n>\n> with explanation:\n>\n> ---- 2018:\n> json_api_common_syntax:\n> The input data to query, the JSON path expression defining the\n> query, and an optional PASSING clause.\n> ----\n>\n> So that made sense then (input+jsonpath+params=api), but it doesn't\n> now fit as such in the current docs.\n>\n> I think it would be best to remove all uses of that compound term, and\n> rewrite the explanations using only the current parameter names\n> (context_item, path_expression, etc).\n>\n> But I wasn't sure and I haven't done any such changes in the attached.\n>\n> Perhaps I'll give it a try during the weekend.\n>\n>\n>\n\n\nThanks for this. If you want to follow up that last sentence I will try\nto commit a single fix early next week.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 8 Jul 2022 16:20:32 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON documentation JSON_TABLE"
},
{
"msg_contents": "On 2022-07-08 Fr 16:20, Andrew Dunstan wrote:\n> On 2022-07-08 Fr 16:03, Erik Rijkers wrote:\n>> Hi,\n>>\n>> Attached are a few small changes to the JSON_TABLE section in func.sgml.\n>>\n>> The first two changes are simple typos.\n>>\n>> Then there was this line:\n>>\n>> ----\n>> context_item, path_expression [ AS json_path_name ] [ PASSING { value\n>> AS varname } [, ...]]\n>> ----\n>>\n>> those are the parameters to JSON_TABLE() so I changed that line to:\n>>\n>> ----\n>> JSON_TABLE(context_item, path_expression [ AS json_path_name ] [\n>> PASSING { value AS varname } [, ...]])\n>> ----\n>>\n>> Some parts of the JSON_TABLE text strike me as opaque. For instance,\n>> there are paragraphs that more than once use the term:\n>> json_api_common_syntax\n>>\n>> 'json_api_common_syntax' is not explained. It turns out it's a relic\n>> from Nikita's original docs. I dug up a 2018 patch where the term is\n>> used as:\n>>\n>> ---- 2018:\n>> JSON_TABLE (\n>> json_api_common_syntax [ AS path_name ]\n>> COLUMNS ( json_table_column [, ...] )\n>> (etc...)\n>> ----\n>>\n>> with explanation:\n>>\n>> ---- 2018:\n>> json_api_common_syntax:\n>> The input data to query, the JSON path expression defining the\n>> query, and an optional PASSING clause.\n>> ----\n>>\n>> So that made sense then (input+jsonpath+params=api), but it doesn't\n>> now fit as such in the current docs.\n>>\n>> I think it would be best to remove all uses of that compound term, and\n>> rewrite the explanations using only the current parameter names\n>> (context_item, path_expression, etc).\n>>\n>> But I wasn't sure and I haven't done any such changes in the attached.\n>>\n>> Perhaps I'll give it a try during the weekend.\n>>\n>>\n>>\n>\n> Thanks for this. If you want to follow up that last sentence I will try\n> to commit a single fix early next week.\n>\n>\n\nHere's a patch that deals with most of this. 
There's one change you\nwanted that I don't think is correct, which I omitted.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 14 Jul 2022 11:45:52 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON documentation JSON_TABLE"
},
{
"msg_contents": "On 7/14/22 17:45, Andrew Dunstan wrote:\n> \n> On 2022-07-08 Fr 16:20, Andrew Dunstan wrote:\n>> On 2022-07-08 Fr 16:03, Erik Rijkers wrote:\n>>> Hi,\n>>>\n>>> Attached are a few small changes to the JSON_TABLE section in func.sgml.\n>>>\n>>> The first two changes are simple typos.\n>>>\n>>> Then there was this line:\n>>>\n>>> ----\n>>> context_item, path_expression [ AS json_path_name ] [ PASSING { value\n>>> AS varname } [, ...]]\n>>> ----\n>>>\n>>> those are the parameters to JSON_TABLE() so I changed that line to:\n>>>\n>>> ----\n>>> JSON_TABLE(context_item, path_expression [ AS json_path_name ] [\n>>> PASSING { value AS varname } [, ...]])\n>>> ----\n>>>\n>>> Some parts of the JSON_TABLE text strike me as opaque. For instance,\n>>> there are paragraphs that more than once use the term:\n>>> json_api_common_syntax\n>>>\n>>> 'json_api_common_syntax' is not explained. It turns out it's a relic\n>>> from Nikita's original docs. I dug up a 2018 patch where the term is\n>>> used as:\n>>>\n>>> ---- 2018:\n>>> JSON_TABLE (\n>>> json_api_common_syntax [ AS path_name ]\n>>> COLUMNS ( json_table_column [, ...] )\n>>> (etc...)\n>>> ----\n>>>\n>>> with explanation:\n>>>\n>>> ---- 2018:\n>>> json_api_common_syntax:\n>>> The input data to query, the JSON path expression defining the\n>>> query, and an optional PASSING clause.\n>>> ----\n>>>\n>>> So that made sense then (input+jsonpath+params=api), but it doesn't\n>>> now fit as such in the current docs.\n>>>\n>>> I think it would be best to remove all uses of that compound term, and\n>>> rewrite the explanations using only the current parameter names\n>>> (context_item, path_expression, etc).\n>>\n>> Thanks for this. If you want to follow up that last sentence I will try\n>> to commit a single fix early next week.\n> \n> Here's a patch that deals with most of this. There's one change you\n> wanted that I don't think is correct, which I omitted.\n> \n> [json-docs-fix.patch]\n\nThanks, much better. 
I also agree that the change I proposed (and you \nomitted) wasn't great (although it leaves the paragraph somewhat \norphaned - but maybe it isn't too bad.).\n\nI've now compared our present document not only with the original doc as \nproduced by Nikita Glukhov et al in 2018, but also with the ISO draft \nfrom 2017 (ISO/IEC TR 19075-6 (JSON) for JavaScript Object).\n\nI think we can learn a few things from that ISO draft's JSON_TABLE text. \nLet me copy-paste its first explicatory paragraph on JSON_TABLE:\n\n-------------- [ ISO SQL/JSON draft 2017 ] ---------\nLike the other JSON querying operators, JSON_TABLE begins with <JSON API \ncommon syntax> to specify the context item, path expression and PASSING \nclause. The path expression in this case is more accurately called the \nrow pattern path expression. This path expression is intended to produce \nan SQL/JSON sequence, with one SQL/JSON item for each row of the output \ntable.\n\nThe COLUMNS clause can define two kinds of columns: ordinality columns \nand regular columns.\n\nAn ordinality column provides a sequential numbering of rows. Row \nnumbering is 1-based.\n\nA regular column supports columns of scalar type. The column is produced \nusing the semantics of JSON_VALUE. The column has an optional path \nexpression, called the column pattern, which can be defaulted from the \ncolumn name. The column pattern is used to search for the column within \nthe current SQL/JSON item produced by the row pattern. The column also \nhas optional ON EMPTY and ON ERROR clauses, with the same choices and \nsemantics as JSON_VALUE.\n--------------\n\n\nSo, where the ISO draft introduces the term 'row pattern' it /also/ \nintroduces the term 'column pattern' close by, in the next paragraph.\n\nI think our docs too should have both terms. The presence of both 'row \npattern' and 'column pattern' immediately makes their meanings obvious. 
\n At the moment our docs only use the term 'row pattern', for all the \nJSON_TABLE json path expressions (also those in the COLUMN clause, it \nseems).\n\n\nAt the moment, we say, in the JSON_TABLE doc:\n----\nTo split the row pattern into columns, json_table provides the COLUMNS \nclause that defines the schema of the created view.\n----\n\nI think that to use 'row pattern' here is just wrong, or at least \nconfusing. The 'row pattern' is /not/ the data as produced from the \njson expression; the 'row pattern' /is/ the json path expression. (ISO \ndraft: 'The path expression in this case is more accurately called the \nrow pattern path expression.' )\n\nIf you agree with my reasoning I can try to rewrite these bits in our \ndocs accordingly.\n\n\nErik Rijkers\n\n\n",
"msg_date": "Fri, 15 Jul 2022 08:20:59 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON documentation JSON_TABLE"
},
{
"msg_contents": "\nOn 2022-07-15 Fr 02:20, Erik Rijkers wrote:\n> On 7/14/22 17:45, Andrew Dunstan wrote:\n>>\n>>\n>> Here's a patch that deals with most of this. There's one change you\n>> wanted that I don't think is correct, which I omitted.\n>>\n>> [json-docs-fix.patch]\n>\n> Thanks, much better. I also agree that the change I proposed (and you\n> omitted) wasn't great (although it leaves the paragraph somewhat\n> orphaned - but maybe it isn't too bad.).\n>\n> I've now compared our present document not only with the original doc\n> as produced by Nikita Glukhov et al in 2018, but also with the ISO\n> draft from 2017 (ISO/IEC TR 19075-6 (JSON) for JavaScript Object).\n>\n> I think we can learn a few things from that ISO draft's JSON_TABLE\n> text. Let me copy-paste its first explicatory paragraph on JSON_TABLE:\n>\n> -------------- [ ISO SQL/JSON draft 2017 ] ---------\n> Like the other JSON querying operators, JSON_TABLE begins with <JSON\n> API common syntax> to specify the context item, path expression and\n> PASSING clause. The path expression in this case is more accurately\n> called the row pattern path expression. This path expression is\n> intended to produce an SQL/JSON sequence, with one SQL/JSON item for\n> each row of the output table.\n>\n> The COLUMNS clause can define two kinds of columns: ordinality columns\n> and regular columns.\n>\n> An ordinality column provides a sequential numbering of rows. Row\n> numbering is 1-based.\n>\n> A regular column supports columns of scalar type. The column is\n> produced using the semantics of JSON_VALUE. The column has an optional\n> path expression, called the column pattern, which can be defaulted\n> from the column name. 
The column pattern is used to search for the\n> column within the current SQL/JSON item produced by the row pattern.\n> The column also has optional ON EMPTY and ON ERROR clauses, with the\n> same choices and semantics as JSON_VALUE.\n> --------------\n>\n>\n> So, where the ISO draft introduces the term 'row pattern' it /also/\n> introduces the term 'column pattern' close by, in the next paragraph.\n>\n> I think our docs too should have both terms. The presence of both\n> 'row pattern' and 'column pattern' immediately makes their meanings\n> obvious. At the moment our docs only use the term 'row pattern', for\n> all the JSON_TABLE json path expressions (also those in the COLUMN\n> clause, it seems).\n>\n>\n> At the moment, we say, in the JSON_TABLE doc:\n> ----\n> To split the row pattern into columns, json_table provides the COLUMNS\n> clause that defines the schema of the created view.\n> ----\n>\n> I think that to use 'row pattern' here is just wrong, or at least\n> confusing. The 'row pattern' is /not/ the data as produced from the\n> json expression; the 'row pattern' /is/ the json path expression. \n> (ISO draft: 'The path expression in this case is more accurately\n> called the row pattern path expression.' )\n>\n> If you agree with my reasoning I can try to rewrite these bits in our\n> docs accordingly.\n>\n>\n>\n\nYes, please do.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 15 Jul 2022 09:51:29 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON documentation JSON_TABLE"
}
] |
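The row-pattern/column-pattern distinction Erik and Andrew settle on above can be illustrated outside SQL. The sketch below is a loose Python analogy only, not PostgreSQL's implementation, and the films/kind/title names are invented for the example: the row pattern is the path expression that yields one item per output row, and each column pattern is then evaluated against the current row item, defaulting to the column name, as in the ISO text quoted in the thread.

```python
def json_table(context_item, row_pattern, columns):
    """Tiny analogy of SQL/JSON JSON_TABLE.

    row_pattern: a key whose value is a list; one output row per item.
    columns: mapping of column name -> column pattern (a key looked up
    in the row item), or None to default the pattern from the name.
    """
    rows = []
    for ordinality, item in enumerate(context_item.get(row_pattern, []), start=1):
        row = {"ord": ordinality}               # ordinality column, 1-based
        for name, pattern in columns.items():
            row[name] = item.get(pattern or name)  # column pattern lookup
        rows.append(row)
    return rows

doc = {"films": [{"kind": "comedy", "title": "Bananas"},
                 {"kind": "horror", "title": "Psycho"}]}
print(json_table(doc, "films", {"kind": None, "name": "title"}))
# [{'ord': 1, 'kind': 'comedy', 'name': 'Bananas'},
#  {'ord': 2, 'kind': 'horror', 'name': 'Psycho'}]
```

Note how the row pattern is consumed once to produce the row stream, while each column pattern is re-evaluated per row - which is why the docs text should not describe the row pattern as if it were the produced data.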
[
{
"msg_contents": "We've long avoided building I/O support for utility-statement node\ntypes, mainly because it didn't seem worth the trouble to write and\nmaintain such code by hand. Now that the automatic node-support-code\ngeneration patch is in, that argument is gone, and it's just a matter\nof whether the benefits are worth the backend code bloat. I can\nsee two benefits worth considering:\n\n* Seems like having such support would be pretty useful for\ndebugging.\n\n* The only reason struct Query still needs a handwritten output\nfunction is that special logic is needed to prevent trying to\nprint the utilityStatement field when it's a utility statement\nwe lack outfuncs support for. Now it wouldn't be that hard\nto get gen_node_support.pl to replicate that special logic,\nand if we stick with the status-quo functionality then I think we\nshould do that so that we can get rid of the handwritten function.\nBut the other alternative is to provide outfuncs support for all\nutility statements and drop the conditionality.\n\nSo I looked into how much code are we talking about. On my\nRHEL8 x86_64 machine, the code sizes for outfuncs/readfuncs\nas of HEAD are\n\n$ size outfuncs.o readfuncs.o\n text data bss dec hex filename\n 117173 0 0 117173 1c9b5 outfuncs.o\n 64540 0 0 64540 fc1c readfuncs.o\n\nIf we just open the floodgates and enable both outfuncs and\nreadfuncs support for all *Stmt nodes (plus some node types\nthat thereby become dumpable, like AlterTableCmd), then\nthis becomes\n\n$ size outfuncs.o readfuncs.o\n text data bss dec hex filename\n 139503 0 0 139503 220ef outfuncs.o\n 95562 0 0 95562 1754a readfuncs.o\n\nFor my taste, the circa 20K growth in outfuncs.o is an okay\nprice for being able to inspect utility statements more easily.\nHowever, I'm less thrilled with the 30K growth in readfuncs.o,\nbecause I can't see that we'd get any direct benefit from that.\nSo I think a realistic proposal is to enable outfuncs support\nbut keep readfuncs disabled. 
The attached WIP patch does that,\nand gives me these code sizes:\n\n$ size outfuncs.o readfuncs.o\n text data bss dec hex filename\n 139503 0 0 139503 220ef outfuncs.o\n 69356 0 0 69356 10eec readfuncs.o\n\n(The extra readfuncs space comes from not troubling over the\nsubsidiary node types such as AlterTableCmd. We could run\naround and mark those no_read, but I didn't bother yet.)\n\nThe support-suppression code in gen_node_support.pl was a crude\nhack before, and this patch doesn't make it any less so.\nIf we go this way, it would be better to move the knowledge that\nwe're suppressing read functionality into the utility statement\nnode declarations. We could just manually label them all\npg_node_attr(no_read), but what I'm kind of tempted to do is\ninvent a dummy abstract node type like Expr, and make all the\nutility statements inherit from it:\n\ntypedef struct UtilityStmt\n{\n\tpg_node_attr(abstract, no_read)\n\n\tNodeTag\t\ttype;\n} UtilityStmt;\n\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 09 Jul 2022 18:20:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Extending outfuncs support to utility statements"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-09 18:20:26 -0400, Tom Lane wrote:\n> We've long avoided building I/O support for utility-statement node\n> types, mainly because it didn't seem worth the trouble to write and\n> maintain such code by hand. Now that the automatic node-support-code\n> generation patch is in, that argument is gone, and it's just a matter\n> of whether the benefits are worth the backend code bloat. I can\n> see two benefits worth considering:\n> \n> * Seems like having such support would be pretty useful for\n> debugging.\n\nAgreed.\n\n\n> So I looked into how much code are we talking about. On my\n> RHEL8 x86_64 machine, the code sizes for outfuncs/readfuncs\n> as of HEAD are\n> \n> $ size outfuncs.o readfuncs.o\n> text data bss dec hex filename\n> 117173 0 0 117173 1c9b5 outfuncs.o\n> 64540 0 0 64540 fc1c readfuncs.o\n> \n> If we just open the floodgates and enable both outfuncs and\n> readfuncs support for all *Stmt nodes (plus some node types\n> that thereby become dumpable, like AlterTableCmd), then\n> this becomes\n> \n> $ size outfuncs.o readfuncs.o\n> text data bss dec hex filename\n> 139503 0 0 139503 220ef outfuncs.o\n> 95562 0 0 95562 1754a readfuncs.o\n> \n> For my taste, the circa 20K growth in outfuncs.o is an okay\n> price for being able to inspect utility statements more easily.\n> However, I'm less thrilled with the 30K growth in readfuncs.o,\n> because I can't see that we'd get any direct benefit from that.\n> So I think a realistic proposal is to enable outfuncs support\n> but keep readfuncs disabled.\n\nAnother approach could be to mark those paths as \"cold\", so they are placed\nfurther away, reducing / removing potential overhead due to higher iTLB misses\netc. 30K of disk space isn't worth worrying about.\n\nDon't really have an opinion on this.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 10 Jul 2022 14:43:48 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Extending outfuncs support to utility statements"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-07-09 18:20:26 -0400, Tom Lane wrote:\n>> For my taste, the circa 20K growth in outfuncs.o is an okay\n>> price for being able to inspect utility statements more easily.\n>> However, I'm less thrilled with the 30K growth in readfuncs.o,\n>> because I can't see that we'd get any direct benefit from that.\n>> So I think a realistic proposal is to enable outfuncs support\n>> but keep readfuncs disabled.\n\n> Another approach could be to mark those paths as \"cold\", so they are placed\n> further away, reducing / removing potential overhead due to higher iTLB misses\n> etc. 30K of disk space isn't worth worrying about.\n\nThey're not so much \"cold\" as \"dead\", so I don't see the point\nof having them at all. If we ever start allowing utility commands\n(besides NOTIFY) in stored rules, we'd need readfuncs support then\n... but at least in the short run I don't see that happening.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 10 Jul 2022 19:12:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Extending outfuncs support to utility statements"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-10 19:12:52 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-07-09 18:20:26 -0400, Tom Lane wrote:\n> >> For my taste, the circa 20K growth in outfuncs.o is an okay\n> >> price for being able to inspect utility statements more easily.\n> >> However, I'm less thrilled with the 30K growth in readfuncs.o,\n> >> because I can't see that we'd get any direct benefit from that.\n> >> So I think a realistic proposal is to enable outfuncs support\n> >> but keep readfuncs disabled.\n> \n> > Another approach could be to mark those paths as \"cold\", so they are placed\n> > further away, reducing / removing potential overhead due to higher iTLB misses\n> > etc. 30K of disk space isn't worth worrying about.\n> \n> They're not so much \"cold\" as \"dead\", so I don't see the point\n> of having them at all. If we ever start allowing utility commands\n> (besides NOTIFY) in stored rules, we'd need readfuncs support then\n> ... but at least in the short run I don't see that happening.\n\nIt would allow us to test utility outfuncs as part of the\nWRITE_READ_PARSE_PLAN_TREES check. Not that that's worth very much.\n\nI guess it could be a minor help in making a few more utility commands benefit\nfrom parallelism?\n\nAnyway, as mentioned earlier, I'm perfectly fine not supporting readfuncs for\nutility statements for now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 10 Jul 2022 17:15:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Extending outfuncs support to utility statements"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-07-10 19:12:52 -0400, Tom Lane wrote:\n>> They're not so much \"cold\" as \"dead\", so I don't see the point\n>> of having them at all. If we ever start allowing utility commands\n>> (besides NOTIFY) in stored rules, we'd need readfuncs support then\n>> ... but at least in the short run I don't see that happening.\n\n> It would allow us to test utility outfuncs as part of the\n> WRITE_READ_PARSE_PLAN_TREES check. Not that that's worth very much.\n\nEspecially now that those are all auto-generated anyway.\n\n> I guess it could be a minor help in making a few more utility commands benefit\n> from paralellism?\n\nAgain, once we have an actual use-case, enabling that code will be\nfine by me. But we don't yet.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 10 Jul 2022 20:28:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Extending outfuncs support to utility statements"
},
{
"msg_contents": "Hi,\n\nNow we are ready to have debug_print_raw_parse (or something like\nthat)? Pgpool-II has been importing and using PostgreSQL's raw\nparser for years. I think it would be great for PostgreSQL and\nPgpool-II developers to have such a feature.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 11 Jul 2022 10:57:07 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Extending outfuncs support to utility statements"
},
{
"msg_contents": "On 10.07.22 00:20, Tom Lane wrote:\n> We've long avoided building I/O support for utility-statement node\n> types, mainly because it didn't seem worth the trouble to write and\n> maintain such code by hand. Now that the automatic node-support-code\n> generation patch is in, that argument is gone, and it's just a matter\n> of whether the benefits are worth the backend code bloat. I can\n> see two benefits worth considering:\n\nThis is also needed to be able to store utility statements in (unquoted) \nSQL function bodies. I have some in-progress code for that that I need \nto dust off. IIRC, there are still some nontrivial issues to work \nthrough on the reading side. I don't have a problem with enabling the \noutfuncs side in the meantime.\n\n\n",
"msg_date": "Mon, 11 Jul 2022 15:56:44 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Extending outfuncs support to utility statements"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 10.07.22 00:20, Tom Lane wrote:\n>> We've long avoided building I/O support for utility-statement node\n>> types, mainly because it didn't seem worth the trouble to write and\n>> maintain such code by hand.\n> This is also needed to be able to store utility statements in (unquoted) \n> SQL function bodies. I have some in-progress code for that that I need \n> to dust off. IIRC, there are still some nontrivial issues to work \n> through on the reading side. I don't have a problem with enabling the \n> outfuncs side in the meantime.\n\nOh! I'd not thought of that, but yes that is a plausible near-term\nrequirement for readfuncs support for utility statements. So my\nconcern about suppressing those is largely a waste of effort.\n\nThere might be enough node types that are raw-parse-tree-only,\nbut not involved in utility statements, to make it worth\ncontinuing to suppress readfuncs support for them. But I kinda\ndoubt it. I'll try to get some numbers later today.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 11 Jul 2022 10:16:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Extending outfuncs support to utility statements"
},
{
"msg_contents": "I wrote:\n> There might be enough node types that are raw-parse-tree-only,\n> but not involved in utility statements, to make it worth\n> continuing to suppress readfuncs support for them. But I kinda\n> doubt it. I'll try to get some numbers later today.\n\nGranting that we want write/read support for utility statements,\nit seems that what we can save by suppressing raw-parse-tree-only\nnodes is only about 10kB. That's clearly not worth troubling over\nin the grand scheme of things, so I suggest that we just open the\nfloodgates as attached.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 12 Jul 2022 18:01:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Extending outfuncs support to utility statements"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> This is also needed to be able to store utility statements in (unquoted) \n> SQL function bodies. I have some in-progress code for that that I need \n> to dust off. IIRC, there are still some nontrivial issues to work \n> through on the reading side. I don't have a problem with enabling the \n> outfuncs side in the meantime.\n\nBTW, I experimented with trying to enable WRITE_READ_PARSE_PLAN_TREES\nfor utility statements, and found that the immediate problem is that\nConstraint and a couple of other node types lack read functions\n(they're the ones marked \"custom_read_write, no_read\" in parsenodes.h).\nThey have out functions, so writing the inverses seems like it's just\nsomething nobody ever got around to. Perhaps there are deeper problems\nlurking behind that one, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Jul 2022 18:38:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Extending outfuncs support to utility statements"
},
{
"msg_contents": "On 13.07.22 00:38, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> This is also needed to be able to store utility statements in (unquoted)\n>> SQL function bodies. I have some in-progress code for that that I need\n>> to dust off. IIRC, there are still some nontrivial issues to work\n>> through on the reading side. I don't have a problem with enabling the\n>> outfuncs side in the meantime.\n> \n> BTW, I experimented with trying to enable WRITE_READ_PARSE_PLAN_TREES\n> for utility statements, and found that the immediate problem is that\n> Constraint and a couple of other node types lack read functions\n> (they're the ones marked \"custom_read_write, no_read\" in parsenodes.h).\n> They have out functions, so writing the inverses seems like it's just\n> something nobody ever got around to. Perhaps there are deeper problems\n> lurking behind that one, though.\n\nHere are patches for that.\n\nv1-0001-Fix-reading-of-most-negative-integer-value-nodes.patch\nv1-0002-Fix-reading-of-BitString-nodes.patch\n\nThese are some of those lurking problems.\n\nv1-0003-Add-read-support-for-some-missing-raw-parse-nodes.patch\n\nThis adds the read support for the missing nodes.\n\nThe above patches are candidates for committing.\n\nAt this point we have one structural problem left: char * node fields \noutput with WRITE_STRING_FIELD() (ultimately outToken()) don't \ndistinguish between empty strings and NULL values. A write/read \nroundtrip ends up as NULL for an empty string. 
This shows up in the \nregression tests for commands such as\n\nCREATE TABLESPACE regress_tblspace LOCATION '';\nCREATE SUBSCRIPTION regress_addr_sub CONNECTION '' ...\n\nThis will need some expansion of the output format to handle this.\n\nv1-0004-XXX-Turn-on-WRITE_READ_PARSE_PLAN_TREES-for-testi.patch\nv1-0005-Implement-WRITE_READ_PARSE_PLAN_TREES-for-raw-par.patch\nv1-0006-Enable-WRITE_READ_PARSE_PLAN_TREES-of-rewritten-u.patch\n\nThis is for testing the above. Note that in 0005 we need some special \nhandling for float values to preserve the full precision across \nwrite/read. I suppose this could be unified with the code that preserves \nthe location fields when doing write/read checking.\n\nv1-0007-Enable-utility-statements-in-unquoted-SQL-functio.patch\n\nThis demonstrates what the ultimate goal is. A few more tests should be \nadded eventually.",
"msg_date": "Wed, 24 Aug 2022 17:25:31 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Extending outfuncs support to utility statements"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-24 17:25:31 +0200, Peter Eisentraut wrote:\n> Here are patches for that.\n\nThese patches have been failing since they were posted, afaict:\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/39/3848\n\nI assume that's known? Most of the failures seem to be things like\ndiff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/tablespace.out /tmp/cirrus-ci-build/build/testrun/main/regress/results/tablespace.out\n--- /tmp/cirrus-ci-build/src/test/regress/expected/tablespace.out\t2022-09-22 12:30:07.340655000 +0000\n+++ /tmp/cirrus-ci-build/build/testrun/main/regress/results/tablespace.out\t2022-09-22 12:35:15.075825000 +0000\n@@ -3,6 +3,8 @@\n ERROR: tablespace location must be an absolute path\n -- empty tablespace locations are not usually allowed\n CREATE TABLESPACE regress_tblspace LOCATION ''; -- fail\n+WARNING: outfuncs/readfuncs failed to produce an equal raw parse tree\n+WARNING: outfuncs/readfuncs failed to produce an equal rewritten parse tree\n ERROR: tablespace location must be an absolute path\n -- as a special developer-only option to allow us to use tablespaces\n -- with streaming replication on the same server, an empty location\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 22 Sep 2022 08:32:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Extending outfuncs support to utility statements"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-08-24 17:25:31 +0200, Peter Eisentraut wrote:\n>> Here are patches for that.\n\n> These patches have been failing since they were posted, afaict:\n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/39/3848\n\n> I assume that's known?\n\nI think this is the issue Peter mentioned about needing to distinguish\nbetween empty strings and NULL strings. We're going to need to rethink\nthe behavior of pg_strtok() a bit to fix that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Sep 2022 12:16:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Extending outfuncs support to utility statements"
},
{
"msg_contents": "I wrote:\n> I think this is the issue Peter mentioned about needing to distinguish\n> between empty strings and NULL strings. We're going to need to rethink\n> the behavior of pg_strtok() a bit to fix that.\n\nAfter staring at the code a bit, I think we don't need to touch\npg_strtok() per se. I propose that this can be resolved with changes\nat the next higher level. Let's make outToken print NULL as <> as\nit always has, but print an empty string as \"\" (two double quotes).\nIf the raw input string is two double quotes, print it as \\\"\" to\ndisambiguate. This'd require a catversion bump when committed,\nbut I don't think there are any showstopper problems otherwise.\n\nI'll work on fleshing that idea out.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Sep 2022 12:48:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Extending outfuncs support to utility statements"
},
{
"msg_contents": "On 2022-09-22 12:48:47 -0400, Tom Lane wrote:\n> I wrote:\n> > I think this is the issue Peter mentioned about needing to distinguish\n> > between empty strings and NULL strings. We're going to need to rethink\n> > the behavior of pg_strtok() a bit to fix that.\n> \n> After staring at the code a bit, I think we don't need to touch\n> pg_strtok() per se. I propose that this can be resolved with changes\n> at the next higher level. Let's make outToken print NULL as <> as\n> it always has, but print an empty string as \"\" (two double quotes).\n> If the raw input string is two double quotes, print it as \\\"\" to\n> disambiguate. This'd require a catversion bump when committed,\n> but I don't think there are any showstopper problems otherwise.\n\nMakes sense to me.\n\n\n",
"msg_date": "Thu, 22 Sep 2022 10:01:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Extending outfuncs support to utility statements"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-09-22 12:48:47 -0400, Tom Lane wrote:\n>> After staring at the code a bit, I think we don't need to touch\n>> pg_strtok() per se. I propose that this can be resolved with changes\n>> at the next higher level. Let's make outToken print NULL as <> as\n>> it always has, but print an empty string as \"\" (two double quotes).\n>> If the raw input string is two double quotes, print it as \\\"\" to\n>> disambiguate. This'd require a catversion bump when committed,\n>> but I don't think there are any showstopper problems otherwise.\n\n> Makes sense to me.\n\nHere is a version of all-but-the-last patch in Peter's series.\nI left off the last one because it fails check-world: we now\nget through the core regression tests okay, but then the pg_dump\ntests fail on the new SQL function. To fix that, we would have\nto extend ruleutils.c's get_utility_query_def() to be able to\nfully reconstruct any legal utility query ... which seems like\na pretty dauntingly large amount of tedious manual effort to\nstart with, and then also a nontrivial additional requirement\non any future patch that adds new utility syntax. Are we sure\nit's worth going there?\n\nBut I think it's probably worth committing what we have here\njust on testability grounds.\n\nSome notes:\n\n0001, 0002 not changed.\n\nI tweaked 0003 a bit, mainly because I think it's probably not very\nsafe to apply strncmp to a string we don't know the length of.\nIt might be difficult to fall off the end of memory that way, but\nI wouldn't bet it's impossible. 
Also, adding the length checks gets\nrid of the need for a grotty order dependency in _readA_Expr().\n\n0004 fixes the empty-string problem as per above.\n\nI did not like what you'd done about imprecise floats one bit.\nI think we ought to do it as in 0005 instead: drop all the\nhard-wired precision assumptions and just print per Ryu.\n\n0006, 0007, 0008 are basically the same as your previous 0004,\n0005, 0006, except for getting rid of the float hacking in 0005.\n\nIf you're good with this approach to the float issue, I think\nthis set is committable (minus 0006 of course, and don't forget\nthe catversion bump).\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 22 Sep 2022 16:57:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Extending outfuncs support to utility statements"
},
{
"msg_contents": "I wrote:\n> I left off the last one because it fails check-world: we now\n> get through the core regression tests okay, but then the pg_dump\n> tests fail on the new SQL function. To fix that, we would have\n> to extend ruleutils.c's get_utility_query_def() to be able to\n> fully reconstruct any legal utility query ... which seems like\n> a pretty dauntingly large amount of tedious manual effort to\n> start with, and then also a nontrivial additional requirement\n> on any future patch that adds new utility syntax. Are we sure\n> it's worth going there?\n\nThinking about that some more, I wondered if we'd even wish to\nbuild such code, compared to just saving the original source text\nfor utility statements and printing that. Obviously, this loses\nall the benefits of new-style SQL functions compared to old-style\n... except that those benefits would be illusory anyway, since by\ndefinition we have not done parse analysis on a utility statement.\nSo we *cannot* offer any useful guarantees about being search_path\nchange proof, following renames of referenced objects, preventing\ndrops of referenced objects, etc etc.\n\nThis makes me wonder if this is a feature we even want. If we\nput it in, we'd have to add a bunch of disclaimers about how\nutility statements behave entirely differently from DML statements.\n\nPerhaps an interesting alternative is to allow a command along\nthe lines of\n\n\tEXECUTE string-expression\n\n(of course that name is already taken) where we'd parse-analyze\nthe string-expression at function creation, but then the computed\nstring is executed as a SQL command in the runtime environment.\nThis would make it fairly clear which things you have guarantees\nof and which you don't. It'd also offer a feature that the PLs\nhave but SQL functions traditionally haven't, ie execution of\ndynamically-computed SQL.\n\nAnyway, this is a bit far afield from the stated topic of this\nthread. 
I think we should commit something approximately like\nwhat I posted and then start a new thread specifically about\nwhat we'd like to do about utility commands in new-style SQL\nfunctions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Sep 2022 17:21:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Extending outfuncs support to utility statements"
},
{
"msg_contents": "On 22.09.22 23:21, Tom Lane wrote:\n> Anyway, this is a bit far afield from the stated topic of this\n> thread. I think we should commit something approximately like\n> what I posted and then start a new thread specifically about\n> what we'd like to do about utility commands in new-style SQL\n> functions.\n\nRight, I have committed everything and will close the CF entry. I don't \nhave a specific idea about how to move forward right now.\n\n\n\n\n",
"msg_date": "Mon, 26 Sep 2022 16:46:49 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Extending outfuncs support to utility statements"
},
{
"msg_contents": "Hello,\n\n26.09.2022 17:46, Peter Eisentraut wrote:\n> On 22.09.22 23:21, Tom Lane wrote:\n>> Anyway, this is a bit far afield from the stated topic of this\n>> thread. I think we should commit something approximately like\n>> what I posted and then start a new thread specifically about\n>> what we'd like to do about utility commands in new-style SQL\n>> functions.\n>\n> Right, I have committed everything and will close the CF entry. I don't have a specific idea about how to move \n> forward right now.\n\nPlease look at the function _readA_Const() (introduced in a6bc33019), which fails on current master under valgrind:\nCPPFLAGS=\"-DUSE_VALGRIND -DWRITE_READ_PARSE_PLAN_TREES -Og \" ./configure -q --enable-debug && make -s -j8 && make check\n\n============== creating temporary instance ==============\n============== initializing database system ==============\n\npg_regress: initdb failed\nExamine .../src/test/regress/log/initdb.log for the reason.\n\ninitdb.log contains:\nperforming post-bootstrap initialization ... 
==00:00:00:02.155 3419654== Invalid read of size 16\n==00:00:00:02.155 3419654== at 0x448691: memcpy (string_fortified.h:29)\n==00:00:00:02.155 3419654== by 0x448691: _readA_Const (readfuncs.c:315)\n==00:00:00:02.155 3419654== by 0x44CCD2: parseNodeString (readfuncs.switch.c:129)\n==00:00:00:02.155 3419654== by 0x4348D6: nodeRead (read.c:338)\n==00:00:00:02.155 3419654== by 0x434879: nodeRead (read.c:452)\n==00:00:00:02.155 3419654== by 0x440E6C: _readTypeName (readfuncs.funcs.c:830)\n==00:00:00:02.155 3419654== by 0x44CC3A: parseNodeString (readfuncs.switch.c:121)\n==00:00:00:02.155 3419654== by 0x4348D6: nodeRead (read.c:338)\n==00:00:00:02.155 3419654== by 0x43D51D: _readFunctionParameter (readfuncs.funcs.c:2513)\n==00:00:00:02.155 3419654== by 0x44DE0C: parseNodeString (readfuncs.switch.c:367)\n==00:00:00:02.155 3419654== by 0x4348D6: nodeRead (read.c:338)\n==00:00:00:02.155 3419654== by 0x434879: nodeRead (read.c:452)\n==00:00:00:02.155 3419654== by 0x438A9C: _readCreateFunctionStmt (readfuncs.funcs.c:2499)\n==00:00:00:02.155 3419654== Address 0xf12f718 is 0 bytes inside a block of size 8 client-defined\n==00:00:00:02.155 3419654== at 0x6A70C3: MemoryContextAllocZeroAligned (mcxt.c:1109)\n==00:00:00:02.155 3419654== by 0x450C31: makeInteger (value.c:25)\n==00:00:00:02.155 3419654== by 0x434D59: nodeRead (read.c:482)\n==00:00:00:02.155 3419654== by 0x448690: _readA_Const (readfuncs.c:313)\n==00:00:00:02.155 3419654== by 0x44CCD2: parseNodeString (readfuncs.switch.c:129)\n==00:00:00:02.155 3419654== by 0x4348D6: nodeRead (read.c:338)\n==00:00:00:02.155 3419654== by 0x434879: nodeRead (read.c:452)\n==00:00:00:02.155 3419654== by 0x440E6C: _readTypeName (readfuncs.funcs.c:830)\n==00:00:00:02.155 3419654== by 0x44CC3A: parseNodeString (readfuncs.switch.c:121)\n==00:00:00:02.155 3419654== by 0x4348D6: nodeRead (read.c:338)\n==00:00:00:02.155 3419654== by 0x43D51D: _readFunctionParameter (readfuncs.funcs.c:2513)\n==00:00:00:02.155 3419654== by 0x44DE0C: 
parseNodeString (readfuncs.switch.c:367)\n==00:00:00:02.155 3419654==\n\nHere _readA_Const() performs:\n union ValUnion *tmp = nodeRead(NULL, 0);\n\n memcpy(&local_node->val, tmp, sizeof(*tmp));\n\nwhere sizeof(union ValUnion) = 16, but nodeRead()->makeInteger() produced Integer (sizeof(Integer) = 8).\n\nBest regards,\nAlexander",
"msg_date": "Sun, 19 Mar 2023 15:00:01 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Extending outfuncs support to utility statements"
},
{
"msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> Please look at the function _readA_Const() (introduced in a6bc33019), which fails on current master under valgrind:\n> ...\n> Here _readA_Const() performs:\n> union ValUnion *tmp = nodeRead(NULL, 0);\n\n> memcpy(&local_node->val, tmp, sizeof(*tmp));\n\n> where sizeof(union ValUnion) = 16, but nodeRead()->makeInteger() produced Integer (sizeof(Integer) = 8).\n\nRight, so we can't get away without a switch-on-value-type like the\nother functions for A_Const have. Will fix.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 19 Mar 2023 14:22:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Extending outfuncs support to utility statements"
}
] |
[
{
"msg_contents": "As committed, gen_node_support.pl excludes CallContext and InlineCodeBlock\nfrom getting unneeded support functions via some very ad-hoc code.\n(Right now, there are some other node types that are handled similarly,\nbut I'm looking to drive that set to empty.) After looking at the\nsituation a bit, I think the problem is that these nodes are declared\nin parsenodes.h even though they have exactly nothing to do with\nparse trees. What they are is function-calling API infrastructure,\nso it seems like the most natural home for them is fmgr.h. A weaker\ncase could be made for funcapi.h, perhaps.\n\nSo I tried moving them to fmgr.h, and it blew up because they need\ntypedef NodeTag while fmgr.h does not #include nodes.h. I feel that\nthe most reasonable approach is to just give up on that bit of\nmicro-optimization and let fmgr.h include nodes.h. It was already\ndoing a bit of hackery to compile \"Node *\" references without that\ninclusion, so this seems more clean not less so.\n\nHence, I propose the attached. (The changes in the PL files are\njust to align them on a common best practice for an InlineCodeBlock\nargument.)\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 09 Jul 2022 19:50:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Making CallContext and InlineCodeBlock less special-case-y"
},
{
"msg_contents": "I wrote:\n> As committed, gen_node_support.pl excludes CallContext and InlineCodeBlock\n> from getting unneeded support functions via some very ad-hoc code.\n> (Right now, there are some other node types that are handled similarly,\n> but I'm looking to drive that set to empty.) After looking at the\n> situation a bit, I think the problem is that these nodes are declared\n> in parsenodes.h even though they have exactly nothing to do with\n> parse trees. What they are is function-calling API infrastructure,\n> so it seems like the most natural home for them is fmgr.h. A weaker\n> case could be made for funcapi.h, perhaps.\n\nOn further thought, another way we could do this is to leave them where\nthey are but label them with a new attribute pg_node_attr(node_tag_only).\nThe big advantage of this idea is that it lets us explain\ngen_node_support.pl's handling of execnodes.h and some other files as\n\"Nodes declared in these files are automatically assumed to be\nnode_tag_only. At some future date we might label them explicitly\nand remove the file-level assumption.\" That gives us an easy fix\nif we ever find ourselves wanting to supply support functions for\na subset of the nodes in one of those files.\n\nThis ties in a little bit with an idea I had for cleaning up the\nother ad-hocery remaining in gen_node_support.pl. It looks like\nwe are heading towards marking all the raw-parse-tree nodes and\nutility-statement nodes as no_read, so as to be able to support them\nin outfuncs but not readfuncs. But if we're going to touch all of\nthose declarations, how about doing something a bit higher-level,\nand marking them with semantic categories? That is,\n\"pg_node_attr(raw_parse_node)\" if the node appears in raw parse\ntrees but not anywhere later in the pipeline, or\n\"pg_node_attr(utility_statement)\" if that's what it is. 
Currently\nthese labels would just act as \"no_read\", but this approach would\nmake it a whole lot easier to change our minds later about how to\nhandle these categories of nodes.\n\nI'm not entirely sure whether pg_node_attr(utility_statement) is\na better or worse idea than the inherit-from-UtilityStmt method\nI posited in a nearby thread [1]. In principle we could do the\nraw-parse-node labeling that way too, but for some reason it\ndoesn't seem quite as nice for raw parse nodes, mainly because\na subclass for them doesn't seem as well defined as one for\nutility statements.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/4159834.1657405226%40sss.pgh.pa.us\n\n\n",
"msg_date": "Sat, 09 Jul 2022 20:45:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Making CallContext and InlineCodeBlock less special-case-y"
},
{
"msg_contents": "On 10.07.22 01:50, Tom Lane wrote:\n> As committed, gen_node_support.pl excludes CallContext and InlineCodeBlock\n> from getting unneeded support functions via some very ad-hoc code.\n\nCouldn't we just enable those support functions? I think they were just \nexcluded because they didn't have any before and nobody bothered to make \nany.\n\n\n\n",
"msg_date": "Mon, 11 Jul 2022 15:59:21 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Making CallContext and InlineCodeBlock less special-case-y"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 10.07.22 01:50, Tom Lane wrote:\n>> As committed, gen_node_support.pl excludes CallContext and InlineCodeBlock\n>> from getting unneeded support functions via some very ad-hoc code.\n\n> Couldn't we just enable those support functions? I think they were just \n> excluded because they didn't have any before and nobody bothered to make \n> any.\n\nWell, we could I suppose, but that path leads to a lot of dead code in\nbackend/nodes/ --- obviously these two alone are negligible, but I want\na story other than \"it's a hack\" for execnodes.h and the other files\nwe exclude from generation of support code.\n\nAfter sleeping on it, I'm thinking the \"pg_node_attr(nodetag_only)\"\nsolution is the way to go, as that can lead to per-node rather than\nper-file exclusion of support code, which we're surely going to want\neventually in more places.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 11 Jul 2022 10:11:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Making CallContext and InlineCodeBlock less special-case-y"
},
{
"msg_contents": "I wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 10.07.22 01:50, Tom Lane wrote:\n>>> As committed, gen_node_support.pl excludes CallContext and InlineCodeBlock\n>>> from getting unneeded support functions via some very ad-hoc code.\n\n>> Couldn't we just enable those support functions? I think they were just \n>> excluded because they didn't have any before and nobody bothered to make \n>> any.\n\n> Well, we could I suppose, but that path leads to a lot of dead code in\n> backend/nodes/ --- obviously these two alone are negligible, but I want\n> a story other than \"it's a hack\" for execnodes.h and the other files\n> we exclude from generation of support code.\n\nHere's a proposed patch for this bit. Again, whether these two\nnode types have unnecessary support functions is not the point ---\nobviously we could afford to waste that much space. Rather, what\nI'm after is to have a more explainable and flexible way of dealing\nwith the file-level exclusions applied to a lot of other node types.\nThis patch doesn't make any change in the script's output now, but\nit gives us flexibility for the future.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 11 Jul 2022 19:01:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Making CallContext and InlineCodeBlock less special-case-y"
},
{
"msg_contents": "On 12.07.22 01:01, Tom Lane wrote:\n> I wrote:\n>> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>>> On 10.07.22 01:50, Tom Lane wrote:\n>>>> As committed, gen_node_support.pl excludes CallContext and InlineCodeBlock\n>>>> from getting unneeded support functions via some very ad-hoc code.\n> \n>>> Couldn't we just enable those support functions? I think they were just\n>>> excluded because they didn't have any before and nobody bothered to make\n>>> any.\n> \n>> Well, we could I suppose, but that path leads to a lot of dead code in\n>> backend/nodes/ --- obviously these two alone are negligible, but I want\n>> a story other than \"it's a hack\" for execnodes.h and the other files\n>> we exclude from generation of support code.\n> \n> Here's a proposed patch for this bit. Again, whether these two\n> node types have unnecessary support functions is not the point ---\n> obviously we could afford to waste that much space. Rather, what\n> I'm after is to have a more explainable and flexible way of dealing\n> with the file-level exclusions applied to a lot of other node types.\n> This patch doesn't make any change in the script's output now, but\n> it gives us flexibility for the future.\n\nYeah, looks reasonable.\n\n\n",
"msg_date": "Tue, 12 Jul 2022 20:58:53 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Making CallContext and InlineCodeBlock less special-case-y"
}
] |
[
{
"msg_contents": "Hi,\n\nIn one of my compilations of Postgres, I noted this warning from gcc.\n\ngcc -Wall -Wmissing-prototypes -Wpointer-arith\n-Wdeclaration-after-statement -Werror=vla -Wendif-labels\n-Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n-Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard\n-Wno-format-truncation -Wno-stringop-truncation -O2\n-I../../../../src/include -D_GNU_SOURCE -c -o sync.o sync.c\nsync.c: In function ‘RememberSyncRequest’:\nsync.c:528:10: warning: assignment to ‘PendingFsyncEntry *’ {aka ‘struct\n<anonymous> *’} from incompatible pointer type ‘PendingUnlinkEntry *’ {aka\n‘struct <anonymous> *’} [-Wincompatible-pointer-types]\n 528 | entry = (PendingUnlinkEntry *) lfirst(cell);\n\nAlthough the structures are identical, gcc bothers to assign a pointer from\none to the other.\n\ntypedef struct\n{\nFileTag tag; /* identifies handler and file */\nCycleCtr cycle_ctr; /* sync_cycle_ctr of oldest request */\nbool canceled; /* canceled is true if we canceled \"recently\" */\n} PendingFsyncEntry;\n\ntypedef struct\n{\nFileTag tag; /* identifies handler and file */\nCycleCtr cycle_ctr; /* checkpoint_cycle_ctr when request was made */\nbool canceled; /* true if request has been canceled */\n} PendingUnlinkEntry;\n\nThe patch tries to fix this.\n\nregards,\nRanier Vilela",
"msg_date": "Sat, 9 Jul 2022 21:53:31 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix gcc warning in sync.c (usr/src/backend/storage/sync/sync.c)"
},
{
"msg_contents": "At Sat, 9 Jul 2022 21:53:31 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \r\n> sync.c: In function ‘RememberSyncRequest’:\r\n> sync.c:528:10: warning: assignment to ‘PendingFsyncEntry *’ {aka ‘struct\r\n> <anonymous> *’} from incompatible pointer type ‘PendingUnlinkEntry *’ {aka\r\n> ‘struct <anonymous> *’} [-Wincompatible-pointer-types]\r\n> 528 | entry = (PendingUnlinkEntry *) lfirst(cell);\r\n> \r\n> Although the structures are identical, gcc bothers to assign a pointer from\r\n> one to the other.\r\n\r\nIf the entry were of really PendingSyncEntry, it would need a fix, but\r\nat the same time everyone should see the same warning at their hand.\r\n\r\nActually, I already see the following line (maybe) at the place instead.\r\n\r\n529@master,REL14, 508@REL13\r\n>\t\tPendingUnlinkEntry *entry = (PendingUnlinkEntry *) lfirst(cell);\r\n\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Mon, 11 Jul 2022 14:35:00 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix gcc warning in sync.c (usr/src/backend/storage/sync/sync.c)"
},
{
"msg_contents": "[ cc'ing Thomas, whose code this seems to be ]\n\nKyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Sat, 9 Jul 2022 21:53:31 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n>> sync.c: In function ¡RememberSyncRequest¢:\n>> sync.c:528:10: warning: assignment to ¡PendingFsyncEntry *¢ {aka ¡struct\n>> <anonymous> *¢} from incompatible pointer type ¡PendingUnlinkEntry *¢ {aka\n>> ¡struct <anonymous> *¢} [-Wincompatible-pointer-types]\n>> 528 | entry = (PendingUnlinkEntry *) lfirst(cell);\n\n> Actually, I already see the following line (maybe) at the place instead.\n>> PendingUnlinkEntry *entry = (PendingUnlinkEntry *) lfirst(cell);\n\nYeah, I see no line matching that in HEAD either.\n\nHowever, I do not much like the code at line 528, because its\n\"PendingUnlinkEntry *entry\" is masking an outer variable\n\"PendingFsyncEntry *entry\" from line 513. We should rename\none or both variables to avoid that masking.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 11 Jul 2022 01:45:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix gcc warning in sync.c (usr/src/backend/storage/sync/sync.c)"
},
{
"msg_contents": "At Mon, 11 Jul 2022 01:45:16 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> [ cc'ing Thomas, whose code this seems to be ]\n> \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > At Sat, 9 Jul 2022 21:53:31 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> >> 528 | entry = (PendingUnlinkEntry *) lfirst(cell);\n> \n> > Actually, I already see the following line (maybe) at the place instead.\n> >> PendingUnlinkEntry *entry = (PendingUnlinkEntry *) lfirst(cell);\n> \n> Yeah, I see no line matching that in HEAD either.\n> \n> However, I do not much like the code at line 528, because its\n> \"PendingUnlinkEntry *entry\" is masking an outer variable\n> \"PendingFsyncEntry *entry\" from line 513. We should rename\n> one or both variables to avoid that masking.\n\nI thought the same at the moment looking this. In this case, changing\nentry->syncent, unl(del)lent works. But at the same time I don't think\nthat can be strictly applied.\n\nSo, for starters, I compiled the whole tree with -Wshadow=local. and I\nsaw many warnings with it. At a glance all of them are reasonably\n\"fixed\" but I don't think it is what we want...\n\nThoughts?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n \n\n\n\n \n\n\n",
"msg_date": "Mon, 11 Jul 2022 18:45:07 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix gcc warning in sync.c (usr/src/backend/storage/sync/sync.c)"
},
{
"msg_contents": "On Mon, Jul 11, 2022 at 9:45 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Mon, 11 Jul 2022 01:45:16 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n> > Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > > At Sat, 9 Jul 2022 21:53:31 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in\n> > >> 528 | entry = (PendingUnlinkEntry *) lfirst(cell);\n> >\n> > > Actually, I already see the following line (maybe) at the place instead.\n> > >> PendingUnlinkEntry *entry = (PendingUnlinkEntry *) lfirst(cell);\n> >\n> > Yeah, I see no line matching that in HEAD either.\n\nConfusing report :-)\n\n> > However, I do not much like the code at line 528, because its\n> > \"PendingUnlinkEntry *entry\" is masking an outer variable\n> > \"PendingFsyncEntry *entry\" from line 513. We should rename\n> > one or both variables to avoid that masking.\n\nFair point.\n\n> I thought the same at the moment looking this. In this case, changing\n> entry->syncent, unl(del)lent works. But at the same time I don't think\n> that can be strictly applied.\n\nYeah, let's rename both of them. Done.\n\n> So, for starters, I compiled the whole tree with -Wshadow=local. and I\n> saw many warnings with it. At a glance all of them are reasonably\n> \"fixed\" but I don't think it is what we want...\n\nWow, yeah.\n\n\n",
"msg_date": "Fri, 15 Jul 2022 00:11:15 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix gcc warning in sync.c (usr/src/backend/storage/sync/sync.c)"
},
{
"msg_contents": "On 2022-Jul-15, Thomas Munro wrote:\n\n> On Mon, Jul 11, 2022 at 9:45 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n\n> > So, for starters, I compiled the whole tree with -Wshadow=local. and I\n> > saw many warnings with it. At a glance all of them are reasonably\n> > \"fixed\" but I don't think it is what we want...\n> \n> Wow, yeah.\n\nPrevious threads on this topic:\n\nhttps://postgr.es/m/MN2PR18MB2927F7B5F690065E1194B258E35D0@MN2PR18MB2927.namprd18.prod.outlook.com\nhttps://postgr.es/m/CAApHDvpqBR7u9yzW4yggjG=QfN=FZsc8Wo2ckokpQtif-+iQ2A@mail.gmail.com\nhttps://postgr.es/m/877k1psmpf.fsf@mailbox.samurai.com\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nThou shalt study thy libraries and strive not to reinvent them without\ncause, that thy code may be short and readable and thy days pleasant\nand productive. (7th Commandment for C Programmers)\n\n\n",
"msg_date": "Thu, 21 Jul 2022 20:01:43 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Fix gcc warning in sync.c (usr/src/backend/storage/sync/sync.c)"
}
]
[
{
"msg_contents": "Hello,\n\nI wonder how much dead code for ancient operating systems we could now\ndrop. Here are some easier cases, I think, and one tricky one that\nmight take some debate. I think it makes a lot of sense to say that\nwe expect at least POSIX.1-2001, because that corresponds to C99, with\nthe thread option because every targeted system has that.\n\n0001-Remove-dead-pg_pread-and-pg_pwrite-replacement-code.patch\n0002-Remove-dead-getrusage-replacement-code.patch\n0003-Remove-dead-setenv-unsetenv-replacement-code.patch\n0004-Remove-dead-handling-for-pre-POSIX-sigwait.patch\n0005-Remove-dead-getpwuid_r-replacement-code.patch\n0006-Remove-disable-thread-safety.patch\n\nClearly there is more stuff like this (eg more _r functions, they're\njust a touch more complicated), but this is a start. I mention these\nnow in case it's helpful for the Meson work, and just generally\nbecause I wanted to clean up after the retirement of ancient HP-UX.\nThe threads patch probably needs more polish and is extracted from\nanother series I'll propose in a later CF to do some more constructive\nwork on threads, where it'd be helpful not to have to deal with 'no\nthreads' builds, but I figured I could also pitch this part along\nwith the other basic POSIX modernisation stuff.\n\nI pulled the configure output from the oldest releases of each\nsupported target OS, namely:\n\n * hornet, AIX 7.1\n * wrasse, Solaris 11.3\n * pollock, illumos rolling\n * loach, FreeBSD 12.2\n * conchuela, DragonflyBSD 6.0\n * morepork, OpenBSD 6.9\n * sidewinder, NetBSD 9.2\n * prairiedog, macOS 10.4 (vintage system most likely to cause problems)\n * clam, Linux 3.10/RHEL 7\n * fairywren, Windows/Mingw with configure\n\nI checked for HAVE_PREAD HAVE_PWRITE HAVE_GETRUSAGE HAVE_SETENV\nHAVE_UNSETENV HAVE_GETPWUID_R, and the only missing ones were:\n\nHAVE_PREAD is missing on windows\nHAVE_PWRITE is missing on windows\nHAVE_GETRUSAGE is missing on windows\nHAVE_GETPWUID_R is missing on windows\n\nWe either have completely separate code paths or replacement functions\nfor these.\n\nThe pwritev/preadv functions are unfortunately not standardised by\nPOSIX (I dunno why, it's the obvious combination of the p* and *v\nfunctions) despite every OS in the list having them except for Solaris\nand old macOS. Oh well.",
"msg_date": "Sun, 10 Jul 2022 13:45:23 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Cleaning up historical portability baggage"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I wonder how much dead code for ancient operating systems we could now\n> drop.\n\n+1, it seems like this is the cycle for some housecleaning.\n\n> * prairiedog, macOS 10.4 (vintage system most likely to cause problems)\n\nFWIW, I am expecting to retire prairiedog once the meson stuff drops.\nmacOS 10.4 is incapable of running ninja (for lack of <spawn.h>).\nWhile I could keep it working for awhile with the autoconf build system,\nI'm not sure I see the point. The hardware is still chugging along,\nbut I think it'd be more useful to run up-to-date NetBSD or the like\non it.\n\nHaving said that, I'll be happy to try out this patch series on\nthat platform and see if it burps.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 09 Jul 2022 23:00:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "I wrote:\n> Having said that, I'll be happy to try out this patch series on\n> that platform and see if it burps.\n\nHEAD + patches 0001-0006 seems fine on prairiedog's host.\nBuilds clean (or as clean as HEAD does anyway), passes make check.\nI did not trouble with check-world.\n\n(I haven't actually read the patches, so this isn't a review,\njust a quick smoke-test.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 10 Jul 2022 10:36:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Sat, 9 Jul 2022 at 21:46, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> Hello,\n>\n> I wonder how much dead code for ancient operating systems we could now\n> drop.\n\n> 0002-Remove-dead-getrusage-replacement-code.patch\n\nI thought the getrusage replacement code was for Windows. Does\ngetrusage on Windows actually do anything useful?\n\nMore generally I think there is a question about whether some of these\nthings are \"supported\" in only a minimal way to satisfy standards but\nmaybe not in a way that we actually want to use. Getrusage might exist\non Windows but not actually report the metrics we need, reentrant\nlibrary functions may be implemented by simply locking instead of\nactually avoiding static storage, etc.\n\n-- \ngreg\n\n\n",
"msg_date": "Sun, 10 Jul 2022 11:25:40 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "(Reading the patch it seems both those points are already addressed)\n\n\n",
"msg_date": "Sun, 10 Jul 2022 11:32:55 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Sat, Jul 9, 2022 at 9:46 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> The pwritev/preadv functions are unfortunately not standardised by\n> POSIX (I dunno why, it's the obvious combination of the p* and *v\n> functions) despite every OS in the list having them except for Solaris\n> and old macOS. Oh well.\n\nI don't think that 0001 is buying us a whole lot, really. I prefer the\nstyle where we have PG-specific functions that behave differently on\ndifferent platforms to the one where we call something that looks like\na native OS function call on all platforms but on some of them it is\nsecretly invoking a replacement implementation in src/port. The\nproblem with the latter is it looks like you're using something that's\nuniversally supported and works the same way everywhere, but you're\nreally not. If it were up to me, we'd have more pg_whatever() that\ncalls whatever() on non-Windows and something else on Windows, rather\nthan going in the direction that this patch takes us.\n\nI like all of the other patches. Reducing the number of configure\ntests that we need seems like a really good idea.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 11 Jul 2022 12:46:01 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Tue, Jul 12, 2022 at 4:46 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I don't think that 0001 is buying us a whole lot, really. I prefer the\n> style where we have PG-specific functions that behave differently on\n> different platforms to the one where we call something that looks like\n> a native OS function call on all platforms but on some of them it is\n> secretly invoking a replacement implementation in src/port. The\n> problem with the latter is it looks like you're using something that's\n> universally supported and works the same way everywhere, but you're\n> really not. If it were up to me, we'd have more pg_whatever() that\n> calls whatever() on non-Windows and something else on Windows, rather\n> than going in the direction that this patch takes us.\n\nHmm, but that's not what we're doing in general. For example, on\nWindows we're redirecting open() to a replacement function of our own,\nwe're not using \"pg_open()\" in our code. That's not an example based\non AC_REPLACE_FUNCS, but there are plenty of those too. Isn't this\nquite well established?\n\nAFAIK we generally only use pg_whatever() when there's a good reason,\nsuch as an incompatibility, a complication or a different abstraction\nthat you want to highlight to a reader. The reason here was\ntemporary: we couldn't implement standard pread/pwrite perfectly on\nancient HP-UX, but we *can* implement it on Windows, so the reason is\ngone.\n\nThese particular pg_ prefixes have only been in our tree for a few\nyears and I was hoping to boot them out again before they stick, like\n\"Size\". I like using standard interfaces where possible for the very\nbasic stuff, to de-weird our stuff.\n\n> I like all of the other patches. Reducing the number of configure\n> tests that we need seems like a really good idea.\n\nThanks for looking. Yeah, we could also be a little more aggressive\nabout removing configure tests, in the cases where it's just Windows\nvs !Windows. 
\"HAVE_XXX\" tests that are always true on POSIX systems\nat the level we require would then be unnecessary.\n\n\n",
"msg_date": "Tue, 12 Jul 2022 13:10:29 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On 12.07.22 03:10, Thomas Munro wrote:\n> AFAIK we generally only use pg_whatever() when there's a good reason,\n> such as an incompatibility, a complication or a different abstraction\n> that you want to highlight to a reader. The reason here was\n> temporary: we couldn't implement standard pread/pwrite perfectly on\n> ancient HP-UX, but we *can* implement it on Windows, so the reason is\n> gone.\n> \n> These particular pg_ prefixes have only been in our tree for a few\n> years and I was hoping to boot them out again before they stick, like\n> \"Size\". I like using standard interfaces where possible for the very\n> basic stuff, to de-weird our stuff.\n\nI agree. That's been the established approach.\n\n\n",
"msg_date": "Tue, 12 Jul 2022 11:26:46 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Mon, Jul 11, 2022 at 9:11 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Hmm, but that's not what we're doing in general. For example, on\n> Windows we're redirecting open() to a replacement function of our own,\n> we're not using \"pg_open()\" in our code. That's not an example based\n> on AC_REPLACE_FUNCS, but there are plenty of those too. Isn't this\n> quite well established?\n\nYes. I just don't care for it.\n\nSounds like I'm in the minority, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 12 Jul 2022 08:01:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jul 11, 2022 at 9:11 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Hmm, but that's not what we're doing in general. For example, on\n>> Windows we're redirecting open() to a replacement function of our own,\n>> we're not using \"pg_open()\" in our code. That's not an example based\n>> on AC_REPLACE_FUNCS, but there are plenty of those too. Isn't this\n>> quite well established?\n\n> Yes. I just don't care for it.\n> Sounds like I'm in the minority, though.\n\nI concur with your point that it's not great to use the standard name\nfor a function that doesn't have exactly the standard semantics.\nBut if it does, using a nonstandard name is not better. It's just one\nmore thing that readers of our code have to learn about.\n\nNote that \"exactly\" only needs to mean \"satisfies all the promises\nmade by POSIX\". If some caller is depending on behavior details not\nspecified in the standard, that's the caller's bug not the wrapper\nfunction's. Otherwise, yeah, we couldn't ever be sure whether a\nwrapper function is close enough.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Jul 2022 10:09:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-12 08:01:40 -0400, Robert Haas wrote:\n> On Mon, Jul 11, 2022 at 9:11 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Hmm, but that's not what we're doing in general. For example, on\n> > Windows we're redirecting open() to a replacement function of our own,\n> > we're not using \"pg_open()\" in our code. That's not an example based\n> > on AC_REPLACE_FUNCS, but there are plenty of those too. Isn't this\n> > quite well established?\n> \n> Yes. I just don't care for it.\n> \n> Sounds like I'm in the minority, though.\n\nI agree with you, at least largely.\n\nRedefining functions, be it by linking in something or by redefining function\nnames via macros, is a mess. There's places where we then have to undefine\nsome of these things to be able to include external headers etc. Some\nfunctions are only replaced in backends, others in frontend too. It makes it\nhard to know what exactly the assumed set of platform primitives is. Etc.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 12 Jul 2022 10:33:52 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Redefining functions, be it by linking in something or by redefining function\n> names via macros, is a mess. There's places where we then have to undefine\n> some of these things to be able to include external headers etc. Some\n> functions are only replaced in backends, others in frontend too. It makes it\n> hard to know what exactly the assumed set of platform primitives is. Etc.\n\nIn the cases at hand, we aren't doing that, are we? The replacement\nfunction is only used on platforms that lack the relevant POSIX function,\nso it's hard to argue that we're replacing anything.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Jul 2022 13:54:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "I have committed the first few:\n\n * \"Remove dead getrusage replacement code.\"\n * \"Remove dead handling for pre-POSIX sigwait().\"\n * \"Remove dead getpwuid_r replacement code.\"\n\nHere are some more, a couple of which I posted before but I've now\ngone a bit further with them in terms of removing configure checks\netc:\n\n * \"Remove dlopen configure probe.\"\n * \"Remove configure probe and extra tests for getrlimit.\"\n * \"Remove configure probe for shm_open.\"\n * \"Remove configure probe for setsid.\"\n * \"Remove configure probes for readlink, and dead code and docs.\"\n * \"Remove configure probe for symlink, and dead code.\"\n * \"Remove configure probe for link.\"\n * \"Remove dead replacement code for clock_gettime().\"\n * \"Remove configure probes for poll and poll.h.\"\n * \"Remove dead setenv, unsetenv replacement code.\"\n * \"Remove dead pread and pwrite replacement code.\"\n * \"Simplify replacement code for preadv and pwritev.\"\n * \"Remove --disable-thread-safety.\"\n\nSome of these depend on SUSv2 options (not just \"base\"), but we\nalready do that (fsync, ...) and they're all features that are by now\nubiquitous, which means the fallback code is untested and the probes\nare pointless.\n\n<archeology-mode>I'd guess the last system we ran on that didn't have\nsymlinks would have been SVr3-based SCO, HP-UX, DG/UX etc from the\n1980s, since they were invented in 4.2BSD in 1983 and adopted by SVr4\nin 1988. 
The RLIMIT_OFILE stuff seems to be referring to 1BSD or 2BSD\non a PDP, whereas RLIMIT_NOFILE was already used in 4.3BSD in 1986,\nwhich I'd have guessed would be the oldest OS POSTGRES ever actually\nran on, so that must have been cargo culting from something older?</>\n\nThe clock_gettime() one only becomes committable once prairiedog's\nhost switched to NetBSD, so that'll be committed at the same time as\nthe fdatasync one from a nearby thread.\n\nThe setenv/unsetenv one levels up to SUSv3 (= POSIX issue 6, 2001).\nThat'd be the first time we don't point at SUSv2 (= POSIX issue 5,\n199x) to justify a change like this.\n\nI expect there to be further clean-up after the removal of\n--disable-thread-safety.",
"msg_date": "Sun, 24 Jul 2022 10:39:27 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Some of these depend on SUSv2 options (not just \"base\"), but we\n> already do that (fsync, ...) and they're all features that are by now\n> ubiquitous, which means the fallback code is untested and the probes\n> are pointless.\n\nReading this, it occurred to me that it'd be interesting to scrape\nall of the latest configure results from the buildfarm, and see which\ntests actually produce more than one answer among the set of tested\nplatforms. Those that don't could be targets for further simplification,\nor else an indicator that we'd better go find some more animals.\n\nBefore I go off and do that, though, I wonder if you already did.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 23 Jul 2022 19:11:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Sun, Jul 24, 2022 at 11:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Some of these depend on SUSv2 options (not just \"base\"), but we\n> > already do that (fsync, ...) and they're all features that are by now\n> > ubiquitous, which means the fallback code is untested and the probes\n> > are pointless.\n>\n> Reading this, it occurred to me that it'd be interesting to scrape\n> all of the latest configure results from the buildfarm, and see which\n> tests actually produce more than one answer among the set of tested\n> platforms. Those that don't could be targets for further simplification,\n> or else an indicator that we'd better go find some more animals.\n>\n> Before I go off and do that, though, I wonder if you already did.\n\nYeah, here are the macros I scraped yesterday, considering the latest\nresults from machines that did something in the past week.",
"msg_date": "Sun, 24 Jul 2022 11:24:26 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Here are some more, a couple of which I posted before but I've now\n> gone a bit further with them in terms of removing configure checks\n> etc:\n\nAfter looking through these briefly, I'm pretty concerned about\nwhether this won't break our Cygwin build in significant ways.\nFor example, lorikeet reports \"HAVE_SETSID 1\", a condition that\nyou want to replace with !WIN32. The question here is whether\nor not WIN32 is defined in a Cygwin build. I see some places\nin our code that believe it is not, but others that believe that\nit is --- and the former ones are mostly like\n\t#if defined(__CYGWIN__) || defined(WIN32)\nwhich means they wouldn't actually fail if they are wrong about that.\n\nMore generally, I'm not exactly convinced that changes like\nthis are a readability improvement:\n\n-#ifdef HAVE_SETSID\n+#ifndef WIN32\n\nI'd rather not have the code cluttered with a sea of\nindistinguishable \"#ifndef WIN32\" tests when some of them could be\nmore specific and more mnemonic. So I think we'd be better off\nleaving that as-is. I don't mind nuking the configure-time test\nand hard-wiring \"#define HAVE_SETSID 1\" somewhere, but changing\nthe code's #if tests doesn't seem to bring any advantage.\n\nSpecific to 0001, I don't especially like what you did to\nsrc/port/dlopen.c. The original intent (and reality until\nnot so long ago) was that that would be a container for\nvarious dlopen replacements. Well, okay, maybe there will\nnever be any more besides Windows, but I think then we should\neither rename the file to (say) win32dlopen.c or move it to\nsrc/backend/port/win32. Likewise for link.c in 0007 and\npread.c et al in 0011. 
(But 0010 is fine, because the\nreplacement code is already handled that way.)\n\nOTOH, 0012 seems to immediately change pread.c et al back\nto NOT being Windows-only, though it's hard to tell for\nsure because the configure support seems all wrong.\nI'm quite confused by those two patches ... are they really\ncorrect?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 23 Jul 2022 20:23:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Sun, Jul 24, 2022 at 12:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> After looking through these briefly, I'm pretty concerned about\n> whether this won't break our Cygwin build in significant ways.\n> For example, lorikeet reports \"HAVE_SETSID 1\", a condition that\n> you want to replace with !WIN32. The question here is whether\n> or not WIN32 is defined in a Cygwin build. I see some places\n> in our code that believe it is not, but others that believe that\n> it is --- and the former ones are mostly like\n> #if defined(__CYGWIN__) || defined(WIN32)\n> which means they wouldn't actually fail if they are wrong about that.\n\nI spent a large chunk of today figuring out how to build PostgreSQL\nunder Cygwin/GCC on CI. My method for answering this question was to\nput the following on the end of 192 .c files that contain the pattern\n/#if.*WIN32/:\n\n+\n+#if defined(WIN32) && defined(__CYGWIN__)\n+#pragma message \"contradiction\"\n+#endif\n\nOnly one of them printed that message: dirmod.c. The reason is that\nit goes out of its way to include Windows headers:\n\n#if defined(WIN32) || defined(__CYGWIN__)\n#ifndef __CYGWIN__\n#include <winioctl.h>\n#else\n#include <windows.h>\n#include <w32api/winioctl.h>\n#endif\n#endif\n\nThe chain <windows.h> -> <windef.h> -> <minwindef.h> leads to WIN32 here:\n\nhttps://github.com/mirror/mingw-w64/blob/master/mingw-w64-headers/include/minwindef.h#L15\n\nI'm left wondering if we should de-confuse matters by ripping out all\nthe checks and comments that assume that this problem is more\nwidespread, and then stick a big notice about it in dirmod.c, to\ncontain this Jekyll/Hyde situation safely inside about 8 feet of\nconcrete.\n\nI'll respond to your other complaints with new patches tomorrow.\n\n\n",
"msg_date": "Tue, 26 Jul 2022 02:35:35 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "\nOn 2022-07-25 Mo 10:35, Thomas Munro wrote:\n> On Sun, Jul 24, 2022 at 12:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> After looking through these briefly, I'm pretty concerned about\n>> whether this won't break our Cygwin build in significant ways.\n>> For example, lorikeet reports \"HAVE_SETSID 1\", a condition that\n>> you want to replace with !WIN32. The question here is whether\n>> or not WIN32 is defined in a Cygwin build. I see some places\n>> in our code that believe it is not, but others that believe that\n>> it is --- and the former ones are mostly like\n>> #if defined(__CYGWIN__) || defined(WIN32)\n>> which means they wouldn't actually fail if they are wrong about that.\n> I spent a large chunk of today figuring out how to build PostgreSQL\n> under Cygwin/GCC on CI. My method for answering this question was to\n> put the following on the end of 192 .c files that contain the pattern\n> /#if.*WIN32/:\n>\n> +\n> +#if defined(WIN32) && defined(__CYGWIN__)\n> +#pragma message \"contradiction\"\n> +#endif\n>\n> Only one of them printed that message: dirmod.c. 
The reason is that\n> it goes out of its way to include Windows headers:\n>\n> #if defined(WIN32) || defined(__CYGWIN__)\n> #ifndef __CYGWIN__\n> #include <winioctl.h>\n> #else\n> #include <windows.h>\n> #include <w32api/winioctl.h>\n> #endif\n> #endif\n>\n> The chain <windows.h> -> <windef.h> -> <minwindef.h> leads to WIN32 here:\n>\n> https://github.com/mirror/mingw-w64/blob/master/mingw-w64-headers/include/minwindef.h#L15\n>\n> I'm left wondering if we should de-confuse matters by ripping out all\n> the checks and comments that assume that this problem is more\n> widespread, and then stick a big notice about it in dirmod.c, to\n> contain this Jekyll/Hide situation safely inside about 8 feet of\n> concrete.\n\n\nClearly it's something we've been aware of before, port.h has:\n\n\n* Note: Some CYGWIN includes might #define WIN32.\n */\n#if defined(WIN32) && !defined(__CYGWIN__)\n#include \"port/win32_port.h\"\n#endif\n\n\nI can test any patches you want on lorikeet.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 25 Jul 2022 11:09:59 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Sun, Jul 24, 2022 at 12:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Here are some more, a couple of which I posted before but I've now\n> > gone a bit further with them in terms of removing configure checks\n> > etc:\n>\n> After looking through these briefly, I'm pretty concerned about\n> whether this won't break our Cygwin build in significant ways.\n> For example, lorikeet reports \"HAVE_SETSID 1\", a condition that\n> you want to replace with !WIN32. The question here is whether\n> or not WIN32 is defined in a Cygwin build. ...\n\nNo, it should not be unless someone screws up and leaks <windows.h>\ninto a header when WIN32 isn't already defined. I've done some\nanalysis and testing of that, and proposed to nail it down a bit and\nremove the confusion created by the inconsistent macro tests, over at\n[1].\n\n> More generally, I'm not exactly convinced that changes like\n> this are a readability improvement:\n>\n> -#ifdef HAVE_SETSID\n> +#ifndef WIN32\n>\n> I'd rather not have the code cluttered with a sea of\n> indistinguishable \"#ifndef WIN32\" tests when some of them could be\n> more specific and more mnemonic. So I think we'd be better off\n> leaving that as-is. I don't mind nuking the configure-time test\n> and hard-wiring \"#define HAVE_SETSID 1\" somewhere, but changing\n> the code's #if tests doesn't seem to bring any advantage.\n\nOK, in this version of the patch series I did this:\n\n1. If it's something that only Unix has, and for Windows we do\nnothing or skip a feature, then I've now hard-wired the macro as you\nsuggested. I put that in port.h. I agree that's a little easier to\ngrok than no-context !defined(WIN32). Examples: HAVE_SETSID,\nHAVE_SHM_OPEN.\n\n2. If it's something that Unix has and we supply a Windows\nreplacements, and we just can't cope without that function, then I\ndidn't bother with a vestigial macro. 
There generally weren't tests\nfor such things already (mostly stuff auto-generated by\nAC_REPLACE_FUNCS). Example: HAVE_LINK.\n\n3. In the special case of symlink() and readlink(), I defined the\nmacros on Unix even though we also have replacements on Windows.\n(Previously we effectively did that for one but not the other...) My\nidea here is that, wherever we are OK using our pseudo-symlinks made\nfrom junction points (ie for tablespaces), then we should just go\nahead and use them without testing. But in just a couple of places\nwhere fully compliant symlinks are clearly expected (ie non-directory\nor relative path, eg tz file or executable symlinks), then the tests\ncan still be used. See also commit message. Does this make sense?\n\n(I also propose to supply S_ISLNK and lstat() for Windows and make\nusage of that stuff unconditional, but I put that in another\nthread[2], as that's new code, and this thread is just about ripping\nold dead stuff out.)\n\n4. If it's something that already had very obvious Windows and Unix\ncode paths, then I didn't bother with a HAVE_XXX macro, because I\nthink it'd be more confusing than just #ifdef WIN32 ...windows stuff\n... #else ...unix stuff... #endif. Example: HAVE_CLOCK_GETTIME.\n\n> Specific to 0001, I don't especially like what you did to\n> src/port/dlopen.c. The original intent (and reality until\n> not so long ago) was that that would be a container for\n> various dlopen replacements. Well, okay, maybe there will\n> never be any more besides Windows, but I think then we should\n> either rename the file to (say) win32dlopen.c or move it to\n> src/backend/port/win32. Likewise for link.c in 0007 and\n> pread.c et al in 0011. 
(But 0010 is fine, because the\n> replacement code is already handled that way.)\n\nAgreed on the file names win32dlopen.c, win32link.c, win32pread.c,\nwin32pwrite.c, and done.\n\nAnother characteristic of other Windows-only replacement code is that\nit's called pgwin32_THING and then a macro replaces THING() with\npgwin32_THING(). I guess I should do that too, for consistency, and\nmove relevant declarations into win32_port.h? Done.\n\nThere are clearly many other candidates for X.c -> win32X.c renamings\nby the same only-for-Windows argument, but I haven't touched those (at\nleast dirent.c, dirmod.c, gettimeofday.c, kill.c, open.c, system.c).\n\nI'll also include the fdatasync configure change here (moved from\nanother thread). Now it also renames fdatasync.c -> win32datasync.c.\nErm, but I didn't add the pgwin32_ prefix to the function name,\nbecause it shares a function declaration with macOS in c.h.\n\n> OTOH, 0012 seems to immediately change pread.c et al back\n> to NOT being Windows-only, though it's hard to tell for\n> sure because the configure support seems all wrong.\n> I'm quite confused by those two patches ... are they really\n> correct?\n\nThe 0012 patch (now 0011 in v2) is about the variants with -v on the\nend. The patches are as I intended. I've now put a longer\nexplanation into the commit message, but here's a short recap:\n\npread()/pwrite() replacements (without 'v' for vector) are now needed\nonly by Windows. HP-UX < 11 was the last Unix system I knew of\nwithout these functions. That makes sense, as I think they were\nrelated to the final POSIX threading push (multi-threaded programs\nwant to be able to skip file position side-effects), which HP-UX 10.x\npredated slightly. 
Gaur's retirement unblocked this cleanup.\n\npreadv()/pwritev() replacements (with 'v' for vector) are needed by\nSolaris, macOS < 11 and Windows, and will likely be required for a\nlong time, because these functions still haven't been standardised.\nMy goal is to make our replacement code side-effect free, thread-safe,\nin line with the de facto standard/convention seen on Linux, *BSD,\nmacOS, AIX, illumos.\n\nNote that I have some better vector I/O code for Windows to propose a\nbit later, so the real effect of this choice will be to drop true\nvector I/O on Solaris, until such time as they get around to providing\nthe modern interface that almost every other Unix managed to agree on.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKG%2Be13wK0PBX5Z63CCwWm7MfRQuwBRabM_3aKWSko2AUww%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/CA+hUKGLfOOeyZpm5ByVcAt7x5Pn-=xGRNCvgiUPVVzjFLtnY0w@mail.gmail.com",
"msg_date": "Tue, 2 Aug 2022 12:18:07 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
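[Editorial note: the pgwin32_THING convention discussed in the message above pairs a prefixed replacement function with a macro that substitutes it for the standard name. A minimal, hypothetical sketch of the pattern follows; do_thing() and its body are invented for illustration and are not PostgreSQL code.]

```c
#include <stddef.h>

/*
 * Hypothetical sketch of the pgwin32_THING pattern: the Windows-only
 * replacement carries a pgwin32_ prefix, and a macro (kept in
 * win32_port.h in the real tree) maps the standard name onto it.
 * do_thing() is an invented stand-in; a real replacement would call
 * Win32 APIs here.
 */
static int
pgwin32_do_thing(const char *arg)
{
	return (arg != NULL) ? 0 : -1;
}

#define do_thing(a) pgwin32_do_thing(a)
```

Callers keep writing plain do_thing(); only the build system decides whether the replacement object file is linked in.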
{
"msg_contents": "Sorry, I know this is really tedious stuff, but I found a couple more\ncleanup opportunities nearby:\n\nFor dlopen(), we don't have to worry about old Unix systems without\nRTLD_NOW and RTLD_GLOBAL macros anymore. They're in SUSv2 and present\non all our BF Unixes, so that's some more configure probes that we can\nremove. Also, we might as well move the declarations for everything\nrelating to dlopen into win32_port.h.\n\n(Why some WIN32 things are done in port.h while others are done in\nwin32_port.h is something I don't grok; probably depends whether there\nwere ever non-Windows systems that needed something... I might propose\nto tidy that up some more, later...)\n\nFor setenv()/unsetenv(), I removed the declarations from port.h. Only\nthe ones in win32_port.h are needed now.\n\nI fixed a couple of places where I'd renamed a file but forgotten to\nupdate that IDENTIFICATION section from the CVS days, and a stray 2021\ncopyright year.\n\nIt'd be good to find a new home for pg_get_user_name() and\npg_get_user_home_dir(), which really shouldn't be left in the now\nbogusly named src/port/thread.c. Any suggestions?\n\nWas it a mistake to add pgwin32_ prefixes to some Windows replacement\nfunctions? It seems a little arbitrary that we do that sometimes even\nthough we don't need to. Perhaps we should only do that kind of thing\nwhen we want to avoid name-clash with a real Windows function that\nwe're replacing?\n\nI'd like to push these soon, if there are no further objections. If\nyou prefer, I could hold back on the two that will break prairiedog\nuntil you give the word, namely clock_gettime() and fdatasync().",
"msg_date": "Wed, 3 Aug 2022 14:25:01 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Hi,\n\nAnother potential cleanup is the fallback for strtoll/strtoull. Some of the\nspellings were introduced because of \"ancient HPUX\":\n\ncommit 06f66cff9e0b93db81db1595156b2aff8ba1786e\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: 2018-05-19 14:22:18 -0400\n\n Support platforms where strtoll/strtoull are spelled __strtoll/__strtoull.\n\n Ancient HPUX, for one, does this. We hadn't noticed due to the lack\n of regression tests that required a working strtoll.\n\n (I was slightly tempted to remove the other historical spelling,\n strto[u]q, since it seems we have no buildfarm members testing that case.\n But I refrained.)\n\n Discussion: https://postgr.es/m/151935568942.1461.14623890240535309745@wrigleys.postgresql.org\n\nand some much longer ago:\n\ncommit 9394d391b803c55281879721ea393a50df4a0be6\nAuthor: Peter Eisentraut <peter_e@gmx.net>\nDate: 2000-11-20 15:56:14 +0000\n\n Add configure checks for strtoll, strtoull (or strto[u]q). Disable\n 'long long int' portions of ecpg if the type or these functions don't\n exist.\n\nsince strtoq, strtouq apparently were already obsolete in 2018, and hpux is\nnow obsolete...\n\n\nI only noticed this because I'd ported the configure check a bit naively,\nwithout the break in the if-found case, and was looking into why HAVE_STRTOQ,\nHAVE_STRTOUQ were defined with meson, but not autoconf...\nAC_CHECK_FUNCS([strtoll __strtoll strtoq], [break])\nAC_CHECK_FUNCS([strtoull __strtoull strtouq], [break])\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 3 Aug 2022 18:35:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Another potential cleanup is the fallback for strtoll/strtoull.\n\n+1, I suspect the alternate spellings are dead.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Aug 2022 21:52:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-03 21:52:04 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Another potential cleanup is the fallback for strtoll/strtoull.\n>\n> +1, I suspect the alternate spellings are dead.\n\nLooks like that includes systems where there's no declaration for strtoll,\nstrtoull. The test was introduced in\n\ncommit a6228128fc48c222953dfd41fd438522a184054c\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: 2018-05-18 22:42:10 -0400\n\n Arrange to supply declarations for strtoll/strtoull if needed.\n\nThe check was introduced for animal dromedary, afaics. Looks like that stopped\nreporting 2019-09-27 and transformed into florican.\n\nA query on the bf database didn't see any runs in the last 30 days that didn't\nhave strtoll declared.\n\nSee attached patch.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 3 Aug 2022 19:30:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Hi,\n\nCan we get a few more of these committed soon? It's all tests that I need to\nsync with the meson stuff, and I'd rather get it over with :). And it reduces\nthe set of tests that need to be compared... Or is there a blocker (leaving\nthe prairiedog one aside)?\n\n\nOn 2022-08-03 14:25:01 +1200, Thomas Munro wrote:\n> Subject: [PATCH v3 01/13] Remove configure probe for dlopen.\n> Subject: [PATCH v3 02/13] Remove configure probe and extra tests for\n> getrlimit.\n\nLGTM.\n\n\n> From 96a4935ff9480c2786634e9892b1f44782b403fb Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <thomas.munro@gmail.com>\n> Date: Sat, 23 Jul 2022 23:49:27 +1200\n> Subject: [PATCH v3 03/13] Remove configure probe for shm_open.\n>\n> shm_open() is in SUSv2 (realtime) and all targeted Unix systems have it.\n>\n> We retain a HAVE_SHM_OPEN macro, because it's clearer to readers than\n> something like !defined(WIN32).\n\nI don't like these. I don't find them clearer - if we really just assume this\nto be the case on windows, it's easier to understand the checks if they talk\nabout windows rather than having to know whether this specific check just\napplies to windows or potentially an unspecified separate set of systems.\n\nBut I guess I should complain upthread...\n\n\n> Subject: [PATCH v3 04/13] Remove configure probe for setsid.\n\nLGTM.\n\n\n\n> Subject: [PATCH v3 05/13] Remove configure probes for symlink/readlink, and\n> dead code.\n\nNice win.\n\n>\n\n> From 143f6917bbc7d8f457d52d02a5fbc79d849744e1 Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <thomas.munro@gmail.com>\n> Date: Sun, 24 Jul 2022 01:19:05 +1200\n\n> Subject: [PATCH v3 06/13] Remove configure probe for link.\n> --- a/src/include/port.h\n> +++ b/src/include/port.h\n> @@ -402,7 +402,8 @@ extern float pg_strtof(const char *nptr, char **endptr);\n> #define strtof(a,b) (pg_strtof((a),(b)))\n> #endif\n>\n> -#ifndef HAVE_LINK\n> +#ifdef WIN32\n> +/* src/port/win32link.c */\n> extern int\tlink(const char *src, 
const char *dst);\n> #endif\n\nIt bothers me that we have all this windows crap in port.h instead of\nwin32_port.h. But that's not this patch's fault.\n\n\n> Subject: [PATCH v3 07/13] Remove dead replacement code for clock_gettime().\n\nNice.\n\n\n> XXX This can only be committed once prairedog is decommissioned, because\n> macOS 10.4 didn't have clock_gettime().\n\nMaybe put it later in the queue?\n\n\n> Subject: [PATCH v3 08/13] Remove configure probes for poll and poll.h.\n>\n> poll() and <poll.h> are in SUSv2 and all targeted Unix systems have\n> them.\n>\n> Retain HAVE_POLL and HAVE_POLL_H macros for readability. There's an\n> error in latch.c that is now unreachable (since logically always have\n> one of WIN32 or HAVE_POLL defined), but that falls out of a decision to\n> keep using defined(HAVE_POLL) instead of !defined(WIN32) to guard the\n> poll() code.\n\nWonder if we instead should add an empty poll.h to src/include/port/win32?\n\n\n> Subject: [PATCH v3 09/13] Remove dead setenv, unsetenv replacement code.\n> Subject: [PATCH v3 10/13] Remove dead pread and pwrite replacement code.\n\nLGTM.\n\n\n> Subject: [PATCH v3 11/13] Simplify replacement code for preadv and pwritev.\n>\n> preadv() and pwritev() are not standardized by POSIX. 
Most targeted\n> Unix systems have had them for more than a decade, since they are\n> obvious combinations of standard p- and -v functions.\n>\n> In 15, we had two replacement implementations: one based on lseek() + -v\n> function if available, and the other based on a loop over p- function.\n> They aren't used for much yet, but are heavily used in a current\n> proposal.\n>\n> Supporting two ways of falling back, at the cost of having a\n> pg_preadv/pg_pwritev that could never be used in a multi-threaded\n> program accessing the same file descriptor from two threads without\n> unpleasant locking does not sound like a good trade.\n>\n> Therefore, drop the lseek()-based variant, and also the pg_ prefix, now\n> that the file position portability hazard is gone. Previously, both\n> fallbacks had the file position portability hazard, because our\n> pread()/pwrite() replacement had the same hazard, but that problem has\n> been fixed for pread()/pwrite() by an earlier commit. Now the way is\n> clear to expunge the file position portability hazard of the\n> lseek()-based variants too.\n>\n> At the time of writing, the following systems in our build farm lack\n> native preadv/pwritev and thus use fallback code:\n>\n> * Solaris (but not illumos)\n> * macOS before release 11.0\n> * Windows with Cygwin\n> * Windows native\n>\n> With this commit, all of the above systems will now use the *same*\n> fallback code, the one that loops over pread()/pwrite() (which is\n> translated to equivalent calls in Windows). 
Previously, all but Windows\n> native would use the readv()/writev()-based fallback that this commit\n> removes.\n\nGiven that it's just solaris and old macOS that \"benefited\" from writev, just\nusing the \"full\" fallback there makes sense.\n\n\n> Subject: [PATCH v3 12/13] Remove fdatasync configure probe.\n\n> @@ -1928,7 +1925,6 @@ if test \"$PORTNAME\" = \"win32\"; then\n> AC_CHECK_FUNCS(_configthreadlocale)\n> AC_REPLACE_FUNCS(gettimeofday)\n> AC_LIBOBJ(dirmod)\n> - AC_LIBOBJ(fdatasync)\n> AC_LIBOBJ(getrusage)\n> AC_LIBOBJ(kill)\n> AC_LIBOBJ(open)\n> @@ -1936,6 +1932,7 @@ if test \"$PORTNAME\" = \"win32\"; then\n> AC_LIBOBJ(win32dlopen)\n> AC_LIBOBJ(win32env)\n> AC_LIBOBJ(win32error)\n> + AC_LIBOBJ(win32fdatasync)\n> AC_LIBOBJ(win32link)\n> AC_LIBOBJ(win32ntdll)\n> AC_LIBOBJ(win32pread)\n\nI like that we might get away from all those \"generically\" named libobjs that\nare hardcoded to be used only on windows...\n\n\n\n> From 5bda430998d502a7f8a6fe662a56b63ac374c925 Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <thomas.munro@gmail.com>\n> Date: Fri, 26 Mar 2021 22:58:06 +1300\n> Subject: [PATCH v3 13/13] Remove --disable-thread-safety.\n>\n> Threads are in SUSv2 and all targeted Unix systems have the option.\n> There are no known Unix systems that don't choose to implement the\n> threading option, and we're no longer testing such builds.\n>\n> Future work to improve our use of threads will be simplified by not\n> having to cope with a no-threads build option.\n\nYeha.\n\n\n> AC_CHECK_HEADER(pthread.h, [], [AC_MSG_ERROR([\n> -pthread.h not found; use --disable-thread-safety to disable thread safety])])\n> +pthread.h not found])])\n\nIs this really needed after these changes? 
We probably can't get away from\nAX_PTHREAD just yet, but we should be able to rely on pthread.h?\n\n\n> diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in\n> index 2a0d08d10f..a4ef016c06 100644\n> --- a/src/include/pg_config.h.in\n> +++ b/src/include/pg_config.h.in\n> @@ -51,10 +51,6 @@\n> /* Define to 1 if you want National Language Support. (--enable-nls) */\n> #undef ENABLE_NLS\n>\n> -/* Define to 1 to build client libraries as thread-safe code.\n> - (--enable-thread-safety) */\n> -#undef ENABLE_THREAD_SAFETY\n> -\n\nMight be worth grepping around whether there are extensions that reference\nENABLE_THREAD_SAFETY (e.g. to then error out if not available). If common\nenough we could decide to keep it, given that it's pretty reasonable for code\nto want to know that across versions?\n\n\n> # Add libraries that libpq depends (or might depend) on into the\n> diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c\n> index 49a1c626f6..3504ab2c34 100644\n> --- a/src/interfaces/libpq/fe-auth.c\n> +++ b/src/interfaces/libpq/fe-auth.c\n> @@ -1116,11 +1116,10 @@ pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage)\n> #endif\n>\n> \t/*\n> -\t * Some users are using configure --enable-thread-safety-force, so we\n> -\t * might as well do the locking within our library to protect getpwuid().\n> -\t * In fact, application developers can use getpwuid() in their application\n> -\t * if they use the locking call we provide, or install their own locking\n> -\t * function using PQregisterThreadLock().\n> +\t * We do the locking within our library to protect getpwuid(). 
Application\n> +\t * developers can use getpwuid() in their application if they use the\n> +\t * locking call we provide, or install their own locking function using\n> +\t * PQregisterThreadLock().\n> \t */\n> \tpglock_thread();\n\nProbably worth using getpwuid_r where available - might even be everywhere\n(except windows of course, but GetUserName() is threadsafe).\n\n> --- a/src/port/getaddrinfo.c\n> +++ b/src/port/getaddrinfo.c\n> @@ -414,7 +414,7 @@ pqGethostbyname(const char *name,\n> \t\t\t\tstruct hostent **result,\n> \t\t\t\tint *herrno)\n> {\n> -#if defined(ENABLE_THREAD_SAFETY) && defined(HAVE_GETHOSTBYNAME_R)\n> +#if defined(HAVE_GETHOSTBYNAME_R)\n> \t/*\n> \t * broken (well early POSIX draft) gethostbyname_r() which returns 'struct\n\nDepressingly there's still plenty systems without gethostbyname_r() :(\n\ncurculio, gombessa, longfin, lorikeet, morepork, pollock, sifaka, wrasse ...\n\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 3 Aug 2022 20:43:43 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
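[Editorial note: to make the surviving fallback discussed for patch 11 concrete, a preadv() emulation that loops over pread() looks roughly like the sketch below. The function name and error handling are illustrative simplifications, not the exact code in src/port/.]

```c
#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>

/*
 * Illustrative sketch of a preadv() fallback built on pread().  Unlike an
 * lseek()-based emulation, it never moves the file position, so threads
 * sharing a descriptor are unaffected.  On a short read it stops early,
 * mirroring readv() semantics.
 */
static ssize_t
sketch_preadv(int fd, const struct iovec *iov, int iovcnt, off_t offset)
{
	ssize_t		sum = 0;

	for (int i = 0; i < iovcnt; i++)
	{
		ssize_t		part = pread(fd, iov[i].iov_base, iov[i].iov_len, offset);

		if (part < 0)
			return (i == 0) ? -1 : sum; /* report bytes already transferred */
		sum += part;
		offset += part;
		if ((size_t) part < iov[i].iov_len)
			break;				/* short read: stop, as readv() would */
	}
	return sum;
}
```

A pwritev() fallback is symmetrical, looping over pwrite().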
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n>> XXX This can only be committed once prairedog is decommissioned, because\n>> macOS 10.4 didn't have clock_gettime().\n\n> Maybe put it later in the queue?\n\nclock_gettime is required by SUSv2 (1997), so I have to admit that\nmacOS 10.4 doesn't have a lot of excuse not to have it. In any case,\nprairiedog is just sitting there doing its thing until I find cycles\nto install a newer OS. If you want to move ahead with this, don't\nlet prairiedog block you.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Aug 2022 00:09:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 4:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> >> XXX This can only be committed once prairedog is decommissioned, because\n> >> macOS 10.4 didn't have clock_gettime().\n>\n> > Maybe put it later in the queue?\n>\n> clock_gettime is required by SUSv2 (1997), so I have to admit that\n> macOS 10.4 doesn't have a lot of excuse not to have it. In any case,\n> prairiedog is just sitting there doing its thing until I find cycles\n> to install a newer OS. If you want to move ahead with this, don't\n> let prairiedog block you.\n\nThanks, will do. Just having an argument with MSYS about something I\nseem to have messed up in the most recent version, and then I'll start\npushing these...\n\n\n",
"msg_date": "Thu, 4 Aug 2022 16:18:40 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 3:43 PM Andres Freund <andres@anarazel.de> wrote:\n> > We retain a HAVE_SHM_OPEN macro, because it's clearer to readers than\n> > something like !defined(WIN32).\n>\n> I don't like these. I don't find them clearer - if we really just assume this\n> to be the case on windows, it's easier to understand the checks if they talk\n> about windows rather than having to know whether this specific check just\n> applies to windows or potentially an unspecified separate set of systems.\n>\n> But I guess I should complain upthread...\n\nThanks for reviewing.\n\nFor this point, I'm planning to commit with those \"vestigial\" macros\nthat Tom asked for, and then we can argue about removing them\nseparately later.\n\n\n",
"msg_date": "Thu, 4 Aug 2022 16:30:26 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "I've now pushed all of these except the --disable-thread-safety one,\nwhich I'm still contemplating. So far all green on the farm (except\nknown unrelated breakage). But that's just the same-day animals...\n\n\n",
"msg_date": "Fri, 5 Aug 2022 21:26:04 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 2:30 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-08-03 21:52:04 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > Another potential cleanup is the fallback for strtoll/strtoull.\n> >\n> > +1, I suspect the alternate spellings are dead.\n>\n> Looks like that includes systems where there's no declaration for strtoll,\n> strtoull. The test was introduced in\n>\n> commit a6228128fc48c222953dfd41fd438522a184054c\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date: 2018-05-18 22:42:10 -0400\n>\n> Arrange to supply declarations for strtoll/strtoull if needed.\n>\n> The check was introduced for animal dromedary, afaics. Looks like that stopped\n> reporting 2019-09-27 and transformed into florican.\n>\n> A query on the bf database didn't see any runs in the last 30 days that didn't\n> have strtoll declared.\n>\n> See attached patch.\n\nLGTM. This is just C99 <stdlib.h> stuff, and my scraped config data\nset agrees with your observation.\n\n\n",
"msg_date": "Sat, 6 Aug 2022 00:01:00 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Sat, Jul 23, 2022 at 8:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> More generally, I'm not exactly convinced that changes like\n> this are a readability improvement:\n>\n> -#ifdef HAVE_SETSID\n> +#ifndef WIN32\n>\n> I'd rather not have the code cluttered with a sea of\n> indistinguishable \"#ifndef WIN32\" tests when some of them could be\n> more specific and more mnemonic.\n\nI can see both sides of this issue. On the one hand, if there's a\nlarge chunk of code that's surrounded by #ifndef WIN32, then it might\nnot be clear to the casual observer that the block of code in question\nis working around the lack of setsid() rather than some other\nWindows-specific weirdness. Comments can help with that, though. On\nthe other hand, to me, seeing HAVE_SETSID makes me think that we're\ndealing with a configure probe, and that I might have to worry about\nUNIX-like systems not having support for that primitive. If it says\nWIN32, then I know that's all we're talking about, and that's clearer.\n\nLooking at a HAVE_SETSID specifically, they seem to fall into three\ndifferent categories. First, there's a couple of places that look\nroughly like this:\n\n#ifdef HAVE_SETSID\n if (setsid() < 0)\n elog(FATAL, \"setsid() failed: %m\");\n#endif\n\nI don't think that changing this to WIN32 would confuse anybody.\nSurely it's obvious that the WIN32 test is about setsid(), because\nthat's the only code in the block.\n\nThen there are a couple of places that look like this:\n\n /*\n * If we have setsid(), signal the backend's whole process\n * group\n */\n#ifdef HAVE_SETSID\n (void) kill(-pid, SIGTERM);\n#else\n (void) kill(pid, SIGTERM);\n#endif\n\nI think it would be clear enough to adopt a WIN32 test here if we also\nadjusted the comment, e.g. \"All non-Windows systems supported process\ngroups, and on such systems, we want to signal the entire group.\" But\nI think there might be an even better option. 
On Windows, kill is\nanyway getting defined to pgkill, and our version of pgkill is defined\nto return EINVAL if you pass it a negative number. Why don't we just\nhave it change a negative value into a positive one? Then we can drop\nall this conditional logic in the callers, who can just do (void)\nkill(-pid, SIGWHATEVER) and we should be fine. On a quick look, it\nappears to me that every call site that passes non-constant second\nargument to kill() would be happy with this change to pgkill().\n\nFinally, there's this sort of thing:\n\nstatic void\nsignal_child(pid_t pid, int signal)\n{\n if (kill(pid, signal) < 0)\n elog(DEBUG3, \"kill(%ld,%d) failed: %m\", (long) pid, signal);\n#ifdef HAVE_SETSID\n switch (signal)\n {\n case SIGINT:\n case SIGTERM:\n case SIGQUIT:\n case SIGSTOP:\n case SIGKILL:\n if (kill(-pid, signal) < 0)\n elog(DEBUG3, \"kill(%ld,%d) failed: %m\", (long) (-pid), signal);\n break;\n default:\n break;\n }\n#endif\n}\n\nSo the logic here says that we should send the signal to the child and\nthen if the signal is in a certain list and HAVE_SETSID is defined,\nalso send the same signal to the whole process group. Here HAVE_SETSID\nis really just a proxy for whether the operating system has a notion\nof process groups and, again, it seems OK to change this to a WIN32\ntest with proper comments. However, here again, I think there might be\na better option. The comment above this function notes that signalling\nthe process itself (as opposed to other process group members) is\nassumed to not cause any problems, which implies that it's not a\ndesired behavior. 
So with the above redefinition of pgkill(), we could\nrewrite this function like this:\n\nstatic void\nsignal_child(pid_t pid, int signal)\n{\n switch (signal)\n {\n case SIGINT:\n case SIGTERM:\n case SIGQUIT:\n case SIGSTOP:\n case SIGKILL:\n pid = -pid;\n break;\n default:\n break;\n }\n if (kill(pid, signal) < 0)\n elog(DEBUG3, \"kill(%ld,%d) failed: %m\", (long) pid, signal);\n}\n\nOverall, I don't think it's a great idea to keep all of these\nHAVE_WHATEVER macros around if the configure tests are gone. It might\nbe necessary in the short term to make sure we don't regress the\nreadability of the code, but I think it would be better to come up\nwith other techniques for keeping the code readable rather than\nrelying on the names of these vestigial macros as documentation.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Aug 2022 10:37:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
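[Editorial note: Robert's suggestion above — have the Windows kill() replacement accept a process-group style negative pid instead of failing with EINVAL — amounts to something like the following sketch. deliver_stub() is an invented stand-in for the real signal-pipe delivery in src/port/kill.c.]

```c
/*
 * Hedged sketch of the proposed pgkill() change: map a negative pid onto
 * the process itself, since Windows has no POSIX process groups.  The
 * delivery function below only records its arguments for illustration.
 */
static int	last_pid;
static int	last_sig;

static int
deliver_stub(int pid, int sig)		/* stand-in for pipe-based delivery */
{
	last_pid = pid;
	last_sig = sig;
	return 0;
}

static int
sketch_pgkill(int pid, int sig)
{
	if (pid < 0)
		pid = -pid;				/* signal the would-be group leader */
	return deliver_stub(pid, sig);
}
```

With something like this in place, callers could unconditionally write kill(-pid, sig) and drop the #ifdef dance quoted above.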
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Overall, I don't think it's a great idea to keep all of these\n> HAVE_WHATEVER macros around if the configure tests are gone. It might\n> be necessary in the short term to make sure we don't regress the\n> readability of the code, but I think it would be better to come up\n> with other techniques for keeping the code readable rather than\n> relying on the names of these vestigial macros as documentation.\n\nHmm ... I agree with you that the end result could be nicer code,\nbut what's making it nicer is a pretty substantial amount of human\neffort for each and every call site. Is anybody stepping forward\nto put in that amount of work?\n\nMy proposal is to leave the call sites alone until someone feels\nlike doing that sort of detail work.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Aug 2022 10:48:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Fri, Aug 5, 2022 at 10:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hmm ... I agree with you that the end result could be nicer code,\n> but what's making it nicer is a pretty substantial amount of human\n> effort for each and every call site. Is anybody stepping forward\n> to put in that amount of work?\n>\n> My proposal is to leave the call sites alone until someone feels\n> like doing that sort of detail work.\n\nMy plan was to nerd-snipe Thomas Munro into doing it.[1]\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n[1] https://xkcd.com/356/\n\n\n",
"msg_date": "Fri, 5 Aug 2022 10:54:26 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Sat, Aug 6, 2022 at 12:01 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Aug 4, 2022 at 2:30 PM Andres Freund <andres@anarazel.de> wrote:\n>> [strtoll cleanup patch]\n>\n> LGTM. This is just C99 <stdlib.h> stuff, and my scraped config data\n> set agrees with your observation.\n\nI found a couple of explicit references to these macros left in\nsrc/interfaces/ecpg/ecpglib/data.c and src/timezone/private.h.\nRemoved in the attached, which I'll push a bit later if no objections.",
"msg_date": "Sat, 6 Aug 2022 09:02:32 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Hi,\n\n\nOn 2022-08-06 09:02:32 +1200, Thomas Munro wrote:\n> On Sat, Aug 6, 2022 at 12:01 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Thu, Aug 4, 2022 at 2:30 PM Andres Freund <andres@anarazel.de> wrote:\n> >> [strtoll cleanup patch]\n> >\n> > LGTM. This is just C99 <stdlib.h> stuff, and my scraped config data\n> > set agrees with your observation.\n> \n> I found a couple of explicit references to these macros left in\n> src/interfaces/ecpg/ecpglib/data.c and src/timezone/private.h.\n> Removed in the attached, which I'll push a bit later if no objections.\n\nHah, I was about to push it. Thanks for catching these. Happy for you to push\nthis soon!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 5 Aug 2022 14:08:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-05 14:08:23 -0700, Andres Freund wrote:\n> Hah, I was about to push it. Thanks for catching these. Happy for you to push\n> this soon!\n\nThanks. Next in my quest for reducing autoconf vs meson pg_config.h\ndifferences is GETTIMEOFDAY stuff.\n\nHAVE_GETTIMEOFDAY currently is only defined for mingw as the configure test is\ngated to windows - that's somewhat weird imo. mingw has had it since at least\n2007. The attached patch makes the gettimeofday() fallback specific to msvc.\n\nI've renamed the file to win32gettimeofday now. I wonder if we should rename\nfiles that are specific to msvc to indicate that? But that's for later.\n\n1-arg gettimeofday() hasn't been around in a *long* while from what I can\nsee. So I've removed that configure test.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 5 Aug 2022 17:03:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-03 14:25:01 +1200, Thomas Munro wrote:\n> It'd be good to find a new home for pg_get_user_name() and\n> pg_get_user_home_dir(), which really shouldn't be left in the now\n> bogusly named src/port/thread.c. Any suggestions?\n\nLeaving the name aside, the win32 handling of these functions is\nembarrassing. Both are inside an #ifndef WIN32.\n\nThe only caller (in fe-auth.c) of pg_get_user_name() has:\n#ifdef WIN32\n\tif (GetUserName(username, &namesize))\n\t\tname = username;\n\telse if (errorMessage)\n\t\tappendPQExpBuffer(errorMessage,\n\t\t\t\t\t\t libpq_gettext(\"user name lookup failure: error code %lu\\n\"),\n\t\t\t\t\t\t GetLastError());\n#else\n\tif (pg_get_user_name(user_id, pwdbuf, sizeof(pwdbuf)))\n\t\tname = pwdbuf;\n\telse if (errorMessage)\n\t\tappendPQExpBuffer(errorMessage, \"%s\\n\", pwdbuf);\n\nthe only caller of pg_get_user_home_dir() (path.c) has:\n\nbool\nget_home_path(char *ret_path)\n{\n#ifndef WIN32\n\t/*\n\t * We first consult $HOME. If that's unset, try to get the info from\n\t * <pwd.h>.\n\t */\n\tconst char *home;\n\n\thome = getenv(\"HOME\");\n\tif (home == NULL || home[0] == '\\0')\n\t\treturn pg_get_user_home_dir(geteuid(), ret_path, MAXPGPATH);\n\tstrlcpy(ret_path, home, MAXPGPATH);\n\treturn true;\n#else\n\tchar\t *tmppath;\n\n\t/*\n\t * Note: We use getenv() here because the more modern SHGetFolderPath()\n\t * would force the backend to link with shell32.lib, which eats valuable\n\t * desktop heap. XXX This function is used only in psql, which already\n\t * brings in shell32 via libpq. Moving this function to its own file\n\t * would keep it out of the backend, freeing it from this concern.\n\t */\n\ttmppath = getenv(\"APPDATA\");\n\tif (!tmppath)\n\t\treturn false;\n\tsnprintf(ret_path, MAXPGPATH, \"%s/postgresql\", tmppath);\n\treturn true;\n#endif\n}\n\nHow does this make any sort of sense?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 5 Aug 2022 17:15:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Sat, Aug 6, 2022 at 12:03 PM Andres Freund <andres@anarazel.de> wrote:\n> HAVE_GETTIMEOFDAY currently is only defined for mingw as the configure test is\n> gated to windows - that's somewhat weird imo. mingw has had it since at least\n> 2007. The attached patch makes the gettimeofday() fallback specific to msvc.\n\n+1\n\n> I've renamed the file to win32gettimeofday now. I wonder if we should rename\n> files that are specific to msvc to indicate that? But that's for later.\n\n+1, I was thinking the same.\n\n> 1-arg gettimeofday() hasn't been around in a *long* while from what I can\n> see. So I've removed that configure test.\n\n+1\n\nLGTM.\n\n\n",
"msg_date": "Sat, 6 Aug 2022 12:44:48 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I've renamed the file to win32gettimeofday now. I wonder if we should rename\n> files that are specific to msvc to indicate that? But that's for later.\n\n+1, but you didn't change the file's own comments containing its name.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Aug 2022 20:52:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Sat, Aug 6, 2022 at 2:54 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Aug 5, 2022 at 10:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Hmm ... I agree with you that the end result could be nicer code,\n> > but what's making it nicer is a pretty substantial amount of human\n> > effort for each and every call site. Is anybody stepping forward\n> > to put in that amount of work?\n> >\n> > My proposal is to leave the call sites alone until someone feels\n> > like doing that sort of detail work.\n>\n> My plan was to nerd-snipe Thomas Munro into doing it.[1]\n\nAlright, well here's a patch for the setsid() stuff following Robert's\nplan, which I think is a pretty good plan.\n\nDid I understand correctly that the places that do kill(-pid) followed\nby kill(pid) really only need the kill(-pid)?\n\nI checked that user processes should never have pid 0 (that's a\nspecial system idle process) or 1 (because they're always even,\nactually it looks like they're pointers in kernel space or something\nlike that), since those wouldn't play nice with the coding I used\nhere.\n\nI note that Windows actually *does* have process groups (in one of the\nCI threads, we learned that there were at least two concepts like\nthat). Some of our fake signals turn into messages to pipes, and\nothers turn into process termination, and in theory we could probably\nalso take advantage of Windows' support for its native \"control C\" and\n\"control break\" signals here. It's possible that someone could do\nsomething to make all but the pipe ones propagate to real Windows\nprocess groups. That might be good for things like nuking archiver\nsubprocesses and the like when taking out the backend, or something\nlike that, but I'm not planning to look into that myself.",
"msg_date": "Sun, 7 Aug 2022 10:23:17 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Did I understand correctly that the places that do kill(-pid) followed\n> by kill(pid) really only need the kill(-pid)?\n\nUh ... did you read the comment right above signal_child?\n\n * There is a race condition for recently-forked children: they might not\n * have executed setsid() yet. So we signal the child directly as well as\n * the group. We assume such a child will handle the signal before trying\n * to spawn any grandchild processes. We also assume that signaling the\n * child twice will not cause any problems.\n\nIt might be that this is wrong and signaling -pid will work even if\nthe child hasn't yet done setsid(), but I doubt it: the kill(2) man\npage is pretty clear that it'll fail if \"the process group doesn't\nexist\".\n\nPerhaps we could finesse that by signaling -pid first, and then\nsignaling pid if that fails, but offhand it seems like that has\nthe described race condition w.r.t. grandchild processes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Aug 2022 18:42:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Thanks. Next in my quest for reducing autoconf vs meson pg_config.h\n> differences is GETTIMEOFDAY stuff.\n\nI just noticed that this could be simplified:\n\n #ifdef _MSC_VER\n struct timezone;\n /* Last parameter not used */\n extern int\tgettimeofday(struct timeval *tp, struct timezone *tzp);\n #endif\n\nbecause what POSIX actually says is\n\n int gettimeofday(struct timeval *restrict tp, void *restrict tzp);\n\nand\n\n If tzp is not a null pointer, the behavior is unspecified.\n\nIndeed, we never call it with anything but a null tzp value,\nnor does our Windows fallback implementation do anything with tzp.\n\nSo ISTM we should drop the bogus \"struct timezone;\" and declare\nthis parameter as \"void *tzp\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Aug 2022 19:08:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Sun, Aug 7, 2022 at 11:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Thanks. Next in my quest for reducing autoconf vs meson pg_config.h\n> > differences is GETTIMEOFDAY stuff.\n>\n> I just noticed that this could be simplified:\n>\n> #ifdef _MSC_VER\n> struct timezone;\n> /* Last parameter not used */\n> extern int gettimeofday(struct timeval *tp, struct timezone *tzp);\n> #endif\n>\n> because what POSIX actually says is\n>\n> int gettimeofday(struct timeval *restrict tp, void *restrict tzp);\n>\n> and\n>\n> If tzp is not a null pointer, the behavior is unspecified.\n>\n> Indeed, we never call it with anything but a null tzp value,\n> nor does our Windows fallback implementation do anything with tzp.\n>\n> So ISTM we should drop the bogus \"struct timezone;\" and declare\n> this parameter as \"void *tzp\".\n\nI also wonder if half the stuff in win32gettimeofday.c can be deleted.\n From some light googling, it looks like\nGetSystemTimePreciseAsFileTime() can just be called directly on\nWindows 8+ (and we now require 10+), and that kernel32.dll malarky was\nfor older systems?\n\n\n",
"msg_date": "Sun, 7 Aug 2022 11:14:36 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I also wonder if half the stuff in win32gettimeofday.c can be deleted.\n> From some light googling, it looks like\n> GetSystemTimePreciseAsFileTime() can just be called directly on\n> Windows 8+ (and we now require 10+), and that kernel32.dll malarky was\n> for older systems?\n\nYeah, Microsoft's man page for it just says to include sysinfoapi.h\n(which we aren't) and then it should work on supported versions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Aug 2022 19:22:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Sun, Aug 7, 2022 at 10:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Did I understand correctly that the places that do kill(-pid) followed\n> > by kill(pid) really only need the kill(-pid)?\n>\n> Uh ... did you read the comment right above signal_child?\n>\n> * There is a race condition for recently-forked children: they might not\n> * have executed setsid() yet. So we signal the child directly as well as\n> * the group. We assume such a child will handle the signal before trying\n> * to spawn any grandchild processes. We also assume that signaling the\n> * child twice will not cause any problems.\n\nOof. Fixed.",
"msg_date": "Sun, 7 Aug 2022 11:29:55 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Sat, Aug 6, 2022 at 9:08 AM Andres Freund <andres@anarazel.de> wrote:\n> [stuff about strtoll, strtoull]\n\nSo what about strtof? That's gotta be dead code too. I gather we\nstill need commit 72880ac1's HAVE_BUGGY_STRTOF. From a cursory glance\nat MinGW's implementation, it still has the complained-about\nbehaviour, if I've understood the complaint, and if I'm looking at the\nright C runtime[1]. But then our code says:\n\n * Test results on Mingw suggest that it has the same problem, though looking\n * at the code I can't figure out why.\n\n... so which code was that referring to then? I'm not up to speed on\nhow many C runtime libraries there are and how they are selected on\nMSYS (I mean, the closest I've ever got to this system is flinging\npatches at it on CI using Melih's patch, which, incidentally, I just\ntested the attached with and it passed[2]).\n\n[1] https://github.com/mirror/mingw-w64/blob/master/mingw-w64-crt/stdio/strtof.c\n[2] https://github.com/macdice/postgres/runs/7708082971",
"msg_date": "Sun, 7 Aug 2022 11:47:31 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sun, Aug 7, 2022 at 10:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> * There is a race condition for recently-forked children: they might not\n>> * have executed setsid() yet. So we signal the child directly as well as\n>> * the group. We assume such a child will handle the signal before trying\n>> * to spawn any grandchild processes. We also assume that signaling the\n>> * child twice will not cause any problems.\n\n> Oof. Fixed.\n\nHmm ... it seems like these other callers have the same race condition.\nStatementTimeoutHandler and LockTimeoutHandler account for that\ncorrectly by issuing two kill()s, so how is it OK for pg_signal_backend\nand TerminateOtherDBBackends not to?\n\nIt would likely be a good idea for all these places to mention that\nthey're doing that to avoid a race condition, and cross-reference\nsignal_child for details. Or maybe we should promote signal_child\ninto a more widely used function?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Aug 2022 19:52:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> So what about strtof? That's gotta be dead code too. I gather we\n> still need commit 72880ac1's HAVE_BUGGY_STRTOF. From a cursory glance\n> at MinGW's implementation, it still has the complained-about\n> behaviour, if I've understood the complaint, and if I'm looking at the\n> right C runtime[1].\n\nLooks plausible from here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Aug 2022 19:55:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Sun, Aug 7, 2022 at 11:22 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > I also wonder if half the stuff in win32gettimeofday.c can be deleted.\n> > From some light googling, it looks like\n> > GetSystemTimePreciseAsFileTime() can just be called directly on\n> > Windows 8+ (and we now require 10+), and that kernel32.dll malarky was\n> > for older systems?\n>\n> Yeah, Microsoft's man page for it just says to include sysinfoapi.h\n> (which we aren't) and then it should work on supported versions.\n\nThis looks good on CI (well I haven't waited for it to finish yet, but\nMSVC compiles it without warning and we're most of the way through the\ntests...).",
"msg_date": "Sun, 7 Aug 2022 12:14:18 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sun, Aug 7, 2022 at 11:22 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Thomas Munro <thomas.munro@gmail.com> writes:\n>>> I also wonder if half the stuff in win32gettimeofday.c can be deleted.\n\n> This looks good on CI (well I haven't waited for it to finish yet, but\n> MSVC compiles it without warning and we're most of the way through the\n> tests...).\n\nLooks plausible from here. A couple other thoughts:\n\n* While you're at it you could fix the \"MinW\" typo just above\nthe extern for gettimeofday.\n\n* I'm half tempted to add something like this to gettimeofday:\n\n\t/*\n\t * POSIX declines to define what tzp points to, saying\n\t * \"If tzp is not a null pointer, the behavior is unspecified\".\n\t * Let's take this opportunity to verify that noplace in\n\t * Postgres tries to use any unportable behavior.\n\t */\n\tAssert(tzp == NULL);\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Aug 2022 20:25:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Hi,\n\nHere's another set patches for cruft I discovered going line-by-line through\nthe autoconf vs meson test differences. They'd all be simple to port to meson\ntoo, but I think it's better to clean them up.\n\n0001: __func__ is C99, so we don't need to support fallbacks\n\n0002: windows: We've unconditionally defined HAVE_MINIDUMP_TYPE for msvc forever, we\n can rely on it for mingw too\n\n0003: aix: aix3.2.5, aix4.1 are not even of historical interest at this point\n - 4.1 was released before the first commit in our commit history\n\n0004: solaris: these gcc & gnu ld vs sun stuff differences seem unnecessary or\n outdated\n\n I started this because I wanted to get rid of with_gnu_ld, but there's still\n a necessary reference left unfortunately. But it still seems worth doing?\n\n I checked and the relevant options (-shared, -Wl,-Bsymbolic, -Wl,-soname)\n work even on solaris 10 with developerstudio12.5 (not the latest)\n\n0005: those broken system headers look to have been repaired a good while ago,\n or, in the case of irix, we don't support the platform anymore\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sat, 6 Aug 2022 18:29:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Sun, Aug 7, 2022 at 1:29 PM Andres Freund <andres@anarazel.de> wrote:\n> 0001: __func__ is C99, so we don't need to support fallbacks\n\n+1, and my scraped data agrees.\n\nI believe our minimum MSVC is current 2015, and this says it has it\n(it doesn't let you select older versions in the version drop-down,\nbut we don't care about older versions):\n\nhttps://docs.microsoft.com/en-us/cpp/preprocessor/predefined-macros?view=msvc-140\n\n> 0002: windows: We've unconditionally defined HAVE_MINIDUMP_TYPE for msvc forever, we\n> can rely on it for mingw too\n\n * If supported on the current platform, set up a handler to be called if\n * the backend/postmaster crashes with a fatal signal or exception.\n */\n-#if defined(WIN32) && defined(HAVE_MINIDUMP_TYPE)\n+#if defined(WIN32)\n\nPersonally I'd remove \"If supported on the current platform, \" and\nshove the rest of the comment inside the #if defined(WIN32), but\nthat's just me...\n\n> 0003: aix: aix3.2.5, aix4.1 are not even of historical interest at this point\n> - 4.1 was released before the first commit in our commit history\n\nWow.\n\n> 0004: solaris: these gcc & gnu ld vs sun stuff differences seem unnecessary or\n> outdated\n\nLGTM from a look at the current man page.\n\n> I checked and the relevant options (-shared, -Wl,-Bsymbolic, -Wl,-soname)\n> work even on solaris 10 with developerstudio12.5 (not the latest)\n\nFWIW I'd call Solaris 10 EOL'd (it's in some\nsure-pay-us-but-we-aren't-really-going-to-fix-it phase with a lifetime\nsimilar to the actual sun).\n\n> 0005: those broken system headers look to have been repaired a good while ago,\n> or, in the case of irix, we don't support the platform anymore\n\nNice archeology.\n\n\n",
"msg_date": "Sun, 7 Aug 2022 14:29:20 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-07 11:47:31 +1200, Thomas Munro wrote:\n> So what about strtof? That's gotta be dead code too. I gather we\n> still need commit 72880ac1's HAVE_BUGGY_STRTOF.\n\n> From a cursory glance at MinGW's implementation, it still has the\n> complained-about behaviour, if I've understood the complaint, and if I'm\n> looking at the right C runtime[1].\n\nWell, right now we don't refuse to build against the \"wrong\" runtimes, so it's\nhard to say whether you're looking at the right runtime. I don't think we need\nthis if we're (as we should imo) only using the ucrt - that's microsoft's,\nwhich IIUC is ok?\n\n\n> -/*\n> - * strtof() is part of C99; this version is only for the benefit of obsolete\n> - * platforms. As such, it is known to return incorrect values for edge cases,\n> - * which have to be allowed for in variant files for regression test results\n> - * for any such platform.\n> - */\n\nWe can't remove the result files referenced here yet, due to the double\nrounding behaviour?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 6 Aug 2022 19:46:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-07 14:29:20 +1200, Thomas Munro wrote:\n> On Sun, Aug 7, 2022 at 1:29 PM Andres Freund <andres@anarazel.de> wrote:\n> > 0001: __func__ is C99, so we don't need to support fallbacks\n> \n> +1, and my scraped data agrees.\n> \n> I believe our minimum MSVC is current 2015, and this says it has it\n> (it doesn't let you select older versions in the version drop-down,\n> but we don't care about older versions):\n\nThanks for checking.\n\n\n> > 0002: windows: We've unconditionally defined HAVE_MINIDUMP_TYPE for msvc forever, we\n> > can rely on it for mingw too\n> \n> * If supported on the current platform, set up a handler to be called if\n> * the backend/postmaster crashes with a fatal signal or exception.\n> */\n> -#if defined(WIN32) && defined(HAVE_MINIDUMP_TYPE)\n> +#if defined(WIN32)\n> \n> Personally I'd remove \"If supported on the current platform, \" and\n> shove the rest of the comment inside the #if defined(WIN32), but\n> that's just me...\n\nI started out that way as well, but it'd actually be nice to do this on other\nplatforms too, and we just don't support it yet :)\n\n\n> > I checked and the relevant options (-shared, -Wl,-Bsymbolic, -Wl,-soname)\n> > work even on solaris 10 with developerstudio12.5 (not the latest)\n> \n> FWIW I'd call Solaris 10 EOL'd (it's in some\n> sure-pay-us-but-we-aren't-really-going-to-fix-it phase with a lifetime\n> similar to the actual sun).\n\nI'd agree - but checked so that couldn't even be an argument against :)\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 6 Aug 2022 19:57:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-08-07 11:47:31 +1200, Thomas Munro wrote:\n>> So what about strtof? That's gotta be dead code too. I gather we\n>> still need commit 72880ac1's HAVE_BUGGY_STRTOF.\n\n> Well, right now we don't refuse to build against the \"wrong\" runtimes, so it's\n> hard to say whether you're looking at the right runtime. I don't think we need\n> this if we're (as we should imo) only using the ucrt - that's microsoft's,\n> which IIUC is ok?\n\nYou could pull it out and see if the buildfarm breaks, but my money\nis on it breaking. That HAVE_BUGGY_STRTOF stuff isn't very old.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Aug 2022 22:58:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-06 22:58:12 -0400, Tom Lane wrote:\n> You could pull it out and see if the buildfarm breaks, but my money\n> is on it breaking. That HAVE_BUGGY_STRTOF stuff isn't very old.\n\nWe only recently figured out that we should use the ucrt runtime (and that it\nexists, I guess).\nfairywren and jacan's first runs with ucrt are from mid February:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2022-02-13%2007%3A11%3A46\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2022-02-17%2016%3A15%3A24\n\nWe probably should just throw an error if msvcrt is used. That's the old, pre\nC99, microsoft C runtime, with some mingw replacement functions ontop. I\nthink our tests already don't pass when it's used. See [1] for more info.\n\nNot entirely sure how to best detect ucrt use - we could just check MSYSTEM,\nbut that's not determinative because one also can specify the compiler via the\nprefix...\n\n\nThat'd still leave us with the alternative output files due to cygwin, I\nthink.\n\nGreetings,\n\nAndres Freund\n\n\n[1] https://www.msys2.org/docs/environments/\n\n\n",
"msg_date": "Sat, 6 Aug 2022 20:39:48 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-06 20:39:48 -0700, Andres Freund wrote:\n> On 2022-08-06 22:58:12 -0400, Tom Lane wrote:\n> > You could pull it out and see if the buildfarm breaks, but my money\n> > is on it breaking. That HAVE_BUGGY_STRTOF stuff isn't very old.\n> \n> We only recently figured out that we should use the ucrt runtime (and that it\n> exists, I guess).\n> fairywren and jacan's first runs with ucrt are from mid February:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2022-02-13%2007%3A11%3A46\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2022-02-17%2016%3A15%3A24\n\nWell, bad news and good news. The bad: We get the wrong results when just\nremoving HAVE_BUGGY_STRTOF. The good: That is just because we haven't applied\nenough, or the right, magic. To have mingw to not interfere with things one\nalso has to pass -D_UCRT and -lucrt - then the tests pass, even without\nHAVE_BUGGY_STRTOF.\n\nI think this might also explain (and fix) some other oddity we had with mingw\nthat I was getting confused about a while back, but I forgot too much of the\ndetails...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 6 Aug 2022 23:20:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-06 23:20:26 -0700, Andres Freund wrote:\n> The good: That is just because we haven't applied enough, or the right,\n> magic. To have mingw to not interfere with things one also has to pass\n> -D_UCRT and -lucrt - then the tests pass, even without HAVE_BUGGY_STRTOF.\n\nLooks like only -lucrt is required, not -D_UCRT.\n\nAsked on the mingw irc channel - it's not expected that such magic is\nrequired. Opened https://github.com/msys2/MINGW-packages/issues/12472\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 6 Aug 2022 23:54:42 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-06 18:29:14 -0700, Andres Freund wrote:\n> 0003: aix: aix3.2.5, aix4.1 are not even of historical interest at this point\n> - 4.1 was released before the first commit in our commit history\n\nhoverfly clearly doesn't like this:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hoverfly&dt=2022-08-07%2017%3A06%3A15\n\nOh, I may see the problem I think I misread the comma in the ifneq\n\n--- a/src/backend/Makefile\n+++ b/src/backend/Makefile\n@@ -101,15 +101,7 @@ postgres: $(POSTGRES_IMP)\n \n $(POSTGRES_IMP): $(OBJS)\n $(LD) $(LDREL) $(LDOUT) SUBSYS.o $(call expand_subsys,$^)\n-ifeq ($(host_os), aix3.2.5)\n $(MKLDEXPORT) SUBSYS.o $(bindir)/postgres > $@\n-else\n-ifneq (,$(findstring aix4.1, $(host_os)))\n- $(MKLDEXPORT) SUBSYS.o $(bindir)/postgres > $@\n-else\n- $(MKLDEXPORT) SUBSYS.o . > $@\n-endif\n-endif\n @rm -f SUBSYS.o\n \ni.e. it should be \"$(MKLDEXPORT) SUBSYS.o . > $@\" after removing the\nconditionals rather than \"$(MKLDEXPORT) SUBSYS.o $(bindir)/postgres > $@\".\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 7 Aug 2022 11:27:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Aug 4, 2022 at 4:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> clock_gettime is required by SUSv2 (1997), so I have to admit that\n>> macOS 10.4 doesn't have a lot of excuse not to have it. In any case,\n>> prairiedog is just sitting there doing its thing until I find cycles\n>> to install a newer OS. If you want to move ahead with this, don't\n>> let prairiedog block you.\n\n> Thanks, will do.\n\nBTW, that commit really should have updated the explanation at the top of\ninstr_time.h:\n\n * This file provides an abstraction layer to hide portability issues in\n * interval timing. On Unix we use clock_gettime() if available, else\n * gettimeofday(). On Windows, gettimeofday() gives a low-precision result\n * so we must use QueryPerformanceCounter() instead. These macros also give\n * some breathing room to use other high-precision-timing APIs.\n\nUpdating the second sentence is easy enough, but as for the third,\nI wonder if it's still true in view of 24c3ce8f1. Should we revisit\nwhether to use gettimeofday vs. QueryPerformanceCounter? At the very\nleast I suspect it's no longer about \"low precision\", but about which\nAPI is faster.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Aug 2022 20:27:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Mon, Aug 8, 2022 at 12:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> BTW, that commit really should have updated the explanation at the top of\n> instr_time.h:\n>\n> * This file provides an abstraction layer to hide portability issues in\n> * interval timing. On Unix we use clock_gettime() if available, else\n> * gettimeofday(). On Windows, gettimeofday() gives a low-precision result\n> * so we must use QueryPerformanceCounter() instead. These macros also give\n> * some breathing room to use other high-precision-timing APIs.\n>\n> Updating the second sentence is easy enough, but as for the third,\n> I wonder if it's still true in view of 24c3ce8f1. Should we revisit\n> whether to use gettimeofday vs. QueryPerformanceCounter? At the very\n> least I suspect it's no longer about \"low precision\", but about which\n> API is faster.\n\nYeah, that's not true anymore, and QueryPerformanceCounter() is faster\nthan GetSystemTimePreciseAsFileTime()[1], but there doesn't\nreally seem to be any point in mentioning that or gettimeofday() at\nall here. I propose to cut it down to just:\n\n * This file provides an abstraction layer to hide portability issues in\n- * interval timing. On Unix we use clock_gettime() if available, else\n- * gettimeofday(). On Windows, gettimeofday() gives a low-precision result\n- * so we must use QueryPerformanceCounter() instead. These macros also give\n- * some breathing room to use other high-precision-timing APIs.\n+ * interval timing. On Unix we use clock_gettime(), and on Windows we use\n+ * QueryPerformanceCounter(). These macros also give some breathing room to\n+ * use other high-precision-timing APIs.\n\nFWIW I expect this stuff to get whacked around some more for v16[2].\n\n[1] https://devblogs.microsoft.com/oldnewthing/20170921-00/?p=97057\n[2] https://commitfest.postgresql.org/39/3751/\n\n\n",
"msg_date": "Tue, 9 Aug 2022 11:35:05 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Yeah, that's not true anymore, and QueryPerformanceCounter() is faster\n> than GetSystemTimePreciseAsFileTime()[1], but there doesn't\n> really seem to be any point in mentioning that or gettimeofday() at\n> all here. I propose to cut it down to just:\n\n> * This file provides an abstraction layer to hide portability issues in\n> - * interval timing. On Unix we use clock_gettime() if available, else\n> - * gettimeofday(). On Windows, gettimeofday() gives a low-precision result\n> - * so we must use QueryPerformanceCounter() instead. These macros also give\n> - * some breathing room to use other high-precision-timing APIs.\n> + * interval timing. On Unix we use clock_gettime(), and on Windows we use\n> + * QueryPerformanceCounter(). These macros also give some breathing room to\n> + * use other high-precision-timing APIs.\n\nWFM.\n\n> FWIW I expect this stuff to get whacked around some more for v16[2].\n> [2] https://commitfest.postgresql.org/39/3751/\n\nMeh. I think trying to use rdtsc is a fool's errand; you'll be fighting\nCPU quirks forever.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Aug 2022 19:46:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Here's a new batch of these:\n\nRemove configure probe for sys/uio.h.\nRemove configure probes for sys/un.h and struct sockaddr_un.\nRemove configure probe for sys/select.h.\nRemove configure probes for sys/ipc.h, sys/sem.h, sys/shm.h.\nRemove configure probe for sys/resource.h and refactor.\nRemove configure probe for shl_load library.\n\nThe most interesting things to say about these ones are:\n * The concept of a no-Unix-socket build is removed. We should be\nable to do that now, right? Peter E seemed to say approximately that\nin the commit message for 797129e5. Or is there a thought that a new\noperating system might show up that doesn't have 'em and we'd wish\nwe'd kept this stuff well marked out?\n * Instead of eg HAVE_SYS_RESOURCE_H I supplied <sys/resource.h> and\nmoved the relevant declarations there from the big random-win2-stuff\nheader, and likewise for <sys/un.h>\n\n(I still plan to get rid of no-threads-builds soon, but I got stuck\nfiguring out how to make sure I don't break the recent magic ldap\nlibrary detection... more soon.)",
"msg_date": "Thu, 11 Aug 2022 22:02:29 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> The most interesting things to say about these ones are:\n> * The concept of a no-Unix-socket build is removed. We should be\n> able to do that now, right? Peter E seemed to say approximately that\n> in the commit message for 797129e5. Or is there a thought that a new\n> operating system might show up that doesn't have 'em and we'd wish\n> we'd kept this stuff well marked out?\n\nI'm kind of down on removing that. I certainly think it's premature\nto do so today, when we haven't even yet shipped a release that\nassumes we can always define it on Windows -- I won't be too surprised\nif we get pushback on that after 15.0 is out. But in general, Unix\nsockets seem like kind of an obvious thing that might not be there\non some future platform.\n\nInstead of what you did in 0002, I propose putting\n\"#define HAVE_UNIX_SOCKETS 1\" in pg_config_manual.h, and keeping\nthe #if's that reference it as-is.\n\nAll the rest of this seems fine on a cursory readthrough. Even if\nwe discover that header foo.h is less universal than we thought,\nputting back one or two configure tests won't be hard.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Aug 2022 10:52:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-11 10:52:51 -0400, Tom Lane wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > The most interesting things to say about these ones are:\n> > * The concept of a no-Unix-socket build is removed. We should be\n> > able to do that now, right? Peter E seemed to say approximately that\n> > in the commit message for 797129e5. Or is there a thought that a new\n> > operating system might show up that doesn't have 'em and we'd wish\n> > we'd kept this stuff well marked out?\n> \n> I'm kind of down on removing that. I certainly think it's premature\n> to do so today, when we haven't even yet shipped a release that\n> assumes we can always define it on Windows\n\nI think what might be good next step is to have tests default to using unix\nsockets on windows, rather than requiring PG_TEST_USE_UNIX_SOCKETS. The\npg_regress.c and Utils.pm hunks disabling it by default on windows don't\nreally make sense anymore.\n\n#if !defined(HAVE_UNIX_SOCKETS)\n use_unix_sockets = false;\n#elif defined(WIN32)\n\n /*\n * We don't use Unix-domain sockets on Windows by default, even if the\n * build supports them. (See comment at remove_temp() for a reason.)\n * Override at your own risk.\n */\n use_unix_sockets = getenv(\"PG_TEST_USE_UNIX_SOCKETS\") ? true : false;\n#else\n use_unix_sockets = true;\n#endif\n\nand\n\n # Specifies whether to use Unix sockets for test setups. On\n # Windows we don't use them by default since it's not universally\n # supported, but it can be overridden if desired.\n $use_unix_sockets =\n (!$windows_os || defined $ENV{PG_TEST_USE_UNIX_SOCKETS});\n\n\nTests don't reliably run on windows without PG_TEST_USE_UNIX_SOCKETS, due to\nthe port conflict detection being incredibly racy. I see occasional failures\neven without test concurrency, and with test concurrency it reliably fails.\n\n\nI don't really know what to do about the warnings around remove_temp() and\ntrapsig(). 
I think we actually may be overreading the restrictions. To me the\ndocumented restrictions read more like a high-level-ish explanation of what's\nsafe in a signal handler and what not. And it seems to not have caused a\nproblem on windows on several thousand CI cycles, including plenty failures.\n\n\nAlternatively we could just default to putting the socketdir inside the data\ndirectory on windows - I *think* windows doesn't have strict path length\nlimits for the socket location. If the socket dir isn't in some global temp\ndirectory, we don't really need signal_remove_temp, given that we're ok with\nleaving the much bigger data directory around. The socket directory\ndetermination doesn't really work on windows right now anyway, one manually\nhas to set a temp directory as TMPDIR isn't normally set on windows, and /tmp\ndoesn't exist.\n\n char *template = psprintf(\"%s/pg_regress-XXXXXX\",\n getenv(\"TMPDIR\") ? getenv(\"TMPDIR\") : \"/tmp\");\n\nboth TEMP and TMP would exist though...\n\n\n\n> -- I won't be too surprised if we get pushback on that after 15.0 is out.\n\n From what angle? I think our default behaviour doesn't change because\nDEFAULT_PGSOCKET_DIR is \"\". And OS compatibility wise we apparently are good\nas well?\n\n\n> But in general, Unix sockets seem like kind of an obvious thing that might\n> not be there on some future platform.\n\nMaybe not with the precise API, but I can't imagine a new platform not having\nsomething very similar. It serves a pretty obvious need, and I think the\nsecurity benefits have become more important over time...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 11 Aug 2022 10:14:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-08-11 10:52:51 -0400, Tom Lane wrote:\n>> -- I won't be too surprised if we get pushback on that after 15.0 is out.\n\n> From what angle?\n\nIf I knew that, it'd be because we'd already received the pushback.\nI'm just suspicious that very little beta testing happens on Windows,\nand what does is probably mostly people running up-to-date Windows.\nSo I think there's plenty of chance for \"hey, this no longer works\"\ncomplaints later. Maybe we'll be able to reject it all with \"sorry,\nwe desupported that version of Windows\", but I dunno.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Aug 2022 13:19:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On 11.08.22 19:19, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2022-08-11 10:52:51 -0400, Tom Lane wrote:\n>>> -- I won't be too surprised if we get pushback on that after 15.0 is out.\n> \n>> From what angle?\n> \n> If I knew that, it'd be because we'd already received the pushback.\n> I'm just suspicious that very little beta testing happens on Windows,\n> and what does is probably mostly people running up-to-date Windows.\n> So I think there's plenty of chance for \"hey, this no longer works\"\n> complaints later. Maybe we'll be able to reject it all with \"sorry,\n> we desupported that version of Windows\", but I dunno.\n\nWhat is changing in PG15 about this? Unix-domain sockets on Windows are \nsupported as of PG13.\n\n\n",
"msg_date": "Fri, 12 Aug 2022 09:07:18 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 11:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> clock_gettime is required by SUSv2 (1997), so I have to admit that\n> macOS 10.4 doesn't have a lot of excuse not to have it. In any case,\n> prairiedog is just sitting there doing its thing until I find cycles\n> to install a newer OS. If you want to move ahead with this, don't\n> let prairiedog block you.\n\nNow that prairiedog is offline, it seems the oldest Bison in the\nbuildfarm is 2.3, as the vendor-supplied version for MacOS. Since\nearlier versions are now untested, it seems we should now make that\nthe minimum, as in the attached. I took a quick look to see if that\nwould enable anything useful, but nothing stuck out.\n\nAside: The MSVC builds don't report the Bison version that I can see,\nbut otherwise it seems now the only non-Apple pre-3.0 animals are\nprion (2.7) and the three Sparc animals on Debian 7 (2.5).\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 12 Aug 2022 14:12:15 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On 11.08.22 12:02, Thomas Munro wrote:\n> * The concept of a no-Unix-socket build is removed. We should be\n> able to do that now, right? Peter E seemed to say approximately that\n> in the commit message for 797129e5. Or is there a thought that a new\n> operating system might show up that doesn't have 'em and we'd wish\n> we'd kept this stuff well marked out?\n\nMost uses of HAVE_UNIX_SOCKETS are not useful independent of that \nquestion. For example, you patch has\n\n@@ -348,7 +343,6 @@ StreamServerPort(int family, const char *hostName, \nunsigned short portNumber,\n \thint.ai_flags = AI_PASSIVE;\n \thint.ai_socktype = SOCK_STREAM;\n\n-#ifdef HAVE_UNIX_SOCKETS\n \tif (family == AF_UNIX)\n \t{\n \t\t/*\n\nBut on a platform without support for Unix sockets, family just won't be \nAF_UNIX at run time, so there is no need to hide that if branch.\n\nNote that we already require that AF_UNIX is defined on all platforms, \neven if the kernel doesn't support Unix sockets.\n\nBut maybe it would be better to make that a separate patch from the \nsys/un.h configure changes, just so there is more clarity around it.\n\n\n",
"msg_date": "Fri, 12 Aug 2022 09:15:55 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Fri, Aug 12, 2022 at 5:14 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-08-11 10:52:51 -0400, Tom Lane wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > The most interesting things to say about these ones are:\n> > > * The concept of a no-Unix-socket build is removed. We should be\n> > > able to do that now, right? Peter E seemed to say approximately that\n> > > in the commit message for 797129e5. Or is there a thought that a new\n> > > operating system might show up that doesn't have 'em and we'd wish\n> > > we'd kept this stuff well marked out?\n> >\n> > I'm kind of down on removing that. I certainly think it's premature\n> > to do so today, when we haven't even yet shipped a release that\n> > assumes we can always define it on Windows\n\nWe already do assume that on Windows though. I assume it just bombs\nout with an address-family-not-recognized or similar error if you try\nto use it on old Windows? It compiles fine because there aren't\nany new APIs, and winsock.h has always had the AF_UNIX macro (and many\nothers probably copied-and-pasted from BSD sys/socket.h, the numbers\nseem to match for the first 16 AF_XXXs), and we supply our own\nsockaddr_un.\n\nAbout that struct, some versions of Visual Studio and MinGW didn't\nhave the header for it when this went in[1]. MinGW 10, released in\nApril 2022, has gained it[2]. I think it's reasonable to support only\nthe \"current\" MinGW (it's just too niche to have a support window\nwider than \"right now\" IMHO), so it'd be OK to rely on that for PG16?\nThat would leave Visual Studio. Does anyone know if our recent\nde-cluttering efforts have fixed that problem yet? If that turns out\nto be true, I'd propose to keep the <sys/un.h> header I added here,\nbut change it to simply #include <afunix.h>. But that's probably too\nhopeful... If we could find out which MSVC\nversion/SDK/whatever-you-call-it is the first not to need it, it'd be\ngreat to put that in a comment at least, for future garbage collection\nwork...\n\n(As for why we ever had a configure check for something as basic as\nUnix sockets, it looks like QNX and very old Cygwin didn't have them?\nPerhaps one or both also didn't even define AF_UNIX, explaining the\nIS_AF_UNIX() macro we used to have?)\n\n(Thinking about the standard... I wonder if Windows would have got\nthis facility sooner if POSIX's attempt to rename AF_UNIX to the more\nportable-sounding AF_LOCAL had succeeded... I noticed the Stevens\nnetworking book has a note that POSIX had just decided on AF_LOCAL,\nbut current POSIX standards make no mention of it, so something went\nwrong somewhere along the way there...)\n\n> I think what might be good next step is to have tests default to using unix\n> sockets on windows, rather than requiring PG_TEST_USE_UNIX_SOCKETS. The\n> pg_regress.c and Utils.pm hunks disabling it by default on windows don't\n> really make sense anymore.\n\n+1, obviously separate from this cleanup stuff, but, yeah, from PG16\nonwards it seems to be legit to assume that they're available AFAIK.\n\n> I don't really know what to do about the warnings around remove_temp() and\n> trapsig(). I think we actually may be overreading the restrictions. To me the\n> documented restrictions read more like a high-level-ish explanation of what's\n> safe in a signal handler and what not. And it seems to not have caused a\n> problem on windows on several thousand CI cycles, including plenty failures.\n>\n> Alternatively we could just default to putting the socketdir inside the data\n> directory on windows - I *think* windows doesn't have strict path length\n> limits for the socket location. If the socket dir isn't in some global temp\n> directory, we don't really need signal_remove_temp, given that we're ok with\n> leaving the much bigger data directory around. The socket directory\n> determination doesn't really work on windows right now anyway, one manually\n> has to set a temp directory as TMPDIR isn't normally set on windows, and /tmp\n> doesn't exist.\n\nRe: length: No Windows here but I spent some time today playing ping\npong with little stand-alone test programs on CI, and determined that\nit chomps paths at 108 despite succeeding with longer paths. I could\nsuccessfully bind(), but directory listings showed the truncation, and\nif I tried to make and bind two sockets with the same 108-character\nprefix, the second would fail with address-in-use. Is that enough to\nbe practical?\n\nOne thing I noticed is that it is implemented as reparse points. We\nmight need to tweak some of our stat/dirent stuff to be more careful\nif we start putting sockets in places that might be encountered by\nthat stuff. (The old pgwin32_is_junction() I recently removed would\nhave returned true for a socket; the newer get_dirent_type() would\nreturn PGFILETYPE_REG right now because it examines reparse points\nslightly more carefully, but that's still the wrong answer; for\ncomparison, on Unix you'd get PGFILETYPE_UNKNOWN for a DT_SOCK because\nwe didn't handle it).\n\nThere seems to be conflicting information out there about whether\n\"abstract\" sockets work (the Linux extension to AF_UNIX where sun_path\nstarts with a NUL byte and they don't appear in the filesystem, and\nthey automatically go away when all descriptors are closed). I\ncouldn't get it to work but I might be doing it wrong... information\nis scant.\n\n[1] https://www.postgresql.org/message-id/88ae9594-6177-fa3c-0061-5bf8f8044b21%402ndquadrant.com\n[2] https://github.com/mingw-w64/mingw-w64/blob/v10.0.0/mingw-w64-headers/include/afunix.h\n\n\n",
"msg_date": "Fri, 12 Aug 2022 19:42:30 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Fri, Aug 12, 2022 at 7:15 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> On 11.08.22 12:02, Thomas Munro wrote:\n> > * The concept of a no-Unix-socket build is removed. We should be\n> > able to do that now, right? Peter E seemed to say approximately that\n> > in the commit message for 797129e5. Or is there a thought that a new\n> > operating system might show up that doesn't have 'em and we'd wish\n> > we'd kept this stuff well marked out?\n>\n> Most uses of HAVE_UNIX_SOCKETS are not useful independent of that\n> question. For example, you patch has\n>\n> @@ -348,7 +343,6 @@ StreamServerPort(int family, const char *hostName,\n> unsigned short portNumber,\n> hint.ai_flags = AI_PASSIVE;\n> hint.ai_socktype = SOCK_STREAM;\n>\n> -#ifdef HAVE_UNIX_SOCKETS\n> if (family == AF_UNIX)\n> {\n> /*\n>\n> But on a platform without support for Unix sockets, family just won't be\n> AF_UNIX at run time, so there is no need to hide that if branch.\n\nGood point.\n\n> Note that we already require that AF_UNIX is defined on all platforms,\n> even if the kernel doesn't support Unix sockets.\n\nPOSIX requires the macro too. I think this would count as SUSv3 (AKA\nissue 6?). (IIUC it existed in much older POSIX form as 1g, it's all\nvery confusing...)\n\nhttps://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sys_socket.h.html\n\n> But maybe it would be better to make that a separate patch from the\n> sys/un.h configure changes, just so there is more clarity around it.\n\nCool, I'll do that.\n\n\n",
"msg_date": "Fri, 12 Aug 2022 20:03:41 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Continuing the project of removing dead configure tests ...\nI see that prairiedog was the only buildfarm animal failing the\nHAVE_PPC_LWARX_MUTEX_HINT test, and it seems pretty unlikely that\nthere are any assemblers remaining in the wild that can't parse that.\n(I've confirmed that son-of-prairiedog, the NetBSD 9.3 installation\nI'm cranking up on that hardware, is okay with it.)\n\nSo, PFA a little patch to remove that test.\n\nIt doesn't look like we can remove the adjacent test about \"i\"\nsyntax, unfortunately, because the AIX animals still fail that.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 12 Aug 2022 12:56:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "I wrote:\n> I see that prairiedog was the only buildfarm animal failing the\n> HAVE_PPC_LWARX_MUTEX_HINT test, and it seems pretty unlikely that\n> there are any assemblers remaining in the wild that can't parse that.\n\nActually, after further investigation and testing, I think we could\ndrop the conditionality around PPC spinlock sequences altogether.\nThe commentary in pg_config_manual.h claims that \"some pre-POWER4\nmachines\" will fail on LWSYNC or LWARX with hint, but I've now\nconfirmed that the oldest PPC chips in my possession (prairiedog's\nppc7400, as well as a couple of ppc7450 machines) are all fine with\nboth. Indeed, prairiedog would have been failing for some time now\nif it didn't like LWSYNC, because port/atomics/arch-ppc.h is using\nthat unconditionally in some places :-(. I think we can safely\nassume that such machines no longer exist in the wild, or at least\nare not going to be used to run Postgres v16.\n\nThe attached, expanded patch hard-wires USE_PPC_LWARX_MUTEX_HINT\nand USE_PPC_LWSYNC as true, and syncs the assembler sequences in\narch-ppc.h with that decision. I've checked this lightly on\ntern's host as well as my own machines.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 12 Aug 2022 16:08:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Sat, Aug 13, 2022 at 8:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > I see that prairiedog was the only buildfarm animal failing the\n> > HAVE_PPC_LWARX_MUTEX_HINT test, and it seems pretty unlikely that\n> > there are any assemblers remaining in the wild that can't parse that.\n>\n> Actually, after further investigation and testing, I think we could\n> drop the conditionality around PPC spinlock sequences altogether.\n> The commentary in pg_config_manual.h claims that \"some pre-POWER4\n> machines\" will fail on LWSYNC or LWARX with hint, but I've now\n> confirmed that the oldest PPC chips in my possession (prairiedog's\n> ppc7400, as well as a couple of ppc7450 machines) are all fine with\n> both. Indeed, prairiedog would have been failing for some time now\n> if it didn't like LWSYNC, because port/atomics/arch-ppc.h is using\n> that unconditionally in some places :-(. I think we can safely\n> assume that such machines no longer exist in the wild, or at least\n> are not going to be used to run Postgres v16.\n\nYeah, POWER3 was superseded in 2001. Looking around, Linux distros\nthat someone might seriously run a database on already require much\nmore recent POWER generations to even boot up, and although there is\n(for example) a separate unofficial powerpc port for Debian and of\ncourse NetBSD and others that can target older Macs and IBM servers\netc, apparently no one has run our test suite on 21+ year old hardware\nand reported that it (presumably) SIGILLs already, per that\nobservation (I think ports/packages that fail to work probably just\nget marked broken). As for vintage Apple hardware, clearly the G4\ncube was far more collectable than those weird slightly inflated\nlooking G3 machines :-)\n\nI'm curious, though... if we used compiler builtins, would\n-march/-mcpu etc know about this kind of thing, for people who wanted\nto compile on ancient hardware, or, I guess more interestingly, newer\ntricks that we haven't got around to learning about yet?\n\n\n",
"msg_date": "Sat, 13 Aug 2022 09:48:24 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I'm curious, though... if we used compiler builtins, would\n> -march/-mcpu etc know about this kind of thing, for people who wanted\n> to compile on ancient hardware, or, I guess more interestingly, newer\n> tricks that we haven't got around to learning about yet?\n\nNo idea, but I could run some tests if you have something\nspecific in mind.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Aug 2022 18:45:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Fri, Aug 12, 2022 at 8:03 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Aug 12, 2022 at 7:15 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> > But maybe it would be better to make that a separate patch from the\n> > sys/un.h configure changes, just so there is more clarity around it.\n>\n> Cool, I'll do that.\n\nI pushed these, except I chopped out the HAVE_UNIX_SOCKETS part as\nrequested. Here it is in a separate patch, with a commit message that\nexplains the rationale (essentially, what you said, it's basically a\nruntime matter for a hypothetical AF_UNIX-less system to complain\nabout). Tom, does this argument persuade you? Also, a couple of\nnearby things:\n\n Remove HAVE_UNIX_SOCKETS.\n Remove configure probe for struct sockaddr_storage.\n Remove configure probe for getaddrinfo, and replacement code.\n\nIf I'm reading comments and scraped configure data right, it looks\nlike those last two things were there only for HP-UX 10 and Windows <\n8.1.\n\nI tried to figure out how to get rid of\nPGAC_STRUCT_SOCKADDR_STORAGE_MEMBERS, but there we're into genuine\nnon-standard cross-platform differences. At best maybe you could\nmaybe skip the test for ss_family (POSIX says you have to have that,\nbut I haven't read RFC 2553 to see why it claims someone should spell\nit differently). Didn't seem worth changing.\n\nbfbot=> select name, value, count(*) from macro where name like\n'%SOCKADDR_%' group by 1, 2;\n name | value | count\n----------------------------------------+-------+-------\n HAVE_STRUCT_SOCKADDR_STORAGE_SS_LEN | 1 | 13 <-- BSDish\n HAVE_STRUCT_SOCKADDR_STORAGE | 1 | 108 <-- everyone\n HAVE_STRUCT_SOCKADDR_SA_LEN | 1 | 18 <-- BSDish + AIX\n HAVE_STRUCT_SOCKADDR_STORAGE_SS_FAMILY | 1 | 108 <-- everyone\n HAVE_STRUCT_SOCKADDR_STORAGE___SS_LEN | 1 | 5 <-- AIX\n HAVE_STRUCT_SOCKADDR_UN | 1 | 106 <-- everyone\nexcept mingw\n(6 rows)",
"msg_date": "Sun, 14 Aug 2022 00:23:02 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Sun, Aug 14, 2022 at 12:23 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Remove HAVE_UNIX_SOCKETS.\n> Remove configure probe for struct sockaddr_storage.\n> Remove configure probe for getaddrinfo, and replacement code.\n\nPlus one more that falls out of the above (it was only used by\nsrc/port/getaddrinfo.c):\n\n Remove configure probe for gethostbyname_r.",
"msg_date": "Sun, 14 Aug 2022 01:14:23 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I pushed these, except I chopped out the HAVE_UNIX_SOCKETS part as\n> requested. Here it is in a separate patch, with a commit message that\n> explains the rationale (essentially, what you said, it's basically a\n> runtime matter for a hypothetical AF_UNIX-less system to complain\n> about). Tom, does this argument persuade you?\n\nI looked more closely and saw that basically what HAVE_UNIX_SOCKETS\nis guarding is code that assumes the existence of AF_UNIX and\nstruct sockaddr_un. As Peter said, we already rely on AF_UNIX\nin some other places; and I see that sys/un.h is required to exist\nand to define struct sockaddr_un as far back as SUSv2. So it\ndoes seem like the worst consequence is that we'd be compiling\nsome code that would be unreachable on platforms lacking support.\nObjection withdrawn.\n\nAs for the other two, they look like nice cleanup if we can actually\nget away with it. I agree that the business about nonstandard libbind\nis not of interest anymore, but I have no idea about the state of\nplay on Windows. I guess we can push 'em and see what the buildfarm\nthinks.\n\n> I tried to figure out how to get rid of\n> PGAC_STRUCT_SOCKADDR_STORAGE_MEMBERS, but there we're into genuine\n> non-standard cross-platform differences.\n\nRight. I don't think it's worth sweating over.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 13 Aug 2022 14:07:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Sun, Aug 14, 2022 at 6:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > I pushed these, except I chopped out the HAVE_UNIX_SOCKETS part as\n> > requested. Here it is in a separate patch, with a commit message that\n> > explains the rationale (essentially, what you said, it's basically a\n> > runtime matter for a hypothetical AF_UNIX-less system to complain\n> > about). Tom, does this argument persuade you?\n>\n> I looked more closely and saw that basically what HAVE_UNIX_SOCKETS\n> is guarding is code that assumes the existence of AF_UNIX and\n> struct sockaddr_un. As Peter said, we already rely on AF_UNIX\n> in some other places; and I see that sys/un.h is required to exist\n> and to define struct sockaddr_un as far back as SUSv2. So it\n> does seem like the worst consequence is that we'd be compiling\n> some code that would be unreachable on platforms lacking support.\n> Objection withdrawn.\n\nThanks, and pushed with a couple of minor doc tweaks.\n\nI hadn't paid attention to our existing abstract Unix socket support\nbefore and now I'm curious: do we have a confirmed sighting of that\nworking on Windows? The thread didn't say so[1], and I'm suspicious\nbecause I couldn't get simple standalone programs that bind() to\n\"\\000c:\\\\xxx\" to work sanely (but my method for investigating Windows\nis to send the punch cards over to the CI system and wait for the\nresults to arrive by carrier pigeon which is cumbersome enough that I\nhaven't tried very hard). Naively shoving a @ into PostgreSQL's\nPG_REGRESS_SOCK_DIR also breaks CI.\n\n> As for the other two, they look like nice cleanup if we can actually\n> get away with it. I agree that the business about nonstandard libbind\n> is not of interest anymore, but I have no idea about the state of\n> play on Windows. I guess we can push 'em and see what the buildfarm\n> thinks.\n\nAll green on CI... Next stop, build farm.\n\nI'm a bit confused about why I had to #define gai_strerror\ngai_strerrorA myself to get this working (my non-Windows-guy\nunderstanding is that the A-for-ANSI [sic] variants of system\nfunctions should be selected automatically unless you #define UNICODE\nto get W-for-wide variants). If anyone has any clues about that, I'd\nbe glad to clean it up.\n\nI *guess* the main risk here is that different error messages might\nshow up in some scenarios on Windows (everything else was already\ngoing directly to OS functions on Windows 8.1+ if I'm reading the code\nright), but surely that'd be progress -- messages from the real netdb\nimplementation are surely preferable to our fallback stuff.\n\n[1] https://www.postgresql.org/message-id/flat/6dee8574-b0ad-fc49-9c8c-2edc796f0033%402ndquadrant.com\n\n\n",
"msg_date": "Sun, 14 Aug 2022 10:03:19 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-14 10:03:19 +1200, Thomas Munro wrote:\n> I hadn't paid attention to our existing abstract Unix socket support\n> before and now I'm curious: do we have a confirmed sighting of that\n> working on Windows?\n\nI vaguely remember successfully trying it in the past. But I just tried it\nunsuccessfully in a VM and there's a bunch of other places saying it's not\nworking...\nhttps://github.com/microsoft/WSL/issues/4240\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 13 Aug 2022 15:36:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Sun, Aug 14, 2022 at 10:36 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-08-14 10:03:19 +1200, Thomas Munro wrote:\n> > I hadn't paid attention to our existing abstract Unix socket support\n> > before and now I'm curious: do we have a confirmed sighting of that\n> > working on Windows?\n>\n> I vaguely remember successfully trying it in the past. But I just tried it\n> unsuccessfully in a VM and there's a bunch of other places saying it's not\n> working...\n> https://github.com/microsoft/WSL/issues/4240\n\nI think we'd better remove our claim that it works then. Patch attached.\n\nWe could also reject it, I guess, but it doesn't immediately seem\nharmful so I'm on the fence. On the Windows version that Cirrus is\nrunning, we happily start up with:\n\n2022-08-13 20:44:35.174 GMT [4760][postmaster] LOG: listening on Unix\nsocket \"@c:/cirrus/.s.PGSQL.61696\"\n\n... and then client processes apparently can't see it, which is\nconfusing but, I guess, defensible if we're only claiming it works on\nLinux. We don't go out of our way to avoid this feature on a per-OS\nbasis in general, though at least on a typical Unix system it fails\nfast. For example, my FreeBSD system here barfs:\n\n2022-08-15 13:26:13.483 NZST [29956] LOG: could not bind Unix address\n\"@/tmp/.s.PGSQL.5432\": No such file or directory\n\n... because the kernel just sees an empty string and can't locate the\nparent directory.",
"msg_date": "Mon, 15 Aug 2022 13:48:22 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Sun, Aug 14, 2022 at 10:03 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> All green on CI... Next stop, build farm.\n\nAll good so far (except for an admonishment from crake, for which my\npenance was to fix headerscheck, see separate thread...). I did\nfigure out one thing that I mentioned I was confused by before: the\nreason Windows didn't like my direct calls to gai_strerror() is\nbecause another header of ours clobbered one of Windows' own macros.\nThis new batch includes a fix for that.\n\n Remove configure probe for IPv6.\n Remove dead ifaddrs.c fallback code.\n Remove configure probe for net/if.h.\n Fix macro problem with gai_strerror on Windows.\n Remove configure probe for netinet/tcp.h.\n mstcpip.h is not missing on MinGW.\n\nThe interesting one is a continuation of my \"all computers have X\"\nseries. This episode: IPv6.",
"msg_date": "Mon, 15 Aug 2022 17:53:04 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On 15.08.22 03:48, Thomas Munro wrote:\n>> I vaguely remember successfully trying it in the past. But I just tried it\n>> unsuccessfully in a VM and there's a bunch of other places saying it's not\n>> working...\n>> https://github.com/microsoft/WSL/issues/4240\n> I think we'd better remove our claim that it works then. Patch attached.\n\nWhen I developed support for abstract unix sockets, I did test them on \nWindows. The lack of support on WSL appears to be an unrelated fact. \nSee for example how [0] talks about them separately.\n\n[0]: https://devblogs.microsoft.com/commandline/af_unix-comes-to-windows/\n\n\n",
"msg_date": "Mon, 15 Aug 2022 10:36:11 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Mon, Aug 15, 2022 at 8:36 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> On 15.08.22 03:48, Thomas Munro wrote:\n> >> I vaguely remember successfully trying it in the past. But I just tried it\n> >> unsuccessfully in a VM and there's a bunch of other places saying it's not\n> >> working...\n> >> https://github.com/microsoft/WSL/issues/4240\n> > I think we'd better remove our claim that it works then. Patch attached.\n>\n> When I developed support for abstract unix sockets, I did test them on\n> Windows. The lack of support on WSL appears to be an unrelated fact.\n> See for example how [0] talks about them separately.\n\nUser amoldeshpande's complaint was posted to the WSL project's issue\ntracker but it's about native Windows/winsock code and s/he says so\nexplicitly (though other people pile in with various other complaints\nincluding WSL interop). User sunilmut's comment says it's not\nworking, and [0] is now just confusing everybody :-(\n\n\n",
"msg_date": "Mon, 15 Aug 2022 22:48:22 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-15 13:48:22 +1200, Thomas Munro wrote:\n> On Sun, Aug 14, 2022 at 10:36 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-08-14 10:03:19 +1200, Thomas Munro wrote:\n> > > I hadn't paid attention to our existing abstract Unix socket support\n> > > before and now I'm curious: do we have a confirmed sighting of that\n> > > working on Windows?\n> >\n> > I vaguely remember successfully trying it in the past. But I just tried it\n> > unsuccessfully in a VM and there's a bunch of other places saying it's not\n> > working...\n> > https://github.com/microsoft/WSL/issues/4240\n> \n> I think we'd better remove our claim that it works then. Patch attached.\n> \n> We could also reject it, I guess, but it doesn't immediately seem\n> harmful so I'm on the fence. On the Windows version that Cirrus is\n> running, we happily start up with:\n> \n> 2022-08-13 20:44:35.174 GMT [4760][postmaster] LOG: listening on Unix\n> socket \"@c:/cirrus/.s.PGSQL.61696\"\n\nWhat I find odd is that you said your naive program rejected this...\n\n\nFWIW, in an up-to-date windows 10 VM the client side fails with:\n\npsql: error: connection to server on socket \"@frak/.s.PGSQL.5432\" failed: Invalid argument (0x00002726/10022)\n Is the server running locally and accepting connections on that socket?\n\nThat's with most security things disabled and developer mode turned on.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 15 Aug 2022 12:25:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Tue, Aug 16, 2022 at 7:25 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-08-15 13:48:22 +1200, Thomas Munro wrote:\n> > 2022-08-13 20:44:35.174 GMT [4760][postmaster] LOG: listening on Unix\n> > socket \"@c:/cirrus/.s.PGSQL.61696\"\n>\n> What I find odd is that you said your naive program rejected this...\n\nNo, I said it wasn't behaving sanely. It allowed me to create two\nsockets and bind them both to \"\\000C:\\\\xxxxxxxxxx\", but I expected the\nsecond to fail with EADDRINUSE/10048[1]. I was messing around with\nthings like that because my original aim was to check if the names are\nsilently truncated through EADDRINUSE errors, an approach that worked\nfor regular Unix sockets.\n\n[1] https://cirrus-ci.com/task/4643322672185344?logs=main#L16\n\n\n",
"msg_date": "Tue, 16 Aug 2022 07:51:43 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Tue, Aug 16, 2022 at 7:51 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> [1] https://cirrus-ci.com/task/4643322672185344?logs=main#L16\n\nDerp, I noticed that that particular horrendous quick and dirty test\ncode was invalidated by a closesocket() call, but in another version I\ncommented that out and it didn't help. Of course it's possible that\nI'm still doing something wrong in the test, I didn't spend long on\nthis once I saw the bigger picture...\n\n\n",
"msg_date": "Tue, 16 Aug 2022 08:26:45 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Fri, Aug 12, 2022 at 7:42 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Aug 12, 2022 at 5:14 AM Andres Freund <andres@anarazel.de> wrote:\n> > I don't really know what to do about the warnings around remove_temp() and\n> > trapsig(). I think we actually may be overreading the restrictions. To me the\n> > documented restrictions read more like a high-level-ish explanation of what's\n> > safe in a signal handler and what not. And it seems to not have caused a\n> > problem on windows on several thousand CI cycles, including plenty failures.\n\nSo the question there is whether we can run this stuff safely in\nWindows signal handler context, considering the rather vaguely defined\nconditions[1]:\n\n unlink(sockself);\n unlink(socklock);\n rmdir(temp_sockdir);\n\nYou'd think that basic stuff like DeleteFile() that just enters the\nkernel would be async-signal-safe, like on Unix; the danger surely\ncomes from stepping on the user context's toes with state mutations,\nlocks etc. But let's suppose we want to play by a timid\ninterpretation of that page's \"do not issue low-level or STDIO.H I/O\nroutines\". It also says that SIGINT is special and runs the handler\nin a new thread (in a big warning box because that has other hazards\nthat would break other kinds of code). Well, we *know* it's safe to\nunlink files in another thread... so... how cheesy would it be if we\njust did raise(SIGINT) in the real handlers?\n\n[1] https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/signal?view=msvc-170\n\n\n",
"msg_date": "Tue, 16 Aug 2022 13:02:55 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-16 13:02:55 +1200, Thomas Munro wrote:\n> On Fri, Aug 12, 2022 at 7:42 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Fri, Aug 12, 2022 at 5:14 AM Andres Freund <andres@anarazel.de> wrote:\n> > > I don't really know what to do about the warnings around remove_temp() and\n> > > trapsig(). I think we actually may be overreading the restrictions. To me the\n> > > documented restrictions read more like a high-level-ish explanation of what's\n> > > safe in a signal handler and what not. And it seems to not have caused a\n> > > problem on windows on several thousand CI cycles, including plenty failures.\n> \n> So the question there is whether we can run this stuff safely in\n> Windows signal handler context, considering the rather vaguely defined\n> conditions[1]:\n> \n> unlink(sockself);\n> unlink(socklock);\n> rmdir(temp_sockdir);\n> \n> You'd think that basic stuff like DeleteFile() that just enters the\n> kernel would be async-signal-safe, like on Unix; the danger surely\n> comes from stepping on the user context's toes with state mutations,\n> locks etc.\n\nYea.\n\nI guess it could be different because things like file descriptors are just a\nuserland concept.\n\n\n> But let's suppose we want to play by a timid interpretation of that page's\n> \"do not issue low-level or STDIO.H I/O routines\". It also says that SIGINT\n> is special and runs the handler in a new thread (in a big warning box\n> because that has other hazards that would break other kinds of code). Well,\n> we *know* it's safe to unlink files in another thread... so... how cheesy\n> would it be if we just did raise(SIGINT) in the real handlers?\n\nNot quite sure I understand. You're proposing to raise(SIGINT) for all other\nhandlers, so that signal_remove_temp() gets called in another thread, because\nwe assume that'd be safe because doing file IO in other threads is safe? That\nassumes the signal handler invocation infrastructure isn't the problem...\n\nLooks like we could register a \"native\" ctrl-c handler:\nhttps://docs.microsoft.com/en-us/windows/console/setconsolectrlhandler\nthey're documented to run in a different thread, but without any of the\nfile-io warnings.\nhttps://docs.microsoft.com/en-us/windows/console/handlerroutine\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 15 Aug 2022 18:16:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Tue, Aug 16, 2022 at 1:16 PM Andres Freund <andres@anarazel.de> wrote:\n> > But let's suppose we want to play by a timid interpretation of that page's\n> > \"do not issue low-level or STDIO.H I/O routines\". It also says that SIGINT\n> > is special and runs the handler in a new thread (in a big warning box\n> > because that has other hazards that would break other kinds of code). Well,\n> > we *know* it's safe to unlink files in another thread... so... how cheesy\n> > would it be if we just did raise(SIGINT) in the real handlers?\n>\n> Not quite sure I understand. You're proposing to raise(SIGINT) for all other\n> handlers, so that signal_remove_temp() gets called in another thread, because\n> we assume that'd be safe because doing file IO in other threads is safe? That\n> assumes the signal handler invocation infrastructure isn't the problem...\n\nThat's what I was thinking about, yeah. But after some more reading,\nnow I'm wondering if we'd even need to do that, or what I'm missing.\nThe 6 listed signals in the manual are SIGABRT, SIGFPE, SIGILL,\nSIGINT, SIGSEGV and SIGTERM (the 6 required by C). We want to run\nsignal_remove_temp() on SIGHUP (doesn't exist, we made it up), SIGINT,\nSIGPIPE (doesn't exist, we made it up), and SIGTERM (exists for C spec\ncompliance but will never be raised by the system according to the\nmanual, and we don't raise it ourselves IIUC). So the only case we\nactually have to consider is SIGINT, and SIGINT handlers run in a\nthread, so if we assume it is therefore exempt from those\nvery-hard-to-comply-with rules, aren't we good already? What am I\nmissing?\n\n> Looks like we could register a \"native\" ctrl-c handler:\n> https://docs.microsoft.com/en-us/windows/console/setconsolectrlhandler\n> they're documented to run in a different thread, but without any of the\n> file-io warnings.\n> https://docs.microsoft.com/en-us/windows/console/handlerroutine\n\nSounds better in general, considering the extreme constraints of the\nsignal system, but it'd be nice to see if the current system is truly\nunsafe before writing more alien code.\n\nSomeone who wants to handle more than one SIGINT would certainly need\nto consider that, because there doesn't seem to be a race-free way to\nreinstall the signal handler when you receive it[1]. Races aside, for\nany signal except SIGINT (assuming the above-mentioned exemption),\nyou're probably also not even allowed to try because raise() might be\na system call and they're banned. Fortunately we don't care, we\nwanted SIG_DFL next anyway.\n\n[1] https://wiki.sei.cmu.edu/confluence/display/c/SIG01-C.+Understand+implementation-specific+details+regarding+signal+handler+persistence\n\n\n",
"msg_date": "Tue, 16 Aug 2022 16:14:27 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Mon, Aug 15, 2022 at 5:53 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Remove configure probe for IPv6.\n> Remove dead ifaddrs.c fallback code.\n> Remove configure probe for net/if.h.\n> Fix macro problem with gai_strerror on Windows.\n> Remove configure probe for netinet/tcp.h.\n> mstcpip.h is not missing on MinGW.\n\nI pushed these except one, plus one more about <sys/sockio.h> which\nturned out to be not needed after a bit of archeology.\n\nHere's a slightly better AF_INET6 one. I'm planning to push it\ntomorrow if there are no objections. It does something a little more\naggressive than the preceding stuff, because SUSv3 says that IPv6 is\nan \"option\". I don't see that as an issue: it also says that various\nother ubiquitous stuff we're using is optional. Of course, it would\nbe absurd for a new socket implementation to appear today that can't\ntalk to a decent chunk of the internet, and all we require here is the\nheaders. That optionality was relevant for the transition period a\ncouple of decades ago.",
"msg_date": "Thu, 18 Aug 2022 18:13:38 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Tue, Aug 16, 2022 at 4:14 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Aug 16, 2022 at 1:16 PM Andres Freund <andres@anarazel.de> wrote:\n> > > But let's suppose we want to play by a timid interpretation of that page's\n> > > \"do not issue low-level or STDIO.H I/O routines\". It also says that SIGINT\n> > > is special and runs the handler in a new thread (in a big warning box\n> > > because that has other hazards that would break other kinds of code). Well,\n> > > we *know* it's safe to unlink files in another thread... so... how cheesy\n> > > would it be if we just did raise(SIGINT) in the real handlers?\n> >\n> > Not quite sure I understand. You're proposing to raise(SIGINT) for all other\n> > handlers, so that signal_remove_temp() gets called in another thread, because\n> > we assume that'd be safe because doing file IO in other threads is safe? That\n> > assumes the signal handler invocation infrastructure isn't the problem...\n>\n> That's what I was thinking about, yeah. But after some more reading,\n> now I'm wondering if we'd even need to do that, or what I'm missing.\n> The 6 listed signals in the manual are SIGABRT, SIGFPE, SIGILL,\n> SIGINT, SIGSEGV and SIGTERM (the 6 required by C). We want to run\n> signal_remove_temp() on SIGHUP (doesn't exist, we made it up), SIGINT,\n> SIGPIPE (doesn't exist, we made it up), and SIGTERM (exists for C spec\n> compliance but will never be raised by the system according to the\n> manual, and we don't raise it ourselves IIUC). So the only case we\n> actually have to consider is SIGINT, and SIGINT handlers run in a\n> thread, so if we assume it is therefore exempt from those\n> very-hard-to-comply-with rules, aren't we good already? What am I\n> missing?\n\nI converted that analysis into a WIP patch, and tried to make the\nWindows test setup as similar to Unix as possible. I put in the\nexplanation and an assertion that it's running in another thread.\nThis is blind coded as I don't have Windows, but it passes CI. I'd\nprobably need some help from a Windows-enabled hacker to go further\nwith this, though. Does the assertion hold if you control-C the\nregression test, and is there any other way to get it to fail?\n\nThe next thing is that the security infrastructure added by commit\nf6dc6dd5 for CVE-2014-0067 is ripped out (because unreachable) by the\nattached, but the security infrastructure added by commit be76a6d3\nprobably doesn't work on Windows yet. Where src/port/mkdtemp.c does\nmkdir(name, 0700), I believe Windows throws away the mode and makes a\ndefault ACL directory, probably due to the mismatch between the\npermissions models. I haven't studied the Windows security model, but\nreading tells me that AF_UNIX will obey filesystem ACLs, so I think we\nshould be able to make it exactly as secure as Unix if we use native\nAPIs. Perhaps we just need to replace the mkdir() call in mkdtemp.c\nwith CreateDirectory(), passing in a locked-down owner-only\nSECURITY_DESCRIPTOR, or something like that?",
"msg_date": "Fri, 19 Aug 2022 15:30:05 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Sun, Aug 14, 2022 at 6:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > I tried to figure out how to get rid of\n> > PGAC_STRUCT_SOCKADDR_STORAGE_MEMBERS, but there we're into genuine\n> > non-standard cross-platform differences.\n>\n> Right. I don't think it's worth sweating over.\n\nI managed to get rid of four of these probes. Some were unused, and\none could be consolidated into another leaving just one probe of this\nilk.\n\n1. src/common/ip.c already made a leap by assuming that if you have\nss_len then you must have sun_len. We might as well change that to be\ndriven by the presence of sa_len instead. That leap is fine: if you\nhave one, you have them all, and sa_len has the advantage of a stable\nname across systems that have it (unlike ss_len, which AIX calls\n__ss_len, requiring more configure gloop).\n\n2. src/backend/libpq/ifaddr.c only needs to know if you have sa_len.\nThis code is only used on AIX, so we could hard-wire it in theory, but\nit's good to keep it general so you can still compile and test it on\nsystems without sa_len (mostly Linux).",
"msg_date": "Fri, 19 Aug 2022 17:54:56 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-18 18:13:38 +1200, Thomas Munro wrote:\n> Here's a slightly better AF_INET6 one. I'm planning to push it\n> tomorrow if there are no objections.\n\nYou didn't yet, I think. Any chance you could? The HAVE_IPV6 stuff is\nwrong/ugly in the meson build, right now, and I'd rather not spend time fixing\nit up ;)\n\n\n> It does something a little more aggressive than the preceding stuff, because\n> SUSv3 says that IPv6 is an \"option\". I don't see that as an issue: it also\n> says that various other ubiquitous stuff we're using is optional. Of\n> course, it would be absurd for a new socket implementation to appear today\n> that can't talk to a decent chunk of the internet, and all we require here\n> is the headers. That optionality was relevant for the transition period a\n> couple of decades ago.\n\n> From f162a15a6d723f8c94d9daa6236149e1f39b0d9a Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <tmunro@postgresql.org>\n> Date: Thu, 18 Aug 2022 11:55:10 +1200\n> Subject: [PATCH] Remove configure probe for sockaddr_in6 and require AF_INET6.\n> \n> SUSv3 <netinet/in.h> defines struct sockaddr_in6, and all targeted Unix\n> systems have it. Windows has it in <ws2ipdef.h>. Remove the configure\n> probe, the macro and a small amount of dead code.\n> \n> Also remove a mention of IPv6-less builds from the documentation, since\n> there aren't any.\n> \n> This is similar to commits f5580882 and 077bf2f2 for Unix sockets.\n> Even though AF_INET6 is an \"optional\" component of SUSv3, there are no\n> known modern operating system without it, and it seems even less likely\n> to be omitted from future systems than AF_UNIX.\n> \n> Discussion: https://postgr.es/m/CA+hUKGKErNfhmvb_H0UprEmp4LPzGN06yR2_0tYikjzB-2ECMw@mail.gmail.com\n\nLooks good to me.\n\n\nI'm idly wondering whether it's worth at some point to introduce a configure\ntest of just compiling a file referencing all the headers and symbols we exist\nto be there...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 25 Aug 2022 12:47:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 7:47 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-08-18 18:13:38 +1200, Thomas Munro wrote:\n> > Here's a slightly better AF_INET6 one. I'm planning to push it\n> > tomorrow if there are no objections.\n>\n> You didn't yet, I think. Any chance you could? The HAVE_IPV6 stuff is\n> wrong/ugly in the meson build, right now, and I'd rather not spend time fixing\n> it up ;)\n\nDone, and thanks for looking.\n\nRemaining things from this thread:\n * removing --disable-thread-safety\n * removing those vestigial HAVE_XXX macros (one by one analysis and patches)\n * making Unix sockets secure for Windows in tests\n\n\n",
"msg_date": "Fri, 26 Aug 2022 10:27:48 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "Here's another bit of baggage handling: fixing up the places that\nwere afraid to use fflush(NULL). We could doubtless have done\nthis years ago (indeed, I found several places already using it)\nbut as long as we're making a push to get rid of obsolete code,\ndoing it now seems appropriate.\n\nOne thing that's not clear to me is what the appropriate rules\nshould be for popen(). POSIX makes clear that you shouldn't\nexpect popen() to include an fflush() itself, but we seem quite\nhaphazard about whether to do one or not before popen(). In\nthe case of popen(..., \"r\") we can expect that the child can't\nwrite on our stdout, but stderr could be a problem anyway.\n\nLikewise, there are some places that fflush before system(),\nbut they are a minority. Again it seems like the main risk\nis duplicated or mis-ordered stderr output.\n\nI'm inclined to add fflush(NULL) before any popen() or system()\nthat hasn't got one already, but did not do that in the attached.\n\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 28 Aug 2022 17:40:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Mon, Aug 29, 2022 at 9:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Here's another bit of baggage handling: fixing up the places that\n> were afraid to use fflush(NULL). We could doubtless have done\n> this years ago (indeed, I found several places already using it)\n> but as long as we're making a push to get rid of obsolete code,\n> doing it now seems appropriate.\n\n+1, must be OK (pg_dump and pg_upgrade).\n\nArcheology:\n\ncommit 79fcde48b229534fd4a5e07834e5e0e84dd38bee\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Sun Nov 29 01:51:56 1998 +0000\n\n Portability fix for old SunOS releases: fflush(NULL)\n\nhttps://www.postgresql.org/message-id/199811241847.NAA04690%40tuna.uimage.com\n\nSunOS 4.x was still in some kind of support phase for a few more\nyears, but I guess they weren't working too hard on conformance and\nfeatures, given that SunOS 5 (the big BSD -> System V rewrite) had\ncome out in '92...\n\n> One thing that's not clear to me is what the appropriate rules\n> should be for popen(). POSIX makes clear that you shouldn't\n> expect popen() to include an fflush() itself, but we seem quite\n> haphazard about whether to do one or not before popen(). In\n> the case of popen(..., \"r\") we can expect that the child can't\n> write on our stdout, but stderr could be a problem anyway.\n>\n> Likewise, there are some places that fflush before system(),\n> but they are a minority. Again it seems like the main risk\n> is duplicated or mis-ordered stderr output.\n>\n> I'm inclined to add fflush(NULL) before any popen() or system()\n> that hasn't got one already, but did not do that in the attached.\n\nCouldn't hurt. (Looking around at our setvbuf() setup to check the\nexpected stream state in various places... and huh, I hadn't\npreviously noticed the thing about Windows interpreting line buffering\nto mean full buffering. Pfnghghl...)\n\n\n",
"msg_date": "Mon, 29 Aug 2022 11:13:07 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Mon, Aug 29, 2022 at 3:13 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Mon, Aug 29, 2022 at 9:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Here's another bit of baggage handling: fixing up the places that\n> > were afraid to use fflush(NULL). We could doubtless have done\n> > this years ago (indeed, I found several places already using it)\n> > but as long as we're making a push to get rid of obsolete code,\n> > doing it now seems appropriate.\n>\n> +1, must be OK (pg_dump and pg_upgrade).\n>\n> Archeology:\n>\n> commit 79fcde48b229534fd4a5e07834e5e0e84dd38bee\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date: Sun Nov 29 01:51:56 1998 +0000\n>\n> Portability fix for old SunOS releases: fflush(NULL)\n>\n>\n> https://www.postgresql.org/message-id/199811241847.NAA04690%40tuna.uimage.com\n>\n> SunOS 4.x was still in some kind of support phase for a few more\n> years, but I guess they weren't working too hard on conformance and\n> features, given that SunOS 5 (the big BSD -> System V rewrite) had\n> come out in '92...\n>\n> > One thing that's not clear to me is what the appropriate rules\n> > should be for popen(). POSIX makes clear that you shouldn't\n> > expect popen() to include an fflush() itself, but we seem quite\n> > haphazard about whether to do one or not before popen(). In\n> > the case of popen(..., \"r\") we can expect that the child can't\n> > write on our stdout, but stderr could be a problem anyway.\n> >\n> > Likewise, there are some places that fflush before system(),\n> > but they are a minority. Again it seems like the main risk\n> > is duplicated or mis-ordered stderr output.\n> >\n> > I'm inclined to add fflush(NULL) before any popen() or system()\n> > that hasn't got one already, but did not do that in the attached.\n>\n> Couldn't hurt. (Looking around at our setvbuf() setup to check the\n> expected stream state in various places... and huh, I hadn't\n> previously noticed the thing about Windows interpreting line buffering\n> to mean full buffering. Pfnghghl...)\n>\n>\n> The patch does not apply successfully; please rebase the patch.\n\npatching file src/backend/postmaster/fork_process.c\nHunk #1 FAILED at 37.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/backend/postmaster/fork_process.c.rej\npatching file src/backend/storage/file/fd.c\nHunk #1 FAILED at 2503.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/backend/storage/file/fd.c.rej\npatching file src/backend/utils/error/elog.c\nHunk #1 FAILED at 643.\nHunk #2 FAILED at 670.\n\n\n\n-- \nIbrar Ahmed",
"msg_date": "Thu, 15 Sep 2022 12:10:43 +0400",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Thu, Sep 15, 2022 at 3:11 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> The patch does not apply successfully; please rebase the patch.\n\nThere's a good reason for that -- the latest one was committed two\nweeks ago. The status should still be waiting on author, though,\nnamely for:\n\nOn Fri, Aug 26, 2022 at 5:28 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Remaining things from this thread:\n> * removing --disable-thread-safety\n> * removing those vestigial HAVE_XXX macros (one by one analysis and patches)\n> * making Unix sockets secure for Windows in tests\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 15 Sep 2022 16:11:48 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> On Thu, Sep 15, 2022 at 3:11 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>> The patch does not apply successfully; please rebase the patch.\n\n> There's a good reason for that -- the latest one was committed two\n> weeks ago. The status should still be waiting on author, though,\n> namely for:\n\n> On Fri, Aug 26, 2022 at 5:28 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Remaining things from this thread:\n>> * removing --disable-thread-safety\n>> * removing those vestigial HAVE_XXX macros (one by one analysis and patches)\n>> * making Unix sockets secure for Windows in tests\n\nI imagine we should just close the current CF entry as committed.\nThere's no patch in existence for any of those TODO items, and\nI didn't think one was imminent.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 15 Sep 2022 09:55:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up historical portability baggage"
},
{
"msg_contents": "On Fri, Sep 16, 2022 at 1:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > On Thu, Sep 15, 2022 at 3:11 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> >> The patch does not apply successfully; please rebase the patch.\n>\n> > There's a good reason for that -- the latest one was committed two\n> > weeks ago. The status should still be waiting on author, though,\n> > namely for:\n>\n> > On Fri, Aug 26, 2022 at 5:28 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >> Remaining things from this thread:\n> >> * removing --disable-thread-safety\n> >> * removing those vestigial HAVE_XXX macros (one by one analysis and patches)\n> >> * making Unix sockets secure for Windows in tests\n>\n> I imagine we should just close the current CF entry as committed.\n> There's no patch in existence for any of those TODO items, and\n> I didn't think one was imminent.\n\nI have patches for these, but not quite ready to post. I'll mark this\nentry closed, and make a new one or two when ready, instead of this\none-gigantic-CF-entry-that-goes-on-forever format.\n\n\n",
"msg_date": "Fri, 16 Sep 2022 08:03:17 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up historical portability baggage"
}
] |
[
{
"msg_contents": "Hi, hackers\n\nI don't know if this is an error.\nwhen I do ./initdb -D ../data and execute pg_waldump like this, In the last part I got an error.\n\n```\n./pg_waldump ../data/pg_wal/000000010000000000000001\n```\n\npg_waldump: error: error in WAL record at 0/1499990: invalid record length at 0/1499A08: wanted 24, got 0\n\nmy environment is `16devel on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, 64-bit`\nIs this normal?\n\n\n\n",
"msg_date": "Sun, 10 Jul 2022 21:51:04 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_waldump got an error with waldump file generated by initdb"
},
{
"msg_contents": "\nHi,\n\nOn 2022-07-10 21:51:04 +0900, Dong Wook Lee wrote:\n> I don't know if this is an error.\n> when I do ./initdb -D ../data and execute pg_waldump like this, In the last part I got an error.\n> \n> ```\n> ./pg_waldump ../data/pg_wal/000000010000000000000001\n> ```\n> \n> pg_waldump: error: error in WAL record at 0/1499990: invalid record length at 0/1499A08: wanted 24, got 0\n> \n> my environment is `16devel on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, 64-bit`\n> Is this normal?\n\nYes, that's likely normal - i.e. pg_waldump has reached the point at which the\nWAL ends.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 10 Jul 2022 14:39:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump got an error with waldump file generated by initdb"
},
{
"msg_contents": "On Sun, Jul 10, 2022 at 02:39:00PM -0700, Andres Freund wrote:\n> On 2022-07-10 21:51:04 +0900, Dong Wook Lee wrote:\n> > I don't know if this is an error.\n> > when I do ./initdb -D ../data and execute pg_waldump like this, In the last part I got an error.\n> > \n> > ```\n> > ./pg_waldump ../data/pg_wal/000000010000000000000001\n> > ```\n> > \n> > pg_waldump: error: error in WAL record at 0/1499990: invalid record length at 0/1499A08: wanted 24, got 0\n> > \n> > my environment is `16devel on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, 64-bit`\n> > Is this normal?\n> \n> Yes, that's likely normal - i.e. pg_waldump has reached the point at which the\n> WAL ends.\n\nIt's the issue that's fixed by this patch.\n\n38/2490 \tMake message at end-of-recovery less scary \tKyotaro Horiguchi\nhttps://commitfest.postgresql.org/38/2490/\n\n\n",
"msg_date": "Sun, 10 Jul 2022 16:57:03 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump got an error with waldump file generated by initdb"
}
] |
[
{
"msg_contents": "I'm new here, so forgive me if this is a bad idea or my lack of knowledge on\nhow to optimize PostgreSQL.\n\nI find PostgreSQL to be great with a large number of small transactions,\nwhich covers most use cases. However, my experience has not been so great\non the opposite end -- a small number of large transactions, i.e. Big Data.\n\nI had to increase work_mem to 3GB to stop my queries from spilling to disk.\nHowever, that's risky because it's 3GB per operation, not per\nquery/connection; it could easily spiral out of control.\n\nI think it would be better if work_mem was allocated from a pool of memory\nas need and returned to the pool when no longer needed. The pool could\noptionally be allocated from huge pages. It would allow large and mixed\nworkloads the flexibility of grabbing more memory as needed without spilling\nto disk while simultaneously being more deterministic about the maximum that\nwill be used.\n\nThoughts?\n\nThank you for your time.\n\nJoseph D. Wagner\n\nMy specifics:\n -64 GB box\n -16 GB shared buffer, although queries only using about 12 GB of that\n -16 GB effective cache\n -2-3 GB used by OS and apps\n -the rest is available for Postgresql queries/connections/whatever as\nneeded\n\n\n\n",
"msg_date": "Sun, 10 Jul 2022 20:45:38 -0700",
"msg_from": "\"Joseph D Wagner\" <joe@josephdwagner.info>",
"msg_from_op": true,
"msg_subject": "proposal: Allocate work_mem From Pool"
},
{
"msg_contents": "On Sun, Jul 10, 2022 at 08:45:38PM -0700, Joseph D Wagner wrote:\n\n> However, that's risky because it's 3GB per operation, not per\n> query/connection; it could easily spiral out of control.\n\nThis is a well-known deficiency.\nI suggest digging up the old threads to look into.\nIt's also useful to include links to the prior discussion.\n\n> I think it would be better if work_mem was allocated from a pool of memory\n\nI think this has been proposed before, and the issue/objection with this idea\nis probably that query plans will be inconsistent, and end up being\nsub-optimal.\n\nwork_mem is considered at planning time, but I think you are only considering\nits application at execution time. A query that was planned with the configured\nwork_mem but can't obtain the expected amount at execution time might perform\npoorly. Maybe it should be replanned with lower work_mem, but that would lose\nthe arms-length relationship between the planner and executor.\n\nShould an expensive query wait a bit to try to get more work_mem?\nWhat do you do if 3 expensive queries are all waiting ?\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 10 Jul 2022 23:39:30 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal: Allocate work_mem From Pool"
},
{
"msg_contents": ">> I think it would be better if work_mem was allocated from a pool\n>> of memory\n\n> I think this has been proposed before, and the issue/objection\n> with this idea is probably that query plans will be inconsistent,\n> and end up being sub-optimal.\n\n> work_mem is considered at planning time, but I think you are only\n> considering its application at execution time. A query that was planned\n> with the configured work_mem but can't obtain the expected\n> amount at execution time might perform poorly. Maybe it\n> should be replanned with lower work_mem, but that would\n> lose the arms-length relationship between the planner and executor.\n\n> Should an expensive query wait a bit to try to get more\n> work_mem? What do you do if 3 expensive queries are all\n> waiting ?\n\nBefore I try to answer that, I need to know how the scheduler works.\n\nLet's say there's a max of 8 worker processes, and 12 queries trying to run.\nWhen does query #9 run? After the first of 1-8 completes, simple FIFO?\nOr something else?\n\nAlso, how long does a query hold a worker process? All the way to\ncompletion? Or does it perform some unit of work and rotate to\nanother query?\n\nJoseph D Wagner\n\nP.S. If there's a link to all this somewhere, please let me know.\nParsing through years of email archives is not always user friendly or\nhelpful.\n\n\n\n",
"msg_date": "Tue, 12 Jul 2022 03:55:39 -0700",
"msg_from": "\"Joseph D Wagner\" <joe@josephdwagner.info>",
"msg_from_op": true,
"msg_subject": "RE: proposal: Allocate work_mem From Pool"
},
{
"msg_contents": "On Tue, Jul 12, 2022 at 5:55 PM Joseph D Wagner <joe@josephdwagner.info>\nwrote:\n> Before I try to answer that, I need to know how the scheduler works.\n\nAs I understand the term used, there is no scheduler inside Postgres for\nuser connections -- they're handled by the OS kernel. That's probably why\nit'd be a difficult project to be smart about memory -- step one might be\nto invent a scheduler. (The autovacuum launcher and checkpointer, etc have\ntheir own logic about when to do things, but while running they too are\njust OS processes scheduled by the kernel.)\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 12 Jul 2022 19:31:56 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal: Allocate work_mem From Pool"
},
{
"msg_contents": "On Tue, Jul 12, 2022 at 03:55:39AM -0700, Joseph D Wagner wrote:\n> Before I try to answer that, I need to know how the scheduler works.\n> \n> Let's say there's a max of 8 worker process, and 12 queries trying to run.\n> When does query #9 run? After the first of 1-8 completes, simple FIFO?\n> Or something else?\n> \n> Also, how long goes a query hold a worker process? All the way to\n> completion? Or does is perform some unit of work and rotate to\n> another query?\n\nI think what you're referring to as a worker process is what postgres refers to\nas a \"client backend\" (and not a \"parallel worker\", even though that sounds\nmore similar to your phrase).\n\n> P.S. If there's a link to all this somewhere, please let me know.\n> Parsing through years of email archives is not always user friendly or\n> helpful.\n\nLooking at historic communication is probably the easy part.\nHere's some to start you out.\nhttps://www.postgresql.org/message-id/flat/4d39869f4bdc42b3a43004e3685ac45d%40index.de\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 12 Jul 2022 09:39:59 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal: Allocate work_mem From Pool"
},
{
"msg_contents": ">> Before I try to answer that, I need to know how the scheduler works.\n\n> As I understand the term used, there is no scheduler inside Postgres\n> for user connections -- they're handled by the OS kernel.\n\nThen, I'm probably using the wrong term. Right now, I have\nmax_worker_processes set to 16. What happens when query #17\nwants some work done? What do you call the thing that handles\nthat? What is its algorithm for allocating work to the processes?\nOr am I completely misunderstanding the role worker processes\nplay in execution?\n\nJoseph Wagner\n\n\n",
"msg_date": "Tue, 12 Jul 2022 20:49:10 -0700",
"msg_from": "Joseph D Wagner <joe@josephdwagner.info>",
"msg_from_op": false,
"msg_subject": "Re: proposal: Allocate work_mem From Pool"
},
{
"msg_contents": "On Tue, Jul 12, 2022 at 08:49:10PM -0700, Joseph D Wagner wrote:\n> > > Before I try to answer that, I need to know how the scheduler works.\n> \n> > As I understand the term used, there is no scheduler inside Postgres\n> > for user connections -- they're handled by the OS kernel.\n> \n> Then, I'm probably using the wrong term. Right now, I have\n> max_worker_processes set to 16. What happens when query #17\n> wants some work done? What do you call the thing that handles\n> that? What is its algorithm for allocating work to the processes?\n> Or am I completely misunderstanding the role worker processes\n> play in execution?\n\nmax_connections limits the number of client connections (queries).\nBackground workers are a relatively new thing - they didn't exist until v9.3.\n\nThere is no scheduler, unless you run a connection pooler between the\napplication and the DB. Which you should probably do, since you've set\nwork_mem measured in GB on a server with 10s of GB of RAM, and while using\npartitioning.\n\nhttps://www.postgresql.org/docs/current/runtime-config-connection.html#GUC-MAX-CONNECTIONS\n|max_connections (integer)\n|\n| Determines the maximum number of concurrent connections to the database server. The default is typically 100 connections, but might be less if your kernel settings will not support it (as determined during initdb). This parameter can only be set at server start.\n\nhttps://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAX-WORKER-PROCESSES\n|max_worker_processes (integer)\n| Sets the maximum number of background processes that the system can support. This parameter can only be set at server start. The default is 8.\n| When running a standby server, you must set this parameter to the same or higher value than on the primary server. Otherwise, queries will not be allowed in the standby server.\n| When changing this value, consider also adjusting max_parallel_workers, max_parallel_maintenance_workers, and max_parallel_workers_per_gather.\n\nhttps://www.postgresql.org/docs/current/connect-estab.html\n|PostgreSQL implements a “process per user” client/server model. In this model, every client process connects to exactly one backend process. As we do not know ahead of time how many connections will be made, we have to use a “supervisor process” that spawns a new backend process every time a connection is requested. This supervisor process is called postmaster and listens at a specified TCP/IP port for incoming connections. Whenever it detects a request for a connection, it spawns a new backend process. Those backend processes communicate with each other and with other processes of the instance using semaphores and shared memory to ensure data integrity throughout concurrent data access.\n|\n|The client process can be any program that understands the PostgreSQL protocol described in Chapter 53. Many clients are based on the C-language library libpq, but several independent implementations of the protocol exist, such as the Java JDBC driver.\n|\n|Once a connection is established, the client process can send a query to the backend process it's connected to. The query is transmitted using plain text, i.e., there is no parsing done in the client. The backend process parses the query, creates an execution plan, executes the plan, and returns the retrieved rows to the client by transmitting them over the established connection.\n\nhttps://www.postgresql.org/docs/current/tutorial-arch.html\n| The PostgreSQL server can handle multiple concurrent connections from clients. To achieve this it starts (“forks”) a new process for each connection. From that point on, the client and the new server process communicate without intervention by the original postgres process. Thus, the supervisor server process is always running, waiting for client connections, whereas client and associated server processes come and go. (All of this is of course invisible to the user. We only mention it here for completeness.)\n\nhttps://wiki.postgresql.org/wiki/FAQ#How_does_PostgreSQL_use_CPU_resources.3F\n|The PostgreSQL server is process-based (not threaded). Each database session connects to a single PostgreSQL operating system (OS) process. Multiple sessions are automatically spread across all available CPUs by the OS. The OS also uses CPUs to handle disk I/O and run other non-database tasks. Client applications can use threads, each of which connects to a separate database process. Since version 9.6, portions of some queries can be run in parallel, in separate OS processes, allowing use of multiple CPU cores. Parallel queries are enabled by default in version 10 (max_parallel_workers_per_gather), with additional parallelism expected in future releases. \n\nBTW, since this is amply documented, I have to point out that it's not on-topic\nfor the development list.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 13 Jul 2022 17:23:42 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal: Allocate work_mem From Pool"
}
] |
[
{
"msg_contents": "Hi, hackers\n\n\nI'm eager to dive into how WAL is written for large objects. In the code path:\n\n\nheap_insert -> heap_prepare_insert -> heap_toast_insert_or_update -> toast_save_datum -> heap_insert\n\n\nI find that heap_insert is called recursively.\n\n\n1. At heaptup = heap_prepare_insert(relation, tup, xid, cid, options), the comment says tup is untoasted data, but it could come from toast_save_datum; is it still untoasted?\n\n\n2. In the toast_save_datum while loop, we know heap_insert is called with every chunk of data, so every chunk is written to WAL by XLogInsert(), right? Although it may make the WAL big. (Is that still the case regarding blob, cblob?)\n\n\n3. When heap_insert is called first and heap_prepare_insert returns, what does heaptup mean for a large object which has been chunked and written by toast_save_datum?\n\n\nAny articles about this aspect would be appreciated.\n\n",
"msg_date": "Mon, 11 Jul 2022 11:58:06 +0800",
"msg_from": "merryok <merryok@163.com>",
"msg_from_op": true,
"msg_subject": "wal write of large object"
}
] |
[
{
"msg_contents": "Hi,\nI'm sending this to pgsql-hackers because Vik Fearing (xocolatl), the reviewer of https://commitfest.postgresql.org/30/2316, also has a repository with a pgsql implementation of said functionality: https://github.com/xocolatl/periods.\n\nI have stumbled upon a probable issue (https://github.com/xocolatl/periods/issues/27); can anyone take a look and confirm whether the current behavior is expected?\n\nThanks!\n\n\n\n",
"msg_date": "Mon, 11 Jul 2022 13:59:31 +0000",
"msg_from": "Jean Carlo Giambastiani Lopes <jean.lopes@hotmail.com.br>",
"msg_from_op": true,
"msg_subject": "Foreign Key constraints on xocolatl/periods"
}
] |
[
{
"msg_contents": "I like the ignore-revs file, but I run into a problem with my setup:\nbecause I keep checkouts of all branches as worktrees, then all branches\nshare the same .git/config file. So if I put the recommended stanza for\n[blame] in it, then 'git blame' complains in branches older than 13,\nsince those don't have the file:\n\n$ git blame configure\nfatal: could not open object name list: .git-blame-ignore-revs\n\nMy first workaround was to add empty .git-blame-ignore-revs in all\ncheckouts. This was moderately ok (shrug), until after a recent `tig`\nupgrade the empty file started to show up in the history as an untracked\nfile.\n\nSo I'm now by the second workaround, which is to move the [blame]\nsection of config to a separate file, and use a [includeIf] sections\nlike this:\n\n[includeIf \"onbranch:master\"]\n\tpath=config.blame.inc\n[includeIf \"onbranch:REL_1{4,5,6,7,8,9}_STABLE\"]\n\tpath=config.blame.inc\n[includeIf \"onbranch:REL_2*_STABLE\"]\n\tpath=config.blame.inc\n\nThis is quite ugly, and it doesn't work at all if I run `git blame` in a\nworktree that I create for development purposes (I don't name those\nafter the upstream PG branch they're based on).\n\nAnybody has any idea how to handle this better?\n\nA viable option would be to backpatch the addition of\n.git-blame-ignore-revs to all live branches. Would that bother anyone?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 11 Jul 2022 18:31:38 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "annoyance with .git-blame-ignore-revs"
},
{
"msg_contents": "> On 11 Jul 2022, at 18:31, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> A viable option would be to backpatch the addition of\n> .git-blame-ignore-revs to all live branches. Would that bother anyone?\n\nI wouldn't mind having it backpatched.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 11 Jul 2022 21:32:22 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: annoyance with .git-blame-ignore-revs"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> A viable option would be to backpatch the addition of\n> .git-blame-ignore-revs to all live branches. Would that bother anyone?\n\nOnly if we had to update all those copies all the time. But\nI'm guessing we wouldn't need a branch's copy to be newer than\nthe last pgindent run affecting that branch?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 11 Jul 2022 15:35:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: annoyance with .git-blame-ignore-revs"
},
{
"msg_contents": "On Mon, Jul 11, 2022 at 12:30 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Anybody has any idea how to handle this better?\n>\n> A viable option would be to backpatch the addition of\n> .git-blame-ignore-revs to all live branches. Would that bother anyone?\n\n+1. I was thinking of suggesting the same thing myself, for the same reasons.\n\nThis solution is a good start, but it does leave one remaining\nproblem: commits from before the introduction of\n.git-blame-ignore-revs still won't have the file. There was actually a\npatch for git that tried to address the problem directly, but it\ndidn't go anywhere. Maybe just backpatching the file is good enough.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 11 Jul 2022 12:37:26 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: annoyance with .git-blame-ignore-revs"
},
{
"msg_contents": "> On 11 Jul 2022, at 21:35, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> A viable option would be to backpatch the addition of\n>> .git-blame-ignore-revs to all live branches. Would that bother anyone?\n> \n> Only if we had to update all those copies all the time. But\n> I'm guessing we wouldn't need a branch's copy to be newer than\n> the last pgindent run affecting that branch?\n\nWe shouldn't need that; if we did, it would indicate we had made cosmetic-only\ncommits in backbranches, which IIUC isn't in line with project policy (or at\nleast is rare to the point of not being a problem).\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 11 Jul 2022 21:38:07 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: annoyance with .git-blame-ignore-revs"
},
{
"msg_contents": "On Mon, Jul 11, 2022 at 12:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Only if we had to update all those copies all the time. But\n> I'm guessing we wouldn't need a branch's copy to be newer than\n> the last pgindent run affecting that branch?\n\nI wouldn't care very much if the file itself was empty in the\nbackbranches, and remained that way -- that would at least suppress\nannoying error messages on those branches (from my text editor's git\nblame feature).\n\nYou might as well have the relevant commits when you backpatch, but\nthat's kind of not the point. At least to me. In any case I don't see\na need to maintain the file on the backbranches.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 11 Jul 2022 12:39:48 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: annoyance with .git-blame-ignore-revs"
},
{
"msg_contents": "On Mon, Jul 11, 2022 at 12:30 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> $ git blame configure\n> fatal: could not open object name list: .git-blame-ignore-revs\n>\n> My first workaround was to add empty .git-blame-ignore-revs in all\n> checkouts. This was moderately ok (shrug), until after a recent `tig`\n> upgrade the empty file started to show up in the history as an untracked\n> file.\n\nPing? Would be nice to get this done soon. I don't think that it\nrequires a great deal of care. If I was doing this myself, I would\nprobably make sure that the backbranch copies of the file won't\nreference commits from later releases. But even that probably doesn't\nmatter; just backpatching the file from HEAD as-is wouldn't break\nanybody's workflow.\n\nAgain, to reiterate: I see no reason to do anything on the\nbackbranches here more than once.\n\nI mentioned already that somebody proposed a patch that fixes the\nproblem at the git level, which seems to have stalled. Here is the\ndiscussion:\n\nhttps://public-inbox.org/git/xmqq5ywehb69.fsf@gitster.g/T/\n\nISTM that we're working around what is actually a usability problem\nwith git (imagine that!). I think that that's fine. Just thought that\nit was worth acknowledging it as such. We're certainly not the first\npeople to run into this exact annoyance.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 4 Aug 2022 17:35:21 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: annoyance with .git-blame-ignore-revs"
},
{
"msg_contents": "\nOn 2022-08-04 Th 20:35, Peter Geoghegan wrote:\n> On Mon, Jul 11, 2022 at 12:30 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> $ git blame configure\n>> fatal: could not open object name list: .git-blame-ignore-revs\n>>\n>> My first workaround was to add empty .git-blame-ignore-revs in all\n>> checkouts. This was moderately ok (shrug), until after a recent `tig`\n>> upgrade the empty file started to show up in the history as an untracked\n>> file.\n> Ping? Would be nice to get this done soon. I don't think that it\n> requires a great deal of care. If I was doing this myself, I would\n> probably make sure that the backbranch copies of the file won't\n> reference commits from later releases. But even that probably doesn't\n> matter; just backpatching the file from HEAD as-is wouldn't break\n> anybody's workflow.\n>\n> Again, to reiterate: I see no reason to do anything on the\n> backbranches here more than once.\n>\n> I mentioned already that somebody proposed a patch that fixes the\n> problem at the git level, which seems to have stalled. Here is the\n> discussion:\n>\n> https://public-inbox.org/git/xmqq5ywehb69.fsf@gitster.g/T/\n>\n> ISTM that we're working around what is actually a usability problem\n> with git (imagine that!). I think that that's fine. Just thought that\n> it was worth acknowledging it as such. We're certainly not the first\n> people to run into this exact annoyance.\n\n\n\nlet's just backpatch the file and be done with it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 5 Aug 2022 10:07:00 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: annoyance with .git-blame-ignore-revs"
},
{
"msg_contents": "On 2022-Aug-05, Andrew Dunstan wrote:\n\n> let's just backpatch the file and be done with it.\n\nI can do that in a couple of hours.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"You don't solve a bad join with SELECT DISTINCT\" #CupsOfFail\nhttps://twitter.com/connor_mc_d/status/1431240081726115845\n\n\n",
"msg_date": "Fri, 5 Aug 2022 16:17:13 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: annoyance with .git-blame-ignore-revs"
},
{
"msg_contents": "On 2022-Aug-05, Alvaro Herrera wrote:\n\n> On 2022-Aug-05, Andrew Dunstan wrote:\n> \n> > let's just backpatch the file and be done with it.\n> \n> I can do that in a couple of hours.\n\nDone.\n\nThanks!\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La rebeldía es la virtud original del hombre\" (Arthur Schopenhauer)\n\n\n",
"msg_date": "Fri, 5 Aug 2022 19:38:38 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: annoyance with .git-blame-ignore-revs"
}
] |
[
{
"msg_contents": "Over on [1], I highlighted that 40af10b57 (Use Generation memory\ncontexts to store tuples in sorts) could cause some performance\nregressions for sorts when the size of the tuple is exactly a power of\n2. The reason for this is that the chunk header for generation.c\ncontexts is 8 bytes larger (on 64-bit machines) than the aset.c chunk\nheader. This means that we can store fewer tuples per work_mem during\nthe sort and that results in more batches being required.\n\nAs I write this, this regression is still marked as an open item for\nPG15 in [2]. So I've been working on this to try to assist the\ndiscussion about if we need to do anything about that for PG15.\n\nOver on [3], I mentioned that Andres came up with an idea and a\nprototype patch to reduce the chunk header size across the board by\nstoring the context type in the 3 least significant bits in a uint64\nheader.\n\nI've taken Andres' patch and made some quite significant changes to\nit. In the patch's current state, the sort performance regression in\nPG15 vs PG14 is fixed. The generation.c context chunk header has been\nreduced to 8 bytes from the previous 24 bytes as it is in master.\naset.c context chunk header is now 8 bytes instead of 16 bytes.\n\nWe can use this 8-byte chunk header by using the remaining 61-bits of\nthe uint64 header to encode 2 30-bit values to store the chunk size\nand also the number of bytes we must subtract from the given pointer\nto find the block that the chunk is stored on. Once we have the\nblock, we can find the owning context by having a pointer back to the\ncontext from the block. For memory allocations that are larger than\nwhat can be stored in 30 bits, the chunk header gets an additional two\nSize fields to store the chunk_size and the block offset. We can tell\nthe difference between the 2 chunk sizes by looking at the spare 1-bit in\nthe uint64 portion of the header.\n\nAside from speeding up the sort case, this also seems to have a good\npositive performance impact on pgbench read-only workload with -M\nsimple. I'm seeing about a 17% performance increase on my AMD\nThreadripper machine.\n\nmaster + Reduced Memory Context Chunk Overhead\ndrowley@amd3990x:~$ pgbench -S -T 60 -j 156 -c 156 -M simple postgres\ntps = 1876841.953096 (without initial connection time)\ntps = 1919605.408254 (without initial connection time)\ntps = 1891734.480141 (without initial connection time)\n\nMaster\ndrowley@amd3990x:~$ pgbench -S -T 60 -j 156 -c 156 -M simple postgres\ntps = 1632248.236962 (without initial connection time)\ntps = 1615663.151604 (without initial connection time)\ntps = 1602004.146259 (without initial connection time)\n\nThe attached .png file shows the same results for PG14 and PG15 as I\nshowed in the blog [4] where I discovered the regression and adds the\nresults from current master + the attached patch. See bars in orange.\nYou can see that the regression at 64MB work_mem is fixed. Adding some\ntracing to the sort shows that we're now doing 671745 tuples per batch\ninstead of 576845 tuples. This reduces the number of batches from 245\ndown to 210.\n\nDrawbacks:\n\nThere is at least one. It might be major; to reduce the AllocSet chunk\nheader from 16 bytes down to 8 bytes I had to get rid of the freelist\npointer that was reusing the \"aset\" field in the chunk header struct.\nThis works now by storing that pointer in the actual palloc'd memory.\nThis could lead to pretty hard-to-trace bugs if we have any code that\naccidentally writes to memory after pfree. The slab.c context already\ndoes this, but that's far less commonly used. If we decided this was\nunacceptable then it does not change anything for the generation.c\ncontext. The chunk header will still be 8 bytes instead of 24 there.\nSo the sort performance regression will still be fixed.\n\nTo improve this situation, we might be able to code it up so that\nMEMORY_CONTEXT_CHECKING builds add an additional freelist pointer to\nthe header and also write it to the palloc'd memory then verify\nthey're set to the same thing when we reuse a chunk from the freelist.\nIf they're not the same then MEMORY_CONTEXT_CHECKING builds could\neither spit out a WARNING or ERROR for this case. That would make it\npretty easy for developers to find their write after pfree bugs. This\nmight actually be better than the Valgrind detection method that we\nhave for this today.\n\nPatch:\n\nI've attached the WIP patch. At this stage, I'm more looking for a\ndesign review. I'm not aware of any bugs, but I am aware that I've not\ntested with Valgrind. I've not paid a great deal of attention to\nupdating the Valgrind macros at all.\n\nI'll add this to the September CF. I'm submitting now due to the fact\nthat we still have an open item in PG15 for the sort regression and\nthe existence of this patch might cause us to decide whether we can\ndefer fixing that to PG16 by way of the method in this patch, or\nrevert 40af10b57.\n\nBenchmark code in [5].\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvqXpLzav6dUeR5vO_RBh_feHrHMLhigVQXw9jHCyKP9PA%40mail.gmail.com\n[2] https://wiki.postgresql.org/wiki/PostgreSQL_15_Open_Items\n[3] https://www.postgresql.org/message-id/CAApHDvowHNSVLhMc0cnovg8PfnYQZxit-gP_bn3xkT4rZX3G0w%40mail.gmail.com\n[4] https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/speeding-up-sort-performance-in-postgres-15/ba-p/3396953\n[5] https://www.postgresql.org/message-id/attachment/134161/sortbench_varcols.sh",
"msg_date": "Tue, 12 Jul 2022 17:01:18 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Good day, David.\n\nOn Tue, 12/07/2022 at 17:01 +1200, David Rowley wrote:\n> Over on [1], I highlighted that 40af10b57 (Use Generation memory\n> contexts to store tuples in sorts) could cause some performance\n> regressions for sorts when the size of the tuple is exactly a power of\n> 2. The reason for this is that the chunk header for generation.c\n> contexts is 8 bytes larger (on 64-bit machines) than the aset.c chunk\n> header. This means that we can store fewer tuples per work_mem during\n> the sort and that results in more batches being required.\n> \n> As I write this, this regression is still marked as an open item for\n> PG15 in [2]. So I've been working on this to try to assist the\n> discussion about if we need to do anything about that for PG15.\n> \n> Over on [3], I mentioned that Andres came up with an idea and a\n> prototype patch to reduce the chunk header size across the board by\n> storing the context type in the 3 least significant bits in a uint64\n> header.\n> \n> I've taken Andres' patch and made some quite significant changes to\n> it. In the patch's current state, the sort performance regression in\n> PG15 vs PG14 is fixed. The generation.c context chunk header has been\n> reduced to 8 bytes from the previous 24 bytes as it is in master.\n> aset.c context chunk header is now 8 bytes instead of 16 bytes.\n> \n> We can use this 8-byte chunk header by using the remaining 61-bits of\n> the uint64 header to encode 2 30-bit values to store the chunk size\n> and also the number of bytes we must subtract from the given pointer\n> to find the block that the chunk is stored on. Once we have the\n> block, we can find the owning context by having a pointer back to the\n> context from the block. For memory allocations that are larger than\n> what can be stored in 30 bits, the chunk header gets an additional two\n> Size fields to store the chunk_size and the block offset. We can tell\n> the difference between the 2 chunk sizes by looking at the spare 1-bit in\n> the uint64 portion of the header.\n\nI don't get why \"large chunk\" needs additional fields for size and\noffset.\nLarge allocation sizes are certainly rounded to page size.\nAnd allocations which don't fit in 1GB we could easily round to 1MB.\nThen we could simply store `size>>20`.\nIt will limit MaxAllocHugeSize to `(1<<(30+20))-1` - 1PB. It's doubtful we\nwill deal with such huge allocations in the near future.\n\nAnd to limit block offset, we just have to limit maxBlockSize to 1GB,\nwhich is quite a reasonable limitation.\nChunks that are larger than maxBlockSize go to separate blocks anyway,\ntherefore they have small block offset.\n\n> Aside from speeding up the sort case, this also seems to have a good\n> positive performance impact on pgbench read-only workload with -M\n> simple. I'm seeing about a 17% performance increase on my AMD\n> Threadripper machine.\n> \n> master + Reduced Memory Context Chunk Overhead\n> drowley@amd3990x:~$ pgbench -S -T 60 -j 156 -c 156 -M simple postgres\n> tps = 1876841.953096 (without initial connection time)\n> tps = 1919605.408254 (without initial connection time)\n> tps = 1891734.480141 (without initial connection time)\n> \n> Master\n> drowley@amd3990x:~$ pgbench -S -T 60 -j 156 -c 156 -M simple postgres\n> tps = 1632248.236962 (without initial connection time)\n> tps = 1615663.151604 (without initial connection time)\n> tps = 1602004.146259 (without initial connection time)\n\nTrick with 3bit context type is great.\n\n> The attached .png file shows the same results for PG14 and PG15 as I\n> showed in the blog [4] where I discovered the regression and adds the\n> results from current master + the attached patch. See bars in orange.\n> You can see that the regression at 64MB work_mem is fixed. Adding some\n> tracing to the sort shows that we're now doing 671745 tuples per batch\n> instead of 576845 tuples. This reduces the number of batches from 245\n> down to 210.\n> \n> Drawbacks:\n> \n> There is at least one. It might be major; to reduce the AllocSet chunk\n> header from 16 bytes down to 8 bytes I had to get rid of the freelist\n> pointer that was reusing the \"aset\" field in the chunk header struct.\n> This works now by storing that pointer in the actual palloc'd memory.\n> This could lead to pretty hard-to-trace bugs if we have any code that\n> accidentally writes to memory after pfree. The slab.c context already\n> does this, but that's far less commonly used. If we decided this was\n> unacceptable then it does not change anything for the generation.c\n> context. The chunk header will still be 8 bytes instead of 24 there.\n> So the sort performance regression will still be fixed.\n\nAt least we can still mark free list pointer as VALGRIND_MAKE_MEM_NOACCESS\nand do VALGRIND_MAKE_MEM_DEFINED on fetching from free list, can we?\n\n> To improve this situation, we might be able to code it up so that\n> MEMORY_CONTEXT_CHECKING builds add an additional freelist pointer to\n> the header and also write it to the palloc'd memory then verify\n> they're set to the same thing when we reuse a chunk from the freelist.\n> If they're not the same then MEMORY_CONTEXT_CHECKING builds could\n> either spit out a WARNING or ERROR for this case. That would make it\n> pretty easy for developers to find their write after pfree bugs. This\n> might actually be better than the Valgrind detection method that we\n> have for this today.\n> \n> Patch:\n> \n> I've attached the WIP patch. At this stage, I'm more looking for a\n> design review. I'm not aware of any bugs, but I am aware that I've not\n> tested with Valgrind. I've not paid a great deal of attention to\n> updating the Valgrind macros at all.\n> \n> I'll add this to the September CF. I'm submitting now due to the fact\n> that we still have an open item in PG15 for the sort regression and\n> the existence of this patch might cause us to decide whether we can\n> defer fixing that to PG16 by way of the method in this patch, or\n> revert 40af10b57.\n> \n> Benchmark code in [5].\n> \n> David\n> \n> [1] https://www.postgresql.org/message-id/CAApHDvqXpLzav6dUeR5vO_RBh_feHrHMLhigVQXw9jHCyKP9PA%40mail.gmail.com\n> [2] https://wiki.postgresql.org/wiki/PostgreSQL_15_Open_Items\n> [3] https://www.postgresql.org/message-id/CAApHDvowHNSVLhMc0cnovg8PfnYQZxit-gP_bn3xkT4rZX3G0w%40mail.gmail.com\n> [4] https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/speeding-up-sort-performance-in-postgres-15/ba-p/3396953\n> [5] https://www.postgresql.org/message-id/attachment/134161/sortbench_varcols.sh\n\n\n\n",
"msg_date": "Tue, 12 Jul 2022 20:22:57 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-12 17:01:18 +1200, David Rowley wrote:\n> I've taken Andres' patch and made some quite significant changes to\n> it. In the patch's current state, the sort performance regression in\n> PG15 vs PG14 is fixed. The generation.c context chunk header has been\n> reduced to 8 bytes from the previous 24 bytes as it is in master.\n> aset.c context chunk header is now 8 bytes instead of 16 bytes.\n\nI think this is *very* cool. But I might be biased :)\n\n\n> There is at least one. It might be major; to reduce the AllocSet chunk\n> header from 16 bytes down to 8 bytes I had to get rid of the freelist\n> pointer that was reusing the \"aset\" field in the chunk header struct.\n> This works now by storing that pointer in the actual palloc'd memory.\n> This could lead to pretty hard-to-trace bugs if we have any code that\n> accidentally writes to memory after pfree.\n\nCan't we use the same trick for allocations in the freelist as we do for the\nheader in a live allocation? I.e. split the 8 byte header into two and use\npart of it to point to the next element in the list using the offset from the\nstart of the block, and part of it to indicate the size?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 12 Jul 2022 10:42:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-12 20:22:57 +0300, Yura Sokolov wrote:\n> I don't get why a \"large chunk\" needs additional fields for size and\n> offset.\n> Large allocation sizes are certainly rounded to page size.\n> And allocations which don't fit in 1GB we could easily round to 1MB.\n> Then we could simply store `size>>20`.\n> It will limit MaxAllocHugeSize to `(1<<(30+20))-1` - 1PB. It's doubtful we\n> will deal with such huge allocations in the near future.\n\nWhat would we gain by doing something like this? The storage density loss of\nstoring an exact size is smaller than what you propose here.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 12 Jul 2022 10:44:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Wed, 13 Jul 2022 at 05:44, Andres Freund <andres@anarazel.de> wrote:\n> On 2022-07-12 20:22:57 +0300, Yura Sokolov wrote:\n> > I don't get why a \"large chunk\" needs additional fields for size and\n> > offset.\n> > Large allocation sizes are certainly rounded to page size.\n> > And allocations which don't fit in 1GB we could easily round to 1MB.\n> > Then we could simply store `size>>20`.\n> > It will limit MaxAllocHugeSize to `(1<<(30+20))-1` - 1PB. It's doubtful we\n> > will deal with such huge allocations in the near future.\n>\n> What would we gain by doing something like this? The storage density loss of\n> storing an exact size is smaller than what you propose here.\n\nI do agree that the 16-byte additional header size overhead for\nallocations >= 1GB is not really worth troubling too much over.\nHowever, if there were some way to make it so we always had an 8-byte\nheader, it would simplify some of the code in places such as\nAllocSetFree(). For example, (ALLOC_BLOCKHDRSZ + hdrsize +\nchunksize) could be simplified at compile time if hdrsize were a known\nconstant.\n\nI did consider that in all cases where the allocations are above\nallocChunkLimit that the chunk is put on a dedicated block and in\nfact, the blockoffset is always the same for those. I wondered if we\ncould use the full 60 bits for the chunksize for those cases. The\nreason I didn't pursue that is because:\n\n#define MaxAllocHugeSize (SIZE_MAX / 2)\n\nThat's 63-bits, so 60 isn't enough.\n\nYeah, we likely could reduce that without upsetting anyone. It feels\nlike it'll be a while before not being able to allocate a chunk of\nmemory more than 1024 petabytes will be an issue, although, I do hope\nto grow old enough to one day come back here and laugh at that.\n\nDavid\n\n\n",
"msg_date": "Wed, 13 Jul 2022 17:20:50 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Wed, 13 Jul 2022 at 05:42, Andres Freund <andres@anarazel.de> wrote:\n> > There is at least one. It might be major; to reduce the AllocSet chunk\n> > header from 16 bytes down to 8 bytes I had to get rid of the freelist\n> > pointer that was reusing the \"aset\" field in the chunk header struct.\n> > This works now by storing that pointer in the actual palloc'd memory.\n> > This could lead to pretty hard-to-trace bugs if we have any code that\n> > accidentally writes to memory after pfree.\n>\n> Can't we use the same trick for allocations in the freelist as we do for the\n> header in a live allocation? I.e. split the 8 byte header into two and use\n> part of it to point to the next element in the list using the offset from the\n> start of the block, and part of it to indicate the size?\n\nThat can't work as the next freelist item might be on some other block.\n\nDavid\n\n\n",
"msg_date": "Wed, 13 Jul 2022 17:24:16 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-12 10:42:07 -0700, Andres Freund wrote:\n> On 2022-07-12 17:01:18 +1200, David Rowley wrote:\n> > There is at least one. It might be major; to reduce the AllocSet chunk\n> > header from 16 bytes down to 8 bytes I had to get rid of the freelist\n> > pointer that was reusing the \"aset\" field in the chunk header struct.\n> > This works now by storing that pointer in the actual palloc'd memory.\n> > This could lead to pretty hard-to-trace bugs if we have any code that\n> > accidentally writes to memory after pfree.\n> \n> Can't we use the same trick for allocations in the freelist as we do for the\n> header in a live allocation? I.e. split the 8 byte header into two and use\n> part of it to point to the next element in the list using the offset from the\n> start of the block, and part of it to indicate the size?\n\nSo that doesn't work because the members in the freelist can be in different\nblocks and those can be further away from each other.\n\n\nPerhaps that could still be made to work somehow: To point to a block we don't\nactually need 64bit pointers, they're always at least of some certain size -\nassuming we can allocate them suitably aligned. And chunks are always 8 byte\naligned. Unfortunately that doesn't quite get us far enough - assuming a 4kB\nminimum block size (larger than now, but potentially sensible as a common OS\npage size), we still only get to 2^12*8 = 32kB.\n\nIt'd easily work if we made each context have an array of allocated non-huge\nblocks, so that the blocks can be addressed with a small index. The overhead\nof that could be reduced in the common case by embedding a small constant\nsized array in the Aset. That might actually be worth trying out.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 12 Jul 2022 22:41:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 12/07/2022 at 22:41 -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2022-07-12 10:42:07 -0700, Andres Freund wrote:\n> > On 2022-07-12 17:01:18 +1200, David Rowley wrote:\n> > > There is at least one. It might be major; to reduce the AllocSet chunk\n> > > header from 16 bytes down to 8 bytes I had to get rid of the freelist\n> > > pointer that was reusing the \"aset\" field in the chunk header struct.\n> > > This works now by storing that pointer in the actual palloc'd memory.\n> > > This could lead to pretty hard-to-trace bugs if we have any code that\n> > > accidentally writes to memory after pfree.\n> > \n> > Can't we use the same trick for allocations in the freelist as we do for the\n> > header in a live allocation? I.e. split the 8 byte header into two and use\n> > part of it to point to the next element in the list using the offset from the\n> > start of the block, and part of it to indicate the size?\n> \n> So that doesn't work because the members in the freelist can be in different\n> blocks and those can be further away from each other.\n> \n> \n> Perhaps that could still be made to work somehow: To point to a block we don't\n> actually need 64bit pointers, they're always at least of some certain size -\n> assuming we can allocate them suitably aligned. And chunks are always 8 byte\n> aligned. Unfortunately that doesn't quite get us far enough - assuming a 4kB\n> minimum block size (larger than now, but potentially sensible as a common OS\n> page size), we still only get to 2^12*8 = 32kB.\n\nWell, we actually have freelists for 11 size classes.\nIt is just 11 pointers.\nWe could embed these 88 bytes in every ASet block and then link blocks.\nAnd then in every block have 44 bytes for in-block free lists.\nTotal overhead is 132 bytes per block.\nOr 110 if we limit block size to 65k*8b=512kb.\n\nWith double-linked block lists (176 bytes per block + 44 bytes for in-block lists\n= 220 bytes), we could track block fullness and deallocate it if it doesn't\ncontain any live allocation. Therefore \"generational\" and \"slab\" allocators\nwill be less useful.\n\nBut CPU overhead will be noticeable.\n\n> It'd easily work if we made each context have an array of allocated non-huge\n> blocks, so that the blocks can be addressed with a small index. The overhead\n> of that could be reduced in the common case by embedding a small constant\n> sized array in the Aset. That might actually be worth trying out.\n> \n> Greetings,\n> \n> Andres Freund\n> \n> \n\n\n\n",
"msg_date": "Sun, 17 Jul 2022 19:10:12 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Wed, 13 Jul 2022 at 17:20, David Rowley <dgrowleyml@gmail.com> wrote:\n> I did consider that in all cases where the allocations are above\n> allocChunkLimit that the chunk is put on a dedicated block and in\n> fact, the blockoffset is always the same for those. I wondered if we\n> could use the full 60 bits for the chunksize for those cases. The\n> reason I didn't pursue that is because:\n\nAs it turns out, we don't really need to explicitly store the chunk\nsize for chunks which are on dedicated blocks. We can just calculate\nthis by subtracting the pointer to the memory from the block's endptr.\nThe block offset is always fixed too, like I mentioned above.\n\nI've now revised the patch to completely get rid of the concept of\n\"large\" chunks and instead memory chunks are always 8 bytes in size.\nI've created a struct to this effect and named it MemoryChunk. All\nmemory contexts make use of this new struct. The header bit which I\nwas previously using to denote a \"large\" chunk now marks if the chunk\nis \"external\", meaning that the MemoryChunk does not have knowledge of\nthe chunk size and/or block offset. The MemoryContext itself is\nexpected to manage this when the chunk is external. I've coded up\naset.c and generation.c to always use these external chunks when size\n> set->allocChunkLimit. There is now a coding pattern like the\nfollowing (taken from AllocSetRealloc):\n\nif (MemoryChunkIsExternal(chunk))\n{\n block = ExternalChunkGetBlock(chunk);\n oldsize = block->endptr - (char *) pointer;\n}\nelse\n{\n block = MemoryChunkGetBlock(chunk);\n oldsize = MemoryChunkGetSize(chunk);\n}\n\nHere the ExternalChunkGetBlock() macro is just subtracting the\nALLOC_BLOCKHDRSZ from the chunk pointer to obtain the block pointer,\nwhereas MemoryChunkGetBlock() is subtracting the blockoffset as is\nstored in the 30-bits of the chunk header.\n\nAndres and I had a conversation off-list about the storage of freelist\npointers. Currently I have these stored\nin the actual allocated\nmemory. The minimum allocation size is 8 bytes, which is big enough\nfor storing sizeof(void *). Andres suggested that it might be safer to\nstore this link at the end of the chunk rather than at the start.\nI've not quite done that in the attached, but doing this should just\nbe a matter of adjusting the GetFreeListLink() macro to add the\nchunksize - sizeof(AllocFreelistLink).\n\nI did a little bit of benchmarking of the attached with a scale 1 pgbench.\n\nmaster = 0b039e3a8\n\nmaster\ndrowley@amd3990x:~$ pgbench -c 156 -j 156 -T 60 -S postgres\ntps = 1638436.367793 (without initial connection time)\ntps = 1652688.009579 (without initial connection time)\ntps = 1671281.281856 (without initial connection time)\n\nmaster + v2 patch\ndrowley@amd3990x:~$ pgbench -c 156 -j 156 -T 60 -S postgres\ntps = 1825346.734486 (without initial connection time)\ntps = 1824539.294142 (without initial connection time)\ntps = 1807359.758822 (without initial connection time)\n\n~10% faster.\n\nThere are a couple of things to note that might require discussion:\n\n1. I've added a new restriction that block sizes cannot be above 1GB.\nThis is because the 30-bits in the MemoryChunk used for storing the\noffset between the chunk and the block wouldn't be able to store the\noffset if the chunk was offset more than 1GB from the block. I used\ndebian code search to see if I could find any user code that used\nblocks this big. I found nothing.\n2. slab.c has a new restriction that the chunk size cannot be >= 1GB.\nI'm not at all concerned about this. I think if someone needed chunks\nthat big there'd be no benefits from slab context over an aset\ncontext.\n3. As mentioned above, aset.c now stores freelist pointers in the\nallocated chunk's memory. This allows us to get the header down to 8\nbytes instead of today's 16 bytes. There's an increased danger that\nbuggy code that writes to memory after a pfree could stomp on this.\n\nI think the patch is now starting to take shape. I've added it to the\nSeptember commitfest [1].\n\nDavid\n\n[1] https://commitfest.postgresql.org/39/3810/",
"msg_date": "Wed, 10 Aug 2022 00:53:04 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 8:53 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> I think the patch is now starting to take shape. I've added it to the\n> September commitfest [1].\n\nThis is extremely cool. The memory savings are really nice. And I also\nlike this:\n\n# Additionally, this commit completely changes the rule that pointers must\n# be directly prefixed by the owning memory context and instead, we now\n# insist that they're directly prefixed by an 8-byte value where the least\n# significant 3-bits are set to a value to indicate which type of memory\n# context the pointer belongs to. Using those 3 bits as an index to a new\n# array which stores the methods for each memory context type, we're now\n# able to pass the pointer given to functions such as pfree and repalloc to\n# the function specific to that context implementation to allow them to\n# devise their own methods of finding the memory context which owns the\n# given allocated chunk of memory.\n\nThat seems like a good system.\n\nThis part of the commit message might need to be clarified:\n\n# We also add a restriction that block sizes for all 3 of the memory\n# allocators cannot be 1GB or larger. We would be unable to store the\n# number of bytes that the block is offset from the chunk stored beyond this\n#1GB boundary on any block that was larger than 1GB.\n\nEarlier in the commit message, you say that allocations of 1GB or more\nare stored in dedicated blocks. But here you say that blocks can't be\nmore than 1GB. Those statements seem to contradict each other. I guess\nyou mean block sizes for blocks that contain chunks, or something like\nthat?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Aug 2022 10:36:57 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-09 10:36:57 -0400, Robert Haas wrote:\n> On Tue, Aug 9, 2022 at 8:53 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > I think the patch is now starting to take shape. I've added it to the\n> > September commitfest [1].\n> \n> This is extremely cool. The memory savings are really nice.\n\n+1\n\n\n> And I also like this:\n> \n> # Additionally, this commit completely changes the rule that pointers must\n> # be directly prefixed by the owning memory context and instead, we now\n> # insist that they're directly prefixed by an 8-byte value where the least\n> # significant 3-bits are set to a value to indicate which type of memory\n> # context the pointer belongs to. Using those 3 bits as an index to a new\n> # array which stores the methods for each memory context type, we're now\n> # able to pass the pointer given to functions such as pfree and repalloc to\n> # the function specific to that context implementation to allow them to\n> # devise their own methods of finding the memory context which owns the\n> # given allocated chunk of memory.\n> \n> That seems like a good system.\n\nI'm obviously biased, but I agree.\n\nI think it's fine, given that we can change this at any time, but it's\nprobably worth to explicitly agree that this will for now restrict us to 8\ncontext methods?\n\n\n> This part of the commit message might need to be clarified:\n> \n> # We also add a restriction that block sizes for all 3 of the memory\n> # allocators cannot be 1GB or larger. We would be unable to store the\n> # number of bytes that the block is offset from the chunk stored beyond this\n> #1GB boundary on any block that was larger than 1GB.\n> \n> Earlier in the commit message, you say that allocations of 1GB or more\n> are stored in dedicated blocks. But here you say that blocks can't be\n> more than 1GB. Those statements seem to contradict each other. I guess\n> you mean block sizes for blocks that contain chunks, or something like\n> that?\n\nI would guess so as well.\n\n\n> diff --git a/src/include/utils/memutils_internal.h b/src/include/utils/memutils_internal.h\n> new file mode 100644\n> index 0000000000..2dcfdd7ec3\n> --- /dev/null\n> +++ b/src/include/utils/memutils_internal.h\n> @@ -0,0 +1,117 @@\n> +/*-------------------------------------------------------------------------\n> + *\n> + * memutils_internal.h\n> + *\t This file contains declarations for memory allocation utility\n> + *\t functions for internal use.\n> + *\n> + *\n> + * Portions Copyright (c) 2022, PostgreSQL Global Development Group\n> + * Portions Copyright (c) 1994, Regents of the University of California\n> + *\n> + * src/include/utils/memutils_internal.h\n> + *\n> + *-------------------------------------------------------------------------\n> + */\n> +\n> +#ifndef MEMUTILS_INTERNAL_H\n> +#define MEMUTILS_INTERNAL_H\n> +\n> +#include \"utils/memutils.h\"\n> +\n> +extern void *AllocSetAlloc(MemoryContext context, Size size);\n> +extern void AllocSetFree(void *pointer);\n> [much more]\n\nI really wish I knew of a technique to avoid this kind of thing, allowing us to\nfill a constant array from different translation units... On the linker level\nthat should be trivial, but I don't think there's a C way to reach that.\n\n\n> +/*\n> + * MemoryContextMethodID\n> + *\t\tA unique identifier for each MemoryContext implementation which\n> + *\t\tindicates the index into the mcxt_methods[] array. See mcxt.c.\n> + */\n> +typedef enum MemoryContextMethodID\n> +{\n> +\tMCTX_ASET_ID = 0,\n\nIs there a reason to reserve 0 here? Practically speaking the 8-byte header\nwill always contain not just zeroes, but I don't think the patch currently\nenforces that. It's probably not worth wasting a \"valuable\" entry here...\n\n\n> diff --git a/src/include/utils/memutils_memorychunk.h b/src/include/utils/memutils_memorychunk.h\n> new file mode 100644\n> index 0000000000..6239cf9008\n> --- /dev/null\n> +++ b/src/include/utils/memutils_memorychunk.h\n> @@ -0,0 +1,185 @@\n> +/*-------------------------------------------------------------------------\n> + *\n> + * memutils_memorychunk.h\n> + *\t Here we define a struct named MemoryChunk which implementations of\n> + *\t MemoryContexts may use as a header for chunks of memory they allocate.\n> + *\n> + * A MemoryChunk provides a lightweight header which a MemoryContext can use\n> + * to store the size of an allocation and a reference back to the block which\n> + * the given chunk is allocated on.\n> + *\n> + * Although MemoryChunks are used by each of our MemoryContexts, other\n> + * implementations may choose to implement their own method for storing chunk\n> + * headers. The only requirement is that the header end with an 8-byte value\n> + * which the least significant 3-bits of are set to the MemoryContextMethodID\n> + * of the given context.\n\nWell, there can't be other implementations other than ours. So maybe phrase it\nas \"future implementations\"?\n\n\n> + * By default, a MemoryChunk is 8 bytes in size, however when\n> + * MEMORY_CONTEXT_CHECKING is defined the header becomes 16 bytes in size due\n> + * to the additional requested_size field. The MemoryContext may use this\n> + * field for whatever they wish, but it is intended to be used for additional\n> + * checks which are only done in MEMORY_CONTEXT_CHECKING builds.\n> + *\n> + * The MemoryChunk contains a uint64 field named 'hdrmask'. This field is\n> + * used to encode 4 separate pieces of information. Starting with the least\n> + * significant bits of 'hdrmask', the bits of this field as used as follows:\n> + *\n> + * 1.\t3-bits to indicate the MemoryContextMethodID\n> + * 2.\t1-bit to indicate if the chunk is externally managed (see below)\n> + * 3.\t30-bits for the amount of memory which was reserved for the chunk\n> + * 4.\t30-bits for the number of bytes that must be subtracted from the chunk\n> + *\t\tto obtain the address of the block that the chunk is stored on.\n> + *\n> + * Because we're limited to a block offset and chunk size of 1GB (30-bits),\n> + * any allocation which exceeds this amount must call MemoryChunkSetExternal()\n> + * and the MemoryContext must devise its own method for storing the offset for\n> + * the block and size of the chunk.\n\nHm. So really only the first four bits have exactly that layout, correct?\nPerhaps that could be clarified somehow?\n\n\n\n> + /*\n> + * The maximum size for a memory chunk before it must be externally managed.\n> + */\n> +#define MEMORYCHUNK_MAX_SIZE 0x3FFFFFFF\n> +\n> + /*\n> + * The value to AND onto the hdrmask to determine if it's an externally\n> + * managed memory chunk.\n> + */\n> +#define MEMORYCHUNK_EXTERNAL_BIT (1 << 3)\n\nWe should probably have a define for the three bits reserved for the context\nid, likely in _internal.h\n\n\n> @@ -109,6 +112,25 @@ typedef struct AllocChunkData *AllocChunk;\n> */\n> typedef void *AllocPointer;\n> \n> +/*\n> + * AllocFreelistLink\n> + *\t\tWhen pfreeing memory, if we maintain a freelist for the given chunk's\n> + *\t\tsize then we use a AllocFreelistLink to point to the current item in\n> + *\t\tthe AllocSetContext's freelist and then set the given freelist element\n> + *\t\tto point to the chunk being freed.\n> + */\n> +typedef struct AllocFreelistLink\n> +{\n> +\tMemoryChunk *next;\n> +}\t\t\tAllocFreelistLink;\n\nI know we have no agreement on that, but I personally would add\nAllocFreelistLink to typedefs.list and then re-pgindent ;)\n\n\n> /*\n> * AllocSetGetChunkSpace\n> *\t\tGiven a currently-allocated chunk, determine the total space\n> *\t\tit occupies (including all memory-allocation overhead).\n> */\n> -static Size\n> -AllocSetGetChunkSpace(MemoryContext context, void *pointer)\n> +Size\n> +AllocSetGetChunkSpace(void *pointer)\n> {\n> -\tAllocChunk\tchunk = AllocPointerGetChunk(pointer);\n> -\tSize\t\tresult;\n> +\tMemoryChunk *chunk = PointerGetMemoryChunk(pointer);\n> \n> -\tVALGRIND_MAKE_MEM_DEFINED(chunk, ALLOCCHUNK_PRIVATE_LEN);\n> -\tresult = chunk->size + ALLOC_CHUNKHDRSZ;\n> -\tVALGRIND_MAKE_MEM_NOACCESS(chunk, ALLOCCHUNK_PRIVATE_LEN);\n> -\treturn result;\n> +\tif (MemoryChunkIsExternal(chunk))\n> +\t{\n\nHm. We don't mark the chunk header as noaccess anymore? If so, why? I realize\nit'd be a bit annoying because there are plenty of places that look at it, but I\nthink it's also a good way to catch errors.\n\n\n> +static const MemoryContextMethods mcxt_methods[] = {\n> +\t[MCTX_ASET_ID] = {\n> +\t\tAllocSetAlloc,\n> +\t\tAllocSetFree,\n> +\t\tAllocSetRealloc,\n> +\t\tAllocSetReset,\n> +\t\tAllocSetDelete,\n> +\t\tAllocSetGetChunkContext,\n> +\t\tAllocSetGetChunkSpace,\n> +\t\tAllocSetIsEmpty,\n> +\t\tAllocSetStats\n> +#ifdef MEMORY_CONTEXT_CHECKING\n> +\t\t,AllocSetCheck\n> +#endif\n> +\t},\n> +\n> +\t[MCTX_GENERATION_ID] = {\n> +\t\tGenerationAlloc,\n> +\t\tGenerationFree,\n> +\t\tGenerationRealloc,\n> +\t\tGenerationReset,\n> +\t\tGenerationDelete,\n> +\t\tGenerationGetChunkContext,\n> +\t\tGenerationGetChunkSpace,\n> +\t\tGenerationIsEmpty,\n> +\t\tGenerationStats\n> +#ifdef MEMORY_CONTEXT_CHECKING\n> +\t\t,GenerationCheck\n> +#endif\n> +\t},\n> +\n> +\t[MCTX_SLAB_ID] = {\n> +\t\tSlabAlloc,\n> +\t\tSlabFree,\n> +\t\tSlabRealloc,\n> +\t\tSlabReset,\n> +\t\tSlabDelete,\n> +\t\tSlabGetChunkContext,\n> +\t\tSlabGetChunkSpace,\n> +\t\tSlabIsEmpty,\n> +\t\tSlabStats\n> +#ifdef MEMORY_CONTEXT_CHECKING\n> +\t\t,SlabCheck\n> +#endif\n> +\t},\n> +};\n\nMildly wondering whether we ought to use designated initializers instead,\ngiven we're whacking it around already. Too easy to get the order wrong when\nadding new members, and we might want to have optional callbacks too.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 9 Aug 2022 11:44:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I think it's fine, given that we can change this at any time, but it's\n> probably worth to explicitly agree that this will for now restrict us to 8\n> context methods?\n\nDo we really need it to be that tight? I know we only have 3 methods today,\nbut 8 doesn't seem that far away. If there were six bits reserved for\nthis I'd be happier.\n\n>> # We also add a restriction that block sizes for all 3 of the memory\n>> # allocators cannot be 1GB or larger. We would be unable to store the\n>> # number of bytes that the block is offset from the chunk stored beyond this\n>> #1GB boundary on any block that was larger than 1GB.\n\nLosing MemoryContextAllocHuge would be very bad, so I assume this comment\nis not telling the full truth.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Aug 2022 15:21:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On 2022-Aug-09, Andres Freund wrote:\n\n> Mildly wondering whether we ought to use designated initializers instead,\n> given we're whacking it around already. Too easy to get the order wrong when\n> adding new members, and we might want to have optional callbacks too.\n\nStrong +1. It makes code much easier to navigate (see XmlTableRoutine\nand compare with heapam_methods, for example).\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nSubversion to GIT: the shortest path to happiness I've ever heard of\n (Alexey Klyukin)\n\n\n",
"msg_date": "Tue, 9 Aug 2022 21:36:16 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-09 15:21:57 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I think it's fine, given that we can change this at any time, but it's\n> > probably worth to explicitly agree that this will for now restrict us to 8\n> > context methods?\n> \n> Do we really need it to be that tight? I know we only have 3 methods today,\n> but 8 doesn't seem that far away. If there were six bits reserved for\n> this I'd be happier.\n\nWe only have so many bits available, so that'd have to come from some other\nresource. The current division is:\n\n+ * 1.\t3-bits to indicate the MemoryContextMethodID\n+ * 2.\t1-bit to indicate if the chunk is externally managed (see below)\n+ * 3.\t30-bits for the amount of memory which was reserved for the chunk\n+ * 4.\t30-bits for the number of bytes that must be subtracted from the chunk\n+ *\t\tto obtain the address of the block that the chunk is stored on.\n\nI suspect we could reduce 3) here a bit, which I think would end up with slab\ncontext's max chunkSize shrinking further. Which should still be fine.\n\nBut also, we could defer that to later; this is a limit that we can easily\nchange.\n\n\n> >> # We also add a restriction that block sizes for all 3 of the memory\n> >> # allocators cannot be 1GB or larger. We would be unable to store the\n> >> # number of bytes that the block is offset from the chunk stored beyond this\n> >> #1GB boundary on any block that was larger than 1GB.\n> \n> Losing MemoryContextAllocHuge would be very bad, so I assume this comment\n> is not telling the full truth.\n\nIt's just talking about chunked allocation (except for slab, which doesn't\nhave anything else, but as David pointed out, it makes no sense to use slab\nfor such large allocations). I.e. it's the max you can pass to\nAllocSetContextCreate()'s and GenerationContextCreate()'s maxBlockSize, and to\nSlabContextCreate()'s chunkSize. I don't think we have any code that\ncurrently sets a bigger limit than 8MB.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 9 Aug 2022 14:02:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-08-09 15:21:57 -0400, Tom Lane wrote:\n>> Do we really need it to be that tight? I know we only have 3 methods today,\n>> but 8 doesn't seem that far away. If there were six bits reserved for\n>> this I'd be happier.\n\n> We only have so many bits available, so that'd have to come from some other\n> resource. The current division is:\n\n> + * 1.\t3-bits to indicate the MemoryContextMethodID\n> + * 2.\t1-bit to indicate if the chunk is externally managed (see below)\n> + * 3.\t30-bits for the amount of memory which was reserved for the chunk\n> + * 4.\t30-bits for the number of bytes that must be subtracted from the chunk\n> + *\t\tto obtain the address of the block that the chunk is stored on.\n\n> I suspect we could reduce 3) here a bit, which I think would end up with slab\n> context's max chunkSize shrinking further. Which should still be fine.\n\nHmm, I suppose you mean we could reduce 4) if we needed to. Yeah, that\nseems like a reasonable place to buy more bits later if we run out of\nMemoryContextMethodIDs. Should be fine then.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Aug 2022 17:23:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Wed, 10 Aug 2022 at 09:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-08-09 15:21:57 -0400, Tom Lane wrote:\n> >> Do we really need it to be that tight? I know we only have 3 methods today,\n> >> but 8 doesn't seem that far away. If there were six bits reserved for\n> >> this I'd be happier.\n>\n> > We only have so many bits available, so that'd have to come from some other\n> > resource. The current division is:\n>\n> > + * 1. 3-bits to indicate the MemoryContextMethodID\n> > + * 2. 1-bit to indicate if the chunk is externally managed (see below)\n> > + * 3. 30-bits for the amount of memory which was reserved for the chunk\n> > + * 4. 30-bits for the number of bytes that must be subtracted from the chunk\n> > + * to obtain the address of the block that the chunk is stored on.\n>\n> > I suspect we could reduce 3) here a bit, which I think would end up with slab\n> > context's max chunkSize shrinking further. Which should still be fine.\n>\n> Hmm, I suppose you mean we could reduce 4) if we needed to. Yeah, that\n> seems like a reasonable place to buy more bits later if we run out of\n> MemoryContextMethodIDs. Should be fine then.\n\nI think he means 3). If 4) was reduced then that would further reduce\nthe maxBlockSize we could pass when creating a context. At least for\naset.c and generation.c, we don't really need 3) to be 30-bits wide as\nthe set->allocChunkLimit is almost certainly much smaller than that.\nAllocations bigger than allocChunkLimit use a dedicated block with an\nexternal chunk. External chunks don't use 3) or 4).\n\nDavid\n\n\n",
"msg_date": "Wed, 10 Aug 2022 09:28:52 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Wed, 10 Aug 2022 at 09:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hmm, I suppose you mean we could reduce 4) if we needed to. Yeah, that\n>> seems like a reasonable place to buy more bits later if we run out of\n>> MemoryContextMethodIDs. Should be fine then.\n\n> I think he means 3). If 4) was reduced then that would further reduce\n> the maxBlockSize we could pass when creating a context. At least for\n> aset.c and generation.c, we don't really need 3) to be 30-bits wide as\n> the set->allocChunkLimit is almost certainly much smaller than that.\n\nOh, I see: we'd just be further constraining the size of chunk that\nhas to be pushed out as an \"external\" chunk. Got it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Aug 2022 17:44:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Thanks for giving this a look.\n\nOn Wed, 10 Aug 2022 at 02:37, Robert Haas <robertmhaas@gmail.com> wrote:\n> # We also add a restriction that block sizes for all 3 of the memory\n> # allocators cannot be 1GB or larger. We would be unable to store the\n> # number of bytes that the block is offset from the chunk stored beyond this\n> #1GB boundary on any block that was larger than 1GB.\n>\n> Earlier in the commit message, you say that allocations of 1GB or more\n> are stored in dedicated blocks. But here you say that blocks can't be\n> more than 1GB. Those statements seem to contradict each other. I guess\n> you mean block sizes for blocks that contain chunks, or something like\n> that?\n\nI'll update that so it's more clear.\n\nBut, just to clarify here first, the 1GB restriction is just in\nregards to the maxBlockSize parameter when creating a context.\nAnything over set->allocChunkLimit goes on a dedicated block and there\nis no 1GB size restriction on those dedicated blocks.\n\nDavid\n\n\n",
"msg_date": "Wed, 10 Aug 2022 11:17:38 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Wed, 10 Aug 2022 at 06:44, Andres Freund <andres@anarazel.de> wrote:\n> I think it's fine, given that we can change this at any time, but it's\n> probably worth to explicitly agree that this will for now restrict us to 8\n> context methods?\n\nI know there was some discussion about this elsewhere in this thread\nabout 8 possibly not being enough, but that seems to have concluded\nwith we'll just make more space if we ever need to.\n\nTo make that easier, I've adapted the code in memutils_memorychunk.h\nto separate out the max values for the block offset from the max value\nfor the chunk size. The chunk size is the one we'd likely want to\nlower if we ever needed more bits. I think this also helps document\nthe maxBlockSize limitations in aset.c and generation.c.\n\n> > +/*\n> > + * MemoryContextMethodID\n> > + * A unique identifier for each MemoryContext implementation which\n> > + * indicates the index into the mcxt_methods[] array. See mcxt.c.\n> > + */\n> > +typedef enum MemoryContextMethodID\n> > +{\n> > + MCTX_ASET_ID = 0,\n>\n> Is there a reason to reserve 0 here? Practically speaking the 8-byte header\n> will always contain not just zeroes, but I don't think the patch currently\n> enforces that. It's probably not worth wasting a \"valuable\" entry here...\n\nThe code was just being explicit about that being set to 0. I'm not\nreally sure I see this as reserving 0. I've removed the = 0 anyway\nsince it wasn't really doing anything useful.\n\n> > + * Although MemoryChunks are used by each of our MemoryContexts, other\n> > + * implementations may choose to implement their own method for storing chunk\n\n> Well, there can't be other implementations other than ours. So maybe phrase it\n> as \"future implementations\"?\n\nYeah, that seems better.\n\n> > + * 1. 3-bits to indicate the MemoryContextMethodID\n> > + * 2. 1-bit to indicate if the chunk is externally managed (see below)\n> > + * 3. 30-bits for the amount of memory which was reserved for the chunk\n> > + * 4. 30-bits for the number of bytes that must be subtracted from the chunk\n> > + * to obtain the address of the block that the chunk is stored on.\n\n> Hm. So really only the first four bits have eactly that layout, correct?\n> Perhaps that could be clarified somehow?\n\nI've clarified that #3 and #4 are unused in external chunks.\n\n> > +#define MEMORYCHUNK_EXTERNAL_BIT (1 << 3)\n>\n> We should probably have a define for the three bits reserved for the context\n> id, likely in _internal.h\n\nI've added both MEMORY_CONTEXT_METHODID_BITS and\nMEMORY_CONTEXT_METHODID_MASK and tidied up the defines in\nmemutils_memorychunk.h so that they'll follow on from whatever\nMEMORY_CONTEXT_METHODID_BITS is set to.\n\n> > +typedef struct AllocFreelistLink\n> > +{\n> > + MemoryChunk *next;\n> > +} AllocFreelistLink;\n>\n> I know we have no agreement on that, but I personally would add\n> AllocFreelistLink to typedefs.list and then re-pgindent ;)\n\nI tend to leave that up to the build farm to generate. I really wasn't\nsure which should sort first out of the following:\n\nMemoryContextMethods\nMemoryContextMethodID\n\nThe correct answer depends on if the sort is case-sensitive or not. I\nimagine it is since it is in C, but don't really know if the buildfarm\nwill generate the same.\n\nI've added them in the above order now.\n\n> Hm. We don't mark the chunk header as noaccess anymore? If so, why? I realize\n> it'd be a bit annoying because there's plenty places that look at it, but I\n> think it's also a good way to catch errors.\n\nI don't think I've really changed anything here. If I understand\ncorrectly the pointer to the MemoryContext was not marked as NOACCESS\nbefore. I guessed that's because it's accessed outside of aset.c. I've\nkept that due to how the 3 lower bits are still accessed outside of\naset.c. It's just that we're stuffing more information into that\n8-byte variable now.\n\n> > +static const MemoryContextMethods mcxt_methods[] = {\n...\n> Mildly wondering whether we ought to use designated initializers instead,\n> given we're whacking it around already. Too easy to get the order wrong when\n> adding new members, and we might want to have optional callbacks too.\n\nI've adjusted how this array is initialized now.\n\nI've attached version 3 of the patch.\n\nThanks for having a look at this.\n\nDavid",
"msg_date": "Wed, 10 Aug 2022 18:33:49 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "I've spent a bit more time hacking on this patch.\n\nChanges:\n\n1. Changed GetFreeListLink() so that it stores the AllocFreelistLink\nat the end of the chunk rather than at the start.\n2. Made it so MemoryChunk stores a magic number in the spare 60 bits\nof the hdrmask when the chunk is \"external\". This is always set but\nonly verified in assert builds.\n3. In aset.c, I'm no longer storing the chunk_size in the hdrmask. I'm\nnow instead storing the freelist index. I'll explain this below.\n4. Various other cleanups.\n\nFor #3, I was doing some benchmarking of the patch with a function I\nwrote to heavily exercise palloc() and pfree(). When this function is\ncalled to only allocate a small amount of memory at once, I saw a\nsmall regression in the palloc() / pfree() performance for aset.c. On\nlooking at profiles, I saw that the code in AllocSetFreeIndex() was\nstanding out in AllocSetFree(). That function uses the __builtin_clz()\nintrinsic function which I see on x86-64 uses the \"bsr\" instruction.\nGoing by page 104 of [1], it tells me the latency of that instruction\nis 4 for my Zen 2 CPU. I'm not yet sure why the v3 patch appeared\nslower than master for this workload.\n\nTo make AllocSetFree() faster, I've now changed things so that instead\nof storing the chunk size in the hdrmask of the MemoryChunk, I'm now\njust storing the freelist index. The chunk size is always a power of\n2 for non-external chunks. It's very cheap to obtain the chunk size\nfrom the freelist index when we need to. That's just a \"sal\" or \"shl\"\ninstruction, effectively 8 << freelist_idx, both of which have a\nlatency of 1. This means that AllocSetFreeIndex() is only called in\nAllocSetAlloc now.\n\nThis changes the performance as follows:\n\nMaster:\npostgres=# select pg_allocate_memory_test(64, 1024,\n20::bigint*1024*1024*1024, 'aset');\nTime: 2524.438 ms (00:02.524)\n\nOld patch (v3):\npostgres=# select pg_allocate_memory_test(64, 1024,\n20::bigint*1024*1024*1024, 'aset');\nTime: 2646.438 ms (00:02.646)\n\nNew patch (v4):\npostgres=# select pg_allocate_memory_test(64, 1024,\n20::bigint*1024*1024*1024, 'aset');\nTime: 2296.228 ms (00:02.296)\n\n(about ~10% faster than master)\n\nThis function is allocating 64-byte chunks and keeping 1k of them\naround at once, but allocating a total of 20GBs of them. I've attached\nanother patch with that function in it for anyone who wants to check\nthe performance.\n\nI also tried another round of the pgbench -S workload that I ran\nupthread [2] on the v2 patch. Confusingly, even when testing on\n0b039e3a8 as I was last week, I'm unable to see that same 10%\nperformance increase.\n\nDoes anyone else want to have a go at taking v4 for a spin to see how\nit performs?\n\nDavid\n\n[1] https://www.agner.org/optimize/instruction_tables.pdf\n[2] https://www.postgresql.org/message-id/CAApHDvrrYfcCXfuc_bZ0xsqBP8U62Y0i27agr9Qt-2geE_rv0Q@mail.gmail.com",
"msg_date": "Thu, 18 Aug 2022 01:58:39 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Thu, 18 Aug 2022 at 01:58, David Rowley <dgrowleyml@gmail.com> wrote:\n> Does anyone else want to have a go at taking v4 for a spin to see how\n> it performs?\n\nI've been working on this patch again. The v4 patch didn't quite get\nthe palloc(0) behaviour correct. There were no actual bugs in\nnon-MEMORY_CONTEXT_CHECKING builds. It was just my detection method\nfor free'd chunks in AllocSetCheck and GenerationCheck that was\nbroken. In the free function, I'd been setting the requested_size\nback to 0 to indicate a free'd chunk. Unfortunately that was getting\nconfused with palloc(0) chunks during the checks. The solution to\nthat was to add:\n\n#define InvalidAllocSize SIZE_MAX\n\nto memutils.h and set free'd chunk's requested_size to that value.\nThat seems fine, because:\n\n#define MaxAllocHugeSize (SIZE_MAX / 2)\n\nAfter fixing that I moved on and did some quite extensive benchmarking\nof the patch. I've learned quite a few things from this, namely:\n\na) putting the freelist pointer at the end of the chunk costs us in performance.\nb) the patch makes GenerationFree() slower than it is in master.\n\nI believe a) is due to the chunk header and the freelist pointer being\non different cache lines when the chunk becomes larger. This means\nhaving to load more cache lines resulting in slower performance.\nFor b), I'm not sure the reason for this. I wondered if it is due to\nhow the pointer to the MemoryContext is obtained in GenerationFree().\nWe first look at the MemoryChunk to find the block offset, then get\nthe address of the block from that, we then must dereference the\nblock->context pointer to get the owning MemoryContext. Previously we\njust looked at the GenerationChunk's context field.\n\nDue to a), I decided to benchmark the patch twice, once with the\nfreelist pointer at the end of the chunk (as it was in v4 patch), and\nagain with the pointer at the start of the chunk.\nFor b), I looked again at GenerationFree() and saw that I really only\nneed to look at the owning context when the block is to be freed. The\nvast majority of calls won't need to do that. Unfortunately moving\nthe \"set = block->context;\" to just after we've checked if the block\nneeds to be freed did not improve things. I've not run the complete\nbenchmark again to make sure though, only a single data point of it.\n\nBenchmarking:\n\nI did this benchmarking on an AMD 3990x machine which has 64GB of RAM\ninstalled. The tests I performed entirely used two SQL functions that\nI've written especially for exercising palloc() and pfree().\n\nTo allow me to run the tests I wanted to, I wrote the following functions:\n\n1. pg_allocate_memory_test(): Accepts an allocation size parameter\n(chunk_size), an amount of memory to keep palloc'd at once. Once this\namount is reached we'll start pfreeing 1 chunk for every newly\nallocated chunk. Another parameter controls the total amount of\nallocations. We'll do this number / chunk_size allocations in total.\nThe final parameter specifies the memory context type to use; aset,\ngeneration, etc. The intention of this function is to see if\npalloc/pfree has become faster or slower and also see how the\nreduction in the chunk header size affects performance when the amount\nof memory being allocated at once starts to get big. I'm expecting\nbetter CPU cache hit ratios due to the memory reductions.\n\n2. pg_allocate_memory_test_reset(). Since we mostly don't do pfree(),\nfunction #1 might not be that accurate a test. Mostly we palloc()\nthen MemoryContextReset().\n\nTests:\n\nI ran both of the above functions starting with a chunk size of 8 and\ndoubled that each time up to 1024. I tested both aset.c and\ngeneration.c context types.\n\nIn order to check how performance is affected by allocating more\nmemory at once, I started the total concurrent memory allocation at 1k\nand doubled each time to 1GB. 3 branches tested; master, patch (with\nfreelist at end of chunk) and patch (with freelist at start of chunk).\n\nSo, 2 memory context types, 8 chunk sizes, 21 concurrent allocation\nsizes, 2 functions, 3 * 2 * 8 * 21 * 2 = 2016 data points.\n\nI ran each test for 30 seconds in pgbench. I used 20GBs as the total\namount of memory to allocate.\n\nThe results:\n\nThe patch with freelist pointers at the start of the chunk seems a\nclear winner in the pg_allocate_memory_test_reset() test for both aset\n(graph1.gif) and generation (graph2.gif). There are a few data points\nwhere the times come out slower than master, but not by much. I\nsuspect this might be noise, but didn't investigate.\n\nFor function #1, pg_allocate_memory_test(), the performance is pretty\ncomplex. For aset, the performance is mostly better with the freelist\nstart version of the patch. There is an exception to this with the 8\nbyte chunk size where performance comes out worse than master. That\nflips around with the 16 byte chunk test and becomes better overall.\nIgnoring the 8 byte chunk, the remaining performance is faster than\nmaster. This shows that having the freelist pointer at the start of\nthe free'd chunk is best (graph3.gif) (graph4.gif better shows this\nfor the larger chunks).\n\nWith the generation context, it's complex. The more memory we\nallocate at once, the better performance generally gets (especially\nfor larger chunks). I wondered if this might be due to the reduction\nin the chunk header size, but I'm really not sure as I'd have expected\nthe most gains to appear in small chunks if that was the case as the\nheader is a larger proportion of the total allocation size for those.\nWeirdly the opposite is true, larger chunks are showing better gains\nin the patched version when compared to the same data point in the\nunpatched master branch. graph5.gif shows this for the smaller chunks\nand graph6.gif for the larger ones\n\nIn summary, I'm not too concerned that GenerationFree() is a small\namount slower. I think the other gains are significant enough to make\nup for this. We could probably modify dumptuples() so that WRITETUP()\ndoes not call pfree() just before we do\nMemoryContextReset(state->base.tuplecontext). Those pfree calls are a\ncomplete waste of effort, although we *can't* unconditionally remove\nthat pfree() call in WRITETUP().\n\nI've attached the full results in bench_results.sql. If anyone else\nwants to repeat these tests then I've attached\npg_allocate_memory_test.sh.txt and\nallocate_performance_functions.patch.txt with the 2 new SQL functions\nI used to test with.\n\nFinally, the v5 patch with the fixes mentioned above.\n\nAlso, I'm leaning towards wanting to put the freelist pointer at the\nstart of the chunk's memory rather than at the end. I understand that\ncode which actually does something with the allocated memory is more\nlikely to have loaded the useful cache lines already and that's not\nsomething my benchmark would have done, but for larger chunks the\npower of 2 wastage is likely to mean that many of those cache lines\nnearing the end of the chunk would never have to be loaded at all.\n\nI'm pretty keen to get this patch committed early next week. This is\nquite core infrastructure, so it would be good if the patch gets a few\nextra eyeballs on it before then.\n\nDavid",
"msg_date": "Tue, 23 Aug 2022 21:03:52 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 23 Aug 2022 at 21:03, David Rowley <dgrowleyml@gmail.com> wrote:\n> Finally, the v5 patch with the fixes mentioned above.\n\nThe CFbot just alerted me to the cplusplus check was failing with the\nv5 patch, so here's v6.\n\n> I'm pretty keen to get this patch committed early next week. This is\n> quite core infrastructure, so it would be good if the patch gets a few\n> extra eyeballs on it before then.\n\nThat'll probably be New Zealand Monday, unless objects before then.\n\nDavid",
"msg_date": "Fri, 26 Aug 2022 17:16:38 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Fri, 26 Aug 2022 at 17:16, David Rowley <dgrowleyml@gmail.com> wrote:\n> The CFbot just alerted me to the cplusplus check was failing with the\n> v5 patch, so here's v6.\n\nI'll try that one again...\n\nDavid",
"msg_date": "Sun, 28 Aug 2022 23:04:21 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Sun, 28 Aug 2022 at 23:04, David Rowley <dgrowleyml@gmail.com> wrote:\n> I'll try that one again...\n\nOne more try to make CFbot happy.\n\nDavid",
"msg_date": "Mon, 29 Aug 2022 10:39:09 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Mon, 29 Aug 2022 at 10:39, David Rowley <dgrowleyml@gmail.com> wrote:\n> One more try to make CFbot happy.\n\nAfter a bit more revision, mostly updating outdated comments and\nnaming adjustments, I've pushed this.\n\nPer the benchmark results I showed in [1], due to the performance of\nhaving the AllocSet free list pointers stored at the end of the\nallocated chunk being quite a bit slower than having them at the start\nof the chunk, I adjusted the patch to have them at the start.\n\nTime for me to go and watch the buildfarm results come in.\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvpuhcPWCzkXZuQQgB8YjPNQSvnncbzZ6pwpHFr2QMMD2w@mail.gmail.com\n\n\n",
"msg_date": "Mon, 29 Aug 2022 17:26:29 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Mon, Aug 29, 2022 at 10:57 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> After a bit more revision, mostly updating outdated comments and\n> naming adjustments, I've pushed this.\n>\n> Per the benchmark results I showed in [1], due to the performance of\n> having the AllocSet free list pointers stored at the end of the\n> allocated chunk being quite a bit slower than having them at the start\n> of the chunk, I adjusted the patch to have them at the start.\n>\n> Time for me to go and watch the buildfarm results come in.\n>\n\nThere is a BF failure with a callstack:\n2022-08-29 03:29:56.911 EDT [1056:67] pg_regress/ddl STATEMENT:\nSELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1');\nTRAP: FailedAssertion(\"pointer == (void *) MAXALIGN(pointer)\", File:\n\"../../../../src/include/utils/memutils_internal.h\", Line: 120, PID:\n1056)\n0x1e6f71c <ExceptionalCondition+0x9c> at postgres\n0x1ea8494 <MemoryContextAllowInCriticalSection.part.0> at postgres\n0x1ea9ee8 <repalloc> at postgres\n0x1c56dc4 <ReorderBufferCleanupTXN+0xbc> at postgres\n0x1c58a1c <ReorderBufferProcessTXN+0x1980> at postgres\n0x1c44c5c <xact_decode+0x46c> at postgres\n0x1c445f0 <LogicalDecodingProcessRecord+0x98> at postgres\n0x1c4b578 <pg_logical_slot_get_changes_guts+0x318> at postgres\n0x1ad69ec <ExecMakeTableFunctionResult+0x268> at postgres\n0x1aedc88 <FunctionNext+0x3a0> at postgres\n0x1ad7808 <ExecScan+0x100> at postgres\n0x1acaaa0 <standard_ExecutorRun+0x158> at postgres\n0x1ceac3c <PortalRunSelect+0x2d0> at postgres\n0x1cec8ec <PortalRun+0x16c> at postgres\n0x1ce7b30 <exec_simple_query+0x3a4> at postgres\n0x1ce96ec <PostgresMain+0x1720> at postgres\n0x1c2c784 <PostmasterMain+0x1a3c> at postgres\n0x1ee6a1c <main+0x248> at postgres\n\nI am not completely sure if the failure is due to your commit but it\nwas showing the line added by this commit. Note that I had also\ncommitted (commit id: d2169c9985) one patch today but couldn't\ncorrelate the failure with that so thought of checking with you. There\nare other similar failures[2][3] as well but [1] shows the stack\ntrace. Any idea?\n\n[1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2022-08-29%2005%3A53%3A57\n[2] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2022-08-29%2008%3A13%3A09\n[3] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skate&dt=2022-08-29%2006%3A13%3A24\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 29 Aug 2022 15:07:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Mon, 29 Aug 2022 at 21:37, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I am not completely sure if the failure is due to your commit but it\n> was showing the line added by this commit. Note that I had also\n> committed (commit id: d2169c9985) one patch today but couldn't\n> correlate the failure with that so thought of checking with you. There\n> are other similar failures[2][3] as well but [1] shows the stack\n> trace. Any idea?\n\nI'm currently suspecting that I broke it. I'm thinking it was just an\nunfortunate coincidence that you made some changes to test_decoding\nshortly before.\n\nI'm currently seeing it I can recreate this on a Raspberry PI.\n\nDavid\n\n> [1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2022-08-29%2005%3A53%3A57\n> [2] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2022-08-29%2008%3A13%3A09\n> [3] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skate&dt=2022-08-29%2006%3A13%3A24\n\n\n",
"msg_date": "Mon, 29 Aug 2022 21:42:05 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Mon, 29 Aug 2022 at 21:37, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> 2022-08-29 03:29:56.911 EDT [1056:67] pg_regress/ddl STATEMENT:\n> SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n> TRAP: FailedAssertion(\"pointer == (void *) MAXALIGN(pointer)\", File:\n> \"../../../../src/include/utils/memutils_internal.h\", Line: 120, PID:\n> 1056)\n\nI suspect, going by all 3 failing animals being 32-bit which have a\nMAXIMUM_ALIGNOF 8 and SIZEOF_SIZE_T of 4 that this is due to the lack\nof padding in the MemoryChunk struct.\n\nAllocChunkData and GenerationChunk had padding to account for\nsizeof(Size) being 4 and sizeof(void *) being 8, I didn't add that to\nMemoryChunk, so I'll do that now.\n\nDavid\n\n\n",
"msg_date": "Mon, 29 Aug 2022 23:10:53 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I suspect, going by all 3 failing animals being 32-bit which have a\n> MAXIMUM_ALIGNOF 8 and SIZEOF_SIZE_T of 4 that this is due to the lack\n> of padding in the MemoryChunk struct.\n> AllocChunkData and GenerationChunk had padding to account for\n> sizeof(Size) being 4 and sizeof(void *) being 8, I didn't add that to\n> MemoryChunk, so I'll do that now.\n\nDoesn't seem to have fixed it. IMO, the fact that we can get through\ncore regression tests and pg_upgrade is a strong indicator that\nthere's not anything fundamentally wrong with memory context\nmanagement. I'm inclined to think the problem is in d2169c9985,\ninstead ... though I can't see anything wrong with it.\n\nAnother possibility is that there's a pre-existing bug in the\nlogical decoding stuff that your changes accidentally exposed.\nI wonder if valgrind would show anything interesting.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Aug 2022 09:47:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Mon, Aug 29, 2022 at 7:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > I suspect, going by all 3 failing animals being 32-bit which have a\n> > MAXIMUM_ALIGNOF 8 and SIZEOF_SIZE_T of 4 that this is due to the lack\n> > of padding in the MemoryChunk struct.\n> > AllocChunkData and GenerationChunk had padding to account for\n> > sizeof(Size) being 4 and sizeof(void *) being 8, I didn't add that to\n> > MemoryChunk, so I'll do that now.\n>\n> Doesn't seem to have fixed it. IMO, the fact that we can get through\n> core regression tests and pg_upgrade is a strong indicator that\n> there's not anything fundamentally wrong with memory context\n> management. I'm inclined to think the problem is in d2169c9985,\n> instead ... though I can't see anything wrong with it.\n>\n\nYeah, I also thought that way but couldn't find a reason. I think if\nDavid is able to reproduce it on one of his systems then he can try\nlocally reverting both the commits one by one.\n\n> Another possibility is that there's a pre-existing bug in the\n> logical decoding stuff that your changes accidentally exposed.\n>\n\nYeah, this is another possibility.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 29 Aug 2022 19:32:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "\n\nOn 8/29/22 16:02, Amit Kapila wrote:\n> On Mon, Aug 29, 2022 at 7:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> David Rowley <dgrowleyml@gmail.com> writes:\n>>> I suspect, going by all 3 failing animals being 32-bit which have a\n>>> MAXIMUM_ALIGNOF 8 and SIZEOF_SIZE_T of 4 that this is due to the lack\n>>> of padding in the MemoryChunk struct.\n>>> AllocChunkData and GenerationChunk had padding to account for\n>>> sizeof(Size) being 4 and sizeof(void *) being 8, I didn't add that to\n>>> MemoryChunk, so I'll do that now.\n>>\n>> Doesn't seem to have fixed it. IMO, the fact that we can get through\n>> core regression tests and pg_upgrade is a strong indicator that\n>> there's not anything fundamentally wrong with memory context\n>> management. I'm inclined to think the problem is in d2169c9985,\n>> instead ... though I can't see anything wrong with it.\n>>\n> \n> Yeah, I also thought that way but couldn't find a reason. I think if\n> David is able to reproduce it on one of his systems then he can try\n> locally reverting both the commits one by one.\n> \n\nI can reproduce it on my system (rpi4 running 32-bit raspbian). I can't\ngrant access very easily at the moment, so I'll continue investigating\nand do more debugging, or perhaps I can grant access to the system.\n\nSo far all I know is that it doesn't happen on d2169c9985 (so ~5 commits\nback), and then it starts failing on c6e0fe1f2a. The extra padding added\nby df0f4feef8 makes no difference, because the struct looked like this:\n\n struct MemoryChunk {\n Size requested_size; /* 0 4 */\n\n /* XXX 4 bytes hole, try to pack */\n\n uint64 hdrmask; /* 8 8 */\n\n /* size: 16, cachelines: 1, members: 2 */\n /* sum members: 12, holes: 1, sum holes: 4 */\n /* last cacheline: 16 bytes */\n };\n\nand the padding makes it look like this:\n\n struct MemoryChunk {\n Size requested_size; /* 0 4 */\n char padding[4]; /* 4 8 */\n uint64 hdrmask; /* 8 8 */\n\n /* size: 16, cachelines: 1, members: 2 */\n /* sum members: 12, holes: 1, sum holes: 4 */\n /* last cacheline: 16 bytes */\n };\n\nso it makes no difference.\n\nI did look at the pointers in GetMemoryChunkMethodID, and it looks like\nthis (p1 is result of MAXALIGN(pointer):\n\n(gdb) p pointer\n$1 = (void *) 0x1ca1d2c\n(gdb) p p1\n$2 = 0x1ca1d30 \"\"\n(gdb) p p1 - pointer\n$3 = 4\n(gdb) p (long int) pointer\n$4 = 30022956\n(gdb) p (long int) p1\n$5 = 30022960\n(gdb) p 30022956 % 8\n$6 = 4\n\nSo the input pointer is not actually aligned to MAXIMUM_ALIGNOF (8B),\nbut only to 4B. That seems a bit strange.\n\n\n>> Another possibility is that there's a pre-existing bug in the\n>> logical decoding stuff that your changes accidentally exposed.\n>>\n> \n> Yeah, this is another possibility.\n\nNo idea.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 29 Aug 2022 16:52:20 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> Yeah, I also thought that way but couldn't find a reason. I think if\n> David is able to reproduce it on one of his systems then he can try\n> locally reverting both the commits one by one.\n\nIt seems to repro easily on any 32-bit platform. Aside from the\nbuildfarm results, I've now duplicated it on 32-bit ARM (which\neliminates the possibility that it's big-endian specific).\n\n\"bt full\" from the first crash gives\n\n#0 0xb6d7126c in raise () from /lib/libc.so.6\nNo symbol table info available.\n#1 0xb6d5c360 in abort () from /lib/libc.so.6\nNo symbol table info available.\n#2 0x00572430 in ExceptionalCondition (\n conditionName=conditionName@entry=0x745110 \"pointer == (void *) MAXALIGN(pointer)\", errorType=errorType@entry=0x5d18d0 \"FailedAssertion\", \n fileName=fileName@entry=0x7450dc \"../../../../src/include/utils/memutils_internal.h\", lineNumber=lineNumber@entry=120) at assert.c:69\nNo locals.\n#3 0x005a0d90 in GetMemoryChunkMethodID (pointer=<optimized out>)\n at ../../../../src/include/utils/memutils_internal.h:120\n header = <optimized out>\n#4 0x005a231c in GetMemoryChunkMethodID (pointer=<optimized out>)\n at ../../../../src/include/utils/memutils_internal.h:119\n header = <optimized out>\n header = <optimized out>\n#5 pfree (pointer=<optimized out>) at mcxt.c:1242\nNo locals.\n#6 0x003c8fdc in ReorderBufferCleanupTXN (rb=0x7450dc, rb@entry=0x1f, \n txn=0x745110, txn@entry=0xe1f800) at reorderbuffer.c:1493\n change = <optimized out>\n found = false\n iter = {cur = <optimized out>, next = 0xddcb18, end = 0xde4e64}\n#7 0x003ca968 in ReorderBufferProcessTXN (rb=0x1f, rb@entry=0xe1f800, \n txn=0xe1f800, commit_lsn=<optimized out>, snapshot_now=<optimized out>, \n command_id=<optimized out>, streaming=false) at reorderbuffer.c:2514\n change = 0x0\n _save_exception_stack = 0x0\n _save_context_stack = 0x1\n _local_sigjmp_buf = {{__jmpbuf = {-552117431, 1640676937, -1092521468, \n 
14569068, 14569068, 14568932, 729, 0, 228514434, 166497, 0, \n 8195960, 8196088, 7, 0 <repeats 13 times>, 13847224, 0, 196628, \n 0, 1844909312, 5178548, -1092520644, 13979392, 13979388, \n 14801472, 0, 8195960, 8196088, 7, 0, 8108342, 8108468, 1, \n 715827883, -1030792151, 0, 8195960, 729, 7, 0, 14809088, \n 14809088, 14542436, 0, 26303888, 0, 14568932, 3978796, 156, 0, \n 1, 0}, __mask_was_saved = 0, __saved_mask = {__val = {64, \n 7242472, 14740696, 7244624, 14809088, 14741512, 5, 14740696, \n 7244624, 7245864, 3982844, 14542436, 0, 14809092, 0, 26303888, \n 729, 14569112, 14809092, 7242472, 729, 14741512, 32, 14809088, \n 0, 228514434, 166497, 0, 0, 26303888, 3980968, 0}}}}\n _do_rethrow = <optimized out>\n using_subtxn = 228\n ccxt = 0x7aae68 <my_wait_event_info>\n iterstate = 0x0\n prev_lsn = 26303888\n specinsert = 0x0\n stream_started = false\n curtxn = 0x0\n __func__ = \"ReorderBufferProcessTXN\"\n#8 0x003cb460 in ReorderBufferReplay (txn=<optimized out>, \n rb=rb@entry=0xe1f800, commit_lsn=<optimized out>, end_lsn=<optimized out>, \n commit_time=715099169882112, origin_id=0, origin_lsn=0, xid=729)\n at reorderbuffer.c:2641\n snapshot_now = <optimized out>\n#9 0x003cbedc in ReorderBufferCommit (rb=rb@entry=0xe1f800, \n xid=xid@entry=729, commit_lsn=<optimized out>, end_lsn=<optimized out>, \n commit_time=<optimized out>, commit_time@entry=715099398396546, \n origin_id=<optimized out>, origin_id@entry=0, origin_lsn=0, \n origin_lsn@entry=5906902891454464) at reorderbuffer.c:2665\n txn = <optimized out>\n#10 0x003bb19c in DecodeCommit (two_phase=false, xid=729, parsed=0xbee17478, \n buf=<optimized out>, ctx=0xe1d7f8) at decode.c:682\n origin_lsn = <optimized out>\n commit_time = <optimized out>\n origin_id = 0\n i = <optimized out>\n origin_lsn = <optimized out>\n commit_time = <optimized out>\n origin_id = <optimized out>\n i = <optimized out>\n#11 xact_decode (ctx=0xe1d7f8, buf=<optimized out>) at decode.c:216\n xlrec = <optimized out>\n parsed = 
{xact_time = 715099398396546, xinfo = 73, dbId = 16384, \n tsId = 1663, nsubxacts = 0, subxacts = 0x0, nrels = 0, \n xlocators = 0x0, nstats = 0, stats = 0x0, nmsgs = 70, \n msgs = 0xe2b828, twophase_xid = 0, \n twophase_gid = '\\000' <repeats 199 times>, nabortrels = 0, \n abortlocators = 0x0, nabortstats = 0, abortstats = 0x0, \n origin_lsn = 0, origin_timestamp = 0}\n xid = <optimized out>\n two_phase = false\n builder = <optimized out>\n reorder = <optimized out>\n r = <optimized out>\n info = <optimized out>\n __func__ = \"xact_decode\"\n#12 0x003babf0 in LogicalDecodingProcessRecord (ctx=ctx@entry=0xe1d7f8, \n record=0xe1dad0) at decode.c:119\n buf = {origptr = 26303888, endptr = 26305088, record = 0xe1dad0}\n txid = <optimized out>\n rmgr = <optimized out>\n#13 0x003c0390 in pg_logical_slot_get_changes_guts (fcinfo=0x0, \n confirm=<optimized out>, binary=<optimized out>) at logicalfuncs.c:271\n record = <optimized out>\n errm = 0x0\n _save_exception_stack = 0xd60380\n _save_context_stack = 0x1\n _local_sigjmp_buf = {{__jmpbuf = {-552117223, 1640638557, 4, 14770368, \n -1092519792, -1, 14770320, 8, 1, 14691520, 0, 8195960, 8196088, \n 7, 0 <repeats 12 times>, 3398864, 0, 13849464, 13848680, \n 14650600, 3399100, 4, 3125412, 0, 144, 0, 0, 8192, 7616300, 0, \n 1576, 7616452, 14585296, 8192, 7616300, 14651128, 0, 0, \n 14651128, -1092519880, 3398864, 0, 13849464, 13848680, 3399100, \n 0, 5904972, 8112272, -1, 14591624, -1375937000, -1375937248, \n 14591376}, __mask_was_saved = 0, __saved_mask = {__val = {\n 14530312, 14528852, 2348328335, 14527224, 5699140, 3202447388, \n 4, 14527080, 5699156, 14530312, 5769228, 2918792552, 0, 0, 0, \n 14767888, 3067834868, 14591600, 13824008, 1023, 3068719720, 1, \n 184, 14589688, 14586024, 14587400, 14691504, 14586024, \n 14585752, 14586200, 2615480, 8112268}}}}\n _do_rethrow = <optimized out>\n name = 0xe16090\n upto_lsn = 3933972\n upto_nchanges = 14707744\n rsinfo = <optimized out>\n per_query_ctx = <optimized out>\n 
oldcontext = 0xe06cc0\n end_of_wal = 0\n ctx = 0xe1d7f8\n old_resowner = 0xd35568\n arr = <optimized out>\n ndim = <optimized out>\n options = 0x0\n p = 0xe02cb0\n __func__ = \"pg_logical_slot_get_changes_guts\"\n#14 0x00292864 in ExecMakeTableFunctionResult (setexpr=0xdea9b0, \n econtext=0xde90a8, argContext=<optimized out>, expectedDesc=0x1, \n randomAccess=false) at execSRF.c:234\n result = 2590380\n tupstore = 0x0\n tupdesc = 0x0\n funcrettype = 8040208\n returnsTuple = <optimized out>\n returnsSet = 32\n fcinfo = 0xe02cb0\n fcusage = {fs = 0x0, save_f_total_time = {tv_sec = 14592432, \n tv_nsec = 7093408}, save_total = {tv_sec = 0, tv_nsec = 0}, \n f_start = {tv_sec = 14767936, tv_nsec = 14767940}}\n rsinfo = {type = T_ReturnSetInfo, econtext = 0xde90a8, \n expectedDesc = 0xdea7a0, allowedModes = 11, \n returnMode = SFRM_Materialize, isDone = ExprSingleResult, \n setResult = 0xe16548, setDesc = 0xe16338}\n tmptup = {t_len = 3, t_self = {ip_blkid = {bi_hi = 0, bi_lo = 0}, \n ip_posid = 0}, t_tableOid = 0, t_data = 0x3}\n callerContext = 0xdea7a0\n first_time = true\n __func__ = \"ExecMakeTableFunctionResult\"\n... etc ...\n\nAnything ring a bell?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Aug 2022 10:55:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Hi,\n\nI've got another problem with this patch here on macOS:\n\nccache clang -Wall -Wmissing-prototypes -Wpointer-arith\n-Wdeclaration-after-statement -Werror=vla\n-Werror=unguarded-availability-new -Wendif-labels\n-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing\n-fwrapv -Wno-unused-command-line-argument -g -O2 -Wall -Werror\n-Wno-unknown-warning-option -fno-omit-frame-pointer\n-I../../../../src/include -isysroot\n/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk\n-I/opt/local/include/libxml2 -I/opt/local/include -I/opt/local/include\n -I/opt/local/include -c -o aset.o aset.c -MMD -MP -MF .deps/aset.Po\nIn file included from aset.c:52:\n../../../../src/include/utils/memutils_memorychunk.h:170:18: error:\ncomparison of constant 7 with expression of type\n'MemoryContextMethodID' (aka 'enum MemoryContextMethodID') is always\ntrue [-Werror,-Wtautological-constant-out-of-range-compare]\n Assert(methodid <= MEMORY_CONTEXT_METHODID_MASK);\n ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n../../../../src/include/c.h:827:9: note: expanded from macro 'Assert'\n if (!(condition)) \\\n ^~~~~~~~~\nIn file included from aset.c:52:\n../../../../src/include/utils/memutils_memorychunk.h:186:18: error:\ncomparison of constant 7 with expression of type\n'MemoryContextMethodID' (aka 'enum MemoryContextMethodID') is always\ntrue [-Werror,-Wtautological-constant-out-of-range-compare]\n Assert(methodid <= MEMORY_CONTEXT_METHODID_MASK);\n ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n../../../../src/include/c.h:827:9: note: expanded from macro 'Assert'\n if (!(condition)) \\\n ^~~~~~~~~\n\nI'm not sure what to do about that, but every file that includes\nmemutils_memorychunk.h produces those warnings (which become errors\ndue to -Werror).\n\n...Robert\n\n\n",
"msg_date": "Mon, 29 Aug 2022 11:15:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> I can reproduce it on my system (rpi4 running 32-bit raspbian).\n\nYeah, more or less same as what I'm testing on.\n\nSeeing that the complaint is about pfree'ing a non-maxaligned\nReorderBufferChange pointer, I tried adding\n\ndiff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c\nindex 89cf9f9389..dfa9b6c9ee 100644\n--- a/src/backend/replication/logical/reorderbuffer.c\n+++ b/src/backend/replication/logical/reorderbuffer.c\n@@ -472,6 +472,8 @@ ReorderBufferGetChange(ReorderBuffer *rb)\n change = (ReorderBufferChange *)\n MemoryContextAlloc(rb->change_context, sizeof(ReorderBufferChange));\n \n+ Assert(change == (void *) MAXALIGN(change));\n+\n memset(change, 0, sizeof(ReorderBufferChange));\n return change;\n }\n\nand this failed!\n\n(gdb) f 3\n#3 0x003cb888 in ReorderBufferGetChange (rb=0x24ed820) at reorderbuffer.c:475\n475 Assert(change == (void *) MAXALIGN(change));\n(gdb) p change\n$1 = (ReorderBufferChange *) 0x24aaa14\n\nSo the bug is in fact in David's changes, and it consists in palloc\nsometimes handing back non-maxaligned pointers. I find it mildly\nastonishing that we managed to get through core regression tests\nwithout such a fault surfacing, but there you have it.\n\nThis machine has MAXALIGN 8 but 4-byte pointers, so there's something\nwrong with that situation.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Aug 2022 11:20:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "I wrote:\n> So the bug is in fact in David's changes, and it consists in palloc\n> sometimes handing back non-maxaligned pointers. I find it mildly\n> astonishing that we managed to get through core regression tests\n> without such a fault surfacing, but there you have it.\n\nOh! I just noticed that the troublesome context (rb->change_context)\nis a SlabContext, so it may be that this only happens in non-aset\ncontexts. It's a lot easier to believe that the core tests never\nexercise the case of pfree'ing a slab chunk.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Aug 2022 11:25:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "\n\nOn 8/29/22 17:20, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> I can reproduce it on my system (rpi4 running 32-bit raspbian).\n> \n> Yeah, more or less same as what I'm testing on.\n> \n> Seeing that the complaint is about pfree'ing a non-maxaligned\n> ReorderBufferChange pointer, I tried adding\n> \n> diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c\n> index 89cf9f9389..dfa9b6c9ee 100644\n> --- a/src/backend/replication/logical/reorderbuffer.c\n> +++ b/src/backend/replication/logical/reorderbuffer.c\n> @@ -472,6 +472,8 @@ ReorderBufferGetChange(ReorderBuffer *rb)\n> change = (ReorderBufferChange *)\n> MemoryContextAlloc(rb->change_context, sizeof(ReorderBufferChange));\n> \n> + Assert(change == (void *) MAXALIGN(change));\n> +\n> memset(change, 0, sizeof(ReorderBufferChange));\n> return change;\n> }\n> \n> and this failed!\n> \n> (gdb) f 3\n> #3 0x003cb888 in ReorderBufferGetChange (rb=0x24ed820) at reorderbuffer.c:475\n> 475 Assert(change == (void *) MAXALIGN(change));\n> (gdb) p change\n> $1 = (ReorderBufferChange *) 0x24aaa14\n> \n> So the bug is in fact in David's changes, and it consists in palloc\n> sometimes handing back non-maxaligned pointers. I find it mildly\n> astonishing that we managed to get through core regression tests\n> without such a fault surfacing, but there you have it.\n> \n> This machine has MAXALIGN 8 but 4-byte pointers, so there's something\n> wrong with that situation.\n> \n\nI suspect it's a pre-existing bug in Slab allocator, because it does this:\n\n#define SlabBlockGetChunk(slab, block, idx) \\\n\t((MemoryChunk *) ((char *) (block) + sizeof(SlabBlock)\t\\\n\t\t\t\t\t+ (idx * slab->fullChunkSize)))\n\nand SlabBlock is only 20B, i.e. not a multiple of 8B. 
Which would mean\nthat even if we allocate block and size the chunks carefully (with all\nthe MAXALIGN things), we ultimately slice the block incorrectly.\n\nThis would explain the 4B difference I reported before, I think. But I'm\njust as astonished we got this far in the tests - regular regression\ntests don't do much logical decoding, and we only use slab for changes,\nbut I see the failure in 006 test in src/test/recovery, so the first\nfive completed fine.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 29 Aug 2022 17:27:02 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I've got another problem with this patch here on macOS:\n\n> In file included from aset.c:52:\n> ../../../../src/include/utils/memutils_memorychunk.h:170:18: error:\n> comparison of constant 7 with expression of type\n> 'MemoryContextMethodID' (aka 'enum MemoryContextMethodID') is always\n> true [-Werror,-Wtautological-constant-out-of-range-compare]\n> Assert(methodid <= MEMORY_CONTEXT_METHODID_MASK);\n> ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> ../../../../src/include/c.h:827:9: note: expanded from macro 'Assert'\n> if (!(condition)) \\\n> ^~~~~~~~~\n\nHuh. My macOS buildfarm animals aren't showing that, nor can I\nrepro it on my laptop. Which compiler version are you using exactly?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Aug 2022 11:31:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Got it: sizeof(SlabBlock) isn't a multiple of MAXALIGN,\n\n(gdb) p sizeof(SlabBlock)\n$4 = 20\n\nbut this coding requires it to be:\n\n#define SlabBlockGetChunk(slab, block, idx) \\\n\t((MemoryChunk *) ((char *) (block) + sizeof(SlabBlock)\t\\\n\t\t\t\t\t+ (idx * slab->fullChunkSize)))\n\nSo what you actually need to do is add some alignment padding to\nSlabBlock. I'd suggest reverting df0f4feef as it seems to be\na red herring.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Aug 2022 11:39:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On 8/29/22 17:27, Tomas Vondra wrote:\n> ...\n>\n> I suspect it's a pre-existing bug in Slab allocator, because it does this:\n> \n> #define SlabBlockGetChunk(slab, block, idx) \\\n> \t((MemoryChunk *) ((char *) (block) + sizeof(SlabBlock)\t\\\n> \t\t\t\t\t+ (idx * slab->fullChunkSize)))\n> \n> and SlabBlock is only 20B, i.e. not a multiple of 8B. Which would mean\n> that even if we allocate block and size the chunks carefully (with all\n> the MAXALIGN things), we ultimately slice the block incorrectly.\n> \n\nThe attached patch seems to fix the issue for me - at least it seems\nlike that. This probably will need to get backpatched, I guess. Maybe we\nshould add an assert to MemoryChunkGetPointer to check alignment?\n\n\n\n> This would explain the 4B difference I reported before, I think. But I'm\n> just as astonished we got this far in the tests - regular regression\n> tests don't do much logical decoding, and we only use slab for changes,\n> but I see the failure in 006 test in src/test/recovery, so the first\n> five completed fine.\n> \n\nI got confused - the first 5 tests in src/test/recovery don't do any\nlogical decoding, so it's not surprising it's the 006 that fails.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 29 Aug 2022 17:39:52 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> I suspect it's a pre-existing bug in Slab allocator, because it does this:\n\n> #define SlabBlockGetChunk(slab, block, idx) \\\n> \t((MemoryChunk *) ((char *) (block) + sizeof(SlabBlock)\t\\\n> \t\t\t\t\t+ (idx * slab->fullChunkSize)))\n\n> and SlabBlock is only 20B, i.e. not a multiple of 8B. Which would mean\n> that even if we allocate block and size the chunks carefully (with all\n> the MAXALIGN things), we ultimately slice the block incorrectly.\n\nRight, same conclusion I just came to. But it's not a \"pre-existing\"\nbug, because sizeof(SlabBlock) *was* maxaligned until David added\nanother field to it.\n\nI think adding a padding field to SlabBlock would be a less messy\nsolution than your patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Aug 2022 11:43:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-29 11:43:14 -0400, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> > I suspect it's a pre-existing bug in Slab allocator, because it does this:\n> \n> > #define SlabBlockGetChunk(slab, block, idx) \\\n> > \t((MemoryChunk *) ((char *) (block) + sizeof(SlabBlock)\t\\\n> > \t\t\t\t\t+ (idx * slab->fullChunkSize)))\n> \n> > and SlabBlock is only 20B, i.e. not a multiple of 8B. Which would mean\n> > that even if we allocate block and size the chunks carefully (with all\n> > the MAXALIGN things), we ultimately slice the block incorrectly.\n> \n> Right, same conclusion I just came to. But it's not a \"pre-existing\"\n> bug, because sizeof(SlabBlock) *was* maxaligned until David added\n> another field to it.\n> \n> I think adding a padding field to SlabBlock would be a less messy\n> solution than your patch.\n\nThat just seems to invite the same problem happening again later and it's\nharder to ensure that the padding is correct across platforms.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 29 Aug 2022 08:57:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-08-29 11:43:14 -0400, Tom Lane wrote:\n>> I think adding a padding field to SlabBlock would be a less messy\n>> solution than your patch.\n\n> That just seems to invite the same problem happening again later and it's\n> harder to ensure that the padding is correct across platforms.\n\nYeah, I just tried and failed to write a general padding computation\n--- you can't use sizeof() in the #if, which makes it a lot more\nfragile than I was expecting. Tomas' way is probably the best.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Aug 2022 12:00:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "\n\nOn 8/29/22 17:43, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> I suspect it's a pre-existing bug in Slab allocator, because it does this:\n> \n>> #define SlabBlockGetChunk(slab, block, idx) \\\n>> \t((MemoryChunk *) ((char *) (block) + sizeof(SlabBlock)\t\\\n>> \t\t\t\t\t+ (idx * slab->fullChunkSize)))\n> \n>> and SlabBlock is only 20B, i.e. not a multiple of 8B. Which would mean\n>> that even if we allocate block and size the chunks carefully (with all\n>> the MAXALIGN things), we ultimately slice the block incorrectly.\n> \n> Right, same conclusion I just came to. But it's not a \"pre-existing\"\n> bug, because sizeof(SlabBlock) *was* maxaligned until David added\n> another field to it.\n> \n\nYeah, that's true. Still, there was an implicit expectation the size is\nmaxaligned, but it wasn't mentioned anywhere. I don't even recall if I\nwas aware of it when I wrote that code, or if I was just lucky.\n\n> I think adding a padding field to SlabBlock would be a less messy\n> solution than your patch.\n\nMaybe, although I find it a bit annoying that we do MAXALIGN() for a\nbunch of structs, and then in other places we add padding. Maybe not for\nSlab, but e.g. for Generation. Maybe we should try doing the same thing\nin all those places.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 29 Aug 2022 18:00:14 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 30 Aug 2022 at 03:43, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think adding a padding field to SlabBlock would be a less messy\n> solution than your patch.\n\nThank you both of you for looking at this while I was sleeping.\n\nI've read over the emails and glanced at Tomas' patch. I think that\nseems good. I think I'd rather see us do that than pad the struct out\nfurther as Tomas' method is more aligned to what we do in aset.c\n(ALLOC_BLOCKHDRSZ) and generation.c (Generation_BLOCKHDRSZ).\n\nI can adjust Tomas' patch to #define Slab_BLOCKHDRSZ\n\nDavid\n\n\n",
"msg_date": "Tue, 30 Aug 2022 09:55:03 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I've read over the emails and glanced at Tomas' patch. I think that\n> seems good. I think I'd rather see us do that than pad the struct out\n> further as Tomas' method is more aligned to what we do in aset.c\n> (ALLOC_BLOCKHDRSZ) and generation.c (Generation_BLOCKHDRSZ).\n\n> I can adjust Tomas' patch to #define Slab_BLOCKHDRSZ\n\nWFM\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Aug 2022 17:57:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "\n\nOn 8/29/22 23:57, Tom Lane wrote:\n> David Rowley <dgrowleyml@gmail.com> writes:\n>> I've read over the emails and glanced at Tomas' patch. I think that\n>> seems good. I think I'd rather see us do that than pad the struct out\n>> further as Tomas' method is more aligned to what we do in aset.c\n>> (ALLOC_BLOCKHDRSZ) and generation.c (Generation_BLOCKHDRSZ).\n> \n>> I can adjust Tomas' patch to #define Slab_BLOCKHDRSZ\n> \n> WFM\n> \n\nSame here.\n\nI also suggested doing a similar check in MemoryChunkGetPointer, so that\nwe catch the issue earlier - right after we allocate the chunk. Any\nopinion on that? With an assert only in GetMemoryChunkMethodID() we'd\nnotice the issue much later, when it may not be obvious if it's a memory\ncorruption or what. But maybe it's overkill.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 30 Aug 2022 02:22:55 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 30 Aug 2022 at 03:39, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> The attached patch seems to fix the issue for me - at least it seems\n> like that. This probably will need to get backpatched, I guess. Maybe we\n> should add an assert to MemoryChunkGetPointer to check alignment?\n\nHi Tomas,\n\nI just wanted to check with you if you ran the full make check-world\nwith this patch?\n\nI don't yet have a working ARM 32-bit environment to test, but on\ntrying it with x86 32-bit and adjusting MAXIMUM_ALIGNOF to 8, I'm\ngetting failures in test_decoding. Namely:\n\ntest twophase ... FAILED 51 ms\ntest twophase_stream ... FAILED 25 ms\n\n INSERT INTO test_prepared2 VALUES (5);\n SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n+WARNING: problem in slab Change: detected write past chunk end in\nblock 0x58b8ecf0, chunk 0x58b8ed08\n+WARNING: problem in slab Change: detected write past chunk end in\nblock 0x58b8ecf0, chunk 0x58b8ed58\n+WARNING: problem in slab Change: detected write past chunk end in\nblock 0x58b8ecf0, chunk 0x58b8eda8\n+WARNING: problem in slab Change: detected write past chunk end in\nblock 0x58b8ecf0, chunk 0x58b8edf8\n+WARNING: problem in slab Change: detected write past chunk end in\nblock 0x58b8ecf0, chunk 0x58b8ee48\n\nI think the existing sentinel check looks wrong:\n\nif (!sentinel_ok(chunk, slab->chunkSize))\n\nshouldn't that be passing the pointer rather than the chunk?\n\nDavid\n\n\n",
"msg_date": "Tue, 30 Aug 2022 12:45:07 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 30 Aug 2022 at 12:22, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> I also suggested doing a similar check in MemoryChunkGetPointer, so that\n> we catch the issue earlier - right after we allocate the chunk. Any\n> opinion on that?\n\nI think it's probably a good idea. However, I'm not yet sure if we can\nkeep it as a macro or if it would need to become a static inline\nfunction to do that.\n\nWhat I'd really have wished for is a macro like AssertPointersEqual()\nthat spat out the two pointer values. That would probably have saved\nmore time on this issue.\n\nDavid\n\n\n",
"msg_date": "Tue, 30 Aug 2022 13:04:46 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 30 Aug 2022 at 12:45, David Rowley <dgrowleyml@gmail.com> wrote:\n> I think the existing sentinel check looks wrong:\n>\n> if (!sentinel_ok(chunk, slab->chunkSize))\n>\n> shouldn't that be passing the pointer rather than the chunk?\n\nHere's v2 of the slab-fix patch.\n\nI've included the sentinel check fix. This passes make check-world\nfor me when I do a 32-bit build on my x86_64 machine and adjust\npg_config.h to set MAXIMUM_ALIGNOF to 8.\n\nAny chance you could run make check-world on your 32-bit Raspberry PI?\n\nI'm also wondering if this should also be backpatched back to v10,\nproviding the build farm likes it well enough on master.",
"msg_date": "Tue, 30 Aug 2022 13:16:00 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "\n\nOn 8/30/22 02:45, David Rowley wrote:\n> On Tue, 30 Aug 2022 at 03:39, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> The attached patch seems to fix the issue for me - at least it seems\n>> like that. This probably will need to get backpatched, I guess. Maybe we\n>> should add an assert to MemoryChunkGetPointer to check alignment?\n> \n> Hi Tomas,\n> \n> I just wanted to check with you if you ran the full make check-world\n> with this patch?\n> \n> I don't yet have a working ARM 32-bit environment to test, but on\n> trying it with x86 32-bit and adjusting MAXIMUM_ALIGNOF to 8, I'm\n> getting failures in test_decoding. Namely:\n> \n> test twophase ... FAILED 51 ms\n> test twophase_stream ... FAILED 25 ms\n> \n> INSERT INTO test_prepared2 VALUES (5);\n> SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n> +WARNING: problem in slab Change: detected write past chunk end in\n> block 0x58b8ecf0, chunk 0x58b8ed08\n> +WARNING: problem in slab Change: detected write past chunk end in\n> block 0x58b8ecf0, chunk 0x58b8ed58\n> +WARNING: problem in slab Change: detected write past chunk end in\n> block 0x58b8ecf0, chunk 0x58b8eda8\n> +WARNING: problem in slab Change: detected write past chunk end in\n> block 0x58b8ecf0, chunk 0x58b8edf8\n> +WARNING: problem in slab Change: detected write past chunk end in\n> block 0x58b8ecf0, chunk 0x58b8ee48\n> \n> I think the existing sentinel check looks wrong:\n> \n> if (!sentinel_ok(chunk, slab->chunkSize))\n> \n> shouldn't that be passing the pointer rather than the chunk?\n> \n\nI agree the check in SlabCheck() looks wrong, as it's ignoring the chunk\nheader (unlike the other contexts).\n\nBut yeah, I ran \"make check-world\" and it passed just fine, so my only\nexplanation is that the check never actually executes because there's no\nspace for the sentinel thanks to alignment, and the tweak you did breaks\nthat. Strange ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 30 Aug 2022 03:24:47 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Here's v2 of the slab-fix patch.\n\n> I've included the sentinel check fix. This passes make check-world\n> for me when do a 32-bit build on my x86_64 machine and adjust\n> pg_config.h to set MAXIMUM_ALIGNOF to 8.\n\n> Any chance you could run make check-world on your 32-bit Raspberry PI?\n\nI have clean core and test_decoding passes on both 32-bit ARM and\n32-bit PPC. It'll take awhile (couple hours) to finish a full\ncheck-world, but I'd say that's good enough evidence to commit.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Aug 2022 21:45:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 8/30/22 02:45, David Rowley wrote:\n>> I think the existing sentinel check looks wrong:\n>> if (!sentinel_ok(chunk, slab->chunkSize))\n>> shouldn't that be passing the pointer rather than the chunk?\n\n> I agree the check in SlabCheck() looks wrong, as it's ignoring the chunk\n> header (unlike the other contexts).\n\n> But yeah, I ran \"make check-world\" and it passed just fine, so my only\n> explanation is that the check never actually executes because there's no\n> space for the sentinel thanks to alignment, and the tweak you did breaks\n> that. Strange ...\n\nA quick code-coverage check confirms that the sentinel_ok() line\nis not reached in core or test_decoding tests as of HEAD\n(on a 64-bit machine anyway). So we just happen to be using\nonly allocation requests that are already maxaligned.\n\nI wonder if slab ought to artificially bump up such requests when\nMEMORY_CONTEXT_CHECKING is enabled, so there's room for a sentinel.\nI think it's okay for aset.c to not do that, because its power-of-2\nbehavior means there usually is room for a sentinel; but slab's\npolicy makes it much more likely that there won't be.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Aug 2022 21:55:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On 8/30/22 03:16, David Rowley wrote:\n> On Tue, 30 Aug 2022 at 12:45, David Rowley <dgrowleyml@gmail.com> wrote:\n>> I think the existing sentinel check looks wrong:\n>>\n>> if (!sentinel_ok(chunk, slab->chunkSize))\n>>\n>> shouldn't that be passing the pointer rather than the chunk?\n> \n> Here's v2 of the slab-fix patch.\n> \n> I've included the sentinel check fix. This passes make check-world\n> for me when do a 32-bit build on my x86_64 machine and adjust\n> pg_config.h to set MAXIMUM_ALIGNOF to 8.\n> \n> Any chance you could run make check-world on your 32-bit Raspberry PI?\n> \n\nWill do, but I think the sentinel fix should be\n\n if (!sentinel_ok(chunk, Slab_CHUNKHDRSZ + slab->chunkSize))\n\nwhich is what the other contexts do. However, considering check-world\npassed even before the sentinel_ok fix, I'm a bit skeptical about that\nproving anything.\n\nFWIW I added a WARNING to SlabCheck before the condition guarding the\nsentinel check, printing the (full) chunk size and header size, and this\nis what I got in test_decoding (deduplicated):\n\narmv7l (32-bit rpi4)\n\n+WARNING: chunkSize 216 fullChunkSize 232 header 16\n+WARNING: chunkSize 64 fullChunkSize 80 header 16\n\naarch64 (64-bit rpi4)\n\n+WARNING: chunkSize 304 fullChunkSize 320 header 16\n+WARNING: chunkSize 80 fullChunkSize 96 header 16\n\nSo indeed, those are *perfect* matches and thus the sentinel_ok() never\nexecuted. So no failures until now. On x86-64 I get the same thing as on\naarch64. I guess that explains why it never failed. Seems like a pretty\namazing coincidence ...\n\n\n> I'm also wondering if this should also be backpatched back to v10,\n> providing the build farm likes it well enough on master.\n\nI'd say the sentinel fix may need to be backpatched.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 30 Aug 2022 03:58:32 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On 8/30/22 03:55, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> On 8/30/22 02:45, David Rowley wrote:\n>>> I think the existing sentinel check looks wrong:\n>>> if (!sentinel_ok(chunk, slab->chunkSize))\n>>> shouldn't that be passing the pointer rather than the chunk?\n> \n>> I agree the check in SlabCheck() looks wrong, as it's ignoring the chunk\n>> header (unlike the other contexts).\n> \n>> But yeah, I ran \"make check-world\" and it passed just fine, so my only\n>> explanation is that the check never actually executes because there's no\n>> space for the sentinel thanks to alignment, and the tweak you did breaks\n>> that. Strange ...\n> \n> A quick code-coverage check confirms that the sentinel_ok() line\n> is not reached in core or test_decoding tests as of HEAD\n> (on a 64-bit machine anyway). So we just happen to be using\n> only allocation requests that are already maxaligned.\n> \n> I wonder if slab ought to artificially bump up such requests when\n> MEMORY_CONTEXT_CHECKING is enabled, so there's room for a sentinel.\n> I think it's okay for aset.c to not do that, because its power-of-2\n> behavior means there usually is room for a sentinel; but slab's\n> policy makes it much more likely that there won't be.\n> \n\n+1 to that\n\nFor aset that's fine not just because of power-of-2 behavior, but\nbecause we use it for chunks of many different sizes - so at least some\nof those will have sentinel.\n\nBut Slab in used only for changes and txns in reorderbuffer, and it just\nso happens both structs are maxaligned on 32-bit and 64-bit machines\n(rpi and x86-64). We're unlikely to use slab in many other places, and\nthe structs don't change very often, and it'd probably grow to another\nmaxaligned size anyway. So it may be pretty rare to have a sentinel.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 30 Aug 2022 04:12:35 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 30 Aug 2022 at 13:58, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> armv7l (32-bit rpi4)\n>\n> +WARNING: chunkSize 216 fullChunkSize 232 header 16\n> +WARNING: chunkSize 64 fullChunkSize 80 header 16\n>\n> aarch64 (64-bit rpi4)\n>\n> +WARNING: chunkSize 304 fullChunkSize 320 header 16\n> +WARNING: chunkSize 80 fullChunkSize 96 header 16\n>\n> So indeed, those are *perfect* matches and thus the sentinel_ok() never\n> executed. So no failures until now. On x86-64 I get the same thing as on\n> aarch64. I guess that explains why it never failed. Seems like a pretty\n> amazing coincidence ...\n\nhmm, I'm not so sure I agree that it's an amazing coincidence. Isn't\nit quite likely that the chunksize being given to SlabContextCreate()\nis the same as MAXALIGN(chunksize)? Isn't that all it would take?\n\nDavid\n\n\n",
"msg_date": "Tue, 30 Aug 2022 14:29:01 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 30 Aug 2022 at 13:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wonder if slab ought to artificially bump up such requests when\n> MEMORY_CONTEXT_CHECKING is enabled, so there's room for a sentinel.\n> I think it's okay for aset.c to not do that, because its power-of-2\n> behavior means there usually is room for a sentinel; but slab's\n> policy makes it much more likely that there won't be.\n\nI think it's fairly likely that small allocations are a power of 2,\nand I think most of our allocations are small, so I imagine that if we\ndidn't do that for aset.c, we'd miss out on most of the benefits.\n\nDavid\n\n\n",
"msg_date": "Tue, 30 Aug 2022 14:31:47 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 30 Aug 2022 at 13:58, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 8/30/22 03:16, David Rowley wrote:\n> > Any chance you could run make check-world on your 32-bit Raspberry PI?\n> >\n>\n> Will do, but I think the sentinel fix should be\n\nThank you. I think Tom is also running make check-world. I now have\nan old RPI3 setup with a 32-bit OS, which is currently busy compiling\nPostgres. I'll kick off a make check-world run once that's done.\n\n> if (!sentinel_ok(chunk, Slab_CHUNKHDRSZ + slab->chunkSize))\n\nAgreed. I've changed the patch to do it that way. Since the 32-bit\nARM animals are already broken and per what Tom mentioned in [1], I've\npushed the patch.\n\nThanks again for taking the time to look at this.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/3455754.1661823905@sss.pgh.pa.us\n\n\n",
"msg_date": "Tue, 30 Aug 2022 14:42:09 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 30 Aug 2022 at 03:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'd suggest reverting df0f4feef as it seems to be\n> a red herring.\n\nI think it's useless providing that a 64-bit variable will always be\naligned to 8 bytes on all of our supported 32-bit platforms as,\nwithout the padding, the uint64 hdrmask in MemoryChunk will always be\naligned to 8 bytes meaning the memory following that will be aligned\ntoo. If we have a platform where a uint64 isn't aligned to 8 bytes\nthen we might need the padding.\n\nlong long seems to align to 8 bytes on my 32-bit Raspberry Pi, making\nthe struct 16 bytes rather than 12.\n\ndrowley@raspberrypi:~ $ cat struct.c\n#include <stdio.h>\n\ntypedef struct test\n{\n int a;\n long long b;\n} test;\n\nint main(void)\n{\n printf(\"%d\\n\", sizeof(test));\n return 0;\n}\ndrowley@raspberrypi:~ $ gcc struct.c -o struct\ndrowley@raspberrypi:~ $ ./struct\n16\ndrowley@raspberrypi:~ $ uname -m\narmv7l\n\nIs that the case for your 32-bit PPC too?\n\nDavid\n\n\n",
"msg_date": "Tue, 30 Aug 2022 15:01:37 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Tue, 30 Aug 2022 at 03:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'd suggest reverting df0f4feef as it seems to be\n>> a red herring.\n\n> I think it's useless providing that a 64-bit variable will always be\n> aligned to 8 bytes on all of our supported 32-bit platforms as,\n> without the padding, the uint64 hdrmask in MemoryChunk will always be\n> aligned to 8 bytes meaning the memory following that will be aligned\n> too. If we have a platform where a uint64 isn't aligned to 8 bytes\n> then we might need the padding.\n\nIt's not so much \"8 bytes\". The question is this: is there any\nplatform on which uint64 has less than MAXALIGN alignment\nrequirement? If it is maxaligned then the compiler will insert any\nrequired padding automatically, so the patch accomplishes little.\n\nAFAICS that could only happen if \"double\" has 8-byte alignment\nrequirement but int64 does not. I recall some discussion about\nthat possibility a month or two back, but I think we concluded\nthat we weren't going to support it.\n\nI guess what I mostly don't like about df0f4feef is the hardwired \"8\"\nconstants. Yeah, it's hard to see how sizeof(uint64) isn't 8, but\nit's not very readable like this IMO.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Aug 2022 23:15:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 30 Aug 2022 at 15:15, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> AFAICS that could only happen if \"double\" has 8-byte alignment\n> requirement but int64 does not. I recall some discussion about\n> that possibility a month or two back, but I think we concluded\n> that we weren't going to support it.\n\nok\n\n> I guess what I mostly don't like about df0f4feef is the hardwired \"8\"\n> constants. Yeah, it's hard to see how sizeof(uint64) isn't 8, but\n> it's not very readable like this IMO.\n\nYeah, that was just down to lack of any SIZEOF_* macro to tell me\nuint64 was 8 bytes.\n\nI can revert df0f4feef, but would prefer just to get the green light\nfor d5ee4db0e from those 32-bit arm animals before doing so.\n\nDavid\n\n\n",
"msg_date": "Tue, 30 Aug 2022 15:21:49 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I can revert df0f4feef, but would prefer just to get the green light\n> for d5ee4db0e from those 32-bit arm animals before doing so.\n\nI have a check-world pass on my RPI3 (Fedora 30 armv7l image).\nPPC test still running, but I don't doubt it will pass; it's\nfinished contrib/test_decoding.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Aug 2022 23:31:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 30 Aug 2022 at 03:16, Robert Haas <robertmhaas@gmail.com> wrote:\n> ../../../../src/include/utils/memutils_memorychunk.h:186:18: error:\n> comparison of constant 7 with expression of type\n> 'MemoryContextMethodID' (aka 'enum MemoryContextMethodID') is always\n> true [-Werror,-Wtautological-constant-out-of-range-compare]\n> Assert(methodid <= MEMORY_CONTEXT_METHODID_MASK);\n> ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> ../../../../src/include/c.h:827:9: note: expanded from macro 'Assert'\n> if (!(condition)) \\\n> ^~~~~~~~~\n>\n> I'm not sure what to do about that, but every file that includes\n> memutils_memorychunk.h produces those warnings (which become errors\n> due to -Werror).\n\nI'm not really sure either. I tried compiling with clang 12.0.1 with\n-Wtautological-constant-out-of-range-compare and don't get this\nwarning.\n\nI think the Assert is useful as if we were ever to add an enum member\nwith the value of 8 and forgot to adjust MEMORY_CONTEXT_METHODID_BITS\nthen bad things would happen inside MemoryChunkSetHdrMask() and\nMemoryChunkSetHdrMaskExternal(). I think it's unlikely we'll ever get\nthat many MemoryContext types, but I don't know for sure and would\nrather the person who adds the 9th one get alerted to the lack of bit\nspace in MemoryChunk as soon as possible.\n\nAs much as I'm not a fan of adding new warnings for compiler options\nthat are not part of our standard set, I feel like if there are\nwarning flags out there that are giving us false warnings such as\nthis one, then we shouldn't trouble ourselves trying to get rid of\nthem, especially so when they force us to remove something which might\ncatch a future bug.\n\nDavid\n\n\n",
"msg_date": "Tue, 30 Aug 2022 19:14:23 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "\n\nOn 8/30/22 03:04, David Rowley wrote:\n> On Tue, 30 Aug 2022 at 12:22, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> I also suggested doing a similar check in MemoryChunkGetPointer, so that\n>> we catch the issue earlier - right after we allocate the chunk. Any\n>> opinion on that?\n> \n> I think it's probably a good idea. However, I'm not yet sure if we can\n> keep it as a macro or if it would need to become a static inline\n> function to do that.\n> \n\nI'd bet it can be done in the macro. See VARATT_EXTERNAL_GET_POINTER for\nan example of a \"do\" block with an Assert.\n\n> What I'd really have wished for is a macro like AssertPointersEqual()\n> that spat out the two pointer values. That would probably have saved\n> more time on this issue.\n> \n\nHmm, maybe.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 30 Aug 2022 10:33:24 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "\n\nOn 8/30/22 04:31, David Rowley wrote:\n> On Tue, 30 Aug 2022 at 13:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I wonder if slab ought to artificially bump up such requests when\n>> MEMORY_CONTEXT_CHECKING is enabled, so there's room for a sentinel.\n>> I think it's okay for aset.c to not do that, because its power-of-2\n>> behavior means there usually is room for a sentinel; but slab's\n>> policy makes it much more likely that there won't be.\n> \n> I think it's fairly likely that small allocations are a power of 2,\n> and I think most of our allocates are small, so I imagine that if we\n> didn't do that for aset.c, we'd miss out on most of the benefits.\n> \n\nYeah. I think we have a fair number of \"larger\" allocations (once you\nget to ~100B it probably won't be a 2^N), but we may easily miss whole\nsections of allocations.\n\nI guess the idea was to add a sentinel only when there already is space\nfor it, but perhaps that's a bad tradeoff limiting the benefits. Either\nwe add the sentinel fairly often (and then why not just add it all the\ntime - it'll need a bit more space), or we do it only very rarely (and\nthen it's a matter of luck if it catches an issue). Considering we only\ndo this with asserts, I doubt the extra bytes / CPU is a major issue,\nand a (more) reliable detection of issues seems worth it. But maybe I\nunderestimate the costs. The only alternative seems to be valgrind, and\nthat's way costlier, though.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 30 Aug 2022 11:05:47 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> I guess the idea was to add a sentinel only when there already is space\n> for it, but perhaps that's a bad tradeoff limiting the benefits. Either\n> we add the sentinel fairly often (and then why not just add it all the\n> time - it'll need a bit more space), or we do it only very rarely (and\n> then it's a matter of luck if it catches an issue).\n\nI'm fairly sure that when we made that decision originally, a top-of-mind\ncase was ListCells, which are plentiful, small, power-of-2-sized, and\nnot used in a way likely to have buffer overruns. But since the List\nrewrite a couple years back we no longer palloc individual ListCells.\nSo maybe we should revisit the question. It'd be worth collecting some\nstats about how much extra space would be needed if we force there\nto be room for a sentinel.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Aug 2022 09:26:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Tue, 30 Aug 2022 at 03:16, Robert Haas <robertmhaas@gmail.com> wrote:\n>> ../../../../src/include/utils/memutils_memorychunk.h:186:18: error:\n>> comparison of constant 7 with expression of type\n>> 'MemoryContextMethodID' (aka 'enum MemoryContextMethodID') is always\n>> true [-Werror,-Wtautological-constant-out-of-range-compare]\n>> Assert(methodid <= MEMORY_CONTEXT_METHODID_MASK);\n>> ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n> I think the Assert is useful as if we were ever to add an enum member\n> with the value of 8 and forgot to adjust MEMORY_CONTEXT_METHODID_BITS\n> then bad things would happen inside MemoryChunkSetHdrMask() and\n> MemoryChunkSetHdrMaskExternal(). I think it's unlikely we'll ever get\n> that many MemoryContext types, but I don't know for sure and would\n> rather the person who adds the 9th one get alerted to the lack of bit\n> space in MemoryChunk as soon as possible.\n\nI think that's a weak argument, so I don't mind dropping this Assert.\nWhat would be far more useful is a comment inside the\nMemoryContextMethodID enum pointing out that we can support at most\n8 values because XYZ.\n\nHowever, I'm still wondering why Robert sees this when the rest of us\ndon't.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Aug 2022 09:36:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "I wrote:\n> So maybe we should revisit the question. It'd be worth collecting some\n> stats about how much extra space would be needed if we force there\n> to be room for a sentinel.\n\nActually, after ingesting more caffeine, the problem with this for aset.c\nis that the only way to add space for a sentinel that didn't fit already\nis to double the space allocation. That's a little daunting, especially\nremembering how many places deliberately allocate power-of-2-sized\narrays.\n\nYou could imagine deciding that the space classifications are not\npower-of-2 but power-of-2-plus-one, or something like that. But that\nwould be very invasive to the logic, and I doubt it's a good idea.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Aug 2022 10:17:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 3:14 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> I'm not really sure either. I tried compiling with clang 12.0.1 with\n> -Wtautological-constant-out-of-range-compare and don't get this\n> warning.\n\nI have a much older clang version, it seems. clang -v reports 5.0.2. I\nuse -Wall and -Werror as a matter of habit. It looks like 5.0.2 was\nreleased in May 2018, installed by me in November of 2019, and I just\nhaven't had a reason to upgrade.\n\n> I think the Assert is useful as if we were ever to add an enum member\n> with the value of 8 and forgot to adjust MEMORY_CONTEXT_METHODID_BITS\n> then bad things would happen inside MemoryChunkSetHdrMask() and\n> MemoryChunkSetHdrMaskExternal(). I think it's unlikely we'll ever get\n> that many MemoryContext types, but I don't know for sure and would\n> rather the person who adds the 9th one get alerted to the lack of bit\n> space in MemoryChunk as soon as possible.\n\nWell I don't have a problem with that, but I think we should try to do\nit without causing compiler warnings. The attached patch fixes it for\nme.\n\n> As much as I'm not a fan of adding new warnings for compiler options\n> that are not part of our standard set, I feel like if there are\n> warning flags out there that are as giving us false warnings such as\n> this one, then we shouldn't trouble ourselves trying to get rid of\n> them, especially so when they force us to remove something which might\n> catch a future bug.\n\nFor me the point is that, at least on the compiler that I'm using, the\nwarning suggests that the compiler will optimize the test away\ncompletely, and therefore it wouldn't catch a future bug. Could there\nbe compilers where no warning is generated but the assertion is still\noptimized away?\n\nI don't know, but I don't think a 4-year-old compiler is such a fossil\nthat we shouldn't care whether it produces warnings. We worry about\noperating systems and PostgreSQL versions that are almost extinct in\nthe wild, so saying we're not going to worry about failing to update\nthe compiler regularly enough within the lifetime of one off-the-shelf\nMacBook does not really make sense to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 30 Aug 2022 11:00:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Wed, 31 Aug 2022 at 03:00, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Aug 30, 2022 at 3:14 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > I think the Assert is useful as if we were ever to add an enum member\n> > with the value of 8 and forgot to adjust MEMORY_CONTEXT_METHODID_BITS\n> > then bad things would happen inside MemoryChunkSetHdrMask() and\n> > MemoryChunkSetHdrMaskExternal(). I think it's unlikely we'll ever get\n> > that many MemoryContext types, but I don't know for sure and would\n> > rather the person who adds the 9th one get alerted to the lack of bit\n> > space in MemoryChunk as soon as possible.\n>\n> Well I don't have a problem with that, but I think we should try to do\n> it without causing compiler warnings. The attached patch fixes it for\n> me.\n\nI'm fine with adding the int cast. Seems like a good idea.\n\n> > As much as I'm not a fan of adding new warnings for compiler options\n> > that are not part of our standard set, I feel like if there are\n> > warning flags out there that are as giving us false warnings such as\n> > this one, then we shouldn't trouble ourselves trying to get rid of\n> > them, especially so when they force us to remove something which might\n> > catch a future bug.\n>\n> For me the point is that, at least on the compiler that I'm using, the\n> warning suggests that the compiler will optimize the test away\n> completely, and therefore it wouldn't catch a future bug. Could there\n> be compilers where no warning is generated but the assertion is still\n> optimized away?\n\nI'd not considered that the compiler might optimise it away. My\nsuspicions had been more along the lines that clang removed the\nenum out of range warnings because they were annoying and wrong as\nit's pretty easy to set an enum variable to something out of range of\nthe defined enum values.\n\nLooking at [1], it seems like 5.0.2 is producing the correct code and\nit's just producing a warning. The 2nd compiler window has -Werror and\nshows that it does fail to compile. If I change that to use clang\n6.0.0 then it works. It seems to fail all the way back to clang 3.1.\nclang 3.0.0 works.\n\nDavid\n\n[1] https://godbolt.org/z/Gx388z5Ej\n\n\n",
"msg_date": "Wed, 31 Aug 2022 03:25:52 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Wed, 31 Aug 2022 at 01:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > I think the Assert is useful as if we were ever to add an enum member\n> > with the value of 8 and forgot to adjust MEMORY_CONTEXT_METHODID_BITS\n> > then bad things would happen inside MemoryChunkSetHdrMask() and\n> > MemoryChunkSetHdrMaskExternal(). I think it's unlikely we'll ever get\n> > that many MemoryContext types, but I don't know for sure and would\n> > rather the person who adds the 9th one get alerted to the lack of bit\n> > space in MemoryChunk as soon as possible.\n>\n> I think that's a weak argument, so I don't mind dropping this Assert.\n> What would be far more useful is a comment inside the\n> MemoryContextMethodID enum pointing out that we can support at most\n> 8 values because XYZ.\n\nI'd just sleep better knowing that MemoryChunkSetHdrMask() and\nMemoryChunkSetHdrMaskExternal() have some verification that we don't\nend up with future code that will cause the hdrmask to be invalid. I\ntried to make those functions as lightweight as possible. Without the\nAssert, I just feel that there's a bit too much trust that none of the\nbits overlap. I've no objections to adding a comment to the enum to\nexplain to future devs. My vote would be for that and adding the (int)\ncast as proposed by Robert.\n\nDavid\n\n\n",
"msg_date": "Wed, 31 Aug 2022 03:31:24 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Wed, 31 Aug 2022 at 03:31, David Rowley <dgrowleyml@gmail.com> wrote:\n> I've no objections to adding a comment to the enum to\n> explain to future devs. My vote would be for that and add the (int)\n> cast as proposed by Robert.\n\nHere's a patch which adds a comment to MemoryContextMethodID, on top of Robert's patch.\n\nDavid",
"msg_date": "Wed, 31 Aug 2022 03:39:00 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 11:39 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Wed, 31 Aug 2022 at 03:31, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I've no objections to adding a comment to the enum to\n> > explain to future devs. My vote would be for that and add the (int)\n> > cast as proposed by Robert.\n>\n> Here's a patch which adds a comment to MemoryContextMethodID to Robert's patch.\n\nLGTM.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 30 Aug 2022 11:54:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Here's a patch which adds a comment to MemoryContextMethodID to Robert's patch.\n\nOK, but while looking at that I noticed the adjacent\n\n#define MEMORY_CONTEXT_METHODID_MASK \\\n\tUINT64CONST((1 << MEMORY_CONTEXT_METHODID_BITS) - 1)\n\nI'm rather astonished that that compiles; UINT64CONST was only ever\nmeant to be applied to *literals*. I think what it's expanding to\nis\n\n\t((1 << MEMORY_CONTEXT_METHODID_BITS) - 1UL)\n\n(or on some machines 1ULL) which only accidentally does approximately\nwhat you want. It'd be all right perhaps to write\n\n\t((UINT64CONST(1) << MEMORY_CONTEXT_METHODID_BITS) - 1)\n\nbut you might as well avoid the Postgres-ism and just write\n\n\t((uint64) ((1 << MEMORY_CONTEXT_METHODID_BITS) - 1))\n\nNobody's ever going to make MEMORY_CONTEXT_METHODID_BITS large\nenough for the shift to overflow in int arithmetic.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Aug 2022 12:17:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Wed, 31 Aug 2022 at 02:17, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > So maybe we should revisit the question. It'd be worth collecting some\n> > stats about how much extra space would be needed if we force there\n> > to be room for a sentinel.\n>\n> Actually, after ingesting more caffeine, the problem with this for aset.c\n> is that the only way to add space for a sentinel that didn't fit already\n> is to double the space allocation. That's a little daunting, especially\n> remembering how many places deliberately allocate power-of-2-sized\n> arrays.\n\nI decided to try and quantify that by logging the size, MAXALIGN(size)\nand the power of 2 size during AllocSetAlloc and GenerationAlloc. I\nmade the pow2_size 0 in GenerationAlloc and in AllocSetAlloc when\nsize > allocChunkLimit.\n\nAfter running make installcheck, grabbing the records out of the log and\nloading them into Postgres, I see that if we did double the pow2_size\nwhen there's no space for the sentinel byte then we'd go from\nallocating a total of 10.2GB all the way to 16.4GB (!) of\nnon-dedicated block aset.c allocations.\n\nselect\nround(sum(pow2_Size)::numeric/1024/1024/1024,3) as pow2_size,\nround(sum(case when maxalign_size=pow2_size then pow2_size*2 else\npow2_size end)::numeric/1024/1024/1024,3) as method1,\nround(sum(case when maxalign_size=pow2_size then pow2_size+8 else\npow2_size end)::numeric/1024/1024/1024,3) as method2\nfrom memstats\nwhere pow2_size > 0;\n pow2_size | method1 | method2\n-----------+---------+---------\n 10.194 | 16.382 | 10.463\n\nIf we did just add on an extra 8 bytes (or MAXALIGN(size+1) at\nleast), then that would take the size up to 10.5GB.\n\nDavid\n\n\n",
"msg_date": "Wed, 31 Aug 2022 10:40:43 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On 8/31/22 00:40, David Rowley wrote:\n> On Wed, 31 Aug 2022 at 02:17, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> I wrote:\n>>> So maybe we should revisit the question. It'd be worth collecting some\n>>> stats about how much extra space would be needed if we force there\n>>> to be room for a sentinel.\n>>\n>> Actually, after ingesting more caffeine, the problem with this for aset.c\n>> is that the only way to add space for a sentinel that didn't fit already\n>> is to double the space allocation. That's a little daunting, especially\n>> remembering how many places deliberately allocate power-of-2-sized\n>> arrays.\n> \n> I decided to try and quantify that by logging the size, MAXALIGN(size)\n> and the power of 2 size during AllocSetAlloc and GenerationAlloc. I\n> made the pow2_size 0 in GenerationAlloc and in AlloocSetAlloc when\n> size > allocChunkLimit.\n> \n> After running make installcheck, grabbing the records out the log and\n> loading them into Postgres, I see that if we did double the pow2_size\n> when there's no space for the sentinel byte then we'd go from\n> allocating a total of 10.2GB all the way to 16.4GB (!) of\n> non-dedicated block aset.c allocations.\n> \n> select\n> round(sum(pow2_Size)::numeric/1024/1024/1024,3) as pow2_size,\n> round(sum(case when maxalign_size=pow2_size then pow2_size*2 else\n> pow2_size end)::numeric/1024/1024/1024,3) as method1,\n> round(sum(case when maxalign_size=pow2_size then pow2_size+8 else\n> pow2_size end)::numeric/1024/1024/1024,3) as method2\n> from memstats\n> where pow2_size > 0;\n> pow2_size | method1 | method2\n> -----------+---------+---------\n> 10.194 | 16.382 | 10.463\n> \n> if we did just add on an extra 8 bytes (or or MAXALIGN(size+1) at\n> least), then that would take the size up to 10.5GB.\n> \n\nI've been experimenting with this a bit too, and my results are similar,\nbut not exactly the same. 
I've logged all Alloc/Realloc calls for the\ntwo memory contexts, and when I aggregated the results I get this:\n\n f | size | pow2(size) | pow2(size+1)\n-----------------+----------+------------+--------------\n AllocSetAlloc | 23528 | 28778 | 31504\n AllocSetRelloc | 761 | 824 | 1421\n GenerationAlloc | 68 | 90 | 102\n\nSo the raw size (what we asked for) is ~23.5GB, but in practice we\nallocate ~28.8GB because of the pow-of-2 logic. And by adding the extra\n1B we end up allocating 31.5GB. That doesn't seem like a huge increase,\nand it's far from the +60% you got.\n\nI wonder where does the difference come - I did make installcheck too,\nso how come you get 10/16GB, and I get 28/31GB? My patch is attached,\nmaybe I did something silly.\n\nI also did a quick hack to see if always having the sentinel detects any\npre-existing issues, but that didn't happen. I guess valgrind would find\nthose, but not sure?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 31 Aug 2022 22:53:38 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Thu, 1 Sept 2022 at 08:53, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> So the raw size (what we asked for) is ~23.5GB, but in practice we\n> allocate ~28.8GB because of the pow-of-2 logic. And by adding the extra\n> 1B we end up allocating 31.5GB. That doesn't seem like a huge increase,\n> and it's far from the +60% you got.\n>\n> I wonder where does the difference come - I did make installcheck too,\n> so how come you get 10/16GB, and I get 28/31GB? My patch is attached,\n> maybe I did something silly.\n\nThe reason my reported results were lower is because I ignored size >\nallocChunkLimit allocations. These are not raised to the next power of\n2, so I didn't think they should be included.\n\nI'm not sure why you're seeing only a 3GB additional overhead. I\nnoticed a logic error in my query where I was checking\nmaxaligned_size=pow2_size and doubling that to give sentinel space.\nThat really should have been \"case size=pow2_size then pow2_size * 2\nelse pow2_size end\", However, after adjusting the query, it does not\nseem to change the results much:\n\npostgres=# select\npostgres-# round(sum(pow2_Size)::numeric/1024/1024/1024,3) as pow2_size,\npostgres-# round(sum(case when size=pow2_size then pow2_size*2 else\npow2_size end)::numeric/1024/1024/1024,3) as method1,\npostgres-# round(sum(case when size=pow2_size then pow2_size+8 else\npow2_size end)::numeric/1024/1024/1024,3) as method2\npostgres-# from memstats\npostgres-# where pow2_size > 0;\n pow2_size | method1 | method2\n-----------+---------+---------\n 10.269 | 16.322 | 10.476\n\nI've attached the crude patch I came up with for this. For some\nreason it was crashing on Linux, but it ran ok on Windows, so I used\nthe results from that instead. Maybe that accounts for some\ndifferences as e.g sizeof(long) == 4 on 64-bit windows. I'd be\nsurprised if that accounted for so many GBs though.\n\nI also forgot to add code to GenerationRealloc and AllocSetRealloc\n\nDavid",
"msg_date": "Thu, 1 Sep 2022 09:46:26 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On 8/31/22 23:46, David Rowley wrote:\n> On Thu, 1 Sept 2022 at 08:53, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> So the raw size (what we asked for) is ~23.5GB, but in practice we\n>> allocate ~28.8GB because of the pow-of-2 logic. And by adding the extra\n>> 1B we end up allocating 31.5GB. That doesn't seem like a huge increase,\n>> and it's far from the +60% you got.\n>>\n>> I wonder where does the difference come - I did make installcheck too,\n>> so how come you get 10/16GB, and I get 28/31GB? My patch is attached,\n>> maybe I did something silly.\n> \n> The reason my reported results were lower is because I ignored size >\n> allocChunkLimit allocations. These are not raised to the next power of\n> 2, so I didn't think they should be included.\n\nIf I differentiate the large chunks allocated separately (v2 patch\nattached), I get this:\n\n f | t | count | s1 | s2 | s3\n-----------------+----------+----------+----------+----------+----------\n AllocSetAlloc | normal | 60714914 | 4982 | 6288 | 8185\n AllocSetAlloc | separate | 824390 | 18245 | 18245 | 18251\n AllocSetRelloc | normal | 182070 | 763 | 826 | 1423\n GenerationAlloc | normal | 2118115 | 68 | 90 | 102\n GenerationAlloc | separate | 28 | 0 | 0 | 0\n(5 rows)\n\nWhere s1 is the sum of requested sizes, s2 is the sum of allocated\nchunks, and s3 is chunks allocated with 1B sentinel.\n\nFocusing on the aset, vast majority of allocations (60M out of 64M) is\nsmall enough to use power-of-2 logic, and we go from 6.3GB to 8.2GB, so\n~30%. Not great, not terrible.\n\nFor the large allocations, there's almost no increase - in the last\nquery I used the power-of-2 logic in the calculations, but that was\nincorrect, of course.\n\n\n> \n> I'm not sure why you're seeing only a 3GB additional overhead. 
I\n> noticed a logic error in my query where I was checking\n> maxaligned_size=pow2_size and doubling that to give sentinel space.\n> That really should have been \"case size=pow2_size then pow2_size * 2\n> else pow2_size end\", However, after adjusting the query, it does not\n> seem to change the results much:\n> \n> postgres=# select\n> postgres-# round(sum(pow2_Size)::numeric/1024/1024/1024,3) as pow2_size,\n> postgres-# round(sum(case when size=pow2_size then pow2_size*2 else\n> pow2_size end)::numeric/1024/1024/1024,3) as method1,\n> postgres-# round(sum(case when size=pow2_size then pow2_size+8 else\n> pow2_size end)::numeric/1024/1024/1024,3) as method2\n> postgres-# from memstats\n> postgres-# where pow2_size > 0;\n> pow2_size | method1 | method2\n> -----------+---------+---------\n> 10.269 | 16.322 | 10.476\n> \n> I've attached the crude patch I came up with for this. For some\n> reason it was crashing on Linux, but it ran ok on Windows, so I used\n> the results from that instead. Maybe that accounts for some\n> differences as e.g sizeof(long) == 4 on 64-bit windows. I'd be\n> surprised if that accounted for so many GBs though.\n> \n\nI tried to use that patch, but \"make installcheck\" never completes for\nme, for some reason. It seems to get stuck in infinite_recurse.sql, but\nI haven't looked into the details.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 1 Sep 2022 02:12:20 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> Focusing on the aset, vast majority of allocations (60M out of 64M) is\n> small enough to use power-of-2 logic, and we go from 6.3GB to 8.2GB, so\n> ~30%. Not great, not terrible.\n\nNot sure why this escaped me before, but I remembered another argument\nfor not forcibly adding space for a sentinel: if you don't have room,\nthat means the chunk end is up against the header for the next chunk,\nwhich means that any buffer overrun will clobber that header. So we'll\ndetect the problem anyway if we validate the headers to a reasonable\nextent.\n\nThe hole in this argument is that the very last chunk allocated in a\nblock might have no following chunk to validate. But we could probably\nspecial-case that somehow, maybe by laying down a sentinel in the free\nspace, where it will get overwritten by the next chunk when that does\nget allocated.\n\n30% memory bloat seems like a high price to pay if it's adding negligible\ndetection ability, which it seems is true if this argument is valid.\nIs there reason to think we can't validate headers enough to catch\nclobbers?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Aug 2022 20:23:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Thu, 1 Sept 2022 at 12:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Is there reason to think we can't validate headers enough to catch\n> clobbers?\n\nFor non-sentinel chunks, the next byte after the end of the chunk will\nbe storing the block offset for the following chunk. I think:\n\nif (block != MemoryChunkGetBlock(chunk))\nelog(WARNING, \"problem in alloc set %s: bad block offset for chunk %p\nin block %p\",\nname, chunk, block);\n\nshould catch those.\n\nMaybe we should just consider always making room for a sentinel for\nchunks that are on dedicated blocks. At most that's an extra 8 bytes\nin some allocation that's either over 1024 or 8192 (depending on\nmaxBlockSize).\n\nDavid\n\n\n",
"msg_date": "Thu, 1 Sep 2022 12:31:58 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Maybe we should just consider always making room for a sentinel for\n> chunks that are on dedicated blocks. At most that's an extra 8 bytes\n> in some allocation that's either over 1024 or 8192 (depending on\n> maxBlockSize).\n\nAgreed, if we're not doing that already then we should.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Aug 2022 20:46:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "\n\nOn 9/1/22 02:23, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> Focusing on the aset, vast majority of allocations (60M out of 64M) is\n>> small enough to use power-of-2 logic, and we go from 6.3GB to 8.2GB, so\n>> ~30%. Not great, not terrible.\n> \n> Not sure why this escaped me before, but I remembered another argument\n> for not forcibly adding space for a sentinel: if you don't have room,\n> that means the chunk end is up against the header for the next chunk,\n> which means that any buffer overrun will clobber that header. So we'll\n> detect the problem anyway if we validate the headers to a reasonable\n> extent.\n> \n> The hole in this argument is that the very last chunk allocated in a\n> block might have no following chunk to validate. But we could probably\n> special-case that somehow, maybe by laying down a sentinel in the free\n> space, where it will get overwritten by the next chunk when that does\n> get allocated.\n> \n> 30% memory bloat seems like a high price to pay if it's adding negligible\n> detection ability, which it seems is true if this argument is valid.\n> Is there reason to think we can't validate headers enough to catch\n> clobbers?\n> \n\nI'm not quite convinced the 30% figure is correct - it might be if you\nignore cases exceeding allocChunkLimit, but that also makes it pretty\nbogus (because large allocations represent ~2x as much space).\n\nYou're probably right we'll notice the clobber cases due to corruption\nof the next chunk header. The annoying thing is having a corrupted\nheader only tells you there's a corruption somewhere, but it may be hard\nto know which part of the code caused it. I was hoping the sentinel\nwould make it easier, because we mark it as NOACCESS for valgrind. But\nnow I see we mark the first part of a MemoryChunk too, so maybe that's\nenough.\n\nOTOH we have platforms where valgrind is either not supported or no one\nruns tests with (e.g. 
on rpi4 it'd take insane amounts of code). In that\ncase the sentinel might be helpful, especially considering alignment on\nthose platforms can cause funny memory issues, as evidenced by this thread.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 1 Sep 2022 02:49:36 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> You're probably right we'll notice the clobber cases due to corruption\n> of the next chunk header. The annoying thing is having a corrupted\n> header only tells you there's a corruption somewhere, but it may be hard\n> to know which part of the code caused it.\n\nSame's true of a sentinel, though.\n\n> OTOH we have platforms where valgrind is either not supported or no one\n> runs tests with (e.g. on rpi4 it'd take insane amounts of code).\n\nAccording to\nhttps://valgrind.org/info/platforms.html\nvalgrind supports a pretty respectable set of platforms. It might\nbe too slow to be useful on ancient hardware, of course.\n\nI've had some success in identifying clobber perpetrators by putting\na hardware watchpoint on the clobbered word, which IIRC does work on\nrecent ARM hardware. It's tedious and far more manual than valgrind,\nbut it's possible.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Aug 2022 21:06:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 30 Aug 2022 at 13:16, David Rowley <dgrowleyml@gmail.com> wrote:\n> I'm also wondering if this should also be backpatched back to v10,\n> providing the build farm likes it well enough on master.\n\nDoes anyone have any objections to d5ee4db0e in its entirety being backpatched?\n\nDavid\n\n\n",
"msg_date": "Thu, 1 Sep 2022 15:15:05 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Does anyone have any objections to d5ee4db0e in its entirety being backpatched?\n\nIt doesn't seem to be fixing any live bug in the back branches,\nbut by the same token it's harmless.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Sep 2022 00:06:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Thu, 1 Sept 2022 at 16:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > Does anyone have any objections to d5ee4db0e in its entirety being backpatched?\n>\n> It doesn't seem to be fixing any live bug in the back branches,\n> but by the same token it's harmless.\n\nI considered that an extension might use the Slab allocator with a\nnon-MAXALIGNED chunksize and might run into some troubles during\nSlabCheck().\n\nDavid\n\n\n",
"msg_date": "Thu, 1 Sep 2022 16:08:10 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Thu, 1 Sept 2022 at 16:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> It doesn't seem to be fixing any live bug in the back branches,\n>> but by the same token it's harmless.\n\n> I considered that an extension might use the Slab allocator with a\n> non-MAXALIGNED chunksize and might run into some troubles during\n> SlabCheck().\n\nOh, yeah, the sentinel_ok() change is a live bug. Extensions\nhave no control over sizeof(SlabBlock) though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Sep 2022 00:19:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Thu, 1 Sept 2022 at 12:46, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > Maybe we should just consider always making room for a sentinel for\n> > chunks that are on dedicated blocks. At most that's an extra 8 bytes\n> > in some allocation that's either over 1024 or 8192 (depending on\n> > maxBlockSize).\n>\n> Agreed, if we're not doing that already then we should.\n\nHere's a patch to that effect.\n\nI've made it so that there's always space for the sentinel for all\ngeneration.c and slab.c allocations. There is no power of 2 rounding\nwith those, so no concern about doubling the memory for power-of-2\nsized allocations.\n\nWith aset.c, I'm only adding sentinel space when size >\nallocChunkLimit, aka external chunks.\n\nDavid",
"msg_date": "Fri, 2 Sep 2022 20:11:12 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Fri, 2 Sept 2022 at 20:11, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 1 Sept 2022 at 12:46, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > David Rowley <dgrowleyml@gmail.com> writes:\n> > > Maybe we should just consider always making room for a sentinel for\n> > > chunks that are on dedicated blocks. At most that's an extra 8 bytes\n> > > in some allocation that's either over 1024 or 8192 (depending on\n> > > maxBlockSize).\n> >\n> > Agreed, if we're not doing that already then we should.\n>\n> Here's a patch to that effect.\n\nIf there are no objections, then I plan to push that patch soon.\n\nDavid\n\n\n",
"msg_date": "Tue, 6 Sep 2022 01:41:55 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-29 17:26:29 +1200, David Rowley wrote:\n> On Mon, 29 Aug 2022 at 10:39, David Rowley <dgrowleyml@gmail.com> wrote:\n> > One more try to make CFbot happy.\n>\n> After a bit more revision, mostly updating outdated comments and\n> naming adjustments, I've pushed this.\n\nResponding to Tom's email about guc.c changes [1], I was looking at\nMemoryContextContains(). Unless I am missing something, the patch omitted\nadjusting that? We'll probably always return false right now.\n\nProbably should have something that tests that MemoryContextContains() works\nat least to some degree. Perhaps a test in regress.c?\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/2982579.1662416866%40sss.pgh.pa.us\n\n\n",
"msg_date": "Mon, 5 Sep 2022 16:09:49 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 6 Sept 2022 at 11:09, Andres Freund <andres@anarazel.de> wrote:\n> I was looking at\n> MemoryContextContains(). Unless I am missing something, the patch omitted\n> adjusting that? We'll probably always return false right now.\n\nOops. Yes. I'll push a fix a bit later.\n\nDavid\n\n\n",
"msg_date": "Tue, 6 Sep 2022 12:10:21 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Tue, 6 Sept 2022 at 11:09, Andres Freund <andres@anarazel.de> wrote:\n>> I was looking at\n>> MemoryContextContains(). Unless I am missing something, the patch omitted\n>> adjusting that? We'll probably always return false right now.\n\n> Oops. Yes. I'll push a fix a bit later.\n\nThe existing uses in nodeAgg and nodeWindowAgg failed to expose this\nbecause an incorrect false result just causes them to do extra work\n(ie, a useless datumCopy). I think there might be a memory leak\ntoo, but the regression tests wouldn't run an aggregation long\nenough to make that obvious either.\n\n+1 for adding something to regress.c that verifies that this\nworks properly for all three allocators. I suggest making\nthree contexts and cross-checking the correct results for\nall combinations of chunk A vs context B.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Sep 2022 20:27:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 6 Sept 2022 at 12:27, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > On Tue, 6 Sept 2022 at 11:09, Andres Freund <andres@anarazel.de> wrote:\n> >> I was looking at\n> >> MemoryContextContains(). Unless I am missing something, the patch omitted\n> >> adjusting that? We'll probably always return false right now.\n>\n> > Oops. Yes. I'll push a fix a bit later.\n\nI think the fix is harder than I thought, or perhaps impossible to do\ngiven how we now determine the owning MemoryContext of a pointer.\n\nThere's a comment in MemoryContextContains which says:\n\n* NB: Can't use GetMemoryChunkContext() here - that performs assertions\n* that aren't acceptable here since we might be passed memory not\n* allocated by any memory context.\n\nThat seems to indicate that we should be able to handle any random\npointer given to us (!). That comment seems more confident that'll\nwork than the function's header comment does:\n\n * Caution: this test is reliable as long as 'pointer' does point to\n * a chunk of memory allocated from *some* context. If 'pointer' points\n * at memory obtained in some other way, there is a small chance of a\n * false-positive result, since the bits right before it might look like\n * a valid chunk header by chance.\n\nHere that's just claiming the test might not be reliable and could\nreturn false-positive results.\n\nI find this entire function pretty scary as even before the context\nchanges that function seems to think it's fine to subtract sizeof(void\n*) from the given pointer and dereference that memory. That could very\nwell segfault.\n\nI wonder if there are many usages of MemoryContextContains in\nextensions. If there's not, I'd be much happier if we got rid of this\nfunction and used GetMemoryChunkContext() in nodeAgg.c and\nnodeWindowAgg.c.\n\n> +1 for adding something to regress.c that verifies that this\n> works properly for all three allocators. 
I suggest making\n> three contexts and cross-checking the correct results for\n> all combinations of chunk A vs context B.\n\nI went as far as adding an Assert to palloc(). I'm not quite sure what\nyou have in mind in regress.c\n\nAttached is a draft patch. I just don't like this function one bit.\n\nDavid",
"msg_date": "Tue, 6 Sep 2022 14:32:42 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 6 Sept 2022 at 14:32, David Rowley <dgrowleyml@gmail.com> wrote:\n> I wonder if there are many usages of MemoryContextContains in\n> extensions. If there's not, I'd be much happier if we got rid of this\n> function and used GetMemoryChunkContext() in nodeAgg.c and\n> nodeWindowAgg.c.\n\nI see postgis is one user of it, per [1]. The other extensions\nmentioned there just seem to be copying code and not using it.\n\nDavid\n\n[1] https://codesearch.debian.net/search?q=MemoryContextContains&literal=1\n\n\n",
"msg_date": "Tue, 6 Sep 2022 14:35:52 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I think the fix is harder than I thought, or perhaps impossible to do\n> given how we now determine the owning MemoryContext of a pointer.\n\n> There's a comment in MemoryContextContains which says:\n\n> * NB: Can't use GetMemoryChunkContext() here - that performs assertions\n> * that aren't acceptable here since we might be passed memory not\n> * allocated by any memory context.\n\nI think MemoryContextContains' charter is to return\n\n\tGetMemoryChunkContext(pointer) == context\n\n*except* that instead of asserting what GetMemoryChunkContext asserts,\nit should treat those cases as reasons to return false. So if you\ncan still do GetMemoryChunkContext then you can still do\nMemoryContextContains. The point of having the separate function\nis to be as forgiving as we can of bogus pointers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Sep 2022 22:43:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 6 Sept 2022 at 14:43, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think MemoryContextContains' charter is to return\n>\n> GetMemoryChunkContext(pointer) == context\n>\n> *except* that instead of asserting what GetMemoryChunkContext asserts,\n> it should treat those cases as reasons to return false. So if you\n> can still do GetMemoryChunkContext then you can still do\n> MemoryContextContains. The point of having the separate function\n> is to be as forgiving as we can of bogus pointers.\n\nOk. I've readded the Asserts that c6e0fe1f2 mistakenly removed from\nGetMemoryChunkContext() and changed MemoryContextContains() to do\nthose same pre-checks before calling GetMemoryChunkContext().\n\nI've also boosted the Assert in mcxt.c to\nAssert(MemoryContextContains(context, ret)) in each place we call the\ncontext's callback function to obtain a newly allocated pointer. I\nthink this should cover the testing.\n\nI felt the need to keep the adjustments I made to the header comment\nin MemoryContextContains() to ward off anyone who thinks it's ok to\npass this any random pointer and have it do something sane. It's much\nmore prone to misbehaving/segfaulting now given the extra dereferences\nthat c6e0fe1f2 added to obtain a pointer to the owning context.\n\nDavid",
"msg_date": "Tue, 6 Sep 2022 15:17:24 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 6 Sept 2022 at 01:41, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Fri, 2 Sept 2022 at 20:11, David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > On Thu, 1 Sept 2022 at 12:46, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > David Rowley <dgrowleyml@gmail.com> writes:\n> > > > Maybe we should just consider always making room for a sentinel for\n> > > > chunks that are on dedicated blocks. At most that's an extra 8 bytes\n> > > > in some allocation that's either over 1024 or 8192 (depending on\n> > > > maxBlockSize).\n> > >\n> > > Agreed, if we're not doing that already then we should.\n> >\n> > Here's a patch to that effect.\n>\n> If there are no objections, then I plan to push that patch soon.\n\nI've now pushed the patch which adds the sentinel space in more cases.\n\nThe final analysis I did on the stats gathered during make\ninstallcheck show that we'll now allocate about 19MBs more over the\nentire installcheck run out of about 26GBs total allocations.\n\nThat analysis looks something like:\n\nBefore:\n\nSELECT CASE\n WHEN pow2_size > 0\n AND pow2_size = size THEN 'No'\n WHEN pow2_size = 0\n AND size = maxalign_size THEN 'No'\n ELSE 'Yes'\n END AS has_sentinel,\n Count(*) AS n_allocations,\n Sum(CASE\n WHEN pow2_size > 0 THEN pow2_size\n ELSE maxalign_size\n END) / 1024 / 1024 mega_bytes_alloc\nFROM memstats\nGROUP BY 1;\nhas_sentinel | n_allocations | mega_bytes_alloc\n--------------+---------------+------------------\n No | 26445855 | 21556\n Yes | 37602052 | 5044\n\nAfter:\n\nSELECT CASE\n WHEN pow2_size > 0\n AND pow2_size = size THEN 'No'\n WHEN pow2_size = 0\n AND size = maxalign_size THEN 'Yes' -- this part changed\n ELSE 'Yes'\n END AS has_sentinel,\n Count(*) AS n_allocations,\n Sum(CASE\n WHEN pow2_size > 0 THEN pow2_size\n WHEN size = maxalign_size THEN maxalign_size + 8\n ELSE maxalign_size\n END) / 1024 / 1024 mega_bytes_alloc\nFROM memstats\nGROUP BY 1;\nhas_sentinel | n_allocations | 
mega_bytes_alloc\n--------------+---------------+------------------\n No | 23980527 | 2177\n Yes | 40067380 | 24442\n\nThat amounts to previously having about 58.7% of allocations having a\nsentinel up to 62.6% currently, during the installcheck run.\n\nIt seems a pretty large portion of allocation request sizes are\npower-of-2 sized and use AllocSet.\n\nDavid\n\n\n",
"msg_date": "Wed, 7 Sep 2022 16:13:37 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> It seems a pretty large portion of allocation request sizes are\n> power-of-2 sized and use AllocSet.\n\nNo surprise there, we've been programming with aset.c's behavior\nin mind for ~20 years ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Sep 2022 00:56:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
    "msg_contents": "On Wed, 07/09/2022 at 16:13 +1200, David Rowley wrote:\n> On Tue, 6 Sept 2022 at 01:41, David Rowley <dgrowleyml@gmail.com> wrote:\n> > \n> > On Fri, 2 Sept 2022 at 20:11, David Rowley <dgrowleyml@gmail.com> wrote:\n> > > \n> > > On Thu, 1 Sept 2022 at 12:46, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > > \n> > > > David Rowley <dgrowleyml@gmail.com> writes:\n> > > > > Maybe we should just consider always making room for a sentinel for\n> > > > > chunks that are on dedicated blocks. At most that's an extra 8 bytes\n> > > > > in some allocation that's either over 1024 or 8192 (depending on\n> > > > > maxBlockSize).\n> > > > \n> > > > Agreed, if we're not doing that already then we should.\n> > > \n> > > Here's a patch to that effect.\n> > \n> > If there are no objections, then I plan to push that patch soon.\n> \n> I've now pushed the patch which adds the sentinel space in more cases.\n> \n> The final analysis I did on the stats gathered during make\n> installcheck show that we'll now allocate about 19MBs more over the\n> entire installcheck run out of about 26GBs total allocations.\n> \n> That analysis looks something like:\n> \n> Before:\n> \n> SELECT CASE\n> WHEN pow2_size > 0\n> AND pow2_size = size THEN 'No'\n> WHEN pow2_size = 0\n> AND size = maxalign_size THEN 'No'\n> ELSE 'Yes'\n> END AS has_sentinel,\n> Count(*) AS n_allocations,\n> Sum(CASE\n> WHEN pow2_size > 0 THEN pow2_size\n> ELSE maxalign_size\n> END) / 1024 / 1024 mega_bytes_alloc\n> FROM memstats\n> GROUP BY 1;\n> has_sentinel | n_allocations | mega_bytes_alloc\n> --------------+---------------+------------------\n> No | 26445855 | 21556\n> Yes | 37602052 | 5044\n> \n> After:\n> \n> SELECT CASE\n> WHEN pow2_size > 0\n> AND pow2_size = size THEN 'No'\n> WHEN pow2_size = 0\n> AND size = maxalign_size THEN 'Yes' -- this part changed\n> ELSE 'Yes'\n> END AS has_sentinel,\n> Count(*) AS n_allocations,\n> Sum(CASE\n> WHEN pow2_size > 0 THEN pow2_size\n> WHEN size = maxalign_size THEN maxalign_size + 8\n> ELSE maxalign_size\n> END) / 1024 / 1024 mega_bytes_alloc\n> FROM memstats\n> GROUP BY 1;\n> has_sentinel | n_allocations | mega_bytes_alloc\n> --------------+---------------+------------------\n> No | 23980527 | 2177\n> Yes | 40067380 | 24442\n> \n> That amounts to previously having about 58.7% of allocations having a\n> sentinel up to 62.6% currently, during the installcheck run.\n> \n> It seems a pretty large portion of allocation request sizes are\n> power-of-2 sized and use AllocSet.\n\n19MB over 26GB is almost nothing. If it is only for enable-casserts\nbuilds, then it is perfectly acceptable.\n\nregards\nYura\n",
"msg_date": "Wed, 07 Sep 2022 10:15:24 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
    "msg_contents": "On Tue, 6 Sept 2022 at 15:17, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 6 Sept 2022 at 14:43, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I think MemoryContextContains' charter is to return\n> >\n> > GetMemoryChunkContext(pointer) == context\n> >\n> > *except* that instead of asserting what GetMemoryChunkContext asserts,\n> > it should treat those cases as reasons to return false. So if you\n> > can still do GetMemoryChunkContext then you can still do\n> > MemoryContextContains. The point of having the separate function\n> > is to be as forgiving as we can of bogus pointers.\n>\n> Ok. I've readded the Asserts that c6e0fe1f2 mistakenly removed from\n> GetMemoryChunkContext() and changed MemoryContextContains() to do\n> those same pre-checks before calling GetMemoryChunkContext().\n>\n> I've also boosted the Assert in mcxt.c to\n> Assert(MemoryContextContains(context, ret)) in each place we call the\n> context's callback function to obtain a newly allocated pointer. I\n> think this should cover the testing.\n>\n> I felt the need to keep the adjustments I made to the header comment\n> in MemoryContextContains() to ward off anyone who thinks it's ok to\n> pass this any random pointer and have it do something sane. It's much\n> more prone to misbehaving/segfaulting now given the extra dereferences\n> that c6e0fe1f2 added to obtain a pointer to the owning context.\n\nI spent some time looking at our existing usages of\nMemoryContextContains() to satisfy myself that we'll only ever pass in\na pointer to memory allocated by a MemoryContext and pushed this\npatch.\n\nI put some notes in the commit message about it being unsafe now to\npass in arbitrary pointers to MemoryContextContains(). Just a note to\nthe archives that I'd personally feel much better if we just removed\nthis function in favour of using GetMemoryChunkContext() instead. That\nwould force extension authors using MemoryContextContains() to rewrite\nand revalidate their code. I feel that it's unlikely anyone will\nnotice until something crashes otherwise. Hopefully that'll happen\nbefore their extension is released.\n\nDavid\n\n\n",
"msg_date": "Thu, 8 Sep 2022 00:29:22 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Hi,\n\nOn Thu, Sep 08, 2022 at 12:29:22AM +1200, David Rowley wrote:\n>\n> I spent some time looking at our existing usages of\n> MemoryContextContains() to satisfy myself that we'll only ever pass in\n> a pointer to memory allocated by a MemoryContext and pushed this\n> patch.\n\nFYI lapwing isn't happy with this patch:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lapwing&dt=2022-09-07%2012%3A40%3A16.\n\n\n",
"msg_date": "Wed, 7 Sep 2022 21:05:52 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Thu, 8 Sept 2022 at 01:05, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> FYI lapwing isn't happy with this patch:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lapwing&dt=2022-09-07%2012%3A40%3A16.\n\nThanks. It does seem to be because of the nodeWindowAgg.c usage of\nMemoryContextContains.\n\nI'll look into it further.\n\nDavid\n\n\n",
"msg_date": "Thu, 8 Sep 2022 01:22:39 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Thu, 8 Sept 2022 at 01:22, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 8 Sept 2022 at 01:05, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > FYI lapwing isn't happy with this patch:\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lapwing&dt=2022-09-07%2012%3A40%3A16.\n>\n> I'll look into it further.\n\nLooks like my analysis wasn't that good in nodeWindowAgg.c. The\nreason it's crashing is due to int2int4_sum() returning\nInt64GetDatumFast(transdata->sum). For 64-bit machines,\nInt64GetDatumFast() translates to Int64GetDatum() and and that's\nbyval, so the MemoryContextContains() call is not triggered, but on\n32-bit machines that's PointerGetDatum() and a byref type, and we're\nreturning a pointer to transdata->sum, which is part way into an\nallocation.\n\nFunnily, the struct looks like:\n\ntypedef struct Int8TransTypeData\n{\nint64 count;\nint64 sum;\n} Int8TransTypeData;\n\nso the previous version of MemoryContextContains() would have\nsubtracted sizeof(void *) from &transdata->sum which, on this 32-bit\nmachine would have pointed halfway up the \"count\" field. That count\nfield seems like it would be a good candidate for the \"false positive\"\nthat the previous comment in MemoryContextContains mentioned about. So\nit looks like it had about a 1 in 2^32 odds of doing the wrong thing\nbefore.\n\nHad the fields in that struct happened to be in the opposite order,\nthen I don't think it would have crashed, but that's certainly no fix.\n\nI'll need to think about how best to fix this. In the meantime, I\nthink the other 32-bit animals are probably not going to like this\neither :-(\n\nDavid\n\n\n",
"msg_date": "Thu, 8 Sep 2022 01:55:51 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
    "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Looks like my analysis wasn't that good in nodeWindowAgg.c. The\n> reason it's crashing is due to int2int4_sum() returning\n> Int64GetDatumFast(transdata->sum).\n\nUgh. I thought for a bit about whether we could define that as wrong,\nbut it's returning a portion of its input, which seems kosher enough\n(not much different from array subscripting, for instance).\n\n> I'll need to think about how best to fix this. In the meantime, I\n> think the other 32-bit animals are probably not going to like this\n> either :-(\n\nYeah. The basic problem here is that we've greatly reduced the amount\nof redundancy in chunk headers.\n\nPerhaps we need to proceed about like this:\n\n1. Check the provided pointer is non-null and maxaligned\n(if not, return false).\n\n2. Pull out the mcxt type bits and check that they match the\ntype of the provided context.\n\n3. If 1 and 2 pass, it's safe enough to call a context-specific\ncheck routine.\n\n4. For aset.c, I'd be inclined to have it compute the AllocBlock\naddress implied by the putative chunk header, and then run through\nthe context's alloc blocks and see if any of them match. If we\ndo find a match, and the chunk address is within the allocated\nlength of the block, call it good. Probably the same could be done\nfor the other two methods.\n\nStep 4 is annoyingly expensive, but perhaps not too awful given\nthe way we step up alloc block sizes. We should make sure that\nany context we want to use MemoryContextContains with is allowed\nto let its blocks grow large, so that there can't be too many\nof them.\n\nI don't see a way to do better if we're afraid to dereference\nthe computed AllocBlock address.\n\nBTW, if we do it this way, what we'd actually be guaranteeing\nis that the address is within some alloc block belonging to\nthe context; it wouldn't quite prove that the address corresponds\nto a currently-allocated chunk. That'd be good enough probably\nfor the important use-cases. In particular it'd be 100% correct\nat rejecting chunks of other contexts and chunks gotten from\nraw malloc().\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Sep 2022 11:08:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Thu, 8 Sept 2022 at 03:08, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> 4. For aset.c, I'd be inclined to have it compute the AllocBlock\n> address implied by the putative chunk header, and then run through\n> the context's alloc blocks and see if any of them match. If we\n> do find a match, and the chunk address is within the allocated\n> length of the block, call it good. Probably the same could be done\n> for the other two methods.\n>\n> Step 4 is annoyingly expensive, but perhaps not too awful given\n> the way we step up alloc block sizes. We should make sure that\n> any context we want to use MemoryContextContains with is allowed\n> to let its blocks grow large, so that there can't be too many\n> of them.\n\nThanks for the idea. I've not come up with anything better other than\nremove the calls to MemoryContextContains and just copy the Datum each\ntime. That doesn't fix the problems with function, however.\n\nI'll go code up your idea and see if doing that triggers any other ideas.\n\nDavid\n\n\n",
"msg_date": "Thu, 8 Sep 2022 09:32:10 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Thu, 8 Sept 2022 at 09:32, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 8 Sept 2022 at 03:08, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Step 4 is annoyingly expensive, but perhaps not too awful given\n> > the way we step up alloc block sizes. We should make sure that\n> > any context we want to use MemoryContextContains with is allowed\n> > to let its blocks grow large, so that there can't be too many\n> > of them.\n>\n> I'll go code up your idea and see if doing that triggers any other ideas.\n\nI've attached a very much draft grade patch for this. I have a couple\nof thoughts:\n\n1. I should remove all the Assert(MemoryContextContains(context,\nret)); I littered around mcxt.c. This function is not as cheap as it\nonce was and I'm expecting that Assert to be a bit too expensive now.\n2. I changed the header comment in MemoryContextContains again, but I\nremoved the part about false positives since I don't believe that is\npossible now. What I do think is just as possible as it was before is\na segfault. We're still accessing the 8 bytes prior to the given\npointer and there's a decent chance that would segfault when working\nwith a pointer which was returned by malloc. I imagine I'm not the\nonly C programmer around that dislikes writing comments along the\nlines of \"this might segfault, but...\"\n3. For external chunks, I'd coded MemoryChunk to put a magic number in\nthe 60 free bits of the hdrmask. Since we still need to call\nMemoryChunkIsExternal on the given pointer, that function will Assert\nthat the magic number matches if the external chunk bit is set. We\ncan't expect that magic number check to pass when the external bit\njust happens to be on because it's not a MemoryChunk we're looking at.\nFor now I commented out those Asserts to make the tests pass. Not sure\nwhat's best there, maybe another version of MemoryChunkIsExternal or\nexport the underlying macro. I'm currently more focused on what I\nwrote in #2.\n\nDavid",
"msg_date": "Thu, 8 Sep 2022 12:13:11 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-07 11:08:27 -0400, Tom Lane wrote:\n> > I'll need to think about how best to fix this. In the meantime, I\n> > think the other 32-bit animals are probably not going to like this\n> > either :-(\n>\n> Yeah. The basic problem here is that we've greatly reduced the amount\n> of redundancy in chunk headers.\n\nEven with the prior amount of redundancy it's quite scary. It's one thing if\nthe only consequence is a bit of added overhead - but if we *don't* copy the\ntuple due to a false positive we're in trouble. Afaict the user would have\nsome control over the memory contents and so an attacker could work on\ntriggering this issue. MemoryContextContains() may be ok for an assertion or\nsuch, but relying on it for correctness seems a bad idea.\n\nI wonder if we can get away from needing these checks, without unnecessarily\ncopying datums every time:\n\nIf there is no finalfunc, we know that the tuple ought to be in curaggcontext\nor such, and we need to copy.\n\nIf there is a finalfunc, they're typically going to return data from within\nthe current memory context, but could legitimately also return part of the\ndata from curaggcontext. Perhaps we could forbid that? Our docs already say\nthe following for serialization funcs:\n\n The result of the deserialization function should simply be allocated in the\n current memory context, as unlike the combine function's result, it is not\n long-lived.\n\nPerhaps we could institute a similar rule for finalfuncs? The argument against\nthat is that we can use arbitrary functions as finalfuncs currently. Perhaps\nwe could treat taking an internal argument as opting into the requirement and\ndefault to copying otherwise?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 8 Sep 2022 09:44:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> If there is a finalfunc, they're typically going to return data from within\n> the current memory context, but could legitimately also return part of the\n> data from curaggcontext. Perhaps we could forbid that?\n\nNo, I don't think we can get away with that. See int8inc() for a\ncounterexample.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Sep 2022 14:10:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-08 14:10:36 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > If there is a finalfunc, they're typically going to return data from within\n> > the current memory context, but could legitimately also return part of the\n> > data from curaggcontext. Perhaps we could forbid that?\n> \n> No, I don't think we can get away with that. See int8inc() for a\n> counterexample.\n\nWhat I was suggesting a bit below the bit quoted above, was that we'd copy\nwhenever there's no finalfunc or if the finalfunc doesn't take an internal\nparameter. And that finalfuncs returning byref with an internal parameter can\nbe expected to return memory allocated in the right context (which we of\ncourse could check with an assert). It's not super satisfying - but I don't\nthink it'd have the problem you describe above.\n\nAlternatively we could add a column to pg_aggregate denoting this. That'd only\nbe permissible to set for a superuser presumably.\n\n\nThis business with interpreting random memory as a palloc'd chunk seems like a\nfundamentally wrong approach worth incurring some pain to fix.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 8 Sep 2022 15:15:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-09-08 14:10:36 -0400, Tom Lane wrote:\n>> No, I don't think we can get away with that. See int8inc() for a\n>> counterexample.\n\n> What I was suggesting a bit below the bit quoted above, was that we'd copy\n> whenever there's no finalfunc or if the finalfunc doesn't take an internal\n> parameter.\n\nHmm, OK, I was confusing this with the optimization for transition\nfunctions; but that one is looking for pointer equality rather than\nchecking MemoryContextContains. So maybe this'd work.\n\n> This business with interpreting random memory as a palloc'd chunk seems like a\n> fundamentally wrong approach worth incurring some pain to fix.\n\nI hate to give up MemoryContextContains altogether. The assertions\nthat David nuked in b76fb6c2a had some value I think, and I was hoping\nto address your concerns in [1] by adding Assert(MemoryContextContains())\nto guc_free. But I'm not sure how much that'll help to diagnose you-\nmalloced-instead-of-pallocing if the result is not an assertion failure\nbut a segfault in a not-obviously-related place. The failure at guc_free\nis already going to be some distance from the scene of the crime.\n\nThe implementation I suggested upthread would reliably distinguish\nmalloc from palloc, and while it is potentially a tad expensive\nI don't think it's too much so for Assert checks. I don't have an\nobjection to trying to get to a place where we only use it in\nAssert, though.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/20220905233233.jhcu5jqsrtosmgh5%40awork3.anarazel.de\n\n\n",
"msg_date": "Thu, 08 Sep 2022 18:53:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Fri, 9 Sept 2022 at 10:53, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I hate to give up MemoryContextContains altogether. The assertions\n> that David nuked in b76fb6c2a had some value I think,\n\nThose can be put back if we decide to keep MemoryContextContains.\nThose newly added Asserts just temporarily had to go due to b76fb6c2a\nmaking MemoryContextContains temporarily always return false.\n\n> The implementation I suggested upthread would reliably distinguish\n> malloc from palloc, and while it is potentially a tad expensive\n> I don't think it's too much so for Assert checks. I don't have an\n> objection to trying to get to a place where we only use it in\n> Assert, though.\n\nI really think the Assert only form of MemoryContextContains() is the\nbest move, and if it's doing Assert only, then we can do the\nloop-over-the-blocks idea as you described and I drafted in [1].\n\nIf the need comes up that we're certain we always have a pointer to\nsome allocated chunk, but need to know if it's in some memory context,\nthen the proper form of expressing that, I think, should be:\n\nif (GetMemoryChunkContext(pointer) == somecontext)\n\nIf we're worried about getting that wrong, we can beef up the\nMemoryChunk struct with a magic_number field in\nMEMORY_CONTEXT_CHECKING builds to ensure we catch any code which\npasses invalid pointers.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvoKjOmPQeokicwDuO-_Edh=tKp23-=jskYcyKfw5QuDhA@mail.gmail.com\n\n\n",
"msg_date": "Fri, 9 Sep 2022 11:33:37 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
    "msg_contents": "On Fri, 9 Sept 2022 at 11:33, David Rowley <dgrowleyml@gmail.com> wrote:\n> I really think the Assert only form of MemoryContextContains() is the\n> best move, and if it's doing Assert only, then we can do the\n> loop-over-the-blocks idea as you described and I drafted in [1].\n>\n> If the need comes up that we're certain we always have a pointer to\n> some allocated chunk, but need to know if it's in some memory context,\n> then the proper form of expressing that, I think, should be:\n>\n> if (GetMemoryChunkContext(pointer) == somecontext)\n>\n> If we're worried about getting that wrong, we can beef up the\n> MemoryChunk struct with a magic_number field in\n> MEMORY_CONTEXT_CHECKING builds to ensure we catch any code which\n> passes invalid pointers.\n\nI've attached a patch series which is my proposal on what we should do\nabout MemoryContextContains. In summary, this basically changes\nMemoryContextContains() so it's only available in\nMEMORY_CONTEXT_CHECKING builds and removes 4 current usages of the\nfunction.\n\n0001: Makes MemoryContextContains work again with the\nloop-over-the-blocks method mentioned by Tom.\n0002: Adds a new \"chunk_magic\" field to MemoryChunk. My thoughts are\nthat it might be a good idea to do this so that we get Assert failures\nif we use functions like GetMemoryChunkContext() on a pointer that's\nnot a MemoryChunk.\n0003: This adjusts aggregate final functions and window functions so\nthat any byref Datum they return is allocated in CurrentMemoryContext\n0004: Makes MemoryContextContains only available in\nMEMORY_CONTEXT_CHECKING builds and adjusts our usages of the function\nto use GetMemoryChunkContext() instead.\n\nAn alternative to 0004, would be more along the lines of what was\nmentioned by Andres and just Assert that the returned value is in the\nmemory context that we expect. I don't think we need to do anything\nspecial with aggregates that take an internal state. I think the rule\nis just as simple as; all final functions and window functions must\nreturn any byref values in CurrentMemoryContext. For aggregates\nwithout a finalfn, we can just datumCopy() the returned byref value.\nThere's no chance for those to be in CurrentMemoryContext anyway. The\nreturn value must be in the aggregate state's context. The attached\nassert.patch shows that this holds true in make check after applying\neach of the other patches.\n\nI see that one of the drawbacks from not using MemoryContextContains()\nis that window functions such as lead(), lag(), first_value(),\nlast_value() and nth_value() may now do the datumCopy() when it's not\nneeded. For example, with a window function call such as\nlead(byref_col ), the expression evaluation code in\nWinGetFuncArgInPartition() will just return the address in the\ntuplestore tuple for \"byref_col\". The datumCopy() is needed for that.\nHowever, if the function call was lead(byref_col || 'something') then\nwe'd have ended up with a new allocation in CurrentMemoryContext to\nconcatenate the two values. We'll now do a datumCopy() where we\npreviously wouldn't have. I don't really see any way around that\nwithout doing some highly invasive surgery to the expression\nevaluation code.\n\nNone of the attached patches are polished. I can do that once we agree\non the best way to fix the issue.\n\nDavid",
"msg_date": "Tue, 13 Sep 2022 20:27:39 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 13 Sept 2022 at 20:27, David Rowley <dgrowleyml@gmail.com> wrote:\n> I see that one of the drawbacks from not using MemoryContextContains()\n> is that window functions such as lead(), lag(), first_value(),\n> last_value() and nth_value() may now do the datumCopy() when it's not\n> needed. For example, with a window function call such as\n> lead(byref_col ), the expression evaluation code in\n> WinGetFuncArgInPartition() will just return the address in the\n> tuplestore tuple for \"byref_col\". The datumCopy() is needed for that.\n> However, if the function call was lead(byref_col || 'something') then\n> we'd have ended up with a new allocation in CurrentMemoryContext to\n> concatenate the two values. We'll now do a datumCopy() where we\n> previously wouldn't have. I don't really see any way around that\n> without doing some highly invasive surgery to the expression\n> evaluation code.\n\nIt feels like a terrible idea, but I wondered if we could look at the\nWindowFunc->args and make a decision if we should do the datumCopy()\nbased on the type of the argument. Vars would need to be copied as\nthey will point into the tuple's memory, but an OpExpr likely would\nnot need to be copied.\n\nAside from that, I don't have any ideas on how to get rid of the\npossible additional datumCopy() from non-Var arguments to these window\nfunctions. Should we just suffer it? It's quite likely that most\narguments to these functions are plain Vars anyway.\n\nDavid\n\n\n",
"msg_date": "Tue, 20 Sep 2022 13:11:56 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Aside from that, I don't have any ideas on how to get rid of the\n> possible additional datumCopy() from non-Var arguments to these window\n> functions. Should we just suffer it? It's quite likely that most\n> arguments to these functions are plain Vars anyway.\n\nNo, we shouldn't. I'm pretty sure that we have various window\nfunctions that are deliberately designed to take advantage of the\nno-copy behavior, and that they have taken a significant speed\nhit from your having disabled that optimization. I don't say\nthat this is enough to justify reverting the chunk header changes\naltogether ... but I'm completely not satisfied with the current\nsituation in HEAD.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Sep 2022 21:23:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 20 Sept 2022 at 13:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > Aside from that, I don't have any ideas on how to get rid of the\n> > possible additional datumCopy() from non-Var arguments to these window\n> > functions. Should we just suffer it? It's quite likely that most\n> > arguments to these functions are plain Vars anyway.\n>\n> No, we shouldn't. I'm pretty sure that we have various window\n> functions that are deliberately designed to take advantage of the\n> no-copy behavior, and that they have taken a significant speed\n> hit from your having disabled that optimization. I don't say\n> that this is enough to justify reverting the chunk header changes\n> altogether ... but I'm completely not satisfied with the current\n> situation in HEAD.\n\nMaybe you've forgotten that MemoryContextContains() is broken in the\nback branches or just don't think it is broken?\n\nDavid\n\n\n",
"msg_date": "Tue, 20 Sep 2022 16:49:29 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Tue, 20 Sept 2022 at 13:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ... but I'm completely not satisfied with the current\n>> situation in HEAD.\n\n> Maybe you've forgotten that MemoryContextContains() is broken in the\n> back branches or just don't think it is broken?\n\n\"Broken\" is a strong claim. There's reason to think it could fail\nin the back branches, but little evidence that it actually has failed\nin the field. So yeah, we have work to do --- which is the exact\nopposite of your apparent stand that we can walk away from the\nproblem.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 20 Sep 2022 01:23:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 20 Sept 2022 at 17:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \"Broken\" is a strong claim. There's reason to think it could fail\n> in the back branches, but little evidence that it actually has failed\n> in the field.\n\nI've posted some code to the security list that shows that I can get\nMemoryContextContains() to return true when it should return false.\nThis results in the datumCopy() in eval_windowfunction() being skipped\nand the result of the window function being left in the incorrect\nmemory context. I was unable to get this to produce a crash, but if\nthere's some way to have the result point into a shared buffer page\nand that page is evicted and replaced with something else before the\nvalue is used then we'd have issues.\n\n> So yeah, we have work to do --- which is the exact\n> opposite of your apparent stand that we can walk away from the\n> problem.\n\nMy problem is that I'm unable to think of a way to fix something I see\nas an existing bug. I've given it a week and nobody else has come\nforward with any proposals on how to fix this. I'm very open to\nfinding some way to allow us to keep this optimisation, but so far\nI've been unable to. We have reverted broken optimisations before.\nAlso, reverting c6e0fe1f2a does not seem that appealing to me as it\njust returns MemoryContextContains() back into a state where it can\nreturn false positives again.\n\nDavid\n\n\n",
"msg_date": "Tue, 20 Sep 2022 18:10:12 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
    "msg_contents": "On Tue, 20 Sept 2022 at 13:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > Aside from that, I don't have any ideas on how to get rid of the\n> > possible additional datumCopy() from non-Var arguments to these window\n> > functions. Should we just suffer it? It's quite likely that most\n> > arguments to these functions are plain Vars anyway.\n>\n> No, we shouldn't. I'm pretty sure that we have various window\n> functions that are deliberately designed to take advantage of the\n> no-copy behavior, and that they have taken a significant speed\n> hit from your having disabled that optimization. I don't say\n> that this is enough to justify reverting the chunk header changes\n> altogether ... but I'm completely not satisfied with the current\n> situation in HEAD.\n\nLooking more closely at window_gettupleslot(), it always allocates the\ntuple in ecxt_per_query_memory, so any column we fetch out of that\ntuple will be in that memory context. window_gettupleslot() is used\nin lead(), lag(), first_value(), last_value() and nth_value() to fetch\nthe Nth tuple out of the partition window. The other window functions\nall return BIGINT, FLOAT8 or INT which are byval on 64-bit, and on\n32-bit these functions return a freshly palloc'd Datum in the\nCurrentMemoryContext.\n\nMaybe we could remove the datumCopy() from eval_windowfunction() and\nalso document that a window function when returning a non-byval type,\nmust allocate the Datum in either ps_ExprContext's\necxt_per_tuple_memory or ecxt_per_query_memory. We could ensure any\nextension which has its own window functions get the memo about the\nAPI change by adding an Assert to ensure that the return value (for\nbyref types) is in the current context by calling the\nloop-over-the-blocks version of MemoryContextContains().\n\nThis would mean that wfuncs like lead(column_name) would no longer do\nthat extra datumCopy and the likes of lead(col || 'some OpExpr') would\nsave a little as we'd no longer call MemoryContextContains on\nnon-Assert builds.\n\nDavid\n\n\n",
"msg_date": "Tue, 27 Sep 2022 11:28:46 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 27 Sept 2022 at 11:28, David Rowley <dgrowleyml@gmail.com> wrote:\n> Maybe we could remove the datumCopy() from eval_windowfunction() and\n> also document that a window function when returning a non-byval type,\n> must allocate the Datum in either ps_ExprContext's\n> ecxt_per_tuple_memory or ecxt_per_query_memory. We could ensure any\n> extension which has its own window functions get the memo about the\n> API change by adding an Assert to ensure that the return value (for\n> byref types) is in the current context by calling the\n> loop-over-the-blocks version of MemoryContextContains().\n\nI did some work on this and it turned out that the value returned by\nany of lead(), lag(), first_value(), last_value() and nth_value()\ncould also be in MessageContext or some child context to\nCacheMemoryContext. The reason for the latter two is that cases like\nLAG(col, 1, 'default value') will return the Const in the 3rd arg when\nthe offset value is outside of the window frame. That means\nMessageContext for normal queries and it means it'll be cached in a\nchild context of CacheMemoryContext for PREPAREd queries.\n\nThis means the Assert that I wanted to add to eval_windowfunction\nbecame quite complex. Namely:\n\nAssert(perfuncstate->resulttypeByVal || fcinfo->isnull ||\n MemoryContextContains(winstate->ss.ps.ps_ExprContext->ecxt_per_tuple_memory,\n(void *) *result) ||\n MemoryContextContains(winstate->ss.ps.ps_ExprContext->ecxt_per_query_memory,\n(void *) *result) ||\n MemoryContextContains(MessageContext, (void *) *result) ||\n MemoryContextOrChildOfContains(CacheMemoryContext, (void *) *result));\n\nNotice the invention of MemoryContextOrChildOfContains() to\nrecursively search the CacheMemoryContext children. 
It does not seem\nso great as CacheMemoryContext tends to have many children and\nsearching through them all could make that Assert a bit slow.\n\nI think I am fairly happy that all the 4 memory contexts I mentioned\nin the Assert will be around long enough for the result value to not\nbe freed. It's just that the whole thing feels a bit wrong and that\nthe context the return value is in should be a bit more predictable.\n\nDoes anyone have any opinions on this?\n\nDavid",
"msg_date": "Thu, 29 Sep 2022 18:30:31 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Thu, 29 Sept 2022 at 18:30, David Rowley <dgrowleyml@gmail.com> wrote:\n> Does anyone have any opinions on this?\n\nI by no means think I've nailed the fix in\nother_ideas_to_fix_MemoryContextContains.patch, so it would be good to\nsee if anyone else has any new ideas on how to solve this issue.\n\nAndres did mention to me off-list about perhaps adding a boolean field\nto FunctionCallInfoBaseData to indicate if the return value can be\nassumed to be in CurrentMemoryContext. I feel like that might be\nquite a bit of work to go and change all functions to ensure that\nthat's properly populated. For example, look at split_part() in\nvarlena.c, it's going to be a little tedious to ensure we set that\nfield correctly there as that function sometimes returns it's input,\nsometimes returns a string constant and sometimes allocates new\nmemory. I feel fixing it this way will be error-prone and cause lots\nof future bugs.\n\nI'm also aware that the change made in b76fb6c2a becomes less\ntemporary with each day that passes, so I really would like to find a\nsolution to the MemoryContextContains issue. I'll reiterate that I\ndon't think reverting c6e0fe1f2 fixes MemoryContextContains. That\nwould just put back the behaviour of it returning true based on the\nowning MemoryContext and/or the direction that the wind is coming from\non the particular day the function is called.\n\nAlthough I do think other_ideas_to_fix_MemoryContextContains.patch\ndoes fix the problem. I also fear a few people would be reaching for\ntheir pitchforks if I was to go and commit it. However, as of now, I'm\nstarting to look more favourably at it as more time passes.\n\nDavid\n\n\n",
"msg_date": "Mon, 3 Oct 2022 12:43:58 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Andres did mention to me off-list about perhaps adding a boolean field\n> to FunctionCallInfoBaseData to indicate if the return value can be\n> assumed to be in CurrentMemoryContext. I feel like that might be\n> quite a bit of work to go and change all functions to ensure that\n> that's properly populated.\n\nThat seems like the right basic answer, but wrong in detail. We have only\na tiny number of places that care about this --- aggregates and window\nfunctions basically --- and those already have a bunch of special calling\nconventions. So changing the generic fmgr APIs has side-effects far\nbeyond what's justified.\n\nI think what we should look at is extending the aggregate/window\nfunction APIs so that such functions can report where they put their\noutput, and then we can nuke MemoryContextContains(), with the\ncode code set up to assume that it has to copy if the called function\ndidn't report anything. The existing FunctionCallInfo.resultinfo\nmechanism (cf. ReturnSetInfo for SRFs) is probably the right thing\nto pass the flag through.\n\nI can take a look at this once the release dust settles a little.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Oct 2022 15:37:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "I wrote:\n> I think what we should look at is extending the aggregate/window\n> function APIs so that such functions can report where they put their\n> output, and then we can nuke MemoryContextContains(), with the\n> code code set up to assume that it has to copy if the called function\n> didn't report anything. The existing FunctionCallInfo.resultinfo\n> mechanism (cf. ReturnSetInfo for SRFs) is probably the right thing\n> to pass the flag through.\n\nAfter studying the existing usages of MemoryContextContains, I think\nthere is a better answer, which is to just nuke them.\n\nAs far as I can tell, the datumCopy steps associated with aggregate\nfinalfns are basically useless. They only serve to prevent\nreturning a pointer directly to the aggregate's transition value\n(or, perhaps, to a portion thereof). But what's wrong with that?\nIt'll last as long as we need it to. Maybe there was a reason\nback before we required finalfns to not scribble on the transition\nvalues, but those days are gone.\n\nThe same goes for aggregate serialfns --- although there, I can't\navoid the feeling that the datumCopy step was just cargo-culted in.\nI don't think there can exist a serialfn that doesn't return a\nfreshly-palloced bytea.\n\nThe one place where we actually need the conditional datumCopy is\nwith window functions, and even there I don't think we need it\nin simple cases with only one window function. The case that is\nhazardous is where multiple window functions are sharing a\nWindowObject. So I'm content to optimize the single-window-function\ncase and just always copy if there's more than one. (Sadly, there\nis no existing regression test that catches this, so I added one.)\n\nSee attached.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 04 Oct 2022 11:55:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Wed, 5 Oct 2022 at 04:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> After studying the existing usages of MemoryContextContains, I think\n> there is a better answer, which is to just nuke them.\n\nI was under the impression you wanted to keep that function around in\ncassert builds for some of the guc.c changes you were making.\n\n> As far as I can tell, the datumCopy steps associated with aggregate\n> finalfns are basically useless. They only serve to prevent\n> returning a pointer directly to the aggregate's transition value\n> (or, perhaps, to a portion thereof). But what's wrong with that?\n> It'll last as long as we need it to. Maybe there was a reason\n> back before we required finalfns to not scribble on the transition\n> values, but those days are gone.\n\nYeah, I wondered the same thing. I couldn't see a situation where the\naggregate context would disappear.\n\n> The same goes for aggregate serialfns --- although there, I can't\n> avoid the feeling that the datumCopy step was just cargo-culted in.\n> I don't think there can exist a serialfn that doesn't return a\n> freshly-palloced bytea.\n\nMost likely. I probably copied that as I wouldn't have understood why\nwe did any copying when calling the finalfn. I still don't understand\nwhy. Seems there's no good reason if we're both in favour of removing\nit.\n\n> The one place where we actually need the conditional datumCopy is\n> with window functions, and even there I don't think we need it\n> in simple cases with only one window function. The case that is\n> hazardous is where multiple window functions are sharing a\n> WindowObject. So I'm content to optimize the single-window-function\n> case and just always copy if there's more than one. 
(Sadly, there\n> is no existing regression test that catches this, so I added one.)\n\nI was unsure what window functions might exist out in the wild, so I'd\nadded some code to pass along the return type information so that any\nextensions which need to make a copy can do so. However, maybe it's\nbetter just to wait to see if anyone complains about that before we go\nto the trouble.\n\nI've looked at your patches and don't see any problems. Our findings\nseem to be roughly the same. i.e the datumCopy is mostly useless.\nHowever, you've noticed the requirement to datumCopy when there are\nmultiple window functions using the same window along with yours\ncontaining the call to MakeExpandedObjectReadOnly() where I missed\nthat.\n\nThis should also slightly improve the performance of LEAD and LAG with\nbyref types, which seems like a good side-effect.\n\nI guess the commit message for 0002 should mention that for pointers\nto allocated chunks that GetMemoryChunkContext() can be used in place\nof MemoryContextContains(). I did see that PostGIS does use\nMemoryContextContains(), though I didn't look at their code to figure\nout if they're always passing it a pointer to an allocated chunk.\nMaybe it's worth doing;\n\n#define MemoryContextContains(c, p) (GetMemoryChunkContext(p) == (c))\n\nin memutils.h? or are we better to force extension authors to\nre-evaluate their code in case anyone is passing memory that's not\npointing directly to a palloc'd chunk?\n\nDavid\n\n\n",
"msg_date": "Thu, 6 Oct 2022 17:59:51 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Wed, 5 Oct 2022 at 04:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> After studying the existing usages of MemoryContextContains, I think\n>> there is a better answer, which is to just nuke them.\n\n> I was under the impression you wanted to keep that function around in\n> cassert builds for some of the guc.c changes you were making.\n\nI would've liked to have it, but for that purpose an unreliable version\nof MemoryContextContains is probably little help. In any case, as you\nmention, I can do something with GetMemoryChunkContext() instead.\n\n>> As far as I can tell, the datumCopy steps associated with aggregate\n>> finalfns are basically useless. They only serve to prevent\n>> returning a pointer directly to the aggregate's transition value\n>> (or, perhaps, to a portion thereof). But what's wrong with that?\n>> It'll last as long as we need it to. Maybe there was a reason\n>> back before we required finalfns to not scribble on the transition\n>> values, but those days are gone.\n\n> Yeah, I wondered the same thing. I couldn't see a situation where the\n> aggregate context would disappear.\n\nI have a feeling that we might once have aggressively reset the\naggregate context ... but we don't anymore.\n\n>> The one place where we actually need the conditional datumCopy is\n>> with window functions, and even there I don't think we need it\n>> in simple cases with only one window function. The case that is\n>> hazardous is where multiple window functions are sharing a\n>> WindowObject. So I'm content to optimize the single-window-function\n>> case and just always copy if there's more than one. (Sadly, there\n>> is no existing regression test that catches this, so I added one.)\n\n> I was unsure what window functions might exist out in the wild, so I'd\n> added some code to pass along the return type information so that any\n> extensions which need to make a copy can do so. 
However, maybe it's\n> better just to wait to see if anyone complains about that before we go\n> to the trouble.\n\nI'd originally feared that a window function might return a pointer\ninto the WindowObject's tuplestore, which we manipulate immediately\nafter the window function returns. However, AFAICS the APIs we\nprovide don't have any such hazard. The actual hazard is that we\nmight get a pointer into one of the temp slots, which are independent\nstorage because we tell them to copy the source tuple. (Maybe a comment\nabout that would be appropriate.)\n\n> I've looked at your patches and don't see any problems. Our findings\n> seem to be roughly the same. i.e the datumCopy is mostly useless.\n\nCool, I'll push in a little bit.\n\n> Maybe it's worth doing;\n\n> #define MemoryContextContains(c, p) (GetMemoryChunkContext(p) == (c))\n\n> in memutils.h? or are we better to force extension authors to\n> re-evaluate their code in case anyone is passing memory that's not\n> pointing directly to a palloc'd chunk?\n\nI think the latter. The fact that MemoryContextContains was (mostly)\nsafe on arbitrary pointers was an important part of its API IMO.\nI'm okay with giving up that property to reduce chunk overhead,\nbut we'll do nobody any service by pretending we still have it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Oct 2022 11:37:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I did see that PostGIS does use\n> MemoryContextContains(), though I didn't look at their code to figure\n> out if they're always passing it a pointer to an allocated chunk.\n\nAs far as I can tell from a cursory look, they should be able to use\nthe GetMemoryChunkContext workaround, because they are just doing this\nwith SHARED_GSERIALIZED objects, which seem to always be palloc'd.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Oct 2022 13:38:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "One more thing: based on what I saw in working with my pending guc.c\nchanges, the assertions in GetMemoryChunkMethodID are largely useless\nfor detecting bogus pointers. I think we should do something more\nlike the attached, which will result in a clean failure if the method\nID bits are invalid.\n\nI'm a little tempted also to rearrange the MemoryContextMethodID enum\nso that likely bit patterns like 000 are not valid IDs.\n\nWhile I didn't change it here, I wonder why GetMemoryChunkMethodID is\npublicly exposed at all. AFAICS it could be static inside mcxt.c.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 06 Oct 2022 14:19:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-06 14:19:21 -0400, Tom Lane wrote:\n> One more thing: based on what I saw in working with my pending guc.c\n> changes, the assertions in GetMemoryChunkMethodID are largely useless\n> for detecting bogus pointers. I think we should do something more\n> like the attached, which will result in a clean failure if the method\n> ID bits are invalid.\n\nYea, that makes sense. I wouldn't get rid of the MAXALIGN Assert though - it's\nnot replaced by the the unused mcxt stuff afaics.\n\n\n> I'm a little tempted also to rearrange the MemoryContextMethodID enum\n> so that likely bit patterns like 000 are not valid IDs.\n\nYea, I was suggesting that during a review as well. We can still relax it\nlater if we run out of bits.\n\n\n> +/*\n> + * Support routines to trap use of invalid memory context method IDs\n> + * (from calling pfree or the like on a bogus pointer).\n> + */\n> +static void\n> +BogusFree(void *pointer)\n> +{\n> +\telog(ERROR, \"pfree called with invalid pointer %p\", pointer);\n> +}\n\nMaybe worth printing the method ID as well?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 Oct 2022 12:00:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Yea, that makes sense. I wouldn't get rid of the MAXALIGN Assert though - it's\n> not replaced by the the unused mcxt stuff afaics.\n\nOK.\n\n>> +\telog(ERROR, \"pfree called with invalid pointer %p\", pointer);\n\n> Maybe worth printing the method ID as well?\n\nI doubt it'd be useful.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Oct 2022 15:10:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-06 15:10:44 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> >> +\telog(ERROR, \"pfree called with invalid pointer %p\", pointer);\n>\n> > Maybe worth printing the method ID as well?\n>\n> I doubt it'd be useful.\n\nI was thinking it could be useful to see whether the bits are likely to be the\nresult of wipe_mem(). But I guess for that we should print the whole byte,\nrather than just the method. Perhaps not worth it.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 Oct 2022 12:14:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-10-06 15:10:44 -0400, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> Maybe worth printing the method ID as well?\n\n>> I doubt it'd be useful.\n\n> I was thinking it could be useful to see whether the bits are likely to be the\n> result of wipe_mem(). But I guess for that we should print the whole byte,\n> rather than just the method. Perhaps not worth it.\n\nI think printing the whole int64 header word would be appropriate if\nwe were hoping for something like that. Still not sure if it's useful.\nOn the other hand, if control gets there then you are probably in need\nof debugging help ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Oct 2022 15:23:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Here's a v2 incorporating discussed changes.\n\nIn reordering enum MemoryContextMethodID, I arranged to avoid using\n000 and 111 as valid IDs, since those bit patterns will appear in\nzeroed and wipe_mem'd memory respectively. Those should probably be\nmore-or-less-permanent exclusions, so I added comments about it.\n\nI also avoided using 001: based on my work with converting guc.c to use\npalloc [1], it seems that pfree'ing a malloc-provided pointer is likely\nto see 001 a lot, at least on 64-bit glibc platforms. I've not stuck\nmy nose into the glibc sources to see how consistent that might be,\nbut it definitely recurred several times while I was chasing down\nplaces needing adjustment in that patch.\n\nI'm not sure if there are any platform-dependent reasons to avoid\nother bit-patterns, but we do still have a little bit of flexibility\nleft here if anyone has candidates.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/2982579.1662416866%40sss.pgh.pa.us",
"msg_date": "Thu, 06 Oct 2022 16:05:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Fri, 7 Oct 2022 at 09:05, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Here's a v2 incorporating discussed changes.\n>\n> In reordering enum MemoryContextMethodID, I arranged to avoid using\n> 000 and 111 as valid IDs, since those bit patterns will appear in\n> zeroed and wipe_mem'd memory respectively. Those should probably be\n> more-or-less-permanent exclusions, so I added comments about it.\n\nI'm just considering some future developer here that is writing a new\nMemoryContext type and there's no more space left and she or he needs\nto either use 000 or 111. I think if that was me, I might be unsure if\nI should be looking to expand the bit-space to make room. I might\nthink that based on the word \"avoid\" in:\n\n> + MCTX_UNUSED1_ID, /* avoid: 000 occurs in never-used memory */\n> + MCTX_UNUSED5_ID /* avoid: 111 occurs in wipe_mem'd memory */\n\nbut the final sentence in:\n\n> + * dummy entries for unused IDs in the mcxt_methods[] array. We also try\n> + * to avoid using bit-patterns as valid IDs if they are likely to occur in\n> + * garbage data.\n\nleads me to believe we're just *trying* to avoid using these bit-patterns.\n\nAlso, the comment in mcxt_methods[] might make me believe that it's ok\nfor me to use them if I really need to.\n\n> + * Unused (as yet) IDs should have dummy entries here. This allows us to\n\nBased on these comments, I'm not quite sure if I should be completely\navoiding using 000 and 111 or I should just use those last when there\nare no other free slots in the array. It might be quite a long time\nbefore someone is in this situation, so should we be more clear?\n\nHowever, maybe you've left it this way as you feel it's a decision\nthat must be made in the future, perhaps based on how difficult it\nwould be to free up another bit?\n\nDavid\n\n\n",
"msg_date": "Fri, 7 Oct 2022 09:44:49 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> However, maybe you've left it this way as you feel it's a decision\n> that must be made in the future, perhaps based on how difficult it\n> would be to free up another bit?\n\nYeah, pretty much. I think it'll be a long time before we run out\nof memory context IDs, and it's hard to anticipate the tradeoffs\nthat will matter at that time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Oct 2022 16:54:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "I wrote:\n> I also avoided using 001: based on my work with converting guc.c to use\n> palloc [1], it seems that pfree'ing a malloc-provided pointer is likely\n> to see 001 a lot, at least on 64-bit glibc platforms.\n\nI poked at this some more by creating a function that intentionally\ndoes pfree(malloc(N)) for various values of N.\n\nRHEL8, x86_64: the low-order nibble of the header is consistently 0001.\n\nmacOS 12.6, arm64: the low-order nibble is consistently 0000.\n\nFreeBSD 13.0, arm64: Usually the low-order nibble is 0000 or 1111,\nbut for some smaller values of N it sometimes comes up as 0010.\n\nNetBSD 9.2, amd64: results similar to FreeBSD.\n\nI still haven't looked into anybody's source code, but based on these\nresults I'm inclined to leave both 001 and 010 IDs unused for now.\nThat'll help the GUC malloc -> palloc transition tremendously, because\npeople will get fairly clear errors rather than weird assertions\nand/or memory corruption. That'll leave us in a situation where only\none more context ID can be assigned without risk of reducing our error\ndetection ability, but I'm content with that for now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Oct 2022 17:57:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Fri, 7 Oct 2022 at 10:57, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I poked at this some more by creating a function that intentionally\n> does pfree(malloc(N)) for various values of N.\n>\n> RHEL8, x86_64: the low-order nibble of the header is consistently 0001.\n>\n> macOS 12.6, arm64: the low-order nibble is consistently 0000.\n>\n> FreeBSD 13.0, arm64: Usually the low-order nibble is 0000 or 1111,\n> but for some smaller values of N it sometimes comes up as 0010.\n>\n> NetBSD 9.2, amd64: results similar to FreeBSD.\n\nOut of curiosity I tried using the attached on a Windows machine and got:\n\n0: 130951\n1: 131061\n2: 133110\n3: 129053\n4: 131061\n5: 131067\n6: 131070\n7: 131203\n\nSo it seems pretty much entirely inconsistent on that platform.\n\nAlso, on an Ubuntu machine I didn't get the consistent 0001 as you got\non your RHEL machine. There were a very small number of 010's there\ntoo:\n\n0: 0\n1: 1048569\n2: 7\n3: 0\n4: 0\n5: 0\n6: 0\n7: 0\n\nDespite Windows not being very consistent here, I think it's a useful\nchange as if our most common platform (Linux/glibc) is fairly\nconsistent, then that'll give us wide coverage to track down any buggy\ncode.\n\nIn anycase, even on Windows (assuming it's completely random) we'll\nhave a 5 out of 8 chance of getting a nice error message if there are\nany bad pointers being passed.\n\nDavid",
"msg_date": "Fri, 7 Oct 2022 11:38:05 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "I wrote:\n> FreeBSD 13.0, arm64: Usually the low-order nibble is 0000 or 1111,\n> but for some smaller values of N it sometimes comes up as 0010.\n> NetBSD 9.2, amd64: results similar to FreeBSD.\n\nI looked into NetBSD's malloc.c, and what I discovered is that\ntheir implementation doesn't have any chunk headers: chunks of\nthe same size are allocated consecutively within pages, and all\nthe bookkeeping data is somewhere else. Presumably FreeBSD is\nthe same. So the apparent special case with 0010 is an illusion,\neven though I saw it on two different machines (maybe it's a\nspecific value that we're allocating??) The most likely case\nis 0000 due to the immediately previous word having never been\nused (note that like palloc, they round chunk sizes up to powers\nof two, so unused space at the end of a chunk is common). I'm\nnot sure whether the cases I saw with 1111 are chance artifacts\nor reflect some real mechanism, but probably the former. I\nthought for a bit that that might be the effects of wipe_mem\non the previous chunk, but palloc'd storage would never share\nthe same page as malloc'd storage under this allocator, because\nwe grab it from malloc in larger-than-page chunks.\n\nHowever ... after looking into glib's malloc.c, I find that\nit does use a chunk header, and very conveniently the three bits\nthat we care about are flag bits (at least on 64-bit machines):\n\n chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n\t | Size of previous chunk, if unallocated (P clear) |\n\t +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n\t | Size of chunk, in bytes |A|M|P|\n mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n\t | User data starts here... .\n\nThe A bit is only used when threading, and hence should always\nbe zero in our usage. 
The M bit only gets set in chunks large\nenough to be separately mmap'd, so when it is set P must be 0.\nIf M is not set then P seems to usually be 1, although it could\nbe 0. So the three possibilities for what we can see under\nglibc are 000, 001, 010 (the last only occurring for chunks\nlarger than 128K). This squares with experimental results on\nmy machine --- I'd not thought to try sizes above 100K before.\n\nSo I'm still inclined to leave 001 and 010 both unused, but the\nreason why is different than I thought before.\n\nGoing forward, we could commandeer 010 if we need to without losing\nvery much debuggability, since malloc'ing more than 128K in a chunk\nwon't happen often.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Oct 2022 19:10:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "I wrote:\n> So I'm still inclined to leave 001 and 010 both unused, but the\n> reason why is different than I thought before.\n\nWhich leaves me with the attached proposed wording.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 06 Oct 2022 19:35:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Fri, 7 Oct 2022 at 12:35, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Which leaves me with the attached proposed wording.\n\nNo objections here.\n\nWith these comments I'd be using slot MCTX_UNUSED4_ID first, then I'd\nprobably be looking at MCTX_UNUSED5_ID after adjusting wipe_mem to do\nsomething other than setting bytes to 0x7F. I'd then use\nMCTX_UNUSED3_ID since that pattern is only used for larger chunks with\nglibc (per your findings). After that, I'd probably start looking\ninto making more than 3 bits available. If that wasn't possible, I'd\nbe using MCTX_UNUSED2_ID and at last resort MCTX_UNUSED1_ID.\n\nDavid\n\n\n",
"msg_date": "Fri, 7 Oct 2022 13:50:11 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Fri, 7 Oct 2022 at 12:35, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Which leaves me with the attached proposed wording.\n\n> No objections here.\n\nCool, I'll push in a little bit.\n\n> With these comments I'd be using slot MCTX_UNUSED4_ID first, then I'd\n> probably be looking at MCTX_UNUSED5_ID after adjusting wipe_mem to do\n> something other than setting bytes to 0x7F.\n\nWell, the only way that you could free up a bitpattern that way is\nto make wipe_mem use something ending in 000 or 001. I'd be against\nusing 000 because then wiped memory might appear to contain valid\n(aligned) pointers. But perhaps 001 would be ok.\n\n> I'd then use\n> MCTX_UNUSED3_ID since that pattern is only used for larger chunks with\n> glibc (per your findings). After that, I'd probably start looking\n> into making more than 3 bits available. If that wasn't possible, I'd\n> be using MCTX_UNUSED2_ID and at last resort MCTX_UNUSED1_ID.\n\nIf we get to having three-quarters or seven-eighths of the bitpatterns\nbeing valid IDs, we'll have precious little ability to detect garbage.\nSo personally I'd put \"find a fourth bit\" higher on the priority list.\n\nIn any case, we needn't invest more effort here until someone comes\nwith a fifth context method ... and I don't recall hearing discussions\nof even a fourth one yet.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Oct 2022 21:00:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "So I pushed that, but I don't feel that we're out of the woods yet.\nAs I mentioned at [1], while testing this stuff I hit a case where\naset.c will try to wipe_mem practically the entire address space after\nbeing asked to pfree() an invalid pointer. The specific reproducer\nisn't too interesting because it depends on the pre-80ef92675 state of\nthe code, but what I take away from this is that aset.c is still far\ntoo fragile as it stands.\n\nOne problem is that aset.c generally isn't too careful about whether\na putative chunk is actually one of its chunks. That was okay before\nc6e0fe1f2 because we would never get to AllocSetFree etc unless the\nword before the chunk data pointed at a moderately-sane AllocSet.\nNow, we can arrive at that code on the strength of three random bits,\nso it's got to be more careful. In the attached patch, I make\nAllocSetIsValid do an IsA() test, and make sure to apply it to the\naset we think we have found from the chunk header. This is not in\nany way a new check: what it is doing is replacing the IsA check done\nby the \"AssertArg(MemoryContextIsValid(context))\" that was performed\nby GetMemoryChunkContext in the old code, and that we've completely\nlost in the code as it stands.\n\nThe other problem, which is what is leading to wipe_mem-the-universe,\nis that aset.c figures the size of a non-external chunk essentially\nas \"1 << MemoryChunkGetValue(chunk)\", where the \"value\" field is 30\nbits wide and has undergone exactly zero validation before\nAllocSetFree uses the size in memset. That's far, far too trusting.\nIn the attached I put in some asserts to verify that the value field\nis in the valid range for a freelist index, which should usually\ntrigger if we have a garbage value, or at the very least constrain\nthe damage.\n\nWhat I am mainly wondering about at this point is whether Asserts\nare good enough or we should use actual test-and-elog checks for\nthese things. 
The precedent of the old GetMemoryChunkContext\nimplementation suggests that assertions are good enough for the\nAllocSetIsValid tests. On the other hand, there are existing\ncross-checks like\n\n if (block->freeptr != block->endptr)\n elog(ERROR, \"could not find block containing chunk %p\", chunk);\n\nso at least in some code paths we've thought it is worth expending\nsuch tests in production builds. Any opinions?\n\nI imagine generation.c and slab.c need similar bulletproofing\nmeasures, but I didn't look at them yet.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/3436789.1665187055%40sss.pgh.pa.us",
"msg_date": "Sat, 08 Oct 2022 11:52:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "I wrote:\n> What I am mainly wondering about at this point is whether Asserts\n> are good enough or we should use actual test-and-elog checks for\n> these things.\n\nHearing no comments on that, I decided that a good policy would be\nto use Asserts in the paths dealing with small chunks but test-and-elog\nin the paths dealing with large chunks. We already had test-and-elog\nsanity checks in the latter paths, at least in aset.c, which the new\nchecks can reasonably be combined with. It's unlikely that the\nlarge-chunk case is at all performance-critical, too. But adding\nruntime checks in the small-chunk paths would probably risk losing\nsome performance in production builds, and I'm not currently prepared\nto argue that the problem is big enough to justify that.\n\nHence v2 attached, which cleans things up a tad in aset.c and then\nextends similar policy to generation.c and slab.c. Of note is\nthat slab.c was doing things like this:\n\n\tSlabContext *slab = castNode(SlabContext, context);\n\n\tAssert(slab);\n\nwhich has about the same effect as what I'm proposing with\nAllocSetIsValid, but (a) it's randomly different from the\nother allocators, and (b) it's probably a hair less efficient,\nbecause I doubt the compiler can optimize away castNode's\nspecial handling of NULL. So I made these bits follow the\nstyle of aset.c.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 10 Oct 2022 15:35:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 11 Oct 2022 at 08:35, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hearing no comments on that, I decided that a good policy would be\n> to use Asserts in the paths dealing with small chunks but test-and-elog\n> in the paths dealing with large chunks.\n\nThis seems like a good policy. I think it's good to get at least the\nAsserts in there. If we have any troubles in the future then we can\nrevisit this and reconsider if we need to elog them instead.\n\n> Hence v2 attached, which cleans things up a tad in aset.c and then\n> extends similar policy to generation.c and slab.c.\n\nLooking at your changes to SlabFree(), I don't really think that\nchange is well aligned to the newly proposed policy. My understanding\nof the rationale behind this policy is that large chunks get malloced\nand will be slower anyway, so the elog(ERROR) is less overhead for\nthose. In SlabFree(), we're most likely not doing any free()s, so I\ndon't quite understand why you've added the elog rather than the\nAssert for this case. The slab allocator *should* be very fast.\n\nI don't have any issue with any of the other changes.\n\nDavid\n\n\n",
"msg_date": "Tue, 11 Oct 2022 09:54:40 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Looking at your changes to SlabFree(), I don't really think that\n> change is well aligned to the newly proposed policy. My understanding\n> of the rationale behind this policy is that large chunks get malloced\n> and will be slower anyway, so the elog(ERROR) is less overhead for\n> those. In SlabFree(), we're most likely not doing any free()s, so I\n> don't quite understand why you've added the elog rather than the\n> Assert for this case. The slab allocator *should* be very fast.\n\nYeah, slab.c hasn't any distinction between large and small chunks,\nso we have to just pick one policy or the other. I'd hoped to get\naway with the more robust runtime test on the basis that slab allocation\nis not used so much that this'd result in any noticeable performance\nchange. SlabRealloc, at least, is not used *at all* per the code\ncoverage tests, and if we're there at all we should be highly suspicious\nthat something is wrong. However, I could be wrong about SlabFree,\nand if you're going to hold my feet to the fire then I'll change it\nrather than try to produce some performance evidence.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Oct 2022 17:07:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, 11 Oct 2022 at 10:07, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, slab.c hasn't any distinction between large and small chunks,\n> so we have to just pick one policy or the other. I'd hoped to get\n> away with the more robust runtime test on the basis that slab allocation\n> is not used so much that this'd result in any noticeable performance\n> change. SlabRealloc, at least, is not used *at all* per the code\n> coverage tests, and if we're there at all we should be highly suspicious\n> that something is wrong. However, I could be wrong about SlabFree,\n> and if you're going to hold my feet to the fire then I'll change it\n> rather than try to produce some performance evidence.\n\nThe main reason I brought it up was that only yesterday I was looking\ninto fixing the slowness of the Slab allocator. It's currently quite\nfar behind the performance of both generation.c and aset.c and it\nwould be very nice to bring it up to at least be on par with those.\nIdeally there would be some performance advantages from the fixed-size\nchunks. I'd just rather not have any additional things go in to make\nthat goal harder to reach.\n\nThe proposed patches in [1] do aim to make additional usages of the\nslab allocator, and I have a feeling that we'll want to fix the\nperformance of slab.c before those. Perhaps the Asserts are a better\noption if we're to get the proposed radix tree implementation.\n\nDavid\n\n[1] https://postgr.es/m/CAD21AoD3w76wERs_Lq7_uA6+gTaoOERPji+Yz8Ac6aui4JwvTg@mail.gmail.com",
"msg_date": "Tue, 11 Oct 2022 11:30:51 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> The main reason I brought it up was that only yesterday I was looking\n> into fixing the slowness of the Slab allocator. It's currently quite\n> far behind the performance of both generation.c and aset.c and it\n> would be very nice to bring it up to at least be on-par with those.\n\nReally!? That's pretty sad, because surely it should be handling a\nsimpler case.\n\nAnyway, I'm about to push this with an Assert in SlabFree and\nrun-time test in SlabRealloc. That should be enough to assuage\nmy safety concerns, and then we can think about better performance.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Oct 2022 18:42:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Tue, Oct 11, 2022 at 5:31 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> The proposed patches in [1] do aim to make additional usages of the\n> slab allocator, and I have a feeling that we'll want to fix the\n> performance of slab.c before those. Perhaps the Asserts are a better\n> option if we're to get the proposed radix tree implementation.\n\nGoing by [1], that use case is not actually a natural fit for slab because\nof memory fragmentation. The motivation to use slab there was that the\nallocation sizes are just over a power of two, leading to a lot of wasted\nspace for aset. FWIW, I have proposed in that thread a scheme to squeeze\nthings into power-of-two sizes without wasting quite as much space. That's\nnot a done deal, of course, but it could work today without adding memory\nmanagement code.\n\n[1]\nhttps://www.postgresql.org/message-id/20220704220038.at2ane5xkymzzssb%40awork3.anarazel.de\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 11 Oct 2022 10:21:17 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-11 10:21:17 +0700, John Naylor wrote:\n> On Tue, Oct 11, 2022 at 5:31 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > The proposed patches in [1] do aim to make additional usages of the\n> > slab allocator, and I have a feeling that we'll want to fix the\n> > performance of slab.c before those. Perhaps the Asserts are a better\n> > option if we're to get the proposed radix tree implementation.\n> \n> Going by [1], that use case is not actually a natural fit for slab because\n> of memory fragmentation.\n>\n> [1]\n> https://www.postgresql.org/message-id/20220704220038.at2ane5xkymzzssb%40awork3.anarazel.de\n\nNot so sure about that - IIRC I made one slab for each different size class,\nwhich seemed to work well and suit slab well?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 19 Oct 2022 11:55:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
},
{
"msg_contents": "On Thu, Oct 20, 2022 at 1:55 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-10-11 10:21:17 +0700, John Naylor wrote:\n> > On Tue, Oct 11, 2022 at 5:31 AM David Rowley <dgrowleyml@gmail.com>\nwrote:\n> > >\n> > > The proposed patches in [1] do aim to make additional usages of the\n> > > slab allocator, and I have a feeling that we'll want to fix the\n> > > performance of slab.c before those. Perhaps the Asserts are a better\n> > > option if we're to get the proposed radix tree implementation.\n> >\n> > Going by [1], that use case is not actually a natural fit for slab\nbecause\n> > of memory fragmentation.\n> >\n> > [1]\n> >\nhttps://www.postgresql.org/message-id/20220704220038.at2ane5xkymzzssb%40awork3.anarazel.de\n>\n> Not so sure about that - IIRC I made one slab for each different size\nclass,\n> which seemed to work well and suit slab well?\n\nIf that's the case, then great! The linked message didn't give me that\nimpression, but I won't worry about it.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 24 Oct 2022 16:56:40 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the chunk header sizes on all memory context types"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nIt is often not feasible to use `REPLICA IDENTITY FULL` on the\npublication, because it leads to a full table scan per tuple change on\nthe subscription. This makes `REPLICA IDENTITY FULL` impractical --\nprobably other than for a small number of use cases.\n\nWith this patch, I'm proposing the following change: if there is an\nindex on the subscriber, use an index as long as the planner\nsub-module picks any index over a sequential scan.\n\nThe majority of the logic on the subscriber side already exists in the\ncode. The subscriber is already capable of doing (unique) index scans.\nWith this patch, we allow the index scan to iterate over the fetched\ntuples and only act when the tuples are equal. Those familiar with\nthis part of the code will recognize that the sequential scan code on\nthe subscriber already implements the `tuples_equal()` function. In\nshort, the changes on the subscriber mostly combine parts of the\n(unique) index scan and sequential scan code.\n\nThe decision on whether to use an index (and which index) is mostly\nderived from the planner infrastructure. The idea is that on the\nsubscriber we have all the columns. So, construct all the `Path`s with\nrestrictions on all columns, such as `col_1 = $1 AND col_2 = $2 ...\nAND col_n = $N`. Then, let the planner sub-module --\n`create_index_paths()` -- give us the relevant index `Path`s, add the\nsequential scan `Path` on top of that, and finally pick the cheapest\n`Path` among them.\n\nFrom the performance point of view, there are a few things to note.\nFirst, the patch aims not to change the behavior when a PRIMARY KEY or\nUNIQUE INDEX is used. Second, when REPLICA IDENTITY is FULL on the\npublisher and an index is used on the subscriber, the difference\nmostly comes down to `index scan` vs `sequential scan`. That's why it\nis hard to claim a specific improvement figure; it mostly depends on\nthe data size, the index and the data distribution.\n\nStill, below I try to showcase the potential improvement using an\nindex on the subscriber, `pgbench_accounts(bid)`. With the index, the\nreplication catches up in around 5 seconds. When the index is dropped,\nthe replication takes around 300 seconds.\n\n// init source db\npgbench -i -s 100 -p 5432 postgres\npsql -c \"ALTER TABLE pgbench_accounts DROP CONSTRAINT pgbench_accounts_pkey;\" -p 5432 postgres\npsql -c \"CREATE INDEX i1 ON pgbench_accounts(aid);\" -p 5432 postgres\npsql -c \"ALTER TABLE pgbench_accounts REPLICA IDENTITY FULL;\" -p 5432 postgres\npsql -c \"CREATE PUBLICATION pub_test_1 FOR TABLE pgbench_accounts;\" -p 5432 postgres\n\n// init target db, drop existing primary key\npgbench -i -p 9700 postgres\npsql -c \"truncate pgbench_accounts;\" -p 9700 postgres\npsql -c \"ALTER TABLE pgbench_accounts DROP CONSTRAINT pgbench_accounts_pkey;\" -p 9700 postgres\npsql -c \"CREATE SUBSCRIPTION sub_test_1 CONNECTION 'host=localhost port=5432 user=onderkalaci dbname=postgres' PUBLICATION pub_test_1;\" -p 9700 postgres\n\n// create one index, even on a low cardinality column\npsql -c \"CREATE INDEX i2 ON pgbench_accounts(bid);\" -p 9700 postgres\n\n// now, run some pgbench tests and observe replication\npgbench -t 500 -b tpcb-like -p 5432 postgres\n\nWhat do hackers think about this change?\n\nThanks,\n\nOnder Kalaci & Developing the Citus extension for PostgreSQL",
"msg_date": "Tue, 12 Jul 2022 15:36:45 +0200",
"msg_from": "Önder Kalacı <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Use indexes on the subscriber when REPLICA IDENTITY is full on the publisher"
},
{
"msg_contents": "On Tue, Jul 12, 2022 at 7:07 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n> Hi hackers,\n>\n>\n> It is often not feasible to use `REPLICA IDENTITY FULL` on the publication, because it leads to full table scan\n>\n> per tuple change on the subscription. This makes `REPLICA IDENTITY FULL` impracticable -- probably other\n>\n> than some small number of use cases.\n>\n\nIIUC, this proposal is to optimize cases where users can't have a\nunique/primary key for a relation on the subscriber and those\nrelations receive lots of updates or deletes?\n\n> With this patch, I'm proposing the following change: If there is an index on the subscriber, use the index\n>\n> as long as the planner sub-modules picks any index over sequential scan.\n>\n> Majority of the logic on the subscriber side has already existed in the code. The subscriber is already\n>\n> capable of doing (unique) index scans. With this patch, we are allowing the index to iterate over the\n>\n> tuples fetched and only act when tuples are equal. The ones familiar with this part of the code could\n>\n> realize that the sequential scan code on the subscriber already implements the `tuples_equal()` function.\n>\n> In short, the changes on the subscriber are mostly combining parts of (unique) index scan and\n>\n> sequential scan codes.\n>\n> The decision on whether to use an index (or which index) is mostly derived from planner infrastructure.\n>\n> The idea is that on the subscriber we have all the columns. So, construct all the `Path`s with the\n>\n> restrictions on all columns, such as `col_1 = $1 AND col_2 = $2 ... AND col_n = $N`. Finally, let\n>\n> the planner sub-module -- `create_index_paths()` -- to give us the relevant index `Path`s. On top of\n>\n> that adds the sequential scan `Path` as well. Finally, pick the cheapest `Path` among.\n>\n> From the performance point of view, there are few things to note. 
First, the patch aims not to\n> change the behavior when PRIMARY KEY or UNIQUE INDEX is used. Second, when REPLICA IDENTITY\n> IS FULL on the publisher and an index is used on the subscriber, the difference mostly comes down\n> to `index scan` vs `sequential scan`. That's why it is hard to claim a certain number of improvements.\n> It mostly depends on the data size, index and the data distribution.\n>\n\nIt seems that in favorable cases it will improve performance but we\nshould consider unfavorable cases as well. Two things that come to\nmind in that regard are (a) while choosing index/seq. scan paths, the\npatch doesn't account for cost for tuples_equal() which needs to be\nperformed for index scans, (b) it appears to me that the patch decides\nwhich index to use the first time it opens the rel (or if the rel gets\ninvalidated) on subscriber and then for all consecutive operations it\nuses the same index. It is quite possible that after some more\noperations on the table, using the same index will actually be\ncostlier than a sequence scan or some other index scan.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 18 Jul 2022 11:59:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is full on the publisher"
},
{
"msg_contents": "On Mon, Jul 18, 2022 at 11:59 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jul 12, 2022 at 7:07 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n> >\n> > Hi hackers,\n> >\n> >\n> > It is often not feasible to use `REPLICA IDENTITY FULL` on the publication, because it leads to full table scan\n> >\n> > per tuple change on the subscription. This makes `REPLICA IDENTITY FULL` impracticable -- probably other\n> >\n> > than some small number of use cases.\n> >\n>\n> IIUC, this proposal is to optimize cases where users can't have a\n> unique/primary key for a relation on the subscriber and those\n> relations receive lots of updates or deletes?\n>\n> > With this patch, I'm proposing the following change: If there is an index on the subscriber, use the index\n> >\n> > as long as the planner sub-modules picks any index over sequential scan.\n> >\n> > Majority of the logic on the subscriber side has already existed in the code. The subscriber is already\n> >\n> > capable of doing (unique) index scans. With this patch, we are allowing the index to iterate over the\n> >\n> > tuples fetched and only act when tuples are equal. The ones familiar with this part of the code could\n> >\n> > realize that the sequential scan code on the subscriber already implements the `tuples_equal()` function.\n> >\n> > In short, the changes on the subscriber are mostly combining parts of (unique) index scan and\n> >\n> > sequential scan codes.\n> >\n> > The decision on whether to use an index (or which index) is mostly derived from planner infrastructure.\n> >\n> > The idea is that on the subscriber we have all the columns. So, construct all the `Path`s with the\n> >\n> > restrictions on all columns, such as `col_1 = $1 AND col_2 = $2 ... AND col_n = $N`. Finally, let\n> >\n> > the planner sub-module -- `create_index_paths()` -- to give us the relevant index `Path`s. On top of\n> >\n> > that adds the sequential scan `Path` as well. 
Finally, pick the cheapest `Path` among.\n> >\n> > From the performance point of view, there are few things to note. First, the patch aims not to\n> > change the behavior when PRIMARY KEY or UNIQUE INDEX is used. Second, when REPLICA IDENTITY\n> > IS FULL on the publisher and an index is used on the subscriber, the difference mostly comes down\n> > to `index scan` vs `sequential scan`. That's why it is hard to claim a certain number of improvements.\n> > It mostly depends on the data size, index and the data distribution.\n> >\n>\n> It seems that in favorable cases it will improve performance but we\n> should consider unfavorable cases as well. Two things that come to\n> mind in that regard are (a) while choosing index/seq. scan paths, the\n> patch doesn't account for cost for tuples_equal() which needs to be\n> performed for index scans, (b) it appears to me that the patch decides\n> which index to use the first time it opens the rel (or if the rel gets\n> invalidated) on subscriber and then for all consecutive operations it\n> uses the same index. It is quite possible that after some more\n> operations on the table, using the same index will actually be\n> costlier than a sequence scan or some other index scan.\n>\n\nPoint (a) won't matter because we perform tuples_equal both for\nsequence and index scans. So, we can ignore point (a).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 18 Jul 2022 14:21:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is full on the publisher"
},
{
"msg_contents": "Hi, thanks for your reply.\n\nAmit Kapila <amit.kapila16@gmail.com> wrote on Mon, 18 Jul 2022 at 08:29:\n\n> On Tue, Jul 12, 2022 at 7:07 PM Önder Kalacı <onderkalaci@gmail.com>\n> wrote:\n> >\n> > Hi hackers,\n> >\n> >\n> > It is often not feasible to use `REPLICA IDENTITY FULL` on the\n> publication, because it leads to full table scan\n> >\n> > per tuple change on the subscription. This makes `REPLICA IDENTITY FULL`\n> impracticable -- probably other\n> >\n> > than some small number of use cases.\n> >\n>\n> IIUC, this proposal is to optimize cases where users can't have a\n> unique/primary key for a relation on the subscriber and those\n> relations receive lots of updates or deletes?\n>\n\nYes, that is right.\n\nIn a similar perspective, I see this patch useful for reducing the \"use\nprimary key/unique index\" requirement to \"use any index\" for a reasonably\nperformant logical replication with updates/deletes.\n\n\n>\n> It seems that in favorable cases it will improve performance but we\n> should consider unfavorable cases as well. Two things that come to\n> mind in that regard are (a) while choosing index/seq. scan paths, the\n> patch doesn't account for cost for tuples_equal() which needs to be\n> performed for index scans, (b) it appears to me that the patch decides\n> which index to use the first time it opens the rel (or if the rel gets\n> invalidated) on subscriber and then for all consecutive operations it\n> uses the same index. It is quite possible that after some more\n> operations on the table, using the same index will actually be\n> costlier than a sequence scan or some other index scan\n>\n\nRegarding (b), yes that is a concern I share. 
And, I was actually\nconsidering sending another patch regarding this.\n\nCurrently, I can see two options and happy to hear your take on these (or\nmaybe another idea?)\n\n- Add a new class of invalidation callbacks: Today, if we do ALTER TABLE or\nCREATE INDEX on a table, the CacheRegisterRelcacheCallback helps us to\nre-create the cache entries. In this case, as far as I can see, we need a\ncallback that is called when table \"ANALYZE\"d, because that is when the\nstatistics change. That is the time picking a new index makes sense.\nHowever, that seems like adding another dimension to this patch, which I\ncan try but also see that committing becomes even harder. So, please see\nthe next idea as well.\n\n- Ask users to manually pick the index they want to use: Currently, the\nmain complexity of the patch comes with the planner related code. In fact,\nif you look into the logical replication related changes, those are\nrelatively modest changes. If we can drop the feature that Postgres picks\nthe index, and provide a user interface to set the indexes per table in the\nsubscription, we can probably have an easier patch to review & test. For\nexample, we could add `ALTER SUBSCRIPTION sub ALTER TABLE t USE INDEX i`\ntype of an API. This also needs some coding, but probably much simpler than\nthe current code. And, obviously, this pops up the question of can users\npick the right index? Probably not always, but at least that seems like a\ngood start to use this performance improvement.\n\nThoughts?\n\nThanks,\nOnder Kalaci",
"msg_date": "Tue, 19 Jul 2022 10:16:36 +0200",
"msg_from": "Önder Kalacı <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is full on the publisher"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 1:46 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n> Hi, thanks for your reply.\n>\n> Amit Kapila <amit.kapila16@gmail.com>, 18 Tem 2022 Pzt, 08:29 tarihinde şunu yazdı:\n>>\n>> >\n>>\n>> IIUC, this proposal is to optimize cases where users can't have a\n>> unique/primary key for a relation on the subscriber and those\n>> relations receive lots of updates or deletes?\n>\n>\n> Yes, that is right.\n>\n> In a similar perspective, I see this patch useful for reducing the \"use primary key/unique index\" requirement to \"use any index\" for a reasonably performant logical replication with updates/deletes.\n>\n\nAgreed. BTW, have you seen any such requirements from users where this\nwill be useful for them?\n\n>>\n>>\n>> It seems that in favorable cases it will improve performance but we\n>> should consider unfavorable cases as well. Two things that come to\n>> mind in that regard are (a) while choosing index/seq. scan paths, the\n>> patch doesn't account for cost for tuples_equal() which needs to be\n>> performed for index scans, (b) it appears to me that the patch decides\n>> which index to use the first time it opens the rel (or if the rel gets\n>> invalidated) on subscriber and then for all consecutive operations it\n>> uses the same index. It is quite possible that after some more\n>> operations on the table, using the same index will actually be\n>> costlier than a sequence scan or some other index scan\n>\n>\n> Regarding (b), yes that is a concern I share. And, I was actually considering sending another patch regarding this.\n>\n> Currently, I can see two options and happy to hear your take on these (or maybe another idea?)\n>\n> - Add a new class of invalidation callbacks: Today, if we do ALTER TABLE or CREATE INDEX on a table, the CacheRegisterRelcacheCallback helps us to re-create the cache entries. 
In this case, as far as I can see, we need a callback that is called when table \"ANALYZE\"d, because that is when the statistics change. That is the time picking a new index makes sense.\n> However, that seems like adding another dimension to this patch, which I can try but also see that committing becomes even harder.\n>\n\nThis idea sounds worth investigating. I see that this will require\nmore work but OTOH, we can't allow the existing system to regress\nespecially because depending on workload it might regress badly. We\ncan create a patch for this atop the base patch for easier review/test\nbut I feel we need some way to address this point.\n\n So, please see the next idea as well.\n>\n> - Ask users to manually pick the index they want to use: Currently, the main complexity of the patch comes with the planner related code. In fact, if you look into the logical replication related changes, those are relatively modest changes. If we can drop the feature that Postgres picks the index, and provide a user interface to set the indexes per table in the subscription, we can probably have an easier patch to review & test. For example, we could add `ALTER SUBSCRIPTION sub ALTER TABLE t USE INDEX i` type of an API. This also needs some coding, but probably much simpler than the current code. And, obviously, this pops up the question of can users pick the right index?\n>\n\nI think picking the right index is one point and another is what if\nthe subscription has many tables (say 10K or more), doing it for\nindividual tables per subscription won't be fun. Also, users need to\nidentify which tables belong to a particular subscription, now, users\ncan find the same via pg_subscription_rel or some other way but doing\nthis won't be straightforward for users. 
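As a purely illustrative aside (editor's sketch, not code from the patch — the class and names below are invented), the cache-plus-invalidation scheme being discussed can be modeled roughly like this: the expensive index pick runs only on a cache miss, and an invalidation (such as one fired when ANALYZE updates the statistics) simply drops the entry so the next use re-picks.\n\n```python\n# Hypothetical model of the invalidation-driven approach discussed here:\n# the chosen index is cached per relation and recomputed only after an\n# invalidation drops the cache entry (e.g. following ANALYZE).\nclass ChosenIndexCache:\n    def __init__(self, pick_index):\n        self.pick_index = pick_index   # stand-in for the planner-based picker\n        self._chosen = {}              # relid -> chosen index name\n\n    def index_for(self, relid):\n        # Cheap lookup on every apply; the expensive pick runs only on a miss.\n        if relid not in self._chosen:\n            self._chosen[relid] = self.pick_index(relid)\n        return self._chosen[relid]\n\n    def invalidate(self, relid):\n        # Stand-in for the relcache callback discarding the cached choice.\n        self._chosen.pop(relid, None)\n```\n\n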
So, my inclination would be\nto pick the right index automatically rather than getting the input\nfrom the user.\n\nNow, your point related to planner code in the patch bothers me as\nwell but I haven't studied the patch in detail to provide any\nalternatives at this stage. Do you have any other ideas to make it\nsimpler or solve this problem in some other way?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 20 Jul 2022 10:20:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Mon, Jul 18, 2022 at 8:29 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> IIUC, this proposal is to optimize cases where users can't have a\n> unique/primary key for a relation on the subscriber and those\n> relations receive lots of updates or deletes?\n\nI think this patch optimizes for all non-trivial cases of update/delete\nreplication (e.g. >1000 rows in the table, >1000 rows per hour updated)\nwithout a primary key. For instance, it's quite common to have a large\nappend-mostly events table without a primary key (e.g. because of\npartitioning, or insertion speed), which will still have occasional batch\nupdates/deletes.\n\nImagine an update of a table or partition with 1 million rows and a typical\nscan speed of 1M rows/sec. An update on the whole table takes maybe 1-2\nseconds. Replicating the update using a sequential scan per row can take on\nthe order of ~12 days ≈ 1M seconds.\n\nThe current implementation makes using REPLICA IDENTITY FULL a huge\nliability/ impractical for scenarios where you want to replicate an\narbitrary set of user-defined tables, such as upgrades, migrations, shard\nmoves. We generally recommend users to tolerate update/delete errors in\nsuch scenarios.\n\nIf the apply worker can use an index, the data migration tool can\ntactically create one on a high cardinality column, which would practically\nalways be better than doing a sequential scan for non-trivial workloads.\n\ncheers,\nMarco\n\nOn Mon, Jul 18, 2022 at 8:29 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> IIUC, this proposal is to optimize cases where users can't have a\n> unique/primary key for a relation on the subscriber and those\n> relations receive lots of updates or deletes?\n\nI think this patch optimizes for all non-trivial cases of update/delete replication (e.g. >1000 rows in the table, >1000 rows per hour updated) without a primary key. 
For instance, it's quite common to have a large append-mostly events table without a primary key (e.g. because of partitioning, or insertion speed), which will still have occasional batch updates/deletes.\n\nImagine an update of a table or partition with 1 million rows and a typical scan speed of 1M rows/sec. An update on the whole table takes maybe 1-2 seconds. Replicating the update using a sequential scan per row can take on the order of ~12 days ≈ 1M seconds.The current implementation makes using REPLICA IDENTITY FULL a huge liability/ impractical for scenarios where you want to replicate an arbitrary set of user-defined tables, such as upgrades, migrations, shard moves. We generally recommend users to tolerate update/delete errors in such scenarios.If the apply worker can use an index, the data migration tool can tactically create one on a high cardinality column, which would practically always be better than doing a sequential scan for non-trivial workloads.\ncheers,\nMarco",
"msg_date": "Wed, 20 Jul 2022 09:15:12 +0200",
"msg_from": "Marco Slot <marco.slot@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi,\n\n\n> > - Add a new class of invalidation callbacks: Today, if we do ALTER TABLE\n> or CREATE INDEX on a table, the CacheRegisterRelcacheCallback helps us to\n> re-create the cache entries. In this case, as far as I can see, we need a\n> callback that is called when table \"ANALYZE\"d, because that is when the\n> statistics change. That is the time picking a new index makes sense.\n> > However, that seems like adding another dimension to this patch, which I\n> can try but also see that committing becomes even harder.\n> >\n>\n> This idea sounds worth investigating. I see that this will require\n> more work but OTOH, we can't allow the existing system to regress\n> especially because depending on workload it might regress badly.\n\n\nJust to note if that is not clear: This patch avoids (or at least aims to\navoid assuming no bugs) changing the behavior of the existing systems with\nPRIMARY KEY or UNIQUE index. In that case, we still use the relevant\nindexes.\n\n\n> We\n> can create a patch for this atop the base patch for easier review/test\n> but I feel we need some way to address this point.\n>\n>\nOne another idea could be to re-calculate the index, say after *N*\nupdates/deletes for the table. We may consider using subscription_parameter\nfor getting N -- with a good default, or even hard-code into the code. I\nthink the cost of re-calculating should really be pretty small compared to\nthe other things happening during logical replication. So, a sane default\nmight work?\n\nIf you think the above doesn't work, I can try to work on a separate patch\nwhich adds something like \"analyze invalidation callback\".\n\n\n\n> >\n> > - Ask users to manually pick the index they want to use: Currently, the\n> main complexity of the patch comes with the planner related code. In fact,\n> if you look into the logical replication related changes, those are\n> relatively modest changes. 
If we can drop the feature that Postgres picks\n> the index, and provide a user interface to set the indexes per table in the\n> subscription, we can probably have an easier patch to review & test. For\n> example, we could add `ALTER SUBSCRIPTION sub ALTER TABLE t USE INDEX i`\n> type of an API. This also needs some coding, but probably much simpler than\n> the current code. And, obviously, this pops up the question of can users\n> pick the right index?\n> >\n>\n> I think picking the right index is one point and another is what if\n> the subscription has many tables (say 10K or more), doing it for\n> individual tables per subscription won't be fun. Also, users need to\n> identify which tables belong to a particular subscription, now, users\n> can find the same via pg_subscription_rel or some other way but doing\n> this won't be straightforward for users. So, my inclination would be\n> to pick the right index automatically rather than getting the input\n> from the user.\n>\n\nYes, all makes sense.\n\n\n>\n> Now, your point related to planner code in the patch bothers me as\n> well but I haven't studied the patch in detail to provide any\n> alternatives at this stage. Do you have any other ideas to make it\n> simpler or solve this problem in some other way?\n>\n>\nOne idea I tried earlier was to go over the existing indexes and on the\ntable, then get the IndexInfo via BuildIndexInfo(). And then, try to find a\ngood heuristic to pick an index. In the end, I felt like that is doing a\nsub-optimal job, requiring a similar amount of code of the current patch,\nand still using the similar infrastructure.\n\nMy conclusion for that route was I should either use a very simple\nheuristic (like pick the index with the most columns) and have a suboptimal\nindex pick, OR use a complex heuristic with a reasonable index pick. And,\nthe latter approach converged to the planner code in the patch. 
Do you\nthink the former approach is acceptable?\n\nThanks,\nOnder",
"msg_date": "Wed, 20 Jul 2022 16:49:32 +0200",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 8:19 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n>>\n>> > - Add a new class of invalidation callbacks: Today, if we do ALTER TABLE or CREATE INDEX on a table, the CacheRegisterRelcacheCallback helps us to re-create the cache entries. In this case, as far as I can see, we need a callback that is called when table \"ANALYZE\"d, because that is when the statistics change. That is the time picking a new index makes sense.\n>> > However, that seems like adding another dimension to this patch, which I can try but also see that committing becomes even harder.\n>> >\n>>\n>> This idea sounds worth investigating. I see that this will require\n>> more work but OTOH, we can't allow the existing system to regress\n>> especially because depending on workload it might regress badly.\n>\n>\n> Just to note if that is not clear: This patch avoids (or at least aims to avoid assuming no bugs) changing the behavior of the existing systems with PRIMARY KEY or UNIQUE index. In that case, we still use the relevant indexes.\n>\n>>\n>> We\n>> can create a patch for this atop the base patch for easier review/test\n>> but I feel we need some way to address this point.\n>>\n>\n> One another idea could be to re-calculate the index, say after N updates/deletes for the table. We may consider using subscription_parameter for getting N -- with a good default, or even hard-code into the code. I think the cost of re-calculating should really be pretty small compared to the other things happening during logical replication. 
So, a sane default might work?\n>\n\nOne difficulty in deciding the value of N for the user or choosing a\ndefault would be that we need to probably consider the local DML\noperations on the table as well.\n\n> If you think the above doesn't work, I can try to work on a separate patch which adds something like \"analyze invalidation callback\".\n>\n\nI suggest we should give this a try and if this turns out to be\nproblematic or complex then we can think of using some heuristic as\nyou are suggesting above.\n\n>\n>>\n>>\n>> Now, your point related to planner code in the patch bothers me as\n>> well but I haven't studied the patch in detail to provide any\n>> alternatives at this stage. Do you have any other ideas to make it\n>> simpler or solve this problem in some other way?\n>>\n>\n> One idea I tried earlier was to go over the existing indexes and on the table, then get the IndexInfo via BuildIndexInfo(). And then, try to find a good heuristic to pick an index. In the end, I felt like that is doing a sub-optimal job, requiring a similar amount of code of the current patch, and still using the similar infrastructure.\n>\n> My conclusion for that route was I should either use a very simple heuristic (like pick the index with the most columns) and have a suboptimal index pick,\n>\n\nNot only that but say all index have same number of columns then we\nneed to probably either pick the first such index or use some other\nheuristic.\n\n>\n> OR use a complex heuristic with a reasonable index pick. And, the latter approach converged to the planner code in the patch. Do you think the former approach is acceptable?\n>\n\nIn this regard, I was thinking in which cases a sequence scan can be\nbetter than the index scan (considering one is available). I think if\na certain column has a lot of duplicates (for example, a column has a\nboolean value) then probably doing a sequence scan is better. 
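To make that intuition concrete, here is a deliberately crude cost comparison (editor's illustration — the formula and the `random_page_factor` knob are invented, not PostgreSQL's actual cost model): with few distinct values, each key matches so many rows that the random-access penalty of the index scan outweighs one sequential pass.\n\n```python\n# Crude illustration (not the planner's real model): an index scan pays a\n# random-access penalty per matching row, while a sequential scan reads\n# every tuple exactly once.\ndef index_beats_seqscan(ntuples, ndistinct, random_page_factor=4.0):\n    rows_per_key = ntuples / ndistinct           # expected matches per lookup\n    index_cost = rows_per_key * random_page_factor\n    seqscan_cost = float(ntuples)\n    return index_cost < seqscan_cost\n\n# A high-cardinality column favors the index; a boolean-like column does not.\n```\n\n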
Now,\nconsidering this even though your other approach sounds simpler but\ncould lead to unpredictable results. So, I think the latter approach\nis preferable.\n\nBTW, do we want to consider partial indexes for the scan in this\ncontext? I mean it may not have data of all rows so how that would be\nusable?\n\nFew comments:\n===============\n1.\nstatic List *\n+CreateReplicaIdentityFullPaths(Relation localrel)\n{\n...\n+ /*\n+ * Rather than doing all the pushups that would be needed to use\n+ * set_baserel_size_estimates, just do a quick hack for rows and width.\n+ */\n+ rel->rows = rel->tuples;\n\nIs it a good idea to set rows without any selectivity estimation?\nWon't this always set the entire rows in a relation? Also, if we don't\nwant to use set_baserel_size_estimates(), how will we compute\nbaserestrictcost which will later be used in the costing of paths (for\nexample, costing of seqscan path (cost_seqscan) uses it)?\n\nIn general, I think it will be better to consider calling some\ntop-level planner functions even for paths. Can we consider using\nmake_one_rel() instead of building individual paths? On similar lines,\nin function PickCheapestIndexPathIfExists(), can we use\nset_cheapest()?\n\n2.\n@@ -57,9 +60,6 @@ build_replindex_scan_key(ScanKey skey, Relation rel,\nRelation idxrel,\n int2vector *indkey = &idxrel->rd_index->indkey;\n bool hasnulls = false;\n\n- Assert(RelationGetReplicaIndex(rel) == RelationGetRelid(idxrel) ||\n- RelationGetPrimaryKeyIndex(rel) == RelationGetRelid(idxrel));\n\nYou have removed this assertion but there is a comment (\"This is not\ngeneric routine, it expects the idxrel to be replication identity of a\nrel and meet all limitations associated with that.\") atop this\nfunction which either needs to be changed/removed and probably we\nshould think if the function needs some change after removing that\nrestriction.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 21 Jul 2022 15:38:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi,\n\n>\n> >\n> > One another idea could be to re-calculate the index, say after N\n> updates/deletes for the table. We may consider using subscription_parameter\n> for getting N -- with a good default, or even hard-code into the code. I\n> think the cost of re-calculating should really be pretty small compared to\n> the other things happening during logical replication. So, a sane default\n> might work?\n> >\n>\n> One difficulty in deciding the value of N for the user or choosing a\n> default would be that we need to probably consider the local DML\n> operations on the table as well.\n>\n>\nFair enough, it is not easy to find a good default.\n\n\n> > If you think the above doesn't work, I can try to work on a separate\n> patch which adds something like \"analyze invalidation callback\".\n> >\n>\n> I suggest we should give this a try and if this turns out to be\n> problematic or complex then we can think of using some heuristic as\n> you are suggesting above.\n>\n\nAlright, I'll try this and respond shortly back.\n\n\n> >\n> > OR use a complex heuristic with a reasonable index pick. And, the latter\n> approach converged to the planner code in the patch. Do you think the\n> former approach is acceptable?\n> >\n>\n> In this regard, I was thinking in which cases a sequence scan can be\n> better than the index scan (considering one is available). I think if\n> a certain column has a lot of duplicates (for example, a column has a\n> boolean value) then probably doing a sequence scan is better. Now,\n> considering this even though your other approach sounds simpler but\n> could lead to unpredictable results. So, I think the latter approach\n> is preferable.\n>\n\nYes, it makes sense. I also considered this during the development of the\npatch, but forgot to mention :)\n\n\n>\n> BTW, do we want to consider partial indexes for the scan in this\n> context? 
I mean it may not have data of all rows so how that would be\n> usable?\n>\n>\nAs far as I can see, check_index_predicates() never picks a partial index\nfor the baserestrictinfos we create in CreateReplicaIdentityFullPaths().\nThe reason is that we have roughly the following call stack:\n\n-check_index_predicates\n --predicate_implied_by\n---predicate_implied_by_recurse\n----predicate_implied_by_simple_clause\n-----operator_predicate_proof\n\nAnd, inside operator_predicate_proof(), there is never going to be an\nequality. Because, we push `Param`s to the baserestrictinfos whereas the\nindex predicates are always `Const`.\n\nIf we want to make it even more explicit, I can filter out `Path`s with\npartial indexes. But that seems redundant to me. For now, I pushed the\ncommit with an assertion that we never pick partial indexes and also added\na test.\n\nIf you think it is better to explicitly filter out partial indexes, I can\ndo that as well.\n\n\n\n> Few comments:\n> ===============\n> 1.\n> static List *\n> +CreateReplicaIdentityFullPaths(Relation localrel)\n> {\n> ...\n> + /*\n> + * Rather than doing all the pushups that would be needed to use\n> + * set_baserel_size_estimates, just do a quick hack for rows and width.\n> + */\n> + rel->rows = rel->tuples;\n>\n> Is it a good idea to set rows without any selectivity estimation?\n> Won't this always set the entire rows in a relation? Also, if we don't\n> want to use set_baserel_size_estimates(), how will we compute\n> baserestrictcost which will later be used in the costing of paths (for\n> example, costing of seqscan path (cost_seqscan) uses it)?\n>\n> In general, I think it will be better to consider calling some\n> top-level planner functions even for paths. Can we consider using\n> make_one_rel() instead of building individual paths?\n\n\nThanks, this looks like a good suggestion/simplification. 
I wanted to use\nthe least amount of code possible, and make_one_rel() does either what I\nexactly need or slightly more, which is great.\n\nNote that make_one_rel() also follows the same call stack that I noted\nabove. So, I cannot spot any problems with partial indexes. Maybe am I\nmissing something here?\n\n\n> On similar lines,\n> in function PickCheapestIndexPathIfExists(), can we use\n> set_cheapest()?\n>\n>\nYes, make_one_rel() + set_cheapest() sounds better. Changed.\n\n\n> 2.\n> @@ -57,9 +60,6 @@ build_replindex_scan_key(ScanKey skey, Relation rel,\n> Relation idxrel,\n> int2vector *indkey = &idxrel->rd_index->indkey;\n> bool hasnulls = false;\n>\n> - Assert(RelationGetReplicaIndex(rel) == RelationGetRelid(idxrel) ||\n> - RelationGetPrimaryKeyIndex(rel) == RelationGetRelid(idxrel));\n>\n> You have removed this assertion but there is a comment (\"This is not\n> generic routine, it expects the idxrel to be replication identity of a\n> rel and meet all limitations associated with that.\") atop this\n> function which either needs to be changed/removed and probably we\n> should think if the function needs some change after removing that\n> restriction.\n>\n>\nAck, I can see your point. I think, for example, we should skip index\nattributes that are not simple column references. And, probably whatever\nother restrictions that PRIMARY has, should be here.\n\nI'll read some more Postgres code & test before pushing a revision for this\npart. In the meantime, if you have any suggestions/pointers for me to look\ninto, please note here.\n\nAttached v2 of the patch with addressing some of the comments you had. I'll\nwork on the remaining shortly.\n\nThanks,\nOnder",
"msg_date": "Fri, 22 Jul 2022 18:15:16 +0200",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi,\n\n2.\n>> @@ -57,9 +60,6 @@ build_replindex_scan_key(ScanKey skey, Relation rel,\n>> Relation idxrel,\n>> int2vector *indkey = &idxrel->rd_index->indkey;\n>> bool hasnulls = false;\n>>\n>> - Assert(RelationGetReplicaIndex(rel) == RelationGetRelid(idxrel) ||\n>> - RelationGetPrimaryKeyIndex(rel) == RelationGetRelid(idxrel));\n>>\n>> You have removed this assertion but there is a comment (\"This is not\n>> generic routine, it expects the idxrel to be replication identity of a\n>> rel and meet all limitations associated with that.\") atop this\n>> function which either needs to be changed/removed and probably we\n>> should think if the function needs some change after removing that\n>> restriction.\n>>\n>>\n> Ack, I can see your point. I think, for example, we should skip index\n> attributes that are not simple column references. And, probably whatever\n> other restrictions that PRIMARY has, should be here.\n>\n\nPrimary keys require:\n- Unique: We don't need uniqueness, that's the point of this patch\n- Valid index: Should not be an issue in this case, because planner would\nnot pick non-valid index anyway.\n- Non-Partial index: As discussed earlier in this thread, I really don't\nsee any problems with partial indexes for this use-case. Please let me know\nif there is anything I miss.\n- Deferrable - Immediate: As far as I can see, there is no such concepts\nfor regular indexes, so does not apply here\n- Indexes with no expressions: This is the point where we require some\nminor changes inside/around `build_replindex_scan_key `. Previously,\nindexes on expressions could not be replica indexes. And, with this patch\nthey can. However, the expressions cannot be used for filtering the tuples\nbecause of the way we create the restrictinfos. We essentially create\n`WHERE col_1 = $1 AND col_2 = $2 .. col_n = $n` for the columns with\nequality operators available. 
In the case of expressions on the indexes,\nthe planner would never pick such indexes with these restrictions. I\nchanged `build_replindex_scan_key ` to reflect that, added a new assert and\npushed tests with the following schema, and make sure the code behaves as\nexpected:\n\nCREATE TABLE people (firstname text, lastname text);\nCREATE INDEX people_names_expr_only ON people ((firstname || ' ' ||\nlastname));\nCREATE INDEX people_names_expr_and_columns ON people ((firstname || ' ' ||\nlastname), firstname, lastname);\n\nAlso did similar tests with indexes on jsonb fields. Does that help you\navoid the concerns regarding indexes with expressions?\n\nI'll work on one of the other open items in the thread (e.g., analyze\ninvalidation callback) separately.\n\nThanks,\nOnder KALACI",
"msg_date": "Fri, 29 Jul 2022 16:59:20 +0200",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi,\n\nAs far as I can see, the following is the answer to the only remaining open\ndiscussion in this thread. Let me know if anything is missed.\n\n(b) it appears to me that the patch decides\n> >> which index to use the first time it opens the rel (or if the rel gets\n> >> invalidated) on subscriber and then for all consecutive operations it\n> >> uses the same index. It is quite possible that after some more\n> >> operations on the table, using the same index will actually be\n> >> costlier than a sequence scan or some other index scan\n> >\n> >\n> > Regarding (b), yes that is a concern I share. And, I was actually\n> considering sending another patch regarding this.\n> >\n> > Currently, I can see two options and happy to hear your take on these\n> (or maybe another idea?)\n> >\n> > - Add a new class of invalidation callbacks: Today, if we do ALTER TABLE\n> or CREATE INDEX on a table, the CacheRegisterRelcacheCallback helps us to\n> re-create the cache entries. In this case, as far as I can see, we need a\n> callback that is called when table \"ANALYZE\"d, because that is when the\n> statistics change. That is the time picking a new index makes sense.\n> > However, that seems like adding another dimension to this patch, which I\n> can try but also see that committing becomes even harder.\n> >\n>\n> This idea sounds worth investigating. I see that this will require\n> more work but OTOH, we can't allow the existing system to regress\n> especially because depending on workload it might regress badly. 
We\n> can create a patch for this atop the base patch for easier review/test\n> but I feel we need some way to address this point.\n>\n>\nIt turns out that we already invalidate the relevant entries\nin LogicalRepRelMap/LogicalRepPartMap when \"ANALYZE\" (or VACUUM) updates\nany of the statistics in pg_class.\n\nThe call-stack for analyze is roughly:\ndo_analyze_rel()\n -> vac_update_relstats()\n -> heap_inplace_update()\n -> if needs to apply any statistical change\n -> CacheInvalidateHeapTuple()\n\nAnd, we register for those invalidations already:\nlogicalrep_relmap_init() / logicalrep_partmap_init()\n -> CacheRegisterRelcacheCallback()\n\n\n\nAdded a test which triggers this behavior. The test is as follows:\n- Create two indexes on the target, on column_a and column_b\n- Initially load data such that the column_a has a high cardinality\n- Show that we use the index on column_a\n- Load more data such that the column_b has higher cardinality\n- ANALYZE on the target table\n- Show that we use the index on column_b afterwards\n\nThanks,\nOnder KALACI",
"msg_date": "Mon, 1 Aug 2022 18:21:48 +0200",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Mon, Aug 1, 2022 at 9:52 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n> Hi,\n>\n> As far as I can see, the following is the answer to the only remaining open discussion in this thread. Let me know if anything is missed.\n>\n>> (b) it appears to me that the patch decides\n>> >> which index to use the first time it opens the rel (or if the rel gets\n>> >> invalidated) on subscriber and then for all consecutive operations it\n>> >> uses the same index. It is quite possible that after some more\n>> >> operations on the table, using the same index will actually be\n>> >> costlier than a sequence scan or some other index scan\n>> >\n>> >\n>> > Regarding (b), yes that is a concern I share. And, I was actually considering sending another patch regarding this.\n>> >\n>> > Currently, I can see two options and happy to hear your take on these (or maybe another idea?)\n>> >\n>> > - Add a new class of invalidation callbacks: Today, if we do ALTER TABLE or CREATE INDEX on a table, the CacheRegisterRelcacheCallback helps us to re-create the cache entries. In this case, as far as I can see, we need a callback that is called when table \"ANALYZE\"d, because that is when the statistics change. That is the time picking a new index makes sense.\n>> > However, that seems like adding another dimension to this patch, which I can try but also see that committing becomes even harder.\n>> >\n>>\n>> This idea sounds worth investigating. I see that this will require\n>> more work but OTOH, we can't allow the existing system to regress\n>> especially because depending on workload it might regress badly. 
We\n>> can create a patch for this atop the base patch for easier review/test\n>> but I feel we need some way to address this point.\n>>\n>\n> It turns out that we already invalidate the relevant entries in LogicalRepRelMap/LogicalRepPartMap when \"ANALYZE\" (or VACUUM) updates any of the statistics in pg_class.\n>\n> The call-stack for analyze is roughly:\n> do_analyze_rel()\n> -> vac_update_relstats()\n> -> heap_inplace_update()\n> -> if needs to apply any statistical change\n> -> CacheInvalidateHeapTuple()\n>\n\nYeah, it appears that this will work but I see that we don't update\nhere for inherited stats, how does it work for such cases?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 4 Aug 2022 09:35:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 9:45 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>>\n>>\n>> BTW, do we want to consider partial indexes for the scan in this\n>> context? I mean it may not have data of all rows so how that would be\n>> usable?\n>>\n>\n> As far as I can see, check_index_predicates() never picks a partial index for the baserestrictinfos we create in CreateReplicaIdentityFullPaths(). The reason is that we have roughly the following call stack:\n>\n> -check_index_predicates\n> --predicate_implied_by\n> ---predicate_implied_by_recurse\n> ----predicate_implied_by_simple_clause\n> -----operator_predicate_proof\n>\n> And, inside operator_predicate_proof(), there is never going to be an equality. Because, we push `Param`s to the baserestrictinfos whereas the index predicates are always `Const`.\n>\n\nI agree that the way currently baserestrictinfos are formed by patch,\nit won't select the partial path, and chances are that that will be\ntrue in future as well but I think it is better to be explicit in this\ncase to avoid creating a dependency between two code paths.\n\nFew other comments:\n==================\n1. Why is it a good idea to choose the index selected even for the\nbitmap path (T_BitmapHeapScan or T_BitmapIndexScan)? We use index scan\nduring update/delete, so not sure how we can conclude to use index for\nbitmap paths.\n\n2. The index info is built even on insert, so workload, where there\nare no updates/deletes or those are not published then this index\nselection work will go waste. Will it be better to do it at first\nupdate/delete? One can say that it is not worth the hassle as anyway\nit will be built the first time we perform an operation on the\nrelation or after the relation gets invalidated. 
If we think so, then\nprobably adding a comment could be useful.\n\n3.\n+my $synced_query =\n+ \"SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT\nIN ('r', 's');\";\n...\n...\n+# wait for initial table synchronization to finish\n+$node_subscriber->poll_query_until('postgres', $synced_query)\n+ or die \"Timed out while waiting for subscriber to synchronize data\";\n\nYou can avoid such instances in the test by using the new\ninfrastructure added in commit 0c20dd33db.\n\n4.\n LogicalRepRelation *remoterel = &root->remoterel;\n+\n Oid partOid = RelationGetRelid(partrel);\n\nSpurious line addition.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 5 Aug 2022 15:47:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi,\n\nThanks for the feedback, see my reply below.\n\n>\n> > It turns out that we already invalidate the relevant entries in\n> LogicalRepRelMap/LogicalRepPartMap when \"ANALYZE\" (or VACUUM) updates any\n> of the statistics in pg_class.\n> >\n> > The call-stack for analyze is roughly:\n> > do_analyze_rel()\n> > -> vac_update_relstats()\n> > -> heap_inplace_update()\n> > -> if needs to apply any statistical change\n> > -> CacheInvalidateHeapTuple()\n> >\n\nYeah, it appears that this will work but I see that we don't update\n> here for inherited stats, how does it work for such cases?\n\n\nThere, the expansion of the relation list to partitions happens one level\nabove on the call stack. So, the call stack looks like the following:\n\nautovacuum_do_vac_analyze() (or ExecVacuum)\n -> vacuum()\n -> expand_vacuum_rel()\n -> rel_list=parent+children partitions\n -> for rel in rel_list\n ->analyze_rel()\n ->do_analyze_rel\n ... (and the same call stack as above)\n\nI also added one variation of a similar test for partitioned tables, which\nI earlier added for non-partitioned tables as well:\n\nAdded a test which triggers this behavior. The test is as follows:\n> - Create two indexes on the target, on column_a and column_b\n> - Initially load data such that the column_a has a high cardinality\n> - Show that we use the index on column_a on a *child *table\n> - Load more data such that the column_b has higher cardinality\n> - ANALYZE on the *parent* table\n> - Show that we use the index on column_b afterwards on the *child* table\n\n\nMy answer for the above assumes that your question is regarding what\nhappens if you ANALYZE on a partitioned table. If your question is\nsomething different, please let me know.\n\n\n> >> BTW, do we want to consider partial indexes for the scan in this\n> >> context? 
I mean it may not have data of all rows so how that would be\n> >> usable?\n> >>\n> >\n> > As far as I can see, check_index_predicates() never picks a partial\n> index for the baserestrictinfos we create in\n> CreateReplicaIdentityFullPaths(). The reason is that we have roughly the\n> following call stack:\n> >\n> > -check_index_predicates\n> > --predicate_implied_by\n> > ---predicate_implied_by_recurse\n> > ----predicate_implied_by_simple_clause\n> > -----operator_predicate_proof\n> >\n> > And, inside operator_predicate_proof(), there is never going to be an\n> equality. Because, we push `Param`s to the baserestrictinfos whereas the\n> index predicates are always `Const`.\n> >\n>\n> I agree that the way currently baserestrictinfos are formed by patch,\n> it won't select the partial path, and chances are that that will be\n> true in future as well but I think it is better to be explicit in this\n> case to avoid creating a dependency between two code paths.\n>\n>\nYes, it makes sense. So, I changed Assert into a function where we filter\npartial indexes and indexes on only expressions, so that we do not create\nsuch dependencies between the planner and here.\n\nIf one day planner supports using column values on index with expressions,\nthis code would only not be able to use the optimization until we do some\nimprovements in this code-path. I think that seems like a fair trade-off\nfor now.\n\nFew other comments:\n> ==================\n> 1. Why is it a good idea to choose the index selected even for the\n> bitmap path (T_BitmapHeapScan or T_BitmapIndexScan)? We use index scan\n> during update/delete, so not sure how we can conclude to use index for\n> bitmap paths.\n>\n\nIn our case, during update/delete we are searching for a single tuple on\nthe target. And, it seems like using an index is probably going to be\ncheaper for finding the single tuple. 
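As a toy illustration of that point (this is not from the patch, and the
names are made up), a single-row lookup on an indexed column is normally
planned via the index once the table has representative data and fresh
statistics:

```sql
-- Hypothetical example, not part of the patch or its tests:
CREATE TABLE tgt (a int);
CREATE INDEX tgt_a_idx ON tgt (a);
INSERT INTO tgt SELECT g FROM generate_series(1, 100000) g;
ANALYZE tgt;
EXPLAIN SELECT * FROM tgt WHERE a = 42;
-- with this distribution the planner will typically use tgt_a_idx
-- (an index or index-only scan) rather than a sequential or bitmap scan
```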
In general, I thought we should use\nan index if the planner ever decides to use it with the given restrictions.\n\nAlso, for the majority of the use-cases, I think we'd probably expect an\nindex on a column with high cardinality -- hence use index scan. So, bitmap\nindex scans are probably not going to be that much common.\n\nStill, I don't see a problem with using such indexes. Of course, it is\npossible that I might be missing something. Do you have any specific\nconcerns in this area?\n\n\n>\n> 2. The index info is built even on insert, so workload, where there\n> are no updates/deletes or those are not published then this index\n> selection work will go waste. Will it be better to do it at first\n> update/delete? One can say that it is not worth the hassle as anyway\n> it will be built the first time we perform an operation on the\n> relation or after the relation gets invalidated.\n\n\nWith the current approach, the index (re)-calculation is coupled with\n(in)validation of the relevant cache entries. 
So, I'd argue for the\nsimplicity of the code, we could afford to waste this small overhead?\nAccording to my local measurements, especially for large tables, the index\noid calculation is mostly insignificant compared to the rest of the steps.\nDoes that sound OK to you?\n\n\n\n> If we think so, then\n> probably adding a comment could be useful.\n>\n\nYes, that is useful if you are OK with the above, added.\n\n\n> 3.\n> +my $synced_query =\n> + \"SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT\n> IN ('r', 's');\";\n> ...\n> ...\n> +# wait for initial table synchronization to finish\n> +$node_subscriber->poll_query_until('postgres', $synced_query)\n> + or die \"Timed out while waiting for subscriber to synchronize data\";\n>\n> You can avoid such instances in the test by using the new\n> infrastructure added in commit 0c20dd33db.\n>\n\n Cool, applied changes.\n\n\n> 4.\n> LogicalRepRelation *remoterel = &root->remoterel;\n> +\n> Oid partOid = RelationGetRelid(partrel);\n>\n> Spurious line addition.\n>\n>\nFixed, went over the code and couldn't find other.\n\n\nAttaching v5 of the patch which reflects the review on this email, also few\nminor test improvements.\n\nThanks,\nOnder",
"msg_date": "Mon, 8 Aug 2022 17:58:52 +0200",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Tuesday, August 9, 2022 12:59 AM Önder Kalacı <onderkalaci@gmail.com> wrote:\r\n> Attaching v5 of the patch which reflects the review on this email, also few\r\n> minor test improvements.\r\nHi,\r\n\r\n\r\nThank you for the updated patch.\r\nFYI, I noticed that v5 causes cfbot failure in [1].\r\nCould you please fix it in the next version ?\r\n\r\n\r\n[19:44:38.420] execReplication.c: In function ‘RelationFindReplTupleByIndex’:\r\n[19:44:38.420] execReplication.c:186:24: error: ‘eq’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\r\n[19:44:38.420] 186 | if (!indisunique && !tuples_equal(outslot, searchslot, eq))\r\n[19:44:38.420] | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\r\n[19:44:38.420] cc1: all warnings being treated as errors\r\n\r\n\r\n\r\n[1] - https://cirrus-ci.com/task/6544573026533376\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Wed, 10 Aug 2022 13:30:52 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
    "msg_contents": "Hi,\n\n\n>\n> FYI, I noticed that v5 causes cfbot failure in [1].\n> Could you please fix it in the next version ?\n>\n\nThanks for letting me know!\n\n\n>\n> [19:44:38.420] execReplication.c: In function\n> ‘RelationFindReplTupleByIndex’:\n> [19:44:38.420] execReplication.c:186:24: error: ‘eq’ may be used\n> uninitialized in this function [-Werror=maybe-uninitialized]\n> [19:44:38.420] 186 | if (!indisunique && !tuples_equal(outslot,\n> searchslot, eq))\n> [19:44:38.420] |\n> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> [19:44:38.420] cc1: all warnings being treated as errors\n>\n>\nIt is kind of interesting that the compiler cannot understand that `eq` is\nonly used when `!indisunique`. Anyway, I've now sent v6, which avoids the\ncompile warning with a slight refactor.\n\nThanks,\nOnder KALACI",
"msg_date": "Fri, 12 Aug 2022 17:11:55 +0200",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Mon, Aug 8, 2022 at 9:29 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n> Hi,\n>\n> Thanks for the feedback, see my reply below.\n>\n>> >\n>> > It turns out that we already invalidate the relevant entries in LogicalRepRelMap/LogicalRepPartMap when \"ANALYZE\" (or VACUUM) updates any of the statistics in pg_class.\n>> >\n>> > The call-stack for analyze is roughly:\n>> > do_analyze_rel()\n>> > -> vac_update_relstats()\n>> > -> heap_inplace_update()\n>> > -> if needs to apply any statistical change\n>> > -> CacheInvalidateHeapTuple()\n>> >\n>>\n>> Yeah, it appears that this will work but I see that we don't update\n>> here for inherited stats, how does it work for such cases?\n>\n>\n> There, the expansion of the relation list to partitions happens one level above on the call stack. So, the call stack looks like the following:\n>\n> autovacuum_do_vac_analyze() (or ExecVacuum)\n> -> vacuum()\n> -> expand_vacuum_rel()\n> -> rel_list=parent+children partitions\n> -> for rel in rel_list\n> ->analyze_rel()\n> ->do_analyze_rel\n> ... (and the same call stack as above)\n>\n> I also added one variation of a similar test for partitioned tables, which I earlier added for non-partitioned tables as well:\n>\n>> Added a test which triggers this behavior. The test is as follows:\n>> - Create two indexes on the target, on column_a and column_b\n>> - Initially load data such that the column_a has a high cardinality\n>> - Show that we use the index on column_a on a child table\n>> - Load more data such that the column_b has higher cardinality\n>> - ANALYZE on the parent table\n>> - Show that we use the index on column_b afterwards on the child table\n>\n>\n> My answer for the above assumes that your question is regarding what happens if you ANALYZE on a partitioned table. 
If your question is something different, please let me know.\n>\n\nI was talking about inheritance cases, something like:\ncreate table tbl1 (a int);\ncreate table tbl1_part1 (b int) inherits (tbl1);\ncreate table tbl1_part2 (c int) inherits (tbl1);\n\nWhat we do in such cases is documented as: \"if the table being\nanalyzed has inheritance children, ANALYZE gathers two sets of\nstatistics: one on the rows of the parent table only, and a second\nincluding rows of both the parent table and all of its children. This\nsecond set of statistics is needed when planning queries that process\nthe inheritance tree as a whole. The child tables themselves are not\nindividually analyzed in this case.\"\n\nNow, the point I was worried about was what if the changes in child\ntables (*_part1, *_part2) are much more than in tbl1? In such cases,\nwe may not invalidate child rel entries, so how will logical\nreplication behave for updates/deletes on child tables? There may not\nbe any problem here but it is better to do some analysis of such cases\nto see how it behaves.\n\n>>\n>> >> BTW, do we want to consider partial indexes for the scan in this\n>> >> context? I mean it may not have data of all rows so how that would be\n>> >> usable?\n>> >>\n>> >\n>\n>> Few other comments:\n>> ==================\n>> 1. Why is it a good idea to choose the index selected even for the\n>> bitmap path (T_BitmapHeapScan or T_BitmapIndexScan)? We use index scan\n>> during update/delete, so not sure how we can conclude to use index for\n>> bitmap paths.\n>\n>\n> In our case, during update/delete we are searching for a single tuple on the target. And, it seems like using an index is probably going to be cheaper for finding the single tuple. In general, I thought we should use an index if the planner ever decides to use it with the given restrictions.\n>\n\nWhat about the case where the index has a lot of duplicate values? 
We\nmay need to retrieve multiple tuples in such cases.\n\n> Also, for the majority of the use-cases, I think we'd probably expect an index on a column with high cardinality -- hence use index scan. So, bitmap index scans are probably not going to be that much common.\n>\n\nYou are probably right here but I don't think we can make such\nassumptions. I think the safest way to avoid any regression here is to\nchoose an index when the planner selects an index scan. We can always\nextend it later to bitmap scans if required. We can add a comment\nindicating the same.\n\n> Still, I don't see a problem with using such indexes. Of course, it is possible that I might be missing something. Do you have any specific concerns in this area?\n>\n>>\n>>\n>> 2. The index info is built even on insert, so workload, where there\n>> are no updates/deletes or those are not published then this index\n>> selection work will go waste. Will it be better to do it at first\n>> update/delete? One can say that it is not worth the hassle as anyway\n>> it will be built the first time we perform an operation on the\n>> relation or after the relation gets invalidated.\n>\n>\n> With the current approach, the index (re)-calculation is coupled with (in)validation of the relevant cache entries. So, I'd argue for the simplicity of the code, we could afford to waste this small overhead? According to my local measurements, especially for large tables, the index oid calculation is mostly insignificant compared to the rest of the steps. 
Does that sound OK to you?\n>\n>\n>>\n>> If we think so, then\n>> probably adding a comment could be useful.\n>\n>\n> Yes, that is useful if you are OK with the above, added.\n>\n\n*\n+ /*\n+ * For insert-only workloads, calculating the index is not necessary.\n+ * As the calculation is not expensive, we are fine to do here (instead\n+ * of during first update/delete processing).\n+ */\n\nI think here instead of talking about cost, we should mention that it\nis quite an infrequent operation i.e performed only when we first time\nperforms an operation on the relation or after invalidation. This is\nbecause I think the cost is relative.\n\n*\n+\n+ /*\n+ * Although currently it is not possible for planner to pick a\n+ * partial index or indexes only on expressions,\n\nIt may be better to expand this comment by describing a bit why it is\nnot possible in our case. You might want to give the function\nreference where it is decided.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 13 Aug 2022 10:40:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi,\n\nI'm a little late to catch up with your comments, but here are my replies:\n\n> My answer for the above assumes that your question is regarding what\n> happens if you ANALYZE on a partitioned table. If your question is\n> something different, please let me know.\n> >\n>\n> I was talking about inheritance cases, something like:\n> create table tbl1 (a int);\n> create table tbl1_part1 (b int) inherits (tbl1);\n> create table tbl1_part2 (c int) inherits (tbl1);\n>\n> What we do in such cases is documented as: \"if the table being\n> analyzed has inheritance children, ANALYZE gathers two sets of\n> statistics: one on the rows of the parent table only, and a second\n> including rows of both the parent table and all of its children. This\n> second set of statistics is needed when planning queries that process\n> the inheritance tree as a whole. The child tables themselves are not\n> individually analyzed in this case.\"\n\n\nOh, I haven't considered inherited tables. That seems right, the\nstatistics of the children are not updated when the parent is analyzed.\n\n\n> Now, the point I was worried about was what if the changes in child\n> tables (*_part1, *_part2) are much more than in tbl1? In such cases,\n> we may not invalidate child rel entries, so how will logical\n> replication behave for updates/deletes on child tables? There may not\n> be any problem here but it is better to do some analysis of such cases\n> to see how it behaves.\n>\n\nI also haven't observed any specific issues. In the end, when the user (or\nautovacuum) does ANALYZE on the child, it is when the statistics are\nupdated for the child. Although I do not have much experience with\ninherited tables, this sounds like the expected behavior?\n\nI also pushed a test covering inherited tables. First, a basic test on the\nparent. Then, show that updates on the parent can also use indexes of the\nchildren. 
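To make the scenario concrete, it is roughly of the following shape (the
SQL here is only an illustrative sketch with made-up names, not the exact
test):

```sql
-- Rough sketch of the inheritance scenario:
CREATE TABLE parent (column_a int, column_b int);
CREATE TABLE child () INHERITS (parent);
CREATE INDEX child_a_idx ON child (column_a);
CREATE INDEX child_b_idx ON child (column_b);
-- load data such that column_a has the higher cardinality; replicated
-- updates/deletes landing on child should then use child_a_idx
-- load more data such that column_b has the higher cardinality, then:
ANALYZE child;
-- replicated updates/deletes on child should now pick child_b_idx
```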
Also, after an ANALYZE on the child, we can re-calculate the\nindex and use the index with a higher cardinality column.\n\n\n> > Also, for the majority of the use-cases, I think we'd probably expect an\n> index on a column with high cardinality -- hence use index scan. So, bitmap\n> index scans are probably not going to be that much common.\n> >\n>\n> You are probably right here but I don't think we can make such\n> assumptions. I think the safest way to avoid any regression here is to\n> choose an index when the planner selects an index scan. We can always\n> extend it later to bitmap scans if required. We can add a comment\n> indicating the same.\n>\n>\nAlright, I got rid of the bitmap scans.\n\nThough, it caused few of the new tests to fail. I think because of the data\nsize/distribution, the planner picks bitmap scans. To make the tests\nconsistent and small, I added `enable_bitmapscan to off` for this new test\nfile. Does that sound ok to you? Or, should we change the tests to make\nsure they genuinely use index scans?\n\n*\n> + /*\n> + * For insert-only workloads, calculating the index is not necessary.\n> + * As the calculation is not expensive, we are fine to do here (instead\n> + * of during first update/delete processing).\n> + */\n>\n> I think here instead of talking about cost, we should mention that it\n> is quite an infrequent operation i.e performed only when we first time\n> performs an operation on the relation or after invalidation. This is\n> because I think the cost is relative.\n>\n\nChanged, does that look better?\n\n+\n> + /*\n> + * Although currently it is not possible for planner to pick a\n> + * partial index or indexes only on expressions,\n>\n> It may be better to expand this comment by describing a bit why it is\n> not possible in our case. You might want to give the function\n> reference where it is decided.\n>\n> Make sense, added some more information.\n\nThanks,\nOnder",
"msg_date": "Sat, 20 Aug 2022 13:02:03 +0200",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Sat, Aug 20, 2022 7:02 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\r\n> Hi,\r\n> \r\n> I'm a little late to catch up with your comments, but here are my replies:\r\n\r\nThanks for your patch. Here are some comments.\r\n\r\n1.\r\nIn FilterOutNotSuitablePathsForReplIdentFull(), is \"nonPartialIndexPathList\" a\r\ngood name for the list? Indexes on only expressions are also be filtered.\r\n\r\n+static List *\r\n+FilterOutNotSuitablePathsForReplIdentFull(List *pathlist)\r\n+{\r\n+\tListCell *lc;\r\n+\tList *nonPartialIndexPathList = NIL;\r\n\r\n2.\r\n+typedef struct LogicalRepPartMapEntry\r\n+{\r\n+\tOid\t\t\tpartoid;\t\t/* LogicalRepPartMap's key */\r\n+\tLogicalRepRelMapEntry relmapentry;\r\n+\tOid\t\t\tusableIndexOid; /* which index to use? (Invalid when no index\r\n+\t\t\t\t\t\t\t\t * used) */\r\n+} LogicalRepPartMapEntry;\r\n\r\nFor partition tables, is it possible to use relmapentry->usableIndexOid to mark\r\nwhich index to use? Which means we don't need to add \"usableIndexOid\" to\r\nLogicalRepPartMapEntry.\r\n\r\n3.\r\nIt looks we should change the comment for FindReplTupleInLocalRel() in this\r\npatch.\r\n\r\n/*\r\n * Try to find a tuple received from the publication side (in 'remoteslot') in\r\n * the corresponding local relation using either replica identity index,\r\n * primary key or if needed, sequential scan.\r\n *\r\n * Local tuple, if found, is returned in '*localslot'.\r\n */\r\nstatic bool\r\nFindReplTupleInLocalRel(EState *estate, Relation localrel,\r\n\r\n4.\r\n@@ -2030,16 +2017,19 @@ apply_handle_delete_internal(ApplyExecutionData *edata,\r\n {\r\n \tEState\t *estate = edata->estate;\r\n \tRelation\tlocalrel = relinfo->ri_RelationDesc;\r\n-\tLogicalRepRelation *remoterel = &edata->targetRel->remoterel;\r\n+\tLogicalRepRelMapEntry *targetRel = edata->targetRel;\r\n+\tLogicalRepRelation *remoterel = &targetRel->remoterel;\r\n \tEPQState\tepqstate;\r\n \tTupleTableSlot *localslot;\r\n\r\nDo we need this change? 
I didn't see any place to use the variable targetRel\r\nafterwards.\r\n\r\n5.\r\n+\t\tif (!AttributeNumberIsValid(mainattno))\r\n+\t\t{\r\n+\t\t\t/*\r\n+\t\t\t * There are two cases to consider. First, if the index is a primary or\r\n+\t\t\t * unique key, we cannot have any indexes with expressions. So, at this\r\n+\t\t\t * point we are sure that the index we deal is not these.\r\n+\t\t\t */\r\n+\t\t\tAssert(RelationGetReplicaIndex(rel) != RelationGetRelid(idxrel) &&\r\n+\t\t\t\t RelationGetPrimaryKeyIndex(rel) != RelationGetRelid(idxrel));\r\n+\r\n+\t\t\t/*\r\n+\t\t\t * For a non-primary/unique index with an expression, we are sure that\r\n+\t\t\t * the expression cannot be used for replication index search. The\r\n+\t\t\t * reason is that we create relevant index paths by providing column\r\n+\t\t\t * equalities. And, the planner does not pick expression indexes via\r\n+\t\t\t * column equality restrictions in the query.\r\n+\t\t\t */\r\n+\t\t\tcontinue;\r\n+\t\t}\r\n\r\nIs it possible that it is a usable index with an expression? I think indexes\r\nwith an expression has been filtered in \r\nFilterOutNotSuitablePathsForReplIdentFull(). 
If it can't be a usable index with\r\nan expression, maybe we shouldn't use \"continue\" here.\r\n\r\n6.\r\nIn the following case, I got a result which is different from HEAD, could you\r\nplease look into it?\r\n\r\n-- publisher\r\nCREATE TABLE test_replica_id_full (x int); \r\nALTER TABLE test_replica_id_full REPLICA IDENTITY FULL; \r\nCREATE PUBLICATION tap_pub_rep_full FOR TABLE test_replica_id_full;\r\n\r\n-- subscriber\r\nCREATE TABLE test_replica_id_full (x int, y int); \r\nCREATE INDEX test_replica_id_full_idx ON test_replica_id_full(x,y); \r\nCREATE SUBSCRIPTION tap_sub_rep_full_0 CONNECTION 'dbname=postgres port=5432' PUBLICATION tap_pub_rep_full;\r\n\r\n-- publisher\r\nINSERT INTO test_replica_id_full VALUES (1);\r\nUPDATE test_replica_id_full SET x = x + 1 WHERE x = 1;\r\n\r\nThe data in subscriber:\r\non HEAD:\r\npostgres=# select * from test_replica_id_full ;\r\n x | y\r\n---+---\r\n 2 |\r\n(1 row)\r\n\r\nAfter applying the patch:\r\npostgres=# select * from test_replica_id_full ;\r\n x | y\r\n---+---\r\n 1 |\r\n(1 row)\r\n\r\nRegards,\r\nShi yu\r\n",
"msg_date": "Tue, 23 Aug 2022 02:04:32 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi,\n\nThanks for the review!\n\n\n>\n> 1.\n> In FilterOutNotSuitablePathsForReplIdentFull(), is\n> \"nonPartialIndexPathList\" a\n> good name for the list? Indexes on only expressions are also be filtered.\n>\n> +static List *\n> +FilterOutNotSuitablePathsForReplIdentFull(List *pathlist)\n> +{\n> + ListCell *lc;\n> + List *nonPartialIndexPathList = NIL;\n>\n>\nYes, true. We only started filtering the non-partial ones first. Now\nchanged to *suitableIndexList*, does that look right?\n\n\n> 2.\n> +typedef struct LogicalRepPartMapEntry\n> +{\n> + Oid partoid; /*\n> LogicalRepPartMap's key */\n> + LogicalRepRelMapEntry relmapentry;\n> + Oid usableIndexOid; /* which index to use?\n> (Invalid when no index\n> + * used) */\n> +} LogicalRepPartMapEntry;\n>\n> For partition tables, is it possible to use relmapentry->usableIndexOid to\n> mark\n> which index to use? Which means we don't need to add \"usableIndexOid\" to\n> LogicalRepPartMapEntry.\n>\n>\nMy intention was to make this explicit so that it is clear that partitions\ncan explicitly own indexes.\n\nBut I tried your suggested refactor, which looks good. So, I changed it.\n\nAlso, I realized that I do not have a test where the partition has an index\n(not inherited from the parent), which I also added now.\n\n\n> 3.\n> It looks we should change the comment for FindReplTupleInLocalRel() in this\n> patch.\n>\n> /*\n> * Try to find a tuple received from the publication side (in\n> 'remoteslot') in\n> * the corresponding local relation using either replica identity index,\n> * primary key or if needed, sequential scan.\n> *\n> * Local tuple, if found, is returned in '*localslot'.\n> */\n> static bool\n> FindReplTupleInLocalRel(EState *estate, Relation localrel,\n>\n>\nI made a small change, just adding \"index\". 
Do you expect a larger change?\n\n\n> 4.\n> @@ -2030,16 +2017,19 @@ apply_handle_delete_internal(ApplyExecutionData\n> *edata,\n> {\n> EState *estate = edata->estate;\n> Relation localrel = relinfo->ri_RelationDesc;\n> - LogicalRepRelation *remoterel = &edata->targetRel->remoterel;\n> + LogicalRepRelMapEntry *targetRel = edata->targetRel;\n> + LogicalRepRelation *remoterel = &targetRel->remoterel;\n> EPQState epqstate;\n> TupleTableSlot *localslot;\n>\n> Do we need this change? I didn't see any place to use the variable\n> targetRel\n> afterwards.\n>\n\nSeems so, changed it back.\n\n\n> 5.\n> + if (!AttributeNumberIsValid(mainattno))\n> + {\n> + /*\n> + * There are two cases to consider. First, if the\n> index is a primary or\n> + * unique key, we cannot have any indexes with\n> expressions. So, at this\n> + * point we are sure that the index we deal is not\n> these.\n> + */\n> + Assert(RelationGetReplicaIndex(rel) !=\n> RelationGetRelid(idxrel) &&\n> + RelationGetPrimaryKeyIndex(rel) !=\n> RelationGetRelid(idxrel));\n> +\n> + /*\n> + * For a non-primary/unique index with an\n> expression, we are sure that\n> + * the expression cannot be used for replication\n> index search. The\n> + * reason is that we create relevant index paths\n> by providing column\n> + * equalities. And, the planner does not pick\n> expression indexes via\n> + * column equality restrictions in the query.\n> + */\n> + continue;\n> + }\n>\n> Is it possible that it is a usable index with an expression? I think\n> indexes\n> with an expression has been filtered in\n> FilterOutNotSuitablePathsForReplIdentFull(). If it can't be a usable index\n> with\n> an expression, maybe we shouldn't use \"continue\" here.\n>\n\nOk, I think there are some confusing comments in the code, which I updated.\nAlso, added one more explicit Assert to make the code a little more\nreadable.\n\nWe can support indexes involving expressions but not indexes that are only\nconsisting of expressions. 
FilterOutNotSuitablePathsForReplIdentFull()\nfilters out the latter, see IndexOnlyOnExpression().\n\nSo, for example, if we have an index as below, we are skipping the\nexpression while building the index scan keys:\n\nCREATE INDEX people_names ON people (firstname, lastname, (id || '_' ||\nsub_id));\n\nWe can consider removing `continue`, but that'd mean we should also adjust\nthe following code-block to handle indexprs. To me, that seems like an edge\ncase to implement at this point, given such an index is probably not\ncommon. Do you think should I try to use the indexprs as well while\nbuilding the scan key?\n\nI'm mostly trying to keep the complexity small. If you suggest this\nlimitation should be lifted, I can give it a shot. I think the limitation I\nleave here is with a single sentence: *The index on the subscriber can only\nuse simple column references. *\n\n\n> 6.\n> In the following case, I got a result which is different from HEAD, could\n> you\n> please look into it?\n>\n> -- publisher\n> CREATE TABLE test_replica_id_full (x int);\n> ALTER TABLE test_replica_id_full REPLICA IDENTITY FULL;\n> CREATE PUBLICATION tap_pub_rep_full FOR TABLE test_replica_id_full;\n>\n> -- subscriber\n> CREATE TABLE test_replica_id_full (x int, y int);\n> CREATE INDEX test_replica_id_full_idx ON test_replica_id_full(x,y);\n> CREATE SUBSCRIPTION tap_sub_rep_full_0 CONNECTION 'dbname=postgres\n> port=5432' PUBLICATION tap_pub_rep_full;\n>\n> -- publisher\n> INSERT INTO test_replica_id_full VALUES (1);\n> UPDATE test_replica_id_full SET x = x + 1 WHERE x = 1;\n>\n> The data in subscriber:\n> on HEAD:\n> postgres=# select * from test_replica_id_full ;\n> x | y\n> ---+---\n> 2 |\n> (1 row)\n>\n> After applying the patch:\n> postgres=# select * from test_replica_id_full ;\n> x | y\n> ---+---\n> 1 |\n> (1 row)\n>\n>\nOps, good catch. 
It seems we forgot to have:\n\nskey[scankey_attoff].sk_flags |= SK_SEARCHNULL;\n\n\nOn HEAD, the index used for this purpose could only be the primary key or a\nunique key on NOT NULL columns. Now, we do allow NULL values, and need to\nsearch for them. Added that (and your test) to the updated patch.\n\nAs a semi-related note, tuples_equal() decides `true` for (NULL = NULL). I\nhave not changed that, and it seems right in this context. Do you see any\nissues with that?\n\nAlso, I realized that the functions in execReplication.c expect only\nbtree indexes. So, I skipped others as well. If that makes sense, I can\nwork on a follow-up patch after this is merged, to remove some of the\nlimitations mentioned here.\n\nThanks,\nOnder",
"msg_date": "Tue, 23 Aug 2022 18:24:42 +0200",
"msg_from": "Önder Kalacı <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is full on the publisher"
},
{
"msg_contents": "Here are some review comments for the patch v8-0001:\n\n======\n\n1. Commit message\n\n1a.\nMajority of the logic on the subscriber side has already existed in the code.\n\nSUGGESTION\nThe majority of the logic on the subscriber side already exists in the code.\n\n~\n\n1b.\nSecond, when REPLICA IDENTITY IS FULL on the publisher and an index is\nused on the subscriber...\n\nSUGGESTION\nSecond, when REPLICA IDENTITY FULL is on the publisher and an index is\nused on the subscriber...\n\n~\n\n1c.\nStill, below I try to show case the potential improvements using an\nindex on the subscriber\n`pgbench_accounts(bid)`. With the index, the replication catches up\naround ~5 seconds.\nWhen the index is dropped, the replication takes around ~300 seconds.\n\n\"show case\" -> \"showcase\"\n\n~\n\n1d.\nIn above text, what was meant by \"catches up around ~5 seconds\"?\ne.g. Did it mean *improves* by ~5 seconds, or *takes* ~5 seconds?\n\n~\n\n1e.\n// create one indxe, even on a low cardinality column\n\ntypo \"indxe\"\n\n======\n\n2. GENERAL\n\n2a.\nThere are lots of single-line comments that start lowercase, but by\nconvention, I think they should start uppercase.\n\ne.g. + /* we should always use at least one attribute for the index scan */\ne.g. + /* we might not need this if the index is unique */\ne.g. + /* avoid expensive equality check if index is unique */\ne.g. + /* unrelated Path, skip */\ne.g. + /* simple case, we already have an identity or pkey */\ne.g. + /* indexscans are disabled, use seq. scan */\ne.g. + /* target is a regular table */\n\n~~\n\n2b.\nThere are some excess blank lines between the function. By convention,\nI think 1 blank line is normal, but here there are sometimes 2.\n\n~~\n\n2c.\nThere are some new function comments which include their function name\nin the comment. It seemed unnecessary.\n\ne.g. GetCheapestReplicaIdentityFullPath\ne.g. FindUsableIndexForReplicaIdentityFull\ne.g. LogicalRepUsableIndex\n\n======\n\n3. 
src/backend/executor/execReplication.c - build_replindex_scan_key\n\n- int attoff;\n+ int index_attoff;\n+ int scankey_attoff;\n bool isnull;\n Datum indclassDatum;\n oidvector *opclass;\n int2vector *indkey = &idxrel->rd_index->indkey;\n- bool hasnulls = false;\n-\n- Assert(RelationGetReplicaIndex(rel) == RelationGetRelid(idxrel) ||\n- RelationGetPrimaryKeyIndex(rel) == RelationGetRelid(idxrel));\n\n indclassDatum = SysCacheGetAttr(INDEXRELID, idxrel->rd_indextuple,\n Anum_pg_index_indclass, &isnull);\n Assert(!isnull);\n opclass = (oidvector *) DatumGetPointer(indclassDatum);\n+ scankey_attoff = 0;\n\nMaybe just assign scankey_attoff = 0 at the declaration?\n\n~~~\n\n4.\n\n+ /*\n+ * There are two cases to consider. First, if the index is a primary or\n+ * unique key, we cannot have any indexes with expressions. So, at this\n+ * point we are sure that the index we deal is not these.\n+ */\n\n\"we deal\" -> \"we are dealing with\" ?\n\n~~~\n\n5.\n\n+ /*\n+ * For a non-primary/unique index with an additional expression, do\n+ * not have to continue at this point. However, the below code\n+ * assumes the index scan is only done for simple column references.\n+ */\n+ continue;\n\nIs this one of those comments that ought to have a \"XXX\" prefix as a\nnote for the future?\n\n~~~\n\n6.\n\n- int pkattno = attoff + 1;\n...\n /* Initialize the scankey. */\n- ScanKeyInit(&skey[attoff],\n- pkattno,\n+ ScanKeyInit(&skey[scankey_attoff],\n+ index_attoff + 1,\n BTEqualStrategyNumber,\nWondering if it would have been simpler if you just did:\nint pkattno = index_attoff + 1;\n\n~~~\n\n7.\n\n- skey[attoff].sk_flags |= SK_ISNULL;\n+ skey[scankey_attoff].sk_flags |= SK_ISNULL;\n+ skey[scankey_attoff].sk_flags |= SK_SEARCHNULL;\n\nSUGGESTION\nskey[scankey_attoff].sk_flags |= (SK_ISNULL | SK_SEARCHNULL)\n\n~~~\n\n8. 
src/backend/executor/execReplication.c - RelationFindReplTupleByIndex\n\n@@ -128,28 +171,44 @@ RelationFindReplTupleByIndex(Relation rel, Oid idxoid,\n TransactionId xwait;\n Relation idxrel;\n bool found;\n+ TypeCacheEntry **eq;\n+ bool indisunique;\n+ int scankey_attoff;\n\n /* Open the index. */\n idxrel = index_open(idxoid, RowExclusiveLock);\n+ indisunique = idxrel->rd_index->indisunique;\n+\n+ /* we might not need this if the index is unique */\n+ eq = NULL;\n\nMaybe just default assign eq = NULL in the declaration?\n\n~~~\n\n9.\n\n+ scan = index_beginscan(rel, idxrel, &snap,\n+ scankey_attoff, 0);\n\nUnnecessary wrapping?\n\n~~~\n\n10.\n\n+ /* we only need to allocate once */\n+ if (eq == NULL)\n+ eq = palloc0(sizeof(*eq) * outslot->tts_tupleDescriptor->natts);\n\nBut shouldn't you also free this 'eq' before the function returns, to\nprevent leaking memory?\n\n======\n\n11. src/backend/replication/logical/relation.c - logicalrep_rel_open\n\n+ /*\n+ * Finding a usable index is an infrequent operation, it is performed\n+ * only when first time an operation is performed on the relation or\n+ * after invalidation of the relation cache entry (e.g., such as ANALYZE).\n+ */\n\nSUGGESTION (minor rewording)\nFinding a usable index is an infrequent task. It is performed only\nwhen an operation is first performed on the relation, or after\ninvalidation of the relation cache entry (e.g., such as ANALYZE).\n\n~~~\n\n12. src/backend/replication/logical/relation.c - logicalrep_partition_open\n\nSame as comment #11 above.\n\n~~~\n\n13. 
src/backend/replication/logical/relation.c - GetIndexOidFromPath\n\n+static\n+Oid\n+GetIndexOidFromPath(Path *path)\n\nTypically I think 'static Oid' should be on one line.\n\n~~~\n\n14.\n\n+ switch (path->pathtype)\n+ {\n+ case T_IndexScan:\n+ case T_IndexOnlyScan:\n+ {\n+ IndexPath *index_sc = (IndexPath *) path;\n+ indexOid = index_sc->indexinfo->indexoid;\n+\n+ break;\n+ }\n+\n+ default:\n+ indexOid = InvalidOid;\n+ }\n\nIs there any point in using a switch statement when there is only one\nfunctional code block?\n\nWhy not just do:\n\nif (path->pathtype == T_IndexScan || path->pathtype == T_IndexOnlyScan)\n{\n...\n}\n\nreturn InvalidOid;\n\n~~~\n\n15. src/backend/replication/logical/relation.c - IndexOnlyOnExpression\n\n+ * Returns true if the given index consist only of expressions such as:\n+ * CREATE INDEX idx ON table(foo(col));\n\n\"consist\" -> \"consists\"\n\n~~~\n\n16.\n\n+IndexOnlyOnExpression(IndexInfo *indexInfo)\n+{\n+ int i=0;\n+ for (i = 0; i < indexInfo->ii_NumIndexKeyAttrs; i++)\n\nDon't initialise 'i' twice.\n\n~~~\n\n17.\n\n+ AttrNumber attnum = indexInfo->ii_IndexAttrNumbers[i];\n+ if (AttributeNumberIsValid(attnum))\n+ return false;\n+\n+ }\n\nSpurious blank line\n\n~~~\n\n18. 
src/backend/replication/logical/relation.c -\nGetCheapestReplicaIdentityFullPath\n\n+/*\n+ * Iterates over the input path list, and returns another path list\n+ * where paths with non-btree indexes, partial indexes or\n+ * indexes on only expressions are eliminated from the list.\n+ */\n\n\"path list, and\" -> \"path list and\"\n\n~~~\n\n19.\n\n+ if (!OidIsValid(indexOid))\n+ {\n+ /* unrelated Path, skip */\n+ suitableIndexList = lappend(suitableIndexList, path);\n+ continue;\n+ }\n+\n+ indexRelation = index_open(indexOid, AccessShareLock);\n+ indexInfo = BuildIndexInfo(indexRelation);\n+ is_btree_index = (indexInfo->ii_Am == BTREE_AM_OID);\n+ is_partial_index = (indexInfo->ii_Predicate != NIL);\n+ is_index_only_on_expression = IndexOnlyOnExpression(indexInfo);\n+ index_close(indexRelation, NoLock);\n+\n+ if (!is_btree_index || is_partial_index || is_index_only_on_expression)\n+ continue;\n\nMaybe better to change this logic using if/else and changing the last\ncondition so that you can avoid having any of those 'continue' in this\nloop.\n\n~~~\n\n20. 
src/backend/replication/logical/relation.c -\nGetCheapestReplicaIdentityFullPath\n\n+/*\n+ * GetCheapestReplicaIdentityFullPath generates all the possible paths\n+ * for the given subscriber relation, assuming that the source relation\n+ * is replicated via REPLICA IDENTITY FULL.\n+ *\n+ * The function assumes that all the columns will be provided during\n+ * the execution phase, given that REPLICA IDENTITY FULL gurantees\n+ * that.\n+ */\n\n20a.\ntypo \"gurantees\"\n\n~\n\n20b.\nThe function comment neglects to say that after getting all these\npaths the final function return is the cheapest one that it found.\n\n~~~\n\n21.\n\n+ for (attno = 0; attno < RelationGetNumberOfAttributes(localrel); attno++)\n+ {\n+ Form_pg_attribute attr = TupleDescAttr(localrel->rd_att, attno);\n+\n+ if (attr->attisdropped)\n+ {\n+ continue;\n+ }\n+ else\n+ {\n+ Expr *eq_op;\n\nMaybe simplify by just removing the 'else' or instead just reverse the\ncondition of the 'if'.\n\n~~~\n\n22.\n\n+ /*\n+ * A sequential scan has could have been dominated by\n+ * by an index scan during make_one_rel(). We should always\n+ * have a sequential scan before set_cheapest().\n+ */\n\n\"has could have been\" -> \"could have been\"\n\n~~~\n\n23. src/backend/replication/logical/relation.c - LogicalRepUsableIndex\n\n+static Oid\n+LogicalRepUsableIndex(Relation localrel, LogicalRepRelation *remoterel)\n+{\n+ Oid idxoid;\n+\n+ /*\n+ * We never need index oid for partitioned tables, always rely on leaf\n+ * partition's index.\n+ */\n+ if (localrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n+ return InvalidOid;\n+\n+ /* simple case, we already have an identity or pkey */\n+ idxoid = GetRelationIdentityOrPK(localrel);\n+ if (OidIsValid(idxoid))\n+ return idxoid;\n+\n+ /* indexscans are disabled, use seq. scan */\n+ if (!enable_indexscan)\n+ return InvalidOid;\n\nI thought the (!enable_indexscan) fast exit perhaps should be done\nfirst, or at least before calling GetRelationIdentityOrPK.\n\n======\n\n24. 
src/backend/replication/logical/worker.c - apply_handle_delete_internal\n\n@@ -2034,12 +2021,14 @@ apply_handle_delete_internal(ApplyExecutionData *edata,\n EPQState epqstate;\n TupleTableSlot *localslot;\n bool found;\n+ Oid usableIndexOid = usable_indexoid_internal(edata, relinfo);\n\n EvalPlanQualInit(&epqstate, estate, NULL, NIL, -1);\n ExecOpenIndices(relinfo, false);\n\n- found = FindReplTupleInLocalRel(estate, localrel, remoterel,\n- remoteslot, &localslot);\n+\n+ found = FindReplTupleInLocalRel(estate, localrel, usableIndexOid,\n+ remoterel, remoteslot, &localslot);\n\n24a.\nExcess blank line above FindReplTupleInLocalRel call.\n\n~\n\n24b.\nThis code is almost the same as in handle_update_internal(), except\nthe wrapping of the params is different. Better to keep everything\nconsistent looking.\n\n~~~\n\n25. src/backend/replication/logical/worker.c - usable_indexoid_internal\n\n+/*\n+ * Decide whether we can pick an index for the relinfo (e.g., the relation)\n+ * we're actually deleting/updating from. If it is a child partition of\n+ * edata->targetRelInfo, find the index on the partition.\n+ */\n+static Oid\n+usable_indexoid_internal(ApplyExecutionData *edata, ResultRelInfo *relinfo)\n\nI'm not sure if this can return InvalidOid. The function comment\nshould clarify it.\n\n~~~\n\n26.\n\nI might be mistaken, but somehow I feel this function can be\nsimplified. e.g. If you have a var 'relmapentry' and let the normal\ntable use the initial value of that. 
Then I think you only need to\ntest for the partitioned table and reassign that var as appropriate.\nIt also removes the need for having 'usableIndexOid' var.\n\nFOR EXAMPLE,\n\nstatic Oid\nusable_indexoid_internal(ApplyExecutionData *edata, ResultRelInfo *relinfo)\n{\nResultRelInfo *targetResultRelInfo = edata->targetRelInfo;\nLogicalRepRelMapEntry *relmapentry = edata->targetRel;\nOid targetrelid = targetResultRelInfo->ri_RelationDesc->rd_rel->oid;\nOid localrelid = relinfo->ri_RelationDesc->rd_id;\n\nif (targetrelid != localrelid)\n{\n/*\n* Target is a partitioned table, so find relmapentry of the partition.\n*/\nTupleConversionMap *map = relinfo->ri_RootToPartitionMap;\nAttrMap *attrmap = map ? map->attrMap : NULL;\nLogicalRepPartMapEntry *part_entry =\nlogicalrep_partition_open(relmapentry, relinfo->ri_RelationDesc,\nattrmap);\n\nAssert(targetResultRelInfo->ri_RelationDesc->rd_rel->relkind ==\n RELKIND_PARTITIONED_TABLE);\n\nrelmapentry = part_entry->relmapentry;\n}\nreturn relmapentry->usableIndexOid;\n}\n\n~~~\n\n27.\n\n+ /*\n+ * Target is a partitioned table, get the index oid the partition.\n+ */\n\nSUGGESTION\nTarget is a partitioned table, so get the index oid of the partition.\n\nor (see the example of comment @26)\n\n~~~\n\n28. src/backend/replication/logical/worker.c - FindReplTupleInLocalRel\n\n@@ -2093,12 +2125,11 @@ FindReplTupleInLocalRel(EState *estate,\nRelation localrel,\n\n *localslot = table_slot_create(localrel, &estate->es_tupleTable);\n\nI think this might have been existing functionality...\n\nThe comment says \"* Local tuple, if found, is returned in\n'*localslot'.\" But the code is unconditionally doing\ntable_slot_create() before it even knows if a tuple was found or not.\nSo what about when it is NOT found - in that case shouldn't there be\nsome cleaning up that (unused?) table slot that got unconditionally\ncreated?\n\n~~~\n\n29. 
src/backend/replication/logical/worker.c - apply_handle_tuple_routing\n\n@@ -2202,13 +2233,17 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,\n * suitable partition.\n */\n {\n+ LogicalRepRelMapEntry *entry;\n TupleTableSlot *localslot;\n ResultRelInfo *partrelinfo_new;\n bool found;\n\n+ entry = &part_entry->relmapentry;\n\nMaybe just do this assignment at the entry declaration time?\n\n~~~\n\n30.\n\n /* Get the matching local tuple from the partition. */\n found = FindReplTupleInLocalRel(estate, partrel,\n- &part_entry->remoterel,\n+ part_entry->relmapentry.usableIndexOid,\n+ &entry->remoterel,\n remoteslot_part, &localslot);\nWhy not use the new 'entry' var just assigned instead of repeating\npart_entry->relmapentry?\n\nSUGGESTION\nfound = FindReplTupleInLocalRel(estate, partrel,\nentry->usableIndexOid,\n&entry->remoterel,\nremoteslot_part, &localslot);\n\n~~~\n\n31.\n\n+ slot_modify_data(remoteslot_part, localslot, entry,\n newtup);\n\nUnnecessary wrapping.\n\n======\n\n32. src/include/replication/logicalrelation.h\n\n+typedef struct LogicalRepPartMapEntry\n+{\n+ Oid partoid; /* LogicalRepPartMap's key */\n+ LogicalRepRelMapEntry relmapentry;\n+} LogicalRepPartMapEntry;\n\nIIUC this struct has been moved from relation.c to here. But I think\nthere was a large comment about this struct which maybe needs to be\nmoved with it (see the original relation.c).\n\n/*\n * Partition map (LogicalRepPartMap)\n *\n * When a partitioned table is used as replication target, replicated\n * operations are actually performed on its leaf partitions, which requires\n * the partitions to also be mapped to the remote relation. Parent's entry\n * (LogicalRepRelMapEntry) cannot be used as-is for all partitions, because\n * individual partitions may have different attribute numbers, which means\n * attribute mappings to remote relation's attributes must be maintained\n * separately for each partition.\n */\n\n======\n\n33. 
.../subscription/t/032_subscribe_use_index.pl\n\nTypo \"MULTIPILE\"\n\nThis typo occurs several times...\n\ne.g. # Testcase start: SUBSCRIPTION USES INDEX MODIFIES MULTIPILE ROWS\ne.g. # Testcase end: SUBSCRIPTION USES INDEX MODIFIES MULTIPILE ROWS\ne.g. # Testcase start: SUBSCRIPTION USES INDEX WITH MULTIPILE COLUMNS\ne.g. # Testcase end: SUBSCRIPTION USES INDEX MODIFIES MULTIPILE ROWS\n\n~~~\n\n34.\n\n# Basic test where the subscriber uses index\n# and only touches multiple rows\n\nWhat does \"only ... multiple\" mean?\n\nThis occurs several times also.\n\n~~~\n\n35.\n\n+# wait for initial table synchronization to finish\n+$node_subscriber->wait_for_subscription_sync;\n+$node_subscriber->wait_for_subscription_sync;\n+$node_subscriber->wait_for_subscription_sync;\n\nThat triple wait looks unusual. Is it deliberate?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 24 Aug 2022 19:06:13 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is full on the publisher"
},
{
"msg_contents": "Hi Peter, all\n\nThanks for the detailed review!\n\n\n> 1. Commit message\n>\n> 1a.\n> Majority of the logic on the subscriber side has already existed in the\n> code.\n>\n> 1b.\n> Second, when REPLICA IDENTITY IS FULL on the publisher and an index is\n> used on the subscriber...\n>\n>\n> 1c.\n> Still, below I try to show case the potential improvements using an\n> index on the subscriber\n> `pgbench_accounts(bid)`. With the index, the replication catches up\n> around ~5 seconds.\n> When the index is dropped, the replication takes around ~300 seconds.\n>\n> \"show case\" -> \"showcase\"\n>\n> Applied your suggestions to 1a/1b/1c/\n\n~\n>\n> 1d.\n> In above text, what was meant by \"catches up around ~5 seconds\"?\n> e.g. Did it mean *improves* by ~5 seconds, or *takes* ~5 seconds?\n>\n>\nIt \"takes\" 5 seconds to replicate all the changes. To be specific, I\nexecute 'SELECT sum(abalance) FROM pgbench_accounts' on the subscriber, and\ntry to measure the time until when all the changes are replicated. I do use\nthe same query on the publisher to check what the query result should be\nwhen replication is done.\n\nI updated the relevant text, does that look better?\n\n\n> ~\n>\n> 1e.\n> // create one indxe, even on a low cardinality column\n>\n> typo \"indxe\"\n>\n> ======\n>\n\nfixed.\n\nAlso, I realized that some of the comments on the commit message are stale,\nupdated those as well.\n\n\n\n>\n> 2. GENERAL\n>\n> 2a.\n> There are lots of single-line comments that start lowercase, but by\n> convention, I think they should start uppercase.\n>\n> e.g. + /* we should always use at least one attribute for the index scan */\n> e.g. + /* we might not need this if the index is unique */\n> e.g. + /* avoid expensive equality check if index is unique */\n> e.g. + /* unrelated Path, skip */\n> e.g. + /* simple case, we already have an identity or pkey */\n> e.g. + /* indexscans are disabled, use seq. scan */\n> e.g. 
+ /* target is a regular table */\n>\n> ~~\n>\n\nThanks for noting this, I didn't realize that there is a strict requirement\non this. Updated all of your suggestions, and realized one more such case.\n\nIs there documentation where such conventions are listed? I couldn't\nfind any.\n\n\n>\n> 2b.\n> There are some excess blank lines between the function. By convention,\n> I think 1 blank line is normal, but here there are sometimes 2.\n>\n> ~~\n>\n\nUpdated as well.\n\n\n>\n> 2c.\n> There are some new function comments which include their function name\n> in the comment. It seemed unnecessary.\n>\n> e.g. GetCheapestReplicaIdentityFullPath\n> e.g. FindUsableIndexForReplicaIdentityFull\n> e.g. LogicalRepUsableIndex\n>\n> ======\n>\n\nFixed this as well.\n\n\n>\n> 3. src/backend/executor/execReplication.c - build_replindex_scan_key\n>\n> - int attoff;\n> + int index_attoff;\n> + int scankey_attoff;\n> bool isnull;\n> Datum indclassDatum;\n> oidvector *opclass;\n> int2vector *indkey = &idxrel->rd_index->indkey;\n> - bool hasnulls = false;\n> -\n> - Assert(RelationGetReplicaIndex(rel) == RelationGetRelid(idxrel) ||\n> - RelationGetPrimaryKeyIndex(rel) == RelationGetRelid(idxrel));\n>\n> indclassDatum = SysCacheGetAttr(INDEXRELID, idxrel->rd_indextuple,\n> Anum_pg_index_indclass, &isnull);\n> Assert(!isnull);\n> opclass = (oidvector *) DatumGetPointer(indclassDatum);\n> + scankey_attoff = 0;\n>\n> Maybe just assign scankey_attoff = 0 at the declaration?\n>\n>\nAgain, lack of coding convention knowledge :/ My observation is that it is\noften not assigned during the declaration. But, changed this one.\n\n\n> ~~~\n>\n> 4.\n>\n> + /*\n> + * There are two cases to consider. First, if the index is a primary or\n> + * unique key, we cannot have any indexes with expressions. 
So, at this\n> + * point we are sure that the index we deal is not these.\n> + */\n>\n> \"we deal\" -> \"we are dealing with\" ?\n>\n> makes sense\n\n\n> ~~~\n>\n> 5.\n>\n> + /*\n> + * For a non-primary/unique index with an additional expression, do\n> + * not have to continue at this point. However, the below code\n> + * assumes the index scan is only done for simple column references.\n> + */\n> + continue;\n>\n> Is this one of those comments that ought to have a \"XXX\" prefix as a\n> note for the future?\n>\n\nMakes sense\n\n\n>\n> ~~~\n>\n> 6.\n>\n> - int pkattno = attoff + 1;\n> ...\n> /* Initialize the scankey. */\n> - ScanKeyInit(&skey[attoff],\n> - pkattno,\n> + ScanKeyInit(&skey[scankey_attoff],\n> + index_attoff + 1,\n> BTEqualStrategyNumber,\n> Wondering if it would have been simpler if you just did:\n> int pkattno = index_attoff + 1;\n>\n\n\nThe index is not necessarily the primary key at this point, that's why\nI removed it.\n\nThere are already 3 variables in the same function\nindex_attoff, scankey_attoff and table_attno, which are hard to avoid. But,\nthis one seemed ok to avoid, mostly to simplify the readability. Do you\nthink it is better with the additional variable? Still, I think we need a\nbetter name as \"pk\" is not relevant anymore.\n\n\n~~~\n>\n> 7.\n>\n> - skey[attoff].sk_flags |= SK_ISNULL;\n> + skey[scankey_attoff].sk_flags |= SK_ISNULL;\n> + skey[scankey_attoff].sk_flags |= SK_SEARCHNULL;\n>\n> SUGGESTION\n> skey[scankey_attoff].sk_flags |= (SK_ISNULL | SK_SEARCHNULL)\n>\n>\nlooks good, changed\n\n\n> ~~~\n>\n> 8. src/backend/executor/execReplication.c - RelationFindReplTupleByIndex\n>\n> @@ -128,28 +171,44 @@ RelationFindReplTupleByIndex(Relation rel, Oid\n> idxoid,\n> TransactionId xwait;\n> Relation idxrel;\n> bool found;\n> + TypeCacheEntry **eq;\n> + bool indisunique;\n> + int scankey_attoff;\n>\n> /* Open the index. 
*/\n> idxrel = index_open(idxoid, RowExclusiveLock);\n> + indisunique = idxrel->rd_index->indisunique;\n> +\n> + /* we might not need this if the index is unique */\n> + eq = NULL;\n>\n> Maybe just default assign eq = NULL in the declaration?\n>\n>\nAgain, I wasn't sure if it is OK regarding the coding convention to assign\nduring the declaration. Changed now.\n\n\n> ~~~\n>\n> 9.\n>\n> + scan = index_beginscan(rel, idxrel, &snap,\n> + scankey_attoff, 0);\n>\n> Unnecessary wrapping?\n>\n>\nSeems so, changed\n\n\n> ~~~\n>\n> 10.\n>\n> + /* we only need to allocate once */\n> + if (eq == NULL)\n> + eq = palloc0(sizeof(*eq) * outslot->tts_tupleDescriptor->natts);\n>\n> But shouldn't you also free this 'eq' before the function returns, to\n> prevent leaking memory?\n>\n>\nTwo notes here. First, this is allocated inside ApplyMessageContext, which\nseems to be reset per tuple change. So, that seems like a good boundary to\nkeep this allocation in memory.\n\nSecond, RelationFindReplTupleSeq() doesn't free the same allocation roughly\nat a very similar call stack. That's why I decided not to pfree. Do you see\nstrong reason to pfree at this point? Then we should probably change that\nfor RelationFindReplTupleSeq() as well.\n\n\n\n> ======\n>\n> 11. src/backend/replication/logical/relation.c - logicalrep_rel_open\n>\n> + /*\n> + * Finding a usable index is an infrequent operation, it is performed\n> + * only when first time an operation is performed on the relation or\n> + * after invalidation of the relation cache entry (e.g., such as ANALYZE).\n> + */\n>\n> SUGGESTION (minor rewording)\n> Finding a usable index is an infrequent task. It is performed only\n> when an operation is first performed on the relation, or after\n> invalidation of the relation cache entry (e.g., such as ANALYZE).\n>\n> ~~~\n>\n> makes sense, applied\n\n\n> 12. src/backend/replication/logical/relation.c - logicalrep_partition_open\n>\n> Same as comment #11 above.\n>\n>\ndone\n\n\n\n> ~~~\n>\n> 13. 
src/backend/replication/logical/relation.c - GetIndexOidFromPath\n>\n> +static\n> +Oid\n> +GetIndexOidFromPath(Path *path)\n>\n> Typically I think 'static Oid' should be on one line.\n>\n\ndone\n\n\n> ~~~\n>\n> 14.\n>\n> + switch (path->pathtype)\n> + {\n> + case T_IndexScan:\n> + case T_IndexOnlyScan:\n> + {\n> + IndexPath *index_sc = (IndexPath *) path;\n> + indexOid = index_sc->indexinfo->indexoid;\n> +\n> + break;\n> + }\n> +\n> + default:\n> + indexOid = InvalidOid;\n> + }\n>\n> Is there any point in using a switch statement when there is only one\n> functional code block?\n>\n> Why not just do:\n>\n> if (path->pathtype == T_IndexScan || path->pathtype == T_IndexOnlyScan)\n> {\n> ...\n> }\n>\n> return InvalidOid;\n>\n> ~~~\n>\n\nGood point, in the first iterations of the patch, we also had Bitmap scans\nhere. Now, the switch is redundant, applied your suggestion.\n\n\n>\n> 15. src/backend/replication/logical/relation.c - IndexOnlyOnExpression\n>\n> + * Returns true if the given index consist only of expressions such as:\n> + * CREATE INDEX idx ON table(foo(col));\n>\n> \"consist\" -> \"consists\"\n>\n> ~~~\n>\n\nfixed\n\n\n>\n> 16.\n>\n> +IndexOnlyOnExpression(IndexInfo *indexInfo)\n> +{\n> + int i=0;\n> + for (i = 0; i < indexInfo->ii_NumIndexKeyAttrs; i++)\n>\n> Don't initialise 'i' twice.\n>\n> ~~~\n>\n\nfixed\n\n\n>\n> 17.\n>\n> + AttrNumber attnum = indexInfo->ii_IndexAttrNumbers[i];\n> + if (AttributeNumberIsValid(attnum))\n> + return false;\n> +\n> + }\n>\n> Spurious blank line\n>\n> ~~~\n>\n\nfixed\n\n\n>\n> 18. 
src/backend/replication/logical/relation.c -\n> GetCheapestReplicaIdentityFullPath\n>\n> +/*\n> + * Iterates over the input path list, and returns another path list\n> + * where paths with non-btree indexes, partial indexes or\n> + * indexes on only expressions are eliminated from the list.\n> + */\n>\n> \"path list, and\" -> \"path list and\"\n>\n> ~~~\n>\n\nfixed\n\n\n>\n> 19.\n>\n> + if (!OidIsValid(indexOid))\n> + {\n> + /* unrelated Path, skip */\n> + suitableIndexList = lappend(suitableIndexList, path);\n> + continue;\n> + }\n> +\n> + indexRelation = index_open(indexOid, AccessShareLock);\n> + indexInfo = BuildIndexInfo(indexRelation);\n> + is_btree_index = (indexInfo->ii_Am == BTREE_AM_OID);\n> + is_partial_index = (indexInfo->ii_Predicate != NIL);\n> + is_index_only_on_expression = IndexOnlyOnExpression(indexInfo);\n> + index_close(indexRelation, NoLock);\n> +\n> + if (!is_btree_index || is_partial_index || is_index_only_on_expression)\n> + continue;\n>\n> Maybe better to change this logic using if/else and changing the last\n> condition so them you can avoid having any of those 'continue' in this\n> loop.\n>\n\nYes, it makes sense. It is good to avoid `continue` in the loop.\n\n\n>\n> ~~~\n>\n> 20. 
src/backend/replication/logical/relation.c -\n> GetCheapestReplicaIdentityFullPath\n>\n> +/*\n> + * GetCheapestReplicaIdentityFullPath generates all the possible paths\n> + * for the given subscriber relation, assuming that the source relation\n> + * is replicated via REPLICA IDENTITY FULL.\n> + *\n> + * The function assumes that all the columns will be provided during\n> + * the execution phase, given that REPLICA IDENTITY FULL gurantees\n> + * that.\n> + */\n>\n> 20a.\n> typo \"gurantees\"\n>\n> ~\n>\n\nFixed, for future patches I'll do a more thorough review on these myself.\nSorry for all these typos & convention errors!\n\n\n> 20b.\n> The function comment neglects to say that after getting all these\n> paths the final function return is the cheapest one that it found.\n>\n> ~~~\n>\n\nImproved the comment a bit\n\n\n>\n> 21.\n>\n> + for (attno = 0; attno < RelationGetNumberOfAttributes(localrel); attno++)\n> + {\n> + Form_pg_attribute attr = TupleDescAttr(localrel->rd_att, attno);\n> +\n> + if (attr->attisdropped)\n> + {\n> + continue;\n> + }\n> + else\n> + {\n> + Expr *eq_op;\n>\n> Maybe simplify by just removing the 'else' or instead just reverse the\n> condition of the 'if'.\n>\n> ~~~\n>\n\nI like the second suggestion more, as the `!attr->attisdropped` code block\nhas local declarations, so keeping them local to that block seems easier\nto follow.\n\n\n>\n> 22.\n>\n> + /*\n> + * A sequential scan has could have been dominated by\n> + * by an index scan during make_one_rel(). We should always\n> + * have a sequential scan before set_cheapest().\n> + */\n>\n> \"has could have been\" -> \"could have been\"\n>\n> ~~~\n>\n\nAn interesting grammar I had :) Fixed\n\n\n>\n> 23. 
src/backend/replication/logical/relation.c - LogicalRepUsableIndex\n>\n> +static Oid\n> +LogicalRepUsableIndex(Relation localrel, LogicalRepRelation *remoterel)\n> +{\n> + Oid idxoid;\n> +\n> + /*\n> + * We never need index oid for partitioned tables, always rely on leaf\n> + * partition's index.\n> + */\n> + if (localrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n> + return InvalidOid;\n> +\n> + /* simple case, we already have an identity or pkey */\n> + idxoid = GetRelationIdentityOrPK(localrel);\n> + if (OidIsValid(idxoid))\n> + return idxoid;\n> +\n> + /* indexscans are disabled, use seq. scan */\n> + if (!enable_indexscan)\n> + return InvalidOid;\n>\n> I thought the (!enable_indexscan) fast exit perhaps should be done\n> first, or at least before calling GetRelationIdentityOrPK.\n>\n\nThis is actually a point where I need some more feedback. On HEAD, even if\nthe index scan is disabled, we use the index. For this one, (a) I didn't\nwant to change the behavior for existing users (b) want to have a way to\ndisable this feature, and enable_indexscan seems like a good one.\n\nDo you think I should dare to move it above GetRelationIdentityOrPK()? Or,\nmaybe I just need more comments? I improved the comment, and it would be\nnice to hear your thoughts on this.\n\n\n> ======\n>\n> 24. 
src/backend/replication/logical/worker.c - apply_handle_delete_internal\n>\n> @@ -2034,12 +2021,14 @@ apply_handle_delete_internal(ApplyExecutionData\n> *edata,\n> EPQState epqstate;\n> TupleTableSlot *localslot;\n> bool found;\n> + Oid usableIndexOid = usable_indexoid_internal(edata, relinfo);\n>\n> EvalPlanQualInit(&epqstate, estate, NULL, NIL, -1);\n> ExecOpenIndices(relinfo, false);\n>\n> - found = FindReplTupleInLocalRel(estate, localrel, remoterel,\n> - remoteslot, &localslot);\n> +\n> + found = FindReplTupleInLocalRel(estate, localrel, usableIndexOid,\n> + remoterel, remoteslot, &localslot);\n>\n> 24a.\n> Excess blank line above FindReplTupleInLocalRel call.\n>\n> Fixed\n\n> ~\n>\n> 24b.\n> This code is almost same in function handle_update_internal(), except\n> the wrapping of the params is different. Better to keep everything\n> consistent looking.\n>\n>\nHmm, I have not changed how they look because they have one variable\ndifference (&relmapentry->remoterel vs remoterel), which requires the\nindentation to be slightly different. So, I either need a new variable or\nkeep them as-is?\n\n\n\n> ~~~\n>\n> 25. src/backend/replication/logical/worker.c - usable_indexoid_internal\n>\n> +/*\n> + * Decide whether we can pick an index for the relinfo (e.g., the\n> relation)\n> + * we're actually deleting/updating from. If it is a child partition of\n> + * edata->targetRelInfo, find the index on the partition.\n> + */\n> +static Oid\n> +usable_indexoid_internal(ApplyExecutionData *edata, ResultRelInfo\n> *relinfo)\n>\n> I'm not sure is this can maybe return InvalidOid? The function comment\n> should clarify it.\n>\n>\nImproved the comment\n\n\n> ~~~\n>\n> 26.\n>\n> I might be mistaken, but somehow I feel this function can be\n> simplified. e.g. If you have a var 'relmapentry' and let the normal\n> table use the initial value of that. 
Then I think you only need to\n> test for the partitioned table and reassign that var as appropriate.\n> It also removes the need for having 'usableIndexOid' var.\n>\n> FOR EXAMPLE,\n>\n> static Oid\n> usable_indexoid_internal(ApplyExecutionData *edata, ResultRelInfo *relinfo)\n> {\n> ResultRelInfo *targetResultRelInfo = edata->targetRelInfo;\n> LogicalRepRelMapEntry *relmapentry = edata->targetRel;\n> Oid targetrelid = targetResultRelInfo->ri_RelationDesc->rd_rel->oid;\n> Oid localrelid = relinfo->ri_RelationDesc->rd_id;\n>\n> if (targetrelid != localrelid)\n> {\n> /*\n> * Target is a partitioned table, so find relmapentry of the partition.\n> */\n> TupleConversionMap *map = relinfo->ri_RootToPartitionMap;\n> AttrMap *attrmap = map ? map->attrMap : NULL;\n> LogicalRepPartMapEntry *part_entry =\n> logicalrep_partition_open(relmapentry, relinfo->ri_RelationDesc,\n> attrmap);\n>\n> Assert(targetResultRelInfo->ri_RelationDesc->rd_rel->relkind ==\n> RELKIND_PARTITIONED_TABLE);\n>\n> relmapentry = part_entry->relmapentry;\n> }\n> return relmapentry->usableIndexOid;\n> }\n>\n> ~~~\n>\n\nTrue, that simplifies the function, applied.\n\n\n>\n> 27.\n>\n> + /*\n> + * Target is a partitioned table, get the index oid the partition.\n> + */\n>\n> SUGGESTION\n> Target is a partitioned table, so get the index oid of the partition.\n>\n> or (see the example of comment @26)\n>\n>\nApplied\n\n\n> ~~~\n>\n> 28. 
src/backend/replication/logical/worker.c - FindReplTupleInLocalRel\n>\n> @@ -2093,12 +2125,11 @@ FindReplTupleInLocalRel(EState *estate,\n> Relation localrel,\n>\n> *localslot = table_slot_create(localrel, &estate->es_tupleTable);\n>\n> I think this might have been existing functionality...\n>\n> The comment says \"* Local tuple, if found, is returned in\n> '*localslot'.\" But the code is unconditionally doing\n> table_slot_create() before it even knows if a tuple was found or not.\n> So what about when it is NOT found - in that case shouldn't there be\n> some cleaning up that (unused?) table slot that got unconditionally\n> created?\n>\n>\nThis sounds accurate. But I guess it may not have been considered critical\nas we are operating in the ApplyMessageContext? That is going to be freed\nonce a single tuple is dispatched.\n\nI have a slight preference not to do it in this patch, but if you think\notherwise let me know.\n\n\n> ~~~\n>\n> 29. src/backend/replication/logical/worker.c - apply_handle_tuple_routing\n>\n> @@ -2202,13 +2233,17 @@ apply_handle_tuple_routing(ApplyExecutionData\n> *edata,\n> * suitable partition.\n> */\n> {\n> + LogicalRepRelMapEntry *entry;\n> TupleTableSlot *localslot;\n> ResultRelInfo *partrelinfo_new;\n> bool found;\n>\n> + entry = &part_entry->relmapentry;\n>\n> Maybe just do this assignment at the entry declaration time?\n>\n>\ndone\n\n\n> ~~~\n>\n> 30.\n>\n> /* Get the matching local tuple from the partition. 
*/\n> found = FindReplTupleInLocalRel(estate, partrel,\n> - &part_entry->remoterel,\n> + part_entry->relmapentry.usableIndexOid,\n> + &entry->remoterel,\n> remoteslot_part, &localslot);\n> Why not use the new 'entry' var just assigned instead of repeating\n> part_entry->relmapentry?\n>\n> SUGGESTION\n> found = FindReplTupleInLocalRel(estate, partrel,\n> entry->usableIndexOid,\n> &entry->remoterel,\n> remoteslot_part, &localslot);\n>\n> ~~~\n>\n> Yes, looks better, changed\n\n\n> 31.\n>\n> + slot_modify_data(remoteslot_part, localslot, entry,\n> newtup);\n>\n> Unnecessary wrapping.\n>\n> ======\n>\n\nI think I have not changed this, but fixed anyway\n\n\n>\n> 32. src/include/replication/logicalrelation.h\n>\n> +typedef struct LogicalRepPartMapEntry\n> +{\n> + Oid partoid; /* LogicalRepPartMap's key */\n> + LogicalRepRelMapEntry relmapentry;\n> +} LogicalRepPartMapEntry;\n>\n> IIUC this struct has been moved from relation.c to here. But I think\n> there was a large comment about this struct which maybe needs to be\n> moved with it (see the original relation.c).\n>\n> /*\n> * Partition map (LogicalRepPartMap)\n> *\n> * When a partitioned table is used as replication target, replicated\n> * operations are actually performed on its leaf partitions, which requires\n> * the partitions to also be mapped to the remote relation. Parent's entry\n> * (LogicalRepRelMapEntry) cannot be used as-is for all partitions, because\n> * individual partitions may have different attribute numbers, which means\n> * attribute mappings to remote relation's attributes must be maintained\n> * separately for each partition.\n> */\n>\n> ======\n>\nOh, seems so, moved.\n\n\n>\n> 33. .../subscription/t/032_subscribe_use_index.pl\n>\n> Typo \"MULTIPILE\"\n>\n> This typo occurs several times...\n>\n> e.g. # Testcase start: SUBSCRIPTION USES INDEX MODIFIES MULTIPILE ROWS\n> e.g. # Testcase end: SUBSCRIPTION USES INDEX MODIFIES MULTIPILE ROWS\n> e.g. 
# Testcase start: SUBSCRIPTION USES INDEX WITH MULTIPILE COLUMNS\n> e.g. # Testcase end: SUBSCRIPTION USES INDEX MODIFIES MULTIPILE ROWS\n>\n> ~~~\n>\n>\nYep :/ Fixed now\n\n\n> 34.\n>\n> # Basic test where the subscriber uses index\n> # and only touches multiple rows\n>\n> What does \"only ... multiple\" mean?\n>\n> This occurs several times also.\n>\n>\nAh, in the earlier iterations, the tests were updating/deleting 1 row.\nLately, I changed it to multiple rows, just to have more coverage. I guess\nthe discrepancy is because of that. Updated now.\n\n\n> ~~~\n>\n> 35.\n>\n> +# wait for initial table synchronization to finish\n> +$node_subscriber->wait_for_subscription_sync;\n> +$node_subscriber->wait_for_subscription_sync;\n> +$node_subscriber->wait_for_subscription_sync;\n>\n> That triple wait looks unusual. Is it deliberate?\n>\n> Ah, not really. Removed.\n\nThanks,\nOnder",
"msg_date": "Thu, 25 Aug 2022 11:09:15 +0200",
"msg_from": "Önder Kalacı <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is full on the publisher"
},
{
"msg_contents": "Hi Onder,\n\nSince you ask me several questions [1], this post is just for answering those.\n\nI have looked again at the latest v9 patch, but I will post my review\ncomments for that separately.\n\n\nOn Thu, Aug 25, 2022 at 7:09 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n>> 1d.\n>> In above text, what was meant by \"catches up around ~5 seconds\"?\n>> e.g. Did it mean *improves* by ~5 seconds, or *takes* ~5 seconds?\n>>\n>\n> It \"takes\" 5 seconds to replicate all the changes. To be specific, I execute 'SELECT sum(abalance) FROM pgbench_accounts' on the subscriber, and try to measure the time until when all the changes are replicated. I do use the same query on the publisher to check what the query result should be when replication is done.\n>\n> I updated the relevant text, does that look better?\n\nYes.\n\n>> 2. GENERAL\n>>\n>> 2a.\n>> There are lots of single-line comments that start lowercase, but by\n>> convention, I think they should start uppercase.\n>>\n>> e.g. + /* we should always use at least one attribute for the index scan */\n>> e.g. + /* we might not need this if the index is unique */\n>> e.g. + /* avoid expensive equality check if index is unique */\n>> e.g. + /* unrelated Path, skip */\n>> e.g. + /* simple case, we already have an identity or pkey */\n>> e.g. + /* indexscans are disabled, use seq. scan */\n>> e.g. + /* target is a regular table */\n>>\n>> ~~\n>\n>\n> Thanks for noting this, I didn't realize that there is a strict requirement on this. Updated all of your suggestions, and realized one more such case.\n>\n> Is there documentation where such conventions are listed? 
I couldn't find any.\n\nI don’t know of any strict requirements, but I did think it was the\nmore common practice to make the comments look like proper sentences.\nHowever, when I tried to prove that by counting the single-line\ncomments in PG code it seems to be split almost 50:50\nlowercase/uppercase, so I guess you should just do whatever is most\nsensible or is most consistent with the surrounding code ….\n\nCounts for single line /* */ comments:\nregex ^\\s*\\/\\*\\s[a-z]+.*\\*\\/$ = 18222 results\nregex ^\\s*\\/\\*\\s[A-Z]+.*\\*\\/$ = 20252 results\n\n>> 3. src/backend/executor/execReplication.c - build_replindex_scan_key\n>>\n>> - int attoff;\n>> + int index_attoff;\n>> + int scankey_attoff;\n>> bool isnull;\n>> Datum indclassDatum;\n>> oidvector *opclass;\n>> int2vector *indkey = &idxrel->rd_index->indkey;\n>> - bool hasnulls = false;\n>> -\n>> - Assert(RelationGetReplicaIndex(rel) == RelationGetRelid(idxrel) ||\n>> - RelationGetPrimaryKeyIndex(rel) == RelationGetRelid(idxrel));\n>>\n>> indclassDatum = SysCacheGetAttr(INDEXRELID, idxrel->rd_indextuple,\n>> Anum_pg_index_indclass, &isnull);\n>> Assert(!isnull);\n>> opclass = (oidvector *) DatumGetPointer(indclassDatum);\n>> + scankey_attoff = 0;\n>>\n>> Maybe just assign scankey_attoff = 0 at the declaration?\n>>\n>\n> Again, lack of coding convention knowledge :/ My observation is that it is often not assigned during the declaration. But, changed this one.\n>\n\nI don’t know of any convention. Probably this is just my own\npreference to keep the simple default assignments with the declaration\nto reduce the LOC. YMMV.\n\n>>\n>> 6.\n>>\n>> - int pkattno = attoff + 1;\n>> ...\n>> /* Initialize the scankey. 
*/\n>> - ScanKeyInit(&skey[attoff],\n>> - pkattno,\n>> + ScanKeyInit(&skey[scankey_attoff],\n>> + index_attoff + 1,\n>> BTEqualStrategyNumber,\n>> Wondering if it would have been simpler if you just did:\n>> int pkattno = index_attoff + 1;\n>\n>\n>\n> The index is not necessarily the primary key at this point, that's why I removed it.\n>\n> There are already 3 variables in the same function index_attoff, scankey_attoff and table_attno, which are hard to avoid. But, this one seemed ok to avoid, mostly to simplify the readability. Do you think it is better with the additional variable? Still, I think we need a better name as \"pk\" is not relevant anymore.\n>\n\nYour code is fine. Leave it as-is.\n\n>> 8. src/backend/executor/execReplication.c - RelationFindReplTupleByIndex\n>>\n>> @@ -128,28 +171,44 @@ RelationFindReplTupleByIndex(Relation rel, Oid idxoid,\n>> TransactionId xwait;\n>> Relation idxrel;\n>> bool found;\n>> + TypeCacheEntry **eq;\n>> + bool indisunique;\n>> + int scankey_attoff;\n>>\n>> /* Open the index. */\n>> idxrel = index_open(idxoid, RowExclusiveLock);\n>> + indisunique = idxrel->rd_index->indisunique;\n>> +\n>> + /* we might not need this if the index is unique */\n>> + eq = NULL;\n>>\n>> Maybe just default assign eq = NULL in the declaration?\n>>\n>\n> Again, I wasn't sure if it is OK regarding the coding convention to assign during the declaration. Changed now.\n>\n\nSame as #3.\n\n>> 10.\n>>\n>> + /* we only need to allocate once */\n>> + if (eq == NULL)\n>> + eq = palloc0(sizeof(*eq) * outslot->tts_tupleDescriptor->natts);\n>>\n>> But shouldn't you also free this 'eq' before the function returns, to\n>> prevent leaking memory?\n>>\n>\n> Two notes here. First, this is allocated inside ApplyMessageContext, which seems to be reset per tuple change. So, that seems like a good boundary to keep this allocation in memory.\n>\n\nOK, fair enough. 
Is it worth adding a comment to say that or not?\n\n> Second, RelationFindReplTupleSeq() doesn't free the same allocation roughly at a very similar call stack. That's why I decided not to pfree. Do you see strong reason to pfree at this point? Then we should probably change that for RelationFindReplTupleSeq() as well.\n>\n>>\n>> 23. src/backend/replication/logical/relation.c - LogicalRepUsableIndex\n>>\n>> +static Oid\n>> +LogicalRepUsableIndex(Relation localrel, LogicalRepRelation *remoterel)\n>> +{\n>> + Oid idxoid;\n>> +\n>> + /*\n>> + * We never need index oid for partitioned tables, always rely on leaf\n>> + * partition's index.\n>> + */\n>> + if (localrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n>> + return InvalidOid;\n>> +\n>> + /* simple case, we already have an identity or pkey */\n>> + idxoid = GetRelationIdentityOrPK(localrel);\n>> + if (OidIsValid(idxoid))\n>> + return idxoid;\n>> +\n>> + /* indexscans are disabled, use seq. scan */\n>> + if (!enable_indexscan)\n>> + return InvalidOid;\n>>\n>> I thought the (!enable_indexscan) fast exit perhaps should be done\n>> first, or at least before calling GetRelationIdentityOrPK.\n>\n>\n> This is actually a point where I need some more feedback. On HEAD, even if the index scan is disabled, we use the index. For this one, (a) I didn't want to change the behavior for existing users (b) want to have a way to disable this feature, and enable_indexscan seems like a good one.\n>\n> Do you think I should dare to move it above GetRelationIdentityOrPK()? Or, maybe I just need more comments? I improved the comment, and it would be nice to hear your thoughts on this.\n\nI agree with you it is maybe best not to cause any changes in\nbehaviour. If the behaviour is unwanted then it should be changed\nindependently of this patch anyhow.\n\n>> 24b.\n>> This code is almost same in function handle_update_internal(), except\n>> the wrapping of the params is different. 
Better to keep everything\n>> consistent looking.\n>>\n>\n> Hmm, I have not changed how they look because they have one variable difference (&relmapentry->remoterel vs remoterel), which requires the indentation to be slightly difference. So, I either need a new variable or keep them as-is?\n\nOK. Keep code as-is.\n\n>>\n>> 28. src/backend/replication/logical/worker.c - FindReplTupleInLocalRel\n>>\n>> @@ -2093,12 +2125,11 @@ FindReplTupleInLocalRel(EState *estate,\n>> Relation localrel,\n>>\n>> *localslot = table_slot_create(localrel, &estate->es_tupleTable);\n>>\n>> I think this might have been existing functionality...\n>>\n>> The comment says \"* Local tuple, if found, is returned in\n>> '*localslot'.\" But the code is unconditionally doing\n>> table_slot_create() before it even knows if a tuple was found or not.\n>> So what about when it is NOT found - in that case shouldn't there be\n>> some cleaning up that (unused?) table slot that got unconditionally\n>> created?\n>>\n>\n> This sounds accurate. But I guess it may not have been considered critical as we are operating in the ApplyMessageContext? Tha is going to be freed once a single tuple is dispatched.\n>\n> I have a slight preference not to do it in this patch, but if you think otherwise let me know.\n\nI agree. Maybe this is not even a leak worth bothering about if it is\nonly in the short-lived ApplyMessageContext like you say. 
Anyway,\nAFAIK this was already in existing code, so a fix (if any) would\nbelong in a different patch to this one.\n\n>> 31.\n>>\n>> + slot_modify_data(remoteslot_part, localslot, entry,\n>> newtup);\n>>\n>> Unnecessary wrapping.\n>>\n>> ======\n>\n>\n> I think I have not changed this, but fixed anyway\n\nHmm - I don't see that you changed this, but anyway I guess you\nshouldn't be fixing wrapping problems unless this patch caused them.\n\n------\n[1] https://www.postgresql.org/message-id/CACawEhXbw%3D%3DK02v3%3DnHFEAFJqegx0b4r2J%2BFtXtKFkJeE6R95Q%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 30 Aug 2022 20:13:41 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is full on the publisher"
},
{
"msg_contents": "Here are some review comments for the patch v9-0001:\n\n======\n\n1. Commit message\n\n1a.\nWith this patch, I'm proposing the following change: If there is an\nindex on the subscriber, use the index as long as the planner\nsub-modules picks any index over sequential scan. The index should be\na btree index, not a partital index. Finally, the index should have at\nleast one column reference (e.g., cannot consists of only\nexpressions).\n\nSUGGESTION\nWith this patch, I'm proposing the following change: If there is any\nindex on the subscriber, let the planner sub-modules compare the costs\nof index versus sequential scan and choose the cheapest. The index\nshould be a btree index, not a partial index, and it should have at\nleast one column reference (e.g., cannot consist of only expressions).\n\n~\n\n1b.\nThe Majority of the logic on the subscriber side exists in the code.\n\n\"exists\" -> \"already exists\"\n\n~\n\n1c.\npsql -c \"truncate pgbench_accounts;\" -p 9700 postgres\n\n\"truncate\" -> \"TRUNCATE\"\n\n~\n\n1d.\nTry to wrap this message text at 80 char width.\n\n======\n\n2. src/backend/replication/logical/relation.c - logicalrep_rel_open\n\n+ /*\n+ * Finding a usable index is an infrequent task. It is performed\n+ * when an operation is first performed on the relation, or after\n+ * invalidation of the relation cache entry (e.g., such as ANALYZE).\n+ */\n+ entry->usableIndexOid = LogicalRepUsableIndex(entry->localrel, remoterel);\n\nSeemed a bit odd to say \"performed\" 2x in the same sentence.\n\n\"It is performed when...\" -> \"It occurs when...” (?)\n\n~~~\n\n3. src/backend/replication/logical/relation.c - logicalrep_partition_open\n\n+ /*\n+ * Finding a usable index is an infrequent task. 
It is performed\n+ * when an operation is first performed on the relation, or after\n+ * invalidation of the relation cache entry (e.g., such as ANALYZE).\n+ */\n+ part_entry->relmapentry.usableIndexOid =\n+ LogicalRepUsableIndex(partrel, remoterel);\n\n3a.\nSame as comment #2 above.\n\n~\n\n3b.\nThe jumping between 'part_entry' and 'entry' is confusing. Since\n'entry' is already assigned to be &part_entry->relmapentry can't you\nuse that here?\n\nSUGGESTION\nentry->usableIndexOid = LogicalRepUsableIndex(partrel, remoterel);\n\n~~~\n\n4. src/backend/replication/logical/relation.c - GetIndexOidFromPath\n\n+/*\n+ * Returns a valid index oid if the input path is an index path.\n+ * Otherwise, return invalid oid.\n+ */\n+static Oid\n+GetIndexOidFromPath(Path *path)\n\nPerhaps make this function comment more consistent with others (like\nGetRelationIdentityOrPK, LogicalRepUsableIndex) and refer to the\nInvalidOid.\n\nSUGGESTION\n/*\n * Returns a valid index oid if the input path is an index path.\n *\n * Otherwise, returns InvalidOid.\n */\n\n~~~\n\n5. src/backend/replication/logical/relation.c - IndexOnlyOnExpression\n\n+bool\n+IndexOnlyOnExpression(IndexInfo *indexInfo)\n+{\n+ int i;\n+ for (i = 0; i < indexInfo->ii_NumIndexKeyAttrs; i++)\n+ {\n+ AttrNumber attnum = indexInfo->ii_IndexAttrNumbers[i];\n+ if (AttributeNumberIsValid(attnum))\n+ return false;\n+ }\n+\n+ return true;\n+}\n\n5a.\nAdd a blank line after those declarations.\n\n~\n\n5b.\nAFAIK the C99 style for loop declarations should be OK [1] for new\ncode, so declaring like below would be cleaner:\n\nfor (int i = 0; ...\n\n~~~\n\n6. 
src/backend/replication/logical/relation.c -\nFilterOutNotSuitablePathsForReplIdentFull\n\n+/*\n+ * Iterates over the input path list and returns another path list\n+ * where paths with non-btree indexes, partial indexes or\n+ * indexes on only expressions are eliminated from the list.\n+ */\n+static List *\n+FilterOutNotSuitablePathsForReplIdentFull(List *pathlist)\n\n\"are eliminated from the list.\" -> \"have been removed.\"\n\n~~~\n\n7.\n\n+ foreach(lc, pathlist)\n+ {\n+ Path *path = (Path *) lfirst(lc);\n+ Oid indexOid = GetIndexOidFromPath(path);\n+ Relation indexRelation;\n+ IndexInfo *indexInfo;\n+ bool is_btree;\n+ bool is_partial;\n+ bool is_only_on_expression;\n+\n+ if (!OidIsValid(indexOid))\n+ {\n+ /* Unrelated Path, skip */\n+ suitableIndexList = lappend(suitableIndexList, path);\n+ }\n+ else\n+ {\n+ indexRelation = index_open(indexOid, AccessShareLock);\n+ indexInfo = BuildIndexInfo(indexRelation);\n+ is_btree = (indexInfo->ii_Am == BTREE_AM_OID);\n+ is_partial = (indexInfo->ii_Predicate != NIL);\n+ is_only_on_expression = IndexOnlyOnExpression(indexInfo);\n+ index_close(indexRelation, NoLock);\n+\n+ if (is_btree && !is_partial && !is_only_on_expression)\n+ suitableIndexList = lappend(suitableIndexList, path);\n+ }\n+ }\n\nI think most of those variables are only used in the \"else\" block so\nmaybe it's better to declare them at that scope.\n\n+ Relation indexRelation;\n+ IndexInfo *indexInfo;\n+ bool is_btree;\n+ bool is_partial;\n+ bool is_only_on_expression;\n\n~~~\n\n8. src/backend/replication/logical/relation.c -\nGetCheapestReplicaIdentityFullPath\n\n+ * Indexes that consists of only expressions (e.g.,\n+ * no simple column references on the index) are also\n+ * eliminated with a similar reasoning.\n\n\"consists\" -> \"consist\"\n\n\"with a similar reasoning\" -> \"with similar reasoning\"\n\n~~~\n\n9.\n\n+ * We also eliminate non-btree indexes, which could be relaxed\n+ * if needed. 
If we allow non-btree indexes, we should adjust\n+ * RelationFindReplTupleByIndex() to support such indexes.\n\nThis looks like another of those kinds of comments that should have\n\"XXX\" prefix as a note to the future.\n\n~~~\n\n10. src/backend/replication/logical/relation.c -\nFindUsableIndexForReplicaIdentityFull\n\n+/*\n+ * Returns an index oid if the planner submodules picks index scans\n+ * over sequential scan.\n\n10a\n\"picks\" -> \"pick\"\n\n~\n\n10b.\nMaybe this should also say \", otherwise returns InvalidOid\" (?)\n\n~~~\n\n11.\n\n+FindUsableIndexForReplicaIdentityFull(Relation localrel)\n+{\n+ MemoryContext usableIndexContext;\n+ MemoryContext oldctx;\n+ Path *cheapest_total_path;\n+ Oid indexOid;\n\nIn the following function, and in the one after that, you've named the\nindex Oid as 'idxoid' (not 'indexOid'). IMO it's better to use\nconsistent naming everywhere.\n\n~~~\n\n12. src/backend/replication/logical/relation.c - GetRelationIdentityOrPK\n\n12a.\nI wondered what is the benefit of having this function. IIUC it is\nonly called from one place (LogicalRepUsableIndex) and IMO the code\nwould probably be easier if you just inline this logic in that\nfunction...\n\n~\n\n12b.\n+/*\n+ * Get replica identity index or if it is not defined a primary key.\n+ *\n+ * If neither is defined, returns InvalidOid\n+ */\n\nIf you want to keep the function for some reason (e.g. see #12a) then\nI thought the function comment could be better.\n\nSUGGESTION\n/*\n * Returns OID of the relation's replica identity index, or OID of the\n * relation's primary key index.\n *\n * If neither is defined, returns InvalidOid.\n */\n\n~~~\n\n13. 
src/backend/replication/logical/relation.c - LogicalRepUsableIndex\n\nFor some reason, I feel this function should be called\nFindLogicalRepUsableIndex (or similar), because it seems more\nconsistent with the others which might return the Oid or might return\nInvalidOid...\n\n~~~\n\n14.\n\n+ /*\n+ * Index scans are disabled, use sequential scan. Note that we do allow\n+ * index scans when there is a primary key or unique index replica\n+ * identity. That is the legacy behavior so we hesitate to move this check\n+ * above.\n+ */\n\nPerhaps a slight rephrasing of that comment?\n\nSUGGESTION\nIf index scans are disabled, use a sequential scan.\n\nNote that we still allowed index scans above when there is a primary\nkey or unique index replica identity, but that is the legacy behaviour\n(even when enable_indexscan is false), so we hesitate to move this\nenable_indexscan check to be done earlier in this function.\n\n~~~\n\n15.\n\n+ * If we had a primary key or relation identity with a unique index,\n+ * we would have already found a valid oid. At this point, the remote\n+ * relation has replica identity full and we have at least one local\n+ * index defined.\n\n\"would have already found a valid oid.\" -> \"would have already found\nand returned that oid.\"\n\n======\n\n16. src/backend/replication/logical/worker.c - usable_indexoid_internal\n\n+/*\n+ * Decide whether we can pick an index for the relinfo (e.g., the relation)\n+ * we're actually deleting/updating from. If it is a child partition of\n+ * edata->targetRelInfo, find the index on the partition.\n+ *\n+ * Note that if the corresponding relmapentry has InvalidOid usableIndexOid,\n+ * the function returns InvalidOid. 
In that case, the tuple is used via\n+ * sequential execution.\n+ */\n+static Oid\n+usable_indexoid_internal(ApplyExecutionData *edata, ResultRelInfo *relinfo)\n\nI am not sure this is the right place to be saying that last sentence\n(\"In that case, the tuple is used via sequential execution.\") because\nit's up to the *calling* code to decide what to do if InvalidOid is\nreturned\n\n======\n\n17. src/include/replication/logicalrelation.h\n\n@ -31,20 +32,40 @@ typedef struct LogicalRepRelMapEntry\n Relation localrel; /* relcache entry (NULL when closed) */\n AttrMap *attrmap; /* map of local attributes to remote ones */\n bool updatable; /* Can apply updates/deletes? */\n+ Oid usableIndexOid; /* which index to use? (Invalid when no index\n+ * used) */\n\nSUGGESTION (for the comment)\nwhich index to use, or InvalidOid if none\n\n~~~\n\n18.\n\n+/*\n+ * Partition map (LogicalRepPartMap)\n+ *\n+ * When a partitioned table is used as replication target, replicated\n+ * operations are actually performed on its leaf partitions, which requires\n+ * the partitions to also be mapped to the remote relation. Parent's entry\n+ * (LogicalRepRelMapEntry) cannot be used as-is for all partitions, because\n+ * individual partitions may have different attribute numbers, which means\n+ * attribute mappings to remote relation's attributes must be maintained\n+ * separately for each partition.\n+ */\n+typedef struct LogicalRepPartMapEntry\n\nSomething feels not quite right using the (unchanged) comment about\nthe Partition map which was removed from where it was originally in\nrelation.c.\n\nThe reason I am unsure is that this comment is still referring to the\n\"LogicalRepPartMap\", which is not here but is declared static in\nrelation.c. 
Maybe the quick/easy fix would be to just change the first\nline to say: \"Partition map (see LogicalRepPartMap in relation.c)\".\nOTOH, I'm not sure if some part of this comment still needs to be left\nin relation.c (??)\n\n\n------\n[1] https://www.postgresql.org/docs/devel/source-conventions.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 31 Aug 2022 09:35:54 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is full on the publisher"
},
{
"msg_contents": "Hi Peter,\n\nThanks for the reviews! I'll reply to both of your reviews separately.\n\n\n> >> 10.\n> >>\n> >> + /* we only need to allocate once */\n> >> + if (eq == NULL)\n> >> + eq = palloc0(sizeof(*eq) * outslot->tts_tupleDescriptor->natts);\n> >>\n> >> But shouldn't you also free this 'eq' before the function returns, to\n> >> prevent leaking memory?\n> >>\n> >\n> > Two notes here. First, this is allocated inside ApplyMessageContext,\n> which seems to be reset per tuple change. So, that seems like a good\n> boundary to keep this allocation in memory.\n> >\n>\n> OK, fair enough. Is it worth adding a comment to say that or not?\n>\n\nYes, sounds good. Added 1 sentence comment, I'll push this along with my\nother changes on v10.\n\n\nThanks,\nOnder",
"msg_date": "Thu, 1 Sep 2022 09:23:00 +0300",
"msg_from": "Önder Kalacı <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is full on the publisher"
},
{
"msg_contents": "Hi again,\n\n\n> ======\n>\n> 1. Commit message\n>\n> 1a.\n> With this patch, I'm proposing the following change: If there is an\n> index on the subscriber, use the index as long as the planner\n> sub-modules picks any index over sequential scan. The index should be\n> a btree index, not a partital index. Finally, the index should have at\n> least one column reference (e.g., cannot consists of only\n> expressions).\n>\n> SUGGESTION\n> With this patch, I'm proposing the following change: If there is any\n> index on the subscriber, let the planner sub-modules compare the costs\n> of index versus sequential scan and choose the cheapest. The index\n> should be a btree index, not a partial index, and it should have at\n> least one column reference (e.g., cannot consist of only expressions).\n>\n>\nmakes sense.\n\n\n> ~\n>\n> 1b.\n> The Majority of the logic on the subscriber side exists in the code.\n>\n> \"exists\" -> \"already exists\"\n>\n\nfixed\n\n>\n> ~\n>\n> 1c.\n> psql -c \"truncate pgbench_accounts;\" -p 9700 postgres\n>\n> \"truncate\" -> \"TRUNCATE\"\n>\n\nfixed\n\n\n> ~\n>\n> 1d.\n> Try to wrap this message text at 80 char width.\n>\n\nfixed\n\n\n>\n> ======\n>\n> 2. src/backend/replication/logical/relation.c - logicalrep_rel_open\n>\n> + /*\n> + * Finding a usable index is an infrequent task. It is performed\n> + * when an operation is first performed on the relation, or after\n> + * invalidation of the relation cache entry (e.g., such as ANALYZE).\n> + */\n> + entry->usableIndexOid = LogicalRepUsableIndex(entry->localrel,\n> remoterel);\n>\n> Seemed a bit odd to say \"performed\" 2x in the same sentence.\n>\n> \"It is performed when...\" -> \"It occurs when...” (?)\n>\n>\nfixed\n\n\n> ~~~\n>\n> 3. src/backend/replication/logical/relation.c - logicalrep_partition_open\n>\n> + /*\n> + * Finding a usable index is an infrequent task. 
It is performed\n> + * when an operation is first performed on the relation, or after\n> + * invalidation of the relation cache entry (e.g., such as ANALYZE).\n> + */\n> + part_entry->relmapentry.usableIndexOid =\n> + LogicalRepUsableIndex(partrel, remoterel);\n>\n> 3a.\n> Same as comment #2 above.\n>\n\ndone\n\n\n>\n> ~\n>\n> 3b.\n> The jumping between 'part_entry' and 'entry' is confusing. Since\n> 'entry' is already assigned to be &part_entry->relmapentry can't you\n> use that here?\n>\n> SUGGESTION\n> entry->usableIndexOid = LogicalRepUsableIndex(partrel, remoterel);\n>\n> Yes, sure it makes sense.\n\n\n> ~~~\n>\n> 4. src/backend/replication/logical/relation.c - GetIndexOidFromPath\n>\n> +/*\n> + * Returns a valid index oid if the input path is an index path.\n> + * Otherwise, return invalid oid.\n> + */\n> +static Oid\n> +GetIndexOidFromPath(Path *path)\n>\n> Perhaps may this function comment more consistent with others (like\n> GetRelationIdentityOrPK, LogicalRepUsableIndex) and refer to the\n> InvalidOid.\n>\n> SUGGESTION\n> /*\n> * Returns a valid index oid if the input path is an index path.\n> *\n> * Otherwise, returns InvalidOid.\n> */\n>\n> sounds good\n\n\n> ~~~\n>\n> 5. src/backend/replication/logical/relation.c - IndexOnlyOnExpression\n>\n> +bool\n> +IndexOnlyOnExpression(IndexInfo *indexInfo)\n> +{\n> + int i;\n> + for (i = 0; i < indexInfo->ii_NumIndexKeyAttrs; i++)\n> + {\n> + AttrNumber attnum = indexInfo->ii_IndexAttrNumbers[i];\n> + if (AttributeNumberIsValid(attnum))\n> + return false;\n> + }\n> +\n> + return true;\n> +}\n>\n> 5a.\n> Add a blank line after those declarations.\n>\n>\nDone, also went over all the functions and ensured we don't have this\nanymore\n\n\n> ~\n>\n> 5b.\n> AFAIK the C99 style for loop declarations should be OK [1] for new\n> code, so declaring like below would be cleaner:\n>\n> for (int i = 0; ...\n>\n> Done\n\n> ~~~\n>\n> 6. 
src/backend/replication/logical/relation.c -\n> FilterOutNotSuitablePathsForReplIdentFull\n>\n> +/*\n> + * Iterates over the input path list and returns another path list\n> + * where paths with non-btree indexes, partial indexes or\n> + * indexes on only expressions are eliminated from the list.\n> + */\n> +static List *\n> +FilterOutNotSuitablePathsForReplIdentFull(List *pathlist)\n>\n> \"are eliminated from the list.\" -> \"have been removed.\"\n>\n> Done\n\n\n> ~~~\n>\n> 7.\n>\n> + foreach(lc, pathlist)\n> + {\n> + Path *path = (Path *) lfirst(lc);\n> + Oid indexOid = GetIndexOidFromPath(path);\n> + Relation indexRelation;\n> + IndexInfo *indexInfo;\n> + bool is_btree;\n> + bool is_partial;\n> + bool is_only_on_expression;\n> +\n> + if (!OidIsValid(indexOid))\n> + {\n> + /* Unrelated Path, skip */\n> + suitableIndexList = lappend(suitableIndexList, path);\n> + }\n> + else\n> + {\n> + indexRelation = index_open(indexOid, AccessShareLock);\n> + indexInfo = BuildIndexInfo(indexRelation);\n> + is_btree = (indexInfo->ii_Am == BTREE_AM_OID);\n> + is_partial = (indexInfo->ii_Predicate != NIL);\n> + is_only_on_expression = IndexOnlyOnExpression(indexInfo);\n> + index_close(indexRelation, NoLock);\n> +\n> + if (is_btree && !is_partial && !is_only_on_expression)\n> + suitableIndexList = lappend(suitableIndexList, path);\n> + }\n> + }\n>\n> I think most of those variables are only used in the \"else\" block so\n> maybe it's better to declare them at that scope.\n>\n> + Relation indexRelation;\n> + IndexInfo *indexInfo;\n> + bool is_btree;\n> + bool is_partial;\n> + bool is_only_on_expression;\n>\n>\nMakes sense\n\n\n> ~~~\n>\n> 8. 
src/backend/replication/logical/relation.c -\n> GetCheapestReplicaIdentityFullPath\n>\n> + * Indexes that consists of only expressions (e.g.,\n> + * no simple column references on the index) are also\n> + * eliminated with a similar reasoning.\n>\n> \"consists\" -> \"consist\"\n>\n> \"with a similar reasoning\" -> \"with similar reasoning\"\n>\n> fixed\n\n> ~~~\n>\n> 9.\n>\n> + * We also eliminate non-btree indexes, which could be relaxed\n> + * if needed. If we allow non-btree indexes, we should adjust\n> + * RelationFindReplTupleByIndex() to support such indexes.\n>\n> This looks like another of those kinds of comments that should have\n> \"XXX\" prefix as a note to the future.\n>\n\nadded\n\n\n>\n> ~~~\n>\n> 10. src/backend/replication/logical/relation.c -\n> FindUsableIndexForReplicaIdentityFull\n>\n> +/*\n> + * Returns an index oid if the planner submodules picks index scans\n> + * over sequential scan.\n>\n> 10a\n> \"picks\" -> \"pick\"\n>\n>\ndone\n\n\n> ~\n>\n> 10b.\n> Maybe this should also say \", otherwise returns InvalidOid\" (?)\n>\n>\nMakes sense, added similar to above suggestion\n\n\n> ~~~\n>\n> 11.\n>\n> +FindUsableIndexForReplicaIdentityFull(Relation localrel)\n> +{\n> + MemoryContext usableIndexContext;\n> + MemoryContext oldctx;\n> + Path *cheapest_total_path;\n> + Oid indexOid;\n>\n> In the following function, and in the one after that, you've named the\n> index Oid as 'idxoid' (not 'indexOid'). IMO it's better to use\n> consistent naming everywhere.\n>\n\n Ok, existing functions use idxoid, switched to that.\n\n>\n> ~~~\n>\n> 12. src/backend/replication/logical/relation.c - GetRelationIdentityOrPK\n>\n> 12a.\n> I wondered what is the benefit of having this function. 
IIUC it is\n> only called from one place (LogicalRepUsableIndex) and IMO the code\n> would probably be easier if you just inline this logic in that\n> function...\n>\n>\nI just moved that from src/backend/replication/logical/worker.c, so\nprobably better not to remove it in this patch?\n\nTbh, I like the simplicity it provides.\n\n\n> ~\n>\n> 12b.\n> +/*\n> + * Get replica identity index or if it is not defined a primary key.\n> + *\n> + * If neither is defined, returns InvalidOid\n> + */\n>\n> If you want to keep the function for some reason (e.g. see #12a) then\n> I thought the function comment could be better.\n>\n> SUGGESTION\n> /*\n> * Returns OID of the relation's replica identity index, or OID of the\n> * relation's primary key index.\n> *\n> * If neither is defined, returns InvalidOid.\n> */\n>\n>\nAs I noted, I just moved this function. So, left as-is for now.\n\n\n> ~~~\n>\n> 13. src/backend/replication/logical/relation.c - LogicalRepUsableIndex\n>\n> For some reason, I feel this function should be called\n> FindLogicalRepUsableIndex (or similar), because it seems more\n> consistent with the others which might return the Oid or might return\n> InvalidOid...\n>\n>\nMakes sense, changed\n\n\n> ~~~\n>\n> 14.\n>\n> + /*\n> + * Index scans are disabled, use sequential scan. Note that we do allow\n> + * index scans when there is a primary key or unique index replica\n> + * identity. 
That is the legacy behavior so we hesitate to move this check\n> + * above.\n> + */\n>\n> Perhaps a slight rephrasing of that comment?\n>\n> SUGGESTION\n> If index scans are disabled, use a sequential scan.\n>\n> Note that we still allowed index scans above when there is a primary\n> key or unique index replica identity, but that is the legacy behaviour\n> (even when enable_indexscan is false), so we hesitate to move this\n> enable_indexscan check to be done earlier in this function.\n>\n> ~~~\n>\n\nSounds good, changed\n\n>\n> 15.\n>\n> + * If we had a primary key or relation identity with a unique index,\n> + * we would have already found a valid oid. At this point, the remote\n> + * relation has replica identity full and we have at least one local\n> + * index defined.\n>\n> \"would have already found a valid oid.\" -> \"would have already found\n> and returned that oid.\"\n>\n\nDone\n\n\n>\n> ======\n>\n> 16. src/backend/replication/logical/worker.c - usable_indexoid_internal\n>\n> +/*\n> + * Decide whether we can pick an index for the relinfo (e.g., the\n> relation)\n> + * we're actually deleting/updating from. If it is a child partition of\n> + * edata->targetRelInfo, find the index on the partition.\n> + *\n> + * Note that if the corresponding relmapentry has InvalidOid\n> usableIndexOid,\n> + * the function returns InvalidOid. In that case, the tuple is used via\n> + * sequential execution.\n> + */\n> +static Oid\n> +usable_indexoid_internal(ApplyExecutionData *edata, ResultRelInfo\n> *relinfo)\n>\n> I am not sure this is the right place to be saying that last sentence\n> (\"In that case, the tuple is used via sequential execution.\") because\n> it's up to the *calling* code to decide what to do if InvalidOid is\n> returned\n>\n\n Right, for now this is true, but could change in the future. Removed.\n\n\n> ======\n>\n> 17. 
src/include/replication/logicalrelation.h\n>\n> @ -31,20 +32,40 @@ typedef struct LogicalRepRelMapEntry\n> Relation localrel; /* relcache entry (NULL when closed) */\n> AttrMap *attrmap; /* map of local attributes to remote ones */\n> bool updatable; /* Can apply updates/deletes? */\n> + Oid usableIndexOid; /* which index to use? (Invalid when no index\n> + * used) */\n>\n> SUGGESTION (for the comment)\n> which index to use, or InvalidOid if none\n>\n\nmakes sense\n\n\n>\n> ~~~\n>\n> 18.\n>\n> +/*\n> + * Partition map (LogicalRepPartMap)\n> + *\n> + * When a partitioned table is used as replication target, replicated\n> + * operations are actually performed on its leaf partitions, which\n> requires\n> + * the partitions to also be mapped to the remote relation. Parent's\n> entry\n> + * (LogicalRepRelMapEntry) cannot be used as-is for all partitions,\n> because\n> + * individual partitions may have different attribute numbers, which means\n> + * attribute mappings to remote relation's attributes must be maintained\n> + * separately for each partition.\n> + */\n> +typedef struct LogicalRepPartMapEntry\n>\n> Something feels not quite right using the (unchanged) comment about\n> the Partition map which was removed from where it was originally in\n> relation.c.\n>\n> The reason I am unsure is that this comment is still referring to the\n> \"LogicalRepPartMap\", which is not here but is declared static in\n> relation.c. 
Maybe the quick/easy fix would be to just change the first\n> line to say: \"Partition map (see LogicalRepPartMap in relation.c)\".\n> OTOH, I'm not sure if some part of this comment still needs to be left\n> in relation.c (??)\n>\n> Hmm, I agree that we need some extra comments pointing where this is used\n(I followed something similar to your suggestion).\n\nHowever, I also think that it is nicer to keep this comment here because\nthat seems more common in the code-base that the comments are on the\nMapEntry, not on the Map itself, no?\n\nThanks,\nOnder",
"msg_date": "Thu, 1 Sep 2022 09:23:11 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Sat, Aug 20, 2022 at 4:32 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n> I'm a little late to catch up with your comments, but here are my replies:\n>\n>> > My answer for the above assumes that your question is regarding what happens if you ANALYZE on a partitioned table. If your question is something different, please let me know.\n>> >\n>>\n>> I was talking about inheritance cases, something like:\n>> create table tbl1 (a int);\n>> create table tbl1_part1 (b int) inherits (tbl1);\n>> create table tbl1_part2 (c int) inherits (tbl1);\n>>\n>> What we do in such cases is documented as: \"if the table being\n>> analyzed has inheritance children, ANALYZE gathers two sets of\n>> statistics: one on the rows of the parent table only, and a second\n>> including rows of both the parent table and all of its children. This\n>> second set of statistics is needed when planning queries that process\n>> the inheritance tree as a whole. The child tables themselves are not\n>> individually analyzed in this case.\"\n>\n>\n> Oh, I haven't considered inherited tables. That seems right, the statistics of the children are not updated when the parent is analyzed.\n>\n>>\n>> Now, the point I was worried about was what if the changes in child\n>> tables (*_part1, *_part2) are much more than in tbl1? In such cases,\n>> we may not invalidate child rel entries, so how will logical\n>> replication behave for updates/deletes on child tables? There may not\n>> be any problem here but it is better to do some analysis of such cases\n>> to see how it behaves.\n>\n>\n> I also haven't observed any specific issues. In the end, when the user (or autovacuum) does ANALYZE on the child, it is when the statistics are updated for the child.\n>\n\nRight, I also think that should be the behavior but I have not\nverified it. 
However, I think it should be easy to verify if\nautovacuum updates the stats for child tables when we operate on only\none of such tables and whether that will invalidate the cache for our\ncase.\n\n> Although I do not have much experience with inherited tables, this sounds like the expected behavior?\n>\n> I also pushed a test covering inherited tables. First, a basic test on the parent. Then, show that updates on the parent can also use indexes of the children. Also, after an ANALYZE on the child, we can re-calculate the index and use the index with a higher cardinality column.\n>\n>>\n>> > Also, for the majority of the use-cases, I think we'd probably expect an index on a column with high cardinality -- hence use index scan. So, bitmap index scans are probably not going to be that much common.\n>> >\n>>\n>> You are probably right here but I don't think we can make such\n>> assumptions. I think the safest way to avoid any regression here is to\n>> choose an index when the planner selects an index scan. We can always\n>> extend it later to bitmap scans if required. We can add a comment\n>> indicating the same.\n>>\n>\n> Alright, I got rid of the bitmap scans.\n>\n> Though, it caused few of the new tests to fail. I think because of the data size/distribution, the planner picks bitmap scans. To make the tests consistent and small, I added `enable_bitmapscan to off` for this new test file. Does that sound ok to you? Or, should we change the tests to make sure they genuinely use index scans?\n>\n\nThat sounds okay to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 6 Sep 2022 16:43:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi,\n\n\n> >\n> > Oh, I haven't considered inherited tables. That seems right, the\n> statistics of the children are not updated when the parent is analyzed.\n> >\n> >>\n> >> Now, the point I was worried about was what if the changes in child\n> >> tables (*_part1, *_part2) are much more than in tbl1? In such cases,\n> >> we may not invalidate child rel entries, so how will logical\n> >> replication behave for updates/deletes on child tables? There may not\n> >> be any problem here but it is better to do some analysis of such cases\n> >> to see how it behaves.\n> >\n> >\n> > I also haven't observed any specific issues. In the end, when the user\n> (or autovacuum) does ANALYZE on the child, it is when the statistics are\n> updated for the child.\n> >\n>\n> Right, I also think that should be the behavior but I have not\n> verified it. However, I think it should be easy to verify if\n> autovacuum updates the stats for child tables when we operate on only\n> one of such tables and whether that will invalidate the cache for our\n> case.\n>\n>\nI already added a regression test for this with the title: # Testcase\nstart: SUBSCRIPTION CAN UPDATE THE INDEX IT USES AFTER ANALYZE - INHERITED\nTABLE\n\nI realized that the comments on the test case were confusing, and clarified\nthose. Attached the new version also rebased onto the master branch.\n\nThanks,\nOnder",
"msg_date": "Wed, 14 Sep 2022 15:04:00 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Dear Önder,\r\n\r\nThank you for proposing good feature. I'm also interested in the patch, \r\nSo I started to review this. Followings are initial comments.\r\n\r\n===\r\nFor execRelation.c\r\n\r\n01. RelationFindReplTupleByIndex()\r\n\r\n```\r\n /* Start an index scan. */\r\n InitDirtySnapshot(snap);\r\n- scan = index_beginscan(rel, idxrel, &snap,\r\n- IndexRelationGetNumberOfKeyAttributes(idxrel),\r\n- 0);\r\n \r\n /* Build scan key. */\r\n- build_replindex_scan_key(skey, rel, idxrel, searchslot);\r\n+ scankey_attoff = build_replindex_scan_key(skey, rel, idxrel, searchslot);\r\n \r\n+ scan = index_beginscan(rel, idxrel, &snap, scankey_attoff, 0);\r\n```\r\n\r\nI think \"/* Start an index scan. */\" should be just above index_beginscan().\r\n\r\n===\r\nFor worker.c\r\n\r\n02. sable_indexoid_internal()\r\n\r\n```\r\n+ * Note that if the corresponding relmapentry has InvalidOid usableIndexOid,\r\n+ * the function returns InvalidOid.\r\n+ */\r\n+static Oid\r\n+usable_indexoid_internal(ApplyExecutionData *edata, ResultRelInfo *relinfo)\r\n```\r\n\r\n\"InvalidOid usableIndexOid\" should be \"invalid usableIndexOid,\"\r\n\r\n03. check_relation_updatable()\r\n\r\n```\r\n * We are in error mode so it's fine this is somewhat slow. It's better to\r\n * give user correct error.\r\n */\r\n- if (OidIsValid(GetRelationIdentityOrPK(rel->localrel)))\r\n+ if (OidIsValid(rel->usableIndexOid))\r\n {\r\n```\r\n\r\nShouldn't we change the above comment to? The check is no longer slow.\r\n\r\n===\r\nFor relation.c\r\n\r\n04. 
GetCheapestReplicaIdentityFullPath()\r\n\r\n```\r\n+static Path *\r\n+GetCheapestReplicaIdentityFullPath(Relation localrel)\r\n+{\r\n+ PlannerInfo *root;\r\n+ Query *query;\r\n+ PlannerGlobal *glob;\r\n+ RangeTblEntry *rte;\r\n+ RelOptInfo *rel;\r\n+ int attno;\r\n+ RangeTblRef *rt;\r\n+ List *joinList;\r\n+ Path *seqScanPath;\r\n```\r\n\r\nI think the part that constructs dummy-planner state can be move to another function\r\nbecause that part is not meaningful for this.\r\nEspecially line 824-846 can. \r\n\r\n\r\n===\r\nFor 032_subscribe_use_index.pl\r\n\r\n05. general\r\n\r\n```\r\n+# insert some initial data within the range 0-1000\r\n+$node_publisher->safe_psql('postgres',\r\n+ \"INSERT INTO test_replica_id_full SELECT i%20 FROM generate_series(0,1000)i;\"\r\n+);\r\n```\r\n\r\nIt seems that the range of initial data seems [0, 19].\r\nSame mistake-candidates are found many place.\r\n\r\n06. general\r\n\r\n```\r\n+# updates 1000 rows\r\n+$node_publisher->safe_psql('postgres',\r\n+ \"UPDATE test_replica_id_full SET x = x + 1 WHERE x = 15;\");\r\n```\r\n\r\nOnly 50 tuples are modified here.\r\nSame mistake-candidates are found many place.\r\n\r\n07. general\r\n\r\n```\r\n+# we check if the index is used or not\r\n+$node_subscriber->poll_query_until(\r\n+ 'postgres', q{select (idx_scan = 200) from pg_stat_all_indexes where indexrelname = 'test_replica_id_full_idx';}\r\n+) or die \"Timed out while waiting for check subscriber tap_sub_rep_full_3 updates 200 rows via index\";\t\r\n```\r\nThe query will be executed until the index scan is finished, but it may be not commented.\r\nHow about changing it to \"we wait until the index used on the subscriber-side.\" or something?\r\nSame comments are found in many place.\r\n\r\n08. 
test related with ANALYZE\r\n\r\n```\r\n+# Testcase start: SUBSCRIPTION CAN UPDATE THE INDEX IT USES AFTER ANALYZE - PARTITIONED TABLE\r\n+# ====================================================================\r\n```\r\n\r\n\"Testcase start:\" should be \"Testcase end:\" here.\r\n\r\n09. general\r\n\r\nIn some tests results are confirmed but in other test they are not.\r\nI think you can make sure results are expected in any case if there are no particular reasons.\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 15 Sep 2022 12:56:26 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Here are some review comments for the latest v10 patch.\n\n(Mostly these are just nitpick wording/comments etc)\n\n======\n\n1. Commit message\n\nIt is often not feasible to use `REPLICA IDENTITY FULL` on the publication\nbecause it leads to full table scan per tuple change on the subscription.\nThis makes `REPLICA IDENTITY FULL` impracticable -- probably other than\nsome small number of use cases.\n\n~\n\nThe \"often not feasible\" part seems repeated by the \"impracticable\" part.\n\nSUGGESTION\nUsing `REPLICA IDENTITY FULL` on the publication leads to a full table\nscan per tuple change on the subscription. This makes `REPLICA\nIDENTITY FULL` impracticable -- probably other than some small number\nof use cases.\n\n~~~\n\n2.\n\nThe Majority of the logic on the subscriber side already exists in\nthe code.\n\n\"Majority\" -> \"majority\"\n\n~~~\n\n3.\n\nThe ones familiar\nwith this part of the code could realize that the sequential scan\ncode on the subscriber already implements the `tuples_equal()`\nfunction.\n\nSUGGESTION\nAnyone familiar with this part of the code might recognize that...\n\n~~~\n\n4.\n\nIn short, the changes on the subscriber is mostly\ncombining parts of (unique) index scan and sequential scan codes.\n\n\"is mostly\" -> \"are mostly\"\n\n~~~\n\n5.\n\n From the performance point of view, there are few things to note.\n\n\"are few\" -> \"are a few\"\n\n======\n\n6. 
src/backend/executor/execReplication.c - build_replindex_scan_key\n\n+static int\n build_replindex_scan_key(ScanKey skey, Relation rel, Relation idxrel,\n TupleTableSlot *searchslot)\n {\n- int attoff;\n+ int index_attoff;\n+ int scankey_attoff = 0;\n\nShould it be called 'skey_attoff' for consistency with the param 'skey'?\n\n~~~\n\n7.\n\n Oid operator;\n Oid opfamily;\n RegProcedure regop;\n- int pkattno = attoff + 1;\n- int mainattno = indkey->values[attoff];\n- Oid optype = get_opclass_input_type(opclass->values[attoff]);\n+ int table_attno = indkey->values[index_attoff];\n+ Oid optype = get_opclass_input_type(opclass->values[index_attoff]);\n\nMaybe the 'optype' should be adjacent to the other Oid opXXX\ndeclarations just to keep them all together?\n\n~~~\n\n8.\n\n+ if (!AttributeNumberIsValid(table_attno))\n+ {\n+ IndexInfo *indexInfo PG_USED_FOR_ASSERTS_ONLY;\n+\n+ /*\n+ * There are two cases to consider. First, if the index is a primary or\n+ * unique key, we cannot have any indexes with expressions. So, at this\n+ * point we are sure that the index we are dealing with is not these.\n+ */\n+ Assert(RelationGetReplicaIndex(rel) != RelationGetRelid(idxrel) &&\n+ RelationGetPrimaryKeyIndex(rel) != RelationGetRelid(idxrel));\n+\n+ /*\n+ * At this point, we are also sure that the index is not consisting\n+ * of only expressions.\n+ */\n+#ifdef USE_ASSERT_CHECKING\n+ indexInfo = BuildIndexInfo(idxrel);\n+ Assert(!IndexOnlyOnExpression(indexInfo));\n+#endif\n\nI was a bit confused by the comment. IIUC the code has already called\nthe FilterOutNotSuitablePathsForReplIdentFull some point prior so all\nthe unwanted indexes are already filtered out. Therefore these\nassertions are just for no reason, other than sanity checking that\nfact, right? 
If my understand is correct perhaps a simpler single\ncomment is possible:\n\nSUGGESTION (or something like this)\nThis attribute is an expression, however\nFilterOutNotSuitablePathsForReplIdentFull was called earlier during\n[...] and the indexes comprising only expressions have already been\neliminated. We sanity check this now. Furthermore, because primary key\nand unique key indexes can't include expressions we also sanity check\nthe index is neither of those kinds.\n\n~~~\n\n9.\n- return hasnulls;\n+ /* We should always use at least one attribute for the index scan */\n+ Assert (scankey_attoff > 0);\n\nSUGGESTION\nThere should always be at least one attribute for the index scan.\n\n~~~\n\n10. src/backend/executor/execReplication.c - RelationFindReplTupleByIndex\n\nScanKeyData skey[INDEX_MAX_KEYS];\nIndexScanDesc scan;\nSnapshotData snap;\nTransactionId xwait;\nRelation idxrel;\nbool found;\nTypeCacheEntry **eq = NULL; /* only used when the index is not unique */\nbool indisunique;\nint scankey_attoff;\n\n10a.\nShould 'scankey_attoff' be called 'skey_attoff' for consistency with\nthe 'skey' array?\n\n~\n\n10b.\nAlso, it might be tidier to declare the 'skey_attoff' adjacent to the 'skey'.\n\n======\n\n11. src/backend/replication/logical/relation.c\n\nFor LogicalRepPartMap, I was wondering if it should keep a small\ncomment to xref back to the long comment which was moved to\nlogicalreplication.h\n\ne.g.\n/* Refer to the LogicalRepPartMapEntry comment in logicalrelation.h */\n\n~~~\n\n12. src/backend/replication/logical/relation.c - logicalrep_partition_open\n\n+ /*\n+ * Finding a usable index is an infrequent task. It occurs when\n+ * an operation is first performed on the relation, or after\n+ * invalidation of the relation cache entry (e.g., such as ANALYZE).\n+ */\n+ entry->usableIndexOid = FindLogicalRepUsableIndex(entry->localrel, remoterel);\n entry->localrelvalid = true;\n\nShould there be a blank line between those assignments? 
(just for\nconsistency with the other code of this patch in a later function that\ndoes exactly the same assignments).\n\n~~~\n\n13. src/backend/replication/logical/relation.c -\nFilterOutNotSuitablePathsForReplIdentFull\n\nNot sure about this function name. Maybe should be something like\n'FilterOutUnsuitablePathsForReplIdentFull', or just\n'SuitablePathsForReplIdentFull'\n\n~~~\n\n14.\n\n+ else\n+ {\n+ Relation indexRelation;\n+ IndexInfo *indexInfo;\n+ bool is_btree;\n+ bool is_partial;\n+ bool is_only_on_expression;\n\nIs that another var that could be renamed 'idxoid' like all the others?\n\n~~~\n\n15. src/backend/replication/logical/relation.c -\nGetCheapestReplicaIdentityFullPath\n\n+ typentry = lookup_type_cache(attr->atttypid,\n+ TYPECACHE_EQ_OPR_FINFO);\n\nSeems unnecessary wrapping.\n\n~~~\n\n15.\n\n+ /*\n+ * Currently it is not possible for planner to pick a\n+ * partial index or indexes only on expressions. We\n+ * still want to be explicit and eliminate such\n+ * paths proactively.\n...\n...\n+ */\n\nThis large comment seems unusually skinny. Needs pg_indent.\n\n~~~\n\n16. src/backend/replication/logical/worker.c - check_relation_updatable\n\n@@ -1753,7 +1738,7 @@ check_relation_updatable(LogicalRepRelMapEntry *rel)\n * We are in error mode so it's fine this is somewhat slow. It's better to\n * give user correct error.\n */\n- if (OidIsValid(GetRelationIdentityOrPK(rel->localrel)))\n+ if (OidIsValid(rel->usableIndexOid))\n\nThe original comment about it being \"somewhat slow\" does not seem\nrelevant anymore because it is no longer calling a function in this\ncondition.\n\n~~~\n\n17. src/backend/replication/logical/worker.c - usable_indexoid_internal\n\n+ relmapentry = &(part_entry->relmapentry);\n\nThe parentheses seem overkill, and code is not written like this\nelsewhere in the same patch.\n\n~~~\n\n18. 
src/backend/replication/logical/worker.c - apply_handle_tuple_routing\n\n@@ -2202,13 +2225,15 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,\n * suitable partition.\n */\n {\n+ LogicalRepRelMapEntry *entry = &part_entry->relmapentry;\n\nI think elsewhere in the patch the same variable is called\n'relmapentry' (which seems a bit better than just 'entry')\n\n======\n\n19. .../subscription/t/032_subscribe_use_index.pl\n\n+# ANALYZING child will change the index used on child_1 and going to\nuse index_on_child_1_b\n+$node_subscriber->safe_psql('postgres', \"ANALYZE child_1\");\n\n19a.\n\"ANALYZING child\" ? Should that be worded differently? There is\nnothing named 'child' that I could see.\n\n~\n\n19b.\n\"and going to use\" ? wording ? \"which will be used for \" ??\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 16 Sep 2022 10:27:30 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Hayato Kuroda,\n\nThanks for the review, please see my reply below:\n\n\n> ===\n> For execRelation.c\n>\n> 01. RelationFindReplTupleByIndex()\n>\n> ```\n> /* Start an index scan. */\n> InitDirtySnapshot(snap);\n> - scan = index_beginscan(rel, idxrel, &snap,\n> -\n> IndexRelationGetNumberOfKeyAttributes(idxrel),\n> - 0);\n>\n> /* Build scan key. */\n> - build_replindex_scan_key(skey, rel, idxrel, searchslot);\n> + scankey_attoff = build_replindex_scan_key(skey, rel, idxrel,\n> searchslot);\n>\n> + scan = index_beginscan(rel, idxrel, &snap, scankey_attoff, 0);\n> ```\n>\n> I think \"/* Start an index scan. */\" should be just above\n> index_beginscan().\n>\n\nmoved there\n\n\n>\n> ===\n> For worker.c\n>\n> 02. sable_indexoid_internal()\n>\n> ```\n> + * Note that if the corresponding relmapentry has InvalidOid\n> usableIndexOid,\n> + * the function returns InvalidOid.\n> + */\n> +static Oid\n> +usable_indexoid_internal(ApplyExecutionData *edata, ResultRelInfo\n> *relinfo)\n> ```\n>\n> \"InvalidOid usableIndexOid\" should be \"invalid usableIndexOid,\"\n>\n\nmakes sense, updated\n\n\n>\n> 03. check_relation_updatable()\n>\n> ```\n> * We are in error mode so it's fine this is somewhat slow. It's\n> better to\n> * give user correct error.\n> */\n> - if (OidIsValid(GetRelationIdentityOrPK(rel->localrel)))\n> + if (OidIsValid(rel->usableIndexOid))\n> {\n> ```\n>\n> Shouldn't we change the above comment to? The check is no longer slow.\n>\n\nHmm, I couldn't realize this comment earlier. So you suggest \"slow\" here\nrefers to the additional function call \"GetRelationIdentityOrPK\"? If so,\nyes I'll update that.\n\n\n>\n> ===\n> For relation.c\n>\n> 04. 
GetCheapestReplicaIdentityFullPath()\n>\n> ```\n> +static Path *\n> +GetCheapestReplicaIdentityFullPath(Relation localrel)\n> +{\n> + PlannerInfo *root;\n> + Query *query;\n> + PlannerGlobal *glob;\n> + RangeTblEntry *rte;\n> + RelOptInfo *rel;\n> + int attno;\n> + RangeTblRef *rt;\n> + List *joinList;\n> + Path *seqScanPath;\n> ```\n>\n> I think the part that constructs dummy-planner state can be move to\n> another function\n> because that part is not meaningful for this.\n> Especially line 824-846 can.\n>\n>\nMakes sense, simplified the function. Though, it is always hard to pick\ngood names for these kinds of helper functions. I\npicked GenerateDummySelectPlannerInfoForRelation(), does that sound good to\nyou as well?\n\n\n>\n> ===\n> For 032_subscribe_use_index.pl\n>\n> 05. general\n>\n> ```\n> +# insert some initial data within the range 0-1000\n> +$node_publisher->safe_psql('postgres',\n> + \"INSERT INTO test_replica_id_full SELECT i%20 FROM\n> generate_series(0,1000)i;\"\n> +);\n> ```\n>\n> It seems that the range of initial data seems [0, 19].\n> Same mistake-candidates are found many place.\n>\n\nAh, several copy & paste errors. Fixed (hopefully) all.\n\n\n>\n> 06. general\n>\n> ```\n> +# updates 1000 rows\n> +$node_publisher->safe_psql('postgres',\n> + \"UPDATE test_replica_id_full SET x = x + 1 WHERE x = 15;\");\n> ```\n>\n> Only 50 tuples are modified here.\n> Same mistake-candidates are found many place.\n>\n\nAlright, yes there were several wrong comments in the tests. I went over\nthe tests once more to fix those and improve comments.\n\n\n>\n> 07. 
general\n>\n> ```\n> +# we check if the index is used or not\n> +$node_subscriber->poll_query_until(\n> + 'postgres', q{select (idx_scan = 200) from pg_stat_all_indexes\n> where indexrelname = 'test_replica_id_full_idx';}\n> +) or die \"Timed out while waiting for check subscriber tap_sub_rep_full_3\n> updates 200 rows via index\";\n> ```\n> The query will be executed until the index scan is finished, but it may be\n> not commented.\n> How about changing it to \"we wait until the index used on the\n> subscriber-side.\" or something?\n> Same comments are found in many place.\n>\n\nMakes sense, updated\n\n\n>\n> 08. test related with ANALYZE\n>\n> ```\n> +# Testcase start: SUBSCRIPTION CAN UPDATE THE INDEX IT USES AFTER ANALYZE\n> - PARTITIONED TABLE\n> +# ====================================================================\n> ```\n>\n> \"Testcase start:\" should be \"Testcase end:\" here.\n>\n\nthanks, fixed\n\n\n>\n> 09. general\n>\n> In some tests results are confirmed but in other test they are not.\n> I think you can make sure results are expected in any case if there are no\n> particular reasons.\n>\n>\nAlright, yes I also don't see a reason not to do that. Added to all cases.\n\n\nI'll attach the patch with the next email as I also want to incorporate the\nother comments. Hope this is not going to be confusing.\n\nThanks,\nOnder",
"msg_date": "Mon, 19 Sep 2022 18:31:55 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Peter,\n\nThanks again for the review, see my comments below:\n\n\n>\n> ======\n>\n> 1. Commit message\n>\n> It is often not feasible to use `REPLICA IDENTITY FULL` on the publication\n> because it leads to full table scan per tuple change on the subscription.\n> This makes `REPLICA IDENTITY FULL` impracticable -- probably other than\n> some small number of use cases.\n>\n> ~\n>\n> The \"often not feasible\" part seems repeated by the \"impracticable\" part.\n>\n\n\n> SUGGESTION\n> Using `REPLICA IDENTITY FULL` on the publication leads to a full table\n> scan per tuple change on the subscription. This makes `REPLICA\n> IDENTITY FULL` impracticable -- probably other than some small number\n> of use cases.\n>\n> ~~~\n>\n\nSure, this is easier to follow, updated.\n\n\n>\n> 2.\n>\n> The Majority of the logic on the subscriber side already exists in\n> the code.\n>\n> \"Majority\" -> \"majority\"\n>\n>\nfixed\n\n\n> ~~~\n>\n> 3.\n>\n> The ones familiar\n> with this part of the code could realize that the sequential scan\n> code on the subscriber already implements the `tuples_equal()`\n> function.\n>\n> SUGGESTION\n> Anyone familiar with this part of the code might recognize that...\n>\n> ~~~\n>\n\nYes, this is better, applied\n\n\n>\n> 4.\n>\n> In short, the changes on the subscriber is mostly\n> combining parts of (unique) index scan and sequential scan codes.\n>\n> \"is mostly\" -> \"are mostly\"\n>\n> ~~~\n>\n>\napplied\n\n\n> 5.\n>\n> From the performance point of view, there are few things to note.\n>\n> \"are few\" -> \"are a few\"\n>\n>\napplied\n\n\n> ======\n>\n> 6. 
src/backend/executor/execReplication.c - build_replindex_scan_key\n>\n> +static int\n> build_replindex_scan_key(ScanKey skey, Relation rel, Relation idxrel,\n> TupleTableSlot *searchslot)\n> {\n> - int attoff;\n> + int index_attoff;\n> + int scankey_attoff = 0;\n>\n> Should it be called 'skey_attoff' for consistency with the param 'skey'?\n>\n>\nThat looks better, updated\n\n\n> ~~~\n>\n> 7.\n>\n> Oid operator;\n> Oid opfamily;\n> RegProcedure regop;\n> - int pkattno = attoff + 1;\n> - int mainattno = indkey->values[attoff];\n> - Oid optype = get_opclass_input_type(opclass->values[attoff]);\n> + int table_attno = indkey->values[index_attoff];\n> + Oid optype = get_opclass_input_type(opclass->values[index_attoff]);\n>\n> Maybe the 'optype' should be adjacent to the other Oid opXXX\n> declarations just to keep them all together?\n>\n\nI do not have any preference on this. Although I do not see such a strong\npattern in the code, I have no objection to doing so. Changed.\n\n~~~\n>\n> 8.\n>\n> + if (!AttributeNumberIsValid(table_attno))\n> + {\n> + IndexInfo *indexInfo PG_USED_FOR_ASSERTS_ONLY;\n> +\n> + /*\n> + * There are two cases to consider. First, if the index is a primary or\n> + * unique key, we cannot have any indexes with expressions. So, at this\n> + * point we are sure that the index we are dealing with is not these.\n> + */\n> + Assert(RelationGetReplicaIndex(rel) != RelationGetRelid(idxrel) &&\n> + RelationGetPrimaryKeyIndex(rel) != RelationGetRelid(idxrel));\n> +\n> + /*\n> + * At this point, we are also sure that the index is not consisting\n> + * of only expressions.\n> + */\n> +#ifdef USE_ASSERT_CHECKING\n> + indexInfo = BuildIndexInfo(idxrel);\n> + Assert(!IndexOnlyOnExpression(indexInfo));\n> +#endif\n>\n> I was a bit confused by the comment. IIUC the code has already called\n> the FilterOutNotSuitablePathsForReplIdentFull at some point prior so all\n> the unwanted indexes are already filtered out. 
Therefore these\n> assertions are just for no reason, other than sanity checking that\n> fact, right? If my understand is correct perhaps a simpler single\n> comment is possible:\n>\n\nYes, these are for sanity check\n\n\n>\n> SUGGESTION (or something like this)\n> This attribute is an expression, however\n> FilterOutNotSuitablePathsForReplIdentFull was called earlier during\n> [...] and the indexes comprising only expressions have already been\n> eliminated. We sanity check this now. Furthermore, because primary key\n> and unique key indexes can't include expressions we also sanity check\n> the index is neither of those kinds.\n>\n> ~~~\n>\n\nI agree that we can improve comments here. I incorporated your suggestion\nas well.\n\n\n>\n> 9.\n> - return hasnulls;\n> + /* We should always use at least one attribute for the index scan */\n> + Assert (scankey_attoff > 0);\n>\n> SUGGESTION\n> There should always be at least one attribute for the index scan.\n>\n\napplied\n\n\n>\n> ~~~\n>\n> 10. src/backend/executor/execReplication.c - RelationFindReplTupleByIndex\n>\n> ScanKeyData skey[INDEX_MAX_KEYS];\n> IndexScanDesc scan;\n> SnapshotData snap;\n> TransactionId xwait;\n> Relation idxrel;\n> bool found;\n> TypeCacheEntry **eq = NULL; /* only used when the index is not unique */\n> bool indisunique;\n> int scankey_attoff;\n>\n> 10a.\n> Should 'scankey_attoff' be called 'skey_attoff' for consistency with\n> the 'skey' array?\n>\n\nYes, it makes sense as you suggested on build_replindex_scan_key\n\n>\n> ~\n>\n> 10b.\n> Also, it might be tidier to declare the 'skey_attoff' adjacent to the\n> 'skey'.\n>\n\nmoved\n\n>\n> ======\n>\n> 11. src/backend/replication/logical/relation.c\n>\n> For LogicalRepPartMap, I was wondering if it should keep a small\n> comment to xref back to the long comment which was moved to\n> logicalreplication.h\n>\n> e.g.\n> /* Refer to the LogicalRepPartMapEntry comment in logicalrelation.h */\n>\n\n Could work, added. 
We already have the xref the other way around\n(LogicalRepPartMapEntry->LogicalRepPartMap)\n\n\n> ~~~\n>\n> 12. src/backend/replication/logical/relation.c - logicalrep_partition_open\n>\n> + /*\n> + * Finding a usable index is an infrequent task. It occurs when\n> + * an operation is first performed on the relation, or after\n> + * invalidation of the relation cache entry (e.g., such as ANALYZE).\n> + */\n> + entry->usableIndexOid = FindLogicalRepUsableIndex(entry->localrel,\n> remoterel);\n> entry->localrelvalid = true;\n>\n> Should there be a blank line between those assignments? (just for\n> consistency with the other code of this patch in a later function that\n> does exactly the same assignments).\n>\n\ndone\n\n\n>\n> ~~~\n>\n> 13. src/backend/replication/logical/relation.c -\n> FilterOutNotSuitablePathsForReplIdentFull\n>\n> Not sure about this function name. Maybe should be something like\n> 'FilterOutUnsuitablePathsForReplIdentFull', or just\n> 'SuitablePathsForReplIdentFull'\n>\n> ~~~\n>\n\nI think I'll go with a slight modification of your\nsuggestion: SuitablePathsForRepIdentFull\n\n>\n> 14.\n>\n> + else\n> + {\n> + Relation indexRelation;\n> + IndexInfo *indexInfo;\n> + bool is_btree;\n> + bool is_partial;\n> + bool is_only_on_expression;\n>\n> Is that another var that could be renamed 'idxoid' like all the others?\n>\n> seems so, updated\n\n\n> ~~~\n>\n> 15. src/backend/replication/logical/relation.c -\n> GetCheapestReplicaIdentityFullPath\n>\n> + typentry = lookup_type_cache(attr->atttypid,\n> + TYPECACHE_EQ_OPR_FINFO);\n>\n> Seems unnecessary wrapping.\n>\n> fixed\n\n\n> ~~~\n>\n> 15.\n>\n> + /*\n> + * Currently it is not possible for planner to pick a\n> + * partial index or indexes only on expressions. We\n> + * still want to be explicit and eliminate such\n> + * paths proactively.\n> ...\n> ...\n> + */\n>\n> This large comment seems unusually skinny. Needs pg_indent.\n>\n>\nOk, it has been a while that I have not run pg_indent. 
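(An aside for anyone skimming the archive: the selection logic discussed in items 13-15 can be sketched in a few lines of Python. The idea is to discard partial and expression-only index paths up front, pick the cheapest surviving path, and fall back to a sequential scan when nothing survives. All of the names below are invented for illustration; this is not the patch code, which works on the PostgreSQL planner's Path structures.)

```python
# Toy sketch only: mirrors the filter-then-pick-cheapest-else-seqscan
# shape discussed in the review. None of these names exist in PostgreSQL.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CandidateIndex:
    name: str
    cost: float
    is_btree: bool = True
    is_partial: bool = False
    only_on_expression: bool = False

def suitable(paths: List[CandidateIndex]) -> List[CandidateIndex]:
    # Keep only plain btree indexes that are neither partial nor
    # defined exclusively on expressions.
    return [p for p in paths
            if p.is_btree and not p.is_partial and not p.only_on_expression]

def pick_cheapest(paths: List[CandidateIndex],
                  seqscan_cost: float) -> Optional[str]:
    # Returns the chosen index name, or None meaning that a sequential
    # scan (the guaranteed fallback) should be used.
    candidates = suitable(paths)
    if not candidates:
        return None
    best = min(candidates, key=lambda p: p.cost)
    return best.name if best.cost < seqscan_cost else None
```

Note that a partial index with the lowest cost still loses to a more expensive plain btree index, because unsuitable paths are filtered out before any costs are compared.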
Now I did, and this\ncomment is fixed as well\n\n\n> ~~~\n>\n> 16. src/backend/replication/logical/worker.c - check_relation_updatable\n>\n> @@ -1753,7 +1738,7 @@ check_relation_updatable(LogicalRepRelMapEntry *rel)\n> * We are in error mode so it's fine this is somewhat slow. It's better to\n> * give user correct error.\n> */\n> - if (OidIsValid(GetRelationIdentityOrPK(rel->localrel)))\n> + if (OidIsValid(rel->usableIndexOid))\n>\n> The original comment about it being \"somewhat slow\" does not seem\n> relevant anymore because it is no longer calling a function in this\n> condition.\n>\n>\nFixed (also a similar comment raised in another review)\n\n\n> ~~~\n>\n> 17. src/backend/replication/logical/worker.c - usable_indexoid_internal\n>\n> + relmapentry = &(part_entry->relmapentry);\n>\n> The parentheses seem overkill, and code is not written like this\n> elsewhere in the same patch.\n>\n\ntrue, no need, removed the parentheses\n\n\n> ~~~\n>\n> 18. src/backend/replication/logical/worker.c - apply_handle_tuple_routing\n>\n> @@ -2202,13 +2225,15 @@ apply_handle_tuple_routing(ApplyExecutionData\n> *edata,\n> * suitable partition.\n> */\n> {\n> + LogicalRepRelMapEntry *entry = &part_entry->relmapentry;\n>\n> I think elsewhere in the patch the same variable is called\n> 'relmapentry' (which seems a bit better than just 'entry')\n>\n>\ntrue, it is used as relmapentry in other place(s), and in this context entry\nis confusing. So, changed to relmapentry.\n\n\n> ======\n>\n> 19. .../subscription/t/032_subscribe_use_index.pl\n>\n> +# ANALYZING child will change the index used on child_1 and going to\n> use index_on_child_1_b\n> +$node_subscriber->safe_psql('postgres', \"ANALYZE child_1\");\n>\n> 19a.\n> \"ANALYZING child\" ? Should that be worded differently? There is\n> nothing named 'child' that I could see.\n>\n>\nDo you mean it should be \"child_1\"? That is the name of the table. 
I\nupdated the comment, let me know if it is still confusing.\n\n~\n>\n> 19b.\n> \"and going to use\" ? wording ? \"which will be used for \" ??\n>\n>\nRewording the comment below, is that better?\n\n# ANALYZING child_1 will change the index used on the table and\n# UPDATE/DELETEs on the subscriber are going to use index_on_child_1_b\n\n\nI also attached v_11 of the patch.\n\nThanks,\nOnder Kalaci",
"msg_date": "Mon, 19 Sep 2022 18:32:03 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Dear Önder,\r\n\r\nThanks for updating the patch! I will check it later.\r\nFor now, I will just reply to your comments.\r\n\r\n> Hmm, I couldn't realize this comment earlier. So you suggest \"slow\" here refers to the additional function call \"GetRelationIdentityOrPK\"? If so, yes I'll update that.\r\n\r\nYes, I meant to say that, because functions will be called like:\r\n\r\nGetRelationIdentityOrPK() -> RelationGetPrimaryKeyIndex() -> RelationGetIndexList() -> ..\r\n\r\nand according to the comments, the last one seems to do the heavy lifting.\r\n\r\n\r\n> Makes sense, simplified the function. Though, it is always hard to pick good names for these kinds of helper functions. I picked GenerateDummySelectPlannerInfoForRelation(), does that sound good to you as well?\r\n\r\nI could not find any better naming than yours. \r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 20 Sep 2022 02:05:05 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "I had a quick look at the latest v11-0001 patch differences from v10.\n\nHere are some initial comments:\n\n======\n\n1. Commit message\n\nIt looks like some small mistake happened. You wrote [1] that my\nprevious review comments about the commit message were fixed, but it\nseems the v11 commit message is unchanged since v10.\n\n======\n\n2. src/backend/replication/logical/relation.c -\nGenerateDummySelectPlannerInfoForRelation\n\n+/*\n+ * This is not a generic function, helper function for\n+ * GetCheapestReplicaIdentityFullPath. The function creates\n+ * a dummy PlannerInfo for the given relationId as if the\n+ * relation is queried with SELECT command.\n+ */\n+static PlannerInfo *\n+GenerateDummySelectPlannerInfoForRelation(Oid relationId)\n\n\"generic function, helper function\" -> \"generic function. It is a\nhelper function\"\n\n\n------\n[1] https://www.postgresql.org/message-id/CACawEhXnTcXBOTofptkgSBOyD81Pohd7MSfFaW0SKo-0oKrCJg%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 20 Sep 2022 12:22:40 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "I've gone through the v11-0001 patch in more detail.\n\nHere are some more review comments (nothing functional I think -\nmostly just wording)\n\n======\n\n1. src/backend/executor/execReplication.c - build_replindex_scan_key\n\n- * This is not generic routine, it expects the idxrel to be replication\n- * identity of a rel and meet all limitations associated with that.\n+ * This is not generic routine, it expects the idxrel to be an index\n+ * that planner would choose if the searchslot includes all the columns\n+ * (e.g., REPLICA IDENTITY FULL on the source).\n */\n-static bool\n+static int\n build_replindex_scan_key(ScanKey skey, Relation rel, Relation idxrel,\n TupleTableSlot *searchslot)\n\n\n(I know this is not caused by your patch but maybe fix it at the same time?)\n\n\"This is not generic routine, it expects...\" -> \"This is not a generic\nroutine - it expects...\"\n\n~~~\n\n2.\n\n+ IndexInfo *indexInfo PG_USED_FOR_ASSERTS_ONLY;\n+\n+ /*\n+ * This attribute is an expression, and\n+ * SuitablePathsForRepIdentFull() was called earlier while the\n+ * index for subscriber is selected. There, the indexes comprising\n+ * *only* expressions have already been eliminated.\n+ *\n+ * We sanity check this now.\n+ */\n+#ifdef USE_ASSERT_CHECKING\n+ indexInfo = BuildIndexInfo(idxrel);\n+ Assert(!IndexOnlyOnExpression(indexInfo));\n+#endif\n\n2a.\n\"while the index for subscriber is selected...\" -> \"when the index for\nthe subscriber was selected...”\n\n~\n\n2b.\nBecause there is only one declaration in this code block you could\nsimplify this a bit if you wanted to.\n\nSUGGESTION\n/*\n * This attribute is an expression, and\n * SuitablePathsForRepIdentFull() was called earlier while the\n * index for subscriber is selected. 
There, the indexes comprising\n * *only* expressions have already been eliminated.\n *\n * We sanity check this now.\n */\n#ifdef USE_ASSERT_CHECKING\nIndexInfo *indexInfo = BuildIndexInfo(idxrel);\nAssert(!IndexOnlyOnExpression(indexInfo));\n#endif\n\n~~~\n\n3. src/backend/executor/execReplication.c - RelationFindReplTupleByIndex\n\n+ /* Start an index scan. */\n+ scan = index_beginscan(rel, idxrel, &snap, skey_attoff, 0);\n retry:\n found = false;\n\nIt might be better to have a blank line before that ‘retry’ label,\nlike in the original code.\n\n======\n\n4. src/backend/replication/logical/relation.c\n\n+/* see LogicalRepPartMapEntry for details in logicalrelation.h */\n static HTAB *LogicalRepPartMap = NULL;\n\nPersonally, I'd word that something like:\n\"/* For LogicalRepPartMap details see LogicalRepPartMapEntry in\nlogicalrelation.h */\"\n\nbut YMMV.\n\n~~~\n\n5. src/backend/replication/logical/relation.c -\nGenerateDummySelectPlannerInfoForRelation\n\n+/*\n+ * This is not a generic function, helper function for\n+ * GetCheapestReplicaIdentityFullPath. The function creates\n+ * a dummy PlannerInfo for the given relationId as if the\n+ * relation is queried with SELECT command.\n+ */\n+static PlannerInfo *\n+GenerateDummySelectPlannerInfoForRelation(Oid relationId)\n\n(mentioned this one in my previous post)\n\n\"This is not a generic function, helper function\" -> \"This is not a\ngeneric function. It is a helper function\"\n\n~~~\n\n6. src/backend/replication/logical/relation.c -\nGetCheapestReplicaIdentityFullPath\n\n+/*\n+ * Generate all the possible paths for the given subscriber relation,\n+ * for the cases that the source relation is replicated via REPLICA\n+ * IDENTITY FULL. 
The function returns the cheapest Path among the\n+ * eligible paths, see SuitablePathsForRepIdentFull().\n+ *\n+ * The function guarantees to return a path, because it adds sequential\n+ * scan path if needed.\n+ *\n+ * The function assumes that all the columns will be provided during\n+ * the execution phase, given that REPLICA IDENTITY FULL guarantees\n+ * that.\n+ */\n+static Path *\n+GetCheapestReplicaIdentityFullPath(Relation localrel)\n\n\n\"for the cases that...\" -> \"for cases where...\"\n\n~~~\n\n7.\n\n+ /*\n+ * Currently it is not possible for planner to pick a partial index or\n+ * indexes only on expressions. We still want to be explicit and eliminate\n+ * such paths proactively.\n\n\"for planner...\" -> \"for the planner...\"\n\n======\n\n8. .../subscription/t/032_subscribe_use_index.pl - general\n\n8a.\n(remove the 'we')\n\"# we wait until...\" -> \"# wait until...\" X many occurrences\n\n~\n\n8b.\n(remove the 'we')\n\"# we show that...\" -> “# show that...\" X many occurrences\n\n~~~\n\n9.\n\nThere is inconsistent wording for some of your test case start/end comments\n\n9a.\ne.g.\nstart: SUBSCRIPTION USES INDEX UPDATEs MULTIPLE ROWS\nend: SUBSCRIPTION USES INDEX MODIFIES MULTIPLE ROWS\n\n~\n\n9b.\ne.g.\nstart: SUBSCRIPTION USES INDEX WITH MULTIPLE COLUMNS\nend: SUBSCRIPTION USES INDEX MODIFIES MULTIPLE ROWS\n\n~~~\n\n10.\n\nI did not really understand the point of having special subscription names\ntap_sub_rep_full_0\ntap_sub_rep_full_2\ntap_sub_rep_full_3\ntap_sub_rep_full_4\netc...\n\nSince you drop/recreate these for each test case can't they just be\ncalled 'tap_sub_rep_full'?\n\n~~~\n\n11. SUBSCRIPTION USES INDEX WITH MULTIPLE COLUMNS\n\n+# updates 200 rows\n+$node_publisher->safe_psql('postgres',\n+ \"DELETE FROM test_replica_id_full WHERE x IN (5, 6);\");\n\nThe comment says update but this is doing delete\n\n\n~~~\n\n12. 
SUBSCRIPTION USES INDEX WITH DROPPED COLUMNS\n\n+# cleanup sub\n+$node_subscriber->safe_psql('postgres',\n+ \"DROP SUBSCRIPTION tap_sub_rep_full_4\");\n\nUnusual wrapping?\n\n~~~\n\n13. SUBSCRIPTION USES INDEX ON PARTITIONED TABLES\n\n+# updates rows and moves between partitions\n+$node_publisher->safe_psql('postgres',\n+ \"DELETE FROM users_table_part WHERE user_id = 1 and value_1 = 1;\");\n+$node_publisher->safe_psql('postgres',\n+ \"DELETE FROM users_table_part WHERE user_id = 12 and value_1 = 12;\");\n\nThe comment says update but SQL says delete\n\n~~~\n\n14. SUBSCRIPTION CAN USE INDEXES WITH EXPRESSIONS AND COLUMNS\n\n+# update 1 row and delete 1 row using index_b, so index_a still has 2 idx_scan\n+$node_subscriber->poll_query_until(\n+ 'postgres', q{select idx_scan=2 from pg_stat_all_indexes where\nindexrelname = 'index_a';}\n+) or die \"Timed out while waiting for check subscriber\ntap_sub_rep_full_0 updates two rows via index scan with index on high\ncardinality column-3\";\n+\n\nThe comment seems misplaced. Doesn't it belong on the lines above this\nwhere the update/delete is being done?\n\n~~~\n\n15. SUBSCRIPTION CAN UPDATE THE INDEX IT USES AFTER ANALYZE - INHERITED TABLE\n\n+# ANALYZING child will change the index used on child_1 and going to\nuse index_on_child_1_b\n+$node_subscriber->safe_psql('postgres', \"ANALYZE child_1\");\n\nShould the comment say 'child_1' instead of child?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 20 Sep 2022 18:25:33 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Peter,\n\nThanks for the quick response.\n\n\n> 1. Commit message\n>\n> It looks like some small mistake happened. You wrote [1] that my\n> previous review comments about the commit message were fixed, but it\n> seems the v11 commit message is unchanged since v10.\n>\n>\nOops, yes you are right, I forgot to push commit message changes. I'll\nincorporate all these suggestions on v12.\n\n\n\n> ======\n>\n> 2. src/backend/replication/logical/relation.c -\n> GenerateDummySelectPlannerInfoForRelation\n>\n> +/*\n> + * This is not a generic function, helper function for\n> + * GetCheapestReplicaIdentityFullPath. The function creates\n> + * a dummy PlannerInfo for the given relationId as if the\n> + * relation is queried with SELECT command.\n> + */\n> +static PlannerInfo *\n> +GenerateDummySelectPlannerInfoForRelation(Oid relationId)\n>\n> \"generic function, helper function\" -> \"generic function. It is a\n> helper function\"\n>\n>\nFixed.\n\nI'll attach the changes in the next email with v12.\n\nThanks,\nOnder",
"msg_date": "Tue, 20 Sep 2022 13:29:29 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Peter,\n\n\n\n>\n> 1. src/backend/executor/execReplication.c - build_replindex_scan_key\n>\n> - * This is not generic routine, it expects the idxrel to be replication\n> - * identity of a rel and meet all limitations associated with that.\n> + * This is not generic routine, it expects the idxrel to be an index\n> + * that planner would choose if the searchslot includes all the columns\n> + * (e.g., REPLICA IDENTITY FULL on the source).\n> */\n> -static bool\n> +static int\n> build_replindex_scan_key(ScanKey skey, Relation rel, Relation idxrel,\n> TupleTableSlot *searchslot)\n>\n>\n> (I know this is not caused by your patch but maybe fix it at the same\n> time?)\n>\n> \"This is not generic routine, it expects...\" -> \"This is not a generic\n> routine - it expects...\"\n>\n>\nFixed\n\n\n>\n> 2.\n>\n> + IndexInfo *indexInfo PG_USED_FOR_ASSERTS_ONLY;\n> +\n> + /*\n> + * This attribute is an expression, and\n> + * SuitablePathsForRepIdentFull() was called earlier while the\n> + * index for subscriber is selected. There, the indexes comprising\n> + * *only* expressions have already been eliminated.\n> + *\n> + * We sanity check this now.\n> + */\n> +#ifdef USE_ASSERT_CHECKING\n> + indexInfo = BuildIndexInfo(idxrel);\n> + Assert(!IndexOnlyOnExpression(indexInfo));\n> +#endif\n>\n> 2a.\n> \"while the index for subscriber is selected...\" -> \"when the index for\n> the subscriber was selected...”\n>\n>\nfixed\n\n\n> ~\n>\n> 2b.\n> Because there is only one declaration in this code block you could\n> simplify this a bit if you wanted to.\n>\n> SUGGESTION\n> /*\n> * This attribute is an expression, and\n> * SuitablePathsForRepIdentFull() was called earlier while the\n> * index for subscriber is selected. 
There, the indexes comprising\n> * *only* expressions have already been eliminated.\n> *\n> * We sanity check this now.\n> */\n> #ifdef USE_ASSERT_CHECKING\n> IndexInfo *indexInfo = BuildIndexInfo(idxrel);\n> Assert(!IndexOnlyOnExpression(indexInfo));\n> #endif\n>\n>\nMakes sense, no reason to declare above\n\n\n> ~~~\n>\n> 3. src/backend/executor/execReplication.c - RelationFindReplTupleByIndex\n>\n> + /* Start an index scan. */\n> + scan = index_beginscan(rel, idxrel, &snap, skey_attoff, 0);\n> retry:\n> found = false;\n>\n> It might be better to have a blank line before that ‘retry’ label,\n> like in the original code.\n>\n\nagreed, fixed\n\n\n>\n> ======\n>\n> 4. src/backend/replication/logical/relation.c\n>\n> +/* see LogicalRepPartMapEntry for details in logicalrelation.h */\n> static HTAB *LogicalRepPartMap = NULL;\n>\n> Personally, I'd word that something like:\n> \"/* For LogicalRepPartMap details see LogicalRepPartMapEntry in\n> logicalrelation.h */\"\n>\n> but YMMV.\n>\n\nI also don't have any strong opinions on that, updated to your suggestion.\n\n\n>\n> ~~~\n>\n> 5. src/backend/replication/logical/relation.c -\n> GenerateDummySelectPlannerInfoForRelation\n>\n> +/*\n> + * This is not a generic function, helper function for\n> + * GetCheapestReplicaIdentityFullPath. The function creates\n> + * a dummy PlannerInfo for the given relationId as if the\n> + * relation is queried with SELECT command.\n> + */\n> +static PlannerInfo *\n> +GenerateDummySelectPlannerInfoForRelation(Oid relationId)\n>\n> (mentioned this one in my previous post)\n>\n> \"This is not a generic function, helper function\" -> \"This is not a\n> generic function. It is a helper function\"\n>\n\nYes, applied.\n\n\n>\n> ~~~\n>\n> 6. src/backend/replication/logical/relation.c -\n> GetCheapestReplicaIdentityFullPath\n>\n> +/*\n> + * Generate all the possible paths for the given subscriber relation,\n> + * for the cases that the source relation is replicated via REPLICA\n> + * IDENTITY FULL. 
The function returns the cheapest Path among the\n> + * eligible paths, see SuitablePathsForRepIdentFull().\n> + *\n> + * The function guarantees to return a path, because it adds sequential\n> + * scan path if needed.\n> + *\n> + * The function assumes that all the columns will be provided during\n> + * the execution phase, given that REPLICA IDENTITY FULL guarantees\n> + * that.\n> + */\n> +static Path *\n> +GetCheapestReplicaIdentityFullPath(Relation localrel)\n>\n>\n> \"for the cases that...\" -> \"for cases where...\"\n>\n>\nsounds good\n\n\n> ~~~\n>\n> 7.\n>\n> + /*\n> + * Currently it is not possible for planner to pick a partial index or\n> + * indexes only on expressions. We still want to be explicit and eliminate\n> + * such paths proactively.\n>\n> \"for planner...\" -> \"for the planner...\"\n>\n\nfixed\n\n\n>\n> ======\n>\n> 8. .../subscription/t/032_subscribe_use_index.pl - general\n>\n> 8a.\n> (remove the 'we')\n> \"# we wait until...\" -> \"# wait until...\" X many occurrences\n>\n> ~\n>\n> 8b.\n> (remove the 'we')\n> \"# we show that...\" -> “# show that...\" X many occurrences\n>\n\nOk, removed all \"we\"s in the test\n\n>\n> ~~~\n>\n> 9.\n>\n> There is inconsistent wording for some of your test case start/end comments\n>\n> 9a.\n> e.g.\n> start: SUBSCRIPTION USES INDEX UPDATEs MULTIPLE ROWS\n> end: SUBSCRIPTION USES INDEX MODIFIES MULTIPLE ROWS\n>\n> ~\n>\n> 9b.\n> e.g.\n> start: SUBSCRIPTION USES INDEX WITH MULTIPLE COLUMNS\n> end: SUBSCRIPTION USES INDEX MODIFIES MULTIPLE ROWS\n>\n>\nthanks, fixed all\n\n\n\n> ~~~\n>\n> 10.\n>\n> I did not really understand the point of having special subscription names\n> tap_sub_rep_full_0\n> tap_sub_rep_full_2\n> tap_sub_rep_full_3\n> tap_sub_rep_full_4\n> etc...\n>\n> Since you drop/recreate these for each test case can't they just be\n> called 'tap_sub_rep_full'?\n>\n>\nThere is no special reason for that, updated all to tap_sub_rep_full.\n\nI think I initially made it in order to distinguish certain error 
messages\nin the tests, but then we already have unique messages regardless of the\nsubscription name.\n\n\n> ~~~\n>\n> 11. SUBSCRIPTION USES INDEX WITH MULTIPLE COLUMNS\n>\n> +# updates 200 rows\n> +$node_publisher->safe_psql('postgres',\n> + \"DELETE FROM test_replica_id_full WHERE x IN (5, 6);\");\n>\n> The comment says update but this is doing delete\n>\n>\nfixed\n\n\n>\n> ~~~\n>\n> 12. SUBSCRIPTION USES INDEX WITH DROPPED COLUMNS\n>\n> +# cleanup sub\n> +$node_subscriber->safe_psql('postgres',\n> + \"DROP SUBSCRIPTION tap_sub_rep_full_4\");\n>\n> Unusual wrapping?\n>\n\nFixed\n\n\n>\n> ~~~\n>\n> 13. SUBSCRIPTION USES INDEX ON PARTITIONED TABLES\n>\n> +# updates rows and moves between partitions\n> +$node_publisher->safe_psql('postgres',\n> + \"DELETE FROM users_table_part WHERE user_id = 1 and value_1 = 1;\");\n> +$node_publisher->safe_psql('postgres',\n> + \"DELETE FROM users_table_part WHERE user_id = 12 and value_1 = 12;\");\n>\n> The comment says update but SQL says delete\n>\n>\nfixed\n\n\n> ~~~\n>\n> 14. SUBSCRIPTION CAN USE INDEXES WITH EXPRESSIONS AND COLUMNS\n>\n> +# update 1 row and delete 1 row using index_b, so index_a still has 2\n> idx_scan\n> +$node_subscriber->poll_query_until(\n> + 'postgres', q{select idx_scan=2 from pg_stat_all_indexes where\n> indexrelname = 'index_a';}\n> +) or die \"Timed out while waiting for check subscriber\n> tap_sub_rep_full_0 updates two rows via index scan with index on high\n> cardinality column-3\";\n> +\n>\n> The comment seems misplaced. Doesn't it belong on the lines above this\n> where the update/delete is being done?\n>\n>\nYes, it seems so. moved\n\n\n> ~~~\n>\n> 15. 
SUBSCRIPTION CAN UPDATE THE INDEX IT USES AFTER ANALYZE - INHERITED\n> TABLE\n>\n> +# ANALYZING child will change the index used on child_1 and going to\n> use index_on_child_1_b\n> +$node_subscriber->safe_psql('postgres', \"ANALYZE child_1\");\n>\n> Should the comment say 'child_1' instead of child?\n>\n> ------\n>\n\nSeems better, changed.\n\nThanks for the reviews, attached v12.\n\nOnder Kalaci",
"msg_date": "Tue, 20 Sep 2022 13:29:34 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Onder,\n\nThanks for addressing all my previous feedback. I checked the latest\nv12-0001, and have no more comments at this time.\n\nOne last thing - do you think there is any need to mention this\nbehaviour in the pgdocs, or is OK just to be a hidden performance\nimprovement?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 21 Sep 2022 10:17:19 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "> One last thing - do you think there is any need to mention this\r\n> behaviour in the pgdocs, or is OK just to be a hidden performance\r\n> improvement?\r\n\r\nFYI - I put my opinion.\r\nWe have following sentence in the logical-replication.sgml:\r\n\r\n```\r\n...\r\nIf the table does not have any suitable key, then it can be set\r\n to replica identity <quote>full</quote>, which means the entire row becomes\r\n the key. This, however, is very inefficient and should only be used as a\r\n fallback if no other solution is possible.\r\n...\r\n```\r\n\r\nHere the word \"very inefficient\" may mean that sequential scans will be executed every time.\r\nI think some descriptions can be added around here.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 21 Sep 2022 02:21:37 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Tues, Sep 20, 2022 at 18:30 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\r\n> Thanks for the reviews, attached v12.\r\n\r\nThanks for your patch. Here is a question and a comment:\r\n\r\n1. In the function GetCheapestReplicaIdentityFullPath.\r\n+\tif (rel->pathlist == NIL)\r\n+\t{\r\n+\t\t/*\r\n+\t\t * A sequential scan could have been dominated by by an index scan\r\n+\t\t * during make_one_rel(). We should always have a sequential scan\r\n+\t\t * before set_cheapest().\r\n+\t\t */\r\n+\t\tPath\t *seqScanPath = create_seqscan_path(root, rel, NULL, 0);\r\n+\r\n+\t\tadd_path(rel, seqScanPath);\r\n+\t}\r\n\r\nThis is a question I'm not sure about:\r\nDo we need this part to add sequential scan?\r\n\r\nI think in our case, the sequential scan seems to have been added by the\r\nfunction make_one_rel (see function set_plain_rel_pathlist). If I am missing\r\nsomething, please let me know. BTW, there is a typo in above comment: `by by`.\r\n\r\n2. In the file execReplication.c.\r\n+#ifdef USE_ASSERT_CHECKING\r\n+#include \"catalog/index.h\"\r\n+#endif\r\n #include \"commands/trigger.h\"\r\n #include \"executor/executor.h\"\r\n #include \"executor/nodeModifyTable.h\"\r\n #include \"nodes/nodeFuncs.h\"\r\n #include \"parser/parse_relation.h\"\r\n #include \"parser/parsetree.h\"\r\n+#ifdef USE_ASSERT_CHECKING\r\n+#include \"replication/logicalrelation.h\"\r\n+#endif\r\n\r\nI think it's fine to only add `logicalrelation.h` here, because `index.h` has\r\nbeen added by `logicalrelation.h`.\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Thu, 22 Sep 2022 03:36:02 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Peter, Kuroda\n\nkuroda.hayato@fujitsu.com <kuroda.hayato@fujitsu.com>, 21 Eyl 2022 Çar,\n04:21 tarihinde şunu yazdı:\n\n> > One last thing - do you think there is any need to mention this\n> > behaviour in the pgdocs, or is OK just to be a hidden performance\n> > improvement?\n>\n> FYI - I put my opinion.\n> We have following sentence in the logical-replication.sgml:\n>\n> ```\n> ...\n> If the table does not have any suitable key, then it can be set\n> to replica identity <quote>full</quote>, which means the entire row\n> becomes\n> the key. This, however, is very inefficient and should only be used as\n> a\n> fallback if no other solution is possible.\n> ...\n> ```\n>\n> Here the word \"very inefficient\" may mean that sequential scans will be\n> executed every time.\n> I think some descriptions can be added around here.\n>\n\nMaking a small edit in that file makes sense. I'll attach v13 in the next\nemail that also includes this change.\n\nAlso, do you think is this a good time for me to mark the patch \"Ready for\ncommitter\" in the commit fest? Not sure when and who should change the\nstate, but it seems I can change. I couldn't find any documentation on how\nthat process should work.\n\nThanks!",
"msg_date": "Thu, 22 Sep 2022 18:13:50 +0200",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hii Wang wei,\n\n>\n> 1. In the function GetCheapestReplicaIdentityFullPath.\n> + if (rel->pathlist == NIL)\n> + {\n> + /*\n> + * A sequential scan could have been dominated by by an\n> index scan\n> + * during make_one_rel(). We should always have a\n> sequential scan\n> + * before set_cheapest().\n> + */\n> + Path *seqScanPath = create_seqscan_path(root, rel,\n> NULL, 0);\n> +\n> + add_path(rel, seqScanPath);\n> + }\n>\n> This is a question I'm not sure about:\n> Do we need this part to add sequential scan?\n>\n> I think in our case, the sequential scan seems to have been added by the\n> function make_one_rel (see function set_plain_rel_pathlist).\n\n\nYes, the sequential scan is added during make_one_rel.\n\n\n> If I am missing\n> something, please let me know. BTW, there is a typo in above comment: `by\n> by`.\n>\n\nAs the comment mentions, the sequential scan could have been dominated &\nremoved by index scan, see add_path():\n\n> *We also remove from the rel's pathlist any old paths that are dominated\n* by new_path --- that is, new_path is cheaper, at least as well ordered,\n* generates no more rows, requires no outer rels not required by the old\n* path, and is no less parallel-safe.\n\nStill, I agree that the comment could be improved, which I pushed.\n\n\n> 2. In the file execReplication.c.\n> +#ifdef USE_ASSERT_CHECKING\n> +#include \"catalog/index.h\"\n> +#endif\n> #include \"commands/trigger.h\"\n> #include \"executor/executor.h\"\n> #include \"executor/nodeModifyTable.h\"\n> #include \"nodes/nodeFuncs.h\"\n> #include \"parser/parse_relation.h\"\n> #include \"parser/parsetree.h\"\n> +#ifdef USE_ASSERT_CHECKING\n> +#include \"replication/logicalrelation.h\"\n> +#endif\n>\n> I think it's fine to only add `logicalrelation.h` here, because `index.h`\n> has\n> been added by `logicalrelation.h`.\n>\n>\nMakes sense, removed thanks.\n\nAttached v13.",
"msg_date": "Thu, 22 Sep 2022 18:13:54 +0200",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Thu, Sep 22, 2022 at 9:44 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n> Also, do you think is this a good time for me to mark the patch \"Ready for committer\" in the commit fest? Not sure when and who should change the state, but it seems I can change. I couldn't find any documentation on how that process should work.\n>\n\nNormally, the reviewers mark it as \"Ready for committer\".\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 26 Sep 2022 17:30:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Dear Önder:\r\n\r\nThank you for updating patch! \r\nYour documentation seems OK, and I could not find any other places to be added\r\n\r\nFollowings are my comments.\r\n\r\n====\r\n01 relation.c - general\r\n\r\nMany files are newly included.\r\nI was not sure but some codes related with planner may be able to move to src/backend/optimizer/plan.\r\nHow do you and any other one think?\r\n\r\n02 relation.c - FindLogicalRepUsableIndex\r\n\r\n```\r\n+/*\r\n+ * Returns an index oid if we can use an index for the apply side. If not,\r\n+ * returns InvalidOid.\r\n+ */\r\n+static Oid\r\n+FindLogicalRepUsableIndex(Relation localrel, LogicalRepRelation *remoterel)\r\n```\r\n\r\nI grepped files, but I cannot find the word \"apply side\". How about \"subscriber\" instead?\r\n\r\n03 relation.c - FindLogicalRepUsableIndex\r\n\r\n```\r\n+ /* Simple case, we already have an identity or pkey */\r\n+ idxoid = GetRelationIdentityOrPK(localrel);\r\n+ if (OidIsValid(idxoid))\r\n+ return idxoid;\r\n+\r\n+ /*\r\n+ * If index scans are disabled, use a sequential scan.\r\n+ *\r\n+ * Note that we still allowed index scans above when there is a primary\r\n+ * key or unique index replica identity, but that is the legacy behaviour\r\n+ * (even when enable_indexscan is false), so we hesitate to move this\r\n+ * enable_indexscan check to be done earlier in this function.\r\n+ */\r\n+ if (!enable_indexscan)\r\n+ return InvalidOid;\r\n```\r\n\r\na. \r\nI think \"identity or pkey\" should be \"replica identity key or primary key\" or \"RI or PK\"\r\n\r\nb. \r\nLater part should be at around GetRelationIdentityOrPK.\r\n\r\n\r\n04 relation.c - FindUsableIndexForReplicaIdentityFull\r\n\r\n```\r\n+ MemoryContext usableIndexContext;\r\n...\r\n+ usableIndexContext = AllocSetContextCreate(CurrentMemoryContext,\r\n+ \"usableIndexContext\",\r\n+ ALLOCSET_DEFAULT_SIZES);\r\n```\r\n\r\nI grepped other sources, and I found that the name like \"tmpcxt\" is used for the temporary MemoryContext.\r\n\r\n05 relation.c - SuitablePathsForRepIdentFull\r\n\r\n```\r\n+ indexRelation = index_open(idxoid, AccessShareLock);\r\n+ indexInfo = BuildIndexInfo(indexRelation);\r\n+ is_btree = (indexInfo->ii_Am == BTREE_AM_OID);\r\n+ is_partial = (indexInfo->ii_Predicate != NIL);\r\n+ is_only_on_expression = IndexOnlyOnExpression(indexInfo);\r\n+ index_close(indexRelation, NoLock);\r\n```\r\n\r\nWhy the index is closed with NoLock? AccessShareLock is acquired, so shouldn't same lock be released?\r\n\r\n\r\n06 relation.c - GetCheapestReplicaIdentityFullPath\r\n\r\nIIUC a query like \"SELECT tbl WHERE attr1 = $1 AND attr2 = $2 ... AND attrN = $N\" is emulated, right?\r\nyou can write explicitly it as comment\r\n\r\n07 relation.c - GetCheapestReplicaIdentityFullPath\r\n\r\n```\r\n+ Path *path = (Path *) lfirst(lc);\r\n+ Oid idxoid = GetIndexOidFromPath(path);\r\n+\r\n+ if (!OidIsValid(idxoid))\r\n+ {\r\n+ /* Unrelated Path, skip */\r\n+ suitableIndexList = lappend(suitableIndexList, path);\r\n+ }\r\n```\r\n\r\nI was not clear about here. IIUC in the function we want to extract \"good\" scan plan and based on that the cheapest one is chosen. \r\nGetIndexOidFromPath() seems to return InvalidOid when the input path is not index scan, so why is it appended to the suitable list?\r\n\r\n\r\n===\r\n08 worker.c - usable_indexoid_internal\r\n\r\nI think this is not \"internal\" function, such name should be used for like \"apply_handle_commit\" - \"apply_handle_commit_internal\", or \"apply_handle_insert\" - \"apply_handle_insert_internal\".\r\nHow about \"get_usable_index\" or something?\r\n\r\n09 worker.c - usable_indexoid_internal\r\n\r\n```\r\n+ Oid targetrelid = targetResultRelInfo->ri_RelationDesc->rd_rel->oid;\r\n+ Oid localrelid = relinfo->ri_RelationDesc->rd_id;\r\n+\r\n+ if (targetrelid != localrelid)\r\n```\r\n\r\nI think these lines are very confusable.\r\nIIUC targetrelid is corresponded to the \"parent\", and localrelid is corresponded to the \"child\", right?\r\nHow about changing name to \"partitionedoid\" and \"leadoid\" or something?\r\n\r\n===\r\n10 032_subscribe_use_index.pl\r\n\r\n```\r\n# create tables pub and sub\r\n$node_publisher->safe_psql('postgres',\r\n\t\"CREATE TABLE test_replica_id_full (x int)\");\r\n$node_publisher->safe_psql('postgres',\r\n\t\"ALTER TABLE test_replica_id_full REPLICA IDENTITY FULL;\");\r\n$node_subscriber->safe_psql('postgres',\r\n\t\"CREATE TABLE test_replica_id_full (x int)\");\r\n$node_subscriber->safe_psql('postgres',\r\n\t\"CREATE INDEX test_replica_id_full_idx ON test_replica_id_full(x)\");\r\n```\r\n\r\nIn many places same table is defined, altered as \"REPLICA IDENTITY FULL\", and index is created.\r\nCould you combine them into function?\r\n\r\n11 032_subscribe_use_index.pl\r\n\r\n```\r\n# wait until the index is used on the subscriber\r\n$node_subscriber->poll_query_until(\r\n\t'postgres', q{select (idx_scan = 1) from pg_stat_all_indexes where indexrelname = 'test_replica_id_full_idx';}\r\n) or die \"Timed out while waiting for check subscriber tap_sub_rep_full_0 updates one row via index\";\r\n```\r\n\r\nIn many places this check is done. Could you combine them into function?\r\n\r\n12 032_subscribe_use_index.pl\r\n\r\n```\r\n# create pub/sub\r\n$node_publisher->safe_psql('postgres',\r\n\t\"CREATE PUBLICATION tap_pub_rep_full FOR TABLE test_replica_id_full\");\r\n$node_subscriber->safe_psql('postgres',\r\n\t\"CREATE SUBSCRIPTION tap_sub_rep_full CONNECTION '$publisher_connstr application_name=$appname' PUBLICATION tap_pub_rep_full\"\r\n);\r\n```\r\n\r\nSame as above\r\n\r\n13 032_subscribe_use_index.pl\r\n\r\n```\r\n# cleanup pub\r\n$node_publisher->safe_psql('postgres', \"DROP PUBLICATION tap_pub_rep_full\");\r\n$node_publisher->safe_psql('postgres', \"DROP TABLE test_replica_id_full\");\r\n# cleanup sub\r\n$node_subscriber->safe_psql('postgres', \"DROP SUBSCRIPTION tap_sub_rep_full\");\r\n$node_subscriber->safe_psql('postgres', \"DROP TABLE test_replica_id_full\");\r\n```\r\n\r\nSame as above\r\n\r\n14 032_subscribe_use_index.pl - SUBSCRIPTION USES INDEX\r\n\r\n```\r\n# make sure that the subscriber has the correct data\r\nmy $result = $node_subscriber->safe_psql('postgres',\r\n\t\"SELECT sum(x) FROM test_replica_id_full\");\r\nis($result, qq(212), 'ensure subscriber has the correct data at the end of the test');\r\n\r\n$node_subscriber->poll_query_until(\r\n\t'postgres', q{select sum(x)=212 AND count(*)=21 AND count(DISTINCT x)=20 FROM test_replica_id_full;}\r\n) or die \"ensure subscriber has the correct data at the end of the test\";\r\n```\r\n\r\nI think first one is not needed.\r\n\r\n\r\n15 032_subscribe_use_index.pl - SUBSCRIPTION USES INDEX UPDATEs MULTIPLE ROWS\r\n\r\n```\r\n# insert some initial data within the range 0-20\r\n$node_publisher->safe_psql('postgres',\r\n\t\"INSERT INTO test_replica_id_full SELECT i%20 FROM generate_series(0,1000)i;\"\r\n);\r\n```\r\n\r\nI think data is within the range 0-19.\r\n(There are some mistakes)\r\n\r\n===\r\n16 test/subscription/meson.build\r\n\r\nYour test 't/032_subscribe_use_index.pl' must be added in the 'tests' for meson build system.\r\n(I checked on my env, and your test works well)\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n\r\n",
"msg_date": "Wed, 28 Sep 2022 05:57:41 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Hayato Kuroda,\n\nThanks for the review!\n\n\n> ====\n> 01 relation.c - general\n>\n> Many files are newly included.\n> I was not sure but some codes related with planner may be able to move to\n> src/backend/optimizer/plan.\n> How do you and any other one think?\n>\n>\nMy thinking on those functions is that they should probably stay\nin src/backend/replication/logical/relation.c. My main motivation is that\nthose functions are so much tailored to the purposes of this file that I\ncannot see any use-case for these functions in any other context.\n\nStill, at some point, I considered maybe doing something similar\nto src/backend/executor/execReplication.c, where I create a new file,\nsay, src/backend/optimizer/plan/planReplication.c or such as you noted. I'm\na bit torn on this.\n\nDoes anyone have any strong opinions for moving to\nsrc/backend/optimizer/plan/planReplication.c? (or another name)\n\n\n> 02 relation.c - FindLogicalRepUsableIndex\n>\n> ```\n> +/*\n> + * Returns an index oid if we can use an index for the apply side. If not,\n> + * returns InvalidOid.\n> + */\n> +static Oid\n> +FindLogicalRepUsableIndex(Relation localrel, LogicalRepRelation\n> *remoterel)\n> ```\n>\n> I grepped files, but I cannot find the word \"apply side\". How about\n> \"subscriber\" instead?\n>\n\nYes, it makes sense. I guess I made up the \"apply side\" as there is the\nconcept of \"apply worker\". But, yes, subscribers sound better, updated.\n\n\n>\n> 03 relation.c - FindLogicalRepUsableIndex\n>\n> ```\n> + /* Simple case, we already have an identity or pkey */\n> + idxoid = GetRelationIdentityOrPK(localrel);\n> + if (OidIsValid(idxoid))\n> + return idxoid;\n> +\n> + /*\n> + * If index scans are disabled, use a sequential scan.\n> + *\n> + * Note that we still allowed index scans above when there is a\n> primary\n> + * key or unique index replica identity, but that is the legacy\n> behaviour\n> + * (even when enable_indexscan is false), so we hesitate to move\n> this\n> + * enable_indexscan check to be done earlier in this function.\n> + */\n> + if (!enable_indexscan)\n> + return InvalidOid;\n> ```\n>\n> a.\n> I think \"identity or pkey\" should be \"replica identity key or primary key\"\n> or \"RI or PK\"\n>\n\nLooking into other places, it seems \"replica identity index\" is favored\nover \"replica identity key\". So, I used that term.\n\nYou can see this pattern in RelationGetReplicaIndex()\n\n\n>\n> b.\n> Later part should be at around GetRelationIdentityOrPK.\n>\n\nHmm, I cannot follow this comment. Can you please clarify?\n\n\n>\n>\n> 04 relation.c - FindUsableIndexForReplicaIdentityFull\n>\n> ```\n> + MemoryContext usableIndexContext;\n> ...\n> + usableIndexContext = AllocSetContextCreate(CurrentMemoryContext,\n> +\n> \"usableIndexContext\",\n> +\n> ALLOCSET_DEFAULT_SIZES);\n> ```\n>\n> I grepped other sources, and I found that the name like \"tmpcxt\" is used\n> for the temporary MemoryContext.\n>\n\nI think there are also several contextes that are named more specifically,\nsuch as new_pdcxt, perTupCxt, anl_context, cluster_context and many others.\n\nSo, I think it is better to have specific names, no?\n\n\n>\n> 05 relation.c - SuitablePathsForRepIdentFull\n>\n> ```\n> + indexRelation = index_open(idxoid,\n> AccessShareLock);\n> + indexInfo = BuildIndexInfo(indexRelation);\n> + is_btree = (indexInfo->ii_Am == BTREE_AM_OID);\n> + is_partial = (indexInfo->ii_Predicate != NIL);\n> + is_only_on_expression =\n> IndexOnlyOnExpression(indexInfo);\n> + index_close(indexRelation, NoLock);\n> ```\n>\n> Why the index is closed with NoLock? AccessShareLock is acquired, so\n> shouldn't same lock be released?\n>\n\nHmm, yes you are right. Keeping the lock seems unnecessary and wrong. It\ncould actually have prevented dropping an index. However, given\nthat RelationFindReplTupleByIndex() also closes this index at the end, the\napply worker releases the lock. Hence, no problem observed.\n\nAnyway, I'm still changing it to releasing the lock.\n\nAlso note that as soon as any index is dropped on the relation, the cache\nis invalidated and suitable indexes are re-calculated. That's why it seems\nfine to release the lock.\n\n\n>\n>\n> 06 relation.c - GetCheapestReplicaIdentityFullPath\n>\n> IIUC a query like \"SELECT tbl WHERE attr1 = $1 AND attr2 = $2 ... AND\n> attrN = $N\" is emulated, right?\n> you can write explicitly it as comment\n>\n>\nThe inlined comment in the function has a similar comment. Is that clear\nenough?\n\n/* * Generate restrictions for all columns in the form of col_1 = $1 AND *\ncol_2 = $2 ... */\n\n\n> 07 relation.c - GetCheapestReplicaIdentityFullPath\n>\n> ```\n> + Path *path = (Path *) lfirst(lc);\n> + Oid idxoid = GetIndexOidFromPath(path);\n> +\n> + if (!OidIsValid(idxoid))\n> + {\n> + /* Unrelated Path, skip */\n> + suitableIndexList = lappend(suitableIndexList,\n> path);\n> + }\n> ```\n>\n> I was not clear about here. IIUC in the function we want to extract \"good\"\n> scan plan and based on that the cheapest one is chosen.\n> GetIndexOidFromPath() seems to return InvalidOid when the input path is\n> not index scan, so why is it appended to the suitable list?\n>\n>\nIt could be a sequential scan that we have fall-back. However, we already\nadd the sequential scan at the end of the function. So, actually you are\nright, there is no need to keep any other paths here. Adjusted the comments.\n\n\n>\n> ===\n> 08 worker.c - usable_indexoid_internal\n>\n> I think this is not \"internal\" function, such name should be used for like\n> \"apply_handle_commit\" - \"apply_handle_commit_internal\", or\n> \"apply_handle_insert\" - \"apply_handle_insert_internal\".\n> How about \"get_usable_index\" or something?\n>\n\nYeah, you are right. I use this function inside functions ending with\n_internal, but this one is clearly not an internal function. I\nused get_usable_indexoid().\n\n\n>\n> 09 worker.c - usable_indexoid_internal\n>\n> ```\n> + Oid targetrelid =\n> targetResultRelInfo->ri_RelationDesc->rd_rel->oid;\n> + Oid localrelid =\n> relinfo->ri_RelationDesc->rd_id;\n> +\n> + if (targetrelid != localrelid)\n> ```\n>\n> I think these lines are very confusable.\n> IIUC targetrelid is corresponded to the \"parent\", and localrelid is\n> corresponded to the \"child\", right?\n> How about changing name to \"partitionedoid\" and \"leadoid\" or something?\n>\n\nWe do not know whether targetrelid is definitely a \"parent\". But, if that\nis a parent, this function fetches the relevant partition's usableIndexOid.\nSo, I'm not convinced that \"parent\" is a good choice.\n\nThough, I agree that we can improve the code a bit. I now\nuse targetrelkind and dropped localrelid to check whether the target is a\npartitioned table. Is this better?\n\n\n\n>\n> ===\n> 10 032_subscribe_use_index.pl\n>\n> ```\n> # create tables pub and sub\n> $node_publisher->safe_psql('postgres',\n> \"CREATE TABLE test_replica_id_full (x int)\");\n> $node_publisher->safe_psql('postgres',\n> \"ALTER TABLE test_replica_id_full REPLICA IDENTITY FULL;\");\n> $node_subscriber->safe_psql('postgres',\n> \"CREATE TABLE test_replica_id_full (x int)\");\n> $node_subscriber->safe_psql('postgres',\n> \"CREATE INDEX test_replica_id_full_idx ON\n> test_replica_id_full(x)\");\n> ```\n>\n> In many places same table is defined, altered as \"REPLICA IDENTITY FULL\",\n> and index is created.\n> Could you combine them into function?\n>\n\nWell, I'm not sure if it is worth the complexity. There are only 4 usages\nof the same table, and these are all pretty simple statements, and all\nother tests seem to have a similar pattern. I have not seen any tests where\nthese simple statements are done in a function even if they are repeated.\nI'd rather keep it so that this doesn't lead to other style discussions?\n\n\n>\n> 11 032_subscribe_use_index.pl\n>\n> ```\n> # wait until the index is used on the subscriber\n> $node_subscriber->poll_query_until(\n> 'postgres', q{select (idx_scan = 1) from pg_stat_all_indexes where\n> indexrelname = 'test_replica_id_full_idx';}\n> ) or die \"Timed out while waiting for check subscriber tap_sub_rep_full_0\n> updates one row via index\";\n> ```\n>\n> In many places this check is done. Could you combine them into function?\n>\n\nI'm a little confused. Isn't that already inside a function (e.g.,\npoll_query_until) ? Can you please clarify this suggestion a bit more?\n\n\n>\n> 12 032_subscribe_use_index.pl\n>\n> ```\n> # create pub/sub\n> $node_publisher->safe_psql('postgres',\n> \"CREATE PUBLICATION tap_pub_rep_full FOR TABLE\n> test_replica_id_full\");\n> $node_subscriber->safe_psql('postgres',\n> \"CREATE SUBSCRIPTION tap_sub_rep_full CONNECTION\n> '$publisher_connstr application_name=$appname' PUBLICATION tap_pub_rep_full\"\n> );\n> ```\n>\n> Same as above\n>\n\nWell, again, I'm not sure if it is worth moving these simple statements to\nfunctions as an improvement here. One might tell that it is better to see\nthe statements explicitly on the test -- which almost all the tests do. I\nwant to avoid introducing some unusual pattern to the tests.\n\n\n>\n> 13 032_subscribe_use_index.pl\n>\n> ```\n> # cleanup pub\n> $node_publisher->safe_psql('postgres', \"DROP PUBLICATION\n> tap_pub_rep_full\");\n> $node_publisher->safe_psql('postgres', \"DROP TABLE test_replica_id_full\");\n> # cleanup sub\n> $node_subscriber->safe_psql('postgres', \"DROP SUBSCRIPTION\n> tap_sub_rep_full\");\n> $node_subscriber->safe_psql('postgres', \"DROP TABLE test_replica_id_full\");\n> ```\n>\n> Same as above\n>\n\nSame as above :)\n\n\n>\n> 14 032_subscribe_use_index.pl - SUBSCRIPTION USES INDEX\n>\n> ```\n> # make sure that the subscriber has the correct data\n> my $result = $node_subscriber->safe_psql('postgres',\n> \"SELECT sum(x) FROM test_replica_id_full\");\n> is($result, qq(212), 'ensure subscriber has the correct data at the end of\n> the test');\n>\n> $node_subscriber->poll_query_until(\n> 'postgres', q{select sum(x)=212 AND count(*)=21 AND count(DISTINCT\n> x)=20 FROM test_replica_id_full;}\n> ) or die \"ensure subscriber has the correct data at the end of the test\";\n> ```\n>\n> I think first one is not needed.\n>\n\nI preferred to keep the second one because *is($result, ..* is needed for\ntests to show the progress while running.\n\n\n>\n>\n> 15 032_subscribe_use_index.pl - SUBSCRIPTION USES INDEX UPDATEs MULTIPLE\n> ROWS\n>\n> ```\n> # insert some initial data within the range 0-20\n> $node_publisher->safe_psql('postgres',\n> \"INSERT INTO test_replica_id_full SELECT i%20 FROM\n> generate_series(0,1000)i;\"\n> );\n> ```\n>\n> I think data is within the range 0-19.\n> (There are some mistakes)\n>\n\nYes, I fixed it all.\n\n\n\n>\n> ===\n> 16 test/subscription/meson.build\n>\n> Your test 't/032_subscribe_use_index.pl' must be added in the 'tests' for\n> meson build system.\n> (I checked on my env, and your test works well)\n>\n>\nOh, I didn't know about this, thanks!\n\nAttached v14.\n\nThanks,\nOnder",
"msg_date": "Thu, 29 Sep 2022 19:08:52 +0200",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Dear Önder,\r\n\r\nThank you for updating the patch! At first I replied to your comments.\r\n\r\n> My thinking on those functions is that they should probably stay\r\n> in src/backend/replication/logical/relation.c. My main motivation is that\r\n> those functions are so much tailored to the purposes of this file that I\r\n> cannot see any use-case for these functions in any other context.\r\n\r\nI was not sure what should be, but I agreed that functions will be not used from other parts.\r\n\r\n> Hmm, I cannot follow this comment. Can you please clarify?\r\n\r\nIn your patch:\r\n\r\n```\r\n+ /* Simple case, we already have a primary key or a replica identity index */\r\n+ idxoid = GetRelationIdentityOrPK(localrel);\r\n+ if (OidIsValid(idxoid))\r\n+ return idxoid;\r\n+\r\n+ /*\r\n+ * If index scans are disabled, use a sequential scan.\r\n+ *\r\n+ * Note that we still allowed index scans above when there is a primary\r\n+ * key or unique index replica identity, but that is the legacy behaviour\r\n+ * (even when enable_indexscan is false), so we hesitate to move this\r\n+ * enable_indexscan check to be done earlier in this function.\r\n+ */ \r\n```\r\n\r\nAnd the paragraph \" Note that we...\" should be at above of GetRelationIdentityOrPK().\r\nFuture readers will read the function from top to bottom,\r\nand when they read around GetRelationIdentityOrPK() they may be confused.\r\n\r\n> So, I think it is better to have specific names, no?\r\n\r\nOK.\r\n\r\n> The inlined comment in the function has a similar comment. Is that clear\r\n> enough?\r\n> /* * Generate restrictions for all columns in the form of col_1 = $1 AND *\r\n> col_2 = $2 ... */\r\n\r\nActually I missed it, but I still think that whole of emulated SQL should be clarified. \r\n\r\n> Though, I agree that we can improve the code a bit. I now\r\n> use targetrelkind and dropped localrelid to check whether the target is a\r\n> partitioned table. Is this better?\r\n\r\nGreat improvement. Genius!\r\n\r\n> Well, I'm not sure if it is worth the complexity. There are only 4 usages\r\n> of the same table, and these are all pretty simple statements, and all\r\n> other tests seem to have a similar pattern. I have not seen any tests where\r\n> these simple statements are done in a function even if they are repeated.\r\n> I'd rather keep it so that this doesn't lead to other style discussions?\r\n\r\nIf other tests do not combine such parts, it's OK.\r\nMy motivation of these comments were to reduce the number of row for the test code.\r\n\r\n> Oh, I didn't know about this, thanks!\r\n\r\nNow meson test system do your test. OK.\r\n\r\n\r\nAnd followings are the comments for v14. They are mainly about comments.\r\n\r\n===\r\n01. relation.c - logicalrep_rel_open\r\n\r\n```\r\n+ /*\r\n+ * Finding a usable index is an infrequent task. It occurs when an\r\n+ * operation is first performed on the relation, or after invalidation\r\n+ * of the relation cache entry (e.g., such as ANALYZE).\r\n+ */\r\n+ entry->usableIndexOid = FindLogicalRepUsableIndex(entry->localrel, remoterel);\r\n```\r\n\r\nI thought you can mention CREATE INDEX in the comment.\r\n\r\nAccording to your analysis [1] the relation cache will be invalidated if users do CREATE INDEX\r\nAt that time the hash entry will be removed (logicalrep_relmap_invalidate_cb) and \"usable\" index\r\nwill be checked again.\r\n\r\n~~~\r\n02. relation.c - logicalrep_partition_open\r\n\r\n```\r\n+ /*\r\n+ * Finding a usable index is an infrequent task. It occurs when an\r\n+ * operation is first performed on the relation, or after invalidation of\r\n+ * the relation cache entry (e.g., such as ANALYZE).\r\n+ */\r\n+ entry->usableIndexOid = FindLogicalRepUsableIndex(partrel, remoterel);\r\n+\r\n```\r\n\r\nSame as above\r\n\r\n~~~\r\n03. relation.c - GetIndexOidFromPath\r\n\r\n```\r\n+ if (path->pathtype == T_IndexScan || path->pathtype == T_IndexOnlyScan)\r\n+ {\r\n+ IndexPath *index_sc = (IndexPath *) path;\r\n+\r\n+ return index_sc->indexinfo->indexoid;\r\n+ }\r\n```\r\n\r\nI thought Assert(OidIsValid(indexoid)) may be added here. Or is it quite trivial?\r\n\r\n~~~\r\n04. relation.c - IndexOnlyOnExpression\r\n\r\nThis method just returns \"yes\" or \"no\", so the name of method should be start \"Has\" or \"Is\".\r\n\r\n~~~\r\n05. relation.c - SuitablePathsForRepIdentFull\r\n\r\n```\r\n+/*\r\n+ * Iterates over the input path list and returns another\r\n+ * path list that includes index [only] scans where paths\r\n+ * with non-btree indexes, partial indexes or\r\n+ * indexes on only expressions have been removed.\r\n+ */\r\n```\r\n\r\nThese lines seems to be around 60 columns. Could you expand around 80?\r\n\r\n~~~\r\n06. relation.c - SuitablePathsForRepIdentFull\r\n\r\n```\r\n+ Relation indexRelation;\r\n+ IndexInfo *indexInfo;\r\n+ bool is_btree;\r\n+ bool is_partial;\r\n+ bool is_only_on_expression;\r\n+\r\n+ indexRelation = index_open(idxoid, AccessShareLock);\r\n+ indexInfo = BuildIndexInfo(indexRelation);\r\n+ is_btree = (indexInfo->ii_Am == BTREE_AM_OID);\r\n+ is_partial = (indexInfo->ii_Predicate != NIL);\r\n+ is_only_on_expression = IndexOnlyOnExpression(indexInfo);\r\n+ index_close(indexRelation, AccessShareLock);\r\n+\r\n+ if (is_btree && !is_partial && !is_only_on_expression)\r\n+ suitableIndexList = lappend(suitableIndexList, path);\r\n```\r\n\r\nPlease add a comment like \"eliminating not suitable path\" or something.\r\n\r\n~~~\r\n07. relation.c - GenerateDummySelectPlannerInfoForRelation\r\n\r\n```\r\n+/*\r\n+ * This is not a generic function. It is a helper function\r\n+ * for GetCheapestReplicaIdentityFullPath. The function\r\n+ * creates a dummy PlannerInfo for the given relationId\r\n+ * as if the relation is queried with SELECT command.\r\n+ */\r\n```\r\n\r\nThese lines seems to be around 60 columns. Could you expand around 80?\r\n\r\n~~~\r\n08. relation.c - FindLogicalRepUsableIndex\r\n\r\n```\r\n+/*\r\n+ * Returns an index oid if we can use an index for subscriber . If not,\r\n+ * returns InvalidOid.\r\n+ */\r\n```\r\n\r\n\"subscriber .\" should be \"subscriber.\", blank is not needed.\r\n\r\n~~~\r\n09. worker.c - get_usable_indexoid\r\n\r\n```\r\n+ Assert(targetResultRelInfo->ri_RelationDesc->rd_rel->relkind ==\r\n+ RELKIND_PARTITIONED_TABLE);\r\n```\r\n\r\nI thought this assertion seems to be not needed, because this is completely same as the condition of if-statement.\r\n\r\n[1] https://www.postgresql.org/message-id/CACawEhXgP_Kj_1iyNAp16MYos4Anrtz%2BOZVtj2z-QOPGdPCt_A%40mail.gmail.com\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 6 Oct 2022 01:09:47 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Kuroda Hayato,\n\n\n> In your patch:\n>\n> ```\n> + /* Simple case, we already have a primary key or a replica\n> identity index */\n> + idxoid = GetRelationIdentityOrPK(localrel);\n> + if (OidIsValid(idxoid))\n> + return idxoid;\n> +\n> + /*\n> + * If index scans are disabled, use a sequential scan.\n> + *\n> + * Note that we still allowed index scans above when there is a\n> primary\n> + * key or unique index replica identity, but that is the legacy\n> behaviour\n> + * (even when enable_indexscan is false), so we hesitate to move\n> this\n> + * enable_indexscan check to be done earlier in this function.\n> + */\n> ```\n>\n> And the paragraph \" Note that we...\" should be at above of\n> GetRelationIdentityOrPK().\n> Future readers will read the function from top to bottom,\n> and when they read around GetRelationIdentityOrPK() they may be confused.\n>\n>\nAh, makes sense, now I applied your feedback (with some different wording).\n\n\n> The inlined comment in the function has a similar comment. Is that clear\n> > enough?\n> > /* * Generate restrictions for all columns in the form of col_1 = $1 AND\n> *\n> > col_2 = $2 ... */\n>\n> Actually I missed it, but I still think that whole of emulated SQL should\n> be clarified.\n\n\nAlright, it makes sense. I added the emulated SQL to the function comment\nof GetCheapestReplicaIdentityFullPath.\n\n\n> And followings are the comments for v14. They are mainly about comments.\n>\n> ===\n> 01. relation.c - logicalrep_rel_open\n>\n> ```\n> + /*\n> + * Finding a usable index is an infrequent task. It occurs\n> when an\n> + * operation is first performed on the relation, or after\n> invalidation\n> + * of the relation cache entry (e.g., such as ANALYZE).\n> + */\n> + entry->usableIndexOid =\n> FindLogicalRepUsableIndex(entry->localrel, remoterel);\n> ```\n>\n> I thought you can mention CREATE INDEX in the comment.\n>\n> According to your analysis [1] the relation cache will be invalidated if\n> users do CREATE INDEX\n> At that time the hash entry will be removed\n> (logicalrep_relmap_invalidate_cb) and \"usable\" index\n> will be checked again.\n>\n\nYes, that is right. I think it makes sense to mention that as well. In\nfact, I also decided to add such a test.\n\nI realized that all tests use ANALYZE for re-calculation of the index. Now,\nI added an explicit test that uses CREATE/DROP index to re-calculate the\nindex.\n\nsee # Testcase start: SUBSCRIPTION RE-CALCULATES INDEX AFTER CREATE/DROP\nINDEX.\n\n\n\n> ~~~\n> 02. relation.c - logicalrep_partition_open\n>\n> ```\n> + /*\n> + * Finding a usable index is an infrequent task. It occurs when an\n> + * operation is first performed on the relation, or after\n> invalidation of\n> + * the relation cache entry (e.g., such as ANALYZE).\n> + */\n> + entry->usableIndexOid = FindLogicalRepUsableIndex(partrel,\n> remoterel);\n> +\n> ```\n>\n> Same as above\n>\n>\ndone\n\n\n> ~~~\n> 03. relation.c - GetIndexOidFromPath\n>\n> ```\n> + if (path->pathtype == T_IndexScan || path->pathtype ==\n> T_IndexOnlyScan)\n> + {\n> + IndexPath *index_sc = (IndexPath *) path;\n> +\n> + return index_sc->indexinfo->indexoid;\n> + }\n> ```\n>\n> I thought Assert(OidIsValid(indexoid)) may be added here. Or is it quite\n> trivial?\n>\n\nLooking at the PG code, I couldn't see any place that asserts the\ninformation. That seems like fundamental information that is never invalid.\n\nBtw, even if it returns InvalidOid for some reason, we'd not be crashing.\nOnly not able to use any indexes, fall back to seq. scan.\n\n\n>\n> ~~~\n> 04. relation.c - IndexOnlyOnExpression\n>\n> This method just returns \"yes\" or \"no\", so the name of method should be\n> start \"Has\" or \"Is\".\n>\n> Yes, it seems like that is a common convention.\n\n\n> ~~~\n> 05. relation.c - SuitablePathsForRepIdentFull\n>\n> ```\n> +/*\n> + * Iterates over the input path list and returns another\n> + * path list that includes index [only] scans where paths\n> + * with non-btree indexes, partial indexes or\n> + * indexes on only expressions have been removed.\n> + */\n> ```\n>\n> These lines seems to be around 60 columns. Could you expand around 80?\n>\n\ndone\n\n\n>\n> ~~~\n> 06. relation.c - SuitablePathsForRepIdentFull\n>\n> ```\n> + Relation indexRelation;\n> + IndexInfo *indexInfo;\n> + bool is_btree;\n> + bool is_partial;\n> + bool is_only_on_expression;\n> +\n> + indexRelation = index_open(idxoid,\n> AccessShareLock);\n> + indexInfo = BuildIndexInfo(indexRelation);\n> + is_btree = (indexInfo->ii_Am == BTREE_AM_OID);\n> + is_partial = (indexInfo->ii_Predicate != NIL);\n> + is_only_on_expression =\n> IndexOnlyOnExpression(indexInfo);\n> + index_close(indexRelation, AccessShareLock);\n> +\n> + if (is_btree && !is_partial &&\n> !is_only_on_expression)\n> + suitableIndexList =\n> lappend(suitableIndexList, path);\n> ```\n>\n> Please add a comment like \"eliminating not suitable path\" or something.\n>\n\ndone\n\n\n>\n> ~~~\n> 07. relation.c - GenerateDummySelectPlannerInfoForRelation\n>\n> ```\n> +/*\n> + * This is not a generic function. It is a helper function\n> + * for GetCheapestReplicaIdentityFullPath. The function\n> + * creates a dummy PlannerInfo for the given relationId\n> + * as if the relation is queried with SELECT command.\n> + */\n> ```\n>\n> These lines seems to be around 60 columns. Could you expand around 80?\n>\n\ndone\n\n\n>\n> ~~~\n> 08. relation.c - FindLogicalRepUsableIndex\n>\n> ```\n> +/*\n> + * Returns an index oid if we can use an index for subscriber . 
If not,\n> + * returns InvalidOid.\n> + */\n> ```\n>\n> \"subscriber .\" should be \"subscriber.\", blank is not needed.\n>\n\nfixed\n\n\n>\n> ~~~\n> 09. worker.c - get_usable_indexoid\n>\n> ```\n> +\n> Assert(targetResultRelInfo->ri_RelationDesc->rd_rel->relkind ==\n> + RELKIND_PARTITIONED_TABLE);\n> ```\n>\n> I thought this assertion seems to be not needed, because this is\n> completely same as the condition of if-statement.\n>\n\nYes, the refactor we made in the previous iteration made this assertion\nobsolete as you noted.\n\nAttached v15, thanks for the reviews.\n\nThanks,\nOnder KALACI",
"msg_date": "Fri, 7 Oct 2022 13:54:00 +0200",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Dear Önder,\r\n\r\nThanks for updating the patch! I checked yours and almost good.\r\nFollowings are just cosmetic comments.\r\n\r\n===\r\n01. relation.c - GetCheapestReplicaIdentityFullPath\r\n\r\n```\r\n\t * The reason that the planner would not pick partial indexes and indexes\r\n\t * with only expressions based on the way currently baserestrictinfos are\r\n\t * formed (e.g., col_1 = $1 ... AND col_N = $2).\r\n```\r\n\r\nIs \"col_N = $2\" a typo? I think it should be \"col_N = $N\" or \"attr1 = $1 ... AND attrN = $N\".\r\n\r\n===\r\n02. 032_subscribe_use_index.pl\r\n\r\nIf a table has a primary key on the subscriber, it will be used even if enable_indexscan is false(legacy behavior).\r\nShould we test it?\r\n\r\n~~~\r\n03. 032_subscribe_use_index.pl - SUBSCRIPTION RE-CALCULATES INDEX AFTER CREATE/DROP INDEX\r\n\r\nI think this test seems to be not trivial, so could you write down the motivation?\r\n\r\n~~~\r\n04. 032_subscribe_use_index.pl - SUBSCRIPTION RE-CALCULATES INDEX AFTER CREATE/DROP INDEX\r\n\r\n```\r\n# wait until the index is created\r\n$node_subscriber->poll_query_until(\r\n\t'postgres', q{select count(*)=1 from pg_stat_all_indexes where indexrelname = 'test_replica_id_full_idx';}\r\n) or die \"Timed out while waiting for check subscriber tap_sub_rep_full_0 updates one row via index\";\r\n```\r\n\r\nCREATE INDEX is a synchronous behavior, right? If so we don't have to wait here.\r\n...And the comment in case of die may be wrong.\r\n(There are some cases like this)\r\n\r\n~~~\r\n05. 032_subscribe_use_index.pl - SUBSCRIPTION USES INDEX UPDATEs MULTIPLE ROWS\r\n\r\n```\r\n# Testcase start: SUBSCRIPTION USES INDEX UPDATEs MULTIPLE ROWS\r\n#\r\n# Basic test where the subscriber uses index\r\n# and touches 50 rows with UPDATE\r\n```\r\n\r\n\"touches 50 rows with UPDATE\" -> \"updates 50 rows\", per other tests.\r\n\r\n~~~\r\n06. 
032_subscribe_use_index.pl - SUBSCRIPTION CAN UPDATE THE INDEX IT USES AFTER ANALYZE\r\n\r\nI think this test seems to be not trivial, so could you write down the motivation?\r\n(Same as Re-calclate)\r\n\r\n~~~\r\n07. 032_subscribe_use_index.pl - SUBSCRIPTION CAN UPDATE THE INDEX IT USES AFTER ANALYZE\r\n\r\n```\r\n# show that index_b is not used\r\n$node_subscriber->poll_query_until(\r\n\t'postgres', q{select idx_scan=0 from pg_stat_all_indexes where indexrelname = 'index_b';}\r\n) or die \"Timed out while waiting for check subscriber tap_sub_rep_full updates two rows via index scan with index on high cardinality column-2\";\r\n```\r\n\r\nI think we don't have to wait here, is() should be used instead. \r\npoll_query_until() should be used only when idx_scan>0 is checked.\r\n(There are some cases like this)\r\n\r\n~~~\r\n08. 032_subscribe_use_index.pl - SUBSCRIPTION USES INDEX ON PARTITIONED TABLES\r\n\r\n```\r\n# make sure that the subscriber has the correct data\r\n$node_subscriber->poll_query_until(\r\n\t'postgres', q{select sum(user_id+value_1+value_2)=550070 AND count(DISTINCT(user_id,value_1, value_2))=981 from users_table_part;}\r\n) or die \"ensure subscriber has the correct data at the end of the test\";\r\n```\r\n\r\nI think we can replace it to wait_for_catchup() and is()...\r\nMoreover, we don't have to wait here because in above line we wait until the index is used on the subscriber.\r\n(There are some cases like this)\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 11 Oct 2022 03:54:01 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Kuroda Hayato,\n\n\n> ===\n> 01. relation.c - GetCheapestReplicaIdentityFullPath\n>\n> ```\n> * The reason that the planner would not pick partial indexes and\n> indexes\n> * with only expressions based on the way currently\n> baserestrictinfos are\n> * formed (e.g., col_1 = $1 ... AND col_N = $2).\n> ```\n>\n> Is \"col_N = $2\" a typo? I think it should be \"col_N = $N\" or \"attr1 = $1\n> ... AND attrN = $N\".\n>\n>\nYes, it is a typo, fixed now.\n\n\n> ===\n> 02. 032_subscribe_use_index.pl\n>\n> If a table has a primary key on the subscriber, it will be used even if\n> enable_indexscan is false(legacy behavior).\n> Should we test it?\n>\n>\nYes, good idea. I added two tests, one test that we cannot use regular\nindexes when index scan is disabled, and another one that we use replica\nidentity index when index scan is disabled. This is useful to make sure if\nsomeone changes the behavior can see the impact.\n\n\n> ~~~\n> 03. 032_subscribe_use_index.pl - SUBSCRIPTION RE-CALCULATES INDEX AFTER\n> CREATE/DROP INDEX\n>\n> I think this test seems to be not trivial, so could you write down the\n> motivation?\n>\n\nmakes sense, done\n\n\n>\n> ~~~\n> 04. 032_subscribe_use_index.pl - SUBSCRIPTION RE-CALCULATES INDEX AFTER\n> CREATE/DROP INDEX\n>\n> ```\n> # wait until the index is created\n> $node_subscriber->poll_query_until(\n> 'postgres', q{select count(*)=1 from pg_stat_all_indexes where\n> indexrelname = 'test_replica_id_full_idx';}\n> ) or die \"Timed out while waiting for check subscriber tap_sub_rep_full_0\n> updates one row via index\";\n> ```\n>\n> CREATE INDEX is a synchronous behavior, right? If so we don't have to wait\n> here.\n> ...And the comment in case of die may be wrong.\n> (There are some cases like this)\n>\n\nIt is not about CREATE INDEX being async. It is about pg_stat_all_indexes\nbeing async. 
If we do not wait, the tests become flaky, because sometimes\nthe update has not been reflected in the view immediately.\n\nThis is explained here: PostgreSQL: Documentation: 14: 28.2. The Statistics\nCollector <https://www.postgresql.org/docs/current/monitoring-stats.html>\n\n*When using the statistics to monitor collected data, it is important to\n> realize that the information does not update instantaneously. Each\n> individual server process transmits new statistical counts to the collector\n> just before going idle; so a query or transaction still in progress does\n> not affect the displayed totals. Also, the collector itself emits a new\n> report at most once per PGSTAT_STAT_INTERVAL milliseconds (500 ms unless\n> altered while building the server). So the displayed information lags\n> behind actual activity. However, current-query information collected by\n> track_activities is always up-to-date.*\n>\n\n\n\n>\n> ~~~\n> 05. 032_subscribe_use_index.pl - SUBSCRIPTION USES INDEX UPDATEs MULTIPLE\n> ROWS\n>\n> ```\n> # Testcase start: SUBSCRIPTION USES INDEX UPDATEs MULTIPLE ROWS\n> #\n> # Basic test where the subscriber uses index\n> # and touches 50 rows with UPDATE\n> ```\n>\n> \"touches 50 rows with UPDATE\" -> \"updates 50 rows\", per other tests.\n>\n> fixed\n\n\n> ~~~\n> 06. 032_subscribe_use_index.pl - SUBSCRIPTION CAN UPDATE THE INDEX IT\n> USES AFTER ANALYZE\n>\n> I think this test seems to be not trivial, so could you write down the\n> motivation?\n> (Same as Re-calclate)\n>\n\nsure, done\n\n\n>\n> ~~~\n> 07. 
032_subscribe_use_index.pl - SUBSCRIPTION CAN UPDATE THE INDEX IT\n> USES AFTER ANALYZE\n>\n> ```\n> # show that index_b is not used\n> $node_subscriber->poll_query_until(\n> 'postgres', q{select idx_scan=0 from pg_stat_all_indexes where\n> indexrelname = 'index_b';}\n> ) or die \"Timed out while waiting for check subscriber tap_sub_rep_full\n> updates two rows via index scan with index on high cardinality column-2\";\n> ```\n>\n> I think we don't have to wait here, is() should be used instead.\n> poll_query_until() should be used only when idx_scan>0 is checked.\n> (There are some cases like this)\n>\n\nYes, makes sense\n\n\n>\n> ~~~\n> 08. 032_subscribe_use_index.pl - SUBSCRIPTION USES INDEX ON PARTITIONED\n> TABLES\n>\n> ```\n> # make sure that the subscriber has the correct data\n> $node_subscriber->poll_query_until(\n> 'postgres', q{select sum(user_id+value_1+value_2)=550070 AND\n> count(DISTINCT(user_id,value_1, value_2))=981 from users_table_part;}\n> ) or die \"ensure subscriber has the correct data at the end of the test\";\n> ```\n>\n>\nAh, for this case, we already have is() checks for the same results, this\nis just a left-over from the earlier iterations\n\n\n> I think we can replace it to wait_for_catchup() and is()...\n> Moreover, we don't have to wait here because in above line we wait until\n> the index is used on the subscriber.\n> (There are some cases like this)\n>\n\nFixed a few more such cases.\n\nThanks for the review! Attached v16.\n\nOnder KALACI",
"msg_date": "Tue, 11 Oct 2022 14:44:06 +0200",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Dear Önder,\r\n\r\nThank you for updating the patch!\r\n\r\n> It is not about CREATE INDEX being async. It is about pg_stat_all_indexes\r\n> being async. If we do not wait, the tests become flaky, because sometimes\r\n> the update has not been reflected in the view immediately.\r\n\r\nMake sense, I forgot how stats collector works...\r\n\r\nFollowings are comments for v16. Only for test codes.\r\n\r\n~~~\r\n01. 032_subscribe_use_index.pl - SUBSCRIPTION CAN UPDATE THE INDEX IT USES AFTER ANALYZE\r\n\r\n```\r\n# show that index_b is not used\r\n$node_subscriber->poll_query_until(\r\n\t'postgres', q{select idx_scan=0 from pg_stat_all_indexes where indexrelname = 'index_b';}\r\n) or die \"Timed out while waiting for check subscriber tap_sub_rep_full updates two rows via index scan with index on high cardinality column-2\";\r\n```\r\n\r\npoll_query_until() is still remained here, it should be replaced to is().\r\n\r\n\r\n02. 032_subscribe_use_index.pl - SUBSCRIPTION BEHAVIOR WITH ENABLE_INDEXSCAN\r\n\r\n```\r\n# show that the unique index on replica identity is used even when enable_indexscan=false\r\n$result = $node_subscriber->safe_psql('postgres',\r\n\t\"select idx_scan from pg_stat_all_indexes where indexrelname = 'test_replica_id_full_idx'\");\r\nis($result, qq(0), 'ensure subscriber has not used index with enable_indexscan=false');\r\n```\r\n\r\nIs the comment wrong? 
The index test_replica_id_full_idx is not used here.\r\n\r\n03. 032_subscribe_use_index.pl - SUBSCRIPTION BEHAVIOR WITH ENABLE_INDEXSCAN\r\n\r\n```\r\n$node_publisher->safe_psql('postgres',\r\n\t\"ALTER TABLE test_replica_id_full REPLICA IDENTITY USING INDEX test_replica_id_full_unique;\");\r\n```\r\n\r\nI was not sure why ALTER TABLE REPLICA IDENTITY USING INDEX was done on the publisher side.\r\nIIUC this feature works when REPLICA IDENTITY FULL is specified on a publisher,\r\nso it might not be altered here. If so, an index does not have to define on the publisher too.\r\n\r\n04. 
032_subscribe_use_index.pl - SUBSCRIPTION BEHAVIOR WITH ENABLE_INDEXSCAN\r\n\r\n```\r\n$node_subscriber->poll_query_until(\r\n\t'postgres', q{select (idx_scan=1) from pg_stat_all_indexes where indexrelname = 'test_replica_id_full_unique'}\r\n) or die \"Timed out while waiting ensuring subscriber used unique index as replica identity even with enable_indexscan=false'\";\r\n```\r\n\r\n03 comment should be added here.\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 12 Oct 2022 04:01:21 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi,\n\n\n\n\n> ~~~\n> 01. 032_subscribe_use_index.pl - SUBSCRIPTION CAN UPDATE THE INDEX IT\n> USES AFTER ANALYZE\n>\n> ```\n> # show that index_b is not used\n> $node_subscriber->poll_query_until(\n> 'postgres', q{select idx_scan=0 from pg_stat_all_indexes where\n> indexrelname = 'index_b';}\n> ) or die \"Timed out while waiting for check subscriber tap_sub_rep_full\n> updates two rows via index scan with index on high cardinality column-2\";\n> ```\n>\n> poll_query_until() is still remained here, it should be replaced to is().\n>\n>\n>\nUpdated\n\n02. 032_subscribe_use_index.pl - SUBSCRIPTION BEHAVIOR WITH ENABLE_INDEXSCAN\n>\n> ```\n> # show that the unique index on replica identity is used even when\n> enable_indexscan=false\n> $result = $node_subscriber->safe_psql('postgres',\n> \"select idx_scan from pg_stat_all_indexes where indexrelname =\n> 'test_replica_id_full_idx'\");\n> is($result, qq(0), 'ensure subscriber has not used index with\n> enable_indexscan=false');\n> ```\n>\n> Is the comment wrong? The index test_replica_id_full_idx is not used here.\n>\n>\nYeah, the comment is wrong. It is a copy & paste error from the other test.\nFixed now\n\n\n>\n>\n> 03. 032_subscribe_use_index.pl - SUBSCRIPTION BEHAVIOR WITH\n> ENABLE_INDEXSCAN\n>\n> ```\n> $node_publisher->safe_psql('postgres',\n> \"ALTER TABLE test_replica_id_full REPLICA IDENTITY USING INDEX\n> test_replica_id_full_unique;\");\n> ```\n>\n> I was not sure why ALTER TABLE REPLICA IDENTITY USING INDEX was done on\n> the publisher side.\n> IIUC this feature works when REPLICA IDENTITY FULL is specified on a\n> publisher,\n> so it might not be altered here. If so, an index does not have to define\n> on the publisher too.\n>\n>\nYes, not strictly necessary but it is often the case that both\nsubscriber and publication have the similar schemas when unique index/pkey\nis used. 
For example, see t/028_row_filter.pl where we follow this pattern.\n\nStill, I manually tried that without the index on the publisher (e.g.,\nreplica identity full), that works as expected. But given that the majority\nof the tests already have that approach and this test focuses on\nenable_indexscan, I think I'll keep it as is - unless it is confusing?\n\n\n> 04. 032_subscribe_use_index.pl - SUBSCRIPTION BEHAVIOR WITH\n> ENABLE_INDEXSCAN\n>\n> ```\n> $node_subscriber->poll_query_until(\n> 'postgres', q{select (idx_scan=1) from pg_stat_all_indexes where\n> indexrelname = 'test_replica_id_full_unique'}\n> ) or die \"Timed out while waiting ensuring subscriber used unique index as\n> replica identity even with enable_indexscan=false'\";\n> ```\n>\n> 03 comment should be added here.\n>\n> Yes, done that as well.\n\n\nAttached v17 now. Thanks for the review!",
"msg_date": "Wed, 12 Oct 2022 20:44:06 +0200",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Dear Önder,\r\n\r\nThanks for updating the patch!\r\n\r\nI think your saying seems reasonable.\r\nI have no comments anymore now. Thanks for updating so quickly.\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 13 Oct 2022 00:54:27 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Wed, Aug 24, 2022 12:25 AM Önder Kalacı <onderkalaci@gmail.com> wrote:\r\n> Hi,\r\n> \r\n> Thanks for the review!\r\n> \r\n\r\nThanks for your reply.\r\n\r\n> \r\n> >\r\n> > 1.\r\n> > In FilterOutNotSuitablePathsForReplIdentFull(), is\r\n> > \"nonPartialIndexPathList\" a\r\n> > good name for the list? Indexes on only expressions are also be filtered.\r\n> >\r\n> > +static List *\r\n> > +FilterOutNotSuitablePathsForReplIdentFull(List *pathlist)\r\n> > +{\r\n> > + ListCell *lc;\r\n> > + List *nonPartialIndexPathList = NIL;\r\n> >\r\n> >\r\n> Yes, true. We only started filtering the non-partial ones first. Now\r\n> changed to *suitableIndexList*, does that look right?\r\n> \r\n\r\nThat looks ok to me.\r\n\r\n> \r\n> \r\n> > 3.\r\n> > It looks we should change the comment for FindReplTupleInLocalRel() in this\r\n> > patch.\r\n> >\r\n> > /*\r\n> > * Try to find a tuple received from the publication side (in\r\n> > 'remoteslot') in\r\n> > * the corresponding local relation using either replica identity index,\r\n> > * primary key or if needed, sequential scan.\r\n> > *\r\n> > * Local tuple, if found, is returned in '*localslot'.\r\n> > */\r\n> > static bool\r\n> > FindReplTupleInLocalRel(EState *estate, Relation localrel,\r\n> >\r\n> >\r\n> I made a small change, just adding \"index\". Do you expect a larger change?\r\n> \r\n> \r\n\r\nI think that's sufficient.\r\n\r\n> \r\n> \r\n> > 5.\r\n> > + if (!AttributeNumberIsValid(mainattno))\r\n> > + {\r\n> > + /*\r\n> > + * There are two cases to consider. First, if the\r\n> > index is a primary or\r\n> > + * unique key, we cannot have any indexes with\r\n> > expressions. 
So, at this\r\n> > + * point we are sure that the index we deal is not\r\n> > these.\r\n> > + */\r\n> > + Assert(RelationGetReplicaIndex(rel) !=\r\n> > RelationGetRelid(idxrel) &&\r\n> > + RelationGetPrimaryKeyIndex(rel) !=\r\n> > RelationGetRelid(idxrel));\r\n> > +\r\n> > + /*\r\n> > + * For a non-primary/unique index with an\r\n> > expression, we are sure that\r\n> > + * the expression cannot be used for replication\r\n> > index search. The\r\n> > + * reason is that we create relevant index paths\r\n> > by providing column\r\n> > + * equalities. And, the planner does not pick\r\n> > expression indexes via\r\n> > + * column equality restrictions in the query.\r\n> > + */\r\n> > + continue;\r\n> > + }\r\n> >\r\n> > Is it possible that it is a usable index with an expression? I think\r\n> > indexes\r\n> > with an expression has been filtered in\r\n> > FilterOutNotSuitablePathsForReplIdentFull(). If it can't be a usable index\r\n> > with\r\n> > an expression, maybe we shouldn't use \"continue\" here.\r\n> >\r\n> \r\n> \r\n> \r\n> Ok, I think there are some confusing comments in the code, which I updated.\r\n> Also, added one more explicit Assert to make the code a little more\r\n> readable.\r\n> \r\n> We can support indexes involving expressions but not indexes that are only\r\n> consisting of expressions. FilterOutNotSuitablePathsForReplIdentFull()\r\n> filters out the latter, see IndexOnlyOnExpression().\r\n> \r\n> So, for example, if we have an index as below, we are skipping the\r\n> expression while building the index scan keys:\r\n> \r\n> CREATE INDEX people_names ON people (firstname, lastname, (id || '_' ||\r\n> sub_id));\r\n> \r\n> We can consider removing `continue`, but that'd mean we should also adjust\r\n> the following code-block to handle indexprs. To me, that seems like an edge\r\n> case to implement at this point, given such an index is probably not\r\n> common. 
Do you think should I try to use the indexprs as well while\r\n> building the scan key?\r\n> \r\n> I'm mostly trying to keep the complexity small. If you suggest this\r\n> limitation should be lifted, I can give it a shot. I think the limitation I\r\n> leave here is with a single sentence: *The index on the subscriber can only\r\n> use simple column references. *\r\n> \r\n\r\nThanks for your explanation. I get it and think it's OK.\r\n\r\n> > 6.\r\n> > In the following case, I got a result which is different from HEAD, could\r\n> > you\r\n> > please look into it?\r\n> >\r\n> > -- publisher\r\n> > CREATE TABLE test_replica_id_full (x int);\r\n> > ALTER TABLE test_replica_id_full REPLICA IDENTITY FULL;\r\n> > CREATE PUBLICATION tap_pub_rep_full FOR TABLE test_replica_id_full;\r\n> >\r\n> > -- subscriber\r\n> > CREATE TABLE test_replica_id_full (x int, y int);\r\n> > CREATE INDEX test_replica_id_full_idx ON test_replica_id_full(x,y);\r\n> > CREATE SUBSCRIPTION tap_sub_rep_full_0 CONNECTION 'dbname=postgres\r\n> > port=5432' PUBLICATION tap_pub_rep_full;\r\n> >\r\n> > -- publisher\r\n> > INSERT INTO test_replica_id_full VALUES (1);\r\n> > UPDATE test_replica_id_full SET x = x + 1 WHERE x = 1;\r\n> >\r\n> > The data in subscriber:\r\n> > on HEAD:\r\n> > postgres=# select * from test_replica_id_full ;\r\n> > x | y\r\n> > ---+---\r\n> > 2 |\r\n> > (1 row)\r\n> >\r\n> > After applying the patch:\r\n> > postgres=# select * from test_replica_id_full ;\r\n> > x | y\r\n> > ---+---\r\n> > 1 |\r\n> > (1 row)\r\n> >\r\n> >\r\n> Ops, good catch. it seems we forgot to have:\r\n> \r\n> skey[scankey_attoff].sk_flags |= SK_SEARCHNULL;\r\n> \r\n> On head, the index used for this purpose could only be the primary key or\r\n> unique key on NOT NULL columns. Now, we do allow NULL values, and need to\r\n> search for them. Added that (and your test) to the updated patch.\r\n> \r\n> As a semi-related note, tuples_equal() decides `true` for (NULL = NULL). 
I\r\n> have not changed that, and it seems right in this context. Do you see any\r\n> issues with that?\r\n> \r\n> Also, I realized that the functions in the execReplication.c expect only\r\n> btree indexes. So, I skipped others as well. If that makes sense, I can\r\n> work on a follow-up patch after we can merge this, to remove some of the\r\n> limitations mentioned here.\r\n\r\nThanks for fixing it and updating the patch, I didn't see any issue about it.\r\n\r\nHere are some comments on v17 patch.\r\n\r\n1. \r\n-LogicalRepRelMapEntry *\r\n+LogicalRepPartMapEntry *\r\n logicalrep_partition_open(LogicalRepRelMapEntry *root,\r\n \t\t\t\t\t\t Relation partrel, AttrMap *map)\r\n {\r\n\r\nIs there any reason to change the return type of logicalrep_partition_open()? It\r\nseems ok without this change.\r\n\r\n2. \r\n\r\n+\t\t * of the relation cache entry (e.g., such as ANALYZE or\r\n+\t\t * CREATE/DROP index on the relation).\r\n\r\n\"e.g.\" and \"such as\" mean the same. I think we remove one of them.\r\n\r\n3.\r\n+$node_subscriber->poll_query_until(\r\n+\t'postgres', q{select (idx_scan = 2) from pg_stat_all_indexes where indexrelname = 'test_replica_id_full_idx';}\r\n+) or die \"Timed out while waiting for'check subscriber tap_sub_rep_full deletes one row via index\";\r\n+\r\n\r\n+$node_subscriber->poll_query_until(\r\n+\t'postgres', q{select (idx_scan = 1) from pg_stat_all_indexes where indexrelname = 'test_replica_id_full_idy';}\r\n+) or die \"Timed out while waiting for'check subscriber tap_sub_rep_full deletes one row via index\";\r\n\r\n\r\n\"for'check\" -> \"for check\"\r\n\r\n3.\r\n+$node_subscriber->safe_psql('postgres',\r\n+\t\"SELECT pg_reload_conf();\");\r\n+\r\n+# Testcase start: SUBSCRIPTION BEHAVIOR WITH ENABLE_INDEXSCAN\r\n+# ====================================================================\r\n+\r\n+$node_subscriber->stop('fast');\r\n+$node_publisher->stop('fast');\r\n+\r\n\r\n\"Testcase start\" in the comment should be \"Testcase 
end\".\r\n\r\n4.\r\nThere seems to be a problem in the following scenario, which results in\r\ninconsistent data between publisher and subscriber.\r\n\r\n-- publisher\r\nCREATE TABLE test_replica_id_full (x int, y int);\r\nALTER TABLE test_replica_id_full REPLICA IDENTITY FULL;\r\nCREATE PUBLICATION tap_pub_rep_full FOR TABLE test_replica_id_full;\r\n\r\n-- subscriber\r\nCREATE TABLE test_replica_id_full (x int, y int);\r\nCREATE UNIQUE INDEX test_replica_id_full_idx ON test_replica_id_full(x);\r\nCREATE SUBSCRIPTION tap_sub_rep_full_0 CONNECTION 'dbname=postgres port=5432' PUBLICATION tap_pub_rep_full;\r\n\r\n-- publisher\r\nINSERT INTO test_replica_id_full VALUES (NULL,1);\r\nINSERT INTO test_replica_id_full VALUES (NULL,2);\r\nINSERT INTO test_replica_id_full VALUES (NULL,3);\r\nupdate test_replica_id_full SET x=1 where y=2;\r\n\r\nThe data in publisher:\r\npostgres=# select * from test_replica_id_full order by y;\r\n x | y\r\n---+---\r\n | 1\r\n 1 | 2\r\n | 3\r\n(3 rows)\r\n\r\nThe data in subscriber:\r\npostgres=# select * from test_replica_id_full order by y;\r\n x | y\r\n---+---\r\n | 2\r\n 1 | 2\r\n | 3\r\n(3 rows)\r\n\r\nThere is no such problem on master branch.\r\n\r\n\r\nRegards,\r\nShi yu\r\n",
"msg_date": "Fri, 14 Oct 2022 02:25:58 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi,\n\nThanks for the review!\n\n\nHere are some comments on v17 patch.\n>\n> 1.\n> -LogicalRepRelMapEntry *\n> +LogicalRepPartMapEntry *\n> logicalrep_partition_open(LogicalRepRelMapEntry *root,\n> Relation partrel,\n> AttrMap *map)\n> {\n>\n> Is there any reason to change the return type of\n> logicalrep_partition_open()? It\n> seems ok without this change.\n>\n\nI think you are right, I probably needed that in some of my\nearlier iterations of the patch, but now it seems redundant. Reverted back\nto the original version.\n\n\n>\n> 2.\n>\n> + * of the relation cache entry (e.g., such as ANALYZE or\n> + * CREATE/DROP index on the relation).\n>\n> \"e.g.\" and \"such as\" mean the same. I think we remove one of them.\n>\n\nfixed\n\n\n>\n> 3.\n> +$node_subscriber->poll_query_until(\n> + 'postgres', q{select (idx_scan = 2) from pg_stat_all_indexes where\n> indexrelname = 'test_replica_id_full_idx';}\n> +) or die \"Timed out while waiting for'check subscriber tap_sub_rep_full\n> deletes one row via index\";\n> +\n>\n> +$node_subscriber->poll_query_until(\n> + 'postgres', q{select (idx_scan = 1) from pg_stat_all_indexes where\n> indexrelname = 'test_replica_id_full_idy';}\n> +) or die \"Timed out while waiting for'check subscriber tap_sub_rep_full\n> deletes one row via index\";\n>\n>\n> \"for'check\" -> \"for check\"\n>\n\nfixed\n\n\n>\n> 3.\n> +$node_subscriber->safe_psql('postgres',\n> + \"SELECT pg_reload_conf();\");\n> +\n> +# Testcase start: SUBSCRIPTION BEHAVIOR WITH ENABLE_INDEXSCAN\n> +# ====================================================================\n> +\n> +$node_subscriber->stop('fast');\n> +$node_publisher->stop('fast');\n> +\n>\n> \"Testcase start\" in the comment should be \"Testcase end\".\n>\n>\nfixed\n\n\n> 4.\n> There seems to be a problem in the following scenario, which results in\n> inconsistent data between publisher and subscriber.\n>\n> -- publisher\n> CREATE TABLE test_replica_id_full (x int, y int);\n> ALTER TABLE 
test_replica_id_full REPLICA IDENTITY FULL;\n> CREATE PUBLICATION tap_pub_rep_full FOR TABLE test_replica_id_full;\n>\n> -- subscriber\n> CREATE TABLE test_replica_id_full (x int, y int);\n> CREATE UNIQUE INDEX test_replica_id_full_idx ON test_replica_id_full(x);\n> CREATE SUBSCRIPTION tap_sub_rep_full_0 CONNECTION 'dbname=postgres\n> port=5432' PUBLICATION tap_pub_rep_full;\n>\n> -- publisher\n> INSERT INTO test_replica_id_full VALUES (NULL,1);\n> INSERT INTO test_replica_id_full VALUES (NULL,2);\n> INSERT INTO test_replica_id_full VALUES (NULL,3);\n> update test_replica_id_full SET x=1 where y=2;\n>\n> The data in publisher:\n> postgres=# select * from test_replica_id_full order by y;\n> x | y\n> ---+---\n> | 1\n> 1 | 2\n> | 3\n> (3 rows)\n>\n> The data in subscriber:\n> postgres=# select * from test_replica_id_full order by y;\n> x | y\n> ---+---\n> | 2\n> 1 | 2\n> | 3\n> (3 rows)\n>\n> There is no such problem on master branch.\n>\n>\nUff, the second problem reported regarding NULL values for this patch (both\nby you). First, v18 contains the fix for the problem. It turns out that my\nidea of treating all unique indexes (pkey, replica identity and unique\nregular indexes) the same proved to be wrong. The former two require all\nthe involved columns to have NOT NULL. The latter not.\n\nThis resulted in RelationFindReplTupleByIndex() to skip tuples_equal() for\nregular unique indexes (e.g., non pkey/replid). Hence, the first NULL value\nis considered the matching tuple. Instead, we should be doing a full tuple\nequality check (e.g., tuples_equal). This is what v18 does. Also, add the\nabove scenario as a test.\n\nI think we can probably skip tuples_equal() for unique indexes that consist\nof only NOT NULL columns. However, that seems like an over-optimization. If\nyou have such a unique index, why not create a primary key anyway? That's\nwhy I don't see much value in compicating the code for that use case.\n\nThanks for the review & testing. 
I'll focus more on the NULL values in my\nown testing as well. Still, I wanted to push my changes so that you can\nalso have a look if possible.\n\nAttached v18.\n\nOnder KALACI",
"msg_date": "Fri, 14 Oct 2022 18:04:02 +0200",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Fri, Sep 23, 2022 at 0:14 AM Önder Kalacı <onderkalaci@gmail.com> wrote:\r\n> Hii Wang wei,\r\n\r\nThanks for updating the patch and your reply.\r\n\r\n> > 1. In the function GetCheapestReplicaIdentityFullPath.\r\n> > +\tif (rel->pathlist == NIL)\r\n> > +\t{\r\n> > +\t\t/*\r\n> > +\t\t * A sequential scan could have been dominated by by an index\r\n> > scan\r\n> > +\t\t * during make_one_rel(). We should always have a sequential\r\n> > scan\r\n> > +\t\t * before set_cheapest().\r\n> > +\t\t */\r\n> > +\t\tPath\t *seqScanPath = create_seqscan_path(root, rel, NULL,\r\n> > 0);\r\n> > +\r\n> > +\t\tadd_path(rel, seqScanPath);\r\n> > +\t}\r\n> >\r\n> > This is a question I'm not sure about:\r\n> > Do we need this part to add sequential scan?\r\n> >\r\n> > I think in our case, the sequential scan seems to have been added by the\r\n> > function make_one_rel (see function set_plain_rel_pathlist).\r\n> \r\n> Yes, the sequential scan is added during make_one_rel.\r\n> \r\n> > If I am missing something, please let me know. BTW, there is a typo in\r\n> > above comment: `by by`.\r\n> \r\n> As the comment mentions, the sequential scan could have been dominated &\r\n> removed by index scan, see add_path():\r\n> \r\n> *We also remove from the rel's pathlist any old paths that are dominated\r\n> * by new_path --- that is, new_path is cheaper, at least as well ordered,\r\n> * generates no more rows, requires no outer rels not required by the old\r\n> * path, and is no less parallel-safe.\r\n> \r\n> Still, I agree that the comment could be improved, which I pushed.\r\n\r\nOh, sorry I didn't realize this part of the logic. 
Thanks for sharing this.\r\n\r\nAnd I have another confusion about function GetCheapestReplicaIdentityFullPath:\r\nIf rel->pathlist is NIL, could we return NULL directly from this function, and\r\nthen set idxoid to InvalidOid in function FindUsableIndexForReplicaIdentityFull\r\nin that case?\r\n\r\n===\r\n\r\nHere are some comments for test file 032_subscribe_use_index.pl on v18 patch:\r\n\r\n1.\r\n```\r\n+# Basic test where the subscriber uses index\r\n+# and only updates 1 row for and deletes\r\n+# 1 other row\r\n```\r\nThere seems to be an extra \"for\" here.\r\n\r\n2. Typos for subscription name in the error messages.\r\ntap_sub_rep_full_0 -> tap_sub_rep_full\r\n\r\n3. Typo in comments\r\n```\r\n+# use the newly created index (provided that it fullfils the requirements).\r\n```\r\nfullfils -> fulfils\r\n\r\n4. Some extra single quotes at the end of the error message ('\").\r\nFor example:\r\n```\r\n# wait until the index is used on the subscriber\r\n$node_subscriber->poll_query_until(\r\n\t'postgres', q{select (idx_scan = 200) from pg_stat_all_indexes where indexrelname = 'test_replica_id_full_idx';}\r\n) or die \"Timed out while waiting for check subscriber tap_sub_rep_full updates 200 rows via index'\";\r\n```\r\n\r\n5. 
The column names in the error message appear to be a typo.\r\n```\r\n+) or die \"Timed out while waiting for check subscriber tap_sub_rep_full updates two rows via index scan with index on high cardinality column-1\";\r\n...\r\n+) or die \"Timed out while waiting for check subscriber tap_sub_rep_full updates two rows via index scan with index on high cardinality column-3\";\r\n...\r\n+) or die \"Timed out while waiting for check subscriber tap_sub_rep_full updates two rows via index scan with index on high cardinality column-4\";\r\n```\r\nIt seems that we need to do the following change: 'column-3' -> 'column-1' and\r\n'column-4' -> 'column-2'.\r\nOr we could use the column names directly like this: 'column-1' -> 'column a',\r\n'column_3' -> 'column a' and 'column_4' -> 'column b'.\r\n\r\n6. DELETE action is missing from the error message.\r\n```\r\n+# 2 rows from first command, another 2 from the second command\r\n+# overall index_on_child_1_a is used 4 times\r\n+$node_subscriber->poll_query_until(\r\n+\t'postgres', q{select idx_scan=4 from pg_stat_all_indexes where indexrelname = 'index_on_child_1_a';}\r\n+) or die \"Timed out while waiting for check subscriber tap_sub_rep_full updates child_1 table'\";\r\n```\r\nI think we execute both UPDATE and DELETE for child_1 here. Could we add DELETE\r\naction to this error message?\r\n\r\n7. Table name in the error message.\r\n```\r\n# check if the index is used even when the index has NULL values\r\n$node_subscriber->poll_query_until(\r\n\t'postgres', q{select idx_scan=2 from pg_stat_all_indexes where indexrelname = 'test_replica_id_full_idx';}\r\n) or die \"Timed out while waiting for check subscriber tap_sub_rep_full updates parent table'\";\r\n```\r\nIt seems to be \"test_replica_id_full\" here instead of \"parent'\".\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Tue, 18 Oct 2022 06:46:05 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Wang, all\n\n\n> And I have another confusion about function\n> GetCheapestReplicaIdentityFullPath:\n> If rel->pathlist is NIL, could we return NULL directly from this function,\n> and\n> then set idxoid to InvalidOid in function\n> FindUsableIndexForReplicaIdentityFull\n> in that case?\n>\n>\nWe could, but then we need to move some other checks to some other places.\nI find the current flow easier to follow, where all happens\nvia cheapest_total_path, which is a natural field for this purpose.\n\nDo you have a strong opinion on this?\n\n\n> ===\n>\n> Here are some comments for test file 032_subscribe_use_index.pl on v18\n> patch:\n>\n> 1.\n> ```\n> +# Basic test where the subscriber uses index\n> +# and only updates 1 row for and deletes\n> +# 1 other row\n> ```\n> There seems to be an extra \"for\" here.\n>\n\n Fixed\n\n\n> 2. Typos for subscription name in the error messages.\n> tap_sub_rep_full_0 -> tap_sub_rep_full\n>\n>\nFixed\n\n\n> 3. Typo in comments\n> ```\n> +# use the newly created index (provided that it fullfils the\n> requirements).\n> ```\n> fullfils -> fulfils\n>\n>\nFixed\n\n\n> 4. Some extra single quotes at the end of the error message ('\").\n> For example:\n> ```\n> # wait until the index is used on the subscriber\n> $node_subscriber->poll_query_until(\n> 'postgres', q{select (idx_scan = 200) from pg_stat_all_indexes\n> where indexrelname = 'test_replica_id_full_idx';}\n> ) or die \"Timed out while waiting for check subscriber tap_sub_rep_full\n> updates 200 rows via index'\";\n> ```\n>\n\nAll fixed, thanks\n\n\n\n>\n> 5. 
The column names in the error message appear to be a typo.\n> ```\n> +) or die \"Timed out while waiting for check subscriber tap_sub_rep_full\n> updates two rows via index scan with index on high cardinality column-1\";\n> ...\n> +) or die \"Timed out while waiting for check subscriber tap_sub_rep_full\n> updates two rows via index scan with index on high cardinality column-3\";\n> ...\n> +) or die \"Timed out while waiting for check subscriber tap_sub_rep_full\n> updates two rows via index scan with index on high cardinality column-4\";\n> ```\n> It seems that we need to do the following change: 'column-3' -> 'column-1'\n> and\n> 'column-4' -> 'column-2'.\n> Or we could use the column names directly like this: 'column-1' -> 'column\n> a',\n> 'column_3' -> 'column a' and 'column_4' -> 'column b'.\n>\n\nI think the latter is easier to follow, thanks.\n\n\n>\n> 6. DELETE action is missing from the error message.\n> ```\n> +# 2 rows from first command, another 2 from the second command\n> +# overall index_on_child_1_a is used 4 times\n> +$node_subscriber->poll_query_until(\n> + 'postgres', q{select idx_scan=4 from pg_stat_all_indexes where\n> indexrelname = 'index_on_child_1_a';}\n> +) or die \"Timed out while waiting for check subscriber tap_sub_rep_full\n> updates child_1 table'\";\n> ```\n> I think we execute both UPDATE and DELETE for child_1 here. Could we add\n> DELETE\n> action to this error message?\n>\n>\nmakes sense, added\n\n\n> 7. Table name in the error message.\n> ```\n> # check if the index is used even when the index has NULL values\n> $node_subscriber->poll_query_until(\n> 'postgres', q{select idx_scan=2 from pg_stat_all_indexes where\n> indexrelname = 'test_replica_id_full_idx';}\n> ) or die \"Timed out while waiting for check subscriber tap_sub_rep_full\n> updates parent table'\";\n> ```\n> It seems to be \"test_replica_id_full\" here instead of \"parent'\".\n>\nfixed as well.\n\n\nAttached v19.\n\nThanks,\nOnder KALACI",
"msg_date": "Tue, 18 Oct 2022 18:04:33 +0200",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Wed, Oct 19, 2022 12:05 AM Önder Kalacı <onderkalaci@gmail.com> wrote:\r\n> \r\n> Attached v19.\r\n> \r\n\r\nThanks for updating the patch. Here are some comments on v19.\r\n\r\n1.\r\nIn execReplication.c:\r\n\r\n+\tTypeCacheEntry **eq = NULL; /* only used when the index is not unique */\r\n\r\nMaybe the comment here should be changed. Now it is used when the index is not\r\nprimary key or replica identity index.\r\n\r\n2.\r\n+# wait until the index is created\r\n+$node_subscriber->poll_query_until(\r\n+\t'postgres', q{select count(*)=1 from pg_stat_all_indexes where indexrelname = 'test_replica_id_full_idx';}\r\n+) or die \"Timed out while waiting for check subscriber tap_sub_rep_full updates one row via index\";\r\n\r\nThe message doesn't seem right, should it be changed to \"Timed out while\r\nwaiting for creating index test_replica_id_full_idx\"?\r\n\r\n3.\r\n+# now, ingest more data and create index on column y which has higher cardinality\r\n+# then create an index on column y so that future commands uses the index on column\r\n+$node_publisher->safe_psql('postgres',\r\n+\t\"INSERT INTO test_replica_id_full SELECT 50, i FROM generate_series(0,3100)i;\");\r\n\r\nThe comment say \"create (an) index on column y\" twice, maybe it can be changed\r\nto:\r\n\r\nnow, ingest more data and create index on column y which has higher cardinality,\r\nso that future commands will use the index on column y\r\n\r\n4.\r\n+# deletes 200 rows\r\n+$node_publisher->safe_psql('postgres',\r\n+\t\"DELETE FROM test_replica_id_full WHERE x IN (5, 6);\");\r\n+\r\n+# wait until the index is used on the subscriber\r\n+$node_subscriber->poll_query_until(\r\n+\t'postgres', q{select (idx_scan = 200) from pg_stat_all_indexes where indexrelname = 'test_replica_id_full_idx';}\r\n+) or die \"Timed out while waiting for check subscriber tap_sub_rep_full updates 200 rows via index\";\r\n\r\nIt would be better to call wait_for_catchup() after DELETE. 
(And some other\r\nplaces in this file.)\r\nBesides, the \"updates\" in the message should be \"deletes\".\r\n\r\n5.\r\n+# wait until the index is used on the subscriber\r\n+$node_subscriber->poll_query_until(\r\n+\t'postgres', q{select sum(idx_scan)=10 from pg_stat_all_indexes where indexrelname ilike 'users_table_part_%';}\r\n+) or die \"Timed out while waiting for check subscriber tap_sub_rep_full updates partitioned table\";\r\n\r\nMaybe we should say \"updates partitioned table with index\" in this message.\r\n\r\nRegards,\r\nShi yu\r\n",
"msg_date": "Thu, 20 Oct 2022 02:37:43 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Shi yu, all\n\n\n> In execReplication.c:\n>\n> + TypeCacheEntry **eq = NULL; /* only used when the index is not\n> unique */\n>\n> Maybe the comment here should be changed. Now it is used when the index is\n> not\n> primary key or replica identity index.\n>\n>\nmakes sense, updated\n\n\n> 2.\n> +# wait until the index is created\n> +$node_subscriber->poll_query_until(\n> + 'postgres', q{select count(*)=1 from pg_stat_all_indexes where\n> indexrelname = 'test_replica_id_full_idx';}\n> +) or die \"Timed out while waiting for check subscriber tap_sub_rep_full\n> updates one row via index\";\n>\n> The message doesn't seem right, should it be changed to \"Timed out while\n> waiting for creating index test_replica_id_full_idx\"?\n>\n\nyes, updated\n\n\n>\n> 3.\n> +# now, ingest more data and create index on column y which has higher\n> cardinality\n> +# then create an index on column y so that future commands uses the index\n> on column\n> +$node_publisher->safe_psql('postgres',\n> + \"INSERT INTO test_replica_id_full SELECT 50, i FROM\n> generate_series(0,3100)i;\");\n>\n> The comment say \"create (an) index on column y\" twice, maybe it can be\n> changed\n> to:\n>\n> now, ingest more data and create index on column y which has higher\n> cardinality,\n> so that future commands will use the index on column y\n>\n>\nfixed\n\n\n> 4.\n> +# deletes 200 rows\n> +$node_publisher->safe_psql('postgres',\n> + \"DELETE FROM test_replica_id_full WHERE x IN (5, 6);\");\n> +\n> +# wait until the index is used on the subscriber\n> +$node_subscriber->poll_query_until(\n> + 'postgres', q{select (idx_scan = 200) from pg_stat_all_indexes\n> where indexrelname = 'test_replica_id_full_idx';}\n> +) or die \"Timed out while waiting for check subscriber tap_sub_rep_full\n> updates 200 rows via index\";\n>\n> It would be better to call wait_for_catchup() after DELETE. 
(And some other\n> places in this file.)\n>\n\nHmm, I cannot follow this easily.\n\nWhy do you think wait_for_catchup() should be called? In general, I tried\nto follow a pattern where we call poll_query_until() so that we are sure\nthat all the changes are replicated via the index. And then, an\nadditional check with `is($result, ..` such that we also verify the\ncorrectness of the data.\n\nOne alternative could be to use wait_for_catchup() and then have multiple\n`is($result, ..` to check both pg_stat_all_indexes and the correctness of\nthe data.\n\nOne minor advantage I see with the current approach is that every\n`is($result, ..` adds one step to the test. So, if I use `is($result, ..`\nfor pg_stat_all_indexes queries, then I'd be adding multiple steps for a\nsingle test. It felt it is more natural/common to test roughly once with\n`is($result, ..` on each test. Or, at least do not add additional ones for\npg_stat_all_indexes checks.\n\n\n\n> Besides, the \"updates\" in the message should be \"deletes\".\n>\n>\nfixed\n\n\n> 5.\n> +# wait until the index is used on the subscriber\n> +$node_subscriber->poll_query_until(\n> + 'postgres', q{select sum(idx_scan)=10 from pg_stat_all_indexes\n> where indexrelname ilike 'users_table_part_%';}\n> +) or die \"Timed out while waiting for check subscriber tap_sub_rep_full\n> updates partitioned table\";\n>\n> Maybe we should say \"updates partitioned table with index\" in this message.\n>\n>\nFixed\n\nAttached v20.\n\nThanks!\n\nOnder KALACI",
"msg_date": "Fri, 21 Oct 2022 14:14:09 +0200",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi hackers,\n\nI rebased the changes to the current master branch, reflected pg_indent\nsuggestions and also made a few minor style changes.\n\nAlso, tested the patch with a few new PG 15 features in combination (such\nas row/column filter in logical replication, NULLS NOT DISTINCT indexes\netc.) as well somethings that I haven't tested before such\nas publish_via_partition_root.\n\nI have not added those tests to the regression tests as the existing tests\nof this patch are already bulky and I don't see a specific reason to add\nall combinations. Still, if anyone thinks that it is a good idea to add\nmore tests, I can do that. For reference, here are the tests that I did\nmanually: More Replication Index Tests (github.com)\n<https://gist.github.com/onderkalaci/fa91688dea968e4024623feb4ddb627f>\n\nAttached v21.\n\nOnder KALACI\n\n\n\nÖnder Kalacı <onderkalaci@gmail.com>, 21 Eki 2022 Cum, 14:14 tarihinde şunu\nyazdı:\n\n> Hi Shi yu, all\n>\n>\n>> In execReplication.c:\n>>\n>> + TypeCacheEntry **eq = NULL; /* only used when the index is not\n>> unique */\n>>\n>> Maybe the comment here should be changed. 
Now it is used when the index\n>> is not\n>> primary key or replica identity index.\n>>\n>>\n> makes sense, updated\n>\n>\n>> 2.\n>> +# wait until the index is created\n>> +$node_subscriber->poll_query_until(\n>> + 'postgres', q{select count(*)=1 from pg_stat_all_indexes where\n>> indexrelname = 'test_replica_id_full_idx';}\n>> +) or die \"Timed out while waiting for check subscriber tap_sub_rep_full\n>> updates one row via index\";\n>>\n>> The message doesn't seem right, should it be changed to \"Timed out while\n>> waiting for creating index test_replica_id_full_idx\"?\n>>\n>\n> yes, updated\n>\n>\n>>\n>> 3.\n>> +# now, ingest more data and create index on column y which has higher\n>> cardinality\n>> +# then create an index on column y so that future commands uses the\n>> index on column\n>> +$node_publisher->safe_psql('postgres',\n>> + \"INSERT INTO test_replica_id_full SELECT 50, i FROM\n>> generate_series(0,3100)i;\");\n>>\n>> The comment say \"create (an) index on column y\" twice, maybe it can be\n>> changed\n>> to:\n>>\n>> now, ingest more data and create index on column y which has higher\n>> cardinality,\n>> so that future commands will use the index on column y\n>>\n>>\n> fixed\n>\n>\n>> 4.\n>> +# deletes 200 rows\n>> +$node_publisher->safe_psql('postgres',\n>> + \"DELETE FROM test_replica_id_full WHERE x IN (5, 6);\");\n>> +\n>> +# wait until the index is used on the subscriber\n>> +$node_subscriber->poll_query_until(\n>> + 'postgres', q{select (idx_scan = 200) from pg_stat_all_indexes\n>> where indexrelname = 'test_replica_id_full_idx';}\n>> +) or die \"Timed out while waiting for check subscriber tap_sub_rep_full\n>> updates 200 rows via index\";\n>>\n>> It would be better to call wait_for_catchup() after DELETE. (And some\n>> other\n>> places in this file.)\n>>\n>\n> Hmm, I cannot follow this easily.\n>\n> Why do you think wait_for_catchup() should be called? 
In general, I tried\n> to follow a pattern where we call poll_query_until() so that we are sure\n> that all the changes are replicated via the index. And then, an\n> additional check with `is($result, ..` such that we also verify the\n> correctness of the data.\n>\n> One alternative could be to use wait_for_catchup() and then have multiple\n> `is($result, ..` to check both pg_stat_all_indexes and the correctness of\n> the data.\n>\n> One minor advantage I see with the current approach is that every\n> `is($result, ..` adds one step to the test. So, if I use `is($result, ..`\n> for pg_stat_all_indexes queries, then I'd be adding multiple steps for a\n> single test. It felt it is more natural/common to test roughly once with\n> `is($result, ..` on each test. Or, at least do not add additional ones for\n> pg_stat_all_indexes checks.\n>\n>\n>\n>> Besides, the \"updates\" in the message should be \"deletes\".\n>>\n>>\n> fixed\n>\n>\n>> 5.\n>> +# wait until the index is used on the subscriber\n>> +$node_subscriber->poll_query_until(\n>> + 'postgres', q{select sum(idx_scan)=10 from pg_stat_all_indexes\n>> where indexrelname ilike 'users_table_part_%';}\n>> +) or die \"Timed out while waiting for check subscriber tap_sub_rep_full\n>> updates partitioned table\";\n>>\n>> Maybe we should say \"updates partitioned table with index\" in this\n>> message.\n>>\n>>\n> Fixed\n>\n> Attached v20.\n>\n> Thanks!\n>\n> Onder KALACI\n>",
"msg_date": "Fri, 11 Nov 2022 17:16:36 +0100",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-11 17:16:36 +0100, Önder Kalacı wrote:\n> I rebased the changes to the current master branch, reflected pg_indent\n> suggestions and also made a few minor style changes.\n\nNeeds another rebase, I think:\n\nhttps://cirrus-ci.com/task/5592444637544448\n\n[05:44:22.102] FAILED: src/backend/postgres_lib.a.p/replication_logical_worker.c.o \n[05:44:22.102] ccache cc -Isrc/backend/postgres_lib.a.p -Isrc/include -I../src/include -Isrc/include/storage -Isrc/include/utils -Isrc/include/catalog -Isrc/include/nodes -fdiagnostics-color=always -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -g -fno-strict-aliasing -fwrapv -fexcess-precision=standard -D_GNU_SOURCE -Wmissing-prototypes -Wpointer-arith -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wshadow=compatible-local -Wformat-security -Wdeclaration-after-statement -Wno-format-truncation -Wno-stringop-truncation -fPIC -pthread -DBUILDING_DLL -MD -MQ src/backend/postgres_lib.a.p/replication_logical_worker.c.o -MF src/backend/postgres_lib.a.p/replication_logical_worker.c.o.d -o src/backend/postgres_lib.a.p/replication_logical_worker.c.o -c ../src/backend/replication/logical/worker.c\n[05:44:22.102] ../src/backend/replication/logical/worker.c: In function ‘get_usable_indexoid’:\n[05:44:22.102] ../src/backend/replication/logical/worker.c:2101:36: error: ‘ResultRelInfo’ has no member named ‘ri_RootToPartitionMap’\n[05:44:22.102] 2101 | TupleConversionMap *map = relinfo->ri_RootToPartitionMap;\n[05:44:22.102] | ^~\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 6 Dec 2022 10:47:56 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi,\n\nThanks for the heads-up.\n\n\n> Needs another rebase, I think:\n>\n> https://cirrus-ci.com/task/5592444637544448\n>\n> [05:44:22.102] FAILED:\n> src/backend/postgres_lib.a.p/replication_logical_worker.c.o\n> [05:44:22.102] ccache cc -Isrc/backend/postgres_lib.a.p -Isrc/include\n> -I../src/include -Isrc/include/storage -Isrc/include/utils\n> -Isrc/include/catalog -Isrc/include/nodes -fdiagnostics-color=always -pipe\n> -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -g -fno-strict-aliasing -fwrapv\n> -fexcess-precision=standard -D_GNU_SOURCE -Wmissing-prototypes\n> -Wpointer-arith -Werror=vla -Wendif-labels -Wmissing-format-attribute\n> -Wimplicit-fallthrough=3 -Wcast-function-type -Wshadow=compatible-local\n> -Wformat-security -Wdeclaration-after-statement -Wno-format-truncation\n> -Wno-stringop-truncation -fPIC -pthread -DBUILDING_DLL -MD -MQ\n> src/backend/postgres_lib.a.p/replication_logical_worker.c.o -MF\n> src/backend/postgres_lib.a.p/replication_logical_worker.c.o.d -o\n> src/backend/postgres_lib.a.p/replication_logical_worker.c.o -c\n> ../src/backend/replication/logical/worker.c\n> [05:44:22.102] ../src/backend/replication/logical/worker.c: In function\n> ‘get_usable_indexoid’:\n> [05:44:22.102] ../src/backend/replication/logical/worker.c:2101:36: error:\n> ‘ResultRelInfo’ has no member named ‘ri_RootToPartitionMap’\n> [05:44:22.102] 2101 | TupleConversionMap *map =\n> relinfo->ri_RootToPartitionMap;\n> [05:44:22.102] | ^~\n>\n>\nYes, it seems the commit (fb958b5da86da69651f6fb9f540c2cfb1346cdc5) broke\nthe build and commit(a61b1f74823c9c4f79c95226a461f1e7a367764b) broke the\ntests. But the fixes were trivial. All tests pass again.\n\nAttached v22.\n\nOnder KALACI",
"msg_date": "Mon, 12 Dec 2022 14:28:23 +0100",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com> writes:\n> Attached v22.\n\nI took a very brief look through this. I'm not too pleased with\nthis whole line of development TBH. It seems to me that the core\ndesign of execReplication.c and related code is \"let's build our\nown half-baked executor and much-less-than-half-baked planner,\nbecause XXX\". (I'm not too sure what XXX was, really, but apparently\nsomebody managed to convince people that that is a sane and\nmaintainable design.) Now this patch has decided that it *will*\nuse the real planner, or at least portions of it in some cases.\nIf we're going to do that ISTM we ought to replace all the existing\nnot-really-a-planner logic, but this has not done that; instead\nwe have a large net addition to the already very duplicative\nreplication code, with weird restrictions because it doesn't want\nto make changes to the half-baked executor.\n\nI think we should either live within the constraints set by this\noverarching design, or else nuke execReplication.c from orbit and\nstart using the real planner and executor. Perhaps the foreign\nkey enforcement mechanisms could be a model --- although if you\ndon't want to buy into using SPI as well, you probably should look\nat Amit L's work at [1].\n\nAlso ... maybe I am missing something, but is REPLICA IDENTITY FULL\nsanely defined in the first place? It looks to me that\nRelationFindReplTupleSeq assumes without proof that there is a unique\nfull-tuple match, but that is only reasonable to assume if there is at\nleast one unique index (and maybe not even then, if nulls are involved).\nIf there is a unique index, why can't that be chosen as replica identity?\nIf there isn't, what semantics are we actually providing?\n\nWhat I'm thinking about is that maybe REPLICA IDENTITY FULL should be\ndefined as \"the subscriber can pick any unique index to match on,\nand is allowed to fail if the table has none\". 
Or if \"fail\" is a bridge\ntoo far for you, we could fall back to the existing seqscan logic.\nBut thumbing through the existing indexes to find a non-expression unique\nindex wouldn't require invoking the full planner. Any candidate index\nwould result in a plan estimated to fetch just one row, so there aren't\nlikely to be serious cost differences.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CA+HiwqG5e8pk8s7+7zhr1Nc_PGyhEdM5f=pHkMOdK1RYWXfJsg@mail.gmail.com\n\n\n",
"msg_date": "Sat, 07 Jan 2023 13:50:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi,\n\nThank you for the useful comments!\n\n\n> I took a very brief look through this. I'm not too pleased with\n> this whole line of development TBH. It seems to me that the core\n> design of execReplication.c and related code is \"let's build our\n> own half-baked executor and much-less-than-half-baked planner,\n> because XXX\". (I'm not too sure what XXX was, really, but apparently\n> somebody managed to convince people that that is a sane and\n> maintainable design.)\n\n\nThis provided me with a broad perspective for the whole execReplication.c.\nBefore your comment, I have not thought about why there is a specific\nlogic for the execution of logical replication.\n\nI tried to read the initial commit that adds execReplication.c\n(665d1fad99e7b11678b0d5fa24d2898424243cd6)\nand the main relevant mail thread (PostgreSQL: Re: Logical Replication WIP\n<https://www.postgresql.org/message-id/flat/b2b0522a-800f-5dc2-2a4e-04c1f810a5f6%402ndquadrant.com#b3ed2ee7ca2877a7af35f706fc298f23>).\nBut, I couldn't find\nany references on this decision. Maybe I'm missing something?\n\nRegarding planner, as far as I can speculate, before my patch, there is\nprobably no need for any planner infrastructure.\nThe reason seems that the logical replication either needs a\nsequential scan for REPLICA IDENTITY FULL\nor an index scan for the primary key / unique index. 
I'm not suggesting\nthat we shouldn't use planner at all,\njust trying to understand the design choices that have been made earlier.\n\n\n> Now this patch has decided that it *will*\n> use the real planner, or at least portions of it in some cases.\n> If we're going to do that ISTM we ought to replace all the existing\n> not-really-a-planner logic, but this has not done that; instead\n> we have a large net addition to the already very duplicative\n> replication code, with weird restrictions because it doesn't want\n> to make changes to the half-baked executor.\n>\n\nThat sounds like one good perspective on the restrictions that this patch\nadds.\nFrom my perspective, I wanted to fit into the existing execReplication.c,\nwhich only\nworks for primary keys / unique keys. And, if you look closely, the\nrestrictions I suggest\nare actually the same/similar restrictions as REPLICA IDENTITY ... USING\nINDEX.\nI hope/assume this is no surprise for you and not too hard to explain to\nthe users.\n\n\n>\n> I think we should either live within the constraints set by this\n> overarching design, or else nuke execReplication.c from orbit and\n> start using the real planner and executor. Perhaps the foreign\n> key enforcement mechanisms could be a model --- although if you\n> don't want to buy into using SPI as well, you probably should look\n> at Amit L's work at [1].\n>\n\nThis sounds like a good long-term plan to me. Are you also suggesting to do\nthat\nbefore this patch?\n\nI think that such a change is a non-trivial / XL project, which could\nlikely not be easily\nachievable by myself in a reasonable time frame.\n\n\n>\n> Also ... maybe I am missing something, but is REPLICA IDENTITY FULL\n> sanely defined in the first place? 
It looks to me that\n> RelationFindReplTupleSeq assumes without proof that there is a unique\n> full-tuple match, but that is only reasonable to assume if there is at\n> least one unique index (and maybe not even then, if nulls are involved).\n>\n\nIn general, RelationFindReplTupleSeq is ok not to match any tuples. So, I'm\nnot sure\nif uniqueness is required?\n\nEven if there are multiple matches, RelationFindReplTupleSeq does only one\nchange at\na time. My understanding is that if there are multiple matches on the\nsource, they are\ngenerated as different messages, and each message triggers\nRelationFindReplTupleSeq.\n\n\n\n> If there is a unique index, why can't that be chosen as replica identity?\n> If there isn't, what semantics are we actually providing?\n>\n\nI'm not sure I can fully follow this question. In this patch, I'm trying to\nallow non-unique\nindexes to be used in the subscription. And, the target could have multiple\nindexes.\n\nSo, the semantics is that we automatically allow users to be able to use\nnon-unique\nindexes on the subscription side even if the replica identity is full on\nthe source.\n\nThe reason (a) we use planner (b) not ask users which index to use, is that\nit'd be very inconvenient for any user to pick the indexes among multiple\nindexes on the subscription.\n\nIf there is a unique index, the expectation is that the user would pick\nREPLICA IDENTITY .. USING INDEX or just make it the primary key.\nIn those cases, this patch would not interfere with the existing logic.\n\n\n> What I'm thinking about is that maybe REPLICA IDENTITY FULL should be\n> defined as \"the subscriber can pick any unique index to match on,\n> and is allowed to fail if the table has none\". Or if \"fail\" is a bridge\n> too far for you, we could fall back to the existing seqscan logic.\n> But thumbing through the existing indexes to find a non-expression unique\n> index wouldn't require invoking the full planner. 
Any candidate index\n> would result in a plan estimated to fetch just one row, so there aren't\n> likely to be serious cost differences.\n>\n\nAgain, maybe I'm missing something in your comments, but this patch deals\nwith\nnon-unique indexes. That's why we rely on the planner to pick the optimal\nindex\namong what we have on the subscription. (In my first iteration of this\npatch,\nI decided to pick the index without planner, but than it seems much nicer\nto rely\non the planner for obvious reasons of picking the right index)\n\nFor example, if you have a unique index on the subscription, the planner\nalready\npicks that. But, still, if you could afford to have unique index, you\nshould better\nuse REPLICA IDENTITY .. USING INDEX or just primary key. I gave this example\nfor explaining one edge case that many devs could think of.\n\nLastly, any (auto)-ANALYZE on the target table re-calculates the candidate\nindex on\nthe subscription. So, hopefully we are not too behind with the statistics\nfor a long time,\nand have a good index to use.\n\nThanks,\nOnder KALACI\n\nHi,Thank you for the useful comments!\n\nI took a very brief look through this. I'm not too pleased with\nthis whole line of development TBH. It seems to me that the core\ndesign of execReplication.c and related code is \"let's build our\nown half-baked executor and much-less-than-half-baked planner,\nbecause XXX\". (I'm not too sure what XXX was, really, but apparently\nsomebody managed to convince people that that is a sane and\nmaintainable design.) This provided me with a broad perspective for the whole execReplication.c. Before your comment, I have not thought about why there is a specificlogic for the execution of logical replication.I tried to read the initial commit that adds execReplication.c (665d1fad99e7b11678b0d5fa24d2898424243cd6)and the main relevant mail thread (PostgreSQL: Re: Logical Replication WIP). But, I couldn't findany references on this decision. Maybe I'm missing something? 
Regarding planner, as far as I can speculate, before my patch, there is probably no need for any planner infrastructure.The reason seems that the logical replication either needs a sequential scan for REPLICA IDENTITY FULLor an index scan for the primary key / unique index. I'm not suggesting that we shouldn't use planner at all,just trying to understand the design choices that have been made earlier. Now this patch has decided that it *will*\nuse the real planner, or at least portions of it in some cases.\nIf we're going to do that ISTM we ought to replace all the existing\nnot-really-a-planner logic, but this has not done that; instead\nwe have a large net addition to the already very duplicative\nreplication code, with weird restrictions because it doesn't want\nto make changes to the half-baked executor.That sounds like a one good perspective on the restrictions that this patch adds.From my perspective, I wanted to fit into the existing execReplication.c, which only works for primary keys / unique keys. And, if you look closely, the restrictions I suggestare actually the same/similar restrictions with REPLICA IDENTITY ... USING INDEX. I hope/assume this is no surprise for you and not too hard to explain to the users. \n\nI think we should either live within the constraints set by this\noverarching design, or else nuke execReplication.c from orbit and\nstart using the real planner and executor. Perhaps the foreign\nkey enforcement mechanisms could be a model --- although if you\ndon't want to buy into using SPI as well, you probably should look\nat Amit L's work at [1].This sounds like a good long term plan to me. Are you also suggesting to do thatbefore this patch?I think that such a change is a non-trivial / XL project, which could likely not be easilyachievable by myself in a reasonable time frame. \n\nAlso ... maybe I am missing something, but is REPLICA IDENTITY FULL\nsanely defined in the first place? 
It looks to me that\nRelationFindReplTupleSeq assumes without proof that there is a unique\nfull-tuple match, but that is only reasonable to assume if there is at\nleast one unique index (and maybe not even then, if nulls are involved).In general, RelationFindReplTupleSeq is ok not to match any tuples. So, I'm not sureif uniqueness is required?Even if there are multiple matches, RelationFindReplTupleSeq does only one change ata time. My understanding is that if there are multiple matches on the source, they aregenerated as different messages, and each message triggers RelationFindReplTupleSeq. \nIf there is a unique index, why can't that be chosen as replica identity?\nIf there isn't, what semantics are we actually providing?I'm not sure I can fully follow this question. In this patch, I'm trying to allow non-uniqueindexes to be used in the subscription. And, the target could have multiple indexes.So, the semantics is that we automatically allow users to be able to use non-uniqueindexes on the subscription side even if the replica identity is full on the source.The reason (a) we use planner (b) not ask users which index to use, is thatit'd be very inconvenient for any user to pick the indexes among multipleindexes on the subscription. If there is a unique index, the expectation is that the user would pick REPLICA IDENTITY .. USING INDEX or just make it the primary key. In those cases, this patch would not interfere with the existing logic. \n\nWhat I'm thinking about is that maybe REPLICA IDENTITY FULL should be\ndefined as \"the subscriber can pick any unique index to match on,\nand is allowed to fail if the table has none\". Or if \"fail\" is a bridge\ntoo far for you, we could fall back to the existing seqscan logic.\nBut thumbing through the existing indexes to find a non-expression unique\nindex wouldn't require invoking the full planner. 
Any candidate index\nwould result in a plan estimated to fetch just one row, so there aren't\nlikely to be serious cost differences.Again, maybe I'm missing something in your comments, but this patch deals withnon-unique indexes. That's why we rely on the planner to pick the optimal indexamong what we have on the subscription. (In my first iteration of this patch, I decided to pick the index without planner, but than it seems much nicer to relyon the planner for obvious reasons of picking the right index) For example, if you have a unique index on the subscription, the planner already picks that. But, still, if you could afford to have unique index, you should betteruse REPLICA IDENTITY .. USING INDEX or just primary key. I gave this examplefor explaining one edge case that many devs could think of. Lastly, any (auto)-ANALYZE on the target table re-calculates the candidate index on the subscription. So, hopefully we are not too behind with the statistics for a long time, and have a good index to use.Thanks,Onder KALACI",
"msg_date": "Mon, 9 Jan 2023 19:21:36 +0100",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-07 13:50:04 -0500, Tom Lane wrote:\n> I think we should either live within the constraints set by this\n> overarching design, or else nuke execReplication.c from orbit and\n> start using the real planner and executor. Perhaps the foreign\n> key enforcement mechanisms could be a model --- although if you\n> don't want to buy into using SPI as well, you probably should look\n> at Amit L's work at [1].\n\nI don't think using the full executor for every change is feasible from an\noverhead perspective. But it might make sense to bail out to using the full\nexecutor in a bunch of non-fastpath paths.\n\nI think this is basically similar to COPY not using the full executor.\n\nBut that doesn't mean that all of this has to be open coded in\nexecReplication.c. Abstracting pieces so that COPY, logical rep and perhaps\neven nodeModifyTable.c can share code makes sense.\n\n\n> Also ... maybe I am missing something, but is REPLICA IDENTITY FULL\n> sanely defined in the first place? It looks to me that\n> RelationFindReplTupleSeq assumes without proof that there is a unique\n> full-tuple match, but that is only reasonable to assume if there is at\n> least one unique index (and maybe not even then, if nulls are involved).\n\nIf the table definition match between publisher and standby, it doesn't matter\nwhich tuple is updated, if all columns are used to match. Since there's\nnothing distinguishing two rows with all columns being equal, it doesn't\nmatter which we update.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 9 Jan 2023 12:03:47 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-01-07 13:50:04 -0500, Tom Lane wrote:\n>> Also ... maybe I am missing something, but is REPLICA IDENTITY FULL\n>> sanely defined in the first place? It looks to me that\n>> RelationFindReplTupleSeq assumes without proof that there is a unique\n>> full-tuple match, but that is only reasonable to assume if there is at\n>> least one unique index (and maybe not even then, if nulls are involved).\n\n> If the table definition match between publisher and standby, it doesn't matter\n> which tuple is updated, if all columns are used to match. Since there's\n> nothing distinguishing two rows with all columns being equal, it doesn't\n> matter which we update.\n\nYeah, but the point here is precisely that they might *not* match;\nfor example there could be extra columns in the subscriber's table.\nThis may be largely a documentation problem, though --- I think my\nbeef is mainly that there's nothing in our docs explaining the\nsemantic pitfalls of FULL, we only say \"it's slow\".\n\nAnyway, to get back to the point at hand: if we do have a REPLICA IDENTITY\nFULL situation then we can make use of any unique index over a subset of\nthe transmitted columns, and if there's more than one candidate index\nit's unlikely to matter which one we pick. Given your comment I guess\nwe have to also compare the non-indexed columns, so we can't completely\nconvert the FULL case to the straight index case. But still it doesn't\nseem to me to be appropriate to use the planner to find a suitable index.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Jan 2023 15:37:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Mon, Jan 9, 2023 at 11:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Anyway, to get back to the point at hand: if we do have a REPLICA IDENTITY\n> FULL situation then we can make use of any unique index over a subset of\n> the transmitted columns, and if there's more than one candidate index\n> it's unlikely to matter which one we pick. Given your comment I guess\n> we have to also compare the non-indexed columns, so we can't completely\n> convert the FULL case to the straight index case. But still it doesn't\n> seem to me to be appropriate to use the planner to find a suitable index.\n\nThe main purpose of REPLICA IDENTITY FULL seems to be to enable logical\nreplication for tables that may have duplicates and therefore cannot have a\nunique index that can be used as a replica identity.\n\nFor those tables the user currently needs to choose between update/delete\nerroring (bad) or doing a sequential scan on the apply side per\nupdated/deleted tuple (often worse). This issue currently prevents a lot of\nautomation around logical replication, because users need to decide whether\nand when they are willing to accept partial downtime. The current REPLICA\nIDENTITY FULL implementation can work in some cases, but applying the\neffects of an update that affected a million rows through a million\nsequential scans will certainly not end well.\n\nThis patch solves the problem by allowing the apply side to pick a\nnon-unique index to find any matching tuple instead of always using a\nsequential scan, but that either requires some planning/costing logic to\navoid picking a lousy index, or allowing the user to manually preselect the\nindex to use, which is less convenient.\n\nAn alternative might be to construct prepared statements and using the\nregular planner. If applied uniformly that would also be nice from the\nextensibility point-of-view, since there is currently no way for an\nextension to augment the apply side. 
However, I assume the current approach\nof using low-level functions in the common case was chosen for performance\nreasons.\n\nI suppose the options are:\n1. use regular planner uniformly\n2. use regular planner only when there's no replica identity (or\nconfigurable?)\n3. only use low-level functions\n4. keep using sequential scans for every single updated row\n5. introduce a hidden logical row identifier in the heap that is guaranteed\nunique within a table and can be used as a replica identity when no unique\nindex exists\n\nAny thoughts?\n\ncheers,\nMarco",
"msg_date": "Fri, 20 Jan 2023 15:35:51 +0300",
"msg_from": "Marco Slot <marco.slot@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Marco, Tom,\n\n> But still it doesn't seem to me to be appropriate to use the planner to\nfind a suitable index.\n\nAs Marco noted, here we are trying to pick an index that is non-unique. We\ncould pick the index based on information extracted from pg_index (or\nsuch), but then, it'd be a premature selection. Before sending the patch to\npgsql-hackers, I initially tried to find a suitable one with such an\napproach.\n\nBut then, I still ended up using costing functions (and some other low\nlevel functions). Overall, it felt like the planner is the module that\nmakes this decision best. Why would we try to invent another immature way\nof doing this? With that reasoning, I ended up using the related planner\nfunctions directly.\n\nHowever, I assume the current approach of using low-level functions in the\n> common case was chosen for performance reasons.\n>\n\nThat's partially the reason. If you look at the patch, we use the planner\n(or the low level functions) infrequently. It is only called when the\nlogical replication relation cache is rebuilt. As far as I can see, that\nhappens with (auto) ANALYZE or DDLs etc. I expect these are infrequent\noperations. Still, I wanted to make sure we do not create too much overhead\neven if there are frequent invalidations.\n\nThe main reason for using the low level functions over the planner itself\nis to have some more control over the decision. For example, due to the\nexecution limitations, we currently cannot allow an index that consists of\nonly expressions (similar to pkey restriction). With the current approach,\nwe can easily filter those out.\n\nAlso, another minor reason is that, if we use planner, we'd get a\nPlannedStmt back. It also felt weird to check back the index used from a\nPlannedStmt. In the current patch, we iterate over Paths, which seems more\nintuitive to me.\n\n\n> I suppose the options are:\n> 1. use regular planner uniformly\n> 2. 
use regular planner only when there's no replica identity (or\n> configurable?)\n> 3. only use low-level functions\n> 4. keep using sequential scans for every single updated row\n> 5. introduce a hidden logical row identifier in the heap that is\n> guaranteed unique within a table and can be used as a replica identity when\n> no unique index exists\n>\n\nOne other option I considered was to ask the index explicitly on the\nsubscriber side from the user when REPLICA IDENTITY is FULL. But, it is a\npretty hard choice for any user, even a planner sometimes fails to pick the\nright index :) Also, it is probably controversial to change any of the\nAPIs for this purpose?\n\nI'd be happy to hear from more experienced hackers on the trade-offs for\nthe above, and I'd be open to work on that if there is a clear winner. For\nme (3) is a decent solution for the problem.\n\nThanks,\nOnder",
"msg_date": "Fri, 27 Jan 2023 16:02:13 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Fri, Jan 27, 2023 at 6:32 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>>\n>> I suppose the options are:\n>> 1. use regular planner uniformly\n>> 2. use regular planner only when there's no replica identity (or configurable?)\n>> 3. only use low-level functions\n>> 4. keep using sequential scans for every single updated row\n>> 5. introduce a hidden logical row identifier in the heap that is guaranteed unique within a table and can be used as a replica identity when no unique index exists\n>\n>\n> One other option I considered was to ask the index explicitly on the subscriber side from the user when REPLICA IDENTITY is FULL. But, it is a pretty hard choice for any user, even a planner sometimes fails to pick the right index :) Also, it is probably controversial to change any of the APIs for this purpose?\n>\n\nI agree that it won't be a very convenient option for the user but how\nabout along with asking for an index from the user (when the user\ndidn't provide an index), we also allow to make use of any unique\nindex over a subset of the transmitted columns, and if there's more\nthan one candidate index pick any one. Additionally, we can allow\ndisabling the use of an index scan for this particular case. If we are\ntoo worried about API change for allowing users to specify the index\nthen we can do that later or as a separate patch.\n\n> I'd be happy to hear from more experienced hackers on the trade-offs for the above, and I'd be open to work on that if there is a clear winner. For me (3) is a decent solution for the problem.\n>\n\n From the discussion above it is not very clear that adding maintenance\ncosts in this area is worth it even though that can give better\nresults as far as this feature is concerned.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 30 Jan 2023 18:46:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi all,\n\nThanks for the feedback!\n\nI agree that it won't be a very convenient option for the user but how\n> about along with asking for an index from the user (when the user\n> didn't provide an index), we also allow to make use of any unique\n> index over a subset of the transmitted columns,\n\n\nTbh, I cannot follow why you would use REPLICA IDENTITY FULL if you can\nalready\ncreate a unique index? Aren't you supposed to use REPLICA IDENTITY .. USING\nINDEX\nin that case (if not simply pkey)?\n\nThat seems like a potential expansion of this patch, but I don't consider\nit as essential. Given it\nis hard to get even small commits in, I'd rather wait to see what you think\nbefore doing such\na change.\n\n\n> and if there's more\n> than one candidate index pick any one. Additionally, we can allow\n> disabling the use of an index scan for this particular case. If we are\n> too worried about API change for allowing users to specify the index\n> then we can do that later or as a separate patch.\n>\n>\nOn v23, I dropped the planner support for picking the index. Instead, it\nsimply\niterates over the indexes and picks the first one that is suitable.\n\nI'm currently thinking on how to enable users to override this decision.\nOne option I'm leaning towards is to add a syntax like the following:\n\n*ALTER SUBSCRIPTION .. ALTER TABLE ... SET INDEX ...*\n\nThough, that should probably be a seperate patch. I'm going to work\non that, but still wanted to share v23 given picking the index sounds\ncomplementary, not strictly required at this point.\n\nThanks,\nOnder",
"msg_date": "Thu, 2 Feb 2023 11:33:36 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Thu, Feb 2, 2023 at 2:03 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n>>\n>> and if there's more\n>> than one candidate index pick any one. Additionally, we can allow\n>> disabling the use of an index scan for this particular case. If we are\n>> too worried about API change for allowing users to specify the index\n>> then we can do that later or as a separate patch.\n>>\n>\n> On v23, I dropped the planner support for picking the index. Instead, it simply\n> iterates over the indexes and picks the first one that is suitable.\n>\n> I'm currently thinking on how to enable users to override this decision.\n> One option I'm leaning towards is to add a syntax like the following:\n>\n> ALTER SUBSCRIPTION .. ALTER TABLE ... SET INDEX ...\n>\n> Though, that should probably be a seperate patch. I'm going to work\n> on that, but still wanted to share v23 given picking the index sounds\n> complementary, not strictly required at this point.\n>\n\nI agree that it could be a separate patch. However, do you think we\nneed some way to disable picking the index scan? This is to avoid\ncases where sequence scan could be better or do we think there won't\nexist such a case?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 4 Feb 2023 16:54:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Thu, Feb 2, 2023 4:34 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\r\n>\r\n>>\r\n>> and if there's more\r\n>> than one candidate index pick any one. Additionally, we can allow\r\n>> disabling the use of an index scan for this particular case. If we are\r\n>> too worried about API change for allowing users to specify the index\r\n>> then we can do that later or as a separate patch.\r\n>>\r\n>\r\n> On v23, I dropped the planner support for picking the index. Instead, it simply\r\n> iterates over the indexes and picks the first one that is suitable.\r\n>\r\n> I'm currently thinking on how to enable users to override this decision.\r\n> One option I'm leaning towards is to add a syntax like the following:\r\n>\r\n> ALTER SUBSCRIPTION .. ALTER TABLE ... SET INDEX ...\r\n>\r\n> Though, that should probably be a seperate patch. I'm going to work\r\n> on that, but still wanted to share v23 given picking the index sounds\r\n> complementary, not strictly required at this point.\r\n>\r\n\r\nThanks for your patch. Here are some comments.\r\n\r\n1.\r\nI noticed that get_usable_indexoid() is called in apply_handle_update_internal()\r\nand apply_handle_delete_internal() to get the usable index. Could usableIndexOid\r\nbe a parameter of these two functions? Because we have got the\r\nLogicalRepRelMapEntry when calling them and if we do so, we can get\r\nusableIndexOid without get_usable_indexoid(). Otherwise for partitioned tables,\r\nlogicalrep_partition_open() is called in get_usable_indexoid() and searching\r\nthe entry via hash_search() will increase cost.\r\n\r\n2.\r\n+\t\t\t * This attribute is an expression, and\r\n+\t\t\t * SuitableIndexPathsForRepIdentFull() was called earlier when the\r\n+\t\t\t * index for subscriber was selected. 
There, the indexes\r\n+\t\t\t * comprising *only* expressions have already been eliminated.\r\n\r\nThe comment looks need to be updated:\r\nSuitableIndexPathsForRepIdentFull\r\n->\r\nFindUsableIndexForReplicaIdentityFull\r\n\r\n3.\r\n\r\n \t/* Build scankey for every attribute in the index. */\r\n-\tfor (attoff = 0; attoff < IndexRelationGetNumberOfKeyAttributes(idxrel); attoff++)\r\n+\tfor (index_attoff = 0; index_attoff < IndexRelationGetNumberOfKeyAttributes(idxrel);\r\n+\t\t index_attoff++)\r\n \t{\r\n\r\nShould the comment be changed? Because we skip the attributes that are expressions.\r\n\r\n4.\r\n+\t\t\tAssert(RelationGetReplicaIndex(rel) != RelationGetRelid(idxrel) &&\r\n+\t\t\t\t RelationGetPrimaryKeyIndex(rel) != RelationGetRelid(idxrel));\r\n\r\nMaybe we can call the new function idxIsRelationIdentityOrPK()?\r\n\r\nRegards,\r\nShi Yu\r\n",
"msg_date": "Mon, 13 Feb 2023 11:00:31 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Mon, Feb 13, 2023 7:01 PM shiy.fnst@fujitsu.com <shiy.fnst@fujitsu.com> wrote:\r\n> \r\n> On Thu, Feb 2, 2023 4:34 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\r\n> >\r\n> >>\r\n> >> and if there's more\r\n> >> than one candidate index pick any one. Additionally, we can allow\r\n> >> disabling the use of an index scan for this particular case. If we are\r\n> >> too worried about API change for allowing users to specify the index\r\n> >> then we can do that later or as a separate patch.\r\n> >>\r\n> >\r\n> > On v23, I dropped the planner support for picking the index. Instead, it simply\r\n> > iterates over the indexes and picks the first one that is suitable.\r\n> >\r\n> > I'm currently thinking on how to enable users to override this decision.\r\n> > One option I'm leaning towards is to add a syntax like the following:\r\n> >\r\n> > ALTER SUBSCRIPTION .. ALTER TABLE ... SET INDEX ...\r\n> >\r\n> > Though, that should probably be a seperate patch. I'm going to work\r\n> > on that, but still wanted to share v23 given picking the index sounds\r\n> > complementary, not strictly required at this point.\r\n> >\r\n> \r\n> Thanks for your patch. Here are some comments.\r\n> \r\n\r\nHi,\r\n\r\nHere are some comments on the test cases.\r\n\r\n1. 
in test case \"SUBSCRIPTION RE-CALCULATES INDEX AFTER CREATE/DROP INDEX\"\r\n+# now, ingest more data and create index on column y which has higher cardinality\r\n+# so that the future commands use the index on column y\r\n+$node_publisher->safe_psql('postgres',\r\n+\t\"INSERT INTO test_replica_id_full SELECT 50, i FROM generate_series(0,3100)i;\");\r\n+$node_subscriber->safe_psql('postgres',\r\n+\t\"CREATE INDEX test_replica_id_full_idy ON test_replica_id_full(y)\");\r\n\r\nWe don't pick the cheapest index in the current patch, so should we modify this\r\npart of the test?\r\n\r\nBTW, the following comment in FindLogicalRepUsableIndex() need to be changed,\r\ntoo.\r\n\r\n+\t\t * We are looking for one more opportunity for using an index. If\r\n+\t\t * there are any indexes defined on the local relation, try to pick\r\n+\t\t * the cheapest index.\r\n\r\n\r\n2. Is there any reasons why we need the test case \"SUBSCRIPTION USES INDEX WITH\r\nDROPPED COLUMNS\"? Has there been a problem related to dropped columns before?\r\n\r\n3. in test case \"SUBSCRIPTION USES INDEX ON PARTITIONED TABLES\"\r\n+# deletes rows and moves between partitions\r\n+$node_publisher->safe_psql('postgres',\r\n+\t\"DELETE FROM users_table_part WHERE user_id = 1 and value_1 = 1;\");\r\n+$node_publisher->safe_psql('postgres',\r\n+\t\"DELETE FROM users_table_part WHERE user_id = 12 and value_1 = 12;\");\r\n\r\n\"moves between partitions\" in the comment seems wrong.\r\n\r\n4. 
in test case \"SUBSCRIPTION DOES NOT USE INDEXES WITH ONLY EXPRESSIONS\"\r\n+# update 2 rows\r\n+$node_publisher->safe_psql('postgres',\r\n+\t\"UPDATE people SET firstname = 'Nan' WHERE firstname = 'first_name_1';\");\r\n+$node_publisher->safe_psql('postgres',\r\n+\t\"UPDATE people SET firstname = 'Nan' WHERE firstname = 'first_name_2' AND lastname = 'last_name_2';\");\r\n+\r\n+# make sure the index is not used on the subscriber\r\n+$result = $node_subscriber->safe_psql('postgres',\r\n+\t\"select idx_scan from pg_stat_all_indexes where indexrelname = 'people_names'\");\r\n+is($result, qq(0), 'ensure subscriber tap_sub_rep_full updates two rows via seq. scan with index on expressions');\r\n+\r\n\r\nI think it would be better to call wait_for_catchup() before the check because\r\nwe want to check the index is NOT used. Otherwise the check may pass because the\r\nrows have not yet been updated on subscriber.\r\n\r\n5. in test case \"SUBSCRIPTION BEHAVIOR WITH ENABLE_INDEXSCAN\"\r\n+# show that index is not used even when enable_indexscan=false\r\n+$result = $node_subscriber->safe_psql('postgres',\r\n+\t\"select idx_scan from pg_stat_all_indexes where indexrelname = 'test_replica_id_full_idx'\");\r\n+is($result, qq(0), 'ensure subscriber has not used index with enable_indexscan=false');\r\n\r\nShould we remove the word \"even\" in the comment?\r\n\r\n6. \r\nIn each test case we re-create publications, subscriptions, and tables. Could we\r\ncreate only one publication and one subscription at the beginning, and use them\r\nin all test cases? I think this can save some time running the test file.\r\n\r\nRegards,\r\nShi Yu\r\n",
"msg_date": "Tue, 14 Feb 2023 09:35:58 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Here are some review comments for patch v23.\n\n======\nGeneral\n\n1.\nIIUC the previous logic for checking \"cost\" comparisons and selecting\nthe \"cheapest\" strategy is no longer present in the latest patch.\n\nIn that case, I think there are some leftover stale comments that need\nchanging. For example,\n\n1a. Commit message:\n\"let the planner sub-modules compare the costs of index versus\nsequential scan and choose the cheapest.\"\n\n~\n\n1b. Commit message:\n\"Finally, pick the cheapest `Path` among.\"\n\n~\n\n1c. FindLogicalRepUsableIndex function:\n+ * We are looking for one more opportunity for using an index. If\n+ * there are any indexes defined on the local relation, try to pick\n+ * the cheapest index.\n\n======\ndoc/src/sgml/logical-replication.sgml\n\nIf replica identity \"full\" is used, indexes can be used on the\nsubscriber side for seaching the rows. The index should be btree,\nnon-partial and have at least one column reference (e.g., should not\nconsists of only expressions). If there are no suitable indexes, the\nsearch on the subscriber side is very inefficient and should only be\nused as a fallback if no other solution is possible\n\n2a.\nFixed typo \"seaching\", and minor rewording.\n\nSUGGESTION\nWhen replica identity \"full\" is specified, indexes can be used on the\nsubscriber side for searching the rows. These indexes should be btree,\nnon-partial and have at least one column reference (e.g., should not\nconsist of only expressions). If there are no such suitable indexes,\nthe search on the subscriber side can be very inefficient, therefore\nreplica identity \"full\" should only be used as a fallback if no other\nsolution is possible.\n\n~\n\n2b.\nI know you are just following some existing text here, but IMO this\nshould probably refer to replica identity <literal>FULL</literal>\ninstead of \"full\".\n\n======\nsrc/backend/executor/execReplication.c\n\n3. 
IdxIsRelationIdentityOrPK\n\n+/*\n+ * Given a relation and OID of an index, returns true if\n+ * the index is relation's primary key's index or\n+ * relaton's replica identity index.\n+ *\n+ * Returns false otherwise.\n+ */\n+static bool\n+IdxIsRelationIdentityOrPK(Relation rel, Oid idxoid)\n+{\n+ Assert(OidIsValid(idxoid));\n+\n+ if (RelationGetReplicaIndex(rel) == idxoid ||\n+ RelationGetPrimaryKeyIndex(rel) == idxoid)\n+ return true;\n+\n+ return false;\n\n3a.\nComment typo \"relaton\"\n\n~\n\n3b.\nCode could be written using single like below if you wish (but see #2c)\n\nreturn RelationGetReplicaIndex(rel) == idxoid ||\nRelationGetPrimaryKeyIndex(rel) == idxoid;\n\n~\n\n3c.\nActually, RelationGetReplicaIndex and RelationGetPrimaryKeyIndex\nimplementations are very similar so it seemed inefficient to be\ncalling both of them. IMO it might be better to just make a new\nrelcache function IdxIsRelationIdentityOrPK(Relation rel, Oid idxoid).\nThis implementation will be similar to those others, but now you need\nonly to call the workhorse RelationGetIndexList *one* time.\n\n~~~\n\n4. RelationFindReplTupleByIndex\n\n bool found;\n+ TypeCacheEntry **eq = NULL; /* only used when the index is not repl. ident\n+ * or pkey */\n+ bool idxIsRelationIdentityOrPK;\n\n\nIf you change the comment to say \"RI\" instead of \"repl. Ident\" then it\ncan all fit on one line, which would be an improvement.\n\n\n======\nsrc/backend/replication/logical/relation.c\n\n5.\n #include \"replication/logicalrelation.h\"\n #include \"replication/worker_internal.h\"\n+#include \"optimizer/cost.h\"\n #include \"utils/inval.h\"\n\nCan that #include be added in alphabetical order like the others or not?\n\n~~~\n\n6. logicalrep_partition_open\n\n+ /*\n+ * Finding a usable index is an infrequent task. 
It occurs when an\n+ * operation is first performed on the relation, or after invalidation of\n+ * of the relation cache entry (such as ANALYZE or CREATE/DROP index on\n+ * the relation).\n+ */\n+ entry->usableIndexOid = FindLogicalRepUsableIndex(partrel, remoterel);\n+\n\nTypo \"of of the relation\"\n\n~~~\n\n7. FindUsableIndexForReplicaIdentityFull\n\n+static Oid\n+FindUsableIndexForReplicaIdentityFull(Relation localrel)\n+{\n+ MemoryContext usableIndexContext;\n+ MemoryContext oldctx;\n+ Oid usableIndex;\n+ Oid idxoid;\n+ List *indexlist;\n+ ListCell *lc;\n+ Relation indexRelation;\n+ IndexInfo *indexInfo;\n+ bool is_btree;\n+ bool is_partial;\n+ bool is_only_on_expression;\n\nIt looks like some of these variables are only used within the scope\nof the foreach loop, so I think that is where they should be declared.\n\n~~~\n\n8.\n+ usableIndex = InvalidOid;\n\nMight as well do that assignment at the declaration.\n\n~~~\n\n9. FindLogicalRepUsableIndex\n\n+ /*\n+ * Simple case, we already have a primary key or a replica identity index.\n+ *\n+ * Note that we do not use index scans below when enable_indexscan is\n+ * false. Allowing primary key or replica identity even when index scan is\n+ * disabled is the legacy behaviour. 
So we hesitate to move the below\n+ * enable_indexscan check to be done earlier in this function.\n+ */\n+ idxoid = GetRelationIdentityOrPK(localrel);\n+ if (OidIsValid(idxoid))\n+ return idxoid;\n+\n+ /* If index scans are disabled, use a sequential scan */\n+ if (!enable_indexscan)\n+ return InvalidOid;\n\n~\n\nIMO that \"Note\" really belongs with the if (!enable)indexscan) more like this:\n\nSUGGESTION\n/*\n* Simple case, we already have a primary key or a replica identity index.\n*/\nidxoid = GetRelationIdentityOrPK(localrel);\nif (OidIsValid(idxoid))\nreturn idxoid;\n\n/*\n* If index scans are disabled, use a sequential scan.\n*\n* Note we hesitate to move this check to earlier in this function\n* because allowing primary key or replica identity even when index scan\n* is disabled is the legacy behaviour.\n*/\nif (!enable_indexscan)\nreturn InvalidOid;\n\n======\nsrc/backend/replication/logical/worker.c\n\n10. get_usable_indexoid\n\n+/*\n+ * Decide whether we can pick an index for the relinfo (e.g., the relation)\n+ * we're actually deleting/updating from. If it is a child partition of\n+ * edata->targetRelInfo, find the index on the partition.\n+ *\n+ * Note that if the corresponding relmapentry has invalid usableIndexOid,\n+ * the function returns InvalidOid.\n+ */\n\n\"(e.g., the relation)\" --> \"(i.e. the relation)\"\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 15 Feb 2023 13:16:31 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Sat, Feb 4, 2023 7:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Thu, Feb 2, 2023 at 2:03 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\r\n> >\r\n> >>\r\n> >> and if there's more\r\n> >> than one candidate index pick any one. Additionally, we can allow\r\n> >> disabling the use of an index scan for this particular case. If we are\r\n> >> too worried about API change for allowing users to specify the index\r\n> >> then we can do that later or as a separate patch.\r\n> >>\r\n> >\r\n> > On v23, I dropped the planner support for picking the index. Instead, it simply\r\n> > iterates over the indexes and picks the first one that is suitable.\r\n> >\r\n> > I'm currently thinking on how to enable users to override this decision.\r\n> > One option I'm leaning towards is to add a syntax like the following:\r\n> >\r\n> > ALTER SUBSCRIPTION .. ALTER TABLE ... SET INDEX ...\r\n> >\r\n> > Though, that should probably be a seperate patch. I'm going to work\r\n> > on that, but still wanted to share v23 given picking the index sounds\r\n> > complementary, not strictly required at this point.\r\n> >\r\n> \r\n> I agree that it could be a separate patch. However, do you think we\r\n> need some way to disable picking the index scan? This is to avoid\r\n> cases where sequence scan could be better or do we think there won't\r\n> exist such a case?\r\n> \r\n\r\nI think such a case exists. 
I tried the following cases based on v23 patch.\r\n\r\n# Step 1.\r\nCreate publication, subscription and tables.\r\n-- on publisher\r\ncreate table tbl (a int);\r\nalter table tbl replica identity full;\r\ncreate publication pub for table tbl;\r\n\r\n-- on subscriber\r\ncreate table tbl (a int);\r\ncreate index idx_a on tbl(a);\r\ncreate subscription sub connection 'dbname=postgres port=5432' publication pub;\r\n\r\n# Step 2.\r\nSetup synchronous replication.\r\n\r\n# Step 3.\r\nExecute SQL query on publisher.\r\n\r\n-- case 1 (All values are duplicated)\r\ntruncate tbl;\r\ninsert into tbl select 1 from generate_series(0,10000)i;\r\nupdate tbl set a=a+1;\r\n\r\n-- case 2\r\ntruncate tbl;\r\ninsert into tbl select i%3 from generate_series(0,10000)i;\r\nupdate tbl set a=a+1;\r\n\r\n-- case 3\r\ntruncate tbl;\r\ninsert into tbl select i%5 from generate_series(0,10000)i;\r\nupdate tbl set a=a+1;\r\n\r\n-- case 4\r\ntruncate tbl;\r\ninsert into tbl select i%10 from generate_series(0,10000)i;\r\nupdate tbl set a=a+1;\r\n\r\n-- case 5\r\ntruncate tbl;\r\ninsert into tbl select i%100 from generate_series(0,10000)i;\r\nupdate tbl set a=a+1;\r\n\r\n-- case 6\r\ntruncate tbl;\r\ninsert into tbl select i%1000 from generate_series(0,10000)i;\r\nupdate tbl set a=a+1;\r\n\r\n-- case 7 (No duplicate value)\r\ntruncate tbl;\r\ninsert into tbl select i from generate_series(0,10000)i;\r\nupdate tbl set a=a+1;\r\n\r\n# Result\r\nThe time executing update (the average of 3 runs is taken, the unit is\r\nmilliseconds):\r\n\r\n+--------+---------+---------+\r\n| | patched | master |\r\n+--------+---------+---------+\r\n| case 1 | 3933.68 | 1298.32 |\r\n| case 2 | 1803.46 | 1294.42 |\r\n| case 3 | 1380.82 | 1299.90 |\r\n| case 4 | 1042.60 | 1300.20 |\r\n| case 5 | 691.69 | 1297.51 |\r\n| case 6 | 578.50 | 1300.69 |\r\n| case 7 | 566.45 | 1302.17 |\r\n+--------+---------+---------+\r\n\r\nIn case 1~3, there's an overhead after applying the patch. 
In other cases, the\r\npatch improved the performance. The more duplicate values there are, the\r\ngreater the overhead after applying the patch.\r\n\r\nRegards,\r\nShi Yu\r\n\r\n",
"msg_date": "Wed, 15 Feb 2023 03:53:34 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Wed, Feb 15, 2023 at 9:23 AM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Sat, Feb 4, 2023 7:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Feb 2, 2023 at 2:03 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n> > >\n> > > On v23, I dropped the planner support for picking the index. Instead, it simply\n> > > iterates over the indexes and picks the first one that is suitable.\n> > >\n> > > I'm currently thinking on how to enable users to override this decision.\n> > > One option I'm leaning towards is to add a syntax like the following:\n> > >\n> > > ALTER SUBSCRIPTION .. ALTER TABLE ... SET INDEX ...\n> > >\n> > > Though, that should probably be a seperate patch. I'm going to work\n> > > on that, but still wanted to share v23 given picking the index sounds\n> > > complementary, not strictly required at this point.\n> > >\n> >\n> > I agree that it could be a separate patch. However, do you think we\n> > need some way to disable picking the index scan? This is to avoid\n> > cases where sequence scan could be better or do we think there won't\n> > exist such a case?\n> >\n>\n> I think such a case exists. I tried the following cases based on v23 patch.\n>\n...\n> # Result\n> The time executing update (the average of 3 runs is taken, the unit is\n> milliseconds):\n>\n> +--------+---------+---------+\n> | | patched | master |\n> +--------+---------+---------+\n> | case 1 | 3933.68 | 1298.32 |\n> | case 2 | 1803.46 | 1294.42 |\n> | case 3 | 1380.82 | 1299.90 |\n> | case 4 | 1042.60 | 1300.20 |\n> | case 5 | 691.69 | 1297.51 |\n> | case 6 | 578.50 | 1300.69 |\n> | case 7 | 566.45 | 1302.17 |\n> +--------+---------+---------+\n>\n> In case 1~3, there's an overhead after applying the patch. In other cases, the\n> patch improved the performance. 
As more duplicate values, the greater the\n> overhead after applying the patch.\n>\n\nI think this overhead is mostly due to the need to perform\ntuples_equal multiple times for duplicate values. I don't know if\nthere is any simple way to avoid this without using the planner stuff\nas was used in the previous approach. So, this brings us to the\nquestion of whether just providing a way to disable/enable the use of\nindex scan for such cases is sufficient or if we need any other way.\n\nTom, Andres, or others, do you have any suggestions on how to move\nforward with this patch?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 15 Feb 2023 10:07:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "FYI, I accidentally left this (v23) patch's TAP test\nt/032_subscribe_use_index.pl still lurking even after removing all\nother parts of this patch.\n\nIn this scenario, the t/032 test gets stuck (build of latest HEAD).\n\nIIUC the patch is only meant to affect performance, so I expected this\n032 test to work regardless of whether the rest of the patch is\napplied.\n\nAnyway, it hangs every time for me. I didn't dig into the\ncause, but if it requires patched code for this new test to pass, I\nthought it indicated something wrong either with the test or something\nmore sinister the new test has exposed. Maybe I am mistaken.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 17 Feb 2023 17:57:28 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 5:57 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> FYI, I accidentally left this (v23) patch's TAP test\n> t/032_subscribe_use_index.pl still lurking even after removing all\n> other parts of this patch.\n>\n> In this scenario, the t/032 test gets stuck (build of latest HEAD)\n>\n> IIUC the patch is only meant to affect performance, so I expected this\n> 032 test to work regardless of whether the rest of the patch is\n> applied.\n>\n> Anyway, it hangs every time for me. I didn't dig looking for the\n> cause, but if it requires patched code for this new test to pass, I\n> thought it indicates something wrong either with the test or something\n> more sinister the new test has exposed. Maybe I am mistaken\n>\n\nSorry, probably the above was a false alarm. After a long time\n(minutes) the stuck test did eventually timeout with:\n\nt/032_subscribe_use_index.pl ....... # poll_query_until timed out\nexecuting this query:\n# select (idx_scan = 1) from pg_stat_all_indexes where indexrelname =\n'test_replica_id_full_idx';\n# expecting this output:\n# t\n# last actual query output:\n# f\n# with stderr:\nt/032_subscribe_use_index.pl ....... Dubious, test returned 29 (wstat\n7424, 0x1d00)\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Sun, 19 Feb 2023 10:38:56 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Amit, all\n\nAmit Kapila <amit.kapila16@gmail.com>, 15 Şub 2023 Çar, 07:37 tarihinde\nşunu yazdı:\n\n> On Wed, Feb 15, 2023 at 9:23 AM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n> >\n> > On Sat, Feb 4, 2023 7:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Feb 2, 2023 at 2:03 PM Önder Kalacı <onderkalaci@gmail.com>\n> wrote:\n> > > >\n> > > > On v23, I dropped the planner support for picking the index.\n> Instead, it simply\n> > > > iterates over the indexes and picks the first one that is suitable.\n> > > >\n> > > > I'm currently thinking on how to enable users to override this\n> decision.\n> > > > One option I'm leaning towards is to add a syntax like the following:\n> > > >\n> > > > ALTER SUBSCRIPTION .. ALTER TABLE ... SET INDEX ...\n> > > >\n> > > > Though, that should probably be a seperate patch. I'm going to work\n> > > > on that, but still wanted to share v23 given picking the index sounds\n> > > > complementary, not strictly required at this point.\n> > > >\n> > >\n> > > I agree that it could be a separate patch. However, do you think we\n> > > need some way to disable picking the index scan? This is to avoid\n> > > cases where sequence scan could be better or do we think there won't\n> > > exist such a case?\n> > >\n> >\n> > I think such a case exists. I tried the following cases based on v23\n> patch.\n> >\n> ...\n> > # Result\n> > The time executing update (the average of 3 runs is taken, the unit is\n> > milliseconds):\n> >\n> > +--------+---------+---------+\n> > | | patched | master |\n> > +--------+---------+---------+\n> > | case 1 | 3933.68 | 1298.32 |\n> > | case 2 | 1803.46 | 1294.42 |\n> > | case 3 | 1380.82 | 1299.90 |\n> > | case 4 | 1042.60 | 1300.20 |\n> > | case 5 | 691.69 | 1297.51 |\n> > | case 6 | 578.50 | 1300.69 |\n> > | case 7 | 566.45 | 1302.17 |\n> > +--------+---------+---------+\n> >\n> > In case 1~3, there's an overhead after applying the patch. 
In other\n> cases, the\n> > patch improved the performance. As more duplicate values, the greater the\n> > overhead after applying the patch.\n> >\n>\n> I think this overhead seems to be mostly due to the need to perform\n> tuples_equal multiple times for duplicate values. I don't know if\n> there is any simple way to avoid this without using the planner stuff\n> as was used in the previous approach. So, this brings us to the\n> question of whether just providing a way to disable/enable the use of\n> index scan for such cases is sufficient or if we need any other way.\n>\n> Tom, Andres, or others, do you have any suggestions on how to move\n> forward with this patch?\n>\n>\nThanks for the feedback and testing. Due to personal circumstances,\nI could not reply to the thread in the last 2 weeks, but I'll be more active\ngoing forward.\n\nI also agree that we should have a way to control the behavior.\n\nI created another patch (v24_0001_optionally_disable_index.patch) which can\nbe applied\non top of v23_0001_use_index_on_subs_when_pub_rep_ident_full.patch.\n\nThe new patch adds a new *subscription_parameter* for both CREATE and ALTER\nsubscription\nnamed: *enable_index_scan*. The setting is valid only when REPLICA IDENTITY\nis full.\n\nWhat do you think about such a patch to control the behavior? It does not\ngive a per-relation\nlevel of control, but it is still useful for many cases.\n\n(Note that I'll be working on the other feedback in the email thread,\nwanted to send this earlier\nto hear some early thoughts on v24_0001_optionally_disable_index.patch).",
"msg_date": "Tue, 21 Feb 2023 17:25:10 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Peter, Amit, Shi Yu and all,\n\n(I'm replying multiple reviews in this single reply, hope that's fine)\n\n\nOn Fri, Feb 17, 2023 at 5:57 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> FYI, I accidentally left this (v23) patch's TAP test\n> t/032_subscribe_use_index.pl still lurking even after removing all\n> other parts of this patch.\n>\n> In this scenario, the t/032 test gets stuck (build of latest HEAD)\n>\n> IIUC the patch is only meant to affect performance, so I expected this\n> 032 test to work regardless of whether the rest of the patch is\n> applied.\n>\n> Anyway, it hangs every time for me. I didn't dig looking for the\n> cause, but if it requires patched code for this new test to pass, I\n> thought it indicates something wrong either with the test or something\n> more sinister the new test has exposed. Maybe I am mistaken\n>\n\n>\n> Sorry, probably the above was a false alarm. After a long time\n> (minutes) the stuck test did eventually timeout with:\n> t/032_subscribe_use_index.pl ....... # poll_query_until timed out\n> executing this query:\n> # select (idx_scan = 1) from pg_stat_all_indexes where indexrelname =\n> 'test_replica_id_full_idx';\n> # expecting this output:\n> # t\n> # last actual query output:\n> # f\n> # with stderr:\n> t/032_subscribe_use_index.pl ....... Dubious, test returned 29 (wstat\n> 7424, 0x1d00)\n>\n>\nI can tell that this is the expected behavior. Majority of the tests do the\nfollowing:\n- update/delete row on the source\n- check pg_stat_all_indexes on the target\n\nSo, given that HEAD does not use any indexes, it is expected that the tests\nwould\nwait until poll_query_until timeout. That's why, I do not see/expect any\nproblems on\nHEAD. I run the test file by removing the poll_query_until for the index\nscan counts,\nand all finished properly.\n\n\nI think such a case exists. 
I tried the following cases based on v23 patch.\n\n\nAs I noted in the earlier reply, I created another patch, which optionally\ngives the ability to\ndisable index scans on the subscription level for the replica identity\nfull case.\n\nThat is the second patch attached in this mail,\nnamed v25_0001_optionally_disable_index.patch.\n\nHere are some review comments for patch v23.\n\n\nThanks Peter, see the following reply:\n\n======\n> General\n\n\n> 1.\n> IIUC the previous logic for checking \"cost\" comparisons and selecting\n> the \"cheapest\" strategy is no longer present in the latest patch.\n\nIn that case, I think there are some leftover stale comments that need\n\nchanging. For example,\n\n1a. Commit message:\n> \"let the planner sub-modules compare the costs of index versus\n> sequential scan and choose the cheapest.\"\n> 1b. Commit message:\n> \"Finally, pick the cheapest `Path` among.\"\n> 1c. FindLogicalRepUsableIndex function:\n> + * We are looking for one more opportunity for using an index. If\n> + * there are any indexes defined on the local relation, try to pick\n> + * the cheapest index.\n\n\nMakes sense, the commit message and function messages should reflect\nthe new logic. I went over the patch with a detailed look for this.\n\n\n======\n> doc/src/sgml/logical-replication.sgml\n> If replica identity \"full\" is used, indexes can be used on the\n> subscriber side for seaching the rows. The index should be btree,\n> non-partial and have at least one column reference (e.g., should not\n> consists of only expressions). If there are no suitable indexes, the\n> search on the subscriber side is very inefficient and should only be\n> used as a fallback if no other solution is possible\n\n\n2a.\n> Fixed typo \"seaching\", and minor rewording.\n\nSUGGESTION\n> When replica identity \"full\" is specified, indexes can be used on the\n> subscriber side for searching the rows. 
These indexes should be btree,\n> non-partial and have at least one column reference (e.g., should not\n> consist of only expressions). If there are no such suitable indexes,\n> the search on the subscriber side can be very inefficient, therefore\n> replica identity \"full\" should only be used as a fallback if no other\n> solution is possible.\n\n\nI like your suggestion, updated\n\n2b.\n> I know you are just following some existing text here, but IMO this\n> should probably refer to replica identity <literal>FULL</literal>\n> instead of \"full\".\n\n\nI guess that works, I don't have any preference / knowledge on this.\n\n>\n> ======\n> src/backend/executor/execReplication.c\n> 3. IdxIsRelationIdentityOrPK\n> +/*\n> + * Given a relation and OID of an index, returns true if\n> + * the index is relation's primary key's index or\n> + * relaton's replica identity index.\n> + *\n> + * Returns false otherwise.\n> + */\n> +static bool\n> +IdxIsRelationIdentityOrPK(Relation rel, Oid idxoid)\n> +{\n> + Assert(OidIsValid(idxoid));\n> +\n> + if (RelationGetReplicaIndex(rel) == idxoid ||\n> + RelationGetPrimaryKeyIndex(rel) == idxoid)\n> + return true;\n> +\n> + return false;\n> 3a.\n> Comment typo \"relaton\"\n\n\nfixed\n\n\n3b.\n> Code could be written using single like below if you wish (but see #2c)\n> return RelationGetReplicaIndex(rel) == idxoid ||\n> RelationGetPrimaryKeyIndex(rel) == idxoid;\n> ~\n> 3c.\n> Actually, RelationGetReplicaIndex and RelationGetPrimaryKeyIndex\n> implementations are very similar so it seemed inefficient to be\n> calling both of them. 
IMO it might be better to just make a new\n> relcache function IdxIsRelationIdentityOrPK(Relation rel, Oid idxoid).\n> This implementation will be similar to those others, but now you need\n> only to call the workhorse RelationGetIndexList *one* time.\n> ~~~\n\n\nRegarding (3c), RelationGetIndexList is only called once, when\n!relation->rd_indexvalid.\nSo, there seems not to be a necessary case for merging two functions to me.\nAlso, I'd rather keep both functions, as I know that some extensions rely\non these functions separately.\n\nRegarding (3b), it makes sense, applied.\n\n\n> 4. RelationFindReplTupleByIndex\n> bool found;\n> + TypeCacheEntry **eq = NULL; /* only used when the index is not repl.\n> ident\n> + * or pkey */\n> + bool idxIsRelationIdentityOrPK;\n> If you change the comment to say \"RI\" instead of \"repl. Ident\" then it\n> can all fit on one line, which would be an improvement.\n\n\nDone, also changed pkey to PK as this seems to be used throughout the code.\n\n\n======\n> src/backend/replication/logical/relation.c\n> 5.\n> #include \"replication/logicalrelation.h\"\n> #include \"replication/worker_internal.h\"\n> +#include \"optimizer/cost.h\"\n> #include \"utils/inval.h\"\n> Can that #include be added in alphabetical order like the others or not?\n\n\nSure, it seems like I intended to do it, but made a small mistake :)\n\n\n6. logicalrep_partition_open\n> + /*\n> + * Finding a usable index is an infrequent task. It occurs when an\n> + * operation is first performed on the relation, or after invalidation of\n> + * of the relation cache entry (such as ANALYZE or CREATE/DROP index on\n> + * the relation).\n> + */\n> + entry->usableIndexOid = FindLogicalRepUsableIndex(partrel, remoterel);\n> +\n> Typo \"of of the relation\"\n\n\nFixed\n\n7. 
FindUsableIndexForReplicaIdentityFull\n> +static Oid\n> +FindUsableIndexForReplicaIdentityFull(Relation localrel)\n> +{\n> + MemoryContext usableIndexContext;\n> + MemoryContext oldctx;\n> + Oid usableIndex;\n> + Oid idxoid;\n> + List *indexlist;\n> + ListCell *lc;\n> + Relation indexRelation;\n> + IndexInfo *indexInfo;\n> + bool is_btree;\n> + bool is_partial;\n> + bool is_only_on_expression;\n> It looks like some of these variables are only used within the scope\n> of the foreach loop, so I think that is where they should be declared.\n\n\nmakes sense, done\n\n\n\n8.\n> + usableIndex = InvalidOid;\n> Might as well do that assignment at the declaration.\n\n\ndone\n\n9. FindLogicalRepUsableIndex\n> + /*\n> + * Simple case, we already have a primary key or a replica identity index.\n> + *\n> + * Note that we do not use index scans below when enable_indexscan is\n> + * false. Allowing primary key or replica identity even when index scan is\n> + * disabled is the legacy behaviour. So we hesitate to move the below\n> + * enable_indexscan check to be done earlier in this function.\n> + */\n> + idxoid = GetRelationIdentityOrPK(localrel);\n> + if (OidIsValid(idxoid))\n> + return idxoid;\n> +\n> + /* If index scans are disabled, use a sequential scan */\n> + if (!enable_indexscan)\n> + return InvalidOid;\n> IMO that \"Note\" really belongs with the if (!enable)indexscan) more like\n> this:\n> SUGGESTION\n> /*\n> * Simple case, we already have a primary key or a replica identity index.\n> */\n> idxoid = GetRelationIdentityOrPK(localrel);\n> if (OidIsValid(idxoid))\n> return idxoid;\n> /*\n> * If index scans are disabled, use a sequential scan.\n> *\n> * Note we hesitate to move this check to earlier in this function\n> * because allowing primary key or replica identity even when index scan\n> * is disabled is the legacy behaviour.\n> */\n> if (!enable_indexscan)\n> return InvalidOid;\n\n\nmakes sense, moved\n\n\n> ======\n> src/backend/replication/logical/worker.c\n> 10. 
get_usable_indexoid\n> +/*\n> + * Decide whether we can pick an index for the relinfo (e.g., the\n> relation)\n> + * we're actually deleting/updating from. If it is a child partition of\n> + * edata->targetRelInfo, find the index on the partition.\n> + *\n> + * Note that if the corresponding relmapentry has invalid usableIndexOid,\n> + * the function returns InvalidOid.\n> + */\n> \"(e.g., the relation)\" --> \"(i.e. the relation)\"\n\n\nfixed\n\n\nThanks for your patch. Here are some comments.\n\n\nThanks Shi Yu, see my reply below\n\n1.\n> I noticed that get_usable_indexoid() is called in\n> apply_handle_update_internal()\n> and apply_handle_delete_internal() to get the usable index. Could\n> usableIndexOid\n> be a parameter of these two functions? Because we have got the\n> LogicalRepRelMapEntry when calling them and if we do so, we can get\n> usableIndexOid without get_usable_indexoid(). Otherwise for partitioned\n> tables,\n> logicalrep_partition_open() is called in get_usable_indexoid() and\n> searching\n> the entry via hash_search() will increase cost.\n\n\nI think I cannot easily follow this comment. We call\nlogicalrep_partition_open()\nbecause if an update/delete is on a partitioned table, we should find the\ncorresponding local index on the partition itself. edata->targetRel points\nto the\npartitioned table, and we map it to the partition inside\nget_usable_indexoid().\n\nOverall, I cannot see how we can avoid the call\nto logicalrep_partition_open().\nCan you please explain a little further?\n\nNote that logicalrep_partition_open() is cheap for the cases where there is\nno\ninvalidations (which is probably most of the time)\n\n\n2.\n> + * This attribute is an expression, and\n> + * SuitableIndexPathsForRepIdentFull() was called\n> earlier when the\n> + * index for subscriber was selected. There, the\n> indexes\n> + * comprising *only* expressions have already been\n> eliminated.\n\nThe comment looks need to be updated:\n> SuitableIndexPathsForRepIdentFull\n> ->\n> FindUsableIndexForReplicaIdentityFull\n\n\nYes, updated.\n\n3.\n> /* Build scankey for every attribute in the index. */\n> - for (attoff = 0; attoff <\n> IndexRelationGetNumberOfKeyAttributes(idxrel); attoff++)\n> + for (index_attoff = 0; index_attoff <\n> IndexRelationGetNumberOfKeyAttributes(idxrel);\n> + index_attoff++)\n> {\n> Should the comment be changed? Because we skip the attributes that are\n> expressions.\n\n\nmakes sense\n\n\n4.\n> + Assert(RelationGetReplicaIndex(rel) !=\n> RelationGetRelid(idxrel) &&\n> + RelationGetPrimaryKeyIndex(rel) !=\n> RelationGetRelid(idxrel));\n> Maybe we can call the new function idxIsRelationIdentityOrPK()?\n\n\nMakes sense, becomes easier to understand.\n\n\n\nHere are some comments on the test cases.\n\n\n1. in test case \"SUBSCRIPTION RE-CALCULATES INDEX AFTER CREATE/DROP INDEX\"\n> +# now, ingest more data and create index on column y which has higher\n> cardinality\n> +# so that the future commands use the index on column y\n> +$node_publisher->safe_psql('postgres',\n> + \"INSERT INTO test_replica_id_full SELECT 50, i FROM\n> generate_series(0,3100)i;\");\n> +$node_subscriber->safe_psql('postgres',\n> + \"CREATE INDEX test_replica_id_full_idy ON\n> test_replica_id_full(y)\");\n> We don't pick the cheapest index in the current patch, so should we modify\n> this\n> part of the test?\n\n\nI think I already changed that test. I kept the test so that we still make\nsure that even if we\ncreate/drop indexes, we do not mess anything. I agree that the wording /\ncomments were\nstale.\n\nCan you check if it looks better now?\n\n\n> BTW, the following comment in FindLogicalRepUsableIndex() need to be\n> changed,\n> too.\n> + * We are looking for one more opportunity for using an\n> index. If\n> + * there are any indexes defined on the local relation,\n> try to pick\n> + * the cheapest index.\n\n\n\nmakes sense, Peter also had a similar comment, fixed.\n\n\n2. Is there any reasons why we need the test case \"SUBSCRIPTION USES INDEX\n> WITH\n> DROPPED COLUMNS\"? Has there been a problem related to dropped columns\n> before?\n\n\nNot really, but dropped columns are tricky in general. As far as I know,\nthose columns\ncontinue to exist in pg_attribute, which might cause some edge cases. So, I\nwanted to\nhave coverage for that.\n\n\n> 3. in test case \"SUBSCRIPTION USES INDEX ON PARTITIONED TABLES\"\n> +# deletes rows and moves between partitions\n> +$node_publisher->safe_psql('postgres',\n> + \"DELETE FROM users_table_part WHERE user_id = 1 and value_1 = 1;\");\n> +$node_publisher->safe_psql('postgres',\n> + \"DELETE FROM users_table_part WHERE user_id = 12 and value_1 =\n> 12;\");\n> \"moves between partitions\" in the comment seems wrong.\n\n\nYes, probably copy & paste error from the UPDATE test\n\n4. in test case \"SUBSCRIPTION DOES NOT USE INDEXES WITH ONLY EXPRESSIONS\"\n> +# update 2 rows\n> +$node_publisher->safe_psql('postgres',\n> + \"UPDATE people SET firstname = 'Nan' WHERE firstname =\n> 'first_name_1';\");\n> +$node_publisher->safe_psql('postgres',\n> + \"UPDATE people SET firstname = 'Nan' WHERE firstname =\n> 'first_name_2' AND lastname = 'last_name_2';\");\n> +\n> +# make sure the index is not used on the subscriber\n> +$result = $node_subscriber->safe_psql('postgres',\n> + \"select idx_scan from pg_stat_all_indexes where indexrelname =\n> 'people_names'\");\n> +is($result, qq(0), 'ensure subscriber tap_sub_rep_full updates two rows\n> via seq. scan with index on expressions');\n> +\n> I think it would be better to call wait_for_catchup() before the check\n> because\n> we want to check the index is NOT used. Otherwise the check may pass\n> because the\n> rows have not yet been updated on subscriber.\n\n\nthat's right, added\n\n5. in test case \"SUBSCRIPTION BEHAVIOR WITH ENABLE_INDEXSCAN\"\n> +# show that index is not used even when enable_indexscan=false\n> +$result = $node_subscriber->safe_psql('postgres',\n> + \"select idx_scan from pg_stat_all_indexes where indexrelname =\n> 'test_replica_id_full_idx'\");\n> +is($result, qq(0), 'ensure subscriber has not used index with\n> enable_indexscan=false');\n> Should we remove the word \"even\" in the comment?\n\n\ndone\n\n6.\n> In each test case we re-create publications, subscriptions, and tables.\n> Could we\n> create only one publication and one subscription at the beginning, and use\n> them\n> in all test cases? I think this can save some time running the test file.\n\n\nI'd rather keep as-is for (a) simplicity (b) other test files seem to have\nsimilar patterns.\n\nDo you think strongly that we should change the test file? It could make\ndebugging\nthe tests harder as well.\n\n\nTom, Andres, or others, do you have any suggestions on how to move\n> forward with this patch?\n\n\nYes, happy to hear any feedback for the attached patch(es).\n\n\nThanks,\nOnder KALACI\n\nÖnder Kalacı <onderkalaci@gmail.com>, 21 Şub 2023 Sal, 17:25 tarihinde şunu\nyazdı:\n\n> Hi Amit, all\n>\n> Amit Kapila <amit.kapila16@gmail.com>, 15 Şub 2023 Çar, 07:37 tarihinde\n> şunu yazdı:\n>\n>> On Wed, Feb 15, 2023 at 9:23 AM shiy.fnst@fujitsu.com\n>> <shiy.fnst@fujitsu.com> wrote:\n>> >\n>> > On Sat, Feb 4, 2023 7:24 PM Amit Kapila <amit.kapila16@gmail.com>\n>> wrote:\n>> > >\n>> > > On Thu, Feb 2, 2023 at 2:03 PM Önder Kalacı <onderkalaci@gmail.com>\n>> wrote:\n>> > > >\n>> > > > On v23, I dropped the planner support for picking the index.\n>> Instead, it simply\n>> > > > iterates over the indexes and picks the first one that is suitable.\n>> > > >\n>> > > > I'm currently thinking on how to enable users to override this\n>> decision.\n>> > > > One option I'm leaning towards is to add a syntax like the\n>> following:\n>> > > >\n>> > > > ALTER SUBSCRIPTION .. ALTER TABLE ... SET INDEX ...\n>> > > >\n>> > > > Though, that should probably be a seperate patch. I'm going to work\n>> > > > on that, but still wanted to share v23 given picking the index\n>> sounds\n>> > > > complementary, not strictly required at this point.\n>> > > >\n>> > >\n>> > > I agree that it could be a separate patch. However, do you think we\n>> > > need some way to disable picking the index scan? This is to avoid\n>> > > cases where sequence scan could be better or do we think there won't\n>> > > exist such a case?\n>> > >\n>> >\n>> > I think such a case exists. I tried the following cases based on v23\n>> patch.\n>> >\n>> ...\n>> > # Result\n>> > The time executing update (the average of 3 runs is taken, the unit is\n>> > milliseconds):\n>> >\n>> > +--------+---------+---------+\n>> > | | patched | master |\n>> > +--------+---------+---------+\n>> > | case 1 | 3933.68 | 1298.32 |\n>> > | case 2 | 1803.46 | 1294.42 |\n>> > | case 3 | 1380.82 | 1299.90 |\n>> > | case 4 | 1042.60 | 1300.20 |\n>> > | case 5 | 691.69 | 1297.51 |\n>> > | case 6 | 578.50 | 1300.69 |\n>> > | case 7 | 566.45 | 1302.17 |\n>> > +--------+---------+---------+\n>> >\n>> > In case 1~3, there's an overhead after applying the patch. In other\n>> cases, the\n>> > patch improved the performance. As more duplicate values, the greater\n>> the\n>> > overhead after applying the patch.\n>> >\n>>\n>> I think this overhead seems to be mostly due to the need to perform\n>> tuples_equal multiple times for duplicate values. I don't know if\n>> there is any simple way to avoid this without using the planner stuff\n>> as was used in the previous approach. So, this brings us to the\n>> question of whether just providing a way to disable/enable the use of\n>> index scan for such cases is sufficient or if we need any other way.\n>>\n>> Tom, Andres, or others, do you have any suggestions on how to move\n>> forward with this patch?\n>>\n>>\n> Thanks for the feedback and testing. Due to personal circumstances,\n> I could not reply the thread in the last 2 weeks, but I'll be more active\n> going forward.\n>\n> I also agree that we should have a way to control the behavior.\n>\n> I created another patch (v24_0001_optionally_disable_index.patch) which\n> can be applied\n> on top of v23_0001_use_index_on_subs_when_pub_rep_ident_full.patch.\n>\n> The new patch adds a new *subscription_parameter* for both CREATE and\n> ALTER subscription\n> named: *enable_index_scan*. The setting is valid only when REPLICA\n> IDENTITY is full.\n>\n> What do you think about such a patch to control the behavior? It does not\n> give a per-relation\n> level of control, but still useful for many cases.\n>\n> (Note that I'll be working on the other feedback in the email thread,\n> wanted to send this earlier\n> to hear some early thoughts on v24_0001_optionally_disable_index.patch).\n>\n>\n>",
"msg_date": "Wed, 22 Feb 2023 17:24:03 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Tue, Feb 21, 2023 at 7:55 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n>>\n>> I think this overhead seems to be mostly due to the need to perform\n>> tuples_equal multiple times for duplicate values. I don't know if\n>> there is any simple way to avoid this without using the planner stuff\n>> as was used in the previous approach. So, this brings us to the\n>> question of whether just providing a way to disable/enable the use of\n>> index scan for such cases is sufficient or if we need any other way.\n>>\n>> Tom, Andres, or others, do you have any suggestions on how to move\n>> forward with this patch?\n>>\n>\n> Thanks for the feedback and testing. Due to personal circumstances,\n> I could not reply the thread in the last 2 weeks, but I'll be more active\n> going forward.\n>\n> I also agree that we should have a way to control the behavior.\n>\n> I created another patch (v24_0001_optionally_disable_index.patch) which can be applied\n> on top of v23_0001_use_index_on_subs_when_pub_rep_ident_full.patch.\n>\n> The new patch adds a new subscription_parameter for both CREATE and ALTER subscription\n> named: enable_index_scan. The setting is valid only when REPLICA IDENTITY is full.\n>\n> What do you think about such a patch to control the behavior? It does not give a per-relation\n> level of control, but still useful for many cases.\n>\n\nWouldn't a table-level option like 'apply_index_scan' be better than a\nsubscription-level option with a default value as false? Anyway, the\nbigger point is that we don't see a better way to proceed here than to\nintroduce some option to control this behavior.\n\nI see this as a way to provide this feature for users but I would\nprefer to proceed with this if we can get some more buy-in from senior\ncommunity members (at least one more committer) and some user(s) if\npossible. So, I once again request others to chime in and share their\nopinion.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 25 Feb 2023 16:00:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Amit, all\n\n\nWouldn't a table-level option like 'apply_index_scan' be better than a\n> subscription-level option with a default value as false? Anyway, the\n> bigger point is that we don't see a better way to proceed here than to\n> introduce some option to control this behavior.\n>\n\nWhat would be a good API for adding such an option for table-level?\nTo be more specific, I cannot see any table level sub/pub options in the\ndocs.\n\nMy main motivation for doing it for subscription-level is that (a) it might\nbe\ntoo much work for users to control the behavior if it is table-level (b) I\ncouldn't\nfind a good API for table-level, and inventing a new one seemed\nlike a big change.\n\nOverall, I think it makes sense to disable the feature by default. It is\nenabled by default, and that's good for test coverage for now, but\nlet me disable it when I push a version next time.\n\n\n>\n> I see this as a way to provide this feature for users but I would\n> prefer to proceed with this if we can get some more buy-in from senior\n> community members (at least one more committer) and some user(s) if\n> possible. So, I once again request others to chime in and share their\n> opinion.\n>\n>\nAgreed, it would be great to hear some other perspectives on this.\n\nThanks,\nOnder\n\nHi Amit, all\nWouldn't a table-level option like 'apply_index_scan' be better than a\nsubscription-level option with a default value as false? 
Anyway, the\nbigger point is that we don't see a better way to proceed here than to\nintroduce some option to control this behavior.What would be a good API for adding such an option for table-level?To be more specific, I cannot see any table level sub/pub options in the docs.My main motivation for doing it for subscription-level is that (a) it might betoo much work for users to control the behavior if it is table-level (b) I couldn't find a good API for table-level, and inventing a new one seemedlike a big change.Overall, I think it makes sense to disable the feature by default. It isenabled by default, and that's good for test coverage for now, butlet me disable it when I push a version next time. \n\nI see this as a way to provide this feature for users but I would\nprefer to proceed with this if we can get some more buy-in from senior\ncommunity members (at least one more committer) and some user(s) if\npossible. So, I once again request others to chime in and share their\nopinion. Agreed, it would be great to hear some other perspectives on this.Thanks,Onder",
"msg_date": "Mon, 27 Feb 2023 10:05:38 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Mon, Feb 27, 2023 at 12:35 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n>\n>> Wouldn't a table-level option like 'apply_index_scan' be better than a\n>> subscription-level option with a default value as false? Anyway, the\n>> bigger point is that we don't see a better way to proceed here than to\n>> introduce some option to control this behavior.\n>\n>\n> What would be a good API for adding such an option for table-level?\n> To be more specific, I cannot see any table level sub/pub options in the docs.\n>\n\nI was thinking something along the lines of \"Storage Parameters\" [1]\nfor a table. See parameters like autovacuum_enabled that decide the\nautovacuum behavior for a table. These can be set via CREATE/ALTER\nTABLE commands.\n\n[1] - https://www.postgresql.org/docs/devel/sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 27 Feb 2023 13:35:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-25 16:00:05 +0530, Amit Kapila wrote:\n> On Tue, Feb 21, 2023 at 7:55 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n> >> I think this overhead seems to be mostly due to the need to perform\n> >> tuples_equal multiple times for duplicate values.\n\nI think more work needs to be done to determine the source of the\noverhead. It's not clear to me why there'd be an increase in tuples_equal()\ncalls in the tests upthread.\n\n\n> Wouldn't a table-level option like 'apply_index_scan' be better than a\n> subscription-level option with a default value as false? Anyway, the\n> bigger point is that we don't see a better way to proceed here than to\n> introduce some option to control this behavior.\n\nI don't think this should default to false. The quadratic apply performance\nthe sequential scans cause, are a much bigger hazard for users than some apply\nperformance reqression.\n\n\n> I see this as a way to provide this feature for users but I would\n> prefer to proceed with this if we can get some more buy-in from senior\n> community members (at least one more committer) and some user(s) if\n> possible. So, I once again request others to chime in and share their\n> opinion.\n\nI'd prefer not having an option, because we figure out the cause of the\nperformance regression (reducing it to be small enough to not care). After\nthat an option defaulting to using indexes. I don't think an option defaulting\nto false makes sense.\n\nI don't care whether it's subscription or relation level option.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 28 Feb 2023 10:39:18 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Wed, Mar 1, 2023 at 12:09 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> > I see this as a way to provide this feature for users but I would\n> > prefer to proceed with this if we can get some more buy-in from senior\n> > community members (at least one more committer) and some user(s) if\n> > possible. So, I once again request others to chime in and share their\n> > opinion.\n>\n> I'd prefer not having an option, because we figure out the cause of the\n> performance regression (reducing it to be small enough to not care). After\n> that an option defaulting to using indexes.\n>\n\nSure, if we can reduce regression to be small enough then we don't\nneed to keep the default as false, otherwise, also, we can consider it\nto keep an option defaulting to using indexes depending on the\ninvestigation for regression. Anyway, the main concern was whether it\nis okay to have an option for this which I think we have an agreement\non, now I will continue my review.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 1 Mar 2023 14:10:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Wed, Feb 22, 2023 at 7:54 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n\nFew comments:\n===============\n1.\n+ identity. When replica identity <literal>FULL</literal> is specified,\n+ indexes can be used on the subscriber side for searching the rows. These\n+ indexes should be btree,\n\nWhy only btree and not others like a hash index? Also, there should be\nsome comments in FindUsableIndexForReplicaIdentityFull() to explain\nthe choices.\n\n2.\n- * This is not generic routine, it expects the idxrel to be replication\n- * identity of a rel and meet all limitations associated with that.\n+ * This is not a generic routine - it expects the idxrel to be an index\n+ * that planner would choose if the searchslot includes all the columns\n+ * (e.g., REPLICA IDENTITY FULL on the source).\n */\n-static bool\n+static int\n\nThis comment is not clear to me. Which change here makes the\nexpectation like that? Which planner function/functionality are you\nreferring to here?\n\n3.\n+/*\n+ * Given a relation and OID of an index, returns true if\n+ * the index is relation's primary key's index or\n+ * relation's replica identity index.\n\nIt seems the line length is a bit off in the above comments. There\ncould be a similar mismatch in other places. You might want to run\npgindent.\n\n4.\n+}\n+\n+\n+/*\n+ * Returns an index oid if there is an index that can be used\n\nSpurious empty line.\n\n5.\n- /*\n- * We are in error mode so it's fine this is somewhat slow. It's better to\n- * give user correct error.\n- */\n- if (OidIsValid(GetRelationIdentityOrPK(rel->localrel)))\n+ /* Give user more precise error if possible. */\n+ if (OidIsValid(rel->usableIndexOid))\n {\n ereport(ERROR,\n (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n\nIs this change valid? 
I mean this could lead to the error \"publisher\ndid not send replica identity column expected by the logical\nreplication target relation\" when it should have given an error:\n\"logical replication target relation \\\"%s.%s\\\" has neither REPLICA\nIDENTITY index nor PRIMARY ...\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 1 Mar 2023 17:16:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Andres, Amit, Shi Yu, all\n\nAndres Freund <andres@anarazel.de>, 28 Şub 2023 Sal, 21:39 tarihinde şunu\nyazdı:\n\n> Hi,\n>\n> On 2023-02-25 16:00:05 +0530, Amit Kapila wrote:\n> > On Tue, Feb 21, 2023 at 7:55 PM Önder Kalacı <onderkalaci@gmail.com>\n> wrote:\n> > >> I think this overhead seems to be mostly due to the need to perform\n> > >> tuples_equal multiple times for duplicate values.\n>\n> I think more work needs to be done to determine the source of the\n> overhead. It's not clear to me why there'd be an increase in tuples_equal()\n> calls in the tests upthread.\n>\n>\nYou are right, looking closely, in fact, we most of the time do much less\ntuples_equal() with index scan.\n\nI've done some profiling with perf, and created flame graphs for the apply\nworker, with the\ntest described above: *-- case 1 (All values are duplicated). *I used the\nfollowing commands:\n- perf record -F 99 -p 122555 -g -- sleep 60\n- perf script | ./stackcollapse-perf.pl > out.perf-folded\n- ./flamegraph.pl out.perf-folded > perf_[index|seq]_scan.svg\n\nI attached both flame graphs. I do not see anything specific regarding what\nthe patch does, but\ninstead the difference mostly seems to come down to index scan vs\nsequential scan related\nfunctions. As I continue to investigate, I thought it might be useful to\nshare the flame graphs\nso that more experienced hackers could comment on the difference.\n\nRegarding my own end-to-end tests: In some runs, the sequential scan is\nindeed faster for case-1. 
But,\nwhen I execute *update tbl set a=a+1; *for 50 consecutive times, and\nmeasure end to end performance, I see\nmuch better results for index scan, only case-1 is on-par as mostly I'd\nexpect.\n\nCase-1, running the update 50 times and waiting all changes applied\n\n - index scan: 2minutes 36 seconds\n - sequential scan: 2minutes 30 seconds\n\nCase-2, running the update 50 times and waiting all changes applied\n\n - index scan: 1 minutes, 2 seconds\n - sequential scan: 2minutes 30 seconds\n\nCase-7, running the update 50 times and waiting all changes applied\n\n - index scan: 6 seconds\n - sequential scan: 2minutes 26seconds\n\n\n\n> # Result\nThe time executing update (the average of 3 runs is taken, the unit is\nmilliseconds):\n\nShi Yu, could it be possible for you to re-run the tests with some more\nruns, and share the average?\nI suspect maybe your test results have a very small pool size, and some\nruns are making\nthe average slightly problematic.\n\nIn my tests, I shared the total time, which is probably also fine.\n\nThanks,\nOnder",
"msg_date": "Wed, 1 Mar 2023 16:21:52 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Wed, Mar 1, 2023 9:22 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\r\n> \r\n> Hi Andres, Amit, Shi Yu, all\r\n> \r\n> Andres Freund <mailto:andres@anarazel.de>, 28 Şub 2023 Sal, 21:39 tarihinde şunu yazdı:\r\n> Hi,\r\n> \r\n> On 2023-02-25 16:00:05 +0530, Amit Kapila wrote:\r\n> > On Tue, Feb 21, 2023 at 7:55 PM Önder Kalacı <mailto:onderkalaci@gmail.com> wrote:\r\n> > >> I think this overhead seems to be mostly due to the need to perform\r\n> > >> tuples_equal multiple times for duplicate values.\r\n> \r\n> I think more work needs to be done to determine the source of the\r\n> overhead. It's not clear to me why there'd be an increase in tuples_equal()\r\n> calls in the tests upthread.\r\n> \r\n> You are right, looking closely, in fact, we most of the time do much less \r\n> tuples_equal() with index scan.\r\n> \r\n> I've done some profiling with perf, and created flame graphs for the apply worker, with the\r\n> test described above: -- case 1 (All values are duplicated). I used the following commands:\r\n> - perf record -F 99 -p 122555 -g -- sleep 60\r\n> - perf script | ./http://stackcollapse-perf.pl > out.perf-folded\r\n> - ./http://flamegraph.pl out.perf-folded > perf_[index|seq]_scan.svg\r\n> \r\n> I attached both flame graphs. I do not see anything specific regarding what the patch does, but\r\n> instead the difference mostly seems to come down to index scan vs sequential scan related\r\n> functions. As I continue to investigate, I thought it might be useful to share the flame graphs\r\n> so that more experienced hackers could comment on the difference. \r\n> \r\n> Regarding my own end-to-end tests: In some runs, the sequential scan is indeed faster for case-1. 
But, \r\n> when I execute update tbl set a=a+1; for 50 consecutive times, and measure end to end performance, I see\r\n> much better results for index scan, only case-1 is on-par as mostly I'd expect.\r\n> \r\n> Case-1, running the update 50 times and waiting all changes applied\r\n> • index scan: 2minutes 36 seconds\r\n> • sequential scan: 2minutes 30 seconds\r\n> Case-2, running the update 50 times and waiting all changes applied\r\n> • index scan: 1 minutes, 2 seconds\r\n> • sequential scan: 2minutes 30 seconds\r\n> Case-7, running the update 50 times and waiting all changes applied\r\n> • index scan: 6 seconds\r\n> • sequential scan: 2minutes 26seconds\r\n> \r\n> \r\n> > # Result\r\n> The time executing update (the average of 3 runs is taken, the unit is\r\n> milliseconds):\r\n> \r\n> Shi Yu, could it be possible for you to re-run the tests with some more runs, and share the average?\r\n> I suspect maybe your test results have a very small pool size, and some runs are making\r\n> the average slightly problematic.\r\n> \r\n> In my tests, I shared the total time, which is probably also fine.\r\n>\r\n\r\nThanks for your reply, I re-tested (based on\r\nv25_0001_use_index_on_subs_when_pub_rep_ident_full.patch) and took the average\r\nof 100 runs. The results are as follows. The unit is milliseconds.\r\n\r\ncase1\r\nsequential scan: 1348.57\r\nindex scan: 3785.15\r\n\r\ncase2\r\nsequential scan: 1350.26\r\nindex scan: 1754.01\r\n\r\ncase3\r\nsequential scan: 1350.13\r\nindex scan: 1340.97\r\n\r\nThere was still some degradation in the first two cases. There are some gaps in\r\nour test results. Some information about my test is as follows.\r\n\r\na. Some parameters specified in postgresql.conf.\r\nshared_buffers = 8GB\r\ncheckpoint_timeout = 30min\r\nmax_wal_size = 20GB\r\nmin_wal_size = 10GB\r\nautovacuum = off\r\n\r\nb. Executed SQL.\r\nI executed TRUNCATE and INSERT before each UPDATE. I am not sure if you did the\r\nsame, or just executed 50 consecutive UPDATEs. 
If the latter one, there would be\r\nlots of old tuples and this might have a bigger impact on sequential scan. I\r\ntried this case (which executes 50 consecutive UPDATEs) and also saw that the\r\noverhead is smaller than before.\r\n\r\n\r\nBesides, I looked into the regression of this patch with `gprof`. Some results\r\nare as follows. I think with single buffer lock, sequential scan can scan\r\nmultiple tuples (see heapgettup()), while index scan can only scan one tuple. So\r\nin case1, which has lots of duplicate values and more tuples need to be scanned,\r\nindex scan takes longer time.\r\n\r\n- results of `gprof`\r\ncase1:\r\nmaster\r\n % cumulative self self total \r\n time seconds seconds calls ms/call ms/call name \r\n 1.37 0.66 0.01 654312 0.00 0.00 LWLockAttemptLock\r\n 0.00 0.73 0.00 573358 0.00 0.00 LockBuffer\r\n 0.00 0.73 0.00 10014 0.00 0.06 heap_getnextslot\r\n\r\npatched\r\n % cumulative self self total \r\n time seconds seconds calls ms/call ms/call name \r\n 9.70 1.27 0.36 50531459 0.00 0.00 LWLockAttemptLock\r\n 3.23 2.42 0.12 100259200 0.00 0.00 LockBuffer\r\n 6.20 1.50 0.23 50015101 0.00 0.00 heapam_index_fetch_tuple\r\n 4.04 2.02 0.15 50015101 0.00 0.00 index_fetch_heap\r\n 1.35 3.21 0.05 10119 0.00 0.00 index_getnext_slot\r\n\r\ncase7:\r\nmaster\r\n % cumulative self self total \r\n time seconds seconds calls ms/call ms/call name \r\n 2.67 0.60 0.02 654582 0.00 0.00 LWLockAttemptLock\r\n 0.00 0.75 0.00 573488 0.00 0.00 LockBuffer\r\n 0.00 0.75 0.00 10014 0.00 0.06 heap_getnextslot\r\n\r\npatched\r\n % cumulative self self total \r\n time seconds seconds calls ms/call ms/call name \r\n 0.00 0.12 0.00 241979 0.00 0.00 LWLockAttemptLock\r\n 0.00 0.12 0.00 180884 0.00 0.00 LockBuffer\r\n 0.00 0.12 0.00 10101 0.00 0.00 heapam_index_fetch_tuple\r\n 0.00 0.12 0.00 10101 0.00 0.00 index_fetch_heap\r\n 0.00 0.12 0.00 10119 0.00 0.00 index_getnext_slot\r\n\r\nRegards,\r\nShi Yu\r\n",
"msg_date": "Thu, 2 Mar 2023 08:06:53 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 1:37 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Wed, Mar 1, 2023 9:22 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n> >\n> > > # Result\n> > The time executing update (the average of 3 runs is taken, the unit is\n> > milliseconds):\n> >\n> > Shi Yu, could it be possible for you to re-run the tests with some more runs, and share the average?\n> > I suspect maybe your test results have a very small pool size, and some runs are making\n> > the average slightly problematic.\n> >\n> > In my tests, I shared the total time, which is probably also fine.\n> >\n>\n> Thanks for your reply, I re-tested (based on\n> v25_0001_use_index_on_subs_when_pub_rep_ident_full.patch) and took the average\n> of 100 runs. The results are as follows. The unit is milliseconds.\n>\n> case1\n> sequential scan: 1348.57\n> index scan: 3785.15\n>\n> case2\n> sequential scan: 1350.26\n> index scan: 1754.01\n>\n> case3\n> sequential scan: 1350.13\n> index scan: 1340.97\n>\n> There was still some degradation in the first two cases. There are some gaps in\n> our test results. Some information about my test is as follows.\n>\n> a. Some parameters specified in postgresql.conf.\n> shared_buffers = 8GB\n> checkpoint_timeout = 30min\n> max_wal_size = 20GB\n> min_wal_size = 10GB\n> autovacuum = off\n>\n> b. Executed SQL.\n> I executed TRUNCATE and INSERT before each UPDATE. I am not sure if you did the\n> same, or just executed 50 consecutive UPDATEs. If the latter one, there would be\n> lots of old tuples and this might have a bigger impact on sequential scan. I\n> tried this case (which executes 50 consecutive UPDATEs) and also saw that the\n> overhead is smaller than before.\n>\n>\n> Besides, I looked into the regression of this patch with `gprof`. Some results\n> are as follows. I think with single buffer lock, sequential scan can scan\n> multiple tuples (see heapgettup()), while index scan can only scan one tuple. 
So\n> in case1, which has lots of duplicate values and more tuples need to be scanned,\n> index scan takes longer time.\n>\n> - results of `gprof`\n> case1:\n> master\n> % cumulative self self total\n> time seconds seconds calls ms/call ms/call name\n> 1.37 0.66 0.01 654312 0.00 0.00 LWLockAttemptLock\n> 0.00 0.73 0.00 573358 0.00 0.00 LockBuffer\n> 0.00 0.73 0.00 10014 0.00 0.06 heap_getnextslot\n>\n> patched\n> % cumulative self self total\n> time seconds seconds calls ms/call ms/call name\n> 9.70 1.27 0.36 50531459 0.00 0.00 LWLockAttemptLock\n> 3.23 2.42 0.12 100259200 0.00 0.00 LockBuffer\n> 6.20 1.50 0.23 50015101 0.00 0.00 heapam_index_fetch_tuple\n> 4.04 2.02 0.15 50015101 0.00 0.00 index_fetch_heap\n> 1.35 3.21 0.05 10119 0.00 0.00 index_getnext_slot\n>\n\nIn the above profile number of calls to index_fetch_heap(),\nheapam_index_fetch_tuple() explains the reason for the regression you\nare seeing with the index scan. Because the update will generate dead\ntuples in the same transaction and those dead tuples won't be removed,\nwe get those from the index and then need to perform\nindex_fetch_heap() to find out whether the tuple is dead or not. Now,\nfor sequence scan also we need to scan those dead tuples but there we\ndon't need to do back-and-forth between index and heap. I think we can\nonce check with more number of tuples (say with 20000, 50000, etc.)\nfor case-1.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 2 Mar 2023 14:45:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi,\n\n>\n>\n> Few comments:\n> ===============\n> 1.\n> + identity. When replica identity <literal>FULL</literal> is specified,\n> + indexes can be used on the subscriber side for searching the rows.\n> These\n> + indexes should be btree,\n>\n> Why only btree and not others like a hash index? Also, there should be\n> some comments in FindUsableIndexForReplicaIdentityFull() to explain\n> the choices.\n>\n\nI updated the comment(s).\n\nFor a more technical reference, we have these restrictions, because we rely\non\nRelationFindReplTupleByIndex() which is designed to handle PK/RI. And,\nRelationFindReplTupleByIndex() is written in a way that only expects\nindexes with these limitations.\n\nIn order to keep the changes as small as possible, I refrained from\nrelaxing this\nlimitation for now. I'm definitely up to working on this for relaxing these\nlimitations, and practically allowing more cases for non-unique indexes.\n\n\n>\n> 2.\n> - * This is not generic routine, it expects the idxrel to be replication\n> - * identity of a rel and meet all limitations associated with that.\n> + * This is not a generic routine - it expects the idxrel to be an index\n> + * that planner would choose if the searchslot includes all the columns\n> + * (e.g., REPLICA IDENTITY FULL on the source).\n> */\n> -static bool\n> +static int\n>\n> This comment is not clear to me. Which change here makes the\n> expectation like that? Which planner function/functionality are you\n> referring to here?\n>\n\nOps, planner related comments are definitely stale. As you might\nremember, in the earlier iterations of this patch, we had some\nplanner functions to pick indexes for us.\n\nAnyway, I think even for that version of the patch, this comment was\nwrong. 
Updated now, does that look better?\n\n\n>\n> 3.\n> +/*\n> + * Given a relation and OID of an index, returns true if\n> + * the index is relation's primary key's index or\n> + * relation's replica identity index.\n>\n> It seems the line length is a bit off in the above comments. There\n> could be a similar mismatch in other places. You might want to run\n> pgindent.\n>\n\nMakes sense, run pgindent. But it didn't fix this specific instance\nautomatically,\nI changed that manually.\n\n\n>\n> 4.\n> +}\n> +\n> +\n> +/*\n> + * Returns an index oid if there is an index that can be used\n>\n> Spurious empty line.\n>\n\nfixed\n\n\n>\n> 5.\n> - /*\n> - * We are in error mode so it's fine this is somewhat slow. It's better to\n> - * give user correct error.\n> - */\n> - if (OidIsValid(GetRelationIdentityOrPK(rel->localrel)))\n> + /* Give user more precise error if possible. */\n> + if (OidIsValid(rel->usableIndexOid))\n> {\n> ereport(ERROR,\n> (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n>\n> Is this change valid? I mean this could lead to the error \"publisher\n> did not send replica identity column expected by the logical\n> replication target relation\" when it should have given an error:\n> \"logical replication target relation \\\"%s.%s\\\" has neither REPLICA\n> IDENTITY index nor PRIMARY ...\n>\n>\n Hmm, that's right, we'd get a wrong error message.\n\nI spent quite a bit of time trying to understand whether we'd\nneed anything additional after this patch regarding this function,\nbut my current understanding is that we should just leave the\ncheck as-is. It is mainly because when REPLICA IDENTITY is\nfull, there is no need to check anything further (other than the\ncheck that is at the bottom of the function)\n\n\nAttached are both patches: the main patch, and the patch that\noptionally disables the index scans. Let's discuss the necessity\nfor the second patch in the lights of the data we collect with\nsome more tests.\n\nThanks,\nOnder",
"msg_date": "Thu, 2 Mar 2023 12:30:20 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 3:00 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>>\n>> Few comments:\n>> ===============\n>> 1.\n>> + identity. When replica identity <literal>FULL</literal> is specified,\n>> + indexes can be used on the subscriber side for searching the rows. These\n>> + indexes should be btree,\n>>\n>> Why only btree and not others like a hash index? Also, there should be\n>> some comments in FindUsableIndexForReplicaIdentityFull() to explain\n>> the choices.\n>\n>\n> I updated the comment(s).\n>\n> For a more technical reference, we have these restrictions, because we rely on\n> RelationFindReplTupleByIndex() which is designed to handle PK/RI. And,\n> RelationFindReplTupleByIndex() is written in a way that only expects\n> indexes with these limitations.\n>\n> In order to keep the changes as small as possible, I refrained from relaxing this\n> limitation for now. I'm definitely up to working on this for relaxing these\n> limitations, and practically allowing more cases for non-unique indexes.\n>\n\nSee, I think I understand why partial/expression indexes can't be\nsupported. It seems to me that because the required tuple may not\nsatisfy the expression and that won't work for our case. But what are\nother limitations you see due to which we can't support other index\ntypes for non-unique indexes? Is it just a matter of testing other\nindex types or there is something more to it, if so, we should add\ncomments so that they can be supported in the future if it is feasible\nto do so.\n\n>\n> Attached are both patches: the main patch, and the patch that\n> optionally disables the index scans.\n>\n\nBoth the patches are numbered 0001. It would be better to number them\nas 0001 and 0002.\n\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 2 Mar 2023 17:44:44 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Amit, Shi Yu\n\n\n> >\n> > b. Executed SQL.\n> > I executed TRUNCATE and INSERT before each UPDATE. I am not sure if you\n> did the\n> > same, or just executed 50 consecutive UPDATEs. If the latter one, there\n> would be\n> > lots of old tuples and this might have a bigger impact on sequential\n> scan. I\n> > tried this case (which executes 50 consecutive UPDATEs) and also saw\n> that the\n> > overhead is smaller than before.\n>\n\nAlright, I'll do similarly, execute truncate/insert before each update.\n\n\n> In the above profile number of calls to index_fetch_heap(),\n> heapam_index_fetch_tuple() explains the reason for the regression you\n> are seeing with the index scan. Because the update will generate dead\n> tuples in the same transaction and those dead tuples won't be removed,\n> we get those from the index and then need to perform\n> index_fetch_heap() to find out whether the tuple is dead or not. Now,\n> for sequence scan also we need to scan those dead tuples but there we\n> don't need to do back-and-forth between index and heap.\n\n\nThanks for the insights, I think what you describe makes a lot of sense.\n\n\n\n> I think we can\n> once check with more number of tuples (say with 20000, 50000, etc.)\n> for case-1.\n>\n>\nAs we'd expect, this test made the performance regression more visible.\n\nI quickly ran case-1 for 50 times with 50000 as Shi Yu does, and got\nthe following results. I'm measuring end-to-end times for running the\nwhole set of commands:\n\nseq_scan: 00 hr 24 minutes, 42 seconds\nindex_scan: 01 hr 04 minutes 54 seconds\n\n\nBut, I'm still not sure whether we should focus on this regression too\nmuch. In the end, what we are talking about is a case (e.g., all or many\nrows are duplicated) where using an index is not a good idea anyway. So,\nI doubt users would have such indexes.\n\n\n> The quadratic apply performance\n> the sequential scans cause, are a much bigger hazard for users than some\napply\n> performance reqression.\n\nQuoting Andres' note, I personally think that the regression for this case\nis not a big concern.\n\n> I'd prefer not having an option, because we figure out the cause of the\n> performance regression (reducing it to be small enough to not care). After\n> that an option defaulting to using indexes. I don't think an option\ndefaulting\n> to false makes sense.\n\nI think we figured out the cause of the performance regression. I think it\nis not small\nenough for some scenarios like the above. But those scenarios seem like\nsynthetic\ntest cases, with not much user impacting implications. Still, I think you\nare better suited\nto comment on this.\n\nIf you consider that this is a significant issue, we could consider the\nsecond patch as well\nsuch that for this unlikely scenario users could disable index scans.\n\nThanks,\nOnder",
"msg_date": "Thu, 2 Mar 2023 16:20:09 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi,\n\n\n> Is it just a matter of testing other\n> index types\n>\n\nYes, there are more to it. build_replindex_scan_key()\nonly works for btree indexes, as it does BTEqualStrategyNumber.\n\nI might expect a few more limitations like that. I added comment\nin the code (see FindUsableIndexForReplicaIdentityFull)\n\nor there is something more to it, if so, we should add\n> comments so that they can be supported in the future if it is feasible\n> to do so.\n\n\nI really don't see any fundamental issues regarding expanding the\nsupport for more index types, it is just some more coding & testing.\n\nAnd, I can (and willing to) work on that as a follow-up. I explicitly\ntry to keep this patch as small as possible.\n\n\n> >\n> > Attached are both patches: the main patch, and the patch that\n> > optionally disables the index scans.\n> >\n>\n> Both the patches are numbered 0001. It would be better to number them\n> as 0001 and 0002.\n>\n>\nAlright, attached v27_0001_use_index_on_subs_when_pub_rep_ident_full.patch\nand\nv27_0002_use_index_on_subs_when_pub_rep_ident_full.patch.\n\nI also added one more test which Andres asked me on a private chat\n(Testcase start: SUBSCRIPTION USES INDEX WITH PUB/SUB different data).\n\nThanks,\nOnder",
"msg_date": "Thu, 2 Mar 2023 18:22:44 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "FYI,\n\nAfter applying only the 0001 patch I received a TAP test error.\n\nt/032_subscribe_use_index.pl ....... 1/? # Tests were run but no plan\nwas declared and done_testing() was not seen.\nt/032_subscribe_use_index.pl ....... Dubious, test returned 29 (wstat\n7424, 0x1d00)\nAll 1 subtests passed\nt/100_bugs.pl ...................... ok\n\n\nMore details:\n\n2023-03-03 12:45:45.382 AEDT [9931] 032_subscribe_use_index.pl LOG:\nstatement: CREATE INDEX test_replica_id_full_idx ON\ntest_replica_id_full(x)\n2023-03-03 12:45:45.423 AEDT [9937] 032_subscribe_use_index.pl LOG:\nstatement: CREATE SUBSCRIPTION tap_sub_rep_full CONNECTION 'port=56538\nhost=/tmp/zWyRQnOa1a dbname=postgres application_name=tap_sub'\nPUBLICATION tap_pub_rep_full WITH (enable_index_scan = false)\n2023-03-03 12:45:45.423 AEDT [9937] 032_subscribe_use_index.pl ERROR:\nunrecognized subscription parameter: \"enable_index_scan\"\n2023-03-03 12:45:45.423 AEDT [9937] 032_subscribe_use_index.pl\nSTATEMENT: CREATE SUBSCRIPTION tap_sub_rep_full CONNECTION\n'port=56538 host=/tmp/zWyRQnOa1a dbname=postgres\napplication_name=tap_sub' PUBLICATION tap_pub_rep_full WITH\n(enable_index_scan = false)\n2023-03-03 12:45:45.532 AEDT [9834] LOG: received immediate shutdown request\n2023-03-03 12:45:45.533 AEDT [9834] LOG: database system is shut down\n\n~~\n\nThe patches 0001 and 0002 seem to have accidentally blended together\nbecause AFAICT the error is because patch 0001 is testing something\nthat is not available until 0002.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 3 Mar 2023 13:10:47 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 6:50 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n>>\n>>\n>> In the above profile number of calls to index_fetch_heap(),\n>> heapam_index_fetch_tuple() explains the reason for the regression you\n>> are seeing with the index scan. Because the update will generate dead\n>> tuples in the same transaction and those dead tuples won't be removed,\n>> we get those from the index and then need to perform\n>> index_fetch_heap() to find out whether the tuple is dead or not. Now,\n>> for sequence scan also we need to scan those dead tuples but there we\n>> don't need to do back-and-forth between index and heap.\n>\n>\n> Thanks for the insights, I think what you describe makes a lot of sense.\n>\n...\n...\n>\n> I think we figured out the cause of the performance regression. I think it is not small\n> enough for some scenarios like the above. But those scenarios seem like synthetic\n> test cases, with not much user impacting implications. Still, I think you are better suited\n> to comment on this.\n>\n> If you consider that this is a significant issue, we could consider the second patch as well\n> such that for this unlikely scenario users could disable index scans.\n>\n\nI think we can't completely ignore this regression because the key\npoint of this patch is to pick one of the non-unique indexes to\nperform scan and now it will be difficult to predict how many\nduplicates (and or dead rows) some index has without more planner\nsupport. Personally, I feel it is better to have a table-level option\nfor this so that users have some knob to avoid regressions in\nparticular cases. In general, I agree that it will be a win in more\nnumber of cases than it can regress.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 3 Mar 2023 08:16:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Thursday, March 2, 2023 11:23 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\r\n\r\n> Both the patches are numbered 0001. It would be better to number them\r\n> as 0001 and 0002.\r\n> \r\n> Alright, attached v27_0001_use_index_on_subs_when_pub_rep_ident_full.patch and \r\n> v27_0002_use_index_on_subs_when_pub_rep_ident_full.patch.\r\n> \r\n> I also added one more test which Andres asked me on a private chat\r\n> (Testcase start: SUBSCRIPTION USES INDEX WITH PUB/SUB different data).\r\n\r\nThanks for updating the patch. I think this patch can bring noticeable\r\nperformance improvements in some use cases.\r\n\r\nAnd here are few comments after reading the patch.\r\n\r\n1.\r\n+\tusableIndexContext = AllocSetContextCreate(CurrentMemoryContext,\r\n+\t\t\t\t\t\t\t\t\t\t\t \"usableIndexContext\",\r\n+\t\t\t\t\t\t\t\t\t\t\t ALLOCSET_DEFAULT_SIZES);\r\n+\toldctx = MemoryContextSwitchTo(usableIndexContext);\r\n+\r\n+\t/* Get index list of the local relation */\r\n+\tindexlist = RelationGetIndexList(localrel);\r\n+\tAssert(indexlist != NIL);\r\n+\r\n+\tforeach(lc, indexlist)\r\n\r\nIs it necessary to create a memory context here ? I thought the memory will be\r\nfreed after this apply action soon.\r\n\r\n2.\r\n\r\n+\t\t\t/*\r\n+\t\t\t * Furthermore, because primary key and unique key indexes can't\r\n+\t\t\t * include expressions we also sanity check the index is neither\r\n+\t\t\t * of those kinds.\r\n+\t\t\t */\r\n+\t\t\tAssert(!IdxIsRelationIdentityOrPK(rel, idxrel->rd_id));\r\n\r\nIt seems you mean \"replica identity key\" instead of \"unique key\" in the comments.\r\n\r\n\r\n3.\r\n--- a/src/include/replication/logicalrelation.h\r\n+++ b/src/include/replication/logicalrelation.h\r\n...\r\n+extern bool IsIndexOnlyOnExpression(IndexInfo *indexInfo);\r\n\r\nThe definition function seems better to be placed in execReplication.c\r\n\r\n4.\r\n\r\n+extern Oid GetRelationIdentityOrPK(Relation rel);\r\n\r\nThe function is only used in relation.c, so we can make it a static\r\nfunction.\r\n\r\n\r\n5.\r\n\r\n+\t/*\r\n+\t * If index scans are disabled, use a sequential scan.\r\n+\t *\r\n+\t * Note that we do not use index scans below when enable_indexscan is\r\n+\t * false. Allowing primary key or replica identity even when index scan is\r\n+\t * disabled is the legacy behaviour. So we hesitate to move the below\r\n+\t * enable_indexscan check to be done earlier in this function.\r\n+\t */\r\n+\tif (!enable_indexscan)\r\n+\t\treturn InvalidOid;\r\n\r\nSince the document of enable_indexscan says \"Enables or disables the query\r\nplanner's use of index-scan plan types. The default is on.\", and we don't use\r\nplanner here, so I am not sure should we allow/disallow index scan in apply\r\nworker based on this GUC.\r\n\r\nBest Regards,\r\nHou zj\r\n\r\n",
"msg_date": "Fri, 3 Mar 2023 03:40:31 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Here are some review comments for v27-0001 (not the tests)\n\n======\nCommit Message\n\n1.\nThere is no smart mechanism to pick the index. Instead, we choose\nthe first index that fulfils the requirements mentioned above.\n\n~\n\n1a.\nI think this paragraph should immediately follow the earlier one\n(\"With this patch...\") which talked about the index requirements.\n\n~\n\n1b.\nSlight rewording\n\nSUGGESTION\nIf there is more than one index that satisfies these requirements, we\njust pick the first one.\n\n======\ndoc/src/sgml/logical-replication.sgml\n\n2.\nA published table must have a “replica identity” configured in order\nto be able to replicate UPDATE and DELETE operations, so that\nappropriate rows to update or delete can be identified on the\nsubscriber side. By default, this is the primary key, if there is one.\nAnother unique index (with certain additional requirements) can also\nbe set to be the replica identity. When replica identity FULL is\nspecified, indexes can be used on the subscriber side for searching\nthe rows. These indexes should be btree, non-partial and have at least\none column reference (e.g., should not consist of only expressions).\nThese restrictions on the non-unique index properties are in essence\nthe same restrictions that are enforced for primary keys. Internally,\nwe follow the same approach for supporting index scans within logical\nreplication scope. If there are no such suitable indexes, the search\non the subscriber s ide can be very inefficient, therefore replica\nidentity FULL should only be used as a fallback if no other solution\nis possible. If a replica identity other than “full” is set on the\npublisher side, a replica identity comprising the same or fewer\ncolumns must also be set on the subscriber side. See REPLICA IDENTITY\nfor details on how to set the replica identity. If a table without a\nreplica identity is added to a publication that replicates UPDATE or\nDELETE operations then subsequent UPDATE or DELETE operations will\ncause an error on the publisher. INSERT operations can proceed\nregardless of any replica identity.\n\n~\n\n2a.\nIMO the <quote>replica identity</quote> in the first sentence should\nbe changed to be <firstterm>replica identity</firstterm>\n\n~\n\n2b.\nTypo: \"subscriber s ide\" --> \"subscriber side\"\n\n~\n\n2c.\nThere is still one remaining \"full\" in this text. I think ought to be\nchanged to <literal>FULL</literal> to match the others.\n\n======\nsrc/backend/executor/execReplication.c\n\n3. IdxIsRelationIdentityOrPK\n\n+/*\n+ * Given a relation and OID of an index, returns true if the\n+ * index is relation's primary key's index or relation's\n+ * replica identity index.\n+ *\n+ * Returns false otherwise.\n+ */\n+bool\n+IdxIsRelationIdentityOrPK(Relation rel, Oid idxoid)\n+{\n+ Assert(OidIsValid(idxoid));\n+\n+ return RelationGetReplicaIndex(rel) == idxoid ||\n+ RelationGetPrimaryKeyIndex(rel) == idxoid;\n }\n\n~\n\nSince the function name mentions RI (1st) and then PK (2nd), and since\nthe implementation also has the same order, I think the function\ncomment should use the same consistent order when describing what it\ndoes.\n\n======\nsrc/backend/replication/logical/relation.c\n\n4. FindUsableIndexForReplicaIdentityFull\n\n+/*\n+ * Returns an index oid if there is an index that can be used\n+ * via the apply worker. The index should be btree, non-partial\n+ * and have at least one column reference (e.g., should\n+ * not consist of only expressions). The limitations arise from\n+ * RelationFindReplTupleByIndex(), which is designed to handle\n+ * PK/RI and these limitations are inherent to PK/RI.\n+ *\n+ * There are no fundamental problems for supporting non-btree\n+ * and/or partial indexes. We should mostly relax the limitations\n+ * in RelationFindReplTupleByIndex().\n+ *\n+ * If no suitable index is found, returns InvalidOid.\n+ *\n+ * Note that this is not a generic function, it expects REPLICA\n+ * IDENTITY FULL for the remote relation.\n+ */\n\n~\n\n4a.\nMinor rewording of 1st sentence.\n\nBEFORE\nReturns an index oid if there is an index that can be used via the apply worker.\n\nSUGGESTION\nReturns the oid of an index that can be used via the apply worker.\n\n~\n\n4b.\n+ * There are no fundamental problems for supporting non-btree\n+ * and/or partial indexes. We should mostly relax the limitations\n+ * in RelationFindReplTupleByIndex().\n\nI think this paragraph should come later in the comment (just before\nthe Note) and should also have \"XXX\" prefix to indicate it is some\nimplementation note for future versions.\n\n~~~\n\n5. GetRelationIdentityOrPK\n\n+/*\n+ * Get replica identity index or if it is not defined a primary key.\n+ *\n+ * If neither is defined, returns InvalidOid\n+ */\n+Oid\n+GetRelationIdentityOrPK(Relation rel)\n+{\n+ Oid idxoid;\n+\n+ idxoid = RelationGetReplicaIndex(rel);\n+\n+ if (!OidIsValid(idxoid))\n+ idxoid = RelationGetPrimaryKeyIndex(rel);\n+\n+ return idxoid;\n+}\n\nThis is really very similar code to the other new function called\nIdxIsRelationIdentityOrPK. I wondered if such similar functions could\nbe defined together.\n\n~~~\n\n6. FindLogicalRepUsableIndex\n\n+/*\n+ * Returns an index oid if we can use an index for subscriber. If not,\n+ * returns InvalidOid.\n+ */\n\nSUGGESTION\nReturns the oid of an index that can be used by a subscriber.\nOtherwise, returns InvalidOid.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Fri, 3 Mar 2023 16:10:44 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 8:52 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n> Alright, attached v27_0001_use_index_on_subs_when_pub_rep_ident_full.patch and\n> v27_0002_use_index_on_subs_when_pub_rep_ident_full.patch.\n>\n\nFew comments on 0001\n====================\n1.\n+ such suitable indexes, the search on the subscriber s ide can be\nvery inefficient,\n\nunnecessary space in 'side'\n\n2.\n- identity. If the table does not have any suitable key, then it can be set\n- to replica identity <quote>full</quote>, which means the entire row becomes\n- the key. This, however, is very inefficient and should only be used as a\n- fallback if no other solution is possible. If a replica identity other\n+ identity. When replica identity <literal>FULL</literal> is specified,\n+ indexes can be used on the subscriber side for searching the rows.\n\nI think it is better to retain the first sentence (If the table does\nnot ... entire row becomes the key.) as that says what will be part of\nthe key.\n\n3.\n- comprising the same or fewer columns must also be set on the subscriber\n- side. See <xref linkend=\"sql-altertable-replica-identity\"/> for details on\n- how to set the replica identity. If a table without a replica identity is\n- added to a publication that replicates <command>UPDATE</command>\n+ comprising the same or fewer columns must also be set on the\nsubscriber side.\n+ See <xref linkend=\"sql-altertable-replica-identity\"/> for\n+ details on how to set the replica identity. If a table without a replica\n+ identity is added to a publication that replicates <command>UPDATE</command>\n\nI don't see any change in this except line length. If so, I don't\nthink we should change it as part of this patch.\n\n4.\n /*\n * Setup a ScanKey for a search in the relation 'rel' for a tuple 'key' that\n * is setup to match 'rel' (*NOT* idxrel!).\n *\n- * Returns whether any column contains NULLs.\n+ * Returns how many columns should be used for the index scan.\n+ *\n+ * This is not generic routine, it expects the idxrel to be\n+ * a btree, non-partial and have at least one column\n+ * reference (e.g., should not consist of only expressions).\n *\n- * This is not generic routine, it expects the idxrel to be replication\n- * identity of a rel and meet all limitations associated with that.\n+ * By definition, replication identity of a rel meets all\n+ * limitations associated with that. Note that any other\n+ * index could also meet these limitations.\n\nThe comment changes look quite asymmetric to me. Normally, we break\nthe line if the line length goes beyond 80 cols. Please check and\nchange other places in the patch if they have a similar symptom.\n\n5.\n+ * There are no fundamental problems for supporting non-btree\n+ * and/or partial indexes.\n\nCan we mention partial indexes in the above comment? It seems to me\nthat because the required tuple may not satisfy the expression (in the\ncase of partial indexes) it may not be easy to support it.\n\n6.\nbuild_replindex_scan_key()\n{\n...\n+ for (index_attoff = 0; index_attoff <\nIndexRelationGetNumberOfKeyAttributes(idxrel);\n+ index_attoff++)\n...\n...\n+#ifdef USE_ASSERT_CHECKING\n+ IndexInfo *indexInfo = BuildIndexInfo(idxrel);\n+\n+ Assert(!IsIndexOnlyOnExpression(indexInfo));\n+#endif\n...\n}\n\nWe can avoid building index info multiple times. This can be either\nchecked at the beginning of the function outside attribute offset loop\nor we can probably cache it. I understand this is for assert builds\nbut seems easy to avoid it doing multiple times and it also looks odd\nto do it multiple times for the same index.\n\n7.\n- /* Build scankey for every attribute in the index. */\n- for (attoff = 0; attoff <\nIndexRelationGetNumberOfKeyAttributes(idxrel); attoff++)\n+ /* Build scankey for every non-expression attribute in the index. */\n+ for (index_attoff = 0; index_attoff <\nIndexRelationGetNumberOfKeyAttributes(idxrel);\n+ index_attoff++)\n {\n Oid operator;\n Oid opfamily;\n+ Oid optype = get_opclass_input_type(opclass->values[index_attoff]);\n RegProcedure regop;\n- int pkattno = attoff + 1;\n- int mainattno = indkey->values[attoff];\n- Oid optype = get_opclass_input_type(opclass->values[attoff]);\n+ int table_attno = indkey->values[index_attoff];\n\nI don't think here we need to change variable names if we retain\nmainattno as it is instead of changing it to table_attno. The current\nnaming doesn't seem bad for the current usage of the patch.\n\n8.\n+ TypeCacheEntry **eq = NULL; /* only used when the index is not RI or PK */\n\nNormally, we don't add such comments as the usage is quite obvious by\nlooking at the code.\n\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Fri, 3 Mar 2023 11:49:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Thu, 2 Mar 2023 at 20:53, Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n> Hi,\n>\n>>\n>> Is it just a matter of testing other\n>> index types\n>\n>\n> Yes, there are more to it. build_replindex_scan_key()\n> only works for btree indexes, as it does BTEqualStrategyNumber.\n>\n> I might expect a few more limitations like that. I added comment\n> in the code (see FindUsableIndexForReplicaIdentityFull)\n>\n>> or there is something more to it, if so, we should add\n>> comments so that they can be supported in the future if it is feasible\n>> to do so.\n>\n>\n> I really don't see any fundamental issues regarding expanding the\n> support for more index types, it is just some more coding & testing.\n>\n> And, I can (and willing to) work on that as a follow-up. I explicitly\n> try to keep this patch as small as possible.\n>\n>>\n>> >\n>> > Attached are both patches: the main patch, and the patch that\n>> > optionally disables the index scans.\n>> >\n>>\n>> Both the patches are numbered 0001. It would be better to number them\n>> as 0001 and 0002.\n>>\n>\n> Alright, attached v27_0001_use_index_on_subs_when_pub_rep_ident_full.patch and\n> v27_0002_use_index_on_subs_when_pub_rep_ident_full.patch.\n>\n> I also added one more test which Andres asked me on a private chat\n> (Testcase start: SUBSCRIPTION USES INDEX WITH PUB/SUB different data).\n\nThanks for the patch. Few comments:\n1) We are currently calling RelationGetIndexList twice, once in\nFindUsableIndexForReplicaIdentityFull function and in the caller too,\nwe could avoid one of the calls by passing the indexlist to the\nfunction or removing the check here, index list check can be handled\nin FindUsableIndexForReplicaIdentityFull.\n+ if (remoterel->replident == REPLICA_IDENTITY_FULL &&\n+ RelationGetIndexList(localrel) != NIL)\n+ {\n+ /*\n+ * If we had a primary key or relation identity with a\nunique index,\n+ * we would have already found and returned that oid.\nAt this point,\n+ * the remote relation has replica identity full and\nwe have at least\n+ * one local index defined.\n+ *\n+ * We are looking for one more opportunity for using\nan index. If\n+ * there are any indexes defined on the local\nrelation, try to pick\n+ * a suitable index.\n+ *\n+ * The index selection safely assumes that all the\ncolumns are going\n+ * to be available for the index scan given that\nremote relation has\n+ * replica identity full.\n+ */\n+ return FindUsableIndexForReplicaIdentityFull(localrel);\n+ }\n+\n\n2) Copyright year should be mentioned as 2023\ndiff --git a/src/test/subscription/t/032_subscribe_use_index.pl\nb/src/test/subscription/t/032_subscribe_use_index.pl\nnew file mode 100644\nindex 0000000000..db0a7ea2a0\n--- /dev/null\n+++ b/src/test/subscription/t/032_subscribe_use_index.pl\n@@ -0,0 +1,861 @@\n+# Copyright (c) 2021-2022, PostgreSQL Global Development Group\n+\n+# Test logical replication behavior with subscriber uses available index\n+use strict;\n+use warnings;\n+use PostgreSQL::Test::Cluster;\n+use PostgreSQL::Test::Utils;\n+use Test::More;\n+\n\n3) Many of the tests are using the same tables, we need not\ndrop/create publication/subscription for each of the team, we could\njust drop and create required indexes and verify the update/delete\nstatements.\n+# ====================================================================\n+# Testcase start: SUBSCRIPTION USES INDEX\n+#\n+# Basic test where the subscriber uses index\n+# and only updates 1 row and deletes\n+# 1 other row\n+#\n+\n+# create tables pub and sub\n+$node_publisher->safe_psql('postgres',\n+ \"CREATE TABLE test_replica_id_full (x int)\");\n+$node_publisher->safe_psql('postgres',\n+ \"ALTER TABLE test_replica_id_full REPLICA IDENTITY FULL;\");\n+$node_subscriber->safe_psql('postgres',\n+ \"CREATE TABLE test_replica_id_full (x int)\");\n+$node_subscriber->safe_psql('postgres',\n+ \"CREATE INDEX test_replica_id_full_idx ON test_replica_id_full(x)\");\n\n+# ====================================================================\n+# Testcase start: SUBSCRIPTION CREATE/DROP INDEX WORKS WITHOUT ISSUES\n+#\n+# This test ensures that after CREATE INDEX, the subscriber can automatically\n+# use one of the indexes (provided that it fulfils the requirements).\n+# Similarly, after DROP index, the subscriber can automatically switch to\n+# sequential scan\n+\n+# create tables pub and sub\n+$node_publisher->safe_psql('postgres',\n+ \"CREATE TABLE test_replica_id_full (x int NOT NULL, y int)\");\n+$node_publisher->safe_psql('postgres',\n+ \"ALTER TABLE test_replica_id_full REPLICA IDENTITY FULL;\");\n+$node_subscriber->safe_psql('postgres',\n+ \"CREATE TABLE test_replica_id_full (x int NOT NULL, y int)\");\n\n4) These additional blank lines can be removed to keep it consistent:\n4.a)\n+# Testcase end: SUBSCRIPTION DOES NOT USE PARTIAL INDEX\n+# ====================================================================\n+\n+\n+# ====================================================================\n+# Testcase start: SUBSCRIPTION DOES NOT USE INDEXES WITH ONLY EXPRESSIONS\n\n4.b)\n+# Testcase end: Unique index that is not primary key or replica identity\n+# ====================================================================\n+\n+\n+\n+# ====================================================================\n+# Testcase start: SUBSCRIPTION BEHAVIOR WITH ENABLE_INDEXSCAN\n\nRegards,\nVignesh",
"msg_date": "Fri, 3 Mar 2023 12:10:35 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Hou zj, all\n\n\n>\n> 1.\n> + usableIndexContext = AllocSetContextCreate(CurrentMemoryContext,\n> +\n> \"usableIndexContext\",\n> +\n> ALLOCSET_DEFAULT_SIZES);\n> + oldctx = MemoryContextSwitchTo(usableIndexContext);\n> +\n> + /* Get index list of the local relation */\n> + indexlist = RelationGetIndexList(localrel);\n> + Assert(indexlist != NIL);\n> +\n> + foreach(lc, indexlist)\n>\n> Is it necessary to create a memory context here ? I thought the memory\n> will be\n> freed after this apply action soon.\n>\n\nYeah, probably not useful anymore, removed.\n\nIn the earlier versions of this patch, this code block was relying on some\nplanner functions. In that case, it felt safer to use a memory context. Now,\nit seems useless.\n\n\n\n> 2.\n>\n> + /*\n> + * Furthermore, because primary key and unique key\n> indexes can't\n> + * include expressions we also sanity check the\n> index is neither\n> + * of those kinds.\n> + */\n> + Assert(!IdxIsRelationIdentityOrPK(rel,\n> idxrel->rd_id));\n>\n> It seems you mean \"replica identity key\" instead of \"unique key\" in the\n> comments.\n>\n\nRight, I fixed this comment. Though, are you mentioning multiple comment*s*?\nI couldn't\nsee any other in the patch. Let me know if you see.\n\n\n>\n>\n> 3.\n> --- a/src/include/replication/logicalrelation.h\n> +++ b/src/include/replication/logicalrelation.h\n> ...\n> +extern bool IsIndexOnlyOnExpression(IndexInfo *indexInfo);\n>\n> The definition function seems better to be placed in execReplication.c\n>\n\nHmm, why do you think so? 
IsIndexOnlyOnExpression() is used in\nlogical/relation.c, and used for an assertion on execReplication.c\n\nI think it is better suited for relation.c, but let me know about\nyour perspective as well.\n\n\n>\n> 4.\n>\n> +extern Oid GetRelationIdentityOrPK(Relation rel);\n>\n> The function is only used in relation.c, so we can make it a static\n> function.\n>\n>\nIn the recent iteration of the patch (I think v27), we also use this\nfunction in check_relation_updatable() in logical/worker.c.\n\nOne could argue that we can move the definition back to worker.c,\nbut it feels better suited in relation.c, as the parameter of the\nfunction\nis a *Rel*, and the function is looking for a property of a relation.\n\nLet me know if you think otherwise, I don't have strong opinions\non this.\n\n\n>\n> 5.\n>\n> + /*\n> + * If index scans are disabled, use a sequential scan.\n> + *\n> + * Note that we do not use index scans below when enable_indexscan\n> is\n> + * false. Allowing primary key or replica identity even when index\n> scan is\n> + * disabled is the legacy behaviour. So we hesitate to move the\n> below\n> + * enable_indexscan check to be done earlier in this function.\n> + */\n> + if (!enable_indexscan)\n> + return InvalidOid;\n>\n> Since the document of enable_indexscan says \"Enables or disables the query\n> planner's use of index-scan plan types. 
The default is on.\", and we don't\n> use\n> planner here, so I am not sure should we allow/disallow index scan in apply\n> worker based on this GUC.\n>\n>\nGiven Amit's suggestion on [1], I'm planning to drop this check altogether,\nand\nrely on table storage parameters.\n\n(I'll incorporate these changes with a patch that I'm going to reply\nto Peter's e-mail).\n\nThanks,\nOnder\n\n[1]:\nhttps://www.postgresql.org/message-id/CAA4eK1KP-sV4aER51J-2mELjNzq_zVSLf1%2BW90Vu0feo-thVNA%40mail.gmail.com",
"msg_date": "Fri, 3 Mar 2023 10:39:43 +0300",
"msg_from": "Önder Kalacı <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Peter, all\n\n>\n> ======\n> Commit Message\n>\n> 1.\n> There is no smart mechanism to pick the index. Instead, we choose\n> the first index that fulfils the requirements mentioned above.\n>\n> ~\n>\n> 1a.\n> I think this paragraph should immediately follow the earlier one\n> (\"With this patch...\") which talked about the index requirements.\n>\n>\nmakes sense\n\n\n>\n> 1b.\n> Slight rewording\n>\n> SUGGESTION\n> If there is more than one index that satisfies these requirements, we\n> just pick the first one.\n>\n>\napplied\n\n\n> ======\n> doc/src/sgml/logical-replication.sgml\n>\n> 2.\n> A published table must have a “replica identity” configured in order\n> to be able to replicate UPDATE and DELETE operations, so that\n> appropriate rows to update or delete can be identified on the\n> subscriber side. By default, this is the primary key, if there is one.\n> Another unique index (with certain additional requirements) can also\n> be set to be the replica identity. When replica identity FULL is\n> specified, indexes can be used on the subscriber side for searching\n> the rows. These indexes should be btree, non-partial and have at least\n> one column reference (e.g., should not consist of only expressions).\n> These restrictions on the non-unique index properties are in essence\n> the same restrictions that are enforced for primary keys. Internally,\n> we follow the same approach for supporting index scans within logical\n> replication scope. If there are no such suitable indexes, the search\n> on the subscriber s ide can be very inefficient, therefore replica\n> identity FULL should only be used as a fallback if no other solution\n> is possible. If a replica identity other than “full” is set on the\n> publisher side, a replica identity comprising the same or fewer\n> columns must also be set on the subscriber side. See REPLICA IDENTITY\n> for details on how to set the replica identity. 
If a table without a\n> replica identity is added to a publication that replicates UPDATE or\n> DELETE operations then subsequent UPDATE or DELETE operations will\n> cause an error on the publisher. INSERT operations can proceed\n> regardless of any replica identity.\n>\n> ~\n>\n> 2a.\n> IMO the <quote>replica identity</quote> in the first sentence should\n> be changed to be <firstterm>replica identity</firstterm>\n>\n\n\n\n>\n> ~\n>\n> 2b.\n> Typo: \"subscriber s ide\" --> \"subscriber side\"\n>\n\nfixed\n\n\n> 2c.\n> There is still one remaining \"full\" in this text. I think ought to be\n> changed to <literal>FULL</literal> to match the others.\n>\n>\nchanged\n\n\n> ======\n> src/backend/executor/execReplication.c\n>\n> 3. IdxIsRelationIdentityOrPK\n>\n> +/*\n> + * Given a relation and OID of an index, returns true if the\n> + * index is relation's primary key's index or relation's\n> + * replica identity index.\n> + *\n> + * Returns false otherwise.\n> + */\n> +bool\n> +IdxIsRelationIdentityOrPK(Relation rel, Oid idxoid)\n> +{\n> + Assert(OidIsValid(idxoid));\n> +\n> + return RelationGetReplicaIndex(rel) == idxoid ||\n> + RelationGetPrimaryKeyIndex(rel) == idxoid;\n> }\n>\n> ~\n>\n> Since the function name mentions RI (1st) and then PK (2nd), and since\n> the implementation also has the same order, I think the function\n> comment should use the same consistent order when describing what it\n> does.\n>\n\nalright, done\n\n\n>\n> ======\n> src/backend/replication/logical/relation.c\n>\n> 4. FindUsableIndexForReplicaIdentityFull\n>\n> +/*\n> + * Returns an index oid if there is an index that can be used\n> + * via the apply worker. The index should be btree, non-partial\n> + * and have at least one column reference (e.g., should\n> + * not consist of only expressions). 
The limitations arise from\n> + * RelationFindReplTupleByIndex(), which is designed to handle\n> + * PK/RI and these limitations are inherent to PK/RI.\n> + *\n> + * There are no fundamental problems for supporting non-btree\n> + * and/or partial indexes. We should mostly relax the limitations\n> + * in RelationFindReplTupleByIndex().\n> + *\n> + * If no suitable index is found, returns InvalidOid.\n> + *\n> + * Note that this is not a generic function, it expects REPLICA\n> + * IDENTITY FULL for the remote relation.\n> + */\n>\n> ~\n>\n> 4a.\n> Minor rewording of 1st sentence.\n>\n> BEFORE\n> Returns an index oid if there is an index that can be used via the apply\n> worker.\n>\n> SUGGESTION\n> Returns the oid of an index that can be used via the apply worker.\n>\n>\nlooks better, applied\n\n\n>\n> 4b.\n> + * There are no fundamental problems for supporting non-btree\n> + * and/or partial indexes. We should mostly relax the limitations\n> + * in RelationFindReplTupleByIndex().\n>\n> I think this paragraph should come later in the comment (just before\n> the Note) and should also have \"XXX\" prefix to indicate it is some\n> implementation note for future versions.\n>\n>\n>\ndone\n\n\n>\n> 5. GetRelationIdentityOrPK\n>\n> +/*\n> + * Get replica identity index or if it is not defined a primary key.\n> + *\n> + * If neither is defined, returns InvalidOid\n> + */\n> +Oid\n> +GetRelationIdentityOrPK(Relation rel)\n> +{\n> + Oid idxoid;\n> +\n> + idxoid = RelationGetReplicaIndex(rel);\n> +\n> + if (!OidIsValid(idxoid))\n> + idxoid = RelationGetPrimaryKeyIndex(rel);\n> +\n> + return idxoid;\n> +}\n>\n> This is really very similar code to the other new function called\n> IdxIsRelationIdentityOrPK. I wondered if such similar functions could\n> be defined together.\n>\n\nMakes sense, moved them closer, also changed IdxIsRelationIdentityOrPK to\nrely on\nGetRelationIdentityOrPK()\n\n\n>\n> ~~~\n>\n> 6. 
FindLogicalRepUsableIndex\n>\n> +/*\n> + * Returns an index oid if we can use an index for subscriber. If not,\n> + * returns InvalidOid.\n> + */\n>\n> SUGGESTION\n> Returns the oid of an index that can be used by a subscriber.\n> Otherwise, returns InvalidOid.\n>\n\napplied.\n\nNow attaching v28 of the patch, which includes the reviews from this mail\nand [1].\n\nThanks,\nOnder\n\n[1]\nhttps://www.postgresql.org/message-id/OS0PR01MB5716BE4954A99EAF14F4D1F294B39%40OS0PR01MB5716.jpnprd01.prod.outlook.com",
"msg_date": "Fri, 3 Mar 2023 10:39:55 +0300",
"msg_from": "Önder Kalacı <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Dear Önder,\r\n\r\nThanks for updating the patch!\r\nI played with your patch and I confirmed that parallel apply worker and tablesync worker\r\ncould pick the index in typical case.\r\n\r\nFollowings are comments for v27-0001. Please ignore if it is fixed in newer one.\r\n\r\nexecReplication.c\r\n\r\n```\r\n+ /* Build scankey for every non-expression attribute in the index. */\r\n```\r\n\r\nI think that single line comments should not be terminated by \".\".\r\n\r\n```\r\n+ /* There should always be at least one attribute for the index scan. */\r\n```\r\n\r\nSame as above.\r\n\r\n\r\n```\r\n+#ifdef USE_ASSERT_CHECKING\r\n+ IndexInfo *indexInfo = BuildIndexInfo(idxrel);\r\n+\r\n+ Assert(!IsIndexOnlyOnExpression(indexInfo));\r\n+#endif\r\n```\r\n\r\nI may misunderstand, but the condition of usable index has already been checked\r\nwhen the oid was set but anyway you confirmed the condition again before\r\nreally using that, right?\r\nSo is it OK not to check another assumption that the index is b-tree, non-partial,\r\nand one column reference?\r\n\r\nIIUC we can do that by adding new function like IsIndexUsableForReplicaIdentityFull()\r\nthat checks these conditions, and then call at RelationFindReplTupleByIndex() if\r\nidxIsRelationIdentityOrPK is false.\r\n\r\n032_subscribe_use_index.pl\r\n\r\n```\r\n+# Testcase start: SUBSCRIPTION CREATE/DROP INDEX WORKS WITHOUT ISSUES\r\n...\r\n+# Testcase end: SUBSCRIPTION RE-CALCULATES INDEX AFTER CREATE/DROP INDEX\r\n```\r\n\r\nThere is still a non-consistent case.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 3 Mar 2023 08:46:20 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Amit, all\n\n\n> Few comments on 0001\n> ====================\n> 1.\n> + such suitable indexes, the search on the subscriber s ide can be\n> very inefficient,\n>\n> unnecessary space in 'side'\n>\n\nFixed in v28\n\n\n>\n> 2.\n> - identity. If the table does not have any suitable key, then it can be\n> set\n> - to replica identity <quote>full</quote>, which means the entire row\n> becomes\n> - the key. This, however, is very inefficient and should only be used\n> as a\n> - fallback if no other solution is possible. If a replica identity other\n> + identity. When replica identity <literal>FULL</literal> is specified,\n> + indexes can be used on the subscriber side for searching the rows.\n>\n> I think it is better to retain the first sentence (If the table does\n> not ... entire row becomes the key.) as that says what will be part of\n> the key.\n>\n>\nright, that sentence looks useful, added back.\n\n\n> 3.\n> - comprising the same or fewer columns must also be set on the subscriber\n> - side. See <xref linkend=\"sql-altertable-replica-identity\"/> for\n> details on\n> - how to set the replica identity. If a table without a replica\n> identity is\n> - added to a publication that replicates <command>UPDATE</command>\n> + comprising the same or fewer columns must also be set on the\n> subscriber side.\n> + See <xref linkend=\"sql-altertable-replica-identity\"/> for\n> + details on how to set the replica identity. If a table without a\n> replica\n> + identity is added to a publication that replicates\n> <command>UPDATE</command>\n>\n> I don't see any change in this except line length. If so, I don't\n> think we should change it as part of this patch.\n>\n>\nYes, fixed. But the first line (starting with See <xref ..) still shows as\nif it is changed,\nprobably because its line has changed. 
I couldn't make that line show as\nit had not\nchanged.\n\n\n> 4.\n> /*\n> * Setup a ScanKey for a search in the relation 'rel' for a tuple 'key'\n> that\n> * is setup to match 'rel' (*NOT* idxrel!).\n> *\n> - * Returns whether any column contains NULLs.\n> + * Returns how many columns should be used for the index scan.\n> + *\n> + * This is not generic routine, it expects the idxrel to be\n> + * a btree, non-partial and have at least one column\n> + * reference (e.g., should not consist of only expressions).\n> *\n> - * This is not generic routine, it expects the idxrel to be replication\n> - * identity of a rel and meet all limitations associated with that.\n> + * By definition, replication identity of a rel meets all\n> + * limitations associated with that. Note that any other\n> + * index could also meet these limitations.\n>\n> The comment changes look quite asymmetric to me. Normally, we break\n> the line if the line length goes beyond 80 cols. Please check and\n> change other places in the patch if they have a similar symptom.\n>\n\nWent over the patch, and expanded each line ~80 chars.\n\nI'm guessing 80 is not the hard limit, in some cases I went over 81-82.\n\n\n>\n> 5.\n> + * There are no fundamental problems for supporting non-btree\n> + * and/or partial indexes.\n>\n> Can we mention partial indexes in the above comment? It seems to me\n> that because the required tuple may not satisfy the expression (in the\n> case of partial indexes) it may not be easy to support it.\n>\n\nExpanded the comment and explained the differences a little further.\n\n\n>\n> 6.\n> build_replindex_scan_key()\n> {\n> ...\n> + for (index_attoff = 0; index_attoff <\n> IndexRelationGetNumberOfKeyAttributes(idxrel);\n> + index_attoff++)\n> ...\n> ...\n> +#ifdef USE_ASSERT_CHECKING\n> + IndexInfo *indexInfo = BuildIndexInfo(idxrel);\n> +\n> + Assert(!IsIndexOnlyOnExpression(indexInfo));\n> +#endif\n> ...\n> }\n>\n> We can avoid building index info multiple times. 
This can be either\n> checked at the beginning of the function outside attribute offset loop\n> or we can probably cache it. I understand this is for assert builds\n> but seems easy to avoid it doing multiple times and it also looks odd\n> to do it multiple times for the same index.\n>\n\nApplied your suggestions. Although I do not have strong opinions, I think\nthat\nit was easier to follow with building the indexInfo for each iteration.\n\n\n>\n> 7.\n> - /* Build scankey for every attribute in the index. */\n> - for (attoff = 0; attoff <\n> IndexRelationGetNumberOfKeyAttributes(idxrel); attoff++)\n> + /* Build scankey for every non-expression attribute in the index. */\n> + for (index_attoff = 0; index_attoff <\n> IndexRelationGetNumberOfKeyAttributes(idxrel);\n> + index_attoff++)\n> {\n> Oid operator;\n> Oid opfamily;\n> + Oid optype = get_opclass_input_type(opclass->values[index_attoff]);\n> RegProcedure regop;\n> - int pkattno = attoff + 1;\n> - int mainattno = indkey->values[attoff];\n> - Oid optype = get_opclass_input_type(opclass->values[attoff]);\n> + int table_attno = indkey->values[index_attoff];\n>\n> I don't think here we need to change variable names if we retain\n> mainattno as it is instead of changing it to table_attno. The current\n> naming doesn't seem bad for the current usage of the patch.\n>\n\nHmm, I'm actually not convinced that the variable naming on HEAD is good for\nthe current patch. The main difference is that now we allow indexes like:\n * CREATE INDEX idx ON table(foo(col), col_2)*\n\n(See # Testcase start: SUBSCRIPTION CAN USE INDEXES WITH\nEXPRESSIONS AND COLUMNS)\n\nIn such indexes, we could skip the attributes of the index. So, skey_attoff\nis not\nequal to index_attoff anymore. So, calling out this explicitly via the\nvariable names\nseems more robust to me. Plus, mainattno sounded vague to me when I first\nread\nthis function.\n\nSo, unless you have strong objections, I'm leaning towards having variable\nnames more explicit. 
I'm also open for suggestions if you think the names\nI picked is not clear enough.\n\n\n>\n> 8.\n> + TypeCacheEntry **eq = NULL; /* only used when the index is not RI or PK\n> */\n>\n> Normally, we don't add such comments as the usage is quite obvious by\n> looking at the code.\n>\n>\nSure, I also don't see much value for it, removed.\n\nAttached v29 for this review. Note that I'll be working on the disable\nindex scan changes after\nI reply to some of the other pending reviews.\n\nThanks,\nOnder",
"msg_date": "Fri, 3 Mar 2023 12:32:39 +0300",
"msg_from": "Önder Kalacı <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Fri, Mar 3, 2023 at 3:02 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>>\n>> 7.\n>> - /* Build scankey for every attribute in the index. */\n>> - for (attoff = 0; attoff <\n>> IndexRelationGetNumberOfKeyAttributes(idxrel); attoff++)\n>> + /* Build scankey for every non-expression attribute in the index. */\n>> + for (index_attoff = 0; index_attoff <\n>> IndexRelationGetNumberOfKeyAttributes(idxrel);\n>> + index_attoff++)\n>> {\n>> Oid operator;\n>> Oid opfamily;\n>> + Oid optype = get_opclass_input_type(opclass->values[index_attoff]);\n>> RegProcedure regop;\n>> - int pkattno = attoff + 1;\n>> - int mainattno = indkey->values[attoff];\n>> - Oid optype = get_opclass_input_type(opclass->values[attoff]);\n>> + int table_attno = indkey->values[index_attoff];\n>>\n>> I don't think here we need to change variable names if we retain\n>> mainattno as it is instead of changing it to table_attno. The current\n>> naming doesn't seem bad for the current usage of the patch.\n>\n>\n> Hmm, I'm actually not convinced that the variable naming on HEAD is good for\n> the current patch. The main difference is that now we allow indexes like:\n> CREATE INDEX idx ON table(foo(col), col_2)\n>\n> (See # Testcase start: SUBSCRIPTION CAN USE INDEXES WITH\n> EXPRESSIONS AND COLUMNS)\n>\n> In such indexes, we could skip the attributes of the index. So, skey_attoff is not\n> equal to index_attoff anymore. So, calling out this explicitly via the variable names\n> seems more robust to me. Plus, mainattno sounded vague to me when I first read\n> this function.\n>\n\nYeah, I understand this part. By looking at the diff, it appeared to\nme that this was an unnecessary change. Anyway, I see your point, so\nif you want to keep the naming as you proposed at least don't change\nthe ordering for get_opclass_input_type() call because that looks odd\nto me.\n>\n> Attached v29 for this review. 
Note that I'll be working on the disable index scan changes after\n>\n\nOkay, thanks!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 3 Mar 2023 16:04:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Vignesh,\n\nThanks for the review\n\n\n> 1) We are currently calling RelationGetIndexList twice, once in\n> FindUsableIndexForReplicaIdentityFull function and in the caller too,\n> we could avoid one of the calls by passing the indexlist to the\n> function or removing the check here, index list check can be handled\n> in FindUsableIndexForReplicaIdentityFull.\n> + if (remoterel->replident == REPLICA_IDENTITY_FULL &&\n> + RelationGetIndexList(localrel) != NIL)\n> + {\n> + /*\n> + * If we had a primary key or relation identity with a\n> unique index,\n> + * we would have already found and returned that oid.\n> At this point,\n> + * the remote relation has replica identity full and\n> we have at least\n> + * one local index defined.\n> + *\n> + * We are looking for one more opportunity for using\n> an index. If\n> + * there are any indexes defined on the local\n> relation, try to pick\n> + * a suitable index.\n> + *\n> + * The index selection safely assumes that all the\n> columns are going\n> + * to be available for the index scan given that\n> remote relation has\n> + * replica identity full.\n> + */\n> + return FindUsableIndexForReplicaIdentityFull(localrel);\n> + }\n> +\n>\n\nmakes sense, done\n\n\n>\n> 2) Copyright year should be mentioned as 2023\n> diff --git a/src/test/subscription/t/032_subscribe_use_index.pl\n> b/src/test/subscription/t/032_subscribe_use_index.pl\n> new file mode 100644\n> index 0000000000..db0a7ea2a0\n> --- /dev/null\n> +++ b/src/test/subscription/t/032_subscribe_use_index.pl\n> @@ -0,0 +1,861 @@\n> +# Copyright (c) 2021-2022, PostgreSQL Global Development Group\n> +\n> +# Test logical replication behavior with subscriber uses available index\n> +use strict;\n> +use warnings;\n> +use PostgreSQL::Test::Cluster;\n> +use PostgreSQL::Test::Utils;\n> +use Test::More;\n> +\n>\n\nI changed it to #Copyright (c) 2022-2023, but I'm not sure if it should be\nonly 2023 or\nlike this.\n\n\n>\n> 3) Many of the tests are using the same 
tables, we need not\n> drop/create publication/subscription for each of the team, we could\n> just drop and create required indexes and verify the update/delete\n> statements.\n> +# ====================================================================\n> +# Testcase start: SUBSCRIPTION USES INDEX\n> +#\n> +# Basic test where the subscriber uses index\n> +# and only updates 1 row and deletes\n> +# 1 other row\n> +#\n> +\n> +# create tables pub and sub\n> +$node_publisher->safe_psql('postgres',\n> + \"CREATE TABLE test_replica_id_full (x int)\");\n> +$node_publisher->safe_psql('postgres',\n> + \"ALTER TABLE test_replica_id_full REPLICA IDENTITY FULL;\");\n> +$node_subscriber->safe_psql('postgres',\n> + \"CREATE TABLE test_replica_id_full (x int)\");\n> +$node_subscriber->safe_psql('postgres',\n> + \"CREATE INDEX test_replica_id_full_idx ON\n> test_replica_id_full(x)\");\n>\n> +# ====================================================================\n> +# Testcase start: SUBSCRIPTION CREATE/DROP INDEX WORKS WITHOUT ISSUES\n> +#\n> +# This test ensures that after CREATE INDEX, the subscriber can\n> automatically\n> +# use one of the indexes (provided that it fulfils the requirements).\n> +# Similarly, after DROP index, the subscriber can automatically switch to\n> +# sequential scan\n> +\n> +# create tables pub and sub\n> +$node_publisher->safe_psql('postgres',\n> + \"CREATE TABLE test_replica_id_full (x int NOT NULL, y int)\");\n> +$node_publisher->safe_psql('postgres',\n> + \"ALTER TABLE test_replica_id_full REPLICA IDENTITY FULL;\");\n> +$node_subscriber->safe_psql('postgres',\n> + \"CREATE TABLE test_replica_id_full (x int NOT NULL, y int)\");\n>\n\nWell, not all the tables are exactly the same, there are 4-5 different\ntables. Mostly the table names are the same.\n\nPlus, the overhead does not seem to be large enough to complicate\nthe test. 
Many of the src/test/subscription/t files follow this pattern.\n\nDo you have strong opinions on changing this?\n\n\n> 4) These additional blank lines can be removed to keep it consistent:\n> 4.a)\n> +# Testcase end: SUBSCRIPTION DOES NOT USE PARTIAL INDEX\n> +# ====================================================================\n> +\n> +\n> +# ====================================================================\n> +# Testcase start: SUBSCRIPTION DOES NOT USE INDEXES WITH ONLY EXPRESSIONS\n>\n> 4.b)\n> +# Testcase end: Unique index that is not primary key or replica identity\n> +# ====================================================================\n> +\n> +\n> +\n> +# ====================================================================\n> +# Testcase start: SUBSCRIPTION BEHAVIOR WITH ENABLE_INDEXSCAN\n>\n\nThanks, fixed.\n\nAttached v30",
"msg_date": "Fri, 3 Mar 2023 16:10:33 +0300",
"msg_from": "Önder Kalacı <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Hayato, Amit, all\n\n\n>\n>\n> ```\n> + /* Build scankey for every non-expression attribute in the index.\n> */\n> ```\n>\n> I think that single line comments should not terminated by \".\".\n>\n\nHmm, looking into execReplication.c, many of the single line comments\nterminated by \".\". Also, On the HEAD, the same comment has single\nline comment. So, I'd rather stick to that?\n\n\n\n>\n> ```\n> + /* There should always be at least one attribute for the index\n> scan. */\n> ```\n>\n> Same as above.\n>\n\nSame as above :)\n\n\n>\n>\n> ```\n> +#ifdef USE_ASSERT_CHECKING\n> + IndexInfo *indexInfo = BuildIndexInfo(idxrel);\n> +\n> + Assert(!IsIndexOnlyOnExpression(indexInfo));\n> +#endif\n> ```\n>\n> I may misunderstand, but the condition of usable index has alteady been\n> checked\n> when the oid was set but anyway the you confirmed the condition again\n> before\n> really using that, right?\n> So is it OK not to check another assumption that the index is b-tree,\n> non-partial,\n> and one column reference?\n\nIIUC we can do that by adding new function like\n> IsIndexUsableForReplicaIdentityFull()\n> that checks these condition, and then call at\n> RelationFindReplTupleByIndex() if\n> idxIsRelationIdentityOrPK is false.\n>\n\nI think adding a function like IsIndexUsableForReplicaIdentityFull is\nuseful. I can use it inside\nFindUsableIndexForReplicaIdentityFull() and assert here. 
Also good for\nreadability.\n\nSo, I mainly moved this assert to a more generic place with a more generic\ncheck\nto RelationFindReplTupleByIndex\n\n\n>\n> 032_subscribe_use_index.pl\n>\n> ```\n> +# Testcase start: SUBSCRIPTION CREATE/DROP INDEX WORKS WITHOUT ISSUES\n> ...\n> +# Testcase end: SUBSCRIPTION RE-CALCULATES INDEX AFTER CREATE/DROP INDEX\n> ```\n>\n> There is still non-consistent case.\n>\n>\nFixed, thanks\n\nAnyway, I see your point, so\n> if you want to keep the naming as you proposed at least don't change\n> the ordering for get_opclass_input_type() call because that looks odd\n> to me.\n\n\n(A small comment from Amit's previous e-mail)\n\nSure, applied now.\n\nAttaching v31.\n\nThanks for the reviews!\nOnder KALACI",
"msg_date": "Fri, 3 Mar 2023 17:15:49 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Fri, Mar 3, 2023 at 1:09 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n>>\n>> 5.\n>>\n>> + /*\n>> + * If index scans are disabled, use a sequential scan.\n>> + *\n>> + * Note that we do not use index scans below when enable_indexscan is\n>> + * false. Allowing primary key or replica identity even when index scan is\n>> + * disabled is the legacy behaviour. So we hesitate to move the below\n>> + * enable_indexscan check to be done earlier in this function.\n>> + */\n>> + if (!enable_indexscan)\n>> + return InvalidOid;\n>>\n>> Since the document of enable_indexscan says \"Enables or disables the query\n>> planner's use of index-scan plan types. The default is on.\", and we don't use\n>> planner here, so I am not sure should we allow/disallow index scan in apply\n>> worker based on this GUC.\n>>\n>\n> Given Amit's suggestion on [1], I'm planning to drop this check altogether, and\n> rely on table storage parameters.\n>\n\nThis still seems to be present in the latest version. I think we can\njust remove this and then add the additional check as suggested by you\nas part of the second patch.\n\nFew other comments on latest version:\n==============================\n1.\n+/*\n+ * Returns true if the index is usable for replica identity full. For details,\n+ * see FindUsableIndexForReplicaIdentityFull.\n+ */\n+bool\n+IsIndexUsableForReplicaIdentityFull(IndexInfo *indexInfo)\n+{\n+ bool is_btree = (indexInfo->ii_Am == BTREE_AM_OID);\n+ bool is_partial = (indexInfo->ii_Predicate != NIL);\n+ bool is_only_on_expression = IsIndexOnlyOnExpression(indexInfo);\n+\n+ if (is_btree && !is_partial && !is_only_on_expression)\n+ {\n+ return true;\n...\n...\n+/*\n+ * Returns the oid of an index that can be used via the apply worker. The index\n+ * should be btree, non-partial and have at least one column reference (e.g.,\n+ * should not consist of only expressions). 
The limitations arise from\n+ * RelationFindReplTupleByIndex(), which is designed to handle PK/RI and these\n+ * limitations are inherent to PK/RI.\n\nBy these two, the patch infers that it picks an index that adheres to\nthe limitations of PK/RI. Apart from unique, the other properties of\nRI are \"not partial, not deferrable, and include only columns marked\nNOT NULL\". See ATExecReplicaIdentity() for corresponding checks. We\ndon't try to ensure the last two from the list. It is fine to do so if\nwe document the reasons for the same in comments or we can even try to\nenforce the remaining restrictions as well. For example, it should be\nokay to allow NULL column values because we anyway compare the entire\ntuple after getting the value from the index.\n\n2.\n+ {\n+ /*\n+ * This attribute is an expression, and\n+ * FindUsableIndexForReplicaIdentityFull() was called earlier\n+ * when the index for subscriber was selected. There, the indexes\n+ * comprising *only* expressions have already been eliminated.\n+ *\n+ * Also, because PK/RI can't include expressions we\n+ * sanity check the index is neither of those kinds.\n+ */\n+ Assert(!IdxIsRelationIdentityOrPK(rel, idxrel->rd_id));\n\nThis comment doesn't make much sense after you have moved the\ncorresponding Assert in RelationFindReplTupleByIndex(). Either we\nshould move or remove this Assert as well or at least update the\ncomments to reflect the latest code.\n\n3. When FindLogicalRepUsableIndex() is invoked from\nlogicalrep_partition_open(), the current memory context would be\nLogicalRepPartMapContext which would be a long-lived context and we\nallocate memory for indexes in FindLogicalRepUsableIndex() which can\naccumulate over a period of time. So, I think it would be better to\nswitch to the old context in logicalrep_partition_open() before\ninvoking FindLogicalRepUsableIndex() provided that is not a long-lived\ncontext.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 4 Mar 2023 16:03:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Here are some review comments for v28-0001.\n\n======\ndoc/src/sgml/logical-replication.sgml\n\n1.\nA published table must have a replica identity configured in order to\nbe able to replicate UPDATE and DELETE operations, so that appropriate\nrows to update or delete can be identified on the subscriber side. By\ndefault, this is the primary key, if there is one. Another unique\nindex (with certain additional requirements) can also be set to be the\nreplica identity. When replica identity FULL is specified, indexes can\nbe used on the subscriber side for searching the rows. These indexes\nshould be btree, non-partial and have at least one column reference\n(e.g., should not consist of only expressions). These restrictions on\nthe non-unique index properties are in essence the same restrictions\nthat are enforced for primary keys. Internally, we follow the same\napproach for supporting index scans within logical replication scope.\nIf there are no such suitable indexes, the search on the subscriber\nside can be very inefficient, therefore replica identity FULL should\nonly be used as a fallback if no other solution is possible. If a\nreplica identity other than full is set on the publisher side, a\nreplica identity comprising the same or fewer columns must also be set\non the subscriber side. See REPLICA IDENTITY for details on how to set\nthe replica identity. If a table without a replica identity is added\nto a publication that replicates UPDATE or DELETE operations then\nsubsequent UPDATE or DELETE operations will cause an error on the\npublisher. 
INSERT operations can proceed regardless of any replica\nidentity.\n\n~\n\n1a.\nChanges include:\n\"should\" --> \"must\"\n\"e.g.\" --> \"i.e.\"\n\nBEFORE\nThese indexes should be btree, non-partial and have at least one\ncolumn reference (e.g., should not consist of only expressions).\n\nSUGGESTION\nCandidate indexes must be btree, non-partial, and have at least one\ncolumn reference (i.e., cannot consist of only expressions).\n\n~\n\n1b.\nThe fix for my v27 review comment #2b (changing \"full\" to FULL) was\nnot made correctly. It should be uppercase FULL, not full:\n\"other than full\" --> \"other than FULL\"\n\n======\nsrc/backend/executor/execReplication.c\n\n2.\n /*\n * Setup a ScanKey for a search in the relation 'rel' for a tuple 'key' that\n * is setup to match 'rel' (*NOT* idxrel!).\n *\n- * Returns whether any column contains NULLs.\n+ * Returns how many columns should be used for the index scan.\n+ *\n+ * This is not generic routine, it expects the idxrel to be\n+ * a btree, non-partial and have at least one column\n+ * reference (e.g., should not consist of only expressions).\n *\n- * This is not generic routine, it expects the idxrel to be replication\n- * identity of a rel and meet all limitations associated with that.\n+ * By definition, replication identity of a rel meets all\n+ * limitations associated with that. Note that any other\n+ * index could also meet these limitations.\n */\n-static bool\n+static int\n build_replindex_scan_key(ScanKey skey, Relation rel, Relation idxrel,\n TupleTableSlot *searchslot)\n\n~\n\n\"(e.g., should not consist of only expressions)\" --> \"(i.e., cannot\nconsist of only expressions)\"\n\n======\nsrc/backend/replication/logical/relation.c\n\n3. FindUsableIndexForReplicaIdentityFull\n\n+/*\n+ * Returns the oid of an index that can be used via the apply\n+ * worker. The index should be btree, non-partial and have at\n+ * least one column reference (e.g., should not consist of\n+ * only expressions). 
The limitations arise from\n+ * RelationFindReplTupleByIndex(), which is designed to handle\n+ * PK/RI and these limitations are inherent to PK/RI.\n\nThe 2nd sentence of this comment should match the same changes in the\nCommit message --- \"must not\" instead of \"should not\", \"i.e.\" instead\nof \"e.g.\", etc. See the review comment #1a above.\n\n~~~\n\n4. IdxIsRelationIdentityOrPK\n\n+/*\n+ * Given a relation and OID of an index, returns true if the\n+ * index is relation's replica identity index or relation's\n+ * primary key's index.\n+ *\n+ * Returns false otherwise.\n+ */\n+bool\n+IdxIsRelationIdentityOrPK(Relation rel, Oid idxoid)\n+{\n+ Assert(OidIsValid(idxoid));\n+\n+ return GetRelationIdentityOrPK(rel) == idxoid;\n+}\n\nI think you've \"simplified\" this function in v28 but AFAICT now it has\na different logic to v27.\n\nPREVIOUSLY it was coded like\n+ return RelationGetReplicaIndex(rel) == idxoid ||\n+ RelationGetPrimaryKeyIndex(rel) == idxoid;\n\nYou can see if 'idxoid' did NOT match RI but if it DID match PK\npreviously it would return true. But now in that scenario, it won't\neven check the PK if there was a valid RI. So it might return false\nwhen previously it returned true. Is it deliberate?\n\n======\n.../subscription/t/032_subscribe_use_index.pl\n\n5.\n+# Testcase start: SUBSCRIPTION USES INDEX WITH PUB/SUB different data\n+#\n+# The subscriber has duplicate tuples that publisher does not have.\n+# When publsher updates/deletes 1 row, subscriber uses indexes and\n+# exactly updates/deletes 1 row.\n\n\"and exactly updates/deletes 1 row.\" --> \"and updates/deletes exactly 1 row.\"\n\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 6 Mar 2023 15:41:54 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Mon, Mar 6, 2023 at 10:12 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> 4. IdxIsRelationIdentityOrPK\n>\n> +/*\n> + * Given a relation and OID of an index, returns true if the\n> + * index is relation's replica identity index or relation's\n> + * primary key's index.\n> + *\n> + * Returns false otherwise.\n> + */\n> +bool\n> +IdxIsRelationIdentityOrPK(Relation rel, Oid idxoid)\n> +{\n> + Assert(OidIsValid(idxoid));\n> +\n> + return GetRelationIdentityOrPK(rel) == idxoid;\n> +}\n>\n> I think you've \"simplified\" this function in v28 but AFAICT now it has\n> a different logic to v27.\n>\n> PREVIOUSLY it was coded like\n> + return RelationGetReplicaIndex(rel) == idxoid ||\n> + RelationGetPrimaryKeyIndex(rel) == idxoid;\n>\n> You can see if 'idxoid' did NOT match RI but if it DID match PK\n> previously it would return true. But now in that scenario, it won't\n> even check the PK if there was a valid RI. So it might return false\n> when previously it returned true. Is it deliberate?\n>\n\nI don't see any problem with this because by default PK will be a\nreplica identity. So only if the user specifies the replica identity\nfull or changes the replica identity to some other index, we will try\nto get PK which seems valid for this case. Am, I missing something\nwhich makes this code do something bad?\n\nFew other comments on latest code:\n============================\n1.\n <para>\n- A published table must have a <quote>replica identity</quote> configured in\n+ A published table must have a <firstterm>replica\nidentity</firstterm> configured in\n\nHow the above change is related to this patch?\n\n2.\n certain additional requirements) can also be set to be the replica\n- identity. If the table does not have any suitable key, then it can be set\n+ identity. If the table does not have any suitable key, then it can be set\n\nI think we should change the spacing of existing docs (two spaces\nafter fullstop to one space) and that too inconsistently. 
I suggest to\nadd new changes with same spacing as existing doc. If you are adding\nentirely new section then we can consider differently.\n\n3.\n to replica identity <quote>full</quote>, which means the entire row becomes\n- the key. This, however, is very inefficient and should only be used as a\n- fallback if no other solution is possible. If a replica identity other\n- than <quote>full</quote> is set on the publisher side, a replica identity\n- comprising the same or fewer columns must also be set on the subscriber\n- side. See <xref linkend=\"sql-altertable-replica-identity\"/> for details on\n+ the key. When replica identity <literal>FULL</literal> is specified,\n+ indexes can be used on the subscriber side for searching the rows. These\n\nShouldn't specifying <literal>FULL</literal> be consistent with existing docs?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 6 Mar 2023 12:14:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Mon, Mar 6, 2023 at 5:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Mar 6, 2023 at 10:12 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > 4. IdxIsRelationIdentityOrPK\n> >\n> > +/*\n> > + * Given a relation and OID of an index, returns true if the\n> > + * index is relation's replica identity index or relation's\n> > + * primary key's index.\n> > + *\n> > + * Returns false otherwise.\n> > + */\n> > +bool\n> > +IdxIsRelationIdentityOrPK(Relation rel, Oid idxoid)\n> > +{\n> > + Assert(OidIsValid(idxoid));\n> > +\n> > + return GetRelationIdentityOrPK(rel) == idxoid;\n> > +}\n> >\n> > I think you've \"simplified\" this function in v28 but AFAICT now it has\n> > a different logic to v27.\n> >\n> > PREVIOUSLY it was coded like\n> > + return RelationGetReplicaIndex(rel) == idxoid ||\n> > + RelationGetPrimaryKeyIndex(rel) == idxoid;\n> >\n> > You can see if 'idxoid' did NOT match RI but if it DID match PK\n> > previously it would return true. But now in that scenario, it won't\n> > even check the PK if there was a valid RI. So it might return false\n> > when previously it returned true. Is it deliberate?\n> >\n>\n> I don't see any problem with this because by default PK will be a\n> replica identity. So only if the user specifies the replica identity\n> full or changes the replica identity to some other index, we will try\n> to get PK which seems valid for this case. 
Am, I missing something\n> which makes this code do something bad?\n\nI don't know if there is anything bad; the point was that the function\nnow seems to require a deeper understanding of the interrelationship\nof RelationGetReplicaIndex and RelationGetPrimaryKeyIndex, which is\nsomething the previous implementation did not require.\n\n>\n> Few other comments on latest code:\n> ============================\n> 1.\n> <para>\n> - A published table must have a <quote>replica identity</quote> configured in\n> + A published table must have a <firstterm>replica\n> identity</firstterm> configured in\n>\n> How the above change is related to this patch?\n\nThat comes from a previous suggestion of mine. Strictly speaking, it\nis unrelated to this patch. But since we are modifying this paragraph\nin a major way anyhow it seemed harmless to just fix this in passing\ntoo. OTOH I could make another patch for this but it seemed like\nunnecessary extra work.\n\n>\n> 2.\n> certain additional requirements) can also be set to be the replica\n> - identity. If the table does not have any suitable key, then it can be set\n> + identity. If the table does not have any suitable key, then it can be set\n>\n> I think we should change the spacing of existing docs (two spaces\n> after fullstop to one space) and that too inconsistently. I suggest to\n> add new changes with same spacing as existing doc. If you are adding\n> entirely new section then we can consider differently.\n>\n> 3.\n> to replica identity <quote>full</quote>, which means the entire row becomes\n> - the key. This, however, is very inefficient and should only be used as a\n> - fallback if no other solution is possible. If a replica identity other\n> - than <quote>full</quote> is set on the publisher side, a replica identity\n> - comprising the same or fewer columns must also be set on the subscriber\n> - side. See <xref linkend=\"sql-altertable-replica-identity\"/> for details on\n> + the key. 
When replica identity <literal>FULL</literal> is specified,\n> + indexes can be used on the subscriber side for searching the rows. These\n>\n> Shouldn't specifying <literal>FULL</literal> be consistent with existing docs?\n>\n\nThat comes from a previous suggestion of mine too. The RI is specified\nas FULL, not \"full\".\nSee https://www.postgresql.org/docs/current/sql-altertable.html#SQL-ALTERTABLE-REPLICA-IDENTITY\n\nSure, there was an existing <quote>full</quote> in this paragraph so\nstrictly speaking the RI patch could follow that style. But IMO that\nstyle was wrong so all we are doing is compounding the mistake instead of\njust fixing everything in passing. OTOH, I could make a separate patch\nto fix \"full\" to FULL, but again that seemed like unnecessary extra\nwork.\n\n~\n\nAnyhow, if you feel those firstterm and FULL changes ought to be kept\nseparate from this RI patch, please let me know and I will propose\nthose changes in a new thread.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 6 Mar 2023 19:10:17 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Mon, Mar 6, 2023 at 1:40 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Mar 6, 2023 at 5:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Mar 6, 2023 at 10:12 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > 4. IdxIsRelationIdentityOrPK\n> > >\n> > > +/*\n> > > + * Given a relation and OID of an index, returns true if the\n> > > + * index is relation's replica identity index or relation's\n> > > + * primary key's index.\n> > > + *\n> > > + * Returns false otherwise.\n> > > + */\n> > > +bool\n> > > +IdxIsRelationIdentityOrPK(Relation rel, Oid idxoid)\n> > > +{\n> > > + Assert(OidIsValid(idxoid));\n> > > +\n> > > + return GetRelationIdentityOrPK(rel) == idxoid;\n> > > +}\n> > >\n> > > I think you've \"simplified\" this function in v28 but AFAICT now it has\n> > > a different logic to v27.\n> > >\n> > > PREVIOUSLY it was coded like\n> > > + return RelationGetReplicaIndex(rel) == idxoid ||\n> > > + RelationGetPrimaryKeyIndex(rel) == idxoid;\n> > >\n> > > You can see if 'idxoid' did NOT match RI but if it DID match PK\n> > > previously it would return true. But now in that scenario, it won't\n> > > even check the PK if there was a valid RI. So it might return false\n> > > when previously it returned true. Is it deliberate?\n> > >\n> >\n> > I don't see any problem with this because by default PK will be a\n> > replica identity. So only if the user specifies the replica identity\n> > full or changes the replica identity to some other index, we will try\n> > to get PK which seems valid for this case. 
Am, I missing something\n> > which makes this code do something bad?\n>\n> I don't know if there is anything bad; the point was that the function\n> now seems to require a deeper understanding of the interrelationship\n> of RelationGetReplicaIndex and RelationGetPrimaryKeyIndex, which is\n> something the previous implementation did not require.\n>\n\nBut the same understanding is required for the existing function\nGetRelationIdentityOrPK(), so I feel it is better to be consistent\nunless we see some problem here.\n\n>\n> Anyhow, if you feel those firstterm and FULL changes ought to be kept\n> separate from this RI patch, please let me know and I will propose\n> those changes in a new thread,\n>\n\nPersonally, I would prefer to keep those separate. So, feel free to\npropose them in a new thread.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 6 Mar 2023 14:10:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Amit, all\n\n\n> >\n> > Given Amit's suggestion on [1], I'm planning to drop this check\n> altogether, and\n> > rely on table storage parameters.\n> >\n>\n> This still seems to be present in the latest version. I think we can\n> just remove this and then add the additional check as suggested by you\n> as part of the second patch.\n>\n>\nNow attaching the second patch as well, which implements a new\nstorage parameter as\nyou suggested earlier.\n\nI'm open for naming suggestions, I wanted to make the name explicit, so it\nis a little long.\n\nI'm also not very familiar with the sgml format. I mostly followed the\nexisting docs and\nbuilt the docs for inspection, but it would be good to have a look into\nthat part\na little bit further in case there I missed something important etc.\n\n\n\n> Few other comments on latest version:\n> ==============================\n> 1.\n> +/*\n> + * Returns true if the index is usable for replica identity full. For\n> details,\n> + * see FindUsableIndexForReplicaIdentityFull.\n> + */\n> +bool\n> +IsIndexUsableForReplicaIdentityFull(IndexInfo *indexInfo)\n> +{\n> + bool is_btree = (indexInfo->ii_Am == BTREE_AM_OID);\n> + bool is_partial = (indexInfo->ii_Predicate != NIL);\n> + bool is_only_on_expression = IsIndexOnlyOnExpression(indexInfo);\n> +\n> + if (is_btree && !is_partial && !is_only_on_expression)\n> + {\n> + return true;\n> ...\n> ...\n> +/*\n> + * Returns the oid of an index that can be used via the apply worker. The\n> index\n> + * should be btree, non-partial and have at least one column reference\n> (e.g.,\n> + * should not consist of only expressions). The limitations arise from\n> + * RelationFindReplTupleByIndex(), which is designed to handle PK/RI and\n> these\n> + * limitations are inherent to PK/RI.\n>\n> By these two, the patch infers that it picks an index that adheres to\n> the limitations of PK/RI. 
Apart from unique, the other properties of\n> RI are \"not partial, not deferrable, and include only columns marked\n> NOT NULL\". See ATExecReplicaIdentity() for corresponding checks. We\n> don't try to ensure the last two from the list. It is fine to do so if\n> we document the reasons for the same in comments or we can even try to\n> enforce the remaining restrictions as well. For example, it should be\n> okay to allow NULL column values because we anyway compare the entire\n> tuple after getting the value from the index.\n>\n\nI think this is a documentation issue of this patch. I improved the wording\na bit\nmore. Does that look better?\n\nI also went over the code / docs to see if we have\nany other such documentation issues, I also\nupdated logical-replication.sgml.\n\nI'd prefer to support NULL values as there is no harm in doing that and it\nis\npretty useful feature (we also have tests covering it).\n\nTo my knowledge, I don't see any problems with deferrable as we are only\ninterested in the indexes, not with the constraint. Still, if you see any,\nI can\nadd the check for that. (Here, the user could still have unique index that\nis associated with a constraint, but still, I don't see any edge cases\nregarding deferrable constraints).\n\n\n\n> 2.\n> + {\n> + /*\n> + * This attribute is an expression, and\n> + * FindUsableIndexForReplicaIdentityFull() was called earlier\n> + * when the index for subscriber was selected. There, the indexes\n> + * comprising *only* expressions have already been eliminated.\n> + *\n> + * Also, because PK/RI can't include expressions we\n> + * sanity check the index is neither of those kinds.\n> + */\n> + Assert(!IdxIsRelationIdentityOrPK(rel, idxrel->rd_id));\n>\n> This comment doesn't make much sense after you have moved the\n> corresponding Assert in RelationFindReplTupleByIndex(). 
Either we\n> should move or remove this Assert as well or at least update the\n> comments to reflect the latest code.\n>\n\nI think removing that Assert is fine after having a more generic\nAssert in RelationFindReplTupleByIndex().\n\nI mostly left that comment so that the meaning of\nAttributeNumberIsValid() is easier for readers to follow. But, now\nI'm also leaning towards removing the comment and Assert.\n\n\n>\n> 3. When FindLogicalRepUsableIndex() is invoked from\n> logicalrep_partition_open(), the current memory context would be\n> LogicalRepPartMapContext which would be a long-lived context and we\n> allocate memory for indexes in FindLogicalRepUsableIndex() which can\n> accumulate over a period of time. So, I think it would be better to\n> switch to the old context in logicalrep_partition_open() before\n> invoking FindLogicalRepUsableIndex() provided that is not a long-lived\n> context.\n>\n>\n>\nHmm, makes sense, that'd avoid any potential leaks that this patch\nmight bring. Applied your suggestion. Also, looking at the same function\ncall in logicalrep_rel_open(), that already seems safe regarding leaks. Do\nyou see any problems with that?\n\n\n\nAttached v32. I'll continue replying to the e-mails on this thread with\ndifferent\npatches. I'm assuming this is easier for you to review such that we have\ndifferent\npatches for each review. If not, please let me know, I can reply to all\nmails\nat once.\n\nThanks,\nOnder KALACI",
"msg_date": "Mon, 6 Mar 2023 12:08:20 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Fri, Mar 3, 2023 at 6:40 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n> Hi Vignesh,\n>\n> Thanks for the review\n>\n>>\n>> 1) We are currently calling RelationGetIndexList twice, once in\n>> FindUsableIndexForReplicaIdentityFull function and in the caller too,\n>> we could avoid one of the calls by passing the indexlist to the\n>> function or removing the check here, index list check can be handled\n>> in FindUsableIndexForReplicaIdentityFull.\n>> + if (remoterel->replident == REPLICA_IDENTITY_FULL &&\n>> + RelationGetIndexList(localrel) != NIL)\n>> + {\n>> + /*\n>> + * If we had a primary key or relation identity with a\n>> unique index,\n>> + * we would have already found and returned that oid.\n>> At this point,\n>> + * the remote relation has replica identity full and\n>> we have at least\n>> + * one local index defined.\n>> + *\n>> + * We are looking for one more opportunity for using\n>> an index. If\n>> + * there are any indexes defined on the local\n>> relation, try to pick\n>> + * a suitable index.\n>> + *\n>> + * The index selection safely assumes that all the\n>> columns are going\n>> + * to be available for the index scan given that\n>> remote relation has\n>> + * replica identity full.\n>> + */\n>> + return FindUsableIndexForReplicaIdentityFull(localrel);\n>> + }\n>> +\n>\n> makes sense, done\n>\n\nToday, I was looking at this comment and the fix for it. It seems to\nme that it would be better to not add the check (indexlist != NIL)\nhere and rather get the indexlist in\nFindUsableIndexForReplicaIdentityFull(). It will anyway return\nInvalidOid, if there is no index and that way code will look a bit\ncleaner.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 6 Mar 2023 15:10:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Mon, Mar 6, 2023 at 2:38 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\nI was going through the thread and patch, I noticed that in the\ninitial version, we were depending upon the planner to let it decide\nwhether index scan is cheaper or not and which index to pick. But in\nthe latest patch if a useful index exists then we choose that without\ncomparing the cost of whether it is cheaper than sequential scan or\nnot. Is my understanding correct? What is the reason for the same,\none reason I could see while looking into the thread is that we can\nnot just decide once whether the index scan is cheaper or not because\nthat decision could change in the future but isn't that better than\nnever checking whether index scan is cheaper or not. Because in some\ncases where column selectivity is high like 80-90% then the index can\nbe very costly due to random page fetches. So I think we could easily\nproduce regressions in some cases, have we tested those cases?\n\nLet me know if I am missing something.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 6 Mar 2023 16:18:10 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Mon, Mar 6, 2023 at 4:18 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Mar 6, 2023 at 2:38 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n> >\n> I was going through the thread and patch, I noticed that in the\n> initial version, we were depending upon the planner to let it decide\n> whether index scan is cheaper or not and which index to pick. But in\n> the latest patch if a useful index exists then we choose that without\n> comparing the cost of whether it is cheaper than sequential scan or\n> not. Is my understanding correct? What is the reason for the same,\n>\n\nYes, your understanding is correct. The main reason is that we don't\nhave an agreement on using the internal planner APIs for apply. That\nwill be a long-term maintenance burden. See discussion around email\n[1]. So, we decided to use the current infrastructure to achieve index\nscans during apply when the publisher has replica identity full. This will\nstill be a win in many cases and we are planning to provide a knob to\ndisable this feature.\n\n[1] - https://www.postgresql.org/message-id/3466340.1673117404%40sss.pgh.pa.us\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 6 Mar 2023 16:44:59 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Amit, all\n\n\n>\n> >\n> > I think you've \"simplified\" this function in v28 but AFAICT now it has\n> > a different logic to v27.\n> >\n> > PREVIOUSLY it was coded like\n> > + return RelationGetReplicaIndex(rel) == idxoid ||\n> > + RelationGetPrimaryKeyIndex(rel) == idxoid;\n> >\n> > You can see if 'idxoid' did NOT match RI but if it DID match PK\n> > previously it would return true. But now in that scenario, it won't\n> > even check the PK if there was a valid RI. So it might return false\n> > when previously it returned true. Is it deliberate?\n> >\n>\n> I don't see any problem with this because by default PK will be a\n> replica identity. So only if the user specifies the replica identity\n> full or changes the replica identity to some other index, we will try\n> to get PK which seems valid for this case. Am, I missing something\n> which makes this code do something bad?\n>\n\nI also re-investigated the code, and I also don't see any issues with that.\n\nSee my comment to Peter's original review on this.\n\n\n>\n> Few other comments on latest code:\n> ============================\n> 1.\n> <para>\n> - A published table must have a <quote>replica identity</quote>\n> configured in\n> + A published table must have a <firstterm>replica\n> identity</firstterm> configured in\n>\n> How the above change is related to this patch?\n>\n\nAs you suggest, I'm undoing this change.\n\n\n>\n> 2.\n> certain additional requirements) can also be set to be the replica\n> - identity. If the table does not have any suitable key, then it can be\n> set\n> + identity. If the table does not have any suitable key, then it can be\n> set\n>\n> I think we should change the spacing of existing docs (two spaces\n> after fullstop to one space) and that too inconsistently. I suggest to\n> add new changes with same spacing as existing doc. 
If you are adding\n> entirely new section then we can consider differently.\n>\n\nAlright, so changed all this section to two spaces after fullstop.\n\n\n\n> 3.\n> to replica identity <quote>full</quote>, which means the entire row\n> becomes\n> - the key. This, however, is very inefficient and should only be used\n> as a\n> - fallback if no other solution is possible. If a replica identity other\n> - than <quote>full</quote> is set on the publisher side, a replica\n> identity\n> - comprising the same or fewer columns must also be set on the subscriber\n> - side. See <xref linkend=\"sql-altertable-replica-identity\"/> for\n> details on\n> + the key. When replica identity <literal>FULL</literal> is specified,\n> + indexes can be used on the subscriber side for searching the rows.\n> These\n>\n> Shouldn't specifying <literal>FULL</literal> be consistent wih existing\n> docs?\n>\n>\n>\nConsidering the discussion below, I'm switching all back\nto <quote>full</quote>. Let's\nbe consistent with the existing code. Peter already suggested to improve\nthat with a follow-up\npatch. If that lands in, I can reflect the changes on this patch as well.\n\nGiven the changes are small, I'll incorporate the changes with v33 in my\nnext e-mail.\n\nThanks,\nOnder",
"msg_date": "Mon, 6 Mar 2023 14:18:19 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Peter, all\n\n\n>\n> 1.\n> A published table must have a replica identity configured in order to\n> be able to replicate UPDATE and DELETE operations, so that appropriate\n> rows to update or delete can be identified on the subscriber side. By\n> default, this is the primary key, if there is one. Another unique\n> index (with certain additional requirements) can also be set to be the\n> replica identity. When replica identity FULL is specified, indexes can\n> be used on the subscriber side for searching the rows. These indexes\n> should be btree, non-partial and have at least one column reference\n> (e.g., should not consist of only expressions). These restrictions on\n> the non-unique index properties are in essence the same restrictions\n> that are enforced for primary keys. Internally, we follow the same\n> approach for supporting index scans within logical replication scope.\n> If there are no such suitable indexes, the search on the subscriber\n> side can be very inefficient, therefore replica identity FULL should\n> only be used as a fallback if no other solution is possible. If a\n> replica identity other than full is set on the publisher side, a\n> replica identity comprising the same or fewer columns must also be set\n> on the subscriber side. See REPLICA IDENTITY for details on how to set\n> the replica identity. If a table without a replica identity is added\n> to a publication that replicates UPDATE or DELETE operations then\n> subsequent UPDATE or DELETE operations will cause an error on the\n> publisher. 
INSERT operations can proceed regardless of any replica\n> identity.\n>\n> ~\n>\n> 1a.\n> Changes include:\n> \"should\" --> \"must\"\n> \"e.g.\" --> \"i.e.\"\n>\n>\nmakes sense\n\n\n> BEFORE\n> These indexes should be btree, non-partial and have at least one\n> column reference (e.g., should not consist of only expressions).\n>\n> SUGGESTION\n> Candidate indexes must be btree, non-partial, and have at least one\n> column reference (i.e., cannot consist of only expressions).\n>\n> ~\n>\n> 1b.\n> The fix for my v27 review comment #2b (changing \"full\" to FULL) was\n> not made correctly. It should be uppercase FULL, not full:\n> \"other than full\" --> \"other than FULL\"\n>\n\nRight, changed that.\n\n\n>\n> ======\n> src/backend/executor/execReplication.c\n>\n> 2.\n> /*\n> * Setup a ScanKey for a search in the relation 'rel' for a tuple 'key'\n> that\n> * is setup to match 'rel' (*NOT* idxrel!).\n> *\n> - * Returns whether any column contains NULLs.\n> + * Returns how many columns should be used for the index scan.\n> + *\n> + * This is not generic routine, it expects the idxrel to be\n> + * a btree, non-partial and have at least one column\n> + * reference (e.g., should not consist of only expressions).\n> *\n> - * This is not generic routine, it expects the idxrel to be replication\n> - * identity of a rel and meet all limitations associated with that.\n> + * By definition, replication identity of a rel meets all\n> + * limitations associated with that. Note that any other\n> + * index could also meet these limitations.\n> */\n> -static bool\n> +static int\n> build_replindex_scan_key(ScanKey skey, Relation rel, Relation idxrel,\n> TupleTableSlot *searchslot)\n>\n> ~\n>\n> \"(e.g., should not consist of only expressions)\" --> \"(i.e., cannot\n> consist of only expressions)\"\n>\n>\nfixed\n\n\n> ======\n> src/backend/replication/logical/relation.c\n>\n> 3. 
FindUsableIndexForReplicaIdentityFull\n>\n> +/*\n> + * Returns the oid of an index that can be used via the apply\n> + * worker. The index should be btree, non-partial and have at\n> + * least one column reference (e.g., should not consist of\n> + * only expressions). The limitations arise from\n> + * RelationFindReplTupleByIndex(), which is designed to handle\n> + * PK/RI and these limitations are inherent to PK/RI.\n>\n> The 2nd sentence of this comment should match the same changes in the\n> Commit message --- \"must not\" instead of \"should not\", \"i.e.\" instead\n> of \"e.g.\", etc. See the review comment #1a above.\n>\n>\nIsn't \"cannot\" better than \"must not\" ? You also seem to suggest \"cannot\"\njust above.\n\nI changed it to \"cannot\" in all places.\n\n\n\n> ~~~\n>\n> 4. IdxIsRelationIdentityOrPK\n>\n> +/*\n> + * Given a relation and OID of an index, returns true if the\n> + * index is relation's replica identity index or relation's\n> + * primary key's index.\n> + *\n> + * Returns false otherwise.\n> + */\n> +bool\n> +IdxIsRelationIdentityOrPK(Relation rel, Oid idxoid)\n> +{\n> + Assert(OidIsValid(idxoid));\n> +\n> + return GetRelationIdentityOrPK(rel) == idxoid;\n> +}\n>\n> I think you've \"simplified\" this function in v28 but AFAICT now it has\n> a different logic to v27.\n>\n> PREVIOUSLY it was coded like\n> + return RelationGetReplicaIndex(rel) == idxoid ||\n> + RelationGetPrimaryKeyIndex(rel) == idxoid;\n>\n> But now in that scenario, it won't\n> even check the PK if there was a valid RI. So it might return false\n> when previously it returned true. Is it deliberate?\n>\n>\nThanks for detailed review/investigation on this. But, I also agree that\nthere is no difference in terms of correctness. Also, it is probably better\nto be consistent with the existing code. 
So,\nmaking IdxIsRelationIdentityOrPK()\nrelying on GetRelationIdentityOrPK() still sounds better to me.\n\nYou can see if 'idxoid' did NOT match RI but if it DID match PK\n> previously it would return true.\n\n\nStill, I cannot see how this check would yield a different result with how\nRI/PK works -- as Amit also noted in the next e-mail.\n\nDo you see any cases where this check would produce a different result?\nI cannot, but wanted to double check with you.\n\n\n\n> ======\n> .../subscription/t/032_subscribe_use_index.pl\n>\n> 5.\n> +# Testcase start: SUBSCRIPTION USES INDEX WITH PUB/SUB different data\n> +#\n> +# The subscriber has duplicate tuples that publisher does not have.\n> +# When publsher updates/deletes 1 row, subscriber uses indexes and\n> +# exactly updates/deletes 1 row.\n>\n> \"and exactly updates/deletes 1 row.\" --> \"and updates/deletes exactly 1\n> row.\"\n>\n>\n> Fixed.\n\nGiven the changes are small, I'll incorporate the changes with v33 in my\nnext e-mail.\n\nThanks,\nOnder KALACI",
"msg_date": "Mon, 6 Mar 2023 14:18:29 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Amit, all\n\n\nAmit Kapila <amit.kapila16@gmail.com>, 6 Mar 2023 Pzt, 12:40 tarihinde şunu\nyazdı:\n\n> On Fri, Mar 3, 2023 at 6:40 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n> >\n> > Hi Vignesh,\n> >\n> > Thanks for the review\n> >\n> >>\n> >> 1) We are currently calling RelationGetIndexList twice, once in\n> >> FindUsableIndexForReplicaIdentityFull function and in the caller too,\n> >> we could avoid one of the calls by passing the indexlist to the\n> >> function or removing the check here, index list check can be handled\n> >> in FindUsableIndexForReplicaIdentityFull.\n> >> + if (remoterel->replident == REPLICA_IDENTITY_FULL &&\n> >> + RelationGetIndexList(localrel) != NIL)\n> >> + {\n> >> + /*\n> >> + * If we had a primary key or relation identity with a\n> >> unique index,\n> >> + * we would have already found and returned that oid.\n> >> At this point,\n> >> + * the remote relation has replica identity full and\n> >> we have at least\n> >> + * one local index defined.\n> >> + *\n> >> + * We are looking for one more opportunity for using\n> >> an index. If\n> >> + * there are any indexes defined on the local\n> >> relation, try to pick\n> >> + * a suitable index.\n> >> + *\n> >> + * The index selection safely assumes that all the\n> >> columns are going\n> >> + * to be available for the index scan given that\n> >> remote relation has\n> >> + * replica identity full.\n> >> + */\n> >> + return FindUsableIndexForReplicaIdentityFull(localrel);\n> >> + }\n> >> +\n> >\n> > makes sense, done\n> >\n>\n> Today, I was looking at this comment and the fix for it. It seems to\n> me that it would be better to not add the check (indexlist != NIL)\n> here and rather get the indexlist in\n> FindUsableIndexForReplicaIdentityFull(). It will anyway return\n> InvalidOid, if there is no index and that way code will look a bit\n> cleaner.\n>\n>\nYeah, seems easier to follow to me as well. Reflected it in the comment as\nwell.\n\n\nThanks,\nOnder",
"msg_date": "Mon, 6 Mar 2023 14:18:39 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 2:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Mar 2, 2023 at 1:37 PM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n> > - results of `gprof`\n> > case1:\n> > master\n> > % cumulative self self total\n> > time seconds seconds calls ms/call ms/call name\n> > 1.37 0.66 0.01 654312 0.00 0.00 LWLockAttemptLock\n> > 0.00 0.73 0.00 573358 0.00 0.00 LockBuffer\n> > 0.00 0.73 0.00 10014 0.00 0.06 heap_getnextslot\n> >\n> > patched\n> > % cumulative self self total\n> > time seconds seconds calls ms/call ms/call name\n> > 9.70 1.27 0.36 50531459 0.00 0.00 LWLockAttemptLock\n> > 3.23 2.42 0.12 100259200 0.00 0.00 LockBuffer\n> > 6.20 1.50 0.23 50015101 0.00 0.00 heapam_index_fetch_tuple\n> > 4.04 2.02 0.15 50015101 0.00 0.00 index_fetch_heap\n> > 1.35 3.21 0.05 10119 0.00 0.00 index_getnext_slot\n> >\n>\n> In the above profile number of calls to index_fetch_heap(),\n> heapam_index_fetch_tuple() explains the reason for the regression you\n> are seeing with the index scan. Because the update will generate dead\n> tuples in the same transaction and those dead tuples won't be removed,\n> we get those from the index and then need to perform\n> index_fetch_heap() to find out whether the tuple is dead or not. Now,\n> for sequence scan also we need to scan those dead tuples but there we\n> don't need to do back-and-forth between index and heap. I think we can\n> once check with more number of tuples (say with 20000, 50000, etc.)\n> for case-1.\n>\n\nAndres, do you have any thoughts on this? We seem to have figured out\nthe cause of regression in the case Shi-San has reported and others\nalso agree with it. We can't think of doing anything better than what\nthe patch currently is doing, so thought of going with an option to\nallow users to disable index scans. 
The current understanding is that\nthe patch will be a win in much more cases than the cases where one\ncan see regression but still having a knob could be useful in those\nfew cases.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 6 Mar 2023 17:02:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Mon, Mar 6, 2023 at 4:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Mar 6, 2023 at 4:18 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, Mar 6, 2023 at 2:38 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n> > >\n> > I was going through the thread and patch, I noticed that in the\n> > initial version, we were depending upon the planner to let it decide\n> > whether index scan is cheaper or not and which index to pick. But in\n> > the latest patch if a useful index exists then we chose that without\n> > comparing the cost of whether it is cheaper than sequential scan or\n> > not. Is my understanding correct? What is the reason for the same,\n> >\n>\n> Yes, your understanding is correct. The main reason is that we don't\n> have an agreement on using the internal planner APIs for apply. That\n> will be a long-term maintenance burden. See discussion around email\n> [1]. So, we decided to use the current infrastructure to achieve index\n> scans during apply when publisher has replica identity full. This will\n> still be win in many cases and we are planning to provide a knob to\n> disable this feature.\n>\n> [1] - https://www.postgresql.org/message-id/3466340.1673117404%40sss.pgh.pa.us\n\nOkay, this makes sense, so basically, in \"replica identity full\" case\ninstead of doing the default sequence scan we will provide a knob to\neither choose index scan or sequence scan, and that seems reasonable\nto me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 6 Mar 2023 19:25:37 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-01 14:10:07 +0530, Amit Kapila wrote:\n> On Wed, Mar 1, 2023 at 12:09 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > > I see this as a way to provide this feature for users but I would\n> > > prefer to proceed with this if we can get some more buy-in from senior\n> > > community members (at least one more committer) and some user(s) if\n> > > possible. So, I once again request others to chime in and share their\n> > > opinion.\n> >\n> > I'd prefer not having an option, because we figure out the cause of the\n> > performance regression (reducing it to be small enough to not care). After\n> > that an option defaulting to using indexes.\n> >\n> \n> Sure, if we can reduce regression to be small enough then we don't\n> need to keep the default as false, otherwise, also, we can consider it\n> to keep an option defaulting to using indexes depending on the\n> investigation for regression. Anyway, the main concern was whether it\n> is okay to have an option for this which I think we have an agreement\n> on, now I will continue my review.\n\nI think even as-is it's reasonable to just use it. The sequential scan\napproach is O(N^2), which, uh, is not good. And having an index over thousands\nof non-differing values will generally perform badly, not just in this\ncontext.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 6 Mar 2023 12:04:24 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-06 17:02:38 +0530, Amit Kapila wrote:\n> Andres, do you have any thoughts on this? We seem to have figured out\n> the cause of regression in the case Shi-San has reported and others\n> also agree with it. We can't think of doing anything better than what\n> the patch currently is doing, so thought of going with an option to\n> allow users to disable index scans. The current understanding is that\n> the patch will be a win in much more cases than the cases where one\n> can see regression but still having a knob could be useful in those\n> few cases.\n\nI think the case in which the patch regresses performance in is irrelevant in\npractice. It's good to not regress unnecessarily for easily avoidable reason,\neven in such cases, but it's not worth holding anything up due to it.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 6 Mar 2023 12:15:21 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Tue, Mar 7, 2023 at 1:34 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2023-03-01 14:10:07 +0530, Amit Kapila wrote:\n> > On Wed, Mar 1, 2023 at 12:09 AM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > > I see this as a way to provide this feature for users but I would\n> > > > prefer to proceed with this if we can get some more buy-in from senior\n> > > > community members (at least one more committer) and some user(s) if\n> > > > possible. So, I once again request others to chime in and share their\n> > > > opinion.\n> > >\n> > > I'd prefer not having an option, because we figure out the cause of the\n> > > performance regression (reducing it to be small enough to not care). After\n> > > that an option defaulting to using indexes.\n> > >\n> >\n> > Sure, if we can reduce regression to be small enough then we don't\n> > need to keep the default as false, otherwise, also, we can consider it\n> > to keep an option defaulting to using indexes depending on the\n> > investigation for regression. Anyway, the main concern was whether it\n> > is okay to have an option for this which I think we have an agreement\n> > on, now I will continue my review.\n>\n> I think even as-is it's reasonable to just use it. The sequential scan\n> approach is O(N^2), which, uh, is not good. And having an index over thousands\n> of non-differing values will generally perform badly, not just in this\n> context.\n>\n\nYes, it is true that generally also index scan with a lot of\nduplicates may not perform well but during the scan, we do costing to\nensure such cases and may prefer other index or sequence scan. Then we\nhave \"enable_indexscan\" GUC that the user can use if required. So, I\nfeel it is better to have a knob to disallow usage of such indexes and\nthe default would be to use an index, if available.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 7 Mar 2023 08:22:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Monday, Mar 6, 2023 7:19 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\r\n> \r\n> Yeah, seems easier to follow to me as well. Reflected it in the comment as well. \r\n> \r\n\r\nThanks for updating the patch. Here are some comments on v33-0001 patch.\r\n\r\n1.\r\n+\tif (RelationReplicaIdentityFullIndexScanEnabled(localrel) &&\r\n+\t\tremoterel->replident == REPLICA_IDENTITY_FULL)\r\n\r\nRelationReplicaIdentityFullIndexScanEnabled() is introduced in 0002 patch, so\r\nthe call to it should be moved to 0002 patch I think.\r\n\r\n2.\r\n+#include \"optimizer/cost.h\"\r\n\r\nDo we need this in the latest patch? I tried and it looks like it can be removed\r\nfrom src/backend/replication/logical/relation.c.\r\n\r\n3.\r\n+# now, create a unique index and set the replica\r\n+$node_publisher->safe_psql('postgres',\r\n+\t\"CREATE UNIQUE INDEX test_replica_id_full_unique ON test_replica_id_full(x);\");\r\n+$node_subscriber->safe_psql('postgres',\r\n+\t\"CREATE UNIQUE INDEX test_replica_id_full_unique ON test_replica_id_full(x);\");\r\n+\r\n\r\nShould the comment be \"now, create a unique index and set the replica identity\"?\r\n\r\n4.\r\n+$node_publisher->safe_psql('postgres',\r\n+\t\"ALTER TABLE test_replica_id_full REPLICA IDENTITY USING INDEX test_replica_id_full_unique;\");\r\n+$node_subscriber->safe_psql('postgres',\r\n+\t\"ALTER TABLE test_replica_id_full REPLICA IDENTITY USING INDEX test_replica_id_full_unique;\");\r\n+\r\n+# wait for the synchronization to finish\r\n+$node_subscriber->wait_for_subscription_sync;\r\n\r\nThere are no new tables that need to be synchronized here, should we remove the call\r\nto wait_for_subscription_sync?\r\n\r\nRegards,\r\nShi Yu\r\n",
"msg_date": "Tue, 7 Mar 2023 03:46:48 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Tue, Mar 7, 2023 at 9:16 AM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Monay, Mar 6, 2023 7:19 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n> >\n> > Yeah, seems easier to follow to me as well. Reflected it in the comment as well.\n> >\n\nFew comments:\n=============\n1.\n+get_usable_indexoid(ApplyExecutionData *edata, ResultRelInfo *relinfo)\n{\n...\n+ if (targetrelkind == RELKIND_PARTITIONED_TABLE)\n+ {\n+ /* Target is a partitioned table, so find relmapentry of the partition */\n+ TupleConversionMap *map = ExecGetRootToChildMap(relinfo, edata->estate);\n+ AttrMap *attrmap = map ? map->attrMap : NULL;\n+\n+ relmapentry =\n+ logicalrep_partition_open(relmapentry, relinfo->ri_RelationDesc,\n+ attrmap);\n\n\nWhen will we hit this part of the code? As per my understanding, for\npartitioned tables, we always follow apply_handle_tuple_routing()\nwhich should call logicalrep_partition_open(), and do the necessary\nwork for us.\n\n2. In logicalrep_partition_open(), it would be better to set\nlocalrelvalid after finding the required index. The entry should be\nmarked valid after initializing/updating all the required members. I\nhave changed this in the attached.\n\n3.\n@@ -31,6 +32,7 @@ typedef struct LogicalRepRelMapEntry\n Relation localrel; /* relcache entry (NULL when closed) */\n AttrMap *attrmap; /* map of local attributes to remote ones */\n bool updatable; /* Can apply updates/deletes? */\n+ Oid usableIndexOid; /* which index to use, or InvalidOid if none */\n\nWould it be better to name this new variable as localindexoid to match\nit with the existing variable localreloid? Also, the camel case for\nthis variable appears odd.\n\n4. If you agree with the above, then we should probably change the\nname of functions get_usable_indexoid() and\nFindLogicalRepUsableIndex() accordingly.\n\n5.\n+ {\n+ /*\n+ * If we had a primary key or relation identity with a unique index,\n+ * we would have already found and returned that oid. 
At this point,\n+ * the remote relation has replica identity full.\n\nThese comments are not required as this just states what the code just\nabove is doing.\n\nApart from the above, I have made some modifications in the other\ncomments. If you are convinced with those, then kindly include them in\nthe next version.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Tue, 7 Mar 2023 12:27:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Mon, Mar 6, 2023 at 10:18 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n>> 4. IdxIsRelationIdentityOrPK\n>>\n>> +/*\n>> + * Given a relation and OID of an index, returns true if the\n>> + * index is relation's replica identity index or relation's\n>> + * primary key's index.\n>> + *\n>> + * Returns false otherwise.\n>> + */\n>> +bool\n>> +IdxIsRelationIdentityOrPK(Relation rel, Oid idxoid)\n>> +{\n>> + Assert(OidIsValid(idxoid));\n>> +\n>> + return GetRelationIdentityOrPK(rel) == idxoid;\n>> +}\n>>\n>> I think you've \"simplified\" this function in v28 but AFAICT now it has\n>> a different logic to v27.\n>>\n>> PREVIOUSLY it was coded like\n>> + return RelationGetReplicaIndex(rel) == idxoid ||\n>> + RelationGetPrimaryKeyIndex(rel) == idxoid;\n>>\n>> But now in that scenario, it won't\n>> even check the PK if there was a valid RI. So it might return false\n>> when previously it returned true. Is it deliberate?\n>>\n>\n> Thanks for detailed review/investigation on this. But, I also agree that\n> there is no difference in terms of correctness. Also, it is probably better\n> to be consistent with the existing code. 
So, making IdxIsRelationIdentityOrPK()\n> relying on GetRelationIdentityOrPK() still sounds better to me.\n>\n>> You can see if 'idxoid' did NOT match RI but if it DID match PK\n>> previously it would return true.\n>\n>\n> Still, I cannot see how this check would yield a different result with how\n> RI/PK works -- as Amit also noted in the next e-mail.\n>\n> Do you see any cases where this check would produce a different result?\n> I cannot, but wanted to double check with you.\n>\n>\n\nLet me give an example to demonstrate why I thought something is fishy here:\n\nImagine rel has a (non-default) REPLICA IDENTITY with Oid=1111.\nImagine the same rel has a PRIMARY KEY with Oid=2222.\n\n---\n\n+/*\n+ * Get replica identity index or if it is not defined a primary key.\n+ *\n+ * If neither is defined, returns InvalidOid\n+ */\n+Oid\n+GetRelationIdentityOrPK(Relation rel)\n+{\n+ Oid idxoid;\n+\n+ idxoid = RelationGetReplicaIndex(rel);\n+\n+ if (!OidIsValid(idxoid))\n+ idxoid = RelationGetPrimaryKeyIndex(rel);\n+\n+ return idxoid;\n+}\n+\n+/*\n+ * Given a relation and OID of an index, returns true if the\n+ * index is relation's replica identity index or relation's\n+ * primary key's index.\n+ *\n+ * Returns false otherwise.\n+ */\n+bool\n+IdxIsRelationIdentityOrPK(Relation rel, Oid idxoid)\n+{\n+ Assert(OidIsValid(idxoid));\n+\n+ return GetRelationIdentityOrPK(rel) == idxoid;\n+}\n+\n\n\nSo, according to the above function comment/name, I will expect\ncalling IdxIsRelationIdentityOrPK passing Oid=1111 or Oid-2222 will\nboth return true, right?\n\nBut AFAICT\n\nIdxIsRelationIdentityOrPK(rel, 1111) --> GetRelationIdentityOrPK(rel)\nreturns 1111 (the valid oid of the RI) --> 1111 == 1111 --> true;\n\nIdxIsRelationIdentityOrPK(rel, 2222) --> GetRelationIdentityOrPK(rel)\nreturns 1111 (the valid oid of the RI) --> 1111 == 2222 --> false;\n\n~\n\nNow two people are telling me this is OK, but I've been staring at it\nfor too long and I just don't see how it can be. 
(??)\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 7 Mar 2023 18:48:57 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Tue, Mar 7, 2023 at 1:19 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Let me give an example to demonstrate why I thought something is fishy here:\n>\n> Imagine rel has a (non-default) REPLICA IDENTITY with Oid=1111.\n> Imagine the same rel has a PRIMARY KEY with Oid=2222.\n>\n> ---\n>\n> +/*\n> + * Get replica identity index or if it is not defined a primary key.\n> + *\n> + * If neither is defined, returns InvalidOid\n> + */\n> +Oid\n> +GetRelationIdentityOrPK(Relation rel)\n> +{\n> + Oid idxoid;\n> +\n> + idxoid = RelationGetReplicaIndex(rel);\n> +\n> + if (!OidIsValid(idxoid))\n> + idxoid = RelationGetPrimaryKeyIndex(rel);\n> +\n> + return idxoid;\n> +}\n> +\n> +/*\n> + * Given a relation and OID of an index, returns true if the\n> + * index is relation's replica identity index or relation's\n> + * primary key's index.\n> + *\n> + * Returns false otherwise.\n> + */\n> +bool\n> +IdxIsRelationIdentityOrPK(Relation rel, Oid idxoid)\n> +{\n> + Assert(OidIsValid(idxoid));\n> +\n> + return GetRelationIdentityOrPK(rel) == idxoid;\n> +}\n> +\n>\n>\n> So, according to the above function comment/name, I will expect\n> calling IdxIsRelationIdentityOrPK passing Oid=1111 or Oid-2222 will\n> both return true, right?\n>\n> But AFAICT\n>\n> IdxIsRelationIdentityOrPK(rel, 1111) --> GetRelationIdentityOrPK(rel)\n> returns 1111 (the valid oid of the RI) --> 1111 == 1111 --> true;\n>\n> IdxIsRelationIdentityOrPK(rel, 2222) --> GetRelationIdentityOrPK(rel)\n> returns 1111 (the valid oid of the RI) --> 1111 == 2222 --> false;\n>\n> ~\n>\n> Now two people are telling me this is OK, but I've been staring at it\n> for too long and I just don't see how it can be. (??)\n>\n\nThe difference is that you are misunderstanding the intent of this\nfunction. GetRelationIdentityOrPK() returns a \"replica identity index\noid\" if the same is defined, else return PK, if that is defined,\notherwise, return invalidOid. 
This is what is expected by its callers.\nNow, one can argue to have a different function name and that may be a\nvalid argument but as far as I can see the function does what is\nexpected from it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 7 Mar 2023 14:30:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Tue, Mar 7, 2023 at 8:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 7, 2023 at 1:19 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Let me give an example to demonstrate why I thought something is fishy here:\n> >\n> > Imagine rel has a (non-default) REPLICA IDENTITY with Oid=1111.\n> > Imagine the same rel has a PRIMARY KEY with Oid=2222.\n> >\n> > ---\n> >\n> > +/*\n> > + * Get replica identity index or if it is not defined a primary key.\n> > + *\n> > + * If neither is defined, returns InvalidOid\n> > + */\n> > +Oid\n> > +GetRelationIdentityOrPK(Relation rel)\n> > +{\n> > + Oid idxoid;\n> > +\n> > + idxoid = RelationGetReplicaIndex(rel);\n> > +\n> > + if (!OidIsValid(idxoid))\n> > + idxoid = RelationGetPrimaryKeyIndex(rel);\n> > +\n> > + return idxoid;\n> > +}\n> > +\n> > +/*\n> > + * Given a relation and OID of an index, returns true if the\n> > + * index is relation's replica identity index or relation's\n> > + * primary key's index.\n> > + *\n> > + * Returns false otherwise.\n> > + */\n> > +bool\n> > +IdxIsRelationIdentityOrPK(Relation rel, Oid idxoid)\n> > +{\n> > + Assert(OidIsValid(idxoid));\n> > +\n> > + return GetRelationIdentityOrPK(rel) == idxoid;\n> > +}\n> > +\n> >\n> >\n> > So, according to the above function comment/name, I will expect\n> > calling IdxIsRelationIdentityOrPK passing Oid=1111 or Oid-2222 will\n> > both return true, right?\n> >\n> > But AFAICT\n> >\n> > IdxIsRelationIdentityOrPK(rel, 1111) --> GetRelationIdentityOrPK(rel)\n> > returns 1111 (the valid oid of the RI) --> 1111 == 1111 --> true;\n> >\n> > IdxIsRelationIdentityOrPK(rel, 2222) --> GetRelationIdentityOrPK(rel)\n> > returns 1111 (the valid oid of the RI) --> 1111 == 2222 --> false;\n> >\n> > ~\n> >\n> > Now two people are telling me this is OK, but I've been staring at it\n> > for too long and I just don't see how it can be. (??)\n> >\n>\n> The difference is that you are misunderstanding the intent of this\n> function. 
GetRelationIdentityOrPK() returns a \"replica identity index\n> oid\" if the same is defined, else return PK, if that is defined,\n> otherwise, return invalidOid. This is what is expected by its callers.\n> Now, one can argue to have a different function name and that may be a\n> valid argument but as far as I can see the function does what is\n> expected from it.\n>\n\nSure, but I am questioning the function IdxIsRelationIdentityOrPK, not\nGetRelationIdentityOrPK.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 7 Mar 2023 20:30:20 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Andres, Amit, all\n\nI think the case in which the patch regresses performance in is irrelevant\n> in\n> practice.\n>\n\nThis is similar to what I think in this context.\n\nI appreciate the effort from Shi Yu, so that we have a clear understanding\non the overhead.\nBut the tests we do on [1] where we observe the regression are largely\nsynthetic test cases\nthat aim to spot the overhead.\n\nAnd having an index over thousands\n> of non-differing values will generally perform badly, not just in this\n> context.\n\n\nSimilarly, maybe there are some eccentric use patterns that might follow\nthis. But I also suspect\neven if there are such patterns, could they really be performance sensitive?\n\n\nThanks,\nOnder KALACI\n\n[1]\nhttps://www.postgresql.org/message-id/OSZPR01MB63103A4AFBBA56BAF8AE7FAAFDA39%40OSZPR01MB6310.jpnprd01.prod.outlook.com\n\nHi Andres, Amit, all \nI think the case in which the patch regresses performance in is irrelevant in\npractice. This is similar to what I think in this context.I appreciate the effort from Shi Yu, so that we have a clear understanding on the overhead.But the tests we do on [1] where we observe the regression are largely synthetic test casesthat aim to spot the overhead. And having an index over thousandsof non-differing values will generally perform badly, not just in thiscontext.Similarly, maybe there are some eccentric use patterns that might follow this. But I also suspecteven if there are such patterns, could they really be performance sensitive?Thanks,Onder KALACI[1] https://www.postgresql.org/message-id/OSZPR01MB63103A4AFBBA56BAF8AE7FAAFDA39%40OSZPR01MB6310.jpnprd01.prod.outlook.com",
"msg_date": "Tue, 7 Mar 2023 13:59:08 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Shi Yu, all\n\n\n> Thanks for updating the patch. Here are some comments on v33-0001 patch.\n>\n> 1.\n> + if (RelationReplicaIdentityFullIndexScanEnabled(localrel) &&\n> + remoterel->replident == REPLICA_IDENTITY_FULL)\n>\n> RelationReplicaIdentityFullIndexScanEnabled() is introduced in 0002 patch,\n> so\n> the call to it should be moved to 0002 patch I think.\n>\n\nAh, sure, a rebase issue, fixed in v34\n\n\n>\n> 2.\n> +#include \"optimizer/cost.h\"\n>\n> Do we need this in the latest patch? I tried and it looks it could be\n> removed\n> from src/backend/replication/logical/relation.c.\n>\n\nHmm, probably an artifact of the initial versions of the patch where we\nneeded some\ncosting functionality.\n\n\n>\n> 3.\n> +# now, create a unique index and set the replica\n> +$node_publisher->safe_psql('postgres',\n> + \"CREATE UNIQUE INDEX test_replica_id_full_unique ON\n> test_replica_id_full(x);\");\n> +$node_subscriber->safe_psql('postgres',\n> + \"CREATE UNIQUE INDEX test_replica_id_full_unique ON\n> test_replica_id_full(x);\");\n> +\n>\n> Should the comment be \"now, create a unique index and set the replica\n> identity\"?\n>\n\nyes, fixed\n\n\n>\n> 4.\n> +$node_publisher->safe_psql('postgres',\n> + \"ALTER TABLE test_replica_id_full REPLICA IDENTITY USING INDEX\n> test_replica_id_full_unique;\");\n> +$node_subscriber->safe_psql('postgres',\n> + \"ALTER TABLE test_replica_id_full REPLICA IDENTITY USING INDEX\n> test_replica_id_full_unique;\");\n> +\n> +# wait for the synchronization to finish\n> +$node_subscriber->wait_for_subscription_sync;\n>\n> There's no new tables to need to be synchronized here, should we remove\n> the call\n> to wait_for_subscription_sync?\n>\n\nright, probably a copy & paste typo, thanks for spotting.\n\n\nI'll attach v34 with the next e-mail given the comments here only touch\nsmall parts\nof the patch.\n\n\n\nThanks,\nOnder KALACI\n\nHi Shi Yu, all\nThanks for updating the patch. 
Here are some comments on v33-0001 patch.\n\n1.\n+ if (RelationReplicaIdentityFullIndexScanEnabled(localrel) &&\n+ remoterel->replident == REPLICA_IDENTITY_FULL)\n\nRelationReplicaIdentityFullIndexScanEnabled() is introduced in 0002 patch, so\nthe call to it should be moved to 0002 patch I think.Ah, sure, a rebase issue, fixed in v34 \n\n2.\n+#include \"optimizer/cost.h\"\n\nDo we need this in the latest patch? I tried and it looks it could be removed\nfrom src/backend/replication/logical/relation.c.Hmm, probably an artifact of the initial versions of the patch where we needed somecosting functionality. \n\n3.\n+# now, create a unique index and set the replica\n+$node_publisher->safe_psql('postgres',\n+ \"CREATE UNIQUE INDEX test_replica_id_full_unique ON test_replica_id_full(x);\");\n+$node_subscriber->safe_psql('postgres',\n+ \"CREATE UNIQUE INDEX test_replica_id_full_unique ON test_replica_id_full(x);\");\n+\n\nShould the comment be \"now, create a unique index and set the replica identity\"?yes, fixed \n\n4.\n+$node_publisher->safe_psql('postgres',\n+ \"ALTER TABLE test_replica_id_full REPLICA IDENTITY USING INDEX test_replica_id_full_unique;\");\n+$node_subscriber->safe_psql('postgres',\n+ \"ALTER TABLE test_replica_id_full REPLICA IDENTITY USING INDEX test_replica_id_full_unique;\");\n+\n+# wait for the synchronization to finish\n+$node_subscriber->wait_for_subscription_sync;\n\nThere's no new tables to need to be synchronized here, should we remove the call\nto wait_for_subscription_sync?right, probably a copy & paste typo, thanks for spotting.I'll attach v34 with the next e-mail given the comments here only touch small partsof the patch.Thanks,Onder KALACI",
"msg_date": "Tue, 7 Mar 2023 13:59:12 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Amit, all\n\n>\n> Few comments:\n> =============\n> 1.\n> +get_usable_indexoid(ApplyExecutionData *edata, ResultRelInfo *relinfo)\n> {\n> ...\n> + if (targetrelkind == RELKIND_PARTITIONED_TABLE)\n> + {\n> + /* Target is a partitioned table, so find relmapentry of the partition */\n> + TupleConversionMap *map = ExecGetRootToChildMap(relinfo, edata->estate);\n> + AttrMap *attrmap = map ? map->attrMap : NULL;\n> +\n> + relmapentry =\n> + logicalrep_partition_open(relmapentry, relinfo->ri_RelationDesc,\n> + attrmap);\n>\n>\n> When will we hit this part of the code? As per my understanding, for\n> partitioned tables, we always follow apply_handle_tuple_routing()\n> which should call logicalrep_partition_open(), and do the necessary\n> work for us.\n>\n>\nLooking closer, there is really no need for that. I changed the\ncode such that we pass usableLocalIndexOid. It looks simpler\nto me. Do you agree?\n\n\n\n> 2. In logicalrep_partition_open(), it would be better to set\n> localrelvalid after finding the required index. The entry should be\n> marked valid after initializing/updating all the required members. I\n> have changed this in the attached.\n>\n>\nmakes sense\n\n\n> 3.\n> @@ -31,6 +32,7 @@ typedef struct LogicalRepRelMapEntry\n> Relation localrel; /* relcache entry (NULL when closed) */\n> AttrMap *attrmap; /* map of local attributes to remote ones */\n> bool updatable; /* Can apply updates/deletes? */\n> + Oid usableIndexOid; /* which index to use, or InvalidOid if none */\n>\n> Would it be better to name this new variable as localindexoid to match\n> it with the existing variable localreloid? Also, the camel case for\n> this variable appears odd.\n>\n\nyes, both makes sense\n\n\n>\n> 4. 
If you agree with the above, then we should probably change the\n> name of functions get_usable_indexoid() and\n> FindLogicalRepUsableIndex() accordingly.\n>\n\nI dropped get_usable_indexoid() as noted in (1).\n\nChanged FindLogicalRepUsableIndex->FindLogicalRepLocalIndex\n\n\n\n> 5.\n> + {\n> + /*\n> + * If we had a primary key or relation identity with a unique index,\n> + * we would have already found and returned that oid. At this point,\n> + * the remote relation has replica identity full.\n>\n> These comments are not required as this just states what the code just\n> above is doing.\n>\n\nI don't have any strong opinions on adding this comment, applied your\nsuggestion.\n\n\n>\n> Apart from the above, I have made some modifications in the other\n> comments. If you are convinced with those, then kindly include them in\n> the next version.\n>\n>\nSure, they all look good. I think I have lost (and caused the reviewers to\nlose)\nquite a bit of time on the comment reviews. Next time, I'll try to be more\nprepared\nfor the comments.\n\nAttached v34\n\nThanks,\nOnder KALACI",
"msg_date": "Tue, 7 Mar 2023 13:59:18 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Tue, Mar 7, 2023 at 3:00 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Tue, Mar 7, 2023 at 8:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Mar 7, 2023 at 1:19 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > Let me give an example to demonstrate why I thought something is fishy here:\n> > >\n> > > Imagine rel has a (non-default) REPLICA IDENTITY with Oid=1111.\n> > > Imagine the same rel has a PRIMARY KEY with Oid=2222.\n> > >\n> > > ---\n> > >\n> > > +/*\n> > > + * Get replica identity index or if it is not defined a primary key.\n> > > + *\n> > > + * If neither is defined, returns InvalidOid\n> > > + */\n> > > +Oid\n> > > +GetRelationIdentityOrPK(Relation rel)\n> > > +{\n> > > + Oid idxoid;\n> > > +\n> > > + idxoid = RelationGetReplicaIndex(rel);\n> > > +\n> > > + if (!OidIsValid(idxoid))\n> > > + idxoid = RelationGetPrimaryKeyIndex(rel);\n> > > +\n> > > + return idxoid;\n> > > +}\n> > > +\n> > > +/*\n> > > + * Given a relation and OID of an index, returns true if the\n> > > + * index is relation's replica identity index or relation's\n> > > + * primary key's index.\n> > > + *\n> > > + * Returns false otherwise.\n> > > + */\n> > > +bool\n> > > +IdxIsRelationIdentityOrPK(Relation rel, Oid idxoid)\n> > > +{\n> > > + Assert(OidIsValid(idxoid));\n> > > +\n> > > + return GetRelationIdentityOrPK(rel) == idxoid;\n> > > +}\n> > > +\n> > >\n> > >\n> > > So, according to the above function comment/name, I will expect\n> > > calling IdxIsRelationIdentityOrPK passing Oid=1111 or Oid-2222 will\n> > > both return true, right?\n> > >\n> > > But AFAICT\n> > >\n> > > IdxIsRelationIdentityOrPK(rel, 1111) --> GetRelationIdentityOrPK(rel)\n> > > returns 1111 (the valid oid of the RI) --> 1111 == 1111 --> true;\n> > >\n> > > IdxIsRelationIdentityOrPK(rel, 2222) --> GetRelationIdentityOrPK(rel)\n> > > returns 1111 (the valid oid of the RI) --> 1111 == 2222 --> false;\n> > >\n> > > ~\n> > >\n> > > Now two people are telling 
me this is OK, but I've been staring at it\n> > > for too long and I just don't see how it can be. (??)\n> > >\n> >\n> > The difference is that you are misunderstanding the intent of this\n> > function. GetRelationIdentityOrPK() returns a \"replica identity index\n> > oid\" if the same is defined, else return PK, if that is defined,\n> > otherwise, return invalidOid. This is what is expected by its callers.\n> > Now, one can argue to have a different function name and that may be a\n> > valid argument but as far as I can see the function does what is\n> > expected from it.\n> >\n>\n> Sure, but I am questioning the function IdxIsRelationIdentityOrPK, not\n> GetRelationIdentityOrPK.\n>\n\nThe intent of IdxIsRelationIdentityOrPK() is as follows: Returns true\nfor the following conditions (a) if the given index OID is the same as\nreplica identity (when the same is defined); else (if replica identity\nis not defined then (b)) (b) if the given OID is the same as PK.\nReturns false otherwise. Feel free to propose any better function name\nor if you think comments can be changed which makes it easier to\nunderstand.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 7 Mar 2023 17:39:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Amit, Peter\n\n\n> > > Let me give an example to demonstrate why I thought something is fishy\n> here:\n> > >\n> > > Imagine rel has a (non-default) REPLICA IDENTITY with Oid=1111.\n> > > Imagine the same rel has a PRIMARY KEY with Oid=2222.\n> > >\n>\n\nHmm, alright, this is syntactically possible, but not sure if any user\nwould do this. Still thanks for catching this.\n\nAnd, you are right, if a user has created such a schema,\nIdxIsRelationIdentityOrPK()\nwould return the wrong result and we'd use sequential scan instead of index\nscan.\nThis would be a regression. I think we should change the function.\n\n\nHere is the example:\nDROP TABLE tab1;\nCREATE TABLE tab1 (a int NOT NULL);\nCREATE UNIQUE INDEX replica_unique ON tab1(a);\nALTER TABLE tab1 REPLICA IDENTITY USING INDEX replica_unique;\nALTER TABLE tab1 ADD CONSTRAINT pkey PRIMARY KEY (a);\n\n\nI'm attaching v35.\n\nDoes that make sense to you Amit?",
"msg_date": "Tue, 7 Mar 2023 16:47:07 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-07 08:22:45 +0530, Amit Kapila wrote:\n> On Tue, Mar 7, 2023 at 1:34 AM Andres Freund <andres@anarazel.de> wrote:\n> > I think even as-is it's reasonable to just use it. The sequential scan\n> > approach is O(N^2), which, uh, is not good. And having an index over thousands\n> > of non-differing values will generally perform badly, not just in this\n> > context.\n> >\n> Yes, it is true that generally also index scan with a lot of\n> duplicates may not perform well but during the scan, we do costing to\n> ensure such cases and may prefer other index or sequence scan. Then we\n> have \"enable_indexscan\" GUC that the user can use if required. So, I\n> feel it is better to have a knob to disallow usage of such indexes and\n> the default would be to use an index, if available.\n\nIt just feels like we're optimizing for an irrelevant case here. If we add\nGUCs for irrelevant things like this we'll explode the number of GUCs even\nfaster than we already are.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 Mar 2023 11:51:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Mon, Mar 6, 2023 at 7:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Mar 6, 2023 at 1:40 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n...\n> >\n> > Anyhow, if you feel those firstterm and FULL changes ought to be kept\n> > separate from this RI patch, please let me know and I will propose\n> > those changes in a new thread,\n> >\n>\n> Personally, I would prefer to keep those separate. So, feel free to\n> propose them in a new thread.\n>\n\nDone. Those suggested pg docs changes now have their own new thread [1].\n\n------\n[1] RI quotes -\nhttps://www.postgresql.org/message-id/CAHut%2BPst11ac2hcmePt1%3DoTmBwTT%3DDAssRR1nsdoy4BT%2B68%3DMg%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 8 Mar 2023 09:37:10 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Tue, Mar 7, 2023 9:47 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\r\n> \r\n> I'm attaching v35. \r\n> \r\n\r\nI noticed that if the index column only exists on the subscriber side, this index\r\ncan also be chosen. This seems a bit odd because the index column isn't sent\r\nfrom publisher.\r\n\r\ne.g.\r\n-- pub\r\nCREATE TABLE test_replica_id_full (x int, y int);\r\nALTER TABLE test_replica_id_full REPLICA IDENTITY FULL;\r\nCREATE PUBLICATION tap_pub_rep_full FOR TABLE test_replica_id_full;\r\n-- sub\r\nCREATE TABLE test_replica_id_full (x int, y int, z int);\r\nCREATE INDEX test_replica_id_full_idx ON test_replica_id_full(z);\r\nCREATE SUBSCRIPTION tap_sub_rep_full_0 CONNECTION 'dbname=postgres port=5432' PUBLICATION tap_pub_rep_full;\r\n\r\nI didn't see in any cases the behavior changed after applying the patch, which\r\nlooks good. Besides, I tested the performance for such case.\r\n\r\nSteps:\r\n1. create tables, index, publication, and subscription\r\n-- pub\r\ncreate table tbl (a int);\r\nalter table tbl replica identity full;\r\ncreate publication pub for table tbl;\r\n-- sub\r\ncreate table tbl (a int, b int);\r\ncreate index idx_b on tbl(b);\r\ncreate subscription sub connection 'dbname=postgres port=5432' publication pub;\r\n2. setup synchronous replication\r\n3. execute SQL:\r\ntruncate tbl;\r\ninsert into tbl select i from generate_series(0,10000)i;\r\nupdate tbl set a=a+1;\r\n\r\nThe time of UPDATE (take the average of 10 runs):\r\nmaster: 1356.06 ms\r\npatched: 3968.14 ms\r\n\r\nFor the cases that all values of extra columns on the subscriber are NULL, index\r\nscan can't do better than sequential scan. This is not a real scenario and I\r\nthink it only degrades when there are many NULL values in the index column, so\r\nthis is probably not a case to worry about. I just share this case and then we\r\ncan discuss should we pick the index which only contain the extra columns on the\r\nsubscriber.\r\n\r\nRegards,\r\nShi Yu\r\n",
"msg_date": "Wed, 8 Mar 2023 02:43:56 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Tue, Mar 7, 2023 at 7:17 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n>>\n>> > > Let me give an example to demonstrate why I thought something is fishy here:\n>> > >\n>> > > Imagine rel has a (non-default) REPLICA IDENTITY with Oid=1111.\n>> > > Imagine the same rel has a PRIMARY KEY with Oid=2222.\n>> > >\n>\n>\n> Hmm, alright, this is syntactically possible, but not sure if any user\n> would do this. Still thanks for catching this.\n>\n> And, you are right, if a user has created such a schema, IdxIsRelationIdentityOrPK()\n> would return the wrong result and we'd use sequential scan instead of index scan.\n> This would be a regression. I think we should change the function.\n>\n>\n> Here is the example:\n> DROP TABLE tab1;\n> CREATE TABLE tab1 (a int NOT NULL);\n> CREATE UNIQUE INDEX replica_unique ON tab1(a);\n> ALTER TABLE tab1 REPLICA IDENTITY USING INDEX replica_unique;\n> ALTER TABLE tab1 ADD CONSTRAINT pkey PRIMARY KEY (a);\n>\n\nYou have not given complete steps to reproduce the problem where\ninstead of the index scan, a sequential scan would be picked. 
I have\ntried to reproduce by extending your steps but didn't see the problem.\nLet me know if I am missing something.\n\nPublisher\n----------------\npostgres=# CREATE TABLE tab1 (a int NOT NULL);\nCREATE TABLE\npostgres=# Alter Table tab1 replica identity full;\nALTER TABLE\npostgres=# create publication pub2 for table tab1;\nCREATE PUBLICATION\npostgres=# insert into tab1 values(1);\nINSERT 0 1\npostgres=# update tab1 set a=2;\n\nSubscriber\n-----------------\npostgres=# CREATE TABLE tab1 (a int NOT NULL);\nCREATE TABLE\npostgres=# CREATE UNIQUE INDEX replica_unique ON tab1(a);\nCREATE INDEX\npostgres=# ALTER TABLE tab1 REPLICA IDENTITY USING INDEX replica_unique;\nALTER TABLE\npostgres=# ALTER TABLE tab1 ADD CONSTRAINT pkey PRIMARY KEY (a);\nALTER TABLE\npostgres=# create subscription sub2 connection 'dbname=postgres'\npublication pub2;\nNOTICE: created replication slot \"sub2\" on publisher\nCREATE SUBSCRIPTION\npostgres=# select * from tab1;\n a\n---\n 2\n(1 row)\n\nI have debugged the above example and it uses an index scan during\napply without your latest change which is what I expected. AFAICS, the\nuse of IdxIsRelationIdentityOrPK() is to decide whether we will do\ntuples_equal() or not during the index scan and I see it gives the\ncorrect results with the example you provided.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 8 Mar 2023 08:45:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Here are some review comments for v35-0001\n\n======\nGeneral\n\n1.\nSaying the index \"should\" or \"should not\" do this or that sounds like\nit is still OK but just not recommended. TO remove this ambigity IMO\nmost of the \"should\" ought to be changed to \"must\" because IIUC this\npatch will simply not consider indexes which do not obey all your\nrules.\n\nThis comment applies to a few places (search for \"should\")\n\ne.g.1. - Commit Message\ne.g.2. - /* There should always be at least one attribute for the index scan. */\ne.g.3. - The function comment for\nFindUsableIndexForReplicaIdentityFull(Relation localrel)\n\n======\ndoc/src/sgml/logical-replication.sgml\n\n2.\nA published table must have a “replica identity” configured in order\nto be able to replicate UPDATE and DELETE operations, so that\nappropriate rows to update or delete can be identified on the\nsubscriber side. By default, this is the primary key, if there is one.\nAnother unique index (with certain additional requirements) can also\nbe set to be the replica identity. If the table does not have any\nsuitable key, then it can be set to replica identity “full”, which\nmeans the entire row becomes the key. When replica identity “full” is\nspecified, indexes can be used on the subscriber side for searching\nthe rows. Candidate indexes must be btree, non-partial, and have at\nleast one column reference (i.e. cannot consist of only expressions).\nThese restrictions on the non-unique index properties adheres some of\nthe restrictions that are enforced for primary keys. Internally, we\nfollow a similar approach for supporting index scans within logical\nreplication scope. If there are no such suitable indexes, the search\non the subscriber side can be very inefficient, therefore replica\nidentity “full” should only be used as a fallback if no other solution\nis possible. 
If a replica identity other than “full” is set on the\npublisher side, a replica identity comprising the same or fewer\ncolumns must also be set on the subscriber side. See REPLICA IDENTITY\nfor details on how to set the replica identity. If a table without a\nreplica identity is added to a publication that replicates UPDATE or\nDELETE operations then subsequent UPDATE or DELETE operations will\ncause an error on the publisher. INSERT operations can proceed\nregardless of any replica identity.\n\n~\n\n\"adheres some of\" --> \"adhere to some of\" ?\n\n======\nsrc/backend/executor/execReplication.c\n\n3. build_replindex_scan_key\n\n {\n Oid operator;\n Oid opfamily;\n RegProcedure regop;\n- int pkattno = attoff + 1;\n- int mainattno = indkey->values[attoff];\n- Oid optype = get_opclass_input_type(opclass->values[attoff]);\n+ int table_attno = indkey->values[index_attoff];\n+ Oid optype = get_opclass_input_type(opclass->values[index_attoff]);\n\nThese variable declarations might look tidier if you kept all the Oids together.\n\n======\nsrc/backend/replication/logical/relation.c\n\n4. logicalrep_partition_open\n\n+ /*\n+ * Finding a usable index is an infrequent task. It occurs when an\n+ * operation is first performed on the relation, or after invalidation of\n+ * the relation cache entry (such as ANALYZE or CREATE/DROP index on the\n+ * relation).\n+ *\n+ * We also prefer to run this code on the oldctx such that we do not\n+ * leak anything in the LogicalRepPartMapContext (hence\n+ * CacheMemoryContext).\n+ */\n+ entry->localindexoid = FindLogicalRepLocalIndex(partrel, remoterel)\n\n\"such that\" --> \"so that\" ?\n\n~~~\n\n5. 
IsIndexUsableForReplicaIdentityFull\n\n+bool\n+IsIndexUsableForReplicaIdentityFull(IndexInfo *indexInfo)\n+{\n+ bool is_btree = (indexInfo->ii_Am == BTREE_AM_OID);\n+ bool is_partial = (indexInfo->ii_Predicate != NIL);\n+ bool is_only_on_expression = IsIndexOnlyOnExpression(indexInfo);\n+\n+ if (is_btree && !is_partial && !is_only_on_expression)\n+ {\n+ return true;\n+ }\n+\n+ return false;\n+}\n\nSUGGESTION (no need for 2 returns here)\nreturn is_btree && !is_partial && !is_only_on_expression;\n\n======\nsrc/backend/replication/logical/worker.c\n\n6. FindReplTupleInLocalRel\n\nstatic bool\nFindReplTupleInLocalRel(EState *estate, Relation localrel,\nLogicalRepRelation *remoterel,\nOid localidxoid,\nTupleTableSlot *remoteslot,\nTupleTableSlot **localslot)\n{\nbool found;\n\n/*\n* Regardless of the top-level operation, we're performing a read here, so\n* check for SELECT privileges.\n*/\nTargetPrivilegesCheck(localrel, ACL_SELECT);\n\n*localslot = table_slot_create(localrel, &estate->es_tupleTable);\n\nAssert(OidIsValid(localidxoid) ||\n (remoterel->replident == REPLICA_IDENTITY_FULL));\n\nif (OidIsValid(localidxoid))\nfound = RelationFindReplTupleByIndex(localrel, localidxoid,\nLockTupleExclusive,\nremoteslot, *localslot);\nelse\nfound = RelationFindReplTupleSeq(localrel, LockTupleExclusive,\nremoteslot, *localslot);\n\nreturn found;\n}\n\n~\n\nSince that 'found' variable is not used, you might as well remove the\nif/else and simplify the code.\n\nSUGGESTION\nstatic bool\nFindReplTupleInLocalRel(EState *estate, Relation localrel,\nLogicalRepRelation *remoterel,\nOid localidxoid,\nTupleTableSlot *remoteslot,\nTupleTableSlot **localslot)\n{\n/*\n* Regardless of the top-level operation, we're performing a read here, so\n* check for SELECT privileges.\n*/\nTargetPrivilegesCheck(localrel, ACL_SELECT);\n\n*localslot = table_slot_create(localrel, &estate->es_tupleTable);\n\nAssert(OidIsValid(localidxoid) ||\n (remoterel->replident == REPLICA_IDENTITY_FULL));\n\nif 
(OidIsValid(localidxoid))\nreturn RelationFindReplTupleByIndex(localrel, localidxoid,\nLockTupleExclusive,\nremoteslot, *localslot);\n\nreturn RelationFindReplTupleSeq(localrel, LockTupleExclusive,\nremoteslot, *localslot);\n\n~~~\n\n7. apply_handle_tuple_routing\n\n@@ -2890,6 +2877,7 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,\n TupleConversionMap *map;\n MemoryContext oldctx;\n LogicalRepRelMapEntry *part_entry = NULL;\n+\n AttrMap *attrmap = NULL;\n\n /* ModifyTableState is needed for ExecFindPartition(). */\nThe added whitespace seems unrelated to this patch.\n\n\n======\nsrc/include/replication/logicalrelation.h\n\n8.\n@@ -31,6 +32,7 @@ typedef struct LogicalRepRelMapEntry\n Relation localrel; /* relcache entry (NULL when closed) */\n AttrMap *attrmap; /* map of local attributes to remote ones */\n bool updatable; /* Can apply updates/deletes? */\n+ Oid localindexoid; /* which index to use, or InvalidOid if none */\n\nIndentation is not correct for that new member comment.\n\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 8 Mar 2023 14:39:26 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 9:09 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n>\n> ======\n> src/backend/executor/execReplication.c\n>\n> 3. build_replindex_scan_key\n>\n> {\n> Oid operator;\n> Oid opfamily;\n> RegProcedure regop;\n> - int pkattno = attoff + 1;\n> - int mainattno = indkey->values[attoff];\n> - Oid optype = get_opclass_input_type(opclass->values[attoff]);\n> + int table_attno = indkey->values[index_attoff];\n> + Oid optype = get_opclass_input_type(opclass->values[index_attoff]);\n>\n> These variable declarations might look tidier if you kept all the Oids together.\n>\n\nI am not sure how much that would be an improvement over the current\nbut that will lead to an additional churn in the patch.\n\n> ======\n> src/backend/replication/logical/worker.c\n>\n> 6. FindReplTupleInLocalRel\n>\n> static bool\n> FindReplTupleInLocalRel(EState *estate, Relation localrel,\n> LogicalRepRelation *remoterel,\n> Oid localidxoid,\n> TupleTableSlot *remoteslot,\n> TupleTableSlot **localslot)\n> {\n> bool found;\n>\n> /*\n> * Regardless of the top-level operation, we're performing a read here, so\n> * check for SELECT privileges.\n> */\n> TargetPrivilegesCheck(localrel, ACL_SELECT);\n>\n> *localslot = table_slot_create(localrel, &estate->es_tupleTable);\n>\n> Assert(OidIsValid(localidxoid) ||\n> (remoterel->replident == REPLICA_IDENTITY_FULL));\n>\n> if (OidIsValid(localidxoid))\n> found = RelationFindReplTupleByIndex(localrel, localidxoid,\n> LockTupleExclusive,\n> remoteslot, *localslot);\n> else\n> found = RelationFindReplTupleSeq(localrel, LockTupleExclusive,\n> remoteslot, *localslot);\n>\n> return found;\n> }\n>\n> ~\n>\n> Since that 'found' variable is not used, you might as well remove the\n> if/else and simplify the code.\n>\n\nHmm, but that is an existing style/code, and this patch has done\nnothing which requires that change. Personally, I find the current\nstyle better for the readability purpose.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 8 Mar 2023 09:33:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 3:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 8, 2023 at 9:09 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> >\n> > ======\n> > src/backend/executor/execReplication.c\n> >\n> > 3. build_replindex_scan_key\n> >\n> > {\n> > Oid operator;\n> > Oid opfamily;\n> > RegProcedure regop;\n> > - int pkattno = attoff + 1;\n> > - int mainattno = indkey->values[attoff];\n> > - Oid optype = get_opclass_input_type(opclass->values[attoff]);\n> > + int table_attno = indkey->values[index_attoff];\n> > + Oid optype = get_opclass_input_type(opclass->values[index_attoff]);\n> >\n> > These variable declarations might look tidier if you kept all the Oids together.\n> >\n>\n> I am not sure how much that would be an improvement over the current\n> but that will lead to an additional churn in the patch.\n\nThat suggestion was because IMO the 'optype' and 'opfamily' belong\ntogether. TBH, really think the assignment of 'opttype' should happen\nlater with the 'opfamilly' assignment too because then it will be\n*after* the (!AttributeNumberIsValid(table_attno)) check.\n\n>\n> > ======\n> > src/backend/replication/logical/worker.c\n> >\n> > 6. 
FindReplTupleInLocalRel\n> >\n> > static bool\n> > FindReplTupleInLocalRel(EState *estate, Relation localrel,\n> > LogicalRepRelation *remoterel,\n> > Oid localidxoid,\n> > TupleTableSlot *remoteslot,\n> > TupleTableSlot **localslot)\n> > {\n> > bool found;\n> >\n> > /*\n> > * Regardless of the top-level operation, we're performing a read here, so\n> > * check for SELECT privileges.\n> > */\n> > TargetPrivilegesCheck(localrel, ACL_SELECT);\n> >\n> > *localslot = table_slot_create(localrel, &estate->es_tupleTable);\n> >\n> > Assert(OidIsValid(localidxoid) ||\n> > (remoterel->replident == REPLICA_IDENTITY_FULL));\n> >\n> > if (OidIsValid(localidxoid))\n> > found = RelationFindReplTupleByIndex(localrel, localidxoid,\n> > LockTupleExclusive,\n> > remoteslot, *localslot);\n> > else\n> > found = RelationFindReplTupleSeq(localrel, LockTupleExclusive,\n> > remoteslot, *localslot);\n> >\n> > return found;\n> > }\n> >\n> > ~\n> >\n> > Since that 'found' variable is not used, you might as well remove the\n> > if/else and simplify the code.\n> >\n>\n> Hmm, but that is an existing style/code, and this patch has done\n> nothing which requires that change. Personally, I find the current\n> style better for the readability purpose.\n>\n\nOK. I failed to notice that was same as the existing code.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 8 Mar 2023 15:35:18 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Tuesday, March 7, 2023 9:47 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\r\n\r\nHi,\r\n\r\n> > > Let me give an example to demonstrate why I thought something is fishy here:\r\n> > >\r\n> > > Imagine rel has a (non-default) REPLICA IDENTITY with Oid=1111.\r\n> > > Imagine the same rel has a PRIMARY KEY with Oid=2222.\r\n> > >\r\n> \r\n> Hmm, alright, this is syntactically possible, but not sure if any user \r\n> would do this. Still thanks for catching this.\r\n> \r\n> And, you are right, if a user has created such a schema, IdxIsRelationIdentityOrPK() \r\n> would return the wrong result and we'd use sequential scan instead of index scan. \r\n> This would be a regression. I think we should change the function. \r\n\r\nI am looking at the latest patch and have a question about the following code.\r\n\r\n \t/* Try to find the tuple */\r\n-\tif (index_getnext_slot(scan, ForwardScanDirection, outslot))\r\n+\twhile (index_getnext_slot(scan, ForwardScanDirection, outslot))\r\n \t{\r\n-\t\tfound = true;\r\n+\t\t/*\r\n+\t\t * Avoid expensive equality check if the index is primary key or\r\n+\t\t * replica identity index.\r\n+\t\t */\r\n+\t\tif (!idxIsRelationIdentityOrPK)\r\n+\t\t{\r\n+\t\t\tif (eq == NULL)\r\n+\t\t\t{\r\n+#ifdef USE_ASSERT_CHECKING\r\n+\t\t\t\t/* apply assertions only once for the input idxoid */\r\n+\t\t\t\tIndexInfo *indexInfo = BuildIndexInfo(idxrel);\r\n+\t\t\t\tAssert(IsIndexUsableForReplicaIdentityFull(indexInfo));\r\n+#endif\r\n+\r\n+\t\t\t\t/*\r\n+\t\t\t\t * We only need to allocate once. 
This is allocated within per\r\n+\t\t\t\t * tuple context -- ApplyMessageContext -- hence no need to\r\n+\t\t\t\t * explicitly pfree().\r\n+\t\t\t\t */\r\n+\t\t\t\teq = palloc0(sizeof(*eq) * outslot->tts_tupleDescriptor->natts);\r\n+\t\t\t}\r\n+\r\n+\t\t\tif (!tuples_equal(outslot, searchslot, eq))\r\n+\t\t\t\tcontinue;\r\n+\t\t}\r\n\r\nIIRC, it invokes tuples_equal for all cases unless we are using replica\r\nidentity key or primary key to scan. But there seem some other cases where the\r\ntuples_equal looks unnecessary.\r\n\r\nFor example, if the table on subscriber don't have a PK or RI key but have a\r\nnot-null, non-deferrable, unique key. And if the apply worker choose this index\r\nto do the scan, it seems we can skip the tuples_equal as well.\r\n\r\n--Example\r\npub:\r\nCREATE TABLE test_replica_id_full (a int, b int not null);\r\nALTER TABLE test_replica_id_full REPLICA IDENTITY FULL;\r\nCREATE PUBLICATION tap_pub_rep_full FOR TABLE test_replica_id_full;\r\n\r\nsub:\r\nCREATE TABLE test_replica_id_full (a int, b int not null);\r\nCREATE UNIQUE INDEX test_replica_id_full_idx ON test_replica_id_full(b);\r\n--\r\n\r\nI am not 100% sure if it's worth optimizing this by complicating the check in\r\nidxIsRelationIdentityOrPK. What do you think ?\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Wed, 8 Mar 2023 06:51:07 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Wednesday, March 8, 2023 2:51 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> \r\n> On Tuesday, March 7, 2023 9:47 PM Önder Kalacı <onderkalaci@gmail.com>\r\n> wrote:\r\n> \r\n> Hi,\r\n> \r\n> > > > Let me give an example to demonstrate why I thought something is fishy\r\n> here:\r\n> > > >\r\n> > > > Imagine rel has a (non-default) REPLICA IDENTITY with Oid=1111.\r\n> > > > Imagine the same rel has a PRIMARY KEY with Oid=2222.\r\n> > > >\r\n> >\r\n> > Hmm, alright, this is syntactically possible, but not sure if any user\r\n> > would do this. Still thanks for catching this.\r\n> >\r\n> > And, you are right, if a user has created such a schema,\r\n> > IdxIsRelationIdentityOrPK() would return the wrong result and we'd use\r\n> sequential scan instead of index scan.\r\n> > This would be a regression. I think we should change the function.\r\n> \r\n> I am looking at the latest patch and have a question about the following code.\r\n> \r\n> \t/* Try to find the tuple */\r\n> -\tif (index_getnext_slot(scan, ForwardScanDirection, outslot))\r\n> +\twhile (index_getnext_slot(scan, ForwardScanDirection, outslot))\r\n> \t{\r\n> -\t\tfound = true;\r\n> +\t\t/*\r\n> +\t\t * Avoid expensive equality check if the index is primary key or\r\n> +\t\t * replica identity index.\r\n> +\t\t */\r\n> +\t\tif (!idxIsRelationIdentityOrPK)\r\n> +\t\t{\r\n> +\t\t\tif (eq == NULL)\r\n> +\t\t\t{\r\n> +#ifdef USE_ASSERT_CHECKING\r\n> +\t\t\t\t/* apply assertions only once for the input\r\n> idxoid */\r\n> +\t\t\t\tIndexInfo *indexInfo = BuildIndexInfo(idxrel);\r\n> +\r\n> \tAssert(IsIndexUsableForReplicaIdentityFull(indexInfo));\r\n> +#endif\r\n> +\r\n> +\t\t\t\t/*\r\n> +\t\t\t\t * We only need to allocate once. 
This is\r\n> allocated within per\r\n> +\t\t\t\t * tuple context -- ApplyMessageContext --\r\n> hence no need to\r\n> +\t\t\t\t * explicitly pfree().\r\n> +\t\t\t\t */\r\n> +\t\t\t\teq = palloc0(sizeof(*eq) *\r\n> outslot->tts_tupleDescriptor->natts);\r\n> +\t\t\t}\r\n> +\r\n> +\t\t\tif (!tuples_equal(outslot, searchslot, eq))\r\n> +\t\t\t\tcontinue;\r\n> +\t\t}\r\n> \r\n> IIRC, it invokes tuples_equal for all cases unless we are using replica identity key\r\n> or primary key to scan. But there seem some other cases where the\r\n> tuples_equal looks unnecessary.\r\n> \r\n> For example, if the table on subscriber don't have a PK or RI key but have a\r\n> not-null, non-deferrable, unique key. And if the apply worker choose this index\r\n> to do the scan, it seems we can skip the tuples_equal as well.\r\n> \r\n> --Example\r\n> pub:\r\n> CREATE TABLE test_replica_id_full (a int, b int not null); ALTER TABLE\r\n> test_replica_id_full REPLICA IDENTITY FULL; CREATE PUBLICATION\r\n> tap_pub_rep_full FOR TABLE test_replica_id_full;\r\n> \r\n> sub:\r\n> CREATE TABLE test_replica_id_full (a int, b int not null); CREATE UNIQUE INDEX\r\n> test_replica_id_full_idx ON test_replica_id_full(b);\r\n\r\nThinking again. This example is incorrect, sorry. I mean the case when\r\nall the columns of the tuple to be compared are in the unique index on\r\nsubscriber side, like:\r\n\r\n--Example\r\npub:\r\nCREATE TABLE test_replica_id_full (a int);\r\nALTER TABLE test_replica_id_full REPLICA IDENTITY FULL;\r\nCREATE PUBLICATION tap_pub_rep_full FOR TABLE test_replica_id_full;\r\n\r\nsub:\r\nCREATE TABLE test_replica_id_full (a int not null);\r\nCREATE UNIQUE INDEX test_replica_id_full_idx ON test_replica_id_full(a);\r\n--\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Wed, 8 Mar 2023 07:24:33 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Amit, all\n\n\n> You have not given complete steps to reproduce the problem where\n> instead of the index scan, a sequential scan would be picked. I have\n> tried to reproduce by extending your steps but didn't see the problem.\n> Let me know if I am missing something.\n>\n\nI think the steps you shared are what I had in mind.\n\n\n>\n> I have debugged the above example and it uses an index scan during\n> apply without your latest change which is what I expected. AFAICS, the\n> use of IdxIsRelationIdentityOrPK() is to decide whether we will do\n> tuples_equal() or not during the index scan and I see it gives the\n> correct results with the example you provided.\n>\n>\nRight, I got confused. IdxIsRelationIdentityOrPK is only called within\nRelationFindReplTupleByIndex(). And, yes, it only impacts tuples_equal.\n\nBut, still, it feels safer to keep as the current patch if we don't change\nthe\nname of the function.\n\nI really don't have any strong opinions for either way, only a slight\npreference\nto keep as v35 for future callers not to get confused as we do here.\n\nLet me know how you prefer this.\n\n\nThanks,\nOnder\n\nHi Amit, all\n\nYou have not given complete steps to reproduce the problem where\ninstead of the index scan, a sequential scan would be picked. I have\ntried to reproduce by extending your steps but didn't see the problem.\nLet me know if I am missing something.I think the steps you shared are what I had in mind. \nI have debugged the above example and it uses an index scan during\napply without your latest change which is what I expected. AFAICS, the\nuse of IdxIsRelationIdentityOrPK() is to decide whether we will do\ntuples_equal() or not during the index scan and I see it gives the\ncorrect results with the example you provided.\nRight, I got confused. IdxIsRelationIdentityOrPK is only called withinRelationFindReplTupleByIndex(). 
And, yes, it only impacts tuples_equal.But, still, it feels safer to keep as the current patch if we don't change thename of the function.I really don't have any strong opinions for either way, only a slight preferenceto keep as v35 for future callers not to get confused as we do here.Let me know how you prefer this.Thanks,Onder",
"msg_date": "Wed, 8 Mar 2023 14:11:35 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Peter, all\n\n\n>\n> 1.\n> Saying the index \"should\" or \"should not\" do this or that sounds like\n> it is still OK but just not recommended. TO remove this ambigity IMO\n> most of the \"should\" ought to be changed to \"must\" because IIUC this\n> patch will simply not consider indexes which do not obey all your\n> rules.\n>\n> This comment applies to a few places (search for \"should\")\n>\n> e.g.1. - Commit Message\n> e.g.2. - /* There should always be at least one attribute for the index\n> scan. */\n> e.g.3. - The function comment for\n> FindUsableIndexForReplicaIdentityFull(Relation localrel)\n>\n> ======\n> doc/src/sgml/logical-replication.sgml\n>\n\nI'm definitely not an expert on this subject (or native speaker). So, I'm\nfollowing your\nsuggestion.\n\n\n>\n> 2.\n> A published table must have a “replica identity” configured in order\n> to be able to replicate UPDATE and DELETE operations, so that\n> appropriate rows to update or delete can be identified on the\n> subscriber side. By default, this is the primary key, if there is one.\n> Another unique index (with certain additional requirements) can also\n> be set to be the replica identity. If the table does not have any\n> suitable key, then it can be set to replica identity “full”, which\n> means the entire row becomes the key. When replica identity “full” is\n> specified, indexes can be used on the subscriber side for searching\n> the rows. Candidate indexes must be btree, non-partial, and have at\n> least one column reference (i.e. cannot consist of only expressions).\n> These restrictions on the non-unique index properties adheres some of\n> the restrictions that are enforced for primary keys. Internally, we\n> follow a similar approach for supporting index scans within logical\n> replication scope. 
If there are no such suitable indexes, the search\n> on the subscriber side can be very inefficient, therefore replica\n> identity “full” should only be used as a fallback if no other solution\n> is possible. If a replica identity other than “full” is set on the\n> publisher side, a replica identity comprising the same or fewer\n> columns must also be set on the subscriber side. See REPLICA IDENTITY\n> for details on how to set the replica identity. If a table without a\n> replica identity is added to a publication that replicates UPDATE or\n> DELETE operations then subsequent UPDATE or DELETE operations will\n> cause an error on the publisher. INSERT operations can proceed\n> regardless of any replica identity.\n>\n> ~\n>\n> \"adheres some of\" --> \"adhere to some of\" ?\n>\n\nsounds right, changed\n\n\n>\n> ======\n> src/backend/executor/execReplication.c\n>\n> 3. build_replindex_scan_key\n>\n> {\n> Oid operator;\n> Oid opfamily;\n> RegProcedure regop;\n> - int pkattno = attoff + 1;\n> - int mainattno = indkey->values[attoff];\n> - Oid optype = get_opclass_input_type(opclass->values[attoff]);\n> + int table_attno = indkey->values[index_attoff];\n> + Oid optype = get_opclass_input_type(opclass->values[index_attoff]);\n>\n> These variable declarations might look tidier if you kept all the Oids\n> together.\n>\n> ======\n> src/backend/replication/logical/relation.c\n>\n\nBased on the discussions below, I kept as-is. I really don't want to do\nunrelated\nchanges in this patch, as I also got several feedback for not doing it,\n\n\n\n>\n> 4. logicalrep_partition_open\n>\n> + /*\n> + * Finding a usable index is an infrequent task. 
It occurs when an\n> + * operation is first performed on the relation, or after invalidation of\n> + * the relation cache entry (such as ANALYZE or CREATE/DROP index on the\n> + * relation).\n> + *\n> + * We also prefer to run this code on the oldctx such that we do not\n> + * leak anything in the LogicalRepPartMapContext (hence\n> + * CacheMemoryContext).\n> + */\n> + entry->localindexoid = FindLogicalRepLocalIndex(partrel, remoterel)\n>\n> \"such that\" --> \"so that\" ?\n>\n>\nfixed\n\n\n> ~~~\n>\n> 5. IsIndexUsableForReplicaIdentityFull\n>\n> +bool\n> +IsIndexUsableForReplicaIdentityFull(IndexInfo *indexInfo)\n> +{\n> + bool is_btree = (indexInfo->ii_Am == BTREE_AM_OID);\n> + bool is_partial = (indexInfo->ii_Predicate != NIL);\n> + bool is_only_on_expression = IsIndexOnlyOnExpression(indexInfo);\n> +\n> + if (is_btree && !is_partial && !is_only_on_expression)\n> + {\n> + return true;\n> + }\n> +\n> + return false;\n> +}\n>\n> SUGGESTION (no need for 2 returns here)\n> return is_btree && !is_partial && !is_only_on_expression;\n>\n\ntrue, done\n\n\n>\n> ======\n> src/backend/replication/logical/worker.c\n>\n> 6. 
FindReplTupleInLocalRel\n>\n> static bool\n> FindReplTupleInLocalRel(EState *estate, Relation localrel,\n> LogicalRepRelation *remoterel,\n> Oid localidxoid,\n> TupleTableSlot *remoteslot,\n> TupleTableSlot **localslot)\n> {\n> bool found;\n>\n> /*\n> * Regardless of the top-level operation, we're performing a read here, so\n> * check for SELECT privileges.\n> */\n> TargetPrivilegesCheck(localrel, ACL_SELECT);\n>\n> *localslot = table_slot_create(localrel, &estate->es_tupleTable);\n>\n> Assert(OidIsValid(localidxoid) ||\n> (remoterel->replident == REPLICA_IDENTITY_FULL));\n>\n> if (OidIsValid(localidxoid))\n> found = RelationFindReplTupleByIndex(localrel, localidxoid,\n> LockTupleExclusive,\n> remoteslot, *localslot);\n> else\n> found = RelationFindReplTupleSeq(localrel, LockTupleExclusive,\n> remoteslot, *localslot);\n>\n> return found;\n> }\n>\n> ~\n>\n> Since that 'found' variable is not used, you might as well remove the\n> if/else and simplify the code.\n>\n> SUGGESTION\n> static bool\n> FindReplTupleInLocalRel(EState *estate, Relation localrel,\n> LogicalRepRelation *remoterel,\n> Oid localidxoid,\n> TupleTableSlot *remoteslot,\n> TupleTableSlot **localslot)\n> {\n> /*\n> * Regardless of the top-level operation, we're performing a read here, so\n> * check for SELECT privileges.\n> */\n> TargetPrivilegesCheck(localrel, ACL_SELECT);\n>\n> *localslot = table_slot_create(localrel, &estate->es_tupleTable);\n>\n> Assert(OidIsValid(localidxoid) ||\n> (remoterel->replident == REPLICA_IDENTITY_FULL));\n>\n> if (OidIsValid(localidxoid))\n> return RelationFindReplTupleByIndex(localrel, localidxoid,\n> LockTupleExclusive,\n> remoteslot, *localslot);\n>\n> return RelationFindReplTupleSeq(localrel, LockTupleExclusive,\n> remoteslot, *localslot);\n>\n>\nMaybe you are right, we don't need the variable. 
But I don't want to get\ninto further discussions just because I'd be changing code unrelated to the\npatch.\n\nSo, I think I prefer to skip this change unless you have strong objections.\n\n\n> ~~~\n>\n> 7. apply_handle_tuple_routing\n>\n> @@ -2890,6 +2877,7 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,\n> TupleConversionMap *map;\n> MemoryContext oldctx;\n> LogicalRepRelMapEntry *part_entry = NULL;\n> +\n> AttrMap *attrmap = NULL;\n>\n> /* ModifyTableState is needed for ExecFindPartition(). */\n> The added whitespace seems unrelated to this patch.\n>\n\nThanks, fixed\n\n\n>\n>\n> ======\n> src/include/replication/logicalrelation.h\n>\n> 8.\n> @@ -31,6 +32,7 @@ typedef struct LogicalRepRelMapEntry\n> Relation localrel; /* relcache entry (NULL when closed) */\n> AttrMap *attrmap; /* map of local attributes to remote ones */\n> bool updatable; /* Can apply updates/deletes? */\n> + Oid localindexoid; /* which index to use, or InvalidOid if none */\n>\n> Indentation is not correct for that new member comment.\n>\n\nfixed, thanks\n\n\nI'm attaching v36.\n\n\nThanks,\nOnder",
"msg_date": "Wed, 8 Mar 2023 14:11:46 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Hou zj, all\n\n>\n> IIRC, it invokes tuples_equal for all cases unless we are using replica\n> identity key or primary key to scan. But there seem some other cases where\n> the\n> tuples_equal looks unnecessary.\n>\n> For example, if the table on subscriber don't have a PK or RI key but have\n> a\n> not-null, non-deferrable, unique key. And if the apply worker choose this\n> index\n> to do the scan, it seems we can skip the tuples_equal as well.\n>\n>\nYeah, that's right. I also spotted this earlier, see* # Testcase start:\nUnique index *\n*that is not primary key or replica identity*\n\nI'm thinking that we should not create any code complexity for this\ncase, at least with\nthis patch. I have a few small follow-up ideas, like this one, or allow\nnon-btree indexes etc.\nbut I'd rather not get those optional improvements to this patch, if that\nmakes sense.\n\n\n\nThanks,\nOnder KALACI\n\nHi Hou zj, all\nIIRC, it invokes tuples_equal for all cases unless we are using replica\nidentity key or primary key to scan. But there seem some other cases where the\ntuples_equal looks unnecessary.\n\nFor example, if the table on subscriber don't have a PK or RI key but have a\nnot-null, non-deferrable, unique key. And if the apply worker choose this index\nto do the scan, it seems we can skip the tuples_equal as well.Yeah, that's right. I also spotted this earlier, see # Testcase start: Unique index that is not primary key or replica identityI'm thinking that we should not create any code complexity for this case, at least withthis patch. I have a few small follow-up ideas, like this one, or allow non-btree indexes etc.but I'd rather not get those optional improvements to this patch, if that makes sense.Thanks,Onder KALACI",
"msg_date": "Wed, 8 Mar 2023 14:11:53 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Shi Yu, all\n\n\n\n> e.g.\n> -- pub\n> CREATE TABLE test_replica_id_full (x int, y int);\n> ALTER TABLE test_replica_id_full REPLICA IDENTITY FULL;\n> CREATE PUBLICATION tap_pub_rep_full FOR TABLE test_replica_id_full;\n> -- sub\n> CREATE TABLE test_replica_id_full (x int, y int, z int);\n> CREATE INDEX test_replica_id_full_idx ON test_replica_id_full(z);\n> CREATE SUBSCRIPTION tap_sub_rep_full_0 CONNECTION 'dbname=postgres\n> port=5432' PUBLICATION tap_pub_rep_full;\n>\n> I didn't see in any cases the behavior changed after applying the patch,\n> which\n> looks good. Besides, I tested the performance for such case.\n>\n\nThanks for testing this edge case. I thought we had a test for this, but it\nseems to be missing.\n\n\n>\n> For the cases that all values of extra columns on the subscriber are NULL,\n> index\n> scan can't do better than sequential scan. This is not a real scenario and\n> I\n> think it only degrades when there are many NULL values in the index\n> column, so\n> this is probably not a case to worry about.\n\n\nI also debugged this case as well, and don't see any problems with that\neither. 
But I think this is a valid\ntest case given at some point we might forget about this case and somehow\nbreak.\n\nSo, I'll add a new test with *PUBLICATION LACKS THE COLUMN ON THE SUBS\nINDEX *on v36.\n\n\n\n> I just share this case and then we\n> can discuss should we pick the index which only contain the extra columns\n> on the\n> subscriber.\n>\n>\nI think its performance implications come down to the discussion on [1].\nOverall, I prefer\navoiding adding any additional complexity in the code for some edge cases.\nThe code\ncan handle this sub-optimal user pattern, with a sub-optimal performance.\n\nStill, happy to hear other thoughts on this.\n\nThanks,\nOnder KALACI\n\n\n[1]\nhttps://www.postgresql.org/message-id/20230307195119.ars36cx6gwqftoen%40awork3.anarazel.de",
"msg_date": "Wed, 8 Mar 2023 14:21:12 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 4:51 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n>\n>>\n>> I just share this case and then we\n>> can discuss should we pick the index which only contain the extra columns on the\n>> subscriber.\n>>\n>\n> I think its performance implications come down to the discussion on [1]. Overall, I prefer\n> avoiding adding any additional complexity in the code for some edge cases. The code\n> can handle this sub-optimal user pattern, with a sub-optimal performance.\n>\n\nIt is fine to leave this and Hou-San's case if they make the patch\ncomplex. However, it may be better to give it a try and see if this or\nother regression/optimization can be avoided without adding much\ncomplexity to the patch. You can prepare a top-up patch and then we\ncan discuss it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 8 Mar 2023 17:12:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Fri, 3 Mar 2023 at 18:40, Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n> Hi Vignesh,\n>\n> Thanks for the review\n>\n>>\n>> 1) We are currently calling RelationGetIndexList twice, once in\n>> FindUsableIndexForReplicaIdentityFull function and in the caller too,\n>> we could avoid one of the calls by passing the indexlist to the\n>> function or removing the check here, index list check can be handled\n>> in FindUsableIndexForReplicaIdentityFull.\n>> + if (remoterel->replident == REPLICA_IDENTITY_FULL &&\n>> + RelationGetIndexList(localrel) != NIL)\n>> + {\n>> + /*\n>> + * If we had a primary key or relation identity with a\n>> unique index,\n>> + * we would have already found and returned that oid.\n>> At this point,\n>> + * the remote relation has replica identity full and\n>> we have at least\n>> + * one local index defined.\n>> + *\n>> + * We are looking for one more opportunity for using\n>> an index. If\n>> + * there are any indexes defined on the local\n>> relation, try to pick\n>> + * a suitable index.\n>> + *\n>> + * The index selection safely assumes that all the\n>> columns are going\n>> + * to be available for the index scan given that\n>> remote relation has\n>> + * replica identity full.\n>> + */\n>> + return FindUsableIndexForReplicaIdentityFull(localrel);\n>> + }\n>> +\n>\n>\n> makes sense, done\n>\n>>\n>>\n>> 2) Copyright year should be mentioned as 2023\n>> diff --git a/src/test/subscription/t/032_subscribe_use_index.pl\n>> b/src/test/subscription/t/032_subscribe_use_index.pl\n>> new file mode 100644\n>> index 0000000000..db0a7ea2a0\n>> --- /dev/null\n>> +++ b/src/test/subscription/t/032_subscribe_use_index.pl\n>> @@ -0,0 +1,861 @@\n>> +# Copyright (c) 2021-2022, PostgreSQL Global Development Group\n>> +\n>> +# Test logical replication behavior with subscriber uses available index\n>> +use strict;\n>> +use warnings;\n>> +use PostgreSQL::Test::Cluster;\n>> +use PostgreSQL::Test::Utils;\n>> +use Test::More;\n>> +\n>\n>\n> I 
changed it to #Copyright (c) 2022-2023, but I'm not sure if it should be only 2023 or\n> like this.\n>\n>>\n>>\n>> 3) Many of the tests are using the same tables, we need not\n>> drop/create publication/subscription for each of the team, we could\n>> just drop and create required indexes and verify the update/delete\n>> statements.\n>> +# ====================================================================\n>> +# Testcase start: SUBSCRIPTION USES INDEX\n>> +#\n>> +# Basic test where the subscriber uses index\n>> +# and only updates 1 row and deletes\n>> +# 1 other row\n>> +#\n>> +\n>> +# create tables pub and sub\n>> +$node_publisher->safe_psql('postgres',\n>> + \"CREATE TABLE test_replica_id_full (x int)\");\n>> +$node_publisher->safe_psql('postgres',\n>> + \"ALTER TABLE test_replica_id_full REPLICA IDENTITY FULL;\");\n>> +$node_subscriber->safe_psql('postgres',\n>> + \"CREATE TABLE test_replica_id_full (x int)\");\n>> +$node_subscriber->safe_psql('postgres',\n>> + \"CREATE INDEX test_replica_id_full_idx ON test_replica_id_full(x)\");\n>>\n>> +# ====================================================================\n>> +# Testcase start: SUBSCRIPTION CREATE/DROP INDEX WORKS WITHOUT ISSUES\n>> +#\n>> +# This test ensures that after CREATE INDEX, the subscriber can automatically\n>> +# use one of the indexes (provided that it fulfils the requirements).\n>> +# Similarly, after DROP index, the subscriber can automatically switch to\n>> +# sequential scan\n>> +\n>> +# create tables pub and sub\n>> +$node_publisher->safe_psql('postgres',\n>> + \"CREATE TABLE test_replica_id_full (x int NOT NULL, y int)\");\n>> +$node_publisher->safe_psql('postgres',\n>> + \"ALTER TABLE test_replica_id_full REPLICA IDENTITY FULL;\");\n>> +$node_subscriber->safe_psql('postgres',\n>> + \"CREATE TABLE test_replica_id_full (x int NOT NULL, y int)\");\n>\n>\n> Well, not all the tables are exactly the same, there are 4-5 different\n> tables. 
Mostly the table names are the same.\n>\n> Plus, the overhead does not seem to be large enough to complicate\n> the test. Many of the src/test/subscription/t files follow this pattern.\n>\n> Do you have strong opinions on changing this?\n\nI felt that once you remove the create publication/subscription/wait\nfor sync steps, the test execution might become faster and save some\ntime in the local execution, cfbot and the various machines in\nbuildfarm. If the execution time will not reduce, then no need to\nchange.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 8 Mar 2023 18:39:29 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Tue, 7 Mar 2023 at 19:17, Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n> Hi Amit, Peter\n>\n>>\n>> > > Let me give an example to demonstrate why I thought something is fishy here:\n>> > >\n>> > > Imagine rel has a (non-default) REPLICA IDENTITY with Oid=1111.\n>> > > Imagine the same rel has a PRIMARY KEY with Oid=2222.\n>> > >\n>\n>\n> Hmm, alright, this is syntactically possible, but not sure if any user\n> would do this. Still thanks for catching this.\n>\n> And, you are right, if a user has created such a schema, IdxIsRelationIdentityOrPK()\n> would return the wrong result and we'd use sequential scan instead of index scan.\n> This would be a regression. I think we should change the function.\n>\n>\n> Here is the example:\n> DROP TABLE tab1;\n> CREATE TABLE tab1 (a int NOT NULL);\n> CREATE UNIQUE INDEX replica_unique ON tab1(a);\n> ALTER TABLE tab1 REPLICA IDENTITY USING INDEX replica_unique;\n> ALTER TABLE tab1 ADD CONSTRAINT pkey PRIMARY KEY (a);\n>\n>\n> I'm attaching v35.\n\nFew comments\n1) Maybe this change is not required:\n fallback if no other solution is possible. If a replica identity other\n than <quote>full</quote> is set on the publisher side, a replica identity\n- comprising the same or fewer columns must also be set on the subscriber\n- side. 
See <xref linkend=\"sql-altertable-replica-identity\"/> for details on\n+ comprising the same or fewer columns must also be set on the\nsubscriber side.\n+ See <xref linkend=\"sql-altertable-replica-identity\"/> for details on\n\n2) Variable declaration and the assignment can be split so that the\nreadability is better:\n+\n+ bool isUsableIndex =\n+ IsIndexUsableForReplicaIdentityFull(indexInfo);\n+\n+ index_close(indexRelation, AccessShareLock);\n+\n\n3) Since there is only one statement within the if condition, the\nbraces can be removed\n+ if (is_btree && !is_partial && !is_only_on_expression)\n+ {\n+ return true;\n+ }\n\n4) There is minor indentation issue in this, we could run pgindent to fix it:\n+static Oid FindLogicalRepLocalIndex(Relation localrel,\n+\n LogicalRepRelation *remoterel);\n+\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 8 Mar 2023 18:41:46 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
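The regression Önder describes above — a relation whose replica identity index and primary key are two different indexes — can be reduced to a small standalone sketch. Everything below is a mock for illustration only: the `Oid` typedef, the two lookup globals, and both function names are stand-ins, not the real PostgreSQL definitions.

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned int Oid;          /* stand-in for PostgreSQL's Oid */
#define InvalidOid ((Oid) 0)

/* Mocked relcache lookups: imagine the replica identity index has
 * OID 1111 and the primary key index has OID 2222, as in the example. */
static Oid replica_identity_index = InvalidOid;
static Oid primary_key_index = InvalidOid;

/* Buggy shape: "identity or PK" collapses to whichever index a single
 * lookup returns first (the replica identity, when one is set), so a
 * distinct primary key index is never recognized and the code would
 * fall back to a sequential scan. */
static bool
idx_is_identity_or_pk_buggy(Oid idxoid)
{
    Oid first = (replica_identity_index != InvalidOid)
        ? replica_identity_index
        : primary_key_index;

    return first == idxoid;
}

/* Fixed shape: compare the candidate index against both. */
static bool
idx_is_identity_or_pk_fixed(Oid idxoid)
{
    return idxoid != InvalidOid &&
        (idxoid == replica_identity_index || idxoid == primary_key_index);
}
```

With `replica_identity_index = 1111` and `primary_key_index = 2222`, the buggy form rejects 2222 even though it is the primary key index, while the fixed form accepts both 1111 and 2222.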
{
"msg_contents": "Hi,\n\n\n> I felt that once you remove the create publication/subscription/wait\n> for sync steps, the test execution might become faster and save some\n> time in the local execution, cfbot and the various machines in\n> buildfarm. If the execution time will not reduce, then no need to\n> change.\n>\n>\nSo, as I noted earlier, there are different schemas. As far as I count,\nthere are at least\n7 different table definitions. I think all tables having the same name are\nmaybe confusing?\n\nEven if I try to group the same table definitions, and avoid create\npublication/subscription/wait\nfor sync steps, the total execution time of the test drops only ~5%. As far\nas I test, that does not\nseem to be the bottleneck for the tests.\n\nWell, I'm really not sure if it is really worth doing that. I think having\neach test independent of each\nother is really much easier to follow.\n\nThough, while looking into the execution times, I realized that in some\ntests, I used quite a lot\nof unnecessary tuples such as:\n\n- \"INSERT INTO test_replica_id_full SELECT i, i FROM\ngenerate_series(0,2100)i;\");\n+ \"INSERT INTO test_replica_id_full SELECT i, i FROM\ngenerate_series(0,21)i;\");\n\nIn the next iteration of the patch, I'm going to decrease the number of\ntuples. That seems to\nsave 5-10% of the execution time on my local machine.\n\n\nThanks,\nOnder KALACI",
"msg_date": "Wed, 8 Mar 2023 18:14:39 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Vignesh C,\n\n\n>\n> Few comments\n> 1) Maybe this change is not required:\n> fallback if no other solution is possible. If a replica identity other\n> than <quote>full</quote> is set on the publisher side, a replica\n> identity\n> - comprising the same or fewer columns must also be set on the subscriber\n> - side. See <xref linkend=\"sql-altertable-replica-identity\"/> for\n> details on\n> + comprising the same or fewer columns must also be set on the\n> subscriber side.\n> + See <xref linkend=\"sql-altertable-replica-identity\"/> for details on\n>\n\nYes, fixed.\n\n>\n> 2) Variable declaration and the assignment can be split so that the\n> readability is better:\n> +\n> + bool isUsableIndex =\n> + IsIndexUsableForReplicaIdentityFull(indexInfo);\n> +\n> + index_close(indexRelation, AccessShareLock);\n> +\n\n\nHmm, can you please elaborate more on this? The declaration\nand assignment are already on different lines.\n\nps: pgindent changed this line a bit. Does that look better?\n\n\n3) Since there is only one statement within the if condition, the\n> braces can be removed\n> + if (is_btree && !is_partial && !is_only_on_expression)\n> + {\n> + return true;\n> + }\n>\n>\nFixed on a newer version of the patch. Now it is only:\n\n*return is_btree && !is_partial && !is_only_on_expression;*\n\n\n> 4) There is minor indentation issue in this, we could run pgindent to fix\n> it:\n> +static Oid FindLogicalRepLocalIndex(Relation localrel,\n> +\n> LogicalRepRelation *remoterel);\n> +\n>\n>\nYes, pgindent fixed it, thanks.\n\n\nAttached v37\n\nThanks,\nOnder KALACI",
"msg_date": "Wed, 8 Mar 2023 19:16:22 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
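The one-line predicate Önder quotes above (`return is_btree && !is_partial && !is_only_on_expression;`) can be sketched standalone. The struct below is a mock of the three properties the check consults, not the real `IndexInfo` from the PostgreSQL source.

```c
#include <assert.h>
#include <stdbool.h>

/* Mock of the three index properties the eligibility check consults. */
typedef struct MockIndexProps
{
    bool is_btree;              /* access method is btree */
    bool is_partial;            /* index has a WHERE predicate */
    bool is_only_on_expression; /* no plain column reference at all */
} MockIndexProps;

/* A candidate index for scans under REPLICA IDENTITY FULL must be
 * btree, non-partial, and reference at least one plain column. */
static bool
is_usable_for_replica_identity_full(const MockIndexProps *props)
{
    return props->is_btree && !props->is_partial &&
        !props->is_only_on_expression;
}
```

A hash index, a partial btree index, or a btree index built purely on expressions all fail the predicate; a plain btree index on one or more columns passes.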
{
"msg_contents": "Here are my review comments for v37-0001.\n\n======\nGeneral - should/must.\n\n1.\nIn my previous review [1] (comment #1) I wrote that only some of the\n\"should\" were misleading and gave examples where to change. But I\ndidn't say that *every* usage of that word was wrong, so your global\nreplace of \"should\" to \"must\" has modified a couple of places in\nunexpected ways.\n\nDetails are in subsequent review comments below -- see #2b, #3, #5.\n\n======\ndoc/src/sgml/logical-replication.sgml\n\n2.\nA published table must have a “replica identity” configured in order\nto be able to replicate UPDATE and DELETE operations, so that\nappropriate rows to update or delete can be identified on the\nsubscriber side. By default, this is the primary key, if there is one.\nAnother unique index (with certain additional requirements) can also\nbe set to be the replica identity. If the table does not have any\nsuitable key, then it can be set to replica identity “full”, which\nmeans the entire row becomes the key. When replica identity “full” is\nspecified, indexes can be used on the subscriber side for searching\nthe rows. Candidate indexes must be btree, non-partial, and have at\nleast one column reference (i.e. cannot consist of only expressions).\nThese restrictions on the non-unique index properties adheres to some\nof the restrictions that are enforced for primary keys. Internally, we\nfollow a similar approach for supporting index scans within logical\nreplication scope. If there are no such suitable indexes, the search\non the subscriber side can be very inefficient, therefore replica\nidentity “full” must only be used as a fallback if no other solution\nis possible. If a replica identity other than “full” is set on the\npublisher side, a replica identity comprising the same or fewer\ncolumns must also be set on the subscriber side. See REPLICA IDENTITY\nfor details on how to set the replica identity. 
If a table without a\nreplica identity is added to a publication that replicates UPDATE or\nDELETE operations then subsequent UPDATE or DELETE operations will\ncause an error on the publisher. INSERT operations can proceed\nregardless of any replica identity.\n\n~~\n\n2a.\nMy previous review [1] (comment #2) was not fixed quite as suggested.\n\nPlease change:\n\"adheres to\" --> \"adhere to\"\n\n~~\n\n2b. should/must\n\nThis should/must change was OK as it was before, because here it is only advice.\n\nPlease change this back how it was:\n\"must only be used as a fallback\" --> \"should only be used as a fallback\"\n\n======\nsrc/backend/executor/execReplication.c\n\n3. build_replindex_scan_key\n\n /*\n * Setup a ScanKey for a search in the relation 'rel' for a tuple 'key' that\n * is setup to match 'rel' (*NOT* idxrel!).\n *\n- * Returns whether any column contains NULLs.\n+ * Returns how many columns must be used for the index scan.\n+ *\n\n~\n\nThis should/must change does not seem quite right.\n\nSUGGESTION (reworded)\nReturns how many columns to use for the index scan.\n\n~~~\n\n4. build_replindex_scan_key\n\n>\n> Based on the discussions below, I kept as-is. I really don't want to do unrelated\n> changes in this patch, as I also got several feedback for not doing it,\n>\n\nHmm, although this code pre-existed I don’t consider this one as\n\"unrelated changes\" because the patch introduced the new \"if\n(!AttributeNumberIsValid(table_attno))\" which changed things. As I\nwrote to Amit yesterday [2] IMO it would be better to do the 'opttype'\nassignment *after* the potential 'continue' otherwise there is every\nchance that the assignment is just redundant. And if you move the\nassignment where it belongs, then you might as well declare the\nvariable in the more appropriate place at the same time – i.e. with\n'opfamily' declaration. 
Anyway, I've given my reason a couple of times\nnow, so if you don't want to change it I won't debate about it\nanymore.\n\n======\nsrc/backend/replication/logical/relation.c\n\n5. FindUsableIndexForReplicaIdentityFull\n\n+ * XXX: There are no fundamental problems for supporting non-btree indexes.\n+ * We mostly need to relax the limitations in RelationFindReplTupleByIndex().\n+ * For partial indexes, the required changes are likely to be larger. If\n+ * none of the tuples satisfy the expression for the index scan, we must\n+ * fall-back to sequential execution, which might not be a good idea in some\n+ * cases.\n\nThe should/must change (the one in the XXX comment) does not seem quite right.\n\nSUGGESTION\n\"we must fall-back to sequential execution\" --> \"we fallback to\nsequential execution\"\n\n======\n.../subscription/t/032_subscribe_use_index.pl\n\nFYI, I get TAP test in error (Note - this is when only patch 0001 is applied)\n\nt/031_column_list.pl ............... ok\nt/032_subscribe_use_index.pl ....... 19/?\n# Failed test 'ensure subscriber has not used index with\nenable_indexscan=false'\n# at t/032_subscribe_use_index.pl line 806.\n# got: '1'\n# expected: '0'\nt/032_subscribe_use_index.pl ....... 21/? # Looks like you failed 1 test of 22.\nt/032_subscribe_use_index.pl ....... Dubious, test returned 1 (wstat 256, 0x100)\nFailed 1/22 subtests\nt/100_bugs.pl ...................... ok\n\nAFAICT there is a test case that is testing the patch 0002\nfunctionality even when patch 0002 is not applied yet.\n\n------\n[1] Replies to my review v35 -\nhttps://www.postgresql.org/message-id/CACawEhXnTQyGNCXeQGhN3_%2BGWujhS3MyY27C4sSqRvZ%2B_B7FLg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAHut%2BPvLvDGFzk4fSaevGY5h2PpAeSZjJjje_7vBdb7ag%3DzswA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 9 Mar 2023 12:03:53 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Thu, Mar 9, 2023 at 6:34 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> 4. build_replindex_scan_key\n>\n> >\n> > Based on the discussions below, I kept as-is. I really don't want to do unrelated\n> > changes in this patch, as I also got several feedback for not doing it,\n> >\n>\n> Hmm, although this code pre-existed I don’t consider this one as\n> \"unrelated changes\" because the patch introduced the new \"if\n> (!AttributeNumberIsValid(table_attno))\" which changed things. As I\n> wrote to Amit yesterday [2] IMO it would be better to do the 'opttype'\n> assignment *after* the potential 'continue' otherwise there is every\n> chance that the assignment is just redundant. And if you move the\n> assignment where it belongs, then you might as well declare the\n> variable in the more appropriate place at the same time – i.e. with\n> 'opfamily' declaration. Anyway, I've given my reason a couple of times\n> now, so if you don't want to change it I won't about it debate\n> anymore.\n>\n\nI agree with this reasoning.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 9 Mar 2023 08:43:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
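The loop shape Peter and Amit agree on above — do the per-attribute work only after the `continue` that skips expression attributes — can be sketched as follows. The function name and the bare `int` attribute numbers are illustrative stand-ins for `build_replindex_scan_key()`'s real types.

```c
#include <assert.h>

#define INVALID_ATTR_NUMBER 0   /* stand-in for InvalidAttrNumber */

/* Returns how many scan keys would be built: expression attributes
 * (invalid table attno) are skipped up front, so any per-attribute
 * lookup placed after the continue is never performed redundantly. */
static int
count_usable_scan_keys(const int *table_attnos, int nattrs)
{
    int skey_attoff = 0;

    for (int i = 0; i < nattrs; i++)
    {
        if (table_attnos[i] == INVALID_ATTR_NUMBER)
            continue;           /* skip: nothing to look up for this one */

        /* operator-family / type lookups belong here, after the skip */
        skey_attoff++;
    }
    return skey_attoff;
}
```

Placing the lookups before the `continue` would still compute correct results, but would waste work on every skipped expression attribute, which is the redundancy being discussed.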
{
"msg_contents": "On Wed, 8 Mar 2023 at 21:46, Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n> Hi Vignesh C,\n>\n>>\n>>\n>> Few comments\n>> 1) Maybe this change is not required:\n>> fallback if no other solution is possible. If a replica identity other\n>> than <quote>full</quote> is set on the publisher side, a replica identity\n>> - comprising the same or fewer columns must also be set on the subscriber\n>> - side. See <xref linkend=\"sql-altertable-replica-identity\"/> for details on\n>> + comprising the same or fewer columns must also be set on the\n>> subscriber side.\n>> + See <xref linkend=\"sql-altertable-replica-identity\"/> for details on\n>\n>\n> Yes, fixed.\n>>\n>>\n>> 2) Variable declaration and the assignment can be split so that the\n>> readability is better:\n>> +\n>> + bool isUsableIndex =\n>> + IsIndexUsableForReplicaIdentityFull(indexInfo);\n>> +\n>> + index_close(indexRelation, AccessShareLock);\n>> +\n>\n>\n> Hmm, can you please elaborate more on this? The declaration\n> and assignment are already on different lines.\n>\n> ps: pgindent changed this line a bit. Does that look better?\n\nI thought of changing it to something like below:\nbool isUsableIndex;\nOid idxoid = lfirst_oid(lc);\nRelation indexRelation = index_open(idxoid, AccessShareLock);\nIndexInfo *indexInfo = BuildIndexInfo(indexRelation);\n\nisUsableIndex = IsIndexUsableForReplicaIdentityFull(indexInfo);\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 9 Mar 2023 09:53:40 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 8:44 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n>>\n>> I felt that once you remove the create publication/subscription/wait\n>> for sync steps, the test execution might become faster and save some\n>> time in the local execution, cfbot and the various machines in\n>> buildfarm. If the execution time will not reduce, then no need to\n>> change.\n>>\n>\n> So, as I noted earlier, there are different schemas. As far as I count, there are at least\n> 7 different table definitions. I think all tables having the same name are maybe confusing?\n>\n> Even if I try to group the same table definitions, and avoid create publication/subscription/wait\n> for sync steps, the total execution time of the test drops only ~5%. As far as I test, that does not\n> seem to be the bottleneck for the tests.\n>\n> Well, I'm really not sure if it is really worth doing that. I think having each test independent of each\n> other is really much easier to follow.\n>\n\nThis new test takes ~9s on my machine whereas most other tests in\nsubscription/t take roughly 2-5s. I feel we should try to reduce the\ntest timing without sacrificing much of the functionality or code\ncoverage. I think if possible we should try to reduce setup/teardown\ncost for each separate test by combining them where possible. I have a\nfew comments on tests which also might help to optimize these tests.\n\n1.\n+# Testcase start: SUBSCRIPTION USES INDEX\n+#\n+# Basic test where the subscriber uses index\n+# and only updates 1 row and deletes\n+# 1 other row\n...\n...\n+# Testcase start: SUBSCRIPTION USES INDEX UPDATEs MULTIPLE ROWS\n+#\n+# Basic test where the subscriber uses index\n+# and updates 50 rows\n\n...\n+# Testcase start: SUBSCRIPTION USES INDEX WITH MULTIPLE COLUMNS\n+#\n+# Basic test where the subscriber uses index\n+# and deletes 200 rows\n\nI think to a good extent these tests overlap each other. 
I think we\ncan have just one test for the index with multiple columns that\nupdates multiple rows and have both updates and deletes.\n\n2.\n+# Testcase start: SUBSCRIPTION DOES NOT USE PARTIAL INDEX\n...\n...\n+# Testcase start: SUBSCRIPTION DOES NOT USE INDEXES WITH ONLY EXPRESSIONS\n\nInstead of having separate tests where we do all setups again, I think\nit would be better if we create both the indexes in one test and show\nthat none of them is used.\n\n3.\n+# now, the update could either use the test_replica_id_full_idy or\ntest_replica_id_full_idy index\n+# it is not possible for user to control which index to use\n\nThe name of the second index is wrong in the above comment.\n\n4.\n+# Testcase start: SUBSCRIPTION BEHAVIOR WITH ENABLE_INDEXSCAN\n\nAs we have removed enable_indexscan check, you should remove this test.\n\n5. In general, the line length seems to vary a lot for different\nmulti-line comments. Though we are not strict in that it will look\nbetter if there is consistency in that (let's have ~80 cols line\nlength for each comment in a single line).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 9 Mar 2023 10:19:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Thu, Mar 9, 2023 at 3:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 8, 2023 at 8:44 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n> >\n> >>\n> >> I felt that once you remove the create publication/subscription/wait\n> >> for sync steps, the test execution might become faster and save some\n> >> time in the local execution, cfbot and the various machines in\n> >> buildfarm. If the execution time will not reduce, then no need to\n> >> change.\n> >>\n> >\n> > So, as I noted earlier, there are different schemas. As far as I count, there are at least\n> > 7 different table definitions. I think all tables having the same name are maybe confusing?\n> >\n> > Even if I try to group the same table definitions, and avoid create publication/subscription/wait\n> > for sync steps, the total execution time of the test drops only ~5%. As far as I test, that does not\n> > seem to be the bottleneck for the tests.\n> >\n> > Well, I'm really not sure if it is really worth doing that. I think having each test independent of each\n> > other is really much easier to follow.\n> >\n>\n> This new test takes ~9s on my machine whereas most other tests in\n> subscription/t take roughly 2-5s. I feel we should try to reduce the\n> test timing without sacrificing much of the functionality or code\n> coverage. I think if possible we should try to reduce setup/teardown\n> cost for each separate test by combining them where possible. I have a\n> few comments on tests which also might help to optimize these tests.\n>\n\nTo avoid culling useful tests just because they take too long to run I\nhave often thought we should separate some of the useful (but costly)\nsubscription tests from the mainstream other tests. 
Then they won't\ncost any extra time for the build-farm, but at least we can still run\nthem on-demand using PG_TEST_EXTRA [1] approach if we really want to.\n\n------\n[1] https://www.postgresql.org/docs/devel/regress-run.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 9 Mar 2023 16:07:30 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Thu, Mar 9, 2023 at 10:37 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Thu, Mar 9, 2023 at 3:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Mar 8, 2023 at 8:44 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n> > >\n> > >>\n> > >> I felt that once you remove the create publication/subscription/wait\n> > >> for sync steps, the test execution might become faster and save some\n> > >> time in the local execution, cfbot and the various machines in\n> > >> buildfarm. If the execution time will not reduce, then no need to\n> > >> change.\n> > >>\n> > >\n> > > So, as I noted earlier, there are different schemas. As far as I count, there are at least\n> > > 7 different table definitions. I think all tables having the same name are maybe confusing?\n> > >\n> > > Even if I try to group the same table definitions, and avoid create publication/subscription/wait\n> > > for sync steps, the total execution time of the test drops only ~5%. As far as I test, that does not\n> > > seem to be the bottleneck for the tests.\n> > >\n> > > Well, I'm really not sure if it is really worth doing that. I think having each test independent of each\n> > > other is really much easier to follow.\n> > >\n> >\n> > This new test takes ~9s on my machine whereas most other tests in\n> > subscription/t take roughly 2-5s. I feel we should try to reduce the\n> > test timing without sacrificing much of the functionality or code\n> > coverage. I think if possible we should try to reduce setup/teardown\n> > cost for each separate test by combining them where possible. I have a\n> > few comments on tests which also might help to optimize these tests.\n> >\n>\n> To avoid culling useful tests just because they take too long to run I\n> have often thought we should separate some of the useful (but costly)\n> subscription tests from the mainstream other tests. 
Then they won't\n> cost any extra time for the build-farm, but at least we can still run\n> them on-demand using PG_TEST_EXTRA [1] approach if we really want to.\n>\n\nI don't think that is relevant here. It is mostly about removing\nduplicate work we are doing in tests. I don't see anything in the\ntests that should require a long time to complete.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 9 Mar 2023 11:35:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Peter,\n\n\n\n> 1.\n> In my previous review [1] (comment #1) I wrote that only some of the\n> \"should\" were misleading and gave examples where to change. But I\n> didn't say that *every* usage of that word was wrong, so your global\n> replace of \"should\" to \"must\" has modified a couple of places in\n> unexpected ways.\n>\n> Details are in subsequent review comments below -- see #2b, #3, #5.\n>\n\nAh, that was my mistake. Thanks for thorough review on this.\n\n\n>\n> ======\n> doc/src/sgml/logical-replication.sgml\n>\n> 2.\n> A published table must have a “replica identity” configured in order\n> to be able to replicate UPDATE and DELETE operations, so that\n> appropriate rows to update or delete can be identified on the\n> subscriber side. By default, this is the primary key, if there is one.\n> Another unique index (with certain additional requirements) can also\n> be set to be the replica identity. If the table does not have any\n> suitable key, then it can be set to replica identity “full”, which\n> means the entire row becomes the key. When replica identity “full” is\n> specified, indexes can be used on the subscriber side for searching\n> the rows. Candidate indexes must be btree, non-partial, and have at\n> least one column reference (i.e. cannot consist of only expressions).\n> These restrictions on the non-unique index properties adheres to some\n> of the restrictions that are enforced for primary keys. Internally, we\n> follow a similar approach for supporting index scans within logical\n> replication scope. If there are no such suitable indexes, the search\n> on the subscriber side can be very inefficient, therefore replica\n> identity “full” must only be used as a fallback if no other solution\n> is possible. If a replica identity other than “full” is set on the\n> publisher side, a replica identity comprising the same or fewer\n> columns must also be set on the subscriber side. 
See REPLICA IDENTITY\n> for details on how to set the replica identity. If a table without a\n> replica identity is added to a publication that replicates UPDATE or\n> DELETE operations then subsequent UPDATE or DELETE operations will\n> cause an error on the publisher. INSERT operations can proceed\n> regardless of any replica identity.\n>\n> ~~\n>\n> 2a.\n> My previous review [1] (comment #2) was not fixed quite as suggested.\n>\n> Please change:\n> \"adheres to\" --> \"adhere to\"\n>\n>\nOops, it seems I only got the \"to\" part of your suggestion and missed \"s\".\n\nDone now.\n\n\n> ~~\n>\n> 2b. should/must\n>\n> This should/must change was OK as it was before, because here it is only\n> advice.\n>\n> Please change this back how it was:\n> \"must only be used as a fallback\" --> \"should only be used as a fallback\"\n>\n\nThanks, changed.\n\n\n>\n> ======\n> src/backend/executor/execReplication.c\n>\n> 3. build_replindex_scan_key\n>\n> /*\n> * Setup a ScanKey for a search in the relation 'rel' for a tuple 'key'\n> that\n> * is setup to match 'rel' (*NOT* idxrel!).\n> *\n> - * Returns whether any column contains NULLs.\n> + * Returns how many columns must be used for the index scan.\n> + *\n>\n> ~\n>\n> This should/must change does not seem quite right.\n>\n> SUGGESTION (reworded)\n> Returns how many columns to use for the index scan.\n>\n\nFixed.\n\n(I wish we had a simpler process to incorporate such\ncomments.)\n\n\n\n>\n> ~~~\n>\n> 4. build_replindex_scan_key\n>\n> >\n> > Based on the discussions below, I kept as-is. I really don't want to do\n> unrelated\n> > changes in this patch, as I also got several feedback for not doing it,\n> >\n>\n> Hmm, although this code pre-existed I don’t consider this one as\n> \"unrelated changes\" because the patch introduced the new \"if\n> (!AttributeNumberIsValid(table_attno))\" which changed things. 
As I\n> wrote to Amit yesterday [2] IMO it would be better to do the 'opttype'\n> assignment *after* the potential 'continue' otherwise there is every\n> chance that the assignment is just redundant. And if you move the\n> assignment where it belongs, then you might as well declare the\n> variable in the more appropriate place at the same time – i.e. with\n> 'opfamily' declaration. Anyway, I've given my reason a couple of times\n> now, so if you don't want to change it I won't about it debate\n> anymore.\n>\n\nAlright, given both you and Amit [1] agree on this, I'll follow that.\n\n\n\n>\n> ======\n> src/backend/replication/logical/relation.c\n>\n> 5. FindUsableIndexForReplicaIdentityFull\n>\n> + * XXX: There are no fundamental problems for supporting non-btree\n> indexes.\n> + * We mostly need to relax the limitations in\n> RelationFindReplTupleByIndex().\n> + * For partial indexes, the required changes are likely to be larger. If\n> + * none of the tuples satisfy the expression for the index scan, we must\n> + * fall-back to sequential execution, which might not be a good idea in\n> some\n> + * cases.\n>\n> The should/must change (the one in the XXX comment) does not seem quite\n> right.\n>\n> SUGGESTION\n> \"we must fall-back to sequential execution\" --> \"we fallback to\n> sequential execution\"\n>\n\nfixed, thanks.\n\n\n>\n> ======\n> .../subscription/t/032_subscribe_use_index.pl\n>\n> FYI, I get TAP test in error (Note - this is when only patch 0001 is\n> appied)\n>\n> t/031_column_list.pl ............... ok\n> t/032_subscribe_use_index.pl ....... 19/?\n> # Failed test 'ensure subscriber has not used index with\n> enable_indexscan=false'\n> # at t/032_subscribe_use_index.pl line 806.\n> # got: '1'\n> # expected: '0'\n> t/032_subscribe_use_index.pl ....... 21/? # Looks like you failed 1 test\n> of 22.\n> t/032_subscribe_use_index.pl ....... Dubious, test returned 1 (wstat 256,\n> 0x100)\n> Failed 1/22 subtests\n> t/100_bugs.pl ...................... 
ok\n>\n> AFAICT there is a test case that is testing the patch 0002\n> functionality even when patch 0002 is not applied yet.\n>\n>\nOops, I somehow managed to make the same rebase mistake. I fixed this,\nand for next time I'll make sure that each commit passes the CI separately.\nSorry for the noise.\n\nI'll attach the changes on v38 in the next e-mail.\n\nThanks,\nOnder KALACI\n\n[1]\nhttps://www.postgresql.org/message-id/CAA4eK1LDcZgkbOBr1O0cN%3DCaXT-TKf-86fb2XuKbcbOzPXRk4w%40mail.gmail.com",
"msg_date": "Thu, 9 Mar 2023 12:55:51 +0300",
"msg_from": "Önder Kalacı <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is full on the publisher"
},
{
"msg_contents": "Hi Vignesh C,\n\n\n> > Hmm, can you please elaborate more on this? The declaration\n> > and assignment are already on different lines.\n> >\n> > ps: pgindent changed this line a bit. Does that look better?\n>\n> I thought of changing it to something like below:\n> bool isUsableIndex;\n> Oid idxoid = lfirst_oid(lc);\n> Relation indexRelation = index_open(idxoid, AccessShareLock);\n> IndexInfo *indexInfo = BuildIndexInfo(indexRelation);\n>\n> isUsableIndex = IsIndexUsableForReplicaIdentityFull(indexInfo);\n>\n>\nAlright, this looks slightly better. I did a small change to your\nsuggestion, basically kept *lfirst_oid*\nas the first statement in the loop.\n\nI'll attach the changes on v38 in the next e-mail.\n\n\nThanks,\nOnder KALACI",
"msg_date": "Thu, 9 Mar 2023 12:55:55 +0300",
"msg_from": "Önder Kalacı <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is full on the publisher"
},
{
"msg_contents": "Hi Amit, all\n\n\n> >\n>\n> This new test takes ~9s on my machine whereas most other tests in\n> subscription/t take roughly 2-5s. I feel we should try to reduce the\n> test timing without sacrificing much of the functionality or code\n> coverage.\n\n\nAlright, that is reasonable.\n\n\n> I think if possible we should try to reduce setup/teardown\n> cost for each separate test by combining them where possible. I have a\n> few comments on tests which also might help to optimize these tests.\n>\n> 1.\n> +# Testcase start: SUBSCRIPTION USES INDEX\n> +#\n> +# Basic test where the subscriber uses index\n> +# and only updates 1 row and deletes\n> +# 1 other row\n> ...\n> ...\n> +# Testcase start: SUBSCRIPTION USES INDEX UPDATEs MULTIPLE ROWS\n> +#\n> +# Basic test where the subscriber uses index\n> +# and updates 50 rows\n>\n> ...\n> +# Testcase start: SUBSCRIPTION USES INDEX WITH MULTIPLE COLUMNS\n> +#\n> +# Basic test where the subscriber uses index\n> +# and deletes 200 rows\n>\n> I think to a good extent these tests overlap each other. 
I think we\n> can have just one test for the index with multiple columns that\n> updates multiple rows and have both updates and deletes.\n>\n\nAlright, dropped *SUBSCRIPTION USES INDEX*, expanded\n*SUBSCRIPTION USES INDEX WITH MULTIPLE COLUMNS* with an UPDATE\nthat touches multiple rows\n\n\n> 2.\n> +# Testcase start: SUBSCRIPTION DOES NOT USE PARTIAL INDEX\n> ...\n> ...\n> +# Testcase start: SUBSCRIPTION DOES NOT USE INDEXES WITH ONLY EXPRESSIONS\n>\n> Instead of having separate tests where we do all setups again, I think\n> it would be better if we create both the indexes in one test and show\n> that none of them is used.\n>\n\nMakes sense\n\n\n>\n> 3.\n> +# now, the update could either use the test_replica_id_full_idy or\n> test_replica_id_full_idy index\n> +# it is not possible for user to control which index to use\n>\n> The name of the second index is wrong in the above comment.\n>\n\nthanks, fixed\n\n\n>\n> 4.\n> +# Testcase start: SUBSCRIPTION BEHAVIOR WITH ENABLE_INDEXSCAN\n>\n> As we have removed enable_indexscan check, you should remove this test.\n>\n\nHmm, I think my rebase problems are causing confusion here, which v38 fixes.\n\nIn the first commit, we have ENABLE_INDEXSCAN checks. In the second commit,\nI changed the same test to use enable_replica_identity_full_index_scan.\n\nIf we are going to only consider the first patch to get into the master\nbranch,\nI can probably drop the test. In that case, I'm not sure what is our\nperspective\non ENABLE_INDEXSCAN GUC. Do we want to keep that guard in the code\n(hence the test)?\n\n\n>\n> 5. In general, the line length seems to vary a lot for different\n> multi-line comments. Though we are not strict in that it will look\n> better if there is consistency in that (let's have ~80 cols line\n> length for each comment in a single line).\n>\n>\nWent over the tests, and made ~80 cols. 
There is one exception, in the first\ncommit, the test for enable_indexscan is still shorter, but I failed to make\nthat\nproperly. I'll try to fix that as well, but I didn't want to block the\nprogress due to\nthat.\n\nAlso, you have not noted, but I think *SUBSCRIPTION USES INDEX WITH\nMULTIPLE COLUMNS*\nalready covers *SUBSCRIPTION USES INDEX UPDATEs MULTIPLE ROWS.*\n\nSo, I changed the first one to *SUBSCRIPTION USES INDEX WITH MULTIPLE ROWS\nAND COLUMNS*\nand dropped the second one. Let me know if it does not make sense to you.\nIf I try, there are a few more\nopportunities to squeeze in some more tests together, but those would start\nto complicate readability.\n\nAttached v38.\n\nThanks,\nOnder KALACI",
"msg_date": "Thu, 9 Mar 2023 12:56:00 +0300",
"msg_from": "Önder Kalacı <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is full on the publisher"
},
{
"msg_contents": "On Thu, Mar 9, 2023 at 3:26 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n>>\n>> 4.\n>> +# Testcase start: SUBSCRIPTION BEHAVIOR WITH ENABLE_INDEXSCAN\n>>\n>> As we have removed enable_indexscan check, you should remove this test.\n>\n>\n> Hmm, I think my rebase problems are causing confusion here, which v38 fixes.\n>\n\nI think it is still not fixed in v38 as the test is still present in 0001.\n\n> In the first commit, we have ENABLE_INDEXSCAN checks. In the second commit,\n> I changed the same test to use enable_replica_identity_full_index_scan.\n>\n> If we are going to only consider the first patch to get into the master branch,\n> I can probably drop the test. In that case, I'm not sure what is our perspective\n> on ENABLE_INDEXSCAN GUC. Do we want to keep that guard in the code\n> (hence the test)?\n>\n\nI am not sure what we are going to do on this because I feel we need\nsome storage option as you have in 0002 patch but you and Andres\nthinks that is not required. So, we can discuss that a bit more after\n0001 is committed but if there is no agreement then we need to\nprobably drop it. Even if drop it, I don't think using enable_index\nmakes sense. I think for now you don't need to send 0002 patch, let's\nfirst try to get 0001 patch and then we can discuss about 0002.\n\n>>\n>>\n>> 5. In general, the line length seems to vary a lot for different\n>> multi-line comments. Though we are not strict in that it will look\n>> better if there is consistency in that (let's have ~80 cols line\n>> length for each comment in a single line).\n>>\n>\n> Went over the tests, and made ~80 cols. There is one exception, in the first\n> commit, the test for enable_indexcan is still shorter, but I failed to make that\n> properly. 
I'll try to fix that as well, but I didn't want to block the progress due to\n> that.\n>\n> Also, you have not noted, but I think SUBSCRIPTION USES INDEX WITH MULTIPLE COLUMNS\n> already covers SUBSCRIPTION USES INDEX UPDATEs MULTIPLE ROWS.\n>\n> So, I changed the first one to SUBSCRIPTION USES INDEX WITH MULTIPLE ROWS AND COLUMNS\n> and dropped the second one. Let me know if it does not make sense to you. If I try, there are few more\n> opportunities to squeeze in some more tests together, but those would start to complicate readability.\n>\n\nI still want to reduce the test time and will think about it. Which of\nthe other tests do you think can be combined?\n\nBTW, did you consider updating the patch based on my yesterday's email [1]?\n\nOne minor comment:\n+# now, create index and see that the index is used\n+$node_subscriber->safe_psql('postgres',\n+ \"CREATE INDEX test_replica_id_full_idx ON test_replica_id_full(x)\");\n+\n+# wait until the index is created\n+$node_subscriber->poll_query_until(\n+ 'postgres', q{select count(*)=1 from pg_stat_all_indexes where\nindexrelname = 'test_replica_id_full_idx';}\n+) or die \"Timed out while waiting for creating index test_replica_id_full_idx\";\n+\n+$node_publisher->safe_psql('postgres',\n+ \"UPDATE test_replica_id_full SET x = x + 1 WHERE x = 15;\");\n+$node_publisher->wait_for_catchup($appname);\n+\n+\n+# wait until the index is used on the subscriber\n+$node_subscriber->poll_query_until(\n+ 'postgres', q{select (idx_scan = 1) from pg_stat_all_indexes where\nindexrelname = 'test_replica_id_full_idx';}\n+) or die \"Timed out while waiting for check subscriber\ntap_sub_rep_full updates one row via index\";\n+\n+\n+# now, create index on column y as well\n+$node_subscriber->safe_psql('postgres',\n+ \"CREATE INDEX test_replica_id_full_idy ON test_replica_id_full(y)\");\n+\n+# wait until the index is created\n+$node_subscriber->poll_query_until(\n+ 'postgres', q{select count(*)=1 from pg_stat_all_indexes 
where\nindexrelname = 'test_replica_id_full_idy';}\n+) or die \"Timed out while waiting for creating index test_replica_id_full_idy\";\n\nIt appears you are using inconsistent spacing. It may be better to use\na single empty line wherever required.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1%2BoM_v-b_WDHZmqCyVHU2oD4j3vF9YcH9xVHj%3DzAfy4og%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 9 Mar 2023 16:50:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is full on the publisher"
},
{
"msg_contents": "On Thu, Mar 9, 2023 at 4:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Mar 9, 2023 at 3:26 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n> >\n> >\n> > So, I changed the first one to SUBSCRIPTION USES INDEX WITH MULTIPLE ROWS AND COLUMNS\n> > and dropped the second one. Let me know if it does not make sense to you. If I try, there are few more\n> > opportunities to squeeze in some more tests together, but those would start to complicate readability.\n> >\n>\n> I still want to reduce the test time and will think about it. Which of\n> the other tests do you think can be combined?\n>\n\nSome of the ideas I can think of are as follows:\n\n1. Combine \"SUBSCRIPTION USES INDEX WITH MULTIPLE ROWS AND COLUMNS\"\nand \"SUBSCRIPTION USES INDEX WITH DROPPED COLUMNS\" such that after\nverifying updates and deletes of the first test, we can drop some of\nthe columns on both publisher and subscriber, then use alter\nsubscription ... refresh publication command and then do the steps of\nthe second test. Note that we don't add tables after initial setup,\nonly changing schema.\n\n2. We can also combine \"Some NULL values\" and \"PUBLICATION LACKS THE\nCOLUMN ON THE SUBS INDEX\" as both use the same schema. After the first\ntest, we need to drop the existing index and create a new index on the\nsubscriber node.\n\n3. General comment\n+# updates 200 rows\n+$node_publisher->safe_psql('postgres',\n+ \"UPDATE test_replica_id_full SET x = x + 1 WHERE x IN (5, 6);\");\n\nI think here you are updating 20 rows not 200. So, the comment seems\nwrong to me.\n\nPlease think more and see if we can combine some other tests like\n\"Unique index that is not primary key or replica identity\" and the\ntest we will have after comment#2 above.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 9 Mar 2023 17:43:04 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is full on the publisher"
},
{
"msg_contents": "Hi,\n\nAmit Kapila <amit.kapila16@gmail.com> wrote on Wed, 8 Mar 2023 at 14:42:\n\n> On Wed, Mar 8, 2023 at 4:51 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n> >\n> >\n> >>\n> >> I just share this case and then we\n> >> can discuss should we pick the index which only contain the extra\n> columns on the\n> >> subscriber.\n> >>\n> >\n> > I think its performance implications come down to the discussion on [1].\n> Overall, I prefer\n> > avoiding adding any additional complexity in the code for some edge\n> cases. The code\n> > can handle this sub-optimal user pattern, with a sub-optimal performance.\n> >\n>\n> It is fine to leave this and Hou-San's case if they make the patch\n> complex. However, it may be better to give it a try and see if this or\n> other regression/optimization can be avoided without adding much\n> complexity to the patch. You can prepare a top-up patch and then we\n> can discuss it.\n>\n>\n>\nAlright, I did some basic prototypes for the problems mentioned, just to\nshow\nthat these problems can be solved without too much hassle. But, the patches\nare not complete, some tests fail, no comments / tests exist, some values\nshould be\ncached etc. Mostly sharing as a heads up and sharing the progress given I\nhave not\nresponded to this specific mail. I'll update these when I have some extra\ntime after\nreplying to the 0001 patch.\n\n Thanks,\nOnder",
"msg_date": "Thu, 9 Mar 2023 15:17:32 +0300",
"msg_from": "Önder Kalacı <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is full on the publisher"
},
{
"msg_contents": "Hi Amit,\n\n\n> >\n> >>\n> >> 4.\n> >> +# Testcase start: SUBSCRIPTION BEHAVIOR WITH ENABLE_INDEXSCAN\n> >>\n> >> As we have removed enable_indexscan check, you should remove this test.\n> >\n> >\n> > Hmm, I think my rebase problems are causing confusion here, which v38\n> fixes.\n> >\n>\n> I think it is still not fixed in v38 as the test is still present in 0001.\n>\n\nAh, yes, sorry again for the noise. v39 will drop that.\n\n\n>\n> > In the first commit, we have ENABLE_INDEXSCAN checks. In the second\n> commit,\n> > I changed the same test to use enable_replica_identity_full_index_scan.\n> >\n> > If we are going to only consider the first patch to get into the master\n> branch,\n> > I can probably drop the test. In that case, I'm not sure what is our\n> perspective\n> > on ENABLE_INDEXSCAN GUC. Do we want to keep that guard in the code\n> > (hence the test)?\n> >\n>\n> I am not sure what we are going to do on this because I feel we need\n> some storage option as you have in 0002 patch but you and Andres\n> thinks that is not required. So, we can discuss that a bit more after\n> 0001 is committed but if there is no agreement then we need to\n> probably drop it. Even if drop it, I don't think using enable_index\n> makes sense. I think for now you don't need to send 0002 patch, let's\n> first try to get 0001 patch and then we can discuss about 0002.\n>\n\nsounds good, when needed I'll rebase 0002.\n\n\n> >\n> > Also, you have not noted, but I think SUBSCRIPTION USES INDEX WITH\n> MULTIPLE COLUMNS\n> > already covers SUBSCRIPTION USES INDEX UPDATEs MULTIPLE ROWS.\n> >\n> > So, I changed the first one to SUBSCRIPTION USES INDEX WITH MULTIPLE\n> ROWS AND COLUMNS\n> > and dropped the second one. Let me know if it does not make sense to\n> you. If I try, there are few more\n> > opportunities to squeeze in some more tests together, but those would\n> start to complicate readability.\n> >\n>\n> I still want to reduce the test time and will think about it. 
Which of\n> the other tests do you think can be combined?\n>\n>\nI'll follow your suggestion in the next e-mail [2], and focus on further\nimprovements.\n\n> BTW, did you consider updating the patch based on my yesterday's email [1]?\n>\n>\nYes, replied to that one just now with some wip commits [1]\n\n\n> It appears you are using inconsistent spacing. It may be better to use\n> a single empty line wherever required.\n>\n>\nSure, let me fix those.\n\nattached v39.\n\n[1]\nhttps://www.postgresql.org/message-id/CACawEhXGnk6v7UOHrxuJjjybHvvq33Zv666ouy4UzjPfJM6tBw%40mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/CAA4eK1LSYWrthA3xjbrZvZVmwuha10HtM3-QRrVMD7YBt4t3pg%40mail.gmail.com",
"msg_date": "Thu, 9 Mar 2023 15:18:28 +0300",
"msg_from": "Önder Kalacı <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is full on the publisher"
},
{
"msg_contents": "Here are some review comments for patch v39-0001 (mostly the test code).\n\n======\nsrc/backend/replication/logical/relation.c\n\n1. FindUsableIndexForReplicaIdentityFull\n\n+static Oid\n+FindUsableIndexForReplicaIdentityFull(Relation localrel)\n+{\n+ List *indexlist = RelationGetIndexList(localrel);\n+ Oid usableIndex = InvalidOid;\n+ ListCell *lc;\n+\n+ foreach(lc, indexlist)\n+ {\n+ Oid idxoid = lfirst_oid(lc);\n+ bool isUsableIndex;\n+ Relation indexRelation = index_open(idxoid, AccessShareLock);\n+ IndexInfo *indexInfo = BuildIndexInfo(indexRelation);\n+\n+ isUsableIndex = IsIndexUsableForReplicaIdentityFull(indexInfo);\n+\n+ index_close(indexRelation, AccessShareLock);\n+\n+ if (isUsableIndex)\n+ {\n+ /* we found one eligible index, don't need to continue */\n+ usableIndex = idxoid;\n+ break;\n+ }\n+ }\n+\n+ return usableIndex;\n+}\n\nThis comment is not functional -- if you prefer the code as-is, then\nignore this comment.\n\nBut, personally I would:\n- Move some of that code from the declarations. I feel it would be\nbetter if the index_open/index_close were both in the code-body\ninstead of half in declarations and half not.\n- Remove the 'usableIndex' variable, and just return directly.\n- Shorten all the long names (and use consistent 'idx' instead of\nsometimes 'idx' and sometimes 'index')\n\nSUGGESTION (YMMV)\n\nstatic Oid\nFindUsableIndexForReplicaIdentityFull(Relation localrel)\n{\nList *idxlist = RelationGetIndexList(localrel);\nListCell *lc;\n\nforeach(lc, idxlist)\n{\nOid idxoid = lfirst_oid(lc);\nbool isUsableIdx;\nRelation idxRel;\nIndexInfo *idxInfo;\n\nidxRel = index_open(idxoid, AccessShareLock);\nidxInfo = BuildIndexInfo(idxRel);\nisUsableIdx = IsIndexUsableForReplicaIdentityFull(idxInfo);\nindex_close(idxRel, AccessShareLock);\n\n/* Return the first eligible index found */\nif (isUsableIdx)\nreturn idxoid;\n}\n\nreturn InvalidOid;\n}\n\n======\n.../subscription/t/032_subscribe_use_index.pl\n\n2. 
SUBSCRIPTION CREATE/DROP INDEX WORKS WITHOUT ISSUES\n\n2a.\n# Testcase start: SUBSCRIPTION CREATE/DROP INDEX WORKS WITHOUT ISSUES\n#\n# This test ensures that after CREATE INDEX, the subscriber can automatically\n# use one of the indexes (provided that it fulfils the requirements).\n# Similarly, after DROP index, the subscriber can automatically switch to\n# sequential scan\n\n\nThe last sentence is missing full-stop.\n\n~\n\n2b.\n# now, create index and see that the index is used\n$node_subscriber->safe_psql('postgres',\n \"CREATE INDEX test_replica_id_full_idx ON test_replica_id_full(x)\");\n\nDon't say \"and see that the index is used\". Yes, that is what this\nwhole test is doing, but it is not what the psql following this\ncomment is doing.\n\n~\n\n2c.\n$node_publisher->safe_psql('postgres',\n \"UPDATE test_replica_id_full SET x = x + 1 WHERE x = 15;\");\n$node_publisher->wait_for_catchup($appname);\n\n\n# wait until the index is used on the subscriber\n\nThe double blank lines here should be single.\n\n~\n\n2d.\n# now, the update could either use the test_replica_id_full_idx or\n# test_replica_id_full_idy index it is not possible for user to control\n# which index to use\n\nThis sentence should break at \"it\".\n\nAso \"user\" --> \"the user\"\n\nSUGGESTION\n# now, the update could either use the test_replica_id_full_idx or\n# test_replica_id_full_idy index; it is not possible for the user to control\n# which index to use\n\n~\n\n2e.\n# let's also test dropping test_replica_id_full_idy and\n# hence use test_replica_id_full_idx\n\n\nI think you ought to have dropped the other (first) index because we\nalready know that the first index had been used (from earlier), but we\nare not 100% sure if the 'y' index has been chosen yet.\n\n~~~~\n\n3. 
SUBSCRIPTION USES INDEX WITH MULTIPLE ROWS AND COLUMNS\n\n3a.\n# deletes 20 rows\n$node_publisher->safe_psql('postgres',\n \"DELETE FROM test_replica_id_full WHERE x IN (5, 6);\");\n\n# updates 20 rows\n$node_publisher->safe_psql('postgres',\n \"UPDATE test_replica_id_full SET x = 100, y = '200' WHERE x IN (1, 2);\");\n\n\n\"deletes\" --> \"delete\"\n\n\"updates\" --> \"update\"\n\n~~~\n\n4. SUBSCRIPTION USES INDEX WITH DROPPED COLUMNS\n\n# updates 200 rows\n$node_publisher->safe_psql('postgres',\n \"UPDATE test_replica_id_full SET x = x + 1 WHERE x IN (5, 6);\");\n\n\n\"updates\" --> \"update\"\n\n\"200 rows\" ??? is that right -- 20 maybe ???\n\n~~~\n\n5. SUBSCRIPTION USES INDEX ON PARTITIONED TABLES\n\n5a.\n# updates rows and moves between partitions\n$node_publisher->safe_psql('postgres',\n \"UPDATE users_table_part SET value_1 = 0 WHERE user_id = 4;\");\n\n\"updates rows and moves between partitions\" --> \"update rows, moving\nthem to other partitions\"\n\n~\n\n5b.\n# deletes rows from different partitions\n\n\n\"deletes\" --> \"delete\"\n\n~~~\n\n6. SUBSCRIPTION DOES NOT USE INDEXES WITH ONLY EXPRESSIONS OR PARTIAL INDEX\n\n6a.\n# update 2 rows\n$node_publisher->safe_psql('postgres',\n \"UPDATE people SET firstname = 'Nan' WHERE firstname = 'first_name_1';\");\n$node_publisher->safe_psql('postgres',\n \"UPDATE people SET firstname = 'Nan' WHERE firstname =\n'first_name_2' AND lastname = 'last_name_2';\");\n\nIMO 'Nan' seemed a curious name to assign as a test value, because it\nseems like it has a special meaning but in reality, I don't think it\ndoes. Even 'xxx' would be better.\n\n~\n\n6b.\n# make sure none of the indexes is not used on the subscriber\n$result = $node_subscriber->safe_psql('postgres',\n \"select sum(idx_scan) from pg_stat_all_indexes where indexrelname\nIN ('people_names_expr_only', 'people_names_partial')\");\nis($result, qq(0), 'ensure subscriber tap_sub_rep_full updates two\nrows via seq. 
scan with index on expressions');\n\n~\n\nLooks like a typo in this comment: \"none of the indexes is not used\" ???\n\n~~~\n\n7. SUBSCRIPTION CAN USE INDEXES WITH EXPRESSIONS AND COLUMNS\n\n7a.\n# update 2 rows\n$node_publisher->safe_psql('postgres',\n \"UPDATE people SET firstname = 'Nan' WHERE firstname = 'first_name_1';\");\n$node_publisher->safe_psql('postgres',\n \"UPDATE people SET firstname = 'Nan' WHERE firstname =\n'first_name_3' AND lastname = 'last_name_3';\");\n\nSame as #6a, 'Nan' seems like a strange test value to assign to a name.\n\n~~~\n\n8. Some NULL values\n\n$node_publisher->safe_psql('postgres',\n \"INSERT INTO test_replica_id_full VALUES (1), (2), (3);\");\n$node_publisher->safe_psql('postgres',\n \"UPDATE test_replica_id_full SET x = x + 1 WHERE x = 1;\");\n$node_publisher->safe_psql('postgres',\n \"UPDATE test_replica_id_full SET x = x + 1 WHERE x = 3;\");\n\n~\n\n8a.\nFor some reason, this insert/update psql was not commented.\n\n~\n\n8b.\nMaybe this test data could be more obvious by explicitly inserting the NULLs?\n\n~~~\n\n9. Unique index that is not primary key or replica identity\n\n9a.\nWhy are other \"Testcase start\" comments all uppercase but not this one?\n\n~~~\n\n10. SUBSCRIPTION USES INDEX WITH PUB/SUB different data\n\nWhy is there a mixed case in this \"Test case:\" comment?\n\n~~~\n\n11. PUBLICATION LACKS THE COLUMN ON THE SUBS INDEX\n\n11a.\n# The subsriber has an additional column compared to publisher,\n# and the index is on that column. 
We still pick the index scan\n# on the subscriber even though it is practically similar to\n# sequential scan\n\nTypo \"subsriber\"\n\nMissing full-stop on last sentence.\n\n~\n\n11b.\n# make sure that the subscriber has the correct data\n# we only deleted 1 row\n$result = $node_subscriber->safe_psql('postgres',\n \"SELECT sum(x) FROM test_replica_id_full\");\nis($result, qq(232), 'ensure subscriber has the correct data at the\nend of the test');\n\n\nWhy does that say \"deleted 1 row\", when the previous operation was not a DELETE?\n\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 10 Mar 2023 15:04:35 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Thu, Mar 9, 2023 at 5:47 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com>, 8 Mar 2023 Çar, 14:42 tarihinde şunu yazdı:\n>>\n>> On Wed, Mar 8, 2023 at 4:51 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>> >\n>> >\n>> >>\n>> >> I just share this case and then we\n>> >> can discuss should we pick the index which only contain the extra columns on the\n>> >> subscriber.\n>> >>\n>> >\n>> > I think its performance implications come down to the discussion on [1]. Overall, I prefer\n>> > avoiding adding any additional complexity in the code for some edge cases. The code\n>> > can handle this sub-optimal user pattern, with a sub-optimal performance.\n>> >\n>>\n>> It is fine to leave this and Hou-San's case if they make the patch\n>> complex. However, it may be better to give it a try and see if this or\n>> other regression/optimization can be avoided without adding much\n>> complexity to the patch. You can prepare a top-up patch and then we\n>> can discuss it.\n>>\n>>\n>\n> Alright, I did some basic prototypes for the problems mentioned, just to show\n> that these problems can be solved without too much hassle. But, the patchees\n> are not complete, some tests fail, no comments / tests exist, some values should be\n> cached etc. Mostly sharing as a heads up and sharing the progress given I have not\n> responded to this specific mail. 
I'll update these when I have some extra time after\n> replying to the 0001 patch.\n>\n\nwip_for_optimize_index_column_match\n+static bool\n+IndexContainsAnyRemoteColumn(IndexInfo *indexInfo,\n+ LogicalRepRelation *remoterel)\n+{\n+ for (int i = 0; i < indexInfo->ii_NumIndexAttrs; i++)\n+ {\n\nWouldn't it be better to just check if the first column is not part of\nthe remote column then we can skip that index?\n\nIn wip_optimize_for_non_pkey_non_ri_unique_index patch, irrespective\nof whether we want to retain this set of changes, the function name\nIsIdxSafeToSkipDuplicates() sounds better than\nIdxIsRelationIdentityOrPK() and we can even change the check to\nGetRelationIdentityOrPK() instead of separate checks replica index and\nPK. So, it would be better to include this part of the change (a.\nchange the function name to IsIdxSafeToSkipDuplicates() and (b) change\nthe check to use GetRelationIdentityOrPK()) in the main patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 10 Mar 2023 10:55:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Amit, all\n\n\n>\n> Some of the ideas I can think of are as follows:\n>\n> 1. Combine \"SUBSCRIPTION USES INDEX WITH MULTIPLE ROWS AND COLUMNS\"\n> and \"SUBSCRIPTION USES INDEX WITH DROPPED COLUMNS\" such that after\n> verifying updates and deletes of the first test, we can drop some of\n> the columns on both publisher and subscriber, then use alter\n> subscription ... refresh publication command and then do the steps of\n> the second test. Note that we don't add tables after initial setup,\n> only changing schema.\n>\n\nDone with an important caveat. I think this reorganization of the test\nhelped\nus to find one edge case regarding dropped columns.\n\nI realized that the dropped columns also get into the tuples_equal()\nfunction. And,\nthe remote sends NULL to for the dropped columns(i.e., remoteslot), but\nindex_getnext_slot() (or table_scan_getnextslot) indeed fills the dropped\ncolumns on the outslot. So, the dropped columns are not NULL in the outslot.\n\nThis triggers tuples_equal to fail. To fix that, I improved the tuples_equal\nsuch that it skips the dropped columns.\n\nI also spend quite a bit of time understanding how/why this impacts\nHEAD. 
See steps below on HEAD, where REPLICA IDENTITY FULL\nfails to replicate the data properly:\n\n\n-- pub\nCREATE TABLE test (drop_1 jsonb, x int, drop_2 numeric, y text, drop_3\ntimestamptz);\nALTER TABLE test REPLICA IDENTITY FULL;\nINSERT INTO test SELECT NULL, i, i, (i)::text, now() FROM\ngenerate_series(0,1)i;\nCREATE PUBLICATION pub FOR ALL TABLES;\n\n-- sub\nCREATE TABLE test (drop_1 jsonb, x int, drop_2 numeric, y text, drop_3\ntimestamptz);\nCREATE SUBSCRIPTION sub CONNECTION 'host=localhost port=5432\ndbname=postgres' PUBLICATION pub;\n\n-- show that before dropping the columns, the data in the source and\n-- target are deleted properly\nDELETE FROM test WHERE x = 0;\n\n-- both on the source and target\nSELECT count(*) FROM test WHERE x = 0;\n┌───────┐\n│ count │\n├───────┤\n│     0 │\n└───────┘\n(1 row)\n\n-- drop columns on the source\nALTER TABLE test DROP COLUMN drop_1;\nALTER TABLE test DROP COLUMN drop_2;\nALTER TABLE test DROP COLUMN drop_3;\n\n-- drop columns on the target\nALTER TABLE test DROP COLUMN drop_1;\nALTER TABLE test DROP COLUMN drop_2;\nALTER TABLE test DROP COLUMN drop_3;\n\n-- on the target\nALTER SUBSCRIPTION sub REFRESH PUBLICATION;\n\n\n-- after dropping the columns\nDELETE FROM test WHERE x = 1;\n\n-- source\nSELECT count(*) FROM test WHERE x = 1;\n┌───────┐\n│ count │\n├───────┤\n│     0 │\n└───────┘\n(1 row)\n\n\n-- target, OOPS wrong result!!!!\nSELECT count(*) FROM test WHERE x = 1;\n\n┌───────┐\n│ count │\n├───────┤\n│     1 │\n└───────┘\n(1 row)\n\n\nShould we have a separate patch for the tuples_equal change so that we\nmight want to backport? Attached as v40_0001 on the patch.\n\nNote that I need to have that commit as 0001 so that 0002 patch\npasses the tests.\n\n\n> 2. We can also combine \"Some NULL values\" and \"PUBLICATION LACKS THE\n> COLUMN ON THE SUBS INDEX\" as both use the same schema. 
After the first\n> test, we need to drop the existing index and create a new index on the\n> subscriber node.\n>\n\ndone\n\n\n\n>\n> 3. General comment\n> +# updates 200 rows\n> +$node_publisher->safe_psql('postgres',\n> + \"UPDATE test_replica_id_full SET x = x + 1 WHERE x IN (5, 6);\");\n>\n> I think here you are updating 20 rows not 200. So, the comment seems\n> wrong to me.\n>\n\nI think I have fixed that in an earlier version because I cannot see this\ncomment anymore.\n\n\n>\n> Please think more and see if we can combine some other tests like\n> \"Unique index that is not primary key or replica identity\" and the\n> test we will have after comment#2 above.\n>\n>\nI'll look for more opportunities and reply to the thread. I wanted to send\nthis mail so that you can have a look at (1) earlier.\n\n\nThanks,\nOnder",
"msg_date": "Fri, 10 Mar 2023 12:54:48 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Peter, all\n\n\n> src/backend/replication/logical/relation.c\n>\n> 1. FindUsableIndexForReplicaIdentityFull\n>\n> +static Oid\n> +FindUsableIndexForReplicaIdentityFull(Relation localrel)\n> +{\n> + List *indexlist = RelationGetIndexList(localrel);\n> + Oid usableIndex = InvalidOid;\n> + ListCell *lc;\n> +\n> + foreach(lc, indexlist)\n> + {\n> + Oid idxoid = lfirst_oid(lc);\n> + bool isUsableIndex;\n> + Relation indexRelation = index_open(idxoid, AccessShareLock);\n> + IndexInfo *indexInfo = BuildIndexInfo(indexRelation);\n> +\n> + isUsableIndex = IsIndexUsableForReplicaIdentityFull(indexInfo);\n> +\n> + index_close(indexRelation, AccessShareLock);\n> +\n> + if (isUsableIndex)\n> + {\n> + /* we found one eligible index, don't need to continue */\n> + usableIndex = idxoid;\n> + break;\n> + }\n> + }\n> +\n> + return usableIndex;\n> +}\n>\n> This comment is not functional -- if you prefer the code as-is, then\n> ignore this comment.\n>\n> But, personally I would:\n> - Move some of that code from the declarations. 
I feel it would be\n> better if the index_open/index_close were both in the code-body\n> instead of half in declarations and half not.\n> - Remove the 'usableIndex' variable, and just return directly.\n> - Shorten all the long names (and use consistent 'idx' instead of\n> sometimes 'idx' and sometimes 'index')\n>\n> SUGGESTION (YMMV)\n>\n> static Oid\n> FindUsableIndexForReplicaIdentityFull(Relation localrel)\n> {\n> List *idxlist = RelationGetIndexList(localrel);\n> ListCell *lc;\n>\n> foreach(lc, idxlist)\n> {\n> Oid idxoid = lfirst_oid(lc);\n> bool isUsableIdx;\n> Relation idxRel;\n> IndexInfo *idxInfo;\n>\n> idxRel = index_open(idxoid, AccessShareLock);\n> idxInfo = BuildIndexInfo(idxRel);\n> isUsableIdx = IsIndexUsableForReplicaIdentityFull(idxInfo);\n> index_close(idxRel, AccessShareLock);\n>\n> /* Return the first eligible index found */\n> if (isUsableIdx)\n> return idxoid;\n> }\n>\n> return InvalidOid;\n> }\n>\n\napplied your suggestion. I think it made it slightly easier to follow.\n\n\n>\n> ======\n> .../subscription/t/032_subscribe_use_index.pl\n>\n> 2. SUBSCRIPTION CREATE/DROP INDEX WORKS WITHOUT ISSUES\n>\n> 2a.\n> # Testcase start: SUBSCRIPTION CREATE/DROP INDEX WORKS WITHOUT ISSUES\n> #\n> # This test ensures that after CREATE INDEX, the subscriber can\n> automatically\n> # use one of the indexes (provided that it fulfils the requirements).\n> # Similarly, after DROP index, the subscriber can automatically switch to\n> # sequential scan\n>\n>\n> The last sentence is missing full-stop.\n>\n>\nfixed\n\n\n> ~\n>\n> 2b.\n> # now, create index and see that the index is used\n> $node_subscriber->safe_psql('postgres',\n> \"CREATE INDEX test_replica_id_full_idx ON test_replica_id_full(x)\");\n>\n> Don't say \"and see that the index is used\". 
Yes, that is what this\n> whole test is doing, but it is not what the psql following this\n> comment is doing.\n>\n\n true\n\n\n> ~\n>\n> 2c.\n> $node_publisher->safe_psql('postgres',\n> \"UPDATE test_replica_id_full SET x = x + 1 WHERE x = 15;\");\n> $node_publisher->wait_for_catchup($appname);\n>\n>\n> # wait until the index is used on the subscriber\n>\n> The double blank lines here should be single.\n>\n> ~\n>\n\nfixed,\n\n\n>\n> 2d.\n> # now, the update could either use the test_replica_id_full_idx or\n> # test_replica_id_full_idy index it is not possible for user to control\n> # which index to use\n>\n> This sentence should break at \"it\".\n>\n> Aso \"user\" --> \"the user\"\n\nSUGGESTION\n> # now, the update could either use the test_replica_id_full_idx or\n> # test_replica_id_full_idy index; it is not possible for the user to\n> control\n> # which index to use\n>\n>\nlooks good, thanks\n\n\n> ~\n>\n> 2e.\n> # let's also test dropping test_replica_id_full_idy and\n> # hence use test_replica_id_full_idx\n>\n>\n> I think you ought to have dropped the other (first) index because we\n> already know that the first index had been used (from earlier), but we\n> are not 100% sure if the 'y' index has been chosen yet.\n>\n\n make sense. Though in general it is hard to check pg_stat_all_indexes\nfor any of the indexes on this test, as we don't know the exact number\nof tuples for each. Just wanted to explicitly note\n\n\n\n> ~~~~\n>\n> 3. SUBSCRIPTION USES INDEX WITH MULTIPLE ROWS AND COLUMNS\n>\n> 3a.\n> # deletes 20 rows\n> $node_publisher->safe_psql('postgres',\n> \"DELETE FROM test_replica_id_full WHERE x IN (5, 6);\");\n>\n> # updates 20 rows\n> $node_publisher->safe_psql('postgres',\n> \"UPDATE test_replica_id_full SET x = 100, y = '200' WHERE x IN (1,\n> 2);\");\n>\n>\n> \"deletes\" --> \"delete\"\n>\n> \"updates\" --> \"update\"\n>\n\nWell, I thought the command *deletes* but I guess delete looks better\n\n\n>\n> ~~~\n>\n> 4. 
SUBSCRIPTION USES INDEX WITH DROPPED COLUMNS\n>\n> # updates 200 rows\n> $node_publisher->safe_psql('postgres',\n> \"UPDATE test_replica_id_full SET x = x + 1 WHERE x IN (5, 6);\");\n>\n>\n> \"updates\" --> \"update\"\n>\n> \"200 rows\" ??? is that right -- 20 maybe ???\n>\n>\nI guess this is from an earlier version of the patch, I fixed these types\nof errors.\n\n\n> ~~~\n>\n> 5. SUBSCRIPTION USES INDEX ON PARTITIONED TABLES\n>\n> 5a.\n> # updates rows and moves between partitions\n> $node_publisher->safe_psql('postgres',\n> \"UPDATE users_table_part SET value_1 = 0 WHERE user_id = 4;\");\n>\n> \"updates rows and moves between partitions\" --> \"update rows, moving\n> them to other partitions\"\n>\n>\nfixed, thanks\n\n\n> ~\n>\n> 5b.\n> # deletes rows from different partitions\n>\n>\n> \"deletes\" --> \"delete\"\n>\n>\nfixed, and searched for similar errors but couldn't see any more\n\n~~~\n\n\n\n\n> 6. SUBSCRIPTION DOES NOT USE INDEXES WITH ONLY EXPRESSIONS OR PARTIAL INDEX\n>\n> 6a.\n> # update 2 rows\n> $node_publisher->safe_psql('postgres',\n> \"UPDATE people SET firstname = 'Nan' WHERE firstname =\n> 'first_name_1';\");\n> $node_publisher->safe_psql('postgres',\n> \"UPDATE people SET firstname = 'Nan' WHERE firstname =\n> 'first_name_2' AND lastname = 'last_name_2';\");\n>\n> IMO 'Nan' seemed a curious name to assign as a test value, because it\n> seems like it has a special meaning but in reality, I don't think it\n> does. Even 'xxx' would be better.\n>\n\nchanged to \"no-name\" as \"xxx\" also looks not so good\n\n\n>\n> ~\n>\n> 6b.\n> # make sure none of the indexes is not used on the subscriber\n> $result = $node_subscriber->safe_psql('postgres',\n> \"select sum(idx_scan) from pg_stat_all_indexes where indexrelname\n> IN ('people_names_expr_only', 'people_names_partial')\");\n> is($result, qq(0), 'ensure subscriber tap_sub_rep_full updates two\n> rows via seq. 
scan with index on expressions');\n>\n> ~\n>\n> Looks like a typo in this comment: \"none of the indexes is not used\" ???\n>\n> dropped \"not\"\n\n\n\n> ~~~\n>\n> 7. SUBSCRIPTION CAN USE INDEXES WITH EXPRESSIONS AND COLUMNS\n>\n> 7a.\n> # update 2 rows\n> $node_publisher->safe_psql('postgres',\n> \"UPDATE people SET firstname = 'Nan' WHERE firstname =\n> 'first_name_1';\");\n> $node_publisher->safe_psql('postgres',\n> \"UPDATE people SET firstname = 'Nan' WHERE firstname =\n> 'first_name_3' AND lastname = 'last_name_3';\");\n>\n> Same as #6a, 'Nan' seems like a strange test value to assign to a name.\n>\n>\nsimilarly, changed to no-name\n\n\n> ~~~\n>\n> 8. Some NULL values\n>\n> $node_publisher->safe_psql('postgres',\n> \"INSERT INTO test_replica_id_full VALUES (1), (2), (3);\");\n> $node_publisher->safe_psql('postgres',\n> \"UPDATE test_replica_id_full SET x = x + 1 WHERE x = 1;\");\n> $node_publisher->safe_psql('postgres',\n> \"UPDATE test_replica_id_full SET x = x + 1 WHERE x = 3;\");\n>\n> ~\n>\n> 8a.\n> For some reason, this insert/update psql was not commented.\n>\n>\nadded some\n\n\n> ~\n>\n> 8b.\n> Maybe this test data could be more obvious by explicitly inserting the\n> NULLs?\n>\n>\nWell, that's a bit hard. We merged a few tests for perf reasons. And, I\nmerged this\ntest with \"missing column\" test. Now, the NULL values are triggered due to\nmissing column on the source.\n\n\n> ~~~\n>\n> 9. Unique index that is not primary key or replica identity\n>\n> 9a.\n> Why are other \"Testcase start\" comments all uppercase but not this one?\n>\n>\nfixed, there was one more\n\n\n> ~~~\n>\n> 10. SUBSCRIPTION USES INDEX WITH PUB/SUB different data\n>\n> Why is there a mixed case in this \"Test case:\" comment?\n>\n\nno specific reason, fixed\n\n\n>\n> ~~~\n>\n> 11. PUBLICATION LACKS THE COLUMN ON THE SUBS INDEX\n>\n> 11a.\n> # The subsriber has an additional column compared to publisher,\n> # and the index is on that column. 
We still pick the index scan\n> # on the subscriber even though it is practically similar to\n> # sequential scan\n>\n> Typo \"subsriber\"\n>\n\nI guess I fixed this in a recent iteration, I cannot find it.\n\n\n>\n> Missing full-stop on last sentence.\n>\n\nSimilarly, probably merged into another test.\n\nStill went over the all such explanations in the test, and ensured\nwe have the full stop\n\n\n>\n> ~\n>\n> 11b.\n> # make sure that the subscriber has the correct data\n> # we only deleted 1 row\n> $result = $node_subscriber->safe_psql('postgres',\n> \"SELECT sum(x) FROM test_replica_id_full\");\n> is($result, qq(232), 'ensure subscriber has the correct data at the\n> end of the test');\n>\n>\n> Why does that say \"deleted 1 row\", when the previous operation was not a\n> DELETE?\n>\n>\nProbably due to merging multiple tests into one. Fixed now.\n\n\nAgain, thanks for thorough review. Attached v41.\n\nSee the reason for 0001 patch in [1].\n\n\nOnder KALACI.\n\n\n[1]\nhttps://www.postgresql.org/message-id/CACawEhUu6S8E4Oo7%2Bs5iaq%3DyLRZJb6uOZeEQSGJj-7NVkDzSaw%40mail.gmail.com",
"msg_date": "Fri, 10 Mar 2023 13:58:25 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Amit, all\n\n\n> wip_for_optimize_index_column_match\n> +static bool\n> +IndexContainsAnyRemoteColumn(IndexInfo *indexInfo,\n> + LogicalRepRelation *remoterel)\n> +{\n> + for (int i = 0; i < indexInfo->ii_NumIndexAttrs; i++)\n> + {\n>\n> Wouldn't it be better to just check if the first column is not part of\n> the remote column then we can skip that index?\n>\n\nReading [1], I think I can follow what you suggest. So, basically,\nif the leftmost column is not filtered, we have the following:\n\n but the entire index would have to be scanned, so in most cases the\n> planner would prefer a sequential table scan over using the index.\n\n\nSo, in our case, we could follow a similar approach. If the leftmost column\nof the index\nis not sent over the wire from the pub, we can prefer the sequential scan.\n\nIs my understanding of your suggestion accurate?\n\n\n>\n> In wip_optimize_for_non_pkey_non_ri_unique_index patch, irrespective\n> of whether we want to retain this set of changes, the function name\n> IsIdxSafeToSkipDuplicates() sounds better than\n> IdxIsRelationIdentityOrPK() and we can even change the check to\n> GetRelationIdentityOrPK() instead of separate checks replica index and\n> PK. So, it would be better to include this part of the change (a.\n> change the function name to IsIdxSafeToSkipDuplicates() and (b) change\n> the check to use GetRelationIdentityOrPK()) in the main patch.\n>\n>\n>\nI agree that it is a good change. Added to v42\n\nThanks,\nOnder KALACI\n\n\n\n[1] https://www.postgresql.org/docs/current/indexes-multicolumn.html",
"msg_date": "Fri, 10 Mar 2023 14:46:43 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 5:16 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n>>\n>> wip_for_optimize_index_column_match\n>> +static bool\n>> +IndexContainsAnyRemoteColumn(IndexInfo *indexInfo,\n>> + LogicalRepRelation *remoterel)\n>> +{\n>> + for (int i = 0; i < indexInfo->ii_NumIndexAttrs; i++)\n>> + {\n>>\n>> Wouldn't it be better to just check if the first column is not part of\n>> the remote column then we can skip that index?\n>\n>\n> Reading [1], I think I can follow what you suggest. So, basically,\n> if the leftmost column is not filtered, we have the following:\n>\n>> but the entire index would have to be scanned, so in most cases the planner would prefer a sequential table scan over using the index.\n>\n>\n> So, in our case, we could follow a similar approach. If the leftmost column of the index\n> is not sent over the wire from the pub, we can prefer the sequential scan.\n>\n> Is my understanding of your suggestion accurate?\n>\n\nYes. I request an opinion from Shi-San who has reported the problem.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 10 Mar 2023 17:47:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Amit, all\n\n\n> I'll look for more opportunities and reply to the thread. I wanted to send\n> this mail so that you can have a look at (1) earlier.\n>\n>\n> I merged SUBSCRIPTION CREATE/DROP INDEX WORKS WITHOUT ISSUES\ninto SUBSCRIPTION CAN USE INDEXES WITH EXPRESSIONS AND COLUMNS.\n\nAlso, merged SUBSCRIPTION USES INDEX WITH PUB/SUB DIFFERENT DATA and\n A UNIQUE INDEX THAT IS NOT PRIMARY KEY OR REPLICA IDENTITY\n\nSo, we have 6 test cases left. I start to feel that trying to merge further\nis going to start making\nthe readability get worse. Do you have any further easy test case merge\nsuggestions?\n\nI think one option could be to drop some cases altogether, but not sure\nwe'd want that.\n\nAs a semi-related question: Are you aware of any setting that'd\nmake pg_stat_all_indexes\nreflect the changes sooner? It is hard to debug what is the bottleneck in\nthe tests, but\nI have a suspicion that there might be several poll_query_until() calls on\npg_stat_all_indexes, which might be the reason?\n\nAttaching v43.",
"msg_date": "Fri, 10 Mar 2023 17:28:33 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 3:25 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n>>\n>\n> Done with an important caveat. I think this reorganization of the test helped\n> us to find one edge case regarding dropped columns.\n>\n> I realized that the dropped columns also get into the tuples_equal() function. And,\n> the remote sends NULL to for the dropped columns(i.e., remoteslot), but\n> index_getnext_slot() (or table_scan_getnextslot) indeed fills the dropped\n> columns on the outslot. So, the dropped columns are not NULL in the outslot.\n>\n> This triggers tuples_equal to fail. To fix that, I improved the tuples_equal\n> such that it skips the dropped columns.\n>\n\nGood catch. By any chance, have you tried with generated columns? See\nlogicalrep_write_tuple()/logicalrep_write_attrs() where we neither\nsend anything for dropped columns nor for generated columns.\nSimilarly, on receiving side, in logicalrep_rel_open() and\nslot_store_data(), we seem to be using NULL for such columns.\n\n> I also spend quite a bit of time understanding how/why this impacts\n> HEAD. 
See steps below on HEAD, where REPLICA IDENTITY FULL\n> fails to replica the data properly:\n>\n>\n> -- pub\n> CREATE TABLE test (drop_1 jsonb, x int, drop_2 numeric, y text, drop_3 timestamptz);\n> ALTER TABLE test REPLICA IDENTITY FULL;\n> INSERT INTO test SELECT NULL, i, i, (i)::text, now() FROM generate_series(0,1)i;\n> CREATE PUBLICATION pub FOR ALL TABLES;\n>\n> -- sub\n> CREATE TABLE test (drop_1 jsonb, x int, drop_2 numeric, y text, drop_3 timestamptz);\n> CREATE SUBSCRIPTION sub CONNECTION 'host=localhost port=5432 dbname=postgres' PUBLICATION pub;\n>\n> -- show that before dropping the columns, the data in the source and\n> -- target are deleted properly\n> DELETE FROM test WHERE x = 0;\n>\n> -- both on the source and target\n> SELECT count(*) FROM test WHERE x = 0;\n> ┌───────┐\n> │ count │\n> ├───────┤\n> │ 0 │\n> └───────┘\n> (1 row)\n>\n> -- drop columns on both the the source\n> ALTER TABLE test DROP COLUMN drop_1;\n> ALTER TABLE test DROP COLUMN drop_2;\n> ALTER TABLE test DROP COLUMN drop_3;\n>\n> -- drop columns on both the the target\n> ALTER TABLE test DROP COLUMN drop_1;\n> ALTER TABLE test DROP COLUMN drop_2;\n> ALTER TABLE test DROP COLUMN drop_3;\n>\n> -- on the target\n> ALTER SUBSCRIPTION sub REFRESH PUBLICATION;\n>\n>\n> -- after dropping the columns\n> DELETE FROM test WHERE x = 1;\n>\n> -- source\n> SELECT count(*) FROM test WHERE x = 1;\n> ┌───────┐\n> │ count │\n> ├───────┤\n> │ 0 │\n> └───────┘\n> (1 row)\n>\n> -- target, OOPS wrong result!!!!\n> SELECT count(*) FROM test WHERE x = 1;\n>\n> ┌───────┐\n> │ count │\n> ├───────┤\n> │ 1 │\n> └───────┘\n> (1 row)\n>\n>\n> Should we have a separate patch for the tuples_equal change so that we\n> might want to backport?\n>\n\nYes, it would be better to report and discuss this in a separate thread,\n\n> Attached as v40_0001 on the patch.\n>\n> Note that I need to have that commit as 0001 so that 0002 patch\n> passes the tests.\n>\n\nI think we can add such a test (which relies on existing 
buggy\nbehavior) later after fixing the existing bug. For now, it would be\nbetter to remove that test and add it after we fix dropped columns\nissue in HEAD.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 11 Mar 2023 09:00:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 7:58 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n>\n> I think one option could be to drop some cases altogether, but not sure we'd want that.\n>\n> As a semi-related question: Are you aware of any setting that'd make pg_stat_all_indexes\n> reflect the changes sooner? It is hard to debug what is the bottleneck in the tests, but\n> I have a suspicion that there might be several poll_query_until() calls on\n> pg_stat_all_indexes, which might be the reason?\n>\n\nYeah, I also think poll_query_until() calls on pg_stat_all_indexes is\nthe main reason for these tests taking more time. When I commented\nthose polls, it drastically reduces the test time. On looking at\npgstat_report_stat(), it seems we don't report stats sooner than 1s\nand as most of this patch's test relies on stats, it leads to taking\nmore time. I don't have a better idea to verify this patch without\nchecking whether the index scan is really used by referring to\npg_stat_all_indexes. I think trying to reduce the poll calls may help\nin reducing the test timings further. 
Some ideas on those lines are as\nfollows:\n1.\n+# Testcase start: SUBSCRIPTION USES INDEX WITH PUB/SUB DIFFERENT DATA VIA\n+# A UNIQUE INDEX THAT IS NOT PRIMARY KEY OR REPLICA IDENTITY\n\nNo need to use Delete test separate for this.\n\n2.\n+$node_publisher->safe_psql('postgres',\n+ \"UPDATE test_replica_id_full SET x = x + 1 WHERE x = 1;\");\n+$node_publisher->safe_psql('postgres',\n+ \"UPDATE test_replica_id_full SET x = x + 1 WHERE x = 3;\");\n+\n+# check if the index is used even when the index has NULL values\n+$node_subscriber->poll_query_until(\n+ 'postgres', q{select idx_scan=2 from pg_stat_all_indexes where\nindexrelname = 'test_replica_id_full_idx';}\n+) or die \"Timed out while waiting for check subscriber\ntap_sub_rep_full updates test_replica_id_full table\";\n\nHere, I think only one update is sufficient.\n\n3.\n+$node_subscriber->safe_psql('postgres',\n+ \"CREATE INDEX people_last_names ON people(lastname)\");\n+\n+# wait until the index is created\n+$node_subscriber->poll_query_until(\n+ 'postgres', q{select count(*)=1 from pg_stat_all_indexes where\nindexrelname = 'people_last_names';}\n+) or die \"Timed out while waiting for creating index people_last_names\";\n\nI don't think we need this poll.\n\n4.\n+# update 2 rows\n+$node_publisher->safe_psql('postgres',\n+ \"UPDATE people SET firstname = 'no-name' WHERE firstname = 'first_name_1';\");\n+$node_publisher->safe_psql('postgres',\n+ \"UPDATE people SET firstname = 'no-name' WHERE firstname =\n'first_name_3' AND lastname = 'last_name_3';\");\n+\n+# wait until the index is used on the subscriber\n+$node_subscriber->poll_query_until(\n+ 'postgres', q{select idx_scan=2 from pg_stat_all_indexes where\nindexrelname = 'people_names';}\n+) or die \"Timed out while waiting for check subscriber\ntap_sub_rep_full updates two rows via index scan with index on\nexpressions and columns\";\n+\n+$node_publisher->safe_psql('postgres',\n+ \"DELETE FROM people WHERE firstname = 'no-name';\");\n+\n+# wait until the 
index is used on the subscriber\n+$node_subscriber->poll_query_until(\n+ 'postgres', q{select idx_scan=4 from pg_stat_all_indexes where\nindexrelname = 'people_names';}\n+) or die \"Timed out while waiting for check subscriber\ntap_sub_rep_full deletes two rows via index scan with index on\nexpressions and columns\";\n\nI think having one update or delete should be sufficient.\n\n5.\n+# update rows, moving them to other partitions\n+$node_publisher->safe_psql('postgres',\n+ \"UPDATE users_table_part SET value_1 = 0 WHERE user_id = 4;\");\n+\n+# wait until the index is used on the subscriber\n+$node_subscriber->poll_query_until(\n+ 'postgres', q{select sum(idx_scan)=1 from pg_stat_all_indexes where\nindexrelname ilike 'users_table_part_%';}\n+) or die \"Timed out while waiting for updates on partitioned table with index\";\n+\n+# delete rows from different partitions\n+$node_publisher->safe_psql('postgres',\n+ \"DELETE FROM users_table_part WHERE user_id = 1 and value_1 = 1;\");\n+$node_publisher->safe_psql('postgres',\n+ \"DELETE FROM users_table_part WHERE user_id = 12 and value_1 = 12;\");\n+\n+# wait until the index is used on the subscriber\n+$node_subscriber->poll_query_until(\n+ 'postgres', q{select sum(idx_scan)=3 from pg_stat_all_indexes where\nindexrelname ilike 'users_table_part_%';}\n+) or die \"Timed out while waiting for check subscriber\ntap_sub_rep_full updates partitioned table\";\n+\n\nCan we combine these two polls?\n\n6.\n+# Testcase start: SUBSCRIPTION USES INDEX WITH MULTIPLE ROWS AND COLUMNS, ALSO\n+# DROPS COLUMN\n\nIn this test, let's try to update/delete 2-3 rows instead of 20. And\nafter drop columns, let's keep just one of the update or delete.\n\n7. Apart from the above, I think it is better to use\nwait_for_catchup() consistently before trying to verify the data on\nthe subscriber. We always use it in other tests. 
I guess here you are\nrelying on the poll for index scans to ensure that data is replicated\nbut I feel it may still be better to use wait_for_catchup().\nSimilarly, wait_for_subscription_sync uses the publisher name and\nappname in other tests, so it is better to be consistent. It can avoid\nrandom failures by ensuring data is synced.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 11 Mar 2023 12:35:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Amit, all\n\n\n> > This triggers tuples_equal to fail. To fix that, I improved the\n> tuples_equal\n> > such that it skips the dropped columns.\n> >\n>\n> By any chance, have you tried with generated columns?\n\n\nYes, it shows the same behavior.\n\n\n> See\n> logicalrep_write_tuple()/logicalrep_write_attrs() where we neither\n> send anything for dropped columns nor for generated columns.\n\nSimilarly, on receiving side, in logicalrep_rel_open() and\n> slot_store_data(), we seem to be using NULL for such columns.\n>\n>\nThanks for the explanation, it helps a lot.\n\n\n>\n> Yes, it would be better to report and discuss this in a separate thread,\n>\n\nDone via [1]\n\n>\n> > Attached as v40_0001 on the patch.\n> >\n> > Note that I need to have that commit as 0001 so that 0002 patch\n> > passes the tests.\n> >\n>\n> I think we can add such a test (which relies on existing buggy\n> behavior) later after fixing the existing bug. For now, it would be\n> better to remove that test and add it after we fix dropped columns\n> issue in HEAD.\n>\n\nAlright, when I push the next version (hopefully tomorrow), I'll follow\nthis suggestion.\n\nThanks,\nOnder KALACI\n\n[1]\nhttps://www.postgresql.org/message-id/CACawEhVQC9WoofunvXg12aXtbqKnEgWxoRx3%2Bv8q32AWYsdpGg%40mail.gmail.com",
"msg_date": "Sat, 11 Mar 2023 23:04:26 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Sun, Mar 12, 2023 at 1:34 AM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n>>\n>> I think we can add such a test (which relies on existing buggy\n>> behavior) later after fixing the existing bug. For now, it would be\n>> better to remove that test and add it after we fix dropped columns\n>> issue in HEAD.\n>\n>\n> Alright, when I push the next version (hopefully tomorrow), I'll follow this suggestion.\n>\n\nOkay, thanks. See, if you can also include your changes in the patch\nwip_for_optimize_index_column_match (after discussed modification).\nFew other minor comments:\n\n1.\n+ are enforced for primary keys. Internally, we follow a similar approach for\n+ supporting index scans within logical replication scope. If there are no\n\nI think we can remove the above line: \"Internally, we follow a similar\napproach for supporting index scans within logical replication scope.\"\nThis didn't seem useful for users.\n\n2.\ndiff --git a/src/backend/executor/execReplication.c\nb/src/backend/executor/execReplication.c\nindex bc6409f695..646e608eb7 100644\n--- a/src/backend/executor/execReplication.c\n+++ b/src/backend/executor/execReplication.c\n@@ -83,11 +83,8 @@ build_replindex_scan_key(ScanKey skey, Relation\nrel, Relation idxrel,\n if (!AttributeNumberIsValid(table_attno))\n {\n /*\n- * XXX: For a non-primary/unique index with an\nadditional\n- * expression, we do not have to continue at\nthis point. However,\n- * the below code assumes the index scan is\nonly done for simple\n- * column references. If we can relax the\nassumption in the below\n- * code-block, we can also remove the continue.\n+ * XXX: Currently, we don't support\nexpressions in the scan key,\n+ * see code below.\n */\n\n\nI have tried to simplify the above comment. See, if that makes sense to you.\n\n3.\n/*\n+ * We only need to allocate once. 
This is allocated within per\n+ * tuple context -- ApplyMessageContext -- hence no need to\n+ * explicitly pfree().\n+ */\n\nWe normally don't write why we don't need to explicitly pfree. It is\ngood during the review but not sure if it is a good idea to keep it in\nthe final code.\n\n4. I have modified the proposed commit message as follows, see if that\nmakes sense to you, and let me know if I missed anything especially\nthe review/author credit.\n\nAllow the use of indexes other than PK and REPLICA IDENTITY on the subscriber.\n\nUsing REPLICA IDENTITY FULL on the publisher can lead to a full table\nscan per tuple change on the subscription when REPLICA IDENTITY or PK\nindex is not available. This makes REPLICA IDENTITY FULL impractical\nto use apart from some small number of use cases.\n\nThis patch allows using indexes other than PRIMARY KEY or REPLICA\nIDENTITY on the subscriber during apply of update/delete. The index\nthat can be used must be a btree index, not a partial index, and it\nmust have at least one column reference (i.e. cannot consist of only\nexpressions). We can uplift these restrictions in the future. There is\nno smart mechanism to pick the index. If there is more than one index\nthat\nsatisfies these requirements, we just pick the first one. We discussed\nusing some of the optimizer's low-level APIs for this but ruled it out\nas that can be a maintenance burden in the long run.\n\nThis patch improves the performance in the vast majority of cases and\nthe improvement is proportional to the amount of data in the table.\nHowever, there could be some regression in a small number of cases\nwhere the indexes have a lot of duplicate and dead rows. 
It was\ndiscussed that those are mostly impractical cases but we can provide a\ntable or subscription level option to disable this feature if\nrequired.\n\nAuthor: Onder Kalaci\nReviewed-by: Peter Smith, Shi yu, Hou Zhijie, Vignesh C, Kuroda\nHayato, Amit Kapila\nDiscussion: https://postgr.es/m/CACawEhVLqmAAyPXdHEPv1ssU2c=dqOniiGz7G73HfyS7+nGV4w@mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sun, 12 Mar 2023 11:28:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Amit, all\n\n>\n> 1.\n> +# Testcase start: SUBSCRIPTION USES INDEX WITH PUB/SUB DIFFERENT DATA VIA\n> +# A UNIQUE INDEX THAT IS NOT PRIMARY KEY OR REPLICA IDENTITY\n>\n> No need to use Delete test separate for this.\n>\n\nYeah, there is really no difference between update/delete for this patch,\nso it makes sense. I initially added it for completeness for the coverage,\nbut as it has the perf overhead for the tests, I agree that we could\ndrop some of those.\n\n\n>\n> 2.\n> +$node_publisher->safe_psql('postgres',\n> + \"UPDATE test_replica_id_full SET x = x + 1 WHERE x = 1;\");\n> +$node_publisher->safe_psql('postgres',\n> + \"UPDATE test_replica_id_full SET x = x + 1 WHERE x = 3;\");\n> +\n> +# check if the index is used even when the index has NULL values\n> +$node_subscriber->poll_query_until(\n> + 'postgres', q{select idx_scan=2 from pg_stat_all_indexes where\n> indexrelname = 'test_replica_id_full_idx';}\n> +) or die \"Timed out while waiting for check subscriber\n> tap_sub_rep_full updates test_replica_id_full table\";\n>\n> Here, I think only one update is sufficient.\n>\n\ndone. I guess you requested this change so that we would wait\nfor idx_scan=1 not idx_scan=2, which could help.\n\n\n> 3.\n> +$node_subscriber->safe_psql('postgres',\n> + \"CREATE INDEX people_last_names ON people(lastname)\");\n> +\n> +# wait until the index is created\n> +$node_subscriber->poll_query_until(\n> + 'postgres', q{select count(*)=1 from pg_stat_all_indexes where\n> indexrelname = 'people_last_names';}\n> +) or die \"Timed out while waiting for creating index people_last_names\";\n>\n> I don't think we need this poll.\n>\n\n true, not sure why I have this. 
none of the tests has this anyway.\n\n\n> 4.\n> +# update 2 rows\n> +$node_publisher->safe_psql('postgres',\n> + \"UPDATE people SET firstname = 'no-name' WHERE firstname =\n> 'first_name_1';\");\n> +$node_publisher->safe_psql('postgres',\n> + \"UPDATE people SET firstname = 'no-name' WHERE firstname =\n> 'first_name_3' AND lastname = 'last_name_3';\");\n> +\n> +# wait until the index is used on the subscriber\n> +$node_subscriber->poll_query_until(\n> + 'postgres', q{select idx_scan=2 from pg_stat_all_indexes where\n> indexrelname = 'people_names';}\n> +) or die \"Timed out while waiting for check subscriber\n> tap_sub_rep_full updates two rows via index scan with index on\n> expressions and columns\";\n> +\n> +$node_publisher->safe_psql('postgres',\n> + \"DELETE FROM people WHERE firstname = 'no-name';\");\n> +\n> +# wait until the index is used on the subscriber\n> +$node_subscriber->poll_query_until(\n> + 'postgres', q{select idx_scan=4 from pg_stat_all_indexes where\n> indexrelname = 'people_names';}\n> +) or die \"Timed out while waiting for check subscriber\n> tap_sub_rep_full deletes two rows via index scan with index on\n> expressions and columns\";\n>\n> I think having one update or delete should be sufficient.\n>\n\nSo, I dropped the 2nd update, but kept 1 update and 1 delete.\nThe latter deletes the tuple updated by the former. Seems like\nan interesting test to keep.\n\nStill, I dropped one of the extra poll_query_until, which is probably\ngood enough for this one? 
Let me know if you think otherwise.\n\n\n>\n> 5.\n> +# update rows, moving them to other partitions\n> +$node_publisher->safe_psql('postgres',\n> + \"UPDATE users_table_part SET value_1 = 0 WHERE user_id = 4;\");\n> +\n> +# wait until the index is used on the subscriber\n> +$node_subscriber->poll_query_until(\n> + 'postgres', q{select sum(idx_scan)=1 from pg_stat_all_indexes where\n> indexrelname ilike 'users_table_part_%';}\n> +) or die \"Timed out while waiting for updates on partitioned table with\n> index\";\n> +\n> +# delete rows from different partitions\n> +$node_publisher->safe_psql('postgres',\n> + \"DELETE FROM users_table_part WHERE user_id = 1 and value_1 = 1;\");\n> +$node_publisher->safe_psql('postgres',\n> + \"DELETE FROM users_table_part WHERE user_id = 12 and value_1 = 12;\");\n> +\n> +# wait until the index is used on the subscriber\n> +$node_subscriber->poll_query_until(\n> + 'postgres', q{select sum(idx_scan)=3 from pg_stat_all_indexes where\n> indexrelname ilike 'users_table_part_%';}\n> +) or die \"Timed out while waiting for check subscriber\n> tap_sub_rep_full updates partitioned table\";\n> +\n>\n> Can we combine these two polls?\n>\n\nLooking at it closely, the first one seems like an unnecessary poll anyway.\nWe can simply check the idxscan at the end of the test, I don't see\nvalue in checking earlier.\n\n\n>\n> 6.\n> +# Testcase start: SUBSCRIPTION USES INDEX WITH MULTIPLE ROWS AND COLUMNS,\n> ALSO\n> +# DROPS COLUMN\n>\n> In this test, let's try to update/delete 2-3 rows instead of 20. And\n> after drop columns, let's keep just one of the update or delete.\n>\n\nchanged to 3 rows\n\n\n>\n> 7. Apart from the above, I think it is better to use\n> wait_for_catchup() consistently before trying to verify the data on\n> the subscriber. We always use it in other tests. 
I guess here you are\n> relying on the poll for index scans to ensure that data is replicated\n> but I feel it may still be better to use wait_for_catchup().\n>\n\nYes, that was my understanding & expectation. I'm not convinced that\n wait_for_catchup() is strictly needed, as without catching up, how could\npg_stat_all_indexes be updated? Still, it is good to be consistent\nwith the test suite. So, applied your suggestion.\n\nSimilarly, wait_for_subscription_sync uses the publisher name and\n> appname in other tests, so it is better to be consistent. It can avoid\n> random failures by ensuring data is synced.\n>\n\nmakes sense.\n\nI'll attach a new patch in the next e-mail, along with your\nother comments.\n\n\nThanks,\nOnder KALACI\n\nHi Amit, all\n1.\n+# Testcase start: SUBSCRIPTION USES INDEX WITH PUB/SUB DIFFERENT DATA VIA\n+# A UNIQUE INDEX THAT IS NOT PRIMARY KEY OR REPLICA IDENTITY\n\nNo need to use Delete test separate for this.Yeah, there is really no difference between update/delete for this patch,so it makes sense. I initially added it for completeness for the coverage,but as it has the perf overhead for the tests, I agree that we could drop some of those. \n\n2.\n+$node_publisher->safe_psql('postgres',\n+ \"UPDATE test_replica_id_full SET x = x + 1 WHERE x = 1;\");\n+$node_publisher->safe_psql('postgres',\n+ \"UPDATE test_replica_id_full SET x = x + 1 WHERE x = 3;\");\n+\n+# check if the index is used even when the index has NULL values\n+$node_subscriber->poll_query_until(\n+ 'postgres', q{select idx_scan=2 from pg_stat_all_indexes where\nindexrelname = 'test_replica_id_full_idx';}\n+) or die \"Timed out while waiting for check subscriber\ntap_sub_rep_full updates test_replica_id_full table\";\n\nHere, I think only one update is sufficient.done. 
I guess you requested this change so that we would waitfor idx_scan=1 not idx_scan=2, which could help.\n\n3.\n+$node_subscriber->safe_psql('postgres',\n+ \"CREATE INDEX people_last_names ON people(lastname)\");\n+\n+# wait until the index is created\n+$node_subscriber->poll_query_until(\n+ 'postgres', q{select count(*)=1 from pg_stat_all_indexes where\nindexrelname = 'people_last_names';}\n+) or die \"Timed out while waiting for creating index people_last_names\";\n\nI don't think we need this poll. true, not sure why I have this. none of the tests has this anyway.\n\n4.\n+# update 2 rows\n+$node_publisher->safe_psql('postgres',\n+ \"UPDATE people SET firstname = 'no-name' WHERE firstname = 'first_name_1';\");\n+$node_publisher->safe_psql('postgres',\n+ \"UPDATE people SET firstname = 'no-name' WHERE firstname =\n'first_name_3' AND lastname = 'last_name_3';\");\n+\n+# wait until the index is used on the subscriber\n+$node_subscriber->poll_query_until(\n+ 'postgres', q{select idx_scan=2 from pg_stat_all_indexes where\nindexrelname = 'people_names';}\n+) or die \"Timed out while waiting for check subscriber\ntap_sub_rep_full updates two rows via index scan with index on\nexpressions and columns\";\n+\n+$node_publisher->safe_psql('postgres',\n+ \"DELETE FROM people WHERE firstname = 'no-name';\");\n+\n+# wait until the index is used on the subscriber\n+$node_subscriber->poll_query_until(\n+ 'postgres', q{select idx_scan=4 from pg_stat_all_indexes where\nindexrelname = 'people_names';}\n+) or die \"Timed out while waiting for check subscriber\ntap_sub_rep_full deletes two rows via index scan with index on\nexpressions and columns\";\n\nI think having one update or delete should be sufficient.So, I dropped the 2nd update, but kept 1 update and 1 delete.The latter deletes the tuple updated by the former. Seems likean interesting test to keep.Still, I dropped one of the extra poll_query_until, which is probablygood enough for this one? Let me know if you think otherwise. 
\n\n5.\n+# update rows, moving them to other partitions\n+$node_publisher->safe_psql('postgres',\n+ \"UPDATE users_table_part SET value_1 = 0 WHERE user_id = 4;\");\n+\n+# wait until the index is used on the subscriber\n+$node_subscriber->poll_query_until(\n+ 'postgres', q{select sum(idx_scan)=1 from pg_stat_all_indexes where\nindexrelname ilike 'users_table_part_%';}\n+) or die \"Timed out while waiting for updates on partitioned table with index\";\n+\n+# delete rows from different partitions\n+$node_publisher->safe_psql('postgres',\n+ \"DELETE FROM users_table_part WHERE user_id = 1 and value_1 = 1;\");\n+$node_publisher->safe_psql('postgres',\n+ \"DELETE FROM users_table_part WHERE user_id = 12 and value_1 = 12;\");\n+\n+# wait until the index is used on the subscriber\n+$node_subscriber->poll_query_until(\n+ 'postgres', q{select sum(idx_scan)=3 from pg_stat_all_indexes where\nindexrelname ilike 'users_table_part_%';}\n+) or die \"Timed out while waiting for check subscriber\ntap_sub_rep_full updates partitioned table\";\n+\n\nCan we combine these two polls?Looking at it closely, the first one seems like an unnecessary poll anyway.We can simply check the idxscan at the end of the test, I don't seevalue in checking earlier. \n\n6.\n+# Testcase start: SUBSCRIPTION USES INDEX WITH MULTIPLE ROWS AND COLUMNS, ALSO\n+# DROPS COLUMN\n\nIn this test, let's try to update/delete 2-3 rows instead of 20. And\nafter drop columns, let's keep just one of the update or delete.changed to 3 rows \n\n7. Apart from the above, I think it is better to use\nwait_for_catchup() consistently before trying to verify the data on\nthe subscriber. We always use it in other tests. I guess here you are\nrelying on the poll for index scans to ensure that data is replicated\nbut I feel it may still be better to use wait_for_catchup().Yes, that was my understanding & expectation. 
I'm not convinced that wait_for_catchup() is strictly needed, as without catching up, how couldpg_stat_all_indexes be updated? Still, it is good to be consistentwith the test suite. So, applied your suggestion.\nSimilarly, wait_for_subscription_sync uses the publisher name and\nappname in other tests, so it is better to be consistent. It can avoid\nrandom failures by ensuring data is synced. makes sense.I'll attach a new patch in the next e-mail, along with your other comments.Thanks,Onder KALACI",
"msg_date": "Sun, 12 Mar 2023 18:07:17 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Amit, all\n\n\n> >> I think we can add such a test (which relies on existing buggy\n> >> behavior) later after fixing the existing bug. For now, it would be\n> >> better to remove that test and add it after we fix dropped columns\n> >> issue in HEAD.\n> >\n> >\n> > Alright, when I push the next version (hopefully tomorrow), I'll follow\n> this suggestion.\n> >\n>\n> Okay, thanks. See, if you can also include your changes in the patch\n> wip_for_optimize_index_column_match (after discussed modification).\n> Few other minor comments:\n>\n\nSure, done. Please check RemoteRelContainsLeftMostColumnOnIdx() function.\n\nNote that we already have a test for that on SOME NULL VALUES AND MISSING\nCOLUMN.\nPreviously we'd check if test_replica_id_full_idy is used. Now, we don't\nbecause it is not\nused anymore :) I initially used poll_query_until with idx_scan=0, but that\nalso seems\nconfusing to read in the test. It looks like it could be prone to race\nconditions as\npoll_query_until with idxcan=0 does not guarantee anything.\n\n\n>\n> 1.\n> + are enforced for primary keys. Internally, we follow a similar\n> approach for\n> + supporting index scans within logical replication scope. If there are\n> no\n>\n> I think we can remove the above line: \"Internally, we follow a similar\n> approach for supporting index scans within logical replication scope.\"\n> This didn't seem useful for users.\n>\n>\nremoved\n\n\n> 2.\n> diff --git a/src/backend/executor/execReplication.c\n> b/src/backend/executor/execReplication.c\n> index bc6409f695..646e608eb7 100644\n> --- a/src/backend/executor/execReplication.c\n> +++ b/src/backend/executor/execReplication.c\n> @@ -83,11 +83,8 @@ build_replindex_scan_key(ScanKey skey, Relation\n> rel, Relation idxrel,\n> if (!AttributeNumberIsValid(table_attno))\n> {\n> /*\n> - * XXX: For a non-primary/unique index with an\n> additional\n> - * expression, we do not have to continue at\n> this point. 
However,\n> - * the below code assumes the index scan is\n> only done for simple\n> - * column references. If we can relax the\n> assumption in the below\n> - * code-block, we can also remove the continue.\n> + * XXX: Currently, we don't support\n> expressions in the scan key,\n> + * see code below.\n> */\n>\n>\n> I have tried to simplify the above comment. See, if that makes sense to\n> you.\n>\n\nMakes sense\n\n\n>\n> 3.\n> /*\n> + * We only need to allocate once. This is allocated within per\n> + * tuple context -- ApplyMessageContext -- hence no need to\n> + * explicitly pfree().\n> + */\n>\n> We normally don't write why we don't need to explicitly pfree. It is\n> good during the review but not sure if it is a good idea to keep it in\n> the final code.\n>\n>\nSounds good, applied\n\n4. I have modified the proposed commit message as follows, see if that\n> makes sense to you, and let me know if I missed anything especially\n> the review/author credit.\n>\n> Allow the use of indexes other than PK and REPLICA IDENTITY on the\n> subscriber.\n>\n> Using REPLICA IDENTITY FULL on the publisher can lead to a full table\n> scan per tuple change on the subscription when REPLICA IDENTITY or PK\n> index is not available. This makes REPLICA IDENTITY FULL impractical\n> to use apart from some small number of use cases.\n>\n> This patch allows using indexes other than PRIMARY KEY or REPLICA\n> IDENTITY on the subscriber during apply of update/delete. The index\n> that can be used must be a btree index, not a partial index, and it\n> must have at least one column reference (i.e. cannot consist of only\n> expressions). We can uplift these restrictions in the future. There is\n> no smart mechanism to pick the index. If there is more than one index\n> that\n> satisfies these requirements, we just pick the first one. 
We discussed\n> using some of the optimizer's low-level APIs for this but ruled it out\n> as that can be a maintenance burden in the long run.\n>\n> This patch improves the performance in the vast majority of cases and\n> the improvement is proportional to the amount of data in the table.\n> However, there could be some regression in a small number of cases\n> where the indexes have a lot of duplicate and dead rows. It was\n> discussed that those are mostly impractical cases but we can provide a\n> table or subscription level option to disable this feature if\n> required.\n>\n> Author: Onder Kalaci\n> Reviewed-by: Peter Smith, Shi yu, Hou Zhijie, Vignesh C, Kuroda\n> Hayato, Amit Kapila\n> Discussion:\n> https://postgr.es/m/CACawEhVLqmAAyPXdHEPv1ssU2c=dqOniiGz7G73HfyS7+nGV4w@mail.gmail.com\n>\n>\nI also see 2 mails/reviews from Wang wei, but I'm not sure what qualifies\nas \"reviewer\" for this group. Should we\nadd that name as well? I think you can guide us on this.\n\nOther than that, I only fixed one extra new line between 'that' and\n\"satisfies'. Other than that, it looks pretty good!\n\nThanks,\nOnder",
"msg_date": "Sun, 12 Mar 2023 18:07:21 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Sat, Mar 11, 2023 at 6:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Mar 10, 2023 at 7:58 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n> >\n> >\n> > I think one option could be to drop some cases altogether, but not sure we'd want that.\n> >\n> > As a semi-related question: Are you aware of any setting that'd make pg_stat_all_indexes\n> > reflect the changes sooner? It is hard to debug what is the bottleneck in the tests, but\n> > I have a suspicion that there might be several poll_query_until() calls on\n> > pg_stat_all_indexes, which might be the reason?\n> >\n>\n> Yeah, I also think poll_query_until() calls on pg_stat_all_indexes is\n> the main reason for these tests taking more time. When I commented\n> those polls, it drastically reduces the test time. On looking at\n> pgstat_report_stat(), it seems we don't report stats sooner than 1s\n> and as most of this patch's test relies on stats, it leads to taking\n> more time. I don't have a better idea to verify this patch without\n> checking whether the index scan is really used by referring to\n> pg_stat_all_indexes. I think trying to reduce the poll calls may help\n> in reducing the test timings further. Some ideas on those lines are as\n> follows:\n\nIf the reason for the stats polling was only to know if some index is\nchosen or not, I was wondering if you can just convey the same\ninformation to the TAP test via some conveniently placed (DEBUG?)\nlogging.\n\nThis way the TAP test can do a 'wait_for_log' instead of the\n'poll_query_until'. It will probably generate lots of extra logging\nbut it still might be lots faster that current code because it won't\nincur the 1s overheads of the stats.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Mon, 13 Mar 2023 08:14:03 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Fri, Mar 10, 2023 8:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Fri, Mar 10, 2023 at 5:16 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\r\n> >\r\n> >>\r\n> >> wip_for_optimize_index_column_match\r\n> >> +static bool\r\n> >> +IndexContainsAnyRemoteColumn(IndexInfo *indexInfo,\r\n> >> + LogicalRepRelation *remoterel)\r\n> >> +{\r\n> >> + for (int i = 0; i < indexInfo->ii_NumIndexAttrs; i++)\r\n> >> + {\r\n> >>\r\n> >> Wouldn't it be better to just check if the first column is not part of\r\n> >> the remote column then we can skip that index?\r\n> >\r\n> >\r\n> > Reading [1], I think I can follow what you suggest. So, basically,\r\n> > if the leftmost column is not filtered, we have the following:\r\n> >\r\n> >> but the entire index would have to be scanned, so in most cases the planner\r\n> would prefer a sequential table scan over using the index.\r\n> >\r\n> >\r\n> > So, in our case, we could follow a similar approach. If the leftmost column of\r\n> the index\r\n> > is not sent over the wire from the pub, we can prefer the sequential scan.\r\n> >\r\n> > Is my understanding of your suggestion accurate?\r\n> >\r\n> \r\n> Yes. I request an opinion from Shi-San who has reported the problem.\r\n> \r\n\r\nI also agree with this.\r\nAnd I think we can mention this in the comments if we do so.\r\n\r\nRegards,\r\nShi Yu\r\n",
"msg_date": "Mon, 13 Mar 2023 02:30:23 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Shi Yu,\n\n\n> > >\n> > >\n> > > Reading [1], I think I can follow what you suggest. So, basically,\n> > > if the leftmost column is not filtered, we have the following:\n> > >\n> > >> but the entire index would have to be scanned, so in most cases the\n> planner\n> > would prefer a sequential table scan over using the index.\n> > >\n> > >\n> > > So, in our case, we could follow a similar approach. If the leftmost\n> column of\n> > the index\n> > > is not sent over the wire from the pub, we can prefer the sequential\n> scan.\n> > >\n> > > Is my understanding of your suggestion accurate?\n> > >\n> >\n> > Yes. I request an opinion from Shi-San who has reported the problem.\n> >\n>\n> I also agree with this.\n> And I think we can mention this in the comments if we do so.\n>\n>\nAlready commented on FindUsableIndexForReplicaIdentityFull() on v44.\n\n\nThanks,\nOnder KALACI",
"msg_date": "Mon, 13 Mar 2023 09:22:31 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Monday, March 13, 2023 2:23 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\r\nHi,\r\n\r\n> > >\r\n> > >\r\n> > > Reading [1], I think I can follow what you suggest. So, basically,\r\n> > > if the leftmost column is not filtered, we have the following:\r\n> > >\r\n> > >> but the entire index would have to be scanned, so in most cases the planner\r\n> > would prefer a sequential table scan over using the index.\r\n> > >\r\n> > >\r\n> > > So, in our case, we could follow a similar approach. If the leftmost column of\r\n> > the index\r\n> > > is not sent over the wire from the pub, we can prefer the sequential scan.\r\n> > >\r\n> > > Is my understanding of your suggestion accurate?\r\n> > >\r\n> > \r\n> > Yes. I request an opinion from Shi-San who has reported the problem.\r\n> > \r\n> \r\n> I also agree with this.\r\n> And I think we can mention this in the comments if we do so.\r\n> \r\n> Already commented on FindUsableIndexForReplicaIdentityFull() on v44.\r\n\r\nThanks for updating the patch.\r\n\r\nI noticed one problem:\r\n\r\n+static bool\r\n+RemoteRelContainsLeftMostColumnOnIdx(IndexInfo *indexInfo,\r\n+\t\t\t\t\t\t\t\t\t LogicalRepRelation *remoterel)\r\n+{\r\n+\tAttrNumber\t\t\tkeycol;\r\n+\r\n+\tif (indexInfo->ii_NumIndexAttrs < 1)\r\n+\t\treturn false;\r\n+\r\n+\tkeycol = indexInfo->ii_IndexAttrNumbers[0];\r\n+\tif (!AttributeNumberIsValid(keycol))\r\n+\t\treturn false;\r\n+\r\n+\treturn bms_is_member(keycol-1, remoterel->attkeys);\r\n+}\r\n\r\nIn this function, it used the local column number(keycol) to match the remote\r\ncolumn number(attkeys), I think it will cause problem if the column order\r\nbetween pub/sub doesn't match. 
Like:\r\n\r\n-------\r\n- pub\r\nCREATE TABLE test_replica_id_full (x int, y int);\r\nALTER TABLE test_replica_id_full REPLICA IDENTITY FULL;\r\nCREATE PUBLICATION tap_pub_rep_full FOR TABLE test_replica_id_full;\r\n- sub\r\nCREATE TABLE test_replica_id_full (z int, y int, x int);\r\nCREATE unique INDEX idx ON test_replica_id_full(z);\r\nCREATE SUBSCRIPTION tap_sub_rep_full_0 CONNECTION 'dbname=postgres port=5432' PUBLICATION tap_pub_rep_full;\r\n-------\r\n\r\nI think we need to use the attrmap->attnums to convert the column number before\r\ncomparing. Just for reference, attach a diff(0001) which I noted down when trying to\r\nfix the problem.\r\n\r\nBesides, I also look at the \"WIP: Optimize for non-pkey / non-RI unique\r\nindexes\" patch, I think it also had a similar problem about the column\r\nmatching. And another thing I think we can improved in this WIP patch is that\r\nwe can cache the result of IsIdxSafeToSkipDuplicates() instead of doing it for\r\neach UPDATE, because the cost of this function becomes bigger after applying\r\nthis patch. And for reference, I tried to improve the WIP for the same, and\r\nhere is a slight modified version of this WIP(0002). Feel free to modify or merge\r\nit if needed.\r\nThanks for Shi-san for helping to finish these fixes.\r\n\r\n\r\nBest Regards,\r\nHou zj",
"msg_date": "Mon, 13 Mar 2023 09:42:47 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Hou zj, Shi-san, all\n\n\n> In this function, it used the local column number(keycol) to match the\n> remote\n> column number(attkeys), I think it will cause problem if the column order\n> between pub/sub doesn't match. Like:\n>\n> -------\n> - pub\n> CREATE TABLE test_replica_id_full (x int, y int);\n> ALTER TABLE test_replica_id_full REPLICA IDENTITY FULL;\n> CREATE PUBLICATION tap_pub_rep_full FOR TABLE test_replica_id_full;\n> - sub\n> CREATE TABLE test_replica_id_full (z int, y int, x int);\n> CREATE unique INDEX idx ON test_replica_id_full(z);\n> CREATE SUBSCRIPTION tap_sub_rep_full_0 CONNECTION 'dbname=postgres\n> port=5432' PUBLICATION tap_pub_rep_full;\n> -------\n>\n> I think we need to use the attrmap->attnums to convert the column number\n> before\n> comparing. Just for reference, attach a diff(0001) which I noted down when\n> trying to\n> fix the problem.\n>\n\nI'm always afraid of these types of last minute additions to the patch, and\nhere we have\nthis issue on one of the latest addition :(\n\nThanks for reporting the problem and also providing guidance on the fix.\nAfter reading\ncodes on attrMap and debugging this case further, I think your suggestion\nmakes sense.\n\nI only made some small changes, and included them in the patch.\n\n\n> Besides, I also look at the \"WIP: Optimize for non-pkey / non-RI unique\n> indexes\" patch, I think it also had a similar problem about the column\n> matching\n\n\nRight, I'll incorporate this fix to that one as well.\n\n\n> . And another thing I think we can improved in this WIP patch is that\n> we can cache the result of IsIdxSafeToSkipDuplicates() instead of doing it\n> for\n> each UPDATE, because the cost of this function becomes bigger after\n> applying\n> this patch\n\n\nYes, it makes sense.\n\n\n>\n> Thanks for Shi-san for helping to finish these fixes.\n>\n> Thank you both!\n\n\nOnder Kalaci",
"msg_date": "Mon, 13 Mar 2023 13:51:00 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 2:44 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Sat, Mar 11, 2023 at 6:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Mar 10, 2023 at 7:58 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n> > >\n> > >\n> > > I think one option could be to drop some cases altogether, but not sure we'd want that.\n> > >\n> > > As a semi-related question: Are you aware of any setting that'd make pg_stat_all_indexes\n> > > reflect the changes sooner? It is hard to debug what is the bottleneck in the tests, but\n> > > I have a suspicion that there might be several poll_query_until() calls on\n> > > pg_stat_all_indexes, which might be the reason?\n> > >\n> >\n> > Yeah, I also think poll_query_until() calls on pg_stat_all_indexes is\n> > the main reason for these tests taking more time. When I commented\n> > those polls, it drastically reduces the test time. On looking at\n> > pgstat_report_stat(), it seems we don't report stats sooner than 1s\n> > and as most of this patch's test relies on stats, it leads to taking\n> > more time. I don't have a better idea to verify this patch without\n> > checking whether the index scan is really used by referring to\n> > pg_stat_all_indexes. I think trying to reduce the poll calls may help\n> > in reducing the test timings further. Some ideas on those lines are as\n> > follows:\n>\n> If the reason for the stats polling was only to know if some index is\n> chosen or not, I was wondering if you can just convey the same\n> information to the TAP test via some conveniently placed (DEBUG?)\n> logging.\n>\n\nI had thought about it but didn't convince myself that it would be a\nbetter approach because it would LOG a lot of messages for bulk\nupdates/deletes. Note for each row update on the publisher a new\nindex/sequence scan will happen. So, instead, I tried to further\nchange the test cases to remove unnecessary parts. 
I have changed\nbelow tests:\n\n1.\n+# subscriber gets the missing table information\n+$node_subscriber->safe_psql('postgres',\n+ \"ALTER SUBSCRIPTION tap_sub_rep_full REFRESH PUBLICATION\");\n\nThis and the follow-on test was not required after we have removed\nDropped columns test.\n\n2. Reduce the number of updates/deletes in the first test to two rows.\n\n3. Removed the cases for dropping the index. This ensures that after\ndropping the index on the table we switch to either an index scan (if\na new index is created) or to a sequence scan. It doesn't seem like a\nvery interesting case to me.\n\nApart from the above, I have removed the explicit setting of\n'wal_retrieve_retry_interval = 1ms' as the same is not done for any\nother subscription tests. I know setting wal_retrieve_retry_interval\navoids the launcher sometimes taking more time to launch apply worker\nbut it is better to be consistent. See the changes in\nchanges_amit_1.patch, if you agree with the same then please include\nthem in the next version.\n\nAfter doing the above, the test time on my machine is closer to what\nother tests take which is ~5s.\n\n\n--\nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 13 Mar 2023 17:18:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Amit, Peter, all\n\n\n> > If the reason for the stats polling was only to know if some index is\n> > chosen or not, I was wondering if you can just convey the same\n> > information to the TAP test via some conveniently placed (DEBUG?)\n> > logging.\n> >\n>\n> I had thought about it but didn't convince myself that it would be a\n> better approach because it would LOG a lot of messages for bulk\n> updates/deletes.\n\n\nI'm also hesitant to add any log messages for testing purposes, especially\nsomething like this one, where a single UPDATE on the source code\nleads to an unbounded number of logs.\n\n\n>\n> 1.\n> +# subscriber gets the missing table information\n> +$node_subscriber->safe_psql('postgres',\n> + \"ALTER SUBSCRIPTION tap_sub_rep_full REFRESH PUBLICATION\");\n>\n> This and the follow-on test was not required after we have removed\n> Dropped columns test.\n>\n>\nRight, I kept it with the idea that we might get the dropped column changes\nearlier, so that I can rebase and add the drop column ones.\n\nBut, sure, we can add that later to other tests.\n\n\n> 2. Reduce the number of updates/deletes in the first test to two rows.\n>\n\nWe don't have any particular reasons to have more tuples. Given the\ntime constraints, I don't have any objections to change this.\n\n\n>\n> 3. Removed the cases for dropping the index. This ensures that after\n> dropping the index on the table we switch to either an index scan (if\n> a new index is created) or to a sequence scan. It doesn't seem like a\n> very interesting case to me.\n>\n\nFor that test, my goal was to ensure/show that the invalidation callback\nis triggered after `DROP / CREATE INDEX` commands.\n\nCan we always assume that this would never change? 
Because if this\nbehavior ever changes, the users would stuck with the wrong/old\nindex until VACUUM happens.\n\n\n>\n> Apart from the above, I have removed the explicit setting of\n> 'wal_retrieve_retry_interval = 1ms' as the same is not done for any\n> other subscription tests. I know setting wal_retrieve_retry_interval\n> avoids the launcher sometimes taking more time to launch apply worker\n> but it is better to be consistent\n\n\nHmm, I cannot remember why I added that. It was probably to make\npoll_query_until/wait_for_catchup to happen faster.\n\nBut, running the test w/wout this setting, I cannot observe any noticeable\ndifference. So, probably fine to remove.\n\n\n> . See the changes in\n> changes_amit_1.patch, if you agree with the same then please include\n> them in the next version.\n>\n\nincluded all, but I'm not very sure to remove (3). If you think we have\ncoverage for that in other cases, I'm fine with that.\n\n\n>\n> After doing the above, the test time on my machine is closer to what\n> other tests take which is ~5s.\n>\n> Yes, same for me.\n\nThanks, attaching v46",
"msg_date": "Mon, 13 Mar 2023 15:43:54 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 6:14 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n>>\n>>\n>> 3. Removed the cases for dropping the index. This ensures that after\n>> dropping the index on the table we switch to either an index scan (if\n>> a new index is created) or to a sequence scan. It doesn't seem like a\n>> very interesting case to me.\n>\n>\n> For that test, my goal was to ensure/show that the invalidation callback\n> is triggered after `DROP / CREATE INDEX` commands.\n>\n\nFair point. I suggest in that case just keep one of the tests for Drop\nIndex such that after that it will pick up a sequence scan. However,\njust do the poll for the number of index scans stat once. I think that\nwill cover the case you are worried about without having a noticeable\nimpact on test timing.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 13 Mar 2023 18:41:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Amit, all\n\n> >\n> > For that test, my goal was to ensure/show that the invalidation callback\n> > is triggered after `DROP / CREATE INDEX` commands.\n> >\n>\n> Fair point. I suggest in that case just keep one of the tests for Drop\n> Index such that after that it will pick up a sequence scan. However,\n> just do the poll for the number of index scans stat once. I think that\n> will cover the case you are worried about without having a noticeable\n> impact on test timing.\n>\n>\nSo, after dropping the index, it is not possible to poll for the idxscan.\n\nBut, I think, after the drop index, it is enough to check if the\nmodification\nis applied properly on the target (wait_for_catchup + safe_psql).\nIf it were to cache the indexOid, the update/delete would fail anyway.\n\nAttaching v47.\n\n\nThanks,\nOnder KALACI",
"msg_date": "Mon, 13 Mar 2023 17:16:23 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Mon, Mar 13, 2023 10:16 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\r\n> \r\n> Attaching v47.\r\n> \r\n\r\nThanks for updating the patch. Here are some comments.\r\n\r\n1.\r\nin RemoteRelContainsLeftMostColumnOnIdx():\r\n\r\n+\tif (indexInfo->ii_NumIndexAttrs < 1)\r\n+\t\treturn false;\r\n\r\nDid you see any cases that the condition is true? I think there is at least one\r\ncolumn in the index. If so, we can use an Assert().\r\n\r\n+\tif (attrmap->maplen <= AttrNumberGetAttrOffset(keycol))\r\n+\t\treturn false;\r\n\r\nSimilarly, I think `attrmap->maplen` is the number of columns and it is always\r\ngreater than keycol. If you agree, we can check it with an Assert(). Besides, It\r\nseems we don't need AttrNumberGetAttrOffset().\r\n\r\n2.\r\n+# make sure that the subscriber has the correct data after the update UPDATE\r\n\r\n\"update UPDATE\" seems to be a typo.\r\n\r\n3.\r\n+# now, drop the index with the expression, and re-create index on column lastname\r\n\r\nThe comment says \"re-create index on column lastname\" but it seems we didn't do\r\nthat, should it be modified to something like: \r\n# now, drop the index with the expression, we will use sequential scan\r\n\r\nBesides these, the patch LGTM.\r\n\r\nRegards,\r\nShi Yu\r\n",
"msg_date": "Tue, 14 Mar 2023 06:01:11 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 7:46 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n> Attaching v47.\n>\n\nI have made the following changes in the attached patch (a) removed\nthe function IsIdxSafeToSkipDuplicates() and used the check directly\nin the caller; (b) changed a few comments in the patch; (c) the test\nfile was inconsistently using ';' while executing statement with\nsafe_psql, changed it to remove ';'.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Tue, 14 Mar 2023 12:20:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Shi Yu,\n\n\n> in RemoteRelContainsLeftMostColumnOnIdx():\n>\n> + if (indexInfo->ii_NumIndexAttrs < 1)\n> + return false;\n>\n> Did you see any cases that the condition is true? I think there is at\n> least one\n> column in the index. If so, we can use an Assert().\n>\n\nActually, it was mostly to guard against any edge cases. I thought similar\nto tables,\nwe can have zero column indexes, but it turns out it is not possible. Also,\nindex_create seems to check that, so changing it asset makes sense:\n\n>\n> /*\n> * check parameters\n> */\n> if (indexInfo->ii_NumIndexAttrs < 1)\n> elog(ERROR, \"must index at least one column\");\n\n\n\n\n>\n> + if (attrmap->maplen <= AttrNumberGetAttrOffset(keycol))\n> + return false;\n>\n> Similarly, I think `attrmap->maplen` is the number of columns and it is\n> always\n> greater than keycol. If you agree, we can check it with an Assert().\n\n\nAt this point, I'm really hesitant to make any assumptions. Logical\nreplication\nis pretty flexible, and who knows maybe dropped columns will be treated\ndifferently at some point, and this assumption changes?\n\nI really feel more comfortable to keep this as-is. We call this function\nvery infrequently\nanyway.\n\n\n> Besides, It\n> seems we don't need AttrNumberGetAttrOffset().\n>\n>\nWhy not? 
We are accessing the AttrNumberGetAttrOffset(keycol) element\nof the array attnums?\n\n\n> 2.\n> +# make sure that the subscriber has the correct data after the update\n> UPDATE\n>\n> \"update UPDATE\" seems to be a typo.\n>\n>\nthanks, fixed\n\n\n> 3.\n> +# now, drop the index with the expression, and re-create index on column\n> lastname\n>\n> The comment says \"re-create index on column lastname\" but it seems we\n> didn't do\n> that, should it be modified to something like:\n> # now, drop the index with the expression, we will use sequential scan\n>\n>\n>\nThanks, fixed\n\nI'll add the changes to v49 in the next e-mail.\n\nThanks,\nOnder KALACI",
"msg_date": "Tue, 14 Mar 2023 10:18:03 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Hi Amit, all\n\n\nAmit Kapila <amit.kapila16@gmail.com>, 14 Mar 2023 Sal, 09:50 tarihinde\nşunu yazdı:\n\n> On Mon, Mar 13, 2023 at 7:46 PM Önder Kalacı <onderkalaci@gmail.com>\n> wrote:\n> >\n> > Attaching v47.\n> >\n>\n> I have made the following changes in the attached patch (a) removed\n> the function IsIdxSafeToSkipDuplicates() and used the check directly\n> in the caller\n\n\nShould be fine, we can re-introduce this function when I work on the\nnon-pkey/RI unique index improvement as a follow up to this.\n\n\n> ; (b) changed a few comments in the patch;\n\n\nThanks, looks good.\n\n\n> (c) the test\n> file was inconsistently using ';' while executing statement with\n> safe_psql, changed it to remove ';'.\n>\n>\nAlright, thanks.\n\nAnd as a self-review, when I write regression tests next time, I'll spend a\nlot\nmore time on the style/consistency/comments etc. During this review,\nthe reviewers had to spend many cycles on that area, which is something\nI should have done better.\n\nAttaching v49 with some minor changes Shi Yu noted earlier.\n\nThanks,\nOnder KALACI",
"msg_date": "Tue, 14 Mar 2023 10:18:08 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 12:48 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>>\n>> 2.\n>> +# make sure that the subscriber has the correct data after the update UPDATE\n>>\n>> \"update UPDATE\" seems to be a typo.\n>>\n>\n> thanks, fixed\n>\n>>\n>> 3.\n>> +# now, drop the index with the expression, and re-create index on column lastname\n>>\n>> The comment says \"re-create index on column lastname\" but it seems we didn't do\n>> that, should it be modified to something like:\n>> # now, drop the index with the expression, we will use sequential scan\n>>\n>>\n>\n> Thanks, fixed\n>\n> I'll add the changes to v49 in the next e-mail.\n>\n\nIt seems you forgot to address these last two comments in the latest version.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 14 Mar 2023 14:29:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com>, 14 Mar 2023 Sal, 11:59 tarihinde\nşunu yazdı:\n\n> On Tue, Mar 14, 2023 at 12:48 PM Önder Kalacı <onderkalaci@gmail.com>\n> wrote:\n> >>\n> >> 2.\n> >> +# make sure that the subscriber has the correct data after the update\n> UPDATE\n> >>\n> >> \"update UPDATE\" seems to be a typo.\n> >>\n> >\n> > thanks, fixed\n> >\n> >>\n> >> 3.\n> >> +# now, drop the index with the expression, and re-create index on\n> column lastname\n> >>\n> >> The comment says \"re-create index on column lastname\" but it seems we\n> didn't do\n> >> that, should it be modified to something like:\n> >> # now, drop the index with the expression, we will use sequential scan\n> >>\n> >>\n> >\n> > Thanks, fixed\n> >\n> > I'll add the changes to v49 in the next e-mail.\n> >\n>\n> It seems you forgot to address these last two comments in the latest\n> version.\n>\n>\nOops, sorry. I think when I get your test changes, I somehow overridden\nthese changes\non my local.",
"msg_date": "Tue, 14 Mar 2023 12:06:31 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Tue, 14 Mar 2023 at 14:36, Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n>\n> Amit Kapila <amit.kapila16@gmail.com>, 14 Mar 2023 Sal, 11:59 tarihinde şunu yazdı:\n>>\n>> On Tue, Mar 14, 2023 at 12:48 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>> >>\n>> >> 2.\n>> >> +# make sure that the subscriber has the correct data after the update UPDATE\n>> >>\n>> >> \"update UPDATE\" seems to be a typo.\n>> >>\n>> >\n>> > thanks, fixed\n>> >\n>> >>\n>> >> 3.\n>> >> +# now, drop the index with the expression, and re-create index on column lastname\n>> >>\n>> >> The comment says \"re-create index on column lastname\" but it seems we didn't do\n>> >> that, should it be modified to something like:\n>> >> # now, drop the index with the expression, we will use sequential scan\n>> >>\n>> >>\n>> >\n>> > Thanks, fixed\n>> >\n>> > I'll add the changes to v49 in the next e-mail.\n>> >\n>>\n>> It seems you forgot to address these last two comments in the latest version.\n>>\n>\n> Oops, sorry. I think when I get your test changes, I somehow overridden these changes\n> on my local.\n\nThanks for the updated patch.\nFew minor comments:\n1) The extra line break after IsIndexOnlyOnExpression function can be removed:\n+ }\n+\n+ return true;\n+}\n+\n+\n+/*\n+ * Returns true if the attrmap (which belongs to remoterel) contains the\n+ * leftmost column of the index.\n+ *\n+ * Otherwise returns false.\n+ */\n\n2) Generally we don't terminate with \".\" for single line comments\n+\n+ /*\n+ * Simple case, we already have a primary key or a replica identity index.\n+ */\n+ idxoid = GetRelationIdentityOrPK(localrel);\n+ if (OidIsValid(idxoid))\n+ return idxoid;\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 14 Mar 2023 14:44:16 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": ">\n>\n>\n> Thanks for the updated patch.\n> Few minor comments:\n> 1) The extra line break after IsIndexOnlyOnExpression function can be\n> removed:\n>\n\nremoved\n\n\n>\n>\n> 2) Generally we don't terminate with \".\" for single line comments\n> +\n> + /*\n> + * Simple case, we already have a primary key or a replica identity index.\n> + */\n> + idxoid = GetRelationIdentityOrPK(localrel);\n> + if (OidIsValid(idxoid))\n> + return idxoid;\n>\n> Well, there are several \".\" for single line comments even in the same\nfile such as:\n\n/* 0 means it's a dropped attribute. See comments atop AttrMap. */\n\nI really don't have any preference on this, but I doubt if I change it,\nI'll get\nanother review suggesting to conform to the existing style in the same file.\nSo, I'm skipping this suggestion for now, unless you have objections.",
"msg_date": "Tue, 14 Mar 2023 12:48:38 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 3:18 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>>\n\nPushed this patch but forgot to add a new testfile. Will do that soon.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 15 Mar 2023 09:12:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 9:12 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 14, 2023 at 3:18 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n> >>\n>\n> Pushed this patch but forgot to add a new testfile. Will do that soon.\n>\n\nThe main patch is committed now. I think the pending item in this\nthread is to conclude whether we need a storage or subscription to\ndisable/enable this feature. Both Andres and Onder don't seem to be in\nfavor but I am of opinion that it could be helpful in scenarios where\nthe index scan (due to duplicates or dead tuples) is slower. However,\nif we don't have a consensus on the same, we can anyway add it later.\nIf there are no more votes in favor of adding such an option, we can\nprobably close the CF entry.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 16 Mar 2023 14:15:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 2:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 15, 2023 at 9:12 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Mar 14, 2023 at 3:18 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n> > >>\n> >\n> > Pushed this patch but forgot to add a new testfile. Will do that soon.\n> >\n>\n> The main patch is committed now. I think the pending item in this\n> thread is to conclude whether we need a storage or subscription to\n> disable/enable this feature. Both Andres and Onder don't seem to be in\n> favor but I am of opinion that it could be helpful in scenarios where\n> the index scan (due to duplicates or dead tuples) is slower. However,\n> if we don't have a consensus on the same, we can anyway add it later.\n> If there are no more votes in favor of adding such an option, we can\n> probably close the CF entry.\n>\n\nI have closed this CF entry for now. However, if there is any interest\nin pursuing the storage or subscription option for this feature, we\ncan still discuss it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 18 Mar 2023 15:19:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use indexes on the subscriber when REPLICA IDENTITY is\n full on the publisher"
}
] |
[
{
"msg_contents": "Hello,\n\nThe problem we face is excessive logging of connection information\nthat clutters the logs and in corner cases with many short-lived\nconnections leads to disk space exhaustion.\n\nCurrent connection log lines share significant parts of the\ninformation - host, port, very close timestamps etc. - in the common\ncase of a successful connection:\n\n2022-07-12 12:17:39.369\nUTC,,,12875,\"10.2.101.63:35616\",62cd6663.324b,1,\"\",2022-07-12 12:17:39\nUTC,,0,LOG,00000,\"connection received: host=10.2.101.63\nport=35616\",,,,,,,,,\"\",\"not initialized\"\n2022-07-12 12:17:39.374\nUTC,\"some_user\",\"postgres\",12875,\"10.2.101.63:35616\",62cd6663.324b,2,\"authentication\",2022-07-12\n12:17:39 UTC,18/4571,0,LOG,00000,\"connection authorized:\nuser=some_user database=postgres SSL enabled (protocol=, cipher=,\ncompression=)\",,,,,,,,,\"\",\"client backend\"\n2022-07-12 12:17:39.430\nUTC,\"some_user\",\"postgres\",12875,\"10.2.101.63:35616\",62cd6663.324b,3,\"idle\",2022-07-12\n12:17:39 UTC,,0,LOG,00000,\"disconnection: session time: 0:00:00.060\nuser=some_user database=postgres host=10.2.101.63\nport=35616\",,,,,,,,,\"\",\"client backend\"\n\nRemoving some of the lines should not harm log-based investigations in\nmost cases, but will shrink the logs improving readability and space\nconsumption.\n\nI would like to get feedback on the following idea:\n\nAdd the `log_connection_stages` setting of type “string” with possible\nvalues \"received\", \"authorized\", \"authenticated\", \"disconnected\", and\n\"all\", with \"all\" being the default.\nThe setting would have effect only when `log_connections` is on.\nExample: log_connection_stages=’authorized,disconnected’.\nThat also implies there would be no need for a separate\n\"log_disconnection\" setting.\n\nFor the sake of completeness I have to mention omitting ‘received’\nfrom `log_connection_stages` would lead to absence in logs info about\nconnections that do not complete initialization: for them 
only the\n“connection received” line is currently logged. The attachment\ncontains a log excerpt to clarify the situation I am talking about.\n\nRegards,\nSergey.",
"msg_date": "Tue, 12 Jul 2022 15:52:58 +0200",
"msg_from": "Sergey Dudoladov <sergey.dudoladov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Introduce \"log_connection_stages\" setting."
},
{
"msg_contents": "On Tue, Jul 12, 2022, at 10:52 AM, Sergey Dudoladov wrote:\n> The problem we face is excessive logging of connection information\n> that clutters the logs and in corner cases with many short-lived\n> connections leads to disk space exhaustion.\nYou are proposing a fine-grained control over connection stages reported when\nlog_connections = on. It seems useful if you're only interested in (a)\nmalicious access or (b) authorized access (for audit purposes).\n\n> I would like to get feedback on the following idea:\n> \n> Add the `log_connection_stages` setting of type “string” with possible\n> values \"received\", \"authorized\", \"authenticated\", \"disconnected\", and\n> \"all\", with \"all\" being the default.\n> The setting would have effect only when `log_connections` is on.\n> Example: log_connection_stages=’authorized,disconnected’.\n> That also implies there would be no need for a separate\n> \"log_disconnection\" setting.\nYour proposal will add more confusion to the already-confused logging-related\nGUCs. If you are proposing to introduce a fine-grained control, the first step\nshould be merge log_connections and log_disconnections into a new GUC (?) and\ndeprecate them. (I wouldn't introduce a new GUC that depends on the stage of\nother GUC as you proposed.) There are 3 stages: connect (received), authorized\n(authenticated), disconnect. You can also add 'all' that mimics log_connections\n/ log_disconnections enabled. Another question is if we can reuse\nlog_connections for this improvement instead of a new GUC. In this case, you\nwould turn the boolean value into an enum value. Will it cause trouble while\nupgrading to this new version? It is one of the reasons to create a new GUC. 
I\nwould suggest log_connection_messages or log_connection (with the 's' in the\nend -- since it is too similar to the current GUC name, I'm afraid it is not a\ngood name for it).\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Tue, 12 Jul 2022 12:22:47 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce \"log_connection_stages\" setting."
},
{
"msg_contents": "Hello,\n\nThank you for the constructive feedback.\n\n> Your proposal will add more confusion to the already-confused logging-related GUCs.\n> I wouldn't introduce a new GUC that depends on the stage of other GUC as you proposed.\n\nAgreed, coupling a new GUC with \"log_connections\" is likely to lead to\nextra confusion.\n\n> There are 3 stages: connect (received), authorized (authenticated), disconnect.\n\nI've taken connection stages and terminology from the existing log messages.\nThe reason I have separated \"authorized\" and \"authenticated\" are [1]\nand [2] usages of \"log_connections\";\n\"received\" is mentioned at [3].\n\n>> Example: log_connection_stages=’authorized,disconnected’.\n> would turn the boolean value into an enum value\n\nI have thought about enums too, but we need to cover arbitrary\ncombinations of message types, for example log only \"received\" and\n\"disconnected\".\nHence the proposed type \"string\" with individual values within the\nstring really drawn from an enum.\n\n> merge log_connections and log_disconnections into a new GUC (?) and deprecate them.\n\nAre there any specific deprecation guidelines ? I have not found any\nafter a quick search for GUC deprecation in Google and commit history.\nA deprecation scheme could look like that:\n1. Mention in the docs \"log_(dis)connections\" are deprecated in favor\nof \"log_connection_stages\"\n2. Map \"log_(dis)connections\" to relevant values of\n\"log_connection_stages\" in code if the latter is unset.\n3. 
Complain in the logs if a conflict arises between the old params\nand \"log_connection_stages\", with \"log_connection_stages\"\ntaking the precedence.\n\nRegards,\nSergey\n\n[1] https://github.com/postgres/postgres/blob/3f8148c256e067dc2e8929ed174671ba7dc3339c/src/backend/utils/init/postinit.c#L257-L262\n[2] https://github.com/postgres/postgres/blob/02c408e21a6e78ff246ea7a1beb4669634fa9c4c/src/backend/libpq/auth.c#L372\n[3] https://github.com/postgres/postgres/blob/master/src/backend/postmaster/postmaster.c#L4393\n\n\n",
"msg_date": "Thu, 14 Jul 2022 13:20:58 +0200",
"msg_from": "Sergey Dudoladov <sergey.dudoladov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce \"log_connection_stages\" setting."
},
{
"msg_contents": "On Thu, Jul 14, 2022, at 8:20 AM, Sergey Dudoladov wrote:\n> I've taken connection stages and terminology from the existing log messages.\n> The reason I have separated \"authorized\" and \"authenticated\" are [1]\n> and [2] usages of \"log_connections\";\n> \"received\" is mentioned at [3].\nAfter checking the commit 9afffcb833d, I agree that you need 4 stages:\nconnected, authorized, authenticated, and disconnected.\n\n> I have thought about enums too, but we need to cover arbitrary\n> combinations of message types, for example log only \"received\" and\n> \"disconnected\".\n> Hence the proposed type \"string\" with individual values within the\n> string really drawn from an enum.\nOoops. I said enum but I meant string list.\n\n> Are there any specific deprecation guidelines ? I have not found any\n> after a quick search for GUC deprecation in Google and commit history.\n> A deprecation scheme could look like that:\n> 1. Mention in the docs \"log_(dis)connections\" are deprecated in favor\n> of \"log_connection_stages\"\n> 2. Map \"log_(dis)connections\" to relevant values of\n> \"log_connection_stages\" in code if the latter is unset.\n> 3. Complain in the logs if a conflict arises between the old params\n> and \"log_connection_stages\", with \"log_connection_stages\"\n> taking the precedence.\nNo. AFAICS in this case, you just remove log_connections and log_disconnections\nand create the new one (see for example the commit 88e98230268 that replace\ncheckpoint_segments with min_wal_size and max_wal_size). We don't generally\nkeep ConfigureNames* entries for deprecated GUCs. Unless you are renaming a GUC\n-- see map_old_guc_names; that's not the case. When we remove a GUC, we are\nintroducing an incompatibility so the only place it will be mentioned is the\nrelease notes (there is a section called \"Migration to Version X\" that lists\nall incompatibilities). 
From the developer's point of view, you only need to\nmention in the commit message that this commit is introducing an\nincompatibility. Hence, when it is time to write the release notes, the\ninformation about the removal and the new replacement will be added.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 14 Jul 2022 18:39:32 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce \"log_connection_stages\" setting."
},
{
"msg_contents": "Hi hackers,\n\nI've sketched an initial patch version; feedback is welcome.\n\nRegards,\nSergey Dudoladov",
"msg_date": "Tue, 8 Nov 2022 19:30:38 +0100",
"msg_from": "Sergey Dudoladov <sergey.dudoladov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce \"log_connection_stages\" setting."
},
{
"msg_contents": "On Tue, Nov 08, 2022 at 07:30:38PM +0100, Sergey Dudoladov wrote:\n> + <entry>Logs reception of a connection. At this point a connection has been received, but no further work has been done:\n\nreceipt\n\n> + <entry>Logs the original identity that an authentication method employs to identify a user. In most cases, the identity string equals the PostgreSQL username,\n\ns/equals/matches\n\n> +/* check_hook: validate new log_connection_messages value */\n> +bool\n> +check_log_connection_messages(char **newval, void **extra, GucSource source)\n> +{\n> +\tchar\t\t*rawname;\n> +\tList\t\t*namelist;\n> +\tListCell\t*l;\n> +\tchar\t\t*log_connection_messages = *newval;\n> +\tbool\t\t*myextra;\n> +\n> +\t/*\n> +\t * Set up the \"extra\" struct actually used by assign_log_connection_messages.\n> +\t */\n> +\tmyextra = (bool *) guc_malloc(LOG, 4 * sizeof(bool));\n\nThis function hardcodes each of the 4 connections:\n\n> +\t\tif (pg_strcasecmp(stage, \"received\") == 0)\n> +\t\t\tmyextra[0] = true;\n\nIt'd be better to use #defines or enums for these.\n\n> --- a/src/backend/tcop/postgres.c\n> +++ b/src/backend/tcop/postgres.c\n> @@ -84,8 +84,11 @@ const char *debug_query_string; /* client-supplied query string */\n> /* Note: whereToSendOutput is initialized for the bootstrap/standalone case */\n> CommandDest whereToSendOutput = DestDebug;\n> \n> -/* flag for logging end of session */\n> -bool\t\tLog_disconnections = false;\n> +/* flags for logging information about session state */\n> +bool\t\tLog_disconnected = false;\n> +bool\t\tLog_authenticated = false;\n> +bool\t\tLog_authorized = false;\n> +bool\t\tLog_received = false;\n\nI think this ought to be an integer with flag bits, rather than 4\nbooleans (I don't know, but there might be more later?). Then, the\nimplementation follows the user-facing GUC and also follows\nlog_destination.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 17 Nov 2022 09:36:29 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce \"log_connection_stages\" setting."
},
{
"msg_contents": "On Thu, Nov 17, 2022 at 09:36:29AM -0600, Justin Pryzby wrote:\n> On Tue, Nov 08, 2022 at 07:30:38PM +0100, Sergey Dudoladov wrote:\n...\n\nAlso (I didn't realize there was a CF entry), this is failing tests.\nhttp://cfbot.cputube.org/sergey-dudoladov.html\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 30 Dec 2022 21:51:03 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce \"log_connection_stages\" setting."
},
{
"msg_contents": "Hi,\n\n+1 for the idea!\n\n> + <entry><literal>authenticated</literal></entry>\n> + <entry>Logs the original identity that an authentication method employs to identify a user. In most cases, the identity string equals the PostgreSQL username,\n> + but some third-party authentication methods may alter the original user identifier before the server stores it. Failed authentication is always logged regardless of the value of this setting.</entry>\n\nI think the documentation needs to be rewrapped; those are very long lines.\n\nOn 11/17/22 07:36, Justin Pryzby wrote:\n> This function hardcodes each of the 4 connections:\n> \n>> +\t\tif (pg_strcasecmp(stage, \"received\") == 0)\n>> +\t\t\tmyextra[0] = true;\n> \n> It'd be better to use #defines or enums for these.\n\nHardcoding seems reasonable to me, if this is the only place we're doing\nstring comparison.\n\n>> --- a/src/backend/tcop/postgres.c\n>> +++ b/src/backend/tcop/postgres.c\n>> @@ -84,8 +84,11 @@ const char *debug_query_string; /* client-supplied query string */\n>> /* Note: whereToSendOutput is initialized for the bootstrap/standalone case */\n>> CommandDest whereToSendOutput = DestDebug;\n>> \n>> -/* flag for logging end of session */\n>> -bool\t\tLog_disconnections = false;\n>> +/* flags for logging information about session state */\n>> +bool\t\tLog_disconnected = false;\n>> +bool\t\tLog_authenticated = false;\n>> +bool\t\tLog_authorized = false;\n>> +bool\t\tLog_received = false;\n> \n> I think this ought to be an integer with flag bits, rather than 4\n> booleans (I don't know, but there might be more later?). Then, the\n> implementation follows the user-facing GUC and also follows\n> log_destination.\n\nAgreed. Or at the very least, follow what's done with\nwal_consistency_checking? 
But I think flag bits would be better.\n\nThe tests should be expanded for cases other than 'all'.\n\nAs to the failing test cases: it looks like there's a keyword issue with\nALTER SYSTEM and 'all', but trying to fix it by quoting also fails. I\nthink it's because of GUC_LIST_QUOTE -- is there a reason that's used\nhere? I don't think we'd need any special characters in future option\nnames. wal_consistency_checking is very similar, and it just uses\nGUC_LIST_INPUT.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Fri, 6 Jan 2023 11:56:09 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce \"log_connection_stages\" setting."
},
{
"msg_contents": "Hello, hackers\n\nThank you for the reviews. I've modified the patch to incorporate your\nsuggestions:\n+ flag bits are now used to encode different connection stages\n+ failing tests are now fixed. It was not a keyword issue but rather\n\"check_log_connection_messages\" not allocating memory properly\n in the special case log_connection_messages = 'all'\n+ the GUC option is now only GUC_LIST_INPUT\n+ typo fixes and line rewrapping in the docs\n\nRegards,\nSergey",
"msg_date": "Mon, 30 Jan 2023 19:58:19 +0100",
"msg_from": "Sergey Dudoladov <sergey.dudoladov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce \"log_connection_stages\" setting."
},
{
"msg_contents": "Thanks for updating the patch. It's currently failing check-world, due\nto a test that was added on January 23 (a9dc7f941):\n\nhttp://cfbot.cputube.org/sergey-dudoladov.html\n[19:15:57.101] Summary of Failures:\n[19:15:57.101] [19:15:57.101] 250/251 postgresql:ldap / ldap/002_bindpasswd ERROR 1.38s\n\n2023-01-30 19:15:52.427 GMT [56038] LOG: unrecognized configuration parameter \"log_connections\" in file \"/tmp/cirrus-ci-build/build/testrun/ldap/002_bindpasswd/data/t_002_bindpasswd_node_data/pgdata/postgresql.conf\" line 839\n\n> + received, but no further work has been done: Postgres is about to start\n\nsay \"PostgreSQL\" to match the rest of the docs.\n\n> + GUC_check_errmsg(\"Invalid value '%s'\", stage);\n\nThis message shouldn't be uppercased.\n\n> +\t\t\tGUC_check_errdetail(\"Valid values to use in the list are 'received', 'authenticated', 'authorized', 'disconnected', and 'all'.\"\n> +\t\t\t\"If 'all' is present, it must be the only value in the list.\");\n\nMaybe \"all\" should be first ?\n\nThere's no spaces before \"If\":\n\n| 2023-01-31 00:17:48.906 GMT [5676] DETALLE: Valid values to use in the list are 'received', 'authenticated', 'authorized', 'disconnected', and 'all'.If 'all' is present, it must be the only value in the list.\n\n> +/* flags for logging information about session state */\n> +int\t\t\tLog_connection_messages = LOG_CONNECTION_ALL;\n\nThe initial value here is overwritten by the GUC default during startup.\nFor consistency, the integer should be initialized to 0.\n\n> +extern PGDLLIMPORT int\tLog_connection_messages;\n> +\n> +/* Bitmap for logging connection messages */\n> +#define LOG_CONNECTION_RECEIVED\t\t 1\n> +#define LOG_CONNECTION_AUTHENTICATED 2\n> +#define LOG_CONNECTION_AUTHORIZED\t 4\n> +#define LOG_CONNECTION_DISCONNECTED 8\n> +#define LOG_CONNECTION_ALL\t\t\t 15\n\nMaybe the integers should be written like (1<<0)..\nAnd maybe ALL should be 0xffff ?\n\nMore nitpicks:\n\n> + Causes stages of each attempted 
connection to the server to be logged. Example: <literal>authorized,disconnected</literal>.\n\n\"Causes the specified stages of each connection attempt ..\"\n\n> + The default is the empty string meaning nothing is logged.\n\n\".. string, which means that nothing is logged\"\n\n> + <entry>Logs the original identity that an authentication method employs\n> + to identify a user. In most cases, the identity string matches the\n\n\".. original identity used by an authentication method ..'\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 30 Jan 2023 18:24:58 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce \"log_connection_stages\" setting."
},
{
"msg_contents": "Hi again,\n\nJustin, thank you for the fast review. The new version is attached.\n\nRegards,\nSergey Dudoladov",
"msg_date": "Wed, 1 Feb 2023 20:59:39 +0100",
"msg_from": "Sergey Dudoladov <sergey.dudoladov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce \"log_connection_stages\" setting."
},
{
"msg_contents": "On 2/1/23 11:59, Sergey Dudoladov wrote:\n> Justin, thank you for the fast review. The new version is attached.\n\nThis is looking very good. One bigger comment:\n\n> +\tmyextra = (int *) guc_malloc(ERROR, sizeof(int));\n> +\t*myextra = newlogconnect;\n\nIf I've understood Tom correctly in [1], both of these guc_mallocs\nshould be using a loglevel less than ERROR, to avoid forcing a\npostmaster exit when out of memory. (I used WARNING in that thread\ninstead, which seemed to be acceptable.)\n\nAnd a couple of nitpicks:\n\n> + Causes the specified stages of each connection attempt to the server to be logged. Example: <literal>authorized,disconnected</literal>.\n\nLong line; should be rewrapped.\n\n> + else { \n> + GUC_check_errcode(ERRCODE_INVALID_PARAMETER_VALUE); \n> + GUC_check_errmsg(\"invalid value '%s'\", stage); \n> + GUC_check_errdetail(\"Valid values to use in the list are 'all', 'received', 'authenticate\n> + \" If 'all' is present, it must be the only value in the list.\"); \n\nI think the errmsg here should reuse the standard message format\n invalid value for parameter \"%s\": \"%s\"\nboth for consistency and ease of translation.\n\nThanks!\n--Jacob\n\n[1] https://www.postgresql.org/message-id/2012342.1658356951%40sss.pgh.pa.us\n\n\n",
"msg_date": "Thu, 2 Mar 2023 14:35:01 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce \"log_connection_stages\" setting."
},
{
"msg_contents": "Jacob Champion <jchampion@timescale.com> writes:\n> This is looking very good. One bigger comment:\n\n>> +\tmyextra = (int *) guc_malloc(ERROR, sizeof(int));\n>> +\t*myextra = newlogconnect;\n\n> If I've understood Tom correctly in [1], both of these guc_mallocs\n> should be using a loglevel less than ERROR, to avoid forcing a\n> postmaster exit when out of memory. (I used WARNING in that thread\n> instead, which seemed to be acceptable.)\n\nActually, preferred practice is as seen in e.g. check_datestyle:\n\n\tmyextra = (int *) guc_malloc(LOG, 2 * sizeof(int));\n\tif (!myextra)\n\t\treturn false;\n\tmyextra[0] = newDateStyle;\n\tmyextra[1] = newDateOrder;\n\t*extra = (void *) myextra;\n\nwhich gives the guc.c functions an opportunity to manage the\nfailure.\n\nA quick grep shows that there are existing check functions that\ndid not get that memo, e.g. check_recovery_target_lsn.\nWe ought to clean them up.\n\nThis is, of course, not super important unless you're allocating\nsomething quite large; the odds of going OOM in the postmaster\nshould be pretty small.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Mar 2023 17:56:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Introduce \"log_connection_stages\" setting."
},
{
"msg_contents": "On 3/2/23 14:56, Tom Lane wrote:\n> Jacob Champion <jchampion@timescale.com> writes:\n>> If I've understood Tom correctly in [1], both of these guc_mallocs\n>> should be using a loglevel less than ERROR, to avoid forcing a\n>> postmaster exit when out of memory. (I used WARNING in that thread\n>> instead, which seemed to be acceptable.)\n> \n> Actually, preferred practice is as seen in e.g. check_datestyle:\n> \n> \tmyextra = (int *) guc_malloc(LOG, 2 * sizeof(int));\n> \tif (!myextra)\n> \t\treturn false;\n> \tmyextra[0] = newDateStyle;\n> \tmyextra[1] = newDateOrder;\n> \t*extra = (void *) myextra;\n> \n> which gives the guc.c functions an opportunity to manage the\n> failure.\n\nAh, thanks for the correction. (My guc_strdup(WARNING, ...) calls may\nneed to be cleaned up too, then.)\n\n--Jacob\n\n\n",
"msg_date": "Thu, 2 Mar 2023 15:02:18 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce \"log_connection_stages\" setting."
},
{
"msg_contents": "Hello,\n\nI have attached the fourth version of the patch.\n\nRegards,\nSergey.",
"msg_date": "Tue, 16 May 2023 20:51:26 +0200",
"msg_from": "Sergey Dudoladov <sergey.dudoladov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce \"log_connection_stages\" setting."
},
{
"msg_contents": "> On 16 May 2023, at 20:51, Sergey Dudoladov <sergey.dudoladov@gmail.com> wrote:\n\n> I have attached the fourth version of the patch.\n\nThis version fails the ldap_password test on all platforms on pg_ctl failing to start:\n\n[14:48:10.544] --- stdout ---\n[14:48:10.544] # executing test in /tmp/cirrus-ci-build/build/testrun/ldap_password_func/001_mutated_bindpasswd group ldap_password_func test 001_mutated_bindpasswd\n[14:48:10.544] # waiting for slapd to accept requests...\n[14:48:10.544] # setting up PostgreSQL instance\n[14:48:10.544] Bail out! pg_ctl start failed\n[14:48:10.544] # test failed\n\nUpdating src/test/modules/ldap_password_func/t/001_mutated_bindpasswd.pl with\nthe new GUC might solve the problem from skimming this.\n\nPlease send a fixed version, I'm marking the patch Waiting on Author in the\nmeantime.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 15:57:50 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Introduce \"log_connection_stages\" setting."
},
{
"msg_contents": "> On 3 Jul 2023, at 15:57, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 16 May 2023, at 20:51, Sergey Dudoladov <sergey.dudoladov@gmail.com> wrote:\n> \n>> I have attached the fourth version of the patch.\n> \n> This version fails the ldap_password test on all platforms on pg_ctl failing to start:\n> \n> [14:48:10.544] --- stdout ---\n> [14:48:10.544] # executing test in /tmp/cirrus-ci-build/build/testrun/ldap_password_func/001_mutated_bindpasswd group ldap_password_func test 001_mutated_bindpasswd\n> [14:48:10.544] # waiting for slapd to accept requests...\n> [14:48:10.544] # setting up PostgreSQL instance\n> [14:48:10.544] Bail out! pg_ctl start failed\n> [14:48:10.544] # test failed\n> \n> Updating src/test/modules/ldap_password_func/t/001_mutated_bindpasswd.pl with\n> the new GUC might solve the problem from skimming this.\n> \n> Please send a fixed version, I'm marking the patch Waiting on Author in the\n> meantime.\n\nWith the patch failing tests and the thread stalled with no update I am marking\nthis returned with feedback. Please feel free to resubmit to a future CF.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 3 Aug 2023 22:58:46 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Introduce \"log_connection_stages\" setting."
}
] |
[
{
"msg_contents": "Add copy/equal support for XID lists\n\nCommit f10a025cfe97 added support for List to store Xids, but didn't\nhandle the new type in all cases. Add some obviously necessary pieces.\nAs far as I am aware, this is all dead code as far as core code is\nconcerned, but it seems unacceptable not to have it in case third-party\ncode wants to rely on this type of list. (Some parts of the List API\nremain unimplemented, but that can be fixed as and when needed -- see\nlack of list_intersection_oid, list_deduplicate_int as precedents.)\n\nDiscussion: https://postgr.es/m/20220708164534.nbejhgt4ajz35p65@alvherre.pgsql\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/5ca0fe5c8ad7987beee95669124c7e245f2816d8\n\nModified Files\n--------------\nsrc/backend/nodes/copyfuncs.c | 5 +++--\nsrc/backend/nodes/equalfuncs.c | 8 ++++++++\nsrc/test/modules/test_oat_hooks/test_oat_hooks.c | 3 +++\n3 files changed, 14 insertions(+), 2 deletions(-)",
"msg_date": "Tue, 12 Jul 2022 14:12:49 +0000",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Add copy/equal support for XID lists"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Add copy/equal support for XID lists\n\nWhat about outfuncs/readfuncs? I see that you fixed _outList,\nbut not its caller outNode:\n\n\telse if (IsA(obj, List) || IsA(obj, IntList) || IsA(obj, OidList))\n\t\t_outList(str, obj);\n\nand the LEFT_PAREN case in nodeRead() doesn't know what to do either.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Jul 2022 10:36:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add copy/equal support for XID lists"
},
{
"msg_contents": "While looking for a place to host a test for XID lists support, I\nnoticed a mistake in test_oat_hooks, fixed as per the attached.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/",
"msg_date": "Tue, 12 Jul 2022 17:20:59 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "test_oat_hooks bug (was: Re: pgsql: Add copy/equal support for XID\n lists)"
},
{
"msg_contents": "On 2022-Jul-12, Tom Lane wrote:\n\n> What about outfuncs/readfuncs? I see that you fixed _outList,\n> but not its caller outNode:\n> \n> \telse if (IsA(obj, List) || IsA(obj, IntList) || IsA(obj, OidList))\n> \t\t_outList(str, obj);\n> \n> and the LEFT_PAREN case in nodeRead() doesn't know what to do either.\n\nHmm, true -- naively grepping for OidList wasn't enough (moreso when I\nfailed to notice one occurrence). This patch closes the holes you\nmentioned. I haven't found any others yet.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La vida es para el que se aventura\"",
"msg_date": "Tue, 12 Jul 2022 20:35:16 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add copy/equal support for XID lists"
},
{
"msg_contents": "On Tue, Jul 12, 2022 at 05:20:59PM +0200, Alvaro Herrera wrote:\n> While looking for a place to host a test for XID lists support, I\n> noticed a mistake in test_oat_hooks, fixed as per the attached.\n\nIndeed. Good catch.\n--\nMichael",
"msg_date": "Wed, 13 Jul 2022 10:02:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: test_oat_hooks bug (was: Re: pgsql: Add copy/equal support for\n XID lists)"
}
] |
[
{
"msg_contents": "I looked into the complaint at [1] about the planner being much\nstupider when one side of a JOIN USING is referenced than the other\nside. It seemed to me that that shouldn't be happening, because\nthe relevant decisions are made on the basis of EquivalenceClasses\nand both USING columns should be in the same EquivalenceClass.\nI was right about that ... but the damage is being done far upstream\nof any EquivalenceClass work. It turns out that the core of the\nissue is that the query looks like\n\nSELECT ... t1 JOIN t2 USING (x)\n GROUP BY x, t2.othercol ORDER BY t2.othercol LIMIT n\n\nIn the \"okay\" case, we resolve \"GROUP BY x\" as GROUP BY t1.x.\nLater on, we are able to realize that ordering by t1.x is\nequivalent to ordering by t2.x (because same EquivalenceClass),\nand that it's equally good to consider the GROUP BY items in\neither order, and that ordering by t2.othercol, t2.x would\nallow us to perform the grouping using a GroupAggregate on\ndata that's already sorted to meet the ORDER BY requirement.\nSince there happens to be an index on t2.othercol, t2.x,\nwe can implement the query with no explicit sort, which wins\nbig thanks to the small LIMIT.\n\nIn the not-okay case, we resolve \"GROUP BY x\" as GROUP BY t2.x.\nThen remove_useless_groupby_columns notices that x is t2's\nprimary key, so it figures that grouping by t2.othercol is\nredundant and throws away that element of GROUP BY. Now there\nis no apparent connection between the GROUP BY and ORDER BY\nlists, defeating the optimizations that would lead to a good\nchoice of plan --- in fact, we conclude early on that that\nindex's sort ordering is entirely useless to the query :-(\n\nI tried the attached quick-hack patch that just prevents\nremove_useless_groupby_columns from removing anything that\nappears in ORDER BY. 
That successfully fixes the complained-of\ncase, and it doesn't change any existing regression test results.\nI'm not sure whether there are any cases that it makes significantly\nworse.\n\n(I also kind of wonder if the fundamental problem here is that\nremove_useless_groupby_columns is being done at the wrong time,\nand we ought to do it later when we have more semantic info.\nBut I'm not volunteering to rewrite it completely.)\n\nAnyway, remove_useless_groupby_columns has been there since 9.6\nand we've not previously seen reports of cases that it makes worse,\nso this seems like a corner-case problem. Hence I wouldn't risk\nback-patching this change. It seems worth considering for HEAD\nthough, so I'll stick it in the September CF.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/17544-ebd06b00b8836a04%40postgresql.org",
"msg_date": "Tue, 12 Jul 2022 13:31:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "remove_useless_groupby_columns is too enthusiastic"
},
{
"msg_contents": "On Wed, 13 Jul 2022 at 05:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I tried the attached quick-hack patch that just prevents\n> remove_useless_groupby_columns from removing anything that\n> appears in ORDER BY. That successfully fixes the complained-of\n> case, and it doesn't change any existing regression test results.\n> I'm not sure whether there are any cases that it makes significantly\n> worse.\n\nIn addition to this, could we also do a pre-verification step to see if the\nORDER BY has anything in it that the GROUP BY does not?\n\nI don't think there's any harm in removing the GROUP BY item if the\npre-check finds something extra in ORDER BY since\npreprocess_groupclause() is going to fail in that case anyway.\n\nDavid\n\n\n",
"msg_date": "Wed, 13 Jul 2022 16:06:51 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: remove_useless_groupby_columns is too enthusiastic"
},
{
"msg_contents": "On Wed, Jul 13, 2022 at 1:31 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I tried the attached quick-hack patch that just prevents\n> remove_useless_groupby_columns from removing anything that\n> appears in ORDER BY. That successfully fixes the complained-of\n> case, and it doesn't change any existing regression test results.\n> I'm not sure whether there are any cases that it makes significantly\n> worse.\n\n\nIf there happens to be an index on t2.othercol, t2.x, the patch would\ndefinitely win since we can perform a GroupAggregate with no explicit\nsort. If there is no such index, considering the redundant sorting work\ndue to the excess columns, do we still win?\n\nI'm testing with the query below:\n\ncreate table t (a int primary key, b int, c int);\ninsert into t select i, i%1000, i from generate_series(1,1000000)i;\nanalyze t;\ncreate index t_b_a_idx on t (b, a);\n\nselect sum(c) from t group by a, b order by b limit 10;\n\nIf we have index 't_b_a_idx', there would be big performance\nimprovement. Without the index, I can see some performance drop (I'm not\nusing parallel query mechanism).\n\n\n> (I also kind of wonder if the fundamental problem here is that\n> remove_useless_groupby_columns is being done at the wrong time,\n> and we ought to do it later when we have more semantic info.\n\n\nConcur with that.\n\nThanks\nRichard",
"msg_date": "Thu, 14 Jul 2022 17:57:15 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: remove_useless_groupby_columns is too enthusiastic"
},
{
"msg_contents": "On Wed, 13 Jul 2022 at 05:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I tried the attached quick-hack patch that just prevents\n> remove_useless_groupby_columns from removing anything that\n> appears in ORDER BY. That successfully fixes the complained-of\n> case, and it doesn't change any existing regression test results.\n> I'm not sure whether there are any cases that it makes significantly\n> worse.\n\nWhat I am concerned about with this patch is that for Hash Aggregate,\nwe'll always want to remove the useless group by clauses to minimise\nthe amount of hashing and equal comparisons that we must do. So the\npatch makes that case worse in favour of possibly making Group\nAggregate cases better. I don't think that's going to be a great\ntrade-off as Hash Aggregate is probably more commonly used than Group\nAggregate, especially so when the number of rows in each group is\nlarge and there is no LIMIT clause to favour a cheap startup plan.\n\nMaybe the fix for this should be:\n\n1. Add a new bool \"isredundant_groupby\" field to SortGroupClause,\n2. Rename remove_useless_groupby_columns() to\nmark_redundant_groupby_columns() and have it set the\nisredundant_groupby instead of removing items from the list,\n3. Adjust get_useful_group_keys_orderings() so that it returns\nadditional PathKeyInfo with the isredundant_groupby items removed,\n4. Adjust the code in add_paths_to_grouping_rel() so that it always\nuses the minimal set of SortGroupClauses for Hash Aggregate paths,\n\nPerhaps a valid concern with the above is all the additional Paths\nwe'd consider if we did #3. But maybe that's not so bad as that's not\na multiplicate problem like it would be adding additional Paths to\nbase and joinrels.\n\nWe'd probably still want to keep preprocess_groupclause() as\nget_useful_group_keys_orderings() is not exhaustive in its search.\n\nDavid\n\n\n",
"msg_date": "Fri, 16 Sep 2022 12:22:08 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: remove_useless_groupby_columns is too enthusiastic"
},
{
"msg_contents": "On Fri, Sep 16, 2022 at 12:22:08PM +1200, David Rowley wrote:\n> We'd probably still want to keep preprocess_groupclause() as\n> get_useful_group_keys_orderings() is not exhaustive in its search.\n\nThis thread has been idle for a few weeks now with a review posted, so\nI have marked the patch as RwF.\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 14:42:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: remove_useless_groupby_columns is too enthusiastic"
}
] |
[
{
"msg_contents": "I was rebasing a patch which requires me to make some changes in\nget_cheapest_group_keys_order(). I noticed a few things in there that\nI think we could do a little better on:\n\n* The code uses pfree() on a list and it should be using list_free()\n\n* There's a manually coded for loop over a list which seems to be done\nso we can skip the first n elements of the list. for_each_from()\nshould be used for that.\n\n* I think list_truncate(list_copy(list), n) is a pretty bad way to\ncopy the first n elements of a list, especially when n is likely to be\n0 most of the time. I think we should just add a function called\nlist_copy_head(). We already have list_copy_tail().\n\n* We could reduce some of the branching in the while loop and just set\ncheapest_sort_cost to DBL_MAX to save having to check if we're doing\nthe first loop.\n\nI think the first 3 are worth fixing in PG15 since all that code is\nnew to that version. The 4th, I'm so sure about.\n\nDoes anyone else have any thoughts?\n\nDavid",
"msg_date": "Wed, 13 Jul 2022 10:55:07 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Some clean-up work in get_cheapest_group_keys_order()"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> * I think list_truncate(list_copy(list), n) is a pretty bad way to\n> copy the first n elements of a list, especially when n is likely to be\n> 0 most of the time. I think we should just add a function called\n> list_copy_head(). We already have list_copy_tail().\n\nAgreed, but I think there are other instances of that idiom that\nshould be cleaned up while you're at it.\n\n> I think the first 3 are worth fixing in PG15 since all that code is\n> new to that version. The 4th, I'm so sure about.\n\nI'd say keeping v15 and v16 in sync here is worth something.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Jul 2022 19:02:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Some clean-up work in get_cheapest_group_keys_order()"
},
{
"msg_contents": "On Wed, 13 Jul 2022 at 11:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > * I think list_truncate(list_copy(list), n) is a pretty bad way to\n> > copy the first n elements of a list, especially when n is likely to be\n> > 0 most of the time. I think we should just add a function called\n> > list_copy_head(). We already have list_copy_tail().\n>\n> Agreed, but I think there are other instances of that idiom that\n> should be cleaned up while you're at it.\n\nAgreed. I imagine we should just do the remaining cleanup in master\nonly. Do you agree?\n\nDavid\n\n\n",
"msg_date": "Wed, 13 Jul 2022 12:50:52 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Some clean-up work in get_cheapest_group_keys_order()"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Wed, 13 Jul 2022 at 11:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Agreed, but I think there are other instances of that idiom that\n>> should be cleaned up while you're at it.\n\n> Agreed. I imagine we should just do the remaining cleanup in master\n> only. Do you agree?\n\nNo objection.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Jul 2022 21:12:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Some clean-up work in get_cheapest_group_keys_order()"
},
{
"msg_contents": "On Wed, 13 Jul 2022 at 13:12, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > On Wed, 13 Jul 2022 at 11:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Agreed, but I think there are other instances of that idiom that\n> >> should be cleaned up while you're at it.\n>\n> > Agreed. I imagine we should just do the remaining cleanup in master\n> > only. Do you agree?\n>\n> No objection.\n\nI've now pushed the original patch to 15 and master and also pushed a\ncleanup commit to remove the list_truncate(list_copy instances from\nmaster only.\n\nThanks for looking.\n\nDavid\n\n\n",
"msg_date": "Wed, 13 Jul 2022 15:06:10 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Some clean-up work in get_cheapest_group_keys_order()"
}
] |
[
{
"msg_contents": "Hi,\n\nMost of the code is common between GetSubscriptionRelations and\nGetSubscriptionNotReadyRelations. Added a parameter to\nGetSubscriptionRelations which could provide the same functionality as\nthe existing GetSubscriptionRelations and\nGetSubscriptionNotReadyRelations. Attached patch has the changes for\nthe same. Thoughts?\n\nRegards,\nVignesh",
"msg_date": "Wed, 13 Jul 2022 12:22:06 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Refactor to make use of a common function for\n GetSubscriptionRelations\n and GetSubscriptionNotReadyRelations."
},
{
"msg_contents": "On Wed, Jul 13, 2022 at 12:22:06PM +0530, vignesh C wrote:\n> Most of the code is common between GetSubscriptionRelations and\n> GetSubscriptionNotReadyRelations. Added a parameter to\n> GetSubscriptionRelations which could provide the same functionality as\n> the existing GetSubscriptionRelations and\n> GetSubscriptionNotReadyRelations. Attached patch has the changes for\n> the same. Thoughts?\n\nRight. Using all_rels to mean that we'd filter relations that are not\nready is a bit confusing, though. Perhaps this could use a bitmask as\nargument.\n--\nMichael",
"msg_date": "Wed, 13 Jul 2022 16:43:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Refactor to make use of a common function for\n GetSubscriptionRelations and GetSubscriptionNotReadyRelations."
},
{
"msg_contents": "On Wed, Jul 13, 2022 at 5:43 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jul 13, 2022 at 12:22:06PM +0530, vignesh C wrote:\n> > Most of the code is common between GetSubscriptionRelations and\n> > GetSubscriptionNotReadyRelations. Added a parameter to\n> > GetSubscriptionRelations which could provide the same functionality as\n> > the existing GetSubscriptionRelations and\n> > GetSubscriptionNotReadyRelations. Attached patch has the changes for\n> > the same. Thoughts?\n>\n> Right. Using all_rels to mean that we'd filter relations that are not\n> ready is a bit confusing, though. Perhaps this could use a bitmask as\n> argument.\n\n+1\n\n(or some enum)\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Wed, 13 Jul 2022 17:47:59 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor to make use of a common function for\n GetSubscriptionRelations\n and GetSubscriptionNotReadyRelations."
},
{
"msg_contents": "On Wed, Jul 13, 2022 at 1:13 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jul 13, 2022 at 12:22:06PM +0530, vignesh C wrote:\n> > Most of the code is common between GetSubscriptionRelations and\n> > GetSubscriptionNotReadyRelations. Added a parameter to\n> > GetSubscriptionRelations which could provide the same functionality as\n> > the existing GetSubscriptionRelations and\n> > GetSubscriptionNotReadyRelations. Attached patch has the changes for\n> > the same. Thoughts?\n>\n> Right. Using all_rels to mean that we'd filter relations that are not\n> ready is a bit confusing, though. Perhaps this could use a bitmask as\n> argument.\n\nThe attached v2 patch has the modified version which includes the\nchanges to make the argument as bitmask.\n\nRegards,\nVignesh",
"msg_date": "Wed, 13 Jul 2022 15:24:49 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor to make use of a common function for\n GetSubscriptionRelations\n and GetSubscriptionNotReadyRelations."
},
{
"msg_contents": "On Wed, Jul 13, 2022 at 7:55 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, Jul 13, 2022 at 1:13 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Wed, Jul 13, 2022 at 12:22:06PM +0530, vignesh C wrote:\n> > > Most of the code is common between GetSubscriptionRelations and\n> > > GetSubscriptionNotReadyRelations. Added a parameter to\n> > > GetSubscriptionRelations which could provide the same functionality as\n> > > the existing GetSubscriptionRelations and\n> > > GetSubscriptionNotReadyRelations. Attached patch has the changes for\n> > > the same. Thoughts?\n> >\n> > Right. Using all_rels to mean that we'd filter relations that are not\n> > ready is a bit confusing, though. Perhaps this could use a bitmask as\n> > argument.\n>\n> The attached v2 patch has the modified version which includes the\n> changes to make the argument as bitmask.\n>\n\nBy using a bitmask I think there is an implication that the flags can\nbe combined...\n\nPerhaps it is not a problem today, but later you may want more flags. e.g.\n#define SUBSCRIPTION_REL_STATE_READY 0x02 /* READY relations */\n\nthen the bitmask idea falls apart because IIUC you have no intentions\nto permit things like:\n(SUBSCRIPTION_REL_STATE_NOT_READY | SUBSCRIPTION_REL_STATE_READY)\n\nIMO using an enum might be a better choice for that parameter.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 14 Jul 2022 09:03:48 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor to make use of a common function for\n GetSubscriptionRelations\n and GetSubscriptionNotReadyRelations."
},
{
"msg_contents": "On Thu, Jul 14, 2022 at 4:34 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Jul 13, 2022 at 7:55 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Wed, Jul 13, 2022 at 1:13 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Wed, Jul 13, 2022 at 12:22:06PM +0530, vignesh C wrote:\n> > > > Most of the code is common between GetSubscriptionRelations and\n> > > > GetSubscriptionNotReadyRelations. Added a parameter to\n> > > > GetSubscriptionRelations which could provide the same functionality as\n> > > > the existing GetSubscriptionRelations and\n> > > > GetSubscriptionNotReadyRelations. Attached patch has the changes for\n> > > > the same. Thoughts?\n> > >\n> > > Right. Using all_rels to mean that we'd filter relations that are not\n> > > ready is a bit confusing, though. Perhaps this could use a bitmask as\n> > > argument.\n> >\n> > The attached v2 patch has the modified version which includes the\n> > changes to make the argument as bitmask.\n> >\n>\n> By using a bitmask I think there is an implication that the flags can\n> be combined...\n>\n> Perhaps it is not a problem today, but later you may want more flags. e.g.\n> #define SUBSCRIPTION_REL_STATE_READY 0x02 /* READY relations */\n>\n> then the bitmask idea falls apart because IIUC you have no intentions\n> to permit things like:\n> (SUBSCRIPTION_REL_STATE_NOT_READY | SUBSCRIPTION_REL_STATE_READY)\n>\n> IMO using an enum might be a better choice for that parameter.\n\nChanged it to enum so that it can be extended to support other\nsubscription relations like ready state subscription relations later\neasily. The attached v3 patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Thu, 14 Jul 2022 22:02:42 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor to make use of a common function for\n GetSubscriptionRelations\n and GetSubscriptionNotReadyRelations."
},
{
"msg_contents": "On Thu, Jul 14, 2022 at 4:34 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Jul 13, 2022 at 7:55 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Wed, Jul 13, 2022 at 1:13 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Wed, Jul 13, 2022 at 12:22:06PM +0530, vignesh C wrote:\n> > > > Most of the code is common between GetSubscriptionRelations and\n> > > > GetSubscriptionNotReadyRelations. Added a parameter to\n> > > > GetSubscriptionRelations which could provide the same functionality as\n> > > > the existing GetSubscriptionRelations and\n> > > > GetSubscriptionNotReadyRelations. Attached patch has the changes for\n> > > > the same. Thoughts?\n> > >\n> > > Right. Using all_rels to mean that we'd filter relations that are not\n> > > ready is a bit confusing, though. Perhaps this could use a bitmask as\n> > > argument.\n> >\n> > The attached v2 patch has the modified version which includes the\n> > changes to make the argument as bitmask.\n> >\n>\n> By using a bitmask I think there is an implication that the flags can\n> be combined...\n>\n> Perhaps it is not a problem today, but later you may want more flags. e.g.\n> #define SUBSCRIPTION_REL_STATE_READY 0x02 /* READY relations */\n>\n> then the bitmask idea falls apart because IIUC you have no intentions\n> to permit things like:\n> (SUBSCRIPTION_REL_STATE_NOT_READY | SUBSCRIPTION_REL_STATE_READY)\n>\n\nI think this will be an invalid combination if caller ever used it.\nHowever, one might need to use a combination like\n(SUBSCRIPTION_REL_STATE_READY | SUBSCRIPTION_REL_STATE_DONE). For such\ncases, I feel the bitmask idea will be better.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 20 Jul 2022 14:32:44 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor to make use of a common function for\n GetSubscriptionRelations\n and GetSubscriptionNotReadyRelations."
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 02:32:44PM +0530, Amit Kapila wrote:\n> On Thu, Jul 14, 2022 at 4:34 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>> By using a bitmask I think there is an implication that the flags can\n>> be combined...\n>>\n>> Perhaps it is not a problem today, but later you may want more flags. e.g.\n>> #define SUBSCRIPTION_REL_STATE_READY 0x02 /* READY relations */\n>>\n>> then the bitmask idea falls apart because IIUC you have no intentions\n>> to permit things like:\n>> (SUBSCRIPTION_REL_STATE_NOT_READY | SUBSCRIPTION_REL_STATE_READY)\n> \n> I think this will be an invalid combination if caller ever used it.\n> However, one might need to use a combination like\n> (SUBSCRIPTION_REL_STATE_READY | SUBSCRIPTION_REL_STATE_DONE). For such\n> cases, I feel the bitmask idea will be better.\n\nIt feels unnatural to me to have a flag saying \"not-ready\" and one\nsaying \"ready\", while we could have a flag saying \"ready\" that can be\ncombined with a second flag to decide if the contents of srsubstate\nshould be matched or *not* matched with the states expected by the\ncaller. This could be extended to more state values, for example.\n\nI am not sure if we actually need this much as I have no idea if\nfuture features would use it, so please take my suggestion lightly :)\n--\nMichael",
"msg_date": "Thu, 21 Jul 2022 10:40:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Refactor to make use of a common function for\n GetSubscriptionRelations and GetSubscriptionNotReadyRelations."
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 7:10 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jul 20, 2022 at 02:32:44PM +0530, Amit Kapila wrote:\n> > On Thu, Jul 14, 2022 at 4:34 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >> By using a bitmask I think there is an implication that the flags can\n> >> be combined...\n> >>\n> >> Perhaps it is not a problem today, but later you may want more flags. e.g.\n> >> #define SUBSCRIPTION_REL_STATE_READY 0x02 /* READY relations */\n> >>\n> >> then the bitmask idea falls apart because IIUC you have no intentions\n> >> to permit things like:\n> >> (SUBSCRIPTION_REL_STATE_NOT_READY | SUBSCRIPTION_REL_STATE_READY)\n> >\n> > I think this will be an invalid combination if caller ever used it.\n> > However, one might need to use a combination like\n> > (SUBSCRIPTION_REL_STATE_READY | SUBSCRIPTION_REL_STATE_DONE). For such\n> > cases, I feel the bitmask idea will be better.\n>\n> It feels unnatural to me to have a flag saying \"not-ready\" and one\n> saying \"ready\", while we could have a flag saying \"ready\" that can be\n> combined with a second flag to decide if the contents of srsubstate\n> should be matched or *not* matched with the states expected by the\n> caller. This could be extended to more state values, for example.\n>\n> I am not sure if we actually need this much as I have no idea if\n> future features would use it, so please take my suggestion lightly :)\n>\n\nYeah, it is not very clear to me either. I think this won't be\ndifficult to change one or another way depending on future needs. At\nthis stage, it appeared to me that bitmask is a better way to\nrepresent this information but if you and other feels using enum is a\nbetter idea then I am fine with that as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 21 Jul 2022 09:54:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor to make use of a common function for\n GetSubscriptionRelations\n and GetSubscriptionNotReadyRelations."
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 09:54:05AM +0530, Amit Kapila wrote:\n> Yeah, it is not very clear to me either. I think this won't be\n> difficult to change one or another way depending on future needs. At\n> this stage, it appeared to me that bitmask is a better way to\n> represent this information but if you and other feels using enum is a\n> better idea then I am fine with that as well.\n\nPlease don't get me wrong :)\n\nI favor a bitmask over an enum here, as you do, with a clean\nlayer for those flags.\n--\nMichael",
"msg_date": "Thu, 21 Jul 2022 13:33:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Refactor to make use of a common function for\n GetSubscriptionRelations and GetSubscriptionNotReadyRelations."
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 10:03 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Jul 21, 2022 at 09:54:05AM +0530, Amit Kapila wrote:\n> > Yeah, it is not very clear to me either. I think this won't be\n> > difficult to change one or another way depending on future needs. At\n> > this stage, it appeared to me that bitmask is a better way to\n> > represent this information but if you and other feels using enum is a\n> > better idea then I am fine with that as well.\n>\n> Please don't get me wrong :)\n>\n> I favor a bitmask over an enum here, as you do, with a clean\n> layer for those flags.\n>\n\nOkay, let's see what Peter Smith has to say about this as he was in\nfavor of using enum here?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 21 Jul 2022 15:40:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor to make use of a common function for\n GetSubscriptionRelations\n and GetSubscriptionNotReadyRelations."
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 10:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jul 21, 2022 at 10:03 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Thu, Jul 21, 2022 at 09:54:05AM +0530, Amit Kapila wrote:\n> > > Yeah, it is not very clear to me either. I think this won't be\n> > > difficult to change one or another way depending on future needs. At\n> > > this stage, it appeared to me that bitmask is a better way to\n> > > represent this information but if you and other feels using enum is a\n> > > better idea then I am fine with that as well.\n> >\n> > Please don't get me wrong :)\n> >\n> > I favor a bitmask over an enum here, as you do, with a clean\n> > layer for those flags.\n> >\n>\n> Okay, let's see what Peter Smith has to say about this as he was in\n> favor of using enum here?\n>\n\nI was in favour of enum mostly because I thought the bitmask of an\nearlier patch was mis-used; IMO each bit should only be for\nrepresenting something as \"on/set\". So a bit for\nSUBSCRIPTION_REL_STATE_READY makes sense, but a bit for\nSUBSCRIPTION_REL_STATE_NOT_READY seemed strange/backwards to me. YMMV.\n\nSo using a bitmask is fine, except I thought it should be implemented\nso that one of the bits is for a \"NOT\" modifier (IIUC this is kind of\nsimilar to what Michael [1] suggested above?). So \"Not READY\" would be\n(SUBSCRIPTION_REL_STATE_MOD_NOT | SUBSCRIPTION_REL_STATE_READY)\n\nAlso, it may be better to add the bit constants for every one of the\ncurrent states, even if you are not needing to use all of them just\nyet. In fact, I thought this patch probably can implement the fully\ncapable common function (i.e. capable of multiple keys etc) right now,\nso there will be no need to revisit it again in the future.\n\n------\n[1] https://www.postgresql.org/message-id/Ytiuj4hLykTvBF46%40paquier.xyz\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 22 Jul 2022 10:16:57 +1200",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor to make use of a common function for\n GetSubscriptionRelations\n and GetSubscriptionNotReadyRelations."
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 3:47 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> I was in favour of enum mostly because I thought the bitmask of an\n> earlier patch was mis-used; IMO each bit should only be for\n> representing something as \"on/set\". So a bit for\n> SUBSCRIPTION_REL_STATE_READY makes sense, but a bit for\n> SUBSCRIPTION_REL_STATE_NOT_READY seemed strange/backwards to me. YMMV.\n>\n> So using a bitmask is fine, except I thought it should be implemented\n> so that one of the bits is for a \"NOT\" modifier (IIUC this is kind of\n> similar to what Michael [1] suggested above?). So \"Not READY\" would be\n> (SUBSCRIPTION_REL_STATE_MOD_NOT | SUBSCRIPTION_REL_STATE_READY)\n>\n\nHmm, I think that sounds more complicated than what I expected. I\nsuggest let's go with a simple idea of using a boolean not_ready which\nwill decide whether to use the additional key to search. I feel we can\nextend it by using a bitmask or enum when we have a clear need for\nmore states.\n\n> Also, it may be better to add the bit constants for every one of the\n> current states, even if you are not needing to use all of them just\n> yet. In fact, I thought this patch probably can implement the fully\n> capable common function (i.e. capable of multiple keys etc) right now,\n> so there will be no need to revisit it again in the future.\n>\n\nI don't know whether we need to go that far. Say for a year or so if\nwe don't have such a use case arising which appears to be quite likely\nthen one can question the need for additional defines.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 22 Jul 2022 11:11:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor to make use of a common function for\n GetSubscriptionRelations\n and GetSubscriptionNotReadyRelations."
},
{
"msg_contents": "At Fri, 22 Jul 2022 11:11:23 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> Hmm, I think that sounds more complicated than what I expected. I\n> suggest let's go with a simple idea of using a boolean not_ready which\n> will decide whether to use the additional key to search. I feel we can\n> extend it by using a bitmask or enum when we have a clear need for\n> more states.\n\nFWIW, I vote for (only_)not_ready after rading through the thread..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 22 Jul 2022 15:30:39 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor to make use of a common function for\n GetSubscriptionRelations and GetSubscriptionNotReadyRelations."
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 11:11 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 22, 2022 at 3:47 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > I was in favour of enum mostly because I thought the bitmask of an\n> > earlier patch was mis-used; IMO each bit should only be for\n> > representing something as \"on/set\". So a bit for\n> > SUBSCRIPTION_REL_STATE_READY makes sense, but a bit for\n> > SUBSCRIPTION_REL_STATE_NOT_READY seemed strange/backwards to me. YMMV.\n> >\n> > So using a bitmask is fine, except I thought it should be implemented\n> > so that one of the bits is for a \"NOT\" modifier (IIUC this is kind of\n> > similar to what Michael [1] suggested above?). So \"Not READY\" would be\n> > (SUBSCRIPTION_REL_STATE_MOD_NOT | SUBSCRIPTION_REL_STATE_READY)\n> >\n>\n> Hmm, I think that sounds more complicated than what I expected. I\n> suggest let's go with a simple idea of using a boolean not_ready which\n> will decide whether to use the additional key to search. I feel we can\n> extend it by using a bitmask or enum when we have a clear need for\n> more states.\n\nThanks for the comments, i have modified it by changing it to a\nboolean parameter. The attached v4 patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Sun, 24 Jul 2022 21:52:16 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor to make use of a common function for\n GetSubscriptionRelations\n and GetSubscriptionNotReadyRelations."
},
{
"msg_contents": "On Sun, Jul 24, 2022 at 09:52:16PM +0530, vignesh C wrote:\n> Thanks for the comments, i have modified it by changing it to a\n> boolean parameter. The attached v4 patch has the changes for the same.\n\nOkay, thanks for the patch. This looks good to me, so let's do as\nAmit suggests. I'll apply that if there are no objections.\n--\nMichael",
"msg_date": "Mon, 25 Jul 2022 10:08:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Refactor to make use of a common function for\n GetSubscriptionRelations and GetSubscriptionNotReadyRelations."
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 11:08 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Jul 24, 2022 at 09:52:16PM +0530, vignesh C wrote:\n> > Thanks for the comments, i have modified it by changing it to a\n> > boolean parameter. The attached v4 patch has the changes for the same.\n>\n> Okay, thanks for the patch. This looks good to me, so let's do as\n> Amit suggests. I'll apply that if there are no objections.\n> --\n\nOK. I have no objections to just passing a boolean, but here are a\ncouple of other small review comments for the v4-0001 patch:\n\n======\n\n1. src/backend/catalog/pg_subscription.c\n\n@@ -533,65 +533,14 @@ HasSubscriptionRelations(Oid subid)\n }\n\n /*\n- * Get all relations for subscription.\n+ * Get the relations for subscription.\n *\n- * Returned list is palloc'ed in current memory context.\n+ * If only_not_ready is false, return all the relations for subscription. If\n+ * true, return all the relations for subscription that are not in a ready\n+ * state. Returned list is palloc'ed in current memory context.\n */\n\nThe function comment was describing the new boolean parameter in a\nkind of backwards way. It seems more natural to emphasise what true\nmeans.\n\n\nSUGGESTION\nGet the relations for the subscription.\n\nIf only_not_ready is true, return only the relations that are not in a\nready state, otherwise return all the subscription relations. The\nreturned list is palloc'ed in the current memory context.\n\n====\n\n2. <General - calling code>\n\nPerhaps this suggestion is overkill, but given that the param is not\ngoing to be a bitmask or enum anymore, IMO it means the calls are no\nlonger very self-explanatory.The calling code will be more readable if\nthe patch introduced some descriptive wrapper functions. e.g.\n\n\nList *\nGetSubscriptionAllRelations(Oid subid)\n{\n return GetSubscriptionRelations(subid, false);\n}\n\nList *\nGetSubscriptionNotReadyRelations(Oid subid)\n{\n return GetSubscriptionRelations(subid, true);\n}\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 25 Jul 2022 13:03:44 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor to make use of a common function for\n GetSubscriptionRelations\n and GetSubscriptionNotReadyRelations."
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 8:34 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Jul 25, 2022 at 11:08 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Sun, Jul 24, 2022 at 09:52:16PM +0530, vignesh C wrote:\n> > > Thanks for the comments, i have modified it by changing it to a\n> > > boolean parameter. The attached v4 patch has the changes for the same.\n> >\n> > Okay, thanks for the patch. This looks good to me, so let's do as\n> > Amit suggests. I'll apply that if there are no objections.\n> > --\n>\n> OK. I have no objections to just passing a boolean, but here are a\n> couple of other small review comments for the v4-0001 patch:\n>\n> ======\n>\n> 1. src/backend/catalog/pg_subscription.c\n>\n> @@ -533,65 +533,14 @@ HasSubscriptionRelations(Oid subid)\n> }\n>\n> /*\n> - * Get all relations for subscription.\n> + * Get the relations for subscription.\n> *\n> - * Returned list is palloc'ed in current memory context.\n> + * If only_not_ready is false, return all the relations for subscription. If\n> + * true, return all the relations for subscription that are not in a ready\n> + * state. Returned list is palloc'ed in current memory context.\n> */\n>\n> The function comment was describing the new boolean parameter in a\n> kind of backwards way. It seems more natural to emphasise what true\n> means.\n>\n>\n> SUGGESTION\n> Get the relations for the subscription.\n>\n> If only_not_ready is true, return only the relations that are not in a\n> ready state, otherwise return all the subscription relations. The\n> returned list is palloc'ed in the current memory context.\n>\n\nThis suggestion sounds good. Also, I don't much like \"only\" in the\nparameter name. I think the explanation makes it clear.\n\n> ====\n>\n> 2. <General - calling code>\n>\n> Perhaps this suggestion is overkill, but given that the param is not\n> going to be a bitmask or enum anymore, IMO it means the calls are no\n> longer very self-explanatory.\n>\n\nI am not sure how much this will be helpful. This sounds like overkill.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 25 Jul 2022 10:22:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor to make use of a common function for\n GetSubscriptionRelations\n and GetSubscriptionNotReadyRelations."
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 8:34 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Jul 25, 2022 at 11:08 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Sun, Jul 24, 2022 at 09:52:16PM +0530, vignesh C wrote:\n> > > Thanks for the comments, i have modified it by changing it to a\n> > > boolean parameter. The attached v4 patch has the changes for the same.\n> >\n> > Okay, thanks for the patch. This looks good to me, so let's do as\n> > Amit suggests. I'll apply that if there are no objections.\n> > --\n>\n> OK. I have no objections to just passing a boolean, but here are a\n> couple of other small review comments for the v4-0001 patch:\n>\n> ======\n>\n> 1. src/backend/catalog/pg_subscription.c\n>\n> @@ -533,65 +533,14 @@ HasSubscriptionRelations(Oid subid)\n> }\n>\n> /*\n> - * Get all relations for subscription.\n> + * Get the relations for subscription.\n> *\n> - * Returned list is palloc'ed in current memory context.\n> + * If only_not_ready is false, return all the relations for subscription. If\n> + * true, return all the relations for subscription that are not in a ready\n> + * state. Returned list is palloc'ed in current memory context.\n> */\n>\n> The function comment was describing the new boolean parameter in a\n> kind of backwards way. It seems more natural to emphasise what true\n> means.\n>\n>\n> SUGGESTION\n> Get the relations for the subscription.\n>\n> If only_not_ready is true, return only the relations that are not in a\n> ready state, otherwise return all the subscription relations. The\n> returned list is palloc'ed in the current memory context.\n\nModified\n\n> ====\n>\n> 2. <General - calling code>\n>\n> Perhaps this suggestion is overkill, but given that the param is not\n> going to be a bitmask or enum anymore, IMO it means the calls are no\n> longer very self-explanatory.The calling code will be more readable if\n> the patch introduced some descriptive wrapper functions. e.g.\n>\n>\n> List *\n> GetSubscriptionAllRelations(Oid subid)\n> {\n> return GetSubscriptionRelations(subid, false);\n> }\n>\n> List *\n> GetSubscriptionNotReadyRelations(Oid subid)\n> {\n> return GetSubscriptionRelations(subid, true);\n> }\n\nI feel this would be an overkill, I did not make any changes for this.\n\nThanks for the comments, the attached v5 patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Wed, 27 Jul 2022 08:47:46 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor to make use of a common function for\n GetSubscriptionRelations\n and GetSubscriptionNotReadyRelations."
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 10:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jul 25, 2022 at 8:34 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Mon, Jul 25, 2022 at 11:08 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Sun, Jul 24, 2022 at 09:52:16PM +0530, vignesh C wrote:\n> > > > Thanks for the comments, i have modified it by changing it to a\n> > > > boolean parameter. The attached v4 patch has the changes for the same.\n> > >\n> > > Okay, thanks for the patch. This looks good to me, so let's do as\n> > > Amit suggests. I'll apply that if there are no objections.\n> > > --\n> >\n> > OK. I have no objections to just passing a boolean, but here are a\n> > couple of other small review comments for the v4-0001 patch:\n> >\n> > ======\n> >\n> > 1. src/backend/catalog/pg_subscription.c\n> >\n> > @@ -533,65 +533,14 @@ HasSubscriptionRelations(Oid subid)\n> > }\n> >\n> > /*\n> > - * Get all relations for subscription.\n> > + * Get the relations for subscription.\n> > *\n> > - * Returned list is palloc'ed in current memory context.\n> > + * If only_not_ready is false, return all the relations for subscription. If\n> > + * true, return all the relations for subscription that are not in a ready\n> > + * state. Returned list is palloc'ed in current memory context.\n> > */\n> >\n> > The function comment was describing the new boolean parameter in a\n> > kind of backwards way. It seems more natural to emphasise what true\n> > means.\n> >\n> >\n> > SUGGESTION\n> > Get the relations for the subscription.\n> >\n> > If only_not_ready is true, return only the relations that are not in a\n> > ready state, otherwise return all the subscription relations. The\n> > returned list is palloc'ed in the current memory context.\n> >\n>\n> This suggestion sounds good. Also, I don't much like \"only\" in the\n> parameter name. I think the explanation makes it clear.\n\nI have changed the parameter name to not_ready. 
The v5 patch attached\nat [1] has the changes for the same.\n[1] - https://www.postgresql.org/message-id/CALDaNm2sQD-bwMKavLyiogMBBrg3fx5PTaV5RyV8UiczR9K_tw%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 27 Jul 2022 08:49:16 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor to make use of a common function for\n GetSubscriptionRelations\n and GetSubscriptionNotReadyRelations."
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 08:47:46AM +0530, vignesh C wrote:\n> I feel this would be an overkill, I did not make any changes for this.\n\nAgreed. Using an extra layer of wrappers for that seems a bit too\nmuch, so I have applied your v5 with a slight tweak to the comment.\n--\nMichael",
"msg_date": "Wed, 27 Jul 2022 19:51:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Refactor to make use of a common function for\n GetSubscriptionRelations and GetSubscriptionNotReadyRelations."
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 4:22 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jul 27, 2022 at 08:47:46AM +0530, vignesh C wrote:\n> > I feel this would be an overkill, I did not make any changes for this.\n>\n> Agreed. Using an extra layer of wrappers for that seems a bit too\n> much, so I have applied your v5 with a slight tweak to the comment.\n\nThanks for pushing this patch.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 27 Jul 2022 17:45:06 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor to make use of a common function for\n GetSubscriptionRelations\n and GetSubscriptionNotReadyRelations."
}
] |
[
{
"msg_contents": "I'm not sure if it fits -hackers, but seems better than -translators.\n\nI find it annoying that make update-po stops at pg_upgrade on master.\nThe cause is that a file is renamed from relfilenode.c to\nrelfilenumber.c so just fixing the name works. (attached first).\n\n\nOn the other hand, basically, every nls.mk contain all *.c files in the\ntool's directory plus some in other directories with some exceptions:\n\na) nls.mk of pg_dump and psql contain some *.h files but they don't\n contain a translatable string.\n\nb) nls.mk of ecpglib is missing some files that contain translatable\n strings. (adding theses files doesn't change the result of make\n update-po, but I'll send another mail for this issue.)\n\nc) nls.mk of pg_waldump, psql, libpq and preproc excludes some *.c\n files which don't contain a translatable string.\n\nAt least for (a) above, removing them from nls.mk is rather good. For\n(b), adding them is also good. For (c), I think few extra files\ndoesn't make difference.\n\nI wonder if we can use $(wildcard *.c) instead of explicitly\nenumerating every *.c file in the directory then maintaining the\nlist. Since backend does that way, I think we can do that the same way\nalso for the tools. Attached second does that except for tools that\nhave only one *.c. The patch doesn't make a difference in the result\nof make update-po.\n\nEither will do for me but at least I'd like (a) applied.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 13 Jul 2022 16:08:53 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "make update-po@master stops at pg_upgrade"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> I find it annoying that make update-po stops at pg_upgrade on master.\n> The cause is that a file is renamed from relfilenode.c to\n> relfilenumber.c so just fixing the name works. (attached first).\n\nOoops.\n\n> I wonder if we can use $(wildcard *.c) instead of explicitly\n> enumerating every *.c file in the directory then maintaining the\n> list.\n\n+1. We've repeatedly forgotten to update the nls.mk files when\nadding/removing files; they're just not on most hackers' radar.\nAnything we can do to make that more automatic seems like a win.\n\n> Since backend does that way, I think we can do that the same way\n> also for the tools. Attached second does that except for tools that\n> have only one *.c. The patch doesn't make a difference in the result\n> of make update-po.\n\nI wonder if there's anything we can do about the manual cross-references\nto src/common. We could wildcard that, but then every FE program would\nhave to contain translations for all messages in src/common/, even for\nmodules it doesn't use.\n\nStill, wildcarding the local *.c references seems like a clear step\nforward. I'll go push that part.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Jul 2022 12:07:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make update-po@master stops at pg_upgrade"
},
{
"msg_contents": "I wrote:\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n>> Since backend does that way, I think we can do that the same way\n>> also for the tools. Attached second does that except for tools that\n>> have only one *.c. The patch doesn't make a difference in the result\n>> of make update-po.\n\n> Still, wildcarding the local *.c references seems like a clear step\n> forward. I'll go push that part.\n\nI had to recreate the patch almost from scratch, because 88dad06b4\ntouched adjacent lines in most of these files, scaring patch(1)\naway from applying the changes. That being the case, I decided\nto use $(wildcard *.c) everywhere, including the places where there's\ncurrently just one *.c file. It seems to work for me, but please check.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Jul 2022 13:00:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make update-po@master stops at pg_upgrade"
},
{
"msg_contents": "On 2022-Jul-13, Tom Lane wrote:\n\n> I had to recreate the patch almost from scratch, because 88dad06b4\n> touched adjacent lines in most of these files, scaring patch(1)\n> away from applying the changes. That being the case, I decided\n> to use $(wildcard *.c) everywhere, including the places where there's\n> currently just one *.c file. It seems to work for me, but please check.\n\nHmm, I got this failure:\n\nmake[4]: se entra en el directorio '/home/alvherre/Code/pgsql-build/master/src/interfaces/ecpg/ecpglib'\n/usr/bin/xgettext -ctranslator --copyright-holder='PostgreSQL Global Development Group' --msgid-bugs-address=pgsql-bugs@lists.postgresql.org --no-wrap --sort-by-file --package-name='ecpglib (PostgreSQL)' --package-version='16' -D /pgsql/source/master/src/interfaces/ecpg/ecpglib -D . -n -kecpg_gettext -k_ --flag=ecpg_gettext:1:pass-c-format --flag=_:1:pass-c-format \n/usr/bin/xgettext: no se especificó el fichero de entrada\nPruebe '/usr/bin/xgettext --help' para más información.\nmake[4]: *** [/pgsql/source/master/src/nls-global.mk:108: po/ecpglib.pot] Error 1\n\nThe error from xgettext, of course, is\n/usr/bin/xgettext: no input file given\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Puedes vivir sólo una vez, pero si lo haces bien, una vez es suficiente\"\n\n\n",
"msg_date": "Wed, 13 Jul 2022 19:25:05 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: make update-po@master stops at pg_upgrade"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Hmm, I got this failure:\n> /usr/bin/xgettext: no se especificó el fichero de entrada\n\nHmm ... are you doing this in a VPATH setup? Does it help\nif you make the entry be\n\nGETTEXT_FILES = $(wildcard $(srcdir)/*.c)\n\nI'd supposed we didn't need to be careful about that, because\nI saw uses of $(wildcard) without it ... but I now see other uses\nwith.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Jul 2022 13:41:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make update-po@master stops at pg_upgrade"
},
{
"msg_contents": "On 13.07.22 18:07, Tom Lane wrote:\n> Still, wildcarding the local *.c references seems like a clear step\n> forward. I'll go push that part.\n\nIn some cases, the listed files are build output from another rule, for \nexample sql_help.c. By using a wildcard, you just take whatever files \nhappen to be there, not observing proper make dependencies. \n(Conversely, you could also include files that are not part of the \nbuild, maybe leftover from something aborted.) So I'm not sure this is \nsuch a clear win. In any case, someone should check that this creates \nidentical output before and after.\n\n\n",
"msg_date": "Wed, 13 Jul 2022 20:09:15 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: make update-po@master stops at pg_upgrade"
},
{
"msg_contents": "On 13.07.22 20:09, Peter Eisentraut wrote:\n> In any case, someone should check that this creates identical output \n> before and after.\n\nA quick check shows differences in\n\npg_rewind.pot\npsql.pot\necpg.pot\nlibpq.pot\nplpgsql.pot\n\n\n",
"msg_date": "Wed, 13 Jul 2022 20:19:40 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: make update-po@master stops at pg_upgrade"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> In some cases, the listed files are build output from another rule, for \n> example sql_help.c. By using a wildcard, you just take whatever files \n> happen to be there, not observing proper make dependencies. \n\nHmm. We could list built files explicitly, perhaps, and still be\na good step ahead on the maintenance burden. Does xgettext get\nupset if the same input file is mentioned twice, ie would we have\nto filter sql_help.c out of the wildcard result?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Jul 2022 14:19:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make update-po@master stops at pg_upgrade"
},
{
"msg_contents": "On 13.07.22 19:41, Tom Lane wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> Hmm, I got this failure:\n>> /usr/bin/xgettext: no se especificó el fichero de entrada\n> \n> Hmm ... are you doing this in a VPATH setup? Does it help\n> if you make the entry be\n> \n> GETTEXT_FILES = $(wildcard $(srcdir)/*.c)\n> \n> I'd supposed we didn't need to be careful about that, because\n> I saw uses of $(wildcard) without it ... but I now see other uses\n> with.\n\nNote that we have this in nls-global.mk which tries to avoid having the\nvpath details sneak into the output:\n\npo/$(CATALOG_NAME).pot: $(GETTEXT_FILES) $(MAKEFILE_LIST)\n# Change to srcdir explicitly, don't rely on $^. That way we get\n# consistent #: file references in the po files.\n...\n $(XGETTEXT) -D $(srcdir) -D . -n $(addprefix -k, $(GETTEXT_TRIGGERS)) $(addprefix --flag=, $(GETTEXT_FLAGS)) $(GETTEXT_FILES)\n\n\n",
"msg_date": "Wed, 13 Jul 2022 20:24:47 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: make update-po@master stops at pg_upgrade"
},
{
"msg_contents": "On 13.07.22 20:19, Tom Lane wrote:\n> Hmm. We could list built files explicitly, perhaps, and still be\n> a good step ahead on the maintenance burden. Does xgettext get\n> upset if the same input file is mentioned twice, ie would we have\n> to filter sql_help.c out of the wildcard result?\n\nIt seems it would be ok with that.\n\n\n",
"msg_date": "Wed, 13 Jul 2022 20:28:44 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: make update-po@master stops at pg_upgrade"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Note that we have this in nls-global.mk which tries to avoid having the\n> vpath details sneak into the output:\n\n> po/$(CATALOG_NAME).pot: $(GETTEXT_FILES) $(MAKEFILE_LIST)\n> # Change to srcdir explicitly, don't rely on $^. That way we get\n> # consistent #: file references in the po files.\n> ...\n> $(XGETTEXT) -D $(srcdir) -D . -n $(addprefix -k, $(GETTEXT_TRIGGERS)) $(addprefix --flag=, $(GETTEXT_FLAGS)) $(GETTEXT_FILES)\n\nHmm ... so how does that work with built files in a VPATH build today?\n\nAnyway, I'll go revert the patch for now, since it's clearly got\nat least a couple problems that need sorting.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Jul 2022 14:28:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make update-po@master stops at pg_upgrade"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 13.07.22 20:19, Tom Lane wrote:\n>> Hmm. We could list built files explicitly, perhaps, and still be\n>> a good step ahead on the maintenance burden. Does xgettext get\n>> upset if the same input file is mentioned twice, ie would we have\n>> to filter sql_help.c out of the wildcard result?\n\n> It seems it would be ok with that.\n\nActually, we can get rid of those easily enough anyway with $(sort).\nHere's a draft that might solve these problems. The idea is to use\n$(wildcard) for files in the srcdir, and manually enumerate only\nbuilt files.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 13 Jul 2022 15:35:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make update-po@master stops at pg_upgrade"
},
{
"msg_contents": "On 2022-Jul-13, Tom Lane wrote:\n\n> Actually, we can get rid of those easily enough anyway with $(sort).\n> Here's a draft that might solve these problems. The idea is to use\n> $(wildcard) for files in the srcdir, and manually enumerate only\n> built files.\n\nI checked the files in src/bin/scripts and they look OK; there are no\nmissing messages now. I also checked the es.po.new files with and\nwithout patch; they look good, nothing is lost.\n\nFiles that weren't previously being processed:\nsrc/interfaces/libpq/fe-print.c\nsrc/interfaces/ecpg/preproc/parser.c\n\nIncidentally, this patch is pretty similar to what Kyotaro-san sent when\nopening the thread, with the addition of the $(notdir) call.\n\nIn short, +1 to this patch.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"El sentido de las cosas no viene de las cosas, sino de\nlas inteligencias que las aplican a sus problemas diarios\nen busca del progreso.\" (Ernesto Hernández-Novich)\n\n\n",
"msg_date": "Thu, 28 Jul 2022 17:23:30 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: make update-po@master stops at pg_upgrade"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> In short, +1 to this patch.\n\nThanks for testing it. I think the only remaining concern is\nPeter's objection that $(wildcard) might pick up random junk files\nthat end in \".c\". That's true, but the backend's nls.mk also\npicks up everything matching \"*.c\" (over the whole backend tree,\nnot just one directory!), and I don't recall people complaining\nabout that. So I think the reduction in maintenance burden\njustifies the risk. What do others think?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 28 Jul 2022 11:39:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make update-po@master stops at pg_upgrade"
},
{
"msg_contents": "On 2022-Jul-28, Tom Lane wrote:\n\n> Thanks for testing it. I think the only remaining concern is\n> Peter's objection that $(wildcard) might pick up random junk files\n> that end in \".c\". That's true, but the backend's nls.mk also\n> picks up everything matching \"*.c\" (over the whole backend tree,\n> not just one directory!),\n\nNow that I did the translation chores again after a few years I am\nreminded of a point about this argument: in reality, few people ever\nrun this recipe manually (I know I never do), because it's easier to\ngrab the already-merged files from the NLS website. It all happens\nmechanically and there's nobody leaving random junnk files.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"El que vive para el futuro es un iluso, y el que vive para el pasado,\nun imbécil\" (Luis Adler, \"Los tripulantes de la noche\")\n\n\n",
"msg_date": "Mon, 8 Aug 2022 18:08:09 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: make update-po@master stops at pg_upgrade"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Jul-28, Tom Lane wrote:\n>> Thanks for testing it. I think the only remaining concern is\n>> Peter's objection that $(wildcard) might pick up random junk files\n>> that end in \".c\". That's true, but the backend's nls.mk also\n>> picks up everything matching \"*.c\" (over the whole backend tree,\n>> not just one directory!),\n\n> Now that I did the translation chores again after a few years I am\n> reminded of a point about this argument: in reality, few people ever\n> run this recipe manually (I know I never do), because it's easier to\n> grab the already-merged files from the NLS website. It all happens\n> mechanically and there's nobody leaving random junnk files.\n\nHmm, so where does the NLS website get its data?\n\nI'd be all for flushing the recipe altogether if no one uses it.\nHowever, the existence of this thread suggests otherwise.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Aug 2022 12:20:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make update-po@master stops at pg_upgrade"
},
{
"msg_contents": "On 2022-Aug-08, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n\n> > Now that I did the translation chores again after a few years I am\n> > reminded of a point about this argument: in reality, few people ever\n> > run this recipe manually (I know I never do), because it's easier to\n> > grab the already-merged files from the NLS website. It all happens\n> > mechanically and there's nobody leaving random junnk files.\n> \n> Hmm, so where does the NLS website get its data?\n\nWell, the NLS website does invoke the recipe. Just not manually.\n\n> I'd be all for flushing the recipe altogether if no one uses it.\n> However, the existence of this thread suggests otherwise.\n\nI just meant it's not normally run manually. But if you do run it\nmanually, and you translate a file that has a few extra messages because\nof the hypothetical junk source file, then you'll upload a catalog with\nthose extra messages; these extra messages will be dropped the next time\nyour file is merged through the NLS website. Maybe you'll do some extra\nwork (translating useless messages) but there'll be no harm.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 8 Aug 2022 19:46:02 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: make update-po@master stops at pg_upgrade"
}
] |
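The nls.mk thread above converges on three ingredients: `$(wildcard)` for ordinary sources (ending the manual file-list maintenance), explicit entries for built files like `sql_help.c` (which may not yet exist, and must stay proper make prerequisites), and `$(notdir)` so VPATH builds don't leak directory prefixes into the `#:` references, with `$(sort)` deduplicating. A hedged sketch of that shape — the exact committed rules may differ, and the trigger/flag lines are shown only as placeholders:

```makefile
# Sketch of the nls.mk shape discussed in the thread (illustrative, not
# the committed file).  Plain sources come from $(wildcard), so the list
# needs no manual upkeep; built files such as psql's sql_help.c are
# still named explicitly because they may be absent from $(srcdir) when
# update-po runs.  $(notdir) strips VPATH directory prefixes so the po
# files get consistent "#:" file references; $(sort) drops duplicates
# in case a built file also matched the wildcard.
CATALOG_NAME     = psql
GETTEXT_FILES    = $(sort sql_help.c \
                          $(notdir $(wildcard $(srcdir)/*.c)))
# Placeholder trigger/flag values; the real ones are program-specific.
GETTEXT_TRIGGERS = $(FRONTEND_COMMON_GETTEXT_TRIGGERS)
GETTEXT_FLAGS    = $(FRONTEND_COMMON_GETTEXT_FLAGS)
```

This also explains Alvaro's ecpglib failure: with a bare `$(wildcard *.c)` in a VPATH build the pattern matches nothing in the build directory, so xgettext is invoked with no input files — anchoring the wildcard at `$(srcdir)` avoids that.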
[
{
"msg_contents": "I happened to see the message below.\n\n> WARNING: new data directory should not be inside the old data directory, e.g. %s\n\nThe corresponding code is\n\n> ... the old data directory, e.g. %s\", old_cluster_pgdata);\n\nSo, \"e.g.\" (for example) in the message sounds like \"that is\", which I\nthink is \"i.e.\". It should be fixed if this is correct. I'm not sure\nwhether to keep using Latin-origin acronyms like this, but in the\nattached I used \"i.e.\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 13 Jul 2022 18:09:43 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "i.e. and e.g."
},
{
"msg_contents": "At Wed, 13 Jul 2022 18:09:43 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> I happened to see the message below.\n> \n> > WARNING: new data directory should not be inside the old data directory, e.g. %s\n> \n> The corresponding code is\n> \n> > ... the old data directory, e.g. %s\", old_cluster_pgdata);\n> \n> So, \"e.g.\" (for example) in the message sounds like \"that is\", which I\n> think is \"i.e.\". It should be fixed if this is correct. I'm not sure\n> whether to keep using Latin-origin acronyms like this, but in the\n> attached I used \"i.e.\".\n\nOops! There's another use of that word in the same context.\nAttached contains two fixes.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 13 Jul 2022 18:13:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: i.e. and e.g."
},
{
"msg_contents": "make sense.\n\n+1\n\nOn Wed, Jul 13, 2022 at 5:14 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 13 Jul 2022 18:09:43 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > I happened to see the message below.\n> >\n> > > WARNING: new data directory should not be inside the old data directory, e.g. %s\n> >\n> > The corresponding code is\n> >\n> > > ... the old data directory, e.g. %s\", old_cluster_pgdata);\n> >\n> > So, \"e.g.\" (for example) in the message sounds like \"that is\", which I\n> > think is \"i.e.\". It should be fixed if this is correct. I'm not sure\n> > whether to keep using Latin-origin acronyms like this, but in the\n> > attached I used \"i.e.\".\n>\n> Oops! There's another use of that word in the same context.\n> Attached contains two fixes.\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Wed, 13 Jul 2022 17:21:20 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: i.e. and e.g."
},
{
"msg_contents": "Hi Kyotaro,\n\n> Oops! There's another use of that word in the same context.\n> Attached contains two fixes.\n\nGood catch. I did a quick search for similar messages and apparently\nthere are no others to fix.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 13 Jul 2022 12:25:32 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: i.e. and e.g."
},
{
"msg_contents": "On Wed, Jul 13, 2022 at 4:13 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n>\n> At Wed, 13 Jul 2022 18:09:43 +0900 (JST), Kyotaro Horiguchi <\nhorikyota.ntt@gmail.com> wrote in\n> > I happened to see the message below.\n> >\n> > > WARNING: new data directory should not be inside the old data\ndirectory, e.g. %s\n> >\n> > The corresponding code is\n> >\n> > > ... the old data directory, e.g. %s\", old_cluster_pgdata);\n> >\n> > So, \"e.g.\" (for example) in the message sounds like \"that is\", which I\n> > think is \"i.e.\". It should be fixed if this is correct. I'm not sure\n> > whether to keep using Latin-origin acronyms like this, but in the\n> > attached I used \"i.e.\".\n\nI did my own quick scan and found one use of i.e. that doesn't really fit,\nin a sentence that has other grammatical issues:\n\n- Due to the differences how ECPG works compared to Informix's\nESQL/C (i.e., which steps\n+ Due to differences in how ECPG works compared to Informix's ESQL/C\n(namely, which steps\n are purely grammar transformations and which steps rely on the\nunderlying run-time library)\n there is no <literal>FREE cursor_name</literal> statement in ECPG.\nThis is because in ECPG,\n <literal>DECLARE CURSOR</literal> doesn't translate to a function\ncall into\n\nI've pushed that in addition to your changes, thanks!\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Jul 13, 2022 at 4:13 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:>> At Wed, 13 Jul 2022 18:09:43 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in> > I happened to see the message below.> >> > > WARNING: new data directory should not be inside the old data directory, e.g. %s> >> > The corresponding code is> >> > > ... the old data directory, e.g. %s\", old_cluster_pgdata);> >> > So, \"e.g.\" (for example) in the message sounds like \"that is\", which I> > think is \"i.e.\". It should be fixed if this is correct. 
I'm not sure> > whether to keep using Latin-origin acronyms like this, but in the> > attached I used \"i.e.\".I did my own quick scan and found one use of i.e. that doesn't really fit, in a sentence that has other grammatical issues:- Due to the differences how ECPG works compared to Informix's ESQL/C (i.e., which steps+ Due to differences in how ECPG works compared to Informix's ESQL/C (namely, which steps are purely grammar transformations and which steps rely on the underlying run-time library) there is no <literal>FREE cursor_name</literal> statement in ECPG. This is because in ECPG, <literal>DECLARE CURSOR</literal> doesn't translate to a function call intoI've pushed that in addition to your changes, thanks!--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 14 Jul 2022 09:40:25 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: i.e. and e.g."
},
{
"msg_contents": "At Thu, 14 Jul 2022 09:40:25 +0700, John Naylor <john.naylor@enterprisedb.com> wrote in \n> On Wed, Jul 13, 2022 at 4:13 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n> >\n> > At Wed, 13 Jul 2022 18:09:43 +0900 (JST), Kyotaro Horiguchi <\n> horikyota.ntt@gmail.com> wrote in\n> > > So, \"e.g.\" (for example) in the message sounds like \"that is\", which I\n> > > think is \"i.e.\". It should be fixed if this is correct. I'm not sure\n> > > whether to keep using Latin-origin acronyms like this, but in the\n> > > attached I used \"i.e.\".\n> \n> I did my own quick scan and found one use of i.e. that doesn't really fit,\n> in a sentence that has other grammatical issues:\n> \n> - Due to the differences how ECPG works compared to Informix's\n> ESQL/C (i.e., which steps\n> + Due to differences in how ECPG works compared to Informix's ESQL/C\n> (namely, which steps\n> are purely grammar transformations and which steps rely on the\n\nOh!\n\n> I've pushed that in addition to your changes, thanks!\n\nThanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 14 Jul 2022 15:38:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: i.e. and e.g."
},
{
"msg_contents": "At Thu, 14 Jul 2022 15:38:37 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 14 Jul 2022 09:40:25 +0700, John Naylor <john.naylor@enterprisedb.com> wrote in \n> > On Wed, Jul 13, 2022 at 4:13 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > wrote:\n> > >\n> > > At Wed, 13 Jul 2022 18:09:43 +0900 (JST), Kyotaro Horiguchi <\n> > horikyota.ntt@gmail.com> wrote in\n> > > > So, \"e.g.\" (for example) in the message sounds like \"that is\", which I\n> > > > think is \"i.e.\". It should be fixed if this is correct. I'm not sure\n> > > > whether to keep using Latin-origin acronyms like this, but in the\n> > > > attached I used \"i.e.\".\n> > \n> > I did my own quick scan and found one use of i.e. that doesn't really fit,\n> > in a sentence that has other grammatical issues:\n> > \n> > - Due to the differences how ECPG works compared to Informix's\n> > ESQL/C (i.e., which steps\n> > + Due to differences in how ECPG works compared to Informix's ESQL/C\n> > (namely, which steps\n> > are purely grammar transformations and which steps rely on the\n> \n> Oh!\n> \n> > I've pushed that in addition to your changes, thanks!\n> \n> Thanks!\n\nBy the way, I forgot about back-branches. Don't we need to fix the\nsame in back-branches?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 20 Jul 2022 17:40:50 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: i.e. and e.g."
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 3:40 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n> By the way, I forgot about back-branches. Don't we need to fix the\n> same in back-branches?\n\nThe intent of the messages is pretty clear to me, so I don't really see a\nneed to change back branches. It does make sense for v15, though, and I\njust forgot to consider that.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Jul 20, 2022 at 3:40 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:> By the way, I forgot about back-branches. Don't we need to fix the> same in back-branches?The intent of the messages is pretty clear to me, so I don't really see a need to change back branches. It does make sense for v15, though, and I just forgot to consider that.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 21 Jul 2022 10:20:30 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: i.e. and e.g."
},
{
"msg_contents": "At Thu, 21 Jul 2022 10:20:30 +0700, John Naylor <john.naylor@enterprisedb.com> wrote in \n> On Wed, Jul 20, 2022 at 3:40 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n> > By the way, I forgot about back-branches. Don't we need to fix the\n> > same in back-branches?\n> \n> The intent of the messages is pretty clear to me, so I don't really see a\n> need to change back branches. It does make sense for v15, though, and I\n> just forgot to consider that.\n\nOk, I'm fine with that. Thanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 21 Jul 2022 13:22:35 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: i.e. and e.g."
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 11:22 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n>\n> At Thu, 21 Jul 2022 10:20:30 +0700, John Naylor <\njohn.naylor@enterprisedb.com> wrote in\n> > On Wed, Jul 20, 2022 at 3:40 PM Kyotaro Horiguchi <\nhorikyota.ntt@gmail.com>\n> > wrote:\n> > > By the way, I forgot about back-branches. Don't we need to fix the\n> > > same in back-branches?\n> >\n> > The intent of the messages is pretty clear to me, so I don't really see\na\n> > need to change back branches. It does make sense for v15, though, and I\n> > just forgot to consider that.\n>\n> Ok, I'm fine with that. Thanks!\n\nThis is done.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 21 Jul 2022 12:30:04 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: i.e. and e.g."
},
{
"msg_contents": "At Thu, 21 Jul 2022 12:30:04 +0700, John Naylor <john.naylor@enterprisedb.com> wrote in \n> On Thu, Jul 21, 2022 at 11:22 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n> >\n> > At Thu, 21 Jul 2022 10:20:30 +0700, John Naylor <\n> john.naylor@enterprisedb.com> wrote in\n> > > need to change back branches. It does make sense for v15, though, and I\n> > > just forgot to consider that.\n> >\n> > Ok, I'm fine with that. Thanks!\n> \n> This is done.\n\nThanks!\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 21 Jul 2022 17:48:02 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: i.e. and e.g."
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI noticed that we didn't collect the ObjectAddress returned by\nATExec[Attach|Detach]Partition. I think collecting this information can make it\neasier for users to get the partition OID of the attached or detached table in\nthe event trigger. So how about collecting it like the attached patch ?\n\nBest regards,\nHou zhijie",
"msg_date": "Wed, 13 Jul 2022 09:57:35 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Collect ObjectAddress for ATTACH DETACH PARTITION to use in event\n trigger"
},
{
"msg_contents": "On Wednesday, July 13, 2022 5:58 PM Hou, Zhijie wrote:\n> Hi hackers,\n> \n> I noticed that we didn't collect the ObjectAddress returned by\n> ATExec[Attach|Detach]Partition. I think collecting this information can make it\n> easier for users to get the partition OID of the attached or detached table in\n> the event trigger. So how about collecting it like the attached patch ?\n\nAdded to next CF.\n\nBest regards,\nHou zhijie\n\n\n\n",
"msg_date": "Fri, 15 Jul 2022 02:26:16 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Collect ObjectAddress for ATTACH DETACH PARTITION to use in event\n trigger"
},
{
"msg_contents": "Hi,\n\n> > I noticed that we didn't collect the ObjectAddress returned by\n> > ATExec[Attach|Detach]Partition. I think collecting this information can make it\n> > easier for users to get the partition OID of the attached or detached table in\n> > the event trigger. So how about collecting it like the attached patch ?\n> \n> Added to next CF.\n\nSounds good. I grepped the ATExecXXX() functions called in ATExecCmd(),\nand I confirmed that all returned values have been collected except these.\n\nWhile checking the test code about EVENT TRIGGER,\nI found there were no tests related to partitions in it.\nHow about adding them?\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n",
"msg_date": "Fri, 15 Jul 2022 03:21:30 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Collect ObjectAddress for ATTACH DETACH PARTITION to use in event\n trigger"
},
{
"msg_contents": "On Fri, Jul 15, 2022 at 03:21:30AM +0000, kuroda.hayato@fujitsu.com wrote:\n> Sounds good. I grepped ATExecXXX() functions called in ATExecCmd(),\n> and I confirmed that all returned values have been collected except them.\n> \n> While checking test code test about EVENT TRIGGER,\n> I found there were no tests related with partitions in that.\n> How about adding them?\n\nAgreed. It would be good to track down what changes once those\nObjectAddresses are collected.\n--\nMichael",
"msg_date": "Fri, 15 Jul 2022 12:41:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Collect ObjectAddress for ATTACH DETACH PARTITION to use in\n event trigger"
},
{
"msg_contents": "On Friday, July 15, 2022 11:41 AM Michael Paquier <michael@paquier.xyz> wrote:\n\nHi,\n> \n> On Fri, Jul 15, 2022 at 03:21:30AM +0000, kuroda.hayato@fujitsu.com wrote:\n> > Sounds good. I grepped ATExecXXX() functions called in ATExecCmd(),\n> > and I confirmed that all returned values have been collected except them.\n> >\n> > While checking test code test about EVENT TRIGGER, I found there were\n> > no tests related with partitions in that.\n> > How about adding them?\n> \n> Agreed. It would be good to track down what changes once those\n> ObjectAddresses are collected.\n\nThanks for having a look. It was a bit difficult to add a test for this,\nbecause we currently don't have a user function which can return these\ncollected ObjectAddresses for ALTER TABLE. And it seems we don't have tests for\nalready-collected ObjectAddresses either :(\n\nThe collected ObjectAddresses are in\n\"currentEventTriggerState->currentCommand->d.alterTable.subcmds.address\", while\nthe public function pg_event_trigger_ddl_commands doesn't return this\ninformation. It can only be used in a user-defined event trigger function (C\ncode).\n\nIf we want to add some tests for both the already-existing and newly added\nObjectAddresses, we might need to add some test code in test_ddl_deparse.c.\nWhat do you think?\n\nBest regards,\nHou zhijie\n\n\n",
"msg_date": "Fri, 15 Jul 2022 06:08:54 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Collect ObjectAddress for ATTACH DETACH PARTITION to use in event\n trigger"
},
{
"msg_contents": "Dear Hou-san,\n\n> Thanks for having a look. It was a bit difficult to add a test for this.\n> Because we currently don't have a user function which can return these\n> collected ObjectAddresses for ALTER TABLE. And It seems we don't have tests for\n> already collected ObjectAddresses as well :(\n> The collected ObjectAddresses is in\n> \"currentEventTriggerState->currentCommand->d.alterTable.subcmds.address\" while\n> the public function pg_event_trigger_ddl_commands doesn't return these\n> information. It can only be used in user defined event trigger function (C\n> code).\n\nThanks for explaining. I did not know the reason why the test is not in event_trigger.sql.\n\n> If we want to add some tests for both already existed and newly added\n> ObjectAddresses, we might need to add some test codes in test_ddl_deparse.c.\n> What do you think ?\n\nI thought tests for ObjectAddresses should be added to test_ddl_deparse.c, but\nit might be bigger because there were many ATExecXXX() functions.\nI thought they could be added separately in another thread or patch.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Fri, 15 Jul 2022 09:53:24 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Collect ObjectAddress for ATTACH DETACH PARTITION to use in event\n trigger"
},
{
"msg_contents": "On Fri, Jul 15, 2022 at 11:39 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Friday, July 15, 2022 11:41 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Hi,\n> >\n> > On Fri, Jul 15, 2022 at 03:21:30AM +0000, kuroda.hayato@fujitsu.com wrote:\n> > > Sounds good. I grepped ATExecXXX() functions called in ATExecCmd(),\n> > > and I confirmed that all returned values have been collected except them.\n> > >\n> > > While checking test code test about EVENT TRIGGER, I found there were\n> > > no tests related with partitions in that.\n> > > How about adding them?\n> >\n> > Agreed. It would be good to track down what changes once those\n> > ObjectAddresses are collected.\n>\n> Thanks for having a look. It was a bit difficult to add a test for this.\n> Because we currently don't have a user function which can return these\n> collected ObjectAddresses for ALTER TABLE. And It seems we don't have tests for\n> already collected ObjectAddresses as well :(\n>\n> The collected ObjectAddresses is in\n> \"currentEventTriggerState->currentCommand->d.alterTable.subcmds.address\" while\n> the public function pg_event_trigger_ddl_commands doesn't return these\n> information. It can only be used in user defined event trigger function (C\n> code).\n>\n> If we want to add some tests for both already existed and newly added\n> ObjectAddresses, we might need to add some test codes in test_ddl_deparse.c.\n> What do you think ?\n>\n\nRight. But, I noticed that get_altertable_subcmdtypes() doesn't handle\nAT_AttachPartition or AT_DetachPartition. We can handle those and at\nleast have a test for those in test_ddl_deparse\\sql\\alter_table.sql. I\nknow it is not directly related to your patch but that way we will at\nleast have some tests for Attach/Detach partition and in the future\nwhen we extend it to test ObjectAddresses of subcommands that would be\nhandy. Feel free to write a separate patch for the same.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 20 Jul 2022 16:36:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Collect ObjectAddress for ATTACH DETACH PARTITION to use in event\n trigger"
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 04:36:13PM +0530, Amit Kapila wrote:\n> Right. But, I noticed that get_altertable_subcmdtypes() doesn't handle\n> AT_AttachPartition or AT_DetachPartition. We can handle those and at\n> least have a test for those in test_ddl_deparse\\sql\\slter_table.sql. I\n> know it is not directly related to your patch but that way we will at\n> least have some tests for Attach/Detach partition and in the future\n> when we extend it to test ObjectAddresses of subcommands that would be\n> handy. Feel free to write a separate patch for the same.\n\nYeah, that could be a separate patch. On top of that, what about\nreworking get_altertable_subcmdtypes() so as it returns one row for\neach CollectedCommand, as of (type text, address text)? We have\nalready getObjectDescription() to transform the ObjectAddress from the\ncollected command to a proper string.\n--\nMichael",
"msg_date": "Thu, 21 Jul 2022 15:23:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Collect ObjectAddress for ATTACH DETACH PARTITION to use in\n event trigger"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 11:53 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jul 20, 2022 at 04:36:13PM +0530, Amit Kapila wrote:\n> > Right. But, I noticed that get_altertable_subcmdtypes() doesn't handle\n> > AT_AttachPartition or AT_DetachPartition. We can handle those and at\n> > least have a test for those in test_ddl_deparse\\sql\\slter_table.sql. I\n> > know it is not directly related to your patch but that way we will at\n> > least have some tests for Attach/Detach partition and in the future\n> > when we extend it to test ObjectAddresses of subcommands that would be\n> > handy. Feel free to write a separate patch for the same.\n>\n> Yeah, that could be a separate patch. On top of that, what about\n> reworking get_altertable_subcmdtypes() so as it returns one row for\n> each CollectedCommand, as of (type text, address text)? We have\n> already getObjectDescription() to transform the ObjectAddress from the\n> collected command to a proper string.\n>\n\nYeah, that would be a good idea but I think instead of changing\nget_altertable_subcmdtypes(), can we have a new function say\nget_altertable_subcmdinfo() that returns additional information from\naddress. The other alternative could be that instead of returning the\naddress as a string, we can return some fields as a set of records\n(one row for each subcommand) as we do in\npg_event_trigger_ddl_commands().\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 22 Jul 2022 14:26:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Collect ObjectAddress for ATTACH DETACH PARTITION to use in event\n trigger"
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 02:26:02PM +0530, Amit Kapila wrote:\n> Yeah, that would be a good idea but I think instead of changing\n> get_altertable_subcmdtypes(), can we have a new function say\n> get_altertable_subcmdinfo() that returns additional information from\n> address. The other alternative could be that instead of returning the\n> address as a string, we can return some fields as a set of records\n> (one row for each subcommand) as we do in\n> pg_event_trigger_ddl_commands().\n\nChanging get_altertable_subcmdtypes() to return a set of rows made of\n(subcommand, object description) is what I actually meant upthread as\nit feels natural given a CollectedCommand in input, and as\npg_event_trigger_ddl_commands() only gives access to a set of\nCollectedCommands. This is also a test module so \nthere is no issue in changing the existing function definitions.\n\nBut your point would be to have a new function that takes in input a\nCollectedATSubcmd, returning back the object address or its\ndescription? How would you make sure that a subcommand maps to a\ncorrect object address?\n--\nMichael",
"msg_date": "Sat, 23 Jul 2022 17:44:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Collect ObjectAddress for ATTACH DETACH PARTITION to use in\n event trigger"
},
{
"msg_contents": "On Sat, Jul 23, 2022 at 05:44:28PM +0900, Michael Paquier wrote:\n> Changing get_altertable_subcmdtypes() to return a set of rows made of\n> (subcommand, object description) is what I actually meant upthread as\n> it feels natural given a CollectedCommand in input, and as\n> pg_event_trigger_ddl_commands() only gives access to a set of\n> CollectedCommands. This is also a test module so \n> there is no issue in changing the existing function definitions.\n> \n> But your point would be to have a new function that takes in input a\n> CollectedATSubcmd, returning back the object address or its\n> description? How would you make sure that a subcommand maps to a\n> correct object address?\n\nFWIW, I was thinking about something along the lines of 0002 on top of\nHou's patch.\n--\nMichael",
"msg_date": "Sat, 23 Jul 2022 19:58:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Collect ObjectAddress for ATTACH DETACH PARTITION to use in\n event trigger"
},
{
"msg_contents": "On Sat, Jul 23, 2022 at 4:28 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Jul 23, 2022 at 05:44:28PM +0900, Michael Paquier wrote:\n> > Changing get_altertable_subcmdtypes() to return a set of rows made of\n> > (subcommand, object description) is what I actually meant upthread as\n> > it feels natural given a CollectedCommand in input, and as\n> > pg_event_trigger_ddl_commands() only gives access to a set of\n> > CollectedCommands. This is also a test module so\n> > there is no issue in changing the existing function definitions.\n> >\n> > But your point would be to have a new function that takes in input a\n> > CollectedATSubcmd, returning back the object address or its\n> > description? How would you make sure that a subcommand maps to a\n> > correct object address?\n>\n> FWIW, I was thinking about something among the lines of 0002 on top of\n> Hou's patch.\n>\n\nWhat I intended to say is similar to what you have done in the patch\nbut in a new function. OTOH, your point that it is okay to change\nfunction signature/name in the test module seems reasonable to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 25 Jul 2022 08:42:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Collect ObjectAddress for ATTACH DETACH PARTITION to use in event\n trigger"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 08:42:18AM +0530, Amit Kapila wrote:\n> What I intended to say is similar to what you have done in the patch\n> but in a new function. OTOH, your point that it is okay to change\n> function signature/name in the test module seems reasonable to me.\n\nThanks. Let's go with the function change then. As introduced\noriginally in b488c58, it returns an array that gets just unnested\nonce, so I'd like to think that it had better be a SRF from the\nstart but things are what they are.\n--\nMichael",
"msg_date": "Mon, 25 Jul 2022 18:05:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Collect ObjectAddress for ATTACH DETACH PARTITION to use in\n event trigger"
},
{
"msg_contents": "On Saturday, July 23, 2022 6:58 PM Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Sat, Jul 23, 2022 at 05:44:28PM +0900, Michael Paquier wrote:\n> > Changing get_altertable_subcmdtypes() to return a set of rows made of\n> > (subcommand, object description) is what I actually meant upthread as\n> > it feels natural given a CollectedCommand in input, and as\n> > pg_event_trigger_ddl_commands() only gives access to a set of\n> > CollectedCommands. This is also a test module so there is no issue in\n> > changing the existing function definitions.\n> >\n> > But your point would be to have a new function that takes in input a\n> > CollectedATSubcmd, returning back the object address or its\n> > description? How would you make sure that a subcommand maps to a\n> > correct object address?\n> \n> FWIW, I was thinking about something among the lines of 0002 on top of Hou's\n> patch.\n\nThanks for the patches. The patches look good to me.\n\nBTW, while reviewing it, I found there are some more subcommands that the\nget_altertable_subcmdtypes() doesn't handle(e.g., ADD/DROP/SET IDENTITY and re ADD\nSTAT). Shall we fix them all while on it ?\n\nAttach a minor patch to fix those which is based on the v2 patch set.\n\nBest regards,\nHou zj",
"msg_date": "Mon, 25 Jul 2022 09:25:07 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Collect ObjectAddress for ATTACH DETACH PARTITION to use in event\n trigger"
},
{
"msg_contents": "On 2022-Jul-25, houzj.fnst@fujitsu.com wrote:\n\n> BTW, while reviewing it, I found there are some more subcommands that the\n> get_altertable_subcmdtypes() doesn't handle(e.g., ADD/DROP/SET IDENTITY and re ADD\n> STAT). Shall we fix them all while on it ?\n> \n> Attach a minor patch to fix those which is based on the v2 patch set.\n\nYeah, I suppose these are all commands that were added after the last\nserious round of event trigger hacking, so it would be good to have\neverything on board.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"En las profundidades de nuestro inconsciente hay una obsesiva necesidad\nde un universo lógico y coherente. Pero el universo real se halla siempre\nun paso más allá de la lógica\" (Irulan)\n\n\n",
"msg_date": "Mon, 25 Jul 2022 11:31:27 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Collect ObjectAddress for ATTACH DETACH PARTITION to use in\n event trigger"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 09:25:07AM +0000, houzj.fnst@fujitsu.com wrote:\n> BTW, while reviewing it, I found there are some more subcommands that the\n> get_altertable_subcmdtypes() doesn't handle(e.g., ADD/DROP/SET IDENTITY and re ADD\n> STAT). Shall we fix them all while on it ?\n> \n> Attach a minor patch to fix those which is based on the v2 patch set.\n\n@@ -300,6 +300,18 @@ get_altertable_subcmdinfo(PG_FUNCTION_ARGS)\n[ ... ]\n default:\n strtype = \"unrecognized\";\n break;\n\nRemoving the \"default\" clause would help here as we would get compiler\nwarnings if there is anything missing. One way to do things is to set\nstrtype to NULL before going through the switch and have a failsafe as\nsome commands are internal so they may not be worth adding to the\noutput.\n--\nMichael",
"msg_date": "Mon, 25 Jul 2022 19:26:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Collect ObjectAddress for ATTACH DETACH PARTITION to use in\n event trigger"
},
{
"msg_contents": "On Monday, July 25, 2022 6:26 PM Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Mon, Jul 25, 2022 at 09:25:07AM +0000, houzj.fnst@fujitsu.com wrote:\n> > BTW, while reviewing it, I found there are some more subcommands that\n> > the\n> > get_altertable_subcmdtypes() doesn't handle(e.g., ADD/DROP/SET\n> > IDENTITY and re ADD STAT). Shall we fix them all while on it ?\n> >\n> > Attach a minor patch to fix those which is based on the v2 patch set.\n> \n> @@ -300,6 +300,18 @@ get_altertable_subcmdinfo(PG_FUNCTION_ARGS)\n> [ ... ]\n> default:\n> strtype = \"unrecognized\";\n> break;\n> \n> Removing the \"default\" clause would help here as we would get compiler\n> warnings if there is anything missing. One way to do things is to set strtype to\n> NULL before going through the switch and have a failsafe as some commands\n> are internal so they may not be worth adding to the output.\n\nThanks for the suggestion. I have removed the default and found some missed\nsubcommands in 0003 patch. Attach the new version patch here\n(The 0001 and 0002 is unchanged).\n\nBest regards,\nHou zj",
"msg_date": "Tue, 26 Jul 2022 13:00:41 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Collect ObjectAddress for ATTACH DETACH PARTITION to use in event\n trigger"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 01:00:41PM +0000, houzj.fnst@fujitsu.com wrote:\n> Thanks for the suggestion. I have removed the default and found some missed\n> subcommands in 0003 patch. Attach the new version patch here\n> (The 0001 and 0002 is unchanged).\n\nI have reviewed what you have here, and I found that the change is too\ntimid, with a coverage of 32% for test_ddl_deparse. Attached is an\nupdated patch, that provides coverage for the most obvious cases I\ncould see in tablecmds.c, bringing the coverage to 64% here.\n\nSome cases are straight-forward, like the four cases for RLS or the\nthree subcases for RelOptions (where we'd better return an address\neven if doing so for the replace case). Some cases that I have not\nincluded here would need more thought, like constraint validation and\ndrop or even SET ACCESS METHOD, so I have discarded for now all the\ncases where we don't (or cannot) report properly an ObjectAddress\nyet.\n\nThere is also a fancy case with DROP COLUMN, where we get an\nObjectAddress referring to the column already dropped, aka roughly a \n\".....pg_dropped.N.....\", and it is not like we should switch to only\na reference of the table here because we want to know the name of the\ncolumn dropped. I have discarded this last one as well, for now.\n\nAll that could be expanded in more patches (triggers are an easy one),\nbut what I have here is already a good cut.\n--\nMichael",
"msg_date": "Sat, 30 Jul 2022 16:14:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Collect ObjectAddress for ATTACH DETACH PARTITION to use in\n event trigger"
},
{
"msg_contents": "On Saturday, July 30, 2022 3:15 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Jul 26, 2022 at 01:00:41PM +0000, houzj.fnst@fujitsu.com wrote:\n> > Thanks for the suggestion. I have removed the default and found some\n> > missed subcommands in 0003 patch. Attach the new version patch here\n> > (The 0001 and 0002 is unchanged).\n> \n> I have reviewed what you have here, and I found that the change is too timid,\n> with a coverage of 32% for test_ddl_deparse. Attached is an updated patch, that\n> provides coverage for the most obvious cases I could see in tablecmds.c,\n> bringing the coverage to 64% here.\n\nThanks ! the patch looks better now.\n\n> Some cases are straight-forward, like the four cases for RLS or the three\n> subcases for RelOptions (where we'd better return an address even if doing\n> doing for the replace case). \n\nI am not against returning the objaddr for cases related to RLS and RelOption.\nBut just to confirm, do you have a use case to use the returned address(relation itself)\nfor RLS or RelOptions in event trigger ? I asked this because when I tried to\ndeparse the subcommand of ALTER TABLE. It seems enough to use the information\ninside the parse tree to deparse the RLS and RelOptions related subcommands.\n\nBest regards,\nHou Zhijie\n\n\n",
"msg_date": "Sat, 30 Jul 2022 13:13:52 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Collect ObjectAddress for ATTACH DETACH PARTITION to use in event\n trigger"
},
{
"msg_contents": "On Sat, Jul 30, 2022 at 01:13:52PM +0000, houzj.fnst@fujitsu.com wrote:\n> I am not against returning the objaddr for cases related to RLS and RelOption.\n> But just to confirm, do you have a use case to use the returned address(relation itself)\n> for RLS or RelOptions in event trigger ? I asked this because when I tried to\n> deparse the subcommand of ALTER TABLE. It seems enough to use the information\n> inside the parse tree to deparse the RLS and RelOptions related subcommands.\n\nYou are right here, there is little point in returning the relation\nitself. I have removed these modifications, added a couple of extra\ncommands for some extra coverage, and applied all that. I have\nfinished by splitting the extension of test_ddl_deparse/ and the\naddition of ObjectAddress for the attach/detach into their own commit,\nmainly for clarity.\n--\nMichael",
"msg_date": "Sun, 31 Jul 2022 13:11:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Collect ObjectAddress for ATTACH DETACH PARTITION to use in\n event trigger"
},
{
"msg_contents": "On Sunday, July 31, 2022 12:12 PM Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Sat, Jul 30, 2022 at 01:13:52PM +0000, houzj.fnst@fujitsu.com wrote:\n> > I am not against returning the objaddr for cases related to RLS and\n> RelOption.\n> > But just to confirm, do you have a use case to use the returned\n> > address(relation itself) for RLS or RelOptions in event trigger ? I\n> > asked this because when I tried to deparse the subcommand of ALTER\n> > TABLE. It seems enough to use the information inside the parse tree to\n> deparse the RLS and RelOptions related subcommands.\n> \n> You are right here, there is little point in returning the relation itself. I have\n> removed these modifications, added a couple of extra commands for some\n> extra coverage, and applied all that. I have finished by splitting the extension\n> of test_ddl_deparse/ and the addition of ObjectAddress for the attach/detach\n> into their own commit, mainly for clarity.\n\nThanks!\n\nBest regards,\nHou Zhijie\n\n\n",
"msg_date": "Mon, 1 Aug 2022 03:33:44 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Collect ObjectAddress for ATTACH DETACH PARTITION to use in event\n trigger"
}
] |
[
{
"msg_contents": "Hackers,\n\nI usually build PostgreSQL from the source directory. But I've\nheard a complaint that PostgreSQL can't be built in an external\ndirectory.\n\nAssuming the postgres sources are located in the postgresql directory, the\nfollowing sequence of commands\n\nmkdir -p pgbld\ncd pgbld\n../postgresql/configure --disable-debug --disable-cassert --enable-tap-tests\nmake -j4\n\nresults in an error\n\n.../src/pgbld/../postgresql/src/include/utils/elog.h:73:10: fatal\nerror: utils/errcodes.h: No such file or directory\n 73 | #include \"utils/errcodes.h\"\n      |          ^~~~~~~~~~~~~~~~~~\n\nI've investigated this a bit. It appears that the Makefile generates\nsrc/backend/utils/errcodes.h in the build directory, but symlinks\nsrc/include/utils/errcodes.h to the sources directory. That is,\nsrc/include/utils/errcodes.h appears to be a broken symlink. I've\nwritten a short patch to fix that.\n\nIt seems strange to me that I'm the first one to discover this. Am I\nmissing something?\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Wed, 13 Jul 2022 14:10:19 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Building PostgreSQL in external directory is broken?"
},
{
"msg_contents": "Hi Alexander,\n\n> Assuming postgres sources located in postgresql directory, the\n> following sequence of commands\n>\n> mkdir -p pgbld\n> cd pgbld\n> ../postgresql/configure --disable-debug --disable-cassert --enable-tap-tests\n> make -j4\n>\n> ...\n>\n> It seems strange to me that I'm the first one discovering this. Am I\n> missing something?\n\nTo be honest, this is the first time I see anyone trying to build a\nproject that is using Autotools from an external directory :) I\nchecked the documentation [1] and it doesn't seem that we claim to\nsupport this.\n\nAlso I tried your patch on MacOS Monterey 12.4 and it didn't work. I\nget the following error:\n\n```\n...\nar: cryptohash.o: No such file or directory\nar: hmac.o: No such file or directory\nar: sha1.o: No such file or directory\nar: sha2.o: No such file or directory\n```\n\n... with or without the patch.\n\nMy guess would be that the reason no one discovered this before is\nthat this is in fact not supported and/or tested on CI.\n\nCould you give an example of when this can be useful?\n\n[1]: https://www.postgresql.org/docs/current/install-short.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 13 Jul 2022 14:48:33 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Building PostgreSQL in external directory is broken?"
},
{
"msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n\n> Hi Alexander,\n>\n> To be honest, this is the first time I see anyone trying to build a\n> project that is using Autotools from an external directory :) I\n> checked the documentation [1] and it doesn't seem that we claim to\n> support this.\n>\n> [1]: https://www.postgresql.org/docs/current/install-short.html\n\nThat's the short version. The longer version² does claim it's supported:\n\n You can also run configure in a directory outside the source tree,\n and then build there, if you want to keep the build directory\n separate from the original source files. This procedure is called a\n VPATH build. Here's how:\n\n mkdir build_dir\n cd build_dir\n /path/to/source/tree/configure [options go here]\n make\n\n- ilmari\n\n[2] https://www.postgresql.org/docs/current/install-procedure.html\n\n\n",
"msg_date": "Wed, 13 Jul 2022 12:59:50 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: Building PostgreSQL in external directory is broken?"
},
{
"msg_contents": "Hi Ilmari,\n\n> That's the short version. The longer version² does claim it's supported:\n\nYou are right, I missed this. Thanks!\n\nRegarding these errors:\n\n> ar: cryptohash.o: No such file or directory\n> ar: hmac.o: No such file or directory\n> ...\n\nThis has something to do with the particular choice of the ./configure\nflags on a given platform. If I omit:\n\n> --disable-debug --disable-cassert --enable-tap-tests\n\nI get completely different errors:\n\n> clang: error: no such file or directory: 'replication/backup_manifest.o'\n> clang: error: no such file or directory: 'replication/basebackup.o'\n\nApparently this should be checked carefully with different configure\nflags combinations if we are going to continue maintaining this.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 13 Jul 2022 15:12:12 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Building PostgreSQL in external directory is broken?"
},
{
"msg_contents": "On 2022-Jul-13, Alexander Korotkov wrote:\n\n> results in an error\n> \n> .../src/pgbld/../postgresql/src/include/utils/elog.h:73:10: fatal\n> error: utils/errcodes.h: No such file or directory\n> 73 | #include \"utils/errcodes.h\"\n> \n> | ^~~~~~~~~~~~~~~~~~\n\nProbably what is happening here is that you have build artifacts in the\nsource tree after having built there, and that confuses make so not\neverything is rebuilt correctly when you call it from the external build\ndir. I suggest to \"git clean -dfx\" your source tree, then you can rerun\nconfigure/make from the external builddir.\n\nFWIW building in external dirs works fine. I use it all the time.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Once again, thank you and all of the developers for your hard work on\nPostgreSQL. This is by far the most pleasant management experience of\nany database I've worked on.\" (Dan Harris)\nhttp://archives.postgresql.org/pgsql-performance/2006-04/msg00247.php\n\n\n",
"msg_date": "Wed, 13 Jul 2022 14:19:06 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Building PostgreSQL in external directory is broken?"
},
{
"msg_contents": "On Wed, Jul 13, 2022 at 3:19 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Jul-13, Alexander Korotkov wrote:\n>\n> > results in an error\n> >\n> > .../src/pgbld/../postgresql/src/include/utils/elog.h:73:10: fatal\n> > error: utils/errcodes.h: No such file or directory\n> > 73 | #include \"utils/errcodes.h\"\n> >\n> > | ^~~~~~~~~~~~~~~~~~\n>\n> Probably what is happening here is that you have build artifacts in the\n> source tree after having built there, and that confuses make so not\n> everything is rebuilt correctly when you call it from the external build\n> dit. I suggest to \"git clean -dfx\" your source tree, then you can rerun\n> configure/make from the external builddir.\n>\n> FWIW building in external dirs works fine. I use it all the time.\n\nYou are right. I made \"make distclean\" which appears to be not\nenough. With \"git clean -dfx\" correct symlink is generated. Sorry\nfor the noise.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 13 Jul 2022 15:35:56 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Building PostgreSQL in external directory is broken?"
},
{
"msg_contents": "On Wed, Jul 13, 2022 at 3:12 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> > That's the short version. The longer version² does claim it's supported:\n>\n> You are right, I missed this. Thanks!\n>\n> Regarding these errors:\n>\n> > ar: cryptohash.o: No such file or directory\n> > ar: hmac.o: No such file or directory\n> > ...\n>\n> This has something to do with the particular choice of the ./configure\n> flags on a given platform. If I omit:\n>\n> > --disable-debug --disable-cassert --enable-tap-tests\n>\n> I get completely different errors:\n>\n> > clang: error: no such file or directory: 'replication/backup_manifest.o'\n> > clang: error: no such file or directory: 'replication/basebackup.o'\n>\n> Apparently this should be checked carefully with different configure\n> flags combinations if we are going to continue maintaining this.\n\nPlease check Alvaro's advice to run \"git clean -dfx\". It helped me.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 13 Jul 2022 15:37:04 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Building PostgreSQL in external directory is broken?"
},
{
"msg_contents": "Alvaro, Alexander,\n\n> Please, check Alvaro's advise to run \"git clean -dfx\". Helped to me.\n\nThanks, `git clean -dfx` did the trick!\n\n> Could you give an example of when this can be useful?\n\nAnd now I can answer my own question. I can move all shell scripts I\ntypically use for the development from the repository and be sure they\nare not going to be deleted by accident (by `git clean`, for\ninstance).\n\nVery convenient.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 13 Jul 2022 16:36:32 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Building PostgreSQL in external directory is broken?"
},
{
"msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n>> Could you give an example of when this can be useful?\n\n> And now I can answer my own question. I can move all shell scripts I\n> typically use for the development from the repository and be sure they\n> are not going to be deleted by accident (by `git clean`, for\n> instance).\n\nFWIW, I gather that the upcoming switch to the meson build system\nwill result in *requiring* use of outside-the-source-tree builds.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Jul 2022 09:57:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Building PostgreSQL in external directory is broken?"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nWhile I was writing a test for PSQL, I faced a weird scenario. Depending on\nhow I build PSQL (enabling or not debug options), I saw different results\nfor the following query.\n\nSteps to reproduce:\n- OS: Ubuntu 20.04\n- PSQL version 14.4\n\nCREATE TABLE test (single_byte_col \"char\");\nINSERT INTO test (single_byte_col) VALUES ('🀆');\n\nIn case where the following query runs in debug mode, I got this result:\nSELECT left(single_byte_col, 1) l FROM test;\n l\n-----------\n ~\\x7F\\x7F\n(1 row)\n\nOnce I turn off debug mode, I got:\nSELECT left(single_byte_col, 1) l FROM test;\n l\n---\n\n(1 row)\n\nThat triggered me to use Valgrind, which reported the following error:\n\n==00:00:03:22.867 1171== VALGRINDERROR-BEGIN\n==00:00:03:22.867 1171== Invalid read of size 1\n==00:00:03:22.867 1171== at 0x4C41209: memmove (vg_replace_strmem.c:1382)\n==00:00:03:22.868 1171== by 0xC8D8D0: text_substring\n(../src/backend/utils/adt/varlena.c:1050)\n==00:00:03:22.868 1171== by 0xC9AC78: text_left\n(../src/backend/utils/adt/varlena.c:5614)\n==00:00:03:22.868 1171== by 0x7FBB75: ExecInterpExpr\n(../src/backend/executor/execExprInterp.c:749)\n==00:00:03:22.868 1171== by 0x7FADA6: ExecInterpExprStillValid\n(../src/backend/executor/execExprInterp.c:1824)\n==00:00:03:22.868 1171== by 0x81C0AA: ExecEvalExprSwitchContext\n(../src/include/executor/executor.h:339)\n==00:00:03:22.868 1171== by 0x81BD96: ExecProject\n(../src/include/executor/executor.h:373)\n==00:00:03:22.868 1171== by 0x81B8BE: ExecScan\n(../src/backend/executor/execScan.c:238)\n==00:00:03:22.868 1171== by 0x8638BA: ExecSeqScan\n(../src/backend/executor/nodeSeqscan.c:112)\n==00:00:03:22.868 1171== by 0x8170D4: ExecProcNodeFirst\n(../src/backend/executor/execProcnode.c:465)\n==00:00:03:22.868 1171== by 0x80F291: ExecProcNode\n(../src/include/executor/executor.h:257)\n==00:00:03:22.868 1171== by 0x80A8F0: ExecutePlan\n(../src/backend/executor/execMain.c:1551)\n==00:00:03:22.868 1171== by 
0x80A7A3: standard_ExecutorRun\n(../src/backend/executor/execMain.c:361)\n==00:00:03:22.868 1171== by 0x136231BC: pgss_ExecutorRun\n(../contrib/pg_stat_statements/pg_stat_statements.c:1001)\n==00:00:03:22.868 1171== by 0x80A506: ExecutorRun\n(../src/backend/executor/execMain.c:303)\n==00:00:03:22.868 1171== by 0xAD71B0: PortalRunSelect\n(../src/backend/tcop/pquery.c:921)\n==00:00:03:22.868 1171== by 0xAD6B75: PortalRun\n(../src/backend/tcop/pquery.c:765)\n==00:00:03:22.868 1171== by 0xAD1B3C: exec_simple_query\n(../src/backend/tcop/postgres.c:1214)\n==00:00:03:22.868 1171== by 0xAD0D7F: PostgresMain\n(../src/backend/tcop/postgres.c:4496)\n==00:00:03:22.868 1171== by 0x9D6C79: BackendRun\n(../src/backend/postmaster/postmaster.c:4530)\n==00:00:03:22.868 1171== by 0x9D61A3: BackendStartup\n(../src/backend/postmaster/postmaster.c:4252)\n==00:00:03:22.868 1171== by 0x9D4F56: ServerLoop\n(../src/backend/postmaster/postmaster.c:1745)\n==00:00:03:22.868 1171== by 0x9D23A4: PostmasterMain\n(../src/backend/postmaster/postmaster.c:1417)\n==00:00:03:22.868 1171== by 0x8AA0D2: main (../src/backend/main/main.c:209)\n==00:00:03:22.868 1171== Address 0x3ed1f3c5 is 293 bytes inside a block of\nsize 8,192 alloc'd\n==00:00:03:22.868 1171== at 0x4C37135: malloc (vg_replace_malloc.c:381)\n==00:00:03:22.868 1171== by 0xD1C3C0: AllocSetContextCreateInternal\n(../src/backend/utils/mmgr/aset.c:469)\n==00:00:03:22.868 1171== by 0x821CB4: CreateExprContextInternal\n(../src/backend/executor/execUtils.c:253)\n==00:00:03:22.868 1171== by 0x821BE2: CreateExprContext\n(../src/backend/executor/execUtils.c:303)\n==00:00:03:22.868 1171== by 0x822078: ExecAssignExprContext\n(../src/backend/executor/execUtils.c:482)\n==00:00:03:22.868 1171== by 0x8637D6: ExecInitSeqScan\n(../src/backend/executor/nodeSeqscan.c:147)\n==00:00:03:22.868 1171== by 0x816B2D: ExecInitNode\n(../src/backend/executor/execProcnode.c:211)\n==00:00:03:22.868 1171== by 0x80A336: 
InitPlan\n(../src/backend/executor/execMain.c:936)\n==00:00:03:22.868 1171== by 0x809BE7: standard_ExecutorStart\n(../src/backend/executor/execMain.c:263)\n==00:00:03:22.868 1171== by 0x1362303C: pgss_ExecutorStart\n(../contrib/pg_stat_statements/pg_stat_statements.c:963)\n==00:00:03:22.868 1171== by 0x809881: ExecutorStart\n(../src/backend/executor/execMain.c:141)\n==00:00:03:22.868 1171== by 0xAD636A: PortalStart\n(../src/backend/tcop/pquery.c:514)\n==00:00:03:22.868 1171== by 0xAD1A31: exec_simple_query\n(../src/backend/tcop/postgres.c:1175)\n==00:00:03:22.868 1171== by 0xAD0D7F: PostgresMain\n(../src/backend/tcop/postgres.c:4496)\n==00:00:03:22.868 1171== by 0x9D6C79: BackendRun\n(../src/backend/postmaster/postmaster.c:4530)\n==00:00:03:22.868 1171== by 0x9D61A3: BackendStartup\n(../src/backend/postmaster/postmaster.c:4252)\n==00:00:03:22.868 1171== by 0x9D4F56: ServerLoop\n(../src/backend/postmaster/postmaster.c:1745)\n==00:00:03:22.868 1171== by 0x9D23A4: PostmasterMain\n(../src/backend/postmaster/postmaster.c:1417)\n==00:00:03:22.868 1171== by 0x8AA0D2: main (../src/backend/main/main.c:209)\n==00:00:03:22.868 1171==\n==00:00:03:22.868 1171== VALGRINDERROR-END\n\nThen I attached a debugger to inspect the variables involved in the error\nreported by Valgrind for file /src/backend/utils/adt/varlena.c:\n\n1045 for (i = S1; i < E1; i++)\n1046 p += pg_mblen(p);\n1047\n1048 ret = (text *) palloc(VARHDRSZ + (p - s));\n1049 SET_VARSIZE(ret, VARHDRSZ + (p - s));\n1050 memcpy(VARDATA(ret), s, (p - s));\n\nThe column \"single_byte_col\" is supposed to store only 1\nbyte. Nevertheless, the INSERT command implicitly casts the '🀆' text into\n\"char\". This means that only the first byte of '🀆' ends up stored in the\ncolumn.\ngdb reports that \"pg_mblen(p) = 4\" (line 1046), which is expected since the\npg_mblen('🀆') is indeed 4. 
Later at line 1050, the memcpy will copy 4\nbytes instead of 1, hence an out of bounds memory read happens for pointer\n's', which effectively copies random bytes.\n\n-- \nSpiros\n(ServiceNow)",
"msg_date": "Wed, 13 Jul 2022 15:09:12 +0300",
"msg_from": "Spyridon Dimitrios Agathos <spyridon.dimitrios.agathos@gmail.com>",
"msg_from_op": true,
"msg_subject": "Bug: Reading from single byte character column type may cause out of\n bounds memory reads."
},
{
"msg_contents": "Hi Spyridon,\n\n> The column \"single_byte_col\" is supposed to store only 1 byte. Nevertheless, the INSERT command implicitly casts the '🀆' text into \"char\". This means that only the first byte of '🀆' ends up stored in the column.\n> gdb reports that \"pg_mblen(p) = 4\" (line 1046), which is expected since the pg_mblen('🀆') is indeed 4. Later at line 1050, the memcpy will copy 4 bytes instead of 1, hence an out of bounds memory read happens for pointer 's', which effectively copies random bytes.\n\nMany thanks for reporting this!\n\n> - OS: Ubuntu 20.04\n> - PSQL version 14.4\n\nI can confirm the bug exists in the `master` branch as well and\ndoesn't depend on the platform.\n\nAlthough the bug is easy to fix for this particular case (see the\npatch) I'm not sure if this solution is general enough. E.g. is there\nsomething that generally prevents pg_mblen() from doing out of bound\nreading in cases similar to this one? Should we prevent such an INSERT\nfrom happening instead?\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Wed, 13 Jul 2022 16:14:39 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Bug: Reading from single byte character column type may cause out\n of bounds memory reads."
},
{
"msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> Although the bug is easy to fix for this particular case (see the\n> patch) I'm not sure if this solution is general enough. E.g. is there\n> something that generally prevents pg_mblen() from doing out of bound\n> reading in cases similar to this one? Should we prevent such an INSERT\n> from happening instead?\n\nThis is ultimately down to char_text() generating a string that's alleged\nto be a valid \"text\" type value, but it contains illegally-encoded data.\nWhere we need to fix it is there: if we try to make every single\ntext-using function be 100% bulletproof against wrongly-encoded data,\nwe'll still be fixing bugs at the heat death of the universe.\n\nI complained about this in [1], but that thread died off before reaching a\nclear consensus about exactly what to do.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/2318797.1638558730%40sss.pgh.pa.us\n\n\n",
"msg_date": "Wed, 13 Jul 2022 11:11:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug: Reading from single byte character column type may cause out\n of bounds memory reads."
},
{
"msg_contents": "On Wed, 13 Jul 2022 at 09:15, Aleksander Alekseev <aleksander@timescale.com>\nwrote:\n\nI can confirm the bug exists in the `master` branch as well and\n> doesn't depend on the platform.\n>\n> Although the bug is easy to fix for this particular case (see the\n> patch) I'm not sure if this solution is general enough. E.g. is there\n> something that generally prevents pg_mblen() from doing out of bound\n> reading in cases similar to this one? Should we prevent such an INSERT\n> from happening instead?\n>\n\nNot just INSERTs, I would think: the implicit cast is already invalid,\nsince the \"char\" type can only hold characters that can be represented in 1\nbyte. A comparable example in the numeric types might be:\n\nodyssey=> select (2.0 ^ 80)::double precision::integer;\nERROR: integer out of range\n\nBy comparison:\n\nodyssey=> select '🀆'::\"char\";\n char\n──────\n\n(1 row)\n\nI think this should give an error, perhaps 'ERROR: \"char\" out of range'.\n\nIncidentally, if I apply ascii() to the result, I get sometimes 0 and\nsometimes 90112, neither of which should be a possible value for ascii ()\nof a \"char\" value and neither of which is 126982, the actual value of that\ncharacter.\n\nodyssey=> select ascii ('🀆'::\"char\");\n ascii\n───────\n 90112\n(1 row)\n\nodyssey=> select ascii ('🀆'::\"char\");\n ascii\n───────\n 0\n(1 row)\n\nodyssey=> select ascii ('🀆');\n ascii\n────────\n 126982\n(1 row)",
"msg_date": "Wed, 13 Jul 2022 12:33:12 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bug: Reading from single byte character column type may cause out\n of bounds memory reads."
},
{
"msg_contents": "\nOn 2022-07-13 We 11:11, Tom Lane wrote:\n> Aleksander Alekseev <aleksander@timescale.com> writes:\n>> Although the bug is easy to fix for this particular case (see the\n>> patch) I'm not sure if this solution is general enough. E.g. is there\n>> something that generally prevents pg_mblen() from doing out of bound\n>> reading in cases similar to this one? Should we prevent such an INSERT\n>> from happening instead?\n> This is ultimately down to char_text() generating a string that's alleged\n> to be a valid \"text\" type value, but it contains illegally-encoded data.\n> Where we need to fix it is there: if we try to make every single\n> text-using function be 100% bulletproof against wrongly-encoded data,\n> we'll still be fixing bugs at the heat death of the universe.\n>\n> I complained about this in [1], but that thread died off before reaching a\n> clear consensus about exactly what to do.\n>\n> \t\t\tregards, tom lane\n>\n> [1] https://www.postgresql.org/message-id/flat/2318797.1638558730%40sss.pgh.pa.us\n>\n>\n\n\nLooks like the main controversy was about the output format. Make an\nexecutive decision and pick one.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 13 Jul 2022 15:46:17 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Bug: Reading from single byte character column type may cause out\n of bounds memory reads."
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-07-13 We 11:11, Tom Lane wrote:\n>> I complained about this in [1], but that thread died off before reaching a\n>> clear consensus about exactly what to do.\n>> [1] https://www.postgresql.org/message-id/flat/2318797.1638558730%40sss.pgh.pa.us\n\n> Looks like the main controversy was about the output format. Make an\n> executive decision and pick one.\n\nDone, see other thread.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Jul 2022 17:25:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug: Reading from single byte character column type may cause out\n of bounds memory reads."
},
{
"msg_contents": "In the message of Wednesday, July 13, 2022 16:14:39 MSK, Aleksander \nAlekseev wrote:\n\nHi! Let me join the review process. Postgres data types are a field of expertise I \nam interested in.\n\n0. Though it looks like a steady bug, I can't reproduce it. Not using \nvalgrind, not using ASan (the address sanitizer should catch out-of-bounds reads \ntoo). I am running Debian Bullseye, and tried to build both postgresql 14.4 and \ncurrent master.\n\nNevertheless I would dig into this issue. And start with the parts that are \nnot covered by the patch, but seem important to me.\n\n1. The typename \"char\" (with quotes) is very-very-very confusing. It is described \nin the documentation, but you need to be a postgres expert or a careful documentation \nreader to notice the important difference between \"char\" and char. \nWhat is the history of the \"char\" type? Is it specified by some standard? Maybe it \nis a good point to create a more understandable alias, like byte_char, ascii_char \nor something for usage in practice, and keep \"char\" for backward compatibility \nonly.\n\n2. I would totally agree with Tom Lane and Isaac Morland that the problem should \nalso be fixed on the side of type conversion. There is a whole big thread about \nit. Guess we should come to some conclusion there.\n\n3. Fixing out-of-bounds reading for broken unicode is also important. Though \nfor now I am not quite sure it is possible.\n\n\n> -\t\t\tp += pg_mblen(p);\n> +\t\t{\n> +\t\t\tint t = pg_mblen(p);\n> +\t\t\tp += t;\n> +\t\t\tmax_copy_bytes -= t;\n> +\t\t}\n\nMinor issue: Here I would change the variable name from \"t\" to \"char_len\" or \nsomething, to make the code easier to understand.\n\nMajor issue: is the pg_mblen function safe to call with broken encoding at the end \nof the buffer? What if the last byte of the buffer is 0xF0 and you call pg_mblen for it?\n\n\n>+\t\tcopy_bytes = p - s;\n>+\t\tif(copy_bytes > max_copy_bytes)\n>+\t\t\tcopy_bytes = max_copy_bytes;\n\nHere I would suggest adding a comment about the broken utf encoding case. That would \nexplain why we might come to a situation where we can try to copy more than we \nhave.\n\nI would also suggest issuing a warning here. I guess a person that uses \npostgres would prefer to know that he managed to stuff into postgres a string \nwith broken utf encoding before it comes to some terrible consequences. \n\n> Hi Spyridon,\n> \n> > The column \"single_byte_col\" is supposed to store only 1 byte.\n> > Nevertheless, the INSERT command implicitly casts the '🀆' text into\n> > \"char\". This means that only the first byte of '🀆' ends up stored in the\n> > column. gdb reports that \"pg_mblen(p) = 4\" (line 1046), which is expected\n> > since the pg_mblen('🀆') is indeed 4. Later at line 1050, the memcpy will\n> > copy 4 bytes instead of 1, hence an out of bounds memory read happens for\n> > pointer 's', which effectively copies random bytes.\n> Many thanks for reporting this!\n> \n> > - OS: Ubuntu 20.04\n> > - PSQL version 14.4\n> \n> I can confirm the bug exists in the `master` branch as well and\n> doesn't depend on the platform.\n> \n> Although the bug is easy to fix for this particular case (see the\n> patch) I'm not sure if this solution is general enough. E.g. is there\n> something that generally prevents pg_mblen() from doing out of bound\n> reading in cases similar to this one? Should we prevent such an INSERT\n> from happening instead?\n\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su",
"msg_date": "Thu, 14 Jul 2022 14:07:56 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": false,
"msg_subject": "Re: Bug: Reading from single byte character column type may cause out\n of bounds memory reads."
},
{
"msg_contents": "Hi all,\n\nthis is to verify that the .patch proposed here:\n\nhttps://www.postgresql.org/message-id/flat/2318797.1638558730%40sss.pgh.pa.us\n\nfixes the issue. I applied the patch and:\n1) The build type doesn't affect the result of the query\n2) Valgrind doesn't complain about out-of-bounds reads\n3) The output of the \"faulty\" insertion is shown in \"\\ooo\" format.\n\nLooking forward to the next steps.\n\n--\nSpiros\n(ServiceNow)\n\n\nOn Thu, 14 Jul 2022 at 2:08 PM, Nikolay Shaplov <dhyan@nataraj.su>\nwrote:\n\n> In the message of Wednesday, July 13, 2022 16:14:39 MSK, Aleksander\n> Alekseev wrote:\n>\n> Hi! Let me join the review process. Postgres data types is field of\n> expertise I\n> am interested in.\n>\n> 0. Though it looks like a steady bug, I can't reproduce it. Not using\n> valgrind, not using ASan (address sanitizer should catch reading out of\n> bounds\n> too). I am running Debian Bullseye, and tried to build both postgresl 14.4\n> and\n> current master.\n>\n> Never the less I would dig into this issue. And start with the parts that\n> is\n> not covered by the patch, but seems to be important for me.\n>\n> 1. typename \"char\" (with quotes) is very-very-very confusing. it is\n> described\n> in documentation, but you need to be postgres expert or careful\n> documentation\n> reader, to notice important difference between \"char\" and char.\n> What is the history if \"char\" type? Is it specified by some standard? May\n> be it\n> is good point to create more understandable alias, like byte_char,\n> ascii_char\n> or something for usage in practice, and keep \"char\" for backward\n> compatibility\n> only.\n>\n> 2. I would totally agree with Tom Lane and Isaac Morland, that problem\n> should\n> be also fixed on the side of type conversion. There is whole big thread\n> about\n> it. 
Guess we should come to some conclusion there\n>\n> 3.Fixing out of bound reading for broken unicode is also important.\n> Though\n> for now I am not quite sure it is possible.\n>\n>\n> > - p += pg_mblen(p);\n> > + {\n> > + int t = pg_mblen(p);\n> > + p += t;\n> > + max_copy_bytes -= t;\n> > + }\n>\n> Minor issue: Here I would change variable name from \"t\" to \"char_len\" or\n> something, to make code more easy to understand.\n>\n> Major issue: is pg_mblen function safe to call with broken encoding at the\n> end\n> of buffer? What if last byte of the buffer is 0xF0 and you call pg_mblen\n> for it?\n>\n>\n> >+ copy_bytes = p - s;\n> >+ if(copy_bytes > max_copy_bytes)\n> >+ copy_bytes = max_copy_bytes;\n>\n> Here I would suggest to add comment about broken utf encoding case. That\n> would\n> explain why we might come to situation when we can try to copy more than\n> we\n> have.\n>\n> I would also suggest to issue a warning here. I guess person that uses\n> postgres would prefer to know that he managed to stuff into postgres a\n> string\n> with broken utf encoding, before it comes to some terrible consequences.\n>\n> > Hi Spyridon,\n> >\n> > > The column \"single_byte_col\" is supposed to store only 1 byte.\n> > > Nevertheless, the INSERT command implicitly casts the '🀆' text into\n> > > \"char\". This means that only the first byte of '🀆' ends up stored in\n> the\n> > > column. gdb reports that \"pg_mblen(p) = 4\" (line 1046), which is\n> expected\n> > > since the pg_mblen('🀆') is indeed 4. 
Later at line 1050, the memcpy\n will\n> > copy 4 bytes instead of 1, hence an out of bounds memory read happens\n for\n> > pointer 's', which effectively copies random bytes.\n> > Many thanks for reporting this!\n> >\n> > > - OS: Ubuntu 20.04\n> > > - PSQL version 14.4\n> >\n> > I can confirm the bug exists in the `master` branch as well and\n> > doesn't depend on the platform.\n> >\n> > Although the bug is easy to fix for this particular case (see the\n> > patch) I'm not sure if this solution is general enough. E.g. is there\n> > something that generally prevents pg_mblen() from doing out of bound\n> > reading in cases similar to this one? Should we prevent such an INSERT\n> > from happening instead?\n>\n>\n> --\n> Nikolay Shaplov aka Nataraj\n> Fuzzing Engineer at Postgres Professional\n> Matrix IM: @dhyan:nataraj.su\n>",
"msg_date": "Sat, 16 Jul 2022 21:38:14 +0300",
"msg_from": "Spyridon Dimitrios Agathos <spyridon.dimitrios.agathos@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Bug: Reading from single byte character column type may cause out\n of bounds memory reads."
},
{
"msg_contents": "Spyridon Dimitrios Agathos <spyridon.dimitrios.agathos@gmail.com> writes:\n> this is to verify that the .patch proposed here:\n> https://www.postgresql.org/message-id/flat/2318797.1638558730%40sss.pgh.pa.us\n> fixes the issue.\n\n> Looking forward to the next steps.\n\nThat's been committed into HEAD and v15, without pushback so far.\nSo the complained-of case is no longer reachable in those branches.\n\nI think we should reject Aleksander's patch, on the grounds that\nit's now unnecessary --- or if you want to argue that it's still\nnecessary, then it's woefully inadequate, because there are surely\na bunch of other text-processing functions that will also misbehave\non wrongly-encoded data. But our general policy for years has been\nthat we check incoming text for encoding validity and then presume\nthat it is valid in manipulation operations.\n\nWhat remains to be debated is whether to push ec62ce55a into the\nstable branches. While we've not had pushback about the change\nin 15beta3, that hasn't been out very long, so I don't know how\nmuch faith to put in the lack of complaints. Should we wait\nlonger before deciding?\n\nI'm leaning to the idea that we should not back-patch, because\nthis issue has been there for years with few complaints; it's\nnot clear that closing the hole is worth creating a compatibility\nhazard in minor releases. On the other hand, you could argue\nthat we should back-patch so that back-branch charin() will\nunderstand the strings that can now be emitted by v15 charout().\nFailing to do so will result in a different sort of compatibility\nproblem.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Sep 2022 15:35:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug: Reading from single byte character column type may cause out\n of bounds memory reads."
},
{
"msg_contents": "On Thu, Sep 01, 2022 at 03:35:52PM -0400, Tom Lane wrote:\n> Spyridon Dimitrios Agathos <spyridon.dimitrios.agathos@gmail.com> writes:\n> > this is to verify that the .patch proposed here:\n> > https://www.postgresql.org/message-id/flat/2318797.1638558730%40sss.pgh.pa.us\n> > fixes the issue.\n> \n> > Looking forward to the next steps.\n> \n> That's been committed into HEAD and v15, without pushback so far.\n> So the complained-of case is no longer reachable in those branches.\n> \n> I think we should reject Aleksander's patch, on the grounds that\n> it's now unnecessary --- or if you want to argue that it's still\n> necessary, then it's woefully inadequate, because there are surely\n> a bunch of other text-processing functions that will also misbehave\n> on wrongly-encoded data. But our general policy for years has been\n> that we check incoming text for encoding validity and then presume\n> that it is valid in manipulation operations.\n\npg_upgrade carries forward invalid text. A presumption of encoding validity\nwon't be justified any sooner than a presumption of not finding HEAP_MOVED_OFF\nflags. Hence, I think there should exist another policy that text-processing\nfunctions prevent severe misbehavior when processing invalid text.\nOut-of-bounds memory access qualifies as severe.\n\n> I'm leaning to the idea that we should not back-patch, because\n> this issue has been there for years with few complaints; it's\n> not clear that closing the hole is worth creating a compatibility\n> hazard in minor releases.\n\nI would not back-patch.\n\n> On the other hand, you could argue\n> that we should back-patch so that back-branch charin() will\n> understand the strings that can now be emitted by v15 charout().\n> Failing to do so will result in a different sort of compatibility\n> problem.\n\nIf concerned, I'd back-patch enough of the read side only, not the output\nside. I wouldn't bother, though.\n\n\n",
"msg_date": "Thu, 1 Sep 2022 22:09:02 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Bug: Reading from single byte character column type may cause\n out of bounds memory reads."
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Thu, Sep 01, 2022 at 03:35:52PM -0400, Tom Lane wrote:\n>> I think we should reject Aleksander's patch, on the grounds that\n>> it's now unnecessary --- or if you want to argue that it's still\n>> necessary, then it's woefully inadequate, because there are surely\n>> a bunch of other text-processing functions that will also misbehave\n>> on wrongly-encoded data. But our general policy for years has been\n>> that we check incoming text for encoding validity and then presume\n>> that it is valid in manipulation operations.\n\n> pg_upgrade carries forward invalid text. A presumption of encoding validity\n> won't be justified any sooner than a presumption of not finding HEAP_MOVED_OFF\n> flags. Hence, I think there should exist another policy that text-processing\n> functions prevent severe misbehavior when processing invalid text.\n> Out-of-bounds memory access qualifies as severe.\n\nWell ... that sounds great in the abstract, but it's not clear to me\nthat the problem justifies either the amount of developer effort it'd\ntake to close all the holes, or the performance hits we'd likely take.\nIn any case, changing only text_substring() isn't going to move the\nball very far at all.\n\n>> I'm leaning to the idea that we should not back-patch, because\n>> this issue has been there for years with few complaints; it's\n>> not clear that closing the hole is worth creating a compatibility\n>> hazard in minor releases.\n\n> I would not back-patch.\n\nOK. Let's close out this CF item as RWF, then.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Sep 2022 12:45:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug: Reading from single byte character column type may cause out\n of bounds memory reads."
}
] |
[
{
"msg_contents": "$ git grep 'is not supported by this build' '*c'\nsrc/backend/access/transam/xloginsert.c: elog(ERROR, \"LZ4 is not supported by this build\");\nsrc/backend/access/transam/xloginsert.c: elog(ERROR, \"zstd is not supported by this build\");\nsrc/backend/access/transam/xloginsert.c: elog(ERROR, \"LZ4 is not supported by this build\");\nsrc/backend/access/transam/xloginsert.c: elog(ERROR, \"zstd is not supported by this build\");\n...\nsrc/backend/replication/basebackup_gzip.c: errmsg(\"gzip compression is not supported by this build\")));\nsrc/backend/replication/basebackup_lz4.c: errmsg(\"lz4 compression is not supported by this build\")));\nsrc/backend/replication/basebackup_zstd.c: errmsg(\"zstd compression is not supported by this build\")));\n\nShould the word \"compression\" be removed from basebackup, for consistency with\nthe use in xloginsert.c ? And \"lz4\" capitalization changed for consistency (in\none direction or the other). See 4035cd5d4, e9537321a7, 7cf085f07. 
Maybe zstd\nshould also be changed to Zstandard per 586955ddd.\n\nTo avoid the extra translation, and allow the compiler to merge strings.\n\nThe \"binary size\" argument wouldn't apply, but note that pg_dump uses this\nlanguage:\n\nsrc/bin/pg_dump/compress_io.c: pg_fatal(\"not built with zlib support\");\n\nSee also some other string messages I mentioned here:\nhttps://www.postgresql.org/message-id/20210622001927.GE29179@telsasoft.com\n|+#define NO_LZ4_SUPPORT() \\\n|+ ereport(ERROR, \\\n|+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), \\\n|+ errmsg(\"unsupported LZ4 compression method\"), \\\n|+ errdetail(\"This functionality requires the server to be built with lz4 support.\"), \\\n|+ errhint(\"You need to rebuild PostgreSQL using --with-lz4.\")))\n|\n|src/bin/pg_dump/pg_backup_archiver.c: fatal(\"cannot restore from compressed archive (compression not supported in this installation)\");\n|src/bin/pg_dump/pg_backup_archiver.c: pg_log_warning(\"archive is compressed, but this installation does not support compression -- no data will be available\");\n|src/bin/pg_dump/pg_dump.c: pg_log_warning(\"requested compression not available in this installation -- archive will be uncompressed\");\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 13 Jul 2022 09:33:13 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "strings: \".. (compression)? is not supported by this build\""
},
{
"msg_contents": "On Wed, Jul 13, 2022 at 10:33 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> $ git grep 'is not supported by this build' '*c'\n> src/backend/access/transam/xloginsert.c: elog(ERROR, \"LZ4 is not supported by this build\");\n> src/backend/access/transam/xloginsert.c: elog(ERROR, \"zstd is not supported by this build\");\n> src/backend/access/transam/xloginsert.c: elog(ERROR, \"LZ4 is not supported by this build\");\n> src/backend/access/transam/xloginsert.c: elog(ERROR, \"zstd is not supported by this build\");\n> ...\n> src/backend/replication/basebackup_gzip.c: errmsg(\"gzip compression is not supported by this build\")));\n> src/backend/replication/basebackup_lz4.c: errmsg(\"lz4 compression is not supported by this build\")));\n> src/backend/replication/basebackup_zstd.c: errmsg(\"zstd compression is not supported by this build\")));\n>\n> Should the word \"compression\" be removed from basebackup, for consistency with\n> the use in xloginsert.c ? And \"lz4\" capitalization changed for consistency (in\n> one direction or the other). See 4035cd5d4, e9537321a7, 7cf085f07. Maybe zstd\n> should also be changed to Zstandard per 586955ddd.\n>\n> To avoid the extra translation, and allow the compiler to merge strings.\n\nTranslation isn't an issue here because the first group of messages\nare reported using elog(), not ereport(). There is something to be\nsaid for being consistent, though.\n\nI feel like it's kind of awkward that this thing is named Zstandard\nbut the library is libzstd and the package name is probably also zstd.\nI worry a bit that if we make the messages say zstandard instead of\nzstd it makes it harder to figure out. It's probably not a huge issue,\nthough.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 13 Jul 2022 13:50:39 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: strings: \".. (compression)? is not supported by this build\""
}
] |
[
{
"msg_contents": "Hi hackers,\n\nA few years ago, there was a proposal to create hash tables for long\n[sub]xip arrays in snapshots [0], but the thread seems to have fizzled out.\nI was curious whether this idea still showed measurable benefits, so I\nrevamped the patch and ran the same test as before [1]. Here are the\nresults for 60-second runs on an r5d.24xlarge with the data directory on\nthe local NVMe storage:\n\n writers HEAD patch diff\n ----------------------------\n 16 659 664 +1%\n 32 645 663 +3%\n 64 659 692 +5%\n 128 641 716 +12%\n 256 619 610 -1%\n 512 530 702 +32%\n 768 469 582 +24%\n 1000 367 577 +57%\n\nAs before, the hash table approach seems to provide a decent benefit at\nhigher client counts, so I felt it was worth reviving the idea.\n\nThe attached patch has some key differences from the previous proposal.\nFor example, the new patch uses simplehash instead of open-coding a new\nhash table. Also, I've bumped up the threshold for creating hash tables to\n128 based on the results of my testing. The attached patch waits until a\nlookup of [sub]xip before generating the hash table, so we only need to\nallocate enough space for the current elements in the [sub]xip array, and\nwe avoid allocating extra memory for workloads that do not need the hash\ntables. I'm slightly worried about increasing the number of memory\nallocations in this code path, but the results above seemed encouraging on\nthat front.\n\nThoughts?\n\n[0] https://postgr.es/m/35960b8af917e9268881cd8df3f88320%40postgrespro.ru\n[1] https://postgr.es/m/057a9a95-19d2-05f0-17e2-f46ff20e9b3e%402ndquadrant.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 13 Jul 2022 10:09:50 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Wed, Jul 13, 2022 at 10:40 PM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n> Hi hackers,\n>\n> A few years ago, there was a proposal to create hash tables for long\n> [sub]xip arrays in snapshots [0], but the thread seems to have fizzled out.\n> I was curious whether this idea still showed measurable benefits, so I\n> revamped the patch and ran the same test as before [1]. Here are the\n> results for 60₋second runs on an r5d.24xlarge with the data directory on\n> the local NVMe storage:\n>\n> writers HEAD patch diff\n> ----------------------------\n> 16 659 664 +1%\n> 32 645 663 +3%\n> 64 659 692 +5%\n> 128 641 716 +12%\n> 256 619 610 -1%\n> 512 530 702 +32%\n> 768 469 582 +24%\n> 1000 367 577 +57%\n\nImpressive.\n\n> As before, the hash table approach seems to provide a decent benefit at\n> higher client counts, so I felt it was worth reviving the idea.\n>\n> The attached patch has some key differences from the previous proposal.\n> For example, the new patch uses simplehash instead of open-coding a new\n> hash table. Also, I've bumped up the threshold for creating hash tables to\n> 128 based on the results of my testing. The attached patch waits until a\n> lookup of [sub]xip before generating the hash table, so we only need to\n> allocate enough space for the current elements in the [sub]xip array, and\n> we avoid allocating extra memory for workloads that do not need the hash\n> tables. I'm slightly worried about increasing the number of memory\n> allocations in this code path, but the results above seemed encouraging on\n> that front.\n>\n> Thoughts?\n>\n> [0] https://postgr.es/m/35960b8af917e9268881cd8df3f88320%40postgrespro.ru\n> [1] https://postgr.es/m/057a9a95-19d2-05f0-17e2-f46ff20e9b3e%402ndquadrant.com\n\nAren't these snapshot arrays always sorted? 
I see the following code:\n\n/* sort so we can bsearch() */\nqsort(snapshot->xip, snapshot->xcnt, sizeof(TransactionId), xidComparator);\n\n/* sort so we can bsearch() later */\nqsort(snap->subxip, snap->subxcnt, sizeof(TransactionId), xidComparator);\n\nIf the ordering isn't an invariant of these snapshot arrays, can we\nalso use the hash table mechanism for all of the snapshot arrays\ninfrastructure rather than qsort+bsearch in a few places and hash\ntable for others?\n\n+ * The current value worked well in testing, but it's still mostly a guessed-at\n+ * number that might need updating in the future.\n+ */\n+#define XIP_HASH_MIN_ELEMENTS (128)\n+\n\nDo you see a regression with a hash table for all the cases? Why can't\nwe just build a hash table irrespective of these limits and use it for\nall the purposes instead of making it complex with different\napproaches if we don't have measurable differences in the performance\nor throughput?\n\n+static inline bool\n+XidInXip(TransactionId xid, TransactionId *xip, uint32 xcnt,\n+ xip_hash_hash **xiph)\n\n+ /* Make sure the hash table is built. */\n+ if (*xiph == NULL)\n+ {\n+ *xiph = xip_hash_create(TopTransactionContext, xcnt, NULL);\n+\n+ for (int i = 0; i < xcnt; i++)\n\nWhy create a hash table on the first search? Why can't it be built\nwhile inserting or creating these snapshots? Basically, instead of the\narray, can these snapshot structures be hash tables by themselves? I\nknow this requires a good amount of code refactoring, but worth\nconsidering IMO as it removes bsearch thus might improve the\nperformance further.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Thu, 14 Jul 2022 15:10:56 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "Hi Bharath,\n\nThanks for taking a look.\n\nOn Thu, Jul 14, 2022 at 03:10:56PM +0530, Bharath Rupireddy wrote:\n> Aren't these snapshot arrays always sorted? I see the following code:\n> \n> /* sort so we can bsearch() */\n> qsort(snapshot->xip, snapshot->xcnt, sizeof(TransactionId), xidComparator);\n> \n> /* sort so we can bsearch() later */\n> qsort(snap->subxip, snap->subxcnt, sizeof(TransactionId), xidComparator);\n\nAFAICT these arrays are sorted in limited cases, such as\npg_current_snapshot() and logical replication. GetSnapshotData() does not\nappear to sort them, so I don't think we can always assume they are sorted.\nIn the previous thread, Tomas analyzed simply sorting the arrays [0] and\nfound that it provided much less improvement compared to the hash table\napproach, so I have not seriously considered it here.\n\n> If the ordering isn't an invariant of these snapshot arrays, can we\n> also use the hash table mechanism for all of the snapshot arrays\n> infrastructure rather than qsort+bsearch in a few places and hash\n> table for others?\n\nUnless there is demonstrable benefit in doing so for the few places that\nsort the arrays, I'm skeptical it's worth the complexity. This patch is\ntargeted to XidInMVCCSnapshot(), which we can demonstrate has clear impact\non TPS for some workloads.\n\n> + * The current value worked well in testing, but it's still mostly a guessed-at\n> + * number that might need updating in the future.\n> + */\n> +#define XIP_HASH_MIN_ELEMENTS (128)\n> +\n> \n> Do you see a regression with a hash table for all the cases? Why can't\n> we just build a hash table irrespective of these limits and use it for\n> all the purposes instead of making it complex with different\n> approaches if we don't have measurable differences in the performance\n> or throughput?\n\nI performed the same tests as before with a variety of values. 
Here are\nthe results:\n\n writers HEAD 1 16 32 64 128\n ------------------------------------\n 16 659 698 678 659 665 664\n 32 645 661 688 657 649 663\n 64 659 656 653 649 663 692\n 128 641 636 639 679 643 716\n 256 619 641 619 643 653 610\n 512 530 609 582 602 605 702\n 768 469 610 608 551 571 582\n 1000 367 610 538 557 556 577\n\nI was surpised to see that there really wasn't a regression at the low end,\nbut keep in mind that this is a rather large machine and a specialized\nworkload for generating snapshots with long [sub]xip arrays. That being\nsaid, there really wasn't any improvement at the low end, either. If we\nalways built a hash table, we'd be introducing more overhead and memory\nusage in return for approximately zero benefit. My intent was to only take\non the overhead in cases where we believe it might have a positive impact,\nwhich is why I picked the somewhat conservative value of 128. If the\noverhead isn't a concern, it might be feasible to always make [sub]xip a\nhash table.\n\n> +static inline bool\n> +XidInXip(TransactionId xid, TransactionId *xip, uint32 xcnt,\n> + xip_hash_hash **xiph)\n> \n> + /* Make sure the hash table is built. */\n> + if (*xiph == NULL)\n> + {\n> + *xiph = xip_hash_create(TopTransactionContext, xcnt, NULL);\n> +\n> + for (int i = 0; i < xcnt; i++)\n> \n> Why create a hash table on the first search? Why can't it be built\n> while inserting or creating these snapshots? Basically, instead of the\n> array, can these snapshot structures be hash tables by themselves? I\n> know this requires a good amount of code refactoring, but worth\n> considering IMO as it removes bsearch thus might improve the\n> performance further.\n\nThe idea is to avoid the overhead unless something actually needs to\ninspect these arrays.\n\n[0] https://postgr.es/m/057a9a95-19d2-05f0-17e2-f46ff20e9b3e%402ndquadrant.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 14 Jul 2022 11:09:38 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "Hi,\n\nSounds worth pursuing.\n\nOn 2022-07-13 10:09:50 -0700, Nathan Bossart wrote:\n> The attached patch has some key differences from the previous proposal.\n> For example, the new patch uses simplehash instead of open-coding a new\n> hash table.\n\n+1\n\n> The attached patch waits until a lookup of [sub]xip before generating the\n> hash table, so we only need to allocate enough space for the current\n> elements in the [sub]xip array, and we avoid allocating extra memory for\n> workloads that do not need the hash tables.\n\nHm. Are there any contexts where we'd not want the potential for failing due\nto OOM?\n\nI wonder if we additionally / alternatively could use a faster method of\nsearching the array linearly, e.g. using SIMD.\n\n\nAnother thing that might be worth looking into is to sort the xip/subxip\narrays into a binary-search optimized layout. That'd make the binary search\nfaster, wouldn't require additional memory (a boolean indicating whether\nsorted somewhere, I guess), and would easily persist across copies of the\nsnapshot.\n\n\n> I'm slightly worried about increasing the number of memory\n> allocations in this code path, but the results above seemed encouraging on\n> that front.\n\nISTM that the test wouldn't be likely to show those issues.\n\n\n> These hash tables are regarded as ephemeral; they only live in\n> process-local memory and are never rewritten, copied, or\n> serialized.\n\nWhat does rewriting refer to here?\n\nNot convinced that's the right idea in case of copying. I think we often end\nup copying snapshots frequently, and building & allocating the hashed xids\nseparately every time seems not great.\n\n\n> +\tsnapshot->xiph = NULL;\n> +\tsnapshot->subxiph = NULL;\n\nDo we need separate hashes for these? ISTM that if we're overflowed then we\ndon't need ->subxip[h], and if not, then the action for an xid being in ->xip\nand ->subxiph is the same?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 15 Jul 2022 13:08:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "Hi Andres,\n\nThanks for taking a look.\n\nOn Fri, Jul 15, 2022 at 01:08:57PM -0700, Andres Freund wrote:\n> Hm. Are there any contexts where we'd not want the potential for failing due\n> to OOM?\n\nI'm not sure about this one.\n\n> I wonder if we additionally / alternatively could use a faster method of\n> searching the array linearly, e.g. using SIMD.\n\nI looked into using SIMD. The patch is attached, but it is only intended\nfor benchmarking purposes and isn't anywhere close to being worth serious\nreview. There may be a simpler/better way to implement the linear search,\nbut this seemed to work well. Overall, SIMD provided a decent improvement.\nI had to increase the number of writers quite a bit in order to demonstrate\nwhere the hash tables began winning. Here are the numbers:\n\n writers head simd hash\n 256 663 632 694\n 512 530 618 637\n 768 489 544 573\n 1024 364 508 562\n 2048 185 306 485\n 4096 146 197 441\n\nWhile it is unsurprising that the hash tables perform the best, there are a\ncouple of advantages to SIMD that might make that approach worth\nconsidering. For one, there's really no overhead (i.e., you don't need to\nsort the array or build a hash table), so we can avoid picking an arbitrary\nthreshold and just have one code path. Also, a SIMD implementation for a\nlinear search through an array of integers could likely be easily reused\nelsewhere.\n\n> Another thing that might be worth looking into is to sort the xip/subxip\n> arrays into a binary-search optimized layout. That'd make the binary search\n> faster, wouldn't require additional memory (a boolean indicating whether\n> sorted somewhere, I guess), and would easily persist across copies of the\n> snapshot.\n\nI spent some time looking into this, but I haven't attempted to implement\nit. 
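To make the idea concrete, here is a rough, untested sketch of the Eytzinger (BFS-order) layout I understand the suggestion to mean. The typedef and function names below are made up for illustration and are not the actual snapshot code; it builds the layout out of place from an already-sorted array:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uint32_t TransactionId;	/* stand-in for the real typedef */

/*
 * Copy a sorted array into Eytzinger (BFS) order: slot k holds the median
 * of its subrange and its children live at slots 2k+1 and 2k+2, so a
 * search descends one level per probe and the top levels stay
 * cache-resident.  Returns the number of elements consumed from "sorted".
 */
static size_t
eytzinger_build(const TransactionId *sorted, TransactionId *out,
				size_t n, size_t next, size_t k)
{
	if (k < n)
	{
		next = eytzinger_build(sorted, out, n, next, 2 * k + 1);
		out[k] = sorted[next++];
		next = eytzinger_build(sorted, out, n, next, 2 * k + 2);
	}
	return next;
}

/* Membership test over the Eytzinger layout. */
static bool
eytzinger_contains(const TransactionId *eytz, size_t n, TransactionId xid)
{
	size_t		k = 0;

	while (k < n)
	{
		if (eytz[k] == xid)
			return true;
		k = (xid < eytz[k]) ? 2 * k + 1 : 2 * k + 2;
	}
	return false;
}
```

A lookup then walks k -> 2k+1 / 2k+2 from slot 0 instead of bisecting, so the first few probes always touch the same hot cache lines. Note the sketch needs a scratch copy of the array, though.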
IIUC the most difficult part of this is sorting the array in place to\nthe special layout.\n\n>> These hash tables are regarded as ephemeral; they only live in\n>> process-local memory and are never rewritten, copied, or\n>> serialized.\n> \n> What does rewriting refer to here?\n\nI mean that a hash table created for one snapshot will not be cleared and\nreused for another.\n\n> Not convinced that's the right idea in case of copying. I think we often end\n> up copying snapshots frequently, and building & allocating the hashed xids\n> separately every time seems not great.\n\nRight. My concern with reusing the hash tables is that we'd need to\nallocate much more space that would go largely unused in many cases.\n\n>> +\tsnapshot->xiph = NULL;\n>> +\tsnapshot->subxiph = NULL;\n> \n> Do we need separate hashes for these? ISTM that if we're overflowed then we\n> don't need ->subxip[h], and if not, then the action for an xid being in ->xip\n> and ->subxiph is the same?\n\nDo you mean that we can combine these into one hash table? That might\nwork.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 16 Jul 2022 20:59:57 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "Hi, all\n\n\n> \n> \t\tif (!snapshot->suboverflowed)\n> \t\t{\n> \t\t\t/* we have full data, so search subxip */\n> -\t\t\tint32\t\tj;\n> -\n> -\t\t\tfor (j = 0; j < snapshot->subxcnt; j++)\n> -\t\t\t{\n> -\t\t\t\tif (TransactionIdEquals(xid, snapshot->subxip[j]))\n> -\t\t\t\t\treturn true;\n> -\t\t\t}\n> +\t\t\tif (XidInXip(xid, snapshot->subxip, snapshot->subxcnt,\n> +\t\t\t\t\t\t &snapshot->subxiph))\n> +\t\t\t\treturn true;\n> \n> \t\t\t/* not there, fall through to search xip[] */\n> \t\t}\n\n\nIf snapshot->suboverflowed is false then the subxcnt must be less than PGPROC_MAX_CACHED_SUBXIDS which is 64 now.\n\nAnd we won’t use hash if the xcnt is less than XIP_HASH_MIN_ELEMENTS which is 128 currently during discussion.\n\nSo that, subxid’s hash table will never be used, right?\n\nRegards,\n\nZhang Mingli\n\n\n> On Jul 14, 2022, at 01:09, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> \n> Hi hackers,\n> \n> A few years ago, there was a proposal to create hash tables for long\n> [sub]xip arrays in snapshots [0], but the thread seems to have fizzled out.\n> I was curious whether this idea still showed measurable benefits, so I\n> revamped the patch and ran the same test as before [1]. Here are the\n> results for 60-second runs on an r5d.24xlarge with the data directory on\n> the local NVMe storage:\n> \n> writers HEAD patch diff\n> ----------------------------\n> 16 659 664 +1%\n> 32 645 663 +3%\n> 64 659 692 +5%\n> 128 641 716 +12%\n> 256 619 610 -1%\n> 512 530 702 +32%\n> 768 469 582 +24%\n> 1000 367 577 +57%\n> \n> As before, the hash table approach seems to provide a decent benefit at\n> higher client counts, so I felt it was worth reviving the idea.\n> \n> The attached patch has some key differences from the previous proposal.\n> For example, the new patch uses simplehash instead of open-coding a new\n> hash table. Also, I've bumped up the threshold for creating hash tables to\n> 128 based on the results of my testing. 
The attached patch waits until a\n> lookup of [sub]xip before generating the hash table, so we only need to\n> allocate enough space for the current elements in the [sub]xip array, and\n> we avoid allocating extra memory for workloads that do not need the hash\n> tables. I'm slightly worried about increasing the number of memory\n> allocations in this code path, but the results above seemed encouraging on\n> that front.\n> \n> Thoughts?\n> \n> [0] https://postgr.es/m/35960b8af917e9268881cd8df3f88320%40postgrespro.ru\n> [1] https://postgr.es/m/057a9a95-19d2-05f0-17e2-f46ff20e9b3e%402ndquadrant.com\n> \n> -- \n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n> <v1-0001-Optimize-lookups-in-snapshot-transactions-in-prog.patch>\n\n\n\n",
"msg_date": "Sun, 24 Jul 2022 12:48:25 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "В Ср, 13/07/2022 в 10:09 -0700, Nathan Bossart пишет:\n> Hi hackers,\n> \n> A few years ago, there was a proposal to create hash tables for long\n> [sub]xip arrays in snapshots [0], but the thread seems to have fizzled out.\n> I was curious whether this idea still showed measurable benefits, so I\n> revamped the patch and ran the same test as before [1]. Here are the\n> results for 60₋second runs on an r5d.24xlarge with the data directory on\n> the local NVMe storage:\n> \n> writers HEAD patch diff\n> ----------------------------\n> 16 659 664 +1%\n> 32 645 663 +3%\n> 64 659 692 +5%\n> 128 641 716 +12%\n> 256 619 610 -1%\n> 512 530 702 +32%\n> 768 469 582 +24%\n> 1000 367 577 +57%\n> \n> As before, the hash table approach seems to provide a decent benefit at\n> higher client counts, so I felt it was worth reviving the idea.\n> \n> The attached patch has some key differences from the previous proposal.\n> For example, the new patch uses simplehash instead of open-coding a new\n> hash table. Also, I've bumped up the threshold for creating hash tables to\n> 128 based on the results of my testing. The attached patch waits until a\n> lookup of [sub]xip before generating the hash table, so we only need to\n> allocate enough space for the current elements in the [sub]xip array, and\n> we avoid allocating extra memory for workloads that do not need the hash\n> tables. I'm slightly worried about increasing the number of memory\n> allocations in this code path, but the results above seemed encouraging on\n> that front.\n> \n> Thoughts?\n> \n> [0] https://postgr.es/m/35960b8af917e9268881cd8df3f88320%40postgrespro.ru\n> [1] https://postgr.es/m/057a9a95-19d2-05f0-17e2-f46ff20e9b3e%402ndquadrant.com\n> \n\nI'm glad my idea has been reborn.\n\nWell, may be simplehash is not bad idea.\nWhile it certainly consumes more memory and CPU instructions.\n\nI'll try to review.\n\nregards,\n\nYura Sokolov\n\n\n",
"msg_date": "Sun, 24 Jul 2022 15:26:12 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Sun, Jul 24, 2022 at 12:48:25PM +0800, Zhang Mingli wrote:\n> If snaphost->suboverflowed is false then the subxcnt must be less than PGPROC_MAX_CACHED_SUBXIDS which is 64 now.\n> \n> And we won’t use hash if the xcnt is less than XIP_HASH_MIN_ELEMENTS which is 128 currently during discussion.\n> \n> So that, subxid’s hash table will never be used, right?\n\nThis array will store up to TOTAL_MAX_CACHED_SUBXIDS transactions, which\nwill typically be much greater than 64. When there isn't any overflow,\nsubxip stores all of the subxids for all of the entries in the procarray.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 24 Jul 2022 21:08:56 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "Got it, thanks.\n\n\n\nRegards,\nZhang Mingli\n\n\n\n> On Jul 25, 2022, at 12:08, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> \n> On Sun, Jul 24, 2022 at 12:48:25PM +0800, Zhang Mingli wrote:\n>> If snaphost->suboverflowed is false then the subxcnt must be less than PGPROC_MAX_CACHED_SUBXIDS which is 64 now.\n>> \n>> And we won’t use hash if the xcnt is less than XIP_HASH_MIN_ELEMENTS which is 128 currently during discussion.\n>> \n>> So that, subxid’s hash table will never be used, right?\n> \n> This array will store up to TOTAL_MAX_CACHED_SUBXIDS transactions, which\n> will typically be much greater than 64. When there isn't any overflow,\n> subxip stores all of the subxids for all of the entries in the procarray.\n> \n> -- \n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Mon, 25 Jul 2022 12:28:23 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Sat, Jul 16, 2022 at 08:59:57PM -0700, Nathan Bossart wrote:\n> On Fri, Jul 15, 2022 at 01:08:57PM -0700, Andres Freund wrote:\n>> I wonder if we additionally / alternatively could use a faster method of\n>> searching the array linearly, e.g. using SIMD.\n> \n> I looked into using SIMD. The patch is attached, but it is only intended\n> for benchmarking purposes and isn't anywhere close to being worth serious\n> review. There may be a simpler/better way to implement the linear search,\n> but this seemed to work well. Overall, SIMD provided a decent improvement.\n> I had to increase the number of writers quite a bit in order to demonstrate\n> where the hash tables began winning. Here are the numbers:\n> \n> writers head simd hash\n> 256 663 632 694\n> 512 530 618 637\n> 768 489 544 573\n> 1024 364 508 562\n> 2048 185 306 485\n> 4096 146 197 441\n> \n> While it is unsurprising that the hash tables perform the best, there are a\n> couple of advantages to SIMD that might make that approach worth\n> considering. For one, there's really no overhead (i.e., you don't need to\n> sort the array or build a hash table), so we can avoid picking an arbitrary\n> threshold and just have one code path. Also, a SIMD implementation for a\n> linear search through an array of integers could likely be easily reused\n> elsewhere.\n\n From the discussion thus far, it seems there is interest in optimizing\n[sub]xip lookups, so I'd like to spend some time moving it forward. I\nthink the biggest open question is which approach to take. Both the SIMD\nand hash table approaches seem viable, but I think I prefer the SIMD\napproach at the moment (see the last paragraph of quoted text for the\nreasons). What do folks think?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 25 Jul 2022 12:04:19 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "\n\n\n> On Jul 26, 2022, at 03:04, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> \n> From the discussion thus far, it seems there is interest in optimizing\n> [sub]xip lookups, so I'd like to spend some time moving it forward. I\n> think the biggest open question is which approach to take. Both the SIMD\n> and hash table approaches seem viable, but I think I prefer the SIMD\n> approach at the moment (see the last paragraph of quoted text for the\n> reasons). What do folks think?\n> \n> -- \n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n> \n> \n\n+1, I’m not familiar with SIMD, will try to review this patch.\n\n\nRegards,\nZhang Mingli\n\n\n\n\n",
"msg_date": "Tue, 26 Jul 2022 09:50:07 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On 2022-07-25 12:04:19 -0700, Nathan Bossart wrote:\n> On Sat, Jul 16, 2022 at 08:59:57PM -0700, Nathan Bossart wrote:\n> > On Fri, Jul 15, 2022 at 01:08:57PM -0700, Andres Freund wrote:\n> >> I wonder if we additionally / alternatively could use a faster method of\n> >> searching the array linearly, e.g. using SIMD.\n> > \n> > I looked into using SIMD. The patch is attached, but it is only intended\n> > for benchmarking purposes and isn't anywhere close to being worth serious\n> > review. There may be a simpler/better way to implement the linear search,\n> > but this seemed to work well. Overall, SIMD provided a decent improvement.\n> > I had to increase the number of writers quite a bit in order to demonstrate\n> > where the hash tables began winning. Here are the numbers:\n> > \n> > writers head simd hash\n> > 256 663 632 694\n> > 512 530 618 637\n> > 768 489 544 573\n> > 1024 364 508 562\n> > 2048 185 306 485\n> > 4096 146 197 441\n> > \n> > While it is unsurprising that the hash tables perform the best, there are a\n> > couple of advantages to SIMD that might make that approach worth\n> > considering. For one, there's really no overhead (i.e., you don't need to\n> > sort the array or build a hash table), so we can avoid picking an arbitrary\n> > threshold and just have one code path. Also, a SIMD implementation for a\n> > linear search through an array of integers could likely be easily reused\n> > elsewhere.\n> \n> From the discussion thus far, it seems there is interest in optimizing\n> [sub]xip lookups, so I'd like to spend some time moving it forward. I\n> think the biggest open question is which approach to take. Both the SIMD\n> and hash table approaches seem viable, but I think I prefer the SIMD\n> approach at the moment (see the last paragraph of quoted text for the\n> reasons). What do folks think?\n\nAgreed on all points.\n\n\n",
"msg_date": "Tue, 26 Jul 2022 11:19:06 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 11:19:06AM -0700, Andres Freund wrote:\n> On 2022-07-25 12:04:19 -0700, Nathan Bossart wrote:\n>> From the discussion thus far, it seems there is interest in optimizing\n>> [sub]xip lookups, so I'd like to spend some time moving it forward. I\n>> think the biggest open question is which approach to take. Both the SIMD\n>> and hash table approaches seem viable, but I think I prefer the SIMD\n>> approach at the moment (see the last paragraph of quoted text for the\n>> reasons). What do folks think?\n> \n> Agreed on all points.\n\nGreat! Here is a new patch. A couple notes:\n\n * I briefly looked into seeing whether auto-vectorization was viable and\n concluded it was not for these loops.\n\n * I borrowed USE_SSE2 from one of John Naylor's patches [0]. I'm not sure\n whether this is committable, so I would welcome thoughts on the proper\n form. Given the comment says that SSE2 is supported by all x86-64\n hardware, I'm not seeing why we need the SSE 4.2 checks. Is it not\n enough to check for __x86_64__ and _M_AMD64?\n\n * I haven't looked into adding an ARM implementation yet.\n\n[0] https://postgr.es/m/CAFBsxsHko7yc8A-2PpjQ%3D2StomXF%2BT2jgKF%3DWaMFZWi8CvV7hA%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 28 Jul 2022 14:34:23 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Fri, Jul 29, 2022 at 4:34 AM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n> * I briefly looked into seeing whether auto-vectorization was viable and\n> concluded it was not for these loops.\n>\n> * I borrowed USE_SSE2 from one of John Naylor's patches [0]. I'm not\nsure\n> whether this is committable,\n\nI'll be the first to say it's not committable and needs some thought. Since\nthere are several recently proposed patches that take advantage of SSE2, it\nseems time for me to open a new thread and get that prerequisite settled.\nI'll do that next week.\n\n> so I would welcome thoughts on the proper\n> form. Given the comment says that SSE2 is supported by all x86-64\n> hardware, I'm not seeing why we need the SSE 4.2 checks. Is it not\n> enough to check for __x86_64__ and _M_AMD64?\n\nThat's enough for emitting instructions that the target CPU can run, but\nsays nothing (I think) about the host compiler's ability to understand the\nintrinsics and associated headers. The architecture is old enough that\nmaybe zero compilers in the buildfarm that target AMD64 fail to understand\nSSE2 intrinsics, but I hadn't looked into it. The SSE 4.2 intrinsics check\nis not necessary, but it was sufficient and already present, so I borrowed\nit for the PoC.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Fri, Jul 29, 2022 at 4:34 AM Nathan Bossart <nathandbossart@gmail.com> wrote:> * I briefly looked into seeing whether auto-vectorization was viable and> concluded it was not for these loops.>> * I borrowed USE_SSE2 from one of John Naylor's patches [0]. I'm not sure> whether this is committable, I'll be the first to say it's not committable and needs some thought. Since there are several recently proposed patches that take advantage of SSE2, it seems time for me to open a new thread and get that prerequisite settled. I'll do that next week.> so I would welcome thoughts on the proper> form. 
Given the comment says that SSE2 is supported by all x86-64> hardware, I'm not seeing why we need the SSE 4.2 checks. Is it not> enough to check for __x86_64__ and _M_AMD64?That's enough for emitting instructions that the target CPU can run, but says nothing (I think) about the host compiler's ability to understand the intrinsics and associated headers. The architecture is old enough that maybe zero compilers in the buildfarm that target AMD64 fail to understand SSE2 intrinsics, but I hadn't looked into it. The SSE 4.2 intrinsics check is not necessary, but it was sufficient and already present, so I borrowed it for the PoC.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Sat, 30 Jul 2022 12:02:02 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Sat, Jul 30, 2022 at 12:02:02PM +0700, John Naylor wrote:\n> On Fri, Jul 29, 2022 at 4:34 AM Nathan Bossart <nathandbossart@gmail.com>\n> wrote:\n>> * I borrowed USE_SSE2 from one of John Naylor's patches [0]. I'm not\n> sure\n>> whether this is committable,\n> \n> I'll be the first to say it's not committable and needs some thought. Since\n> there are several recently proposed patches that take advantage of SSE2, it\n> seems time for me to open a new thread and get that prerequisite settled.\n> I'll do that next week.\n\nAwesome. I will help test and review.\n\n>> so I would welcome thoughts on the proper\n>> form. Given the comment says that SSE2 is supported by all x86-64\n>> hardware, I'm not seeing why we need the SSE 4.2 checks. Is it not\n>> enough to check for __x86_64__ and _M_AMD64?\n> \n> That's enough for emitting instructions that the target CPU can run, but\n> says nothing (I think) about the host compiler's ability to understand the\n> intrinsics and associated headers. The architecture is old enough that\n> maybe zero compilers in the buildfarm that target AMD64 fail to understand\n> SSE2 intrinsics, but I hadn't looked into it. The SSE 4.2 intrinsics check\n> is not necessary, but it was sufficient and already present, so I borrowed\n> it for the PoC.\n\nGot it, makes sense.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 29 Jul 2022 22:38:11 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Fri, Jul 29, 2022 at 10:38:11PM -0700, Nathan Bossart wrote:\n> On Sat, Jul 30, 2022 at 12:02:02PM +0700, John Naylor wrote:\n>> I'll be the first to say it's not committable and needs some thought. Since\n>> there are several recently proposed patches that take advantage of SSE2, it\n>> seems time for me to open a new thread and get that prerequisite settled.\n>> I'll do that next week.\n> \n> Awesome. I will help test and review.\n\nWhile this prerequisite is worked out [0], here is a new patch set. I've\nadded an 0002 in which I've made use of the proposed SSE2 linear search\nfunction in several other areas. I haven't done any additional performance\nanalysis, and it's likely I'm missing some eligible code locations, but at\nthe very least, this demonstrates the reusability of the new function.\n\n[0] https://postgr.es/m/CAFBsxsE2G_H_5Wbw%2BNOPm70-BK4xxKf86-mRzY%3DL2sLoQqM%2B-Q%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 2 Aug 2022 15:13:01 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "Hi,\n\nFWIW, I'd split the introduction of the helper and the use of it in snapmgr\ninto separate patches.\n\n\nOn 2022-08-02 15:13:01 -0700, Nathan Bossart wrote:\n> diff --git a/src/include/c.h b/src/include/c.h\n> index d35405f191..2c1a47bc28 100644\n> --- a/src/include/c.h\n> +++ b/src/include/c.h\n> @@ -371,6 +371,14 @@ typedef void (*pg_funcptr_t) (void);\n> #endif\n> #endif\n> \n> +/*\n> + * Are SSE2 intrinsics available?\n> + */\n> +#if (defined(__x86_64__) || defined(_M_AMD64))\n> +#include <emmintrin.h>\n> +#define USE_SSE2\n> +#endif\n> +\n\nIt doesn't strike me as a good idea to include this in every single\ntranslation unit in pg. That header (+dependencies) isn't small.\n\nI'm on board with normalizing defines for SSE availability somewhere central\nthough.\n\n\n> +/*\n> + * pg_linearsearch_uint32\n> + *\n> + * Returns the address of the first element in 'base' that equals 'key', or\n> + * NULL if no match is found.\n> + */\n> +#ifdef USE_SSE2\n> +pg_attribute_no_sanitize_alignment()\n> +#endif\n\nWhat's the deal with this annotation? Needs a comment.\n\n\n> +static inline uint32 *\n> +pg_linearsearch_uint32(uint32 key, uint32 *base, uint32 nelem)\n\nHm. I suspect this could be a bit faster if we didn't search for the offset,\nbut just for presence in the array? Most users don't need the offset.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 2 Aug 2022 15:55:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Tue, Aug 02, 2022 at 03:55:39PM -0700, Andres Freund wrote:\n> FWIW, I'd split the introduction of the helper and the use of it in snapmgr\n> into separate patches.\n\nWill do.\n\n> On 2022-08-02 15:13:01 -0700, Nathan Bossart wrote:\n>> diff --git a/src/include/c.h b/src/include/c.h\n>> index d35405f191..2c1a47bc28 100644\n>> --- a/src/include/c.h\n>> +++ b/src/include/c.h\n>> @@ -371,6 +371,14 @@ typedef void (*pg_funcptr_t) (void);\n>> #endif\n>> #endif\n>> \n>> +/*\n>> + * Are SSE2 intrinsics available?\n>> + */\n>> +#if (defined(__x86_64__) || defined(_M_AMD64))\n>> +#include <emmintrin.h>\n>> +#define USE_SSE2\n>> +#endif\n>> +\n> \n> It doesn't strike me as a good idea to include this in every single\n> translation unit in pg. That header (+dependencies) isn't small.\n> \n> I'm on board with normalizing defines for SSE availability somewhere central\n> though.\n\nYeah, this is just a temporary hack for now. It'll go away once the\ndefines for SSE2 availability are committed.\n\n>> +/*\n>> + * pg_linearsearch_uint32\n>> + *\n>> + * Returns the address of the first element in 'base' that equals 'key', or\n>> + * NULL if no match is found.\n>> + */\n>> +#ifdef USE_SSE2\n>> +pg_attribute_no_sanitize_alignment()\n>> +#endif\n> \n> What's the deal with this annotation? Needs a comment.\n\nWill do. c.h suggests that this should only be used for x86-specific code.\n\n>> +static inline uint32 *\n>> +pg_linearsearch_uint32(uint32 key, uint32 *base, uint32 nelem)\n> \n> Hm. I suspect this could be a bit faster if we didn't search for the offset,\n> but just for presence in the array? Most users don't need the offset.\n\nJust under half of the callers in 0002 require the offset, but I don't know\nif any of those are worth optimizing in the first place. I'll change it\nfor now. It's easy enough to add it back in the future if required.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 2 Aug 2022 16:43:57 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Wed, Aug 3, 2022 at 6:43 AM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n> Just under half of the callers in 0002 require the offset, but I don't\nknow\n> if any of those are worth optimizing in the first place. I'll change it\n> for now. It's easy enough to add it back in the future if required.\n\nYeah, some of those callers will rarely have more than several elements to\nsearch in the first place, or aren't performance-sensitive.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Aug 3, 2022 at 6:43 AM Nathan Bossart <nathandbossart@gmail.com> wrote:> Just under half of the callers in 0002 require the offset, but I don't know> if any of those are worth optimizing in the first place. I'll change it> for now. It's easy enough to add it back in the future if required.Yeah, some of those callers will rarely have more than several elements to search in the first place, or aren't performance-sensitive.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 3 Aug 2022 12:36:20 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "Here is a new patch set. 0001 is the currently-proposed patch from the\nother thread [0] for determining SSE2 support. 0002 introduces the\noptimized linear search function. And 0003 makes use of the new function\nfor the [sub]xip lookups in XidInMVCCSnapshot().\n\n[0] https://postgr.es/m/CAFBsxsGktHL7%3DJXbgnKTi_uL0VRPcH4FSAqc6yK-3%2BJYfqPPjA%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 3 Aug 2022 10:11:59 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-02 16:43:57 -0700, Nathan Bossart wrote:\n> >> +/*\n> >> + * pg_linearsearch_uint32\n> >> + *\n> >> + * Returns the address of the first element in 'base' that equals 'key', or\n> >> + * NULL if no match is found.\n> >> + */\n> >> +#ifdef USE_SSE2\n> >> +pg_attribute_no_sanitize_alignment()\n> >> +#endif\n> > \n> > What's the deal with this annotation? Needs a comment.\n> \n> Will do. c.h suggests that this should only be used for x86-specific code.\n\nWhat I'm asking is why the annotation is needed at all?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 3 Aug 2022 11:06:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Wed, Aug 03, 2022 at 11:06:58AM -0700, Andres Freund wrote:\n> On 2022-08-02 16:43:57 -0700, Nathan Bossart wrote:\n>> >> +#ifdef USE_SSE2\n>> >> +pg_attribute_no_sanitize_alignment()\n>> >> +#endif\n>> > \n>> > What's the deal with this annotation? Needs a comment.\n>> \n>> Will do. c.h suggests that this should only be used for x86-specific code.\n> \n> What I'm asking is why the annotation is needed at all?\n\nUpon further inspection, I don't think this is needed. I originally\nborrowed it from the SSE version of the CRC code, but while it is trivial\nto produce alignment failures with the CRC code, I haven't been able to\ngenerate any with my patches. Looking at the code, I'm not sure why I was\nworried about this in the first place. Please pardon the brain fade.\n\nHere is a new patch set without the annotation.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 3 Aug 2022 13:25:40 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 3:25 AM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n> Here is a new patch set without the annotation.\n\nWere you considering adding the new function to simd.h now that that's\ncommitted? It's a bit up in the air what should go in there, but this new\nfunction is low-level and generic enough to be a candidate...\n\nI wonder if the \"pg_\" prefix is appropriate here, as that is most often\nused for things that hide specific details *and* where the base name would\nclash, like OS calls or C library functions. I'm not quite sure where the\nline is drawn, but I mean that \"linearsearch\" is a generic algorithm and\nnot a specific API we are implementing, if that makes sense.\n\nThe suffix \"_uint32\" might be more succinct as \"32\" (cf pg_bswap32(),\npg_popcount32, etc). We'll likely want to search bytes sometime, so\nsomething to keep in mind as far as naming (\"_int\" vs \"_byte\"?).\n\nI'm not a fan of \"its\" as a variable name, and I'm curious what it's\nintended to convey.\n\nAll the __m128i vars could technically be declared const, although I think\nit doesn't matter -- it's just a hint to the reader.\n\nOut of curiosity do we know how much we get by loading four registers\nrather than two?\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Aug 4, 2022 at 3:25 AM Nathan Bossart <nathandbossart@gmail.com> wrote:> Here is a new patch set without the annotation.Were you considering adding the new function to simd.h now that that's committed? It's a bit up in the air what should go in there, but this new function is low-level and generic enough to be a candidate...I wonder if the \"pg_\" prefix is appropriate here, as that is most often used for things that hide specific details *and* where the base name would clash, like OS calls or C library functions. 
I'm not quite sure where the line is drawn, but I mean that \"linearsearch\" is a generic algorithm and not a specific API we are implementing, if that makes sense.The suffix \"_uint32\" might be more succinct as \"32\" (cf pg_bswap32(), pg_popcount32, etc). We'll likely want to search bytes sometime, so something to keep in mind as far as naming (\"_int\" vs \"_byte\"?).I'm not a fan of \"its\" as a variable name, and I'm curious what it's intended to convey.All the __m128i vars could technically be declared const, although I think it doesn't matter -- it's just a hint to the reader.Out of curiosity do we know how much we get by loading four registers rather than two?--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 4 Aug 2022 14:58:14 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Thu, Aug 04, 2022 at 02:58:14PM +0700, John Naylor wrote:\n> Were you considering adding the new function to simd.h now that that's\n> committed? It's a bit up in the air what should go in there, but this new\n> function is low-level and generic enough to be a candidate...\n\nI don't have a strong opinion. I went with a separate file because I\nenvisioned a variety of possible linear search functions (e.g., char,\nuint16, uint32), and some might not use SIMD instructions. Futhermore, it\nseemed less obvious to look in simd.h for linear search functions. That\nbeing said, it might make sense to just add it here for now.\n\n> I wonder if the \"pg_\" prefix is appropriate here, as that is most often\n> used for things that hide specific details *and* where the base name would\n> clash, like OS calls or C library functions. I'm not quite sure where the\n> line is drawn, but I mean that \"linearsearch\" is a generic algorithm and\n> not a specific API we are implementing, if that makes sense.\n\nYeah, I was concerned about clashing with lsearch() and lfind(). I will\ndrop the prefix.\n\n> The suffix \"_uint32\" might be more succinct as \"32\" (cf pg_bswap32(),\n> pg_popcount32, etc). We'll likely want to search bytes sometime, so\n> something to keep in mind as far as naming (\"_int\" vs \"_byte\"?).\n\nHow about something like lsearch32 or linearsearch32?\n\n> I'm not a fan of \"its\" as a variable name, and I'm curious what it's\n> intended to convey.\n\nIt's short for \"iterations.\" I'll spell it out completely to avoid this\nkind of confusion.\n\n> All the __m128i vars could technically be declared const, although I think\n> it doesn't matter -- it's just a hint to the reader.\n\nWill do.\n\n> Out of curiosity do we know how much we get by loading four registers\n> rather than two?\n\nThe small program I've been using for testing takes about 40% more time\nwith the two register approach. 
The majority of this test involves\nsearching for elements that either don't exist in the array or that live\nnear the end of the array, so this is probably close to the worst case.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 4 Aug 2022 15:15:51 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Fri, Aug 5, 2022 at 5:15 AM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n>\n> On Thu, Aug 04, 2022 at 02:58:14PM +0700, John Naylor wrote:\n> > Were you considering adding the new function to simd.h now that that's\n> > committed? It's a bit up in the air what should go in there, but this\nnew\n> > function is low-level and generic enough to be a candidate...\n>\n> I don't have a strong opinion. I went with a separate file because I\n> envisioned a variety of possible linear search functions (e.g., char,\n> uint16, uint32), and some might not use SIMD instructions. Futhermore, it\n> seemed less obvious to look in simd.h for linear search functions.\n\nThat is a good point. Maybe potential helpers in simd.h should only deal\nspecifically with vector registers, with it's users providing C fallbacks.\nI don't have any good ideas of where to put the new function, though.\n\n> > I wonder if the \"pg_\" prefix is appropriate here, as that is most often\n> > used for things that hide specific details *and* where the base name\nwould\n> > clash, like OS calls or C library functions. I'm not quite sure where\nthe\n> > line is drawn, but I mean that \"linearsearch\" is a generic algorithm and\n> > not a specific API we are implementing, if that makes sense.\n>\n> Yeah, I was concerned about clashing with lsearch() and lfind(). I will\n> drop the prefix.\n\nHmm, I didn't know about those. lfind() is similar enough that it would\nmake sense to have pg_lfind32() etc in src/include/port/pg_lsearch.h, at\nleast for the v4 version that returns the pointer. We already declare\nbsearch_arg() in src/include/port.h and that's another kind of array\nsearch. Returning bool is different enough to have a different name.\npg_lfind32_ispresent()? *_noptr()? 
Meh.\n\nHaving said all that, the man page under BUGS [1] says \"The naming is\nunfortunate.\"\n\n> > Out of curiosity do we know how much we get by loading four registers\n> > rather than two?\n>\n> The small program I've been using for testing takes about 40% more time\n> with the two register approach. The majority of this test involves\n> searching for elements that either don't exist in the array or that live\n> near the end of the array, so this is probably close to the worst case.\n\nOk, sounds good.\n\n[1] https://man7.org/linux/man-pages/man3/lsearch.3.html#BUGS\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Fri, Aug 5, 2022 at 5:15 AM Nathan Bossart <nathandbossart@gmail.com> wrote:>> On Thu, Aug 04, 2022 at 02:58:14PM +0700, John Naylor wrote:> > Were you considering adding the new function to simd.h now that that's> > committed? It's a bit up in the air what should go in there, but this new> > function is low-level and generic enough to be a candidate...>> I don't have a strong opinion. I went with a separate file because I> envisioned a variety of possible linear search functions (e.g., char,> uint16, uint32), and some might not use SIMD instructions. Futhermore, it> seemed less obvious to look in simd.h for linear search functions.That is a good point. Maybe potential helpers in simd.h should only deal specifically with vector registers, with it's users providing C fallbacks. I don't have any good ideas of where to put the new function, though.> > I wonder if the \"pg_\" prefix is appropriate here, as that is most often> > used for things that hide specific details *and* where the base name would> > clash, like OS calls or C library functions. I'm not quite sure where the> > line is drawn, but I mean that \"linearsearch\" is a generic algorithm and> > not a specific API we are implementing, if that makes sense.>> Yeah, I was concerned about clashing with lsearch() and lfind(). I will> drop the prefix.Hmm, I didn't know about those. 
lfind() is similar enough that it would make sense to have pg_lfind32() etc in src/include/port/pg_lsearch.h, at least for the v4 version that returns the pointer. We already declare bsearch_arg() in src/include/port.h and that's another kind of array search. Returning bool is different enough to have a different name. pg_lfind32_ispresent()? *_noptr()? Meh.Having said all that, the man page under BUGS [1] says \"The naming is unfortunate.\"> > Out of curiosity do we know how much we get by loading four registers> > rather than two?>> The small program I've been using for testing takes about 40% more time> with the two register approach. The majority of this test involves> searching for elements that either don't exist in the array or that live> near the end of the array, so this is probably close to the worst case.Ok, sounds good.[1] https://man7.org/linux/man-pages/man3/lsearch.3.html#BUGS--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 5 Aug 2022 11:02:15 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Fri, Aug 05, 2022 at 11:02:15AM +0700, John Naylor wrote:\n> That is a good point. Maybe potential helpers in simd.h should only deal\n> specifically with vector registers, with it's users providing C fallbacks.\n> I don't have any good ideas of where to put the new function, though.\n\nI moved it to src/include/port for now since that's where files like\npg_bswap.h live.\n\n> Hmm, I didn't know about those. lfind() is similar enough that it would\n> make sense to have pg_lfind32() etc in src/include/port/pg_lsearch.h, at\n> least for the v4 version that returns the pointer. We already declare\n> bsearch_arg() in src/include/port.h and that's another kind of array\n> search. Returning bool is different enough to have a different name.\n> pg_lfind32_ispresent()? *_noptr()? Meh.\n> \n> Having said all that, the man page under BUGS [1] says \"The naming is\n> unfortunate.\"\n\nI went ahead and renamed it to pg_lfind32() and switched it back to\nreturning the pointer. That felt the cleanest from the naming perspective,\nbut as Andres noted, it might not be as fast as just looking for the\npresence of the element. I modified my small testing program to perform\nmany searches on small arrays, and I wasn't able to identify any impact, so\nperhaps thіs is good enough.\n\nThoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 5 Aug 2022 13:25:10 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-05 13:25:10 -0700, Nathan Bossart wrote:\n> I went ahead and renamed it to pg_lfind32() and switched it back to\n> returning the pointer. That felt the cleanest from the naming perspective,\n> but as Andres noted, it might not be as fast as just looking for the\n> presence of the element. I modified my small testing program to perform\n> many searches on small arrays, and I wasn't able to identify any impact, so\n> perhaps thіs is good enough.\n\nWhy on small arrays? I'd expect a difference mainly if it there's at least a\nfew iterations.\n\nBut mainly I'd expect to find a difference if the SIMD code were optimized a\nfurther on the basis of not needing to return the offset. E.g. by\nreplacing _mm_packs_epi32 with _mm_or_si128, that's cheaper.\n\n- Andres\n\n\n",
"msg_date": "Fri, 5 Aug 2022 15:04:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Fri, Aug 05, 2022 at 03:04:34PM -0700, Andres Freund wrote:\n> But mainly I'd expect to find a difference if the SIMD code were optimized a\n> further on the basis of not needing to return the offset. E.g. by\n> replacing _mm_packs_epi32 with _mm_or_si128, that's cheaper.\n\nI haven't been able to find a significant difference between the two. If\nanything, the _mm_packs_epi* approach actually seems to be slightly faster\nin some cases. For something marginally more concrete, I compared the two\nin perf-top and saw the following for the relevant instructions:\n\n_mm_packs_epi*:\n\t0.19 │ packssdw %xmm1,%xmm0\n\t0.62 │ packssdw %xmm1,%xmm0\n\t7.14 │ packsswb %xmm1,%xmm0\n\n_mm_or_si128:\n\t1.52 │ por %xmm1,%xmm0\n\t2.05 │ por %xmm1,%xmm0\n\t5.66 │ por %xmm1,%xmm0\n\nI also tried a combined approach where I replaced _mm_packs_epi16 with\n_mm_or_si128:\n\t1.16 │ packssdw %xmm1,%xmm0\n\t1.47 │ packssdw %xmm1,%xmm0\n\t8.17 │ por %xmm1,%xmm0\n\nOf course, this simplistic analysis leaves out the impact of the\nsurrounding instructions, but it seems to support the idea that the\n_mm_packs_epi* approach might have a slight edge.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 6 Aug 2022 11:13:26 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Sat, Aug 06, 2022 at 11:13:26AM -0700, Nathan Bossart wrote:\n> On Fri, Aug 05, 2022 at 03:04:34PM -0700, Andres Freund wrote:\n>> But mainly I'd expect to find a difference if the SIMD code were optimized a\n>> further on the basis of not needing to return the offset. E.g. by\n>> replacing _mm_packs_epi32 with _mm_or_si128, that's cheaper.\n> \n> I haven't been able to find a significant difference between the two. If\n> anything, the _mm_packs_epi* approach actually seems to be slightly faster\n> in some cases. For something marginally more concrete, I compared the two\n> in perf-top and saw the following for the relevant instructions:\n\nNevermind, I'm wrong. When compiled with -O2, it uses more than just the\nxmm0 and xmm1 registers, and the _mm_or_si128 approach consistently shows a\nspeedup of slightly more than 5%. Patches attached.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 6 Aug 2022 14:25:22 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Sun, Aug 7, 2022 at 4:25 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> [v8]\n\nOkay, I think it's basically in good shape. Since it should be a bit\nfaster than a couple versions ago, would you be up for retesting with\nthe original test having 8 to 512 writers? And also add the const\nmarkers we discussed upthread? Aside from that, I plan to commit this\nweek unless there is further bikeshedding.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Aug 2022 13:46:48 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Mon, Aug 8, 2022 at 12:17 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n>\n> On Sun, Aug 7, 2022 at 4:25 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >\n> > [v8]\n>\n> Okay, I think it's basically in good shape. Since it should be a bit\n> faster than a couple versions ago, would you be up for retesting with\n> the original test having 8 to 512 writers? And also add the const\n> markers we discussed upthread? Aside from that, I plan to commit this\n> week unless there is further bikeshedding.\n\nI quickly reviewed v8 patch set, few comments:\n\n1) pg_lfind32 - why just uint32? If it's not possible to define\nfunctions for char, unsigned char, int16, uint16, int32, int64, uint64\nand so on, can we add a few comments around that? Also, the comments\ncan talk about if the base type or element data type of array or data\ntype of key matters to use pg_lfind32.\n\n2) I think this is not just for the remaining elements but also for\nnon-USE_SSE2 cases. Also, please specify in which cases we reach here\nfor USE_SSE2 cases.\n+ /* Process the remaining elements the slow way. */\n\n3) Can pg_lfind32 return the index of the key found, for instance to\nuse it for setting/resetting the found element in the array?\n+ * pg_lfind32\n+ *\n+ * Returns true if there is an element in 'base' that equals 'key'. Otherwise,\n+ * returns false.\n+ */\n+static inline bool\n+pg_lfind32(uint32 key, uint32 *base, uint32 nelem)\n\n4) Can we, right away, use this API to replace linear search, say, in\nSimpleLruReadPage_ReadOnly(), ATExecAttachPartitionIdx(),\nAfterTriggerSetState()? I'm sure I might be missing other places, but\ncan we replace the possible found areas with the new function?\n\n--\nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Mon, 8 Aug 2022 12:56:28 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Mon, Aug 8, 2022 at 2:26 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n>\n> 1) pg_lfind32 - why just uint32? If it's not possible to define\n> functions for char, unsigned char, int16, uint16, int32, int64, uint64\n> and so on, can we add a few comments around that? Also, the comments\n\nFuture work, as far as I'm concerned. I'm interested in using a char\nversion for json strings.\n\n> 3) Can pg_lfind32 return the index of the key found, for instance to\n> use it for setting/resetting the found element in the array?\n\nThat was just discussed. It's slightly faster not to return an index.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Aug 2022 16:00:11 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Mon, Aug 8, 2022 at 2:30 PM John Naylor <john.naylor@enterprisedb.com> wrote:\n>\n> On Mon, Aug 8, 2022 at 2:26 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> > 3) Can pg_lfind32 return the index of the key found, for instance to\n> > use it for setting/resetting the found element in the array?\n>\n> That was just discussed. It's slightly faster not to return an index.\n\nI haven't looked upthread, please share the difference. How about\nanother version of the function that returns the index too?\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Mon, 8 Aug 2022 14:34:58 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Mon, Aug 08, 2022 at 01:46:48PM +0700, John Naylor wrote:\n> Okay, I think it's basically in good shape. Since it should be a bit\n> faster than a couple versions ago, would you be up for retesting with\n> the original test having 8 to 512 writers?\n\nSure thing. The results are similar. As before, the improvements are most\nvisible when the arrays are large.\n\n\twriters head patch\n\t8 672 680\n\t16 639 664\n\t32 701 689\n\t64 705 703\n\t128 628 653\n\t256 576 627\n\t512 530 584\n\t768 450 536\n\t1024 350 494\n\n> And also add the const\n> markers we discussed upthread?\n\nOops, sorry about that. This is done in v9.\n\n> Aside from that, I plan to commit this\n> week unless there is further bikeshedding.\n\nGreat, thanks.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 8 Aug 2022 15:32:54 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Mon, Aug 08, 2022 at 12:56:28PM +0530, Bharath Rupireddy wrote:\n> 1) pg_lfind32 - why just uint32? If it's not possible to define\n> functions for char, unsigned char, int16, uint16, int32, int64, uint64\n> and so on, can we add a few comments around that? Also, the comments\n> can talk about if the base type or element data type of array or data\n> type of key matters to use pg_lfind32.\n\nI figured that we'd add functions for other types when needed. I\nconsidered making the new function generic by adding an argument for the\nelement size. Then, we could branch to optimized routines based on the\nelement size (e.g., pg_lfind() would call pg_lfind32() if the element size\nwas 4 bytes). However, that seemed like more complexity than is required,\nand it's probably nice to avoid the extra branching.\n\n> 2) I think this is not just for the remaining elements but also for\n> non-USE_SSE2 cases. Also, please specify in which cases we reach here\n> for USE_SSE2 cases.\n> + /* Process the remaining elements the slow way. */\n\nWell, in the non-SSE2 case, all of the elements are remaining at this\npoint. :)\n\n> 3) Can pg_lfind32 return the index of the key found, for instance to\n> use it for setting/resetting the found element in the array?\n\nAs discussed upthread, only returning whether the element is present in the\narray is slightly faster. If we ever needed a version that returned the\naddress of the matching element, we could reevaluate whether this small\nboost was worth creating a separate function or if we should just modify\npg_lfind32() to be a tad slower. I don't think we need to address that\nnow, though.\n\n> 4) Can we, right away, use this API to replace linear search, say, in\n> SimpleLruReadPage_ReadOnly(), ATExecAttachPartitionIdx(),\n> AfterTriggerSetState()? 
I'm sure I might be missing other places, but\n> can we replace the possible found areas with the new function?\n\nI had found a few eligible linear searches earlier [0], but I haven't done\nany performance analysis that proved such changes were worthwhile. While\nsubstituting linear searches with pg_lfind32() is probably an improvement\nin most cases, I think we ought to demonstrate the benefits for each one.\n\n[0] https://postgr.es/m/20220802221301.GA742739%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 8 Aug 2022 16:07:16 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 7:33 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Mon, Aug 08, 2022 at 01:46:48PM +0700, John Naylor wrote:\n> > Okay, I think it's basically in good shape. Since it should be a bit\n> > faster than a couple versions ago, would you be up for retesting with\n> > the original test having 8 to 512 writers?\n>\n> Sure thing. The results are similar. As before, the improvements are most\n> visible when the arrays are large.\n>\n> writers head patch\n> 8 672 680\n> 16 639 664\n> 32 701 689\n> 64 705 703\n> 128 628 653\n> 256 576 627\n> 512 530 584\n> 768 450 536\n> 1024 350 494\n>\n> > And also add the const\n> > markers we discussed upthread?\n>\n> Oops, sorry about that. This is done in v9.\n>\n> > Aside from that, I plan to commit this\n> > week unless there is further bikeshedding.\n>\n> Great, thanks.\n\nThe patch looks good to me. One minor point is:\n\n+ * IDENTIFICATION\n+ * src/port/pg_lfind.h\n\nThe path doesn't match to the actual file path, src/include/port/pg_lfind.h.\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 9 Aug 2022 10:57:44 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Tue, Aug 09, 2022 at 10:57:44AM +0900, Masahiko Sawada wrote:\n> The patch looks good to me. One minor point is:\n\nThanks for taking a look.\n\n> + * IDENTIFICATION\n> + * src/port/pg_lfind.h\n> \n> The path doesn't match to the actual file path, src/include/port/pg_lfind.h.\n\nFixed in v10.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 8 Aug 2022 20:51:29 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 4:37 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Mon, Aug 08, 2022 at 12:56:28PM +0530, Bharath Rupireddy wrote:\n> > 1) pg_lfind32 - why just uint32? If it's not possible to define\n> > functions for char, unsigned char, int16, uint16, int32, int64, uint64\n> > and so on, can we add a few comments around that? Also, the comments\n> > can talk about if the base type or element data type of array or data\n> > type of key matters to use pg_lfind32.\n>\n> I figured that we'd add functions for other types when needed. I\n> considered making the new function generic by adding an argument for the\n> element size. Then, we could branch to optimized routines based on the\n> element size (e.g., pg_lfind() would call pg_lfind32() if the element size\n> was 4 bytes). However, that seemed like more complexity than is required,\n> and it's probably nice to avoid the extra branching.\n>\n> > 3) Can pg_lfind32 return the index of the key found, for instance to\n> > use it for setting/resetting the found element in the array?\n>\n> As discussed upthread, only returning whether the element is present in the\n> array is slightly faster. If we ever needed a version that returned the\n> address of the matching element, we could reevaluate whether this small\n> boost was worth creating a separate function or if we should just modify\n> pg_lfind32() to be a tad slower. I don't think we need to address that\n> now, though.\n\nIsn't it a good idea to capture the above two points as comments in\nport/pg_lfind.h just to not lose track of it? I know these are present\nin the hackers thread, but having them in the form of comments helps\ndevelopers who attempt to change or use the new function.\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Tue, 9 Aug 2022 09:40:15 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Tue, Aug 09, 2022 at 09:40:15AM +0530, Bharath Rupireddy wrote:\n> Isn't it a good idea to capture the above two points as comments in\n> port/pg_lfind.h just to not lose track of it? I know these are present\n> in the hackers thread, but having them in the form of comments helps\n> developers who attempt to change or use the new function.\n\nHm. My first impression is that this is exactly the sort of information\nthat is better captured on the lists. I'm not sure that the lack of such\ncommentary really poses any threat for future changes, which would need to\nbe judged on their own merit, anyway.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 8 Aug 2022 21:43:17 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Tue, Aug 09, 2022 at 09:40:15AM +0530, Bharath Rupireddy wrote:\n>> Isn't it a good idea to capture the above two points as comments in\n>> port/pg_lfind.h just to not lose track of it? I know these are present\n>> in the hackers thread, but having them in the form of comments helps\n>> developers who attempt to change or use the new function.\n\n> Hm. My first impression is that this is exactly the sort of information\n> that is better captured on the lists. I'm not sure that the lack of such\n> commentary really poses any threat for future changes, which would need to\n> be judged on their own merit, anyway.\n\nIt's clearly unproductive (not to say impossible) to enumerate every\npossible alternative design and say why you didn't choose it. If\nthere's some particular \"obvious\" choice that you feel a need to\nrefute, then sure write a comment about that. Notably, if we used\nto do X and now do Y because X was found to be broken, then it's good\nto have a comment trail discouraging future hackers from reinventing\nX. But that doesn't lead to needing comments about an unrelated\noption Z.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Aug 2022 01:12:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 10:51 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> Fixed in v10.\n\nI decided I wasn't quite comfortable changing snapshot handling\nwithout further guarantees. To this end, 0002 in the attached v11 is\nan addendum that adds assert checking (also pgindent and some\ncomment-smithing). As I suspected, make check-world passes even with\npurposefully screwed-up coding. 0003 uses pg_lfind32 in syscache.c and\nI verified that sticking in the wrong answer will lead to a crash in\nassert-enabled builds in short order. I'd kind of like to throw this\n(or something else suitable) at the build farm first for that reason.\nIt's simpler than the qsort/qunique/binary search that was there\nbefore, so that's nice, but I've not tried to test performance.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 9 Aug 2022 13:21:41 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Tue, Aug 09, 2022 at 01:21:41PM +0700, John Naylor wrote:\n> I decided I wasn't quite comfortable changing snapshot handling\n> without further guarantees. To this end, 0002 in the attached v11 is\n> an addendum that adds assert checking (also pgindent and some\n> comment-smithing). As I suspected, make check-world passes even with\n> purposefully screwed-up coding. 0003 uses pg_lfind32 in syscache.c and\n> I verified that sticking in the wrong answer will lead to a crash in\n> assert-enabled builds in short order. I'd kind of like to throw this\n> (or something else suitable) at the build farm first for that reason.\n> It's simpler than the qsort/qunique/binary search that was there\n> before, so that's nice, but I've not tried to test performance.\n\nYour adjustments in 0002 seem reasonable to me. I think it makes sense to\nensure there is test coverage for pg_lfind32(), but I don't know if that\nsyscache code is the right choice. For non-USE_SSE2 builds, it might make\nthese lookups more expensive. I'll look around to see if there are any\nother suitable candidates.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 9 Aug 2022 13:00:37 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Tue, Aug 09, 2022 at 01:00:37PM -0700, Nathan Bossart wrote:\n> Your adjustments in 0002 seem reasonable to me. I think it makes sense to\n> ensure there is test coverage for pg_lfind32(), but I don't know if that\n> syscache code is the right choice. For non-USE_SSE2 builds, it might make\n> these lookups more expensive. I'll look around to see if there are any\n> other suitable candidates.\n\nOne option might be to create a small test module for pg_lfind32(). Here\nis an example.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 9 Aug 2022 17:13:43 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 5:00 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Tue, Aug 09, 2022 at 01:21:41PM +0700, John Naylor wrote:\n> > I decided I wasn't quite comfortable changing snapshot handling\n> > without further guarantees. To this end, 0002 in the attached v11 is\n> > an addendum that adds assert checking (also pgindent and some\n> > comment-smithing). As I suspected, make check-world passes even with\n> > purposefully screwed-up coding. 0003 uses pg_lfind32 in syscache.c and\n> > I verified that sticking in the wrong answer will lead to a crash in\n> > assert-enabled builds in short order. I'd kind of like to throw this\n> > (or something else suitable) at the build farm first for that reason.\n> > It's simpler than the qsort/qunique/binary search that was there\n> > before, so that's nice, but I've not tried to test performance.\n>\n> Your adjustments in 0002 seem reasonable to me. I think it makes sense to\n> ensure there is test coverage for pg_lfind32(), but I don't know if that\n> syscache code is the right choice. For non-USE_SSE2 builds, it might make\n> these lookups more expensive.\n\nI think that for non-USE_SSE2 builds, there is no additional overhead\nas all assertion-related code in pg_lfind32 depends on USE_SSE2.\n\n> I'll look around to see if there are any\n> other suitable candidates.\n\nAs you proposed, having a test module for that seems to be a good\nidea. We can add test codes for future optimizations that utilize SIMD\noperations.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 10 Aug 2022 11:24:44 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 7:13 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Tue, Aug 09, 2022 at 01:00:37PM -0700, Nathan Bossart wrote:\n> > Your adjustments in 0002 seem reasonable to me. I think it makes sense to\n> > ensure there is test coverage for pg_lfind32(), but I don't know if that\n> > syscache code is the right choice. For non-USE_SSE2 builds, it might make\n> > these lookups more expensive.\n\nYeah.\n\nOn Wed, Aug 10, 2022 at 9:25 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> I think that for non-USE_SSE2 builds, there is no additional overhead\n> as all assertion-related code in pg_lfind32 depends on USE_SSE2.\n\nNathan is referring to RelationSupportsSysCache() and\nRelationHasSysCache(). They currently use binary search and using\nlinear search on non-x86-64 platforms is probably slower.\n\n[Nathan again]\n> One option might be to create a small test module for pg_lfind32(). Here\n> is an example.\n\nLGTM, let's see what the buildfarm thinks of 0001.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Aug 2022 10:50:02 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 10:50:02AM +0700, John Naylor wrote:\n> LGTM, let's see what the buildfarm thinks of 0001.\n\nThanks! I haven't noticed any related buildfarm failures yet.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 10 Aug 2022 14:45:04 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Thu, Aug 11, 2022 at 4:46 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Wed, Aug 10, 2022 at 10:50:02AM +0700, John Naylor wrote:\n> > LGTM, let's see what the buildfarm thinks of 0001.\n>\n> Thanks! I haven't noticed any related buildfarm failures yet.\n\nI was waiting for all the Windows animals to report in, and it looks\nlike they have, so I've pushed 0002. Thanks for picking this topic up\nagain!\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Aug 2022 09:50:54 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
},
{
"msg_contents": "On Thu, Aug 11, 2022 at 09:50:54AM +0700, John Naylor wrote:\n> I was waiting for all the Windows animals to report in, and it looks\n> like they have, so I've pushed 0002. Thanks for picking this topic up\n> again!\n\nThanks for reviewing and committing.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 10 Aug 2022 22:18:50 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize lookups in snapshot [sub]xip arrays"
}
] |
[
{
"msg_contents": "Hackers,\n\nWe have noticed an issue when performing recovery with recovery_target = \n'immediate' when the latest timeline cannot be replayed to from the \nbackup (current) timeline.\n\nFirst, create two backups:\n\n$ pgbackrest --stanza=demo --type=full --start-fast backup\n$ pgbackrest --stanza=demo --type=full --start-fast backup\n$ pgbackrest info\n\n<snip>\n full backup: 20220713-175710F\n timestamp start/stop: 2022-07-13 17:57:10 / 2022-07-13 17:57:14\n wal start/stop: 000000010000000000000003 / \n000000010000000000000003\n database size: 23.2MB, database backup size: 23.2MB\n repo1: backup set size: 2.8MB, backup size: 2.8MB\n\n full backup: 20220713-175748F\n timestamp start/stop: 2022-07-13 17:57:48 / 2022-07-13 17:57:52\n wal start/stop: 000000010000000000000005 / \n000000010000000000000005\n database size: 23.2MB, database backup size: 23.2MB\n repo1: backup set size: 2.8MB, backup size: 2.8MB\n\nRestore the first backup:\n\n$ pg_ctlcluster 13 demo stop\n$ pgbackrest --stanza=demo --delta --set=20220713-175710F \n--type=immediate --target-action=promote restore\n\nRecovery settings:\n\n$ cat /var/lib/postgresql/13/demo/postgresql.auto.conf\n\n<snip>\nrestore_command = 'pgbackrest --stanza=demo archive-get %f \"%p\"'\nrecovery_target = 'immediate'\nrecovery_target_action = 'promote'\n\nStarting PostgreSQL performs recovery as expected:\n\n$ pg_ctlcluster 13 demo start\n$ cat /var/log/postgresql/postgresql-13-demo.log\n\nLOG: database system was interrupted; last known up at 2022-07-13 \n17:57:10 UTC\nLOG: starting point-in-time recovery to earliest consistent point\nLOG: restored log file \"000000010000000000000003\" from archive\nLOG: redo starts at 0/3000028\nLOG: consistent recovery state reached at 0/3000138\nLOG: recovery stopping after reaching consistency\nLOG: redo done at 0/3000138\nLOG: database system is ready to accept read only connections\nLOG: selected new timeline ID: 2\nLOG: archive recovery complete\nLOG: database 
system is ready to accept connections\n\nNow restore the second backup (recovery settings are identical):\n\n$ pg_ctlcluster 13 demo stop\n$ pgbackrest --stanza=demo --delta --set=20220713-175748F \n--type=immediate --target-action=promote restore\n\nRecovery now fails:\n\n$ pg_ctlcluster 13 demo start\n$ cat /var/log/postgresql/postgresql-13-demo.log\n\nLOG: database system was interrupted; last known up at 2022-07-13 \n17:57:48 UTC\nLOG: restored log file \"00000002.history\" from archive\nLOG: starting point-in-time recovery to earliest consistent point\nLOG: restored log file \"00000002.history\" from archive\nLOG: restored log file \"000000010000000000000005\" from archive\nFATAL: requested timeline 2 is not a child of this server's history\nDETAIL: Latest checkpoint is at 0/5000060 on timeline 1, but in the \nhistory of the requested timeline, the server forked off from that \ntimeline at 0/3000138.\nLOG: startup process (PID 511) exited with exit code 1\nLOG: aborting startup due to startup process failure\nLOG: database system is shut down\n\nWhile it is certainly true that timeline 2 cannot be replayed to from \ntimeline 1, it should not matter for an immediate recovery that stops at \nconsistency. No timeline switch will occur until promotion. Of course \nthe cluster could be shut down before promotion and the target changed, \nbut in that case couldn't timeline be adjusted at that point?\n\nThis works by default for PostgreSQL < 12 because the default timeline \nis current. Since PostgreSQL 12 the default has been latest, which does \nnot work by default.\n\nWhen a user does a number of recoveries it is pretty common for the \ntimelines to get in a state that will make most subsequent recoveries \nfail. 
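For reference, the full workaround configuration is just the recovery settings \nshown above plus one recovery_target_timeline line:\n\nrestore_command = 'pgbackrest --stanza=demo archive-get %f \"%p\"'\nrecovery_target = 'immediate'\nrecovery_target_timeline = 'current'\nrecovery_target_action = 'promote'\n\n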
We think it makes sense for recovery_target = 'immediate' to be a \nfail safe that always works no matter the state of the latest timeline.\n\nOur solution has been to force recovery_target_timeline = 'current' when \nrecovery_target = 'immediate', but it seems like this is something that \nshould be done in PostgreSQL instead.\n\nThoughts?\n\nRegards,\n-David\n\n\n",
"msg_date": "Wed, 13 Jul 2022 14:41:40 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": true,
"msg_subject": "Issue with recovery_target = 'immediate'"
},
{
"msg_contents": "At Wed, 13 Jul 2022 14:41:40 -0400, David Steele <david@pgmasters.net> wrote in \n> While it is certainly true that timeline 2 cannot be replayed to from\n> timeline 1, it should not matter for an immediate recovery that stops\n> at consistency. No timeline switch will occur until promotion. Of\n> course the cluster could be shut down before promotion and the target\n> changed, but in that case couldn't timeline be adjusted at that point?\n> \n> This works by default for PostgreSQL < 12 because the default timeline\n> is current. Since PostgreSQL 12 the default has been latest, which\n> does not work by default.\n> \n> When a user does a number of recoveries it is pretty common for the\n> timelines to get in a state that will make most subsequent recoveries\n> fail. We think it makes sense for recovery_target = 'immediate' to be\n> a fail safe that always works no matter the state of the latest\n> timeline.\n> \n> Our solution has been to force recovery_target_timeline = 'current'\n> when recovery_target = 'immediate', but it seems like this is\n> something that should be done in PostgreSQL instead.\n> \n> Thoughts?\n\nI think it is natural that recovery targets the most recent\nupdate by default. In that sense, at the time the admin decided to recover the\nserver from the first backup, the second backup is kind of dead, or at\nleast should be forgotten in future operations.\n\nEven if we want \"any\" backup usable, just re-targeting to the current\ntimeline after the timeline error looks kind of inconsistent with the\nbehavior mentioned above. To make \"dead\" backups behave like the\n\"live\" ones, we would need to check if the backup is in the history of\neach \"future\" timelines, then choose the latest timeline from them.\n\n# Mmm. I remember about a recent patch for pg_rewind to do the same...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 14 Jul 2022 17:26:32 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue with recovery_target = 'immediate'"
},
{
"msg_contents": "On 7/14/22 04:26, Kyotaro Horiguchi wrote:\n> At Wed, 13 Jul 2022 14:41:40 -0400, David Steele <david@pgmasters.net> wrote in\n>> While it is certainly true that timeline 2 cannot be replayed to from\n>> timeline 1, it should not matter for an immediate recovery that stops\n>> at consistency. No timeline switch will occur until promotion. Of\n>> course the cluster could be shut down before promotion and the target\n>> changed, but in that case couldn't timeline be adjusted at that point?\n>>\n>> This works by default for PostgreSQL < 12 because the default timeline\n>> is current. Since PostgreSQL 12 the default has been latest, which\n>> does not work by default.\n>>\n>> When a user does a number of recoveries it is pretty common for the\n>> timelines to get in a state that will make most subsequent recoveries\n>> fail. We think it makes sense for recovery_target = 'immediate' to be\n>> a fail safe that always works no matter the state of the latest\n>> timeline.\n>>\n>> Our solution has been to force recovery_target_timeline = 'current'\n>> when recovery_target = 'immediate', but it seems like this is\n>> something that should be done in PostgreSQL instead.\n>>\n>> Thoughts?\n> \n> I think it is natural that recovery defaultly targets the most recent\n> update. In that sense, at the time the admin decided to recover the\n> server from the first backup, the second backup is kind of dead, at\n> least which should be forgotten in the future operation.\n\nWell, I dislike the idea of a dead backup. Certainly no backup can be \nfollowed along all timelines but it should still be recoverable.\n\n> Even if we want \"any\" backup usable, just re-targeting to the current\n> timeline after the timeline error looks kind of inconsistent to the\n> behavior mentioned above. 
To make \"dead\" backups behave like the\n> \"live\" ones, we would need to check if the backup is in the history of\n> each \"future\" timelines, then choose the latest timeline from them.\n\nI think this makes sense for non-immediate targets. The idea is that \nrecovering to the \"latest\" timeline would actually recover to the latest \ntimeline that is valid for the backup. Is that what you had in mind?\n\nHowever, for immediate targets, only the current timeline makes sense so \nI feel like it would be better to simply force the current timeline.\n\n> # Mmm. I remember about a recent patch for pg_rewind to do the same...\n\nDo you have a link for this?\n\nRegards,\n-David\n\n\n",
"msg_date": "Thu, 14 Jul 2022 17:21:03 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": true,
"msg_subject": "Re: Issue with recovery_target = 'immediate'"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nWhile debugging some slow queries containing Bitmap Heap/Index Scans (in\nshort BHS / BIS), we observed a few issues regarding scalability:\n\n 1. The BIS always only runs in a single process, even when the parent\n BHS is parallel. The first process arriving in the BHS serves as leader and\n executes the BIS.\n 2. As long as execution is \"exact\" (TIDs are stored instead of page\n bits), the parallel BHS sorts all TIDs to ensure pages are accessed\n sequentially. The sort is also performed just by a single worker. Already\n with a few tens of thousands of pages to scan, the sort time can make up a\n significant portion of the total runtime. Large page counts and the need\n for parallelism are not uncommon for BHS, as one use case is closing the\n gap between index and sequential scans. The BHS costing seems to not\n account for that.\n 3. The BHS does not scale well with an increasing number of parallel\n workers, even when accounting for the sequential parts of execution. A perf\n profile shows that the TID list / bitmap iteration code heavily contends on\n a mutex taken for every single TID / page bit (see\n LWLockAcquire(&istate->lock, LW_EXCLUSIVE) in tidbitmap.c:1067).\n 4. The EXPLAIN ANALYZE statistics of the parallel BHS do not include the\n statistics of the parallel workers. For example the number of heap pages\n processed is what just the leader did. Similarly to other parallel plan\n nodes we should aggregate statistics across workers.\n\nThe EXPLAIN ANALYZE output below shows (1) to (3) happening in action for\ndifferent numbers of workers. I had to obfuscate the query slightly. The\ndifference between the startup time of the BHS and the BIS is the time it\ntakes to sort the TID list. The self time of the BHS is just the time spent\non processing the shared TID list and processing the pages. 
That part runs\nin parallel but does not scale.\n\nWorkers | Total runtime | Startup time BIS | Startup time BHS | Self time BHS (excl. sorting)\n--------|---------------|------------------|------------------|------------------------------\n2       | 15322 ms      | 3107 ms          | 5912 ms          | 9269 ms\n4       | 13277 ms      | 3094 ms          | 5869 ms          | 7260 ms\n8       | 14628 ms      | 3106 ms          | 5882 ms          | 8598 ms\n\nNone of this is really new and some of it is even documented. So, what I am\nmore wondering about is why things are the way they are and how hard it\nwould be to change them. I am especially curious about:\n\n - What stops us from extending the BIS to run in parallel? Parallel\n Bitmap Index Scans are also supported.\n - What about reducing the sort time by, e.g.\n - dividing TIDs across workers, ending up with N separately sorted\n streams,\n - cooperatively sorting the TIDs with multiple workers using barriers\n for synchronization,\n - optimizing the PagetableEntry data structure for size and using a\n faster sorting algorithm like e.g. radix sort\n - a combination of the first three options\n - With separate TID lists per worker process the iteration problem would\n be solved. 
Otherwise, we could\n - optimize the iteration code and thereby minimize the duration of\n the critical section,\n - have worker processes acquire chunks of TIDs / page bits to reduce\n locking.\n\nIs there interest in patches improving on the above mentioned shortcomings?\nIf so, which options do you deem best?\n\n--\nDavid Geier\n(ServiceNow)\n\n\n\n-- 2 workers\n\n Finalize Aggregate (actual time=15228.937..15321.356 rows=1 loops=1)\n Output: count(*)\n -> Gather (actual time=15187.942..15321.345 rows=2 loops=1)\n Output: (PARTIAL count(*))\n Workers Planned: 2\n Workers Launched: 2\n -> Partial Aggregate (actual time=15181.486..15181.488 rows=1\nloops=2)\n Output: PARTIAL count(*)\n Worker 0: actual time=15181.364..15181.366 rows=1 loops=1\n Worker 1: actual time=15181.608..15181.610 rows=1 loops=1\n -> Parallel Bitmap Heap Scan on foo (actual\ntime=5912.731..15166.992 rows=269713 loops=2)\n Filter: ...\n Rows Removed by Filter: 4020149\n Worker 0: actual time=5912.498..15166.936 rows=269305\nloops=1\n Worker 1: actual time=5912.963..15167.048 rows=270121\nloops=1\n -> Bitmap Index Scan on foo_idx (actual\ntime=3107.947..3107.948 rows=8579724 loops=1)\n Index Cond: -\n Worker 1: actual time=3107.947..3107.948\nrows=8579724 loops=1\n Planning Time: 0.167 ms\n Execution Time: 15322.081 ms\n\n\n-- 4 workers\n\n Finalize Aggregate (actual time=13175.765..13276.415 rows=1 loops=1)\n Output: count(*)\n -> Gather (actual time=13137.981..13276.403 rows=4 loops=1)\n Output: (PARTIAL count(*))\n Workers Planned: 4\n Workers Launched: 4\n -> Partial Aggregate (actual time=13130.344..13130.346 rows=1\nloops=4)\n Output: PARTIAL count(*)\n Worker 0: actual time=13129.363..13129.365 rows=1 loops=1\n Worker 1: actual time=13130.085..13130.087 rows=1 loops=1\n Worker 2: actual time=13130.634..13130.635 rows=1 loops=1\n Worker 3: actual time=13131.295..13131.298 rows=1 loops=1\n -> Parallel Bitmap Heap Scan on foo (actual\ntime=5870.026..13120.579 rows=134856 loops=4)\n Filter: 
...\n Rows Removed by Filter: 2010074\n Worker 0: actual time=5869.033..13120.453 rows=128270\nloops=1\n Worker 1: actual time=5869.698..13118.811 rows=135333\nloops=1\n Worker 2: actual time=5870.465..13121.189 rows=137695\nloops=1\n Worker 3: actual time=5870.907..13121.864 rows=138128\nloops=1\n -> Bitmap Index Scan on foo_idx (actual\ntime=3094.585..3094.586 rows=8579724 loops=1)\n Index Cond: -\n Worker 3: actual time=3094.585..3094.586\nrows=8579724 loops=1\n Planning Time: 0.146 ms\n Execution Time: 13277.315 ms\n\n-- 8 workers\n\n Finalize Aggregate (actual time=14533.688..14627.962 rows=1 loops=1)\n Output: count(*)\n -> Gather (actual time=14492.463..14627.950 rows=8 loops=1)\n Output: (PARTIAL count(*))\n Workers Planned: 8\n Workers Launched: 8\n -> Partial Aggregate (actual time=14483.059..14483.061 rows=1\nloops=8)\n Output: PARTIAL count(*)\n Worker 0: actual time=14480.058..14480.061 rows=1 loops=1\n Worker 1: actual time=14480.948..14480.950 rows=1 loops=1\n Worker 2: actual time=14481.668..14481.670 rows=1 loops=1\n Worker 3: actual time=14482.829..14482.832 rows=1 loops=1\n Worker 4: actual time=14483.695..14483.697 rows=1 loops=1\n Worker 5: actual time=14484.290..14484.293 rows=1 loops=1\n Worker 6: actual time=14485.166..14485.168 rows=1 loops=1\n Worker 7: actual time=14485.819..14485.821 rows=1 loops=1\n -> Parallel Bitmap Heap Scan on foo (actual\ntime=5886.191..14477.239 rows=67428 loops=8)\n Filter: ...\n Rows Removed by Filter: 1005037\n Worker 0: actual time=5882.909..14474.627 rows=60325\nloops=1\n Worker 1: actual time=5883.788..14474.945 rows=69459\nloops=1\n Worker 2: actual time=5884.475..14475.735 rows=69686\nloops=1\n Worker 3: actual time=5886.149..14477.162 rows=64680\nloops=1\n Worker 4: actual time=5886.987..14477.653 rows=71034\nloops=1\n Worker 5: actual time=5887.347..14478.667 rows=65836\nloops=1\n Worker 6: actual time=5888.978..14479.239 rows=67755\nloops=1\n Worker 7: actual time=5888.896..14479.886 
rows=70651\nloops=1\n -> Bitmap Index Scan on foo_idx (actual\ntime=3106.840..3106.840 rows=8579724 loops=1)\n Index Cond: -\n Worker 7: actual time=3106.840..3106.840\nrows=8579724 loops=1\n Planning Time: 0.150 ms\n Execution Time: 14628.648 ms",
"msg_date": "Thu, 14 Jul 2022 12:13:38 +0200",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Improving scalability of Parallel Bitmap Heap/Index Scan"
},
{
"msg_contents": "On Thu, Jul 14, 2022 at 5:13 PM David Geier <geidav.pg@gmail.com> wrote:\n> optimizing the PagetableEntry data structure for size and using a faster\nsorting algorithm like e.g. radix sort\n\nOn this note, there has been a proposed (but as far as I know untested)\npatch to speed up this sort in a much simpler way, in this thread\n\nhttps://www.postgresql.org/message-id/CA%2BhUKGKztHEWm676csTFjYzortziWmOcf8HDss2Zr0muZ2xfEg%40mail.gmail.com\n\nwhere you may find this patch\n\nhttps://www.postgresql.org/message-id/attachment/120560/0007-Specialize-pagetable-sort-routines-in-tidbitmap.c.patch\n\nand see if it helps.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Sat, 16 Jul 2022 12:43:39 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Improving scalability of Parallel Bitmap Heap/Index Scan"
}
] |
[
{
"msg_contents": "Dear list,\n\ni am dealing with an application that processes fairly large arrays of \nintegers. It makes heavy use of the intarray extension, which works \ngreat in most cases. However, there are two requirements that cannot be \naddressed by the extension and are rather slow with plain SQL. Both can \nbe met with shuffling:\n\n- Taking n random members from an integer array\n- Splitting an array into n chunks, where each member is assigned to a \nrandom chunk\n\nShuffling is currently implemented by unnesting the array, ordering the \nmembers by random() and aggregating them again.\n\n\n create table numbers (arr int[]);\n\n insert into numbers (arr)\n select array_agg(i)\n from generate_series(1, 4000000) i;\n\n\n select arr[1:3]::text || ' ... ' || arr[3999998:4000000]::text\n from (\n select array_agg(n order by random()) arr\n from (\n select unnest(arr) n from numbers\n ) plain\n ) shuffled;\n\n ---------------------------------------------------------\n {2717290,3093757,2426384} ... {3011871,1402540,1613647}\n\n Time: 2348.961 ms (00:02.349)\n\n\nI wrote a small extension (see source code below) to see how much we can \ngain, when the shuffling is implemented in C and the results speak for \nthemselves:\n\n\n select arr[1:3]::text || ' ... ' || arr[3999998:4000000]::text\n from (\n select shuffle(arr) arr from numbers\n ) shuffled;\n\n ----------------------------------------------------\n {1313971,3593627,86630} ... {50764,430410,3901128}\n\n Time: 132.151 ms\n\n\nI would like to see a function like this inside the intarray extension. \nIs there any way to get to this point? 
How is the process to deal with \nsuch proposals?\n\nBest regards,\nMartin Kalcher\n\n\nSource code of extension mentioned above:\n\n\n#include \"postgres.h\"\n#include \"fmgr.h\"\n#include \"port.h\"\n#include \"utils/array.h\"\n\nPG_MODULE_MAGIC;\n\nPG_FUNCTION_INFO_V1(shuffle);\n\nvoid _shuffle(int32 *a, int len);\n\nDatum\nshuffle(PG_FUNCTION_ARGS)\n{\n ArrayType *a = PG_GETARG_ARRAYTYPE_P_COPY(0);\n\n int len;\n\n if (array_contains_nulls(a))\n ereport(ERROR,\n (errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),\n errmsg(\"array must not contain nulls\")));\n\n len = ArrayGetNItems(ARR_NDIM(a), ARR_DIMS(a));\n\n if (len > 1)\n _shuffle((int32 *) ARR_DATA_PTR(a), len);\n\n PG_RETURN_POINTER(a);\n}\n\nvoid\n_shuffle(int32 *a, int len) {\n int i, j;\n int32 tmp;\n\n for (i = len - 1; i > 0; i--) {\n j = random() % (i + 1);\n tmp = a[i];\n a[i] = a[j];\n a[j] = tmp;\n }\n}\n\n\n\n\n\n",
"msg_date": "Fri, 15 Jul 2022 10:36:06 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Proposal to introduce a shuffle function to intarray extension"
},
{
"msg_contents": "On 7/15/22 04:36, Martin Kalcher wrote:\n> Dear list,\n>\n> i am dealing with an application that processes fairly large arrays of\n> integers. It makes heavy use of the intarray extension, which works\n> great in most cases. However, there are two requirements that cannot\n> be addressed by the extension and are rather slow with plain SQL. Both\n> can be met with shuffling:\n>\n> - Taking n random members from an integer array\n> - Splitting an array into n chunks, where each member is assigned to a\n> random chunk\n>\n> Shuffling is currently implemented by unnesting the array, ordering\n> the members by random() and aggregating them again.\n\n\nMartin, have you considered PL/Python and NumPy module?\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com",
"msg_date": "Sat, 16 Jul 2022 12:53:49 -0400",
"msg_from": "Mladen Gogala <gogala.mladen@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
{
"msg_contents": "Am 16.07.22 um 18:53 schrieb Mladen Gogala:\n> On 7/15/22 04:36, Martin Kalcher wrote:\n>> Dear list,\n>>\n>> i am dealing with an application that processes fairly large arrays of\n>> integers. It makes heavy use of the intarray extension, which works\n>> great in most cases. However, there are two requirements that cannot\n>> be addressed by the extension and are rather slow with plain SQL. Both\n>> can be met with shuffling:\n>>\n>> - Taking n random members from an integer array\n>> - Splitting an array into n chunks, where each member is assigned to a\n>> random chunk\n>>\n>> Shuffling is currently implemented by unnesting the array, ordering\n>> the members by random() and aggregating them again.\n> \n> \n> Martin, have you considered PL/Python and NumPy module?\n\nHey Mladen,\n\nthank you for your advice. Unfortunately the performance of shuffling \nwith NumPy is about the same as with SQL.\n\n create function numpy_shuffle(arr int[])\n returns int[]\n as $$\n import numpy\n numpy.random.shuffle(arr)\n return arr\n $$ language 'plpython3u';\n\n select arr[1:3]::text || ' ... ' || arr[3999998:4000000]::text\n from (\n select numpy_shuffle(arr) arr from numbers\n ) shuffled;\n\n -------------------------------------------------------\n {674026,3306457,1727170} ... {343875,3825484,1235246}\n\n Time: 2315.431 ms (00:02.315)\n\nAm i doing something wrong?\n\nMartin\n\n\n",
"msg_date": "Sat, 16 Jul 2022 22:21:54 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
{
"msg_contents": "On 7/16/22 16:21, Martin Kalcher wrote:\n> Hey Mladen,\n>\n> thank you for your advice. Unfortunately the performance of shuffling \n> with NumPy is about the same as with SQL.\n>\n> create function numpy_shuffle(arr int[])\n> returns int[]\n> as $$\n> import numpy\n> numpy.random.shuffle(arr)\n> return arr\n> $$ language 'plpython3u';\n>\n> select arr[1:3]::text || ' ... ' || arr[3999998:4000000]::text\n> from (\n> select numpy_shuffle(arr) arr from numbers\n> ) shuffled;\n>\n> -------------------------------------------------------\n> {674026,3306457,1727170} ... {343875,3825484,1235246}\n>\n> Time: 2315.431 ms (00:02.315)\n>\n> Am i doing something wrong?\n>\n> Martin\n\nHi Martin,\n\nNo, you're doing everything right. I have no solution for you. You may \nneed to do some C programming or throw a stronger hardware at the \nproblem. The performance of your processors may be the problem. Good luck!\n\n-- \nMladen Gogala\nDatabase Consultant\nTel: (347) 321-1217\nhttps://dbwhisperer.wordpress.com",
"msg_date": "Sat, 16 Jul 2022 17:30:08 -0400",
"msg_from": "Mladen Gogala <gogala.mladen@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
{
"msg_contents": "On Fri, Jul 15, 2022 at 8:36 PM Martin Kalcher\n<martin.kalcher@aboutsource.net> wrote:\n> I would like to see a function like this inside the intarray extension.\n> Is there any way to get to this point? How is the process to deal with\n> such proposals?\n\nHi Martin,\n\nI'm redirecting this to the pgsql-hackers@ mailing list, where we talk\nabout code. For the archives, Martin's initial message to -general\nwas:\n\nhttps://www.postgresql.org/message-id/9d160a44-7675-51e8-60cf-6d64b76db831%40aboutsource.net\n\nThe first question is whether such a thing belongs in an external\nextension, or in the contrib/intarray module. The latter seems like a\nreasonable thing to want to me. The first step towards that will be\nto get your code into .patch format, as in git format-patch or git\ndiff, that can be applied to the master branch.\n\nhttps://wiki.postgresql.org/wiki/Submitting_a_Patch\n\nSome initial feedback from me: I'd recommend adding a couple of\ncomments to the code, like the algorithm name for someone who wants to\nread more about it (I think it's a Fisher-Yates shuffle?). You'll\nneed to have the contrib/intarray/intarray--1.5--1.6.sql file that\ncreates the function. You might want to add something to\ncontrib/intarray/sql/_int.sql that invokes the function when you run\nmake check in there (but doesn't display the results, since that'd be\nunstable across machines?), just to have 'code coverage' (I mean, it'd\nprove it doesn't crash at least). Once details are settled, you'd\nalso want to add documentation in doc/src/sgml/intarray.sgml. I\nunderstand that this is a specialised int[] shuffle, but I wonder if\nsomeone would ever want to have a general array shuffle, and how that\nwould work, in terms of naming convention etc.\n\n\n",
"msg_date": "Sun, 17 Jul 2022 09:56:27 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
{
"msg_contents": "Am 16.07.22 um 23:56 schrieb Thomas Munro:\n> On Fri, Jul 15, 2022 at 8:36 PM Martin Kalcher\n> <martin.kalcher@aboutsource.net> wrote:\n>> I would like to see a function like this inside the intarray extension.\n>> Is there any way to get to this point? How is the process to deal with\n>> such proposals?\n> \n> Hi Martin,\n> \n> I'm redirecting this to the pgsql-hackers@ mailing list, where we talk\n> about code. For the archives, Martin's initial message to -general\n> was:\n> \n> https://www.postgresql.org/message-id/9d160a44-7675-51e8-60cf-6d64b76db831%40aboutsource.net\n> \n> The first question is whether such a thing belongs in an external\n> extension, or in the contrib/intarray module. The latter seems like a\n> reasonable thing to want to me. The first step towards that will be\n> to get your code into .patch format, as in git format-patch or git\n> diff, that can be applied to the master branch.\n> \n> https://wiki.postgresql.org/wiki/Submitting_a_Patch\n> \n> Some initial feedback from me: I'd recommend adding a couple of\n> comments to the code, like the algorithm name for someone who wants to\n> read more about it (I think it's a Fisher-Yates shuffle?). You'll\n> need to have the contrib/intarrays/intarray--1.5--1.6.sql file that\n> creates the function. You might want to add something to\n> control/intarray/sql/_int.sql that invokes the function when you run\n> make check in there (but doesn't display the results, since that'd be\n> unstable across machines?), just to have 'code coverage' (I mean, it'd\n> prove it doesn't crash at least). Once details are settled, you'd\n> also want to add documentation in doc/src/sgml/intarray.sgml. 
I\n> understand that this is a specialised int[] shuffle, but I wonder if\n> someone would ever want to have a general array shuffle, and how that\n> would work, in terms of naming convention etc.\n\nHello Thomas,\n\nThank you for pointing me towards the correct list.\n\nI do not feel qualified to answer the question wether this belongs in an \nexternal extension or in contrib/intarray. That reason i would like to \nsee it in contrib/intarray is, that it is lot easier for me to get our \noperations team to upgrade our database system, because of new features \nwe need, than to get them to install a self-maintained extension on our \ndatabase servers.\n\nThank you for your feedback. I tried to address all your points and made \na first patch. Some points are still open:\n\n- Documentation is postponed until further feedback.\n\n- I am not shure about the naming. intarray has generic names like\n sort() and uniq() and specialised names like icount(). I guess in case\n someone wants to have a general array shuffle it could be accomplished\n with function overloading. Or am i wrong here?\n\n- I added a second function sample(), because it is a lot faster to take\n some elements from an array than to shuffle the whole array and\n slice it. This function can be removed if it is not wanted. The\n important one is shuffle().\n\nMartin",
"msg_date": "Sun, 17 Jul 2022 04:25:27 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
{
"msg_contents": "Martin Kalcher <martin.kalcher@aboutsource.net> writes:\n> Am 16.07.22 um 23:56 schrieb Thomas Munro:\n>> I understand that this is a specialised int[] shuffle, but I wonder if\n>> someone would ever want to have a general array shuffle, and how that\n>> would work, in terms of naming convention etc.\n\n> - I am not shure about the naming. intarray has generic names like\n> sort() and uniq() and specialised names like icount(). I guess in case\n> someone wants to have a general array shuffle it could be accomplished\n> with function overloading. Or am i wrong here?\n\nI suppose this is exactly the point Thomas was wondering about: if we\nuse a generic function name for this, will it cause problems for someone\ntrying to add a generic function later?\n\nWe can investigate that question with a couple of toy functions:\n\nregression=# create function foo(int[]) returns text as 'select ''int[] version''' language sql;\nCREATE FUNCTION\nregression=# create function foo(anyarray) returns text as 'select ''anyarray version''' language sql;\nCREATE FUNCTION\nregression=# select foo('{1,2,3}');\nERROR: function foo(unknown) is not unique\nLINE 1: select foo('{1,2,3}');\n ^\nHINT: Could not choose a best candidate function. You might need to add explicit type casts.\n\nOK, that's not too surprising: with an unknown input there's just not\nanything that the parser can use to disambiguate. But this happens\nwith just about any overloaded name, so I don't think it's a showstopper.\n\nregression=# select foo('{1,2,3}'::int[]);\n foo \n---------------\n int[] version\n(1 row)\n\nregression=# select foo('{1,2,3}'::int8[]);\n foo \n------------------\n anyarray version\n(1 row)\n\nGood, that's more or less the minimum functionality we should expect.\n\nregression=# select foo('{1,2,3}'::int2[]);\nERROR: function foo(smallint[]) is not unique\nLINE 1: select foo('{1,2,3}'::int2[]);\n ^\nHINT: Could not choose a best candidate function. 
You might need to add explicit type casts.\n\nOh, that's slightly annoying ...\n\nregression=# select foo('{1,2,3}'::oid[]);\n foo \n------------------\n anyarray version\n(1 row)\n\nregression=# select foo('{1,2,3}'::text[]);\n foo \n------------------\n anyarray version\n(1 row)\n\nregression=# select foo('{1,2,3}'::float8[]);\n foo \n------------------\n anyarray version\n(1 row)\n\nI couldn't readily find any case that misbehaves except for int2[].\nYou can force that to work, at least in one direction:\n\nregression=# select foo('{1,2,3}'::int2[]::int[]);\n foo \n---------------\n int[] version\n(1 row)\n\nOn the whole, I'd vote for calling it shuffle(), and expecting that\nwe'd also use that name for any future generic version. That's\nclearly the easiest-to-remember definition, and it doesn't seem\nlike the gotchas are bad enough to justify using separate names.\n\n> - I added a second function sample(), because it is a lot faster to take\n> some elements from an array than to shuffle the whole array and\n> slice it. This function can be removed if it is not wanted.\n\nI have no opinion about whether this one is valuable enough to include in\nintarray, but I do feel like sample() is a vague name, and easily confused\nwith marginally-related operations like TABLESAMPLE. Can we think of a\nmore on-point name? Something like \"random_subset\" would be pretty\nclear, but it's also clunky. It's too late here for me to think of\nle mot juste...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 16 Jul 2022 23:18:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
{
"msg_contents": "On Sat, Jul 16, 2022 at 7:25 PM Martin Kalcher <\nmartin.kalcher@aboutsource.net> wrote:\n\n>\n> - I added a second function sample(), because it is a lot faster to take\n> some elements from an array than to shuffle the whole array and\n> slice it. This function can be removed if it is not wanted. The\n> important one is shuffle().\n>\n>\n +SELECT sample('{1,2,3,4,5,6,7,8,9,10,11,12}', 6) !=\nsample('{1,2,3,4,5,6,7,8,9,10,11,12}', 6);\n+ ?column?\n+----------\n+ t\n+(1 row)\n+\n\nWhile small, there is a non-zero chance for both samples to be equal. This\ntest should probably just go, I don't see what it tests that isn't covered\nby other tests or even trivial usage.\n\nSame goes for:\n\n+SELECT shuffle('{1,2,3,4,5,6,7,8,9,10,11,12}') !=\nshuffle('{1,2,3,4,5,6,7,8,9,10,11,12}');\n+ ?column?\n+----------\n+ t\n+(1 row)\n+\n\n\nDavid J.\n\nOn Sat, Jul 16, 2022 at 7:25 PM Martin Kalcher <martin.kalcher@aboutsource.net> wrote:\n- I added a second function sample(), because it is a lot faster to take\n some elements from an array than to shuffle the whole array and\n slice it. This function can be removed if it is not wanted. The\n important one is shuffle(). +SELECT sample('{1,2,3,4,5,6,7,8,9,10,11,12}', 6) != sample('{1,2,3,4,5,6,7,8,9,10,11,12}', 6);+ ?column? +----------+ t+(1 row)+While small, there is a non-zero chance for both samples to be equal. This test should probably just go, I don't see what it tests that isn't covered by other tests or even trivial usage.Same goes for:+SELECT shuffle('{1,2,3,4,5,6,7,8,9,10,11,12}') != shuffle('{1,2,3,4,5,6,7,8,9,10,11,12}');+ ?column? +----------+ t+(1 row)+David J.",
"msg_date": "Sat, 16 Jul 2022 20:32:34 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
{
"msg_contents": "On Sat, Jul 16, 2022 at 8:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Martin Kalcher <martin.kalcher@aboutsource.net> writes:\n>\n> > - I added a second function sample(), because it is a lot faster to take\n> > some elements from an array than to shuffle the whole array and\n> > slice it. This function can be removed if it is not wanted.\n>\n> I have no opinion about whether this one is valuable enough to include in\n> intarray, but I do feel like sample() is a vague name, and easily confused\n> with marginally-related operations like TABLESAMPLE. Can we think of a\n> more on-point name? Something like \"random_subset\" would be pretty\n> clear, but it's also clunky. It's too late here for me to think of\n> le mot juste...\n>\n>\nchoose(input anyarray, size integer, with_replacement boolean default\nfalse, algorithm text default 'default')?\n\nDavid J.\n\nOn Sat, Jul 16, 2022 at 8:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Martin Kalcher <martin.kalcher@aboutsource.net> writes:\n> - I added a second function sample(), because it is a lot faster to take\n> some elements from an array than to shuffle the whole array and\n> slice it. This function can be removed if it is not wanted.\n\nI have no opinion about whether this one is valuable enough to include in\nintarray, but I do feel like sample() is a vague name, and easily confused\nwith marginally-related operations like TABLESAMPLE. Can we think of a\nmore on-point name? Something like \"random_subset\" would be pretty\nclear, but it's also clunky. It's too late here for me to think of\nle mot juste...choose(input anyarray, size integer, with_replacement boolean default false, algorithm text default 'default')?David J.",
"msg_date": "Sat, 16 Jul 2022 20:36:48 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
{
"msg_contents": "I wrote:\n> On the whole, I'd vote for calling it shuffle(), and expecting that\n> we'd also use that name for any future generic version.\n\nActually ... is there a reason to bother with an intarray version\nat all, rather than going straight for an in-core anyarray function?\nIt's not obvious to me that an int4-only version would have\nmajor performance advantages.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 16 Jul 2022 23:37:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
{
"msg_contents": "On Sun, Jul 17, 2022 at 3:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > On the whole, I'd vote for calling it shuffle(), and expecting that\n> > we'd also use that name for any future generic version.\n>\n> Actually ... is there a reason to bother with an intarray version\n> at all, rather than going straight for an in-core anyarray function?\n> It's not obvious to me that an int4-only version would have\n> major performance advantages.\n\nYeah, that seems like a good direction. If there is a performance\nadvantage to specialising, then perhaps we only have to specialise on\nsize, not type. Perhaps there could be a general function that\ninternally looks out for typbyval && typlen == 4, and dispatches to a\nspecialised 4-byte, and likewise for 8, if it can, and that'd already\nbe enough to cover int, bigint, float etc, without needing\nspecialisations for each type.\n\nI went to see what Professor Lemire would have to say about all this,\nexpecting to find a SIMD rabbit hole to fall down for some Sunday\nevening reading, but the main thing that jumped out was this article\nabout the modulo operation required by textbook Fisher-Yates to get a\nbounded random number, the random() % n that appears in the patch. He\ntalks about shuffling twice as fast by using a no-division trick to\nget bounded random numbers[1]. I guess you might need to use our\npg_prng_uint32() for that trick because random()'s 0..RAND_MAX might\nintroduce bias. Anyway, file that under go-faster ideas for later.\n\n[1] https://lemire.me/blog/2016/06/30/fast-random-shuffling/\n\n\n",
"msg_date": "Sun, 17 Jul 2022 18:00:09 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
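For readers who skip the link: the no-division trick Lemire describes replaces the modulo with a widening multiply. A hedged standalone sketch of just the range-reduction step follows; the article's fully unbiased variant adds a rejection loop on top of this, and the function name is invented here.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Lemire's multiplicative range reduction: treat x as the fraction
 * x / 2^32 of the interval and scale it by n, i.e. keep the high
 * 32 bits of the 64-bit product.  Without the extra rejection step,
 * a tiny bias remains for n that don't divide 2^32 evenly -- but no
 * division instruction is needed, which is where the speedup comes from.
 */
static inline uint32_t
bounded_rand(uint32_t x, uint32_t n)
{
    return (uint32_t) (((uint64_t) x * n) >> 32);
}
```

The same reduction works with pg_prng_uint32() as the source of `x`, which avoids the 0..RAND_MAX concern Thomas raises about random().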
{
"msg_contents": "Am 17.07.22 um 05:37 schrieb Tom Lane:\n> \n> Actually ... is there a reason to bother with an intarray version\n> at all, rather than going straight for an in-core anyarray function?\n> It's not obvious to me that an int4-only version would have\n> major performance advantages.\n> \n> \t\t\tregards, tom lane\n\nHi Tom,\n\nthank you for your thoughts. There are two reasons for choosing an \nint4-only version. I am not familiar with postgres development (yet) and \ni was not sure how open you are about such changes to core and if the \nproposed feature is considered valuable enough to go into core. The \nsecond reason was ease of implementation. The intarray extension does \nnot allow any NULL elements in arrays and treats multidimensional arrays \nas though they were linear. Which makes the implementation straight \nforward, because there are fewer cases to consider.\n\nHowever, i will take a look at an implementation for anyarray in core.\n\nMartin\n\n\n",
"msg_date": "Sun, 17 Jul 2022 11:16:28 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
{
"msg_contents": "Am 17.07.22 um 08:00 schrieb Thomas Munro:\n> \n> I went to see what Professor Lemire would have to say about all this,\n> expecting to find a SIMD rabbit hole to fall down for some Sunday\n> evening reading, but the main thing that jumped out was this article\n> about the modulo operation required by textbook Fisher-Yates to get a\n> bounded random number, the random() % n that appears in the patch. He\n> talks about shuffling twice as fast by using a no-division trick to\n> get bounded random numbers[1]. I guess you might need to use our\n> pg_prng_uint32() for that trick because random()'s 0..RAND_MAX might\n> introduce bias. Anyway, file that under go-faster ideas for later.\n> \n> [1] https://lemire.me/blog/2016/06/30/fast-random-shuffling/\n\nHi Thomas,\n\nthe small bias of random() % n is not a problem for my use case, but \nmight be for others. Its easily replaceable with\n\n (int) pg_prng_uint64_range(&pg_global_prng_state, 0, n-1)\n\nUnfortunately it is a bit slower (on my machine), but thats negligible.\n\nMartin\n\n\n",
"msg_date": "Sun, 17 Jul 2022 11:38:55 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
{
"msg_contents": "Am 17.07.22 um 05:32 schrieb David G. Johnston:\n>\n> +SELECT sample('{1,2,3,4,5,6,7,8,9,10,11,12}', 6) !=\n> sample('{1,2,3,4,5,6,7,8,9,10,11,12}', 6);\n> + ?column?\n> +----------\n> + t\n> +(1 row)\n> +\n> \n> While small, there is a non-zero chance for both samples to be equal. This\n> test should probably just go, I don't see what it tests that isn't covered\n> by other tests or even trivial usage.\n> \n\nHey David,\n\nyou are right. There is a small chance for this test to fail. I wanted \nto test, that two invocations produce different results (after all the \nmain feature of the function). But it can probably go.\n\nMartin\n\n\n",
"msg_date": "Sun, 17 Jul 2022 11:54:22 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
{
"msg_contents": "Am 17.07.22 um 08:00 schrieb Thomas Munro:\n>> Actually ... is there a reason to bother with an intarray version\n>> at all, rather than going straight for an in-core anyarray function?\n>> It's not obvious to me that an int4-only version would have\n>> major performance advantages.\n> \n> Yeah, that seems like a good direction. If there is a performance\n> advantage to specialising, then perhaps we only have to specialise on\n> size, not type. Perhaps there could be a general function that\n> internally looks out for typbyval && typlen == 4, and dispatches to a\n> specialised 4-byte, and likewise for 8, if it can, and that'd already\n> be enough to cover int, bigint, float etc, without needing\n> specialisations for each type.\n\nI played around with the idea of an anyarray shuffle(). The hard part \nwas to deal with arrays with variable length elements, as they can not \nbe swapped easily in place. I solved it by creating an intermediate \narray of references to the elements. I'll attach a patch with the proof \nof concept. Unfortunatly it is already about 5 times slower than the \nspecialised version and i am not sure if it is worth going down that road.\n\nMartin",
"msg_date": "Sun, 17 Jul 2022 18:15:51 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
{
"msg_contents": "On Mon, Jul 18, 2022 at 4:15 AM Martin Kalcher\n<martin.kalcher@aboutsource.net> wrote:\n> Am 17.07.22 um 08:00 schrieb Thomas Munro:\n> >> Actually ... is there a reason to bother with an intarray version\n> >> at all, rather than going straight for an in-core anyarray function?\n> >> It's not obvious to me that an int4-only version would have\n> >> major performance advantages.\n> >\n> > Yeah, that seems like a good direction. If there is a performance\n> > advantage to specialising, then perhaps we only have to specialise on\n> > size, not type. Perhaps there could be a general function that\n> > internally looks out for typbyval && typlen == 4, and dispatches to a\n> > specialised 4-byte, and likewise for 8, if it can, and that'd already\n> > be enough to cover int, bigint, float etc, without needing\n> > specialisations for each type.\n>\n> I played around with the idea of an anyarray shuffle(). The hard part\n> was to deal with arrays with variable length elements, as they can not\n> be swapped easily in place. I solved it by creating an intermediate\n> array of references to the elements. I'll attach a patch with the proof\n> of concept. Unfortunatly it is already about 5 times slower than the\n> specialised version and i am not sure if it is worth going down that road.\n\nSeems OK for a worst case. It must still be a lot faster than doing\nit in SQL. Now I wonder what the exact requirements would be to\ndispatch to a faster version that would handle int4. I haven't\nstudied this in detail but perhaps to dispatch to a fast shuffle for\nobjects of size X, the requirement would be something like typlen == X\n&& align_bytes <= typlen && typlen % align_bytes == 0, where\nalign_bytes is typalign converted to ALIGNOF_{CHAR,SHORT,INT,DOUBLE}?\nOr in English, 'the data consists of densely packed objects of fixed\nsize X, no padding'. Or perhaps you can work out the padded size and\nuse that, to catch a few more types. 
Then you call\narray_shuffle_{2,4,8}() as appropriate, which should be as fast as\nyour original int[] proposal, but work also for float, date, ...?\n\nAbout your experimental patch, I haven't reviewed it properly or tried\nit but I wonder if uint32 dat_offset, uint32 size (= half size\nelements) would be enough due to limitations on varlenas.\n\n\n",
"msg_date": "Mon, 18 Jul 2022 10:37:04 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
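The fast path Thomas sketches, one generic routine over densely packed fixed-size elements, can be modelled outside the backend. All names below are invented for illustration; real server code would pick the path from typlen/typbyval/typalign and use pg_prng rather than rand().

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Generic in-place Fisher-Yates shuffle over nelems densely packed
 * elements of elem_size bytes each (no padding between elements).
 * A build specialised for elem_size 2/4/8 would let the compiler turn
 * the memcpy calls into single loads and stores.
 */
static void
shuffle_fixed(void *data, size_t nelems, size_t elem_size)
{
    char *base = data;
    char  tmp[16];              /* enough for any fixed-width type here */

    if (nelems < 2 || elem_size == 0 || elem_size > sizeof(tmp))
        return;

    for (size_t i = nelems - 1; i > 0; i--)
    {
        size_t j = (size_t) rand() % (i + 1);

        memcpy(tmp, base + i * elem_size, elem_size);
        memcpy(base + i * elem_size, base + j * elem_size, elem_size);
        memcpy(base + j * elem_size, tmp, elem_size);
    }
}
```

Because only the element size matters, the same routine covers int, bigint, float8, date and friends, which is the point of specialising on size rather than type.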
{
"msg_contents": "Martin Kalcher <martin.kalcher@aboutsource.net> writes:\n> Am 17.07.22 um 08:00 schrieb Thomas Munro:\n>>> Actually ... is there a reason to bother with an intarray version\n>>> at all, rather than going straight for an in-core anyarray function?\n\n> I played around with the idea of an anyarray shuffle(). The hard part \n> was to deal with arrays with variable length elements, as they can not \n> be swapped easily in place. I solved it by creating an intermediate \n> array of references to the elements. I'll attach a patch with the proof \n> of concept.\n\nThis does not look particularly idiomatic, or even type-safe. What you\nshould have done was use deconstruct_array to get an array of Datums and\nisnull flags, then shuffled those, then used construct_array to build the\noutput.\n\n(Or, perhaps, use construct_md_array to replicate the input's\nprecise dimensionality. Not sure if anyone would care.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 17 Jul 2022 18:46:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Seems OK for a worst case. It must still be a lot faster than doing\n> it in SQL. Now I wonder what the exact requirements would be to\n> dispatch to a faster version that would handle int4.\n\nI find it impossible to believe that it's worth micro-optimizing\nshuffle() to that extent. Now, maybe doing something in that line\nin deconstruct_array and construct_array would be worth our time,\nas that'd benefit a pretty wide group of functions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 17 Jul 2022 18:53:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
{
"msg_contents": "Am 18.07.22 um 00:46 schrieb Tom Lane:\n> \n> This does not look particularly idiomatic, or even type-safe. What you\n> should have done was use deconstruct_array to get an array of Datums and\n> isnull flags, then shuffled those, then used construct_array to build the\n> output.\n> \n> (Or, perhaps, use construct_md_array to replicate the input's\n> precise dimensionality. Not sure if anyone would care.)\n> \n> \t\t\tregards, tom lane\n\ndeconstruct_array() would destroy the arrays dimensions. I would expect \nthat shuffle() only shuffles the first dimension and keeps the inner \narrays intact.\n\nMartin\n\n\n",
"msg_date": "Mon, 18 Jul 2022 01:05:19 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
{
"msg_contents": "Am 18.07.22 um 00:37 schrieb Thomas Munro:\n> Seems OK for a worst case. It must still be a lot faster than doing\n> it in SQL. Now I wonder what the exact requirements would be to\n> dispatch to a faster version that would handle int4. I haven't\n> studied this in detail but perhaps to dispatch to a fast shuffle for\n> objects of size X, the requirement would be something like typlen == X\n> && align_bytes <= typlen && typlen % align_bytes == 0, where\n> align_bytes is typalign converted to ALIGNOF_{CHAR,SHORT,INT,DOUBLE}?\n> Or in English, 'the data consists of densely packed objects of fixed\n> size X, no padding'. Or perhaps you can work out the padded size and\n> use that, to catch a few more types. Then you call\n> array_shuffle_{2,4,8}() as appropriate, which should be as fast as\n> your original int[] proposal, but work also for float, date, ...?\n> \n> About your experimental patch, I haven't reviewed it properly or tried\n> it but I wonder if uint32 dat_offset, uint32 size (= half size\n> elements) would be enough due to limitations on varlenas.\n\nI made another experimental patch with fast tracks for typelen4 and \ntypelen8. alignments are not yet considered.",
"msg_date": "Mon, 18 Jul 2022 01:07:22 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
{
"msg_contents": "Martin Kalcher <martin.kalcher@aboutsource.net> writes:\n> Am 18.07.22 um 00:46 schrieb Tom Lane:\n>> This does not look particularly idiomatic, or even type-safe. What you\n>> should have done was use deconstruct_array to get an array of Datums and\n>> isnull flags, then shuffled those, then used construct_array to build the\n>> output.\n>> \n>> (Or, perhaps, use construct_md_array to replicate the input's\n>> precise dimensionality. Not sure if anyone would care.)\n\n> deconstruct_array() would destroy the arrays dimensions.\n\nAs I said, you can use construct_md_array if you want to preserve the\narray shape.\n\n> I would expect that shuffle() only shuffles the first dimension and\n> keeps the inner arrays intact.\n\nThis argument is based on a false premise, ie that Postgres thinks\nmultidimensional arrays are arrays-of-arrays. They aren't, and\nwe're not going to start making them so by defining shuffle()\nat variance with every other array-manipulating function. Shuffling\nthe individual elements regardless of array shape is the definition\nthat's consistent with our existing functionality.\n\n(Having said that, even if we were going to implement it with that\ndefinition, I should think that it'd be easiest to do so on the\narray-of-Datums representation produced by deconstruct_array.\nThat way you don't need to do different things for different element\ntypes.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 17 Jul 2022 19:20:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
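Tom's suggested shape, deconstruct to parallel Datum/isnull arrays, shuffle those, then rebuild with construct_array or construct_md_array, can be sketched standalone. Datum is modelled as uintptr_t here and the function name is invented; in the backend the two parallel arrays would come from deconstruct_array().

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

typedef uintptr_t Datum;        /* stand-in for the backend's Datum */

/*
 * Fisher-Yates shuffle over the parallel arrays produced by a
 * deconstruct_array()-style call: each Datum must keep travelling with
 * its isnull flag, so both arrays are permuted with the same swaps.
 * Working on Datums makes the element type irrelevant.
 */
static void
shuffle_datums(Datum *elems, bool *nulls, int nelems)
{
    for (int i = nelems - 1; i > 0; i--)
    {
        int   j = rand() % (i + 1);
        Datum dtmp = elems[i];
        bool  ntmp = nulls[i];

        elems[i] = elems[j];
        nulls[i] = nulls[j];
        elems[j] = dtmp;
        nulls[j] = ntmp;
    }
}
```

The cost of this generality is the deconstruct/construct round trip, which is where the slowdown Martin measured against the int4-only version comes from.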
{
"msg_contents": "Am 18.07.22 um 01:20 schrieb Tom Lane:\n>> I would expect that shuffle() only shuffles the first dimension and\n>> keeps the inner arrays intact.\n> \n> This argument is based on a false premise, ie that Postgres thinks\n> multidimensional arrays are arrays-of-arrays. They aren't, and\n> we're not going to start making them so by defining shuffle()\n> at variance with every other array-manipulating function. Shuffling\n> the individual elements regardless of array shape is the definition\n> that's consistent with our existing functionality.\n\nHey Tom,\n\nthank you for clarification. I did not know that. I will make a patch \nthat is using deconstruct_array().\n\nMartin\n\n\n",
"msg_date": "Mon, 18 Jul 2022 09:12:50 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
{
"msg_contents": "Am 18.07.22 um 01:20 schrieb Tom Lane:\n> (Having said that, even if we were going to implement it with that\n> definition, I should think that it'd be easiest to do so on the\n> array-of-Datums representation produced by deconstruct_array.\n> That way you don't need to do different things for different element\n> types.)\n\nThank you Tom, here is a patch utilising deconstruct_array(). If we \nagree, that this is the way to go, i would like to add array_sample() \n(good name?), some test cases, and documentation.\n\nOne more question. How do i pick a Oid for the functions?\n\nMartin",
"msg_date": "Mon, 18 Jul 2022 09:47:30 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
{
"msg_contents": "On Mon, Jul 18, 2022 at 2:47 PM Martin Kalcher <\nmartin.kalcher@aboutsource.net> wrote:\n> One more question. How do i pick a Oid for the functions?\n\nFor this, we recommend running src/include/catalog/unused_oids, and it will\ngive you a random range to pick from. That reduces the chance of different\npatches conflicting with each other. It doesn't really matter what the oid\nhere is, since at feature freeze a committer will change them anyway.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Mon, Jul 18, 2022 at 2:47 PM Martin Kalcher <martin.kalcher@aboutsource.net> wrote:> One more question. How do i pick a Oid for the functions?For this, we recommend running src/include/catalog/unused_oids, and it will give you a random range to pick from. That reduces the chance of different patches conflicting with each other. It doesn't really matter what the oid here is, since at feature freeze a committer will change them anyway.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 18 Jul 2022 15:18:44 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> On Mon, Jul 18, 2022 at 2:47 PM Martin Kalcher <\n> martin.kalcher@aboutsource.net> wrote:\n>> One more question. How do i pick a Oid for the functions?\n\n> For this, we recommend running src/include/catalog/unused_oids, and it will\n> give you a random range to pick from. That reduces the chance of different\n> patches conflicting with each other. It doesn't really matter what the oid\n> here is, since at feature freeze a committer will change them anyway.\n\nIf you want the nitty gritty details here, see\n\nhttps://www.postgresql.org/docs/devel/system-catalog-initial-data.html#SYSTEM-CATALOG-OID-ASSIGNMENT\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Jul 2022 09:48:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to introduce a shuffle function to intarray extension"
},
{
"msg_contents": "Thanks for all your feedback and help. I got a patch that i consider \nready for review. It introduces two new functions:\n\n array_shuffle(anyarray) -> anyarray\n array_sample(anyarray, integer) -> anyarray\n\narray_shuffle() shuffles an array (obviously). array_sample() picks n \nrandom elements from an array.\n\nIs someone interested in looking at it? What are the next steps?\n\nMartin",
"msg_date": "Mon, 18 Jul 2022 21:03:35 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "[PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Martin Kalcher <martin.kalcher@aboutsource.net> writes:\n> Is someone interested in looking at it? What are the next steps?\n\nThe preferred thing to do is to add it to our \"commitfest\" queue,\nwhich will ensure that it gets looked at eventually. The currently\nopen cycle is 2022-09 [1] (see the \"New Patch\" button there).\n\nI believe you have to have signed up as a community member[2]\nin order to put yourself in as the patch author. I think \"New Patch\"\nwill work anyway, but it's better to have an author listed.\n\n\t\t\tregards, tom lane\n\n[1] https://commitfest.postgresql.org/39/\n[2] https://www.postgresql.org/account/login/\n\n\n",
"msg_date": "Mon, 18 Jul 2022 15:29:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Am 18.07.22 um 21:29 schrieb Tom Lane:\n> The preferred thing to do is to add it to our \"commitfest\" queue,\n> which will ensure that it gets looked at eventually. The currently\n> open cycle is 2022-09 [1] (see the \"New Patch\" button there).\n\nThanks Tom, did that. I am not sure if \"SQL Commands\" is the correct \ntopic for this patch, but i guess its not that important. I am impressed \nby all the infrastructure build around this project.\n\nMartin\n\n\n",
"msg_date": "Mon, 18 Jul 2022 22:15:43 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "On Mon, Jul 18, 2022 at 3:03 PM Martin Kalcher\n<martin.kalcher@aboutsource.net> wrote:\n> Thanks for all your feedback and help. I got a patch that i consider\n> ready for review. It introduces two new functions:\n>\n> array_shuffle(anyarray) -> anyarray\n> array_sample(anyarray, integer) -> anyarray\n>\n> array_shuffle() shuffles an array (obviously). array_sample() picks n\n> random elements from an array.\n\nI like this idea.\n\nI think it's questionable whether the behavior of array_shuffle() is\ncorrect for a multi-dimensional array. The implemented behavior is to\nkeep the dimensions as they were, but permute the elements across all\nlevels at random. But there are at least two other behaviors that seem\npotentially defensible: (1) always return a 1-dimensional array, (2)\nshuffle the sub-arrays at the top-level without the possibility of\nmoving elements within or between sub-arrays. What behavior we decide\nis best here should be documented.\n\narray_sample() will return elements in random order when sample_size <\narray_size, but in the original order when sample_size >= array_size.\nSimilarly, it will always return a 1-dimensional array in the former\ncase, but will keep the original dimensions in the latter case. That\nseems pretty hard to defend. I think it should always return a\n1-dimensional array with elements in random order, and I think this\nshould be documented.\n\nI also think you should add test cases involving multi-dimensional\narrays, as well as arrays with non-default bounds. e.g. trying\nshuffling or sampling some values like\n'[8:10][-6:-5]={{1,2},{3,4},{5,6}}'::int[]\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 18 Jul 2022 16:27:30 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jul 18, 2022 at 3:03 PM Martin Kalcher\n> <martin.kalcher@aboutsource.net> wrote:\n>> array_shuffle(anyarray) -> anyarray\n>> array_sample(anyarray, integer) -> anyarray\n\n> I think it's questionable whether the behavior of array_shuffle() is\n> correct for a multi-dimensional array. The implemented behavior is to\n> keep the dimensions as they were, but permute the elements across all\n> levels at random. But there are at least two other behaviors that seem\n> potentially defensible: (1) always return a 1-dimensional array, (2)\n> shuffle the sub-arrays at the top-level without the possibility of\n> moving elements within or between sub-arrays. What behavior we decide\n> is best here should be documented.\n\nMartin had originally proposed (2), which I rejected on the grounds\nthat we don't treat multi-dimensional arrays as arrays-of-arrays for\nany other purpose. Maybe we should have, but that ship sailed decades\nago, and I doubt that shuffle() is the place to start changing it.\n\nI could get behind your option (1) though, to make it clearer that\nthe input array's dimensionality is not of interest. Especially since,\nas you say, (1) is pretty much the only sensible choice for array_sample.\n\n> I also think you should add test cases involving multi-dimensional\n> arrays, as well as arrays with non-default bounds. e.g. trying\n> shuffling or sampling some values like\n> '[8:10][-6:-5]={{1,2},{3,4},{5,6}}'::int[]\n\nThis'd only matter if we decide not to ignore the input's dimensionality.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Jul 2022 16:34:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "I wrote:\n> Martin had originally proposed (2), which I rejected on the grounds\n> that we don't treat multi-dimensional arrays as arrays-of-arrays for\n> any other purpose.\n\nActually, after poking at it for awhile, that's an overstatement.\nIt's true that the type system doesn't think N-D arrays are\narrays-of-arrays, but there are individual functions/operators that do.\n\nFor instance, @> is in the its-a-flat-list-of-elements camp:\n\nregression=# select array[[1,2],[3,4]] @> array[1,3];\n ?column? \n----------\n t\n(1 row)\n\nbut || wants to preserve dimensionality:\n\nregression=# select array[[1,2],[3,4]] || array[1];\nERROR: cannot concatenate incompatible arrays\nDETAIL: Arrays with differing dimensions are not compatible for concatenation.\n\nVarious other functions dodge the issue by refusing to work on arrays\nof more than one dimension.\n\nThere seem to be more functions that think arrays are flat than\notherwise, but it's not as black-and-white as I was thinking.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Jul 2022 17:03:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Am 18.07.22 um 23:03 schrieb Tom Lane:\n> I wrote:\n>> Martin had originally proposed (2), which I rejected on the grounds\n>> that we don't treat multi-dimensional arrays as arrays-of-arrays for\n>> any other purpose.\n> \n> Actually, after poking at it for awhile, that's an overstatement.\n> It's true that the type system doesn't think N-D arrays are\n> arrays-of-arrays, but there are individual functions/operators that do.\n\nThanks Robert for pointing out the inconsistent behavior of \narray_sample(). That needs to be fixed.\n\nAs Tom's investigation showed, there is no consensus in the code if \nmulti-dimensional arrays are treated as arrays-of-arrays or not. We need \nto decide what should be the correct treatment.\n\nIf we go with (1) array_shuffle() and array_sample() should shuffle each \nelement individually and always return a one-dimensional array.\n\n select array_shuffle('{{1,2},{3,4},{5,6}}');\n -----------\n {1,4,3,5,6,2}\n\n select array_sample('{{1,2},{3,4},{5,6}}', 3);\n ----------\n {1,4,3}\n\nIf we go with (2) both functions should only operate on the first \ndimension and shuffle whole subarrays and keep the dimensions intact.\n\n select array_shuffle('{{1,2},{3,4},{5,6}}');\n ---------------------\n {{3,4},{1,2},{5,6}}\n\n select array_sample('{{1,2},{3,4},{5,6}}', 2);\n ---------------\n {{3,4},{1,2}}\n\nI do not feel qualified to make that decision. (2) complicates the code \na bit, but that should not be the main argument here.\n\nMartin\n\n\n",
"msg_date": "Mon, 18 Jul 2022 23:48:55 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Martin Kalcher <martin.kalcher@aboutsource.net> writes:\n> If we go with (1) array_shuffle() and array_sample() should shuffle each \n> element individually and always return a one-dimensional array.\n\n> select array_shuffle('{{1,2},{3,4},{5,6}}');\n> -----------\n> {1,4,3,5,6,2}\n\n> select array_sample('{{1,2},{3,4},{5,6}}', 3);\n> ----------\n> {1,4,3}\n\nIndependently of the dimensionality question --- I'd imagined that\narray_sample would select a random subset of the array elements\nbut keep their order intact. If you want the behavior shown\nabove, you can do array_shuffle(array_sample(...)). But if we\nrandomize it, and that's not what the user wanted, she has no\nrecourse.\n\nNow, if you're convinced that the set of people wanting\nsampling-without-shuffling is the empty set, then making everybody\nelse call two functions is a loser. But I'm not convinced.\nAt the least, I'd like to see the argument made why nobody\nwould want that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Jul 2022 18:18:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "On Mon, Jul 18, 2022 at 3:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> Independently of the dimensionality question --- I'd imagined that\n> array_sample would select a random subset of the array elements\n> but keep their order intact. If you want the behavior shown\n> above, you can do array_shuffle(array_sample(...)). But if we\n> randomize it, and that's not what the user wanted, she has no\n> recourse.\n>\n>\nAnd for those that want to know in what order those elements were chosen\nthey have no recourse in the other setup.\n\nI really think this function needs to grow an algorithm argument that can\nbe used to specify stuff like ordering, replacement/without-replacement,\netc...just some enums separated by commas that can be added to the call.\n\nDavid J.",
"msg_date": "Mon, 18 Jul 2022 15:32:00 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Mon, Jul 18, 2022 at 3:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Independently of the dimensionality question --- I'd imagined that\n>> array_sample would select a random subset of the array elements\n>> but keep their order intact. If you want the behavior shown\n>> above, you can do array_shuffle(array_sample(...)). But if we\n>> randomize it, and that's not what the user wanted, she has no\n>> recourse.\n\n> And for those that want to know in what order those elements were chosen\n> they have no recourse in the other setup.\n\nUm ... why is \"the order in which the elements were chosen\" a concept\nwe want to expose? ISTM sample() is a black box in which notionally\nthe decisions could all be made at once.\n\n> I really think this function needs to grow an algorithm argument that can\n> be used to specify stuff like ordering, replacement/without-replacement,\n> etc...just some enums separated by commas that can be added to the call.\n\nI think you might run out of gold paint somewhere around here. I'm\nstill not totally convinced we should bother with the sample() function\nat all, let alone that it needs algorithm variants. At some point we\nsay to the user \"here's a PL, write what you want for yourself\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Jul 2022 18:43:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Am 19.07.22 um 00:18 schrieb Tom Lane:\n> \n> Independently of the dimensionality question --- I'd imagined that\n> array_sample would select a random subset of the array elements\n> but keep their order intact. If you want the behavior shown\n> above, you can do array_shuffle(array_sample(...)). But if we\n> randomize it, and that's not what the user wanted, she has no\n> recourse.\n> \n> Now, if you're convinced that the set of people wanting\n> sampling-without-shuffling is the empty set, then making everybody\n> else call two functions is a loser. But I'm not convinced.\n> At the least, I'd like to see the argument made why nobody\n> would want that.\n> \n\nOn the contrary! I am pretty sure there are people out there wanting \nsampling-without-shuffling. I will think about that.\n\n\n",
"msg_date": "Tue, 19 Jul 2022 00:52:47 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 8:15 AM Martin Kalcher\n<martin.kalcher@aboutsource.net> wrote:\n> Am 18.07.22 um 21:29 schrieb Tom Lane:\n> > The preferred thing to do is to add it to our \"commitfest\" queue,\n> > which will ensure that it gets looked at eventually. The currently\n> > open cycle is 2022-09 [1] (see the \"New Patch\" button there).\n>\n> Thanks Tom, did that.\n\nFYI that brought your patch to the attention of this CI robot, which\nis showing a couple of warnings. See the FAQ link there for an\nexplanation of that infrastructure.\n\nhttp://cfbot.cputube.org/martin-kalcher.html\n\n\n",
"msg_date": "Tue, 19 Jul 2022 13:09:12 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Am 19.07.22 um 00:52 schrieb Martin Kalcher:\n> \n> On the contrary! I am pretty sure there are people out there wanting \n> sampling-without-shuffling. I will think about that.\n\nI gave it some thought. Even though there might be use cases where a \nstable order is desired, I would consider them edge cases, not worth the \nadditional complexity. I personally would not expect array_sample() to \nreturn elements in any specific order. I looked up some sample() \nimplementations. None of them makes guarantees about the order of the \nresulting array or explicitly states that the resulting array is in \nrandom or selection order.\n\n- Python random.sample [0]\n- Ruby Array#sample [1]\n- Rust rand::seq::SliceRandom::choose_multiple [2]\n- Julia StatsBase.sample [3] (stable order needs explicit request)\n\n[0] https://docs.python.org/3/library/random.html#random.sample\n[1] https://ruby-doc.org/core-3.0.0/Array.html#method-i-sample\n[2] \nhttps://docs.rs/rand/0.6.5/rand/seq/trait.SliceRandom.html#tymethod.choose_multiple\n[3] https://juliastats.org/StatsBase.jl/stable/sampling/#StatsBase.sample\n\n\n",
"msg_date": "Tue, 19 Jul 2022 09:29:03 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Hi Martin,\n\nI didn't look at the code yet but I very much like the idea. Many\nthanks for working on this!\n\nIt's a pity your patch was too late for the July commitfest. In\nSeptember, please keep an eye on cfbot [1] to make sure your patch\napplies properly.\n\n> As Tom's investigation showed, there is no consensus in the code if\n> multi-dimensional arrays are treated as arrays-of-arrays or not. We need\n> to decide what should be the correct treatment.\n\nHere are my two cents.\n\nFrom the user perspective I would expect that by default a\nmultidimensional array should be treated as an array of arrays. So for\ninstance:\n\n```\narray_shuffle([ [1,2], [3,4], [5,6] ])\n```\n\n... should return something like:\n\n```\n[ [3,4], [1,2], [5,6] ]\n```\n\nNote that the order of the elements in the internal arrays is preserved.\n\nHowever, I believe there should be an optional argument that overrides\nthis behavior. For instance:\n\n```\narray_shuffle([ [1,2], [3,4], [5,6] ], depth => 2)\n```\n\nBTW, while on it, shouldn't we add similar functions for JSON and/or\nJSONB? Or is this going to be too much for a single discussion?\n\n[1]: http://cfbot.cputube.org/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 19 Jul 2022 13:07:29 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Am 18.07.22 um 23:48 schrieb Martin Kalcher:\n> \n> If we go with (1) array_shuffle() and array_sample() should shuffle each \n> element individually and always return a one-dimensional array.\n> \n> select array_shuffle('{{1,2},{3,4},{5,6}}');\n> -----------\n> {1,4,3,5,6,2}\n> \n> select array_sample('{{1,2},{3,4},{5,6}}', 3);\n> ----------\n> {1,4,3}\n> \n> If we go with (2) both functions should only operate on the first \n> dimension and shuffle whole subarrays and keep the dimensions intact.\n> \n> select array_shuffle('{{1,2},{3,4},{5,6}}');\n> ---------------------\n> {{3,4},{1,2},{5,6}}\n> \n> select array_sample('{{1,2},{3,4},{5,6}}', 2);\n> ---------------\n> {{3,4},{1,2}}\n> \n\nHaving thought about it, I would go with (2). It gives the user the \nability to decide whether or not array-of-arrays behavior is desired. If \nhe wants the behavior of (1) he can flatten the array before applying \narray_shuffle(). Unfortunately there is no array_flatten() function (at \nthe moment) and the user would have to work around it with unnest() and \narray_agg().\n\n\n",
"msg_date": "Tue, 19 Jul 2022 13:15:42 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "On Mon, Jul 18, 2022 at 6:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Um ... why is \"the order in which the elements were chosen\" a concept\n> we want to expose? ISTM sample() is a black box in which notionally\n> the decisions could all be made at once.\n\nI agree with that. But I also think it's fine for the elements to be\nreturned in a shuffled order rather than the original order.\n\n> > I really think this function needs to grow an algorithm argument that can\n> > be used to specify stuff like ordering, replacement/without-replacement,\n> > etc...just some enums separated by commas that can be added to the call.\n>\n> I think you might run out of gold paint somewhere around here. I'm\n> still not totally convinced we should bother with the sample() function\n> at all, let alone that it needs algorithm variants. At some point we\n> say to the user \"here's a PL, write what you want for yourself\".\n\nI don't know what gold paint has to do with anything here, but I agree\nthat David's proposal seems to be moving the goalposts a very long\nway.\n\nThe thing is, as Martin points out, these functions already exist in a\nbunch of other systems. For one example I've used myself, see\nhttps://underscorejs.org/\n\nI probably wouldn't have called a function to put a list into a random\norder \"shuffle\" in a vacuum, but it seems to be common nomenclature\nthese days. I believe that if you don't make reference to Fisher-Yates\nin the documentation, they kick you out of the cool programmers club.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 19 Jul 2022 09:05:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "\nOn 2022-07-19 Tu 07:15, Martin Kalcher wrote:\n> Am 18.07.22 um 23:48 schrieb Martin Kalcher:\n>>\n>> If we go with (1) array_shuffle() and array_sample() should shuffle\n>> each element individually and always return a one-dimensional array.\n>>\n>> select array_shuffle('{{1,2},{3,4},{5,6}}');\n>> -----------\n>> {1,4,3,5,6,2}\n>>\n>> select array_sample('{{1,2},{3,4},{5,6}}', 3);\n>> ----------\n>> {1,4,3}\n>>\n>> If we go with (2) both functions should only operate on the first\n>> dimension and shuffle whole subarrays and keep the dimensions intact.\n>>\n>> select array_shuffle('{{1,2},{3,4},{5,6}}');\n>> ---------------------\n>> {{3,4},{1,2},{5,6}}\n>>\n>> select array_sample('{{1,2},{3,4},{5,6}}', 2);\n>> ---------------\n>> {{3,4},{1,2}}\n>>\n>\n> Having thought about it, i would go with (2). It gives the user the\n> ability to decide wether or not array-of-arrays behavior is desired.\n> If he wants the behavior of (1) he can flatten the array before\n> applying array_shuffle(). Unfortunately there is no array_flatten()\n> function (at the moment) and the user would have to work around it\n> with unnest() and array_agg().\n>\n>\n\n\nWhy not have an optional second parameter for array_shuffle that\nindicates whether or not to flatten? e.g. array_shuffle(my_array,\nflatten => true)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 19 Jul 2022 09:53:44 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 9:53 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> > Having thought about it, i would go with (2). It gives the user the\n> > ability to decide wether or not array-of-arrays behavior is desired.\n> > If he wants the behavior of (1) he can flatten the array before\n> > applying array_shuffle(). Unfortunately there is no array_flatten()\n> > function (at the moment) and the user would have to work around it\n> > with unnest() and array_agg().\n>\n> Why not have an optional second parameter for array_shuffle that\n> indicates whether or not to flatten? e.g. array_shuffle(my_array,\n> flatten => true)\n\nIMHO, if we think that's something many people are going to want, it\nwould be better to add an array_flatten() function, because that could\nbe used for a variety of purposes, whereas an option to this function\ncan only be used for this function.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 19 Jul 2022 09:56:41 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jul 19, 2022 at 9:53 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> Why not have an optional second parameter for array_shuffle that\n>> indicates whether or not to flatten? e.g. array_shuffle(my_array,\n>> flatten => true)\n\n> IMHO, if we think that's something many people are going to want, it\n> would be better to add an array_flatten() function, because that could\n> be used for a variety of purposes, whereas an option to this function\n> can only be used for this function.\n\nAgreed. Whether it's really needed, I dunno --- I don't recall the\nissue having come up before.\n\nAfter taking a second look through\nhttps://www.postgresql.org/docs/current/functions-array.html\nit seems like the things that treat arrays as flat even when they\nare multi-D are just\n\n* unnest(), which is more or less forced into that position by our\ntype system: it has to be anyarray returning anyelement, not\nanyarray returning anyelement-or-anyarray-depending.\n\n* array_to_string(), which maybe could have done it differently but\ncan reasonably be considered a variant of unnest().\n\n* The containment/overlap operators, which are kind of their own\nthing anyway. Arguably they got this wrong, though I suppose it's\ntoo late to rethink that.\n\nEverything else either explicitly rejects more-than-one-D arrays\nor does something that is compatible with thinking of them as\narrays-of-arrays.\n\nSo I withdraw my original position. These functions should just\nshuffle or select in the array's first dimension, preserving\nsubarrays. Or else be lazy and reject more-than-one-D arrays;\nbut it's probably not that hard to handle them.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Jul 2022 10:20:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Am 19.07.22 um 16:20 schrieb Tom Lane:\n> \n> So I withdraw my original position. These functions should just\n> shuffle or select in the array's first dimension, preserving\n> subarrays. Or else be lazy and reject more-than-one-D arrays;\n> but it's probably not that hard to handle them.\n> \n\nHere is a patch with dimension-aware array_shuffle() and array_sample().\n\nIf you think array_flatten() is desirable, I can take a look at it. \nMaybe a second parameter would be nice to specify the target dimension:\n\n select array_flatten(array[[[1,2],[3,4]],[[5,6],[7,8]]], 1);\n -------------------\n {1,2,3,4,5,6,7,8}\n\n select array_flatten(array[[[1,2],[3,4]],[[5,6],[7,8]]], 2);\n -----------------------\n {{1,2,3,4},{5,6,7,8}}\n\nMartin",
"msg_date": "Tue, 19 Jul 2022 22:20:57 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "On Tue, 19 Jul 2022 at 21:21, Martin Kalcher\n<martin.kalcher@aboutsource.net> wrote:\n>\n> Here is a patch with dimension aware array_shuffle() and array_sample().\n>\n\n+1 for this feature, and this way of handling multi-dimensional arrays.\n\n> If you think array_flatten() is desirable, i can take a look at it.\n\nThat's not something I've ever wanted -- personally, I rarely use\nmulti-dimensional arrays in Postgres.\n\nA couple of quick comments on the current patch:\n\nIt's important to mark these new functions as VOLATILE, not IMMUTABLE,\notherwise they won't work as expected in queries. See\nhttps://www.postgresql.org/docs/current/xfunc-volatility.html\n\nIt would be better to use pg_prng_uint64_range() rather than rand() to\npick elements. Partly, that's because it uses a higher quality PRNG,\nwith a larger internal state, and it ensures that the results are\nunbiased across the range. But more importantly, it interoperates with\nsetseed(), allowing predictable sequences of \"random\" numbers to be\ngenerated -- something that's useful in writing repeatable regression\ntests.\n\nAssuming these new functions are made to interoperate with setseed(),\nwhich I think they should be, then they also need to be marked as\nPARALLEL RESTRICTED, rather than PARALLEL SAFE. See\nhttps://www.postgresql.org/docs/current/parallel-safety.html, which\nexplains why setseed() and random() are parallel restricted.\n\nIn my experience, the requirement for sampling with replacement is\nabout as common as the requirement for sampling without replacement,\nso it seems odd to provide one but not the other. Of course, we can\nalways add a with-replacement function later, and give it a different\nname. But maybe array_sample() could be used for both, with a\n\"with_replacement\" boolean parameter?\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 21 Jul 2022 09:41:30 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Am 21.07.22 um 10:41 schrieb Dean Rasheed:\n> \n> It's important to mark these new functions as VOLATILE, not IMMUTABLE,\n> otherwise they won't work as expected in queries. See\n> https://www.postgresql.org/docs/current/xfunc-volatility.html\n\nCREATE FUNCTION marks functions as VOLATILE by default if not explicitly \nmarked otherwise. I assumed function definitions in pg_proc.dat have the \nsame behavior. I will fix that.\nhttps://www.postgresql.org/docs/current/sql-createfunction.html\n\n> It would be better to use pg_prng_uint64_range() rather than rand() to\n> pick elements. Partly, that's because it uses a higher quality PRNG,\n> with a larger internal state, and it ensures that the results are\n> unbiased across the range. But more importantly, it interoperates with\n> setseed(), allowing predictable sequences of \"random\" numbers to be\n> generated -- something that's useful in writing repeatable regression\n> tests.\n\nI agree that we should use pg_prng_uint64_range(). However, in order to \nachieve interoperability with setseed() we would have to use \ndrandom_seed (rather than pg_global_prng_state) as rng state, which is \ndeclared statically in float.c and exclusively used by random(). Do we \nwant to expose drandom_seed to other functions?\n\n> Assuming these new functions are made to interoperate with setseed(),\n> which I think they should be, then they also need to be marked as\n> PARALLEL RESTRICTED, rather than PARALLEL SAFE. See\n> https://www.postgresql.org/docs/current/parallel-safety.html, which\n> explains why setseed() and random() are parallel restricted.\n\nAs mentioned above, I assumed the default here is PARALLEL UNSAFE. I'll \nfix that.\n\n> In my experience, the requirement for sampling with replacement is\n> about as common as the requirement for sampling without replacement,\n> so it seems odd to provide one but not the other. Of course, we can\n> always add a with-replacement function later, and give it a different\n> name. But maybe array_sample() could be used for both, with a\n> \"with_replacement\" boolean parameter?\n\nWe can also add a \"with_replacement\" boolean parameter, which is false by \ndefault, to array_sample() later. I do not have a strong opinion about \nthat and will implement it if desired. Any opinions about \ndefault/no-default?\n\nMartin\n\n\n",
"msg_date": "Thu, 21 Jul 2022 13:15:43 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "On Thu, 21 Jul 2022 at 12:15, Martin Kalcher\n<martin.kalcher@aboutsource.net> wrote:\n>\n> I agree that we should use pg_prng_uint64_range(). However, in order to\n> achieve interoperability with setseed() we would have to use\n> drandom_seed (rather than pg_global_prng_state) as rng state, which is\n> declared statically in float.c and exclusively used by random(). Do we\n> want to expose drandom_seed to other functions?\n>\n\nAh, I didn't realise that setseed() and random() were bound up so\ntightly. It does feel as though, if we're adding more user-facing\nfunctions that return random sequences, there ought to be a way to\nseed them, and I wouldn't want to have separate setseed functions for\neach one.\n\nI'm inclined to say that we want a new pg_global_prng_user_state that\nis updated by setseed(), and used by random(), array_shuffle(),\narray_sample(), and any other user-facing random functions we add\nlater.\n\nI can also see that others might not like expanding the scope of\nsetseed() in this way.\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 21 Jul 2022 13:25:27 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Am 21.07.22 um 14:25 schrieb Dean Rasheed:\n> \n> I'm inclined to say that we want a new pg_global_prng_user_state that\n> is updated by setseed(), and used by random(), array_shuffle(),\n> array_sample(), and any other user-facing random functions we add\n> later.\n> \n\nI like the idea. How would you organize the code? I imagine some sort of \nuser prng that is encapsulated in something like utils/adt/random.c and \nis guaranteed to always be seeded.\n\nMartin\n\n\n",
"msg_date": "Thu, 21 Jul 2022 17:43:02 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Am 21.07.22 um 10:41 schrieb Dean Rasheed:\n> \n> It's important to mark these new functions as VOLATILE, not IMMUTABLE,\n> otherwise they won't work as expected in queries. See\n> https://www.postgresql.org/docs/current/xfunc-volatility.html\n> \n> It would be better to use pg_prng_uint64_range() rather than rand() to\n> pick elements. Partly, that's because it uses a higher quality PRNG,\n> with a larger internal state, and it ensures that the results are\n> unbiased across the range. But more importantly, it interoperates with\n> setseed(), allowing predictable sequences of \"random\" numbers to be\n> generated -- something that's useful in writing repeatable regression\n> tests.\n> \n> Assuming these new functions are made to interoperate with setseed(),\n> which I think they should be, then they also need to be marked as\n> PARALLEL RESTRICTED, rather than PARALLEL SAFE. See\n> https://www.postgresql.org/docs/current/parallel-safety.html, which\n> explains why setseed() and random() are parallel restricted.\n> \n\nHere is an updated patch that marks the functions VOLATILE PARALLEL \nRESTRICTED and uses pg_prng_uint64_range() rather than rand().",
"msg_date": "Thu, 21 Jul 2022 19:29:12 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "On Thu, 21 Jul 2022 at 16:43, Martin Kalcher\n<martin.kalcher@aboutsource.net> wrote:\n>\n> Am 21.07.22 um 14:25 schrieb Dean Rasheed:\n> >\n> > I'm inclined to say that we want a new pg_global_prng_user_state that\n> > is updated by setseed(), and used by random(), array_shuffle(),\n> > array_sample(), and any other user-facing random functions we add\n> > later.\n> >\n>\n> I like the idea. How would you organize the code? I imagine some sort of\n> user prng that is encapsulated in something like utils/adt/random.c and\n> is guaranteed to always be seeded.\n>\n\nSomething like that, perhaps. I can see 2 ways it could go:\n\nOption 1:\n Keep random.c small\n - Responsible for initialisation of the user prng on demand\n - Expose the user prng state to other code like float.c and arrayfuncs.c\n\nOption 2:\n Move all random functions wanting to use the user prng to random.c\n - Starting with drandom() and setseed()\n - Then, array_shuffle() and array_sample()\n - Later, any new SQL-callable random functions we might add\n - Keep the user prng state local to random.c\n\nOption 2 seems quite appealing, because it keeps all SQL-callable\nrandom functions together in one place, and keeps the state local,\nmaking it easier to keep track of which functions are using it.\n\nCode like the Fisher-Yates algorithm is more to do with random than it\nis to do with arrays, so putting it in random.c seems to make more\nsense.\n\nIt's possible that some hypothetical random function might need access\nto type-specific internals. For example, if we wanted to add a\nfunction to return a random numeric, it would probably have to go in\nnumeric.c, but that could be solved by having random.c call numeric.c,\npassing it the prng state.\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 22 Jul 2022 08:59:20 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Am 22.07.22 um 09:59 schrieb Dean Rasheed:\n>\n> Option 1:\n> Keep random.c small\n> - Responsible for initialisation of the user prng on demand\n> - Expose the user prng state to other code like float.c and arrayfuncs.c\n> \n> Option 2:\n> Move all random functions wanting to use the user prng to random.c\n> - Starting with drandom() and setseed()\n> - Then, array_shuffle() and array_sample()\n> - Later, any new SQL-callable random functions we might add\n> - Keep the user prng state local to random.c\n> \n\nHey Dean,\n\nI came to the same conclusions and went with Option 1 (see patch). \nMainly because most code in utils/adt is organized by type and this way \nit is clear that this is a thin wrapper around pg_prng.\n\nWhat do you think?",
"msg_date": "Fri, 22 Jul 2022 11:31:07 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "On Fri, 22 Jul 2022 at 10:31, Martin Kalcher\n<martin.kalcher@aboutsource.net> wrote:\n>\n> i came to the same conclusions and went with Option 1 (see patch).\n> Mainly because most code in utils/adt is organized by type and this way\n> it is clear, that this is a thin wrapper around pg_prng.\n>\n> What do you think?\n\nLooks fairly neat, on a quick read-through. There's certainly\nsomething to be said for preserving the organisation by type.\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 22 Jul 2022 12:04:06 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "On 7/19/22 10:20, Tom Lane wrote:\n> Everything else either explicitly rejects more-than-one-D arrays\n> or does something that is compatible with thinking of them as\n> arrays-of-arrays.\n\nI think I am responsible for at least some of those, and I agree that \nthinking of MD arrays as arrays-of-arrays is preferable even though they \nare not actually that. Long ago[1] Peter E asked me to fix that as I \nrecall but it was one of those round tuits that I never found.\n\n> So I withdraw my original position. These functions should just\n> shuffle or select in the array's first dimension, preserving\n> subarrays. Or else be lazy and reject more-than-one-D arrays;\n> but it's probably not that hard to handle them.\n\n+1\n\nJoe\n\n[1] \nhttps://www.postgresql.org/message-id/flat/Pine.LNX.4.44.0306281418020.2178-100000%40peter.localdomain#a064d6dd8593993d799db453a3ee04d1\n\n-- \nJoe Conway\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 22 Jul 2022 12:49:10 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Am 22.07.22 um 11:31 schrieb Martin Kalcher:\n> \n> i came to the same conclusions and went with Option 1 (see patch). \n> Mainly because most code in utils/adt is organized by type and this way \n> it is clear, that this is a thin wrapper around pg_prng.\n> \n\nSmall patch update. I realized the new functions should live in \narray_userfuncs.c (rather than arrayfuncs.c), fixed some file headers \nand added some comments to the code.",
"msg_date": "Sat, 23 Jul 2022 14:20:47 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": ">> i came to the same conclusions and went with Option 1 (see patch). Mainly \n>> because most code in utils/adt is organized by type and this way it is \n>> clear, that this is a thin wrapper around pg_prng.\n>> \n>\n> Small patch update. I realized the new functions should live \n> array_userfuncs.c (rather than arrayfuncs.c), fixed some file headers and \n> added some comments to the code.\n\nMy 0,02€ about this patch:\n\nUse (void) when declaring no parameters in headers or in functions.\n\nShould the exchange be skipped when i == k?\n\nI do not see the point of having *only* inline functions in a c file. The \nexternal functions should not be inlined?\n\nThe random and array shuffling functions share a common state. I'm \nwondering whether it should or should not be so. It seems ok, but then \nISTM that the documentation suggests implicitly that setseed applies to \nrandom() only, which is not the case anymore after the patch.\n\nIf more samples are required than the number of elements, it does not \nerror out. I'm wondering whether it should.\n\nAlso, the sampling should not return its result in order when the number \nof elements required is the full array; ISTM that it should be shuffled \nthere as well.\n\nI must say that without significant knowledge of the array internal \nimplementation, the swap code looks pretty strange. ISTM that the code \nwould be clearer if pointer and array reference styles were not \nintermixed.\n\nMaybe you could add a test with a 3D array? Some sample with NULLs?\n\nUnrelated: I notice again that (postgre)SQL does not provide a way to \ngenerate random integers. I do not see why not. Should we provide one?\n\n-- \nFabien.",
"msg_date": "Sun, 24 Jul 2022 10:15:22 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Am 24.07.22 um 10:15 schrieb Fabien COELHO:\n> \n> My 0,02€ about this patch:\n\nThank you for your feedback. I attached a patch that addresses most of \nyour points.\n\n> Use (void) when declaring no parameters in headers or in functions.\n\nDone.\n\n> Should the exchange be skipped when i == k?\n\nThe additional branch is actually slower (on my machine, tested with a \n10M element int array) for 1D-arrays, which I assume is the most common \ncase. Even with a 2D-array with a subarray size of 1M there is no \ndifference in execution time. i == k should be relatively rare for large \narrays, given a good prng.\n\n> I do not see the point of having *only* inline functions in a c file. \n> The external functions should not be inlined?\n\nDone.\n\n> The random and array shuffling functions share a common state. I'm \n> wondering whether it should ot should not be so. It seems ok, but then \n> ISTM that the documentation suggests implicitely that setseed applies to \n> random() only, which is not the case anymore after the patch.\n\nI really like the idea of a prng state owned by the user, that is used \nby all user facing random functions, but see that there might be \nconcerns about this change. I would update the setseed() documentation \nif this proposal is accepted. Do you think my patch should already \ninclude the documentation update?\n\n> If more samples are required than the number of elements, it does not \n> error out. I'm wondering whether it should.\n\nThe second argument to array_sample() is treated like a LIMIT, rather \nthan the actual number of elements. I updated the documentation. My \nuse-case is: take max random items from an array of unknown size.\n\n> Also, the sampling should not return its result in order when the number \n> of elements required is the full array, ISTM that it should be shuffled \n> there as well.\n\nYou are the second person asking for this. It's done. I am thinking \nabout ditching array_sample() and replacing it with a second signature for \narray_shuffle():\n\n    array_shuffle(array anyarray) -> anyarray\n    array_shuffle(array anyarray, limit int) -> anyarray\n\nWhat do you think?\n\n> I must say that without significant knowledge of the array internal \n> implementation, the swap code looks pretty strange. ISTM that the code \n> would be clearer if pointers and array references style were not \n> intermixed.\n\nDone. Went with pointers.\n\n> Maybe you could add a test with a 3D array? Some sample with NULLs?\n\nDone.\n\n> Unrelated: I notice again that (postgre)SQL does not provide a way to \n> generate random integers. I do not see why not. Should we provide one?\n\nMaybe. I think it is outside the scope of this patch.\n\nThank you, Martin",
"msg_date": "Sun, 24 Jul 2022 21:29:39 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "\nHello,\n\n> Thank you for your feedback. I attached a patch, that addresses most of your \n> points.\n\nI'll look into it. It would help if the patch could include a version \nnumber at the end.\n\n>> Should the exchange be skipped when i == k?\n>\n> The additional branch is actually slower (on my machine, tested with an 10M \n> element int array) for 1D-arrays, which i assume is the most common case. \n> Even with a 2D-array with a subarray size of 1M there is no difference in \n> execution time. i == k should be relatively rare for large arrays, given a \n> good prng.\n\nOk, slower is bad.\n\n>> The random and array shuffling functions share a common state. I'm \n>> wondering whether it should ot should not be so. It seems ok, but then ISTM \n>> that the documentation suggests implicitely that setseed applies to \n>> random() only, which is not the case anymore after the patch.\n>\n> I really like the idea of a prng state owned by the user, that is used by all \n> user facing random functions, but see that their might be concerns about this \n> change. I would update the setseed() documentation, if this proposal is \n> accepted. Do you think my patch should already include the documentation \n> update?\n\nDunno. I'm still wondering what it should do. I'm pretty sure that the \ndocumentation should be clear about a shared seed, if any. I do not think \nthat departing from the standard is a good thing, either.\n\n>> If more samples are required than the number of elements, it does not error \n>> out. I'm wondering whether it should.\n>\n> The second argument to array_sample() is treated like a LIMIT, rather than \n> the actual number of elements. I updated the documentation. My use-case is: \n> take max random items from an array of unknown size.\n\nHmmm. ISTM that the use case of wanting \"this many\" stuff would make more \nsense because it is stricter, so less likely to result in unforeseen \nconsequences. On principle I do not like not knowing the output size.\n\nIf someone wants a limit, they can easily \"LEAST(#1 dim size, other \nlimit)\" to get it; it is easy enough with a strict function.\n\n>> Also, the sampling should not return its result in order when the number of \n>> elements required is the full array, ISTM that it should be shuffled there \n>> as well.\n>\n> You are the second person asking for this. It's done. I am thinking about \n> ditching array_sample() and replace it with a second signature for \n> array_shuffle():\n>\n>    array_shuffle(array anyarray) -> anyarray\n>    array_shuffle(array anyarray, limit int) -> anyarray\n>\n> What do you think?\n\nI think that shuffle means shuffle, not partial shuffle, so a different \nname seems better to me.\n\n-- \nFabien.",
"msg_date": "Sun, 24 Jul 2022 21:42:48 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Am 24.07.22 um 21:42 schrieb Fabien COELHO:\n> \n> Duno. I'm still wondering what it should do. I'm pretty sure that the \n> documentation should be clear about a shared seed, if any. I do not \n> think that departing from the standard is a good thing, either.\n\nAre you sure it violates the standard? I could not find anything about it. \nThe private prng state for random() was introduced in 2018 [0]. Neither \ncommit nor discussion mentions any standard compliance.\n\n[0] \nhttps://www.postgresql.org/message-id/E1gdNAo-00036g-TB%40gemulon.postgresql.org\n\nI updated the documentation for setseed().\n\n> If someone wants a limit, they can easily \"LEAST(#1 dim size, other \n> limit)\" to get it, it is easy enough with a strict function.\n\nConvinced. It errors out now if n is out of bounds.\n\nMartin",
"msg_date": "Mon, 25 Jul 2022 09:34:31 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Patch update without merge conflicts.\n\nMartin",
"msg_date": "Thu, 4 Aug 2022 07:46:10 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-04 07:46:10 +0200, Martin Kalcher wrote:\n> Patch update without merge conflicts.\n\nDue to the merge of the meson based build, this patch needs to be adjusted. See\nhttps://cirrus-ci.com/build/6580671765282816\nLooks like it'd just be adding user_prng.c to\nsrc/backend/utils/adt/meson.build's list.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 22 Sep 2022 08:23:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Am 22.09.22 um 17:23 schrieb Andres Freund:\n> Hi,\n> \n> On 2022-08-04 07:46:10 +0200, Martin Kalcher wrote:\n>> Patch update without merge conflicts.\n> \n> Due to the merge of the meson based build, this patch needs to be adjusted. See\n> https://cirrus-ci.com/build/6580671765282816\n> Looks like it'd just be adding user_prng.c to\n> src/backend/utils/adt/meson.build's list.\n> \n> Greetings,\n> \n> Andres Freund\n\nHi Andres,\n\nthanks for the heads up. Adjusted patch is attached.\n\n- Martin",
"msg_date": "Thu, 22 Sep 2022 17:40:17 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Martin Kalcher <martin.kalcher@aboutsource.net> writes:\n> [ v4-0001-Introduce-array_shuffle-and-array_sample.patch ]\n\nI think this idea of exporting drandom()'s PRNG for all and sundry\nto use is completely misguided. If we go down that path we'll\nbe right back in the swamp that we were in when we used random(3),\nnamely that (a) it's not clear what affects what, and (b) to the\nextent that there are such interferences, it could be bad, maybe\neven a security problem. We very intentionally decoupled drandom()\nfrom the rest of the world at commit 6645ad6bd, and I'm not ready\nto unlearn that lesson.\n\nWith our current PRNG infrastructure it doesn't cost much to have\na separate PRNG for every purpose. I don't object to having\narray_shuffle() and array_sample() share one PRNG, but I don't\nthink it should go much further than that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 26 Sep 2022 16:16:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Am 26.09.22 um 22:16 schrieb Tom Lane:\n> \n> With our current PRNG infrastructure it doesn't cost much to have\n> a separate PRNG for every purpose. I don't object to having\n> array_shuffle() and array_sample() share one PRNG, but I don't\n> think it should go much further than that.\n> \n\nThanks for your thoughts, Tom. I have a couple of questions. Should we \nintroduce a new seed function for the new PRNG state, used by \narray_shuffle() and array_sample()? What would be a good name? Or should \nthose functions use pg_global_prng_state? Is it safe to assume, that \npg_global_prng_state is seeded?\n\nMartin\n\n\n",
"msg_date": "Wed, 28 Sep 2022 12:40:31 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "\n>> With our current PRNG infrastructure it doesn't cost much to have\n>> a separate PRNG for every purpose. I don't object to having\n>> array_shuffle() and array_sample() share one PRNG, but I don't\n>> think it should go much further than that.\n>\n> Thanks for your thoughts, Tom. I have a couple of questions. Should we \n> introduce a new seed function for the new PRNG state, used by array_shuffle() \n> and array_sample()? What would be a good name? Or should those functions use \n> pg_global_prng_state? Is it safe to assume, that pg_global_prng_state is \n> seeded?\n\nI'd suggest to use the existing global state. The global state should be \nseeded at the process start, AFAIKR.\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 28 Sep 2022 15:07:06 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> Thanks for your thoughts, Tom. I have a couple of questions. Should we \n>> introduce a new seed function for the new PRNG state, used by array_shuffle() \n>> and array_sample()? What would be a good name? Or should those functions use \n>> pg_global_prng_state? Is it safe to assume, that pg_global_prng_state is \n>> seeded?\n\n> I'd suggest to use the existing global state. The global state should be \n> seeded at the process start, AFAIKR.\n\nIt is seeded at process start, yes. If you don't feel a need for\nuser control over the sequence used by these functions, then using\npg_global_prng_state is fine. (Basically the point to be made\nhere is that we need to keep a tight rein on what can be affected\nby setseed().)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Sep 2022 10:18:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Am 28.09.22 um 16:18 schrieb Tom Lane:\n> It is seeded at process start, yes. If you don't feel a need for\n> user control over the sequence used by these functions, then using\n> pg_global_prng_state is fine. (Basically the point to be made\n> here is that we need to keep a tight rein on what can be affected\n> by setseed().)\n> \n> \t\t\tregards, tom lane\n\nNew patch: array_shuffle() and array_sample() use pg_global_prng_state now.\n\nMartin",
"msg_date": "Thu, 29 Sep 2022 11:39:28 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Martin Kalcher <martin.kalcher@aboutsource.net> writes:\n> New patch: array_shuffle() and array_sample() use pg_global_prng_state now.\n\nI took a closer look at the patch today. I find this behavior a bit\nsurprising:\n\n+SELECT array_dims(array_sample('[-1:2][2:3]={{1,2},{3,NULL},{5,6},{7,8}}'::int[], 3));\n+ array_dims \n+-------------\n+ [-1:1][2:3]\n+(1 row)\n\nI can buy preserving the lower bound in array_shuffle(), but\narray_sample() is not preserving the first-dimension indexes of\nthe array, so ISTM it ought to reset the first lower bound to 1.\n\nSome other comments:\n\n+ Returns <parameter>n</parameter> randomly chosen elements from <parameter>array</parameter> in selection order.\n\nWhat's \"selection order\"? And this probably shouldn't just rely\non the example to describe what happens with multi-D arrays.\nWriting \"elements\" seems somewhere between confusing and wrong.\n\n* Personally I think I'd pass the TypeCacheEntry pointer to array_shuffle_n,\nand let it pull out what it needs. Less duplicate code that way.\n\n* I find array_shuffle_n drastically undercommented, and what comments\nit has are pretty misleading, eg\n\n+\t\t/* Swap all elements in item (i) with elements in item (j). */\n\nj is *not* the index of the second item to be swapped. You could make\nit so, and that might be more readable:\n\n\t\tj = (int) pg_prng_uint64_range(&pg_global_prng_state, i, nitem - 1);\n\t\tjelms = elms + j * nelm;\n\t\tjnuls = nuls + j * nelm;\n\nBut if you want the code to stay as it is, this comment needs work.\n\n* I think good style for SQL-callable C functions is to make the arguments\nclear at the top:\n\n+array_sample(PG_FUNCTION_ARGS)\n+{\n+ ArrayType *array = PG_GETARG_ARRAYTYPE_P(0);\n+ int n = PG_GETARG_INT32(1);\n+ ArrayType *result;\n+ ... other declarations as needed ...\n\nWe can't quite make normal C declaration style work, but that's a poor\nexcuse for not making the argument list visible as directly as possible.\n\n* I wouldn't bother with the PG_FREE_IF_COPY calls. Those are generally\nonly used in btree comparison functions, in which there's a need to not\nleak memory.\n\n* I wonder if we really need these to be proparallel = 'r'. If we let\na worker process execute them, they will take numbers from the worker's\npg_global_prng_state seeding not the parent process's seeding, but why\nis that a problem? In the prior version where you were tying them\nto the parent's drandom() sequence, proparallel = 'r' made sense;\nbut now I think it's unnecessary.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Sep 2022 15:33:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "On Thu, 29 Sept 2022 at 15:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Martin Kalcher <martin.kalcher@aboutsource.net> writes:\n> > New patch: array_shuffle() and array_sample() use pg_global_prng_state now.\n>\n> I took a closer look at the patch today. I find this behavior a bit\n> surprising:\n>\n\nIt looks like this patch received useful feedback and it wouldn't take\nmuch to push it over the line. But it's been Waiting On Author since\nlast September.\n\nMartin, any chance of getting these last bits of feedback resolved so\nit can be Ready for Commit?\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Mon, 20 Mar 2023 15:34:55 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Given that there's been no updates since September 22 I'm going to\nmake this patch Returned with Feedback. The patch can be resurrected\nwhen there's more work done -- don't forget to move it to the new CF\nwhen you do that.\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Mon, 3 Apr 2023 17:16:22 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "> On 29 Sep 2022, at 21:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Martin Kalcher <martin.kalcher@aboutsource.net> writes:\n>> New patch: array_shuffle() and array_sample() use pg_global_prng_state now.\n> \n> I took a closer look at the patch today.\n\nSince this seems pretty close to going in, and seems like quite useful\nfunctions, I took a look to see if I could get it across the line (although I\nnoticed that CFM beat me to the clock in sending this =)).\n\n> I find this behavior a bit surprising:\n> \n> +SELECT array_dims(array_sample('[-1:2][2:3]={{1,2},{3,NULL},{5,6},{7,8}}'::int[], 3));\n> + array_dims \n> +-------------\n> + [-1:1][2:3]\n> +(1 row)\n> \n> I can buy preserving the lower bound in array_shuffle(), but\n> array_sample() is not preserving the first-dimension indexes of\n> the array, so ISTM it ought to reset the first lower bound to 1.\n\nI might be daft but I'm not sure I follow why not preserving here, can you\nexplain?\n\nThe rest of your comments have been addressed in the attached v6 I think\n(although I'm pretty sure the docs part is just as bad now, explaining these in\nconcise words is hard, will take another look with fresh eyes tomorrow).\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 3 Apr 2023 23:25:39 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 29 Sep 2022, at 21:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I find this behavior a bit surprising:\n>> \n>> +SELECT array_dims(array_sample('[-1:2][2:3]={{1,2},{3,NULL},{5,6},{7,8}}'::int[], 3));\n>> + array_dims \n>> +-------------\n>> + [-1:1][2:3]\n>> +(1 row)\n>> \n>> I can buy preserving the lower bound in array_shuffle(), but\n>> array_sample() is not preserving the first-dimension indexes of\n>> the array, so ISTM it ought to reset the first lower bound to 1.\n\n> I might be daft but I'm not sure I follow why not preserving here, can you\n> explain?\n\nBecause array_sample selects only some of the (first level) array\nelements, those elements are typically not going to have the same\nindexes in the output as they did in the input. So I don't see why\nit would be useful to preserve the same lower-bound index. It does\nmake sense to preserve the lower-order index bounds ([2:3] in this\nexample) because we are including or not including those array\nslices as a whole.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Apr 2023 17:46:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "> On 3 Apr 2023, at 23:46, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 29 Sep 2022, at 21:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> I find this behavior a bit surprising:\n>>> \n>>> +SELECT array_dims(array_sample('[-1:2][2:3]={{1,2},{3,NULL},{5,6},{7,8}}'::int[], 3));\n>>> + array_dims \n>>> +-------------\n>>> + [-1:1][2:3]\n>>> +(1 row)\n>>> \n>>> I can buy preserving the lower bound in array_shuffle(), but\n>>> array_sample() is not preserving the first-dimension indexes of\n>>> the array, so ISTM it ought to reset the first lower bound to 1.\n> \n>> I might be daft but I'm not sure I follow why not preserving here, can you\n>> explain?\n> \n> Because array_sample selects only some of the (first level) array\n> elements, those elements are typically not going to have the same\n> indexes in the output as they did in the input. So I don't see why\n> it would be useful to preserve the same lower-bound index. It does\n> make sense to preserve the lower-order index bounds ([2:3] in this\n> example) because we are including or not including those array\n> slices as a whole.\n\nAh, ok, now I see what you mean, thanks! I'll try to fix up the patch like\nthis tomorrow.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 3 Apr 2023 23:52:31 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> Ah, ok, now I see what you mean, thanks! I'll try to fix up the patch like\n> this tomorrow.\n\nSince we're running out of time, I took the liberty of fixing and\npushing this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 07 Apr 2023 11:47:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "> On 7 Apr 2023, at 17:47, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> Ah, ok, now I see what you mean, thanks! I'll try to fix up the patch like\n>> this tomorrow.\n> \n> Since we're running out of time, I took the liberty of fixing and\n> pushing this.\n\nGreat, thanks!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 7 Apr 2023 18:01:15 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
},
{
"msg_contents": "Hi all,\n\nreading this blog post\nhttps://www.depesz.com/2023/04/18/waiting-for-postgresql-16-add-array_sample-and-array_shuffle-functions/\nI became aware of the new feature and had a look at it and the commit\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=888f2ea0a81ff171087bdd1c5c1eeda3b78d73d4\n\nTo me the description\n  /*\n   * Shuffle array using Fisher-Yates algorithm. Scan the array and swap\n   * current item (nelm datums starting at ielms) with a randomly chosen\n   * later item (nelm datums starting at jelms) in each iteration. We can\n   * stop once we've done n iterations; then first n items are the result.\n   */\n\nseems wrong. For n = 1 the returned item could never be the 1st item of the\narray (see \"randomly chosen later item\").\nIf this really is the case then the result is not really random. But to me\nit seems j later can be 0 (making it not really \"later\"), so this might\nonly be a documentation issue.\n\nBest regards\nSalek Talangi\n\nAm Mi., 19. Apr. 2023 um 13:48 Uhr schrieb Daniel Gustafsson <\ndaniel@yesql.se>:\n\n> > On 7 Apr 2023, at 17:47, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Daniel Gustafsson <daniel@yesql.se> writes:\n> >> Ah, ok, now I see what you mean, thanks! I'll try to fix up the patch\n> like\n> >> this tomorrow.\n> >\n> > Since we're running out of time, I took the liberty of fixing and\n> > pushing this.\n>\n> Great, thanks!\n>\n> --\n> Daniel Gustafsson",
"msg_date": "Wed, 19 Apr 2023 14:04:45 +0200",
"msg_from": "Salek Talangi <salek.talangi@googlemail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Introduce array_shuffle() and array_sample()"
}
] |
[
{
"msg_contents": "Hi,\n I ran the TPC-DS benchmark for Postgres and found that the join size estimation has several problems.\n For example, ndistinct is key to the join selectivity estimation, but this value does not take the restrictions\n of the rel into account. I hit some cases in the function eqjoinsel where nd is much larger than vardata.rel->rows.\n\n Accurate estimation needs a good math model that considers the dependency of the join var and the vars in the restrictions.\n But at least, ndistinct should not be greater than the number of rows.\n\n See the attached patch to adjust nd in eqjoinsel.\n\nBest,\nZhenghua Lyu",
"msg_date": "Fri, 15 Jul 2022 14:06:54 +0000",
"msg_from": "Zhenghua Lyu <zlyu@vmware.com>",
"msg_from_op": true,
"msg_subject": "Adjust ndistinct for eqjoinsel"
},
{
"msg_contents": "Zhenghua Lyu <zlyu@vmware.com> writes:\n> I run TPC-DS benchmark for Postgres and find the join size estimation has several problems.\n> For example, Ndistinct is key to join selectivity's estimation, this value does not take restrictions\n> of the rel, I hit some cases in the function eqjoinsel, nd is much larger than vardata.rel->rows.\n\n> Accurate estimation need good math model that considering dependency of join var and vars in restriction.\n> But at least, indistinct should not be greater than the number of rows.\n\n> See the attached patch to adjust nd in eqjoinsel.\n\nWe're very unlikely to accept this with no test case and no explanation\nof why it's not an overcorrection. get_variable_numdistinct already\nclamps its result to rel->tuples, and I think that by using rel->rows\ninstead you are probably double-counting the selectivity of the rel's\nrestriction clauses.\n\nSee the sad history of commit 7f3eba30c, which did something\npretty close to this and eventually got almost entirely reverted\n(97930cf57, 0d3b231ee). I'd be the first to agree that better\nestimates here would be great, but it's not as simple as it looks.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Jul 2022 11:56:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adjust ndistinct for eqjoinsel"
}
] |
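The clamping question debated above can be sketched outside PostgreSQL. Below is a hypothetical Python model (not PostgreSQL source code; the function name and parameters are illustrative only): `get_variable_numdistinct()` already clamps the ndistinct estimate to `rel->tuples`, while the proposed patch adds a further clamp to `rel->rows`, the post-restriction row count — which, as Tom notes, risks double-counting the selectivity of the rel's restriction clauses.

```python
# Toy model of the ndistinct clamping discussed above -- an illustration,
# not PostgreSQL source code.

def clamped_ndistinct(nd, rel_tuples, rel_rows, apply_patch):
    """Return the ndistinct estimate after clamping.

    The existing code clamps nd to the table's raw tuple count; the patch
    proposes an additional clamp to the post-restriction row count.
    """
    nd = min(nd, rel_tuples)      # existing clamp in get_variable_numdistinct()
    if apply_patch:
        nd = min(nd, rel_rows)    # proposed extra clamp to rel->rows
    return nd

# A table of 1M tuples, restrictions keeping 10k rows, stats say 100k distinct:
print(clamped_ndistinct(100_000, 1_000_000, 10_000, apply_patch=False))  # 100000
print(clamped_ndistinct(100_000, 1_000_000, 10_000, apply_patch=True))   # 10000
```

The second call shows why this needs a test case: the clamp silently folds the restriction selectivity into the ndistinct estimate, so join selectivity computed from it may count that selectivity twice.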
[
{
"msg_contents": "Hi,\n\nOne of the problems we found ([1]) when looking at spurious failures of the\nrecovery conflict test ([2]) is that a single backend can FATAL out multiple\ntimes. That seems independent enough from that thread that I thought it best\nto raise separately.\n\nThe problem is that during the course of FATAL processing, we end up\nprocessing interrupts, which then can process the next recovery conflict\ninterrupt. This happens because while we send the FATAL to the client\n(errfinish()->EmitErrorReport()) we'll process interrupts in various\nplaces. If e.g. a new recovery conflict interrupt has been raised (which\nstartup.c does at an absurd rate), that'll then trigger a new FATAL.\n\nSources for recursive processing of interrupts are e.g. < ERROR ereports like\nthe COMERROR in internal_flush().\n\nA similar problem does *not* exist for for ERROR, because errfinish()\nPG_RE_THROW()s before the EmitErrorReport() and PostgresMain() does\nHOLD_INTERRUPTS() first thing after sigsetjmp().\n\nOne might reasonably think that the proc_exit_inprogress logic in die(),\nRecoveryConflictInterrupt() etc. protects us against this - but it doesn't,\nbecause we've not yet set it when doing EmitErrorReport(), it gets set later\nduring proc_exit().\n\nI'm not certain what the best way to address this is.\n\nOne approach would be to put a HOLD_INTERRUPTS() before EmitErrorReport() when\nhandling a FATAL error. I'm a bit worried this might make it harder to\ninterrupt FATAL errors when they're blocking sending the message to the client\n- ProcessClientWriteInterrupt() won't do its thing. OTOH, it looks like we\nalready have that problem if there's a FATAL after the sigsetjmp() block did\nHOLD_INTERRUPTS(), because erfinish() won't reset InterruptHoldoffCount like\nit does for ERROR.\n\nAnother approach would be to set proc_exit_inprogress earlier, before the\nEmitErrorReport(). 
That's a bit ugly, because it ties ipc.c more closely to\nelog.c, but it also \"feels\" correct to me. OTOH, it's at best a partial\nprotection, because it doesn't prevent already pending interrupts from being\nprocessed.\n\nI guess we could instead add a dedicated variable indicating whether we're\ncurrently processing a FATAL error? I was considering exposing a function\nchecking whether elog's recursion_depth is != 0, and short-circuit\nProcessInterrupts() based on that, but I don't think that'd be good - we want\nto be able to interrupt some super-noisy NOTICE or such.\n\n\nI suspect that we should, even if it does not address this issue, reset\nInterruptHoldoffCount in errfinish() for FATALs, similar to ERRORs, so that\nFATALs can benefit from the logic in ProcessClientWriteInterrupt().\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20220701231833.vh76xkf3udani3np%40alap3.anarazel.de\n[2] clearly we needed that test urgently, but I can't deny regretting the\n consequence of having to fix the plethora of bugs it's uncovering\n\n\n",
"msg_date": "Fri, 15 Jul 2022 10:29:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "interrupt processing during FATAL processing"
}
] |
[
{
"msg_contents": "Hi,\nI was looking at the code in use_physical_tlist().\n\nIn the code block checking CP_LABEL_TLIST, I noticed that\nthe Bitmapset sortgroupatts is not freed before returning from the method.\n\nLooking at create_foreignscan_plan() (in the same file):\n\n bms_free(attrs_used);\n\nIt seems the intermediate Bitmapset is freed before returning.\n\nI would appreciate review comments for the proposed patch.\n\nThanks",
"msg_date": "Fri, 15 Jul 2022 19:49:23 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Freeing sortgroupatts in use_physical_tlist"
},
{
"msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> I was looking at the code in use_physical_tlist().\n> In the code block checking CP_LABEL_TLIST, I noticed that\n> the Bitmapset sortgroupatts is not freed before returning from the method.\n> Looking at create_foreignscan_plan() (in the same file):\n> bms_free(attrs_used);\n> It seems the intermediate Bitmapset is freed before returning.\n\nTBH, I'd say that it's probably the former code not the latter\nthat is good practice. Retail pfree's in code that's not in a\nloop very possibly expend more cycles than they are worth, because\nthe space will get cleaned up anyway when the active memory\ncontext is reset, and pfree is not as cheap as one might wish.\nIt might be possible to make a case for one method over the other\nwith some study of the particular situation, but you can't say\na priori which way is better.\n\nOn the whole, I would not bother changing either of these bits\nof code without some clear evidence that it matters. It likely\ndoesn't. It's even more likely that it doesn't matter enough\nto be worth investigating.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Jul 2022 23:32:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Freeing sortgroupatts in use_physical_tlist"
},
{
"msg_contents": "On Fri, Jul 15, 2022 at 8:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Zhihong Yu <zyu@yugabyte.com> writes:\n> > I was looking at the code in use_physical_tlist().\n> > In the code block checking CP_LABEL_TLIST, I noticed that\n> > the Bitmapset sortgroupatts is not freed before returning from the\n> method.\n> > Looking at create_foreignscan_plan() (in the same file):\n> > bms_free(attrs_used);\n> > It seems the intermediate Bitmapset is freed before returning.\n>\n> TBH, I'd say that it's probably the former code not the latter\n> that is good practice. Retail pfree's in code that's not in a\n> loop very possibly expend more cycles than they are worth, because\n> the space will get cleaned up anyway when the active memory\n> context is reset, and pfree is not as cheap as one might wish.\n> It might be possible to make a case for one method over the other\n> with some study of the particular situation, but you can't say\n> a priori which way is better.\n>\n> On the whole, I would not bother changing either of these bits\n> of code without some clear evidence that it matters. It likely\n> doesn't. It's even more likely that it doesn't matter enough\n> to be worth investigating.\n>\n> regards, tom lane\n>\nHi, Tom:\nThanks for responding over the weekend.\n\nI will try to remember what you said.\n\nCheers",
"msg_date": "Fri, 15 Jul 2022 20:52:37 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Freeing sortgroupatts in use_physical_tlist"
}
] |
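Tom's point about retail frees versus context resets can be illustrated with a toy arena model. This is a hypothetical Python sketch (invented names, not PostgreSQL's memory-context implementation): everything allocated in a context is reclaimed in one bulk reset, so an extra retail free of `sortgroupatts` just before returning buys little.

```python
# Toy model of the tradeoff described above: retail frees (pfree) vs.
# letting a memory-context reset reclaim everything at once.
# Illustrative only -- not PostgreSQL's mcxt.c.

class MemoryContext:
    def __init__(self):
        self.allocations = []

    def alloc(self, obj):
        """Track an allocation made in this context (palloc analogue)."""
        self.allocations.append(obj)
        return obj

    def reset(self):
        # One bulk operation reclaims every allocation made in the context,
        # whether or not the code bothered with a retail free earlier.
        self.allocations.clear()

ctx = MemoryContext()
sortgroupatts = ctx.alloc(set())   # the bitmapset from use_physical_tlist()
attrs_used = ctx.alloc(set())      # the bitmapset from create_foreignscan_plan()
ctx.reset()
print(len(ctx.allocations))        # 0: both reclaimed together by the reset
```

In the real code the cost comparison is between the cycles `pfree` spends per call and the near-free bulk reset, which is why neither style is a priori better without measurement.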
[
{
"msg_contents": "I've run into an existing behavior where xmax(), and various other system\ntables, return an error when included in the RETURNING list on a\npartitioned table.\n\nERROR: cannot retrieve a system column in this context\n`\nThis issue got a fair airing back in 2020:\n\nAW: posgres 12 bug (partitioned table)\nhttps://www.postgresql.org/message-id/flat/GVAP278MB006939B1D7DFDD650E383FBFEACE0%40GVAP278MB0069.CHEP278.PROD.OUTLOOK.COM#908f2604081699e7f41fa20d352e1b79\n\nI'm using 14.4, and just ran into this behavior today. I'm wondering if\nthere has been any new work on this subject, or anything to take into\naccount moving forward?\n\nI'm not a C coder, and do not know the Postgres internals, but here's what\nI gleaned from the thread:\n\n* Available system columns depend on the underlying table access method,\nand may/will vary across AMs. For example, the columns implemented by heap\nis what the docs describe, an FDW could be anything, and Postgres has no\ncontrol of what, if any, system column-like attributes they support, and\nfuture and hypothetical AMs may have different sets.\n\n* Rather than return garbage results, or a default of 0, etc., the system\nthrows the error I ran into.\n\nI'd be happier working with a NULL result than garbage, ambiguous results,\nor errors...but an error is the current behavior. Agreed on that, I'd\nrather an error than a bad/meaningless result. Postgres' consistent\nemphasis on correctness is easily one of its greatest qualities.\n\nIn my case, I'm upgrading a lot of existing code to try and capture a more\ncomplete profile of what an UPSERT did. Right now, I grab a count(*) of the\nrows and return that. Works fine. A revised snippet looks a bit like this:\n\n------------------------------------------------------------\n...UPSERT code\nreturning xmax as inserted_transaction_id),\n\nstatus_data AS (\n select count(*) FILTER (where inserted_transaction_id = 0) AS\ninsert_count,\n count(*) FILTER (where inserted_transaction_id != 0) AS\nestimated_update_count,\n pg_current_xact_id_if_assigned()::text AS\ntransaction_id\n\n from inserted_rows),\n\n...custom logging code\n\n-- Final output/result.\n select insert_count,\n estimated_update_count,\n transaction_id\n\n from status_data;\n------------------------------------------------------------\n\nThis fails on a partitioned table because xmax() may not exist. In fact, it\ndoes exist in all of those tables, but the system doesn't know how to\nguarantee that. I know which tables are partitioned, and can downgrade the\nresult on partitioned tables to the count(*) I've been using to date. But\nnow I'm wondering if working with xmax() like this is a poor idea going\nforward. I don't want to lean on a feature/behavior that's likely to\nchange. For example, I noticed the other day that MERGE does not support\nRETURNING.\n\nI'd appreciate any insight or advice you can offer.",
"msg_date": "Mon, 18 Jul 2022 11:04:21 +1000",
"msg_from": "Morris de Oryx <morrisdeoryx@gmail.com>",
"msg_from_op": true,
"msg_subject": "System column support for partitioned tables using heap"
},
{
"msg_contents": "On Sun, Jul 17, 2022 at 9:04 PM Morris de Oryx <morrisdeoryx@gmail.com> wrote:\n> This fails on a partitioned table because xmax() may not exist. In fact, it does exist in all of those tables, but the system doesn't know how to guarantee that. I know which tables are partitioned, and can downgrade the result on partitioned tables to the count(*) I've been using to date. But now I'm wondering if working with xmax() like this is a poor idea going forward. I don't want to lean on a feature/behavior that's likely to change. For example, I noticed the other day that MERGE does not support RETURNING.\n>\n> I'd appreciate any insight or advice you can offer.\n\nWhat is motivating you to want to see the xmax value here? It's not an\nunreasonable thing to want to do, IMHO, but it's a little bit niche so\nI'm just curious what the motivation is.\n\nI do agree with you that it would be nice if this worked better than\nit does, but I don't really know exactly how to make that happen. The\ncolumn list for a partitioned table must be fixed at the time it is\ncreated, but we do not know what partitions might be added in the\nfuture, and thus we don't know whether they will have an xmax column.\nI guess we could have tried to work things out so that a 0 value would\nbe passed up from children that lack an xmax column, and that would\nallow the parent to have such a column, but I don't feel too bad that\nwe didn't do that ... should I?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 18 Jul 2022 16:12:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: System column support for partitioned tables using heap"
},
{
"msg_contents": "> What is motivating you to want to see the xmax value here? It's not an\n> unreasonable thing to want to do, IMHO, but it's a little bit niche so\n> I'm just curious what the motivation is.\n\nYeah, I figured it was niche when I saw so little mention of the issue.\n\nMy reason for xmax() in the result is to break down the affected rows count\ninto an insert count, and a modified estimate. Not super critical, but\nhelpful. I've built out some simple custom logging table in out system for\nthis kind of detail, and folks have been wanting to break down rows\nsubmitted, rows inserted, and rows updated a bit better. Rows submitted is\neasy and rows inserted is too...update is an estimate as I'm not using\nanything fancy with xmax() to sort out what exactly happened.\n\nFor clarification, we're not using an ORM, and may need to support\nstraggling clients, so our push cycle works like this:\n\n* Create a view with the fields expected in the insert. I figured I'd use\nCREATE VIEW instead of CREATE TYPE as then I can quickly check out the\ndetails against live data, and I still get a custom compound type.\n\n* Write a function that accepts an array of view_name_type. I *love* Postgres'\ntyping system, It has spoiled me forever. Can't submit badly formatted\nobjects from the client, they're rejected automatically.\n\n* Write a client-side routine to package data as an array and push it into\nthe insert handling function. The function unnests the array, and then the\nactual insert code draws from the unpacked values. If I need to extend the\ntable, I can add a new function that knows about the revised fields, and\nrevise (when necessary) earlier supported formats to map to new\ntypes/columns/defaults.\n\nThere are few CTEs in there, including one that does the main insert and\nreturns the xmax(). That lets me distinguish xmax = 0 (insert) from xmax <>\n0 (not an insert).\n\n> I do agree with you that it would be nice if this worked better than\n> it does, but I don't really know exactly how to make that happen. The\n> column list for a partitioned table must be fixed at the time it is\n> created, but we do not know what partitions might be added in the\n> future, and thus we don't know whether they will have an xmax column.\n> I guess we could have tried to work things out so that a 0 value would\n> be passed up from children that lack an xmax column, and that would\n> allow the parent to have such a column, but I don't feel too bad that\n> we didn't do that ... should I?\n\nYou should never feel bad about anything ;-) You and others on that thread\ncontribute so much that I'm getting value out of.\n\nI had it in mind that it would be nice to have some kind of\ncatalog/abstraction that would make it possible to interrogate what system\ncolumns are available on a table/partition based on access method. In my\nvague notion, that might make some of the other ideas from that thread,\nsuch as index-oriented stores with quite different physical layouts, easier\nto implement. But, it's all free when you aren't the one who can write the\ncode.\n\nI've switched the partition-based tables back to returning * on the insert\nCTE, and then aggregating that to add to a log table and the client result.\nIt's fine. A rich result summary would be very nice. As in rows\nadded/modified/deleted on whatever table(s). If anyone ever decides to\nimplement such a structure for MERGE, it would be nice to see it\nretrofitted to the other data modification commands where RETURNING works.\n\nOn Tue, Jul 19, 2022 at 6:13 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Sun, Jul 17, 2022 at 9:04 PM Morris de Oryx <morrisdeoryx@gmail.com>\n> wrote:\n> > This fails on a partitioned table because xmax() may not exist. In fact,\n> it does exist in all of those tables, but the system doesn't know how to\n> guarantee that. I know which tables are partitioned, and can downgrade the\n> result on partitioned tables to the count(*) I've been using to date. But\n> now I'm wondering if working with xmax() like this is a poor idea going\n> forward. I don't want to lean on a feature/behavior that's likely to\n> change. For example, I noticed the other day that MERGE does not support\n> RETURNING.\n> >\n> > I'd appreciate any insight or advice you can offer.\n>\n> What is motivating you to want to see the xmax value here? It's not an\n> unreasonable thing to want to do, IMHO, but it's a little bit niche so\n> I'm just curious what the motivation is.\n>\n> I do agree with you that it would be nice if this worked better than\n> it does, but I don't really know exactly how to make that happen. The\n> column list for a partitioned table must be fixed at the time it is\n> created, but we do not know what partitions might be added in the\n> future, and thus we don't know whether they will have an xmax column.\n> I guess we could have tried to work things out so that a 0 value would\n> be passed up from children that lack an xmax column, and that would\n> allow the parent to have such a column, but I don't feel too bad that\n> we didn't do that ... should I?\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n",
"msg_date": "Tue, 19 Jul 2022 18:43:50 +1000",
"msg_from": "Morris de Oryx <morrisdeoryx@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: System column support for partitioned tables using heap"
},
{
"msg_contents": "> The column list for a partitioned table must be fixed at the time it is\n> created, but we do not know what partitions might be added in the\n> future, and thus we don't know whether they will have an xmax column.\n\nRight, seeing what you're meaning there. It's fantastic that a partition\nmight be an FDW to a system that has no concept at all of anything like a\n\"system column\", or something with an alternative AM to heap that has a\ndifferent set of system columns. That flexibility in partitions is super\nvaluable. I'd love to be able to convert old partitions into column stores,\nfor example. (I think that Citus offers that feature now.)\n\nI guess if anyone ever felt it was worth the effort, maybe whatever checks\nare done at attach-partition time for the column list could also enforce\nmeta/system columns. If missing, a shimming mechanism would be pretty\nnecessary.\n\nSounds like a lot of work for not much gain, at least in this narrow case.\n\nThanks again for answering.\n\nOn Tue, Jul 19, 2022 at 6:43 PM Morris de Oryx <morrisdeoryx@gmail.com>\nwrote:\n\n> > What is motivating you to want to see the xmax value here? It's not an\n> > unreasonable thing to want to do, IMHO, but it's a little bit niche so\n> > I'm just curious what the motivation is.\n>\n> Yeah, I figured it was niche when I saw so little mention of the issue.\n>\n> My reason for xmax() in the result is to break down the affected rows\n> count into an insert count, and a modified estimate. Not super critical,\n> but helpful. I've built out some simple custom logging table in out system\n> for this kind of detail, and folks have been wanting to break down rows\n> submitted, rows inserted, and rows updated a bit better. 
Rows submitted is\n> easy and rows inserted is too...update is an estimate as I'm not using\n> anything fancy with xmax() to sort out what exactly happened.\n>\n> For clarification, we're not using an ORM, and may need to support\n> straggling clients, so our push cycle works like this:\n>\n> * Create a view with the fields expected in the insert. I figured I'd use\n> CREATE VIEW instead of CREATE TYPE as then I can quickly check out the\n> details against live data, and I still get a custom compound type.\n>\n> * Write a function that accepts an array of view_name_type. I *love* Postgres'\n> typing system, It has spoiled me forever. Can't submit badly formatted\n> objects from the client, they're rejected automatically.\n>\n> * Write a client-side routine to package data as an array and push it into\n> the insert handling function. The function unnests the array, and then the\n> actual insert code draws from the unpacked values. If I need to extend the\n> table, I can add a new function that knows about the revised fields, and\n> revise (when necessary) earlier supported formats to map to new\n> types/columns/defaults.\n>\n> There are few CTEs in there, including one that does the main insert and\n> returns the xmax(). That lets me distinguish xmax = 0 (insert) from xmax <>\n> 0 (not an insert).\n>\n> > I do agree with you that it would be nice if this worked better than\n> > it does, but I don't really know exactly how to make that happen. The\n> > column list for a partitioned table must be fixed at the time it is\n> > created, but we do not know what partitions might be added in the\n> > future, and thus we don't know whether they will have an xmax column.\n> > I guess we could have tried to work things out so that a 0 value would\n> > be passed up from children that lack an xmax column, and that would\n> > allow the parent to have such a column, but I don't feel too bad that\n> > we didn't do that ... 
should I?\n>\n> You should never feel bad about anything ;-) You and others on that thread\n> contribute so much that I'm getting value out of.\n>\n> I had it in mind that it would be nice to have some kind of\n> catalog/abstraction that would make it possible to interrogate what system\n> columns are available on a table/partition based on access method. In my\n> vague notion, that might make some of the other ideas from that thread,\n> such as index-oriented stores with quite different physical layouts, easier\n> to implement. But, it's all free when you aren't the one who can write the\n> code.\n>\n> I've switched the partition-based tables back to returning * on the insert\n> CTE, and then aggregating that to add to a log table and the client result.\n> It's fine. A rich result summary would be very nice. As in rows\n> added/modified/deleted on whatever table(s). If anyone ever decides to\n> implement such a structure for MERGE, it would be nice to see it\n> retrofitted to the other data modification commands where RETURNING works.\n>\n> On Tue, Jul 19, 2022 at 6:13 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>> On Sun, Jul 17, 2022 at 9:04 PM Morris de Oryx <morrisdeoryx@gmail.com>\n>> wrote:\n>> > This fails on a partitioned table because xmax() may not exist. In\n>> fact, it does exist in all of those tables, but the system doesn't know how\n>> to guarantee that. I know which tables are partitioned, and can downgrade\n>> the result on partitioned tables to the count(*) I've been using to date.\n>> But now I'm wondering if working with xmax() like this is a poor idea going\n>> forward. I don't want to lean on a feature/behavior that's likely to\n>> change. For example, I noticed the other day that MERGE does not support\n>> RETURNING.\n>> >\n>> > I'd appreciate any insight or advice you can offer.\n>>\n>> What is motivating you to want to see the xmax value here? 
It's not an\n>> unreasonable thing to want to do, IMHO, but it's a little bit niche so\n>> I'm just curious what the motivation is.\n>>\n>> I do agree with you that it would be nice if this worked better than\n>> it does, but I don't really know exactly how to make that happen. The\n>> column list for a partitioned table must be fixed at the time it is\n>> created, but we do not know what partitions might be added in the\n>> future, and thus we don't know whether they will have an xmax column.\n>> I guess we could have tried to work things out so that a 0 value would\n>> be passed up from children that lack an xmax column, and that would\n>> allow the parent to have such a column, but I don't feel too bad that\n>> we didn't do that ... should I?\n>>\n>> --\n>> Robert Haas\n>> EDB: http://www.enterprisedb.com\n>>\n>",
"msg_date": "Tue, 19 Jul 2022 18:54:05 +1000",
"msg_from": "Morris de Oryx <morrisdeoryx@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: System column support for partitioned tables using heap"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 4:44 AM Morris de Oryx <morrisdeoryx@gmail.com> wrote:\n> My reason for xmax() in the result is to break down the affected rows count into an insert count, and a modified estimate. Not super critical, but helpful. I've built out some simple custom logging table in out system for this kind of detail, and folks have been wanting to break down rows submitted, rows inserted, and rows updated a bit better. Rows submitted is easy and rows inserted is too...update is an estimate as I'm not using anything fancy with xmax() to sort out what exactly happened.\n\nI wonder whether you could just have the CTEs bubble up 1 or 0 and\nthen sum them at some stage, instead of relying on xmax. Presumably\nyour UPSERT simulation knows which thing it did in each case.\n\nFor MERGE itself, I wonder if some information about this should be\nincluded in the command tag. It looks like MERGE already includes some\nsort of row count in the command tag, but I guess perhaps it doesn't\ndistinguish between inserts and updates. I don't know why we couldn't\nexpose multiple values this way, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 19 Jul 2022 08:38:08 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: System column support for partitioned tables using heap"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 10:38 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n\n> For MERGE itself, I wonder if some information about this should be\n> included in the command tag. It looks like MERGE already includes some\n> sort of row count in the command tag, but I guess perhaps it doesn't\n> distinguish between inserts and updates. I don't know why we couldn't\n> expose multiple values this way, though.\n\nIt would be great to get some sort of feedback from MERGE accessible\nthrough SQL results, even if that doesn't come in the form of a RETURNING\nlist.\n\n> I wonder whether you could just have the CTEs bubble up 1 or 0 and\n> then sum them at some stage, instead of relying on xmax. Presumably\n> your UPSERT simulation knows which thing it did in each case.\n\nIt might help if I show a sample insert handling function. The issue is\nwith the line at the end of the top CTE, insert_rows:\n\n returning xmax as inserted_transaction_id),\n\nThat's what fails on partitions. Is there an alternative way to test what\nhappened to the row(s)? here's the full function. . 
I wrote a code\ngenerator, so I don't have to hand-code all of these bits for each\ntable+version:\n\n-- Create a function to accept an array of rows formatted as item_type_v1\nfor UPSERT into item_type.\nDROP FUNCTION IF EXISTS types_plus.insert_item_type_v1\n(types_plus.item_type_v1[]);\n\nCREATE OR REPLACE FUNCTION types_plus.insert_item_type_v1 (data_in\ntypes_plus.item_type_v1[])\n\nRETURNS TABLE (\n insert_count integer,\n estimated_update_count integer,\n transaction_id text)\n\nLANGUAGE SQL\n\nBEGIN ATOMIC\n\n-- The CTE below is a roundabout way of returning an insertion count from a\npure SQL function in Postgres.\nWITH\ninserted_rows as (\n INSERT INTO item_type (\nid,\nmarked_for_deletion,\nname_)\n\n SELECT\nrows_in.id,\nrows_in.marked_for_deletion,\nrows_in.name_\n\n FROM unnest(data_in) as rows_in\n\n ON CONFLICT(id) DO UPDATE SET\nmarked_for_deletion = EXCLUDED.marked_for_deletion,\nname_ = EXCLUDED.name_\n\n returning xmax as inserted_transaction_id),\n\nstatus_data AS (\n select count(*) FILTER (where inserted_transaction_id = 0) AS\ninsert_count,\n count(*) FILTER (where inserted_transaction_id != 0) AS\nestimated_update_count,\n pg_current_xact_id_if_assigned()::text AS\ntransaction_id\n\n from inserted_rows),\n\ninsert_log_entry AS (\n INSERT INTO insert_log (\n data_file_id,\n ib_version,\n job_run_id,\n\n schema_name,\n table_name,\n records_submitted,\n insert_count,\n estimated_update_count)\n\nSELECT\n coalesce_session_variable(\n 'data_file_id',\n '00000000000000000000000000000000')::uuid,\n\n coalesce_session_variable('ib_version'), -- Default result is ''\n\n coalesce_session_variable(\n 'job_run_id',\n '00000000000000000000000000000000')::uuid,\n\n 'ascendco',\n 'item_type',\n (select cardinality(data_in)),\n insert_count,\n estimated_update_count\n\n FROM status_data\n)\n\n-- Final output/result.\n select insert_count,\n estimated_update_count,\n transaction_id\n\n from status_data;\n\nEND;",
"msg_date": "Wed, 20 Jul 2022 13:22:01 +1000",
"msg_from": "Morris de Oryx <morrisdeoryx@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: System column support for partitioned tables using heap"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 11:22 PM Morris de Oryx <morrisdeoryx@gmail.com> wrote:\n> It might help if I show a sample insert handling function. The issue is with the line at the end of the top CTE, insert_rows:\n>\n> returning xmax as inserted_transaction_id),\n>\n> That's what fails on partitions. Is there an alternative way to test what happened to the row(s)? here's the full function. . I wrote a code generator, so I don't have to hand-code all of these bits for each table+version:\n\nOh I see. I didn't realize you were using INSERT .. ON CONFLICT\nUPDATE, but that makes tons of sense, and I don't see an obvious\nalternative to the way you wrote this.\n\nHmm.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 21 Jul 2022 16:32:38 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: System column support for partitioned tables using heap"
}
] |
[
{
"msg_contents": "Default to hidden visibility for extension libraries where possible\n\nUntil now postgres built extension libraries with global visibility, i.e.\nexporting all symbols. On the one platform where that behavior is not\nnatively available, namely windows, we emulate it by analyzing the input files\nto the shared library and exporting all the symbols therein.\n\nNot exporting all symbols is actually desirable, as it can improve loading\nspeed, reduces the likelihood of symbol conflicts and can improve intra\nextension library function call performance. It also makes the non-windows\nbuilds more similar to windows builds.\n\nAdditionally, with meson implementing the export-all-symbols behavior for\nwindows, turns out to be more verbose than desirable.\n\nThis patch adds support for hiding symbols by default and, to counteract that,\nexplicit symbol visibility annotation for compilers that support\n__attribute__((visibility(\"default\"))) and -fvisibility=hidden. That is\nexpected to be most, if not all, compilers except msvc (for which we already\nsupport explicit symbol export annotations).\n\nNow that extension library symbols are explicitly exported, we don't need to\nexport all symbols on windows anymore, hence remove that behavior from\nsrc/tools/msvc. 
The supporting code can't be removed, as we still need to\nexport all symbols from the main postgres binary.\n\nAuthor: Andres Freund <andres@anarazel.de>\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDiscussion: https://postgr.es/m/20211101020311.av6hphdl6xbjbuif@alap3.anarazel.de\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/089480c077056fc20fa8d8f5a3032a9dcf5ed812\n\nModified Files\n--------------\nconfigure | 152 +++++++++++++++++++++++++++++++++++++++++++++\nconfigure.ac | 13 ++++\nsrc/Makefile.global.in | 3 +\nsrc/Makefile.shlib | 13 ++++\nsrc/include/c.h | 13 ++--\nsrc/include/pg_config.h.in | 3 +\nsrc/makefiles/pgxs.mk | 5 +-\nsrc/tools/msvc/Project.pm | 7 ---\nsrc/tools/msvc/Solution.pm | 1 +\n9 files changed, 198 insertions(+), 12 deletions(-)",
"msg_date": "Mon, 18 Jul 2022 01:05:56 +0000",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "pgsql: Default to hidden visibility for extension libraries where\n possi"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-18 01:05:56 +0000, Andres Freund wrote:\n> Default to hidden visibility for extension libraries where possible\n\nLooking at the odd failures, not sure what went wrong.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 17 Jul 2022 18:39:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Default to hidden visibility for extension libraries\n where possi"
},
{
"msg_contents": "On 2022-07-17 18:39:35 -0700, Andres Freund wrote:\n> On 2022-07-18 01:05:56 +0000, Andres Freund wrote:\n> > Default to hidden visibility for extension libraries where possible\n>\n> Looking at the odd failures, not sure what went wrong.\n\nI don't know how the configure exec bit got removed, shell history doesn't\nshow anything odd in that regard. I do see a mistake in locally merging the\nsymbol-visibility branch (leading to a crucial commit being missed). I do see\nin shell history that I ran check-world without a failure just before pushing,\nwhich should have shown the failure, but didn't.\n\nDefinitely a brown paper bag moment.\n\n\n",
"msg_date": "Sun, 17 Jul 2022 19:01:55 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Default to hidden visibility for extension libraries\n where possi"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I don't know how the configure exec bit got removed, shell history doesn't\n> show anything odd in that regard. I do see a mistake in locally merging the\n> symbol-visibility branch (leading to a crucial commit being missed). I do see\n> in shell history that I ran check-world without a failure just before pushing,\n> which should have shown the failure, but didn't.\n\ncheck-world would not have re-run configure even if it was out of date\n(only config.status), so that part of it's not so surprising.\n\n> Definitely a brown paper bag moment.\n\nWe've all been there :-)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 17 Jul 2022 22:34:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Default to hidden visibility for extension libraries where\n possi"
},
{
"msg_contents": "On Sun, Jul 17, 2022 at 07:01:55PM -0700, Andres Freund wrote:\n> On 2022-07-17 18:39:35 -0700, Andres Freund wrote:\n> > On 2022-07-18 01:05:56 +0000, Andres Freund wrote:\n> > > Default to hidden visibility for extension libraries where possible\n> >\n> > Looking at the odd failures, not sure what went wrong.\n> \n> I don't know how the configure exec bit got removed, shell history doesn't\n\nI think git reflog -p would help to answer that, but I suppose you already\nlooked.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 18 Jul 2022 18:36:47 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Default to hidden visibility for extension libraries\n where possi"
},
{
"msg_contents": "Hi\n\npo 18. 7. 2022 v 3:06 odesílatel Andres Freund <andres@anarazel.de> napsal:\n\n> Default to hidden visibility for extension libraries where possible\n>\n> Until now postgres built extension libraries with global visibility, i.e.\n> exporting all symbols. On the one platform where that behavior is not\n> natively available, namely windows, we emulate it by analyzing the input\n> files\n> to the shared library and exporting all the symbols therein.\n>\n> Not exporting all symbols is actually desirable, as it can improve loading\n> speed, reduces the likelihood of symbol conflicts and can improve intra\n> extension library function call performance. It also makes the non-windows\n> builds more similar to windows builds.\n>\n> Additionally, with meson implementing the export-all-symbols behavior for\n> windows, turns out to be more verbose than desirable.\n>\n> This patch adds support for hiding symbols by default and, to counteract\n> that,\n> explicit symbol visibility annotation for compilers that support\n> __attribute__((visibility(\"default\"))) and -fvisibility=hidden. That is\n> expected to be most, if not all, compilers except msvc (for which we\n> already\n> support explicit symbol export annotations).\n>\n> Now that extension library symbols are explicitly exported, we don't need\n> to\n> export all symbols on windows anymore, hence remove that behavior from\n> src/tools/msvc. The supporting code can't be removed, as we still need to\n> export all symbols from the main postgres binary.\n>\n> Author: Andres Freund <andres@anarazel.de>\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Discussion:\n> https://postgr.es/m/20211101020311.av6hphdl6xbjbuif@alap3.anarazel.de\n>\n>\nUnfortunately, this commit definitely breaks plpgsql_check. 
Can the\nfollowing routines be visible?\n\n AssertVariableIsOfType(&plpgsql_build_datatype,\nplpgsql_check__build_datatype_t);\n plpgsql_check__build_datatype_p = (plpgsql_check__build_datatype_t)\n LOAD_EXTERNAL_FUNCTION(\"$libdir/plpgsql\", \"plpgsql_build_datatype\");\n\n AssertVariableIsOfType(&plpgsql_compile, plpgsql_check__compile_t);\n plpgsql_check__compile_p = (plpgsql_check__compile_t)\n LOAD_EXTERNAL_FUNCTION(\"$libdir/plpgsql\", \"plpgsql_compile\");\n\n AssertVariableIsOfType(&plpgsql_parser_setup,\nplpgsql_check__parser_setup_t);\n plpgsql_check__parser_setup_p = (plpgsql_check__parser_setup_t)\n LOAD_EXTERNAL_FUNCTION(\"$libdir/plpgsql\", \"plpgsql_parser_setup\");\n\n AssertVariableIsOfType(&plpgsql_stmt_typename,\nplpgsql_check__stmt_typename_t);\n plpgsql_check__stmt_typename_p = (plpgsql_check__stmt_typename_t)\n LOAD_EXTERNAL_FUNCTION(\"$libdir/plpgsql\", \"plpgsql_stmt_typename\");\n\n AssertVariableIsOfType(&plpgsql_exec_get_datum_type,\nplpgsql_check__exec_get_datum_type_t);\n plpgsql_check__exec_get_datum_type_p =\n(plpgsql_check__exec_get_datum_type_t)\n LOAD_EXTERNAL_FUNCTION(\"$libdir/plpgsql\",\n\"plpgsql_exec_get_datum_type\");\n\n AssertVariableIsOfType(&plpgsql_recognize_err_condition,\nplpgsql_check__recognize_err_condition_t);\n plpgsql_check__recognize_err_condition_p =\n(plpgsql_check__recognize_err_condition_t)\n LOAD_EXTERNAL_FUNCTION(\"$libdir/plpgsql\",\n\"plpgsql_recognize_err_condition\");\n\n AssertVariableIsOfType(&plpgsql_ns_lookup, plpgsql_check__ns_lookup_t);\n plpgsql_check__ns_lookup_p = (plpgsql_check__ns_lookup_t)\n LOAD_EXTERNAL_FUNCTION(\"$libdir/plpgsql\", \"plpgsql_ns_lookup\");\n\nRegards\n\nPavel\n\n\n\n\n> Branch\n> ------\n> master\n>\n> Details\n> -------\n>\n> https://git.postgresql.org/pg/commitdiff/089480c077056fc20fa8d8f5a3032a9dcf5ed812\n>\n> Modified Files\n> --------------\n> configure | 152\n> +++++++++++++++++++++++++++++++++++++++++++++\n> configure.ac | 13 ++++\n> src/Makefile.global.in | 3 +\n> 
src/Makefile.shlib | 13 ++++\n> src/include/c.h | 13 ++--\n> src/include/pg_config.h.in | 3 +\n> src/makefiles/pgxs.mk | 5 +-\n> src/tools/msvc/Project.pm | 7 ---\n> src/tools/msvc/Solution.pm | 1 +\n> 9 files changed, 198 insertions(+), 12 deletions(-)\n>",
"msg_date": "Tue, 19 Jul 2022 05:26:58 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Default to hidden visibility for extension libraries where\n possi"
},
{
"msg_contents": "On 2022-Jul-19, Pavel Stehule wrote:\n\n> po 18. 7. 2022 v 3:06 odesílatel Andres Freund <andres@anarazel.de> napsal:\n> \n> > Default to hidden visibility for extension libraries where possible\n\n> Unfortunately, this commit definitely breaks plpgsql_check. Can the\n> following routines be visible?\n\nDo you just need to send a patch to add an exports.txt file to\nsrc/pl/plpgsql/src/ for these functions?\n\nplpgsql_build_datatype\nplpgsql_compile\nplpgsql_parser_setup\nplpgsql_stmt_typename\nplpgsql_exec_get_datum_type\nplpgsql_recognize_err_condition\nplpgsql_ns_lookup\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 19 Jul 2022 15:18:01 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Default to hidden visibility for extension libraries\n where possi"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Jul-19, Pavel Stehule wrote:\n>> Unfortunately, this commit definitely breaks plpgsql_check. Can the\n>> following routines be visible?\n\n> Do you just need to send a patch to add an exports.txt file to\n> src/pl/plpgsql/src/ for these functions?\n\nThe precedent of plpython says that PGDLLEXPORT markers are sufficient.\nBut yeah, we need a list of exactly which functions need to be\nre-exposed. I imagine pldebugger has its own needs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Jul 2022 09:45:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Default to hidden visibility for extension libraries where\n possi"
},
{
"msg_contents": "[ Redirecting thread to -hackers from -committers ]\n\nOn 2022-Jul-19, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n\n> > Do you just need to send a patch to add an exports.txt file to\n> > src/pl/plpgsql/src/ for these functions?\n> \n> The precedent of plpython says that PGDLLEXPORT markers are sufficient.\n> But yeah, we need a list of exactly which functions need to be\n> re-exposed. I imagine pldebugger has its own needs.\n\nA reasonable guess. I went as far as downloading pldebugger and\ncompiling it, but it doesn't have a test suite of its own, so I couldn't\nverify anything about it. I did notice that plpgsql_check is calling\nfunction load_external_function(), and that doesn't appear in\npldebugger. I wonder if the find_rendezvous_variable business is at\nplay.\n\nAnyway, the minimal patch that makes plpgsql_check tests pass is\nattached. This seems a bit random. Maybe it'd be better to have a\nplpgsql_internal.h with functions that are exported only for plpgsql\nitself, and keep plpgsql.h with a set of functions, all marked\nPGDLLEXPORT, that are for external use.\n\n\n... oh, and:\n\n$ postmaster -c shared_preload_libraries=plugin_debugger\n2022-07-19 16:27:24.006 CEST [742142] FATAL: cannot request additional shared memory outside shmem_request_hook\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/",
"msg_date": "Tue, 19 Jul 2022 16:28:07 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Default to hidden visibility for extension libraries\n where possi"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 04:28:07PM +0200, Alvaro Herrera wrote:\n> ... oh, and:\n>\n> $ postmaster -c shared_preload_libraries=plugin_debugger\n> 2022-07-19 16:27:24.006 CEST [742142] FATAL: cannot request additional shared memory outside shmem_request_hook\n\nThis has been reported multiple times (including on one of my own projects),\nsee\nhttps://www.postgresql.org/message-id/flat/81f82c00-8818-91f3-96fa-47976f94662b%40pm.me\nfor the last report.\n\n\n",
"msg_date": "Tue, 19 Jul 2022 22:45:52 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Default to hidden visibility for extension libraries\n where possi"
},
{
"msg_contents": "Hi\n\nút 19. 7. 2022 v 16:28 odesílatel Alvaro Herrera <alvherre@alvh.no-ip.org>\nnapsal:\n\n> [ Redirecting thread to -hackers from -committers ]\n>\n> On 2022-Jul-19, Tom Lane wrote:\n>\n> > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>\n> > > Do you just need to send a patch to add an exports.txt file to\n> > > src/pl/plpgsql/src/ for these functions?\n> >\n> > The precedent of plpython says that PGDLLEXPORT markers are sufficient.\n> > But yeah, we need a list of exactly which functions need to be\n> > re-exposed. I imagine pldebugger has its own needs.\n>\n> A reasonable guess. I went as far as downloading pldebugger and\n> compiling it, but it doesn't have a test suite of its own, so I couldn't\n> verify anything about it. I did notice that plpgsql_check is calling\n> function load_external_function(), and that doesn't appear in\n> pldebugger. I wonder if the find_rendezvous_variable business is at\n> play.\n>\n> Anyway, the minimal patch that makes plpgsql_check tests pass is\n> attached. This seems a bit random. Maybe it'd be better to have a\n> plpgsql_internal.h with functions that are exported only for plpgsql\n> itself, and keep plpgsql.h with a set of functions, all marked\n> PGDLLEXPORT, that are for external use.\n>\n>\nI can confirm that the attached patch fixes plpgsql_check.\n\nThank you\n\nPavel\n\n\n\n\n>\n> ... oh, and:\n>\n> $ postmaster -c shared_preload_libraries=plugin_debugger\n> 2022-07-19 16:27:24.006 CEST [742142] FATAL: cannot request additional\n> shared memory outside shmem_request_hook\n>\n> --\n> Álvaro Herrera Breisgau, Deutschland —\n> https://www.EnterpriseDB.com/\n>",
"msg_date": "Tue, 19 Jul 2022 16:47:43 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Default to hidden visibility for extension libraries where\n possi"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-19 16:28:07 +0200, Alvaro Herrera wrote:\n> Anyway, the minimal patch that makes plpgsql_check tests pass is\n> attached. This seems a bit random. Maybe it'd be better to have a\n> plpgsql_internal.h with functions that are exported only for plpgsql\n> itself, and keep plpgsql.h with a set of functions, all marked\n> PGDLLEXPORT, that are for external use.\n\nIt does seem a bit random. But I think we probably should err on the side of\nadding more declarations, rather than the oposite.\n\nI like the plpgsql_internal.h idea, but probably done separately?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 19 Jul 2022 08:31:49 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Default to hidden visibility for extension libraries\n where possi"
},
{
"msg_contents": "Hi\n\nút 19. 7. 2022 v 17:31 odesílatel Andres Freund <andres@anarazel.de> napsal:\n\n> Hi,\n>\n> On 2022-07-19 16:28:07 +0200, Alvaro Herrera wrote:\n> > Anyway, the minimal patch that makes plpgsql_check tests pass is\n> > attached. This seems a bit random. Maybe it'd be better to have a\n> > plpgsql_internal.h with functions that are exported only for plpgsql\n> > itself, and keep plpgsql.h with a set of functions, all marked\n> > PGDLLEXPORT, that are for external use.\n>\n> It does seem a bit random. But I think we probably should err on the side\n> of\n> adding more declarations, rather than the oposite.\n>\n\nThis list can be extended. I think plpgsql_check is maybe one extension\nthat uses code from another extension directly. This is really not common\nusage.\n\n\n>\n> I like the plpgsql_internal.h idea, but probably done separately?\n>\n\ncan be\n\nI have not any problem with it or with exports.txt file.\n\n\n\n> Greetings,\n>\n> Andres Freund\n>\n\nHiút 19. 7. 2022 v 17:31 odesílatel Andres Freund <andres@anarazel.de> napsal:Hi,\n\nOn 2022-07-19 16:28:07 +0200, Alvaro Herrera wrote:\n> Anyway, the minimal patch that makes plpgsql_check tests pass is\n> attached. This seems a bit random. Maybe it'd be better to have a\n> plpgsql_internal.h with functions that are exported only for plpgsql\n> itself, and keep plpgsql.h with a set of functions, all marked\n> PGDLLEXPORT, that are for external use.\n\nIt does seem a bit random. But I think we probably should err on the side of\nadding more declarations, rather than the oposite.This list can be extended. I think plpgsql_check is maybe one extension that uses code from another extension directly. This is really not common usage. \n\nI like the plpgsql_internal.h idea, but probably done separately?can beI have not any problem with it or with exports.txt file.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 19 Jul 2022 17:37:11 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Default to hidden visibility for extension libraries where\n possi"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-19 17:37:11 +0200, Pavel Stehule wrote:\n> �t 19. 7. 2022 v 17:31 odes�latel Andres Freund <andres@anarazel.de> napsal:\n> \n> > Hi,\n> >\n> > On 2022-07-19 16:28:07 +0200, Alvaro Herrera wrote:\n> > > Anyway, the minimal patch that makes plpgsql_check tests pass is\n> > > attached. This seems a bit random. Maybe it'd be better to have a\n> > > plpgsql_internal.h with functions that are exported only for plpgsql\n> > > itself, and keep plpgsql.h with a set of functions, all marked\n> > > PGDLLEXPORT, that are for external use.\n> >\n> > It does seem a bit random. But I think we probably should err on the side\n> > of\n> > adding more declarations, rather than the oposite.\n> >\n> \n> This list can be extended. I think plpgsql_check is maybe one extension\n> that uses code from another extension directly. This is really not common\n> usage.\n\nThere's a few more use cases, e.g. transform modules. Hence exposing e.g. many\nplpython helpers.\n\n\n> I have not any problem with it or with exports.txt file.\n\nJust to be clear, there shouldn't be any use exports.txt here, just a few\nPGDLLEXPORTs.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 19 Jul 2022 08:40:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Default to hidden visibility for extension libraries\n where possi"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-19 16:28:07 +0200, Alvaro Herrera wrote:\n> Anyway, the minimal patch that makes plpgsql_check tests pass is\n> attached.\n\nDo you want to commit that or should I?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 19 Jul 2022 15:39:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Default to hidden visibility for extension libraries\n where possi"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-19 08:31:49 -0700, Andres Freund wrote:\n> But I think we probably should err on the side of adding more\n> declarations, rather than the oposite.\n\nTo see if there's other declarations that should be added, I used\nhttps://codesearch.debian.net/search?q=load_external_function&literal=1&perpkg=1\n\nwhich shows plpgsql_check and hstore_pllua. All the hstore symbols for\nthe latter are exported already.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 19 Jul 2022 15:44:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Default to hidden visibility for extension libraries\n where possi"
},
{
"msg_contents": "Hello\n\nOn 2022-Jul-19, Andres Freund wrote:\n\n> On 2022-07-19 16:28:07 +0200, Alvaro Herrera wrote:\n> > Anyway, the minimal patch that makes plpgsql_check tests pass is\n> > attached.\n> \n> Do you want to commit that or should I?\n\nDone.\n\nNo immediate plans for splitting plpgsql.h, so if anyone wants to take a\nstab at that, be my guest.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Aprender sin pensar es inútil; pensar sin aprender, peligroso\" (Confucio)\n\n\n",
"msg_date": "Wed, 20 Jul 2022 11:02:47 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Default to hidden visibility for extension libraries\n where possi"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> No immediate plans for splitting plpgsql.h, so if anyone wants to take a\n> stab at that, be my guest.\n\nISTM that a comment pointing out that the functions marked PGDLLEXPORT\nare meant to be externally accessible should be sufficient.\n\nI'll try to do some research later today to identify anything else\nwe need to mark in plpgsql. I recall doing some work specifically\ncreating functions for pldebugger's use, but I'll need to dig.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Jul 2022 09:38:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Default to hidden visibility for extension libraries where\n possi"
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 9:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> ISTM that a comment pointing out that the functions marked PGDLLEXPORT\n> are meant to be externally accessible should be sufficient.\n\nThe name PGDLLEXPORT is actually slightly misleading, now, because\nthere's no longer anything about it that is specific to DLLs.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 20 Jul 2022 09:54:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Default to hidden visibility for extension libraries where\n possi"
},
{
"msg_contents": "On 2022-Jul-20, Tom Lane wrote:\n\n> I'll try to do some research later today to identify anything else\n> we need to mark in plpgsql. I recall doing some work specifically\n> creating functions for pldebugger's use, but I'll need to dig.\n\nI suppose you're probably thinking of commit 53ef6c40f1e7; that didn't\nexpose functions directly, but through plpgsql_plugin_ptr. Maybe that\none does need to be made PGDLLEXPORT, since currently it isn't.\n\nThat was also reported by Pavel. He was concerned about plpgsql_check,\nthough, not pldebugger.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nBob [Floyd] used to say that he was planning to get a Ph.D. by the \"green\nstamp method,\" namely by saving envelopes addressed to him as 'Dr. Floyd'.\nAfter collecting 500 such letters, he mused, a university somewhere in\nArizona would probably grant him a degree. (Don Knuth)\n\n\n",
"msg_date": "Wed, 20 Jul 2022 16:04:30 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Default to hidden visibility for extension libraries\n where possi"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Jul 20, 2022 at 9:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ISTM that a comment pointing out that the functions marked PGDLLEXPORT\n>> are meant to be externally accessible should be sufficient.\n\n> The name PGDLLEXPORT is actually slightly misleading, now, because\n> there's no longer anything about it that is specific to DLLs.\n\nTrue, but the mess of changing it seems to outweigh any likely clarity\ngain. As long as there's adequate commentary about what it means,\nI'm okay with the existing naming.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Jul 2022 10:09:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Default to hidden visibility for extension libraries where\n possi"
},
{
"msg_contents": "Hi,\n\nOn July 20, 2022 3:54:03 PM GMT+02:00, Robert Haas <robertmhaas@gmail.com> wrote:\n>On Wed, Jul 20, 2022 at 9:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ISTM that a comment pointing out that the functions marked PGDLLEXPORT\n>> are meant to be externally accessible should be sufficient.\n>\n>The name PGDLLEXPORT is actually slightly misleading, now, because\n>there's no longer anything about it that is specific to DLLs.\n\nHow so? Right now it's solely used for making symbols in DLLs as exported?\n\nAndres \n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Wed, 20 Jul 2022 16:12:43 +0200",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "=?US-ASCII?Q?Re=3A_pgsql=3A_Default_to_hidden_visibili?=\n =?US-ASCII?Q?ty_for_extension_libraries_where_possi?="
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On July 20, 2022 3:54:03 PM GMT+02:00, Robert Haas <robertmhaas@gmail.com> wrote:\n>> The name PGDLLEXPORT is actually slightly misleading, now, because\n>> there's no longer anything about it that is specific to DLLs.\n\n> How so? Right now it's solely used for making symbols in DLLs as exported?\n\nI suspect Robert is reading \"DLL\" as meaning only a Windows thing.\nYou're right, if you read it as a generic term for loadable libraries,\nit's more or less applicable everywhere.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Jul 2022 10:15:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: =?US-ASCII?Q?Re=3A_pgsql=3A_Default_to_hidden_visibili?=\n =?US-ASCII?Q?ty_for_extension_libraries_where_possi?="
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I suppose you're probably thinking of commit 53ef6c40f1e7; that didn't\n> expose functions directly, but through plpgsql_plugin_ptr. Maybe that\n> one does need to be made PGDLLEXPORT, since currently it isn't.\n\nHm. Not sure if the rules are the same for global variables as\nthey are for functions, but if so, yeah ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Jul 2022 10:20:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Default to hidden visibility for extension libraries where\n possi"
},
{
"msg_contents": "Hi, \n\nOn July 20, 2022 4:20:04 PM GMT+02:00, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> I suppose you're probably thinking of commit 53ef6c40f1e7; that didn't\n>> expose functions directly, but through plpgsql_plugin_ptr. Maybe that\n>> one does need to be made PGDLLEXPORT, since currently it isn't.\n>\n>Hm. Not sure if the rules are the same for global variables as\n>they are for functions, but if so, yeah ...\n\nThey're the same on the export side. On windows the rules for linking to variables are stricter (they need declspec dllimport), but that doesn't matter for dlsym style stuff.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Wed, 20 Jul 2022 17:05:00 +0200",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "=?US-ASCII?Q?Re=3A_pgsql=3A_Default_to_hidden_visibili?=\n =?US-ASCII?Q?ty_for_extension_libraries_where_possi?="
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Jul-20, Tom Lane wrote:\n>> I'll try to do some research later today to identify anything else\n>> we need to mark in plpgsql. I recall doing some work specifically\n>> creating functions for pldebugger's use, but I'll need to dig.\n\n> I suppose you're probably thinking of commit 53ef6c40f1e7; that didn't\n> expose functions directly, but through plpgsql_plugin_ptr. Maybe that\n> one does need to be made PGDLLEXPORT, since currently it isn't.\n\nAfter some experimentation, it does not need to be marked: pldebugger\ngets at that via find_rendezvous_variable(), so there is no need for\nany explicit linkage at all between plpgsql.so and plugin_debugger.so.\n\nAlong the way, I made a quick hack to get pldebugger to load into\nv15/HEAD. It lacks #ifdef's which'd be needed so that it'd still\ncompile against older branches, but perhaps this'll save someone\nsome time.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 20 Jul 2022 11:11:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Default to hidden visibility for extension libraries where\n possi"
},
{
"msg_contents": "On Wed, 20 Jul 2022 at 16:12, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2022-Jul-20, Tom Lane wrote:\n> >> I'll try to do some research later today to identify anything else\n> >> we need to mark in plpgsql. I recall doing some work specifically\n> >> creating functions for pldebugger's use, but I'll need to dig.\n>\n> > I suppose you're probably thinking of commit 53ef6c40f1e7; that didn't\n> > expose functions directly, but through plpgsql_plugin_ptr. Maybe that\n> > one does need to be made PGDLLEXPORT, since currently it isn't.\n>\n> After some experimentation, it does not need to be marked: pldebugger\n> gets at that via find_rendezvous_variable(), so there is no need for\n> any explicit linkage at all between plpgsql.so and plugin_debugger.so.\n>\n> Along the way, I made a quick hack to get pldebugger to load into\n> v15/HEAD. It lacks #ifdef's which'd be needed so that it'd still\n> compile against older branches, but perhaps this'll save someone\n> some time.\n>\n\nThanks Tom - I've pushed that patch with the relevant #ifdefs added.\n\n-- \nDave Page\nPostgreSQL Core Team\nhttp://www.postgresql.org/\n\nOn Wed, 20 Jul 2022 at 16:12, Tom Lane <tgl@sss.pgh.pa.us> wrote:Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Jul-20, Tom Lane wrote:\n>> I'll try to do some research later today to identify anything else\n>> we need to mark in plpgsql. I recall doing some work specifically\n>> creating functions for pldebugger's use, but I'll need to dig.\n\n> I suppose you're probably thinking of commit 53ef6c40f1e7; that didn't\n> expose functions directly, but through plpgsql_plugin_ptr. 
Maybe that\n> one does need to be made PGDLLEXPORT, since currently it isn't.\n\nAfter some experimentation, it does not need to be marked: pldebugger\ngets at that via find_rendezvous_variable(), so there is no need for\nany explicit linkage at all between plpgsql.so and plugin_debugger.so.\n\nAlong the way, I made a quick hack to get pldebugger to load into\nv15/HEAD. It lacks #ifdef's which'd be needed so that it'd still\ncompile against older branches, but perhaps this'll save someone\nsome time.Thanks Tom - I've pushed that patch with the relevant #ifdefs added. -- Dave PagePostgreSQL Core Teamhttp://www.postgresql.org/",
"msg_date": "Wed, 20 Jul 2022 16:58:24 +0100",
"msg_from": "Dave Page <dpage@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Default to hidden visibility for extension libraries where\n possi"
}
] |
[
{
"msg_contents": "Hi\n\nI am trying to fix one slow query, and found that optimization of min, max\nfunctions is possible only when there is no JOIN in the query.\n\nIs it true?\n\nI need to do manual transformation of query\n\nselect max(insert_date) from foo join boo on foo.boo_id = boo.id\nwhere foo.item_id = 100 and boo.is_ok\n\nto\n\nselect insert_date from foo join boo on foo.boo_id = boo.id\nwhere foo.item_id = 100 and boo.is_ok order by insert_date desc limit 1;\n\nRegards\n\nPavel\n\nHiI am trying to fix one slow query, and found that optimization of min, max functions is possible only when there is no JOIN in the query. Is it true?I need to do manual transformation of queryselect max(insert_date) from foo join boo on foo.boo_id = boo.id where foo.item_id = 100 and boo.is_oktoselect insert_date from foo join boo on foo.boo_id = boo.idwhere foo.item_id = 100 and boo.is_ok order by insert_date desc limit 1;RegardsPavel",
"msg_date": "Mon, 18 Jul 2022 15:24:02 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "limits of max, min optimization"
},
{
"msg_contents": "On 2022-Jul-18, Pavel Stehule wrote:\n\n> Hi\n> \n> I am trying to fix one slow query, and found that optimization of min, max\n> functions is possible only when there is no JOIN in the query.\n> \n> Is it true?\n\nSee preprocess_minmax_aggregates() in\nsrc/backend/optimizer/plan/planagg.c\n\n> select max(insert_date) from foo join boo on foo.boo_id = boo.id\n> where foo.item_id = 100 and boo.is_ok\n\nMaybe it is possible to hack that code so that this case can be handled\nbetter.\n\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 18 Jul 2022 16:24:03 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: limits of max, min optimization"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Jul-18, Pavel Stehule wrote:\n>> I am trying to fix one slow query, and found that optimization of min, max\n>> functions is possible only when there is no JOIN in the query.\n\n> See preprocess_minmax_aggregates() in\n> src/backend/optimizer/plan/planagg.c\n> Maybe it is possible to hack that code so that this case can be handled\n> better.\n\nThe comments show this was already thought about:\n\n * We also restrict the query to reference exactly one table, since join\n * conditions can't be handled reasonably. (We could perhaps handle a\n * query containing cartesian-product joins, but it hardly seems worth the\n * trouble.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Jul 2022 10:29:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: limits of max, min optimization"
},
{
"msg_contents": "po 18. 7. 2022 v 16:29 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2022-Jul-18, Pavel Stehule wrote:\n> >> I am trying to fix one slow query, and found that optimization of min,\n> max\n> >> functions is possible only when there is no JOIN in the query.\n>\n> > See preprocess_minmax_aggregates() in\n> > src/backend/optimizer/plan/planagg.c\n> > Maybe it is possible to hack that code so that this case can be handled\n> > better.\n>\n> The comments show this was already thought about:\n>\n> * We also restrict the query to reference exactly one table, since\n> join\n> * conditions can't be handled reasonably. (We could perhaps handle a\n> * query containing cartesian-product joins, but it hardly seems worth\n> the\n> * trouble.)\n>\n>\nThank you for reply\n\nRegards\n\nPavel\n\n\n> regards, tom lane\n>\n\npo 18. 7. 2022 v 16:29 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Jul-18, Pavel Stehule wrote:\n>> I am trying to fix one slow query, and found that optimization of min, max\n>> functions is possible only when there is no JOIN in the query.\n\n> See preprocess_minmax_aggregates() in\n> src/backend/optimizer/plan/planagg.c\n> Maybe it is possible to hack that code so that this case can be handled\n> better.\n\nThe comments show this was already thought about:\n\n * We also restrict the query to reference exactly one table, since join\n * conditions can't be handled reasonably. (We could perhaps handle a\n * query containing cartesian-product joins, but it hardly seems worth the\n * trouble.)\nThank you for replyRegardsPavel \n regards, tom lane",
"msg_date": "Mon, 18 Jul 2022 16:32:56 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: limits of max, min optimization"
}
] |
[
{
"msg_contents": "It's easy to use CREATE TABLE..LIKE + ALTER..ATTACH PARTITION to avoid\nacquiring a strong lock when creating a new partition.\nBut it's also easy to forget.\n\ncommit 76c0d1198cf2908423b321cd3340d296cb668c8e\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Mon Jul 18 09:24:55 2022 -0500\n\n doc: mention CREATE+ATTACH PARTITION as an alternative to CREATE TABLE..PARTITION OF\n \n See also: 898e5e3290a72d288923260143930fb32036c00c\n Should backpatch to v12\n\ndiff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml\nindex 6bbf15ed1a4..db7d8710bae 100644\n--- a/doc/src/sgml/ref/create_table.sgml\n+++ b/doc/src/sgml/ref/create_table.sgml\n@@ -619,6 +619,16 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n with <literal>DROP TABLE</literal> requires taking an <literal>ACCESS\n EXCLUSIVE</literal> lock on the parent table.\n </para>\n+\n+ <para>\n+ Note that creating a partition acquires an <literal>ACCESS\n+ EXCLUSIVE</literal> lock on the parent table.\n+ It may be preferable to first CREATE a separate table and then ATTACH it,\n+ which does not require as strong of a lock.\n+ See <link linkend=\"sql-altertable-attach-partition\">ATTACH PARTITION</link>\n+ and <xref linkend=\"ddl-partitioning\"/> for more information.\n+ </para>\n+\n </listitem>\n </varlistentry>\n \n\n\n",
"msg_date": "Mon, 18 Jul 2022 09:33:04 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "doc: mentioned CREATE+ATTACH PARTITION as an alternative to CREATE\n TABLE..PARTITION OF"
},
{
"msg_contents": "\nOn 2022-07-18 Mo 10:33, Justin Pryzby wrote:\n> It's easy to use CREATE TABLE..LIKE + ALTER..ATTACH PARTITION to avoid\n> acquiring a strong lock when creating a new partition.\n> But it's also easy to forget.\n>\n> commit 76c0d1198cf2908423b321cd3340d296cb668c8e\n> Author: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Mon Jul 18 09:24:55 2022 -0500\n>\n> doc: mention CREATE+ATTACH PARTITION as an alternative to CREATE TABLE..PARTITION OF\n> \n> See also: 898e5e3290a72d288923260143930fb32036c00c\n> Should backpatch to v12\n>\n> diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml\n> index 6bbf15ed1a4..db7d8710bae 100644\n> --- a/doc/src/sgml/ref/create_table.sgml\n> +++ b/doc/src/sgml/ref/create_table.sgml\n> @@ -619,6 +619,16 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n> with <literal>DROP TABLE</literal> requires taking an <literal>ACCESS\n> EXCLUSIVE</literal> lock on the parent table.\n> </para>\n> +\n> + <para>\n> + Note that creating a partition acquires an <literal>ACCESS\n> + EXCLUSIVE</literal> lock on the parent table.\n> + It may be preferable to first CREATE a separate table and then ATTACH it,\n> + which does not require as strong of a lock.\n> + See <link linkend=\"sql-altertable-attach-partition\">ATTACH PARTITION</link>\n> + and <xref linkend=\"ddl-partitioning\"/> for more information.\n> + </para>\n> +\n> </listitem>\n> </varlistentry>\n> \n\n\nStyle nitpick.\n\n\nI would prefer \"does not require as strong a lock.\"\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 18 Jul 2022 10:39:09 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: doc: mentioned CREATE+ATTACH PARTITION as an alternative to\n CREATE TABLE..PARTITION OF"
},
{
"msg_contents": "On Mon, Jul 18, 2022 at 10:39 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> On 2022-07-18 Mo 10:33, Justin Pryzby wrote:\n> > It's easy to use CREATE TABLE..LIKE + ALTER..ATTACH PARTITION to avoid\n> > acquiring a strong lock when creating a new partition.\n> > But it's also easy to forget.\n> >\n> > commit 76c0d1198cf2908423b321cd3340d296cb668c8e\n> > Author: Justin Pryzby <pryzbyj@telsasoft.com>\n> > Date: Mon Jul 18 09:24:55 2022 -0500\n> >\n> > doc: mention CREATE+ATTACH PARTITION as an alternative to CREATE TABLE..PARTITION OF\n> >\n> > See also: 898e5e3290a72d288923260143930fb32036c00c\n> > Should backpatch to v12\n> >\n> > diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml\n> > index 6bbf15ed1a4..db7d8710bae 100644\n> > --- a/doc/src/sgml/ref/create_table.sgml\n> > +++ b/doc/src/sgml/ref/create_table.sgml\n> > @@ -619,6 +619,16 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n> > with <literal>DROP TABLE</literal> requires taking an <literal>ACCESS\n> > EXCLUSIVE</literal> lock on the parent table.\n> > </para>\n> > +\n> > + <para>\n> > + Note that creating a partition acquires an <literal>ACCESS\n> > + EXCLUSIVE</literal> lock on the parent table.\n> > + It may be preferable to first CREATE a separate table and then ATTACH it,\n> > + which does not require as strong of a lock.\n> > + See <link linkend=\"sql-altertable-attach-partition\">ATTACH PARTITION</link>\n> > + and <xref linkend=\"ddl-partitioning\"/> for more information.\n> > + </para>\n> > +\n> > </listitem>\n> > </varlistentry>\n> >\n>\n> Style nitpick.\n>\n> I would prefer \"does not require as strong a lock.\"\n>\nFWIW, this is also proper grammar as well.\n\nAfter reading this again, it isn't clear to me that this advice would\nbe more appropriately placed into Section 5.11, aka\nhttps://www.postgresql.org/docs/current/ddl-partitioning.html, but in\nlieu of a specific suggestion for where to place it 
there (I haven't\nsettled on one yet), IMHO, I think the first sentence of the suggested\nchange should be rewritten as:\n\n<para>\nNote that creating a partition using <literal>PARTITION OF</literal>\nrequires taking an <literal>ACCESS EXCLUSIVE</literal> lock on the parent table.\nIt may be preferable to first CREATE a separate table...\n\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Thu, 4 Aug 2022 01:45:49 -0400",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: doc: mentioned CREATE+ATTACH PARTITION as an alternative to\n CREATE TABLE..PARTITION OF"
},
{
"msg_contents": "On Thu, Aug 04, 2022 at 01:45:49AM -0400, Robert Treat wrote:\n> After reading this again, it isn't clear to me that this advice would\n> be more appropriately placed into Section 5.11, aka\n> https://www.postgresql.org/docs/current/ddl-partitioning.html, but in\n> lieu of a specific suggestion for where to place it there (I haven't\n> settled on one yet), IMHO, I think the first sentence of the suggested\n> change should be rewritten as:\n> \n> <para>\n> Note that creating a partition using <literal>PARTITION OF<literal>\n> requires taking an <literal>ACCESS EXCLUSIVE</literal> lock on the parent table.\n> It may be preferable to first CREATE a separate table...\n\nThanks for looking. I used your language.\n\nThere is some relevant information in ddl.sgml, but not a lot, and it's\nnot easily referred to, so I removed the part of the patch that tried to\ncross-reference.\n\n@Robert: I wonder why shouldn't CREATE..PARTITION OF *also* be patched\nto first create a table, and then attach the partition, transparently\ndoing what everyone would want, without having to re-read the updated\ndocs or know to issue two commands? I wrote a patch for this which\n\"doesn't fail tests\", but I still wonder if I'm missing something..\n\ncommit 723fa7df82f39aed5d58e5e52ba80caa8cb13515\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Mon Jul 18 09:24:55 2022 -0500\n\n doc: mention CREATE+ATTACH PARTITION as an alternative to CREATE TABLE..PARTITION OF\n \n In v12, 898e5e329 (Allow ATTACH PARTITION with only ShareUpdateExclusiveLock)\n allows attaching a partition with a weaker lock than in CREATE..PARTITION OF,\n but it does that silently. On the one hand, things that are automatically\n better, without having to enable the option are the best kind of feature.\n \n OTOH, I doubt many people know to do that, because the docs don't say\n so, because it was implemented as an transparent improvement. 
This\n patch adds a bit of documentation to make that more visible.\n \n See also: 898e5e3290a72d288923260143930fb32036c00c\n Should backpatch to v12\n\ndiff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml\nindex 360284e37d6..66138b9299d 100644\n--- a/doc/src/sgml/ddl.sgml\n+++ b/doc/src/sgml/ddl.sgml\n@@ -4092,7 +4092,9 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02\n \n <para>\n The <command>ATTACH PARTITION</command> command requires taking a\n- <literal>SHARE UPDATE EXCLUSIVE</literal> lock on the partitioned table.\n+ <literal>SHARE UPDATE EXCLUSIVE</literal> lock on the partitioned table,\n+ as opposed to the <literal>ACCESS EXCLUSIVE</literal> lock which is\n+ required by <literal>CREATE TABLE .. PARTITION OF</literal>.\n </para>\n \n <para>\ndiff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml\nindex c14b2010d81..54dbfa72e4c 100644\n--- a/doc/src/sgml/ref/create_table.sgml\n+++ b/doc/src/sgml/ref/create_table.sgml\n@@ -619,6 +619,16 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n with <literal>DROP TABLE</literal> requires taking an <literal>ACCESS\n EXCLUSIVE</literal> lock on the parent table.\n </para>\n+\n+ <para>\n+ Note that creating a partition using <literal>PARTITION OF</literal>\n+ requires taking an <literal>ACCESS EXCLUSIVE</literal> lock on the parent\n+ table. It may be preferable to first create a separate table and then\n+ attach it, which does not require as strong a lock.\n+ See <link linkend=\"sql-altertable-attach-partition\">ATTACH PARTITION</link>\n+ for more information.\n+ </para>\n+\n </listitem>\n </varlistentry>\n \n\n\n",
"msg_date": "Mon, 5 Sep 2022 13:04:03 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: mentioned CREATE+ATTACH PARTITION as an alternative to\n CREATE TABLE..PARTITION OF"
},
{
"msg_contents": "On Mon, Sep 5, 2022 at 2:04 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Aug 04, 2022 at 01:45:49AM -0400, Robert Treat wrote:\n> > After reading this again, it isn't clear to me that this advice would\n> > be more appropriately placed into Section 5.11, aka\n> > https://www.postgresql.org/docs/current/ddl-partitioning.html, but in\n> > lieu of a specific suggestion for where to place it there (I haven't\n> > settled on one yet), IMHO, I think the first sentence of the suggested\n> > change should be rewritten as:\n> >\n> > <para>\n> > Note that creating a partition using <literal>PARTITION OF<literal>\n> > requires taking an <literal>ACCESS EXCLUSIVE</literal> lock on the parent table.\n> > It may be preferable to first CREATE a separate table...\n>\n> Thanks for looking. I used your language.\n>\n> There is some relevant information in ddl.sgml, but not a lot, and it's\n> not easily referred to, so I removed the part of the patch that tried to\n> cross-reference.\n>\n\nYes, I see now what you are referring to, and thinking maybe an option\nwould be to also add a reference there back to what will include your\nchange above.\n\ndiff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml\nindex 4b219435d4..c52092a45e 100644\n--- a/doc/src/sgml/ddl.sgml\n+++ b/doc/src/sgml/ddl.sgml\n@@ -4088,7 +4088,9 @@ CREATE TABLE measurement_y2008m02 PARTITION OF measurement\n As an alternative, it is sometimes more convenient to create the\n new table outside the partition structure, and make it a proper\n partition later. This allows new data to be loaded, checked, and\n- transformed prior to it appearing in the partitioned table.\n+ transformed prior to it appearing in the partitioned table; see\n+ <link linkend=\"sql-altertable-attach-partition\"><literal>ALTER\nTABLE ... ATTACH PARTITION</literal></link>\n+ for additional details.\n The <literal>CREATE TABLE ... LIKE</literal> option is helpful\n to avoid tediously repeating the parent table's definition:\n\n> @Robert: I wonder why shouldn't CREATE..PARTITION OF *also* be patched\n> to first create a table, and then attach the partition, transparently\n> doing what everyone would want, without having to re-read the updated\n> docs or know to issue two commands? I wrote a patch for this which\n> \"doesn't fail tests\", but I still wonder if I'm missing something..\n>\n\nI was thinking there might be either lock escalation issues or perhaps\nissues around index attachment that don't surface using create\npartition of, but I don't actually see any, in which case that does\nseem like a better change all around. But like you, I feel I must be\noverlooking something :-)\n\n> commit 723fa7df82f39aed5d58e5e52ba80caa8cb13515\n> Author: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Mon Jul 18 09:24:55 2022 -0500\n>\n> doc: mention CREATE+ATTACH PARTITION as an alternative to CREATE TABLE..PARTITION OF\n>\n> In v12, 898e5e329 (Allow ATTACH PARTITION with only ShareUpdateExclusiveLock)\n> allows attaching a partition with a weaker lock than in CREATE..PARTITION OF,\n> but it does that silently. On the one hand, things that are automatically\n> better, without having to enable the option are the best kind of feature.\n>\n> OTOH, I doubt many people know to do that, because the docs don't say\n> so, because it was implemented as an transparent improvement. This\n> patch adds a bit of documentations to make that more visible.\n>\n> See also: 898e5e3290a72d288923260143930fb32036c00c\n> Should backpatch to v12\n>\n> diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml\n> index 360284e37d6..66138b9299d 100644\n> --- a/doc/src/sgml/ddl.sgml\n> +++ b/doc/src/sgml/ddl.sgml\n> @@ -4092,7 +4092,9 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02\n>\n> <para>\n> The <command>ATTACH PARTITION</command> command requires taking a\n> - <literal>SHARE UPDATE EXCLUSIVE</literal> lock on the partitioned table.\n> + <literal>SHARE UPDATE EXCLUSIVE</literal> lock on the partitioned table,\n> + as opposed to the <literal>Access Exclusive</literal> lock which is\n> + required by <literal>CREATE TABLE .. PARTITION OF</literal>.\n> </para>\n>\n> <para>\n> diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml\n> index c14b2010d81..54dbfa72e4c 100644\n> --- a/doc/src/sgml/ref/create_table.sgml\n> +++ b/doc/src/sgml/ref/create_table.sgml\n> @@ -619,6 +619,16 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n> with <literal>DROP TABLE</literal> requires taking an <literal>ACCESS\n> EXCLUSIVE</literal> lock on the parent table.\n> </para>\n> +\n> + <para>\n> + Note that creating a partition using <literal>PARTITION OF<literal>\n> + requires taking an <literal>ACCESS EXCLUSIVE</literal> lock on the parent\n> + table. It may be preferable to first create a separate table and then\n> + attach it, which does not require as strong a lock.\n> + See <link linkend=\"sql-altertable-attach-partition\">ATTACH PARTITION</link>\n> + for more information.\n> + </para>\n> +\n> </listitem>\n> </varlistentry>\n>\n\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Wed, 11 Jan 2023 10:47:54 -0500",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: doc: mentioned CREATE+ATTACH PARTITION as an alternative to\n CREATE TABLE..PARTITION OF"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 10:48 AM Robert Treat <rob@xzilla.net> wrote:\n> > @Robert: I wonder why shouldn't CREATE..PARTITION OF *also* be patched\n> > to first create a table, and then attach the partition, transparently\n> > doing what everyone would want, without having to re-read the updated\n> > docs or know to issue two commands? I wrote a patch for this which\n> > \"doesn't fail tests\", but I still wonder if I'm missing something..\n> >\n>\n> I was thinking there might be either lock escalation issues or perhaps\n> issues around index attachment that don't surface using create\n> partition of, but I don't actually see any, in which case that does\n> seem like a better change all around. But like you, I feel I must be\n> overlooking something :-)\n\nTo be honest, I'm not sure whether either of you are missing anything\nor not. I think a major reason why I didn't implement this was that\nit's a different code path. DefineRelation() has code to do a bunch of\nthings that are also done by ATExecAttachPartition(), and I haven't\ngone through exhaustively and checked whether there are any relevant\ndifferences. I think that part of the reason that I did not research\nthat at the time is that the patch was incredibly complicated to get\nworking at all and I didn't want to take any risk of adding things to\nit that might create more problems. Now that it's been a few years, we\nmight feel more confident.\n\nAnother thing that probably deserves at least a bit of thought is the\nfact that ATTACH PARTITION just attaches a partition, whereas CREATE\nTABLE does a lot more things. Are any of those things potential\nhazards? Like what if the newly-created table references the parent\nvia a foreign key, or uses the parent's row type as a column type or\nas part of a column default expression or in a CHECK constraint or\nsomething? Basically, try to think of weird scenarios where the new\ntable would interact with the parent in some weird way where the\nweaker lock would be a problem. Maybe there's nothing to see here: not\nsure.\n\nAlso, we need to separately analyze the cases where (1) the new\npartition is the default partition, (2) the new partition is not the\ndefault partition but a default partition exists, and (3) the new\npartition is not the default partition and no default partition\nexists.\n\nSorry not to have more definite thoughts here. I know that when I\ndeveloped the original patch, I thought about this case and decided my\nbrain was full. However, I do not recall whether I knew about any\nspecific problems that needed to be fixed, or just feared that there\nmight be some.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 11 Jan 2023 16:13:01 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: mentioned CREATE+ATTACH PARTITION as an alternative to\n CREATE TABLE..PARTITION OF"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 4:13 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Jan 11, 2023 at 10:48 AM Robert Treat <rob@xzilla.net> wrote:\n> > > @Robert: I wonder why shouldn't CREATE..PARTITION OF *also* be patched\n> > > to first create a table, and then attach the partition, transparently\n> > > doing what everyone would want, without having to re-read the updated\n> > > docs or know to issue two commands? I wrote a patch for this which\n> > > \"doesn't fail tests\", but I still wonder if I'm missing something..\n> > >\n> >\n> > I was thinking there might be either lock escalation issues or perhaps\n> > issues around index attachment that don't surface using create\n> > partition of, but I don't actually see any, in which case that does\n> > seem like a better change all around. But like you, I feel I must be\n> > overlooking something :-)\n>\n> To be honest, I'm not sure whether either of you are missing anything\n> or not. I think a major reason why I didn't implement this was that\n> it's a different code path. DefineRelation() has code to do a bunch of\n> things that are also done by ATExecAttachPartition(), and I haven't\n> gone through exhaustively and checked whether there are any relevant\n> differences. I think that part of the reason that I did not research\n> that at the time is that the patch was incredibly complicated to get\n> working at all and I didn't want to take any risk of adding things to\n> it that might create more problems. Now that it's been a few years, we\n> might feel more confident.\n>\n> Another thing that probably deserves at least a bit of thought is the\n> fact that ATTACH PARTITION just attaches a partition, whereas CREATE\n> TABLE does a lot more things. Are any of those things potential\n> hazards? Like what if the newly-created table references the parent\n> via a foreign key, or uses the parent's row type as a column type or\n> as part of a column default expression or in a CHECK constraint or\n> something? Basically, try to think of weird scenarios where the new\n> table would interact with the parent in some weird way where the\n> weaker lock would be a problem. Maybe there's nothing to see here: not\n> sure.\n>\n> Also, we need to separately analyze the cases where (1) the new\n> partition is the default partition, (2) the new partition is not the\n> default partition but a default partition exists, and (3) the new\n> partition is not the default partition and no default partition\n> exists.\n>\n> Sorry not to have more definite thoughts here. I know that when I\n> developed the original patch, I thought about this case and decided my\n> brain was full. However, I do not recall whether I knew about any\n> specific problems that needed to be fixed, or just feared that there\n> might be some.\n>\n\nI think all of that feedback is useful, I guess the immediate question\nbecomes if Justin wants to try to proceed with his patch implementing\nthe change, or if adjusting the documentation for the current\nimplementation is the right move for now.\n\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Thu, 19 Jan 2023 16:47:59 -0500",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: doc: mentioned CREATE+ATTACH PARTITION as an alternative to\n CREATE TABLE..PARTITION OF"
},
{
"msg_contents": "On Thu, Jan 19, 2023 at 04:47:59PM -0500, Robert Treat wrote:\n> I think all of that feedback is useful, I guess the immediate question\n> becomes if Justin wants to try to proceed with his patch implementing\n> the change, or if adjusting the documentation for the current\n> implementation is the right move for now.\n\nThe docs change is desirable in any case, since it should be\nbackpatched, and any patch to change CREATE..PARTITION OF would be for\nv17+ anyway.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 19 Jan 2023 15:58:32 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: mentioned CREATE+ATTACH PARTITION as an alternative to\n CREATE TABLE..PARTITION OF"
},
{
"msg_contents": "Hi, I've tested the attached patch by Justin and it applied almost\ncleanly to the master, but there was a tiny typo and make\npostgres-A4.pdf didn't want to run:\nNote that creating a partition using <literal>PARTITION OF<literal>\n=> (note lack of closing literal) =>\nNote that creating a partition using <literal>PARTITION OF</literal>\n\nAttached is version v0002 that contains this fix. @Justin maybe you\ncould set the status to Ready for Comitter (\nhttps://commitfest.postgresql.org/42/3790/ ) ?\n\nOn Thu, Jan 19, 2023 at 10:58 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Jan 19, 2023 at 04:47:59PM -0500, Robert Treat wrote:\n> > I think all of that feedback is useful, I guess the immediate question\n> > becomes if Justin wants to try to proceed with his patch implementing\n> > the change, or if adjusting the documentation for the current\n> > implementation is the right move for now.\n>\n> The docs change is desirable in any case, since it should be\n> backpatched, and any patch to change CREATE..PARTITION OF would be for\n> v17+ anyway.\n>\n> --\n> Justin\n>\n>",
"msg_date": "Tue, 14 Mar 2023 15:47:01 +0100",
"msg_from": "Jakub Wartak <jakub.wartak@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: mentioned CREATE+ATTACH PARTITION as an alternative to\n CREATE TABLE..PARTITION OF"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Thu, Jan 19, 2023 at 04:47:59PM -0500, Robert Treat wrote:\n>> I think all of that feedback is useful, I guess the immediate question\n>> becomes if Justin wants to try to proceed with his patch implementing\n>> the change, or if adjusting the documentation for the current\n>> implementation is the right move for now.\n\n> The docs change is desirable in any case, since it should be\n> backpatched, and any patch to change CREATE..PARTITION OF would be for\n> v17+ anyway.\n\nRight. Pushed with a little further effort to align it better with\nsurrounding text.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Mar 2023 16:52:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: mentioned CREATE+ATTACH PARTITION as an alternative to\n CREATE TABLE..PARTITION OF"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 04:52:07PM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Thu, Jan 19, 2023 at 04:47:59PM -0500, Robert Treat wrote:\n> >> I think all of that feedback is useful, I guess the immediate question\n> >> becomes if Justin wants to try to proceed with his patch implementing\n> >> the change, or if adjusting the documentation for the current\n> >> implementation is the right move for now.\n> \n> > The docs change is desirable in any case, since it should be\n> > backpatched, and any patch to change CREATE..PARTITION OF would be for\n> > v17+ anyway.\n> \n> Right. Pushed with a little further effort to align it better with\n> surrounding text.\n\nThanks.\n\n It is possible to use <link linkend=\"sql-altertable\"><command>ALTER\n TABLE ATTACH/DETACH PARTITION</command></link> to perform these\n operations with a weaker lock, thus reducing interference with\n concurrent operations on the partitioned table.\n\nNote that in order for DETACH+DROP to use a lower lock level, it has to be\nDETACH CONCURRENTLY. ATTACH is implicitly uses a lower lock level, but for\nDETACH it's only on request.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 16 Mar 2023 16:11:59 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: mentioned CREATE+ATTACH PARTITION as an alternative to\n CREATE TABLE..PARTITION OF"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Thu, Mar 16, 2023 at 04:52:07PM -0400, Tom Lane wrote:\n> It is possible to use <link linkend=\"sql-altertable\"><command>ALTER\n> TABLE ATTACH/DETACH PARTITION</command></link> to perform these\n> operations with a weaker lock, thus reducing interference with\n> concurrent operations on the partitioned table.\n\n> Note that in order for DETACH+DROP to use a lower lock level, it has to be\n> DETACH CONCURRENTLY. ATTACH is implicitly uses a lower lock level, but for\n> DETACH it's only on request.\n\nRight, but that's the sort of detail you should read on that command's man\npage, we don't need to duplicate it in N other places.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Mar 2023 17:27:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: mentioned CREATE+ATTACH PARTITION as an alternative to\n CREATE TABLE..PARTITION OF"
}
] |
[
{
"msg_contents": "I got annoyed just now upon finding that pprint() applied to the planner's\n\"root\" pointer doesn't dump root->agginfos or root->aggtransinfos. That's\nevidently because AggInfo and AggTransInfo aren't proper Nodes, just bare\nstructs, which presumably is because somebody couldn't be bothered to\nwrite outfuncs support for them. I'd say that was a questionable shortcut\neven when it was made, and there's certainly precious little excuse now\nthat gen_node_support.pl can do all the heavy lifting. Hence, PFA a\nlittle finger exercise to turn them into Nodes. I took the opportunity\nto improve related comments too, and in particular to fix some comments\nthat leave the impression that preprocess_minmax_aggregates still does\nits own scan of the query tree. (It was momentary confusion over that\nidea that got me to the point of being annoyed in the first place.)\n\nAny objections so far?\n\nI'm kind of tempted to mount an effort to get rid of as many of\npathnodes.h's \"read_write_ignore\" annotations as possible. Some are\nnecessary to prevent infinite recursion, and others represent considered\njudgments that they'd bloat node dumps more than they're worth --- but\nI think quite a lot of them arose from plain laziness about updating\noutfuncs.c. With the infrastructure we have now, that's no longer\na good reason.\n\nIn particular, I'm tempted to make a dump of PlannerInfo include\nall the baserel RelOptInfos (not joins though; there could be a\nmighty lot of those.) I think we didn't print the simple_rel_array[]\narray before mostly because outfuncs didn't use to have reasonable\nsupport for printing arrays.\n\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 18 Jul 2022 12:08:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Convert planner's AggInfo and AggTransInfo to Nodes"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> I got annoyed just now upon finding that pprint() applied to the planner's\n> \"root\" pointer doesn't dump root->agginfos or root->aggtransinfos. That's\n> evidently because AggInfo and AggTransInfo aren't proper Nodes, just bare\n> structs, which presumably is because somebody couldn't be bothered to\n> write outfuncs support for them. I'd say that was a questionable shortcut\n> even when it was made, and there's certainly precious little excuse now\n> that gen_node_support.pl can do all the heavy lifting. Hence, PFA a\n> little finger exercise to turn them into Nodes. I took the opportunity\n> to improve related comments too, and in particular to fix some comments\n> that leave the impression that preprocess_minmax_aggregates still does\n> its own scan of the query tree. (It was momentary confusion over that\n> idea that got me to the point of being annoyed in the first place.)\n>\n> Any objections so far?\n\nIt seems like a reasonable idea, but I don't know enough to judge the\nwider ramifications of it. But one thing that the patch should also do,\nis switch to using the l*_node() functions instead of manual casting.\n\nThe ones I noticed in the patch/context are below, but there are a few\nmore:\n\n> \tforeach(lc, root->agginfos)\n> \t{\n> \t\tAggInfo *agginfo = (AggInfo *) lfirst(lc);\n\n\t\tAggInfo *agginfo = lfirst_node(AggInfo, lc);\n\n[…]\n> \tforeach(lc, transnos)\n> \t{\n> \t\tint\t\t\ttransno = lfirst_int(lc);\n> -\t\tAggTransInfo *pertrans = (AggTransInfo *) list_nth(root->aggtransinfos, transno);\n> +\t\tAggTransInfo *pertrans = (AggTransInfo *) list_nth(root->aggtransinfos,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t transno);\n> +\t\tAggTransInfo *pertrans = (AggTransInfo *) list_nth(root->aggtransinfos,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t transno);\n\n\t\tAggTransInfo *pertrans = list_nth_node(AggTransInfo, root->aggtransinfos,\n\t\t\t\t\t\t\t\t\t\t\t transno);\n\n- ilmari\n\n\n",
"msg_date": "Mon, 18 Jul 2022 18:42:47 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: Convert planner's AggInfo and AggTransInfo to Nodes"
},
{
"msg_contents": "On 18.07.22 18:08, Tom Lane wrote:\n> I'm kind of tempted to mount an effort to get rid of as many of\n> pathnodes.h's \"read_write_ignore\" annotations as possible. Some are\n> necessary to prevent infinite recursion, and others represent considered\n> judgments that they'd bloat node dumps more than they're worth --- but\n> I think quite a lot of them arose from plain laziness about updating\n> outfuncs.c. With the infrastructure we have now, that's no longer\n> a good reason.\n\nThat was my impression as well, and I agree it would be good to sort \nthat out.\n\n\n",
"msg_date": "Tue, 19 Jul 2022 17:49:34 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Convert planner's AggInfo and AggTransInfo to Nodes"
},
{
"msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> It seems like a reasonable idea, but I don't know enough to judge the\n> wider ramifications of it. But one thing that the patch should also do,\n> is switch to using the l*_node() functions instead of manual casting.\n\nHm, I didn't bother with that on the grounds that there's no question\nwhat should be in those two lists. But I guess it's not much extra\ncode, so pushed that way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Jul 2022 12:31:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Convert planner's AggInfo and AggTransInfo to Nodes"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 18.07.22 18:08, Tom Lane wrote:\n>> I'm kind of tempted to mount an effort to get rid of as many of\n>> pathnodes.h's \"read_write_ignore\" annotations as possible. Some are\n>> necessary to prevent infinite recursion, and others represent considered\n>> judgments that they'd bloat node dumps more than they're worth --- but\n>> I think quite a lot of them arose from plain laziness about updating\n>> outfuncs.c. With the infrastructure we have now, that's no longer\n>> a good reason.\n\n> That was my impression as well, and I agree it would be good to sort \n> that out.\n\nI had a go at doing this, and ended up with something that seems\nreasonable for now (attached). The thing that'd have to be done to\nmake additional progress is to convert a lot of partitioning-related\nstructs into full Nodes. That seems like it might possibly be\nworth doing, but I don't feel like doing it. I doubt that making\nplanner node dumps smarter is a sufficient excuse for that anyway.\n(But possibly if we then larded related code with castNode() and\nsibling macros, there'd be enough of a gain in type-safety to\njustify it?)\n\nI learned a couple of interesting things along the way:\n\n* I'd thought we already had outfuncs support for writing an array\nof node pointers. We don't, but it's easily added. I chose to\nwrite the array with parenthesis decoration, mainly because that\neases moving around it in emacs.\n\n* WRITE_OID_ARRAY and WRITE_BOOL_ARRAY needed extension to handle a null\narray pointer. I think we should make all the WRITE_FOO_ARRAY macros\nwork alike, so I added that to all of them. I first tried to make the\nrest work like WRITE_INDEX_ARRAY, but that failed because readfuncs.c\nisn't expecting \"<>\" for an empty array; it's expecting nothing at\nall. (Note there is no readfuncs equivalent to WRITE_INDEX_ARRAY.)\nWhat I've done here is to change WRITE_INDEX_ARRAY to work like the\nothers and print nothing for an empty array, but I wonder if now\nwouldn't be a good time to redefine the serialized representation\nto be more robust. I'm imagining \"<>\" for a NULL array pointer and\n\"(item item item)\" otherwise, allowing a cross-check that we're\ngetting the right number of items.\n\n* gen_node_support.pl was being insufficiently careful about parsing\ntype names, so I tightened its regexes a bit.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 19 Jul 2022 15:23:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Convert planner's AggInfo and AggTransInfo to Nodes"
},
{
"msg_contents": "I wrote:\n> * WRITE_OID_ARRAY and WRITE_BOOL_ARRAY needed extension to handle a null\n> array pointer. I think we should make all the WRITE_FOO_ARRAY macros\n> work alike, so I added that to all of them. I first tried to make the\n> rest work like WRITE_INDEX_ARRAY, but that failed because readfuncs.c\n> isn't expecting \"<>\" for an empty array; it's expecting nothing at\n> all. (Note there is no readfuncs equivalent to WRITE_INDEX_ARRAY.)\n> What I've done here is to change WRITE_INDEX_ARRAY to work like the\n> others and print nothing for an empty array, but I wonder if now\n> wouldn't be a good time to redefine the serialized representation\n> to be more robust. I'm imagining \"<>\" for a NULL array pointer and\n> \"(item item item)\" otherwise, allowing a cross-check that we're\n> getting the right number of items.\n\nConcretely, about like this.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 19 Jul 2022 19:08:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Convert planner's AggInfo and AggTransInfo to Nodes"
}
] |
[
{
"msg_contents": "Move snowball_create.sql creation into perl file\n\nThis is in preparation for building postgres with meson / ninja.\n\nWe already have duplicated code for this between the make and msvc\nbuilds. Adding a third copy seems like a bad plan, thus move the generation\ninto a perl script.\n\nAs we don't want to rely on perl being available for builds from tarballs,\ngenerate the file during distprep.\n\nAuthor: Peter Eisentraut <peter@eisentraut.org>\nAuthor: Andres Freund <andres@anarazel.de>\nDiscussion: https://postgr.es/m/5e216522-ba3c-f0e6-7f97-5276d0270029@enterprisedb.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/b3a0d8324cf1f02c04a7099a436cfd68cfbf4566\n\nModified Files\n--------------\nsrc/backend/snowball/Makefile | 106 ++++++-----------------\nsrc/backend/snowball/snowball_create.pl | 148 ++++++++++++++++++++++++++++++++\nsrc/tools/msvc/Install.pm | 36 +-------\n3 files changed, 179 insertions(+), 111 deletions(-)",
"msg_date": "Mon, 18 Jul 2022 19:56:18 +0000",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "pgsql: Move snowball_create.sql creation into perl file"
},
{
"msg_contents": "Re: Andres Freund\n> Move snowball_create.sql creation into perl file\n> \n> This is in preparation for building postgres with meson / ninja.\n> \n> We already have duplicated code for this between the make and msvc\n> builds. Adding a third copy seems like a bad plan, thus move the generation\n> into a perl script.\n> \n> As we don't want to rely on perl being available for builds from tarballs,\n> generate the file during distprep.\n> \n> Author: Peter Eisentraut <peter@eisentraut.org>\n> Author: Andres Freund <andres@anarazel.de>\n> Discussion: https://postgr.es/m/5e216522-ba3c-f0e6-7f97-5276d0270029@enterprisedb.com\n\nHi,\n\nthis seems to have broken out-of-tree builds from tarballs:\n\nmake -C backend/snowball install\nmake[3]: Entering directory '/srv/projects/postgresql/debian/16/build/src/backend/snowball'\n/bin/mkdir -p '/srv/projects/postgresql/debian/16/build/tmp_install/usr/lib/postgresql/16/lib'\n/bin/mkdir -p '/srv/projects/postgresql/debian/16/build/tmp_install/usr/share/postgresql/16' '/srv/projects/postgresql/debian/16/build/tmp_install/usr/share/postgresql/16/tsearch_data'\n/usr/bin/install -c -m 755 dict_snowball.so '/srv/projects/postgresql/debian/16/build/tmp_install/usr/lib/postgresql/16/lib/dict_snowball.so'\n/usr/bin/install -c -m 644 snowball_create.sql '/srv/projects/postgresql/debian/16/build/tmp_install/usr/share/postgresql/16'\n/usr/bin/install: cannot stat 'snowball_create.sql': No such file or directory\nmake[3]: *** [Makefile:110: install] Error 1\n\nThe file is present in src/backend/snowball/ but not in build/src/backend/snowball/:\n\n-rw-r--r-- 1 myon myon 44176 22. Mai 21:20 src/backend/snowball/snowball_create.sql\n\nChristoph\n\n\n",
"msg_date": "Tue, 23 May 2023 15:47:46 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Move snowball_create.sql creation into perl file"
},
{
"msg_contents": "Re: To Andres Freund\n> this seems to have broken out-of-tree builds from tarballs:\n> \n> /usr/bin/install -c -m 644 snowball_create.sql '/srv/projects/postgresql/debian/16/build/tmp_install/usr/share/postgresql/16'\n> /usr/bin/install: cannot stat 'snowball_create.sql': No such file or directory\n\nFortunately, there is an easy workaround, just delete\nsrc/backend/snowball/snowball_create.sql before building, it will then\nbe recreated in the proper build directory.\n\nChristoph\n\n\n",
"msg_date": "Tue, 23 May 2023 16:06:26 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Move snowball_create.sql creation into perl file"
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n>> this seems to have broken out-of-tree builds from tarballs:\n>> \n>> /usr/bin/install -c -m 644 snowball_create.sql '/srv/projects/postgresql/debian/16/build/tmp_install/usr/share/postgresql/16'\n>> /usr/bin/install: cannot stat 'snowball_create.sql': No such file or directory\n\nI think the attached will do for a proper fix. I'm not inclined\nto re-wrap just for this.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 23 May 2023 10:46:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Move snowball_create.sql creation into perl file"
},
{
"msg_contents": "Re: Tom Lane\n> I think the attached will do for a proper fix. I'm not inclined\n> to re-wrap just for this.\n\nSure, I just posted it here in case others run into the same problem.\n\nThanks!\n\nChristoph\n\n\n",
"msg_date": "Tue, 23 May 2023 16:50:25 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Move snowball_create.sql creation into perl file"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-23 10:46:30 -0400, Tom Lane wrote:\n> Christoph Berg <myon@debian.org> writes:\n> >> this seems to have broken out-of-tree builds from tarballs:\n> >> \n> >> /usr/bin/install -c -m 644 snowball_create.sql '/srv/projects/postgresql/debian/16/build/tmp_install/usr/share/postgresql/16'\n> >> /usr/bin/install: cannot stat 'snowball_create.sql': No such file or directory\n> \n> I think the attached will do for a proper fix.\n\nThanks.\n\n\n> I'm not inclined to re-wrap just for this.\n\nAgreed.\n\n\nI wonder if we should add a CI task to test creating a tarball and building\nfrom it, both inside the source directory and as a vpath build? We rebuild for\nboth gcc and clang, each with assertions and without, to check if there are\nwarnings. We could probably just switch to building from the tarball for some\nof those.\n\nI guess I need to go and check how long the \"release\" tarball generation\ntakes...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 May 2023 11:41:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Move snowball_create.sql creation into perl file"
},
{
"msg_contents": "[ dropping -packagers ]\n\nAndres Freund <andres@anarazel.de> writes:\n> I guess I need to go and check how long the \"release\" tarball generation\n> takes...\n\nIt's quick except for the documentation-generating steps. Maybe\nwe could test that part only once?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 May 2023 14:51:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Move snowball_create.sql creation into perl file"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-23 14:51:03 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I guess I need to go and check how long the \"release\" tarball generation\n> > takes...\n> \n> It's quick except for the documentation-generating steps. Maybe\n> we could test that part only once?\n\nFirst thing I noticed that 'make dist' doesn't work in a vpath, failing in a\nsomewhat obscure way (likely because in a vpath build the copy from the\nsource dir doesn't include GNUMakefile). Do we expect it to work?\n\n\nBesides docs, the slowest part appears to be gzip --best and then bzip2, as\nthose run serially and take 11 and 13 seconds respectively here...\n\nThe first thing I tried was:\n make -j8 dist GZIP=pigz BZIP2=pbzip2\n\nunfortunately that results in\n\npigz: abort: cannot provide files in GZIP environment variable\n\necho GZIP=pigz >> src/Makefile.custom\necho BZIP2=pbzip2 >> src/Makefile.custom\n\nreduces that to\n\nreal\t1m6.472s\nuser\t1m28.316s\nsys\t0m5.340s\n\nreal\t0m54.811s\nuser\t1m42.078s\nsys\t0m6.183s\n\nstill not great...\n\n\nOTOH, we currently already build the docs as part of the CompilerWarnings\ntest. I don't think there's a reason to test that twice?\n\n\nFor me make distcheck currently fails:\n\nIn file included from ../../src/include/postgres.h:46,\n from hashfn.c:24:\n../../src/include/utils/elog.h:79:10: fatal error: utils/errcodes.h: No such file or directory\n 79 | #include \"utils/errcodes.h\"\n | ^~~~~~~~~~~~~~~~~~\ncompilation terminated.\nmake[3]: *** [<builtin>: hashfn.o] Error 1\n\nat first I thought it was due to my use of -j8 - but it doesn't even work\nwithout that.\n\n\nThat's due to MAKELEVEL:\n\nsubmake-generated-headers:\nifndef NO_GENERATED_HEADERS\nifeq ($(MAKELEVEL),0)\n\t$(MAKE) -C $(top_builddir)/src/backend generated-headers\nendif\nendif\n\nSo the distcheck target needs to reset MAKELEVEL=0 - unless somebody has a\nbetter idea?\n\n\nSeparately, it's somewhat confusing that we include errcodes.h etc in\nsrc/backend/utils, rather than its final location, in src/include/utils. It\nworks, even without perl, because copying the file doesn't require perl, it's\njust generating it...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 May 2023 14:24:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "testing dist tarballs"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> First thing I noticed that 'make dist' doesn't work in a vpath, failing in a\n> somewhat obscure way (likely because in a vpath build the copy from the\n> source dir doesn't include GNUMakefile). Do we expect it to work?\n\nDon't see how it could possibly be useful in a vpath, because you'd have\nthe real source files and the generated files in different trees.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 May 2023 17:35:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: testing dist tarballs"
},
{
"msg_contents": "Re: Andres Freund\n> That's due to MAKELEVEL:\n> \n> submake-generated-headers:\n> ifndef NO_GENERATED_HEADERS\n> ifeq ($(MAKELEVEL),0)\n> \t$(MAKE) -C $(top_builddir)/src/backend generated-headers\n> endif\n> endif\n> \n> So the distcheck target needs to reset MAKELEVEL=0 - unless somebody has a\n> better idea?\n\nFwiw, I've had that problem as well in the Debian packages where\ndebian/rules is already a Makefile and calling $(MAKE) from there\ntrips up that logic. The workaround I used is:\n\noverride_dh_auto_build-arch:\n # set MAKELEVEL to 0 to force building submake-generated-headers in src/Makefile.global(.in)\n MAKELEVEL=0 $(MAKE) -C build/src all\n\n...\noverride_dh_auto_test-arch:\nifeq (, $(findstring nocheck, $(DEB_BUILD_OPTIONS)))\n # when tests fail, print newest log files\n # initdb doesn't like LANG and LC_ALL to contradict, unset LANG and LC_CTYPE here\n # temp-install wants to be invoked from a top-level make, unset MAKELEVEL here\n # tell pg_upgrade to create its sockets in /tmp to avoid too long paths\n unset LANG LC_CTYPE MAKELEVEL; ulimit -c unlimited; \\\n if ! make -C build check-world \\\n $(TEMP_CONFIG) \\\n PGSOCKETDIR=\"/tmp\" \\\n PG_TEST_EXTRA='ssl' \\\n PROVE_FLAGS=\"--verbose\"; \\\n...\n\n(Just mentioning this, not asking it to be changed.)\n\n\nRe: Tom Lane\n> Andres Freund <andres@anarazel.de> writes:\n> > First thing I noticed that 'make dist' doesn't work in a vpath, failing in a\n> > somewhat obscure way (likely because in a vpath build the the copy from the\n> > source dir doesn't include GNUMakefile). Do we expect it to work?\n> \n> Don't see how it could possibly be useful in a vpath, because you'd have\n> the real source files and the generated files in different trees.\n\nI don't think \"make dist\" is generally expected to work in vpath\nbuilds, that's probably one indirection layer too much. (The \"make\ndistcheck\" rule generated by automake tests vpath builds, though.)\n\nChristoph\n\n\n",
"msg_date": "Thu, 25 May 2023 16:46:41 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: testing dist tarballs"
},
{
"msg_contents": "On 24.05.23 23:24, Andres Freund wrote:\n> First thing I noticed that 'make dist' doesn't work in a vpath, failing in a\n> somewhat obscure way (likely because in a vpath build the copy from the\n> source dir doesn't include GNUMakefile). Do we expect it to work?\n\nI don't think so.\n\n> Separately, it's somewhat confusing that we include errcodes.h etc in\n> src/backend/utils, rather than its final location, in src/include/utils. It\n> works, even without perl, because copying the file doesn't require perl, it's\n> just generating it...\n\nThe \"copying\" is actually a symlink, right? I don't think we want to \nship symlinks in the tarball?\n\n\n\n",
"msg_date": "Fri, 26 May 2023 09:02:33 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: testing dist tarballs"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-26 09:02:33 +0200, Peter Eisentraut wrote:\n> On 24.05.23 23:24, Andres Freund wrote:\n> > First thing I noticed that 'make dist' doesn't work in a vpath, failing in a\n> > somewhat obscure way (likely because in a vpath build the the copy from the\n> > source dir doesn't include GNUMakefile). Do we expect it to work?\n> \n> I don't think so.\n\nMaybe we should just error out in that case, instead of failing in an obscure\nway down the line?\n\n\n> > Separately, it's somewhat confusing that we include errcodes.h etc in\n> > src/backend/utils, rather than its final location, in src/include/utils. It\n> > works, even without perl, because copying the file doesn't require perl, it's\n> > just generating it...\n> \n> The \"copying\" is actually a symlink, right? I don't think we want to ship\n> symlinks in the tarball?\n\nFair point - still seems we should just create the files in the right\ndirectory instead of doing it in the wrong place and then creating symlinks to\nmake them accessible...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 27 May 2023 11:47:13 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: testing dist tarballs"
},
{
"msg_contents": "On 27.05.23 14:47, Andres Freund wrote:\n>>> Separately, it's somewhat confusing that we include errcodes.h etc in\n>>> src/backend/utils, rather than its final location, in src/include/utils. It\n>>> works, even without perl, because copying the file doesn't require perl, it's\n>>> just generating it...\n>>\n>> The \"copying\" is actually a symlink, right? I don't think we want to ship\n>> symlinks in the tarball?\n> \n> Fair point - still seems we should just create the files in the right\n> directory instead of doing it in the wrong place and then creating symlinks to\n> make them accessible...\n\nRight. I think the reason this was set up this way is that with make it \nis generally dubious to create target files outside of the current \ndirectory.\n\n\n\n",
"msg_date": "Wed, 31 May 2023 06:52:03 -0400",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: testing dist tarballs"
}
] |
[
{
"msg_contents": "I propose to rename some of the rel truncation related constants at\nthe top of vacuumlazy.c, per the attached patch. The patch\nconsolidates related constants into a single block/grouping, and\nimposes a uniform naming scheme.\n\n-- \nPeter Geoghegan",
"msg_date": "Mon, 18 Jul 2022 20:47:49 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Rename some rel truncation related constants at the top of\n vacuumlazy.c"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> I propose to rename some of the rel truncation related constants at\n> the top of vacuumlazy.c, per the attached patch. The patch\n> consolidates related constants into a single block/grouping, and\n> imposes a uniform naming scheme.\n\nUm ... you seem to have removed some useful comments?\n\nPersonally I wouldn't do this, as I don't think the renaming\nbrings much benefit, and it will create a hazard for back-patching\nany fixes that might be needed in that code. I'm not hugely upset\nabout it, but that's the way I'd vote if asked.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Jul 2022 23:55:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rename some rel truncation related constants at the top of\n vacuumlazy.c"
},
{
"msg_contents": "On Mon, Jul 18, 2022 at 8:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Um ... you seem to have removed some useful comments?\n\nI don't think that the stuff about making them into a GUC is useful myself.\n\n> Personally I wouldn't do this, as I don't think the renaming\n> brings much benefit, and it will create a hazard for back-patching\n> any fixes that might be needed in that code. I'm not hugely upset\n> about it, but that's the way I'd vote if asked.\n\nIn that case I withdraw the patch.\n\nFWIW I wrote the patch during the course of work on new feature\ndevelopment. A patch that added a couple of similar constants a bit\nfurther down. Seemed neater this way, but it's certainly not worth\narguing over.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 18 Jul 2022 21:07:59 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Rename some rel truncation related constants at the top of\n vacuumlazy.c"
}
] |
[
{
"msg_contents": "Hi,\n\nAt times it's useful to know the last replayed WAL record's timeline\nID (especially on the standbys that are lagging in applying WAL while\nfailing over - for reporting, logging and debugging purposes). AFAICS,\nthere's no function that exposes the last replayed TLI. We can either\nchange the existing pg_last_wal_replay_lsn() to report TLI along with\nthe LSN which might break the compatibility or introduce a new\nfunction pg_last_wal_replay_info() that emits both LSN and TLI. I'm\nfine with either of the approaches, but for now, I'm attaching a WIP\npatch that adds a new function pg_last_wal_replay_info().\n\nThoughts?\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Tue, 19 Jul 2022 14:28:40 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Expose last replayed timeline ID along with last replayed LSN"
},
{
"msg_contents": "At Tue, 19 Jul 2022 14:28:40 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> Hi,\n> \n> At times it's useful to know the last replayed WAL record's timeline\n> ID (especially on the standbys that are lagging in applying WAL while\n> failing over - for reporting, logging and debugging purposes). AFAICS,\n> there's no function that exposes the last replayed TLI. We can either\n> change the existing pg_last_wal_replay_lsn() to report TLI along with\n> the LSN which might break the compatibility or introduce a new\n> function pg_last_wal_replay_info() that emits both LSN and TLI. I'm\n> fine with either of the approaches, but for now, I'm attaching a WIP\n> patch that adds a new function pg_last_wal_replay_info().\n> \n> Thoughts?\n\nThere was a more comprehensive discussion [1], which went nowhere..\n\n[1] https://www.postgresql.org/message-id/20191211052002.GK72921%40paquier.xyz\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 20 Jul 2022 10:36:29 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Expose last replayed timeline ID along with last replayed LSN"
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 7:06 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 19 Jul 2022 14:28:40 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > Hi,\n> >\n> > At times it's useful to know the last replayed WAL record's timeline\n> > ID (especially on the standbys that are lagging in applying WAL while\n> > failing over - for reporting, logging and debugging purposes). AFAICS,\n> > there's no function that exposes the last replayed TLI. We can either\n> > change the existing pg_last_wal_replay_lsn() to report TLI along with\n> > the LSN which might break the compatibility or introduce a new\n> > function pg_last_wal_replay_info() that emits both LSN and TLI. I'm\n> > fine with either of the approaches, but for now, I'm attaching a WIP\n> > patch that adds a new function pg_last_wal_replay_info().\n> >\n> > Thoughts?\n>\n> There was a more comprehensive discussion [1], which went nowhere..\n>\n> [1] https://www.postgresql.org/message-id/20191211052002.GK72921%40paquier.xyz\n\nThanks Kyotaro-san for pointing at that thread. In fact, I did think\nabout having a new set of info functions pg_current_wal_info,\npg_current_wal_insert_info, pg_current_wal_flush_info,\npg_last_wal_receive_info, pg_last_wal_replay_info - IMO, these APIs\nare the ones that we would want to keep in the code going forward.\nAlthough they introduce some more code momentarily, eventually, it\nmakes sense to delete pg_current_wal_lsn, pg_current_wal_insert_lsn,\npg_current_wal_flush_lsn, pg_last_wal_receive_lsn,\npg_last_wal_replay_lsn, perhaps in the future versions of PG.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 20 Jul 2022 11:04:58 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Expose last replayed timeline ID along with last replayed LSN"
}
] |
[
{
"msg_contents": "Hi\n\nI think there is a newly introduced memory leak in your patch d2d3547.\nTry to fix it in the attached patch. \nKindly to have a check.\n\nRegards,\nTang",
"msg_date": "Tue, 19 Jul 2022 09:02:07 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Memory leak fix in psql"
},
{
"msg_contents": "On Tue, 19 Jul 2022 at 17:02, tanghy.fnst@fujitsu.com <tanghy.fnst@fujitsu.com> wrote:\n> Hi\n>\n> I think there is a newly introduced memory leak in your patch d2d3547.\n> Try to fix it in the attached patch. \n> Kindly to have a check.\n>\n\nYeah, it leaks, and the patch can fix it.\n\nAfter looking around, I found psql/describe.c also has some memory leaks,\nattached a patch to fix these leaks.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Tue, 19 Jul 2022 18:41:13 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 06:41:13PM +0800, Japin Li wrote:\n> After looking around, I found psql/describe.c also has some memory leaks,\n> attached a patch to fix these leaks.\n\nIndeed. There are quite a bit of them, so let's fix all that. You\nhave missed a couple of code paths in objectDescription().\n--\nMichael",
"msg_date": "Tue, 19 Jul 2022 21:32:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
},
{
"msg_contents": "On Tue, 19 Jul 2022 at 20:32, Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Jul 19, 2022 at 06:41:13PM +0800, Japin Li wrote:\n>> After looking around, I found psql/describe.c also has some memory leaks,\n>> attached a patch to fix these leaks.\n>\n> Indeed. There are quite a bit of them, so let's fix all that. You\n> have missed a couple of code paths in objectDescription().\n\nThanks for reviewing. Attached fix the memory leak in objectDescription().\nPlease consider v2 for further review.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Tue, 19 Jul 2022 21:08:53 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-19 21:08:53 +0800, Japin Li wrote:\n> From b2bcc3a1bac67b8b414f2025607f8dd35e096289 Mon Sep 17 00:00:00 2001\n> From: Japin Li <japinli@hotmail.com>\n> Date: Tue, 19 Jul 2022 18:27:25 +0800\n> Subject: [PATCH v2 1/1] Fix the memory leak in psql describe\n> \n> ---\n> src/bin/psql/describe.c | 168 ++++++++++++++++++++++++++++++++++++++++\n> 1 file changed, 168 insertions(+)\n> \n> diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c\n> index 0ce38e4b4c..7a070a6cd0 100644\n> --- a/src/bin/psql/describe.c\n> +++ b/src/bin/psql/describe.c\n> @@ -112,7 +112,10 @@ describeAggregates(const char *pattern, bool verbose, bool showSystem)\n> \t\t\t\t\t\t\t\t\"n.nspname\", \"p.proname\", NULL,\n> \t\t\t\t\t\t\t\t\"pg_catalog.pg_function_is_visible(p.oid)\",\n> \t\t\t\t\t\t\t\tNULL, 3))\n> +\t{\n> +\t\ttermPQExpBuffer(&buf);\n> \t\treturn false;\n> +\t}\n> \n> \tappendPQExpBufferStr(&buf, \"ORDER BY 1, 2, 4;\");\n\nAdding copy over copy of this same block doesn't seem great. Can we instead\nadd a helper for it or such?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 19 Jul 2022 09:28:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-07-19 21:08:53 +0800, Japin Li wrote:\n>> +\t{\n>> +\t\ttermPQExpBuffer(&buf);\n>> \t\treturn false;\n>> +\t}\n\n> Adding copy over copy of this same block doesn't seem great. Can we instead\n> add a helper for it or such?\n\nThe usual style in these files is something like\n\n\tif (bad things happened)\n\t goto fail;\n\n\t...\n\nfail:\n\ttermPQExpBuffer(&buf);\n\treturn false;\n\nYeah, it's old school, but please let's not have a few functions that\ndo it randomly differently from all their neighbors.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Jul 2022 12:36:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
},
{
"msg_contents": "\n\n> On Jul 19, 2022, at 2:02 AM, tanghy.fnst@fujitsu.com wrote:\n> \n> I think there is a newly introduced memory leak in your patch d2d3547.\n\nI agree. Thanks for noticing, and for the patch!\n\n> Try to fix it in the attached patch. \n> Kindly to have a check.\n\nThis looks ok, but comments down-thread seem reasonable, so I suspect a new patch will be needed. Would you like to author it, or would you prefer that I, as the guilty party, do so?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 19 Jul 2022 09:43:21 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 12:36:31PM -0400, Tom Lane wrote:\n> Yeah, it's old school, but please let's not have a few functions that\n> do it randomly differently from all their neighbors.\n\nTrue enough. And it is not like we should free the PQExpBuffer given\nby the caller in validateSQLNamePattern().\n--\nMichael",
"msg_date": "Wed, 20 Jul 2022 08:59:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 09:43:21AM -0700, Mark Dilger wrote:\n> This looks ok, but comments down-thread seem reasonable, so I\n> suspect a new patch will be needed. Would you like to author it, or\n> would you prefer that I, as the guilty party, do so? \n\nIf any of you could update the patch, that would be great. Thanks!\n--\nMichael",
"msg_date": "Wed, 20 Jul 2022 09:01:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
},
{
"msg_contents": "On Tuesday, July 19, 2022 7:41 PM, Japin Li <japinli@hotmail.com> wrote:\n> After looking around, I found psql/describe.c also has some memory\n> leaks,\n> attached a patch to fix these leaks.\n\nThanks for your check and improvement.\nYour fix LGTM so please allow me to merge it in the attached patch. \nBased on your rebased version, now this new patch version is V3.\n\nRegards,\nTang",
"msg_date": "Wed, 20 Jul 2022 03:14:35 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Memory leak fix in psql"
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 03:14:35AM +0000, tanghy.fnst@fujitsu.com wrote:\n> Your fix LGTM so please allow me to merge it in the attached patch. \n> Based on your rebased version, now this new patch version is V3.\n\nWhat about the argument of upthread where we could use a goto in\nfunctions where there are multiple pattern validation checks? Per se\nv4 attached.\n--\nMichael",
"msg_date": "Wed, 20 Jul 2022 12:51:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
},
{
"msg_contents": "\nOn Wed, 20 Jul 2022 at 11:51, Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Jul 20, 2022 at 03:14:35AM +0000, tanghy.fnst@fujitsu.com wrote:\n>> Your fix LGTM so please allow me to merge it in the attached patch. \n>> Based on your rebased version, now this new patch version is V3.\n>\n> What about the argument of upthread where we could use a goto in\n> functions where there are multiple pattern validation checks? Per se\n> v4 attached.\n\n+1. LGTM.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Wed, 20 Jul 2022 12:01:12 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
},
{
"msg_contents": "-1\n\nThough the patch looks good, I myself think the patch should be edited\nand submitted by Tang\nIt's easy to attach a fixed patch based on the comments of the thread,\nbut coins should be\ngiven to Tang since he is the first one to find the mem leak.\n\nNo offense, but that's what I think how open source works ;)\n\nOn Wed, Jul 20, 2022 at 11:51 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jul 20, 2022 at 03:14:35AM +0000, tanghy.fnst@fujitsu.com wrote:\n> > Your fix LGTM so please allow me to merge it in the attached patch.\n> > Based on your rebased version, now this new patch version is V3.\n>\n> What about the argument of upthread where we could use a goto in\n> functions where there are multiple pattern validation checks? Per se\n> v4 attached.\n> --\n> Michael\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Wed, 20 Jul 2022 12:51:24 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 12:51:24PM +0800, Junwang Zhao wrote:\n> Though the patch looks good, I myself think the patch should be edited\n> and submitted by Tang\n> It's easy to attach a fixed patch based on the comments of the thread,\n> but coins should be\n> given to Tang since he is the first one to find the mem leak.\n\nPlease note that I sometimes edit slightly patches that I finish to\nmerge into the tree, where the author listed in the commit log is the\nsame as the original while I usually don't list mine. Credit goes\nwhere it should, and Tang is the one here who authored this patch.\n--\nMichael",
"msg_date": "Wed, 20 Jul 2022 14:14:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
},
{
"msg_contents": "Got it, thanks!\n\nOn Wed, Jul 20, 2022 at 1:14 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jul 20, 2022 at 12:51:24PM +0800, Junwang Zhao wrote:\n> > Though the patch looks good, I myself think the patch should be edited\n> > and submitted by Tang\n> > It's easy to attach a fixed patch based on the comments of the thread,\n> > but coins should be\n> > given to Tang since he is the first one to find the mem leak.\n>\n> Please note that I sometimes edit slightly patches that I finish to\n> merge into the tree, where the author listed in the commit log is the\n> same as the original while I usually don't list mine. Credit goes\n> where it should, and Tang is the one here who authored this patch.\n> --\n> Michael\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Wed, 20 Jul 2022 13:56:38 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
},
{
"msg_contents": "On Wednesday, July 20, 2022 12:52 PM, Michael Paquier <michael@paquier.xyz> wrote:\n> What about the argument of upthread where we could use a goto in\n> functions where there are multiple pattern validation checks? Per se\n> v4 attached.\n\nThanks for your kindly remind and modification.\nI checked v4 patch, it looks good but I think there can be some minor improvement.\nSo I deleted some redundant braces around \"goto error_return; \".\nAlso added an error handle section in validateSQLNamePattern.\n\nKindly to have a check at the attached v5 patch.\n\nRegards,\nTang",
"msg_date": "Wed, 20 Jul 2022 06:21:36 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Memory leak fix in psql"
},
{
"msg_contents": "> On Wed, Jul 20, 2022 at 12:51:24PM +0800, Junwang Zhao wrote:\r\n> > Though the patch looks good, I myself think the patch should be edited\r\n> > and submitted by Tang\r\n> > It's easy to attach a fixed patch based on the comments of the thread,\r\n> > but coins should be\r\n> > given to Tang since he is the first one to find the mem leak.\r\n\r\nHello, Zhao\r\n\r\nThanks for your check at this patch. \r\n\r\nI appreciate your kindly comment but there may be a misunderstanding here.\r\nAs Michael explained, committers in Postgres will review carefully and \r\nhelp to improve contributors' patches. When the patch is finally committed \r\nby one committer, from what I can see, he or she will try to make sure the \r\ncredit goes with everyone who contributed to the committed patch(such as \r\nbug reporter, patch author, tester, reviewer etc.). \r\n\r\nAlso, developers and reviewers will try to help improving our proposed patch\r\nby rebasing it or adding an on-top patch(like Japin Li did in v2).\r\nThese will make the patch better and to be committed ASAP.\r\n\r\nGood to see you at Postgres community.\r\n\r\nRegards,\r\nTang\r\n",
"msg_date": "Wed, 20 Jul 2022 07:03:47 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Memory leak fix in psql"
},
{
"msg_contents": "Thanks for your explanation, this time I know how it works, thanks ;)\n\nOn Wed, Jul 20, 2022 at 3:04 PM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> > On Wed, Jul 20, 2022 at 12:51:24PM +0800, Junwang Zhao wrote:\n> > > Though the patch looks good, I myself think the patch should be edited\n> > > and submitted by Tang\n> > > It's easy to attach a fixed patch based on the comments of the thread,\n> > > but coins should be\n> > > given to Tang since he is the first one to find the mem leak.\n>\n> Hello, Zhao\n>\n> Thanks for your check at this patch.\n>\n> I appreciate your kindly comment but there may be a misunderstanding here.\n> As Michael explained, committers in Postgres will review carefully and\n> help to improve contributors' patches. When the patch is finally committed\n> by one committer, from what I can see, he or she will try to make sure the\n> credit goes with everyone who contributed to the committed patch(such as\n> bug reporter, patch author, tester, reviewer etc.).\n>\n> Also, developers and reviewers will try to help improving our proposed patch\n> by rebasing it or adding an on-top patch(like Japin Li did in v2).\n> These will make the patch better and to be committed ASAP.\n>\n> Good to see you at Postgres community.\n>\n> Regards,\n> Tang\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Wed, 20 Jul 2022 16:03:02 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
},
{
"msg_contents": "More on the same tune.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"This is what I like so much about PostgreSQL. Most of the surprises\nare of the \"oh wow! That's cool\" Not the \"oh shit!\" kind. :)\"\nScott Marlowe, http://archives.postgresql.org/pgsql-admin/2008-10/msg00152.php",
"msg_date": "Wed, 20 Jul 2022 10:05:47 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
},
{
"msg_contents": "On Wed, 20 Jul 2022 at 14:21, tanghy.fnst@fujitsu.com <tanghy.fnst@fujitsu.com> wrote:\n> On Wednesday, July 20, 2022 12:52 PM, Michael Paquier <michael@paquier.xyz> wrote:\n>> What about the argument of upthread where we could use a goto in\n>> functions where there are multiple pattern validation checks? Per se\n>> v4 attached.\n>\n> Thanks for your kindly remind and modification.\n> I checked v4 patch, it looks good but I think there can be some minor improvement.\n> So I deleted some redundant braces around \"goto error_return; \".\n> Also added an error handle section in validateSQLNamePattern.\n>\n> Kindly to have a check at the attached v5 patch.\n>\n> Regards,\n> Tang\n\nThanks for updating the patch. It looks good. However, it cannot be\napplied on 14 stable. The attached patches are for 10-14.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Wed, 20 Jul 2022 16:13:11 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
},
{
"msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> Thanks for updating the patch. It looks good. However, it cannot be\n> applied on 14 stable. The attached patches are for 10-14.\n\nWhile I think this is good cleanup, I'm doubtful that any of these\nleaks are probable enough to be worth back-patching into stable\nbranches. The risk of breaking something should not be neglected.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Jul 2022 09:54:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 10:05:47AM +0200, Alvaro Herrera wrote:\n> More on the same tune.\n\nThanks. I have noticed that as well. I'll include all that in the\nset.\n--\nMichael",
"msg_date": "Thu, 21 Jul 2022 10:02:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
},
{
"msg_contents": "\nOn Wed, 20 Jul 2022 at 21:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Japin Li <japinli@hotmail.com> writes:\n>> Thanks for updating the patch. It looks good. However, it cannot be\n>> applied on 14 stable. The attached patches are for 10-14.\n>\n> While I think this is good cleanup, I'm doubtful that any of these\n> leaks are probable enough to be worth back-patching into stable\n> branches. The risk of breaking something should not be neglected.\n>\n\nYeah, we should take care of the backpatch risk. However, I think\nit makes sense to backpatch.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Thu, 21 Jul 2022 09:10:43 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 09:10:43AM +0800, Japin Li wrote:\n> Yeah, we should take care of the backpatch risk. However, I think\n> it makes sense to backpatch.\n\nWe are talking about 256 bytes being leaked in each loop when a\nvalidation pattern fails or when a query fails, so I don't see a strong\nargument in manipulating 10~14 more than necessary for this amount of\nmemory. The contents of describe.c are the same for v15 though, and\nwe are still in beta on REL_15_STABLE, so I have applied the patch\ndown to v15, adding what Alvaro has sent on top of the rest.\n--\nMichael",
"msg_date": "Thu, 21 Jul 2022 10:48:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
},
{
"msg_contents": "\nOn Thu, 21 Jul 2022 at 09:48, Michael Paquier <michael@paquier.xyz> wrote:\n> On Thu, Jul 21, 2022 at 09:10:43AM +0800, Japin Li wrote:\n>> Yeah, we should take care of the backpatch risk. However, I think\n>> it makes sense to backpatch.\n>\n> We are talking about 256 bytes being leaked in each loop when a\n> validation pattern or when a query fails, so I don't see a strong\n> argument in manipulating 10~14 more than necessary for this amount of\n> memory. The contents of describe.c are the same for v15 though, and\n> we are still in beta on REL_15_STABLE, so I have applied the patch\n> down to v15, adding what Alvaro has sent on top of the rest.\n\nThanks for the explanation! IMO, we could ignore v10-13 branches, however,\nwe should backpatch to v14 which also uses the validateSQLNamePattern()\nfunction leading to a memory leak.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Thu, 21 Jul 2022 14:02:49 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
},
{
"msg_contents": "On 2022-Jul-21, Japin Li wrote:\n\n> On Thu, 21 Jul 2022 at 09:48, Michael Paquier <michael@paquier.xyz> wrote:\n\n> > We are talking about 256 bytes being leaked in each loop when a\n> > validation pattern or when a query fails, so I don't see a strong\n> > argument in manipulating 10~14 more than necessary for this amount of\n> > memory. The contents of describe.c are the same for v15 though, and\n> > we are still in beta on REL_15_STABLE, so I have applied the patch\n> > down to v15, adding what Alvaro has sent on top of the rest.\n> \n> Thanks for the explanation! IMO, we could ignore v10-13 branches, however,\n> we should backpatch to v14 which also uses the validateSQLNamePattern()\n> function leading to a memory leak.\n\nI'd agree in principle, but in practice the commit from 15 does not apply\ncleanly on 14 -- there's a ton of conflicts.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Here's a general engineering tip: if the non-fun part is too complex for you\nto figure out, that might indicate the fun part is too ambitious.\" (John Naylor)\nhttps://postgr.es/m/CAFBsxsG4OWHBbSDM%3DsSeXrQGOtkPiOEOuME4yD7Ce41NtaAD9g%40mail.gmail.com\n\n\n",
"msg_date": "Thu, 21 Jul 2022 08:23:45 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
},
{
"msg_contents": "\nOn Thu, 21 Jul 2022 at 14:23, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Jul-21, Japin Li wrote:\n>\n>> On Thu, 21 Jul 2022 at 09:48, Michael Paquier <michael@paquier.xyz> wrote:\n>\n>> > We are talking about 256 bytes being leaked in each loop when a\n>> > validation pattern or when a query fails, so I don't see a strong\n>> > argument in manipulating 10~14 more than necessary for this amount of\n>> > memory. The contents of describe.c are the same for v15 though, and\n>> > we are still in beta on REL_15_STABLE, so I have applied the patch\n>> > down to v15, adding what Alvaro has sent on top of the rest.\n>> \n>> Thanks for the explanation! IMO, we could ignore v10-13 branches, however,\n>> we should backpatch to v14 which also uses the validateSQLNamePattern()\n>> function leading to a memory leak.\n>\n> I'd agree in principle, but in practice the commit from 15 does not apply\n> cleanly on 14 -- there's a ton of conflicts.\n\nI attached a patch for v14 [1] based on master, if you want to apply it,\nplease consider reviewing it.\n\n[1] https://www.postgresql.org/message-id/MEYP282MB166974D3883A1A6E25B5FDAEB68E9%40MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Thu, 21 Jul 2022 14:43:03 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 02:43:03PM +0800, Japin Li wrote:\n> I attached a patch for v14 [1] based on master, if you want to apply it,\n> please consider reviewing it.\n\nWe are talking about a few hundred bytes leaked each time, so this\ndoes not worry me much in the older branches, honestly.\n--\nMichael",
"msg_date": "Fri, 22 Jul 2022 10:02:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak fix in psql"
}
] |
[
{
"msg_contents": "Most of these are new in v15.\nIn any case, I'm not sure if the others ought to be backpatched.\nThere may be additional fixes to make with the same grepping.",
"msg_date": "Tue, 19 Jul 2022 07:09:48 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "errdetail/errhint style"
},
{
"msg_contents": "On 2022-Jul-19, Justin Pryzby wrote:\n\n> https://www.postgresql.org/docs/current/error-style-guide.html#id-1.10.6.4.7\n> \n> git grep 'errdetail(\"[[:lower:]]'\n> git grep 'errdetail(\".*\".*;' |grep -v '\\.\"'\n\nHmm, +1, though a few of these are still missing ending periods after\nyour patch.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"The problem with the facetime model is not just that it's demoralizing, but\nthat the people pretending to work interrupt the ones actually working.\"\n (Paul Graham)\n\n\n",
"msg_date": "Tue, 19 Jul 2022 14:31:28 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: errdetail/errhint style"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 02:31:28PM +0200, Alvaro Herrera wrote:\n> Hmm, +1, though a few of these are still missing ending periods after\n> your patch.\n\n+1.\n--\nMichael",
"msg_date": "Tue, 19 Jul 2022 21:51:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: errdetail/errhint style"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 09:51:13PM +0900, Michael Paquier wrote:\n> +1.\n\nI have looked at that and added the extra periods, and applied it.\nThanks, Justin.\n--\nMichael",
"msg_date": "Wed, 20 Jul 2022 10:30:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: errdetail/errhint style"
}
] |
[
{
"msg_contents": "Hello all,\n\nWhile investigating a problem in a PG14 instance I noticed that autovacuum\nworkers\nstop processing other databases when a database has a temporary table with\nage\nolder than `autovacuum_freeze_max_age`. To test that I added a custom\nlogline showing\nwhich database the about to spawned autovacuum worker will target. Here are\nthe details:\n\n```\ntest=# select oid,datname from pg_database;\n oid | datname\n-------+-----------\n 13757 | postgres\n 32850 | test\n 1 | template1\n 13756 | template0\n(4 rows)\n```\n\nHere are the loglines under normal circumstances:\n\n```\n2022-07-19 11:24:29.406 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13756)\n2022-07-19 11:24:44.406 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=1)\n2022-07-19 11:24:59.406 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=32850)\n2022-07-19 11:25:14.406 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:25:29.417 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13756)\n2022-07-19 11:25:44.417 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=1)\n2022-07-19 11:25:59.418 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=32850)\n2022-07-19 11:26:14.417 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:26:29.429 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13756)\n2022-07-19 11:26:44.430 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=1)\n2022-07-19 11:26:59.432 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=32850)\n2022-07-19 11:27:14.429 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:27:29.442 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13756)\n2022-07-19 11:27:44.441 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=1)\n2022-07-19 11:27:59.446 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=32850)\n2022-07-19 11:28:14.442 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:28:29.454 -03 [18627] WARNING: AUTOVACUUM 
WORKER SPAWNED\n(db=13756)\n2022-07-19 11:28:44.454 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=1)\n2022-07-19 11:28:59.458 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=32850)\n2022-07-19 11:29:14.443 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:29:29.465 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=32850)\n2022-07-19 11:29:44.485 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=1)\n2022-07-19 11:29:59.499 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13756)\n```\n\nBut when I create a temp table and make it older than\n`autovacuum_freeze_max_age`\nI get this:\n\n```\n2022-07-19 11:30:14.496 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:30:29.495 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:30:44.507 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:30:59.522 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:31:14.536 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:31:29.551 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:31:44.565 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:31:59.579 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:32:14.591 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:32:29.606 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:32:44.619 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:32:59.631 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:33:14.643 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:33:29.655 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:33:44.667 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:33:59.679 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 
11:34:14.694 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:34:29.707 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:34:44.719 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:34:59.732 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n2022-07-19 11:35:14.743 -03 [18627] WARNING: AUTOVACUUM WORKER SPAWNED\n(db=13757)\n```\n\nThis is actually expected behavior judging by the code logic:\nhttps://github.com/postgres/postgres/blob/master/src/backend/postmaster/autovacuum.c#L1201\n\nPG prioritizes databases that need to be frozen and since a temporary table\ncan't\nbe frozen by a process other than the session that created it, that DB will\nremain\na priority until the table is dropped.\n\nI acknowledge that having a temp table existing for long enough to reach\n`autovacuum_freeze_max_age`\nis a problem itself as the table will never be frozen and if age reaches 2\nbillion\nthe instance will shut down. That being said, perhaps there is room for\nimprovement\nin the AV worker spawning strategy to avoid leaving other DBs in the dark.\n\nThis database where I spotted the problem is from a customer that consumes\n100m xacts/hour\nand makes extensive uses of temp tables to load data, so that scenario can\nactually\nhappen.\n\nRegards,\n\nRafael Castro — https://www.EnterpriseDB.com/",
"msg_date": "Tue, 19 Jul 2022 12:40:06 -0300",
"msg_from": "Rafael Thofehrn Castro <rafaelthca@gmail.com>",
"msg_from_op": true,
"msg_subject": "Autovacuum worker spawning strategy"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 12:40:06PM -0300, Rafael Thofehrn Castro wrote:\n> PG prioritizes databases that need to be frozen and since a temporary table\n> can't\n> be frozen by a process other than the session that created it, that DB will\n> remain\n> a priority until the table is dropped.\n> \n> I acknowledge that having a temp table existing for long enough to reach\n> `autovacuum_freeze_max_age`\n> is a problem itself as the table will never be frozen and if age reaches 2\n> billion\n> the instance will shut down. That being said, perhaps there is room for\n> improvement\n> in the AV worker spawning strategy to avoid leaving other DBs in the dark.\n> \n> This database where I spotted the problem is from a customer that consumes\n> 100m xacts/hour\n> and makes extensive uses of temp tables to load data, so that scenario can\n> actually\n> happen.\n\nI wonder if it's worth tracking a separate datfrozenxid that does not\ninclude stuff that is beyond autovacuum's control, like temporary tables.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 19 Jul 2022 09:43:55 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum worker spawning strategy"
}
] |
[
{
"msg_contents": "I'm preparing the way for a later patch that would allow unique hash\nindexes to be primary keys. There are various parts to the problem. I\nwas surprised at how many times we hardcode BTREE_AM_OID and\nassociated BT Strategy Numbers in many parts of the code, so have been\nlooking for ways to reconcile how Hash and Btree strategies work and\npotentially remove hardcoding. There are various comments that say we\nneed a way to be able to define which Strategy Numbers are used by\nindexams.\n\nI came up with a rather simple way: the indexam just says \"I'm like a\nbtree\", which allows you to avoid adding hundreds of operator classes\nfor the new index, since that is cumbersome and insecure.\n\nSpecifically, we add a \"strategyam\" field to the IndexAmRoutine that\nallows an indexam to declare whether it is like a btree, like a hash\nindex or another am. This then allows us to KEEP the hardcoded\nBTREE_AM_OID tests, but point them at the strategyam rather than the\nrelam, which can be cached in various places as needed. No catalog\nchanges needed.\n\nI've coded this up and it works fine.\n\nThe attached patch is still incomplete because we use this in a few\nplaces and they all need to be checked. So before I do that, it seems\nsensible to agree the approach.\n\n(Obviously, there are hundreds of places where BTEqualStrategyNumber\nis hardcoded, and this doesn't change that at all, in case that wasn't\nclear).\n\nComments welcome on this still WIP patch.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Tue, 19 Jul 2022 17:56:26 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "StrategyAM for IndexAMs"
},
{
"msg_contents": "On Tue, 19 Jul 2022 at 18:56, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> I'm preparing the way for a later patch that would allow unique hash\n> indexes to be primary keys. There are various parts to the problem. I\n> was surprised at how many times we hardcode BTREE_AM_OID and\n> associated BT Strategy Numbers in many parts of the code, so have been\n> looking for ways to reconcile how Hash and Btree strategies work and\n> potentially remove hardcoding. There are various comments that say we\n> need a way to be able to define which Strategy Numbers are used by\n> indexams.\n>\n> I came up with a rather simple way: the indexam just says \"I'm like a\n> btree\", which allows you to avoid adding hundreds of operator classes\n> for the new index, since that is cumbersome and insecure.\n\nI'm fairly certain that you can't (and don't want to) make a hash\nindex look like a btree index, considering that of the btree\noperations only equality checks make sense in the hash context, and\nthat you can't do ordered retrieval (incl. no min/max), which are\nmajor features of btree.\n\nWith that in mind, could you tell whether this patch is related to the\neffort of hash-based unique primary keys (apart from inspiration\nduring development), and if so, how?\n\n> Specifically, we add a \"strategyam\" field to the IndexAmRoutine that\n> allows an indexam to declare whether it is like a btree, like a hash\n> index or another am. This then allows us to KEEP the hardcoded\n> BTREE_AM_OID tests, but point them at the strategyam rather than the\n> relam, which can be cached in various places as needed. No catalog\n> changes needed.\n>\n> I've coded this up and it works fine.\n>\n> The attached patch is still incomplete because we use this in a few\n> places and they all need to be checked. 
So before I do that, it seems\n> sensible to agree the approach.\n>\n> (Obviously, there are hundreds of places where BTEqualStrategyNumber\n> is hardcoded, and this doesn't change that at all, in case that wasn't\n> clear).\n>\n> Comments welcome on this still WIP patch.\n\nI think this is a great step in the right direction, fixing one of the\nissues with core index AMs, issues I also complained about earlier\n[0].\n\nThanks,\n\nMatthias van de Meent\n\n[0] https://www.postgresql.org/message-id/CAEze2Wg8QhpOnHoqPNB-AaexGX4Zaij%3D4TT0kaMhF_6T5FXxmQ%40mail.gmail.com\n\n\n",
"msg_date": "Fri, 22 Jul 2022 11:23:23 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: StrategyAM for IndexAMs"
},
{
"msg_contents": "On Fri, 22 Jul 2022 at 10:23, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Tue, 19 Jul 2022 at 18:56, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> >\n> > I'm preparing the way for a later patch that would allow unique hash\n> > indexes to be primary keys. There are various parts to the problem. I\n> > was surprised at how many times we hardcode BTREE_AM_OID and\n> > associated BT Strategy Numbers in many parts of the code, so have been\n> > looking for ways to reconcile how Hash and Btree strategies work and\n> > potentially remove hardcoding. There are various comments that say we\n> > need a way to be able to define which Strategy Numbers are used by\n> > indexams.\n> >\n> > I came up with a rather simple way: the indexam just says \"I'm like a\n> > btree\", which allows you to avoid adding hundreds of operator classes\n> > for the new index, since that is cumbersome and insecure.\n>\n> I'm fairly certain that you can't (and don't want to) make a hash\n> index look like a btree index, considering that of the btree\n> operations only equality checks make sense in the hash context, and\n> that you can't do ordered retrieval (incl. no min/max), which are\n> major features of btree.\n\n\"like a $INDEX_TYPE\" is wrong. What I really mean is \"use the operator\nstrategy numbering same as $INDEX_TYPE\".\n\n> With that in mind, could you tell whether this patch is related to the\n> effort of hash-based unique primary keys (apart from inspiration\n> during development), and if so, how?\n\nThere are lots of places that are hardcoded BTREE_AM_OID, with a mix\nof purposes. It's hard to tackle one without getting drawn in to fix\nthe others.\n\n> > Specifically, we add a \"strategyam\" field to the IndexAmRoutine that\n> > allows an indexam to declare whether it is like a btree, like a hash\n> > index or another am. 
This then allows us to KEEP the hardcoded\n> > BTREE_AM_OID tests, but point them at the strategyam rather than the\n> > relam, which can be cached in various places as needed. No catalog\n> > changes needed.\n> >\n> > I've coded this up and it works fine.\n> >\n> > The attached patch is still incomplete because we use this in a few\n> > places and they all need to be checked. So before I do that, it seems\n> > sensible to agree the approach.\n> >\n> > (Obviously, there are hundreds of places where BTEqualStrategyNumber\n> > is hardcoded, and this doesn't change that at all, in case that wasn't\n> > clear).\n> >\n> > Comments welcome on this still WIP patch.\n>\n> I think this is a great step in the right direction, fixing one of the\n> issues with core index AMs, issues I also complained about earlier\n> [0].\n>\n> [0] https://www.postgresql.org/message-id/CAEze2Wg8QhpOnHoqPNB-AaexGX4Zaij%3D4TT0kaMhF_6T5FXxmQ%40mail.gmail.com\n\nGuess we're thinking along similar lines. I was unaware of your recent\npost; these days I don't read Hackers apart from what I'm working on.\n\n\n--\nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 22 Jul 2022 14:49:23 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: StrategyAM for IndexAMs"
}
] |
[
{
"msg_contents": "Hackers,\n\nCurrently, if we have a query such as:\n\nSELECT a,b, COUNT(*)\nFROM a\nINNER JOIN b on a.a = b.x\nGROUP BY a,b\nORDER BY a DESC, b;\n\nWith enable_hashagg = off, we get the following plan:\n\n QUERY PLAN\n---------------------------------------\n GroupAggregate\n Group Key: a.a, a.b\n -> Sort\n Sort Key: a.a DESC, a.b\n -> Merge Join\n Merge Cond: (a.a = b.x)\n -> Sort\n Sort Key: a.a\n -> Seq Scan on a\n -> Sort\n Sort Key: b.x\n -> Seq Scan on b\n\nWe can see that the merge join picked to sort the input on a.a rather\nthan a.a DESC. This is due to the way\nselect_outer_pathkeys_for_merge() only picks the query_pathkeys as a\nprefix of the join pathkeys if we can find all of the join pathkeys in\nthe query_pathkeys.\n\nI think we can relax this now that we have incremental sort. I think\na better way to limit this is to allow a prefix of the query_pathkeys\nproviding that covers *all* of the join pathkeys. That way, for the\nabove query, it leaves it open for the planner to do the Merge Join by\nsorting by a.a DESC then just do an Incremental Sort to get the\nGroupAggregate input sorted by a.b.\n\nI've attached a patch for this and it changes the plan for the above query to:\n\n QUERY PLAN\n----------------------------------------\n GroupAggregate\n Group Key: a.a, a.b\n -> Incremental Sort\n Sort Key: a.a DESC, a.b\n Presorted Key: a.a\n -> Merge Join\n Merge Cond: (a.a = b.x)\n -> Sort\n Sort Key: a.a DESC\n -> Seq Scan on a\n -> Sort\n Sort Key: b.x DESC\n -> Seq Scan on b\n\nThe current behaviour is causing me a bit of trouble in plan\nregression for the ORDER BY / DISTINCT aggregate improvement patch\nthat I have pending.\n\nIs there any reason that we shouldn't do this?\n\nDavid",
"msg_date": "Wed, 20 Jul 2022 15:02:36 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Is select_outer_pathkeys_for_merge() too strict now we have\n Incremental Sorts?"
},
{
"msg_contents": "On Wed, 20 Jul 2022 at 15:02, David Rowley <dgrowleyml@gmail.com> wrote:\n> I've attached a patch for this and it changes the plan for the above query to:\n\nLooks like I based that patch on the wrong branch.\n\nHere's another version of the patch that's based on master.\n\nDavid",
"msg_date": "Wed, 20 Jul 2022 15:56:53 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Is select_outer_pathkeys_for_merge() too strict now we have\n Incremental Sorts?"
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 11:03 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> I think we can relax this now that we have incremental sort. I think\n> a better way to limit this is to allow a prefix of the query_pathkeys\n> providing that covers *all* of the join pathkeys. That way, for the\n> above query, it leaves it open for the planner to do the Merge Join by\n> sorting by a.a DESC then just do an Incremental Sort to get the\n> GroupAggregate input sorted by a.b.\n\n\nSo the idea is if the ECs used by the mergeclauses are prefix of the\nquery_pathkeys, we use this prefix as pathkeys for the mergejoin. Why\nnot relax this further that if the ECs in the mergeclauses and the\nquery_pathkeys have common prefix, we use that prefix as pathkeys? So\nthat we can have a plan like below:\n\n# explain (costs off) select * from t1 join t2 on t1.c = t2.c and t1.a =\nt2.a order by t1.a DESC, t1.b;\n QUERY PLAN\n-------------------------------------------------------\n Incremental Sort\n Sort Key: t1.a DESC, t1.b\n Presorted Key: t1.a\n -> Merge Join\n Merge Cond: ((t1.a = t2.a) AND (t1.c = t2.c))\n -> Sort\n Sort Key: t1.a DESC, t1.c\n -> Seq Scan on t1\n -> Sort\n Sort Key: t2.a DESC, t2.c\n -> Seq Scan on t2\n(11 rows)\n\nThanks\nRichard",
"msg_date": "Wed, 20 Jul 2022 17:18:59 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is select_outer_pathkeys_for_merge() too strict now we have\n Incremental Sorts?"
},
{
"msg_contents": "On Wed, 20 Jul 2022 at 21:19, Richard Guo <guofenglinux@gmail.com> wrote:\n> So the idea is if the ECs used by the mergeclauses are prefix of the\n> query_pathkeys, we use this prefix as pathkeys for the mergejoin. Why\n> not relax this further that if the ECs in the mergeclauses and the\n> query_pathkeys have common prefix, we use that prefix as pathkeys? So\n> that we can have a plan like below:\n\nI don't think that's a clear-cut win. There is scoring code in there\nto try to arrange the pathkey list in the order of\nmost-useful-to-upper-level-joins firsts. If we were to do as you\ndescribe we could end up generating worse plans when there is some\nsubsequent Merge Join above this one that has join conditions that the\nquery_pathkeys are not compatible with.\n\nMaybe your idea could be made to work in cases where\nbms_equal(joinrel->relids, root->all_baserels). In that case, we\nshould not be processing any further joins and don't need to consider\nthat as a factor for the scoring.\n\nDavid\n\n\n",
"msg_date": "Thu, 21 Jul 2022 07:46:46 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Is select_outer_pathkeys_for_merge() too strict now we have\n Incremental Sorts?"
},
{
    "msg_contents": "On Thu, Jul 21, 2022 at 3:47 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 20 Jul 2022 at 21:19, Richard Guo <guofenglinux@gmail.com> wrote:\n> > So the idea is if the ECs used by the mergeclauses are prefix of the\n> > query_pathkeys, we use this prefix as pathkeys for the mergejoin. Why\n> > not relax this further that if the ECs in the mergeclauses and the\n> > query_pathkeys have common prefix, we use that prefix as pathkeys? So\n> > that we can have a plan like below:\n>\n> I don't think that's a clear-cut win. There is scoring code in there\n> to try to arrange the pathkey list in the order of\n> most-useful-to-upper-level-joins firsts. If we were to do as you\n> describe we could end up generating worse plans when there is some\n> subsequent Merge Join above this one that has join conditions that the\n> query_pathkeys are not compatible with.\n\n\nYeah, you're right. Although we would try different permutation of the\npathkeys in sort_inner_and_outer() but that does not cover every\npossible ordering due to cost consideration. So we still need to respect\nthe heuristics behind the pathkey order returned by this function, which\nis the scoring logic trying to list most-useful-to-upper-level-joins\nkeys earlier.\n\n\n> Maybe your idea could be made to work in cases where\n> bms_equal(joinrel->relids, root->all_baserels). In that case, we\n> should not be processing any further joins and don't need to consider\n> that as a factor for the scoring.\n\n\nThat should work, as long as this case is common enough to worth we\nwriting the codes.\n\nThanks\nRichard",
"msg_date": "Thu, 21 Jul 2022 10:22:48 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is select_outer_pathkeys_for_merge() too strict now we have\n Incremental Sorts?"
},
{
"msg_contents": "On Thu, 21 Jul 2022 at 14:23, Richard Guo <guofenglinux@gmail.com> wrote:\n>\n> On Thu, Jul 21, 2022 at 3:47 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>> Maybe your idea could be made to work in cases where\n>> bms_equal(joinrel->relids, root->all_baserels). In that case, we\n>> should not be processing any further joins and don't need to consider\n>> that as a factor for the scoring.\n>\n>\n> That should work, as long as this case is common enough to worth we\n> writing the codes.\n\nThanks for looking at this patch. I've now pushed it.\n\nDavid\n\n\n",
"msg_date": "Tue, 2 Aug 2022 11:04:58 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Is select_outer_pathkeys_for_merge() too strict now we have\n Incremental Sorts?"
}
] |
[
{
"msg_contents": "Hi,\n\nBack in commit 4f658dc8 we gained src/port/fls.c. As anticipated by\nits commit message, we later finished up with something better in\nsrc/include/port/pg_bitutils.h. fls() (\"find last set\") is an\noff-by-one cousin of pg_leftmost_one_pos32(). I don't know why ffs()\n(\"find first set\", the rightmost variant) made it into POSIX while\nfls() did not, other than perhaps its more amusing name. fls() is\npresent on *BSD, Macs and maybe more, but not everywhere, hence the\nconfigure test. Let's just do it with pg_bitutils.h instead, and drop\nsome cruft? Open to better ideas on whether we need a new function,\nor there is some way to use the existing facilities directly without\nworrying about undefined behaviour for 0, etc.\n\nNoticed while looking for configure stuff to cull. Mentioning\nseparately because this isn't a simple\nno-longer-needed-crutch-for-prestandard-system case like the others in\na nearby thread.",
"msg_date": "Wed, 20 Jul 2022 16:20:46 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Remove fls(), use pg_bitutils.h facilities instead?"
},
{
"msg_contents": "On Wed, 20 Jul 2022 at 16:21, Thomas Munro <thomas.munro@gmail.com> wrote:\n> Back in commit 4f658dc8 we gained src/port/fls.c. As anticipated by\n> its commit message, we later finished up with something better in\n> src/include/port/pg_bitutils.h. fls() (\"find last set\") is an\n> off-by-one cousin of pg_leftmost_one_pos32().\n\nSeems like a good idea to me.\n\nOne thing I noticed was that pg_leftmost_one_pos32() expects a uint32\nbut the new function passes it an int. Is it worth mentioning that's\nok in a comment somewhere or maybe adding an explicit cast?\n\nDavid\n\n\n",
"msg_date": "Wed, 20 Jul 2022 16:43:03 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove fls(), use pg_bitutils.h facilities instead?"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Back in commit 4f658dc8 we gained src/port/fls.c. As anticipated by\n> its commit message, we later finished up with something better in\n> src/include/port/pg_bitutils.h. fls() (\"find last set\") is an\n> off-by-one cousin of pg_leftmost_one_pos32(). I don't know why ffs()\n> (\"find first set\", the rightmost variant) made it into POSIX while\n> fls() did not, other than perhaps its more amusing name. fls() is\n> present on *BSD, Macs and maybe more, but not everywhere, hence the\n> configure test. Let's just do it with pg_bitutils.h instead, and drop\n> some cruft? Open to better ideas on whether we need a new function,\n\nI think we could probably just drop fls() entirely. It doesn't look\nto me like any of the existing callers expect a zero argument, so they\ncould be converted to use pg_leftmost_one_pos32() pretty trivially.\nI don't see that fls() is buying us anything that is worth requiring\nreaders to know yet another nonstandard function.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Jul 2022 00:52:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove fls(), use pg_bitutils.h facilities instead?"
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 4:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think we could probably just drop fls() entirely. It doesn't look\n> to me like any of the existing callers expect a zero argument, so they\n> could be converted to use pg_leftmost_one_pos32() pretty trivially.\n> I don't see that fls() is buying us anything that is worth requiring\n> readers to know yet another nonstandard function.\n\nThat was not true for the case in contiguous_pages_to_segment_bin(),\nin dsa.c. If it's just one place like that (and, hrrm, curiously\nthere is an open issue about binning quality on my to do list...),\nthen perhaps we should just open code it there. The attached doesn't\ntrigger the assertion that work != 0 in a simple make check.",
"msg_date": "Wed, 20 Jul 2022 17:26:10 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove fls(), use pg_bitutils.h facilities instead?"
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 5:26 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Jul 20, 2022 at 4:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I think we could probably just drop fls() entirely. It doesn't look\n> > to me like any of the existing callers expect a zero argument, so they\n> > could be converted to use pg_leftmost_one_pos32() pretty trivially.\n> > I don't see that fls() is buying us anything that is worth requiring\n> > readers to know yet another nonstandard function.\n>\n> That was not true for the case in contiguous_pages_to_segment_bin(),\n> in dsa.c. If it's just one place like that (and, hrrm, curiously\n> there is an open issue about binning quality on my to do list...),\n> then perhaps we should just open code it there. The attached doesn't\n> trigger the assertion that work != 0 in a simple make check.\n\nThat double eval macro wasn't nice. This time with a static inline function.",
"msg_date": "Wed, 20 Jul 2022 17:44:35 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove fls(), use pg_bitutils.h facilities instead?"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Wed, Jul 20, 2022 at 4:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think we could probably just drop fls() entirely. It doesn't look\n>> to me like any of the existing callers expect a zero argument, so they\n>> could be converted to use pg_leftmost_one_pos32() pretty trivially.\n\n> That was not true for the case in contiguous_pages_to_segment_bin(),\n> in dsa.c. If it's just one place like that (and, hrrm, curiously\n> there is an open issue about binning quality on my to do list...),\n\nHow is it sane to ask for a segment bin for zero pages? Seems like\nsomething should have short-circuited such a case well before here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Jul 2022 09:34:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove fls(), use pg_bitutils.h facilities instead?"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> That double eval macro wasn't nice. This time with a static inline function.\n\nSeems like passing a size_t to pg_leftmost_one_pos32 isn't great.\nIt was just as wrong before (if the caller-supplied argument is\nindeed a size_t), but no time like the present to fix it.\n\nWe could have pg_bitutils.h #define pg_leftmost_one_pos_size_t\nas the appropriate one of pg_leftmost_one_pos32/64, perhaps.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Jul 2022 09:48:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove fls(), use pg_bitutils.h facilities instead?"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 1:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> How is it sane to ask for a segment bin for zero pages? Seems like\n> something should have short-circuited such a case well before here.\n\nIt's intended. There are two ways you can arrive here with n == 0:\n\n* There's a special case in execParallel.c that creates a DSA segment\n\"in-place\" with initial size dsa_minimum_size(). That's because we\ndon't know yet if we have any executor nodes that need a DSA segment\n(Parallel Hash, Parallel Bitmap Heap Scan), so we create one with the\nminimum amount of space other than the DSA control meta-data, so you\nget an in-place segment 0 with 0 usable pages. As soon as someone\ntries to allocate one byte, the first external DSM segment will be\ncreated.\n\n* A full segment can be re-binned into slot 0.\n\nOn Thu, Jul 21, 2022 at 1:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Seems like passing a size_t to pg_leftmost_one_pos32 isn't great.\n> It was just as wrong before (if the caller-supplied argument is\n> indeed a size_t), but no time like the present to fix it.\n>\n> We could have pg_bitutils.h #define pg_leftmost_one_pos_size_t\n> as the appropriate one of pg_leftmost_one_pos32/64, perhaps.\n\nYeah.",
"msg_date": "Thu, 21 Jul 2022 20:13:52 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove fls(), use pg_bitutils.h facilities instead?"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Jul 21, 2022 at 1:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> How is it sane to ask for a segment bin for zero pages? Seems like\n>> something should have short-circuited such a case well before here.\n\n> It's intended. There are two ways you can arrive here with n == 0:\n\nOK.\n\n>> We could have pg_bitutils.h #define pg_leftmost_one_pos_size_t\n>> as the appropriate one of pg_leftmost_one_pos32/64, perhaps.\n\n> Yeah.\n\nPatches look good to me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Jul 2022 09:46:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove fls(), use pg_bitutils.h facilities instead?"
}
] |
[
{
"msg_contents": "Hi Michael,\n\nThank you for your feedback, I've incorporated your suggestions by scanning the logs produced from pg_rewind when asserting that certain WAL segment files were skipped from being copied over to the target server.\n\nI've also updated the pg_rewind patch file to target the Postgres master branch (version 16 to be). Please see attached.\n\nThanks,\nJustin\n\n\n\n________________________________\nFrom: Michael Paquier\nSent: Tuesday, July 19, 2022 1:36 AM\nTo: Justin Kwan\nCc: Tom Lane; pgsql-hackers; vignesh; jkwan@cloudflare.com; vignesh ravichandran; hlinnaka@iki.fi\nSubject: Re: Making pg_rewind faster\n\nOn Mon, Jul 18, 2022 at 05:14:00PM +0000, Justin Kwan wrote:\n> Thank you for taking a look at this and that sounds good. I will\n> send over a patch compatible with Postgres v16.\n\n+$node_2->psql(\n+ 'postgres',\n+ \"SELECT extract(epoch from modification) FROM pg_stat_file('pg_wal/000000010000000000000003');\",\n+ stdout => \\my $last_common_tli1_wal_last_modified_at);\nPlease note that you should not rely on the FS-level stats for\nanything that touches the WAL segments. A rough guess about what you\ncould here to make sure that only the set of WAL segments you are\nlooking for is being copied over would be to either:\n- Scan the logs produced by pg_rewind and see if the segments are\ncopied or not, depending on the divergence point (aka the last\ncheckpoint before WAL forked).\n- Clean up pg_wal/ in the target node before running pg_rewind,\nchecking that only the segments you want are available once the\noperation completes.\n--\nMichael",
"msg_date": "Wed, 20 Jul 2022 05:20:18 +0000",
"msg_from": "Justin Kwan <justinpkwan@outlook.com>",
"msg_from_op": true,
"msg_subject": "Re: Making pg_rewind faster"
}
] |
[
{
"msg_contents": "Hi,\n\nIf you look at GetFlushRecPtr() function the OUT parameter for\nTimeLineID is optional and this is not only one, see\nGetWalRcvFlushRecPtr(), GetXLogReplayRecPtr(), etc.\n\nI think we have missed that for GetStandbyFlushRecPtr(), to be\ninlined, we should change this as follow:\n\n--- a/src/backend/replication/walsender.c\n+++ b/src/backend/replication/walsender.c\n@@ -3156,7 +3156,8 @@ GetStandbyFlushRecPtr(TimeLineID *tli)\n receivePtr = GetWalRcvFlushRecPtr(NULL, &receiveTLI);\n replayPtr = GetXLogReplayRecPtr(&replayTLI);\n\n- *tli = replayTLI;\n+ if (tli)\n+ *tli = replayTLI;\n\nThoughts?\n--\nRegards,\nAmul Sul\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 20 Jul 2022 16:38:17 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": true,
"msg_subject": "GetStandbyFlushRecPtr() : OUT param is not optional like elsewhere."
},
{
"msg_contents": "Hi Amul,\n\n> - *tli = replayTLI;\n> + if (tli)\n> + *tli = replayTLI;\n\nI would guess the difference here is that GetStandbyFlushRecPtr is\nstatic. It is used 3 times in walsender.c and in all cases the\nargument can't be NULL.\n\nSo I'm not certain what we will gain from the proposed check.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 20 Jul 2022 14:35:24 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: GetStandbyFlushRecPtr() : OUT param is not optional like\n elsewhere."
},
{
"msg_contents": "Hello\n\nOn 2022-Jul-20, Amul Sul wrote:\n\n> If you look at GetFlushRecPtr() function the OUT parameter for\n> TimeLineID is optional and this is not only one, see\n> GetWalRcvFlushRecPtr(), GetXLogReplayRecPtr(), etc.\n> \n> I think we have missed that for GetStandbyFlushRecPtr(), to be\n> inlined, we should change this as follow:\n\nThis is something we decide mostly on a case-by-case basis. There's no\nfixed rule that all out params have to be optional.\n\nIf anything is improved by this change, let's see what it is.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 20 Jul 2022 15:36:16 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: GetStandbyFlushRecPtr() : OUT param is not optional like\n elsewhere."
},
{
"msg_contents": "Thanks Aleksander and Álvaro for your inputs.\n\nI understand this change is not making any improvement to the current\ncode. I was a bit concerned regarding the design and consistency of\nthe function that exists for the server in recovery and for the server\nthat is not in recovery. I was trying to write following snippet\nwhere I am interested only for XLogRecPtr:\n\nrecPtr = RecoveryInProgress() ? GetStandbyFlushRecPtr(NULL) :\nGetFlushRecPtr(NULL);\n\nBut I can't write this since I have to pass an argument for\nGetStandbyFlushRecPtr() but that is not compulsory for\nGetFlushRecPtr().\n\nI agree to reject proposed changes since that is not useful\nimmediately and have no rule for the optional argument for the\nsimilar-looking function.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Thu, 21 Jul 2022 09:38:17 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: GetStandbyFlushRecPtr() : OUT param is not optional like\n elsewhere."
}
] |
[
{
    "msg_contents": "Hi,\n\nAfter the commit [1], is it correct to say errmsg(\"invalid data in file\n\\\"%s\\\"\", BACKUP_LABEL_FILE))); in do_pg_backup_stop() when we hold the\ncontents in backend global memory, not actually reading from backup_label\nfile? However, it is correct to say that in read_backup_label.\n\nIMO, we can either say \"invalid backup_label contents found\" or we can be\nmore descriptive and say \"invalid \"START WAL LOCATION\" line found in\nbackup_label content\" and \"invalid \"BACKUP FROM\" line found in\nbackup_label content\" and so on.\n\nThoughts?\n\n[1]\ncommit 39969e2a1e4d7f5a37f3ef37d53bbfe171e7d77a\nAuthor: Stephen Frost <sfrost@snowman.net>\nDate:   Wed Apr 6 14:41:03 2022 -0400\n\n    Remove exclusive backup mode\n\nerrmsg(\"invalid data in file \\\"%s\\\"\", BACKUP_LABEL_FILE)));\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Wed, 20 Jul 2022 17:09:09 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Is it correct to say, \"invalid data in file \\\"%s\\\"\",\n BACKUP_LABEL_FILE in do_pg_backup_stop?"
},
{
    "msg_contents": "At Wed, 20 Jul 2022 17:09:09 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> Hi,\n> \n> After the commit [1], is it correct to say errmsg(\"invalid data in file\n> \\\"%s\\\"\", BACKUP_LABEL_FILE))); in do_pg_backup_stop() when we hold the\n> contents in backend global memory, not actually reading from backup_label\n> file? However, it is correct to say that in read_backup_label.\n> \n> IMO, we can either say \"invalid backup_label contents found\" or we can be\n> more descriptive and say \"invalid \"START WAL LOCATION\" line found in\n> backup_label content\" and \"invalid \"BACKUP FROM\" line found in\n> backup_label content\" and so on.\n> \n> Thoughts?\n\nPreviously there was the case where the \"char *labelfile\" was loaded\nfrom a file, but currently it is always a string built by the process.\nIn that sense, nowadays it is a kind of internal error, which I think\nis not supposed to be exposed to users.\n\nSo I think we can leave the code alone to avoid back-patching\nobstacles. But if we decided to change the code around, I'd like to\nchange the string into a C struct, so that we don't need to parse it.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 21 Jul 2022 18:02:59 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is it correct to say, \"invalid data in file \\\"%s\\\"\",\n BACKUP_LABEL_FILE in do_pg_backup_stop?"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 2:33 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 20 Jul 2022 17:09:09 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > Hi,\n> >\n> > After the commit [1], is it correct to say errmsg(\"invalid data in file\n> > \\\"%s\\\"\", BACKUP_LABEL_FILE))); in do_pg_backup_stop() when we hold the\n> > contents in backend global memory, not actually reading from backup_label\n> > file? However, it is correct to say that in read_backup_label.\n> >\n> > IMO, we can either say \"invalid backup_label contents found\" or we can be\n> > more descriptive and say \"invalid \"START WAL LOCATION\" line found in\n> > backup_label content\" and \"invalid \"BACKUP FROM\" line found in\n> > backup_label content\" and so on.\n> >\n> > Thoughts?\n>\n> Previously there the case the \"char *labelfile\" is loaded from a file,\n> but currently it is alwasy a string build on the process. In that\n> sense, nowadays it is a kind of internal error, which I think is not\n> supposed to be exposed to users.\n>\n> So I think we can leave the code alone to avoid back-patching\n> obstacles. But if we decided to change the code around, I'd like to\n> change the string into a C struct, so that we don't need to parse it.\n\nHm. I think we must take this opportunity to clean it up. You are\nright, we don't need to parse the label file contents (just like we\nused to do previously after reading it from the file) in\ndo_pg_backup_stop(), instead we can just pass a structure. Also,\ndo_pg_backup_stop() isn't modifying any labelfile contents, but using\nstartxlogfilename, startpoint and backupfrom from the labelfile\ncontents. I think this information can easily be passed as a single\nstructure. 
In fact, I might think a bit more here and wrap label_file,\ntblspc_map_file to a single structure something like below and pass it\nacross the functions.\n\ntypedef struct BackupState\n{\nStringInfo label_file;\nStringInfo tblspc_map_file;\nchar startxlogfilename[MAXFNAMELEN];\nXLogRecPtr startpoint;\nchar backupfrom[20];\n} BackupState;\n\nThis way, the code is more readable, structured and we can remove 2\nsscanf() calls, 2 \"invalid data in file\" errors, 1 strchr() call, 1\nstrstr() call. Only thing is that it creates code diff from the\nprevious PG versions which is fine IMO. If okay, I'm happy to prepare\na patch.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 25 Jul 2022 14:21:38 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Is it correct to say, \"invalid data in file \\\"%s\\\"\",\n BACKUP_LABEL_FILE in do_pg_backup_stop?"
},
{
"msg_contents": "At Mon, 25 Jul 2022 14:21:38 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> Hm. I think we must take this opportunity to clean it up. You are\n> right, we don't need to parse the label file contents (just like we\n> used to do previously after reading it from the file) in\n> do_pg_backup_stop(), instead we can just pass a structure. Also,\n> do_pg_backup_stop() isn't modifying any labelfile contents, but using\n> startxlogfilename, startpoint and backupfrom from the labelfile\n> contents. I think this information can easily be passed as a single\n> structure. In fact, I might think a bit more here and wrap label_file,\n> tblspc_map_file to a single structure something like below and pass it\n> across the functions.\n> \n> typedef struct BackupState\n> {\n> StringInfo label_file;\n> StringInfo tblspc_map_file;\n> char startxlogfilename[MAXFNAMELEN];\n> XLogRecPtr startpoint;\n> char backupfrom[20];\n> } BackupState;\n> \n> This way, the code is more readable, structured and we can remove 2\n> sscanf() calls, 2 \"invalid data in file\" errors, 1 strchr() call, 1\n> strstr() call. Only thing is that it creates code diff from the\n> previous PG versions which is fine IMO. If okay, I'm happy to prepare\n> a patch.\n> \n> Thoughts?\n\nIt is more or less what was in my mind, but it seems that we don't\nneed StringInfo there, or should avoid it to signal the strings are\nnot editable.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 26 Jul 2022 11:49:05 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is it correct to say, \"invalid data in file \\\"%s\\\"\",\n BACKUP_LABEL_FILE in do_pg_backup_stop?"
},
{
"msg_contents": "On 7/25/22 22:49, Kyotaro Horiguchi wrote:\n> At Mon, 25 Jul 2022 14:21:38 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n>> Hm. I think we must take this opportunity to clean it up. You are\n>> right, we don't need to parse the label file contents (just like we\n>> used to do previously after reading it from the file) in\n>> do_pg_backup_stop(), instead we can just pass a structure. Also,\n>> do_pg_backup_stop() isn't modifying any labelfile contents, but using\n>> startxlogfilename, startpoint and backupfrom from the labelfile\n>> contents. I think this information can easily be passed as a single\n>> structure. In fact, I might think a bit more here and wrap label_file,\n>> tblspc_map_file to a single structure something like below and pass it\n>> across the functions.\n>>\n>> typedef struct BackupState\n>> {\n>> StringInfo label_file;\n>> StringInfo tblspc_map_file;\n>> char startxlogfilename[MAXFNAMELEN];\n>> XLogRecPtr startpoint;\n>> char backupfrom[20];\n>> } BackupState;\n>>\n>> This way, the code is more readable, structured and we can remove 2\n>> sscanf() calls, 2 \"invalid data in file\" errors, 1 strchr() call, 1\n>> strstr() call. Only thing is that it creates code diff from the\n>> previous PG versions which is fine IMO. If okay, I'm happy to prepare\n>> a patch.\n>>\n>> Thoughts?\n> \n> It is more or less what was in my mind, but it seems that we don't\n> need StringInfo there, or should avoid it to signal the strings are\n> not editable.\n\nI would prefer to have all the components of backup_label stored \nseparately and then generate backup_label from them in pg_backup_stop().\n\nFor PG16 I am planning to add some fields to backup_label that are not \nknown when pg_backup_start() is called, e.g. min recovery time.\n\nRegards,\n-David\n\n\n",
"msg_date": "Tue, 26 Jul 2022 07:52:31 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Is it correct to say, \"invalid data in file \\\"%s\\\"\",\n BACKUP_LABEL_FILE in do_pg_backup_stop?"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 5:22 PM David Steele <david@pgmasters.net> wrote:\n>\n> On 7/25/22 22:49, Kyotaro Horiguchi wrote:\n> > At Mon, 25 Jul 2022 14:21:38 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> >> Hm. I think we must take this opportunity to clean it up. You are\n> >> right, we don't need to parse the label file contents (just like we\n> >> used to do previously after reading it from the file) in\n> >> do_pg_backup_stop(), instead we can just pass a structure. Also,\n> >> do_pg_backup_stop() isn't modifying any labelfile contents, but using\n> >> startxlogfilename, startpoint and backupfrom from the labelfile\n> >> contents. I think this information can easily be passed as a single\n> >> structure. In fact, I might think a bit more here and wrap label_file,\n> >> tblspc_map_file to a single structure something like below and pass it\n> >> across the functions.\n> >>\n> >> typedef struct BackupState\n> >> {\n> >> StringInfo label_file;\n> >> StringInfo tblspc_map_file;\n> >> char startxlogfilename[MAXFNAMELEN];\n> >> XLogRecPtr startpoint;\n> >> char backupfrom[20];\n> >> } BackupState;\n> >>\n> >> This way, the code is more readable, structured and we can remove 2\n> >> sscanf() calls, 2 \"invalid data in file\" errors, 1 strchr() call, 1\n> >> strstr() call. Only thing is that it creates code diff from the\n> >> previous PG versions which is fine IMO. 
If okay, I'm happy to prepare\n> >> a patch.\n> >>\n> >> Thoughts?\n> >\n> > It is more or less what was in my mind, but it seems that we don't\n> > need StringInfo there, or should avoid it to signal the strings are\n> > not editable.\n>\n> I would prefer to have all the components of backup_label stored\n> separately and then generate backup_label from them in pg_backup_stop().\n\n+1, because pg_backup_stop is the one that's returning backup_label\ncontents, so it does make sense for it to prepare it once and for all\nand return.\n\n> For PG16 I am planning to add some fields to backup_label that are not\n> known when pg_backup_start() is called, e.g. min recovery time.\n\nCan you please point to your patch that does above?\n\nYes, right now, backup_label or tablespace_map contents are being\nfilled in by pg_backup_start and are never changed again. But if your\nabove proposal is for fixing some issue, then it would make sense for\nus to carry all the info in a structure to pg_backup_stop and then let\nit prepare the backup_label and tablespace_map contents.\n\nIf the approach is okay for the hackers, I would like to spend time on it.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Tue, 26 Jul 2022 17:29:29 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Is it correct to say, \"invalid data in file \\\"%s\\\"\",\n BACKUP_LABEL_FILE in do_pg_backup_stop?"
},
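Bharath's proposed BackupState structure above can be sketched as a standalone approximation. Note this is a hypothetical sketch, not the in-tree code: it swaps PostgreSQL's StringInfo, XLogRecPtr, palloc0() and pstrdup() for plain C types and allocators, so it only illustrates the shape of the struct and its create/free helpers.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define MAXFNAMELEN 64

/*
 * Standalone approximation of the proposed BackupState: the real
 * struct uses StringInfo for the file contents and XLogRecPtr for
 * the start LSN, and lives in PostgreSQL memory contexts.
 */
typedef struct BackupState
{
	char	   *label_file;			/* backup_label contents */
	char	   *tblspc_map_file;	/* tablespace_map contents */
	char		startxlogfilename[MAXFNAMELEN];
	uint64_t	startpoint;			/* stand-in for XLogRecPtr */
	char		backupfrom[20];		/* "primary" or "standby" */
} BackupState;

/* Create a zeroed state; the patch uses palloc0() instead. */
static BackupState *
create_backup_state(const char *backupfrom)
{
	BackupState *state = calloc(1, sizeof(BackupState));

	snprintf(state->backupfrom, sizeof(state->backupfrom), "%s", backupfrom);
	return state;
}

/* Free the state and any file contents it carries. */
static void
free_backup_state(BackupState *state)
{
	free(state->label_file);
	free(state->tblspc_map_file);
	free(state);
}
```

Passing one such struct from backup start to backup stop is what lets the stop side skip the sscanf()/strchr() re-parsing of the label text that the thread complains about.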
{
"msg_contents": "\n\nOn 7/26/22 07:59, Bharath Rupireddy wrote:\n> On Tue, Jul 26, 2022 at 5:22 PM David Steele <david@pgmasters.net> wrote:\n>>\n>> On 7/25/22 22:49, Kyotaro Horiguchi wrote:\n>>> At Mon, 25 Jul 2022 14:21:38 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n>>>> Hm. I think we must take this opportunity to clean it up. You are\n>>>> right, we don't need to parse the label file contents (just like we\n>>>> used to do previously after reading it from the file) in\n>>>> do_pg_backup_stop(), instead we can just pass a structure. Also,\n>>>> do_pg_backup_stop() isn't modifying any labelfile contents, but using\n>>>> startxlogfilename, startpoint and backupfrom from the labelfile\n>>>> contents. I think this information can easily be passed as a single\n>>>> structure. In fact, I might think a bit more here and wrap label_file,\n>>>> tblspc_map_file to a single structure something like below and pass it\n>>>> across the functions.\n>>>>\n>>>> typedef struct BackupState\n>>>> {\n>>>> StringInfo label_file;\n>>>> StringInfo tblspc_map_file;\n>>>> char startxlogfilename[MAXFNAMELEN];\n>>>> XLogRecPtr startpoint;\n>>>> char backupfrom[20];\n>>>> } BackupState;\n>>>>\n>>>> This way, the code is more readable, structured and we can remove 2\n>>>> sscanf() calls, 2 \"invalid data in file\" errors, 1 strchr() call, 1\n>>>> strstr() call. Only thing is that it creates code diff from the\n>>>> previous PG versions which is fine IMO. If okay, I'm happy to prepare\n>>>> a patch.\n>>>>\n>>>> Thoughts?\n>>>\n>>> It is more or less what was in my mind, but it seems that we don't\n>>> need StringInfo there, or should avoid it to signal the strings are\n>>> not editable.\n>>\n>> I would prefer to have all the components of backup_label stored\n>> separately and then generate backup_label from them in pg_backup_stop().\n> \n> +1, because pg_backup_stop is the one that's returning backup_label\n> contents, so it does make sense for it to prepare it once and for all\n> and return.\n> \n>> For PG16 I am planning to add some fields to backup_label that are not\n>> known when pg_backup_start() is called, e.g. min recovery time.\n> \n> Can you please point to your patch that does above?\n\nCurrently it is a plan, not a patch. So there is nothing to show yet.\n\n> Yes, right now, backup_label or tablespace_map contents are being\n> filled in by pg_backup_start and are never changed again. But if your\n> above proposal is for fixing some issue, then it would make sense for\n> us to carry all the info in a structure to pg_backup_stop and then let\n> it prepare the backup_label and tablespace_map contents.\n\nI think this makes sense even if I don't get these changes into PG16.\n\n> If the approach is okay for the hackers, I would like to spend time on it.\n\n+1 from me.\n\nRegards,\n-David\n\n\n",
"msg_date": "Tue, 26 Jul 2022 08:20:49 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Is it correct to say, \"invalid data in file \\\"%s\\\"\",\n BACKUP_LABEL_FILE in do_pg_backup_stop?"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 5:50 PM David Steele <david@pgmasters.net> wrote:\n>\n> >> I would prefer to have all the components of backup_label stored\n> >> separately and then generate backup_label from them in pg_backup_stop().\n> >\n> > +1, because pg_backup_stop is the one that's returning backup_label\n> > contents, so it does make sense for it to prepare it once and for all\n> > and return.\n> >\n> >> For PG16 I am planning to add some fields to backup_label that are not\n> >> known when pg_backup_start() is called, e.g. min recovery time.\n> >\n> > Can you please point to your patch that does above?\n>\n> Currently it is a plan, not a patch. So there is nothing to show yet.\n>\n> > Yes, right now, backup_label or tablespace_map contents are being\n> > filled in by pg_backup_start and are never changed again. But if your\n> > above proposal is for fixing some issue, then it would make sense for\n> > us to carry all the info in a structure to pg_backup_stop and then let\n> > it prepare the backup_label and tablespace_map contents.\n>\n> I think this makes sense even if I don't get these changes into PG16.\n>\n> > If the approach is okay for the hackers, I would like to spend time on it.\n>\n> +1 from me.\n\nHere comes the v1 patch. This patch tries to refactor backup related\ncode, advantages of doing so are following:\n\n1) backup state is more structured now - all in a single structure,\ncallers can create backup_label contents whenever required, either\nduring the pg_backup_start or the pg_backup_stop or in between.\n2) no parsing of backup_label file lines now in pg_backup_stop, no\nerror checking for invalid parsing.\n3) backup_label and history file contents have most of the things in\ncommon, they can now be created within a single function.\n4) makes backup related code extensible and readable.\n\nOne downside is that it creates a lot of diff with previous versions.\n\nPlease review.\n\n--\nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/",
"msg_date": "Sat, 30 Jul 2022 05:37:23 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Is it correct to say, \"invalid data in file \\\"%s\\\"\",\n BACKUP_LABEL_FILE in do_pg_backup_stop?"
},
{
"msg_contents": "On Sat, Jul 30, 2022 at 5:37 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Jul 26, 2022 at 5:50 PM David Steele <david@pgmasters.net> wrote:\n> >\n> > >> I would prefer to have all the components of backup_label stored\n> > >> separately and then generate backup_label from them in pg_backup_stop().\n> > >\n> > > +1, because pg_backup_stop is the one that's returning backup_label\n> > > contents, so it does make sense for it to prepare it once and for all\n> > > and return.\n> > >\n> > >> For PG16 I am planning to add some fields to backup_label that are not\n> > >> known when pg_backup_start() is called, e.g. min recovery time.\n> > >\n> > > Can you please point to your patch that does above?\n> >\n> > Currently it is a plan, not a patch. So there is nothing to show yet.\n> >\n> > > Yes, right now, backup_label or tablespace_map contents are being\n> > > filled in by pg_backup_start and are never changed again. But if your\n> > > above proposal is for fixing some issue, then it would make sense for\n> > > us to carry all the info in a structure to pg_backup_stop and then let\n> > > it prepare the backup_label and tablespace_map contents.\n> >\n> > I think this makes sense even if I don't get these changes into PG16.\n> >\n> > > If the approach is okay for the hackers, I would like to spend time on it.\n> >\n> > +1 from me.\n>\n> Here comes the v1 patch. This patch tries to refactor backup related\n> code, advantages of doing so are following:\n>\n> 1) backup state is more structured now - all in a single structure,\n> callers can create backup_label contents whenever required, either\n> during the pg_backup_start or the pg_backup_stop or in between.\n> 2) no parsing of backup_label file lines now in pg_backup_stop, no\n> error checking for invalid parsing.\n> 3) backup_label and history file contents have most of the things in\n> common, they can now be created within a single function.\n> 4) makes backup related code extensible and readable.\n>\n> One downside is that it creates a lot of diff with previous versions.\n>\n> Please review.\n\nI added this to current CF - https://commitfest.postgresql.org/39/3808/\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Mon, 8 Aug 2022 19:20:31 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Refactor backup related code (was: Is it correct to say, \"invalid\n data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Mon, Aug 8, 2022 at 7:20 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> > Please review.\n>\n> I added this to current CF - https://commitfest.postgresql.org/39/3808/\n\nHere's the v2 patch, no change from v1, just rebased because of commit\na8c012869763c711abc9085f54b2a100b60a85fa (Move basebackup code to new\ndirectory src/backend/backup).\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/",
"msg_date": "Thu, 11 Aug 2022 09:55:13 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say, \"invalid\n data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Thu, Aug 11, 2022 at 09:55:13AM +0530, Bharath Rupireddy wrote:\n> Here's the v2 patch, no change from v1, just rebased because of commit\n> a8c012869763c711abc9085f54b2a100b60a85fa (Move basebackup code to new\n> directory src/backend/backup).\n\nI was skimming this patch, and indeed it is a bit crazy to generate\nthe contents of the backup_label file at backup start, just to parse\nthem again at backup stop with these extra sscanf() calls.\n\n-#define PG_STOP_BACKUP_V2_COLS 3\n+#define PG_BACKUP_STOP_V2_COLS 3\nIt seems to me that such changes, while they make sense with the new\nnaming of the backup start/stop functions, are unrelated to what you\nare trying to solve primarily here. This justifies a separate\ncleanup, but I am perhaps overly-pedantic here :)\n--\nMichael",
"msg_date": "Mon, 12 Sep 2022 16:42:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say,\n \"invalid data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 1:12 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Aug 11, 2022 at 09:55:13AM +0530, Bharath Rupireddy wrote:\n> > Here's the v2 patch, no change from v1, just rebased because of commit\n> > a8c012869763c711abc9085f54b2a100b60a85fa (Move basebackup code to new\n> > directory src/backend/backup).\n>\n> I was skimming at this patch, and indeed it is a bit crazy to write\n> the generate the contents of the backup_label file at backup start,\n> just to parse them again at backup stop with these extra sscan calls.\n\nThanks for taking a look at the patch.\n\n> -#define PG_STOP_BACKUP_V2_COLS 3\n> +#define PG_BACKUP_STOP_V2_COLS 3\n> It seems to me that such changes, while they make sense with the new\n> naming of the backup start/stop functions are unrelated to what you\n> are trying to solve primarily here. This justifies a separate\n> cleanup, but I am perhaps overly-pedantic here :)\n\nI've posted a separate patch [1] to adjust the macro name alone.\n\nPlease review the attached v3 patch after removing the above macro changes.\n\n[1] https://www.postgresql.org/message-id/CALj2ACXjvC28ppeDTCrfaSyHga0ggP5nRLJbsjx%3D7N-74UT4QA%40mail.gmail.com\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 12 Sep 2022 17:09:17 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say, \"invalid\n data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 5:09 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Please review the attached v3 patch after removing the above macro changes.\n\nI'm attaching the v4 patch that's rebased on to the latest HEAD.\nPlease consider this for review.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 14 Sep 2022 14:24:12 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say, \"invalid\n data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Wed, Sep 14, 2022 at 02:24:12PM +0530, Bharath Rupireddy wrote:\n> I'm attaching the v4 patch that's rebased on to the latest HEAD.\n> Please consider this for review.\n\nI have been looking at this patch.\n\n- StringInfo labelfile;\n- StringInfo tblspc_map_file;\n backup_manifest_info manifest;\n+ BackupState backup_state;\nYou could just initialize the state here with a {0}. That's simpler.\n\n--- a/src/include/access/xlog_internal.h\n+++ b/src/include/access/xlog_internal.h\n@@ -380,6 +380,31 @@ GetRmgr(RmgrId rmid)\n }\n #endif\n \n+/* Structure to hold backup state. */\n+typedef struct BackupStateData\n+{\nWhy is that in xlog_internal.h? This header includes a lot of\ndeclarations about the internals of WAL, but the backup state is not\nthat internal. I'd like to think that we should begin separating the\nbackup-related routines into their own file, as a set of\nxlogbackup.c/h in this case. That's a split I have been wondering\nabout for some time now. The internals of xlog.c for the start/stop\nbackups are tied to XLogCtlData, which makes such a switch more\ncomplicated than it looks, so we can begin small and have the routines\nto create, free and build the label file and the tablespace map in\nthis new file.\n\n+ state->name = (char *) palloc0(len + 1);\n+ memcpy(state->name, name, len);\nOr just pstrdup()?\n\n+BackupState\n+get_backup_state(const char *name)\n+{\nI would name this one create_backup_state() instead.\n\n+void\n+create_backup_content_str(BackupState state, bool forhistoryfile)\n+{\nThis could be a build_backup_content().\n\nIt seems to me that there is no point in having the list of\ntablespaces in BackupStateData? This actually makes the code harder\nto follow; see for example the changes with do_pg_backup_start(),\nwhere the list of tablespaces may or may not be passed down as a\npointer of BackupStateData while BackupStateData is itself the first\nargument of this routine. These are independent from the label and\nbackup history file, as well.\n\nWe need to be careful about the file format (see read_backup_label()),\nand create_backup_content_str() is careful about that, which is good.\nCore does not care about the format of the backup history file, though\nsome community tools may. I agree that what you are proposing here\nmakes the generation of these files easier to follow, but let's\ndocument what forhistoryfile is here for, at least. Saving the\nbackup label and history file strings in BackupState is a confusing\ninterface IMO. It would be more intuitive to take the backup state as\ninput, and provide the generated string as output, depending on what\nwe want from the backup state.\n\n- backup_started_in_recovery = RecoveryInProgress();\n+ Assert(state != NULL);\n+\n+ in_recovery = RecoveryInProgress();\n[...]\n- if (strcmp(backupfrom, \"standby\") == 0 && !backup_started_in_recovery)\n+ if (state->started_in_recovery == true && in_recovery == false)\n\nI would have kept the name backup_started_in_recovery here. What\nyou are doing is logically right by relying on started_in_recovery to\ncheck if recovery was running when the backup started, but this just\ncreates useless noise in the refactoring.\n\nSomething unrelated to your patch that I am noticing while scanning\nthe area is that we have always been lazy in freeing the label file\ndata allocated in TopMemoryContext when using the SQL functions if the\nbackup is aborted. We are not talking about a large amount of\nmemory each time a backup is performed, but that could be a cause for\nmemory bloat in a server if the same session is used and backups keep\nfailing, as the data is freed only on a successful pg_backup_stop().\nBase backups through the replication protocol don't care about that as\nthe code keeps around the same pointer for the whole duration of\nperform_base_backup(). Trying to tie the cleanup of the label file\nto the abort phase would cause more complications with\ndo_pg_backup_stop(), and xlog.c has no need to know about that now.\nJust a remark for later.\n--\nMichael",
"msg_date": "Fri, 16 Sep 2022 15:30:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say,\n \"invalid data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
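Michael's suggested build_backup_content() can be illustrated with a minimal sketch. The line format below only approximates backup_label (the real file also carries CHECKPOINT LOCATION, BACKUP METHOD, START TIME and more, and read_backup_label() depends on the exact format), so treat this purely as the shape of a builder shared by the label and history files, with a stand-in struct of invented fields:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Minimal stand-in for the fields the builder reads. */
typedef struct BackupState
{
	uint64_t	startpoint;		/* stand-in for XLogRecPtr */
	uint64_t	stoppoint;
	char		name[64];		/* user-supplied backup label */
} BackupState;

/*
 * One builder for both files: the history file is a superset of the
 * label file in that it also records where the backup stopped. The
 * real line set is larger than what is generated here.
 */
static char *
build_backup_content(const BackupState *state, bool forhistoryfile)
{
	char	   *buf = malloc(512);
	int			off;

	off = snprintf(buf, 512, "START WAL LOCATION: %X/%X\n",
				   (uint32_t) (state->startpoint >> 32),
				   (uint32_t) state->startpoint);
	if (forhistoryfile)
		off += snprintf(buf + off, 512 - off,
						"STOP WAL LOCATION: %X/%X\n",
						(uint32_t) (state->stoppoint >> 32),
						(uint32_t) state->stoppoint);
	snprintf(buf + off, 512 - off, "LABEL: %s\n", state->name);
	return buf;
}
```

With this shape, the stop-side code does not need to parse anything back out of a pre-built string; callers just ask the builder for whichever file they need, whenever they need it.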
{
"msg_contents": "On Fri, Sep 16, 2022 at 12:01 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Sep 14, 2022 at 02:24:12PM +0530, Bharath Rupireddy wrote:\n> > I'm attaching the v4 patch that's rebased on to the latest HEAD.\n> > Please consider this for review.\n>\n> I have been looking at this patch.\n\nThanks for reviewing it.\n\n> - StringInfo labelfile;\n> - StringInfo tblspc_map_file;\n> backup_manifest_info manifest;\n> + BackupState backup_state;\n> You could use initialize the state here with a {0}. That's simpler.\n\nBackupState is a pointer to BackupStateData, we can't initialize that\nway. However, I got rid of BackupStateData and used BackupState for\nthe structure directly, whenever pointer to the structure is required,\nI'm using BackupState * to be more clear.\n\n> --- a/src/include/access/xlog_internal.h\n> +++ b/src/include/access/xlog_internal.h\n> @@ -380,6 +380,31 @@ GetRmgr(RmgrId rmid)\n> }\n> #endif\n>\n> +/* Structure to hold backup state. */\n> +typedef struct BackupStateData\n> +{\n> Why is that in xlog_internal.h? This header includes a lot of\n> declarations about the internals of WAL, but the backup state is not\n> that internal. I'd like to think that we should begin separating the\n> backup-related routines into their own file, as of a set of\n> xlogbackup.c/h in this case. That's a split I have been wondering\n> about for some time now. The internals of xlog.c for the start/stop\n> backups are tied to XLogCtlData which map such a switch more\n> complicated than it looks, so we can begin small and have the routines\n> to create, free and build the label file and the tablespace map in\n> this new file.\n\nGood idea. It makes a lot more sense to me, because xlog.c is already\na file of 9000 LOC. I've created xlogbackup.c/.h files and added the\nnew code there. Once this patch gets in, I can offer my hand to move\ndo_pg_backup_start() and do_pg_backup_stop() from xlog.c and if okay,\npg_backup_start() and pg_backup_stop() from xlogfuncs.c to\nxlogbackup.c/.h. Then, we might have to create new get/set APIs for\nXLogCtl fields that do_pg_backup_start() and do_pg_backup_stop()\naccess.\n\n> + state->name = (char *) palloc0(len + 1);\n> + memcpy(state->name, name, len);\n> Or just pstrdup()?\n\nDone.\n\n> +BackupState\n> +get_backup_state(const char *name)\n> +{\n> I would name this one create_backup_state() instead.\n>\n> +void\n> +create_backup_content_str(BackupState state, bool forhistoryfile)\n> +{\n> This could be a build_backup_content().\n\nI came up with more meaningful names - allocate_backup_state(),\ndeallocate_backup_state(), build_backup_content().\n\n> It seems to me that there is no point in having the list of\n> tablespaces in BackupStateData? This actually makes the code harder\n> to follow, see for example the changes with do_pg_backup_start(), we\n> the list of tablespace may or may be not passed down as a pointer of\n> BackupStateData while BackupStateData is itself the first argument of\n> this routine. These are independent from the label and backup history\n> file, as well.\n\nI haven't stored the list of tablespaces in BackupState, it's the\nstring that do_pg_backup_start() creates is stored in there for\ncarrying it till pg_backup_stop(). Adding the tablespace_map,\nbackup_label, history_file in BackupState makes it easy to carry them\nacross various backup related functions.\n\n> We need to be careful about the file format (see read_backup_label()),\n> and create_backup_content_str() is careful about that which is good.\n> Core does not care about the format of the backup history file, though\n> some community tools may.\n\nAre you suggesting that we need something like check_history_file()\nsimilar to what read_backup_label() does by parsing each line of the\nlabel file and erroring out if not in the required format?\n\n> I agree that what you are proposing here\n> makes the generation of these files easier to follow, but let's\n> document what forhistoryfile is here for, at least. Saving the\n> the backup label and history file strings in BackupState is a\n> confusing interface IMO. It would be more intuitive to have the\n> backup state in input, and provide the string generated in output\n> depending on what we want from the backup state.\n\nWe need to carry tablespace_map contents from do_pg_backup_start()\ntill pg_backup_stop(), backup_label and history_file too are easy to\ncarry across. Hence it will be good to have all of them i.e.\ntablespace_map, backup_label and history_file in the BackupState\nstructure. IMO, this way is good.\n\n> - backup_started_in_recovery = RecoveryInProgress();\n> + Assert(state != NULL);\n> +\n> + in_recovery = RecoveryInProgress();\n> [...]\n> - if (strcmp(backupfrom, \"standby\") == 0 && !backup_started_in_recovery)\n> + if (state->started_in_recovery == true && in_recovery == false)\n>\n> I would have kept the naming to backup_started_in_recovery here. What\n> you are doing is logically right by relying on started_in_recovery to\n> check if recovery was running when the backup started, but this just\n> creates useless noise in the refactoring.\n\nPSA new patch.\n\n> Something unrelated to your patch that I am noticing while scanning\n> the area is that we have been always lazy in freeing the label file\n> data allocated in TopMemoryContext when using the SQL functions if the\n> backup is aborted. We are not talking about this much amount of\n> memory each time a backup is performed, but that could be a cause for\n> memory bloat in a server if the same session is used and backups keep\n> failing, as the data is freed only on a successful pg_backup_stop().\n> Base backups through the replication protocol don't care about that as\n> the code keeps around the same pointer for the whole duration of\n> perform_base_backup(). Trying to tie the cleanup of the label file\n> with the abort phase would be the cause of more complications with\n> do_pg_backup_stop(), and xlog.c has no need to know about that now.\n> Just a remark for later.\n\nYeah, I think that can be solved by passing in backup_state to\ndo_pg_abort_backup(). If okay, I can work on this too as 0002 patch in\nthis thread or we can discuss this separately to get more attention\nafter this refactoring patch gets in.\n\nI'm attaching v5 patch with the above review comments addressed,\nplease review it further.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 17 Sep 2022 12:48:34 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say, \"invalid\n data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "\n\nOn 2022/09/17 16:18, Bharath Rupireddy wrote:\n> Good idea. It makes a lot more sense to me, because xlog.c is already\n> a file of 9000 LOC. I've created xlogbackup.c/.h files and added the\n> new code there. Once this patch gets in, I can offer my hand to move\n> do_pg_backup_start() and do_pg_backup_stop() from xlog.c and if okay,\n> pg_backup_start() and pg_backup_stop() from xlogfuncs.c to\n> xlogbackup.c/.h. Then, we might have to create new get/set APIs for\n> XLogCtl fields that do_pg_backup_start() and do_pg_backup_stop()\n> access.\n\nThe definition of SessionBackupState enum type also should be in xlogbackup.h?\n\n> We need to carry tablespace_map contents from do_pg_backup_start()\n> till pg_backup_stop(), backup_label and history_file too are easy to\n> carry across. Hence it will be good to have all of them i.e.\n> tablespace_map, backup_label and history_file in the BackupState\n> structure. IMO, this way is good.\n\nbackup_label and history_file are not carried between pg_backup_start()\nand _stop(), so don't need to be saved in BackupState. Their contents\ncan be easily created from other saved fields in BackupState,\nif necessary. So at least for me it's better to get rid of them from\nBackupState and don't allocate TopMemoryContext memory for them.\n\n>> Something unrelated to your patch that I am noticing while scanning\n>> the area is that we have been always lazy in freeing the label file\n>> data allocated in TopMemoryContext when using the SQL functions if the\n>> backup is aborted. We are not talking about this much amount of\n>> memory each time a backup is performed, but that could be a cause for\n>> memory bloat in a server if the same session is used and backups keep\n>> failing, as the data is freed only on a successful pg_backup_stop().\n>> Base backups through the replication protocol don't care about that as\n>> the code keeps around the same pointer for the whole duration of\n>> perform_base_backup(). Trying to tie the cleanup of the label file\n>> with the abort phase would be the cause of more complications with\n>> do_pg_backup_stop(), and xlog.c has no need to know about that now.\n>> Just a remark for later.\n> \n> Yeah, I think that can be solved by passing in backup_state to\n> do_pg_abort_backup(). If okay, I can work on this too as 0002 patch in\n> this thread or we can discuss this separately to get more attention\n> after this refactoring patch gets in.\n\nOr, to avoid such memory bloat, how about allocating the memory for\nbackup_state only when it's NULL?\n\n> I'm attaching v5 patch with the above review comments addressed,\n> please review it further.\n\nThanks for updating the patch!\n\n+\tchar startxlogfile[MAXFNAMELEN_BACKUP]; /* backup start WAL file */\n<snip>\n+\tchar\t stopxlogfile[MAXFNAMELEN_BACKUP];\t/* backup stop WAL file */\n\nThese file names seem not necessary in BackupState because they can be\ncalculated from other fields like startpoint and starttli, etc when\nmaking backup_label and history file contents. If we remove them from\nBackupState, we can also remove the definition of MAXFNAMELEN_BACKUP\nmacro from xlogbackup.h.\n\n+\t/* construct backup_label contents */\n+\tbuild_backup_content(state, false);\n\nIn basebackup case, build_backup_content() is called unnecessary twice\nbecause do_pg_stop_backup() and its caller, perform_base_backup() call\nthat. This makes me think that it's better to get rid of the call to\nbuild_backup_content() from do_pg_backup_stop(). Instead its callers,\nperform_base_backup() and pg_backup_stop() should call that.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Sun, 18 Sep 2022 11:08:28 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say, \"invalid\n data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
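Fujii's point that startxlogfile/stopxlogfile can be recomputed rests on how WAL segment file names are derived from a timeline and an LSN. A sketch of that derivation, mirroring the in-tree XLogFileName() macro (timeline plus the 64-bit segment number split at the 4GB "xlogid" boundary into two 32-bit hex fields); the 16MB default segment size is only assumed in the usage below:

```c
#include <stdint.h>
#include <stdio.h>

/*
 * Derive a WAL segment file name from a timeline and an LSN,
 * mirroring PostgreSQL's XLogFileName(): 8 hex digits of TLI, then
 * the segment number split into high and low 32-bit halves relative
 * to the number of segments per 4GB of WAL.
 */
static void
wal_file_name(char *out, size_t outlen, uint32_t tli,
			  uint64_t lsn, uint64_t segsize)
{
	uint64_t	segno = lsn / segsize;
	uint64_t	segs_per_id = UINT64_C(0x100000000) / segsize;

	snprintf(out, outlen, "%08X%08X%08X", tli,
			 (uint32_t) (segno / segs_per_id),
			 (uint32_t) (segno % segs_per_id));
}
```

Since the name is a pure function of (timeline, LSN, segment size), keeping startpoint and starttli in the state is enough; storing the file names as well, as the earlier patch revisions did, was redundant.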
{
"msg_contents": "On Sun, Sep 18, 2022 at 7:38 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2022/09/17 16:18, Bharath Rupireddy wrote:\n> > Good idea. It makes a lot more sense to me, because xlog.c is already\n> > a file of 9000 LOC. I've created xlogbackup.c/.h files and added the\n> > new code there. Once this patch gets in, I can offer my hand to move\n> > do_pg_backup_start() and do_pg_backup_stop() from xlog.c and if okay,\n> > pg_backup_start() and pg_backup_stop() from xlogfuncs.c to\n> > xlogbackup.c/.h. Then, we might have to create new get/set APIs for\n> > XLogCtl fields that do_pg_backup_start() and do_pg_backup_stop()\n> > access.\n>\n> The definition of SessionBackupState enum type also should be in xlogbackup.h?\n\nCorrect. Basically, all the backup related code from xlog.c,\nxlogfuncs.c and elsewhere can go to xlogbackup.c/.h. I will focus on\nthat refactoring patch once this gets in.\n\n> > We need to carry tablespace_map contents from do_pg_backup_start()\n> > till pg_backup_stop(), backup_label and history_file too are easy to\n> > carry across. Hence it will be good to have all of them i.e.\n> > tablespace_map, backup_label and history_file in the BackupState\n> > structure. IMO, this way is good.\n>\n> backup_label and history_file are not carried between pg_backup_start()\n> and _stop(), so don't need to be saved in BackupState. Their contents\n> can be easily created from other saved fields in BackupState,\n> if necessary. So at least for me it's better to get rid of them from\n> BackupState and don't allocate TopMemoryContext memory for them.\n\nYeah, but they have to be carried from do_pg_backup_stop() to\npg_backup_stop() or callers and also instead of keeping tablespace_map\nin BackupState and others elsewhere don't seem to be a good idea to\nme. IMO, BackupState is a good place to contain all the information\nthat's carried across various functions. I've changed the code to\nlazily (upon first use in the backend) allocate memory for all of them\nas we're concerned of the memory allocation beforehand.\n\n> > Yeah, I think that can be solved by passing in backup_state to\n> > do_pg_abort_backup(). If okay, I can work on this too as 0002 patch in\n> > this thread or we can discuss this separately to get more attention\n> > after this refactoring patch gets in.\n>\n> Or, to avoid such memory bloat, how about allocating the memory for\n> backup_state only when it's NULL?\n\nAh my bad, I missed that. Done now.\n\n> > I'm attaching v5 patch with the above review comments addressed,\n> > please review it further.\n>\n> Thanks for updating the patch!\n\nThanks for reviewing it.\n\n> + char startxlogfile[MAXFNAMELEN_BACKUP]; /* backup start WAL file */\n> <snip>\n> + char stopxlogfile[MAXFNAMELEN_BACKUP]; /* backup stop WAL file */\n>\n> These file names seem not necessary in BackupState because they can be\n> calculated from other fields like startpoint and starttli, etc when\n> making backup_label and history file contents. If we remove them from\n> BackupState, we can also remove the definition of MAXFNAMELEN_BACKUP\n> macro from xlogbackup.h.\n\nDone.\n\n> + /* construct backup_label contents */\n> + build_backup_content(state, false);\n>\n> In basebackup case, build_backup_content() is called unnecessary twice\n> because do_pg_stop_backup() and its caller, perform_base_backup() call\n> that. This makes me think that it's better to get rid of the call to\n> build_backup_content() from do_pg_backup_stop(). Instead its callers,\n> perform_base_backup() and pg_backup_stop() should call that.\n\nYeah, it's a good idea. Done that way. It's easier because we can\ncreate backup_label file contents at any point of time after\npg_backup_start().\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sun, 18 Sep 2022 13:23:57 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say, \"invalid\n data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
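Fujii's "allocate only when NULL" suggestion for avoiding session-lifetime memory bloat can be sketched as follows. This is a hypothetical standalone version: in the backend the state lives in TopMemoryContext across pg_backup_start() calls, which is approximated here by a static pointer, calloc() and memset():

```c
#include <stdlib.h>
#include <string.h>

typedef struct BackupState
{
	char		name[1024];		/* backup label */
} BackupState;

/* Session-level pointer, standing in for a TopMemoryContext allocation. */
static BackupState *backup_state = NULL;

/*
 * Allocate on first use only; later calls reset and reuse the same
 * allocation, so repeated failed backup attempts in one session do
 * not keep leaking state in the long-lived context.
 */
static BackupState *
get_session_backup_state(void)
{
	if (backup_state == NULL)
		backup_state = calloc(1, sizeof(BackupState));
	else
		memset(backup_state, 0, sizeof(BackupState));
	return backup_state;
}
```

The reset branch matters as much as the lazy allocation: stale fields from an aborted attempt must not leak into the next backup's label contents.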
{
"msg_contents": "On Sun, Sep 18, 2022 at 1:23 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n\ncfbot fails [1] with v6 patch. I made a silly mistake by not checking\nthe output of \"make check-world -j 16\" fully, I just saw the end\nmessage \"All tests successful.\" before posting the v6 patch.\n\nThe failure is due to perform_base_backup() accessing BackupState's\ntablespace_map without a null check, so I fixed it.\n\nSorry for the noise. Please review the attached v7 patch further.\n\n[1] https://cirrus-ci.com/task/5816966114967552?logs=test_world#L720\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sun, 18 Sep 2022 19:39:24 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say, \"invalid\n data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "\n\nOn 2022/09/18 23:09, Bharath Rupireddy wrote:\n> On Sun, Sep 18, 2022 at 1:23 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> \n> cfbot fails [1] with v6 patch. I made a silly mistake by not checking\n> the output of \"make check-world -j 16\" fully, I just saw the end\n> message \"All tests successful.\" before posting the v6 patch.\n> \n> The failure is due to perform_base_backup() accessing BackupState's\n> tablespace_map without a null check, so I fixed it.\n> \n> Sorry for the noise. Please review the attached v7 patch further.\n\nThanks for updating the patch!\n\n=# SELECT * FROM pg_backup_start('test', true);\n=# SELECT * FROM pg_backup_stop();\nLOG: server process (PID 15651) was terminated by signal 11: Segmentation fault: 11\nDETAIL: Failed process was running: SELECT * FROM pg_backup_stop();\n\nWith v7 patch, pg_backup_stop() caused the segmentation fault.\n\n\n=# SELECT * FROM pg_backup_start(repeat('test', 1024));\nERROR: backup label too long (max 1024 bytes)\nSTATEMENT: SELECT * FROM pg_backup_start(repeat('test', 1024));\n\n=# SELECT * FROM pg_backup_start(repeat('testtest', 1024));\nLOG: server process (PID 15844) was terminated by signal 11: Segmentation fault: 11\nDETAIL: Failed process was running: SELECT * FROM pg_backup_start(repeat('testtest', 1024));\n\nWhen I specified longer backup label in the second run of pg_backup_start()\nafter the first run failed, it caused the segmentation fault.\n\n\n+\tstate = (BackupState *) palloc0(sizeof(BackupState));\n+\tstate->name = pstrdup(name);\n\npg_backup_start() calls allocate_backup_state() and allocates the memory for\nthe input backup label before checking its length in do_pg_backup_start().\nThis can cause the memory for backup label to be allocated too much\nunnecessary. 
I think that the maximum length of BackupState.name should\nbe MAXPGPATH (i.e., maximum allowed length for backup label).\n\n\n>> The definition of SessionBackupState enum type also should be in xlogbackup.h?\n>\n> Correct. Basically, all the backup related code from xlog.c,\n> xlogfuncs.c and elsewhere can go to xlogbackup.c/.h. I will focus on\n> that refactoring patch once this gets in.\n\nUnderstood.\n\n\n> Yeah, but they have to be carried from do_pg_backup_stop() to\n> pg_backup_stop() or callers and also instead of keeping tablespace_map\n> in BackupState and others elsewhere don't seem to be a good idea to\n> me. IMO, BackupState is a good place to contain all the information\n> that's carried across various functions.\n\nIn v7 patch, since pg_backup_stop() calls build_backup_content(),\nbackup_label and history_file seem not to be carried from do_pg_backup_stop()\nto pg_backup_stop(). This makes me still think that it's better not to include\nthem in BackupState...\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 19 Sep 2022 18:08:51 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say, \"invalid\n data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 2:38 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> Thanks for updating the patch!\n>\n> =# SELECT * FROM pg_backup_start('test', true);\n> =# SELECT * FROM pg_backup_stop();\n> LOG: server process (PID 15651) was terminated by signal 11: Segmentation fault: 11\n> DETAIL: Failed process was running: SELECT * FROM pg_backup_stop();\n>\n> With v7 patch, pg_backup_stop() caused the segmentation fault.\n\nFixed. I believed that the regression tests cover pg_backup_start()\nand pg_backup_stop(), and relied on make check-world, surprisingly\nthere's no test that covers these functions. Is it a good idea to add\ntests for these functions in misc_functions.sql or backup.sql or\nsomewhere so that they get run as part of make check? Thoughts?\n\n> =# SELECT * FROM pg_backup_start(repeat('test', 1024));\n> ERROR: backup label too long (max 1024 bytes)\n> STATEMENT: SELECT * FROM pg_backup_start(repeat('test', 1024));\n>\n> =# SELECT * FROM pg_backup_start(repeat('testtest', 1024));\n> LOG: server process (PID 15844) was terminated by signal 11: Segmentation fault: 11\n> DETAIL: Failed process was running: SELECT * FROM pg_backup_start(repeat('testtest', 1024));\n>\n> When I specified longer backup label in the second run of pg_backup_start()\n> after the first run failed, it caused the segmentation fault.\n>\n>\n> + state = (BackupState *) palloc0(sizeof(BackupState));\n> + state->name = pstrdup(name);\n>\n> pg_backup_start() calls allocate_backup_state() and allocates the memory for\n> the input backup label before checking its length in do_pg_backup_start().\n> This can cause the memory for backup label to be allocated too much\n> unnecessary. I think that the maximum length of BackupState.name should\n> be MAXPGPATH (i.e., maximum allowed length for backup label).\n\nThat's a good idea. I'm marking a flag if the label name overflows (>\nMAXPGPATH), later using it in do_pg_backup_start() to error out. 
We\ncould've thrown error directly, but that changes the behaviour - right\nnow, first \"\nwal_level must be set to \\\"replica\\\" or \\\"logical\\\" at server start.\"\ngets emitted and then label name overflow error - I don't want to do\nthat.\n\n> > Yeah, but they have to be carried from do_pg_backup_stop() to\n> > pg_backup_stop() or callers and also instead of keeping tablespace_map\n> > in BackupState and others elsewhere don't seem to be a good idea to\n> > me. IMO, BackupState is a good place to contain all the information\n> > that's carried across various functions.\n>\n> In v7 patch, since pg_backup_stop() calls build_backup_content(),\n> backup_label and history_file seem not to be carried from do_pg_backup_stop()\n> to pg_backup_stop(). This makes me still think that it's better not to include\n> them in BackupState...\n\nI'm a bit worried about the backup state being spread across if we\nseparate out backup_label and history_file from BackupState and keep\ntablespace_map contents there. As I said upthread, we are not\nallocating memory for them at the beginning, we allocate only when\nneeded. IMO, this code is readable and more extensible.\n\nI've also taken help of the error callback mechanism to clean up the\nallocated memory in case of a failure. For do_pg_abort_backup() cases,\nI think we don't need to clean the memory because that gets called on\nproc exit (before_shmem_exit()).\n\nPlease review the v8 patch further.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 19 Sep 2022 18:26:34 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say, \"invalid\n data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 06:26:34PM +0530, Bharath Rupireddy wrote:\n> On Mon, Sep 19, 2022 at 2:38 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> Fixed. I believed that the regression tests cover pg_backup_start()\n> and pg_backup_stop(), and relied on make check-world, surprisingly\n> there's no test that covers these functions. Is it a good idea to add\n> tests for these functions in misc_functions.sql or backup.sql or\n> somewhere so that they get run as part of make check? Thoughts?\n\nThe main regression test suite should not include direct calls to\npg_backup_start() or pg_backup_stop() as these depend on wal_level,\nand we spend a certain amount of resources in keeping the tests a\nmaximum portable across such configurations, wal_level = minimal being\none of them. One portable example is in 001_stream_rep.pl.\n\n> That's a good idea. I'm marking a flag if the label name overflows (>\n> MAXPGPATH), later using it in do_pg_backup_start() to error out. We\n> could've thrown error directly, but that changes the behaviour - right\n> now, first \"\n> wal_level must be set to \\\"replica\\\" or \\\"logical\\\" at server start.\"\n> gets emitted and then label name overflow error - I don't want to do\n> that.\n\n- if (strlen(backupidstr) > MAXPGPATH)\n+ if (state->name_overflowed == true)\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n errmsg(\"backup label too long (max %d bytes)\",\nIt does not strike me as a huge issue to force a truncation of such\nbackup label names. 1024 is large enough for basically all users,\nin my opinion. Or you could just truncate in the internal logic, but\nstill complain at MAXPGPATH - 1 as the last byte would be for the zero\ntermination. In short, there is no need to complicate things with\nname_overflowed.\n\n>> In v7 patch, since pg_backup_stop() calls build_backup_content(),\n>> backup_label and history_file seem not to be carried from do_pg_backup_stop()\n>> to pg_backup_stop(). 
This makes me still think that it's better not to include\n>> them in BackupState...\n> \n> I'm a bit worried about the backup state being spread across if we\n> separate out backup_label and history_file from BackupState and keep\n> tablespace_map contents there. As I said upthread, we are not\n> allocating memory for them at the beginning, we allocate only when\n> needed. IMO, this code is readable and more extensible.\n\n+ StringInfo backup_label; /* backup_label file contents */\n+ StringInfo tablespace_map; /* tablespace_map file contents */\n+ StringInfo history_file; /* history file contents */\nIMV, repeating a point I already made once upthread, BackupState\nshould hold none of these. Let's just generate the contents of these\nfiles in the contexts where they are needed, making BackupState\nsomething to rely on to build them in the code paths where they are\nnecessary. This is going to make the reasoning around the memory\ncontexts where each one of them is stored much easier and reduce the\nchances of bugs in the long-term.\n\n> I've also taken help of the error callback mechanism to clean up the\n> allocated memory in case of a failure. For do_pg_abort_backup() cases,\n> I think we don't need to clean the memory because that gets called on\n> proc exit (before_shmem_exit()).\n\nMemory could still bloat while the process running the SQL functions\nis running depending on the error code path, anyway.\n--\nMichael",
"msg_date": "Tue, 20 Sep 2022 10:50:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say,\n \"invalid data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 7:20 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> The main regression test suite should not include direct calls to\n> pg_backup_start() or pg_backup_stop() as these depend on wal_level,\n> and we spend a certain amount of resources in keeping the tests a\n> maximum portable across such configurations, wal_level = minimal being\n> one of them. One portable example is in 001_stream_rep.pl.\n\nUnderstood.\n\n> > That's a good idea. I'm marking a flag if the label name overflows (>\n> > MAXPGPATH), later using it in do_pg_backup_start() to error out. We\n> > could've thrown error directly, but that changes the behaviour - right\n> > now, first \"\n> > wal_level must be set to \\\"replica\\\" or \\\"logical\\\" at server start.\"\n> > gets emitted and then label name overflow error - I don't want to do\n> > that.\n>\n> - if (strlen(backupidstr) > MAXPGPATH)\n> + if (state->name_overflowed == true)\n> ereport(ERROR,\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> errmsg(\"backup label too long (max %d bytes)\",\n> It does not strike me as a huge issue to force a truncation of such\n> backup label names. 1024 is large enough for basically all users,\n> in my opinion. Or you could just truncate in the internal logic, but\n> still complain at MAXPGPATH - 1 as the last byte would be for the zero\n> termination. In short, there is no need to complicate things with\n> name_overflowed.\n\nWe currently allow MAXPGPATH bytes of label name excluding null\ntermination. I don't want to change this behaviour. 
In the attached v9\npatch, I'm truncating the larger names to MAXPGPATH + 1 bytes in\nbackup state (one extra byte for representing that the name has\noverflown, and another extra byte for null termination).\n\n> + StringInfo backup_label; /* backup_label file contents */\n> + StringInfo tablespace_map; /* tablespace_map file contents */\n> + StringInfo history_file; /* history file contents */\n> IMV, repeating a point I already made once upthread, BackupState\n> should hold none of these. Let's just generate the contents of these\n> files in the contexts where they are needed, making BackupState\n> something to rely on to build them in the code paths where they are\n> necessary. This is going to make the reasoning around the memory\n> contexts where each one of them is stored much easier and reduce the\n> changes of bugs in the long-term.\n\nI've separated out these variables from the backup state, please see\nthe attached v9 patch.\n\n> > I've also taken help of the error callback mechanism to clean up the\n> > allocated memory in case of a failure. For do_pg_abort_backup() cases,\n> > I think we don't need to clean the memory because that gets called on\n> > proc exit (before_shmem_exit()).\n>\n> Memory could still bloat while the process running the SQL functions\n> is running depending on the error code path, anyway.\n\nI didn't get your point. Can you please elaborate it? I think adding\nerror callbacks at the right places would free up the memory for us.\nPlease note that we already are causing memory leaks on HEAD today.\n\nI addressed the above review comments. I also changed a wrong comment\n[1] that lies before pg_backup_start() even after the removal of\nexclusive backup.\n\nI'm attaching v9 patch set herewith, 0001 - refactors the backup code\nwith backup state, 0002 - adds error callbacks to clean up the memory\nallocated for backup variables. 
Please review them further.\n\n[1]\n * Essentially what this does is to create a backup label file in $PGDATA,\n * where it will be archived as part of the backup dump. The label file\n * contains the user-supplied label string (typically this would be used\n * to tell where the backup dump will be stored) and the starting time and\n * starting WAL location for the dump.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 20 Sep 2022 17:13:49 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say, \"invalid\n data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "\n\nOn 2022/09/20 20:43, Bharath Rupireddy wrote:\n>> - if (strlen(backupidstr) > MAXPGPATH)\n>> + if (state->name_overflowed == true)\n>> ereport(ERROR,\n>> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n>> errmsg(\"backup label too long (max %d bytes)\",\n>> It does not strike me as a huge issue to force a truncation of such\n>> backup label names. 1024 is large enough for basically all users,\n>> in my opinion. Or you could just truncate in the internal logic, but\n>> still complain at MAXPGPATH - 1 as the last byte would be for the zero\n>> termination. In short, there is no need to complicate things with\n>> name_overflowed.\n> \n> We currently allow MAXPGPATH bytes of label name excluding null\n> termination. I don't want to change this behaviour. In the attached v9\n> patch, I'm truncating the larger names to MAXPGPATH + 1 bytes in\n> backup state (one extra byte for representing that the name has\n> overflown, and another extra byte for null termination).\n\nThis looks much complicated to me.\n\nInstead of making allocate_backup_state() or reset_backup_state()\nstore the label name in BackupState before do_pg_backup_start(),\nhow about making do_pg_backup_start() do that after checking its length?\nSeems this can simplify the code very much.\n\nIf so, ISTM that we can replace allocate_backup_state() and\nreset_backup_state() with just palloc0() and MemSet(), respectively.\nAlso we can remove fill_backup_label_name().\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 21 Sep 2022 15:45:49 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say, \"invalid\n data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 05:13:49PM +0530, Bharath Rupireddy wrote:\n> On Tue, Sep 20, 2022 at 7:20 AM Michael Paquier <michael@paquier.xyz> wrote:\n> I've separated out these variables from the backup state, please see\n> the attached v9 patch.\n\nThanks, the separation looks cleaner.\n\n>>> I've also taken help of the error callback mechanism to clean up the\n>>> allocated memory in case of a failure. For do_pg_abort_backup() cases,\n>>> I think we don't need to clean the memory because that gets called on\n>>> proc exit (before_shmem_exit()).\n>>\n>> Memory could still bloat while the process running the SQL functions\n>> is running depending on the error code path, anyway.\n> \n> I didn't get your point. Can you please elaborate it? I think adding\n> error callbacks at the right places would free up the memory for us.\n> Please note that we already are causing memory leaks on HEAD today.\n\nI mean that HEAD makes no effort in freeing this memory in\nTopMemoryContext on session ERROR.\n\n> I addressed the above review comments. I also changed a wrong comment\n> [1] that lies before pg_backup_start() even after the removal of\n> exclusive backup.\n> \n> I'm attaching v9 patch set herewith, 0001 - refactors the backup code\n> with backup state, 0002 - adds error callbacks to clean up the memory\n> allocated for backup variables. Please review them further.\n\nI have a few comments on 0001.\n\n+#include <access/xlogbackup.h>\nDouble quotes wanted here.\n\ndeallocate_backup_variables() is the only part of xlogbackup.c that\nincludes references of the tablespace map_and backup_label\nStringInfos. 
I would be tempted to fully decouple that from\nxlogbackup.c/h for the time being.\n\n- tblspc_map_file = makeStringInfo();\nNot sure that there is a need for a rename here.\n\n+void\n+build_backup_content(BackupState *state, bool ishistoryfile, StringInfo str)\n+{\nIt would be more natural to have build_backup_content() do by itself\nthe initialization of the StringInfo for the contents of backup_label\nand return it as a result of the routine? This is short-lived in\nxlogfuncs.c when the backup ends.\n\n@@ -248,18 +250,25 @@ perform_base_backup(basebackup_options *opt, bbsink *sink)\n[...]\n+ /* Construct backup_label contents. */\n+ build_backup_content(backup_state, false, backup_label);\n\nActually, for base backups, perhaps it would be more intuitive to\nbuild and free the StringInfo of the backup_label when we send it for\nbase.tar rather than initializing it at the beginning and freeing it\nat the end?\n\n- * pg_backup_start: set up for taking an on-line backup dump\n+ * pg_backup_start: start taking an on-line backup.\n *\n- * Essentially what this does is to create a backup label file in $PGDATA,\n- * where it will be archived as part of the backup dump. The label file\n- * contains the user-supplied label string (typically this would be used\n- * to tell where the backup dump will be stored) and the starting time and\n- * starting WAL location for the dump.\n+ * This function starts the backup and creates tablespace_map contents.\n\nThe last part of the comment is still correct while the former is not,\nso this loses some information.\n--\nMichael",
"msg_date": "Wed, 21 Sep 2022 15:57:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say,\n \"invalid data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Wed, Sep 21, 2022 at 03:45:49PM +0900, Fujii Masao wrote:\n> Instead of making allocate_backup_state() or reset_backup_state()\n> store the label name in BackupState before do_pg_backup_start(),\n> how about making do_pg_backup_start() do that after checking its length?\n> Seems this can simplify the code very much.\n> \n> If so, ISTM that we can replace allocate_backup_state() and\n> reset_backup_state() with just palloc0() and MemSet(), respectively.\n> Also we can remove fill_backup_label_name().\n\nYep, agreed. Having all these routines feels a bit overengineered.\n--\nMichael",
"msg_date": "Wed, 21 Sep 2022 15:59:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say,\n \"invalid data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Wed, Sep 21, 2022 at 12:15 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n>\n> This looks much complicated to me.\n>\n> Instead of making allocate_backup_state() or reset_backup_state()\n> store the label name in BackupState before do_pg_backup_start(),\n> how about making do_pg_backup_start() do that after checking its length?\n> Seems this can simplify the code very much.\n>\n> If so, ISTM that we can replace allocate_backup_state() and\n> reset_backup_state() with just palloc0() and MemSet(), respectively.\n> Also we can remove fill_backup_label_name().\n\nYes, that makes things simpler. I will post the v10 patch-set soon.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 21 Sep 2022 16:10:38 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say, \"invalid\n data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Wed, Sep 21, 2022 at 12:27 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> >>> I've also taken help of the error callback mechanism to clean up the\n> >>> allocated memory in case of a failure. For do_pg_abort_backup() cases,\n> >>> I think we don't need to clean the memory because that gets called on\n> >>> proc exit (before_shmem_exit()).\n> >>\n> >> Memory could still bloat while the process running the SQL functions\n> >> is running depending on the error code path, anyway.\n> >\n> > I didn't get your point. Can you please elaborate it? I think adding\n> > error callbacks at the right places would free up the memory for us.\n> > Please note that we already are causing memory leaks on HEAD today.\n>\n> I mean that HEAD makes no effort in freeing this memory in\n> TopMemoryContext on session ERROR.\n\nCorrect. We can also solve it as part of this commit. Please let me\nknow your thoughts on 0002 patch.\n\n> I have a few comments on 0001.\n>\n> +#include <access/xlogbackup.h>\n> Double quotes wanted here.\n\nAh, my bad. Corrected now.\n\n> deallocate_backup_variables() is the only part of xlogbackup.c that\n> includes references of the tablespace map_and backup_label\n> StringInfos. I would be tempted to fully decouple that from\n> xlogbackup.c/h for the time being.\n\nThere's no problem with it IMO, after all, they are backup related\nvariables. And that function reduces a bit of duplicate code.\n\n> - tblspc_map_file = makeStringInfo();\n> Not sure that there is a need for a rename here.\n\nWe're referring tablespace_map and backup_label internally all around,\njust to be in sync, I wanted to rename it while we're refactoring this\ncode.\n\n> +void\n> +build_backup_content(BackupState *state, bool ishistoryfile, StringInfo str)\n> +{\n> It would be more natural to have build_backup_content() do by itself\n> the initialization of the StringInfo for the contents of backup_label\n> and return it as a result of the routine? 
This is short-lived in\n> xlogfuncs.c when the backup ends.\n\nSee the below explanation.\n\n> @@ -248,18 +250,25 @@ perform_base_backup(basebackup_options *opt, bbsink *sink)\n> [...]\n> + /* Construct backup_label contents. */\n> + build_backup_content(backup_state, false, backup_label);\n>\n> Actually, for base backups, perhaps it would be more intuitive to\n> build and free the StringInfo of the backup_label when we send it for\n> base.tar rather than initializing it at the beginning and freeing it\n> at the end?\n\nsendFileWithContent() is in a for-loop and we are good if we call\nbuild_backup_content() before do_pg_backup_start() just once. Also,\nallocating backup_label in the for loop makes error handling trickier,\nhow do we free up when sendFileWithContent() errors out? Of course, we\ncan allocate backup_label once even in the for loop with a bool\nfirst_time sort of variable and store StringInfo *ptr_backup_label; in\nerror callback info, but that would make things unnecessarily complex,\ninstead we're good allocating and creating backup_label content at the\nbeginning and freeing it up at the end.\n\n> - * pg_backup_start: set up for taking an on-line backup dump\n> *\n> - * Essentially what this does is to create a backup label file in $PGDATA,\n> - * where it will be archived as part of the backup dump. 
The label file\n> - * contains the user-supplied label string (typically this would be used\n> - * to tell where the backup dump will be stored) and the starting time and\n> - * starting WAL location for the dump.\n> + * This function starts the backup and creates tablespace_map contents.\n>\n> The last part of the comment is still correct while the former is not,\n> so this loses some information.\n\nAdded that part before pg_backup_stop() now where it makes sense with\nthe refactoring.\n\nI'm attaching the v10 patch-set with the above review comments addressed.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 21 Sep 2022 17:10:39 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say, \"invalid\n data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Wed, Sep 21, 2022 at 05:10:39PM +0530, Bharath Rupireddy wrote:\n>> deallocate_backup_variables() is the only part of xlogbackup.c that\n>> includes references of the tablespace map_and backup_label\n>> StringInfos. I would be tempted to fully decouple that from\n>> xlogbackup.c/h for the time being.\n> \n> There's no problem with it IMO, after all, they are backup related\n> variables. And that function reduces a bit of duplicate code.\n\nHmm. I'd like to disagree with this statement :)\n\n> Added that part before pg_backup_stop() now where it makes sense with\n> the refactoring.\n\nI have put my hands on 0001, and finished with the attached, that\nincludes many fixes and tweaks. Some of the variable renames felt out\nof place, while some comments were overly verbose for their purpose,\nthough for the last part we did not lose any information in the last\nversion proposed.\n\nAs I suspected, the deallocate routine felt unnecessary, as\nxlogbackup.c/h have no idea what these are. The remark is\nparticularly true for the StringInfo of the backup_label file: for\nbasebackup.c we need to build it when sending base.tar and in\nxlogfuncs.c we need it only at the backup stop phase. The code was\nactually a bit wrong, because free-ing StringInfos requires to free\nits ->data and then the main object (stringinfo.h explains that). My\ntweaks have shaved something like 10%~15% of the patch, while making\nit IMO more readable.\n\nA second issue I had was with the build function, and again it seemed\nmuch cleaner to let the routine do the makeStringInfo() and return the\nresult. This is not the most popular routine ever, but this reduces\nthe workload of the caller of build_backup_content().\n\nSo, opinions?\n--\nMichael",
"msg_date": "Thu, 22 Sep 2022 16:43:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say,\n \"invalid data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "\n\nOn 2022/09/22 16:43, Michael Paquier wrote:\n>> Added that part before pg_backup_stop() now where it makes sense with\n>> the refactoring.\n> \n> I have put my hands on 0001, and finished with the attached, that\n> includes many fixes and tweaks. Some of the variable renames felt out\n> of place, while some comments were overly verbose for their purpose,\n> though for the last part we did not lose any information in the last\n> version proposed.\n\nThanks for updating the patch! This looks better to me.\n\n+\t\tMemSet(backup_state, 0, sizeof(BackupState));\n+\t\tMemSet(backup_state->name, '\\0', sizeof(backup_state->name));\n\nThe latter MemSet() is not necessary because the former already\nresets that with zero, is it?\n\n+\t\tpfree(tablespace_map);\n+\t\ttablespace_map = NULL;\n+\t}\n+\n+\ttablespace_map = makeStringInfo();\n\ntablespace_map doesn't need to be reset to NULL here.\n\n \t/* Free structures allocated in TopMemoryContext */\n-\tpfree(label_file->data);\n-\tpfree(label_file);\n<snip>\n+\tpfree(backup_label->data);\n+\tpfree(backup_label);\n+\tbackup_label = NULL;\n\nThis source comment is a bit misleading, isn't it? 
Because the memory\nfor backup_label is allocated under the memory context other than\nTopMemoryContext.\n\n+#include \"access/xlog.h\"\n+#include \"access/xlog_internal.h\"\n+#include \"access/xlogbackup.h\"\n+#include \"utils/memutils.h\"\n\nSeems \"utils/memutils.h\" doesn't need to be included.\n\n+\t\tXLByteToSeg(state->startpoint, stopsegno, wal_segment_size);\n+\t\tXLogFileName(stopxlogfile, state->starttli, stopsegno, wal_segment_size);\n+\t\tappendStringInfo(result, \"STOP WAL LOCATION: %X/%X (file %s)\\n\",\n+\t\t\t\t\t\t LSN_FORMAT_ARGS(state->startpoint), stopxlogfile);\n\nstate->stoppoint and state->stoptli should be used instead of\nstate->startpoint and state->starttli?\n\n+\t\tpfree(tablespace_map);\n+\t\ttablespace_map = NULL;\n+\t\tpfree(backup_state);\n+\t\tbackup_state = NULL;\n\nIt's harmless to set tablespace_map and backup_state to NULL after pfree(),\nbut it's also unnecessary at least here.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 22 Sep 2022 18:47:50 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say, \"invalid\n data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-22 16:43:19 +0900, Michael Paquier wrote:\n> I have put my hands on 0001, and finished with the attached, that\n> includes many fixes and tweaks.\n\nDue to the merge of the meson based build this patch needs some\nadjustment. See\nhttps://cirrus-ci.com/build/6146162607521792\nLooks like it just requires adding xlogbackup.c to\nsrc/backend/access/transam/meson.build.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 22 Sep 2022 08:25:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say,\n \"invalid data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Thu, Sep 22, 2022 at 8:55 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Due to the merge of the meson based build this patch needs some\n> adjustment. See\n> https://cirrus-ci.com/build/6146162607521792\n> Looks like it just requires adding xlogbackup.c to\n> src/backend/access/transam/meson.build.\n\nThanks! I will post a new patch with that change.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 23 Sep 2022 06:01:45 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say, \"invalid\n data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Thu, Sep 22, 2022 at 3:17 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> + MemSet(backup_state, 0, sizeof(BackupState));\n> + MemSet(backup_state->name, '\\0', sizeof(backup_state->name));\n>\n> The latter MemSet() is not necessary because the former already\n> resets that with zero, is it?\n\nYes, earlier, the name was a palloc'd string but now it is a fixed\narray. Removed.\n\n> + pfree(tablespace_map);\n> + tablespace_map = NULL;\n> + }\n> +\n> + tablespace_map = makeStringInfo();\n>\n> tablespace_map doesn't need to be reset to NULL here.\n>\n> + pfree(tablespace_map);\n> + tablespace_map = NULL;\n> + pfree(backup_state);\n> + backup_state = NULL;\n>\n> It's harmless to set tablespace_map and backup_state to NULL after pfree(),\n> but it's also unnecessary at least here.\n\nYes. But we can retain it for the sake of consistency with the other\nplaces and avoid dangling pointers, if at all any new code gets added\nin between it will be useful.\n\n> /* Free structures allocated in TopMemoryContext */\n> - pfree(label_file->data);\n> - pfree(label_file);\n> <snip>\n> + pfree(backup_label->data);\n> + pfree(backup_label);\n> + backup_label = NULL;\n>\n> This source comment is a bit misleading, isn't it? Because the memory\n> for backup_label is allocated under the memory context other than\n> TopMemoryContext.\n\nYes, we can just say /* Deallocate backup-related variables. */. 
The\npg_backup_start() has the info about the variables being allocated in\nTopMemoryContext.\n\n> +#include \"access/xlog.h\"\n> +#include \"access/xlog_internal.h\"\n> +#include \"access/xlogbackup.h\"\n> +#include \"utils/memutils.h\"\n>\n> Seems \"utils/memutils.h\" doesn't need to be included.\n\nYes, removed now.\n\n> + XLByteToSeg(state->startpoint, stopsegno, wal_segment_size);\n> + XLogFileName(stopxlogfile, state->starttli, stopsegno, wal_segment_size);\n> + appendStringInfo(result, \"STOP WAL LOCATION: %X/%X (file %s)\\n\",\n> + LSN_FORMAT_ARGS(state->startpoint), stopxlogfile);\n>\n> state->stoppoint and state->stoptli should be used instead of\n> state->startpoint and state->starttli?\n\nNice catch! Corrected.\n\nPSA v12 patch with the above review comments addressed.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 23 Sep 2022 06:02:24 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say, \"invalid\n data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Fri, Sep 23, 2022 at 06:02:24AM +0530, Bharath Rupireddy wrote:\n> PSA v12 patch with the above review comments addressed.\n\nI've read this one, and nothing is standing out at quick glance, so\nthat looks rather reasonable to me. I should be able to spend more\ntime on that at the beginning of next week, and maybe apply it.\n--\nMichael",
"msg_date": "Fri, 23 Sep 2022 21:12:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say,\n \"invalid data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Thu, Sep 22, 2022 at 08:25:31AM -0700, Andres Freund wrote:\n> Due to the merge of the meson based build this patch needs some\n> adjustment. See\n> https://cirrus-ci.com/build/6146162607521792\n> Looks like it just requires adding xlogbackup.c to\n> src/backend/access/transam/meson.build.\n\nThanks for the reminder. I have played a bit with meson and ninja,\nand that's a rather straight-forward experience. The output is muuuch\nnicer to parse.\n--\nMichael",
"msg_date": "Mon, 26 Sep 2022 11:11:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say,\n \"invalid data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Fri, Sep 23, 2022 at 09:12:22PM +0900, Michael Paquier wrote:\n> I've read this one, and nothing is standing out at quick glance, so\n> that looks rather reasonable to me. I should be able to spend more\n> time on that at the beginning of next week, and maybe apply it.\n\nWhat I had at hand seemed fine on a second look, so applied after\ntweaking a couple of comments. One thing that I have been wondering\nafter-the-fact is whether it would be cleaner to return the contents\nof the backup history file or backup_label as a char rather than a\nStringInfo? This simplifies a bit what the callers of\nbuild_backup_content() need to do.\n--\nMichael",
"msg_date": "Mon, 26 Sep 2022 11:43:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say,\n \"invalid data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Mon, Sep 26, 2022 at 8:13 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> What I had at hand seemed fine on a second look, so applied after\n> tweaking a couple of comments. One thing that I have been wondering\n> after-the-fact is whether it would be cleaner to return the contents\n> of the backup history file or backup_label as a char rather than a\n> StringInfo? This simplifies a bit what the callers of\n> build_backup_content() need to do.\n\n+1 because callers don't use returned StringInfo structure outside of\nbuild_backup_content(). The patch looks good to me. I think it will be\ngood to add a note about the caller freeing up the returned string's\nmemory [1], just in case.\n\n[1]\n * Returns the result generated as a palloc'd string. It is the caller's\n * responsibility to free the returned string's memory.\n */\nchar *\nbuild_backup_content(BackupState *state, bool ishistoryfile)\n{\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 26 Sep 2022 11:57:58 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say, \"invalid\n data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "At Mon, 26 Sep 2022 11:57:58 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Mon, Sep 26, 2022 at 8:13 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > What I had at hand seemed fine on a second look, so applied after\n> > tweaking a couple of comments. One thing that I have been wondering\n> > after-the-fact is whether it would be cleaner to return the contents\n> > of the backup history file or backup_label as a char rather than a\n> > StringInfo? This simplifies a bit what the callers of\n> > build_backup_content() need to do.\n> \n> +1 because callers don't use returned StringInfo structure outside of\n> build_backup_content(). The patch looks good to me. I think it will be\n> good to add a note about the caller freeing up the returned string's\n> memory [1], just in case.\n\nDoesn't the following (from you :) work?\n\n+ * Returns the result generated as a palloc'd string.\n\nThis suggests no need for pfree if the caller properly destroys the\ncontext or pfree is needed otherwise. In this case, the active memory\ncontexts are \"ExprContext\" and \"Replication command context\" so I\nthink we actually do not need to pfree it but I don't mean we shouldn't\ndo that in this patch (since those contexts are somewhat remote from\nwhat the function does and pfree doesn't matter at all here.).\n\n> [1]\n> * Returns the result generated as a palloc'd string. It is the caller's\n> * responsibility to free the returned string's memory.\n> */\n> char *\n> build_backup_content(BackupState *state, bool ishistoryfile)\n> {\n\n+1. A nitpick.\n\n-\tif (strcmp(backupfrom, \"standby\") == 0 && !backup_started_in_recovery)\n+\tif (state->started_in_recovery == true &&\n+\t\tbackup_stopped_in_recovery == false)\n\nUsing == for Booleans may not be great?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 26 Sep 2022 17:10:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say,\n \"invalid data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Mon, Sep 26, 2022 at 05:10:16PM +0900, Kyotaro Horiguchi wrote:\n> -\tif (strcmp(backupfrom, \"standby\") == 0 && !backup_started_in_recovery)\n> +\tif (state->started_in_recovery == true &&\n> +\t\tbackup_stopped_in_recovery == false)\n> \n> Using == for Booleans may not be great?\n\nYes. That's why 7d70809 does not use the way proposed by the previous\npatch.\n--\nMichael",
"msg_date": "Tue, 27 Sep 2022 08:52:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say,\n \"invalid data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Mon, Sep 26, 2022 at 11:57:58AM +0530, Bharath Rupireddy wrote:\n> +1 because callers don't use returned StringInfo structure outside of\n> build_backup_content(). The patch looks good to me.\n\nThanks for looking.\n\n> I think it will be\n> good to add a note about the caller freeing up the returned string's\n> memory [1], just in case.\n\nNot sure that this is worth it. It is fine to use palloc() in a\ndedicated memory context, for one.\n--\nMichael",
"msg_date": "Tue, 27 Sep 2022 09:15:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say,\n \"invalid data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "This commit introduced BackupState struct. The comment of\ndo_pg_backup_start says that:\n\n> * It fills in backup_state with the information required for the backup,\n\nAnd the parameters are:\n\n> do_pg_backup_start(const char *backupidstr, bool fast, List **tablespaces,\n> \t\t\t\t BackupState *state, StringInfo tblspcmapfile)\n\nSo backup_state is different from both the type BackupState and the\nparameter state. I find it annoying. Don't we either rename the\nparameter or fix the comment?\n\nThe parameter \"state\" sounds a bit too generic. So I prefer to rename\nthe parameter to backup_state, as the attached.\n\nWhat do you think about this?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 27 Sep 2022 17:24:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say,\n \"invalid data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Tue, Sep 27, 2022 at 1:54 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> This commit introduced BackupState struct. The comment of\n> do_pg_backup_start says that:\n>\n> > * It fills in backup_state with the information required for the backup,\n>\n> And the parameters are:\n>\n> > do_pg_backup_start(const char *backupidstr, bool fast, List **tablespaces,\n> > BackupState *state, StringInfo tblspcmapfile)\n>\n> So backup_state is different from both the type BackupState and the\n> parameter state. I find it annoying. Don't we either rename the\n> parameter or fix the comment?\n>\n> The parameter \"state\" sounds a bit too generic. So I prefer to rename\n> the parameter to backup_state, as the attached.\n>\n> What do you think about this?\n\n-1 from me. We have the function context and the structure name there\nto represent that the parameter name 'state' is actually 'backup\nstate'. I don't think we gain anything here. Whenever the BackupState\nis used away from any function, the variable name backup_state is\nused, static variable in xlogfuncs.c\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 27 Sep 2022 14:03:24 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say, \"invalid\n data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "At Tue, 27 Sep 2022 14:03:24 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Tue, Sep 27, 2022 at 1:54 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > What do you think about this?\n> \n> -1 from me. We have the function context and the structure name there\n> to represent that the parameter name 'state' is actually 'backup\n> state'. I don't think we gain anything here. Whenever the BackupState\n> is used away from any function, the variable name backup_state is\n> used, static variable in xlogfuncs.c\n\nThere's no shadowing caused by the change. If we mind the same\nvariable names between files, we could rename backup_state in\nxlogfuncs.c to process_backup_state or session_backup_state.\n\nIf this is still unacceptable, I propose to change the comment. (I\nfound that the previous patch forgets about do_pg_backup_stop())\n\n- * It fills in backup_state with the information required for the backup,\n+ * It fills in the parameter \"state\" with the information required for the backup,\n\n(This is following the notation just above)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 27 Sep 2022 17:50:44 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say,\n \"invalid data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Tue, Sep 27, 2022 at 2:20 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> > -1 from me. We have the function context and the structure name there\n> > to represent that the parameter name 'state' is actually 'backup\n> > state'. I don't think we gain anything here. Whenever the BackupState\n> > is used away from any function, the variable name backup_state is\n> > used, static variable in xlogfuncs.c\n>\n> There's no shadowing caused by the change. If we mind the same\n> variable names between files, we could rename backup_state in\n> xlogfuncs.c to process_backup_state or session_backup_state.\n\n-1.\n\n> If this is still unacceptable, I propose to change the comment. (I\n> found that the previous patch forgets about do_pg_backup_stop())\n>\n> - * It fills in backup_state with the information required for the backup,\n> + * It fills in the parameter \"state\" with the information required for the backup,\n\n+1. There's another place that uses backup_state in the comments. I\nmodified that as well. Please see the attached patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 27 Sep 2022 15:11:54 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say, \"invalid\n data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "On Tue, Sep 27, 2022 at 03:11:54PM +0530, Bharath Rupireddy wrote:\n> On Tue, Sep 27, 2022 at 2:20 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>> If this is still unacceptable, I propose to change the comment. (I\n>> found that the previous patch forgets about do_pg_backup_stop())\n>>\n>> - * It fills in backup_state with the information required for the backup,\n>> + * It fills in the parameter \"state\" with the information required for the backup,\n> \n> +1. There's another place that uses backup_state in the comments. I\n> modified that as well. Please see the attached patch.\n\nThanks, fixed the comments. I have left the variable names as they are\nnow in the code, as both are backup-related code paths so it is IMO\nclear that the state is linked to a backup.\n--\nMichael",
"msg_date": "Wed, 28 Sep 2022 10:09:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say,\n \"invalid data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
},
{
"msg_contents": "At Wed, 28 Sep 2022 10:09:39 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Sep 27, 2022 at 03:11:54PM +0530, Bharath Rupireddy wrote:\n> > On Tue, Sep 27, 2022 at 2:20 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> >> If this is still unacceptable, I propose to change the comment. (I\n> >> found that the previous patch forgets about do_pg_backup_stop())\n> >>\n> >> - * It fills in backup_state with the information required for the backup,\n> >> + * It fills in the parameter \"state\" with the information required for the backup,\n> > \n> > +1. There's another place that uses backup_state in the comments. I\n> > modified that as well. Please see the attached patch.\n> \n> Thanks, fixed the comments. I have left the variable names as they are\n> now in the code, as both are backup-related code paths so it is IMO\n> clear that the state is linked to a backup.\n\nThanks! I'm fine with that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 28 Sep 2022 13:07:43 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor backup related code (was: Is it correct to say,\n \"invalid data in file \\\"%s\\\"\", BACKUP_LABEL_FILE in do_pg_backup_stop?)"
}
] |
[
{
"msg_contents": "Hi,\n\nI'd like to propose to remove \"whichChkpt\" and \"report\" arguments in ReadCheckpointRecord(). \"report\" is obviously useless because it's always true, i.e., there are two callers of the function and they always specify true as \"report\".\n\n\"whichChkpt\" indicates where the specified checkpoint location came from, pg_control or backup_label. This information is used to log different messages as follows.\n\n\t\tswitch (whichChkpt)\n\t\t{\n\t\t\tcase 1:\n\t\t\t\tereport(LOG,\n\t\t\t\t\t\t(errmsg(\"invalid primary checkpoint link in control file\")));\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tereport(LOG,\n\t\t\t\t\t\t(errmsg(\"invalid checkpoint link in backup_label file\")));\n\t\t\t\tbreak;\n\t\t}\n\t\treturn NULL;\n\t\t...\n\t\tswitch (whichChkpt)\n\t\t{\n\t\t\tcase 1:\n\t\t\t\tereport(LOG,\n\t\t\t\t\t\t(errmsg(\"invalid primary checkpoint record\")));\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tereport(LOG,\n\t\t\t\t\t\t(errmsg(\"invalid checkpoint record\")));\n\t\t\t\tbreak;\n\t\t}\n\t\treturn NULL;\n\t\t...\n\nBut the callers of ReadCheckpointRecord() already output different log messages depending on where the invalid checkpoint record came from. So even if ReadCheckpointRecord() doesn't use \"whichChkpt\", i.e., use the same log message in both pg_control and backup_label cases, users can still identify where the invalid checkpoint record came from, by reading the log message.\n\nAlso when whichChkpt = 0, \"primary checkpoint\" is used in the log message and sounds confusing because, as far as I recall correctly, we removed the concept of primary and secondary checkpoints before.\n\n Therefore I think that it's better to remove \"whichChkpt\" argument in ReadCheckpointRecord().\n\nPatch attached. Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Wed, 20 Jul 2022 23:50:44 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Remove useless arguments in ReadCheckpointRecord()."
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 8:21 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> Hi,\n>\n> I'd like to propose to remove \"whichChkpt\" and \"report\" arguments in ReadCheckpointRecord(). \"report\" is obviously useless because it's always true, i.e., there are two callers of the function and they always specify true as \"report\".\n\nYes, the report parameter is obvious to delete. The commit 1d919de5eb\nremoved the only call with the report parameter as false.\n\n> \"whichChkpt\" indicates where the specified checkpoint location came from, pg_control or backup_label. This information is used to log different messages as follows.\n>\n> switch (whichChkpt)\n> {\n> case 1:\n> ereport(LOG,\n> (errmsg(\"invalid primary checkpoint link in control file\")));\n> break;\n> default:\n> ereport(LOG,\n> (errmsg(\"invalid checkpoint link in backup_label file\")));\n> break;\n> }\n> return NULL;\n> ...\n> switch (whichChkpt)\n> {\n> case 1:\n> ereport(LOG,\n> (errmsg(\"invalid primary checkpoint record\")));\n> break;\n> default:\n> ereport(LOG,\n> (errmsg(\"invalid checkpoint record\")));\n> break;\n> }\n> return NULL;\n> ...\n>\n> But the callers of ReadCheckpointRecord() already output different log messages depending on where the invalid checkpoint record came from. So even if ReadCheckpointRecord() doesn't use \"whichChkpt\", i.e., use the same log message in both pg_control and backup_label cases, users can still identify where the invalid checkpoint record came from, by reading the log message.\n>\n> Also when whichChkpt = 0, \"primary checkpoint\" is used in the log message and sounds confusing because, as far as I recall correctly, we removed the concept of primary and secondary checkpoints before.\n\nYes, using \"primary checkpoint\" confuses, as we usually refer primary\nin the context of replication and HA.\n\n> Therefore I think that it's better to remove \"whichChkpt\" argument in ReadCheckpointRecord().\n>\n> Patch attached. 
Thought?\n\nHow about we transform the following messages into something like below?\n\n(errmsg(\"could not locate a valid checkpoint record\"))); after\nReadCheckpointRecord() for control file cases to \"could not locate\nvalid checkpoint record in control file\"\n(errmsg(\"could not locate required checkpoint record\"), after\nReadCheckpointRecord() for backup_label case to \"could not locate\nvalid checkpoint record in backup_label file\"\n\nThe above messages give more meaningful and unique info to the users.\n\nMay be unrelated, IIRC, for the errors like ereport(PANIC,\n(errmsg(\"could not locate a valid checkpoint record\"))); we wanted to\nadd a hint asking users to consider running pg_resetwal to fix the\nissue. The error for ReadCheckpointRecord() in backup_label file\ncases, already gives a hint errhint(\"If you are restoring from a\nbackup, touch \\\"%s/recovery.signal\\\" ...... Adding the hint of running\npg_resetwal (of course with a caution that it can cause inconsistency\nin the data and use it as a last resort as described in the docs)\nhelps users and support engineers a lot to mitigate the server down\ncases quickly.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 20 Jul 2022 20:59:20 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless arguments in ReadCheckpointRecord()."
},
{
"msg_contents": "On 2022/07/21 0:29, Bharath Rupireddy wrote:\n> How about we transform the following messages into something like below?\n> \n> (errmsg(\"could not locate a valid checkpoint record\"))); after\n> ReadCheckpointRecord() for control file cases to \"could not locate\n> valid checkpoint record in control file\"\n> (errmsg(\"could not locate required checkpoint record\"), after\n> ReadCheckpointRecord() for backup_label case to \"could not locate\n> valid checkpoint record in backup_label file\"\n> \n> The above messages give more meaningful and unique info to the users.\n\nAgreed. Attached is the updated version of the patch.\nThanks for the review!\n\n\n> May be unrelated, IIRC, for the errors like ereport(PANIC,\n> (errmsg(\"could not locate a valid checkpoint record\"))); we wanted to\n> add a hint asking users to consider running pg_resetwal to fix the\n> issue. The error for ReadCheckpointRecord() in backup_label file\n> cases, already gives a hint errhint(\"If you are restoring from a\n> backup, touch \\\"%s/recovery.signal\\\" ...... Adding the hint of running\n> pg_resetwal (of course with a caution that it can cause inconsistency\n> in the data and use it as a last resort as described in the docs)\n> helps users and support engineers a lot to mitigate the server down\n> cases quickly.\n\n+1 to add some hint messages. But I'm not sure if it's good to hint the use of pg_resetwal because, as you're saying, it should be used as a last resort and has some big risks like data loss, corruption, etc. That is, I'm concerned that some users might quickly run pg_resetwal because the hint message says that, without reading the docs or understanding those risks.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Thu, 21 Jul 2022 11:45:23 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove useless arguments in ReadCheckpointRecord()."
},
{
"msg_contents": "I agree to removing the two parameters. And agree to let\nReadCheckpointRecord not conscious of the location source.\n\nAt Thu, 21 Jul 2022 11:45:23 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> Agreed. Attached is the updated version of the patch.\n> Thanks for the review!\n\n-\t(errmsg(\"could not locate required checkpoint record\"),\n+\t(errmsg(\"could not locate a valid checkpoint record in backup_label file\"),\n\n\"in backup_label\" there looks *to me* need some verb.. By the way,\nthis looks like a good chance to remove the (now) extra parens around\nerrmsg() and friends.\n\nFor example:\n\n-\t(errmsg(\"could not locate a valid checkpoint record in backup_label file\"),\n+\terrmsg(\"could not locate checkpoint record specified in backup_label file\"),\n\n-\t(errmsg(\"could not locate a valid checkpoint record in control file\")));\n+\terrmsg(\"could not locate checkpoint record recorded in control file\")));\n\n\n+\t\t\t\t(errmsg(\"invalid checkpoint record\")));\n\nIs it useful to show the specified LSN there?\n\n+\t\t\t\t(errmsg(\"invalid resource manager ID in checkpoint record\")));\n\nWe have a similar message \"invalid resource manager ID %u at\n%X/%X\". Since the caller explains that it is a checkpoint record, we\ncan share the message here.\n\n+\t\t\t\t(errmsg(\"invalid xl_info in checkpoint record\")));\n\n(It is not an issue of this patch, though) I don't think this is\nappropriate for user-facing message. Couldn't we say \"unexpected\nrecord type: %x\" or something like?\n\n+\t\t\t\t(errmsg(\"invalid length of checkpoint record\")));\n\nWe have \"record with invalid length at %X/%X\" or \"invalid record\nlength at %X/%X: wanted %u, got %u\". Could we reuse any of them?\n\n> > May be unrelated, IIRC, for the errors like ereport(PANIC,\n> > (errmsg(\"could not locate a valid checkpoint record\"))); we wanted to\n> > add a hint asking users to consider running pg_resetwal to fix the\n> > issue. 
The error for ReadCheckpointRecord() in backup_label file\n> > cases, already gives a hint errhint(\"If you are restoring from a\n> > backup, touch \\\"%s/recovery.signal\\\" ...... Adding the hint of running\n> > pg_resetwal (of course with a caution that it can cause inconsistency\n> > in the data and use it as a last resort as described in the docs)\n> > helps users and support engineers a lot to mitigate the server down\n> > cases quickly.\n> \n> +1 to add some hint messages. But I'm not sure if it's good to hint\n> the use of pg_resetwal because, as you're saying, it should be used as\n> a last resort and has some big risks like data loss, corruption,\n> etc. That is, I'm concerned that some users might quickly run\n> pg_resetwal because the hint message says that, without reading the docs\n> or understanding those risks.\n\nI don't think we want to recommend pg_resetwal to those who don't\nreach it by themselves by other means. I know of a few instances of\npeople who had the tool (unrecoverably) break their own clusters.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 21 Jul 2022 14:54:49 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless arguments in ReadCheckpointRecord()."
},
{
"msg_contents": "On 2022/07/21 14:54, Kyotaro Horiguchi wrote:\n> I agree to removing the two parameters. And agree to let\n> ReadCheckpointRecord not conscious of the location source.\n\nThanks for the review!\n\n\n> At Thu, 21 Jul 2022 11:45:23 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> Agreed. Attached is the updated version of the patch.\n>> Thanks for the review!\n> \n> -\t(errmsg(\"could not locate required checkpoint record\"),\n> +\t(errmsg(\"could not locate a valid checkpoint record in backup_label file\"),\n> \n> \"in backup_label\" there looks *to me* need some verb.. \n\nSorry, I failed to understand your point. Could you clarify your point?\n\n\n> By the way,\n> this looks like a good chance to remove the (now) extra parens around\n> errmsg() and friends.\n> \n> For example:\n> \n> -\t(errmsg(\"could not locate a valid checkpoint record in backup_label file\"),\n> +\terrmsg(\"could not locate checkpoint record specified in backup_label file\"),\n> \n> -\t(errmsg(\"could not locate a valid checkpoint record in control file\")));\n> +\terrmsg(\"could not locate checkpoint record recorded in control file\")));\n\nBecause it's recommended not to put parenthesis just before errmsg(), you mean? I'm ok to remove such parenthesis, but I'd like to understand why before doing that. I was thinking that either having or not having parenthesis in front of errmsg() is ok, so there are many calls to errmsg() with parenthesis, in xlogrecovery.c.\n\n\n> +\t\t\t\t(errmsg(\"invalid checkpoint record\")));\n> \n> Is it useful to show the specified LSN there?\n\nYes, LSN info would be helpful also for debugging.\n\nI separated the patch into two; one is to remove useless arguments in ReadCheckpointRecord(), another is to improve log messages. I added LSN info in log messages in the second patch.\n\n\n> +\t\t\t\t(errmsg(\"invalid resource manager ID in checkpoint record\")));\n> \n> We have a similar message \"invalid resource manager ID %u at\n> %X/%X\". 
Since the caller explains that it is a checkpoint record, we\n> can share the message here.\n\n+1\n\n\n> +\t\t\t\t(errmsg(\"invalid xl_info in checkpoint record\")));\n> \n> (It is not an issue of this patch, though) I don't think this is\n> appropriate for user-facing message. Couldn't we say \"unexpected\n> record type: %x\" or something like?\n\nThe proposed log message doesn't look like an improvement. But I have no better one. So I left the message as it is, in the patch, for now.\n\n> \n> +\t\t\t\t(errmsg(\"invalid length of checkpoint record\")));\n> \n> We have \"record with invalid length at %X/%X\" or \"invalid record\n> length at %X/%X: wanted %u, got %u\". Could we reuse any of them?\n\n+1\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Fri, 22 Jul 2022 11:50:14 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove useless arguments in ReadCheckpointRecord()."
},
{
"msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> On 2022/07/21 14:54, Kyotaro Horiguchi wrote:\n>> At Thu, 21 Jul 2022 11:45:23 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>> -\t(errmsg(\"could not locate required checkpoint record\"),\n>>> +\t(errmsg(\"could not locate a valid checkpoint record in backup_label file\"),\n\n>> \"in backup_label\" there looks *to me* need some verb.. \n\n> Sorry, I failed to understand your point. Could you clarify your point?\n\nFWIW, the proposed change looks like perfectly good English to me.\n\"locate\" is the verb. It's been way too many years since high\nschool grammar for me to remember the exact term for auxiliary\nclauses like \"in backup_label file\", but that doesn't need its\nown verb. Possibly Kyotaro-san is feeling that it should be\nlike \"... checkpoint record in the backup_label file\". That'd\nbe more formal, but in the telegraphic style that we prefer for\nprimary error messages, omitting the \"the\" is fine.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Jul 2022 23:10:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless arguments in ReadCheckpointRecord()."
},
{
"msg_contents": "At Thu, 21 Jul 2022 23:10:04 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> > On 2022/07/21 14:54, Kyotaro Horiguchi wrote:\n> >> At Thu, 21 Jul 2022 11:45:23 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n> >>> -\t(errmsg(\"could not locate required checkpoint record\"),\n> >>> +\t(errmsg(\"could not locate a valid checkpoint record in backup_label file\"),\n> \n> >> \"in backup_label\" there looks *to me* need some verb.. \n> \n> > Sorry, I failed to understand your point. Could you clarify your point?\n> \n> FWIW, the proposed change looks like perfectly good English to me.\n> \"locate\" is the verb. It's been way too many years since high\n> school grammar for me to remember the exact term for auxiliary\n> clauses like \"in backup_label file\", but that doesn't need its\n> own verb. Possibly Kyotaro-san is feeling that it should be\n> like \"... checkpoint record in the backup_label file\". That'd\n> be more formal, but in the telegraphic style that we prefer for\n> primary error messages, omitting the \"the\" is fine.\n\nMaybe a little different. I thought that a checkpoint record cannot\nbe located in backup_label file. In other words what is in\nbackup_label file is a pointer to the record and the record is in a\nWAL file.\n\nI'm fine with the proposed sentence if it makes the correct\nsense. (And sorry for the noise)\n\nBy the way, I learned that the style is called \"telegraphic style\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 22 Jul 2022 17:17:12 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless arguments in ReadCheckpointRecord()."
},
{
"msg_contents": "At Fri, 22 Jul 2022 11:50:14 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> Sorry, I failed to understand your point. Could you clarify your\n> point?\n\nWrote as a reply to Tom's message.\n\n> > By the way,\n> > this looks like a good chance to remove the (now) extra parens around\n> > errmsg() and friends.\n> > For example:\n> > -\t(errmsg(\"could not locate a valid checkpoint record in backup_label\n> > -\tfile\"),\n> > + errmsg(\"could not locate checkpoint record specified in backup_label\n> > file\"),\n> > -\t(errmsg(\"could not locate a valid checkpoint record in control file\")));\n> > + errmsg(\"could not locate checkpoint record recorded in control\n> > file\")));\n> \n> Because it's recommended not to put parentheses just before errmsg(),\n> you mean? I'm ok to remove such parentheses, but I'd like to understand\n> why before doing that. I was thinking that either having or not having\n> parentheses in front of errmsg() is ok, so there are many calls to\n> errmsg() with parentheses, in xlogrecovery.c.\n\nI believed that it is recommended to move to the style not having the\noutmost parens. That style has been introduced by e3a87b4991.\n\n> * The extra parentheses around the auxiliary function calls are now\n> optional. Aside from being a bit less ugly, this removes a common\n> gotcha for new contributors, because in some cases the compiler errors\n> you got from forgetting them were unintelligible.\n...\n> While new code can be written either way, code intended to be\n> back-patched will need to use extra parens for awhile yet. It seems\n> worth back-patching this change into v12, so as to reduce the window\n> where we have to be careful about that by one year. 
Hence, this patch\n> is careful to preserve ABI compatibility; a followup HEAD-only patch\n> will make some additional simplifications.\n\nSo I thought that if we modify an error message, its ereport can be\nrewritten.\n\n\n> > +\t\t\t\t(errmsg(\"invalid checkpoint record\")));\n> > Is it useful to show the specified LSN there?\n> \n> Yes, LSN info would be helpful also for debugging.\n> \n> I separated the patch into two; one is to remove useless arguments in\n> ReadCheckpointRecord(), another is to improve log messages. I added\n> LSN info in log messages in the second patch.\n\nThanks!\n\n> > +\t\t\t\t(errmsg(\"invalid xl_info in checkpoint record\")));\n> > (It is not an issue of this patch, though) I don't think this is\n> > appropriate for user-facing message. Couldn't we say \"unexpected\n> > record type: %x\" or something like?\n> \n> The proposed log message doesn't look like an improvement. But I have\n> no better one. So I left the message as it is, in the patch, for now.\n\nUnderstood.\n\nregards\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 22 Jul 2022 17:31:52 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless arguments in ReadCheckpointRecord()."
},
{
"msg_contents": "\n\nOn 2022/07/22 17:31, Kyotaro Horiguchi wrote:\n> I believed that it is recommended to move to the style not having the\n> outmost parens. That style has been introduced by e3a87b4991.\n\nI read the commit log, but I'm not sure yet if it's really recommended to remove extra parens even from the existing calls to errmsg(). Removing extra parens can interfere with back-patching of the changes around those errmsg(), can't it?\n\nAnyway, at first I pushed the 0001 patch that removes useless arguments in ReadCheckpointRecord().\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 25 Jul 2022 11:30:30 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove useless arguments in ReadCheckpointRecord()."
},
{
"msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> On 2022/07/22 17:31, Kyotaro Horiguchi wrote:\n>> I believed that it is recommended to move to the style not having the\n>> outmost parens. That style has been introduced by e3a87b4991.\n\n> I read the commit log, but I'm not sure yet if it's really recommended to remove extra parens even from the existing calls to errmsg(). Removing extra parens can interfere with back-patching of the changes around those errmsg(), can't it?\n\nRight, so I wouldn't be in a hurry to change existing calls. If you're\nediting an ereport call for some other reason, it's fine to remove the\nexcess parens in it, because you're creating a backpatch hazard there\nanyway. But otherwise, I think such changes are make-work in themselves\nand risk creating more make-work for somebody else later.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 24 Jul 2022 22:40:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless arguments in ReadCheckpointRecord()."
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 11:24 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> > > May be unrelated, IIRC, for the errors like ereport(PANIC,\n> > > (errmsg(\"could not locate a valid checkpoint record\"))); we wanted to\n> > > add a hint asking users to consider running pg_resetwal to fix the\n> > > issue. The error for ReadCheckpointRecord() in backup_label file\n> > > cases, already gives a hint errhint(\"If you are restoring from a\n> > > backup, touch \\\"%s/recovery.signal\\\" ...... Adding the hint of running\n> > > pg_resetwal (of course with a caution that it can cause inconsistency\n> > > in the data and use it as a last resort as described in the docs)\n> > > helps users and support engineers a lot to mitigate the server down\n> > > cases quickly.\n> >\n> > +1 to add some hint messages. But I'm not sure if it's good to hint\n> > the use of pg_resetwal because, as you're saying, it should be used as\n> > a last resort and has some big risks like data loss, corruption,\n> > etc. That is, I'm concerned about that some users might quickly run\n> > pg_resetwal because hint message says that, without reading the docs\n> > nor understanding those risks.\n>\n> I don't think we want to recommend pg_resetwal to those who don't\n> reach it by themselves by other means. I know of a few instances of\n> people who had the tool (unrecoverably) break their own clusters.\n\nAgree. We might want to take this topic separately as it needs more\ncareful study of common issues such as PANICs and then adding hints\nwith proven ways to repair the server and bring it back online.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 25 Jul 2022 12:09:26 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless arguments in ReadCheckpointRecord()."
},
{
"msg_contents": "At Sun, 24 Jul 2022 22:40:16 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> > On 2022/07/22 17:31, Kyotaro Horiguchi wrote:\n> >> I believed that it is recommended to move to the style not having the\n> >> outmost parens. That style has been introduced by e3a87b4991.\n> \n> > I read the commit log, but I'm not sure yet if it's really recommended to remove extra parens even from the existing calls to errmsg(). Removing extra parens can interfere with back-patching of the changes around those errmsg(), can't it?\n> \n> Right, so I wouldn't be in a hurry to change existing calls. If you're\n> editing an ereport call for some other reason, it's fine to remove the\n> excess parens in it, because you're creating a backpatch hazard there\n> anyway. But otherwise, I think such changes are make-work in themselves\n> and risk creating more make-work for somebody else later.\n\nSo, I meant to propose to remove extra parens for errmsg()'s where the\nmessage string is edited. Is it fine in that criteria?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 26 Jul 2022 09:42:23 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless arguments in ReadCheckpointRecord()."
},
{
"msg_contents": "\n\nOn 2022/07/26 9:42, Kyotaro Horiguchi wrote:\n> At Sun, 24 Jul 2022 22:40:16 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n>> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>>> On 2022/07/22 17:31, Kyotaro Horiguchi wrote:\n>>>> I believed that it is recommended to move to the style not having the\n>>>> outmost parens. That style has been introduced by e3a87b4991.\n>>\n>>> I read the commit log, but I'm not sure yet if it's really recommended to remove extra parens even from the existing calls to errmsg(). Removing extra parens can interfere with back-patching of the changes around those errmsg(), can't it?\n>>\n>> Right, so I wouldn't be in a hurry to change existing calls. If you're\n>> editing an ereport call for some other reason, it's fine to remove the\n>> excess parens in it, because you're creating a backpatch hazard there\n>> anyway. But otherwise, I think such changes are make-work in themselves\n>> and risk creating more make-work for somebody else later.\n> \n> So, I meant to propose to remove extra parens for errmsg()'s where the\n> message string is edited. Is it fine in that criteria?\n\nEven in that case, removing parens may interfere with the back-patch. For example, please imagine the case where wasShutdown is changed to be set to true in the following code and this change is back-patched to v15. If we modify only the log message in the following errmsg() and leave the parens around that, git cherry-pick of the change of wasShutdown to v15 would be completed successfully. 
But if we remove the parens, git cherry-pick would fail.\n\n\tereport(FATAL,\n\t\t\t(errmsg(\"could not locate required checkpoint record\"),\n\t\t\t errhint(\"If you are restoring from a backup, touch \\\"%s/recovery.signal\\\" and add required recovery options.\\n\"\n\t\t\t\t\t \"If you are not restoring from a backup, try removing the file \\\"%s/backup_label\\\".\\n\"\n\t\t\t\t\t \"Be careful: removing \\\"%s/backup_label\\\" will result in a corrupt cluster if restoring from a backup.\",\n\t\t\t\t\t DataDir, DataDir, DataDir)));\n\twasShutdown = false;\t/* keep compiler quiet */\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 26 Jul 2022 11:02:27 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove useless arguments in ReadCheckpointRecord()."
}
] |
[
{
"msg_contents": "The GUC units are currently defined like:\n\n#define GUC_UNIT_KB 0x1000 /* value is in kilobytes */\n#define GUC_UNIT_BLOCKS 0x2000 /* value is in blocks */\n#define GUC_UNIT_XBLOCKS 0x3000 /* value is in xlog blocks */\n#define GUC_UNIT_MB 0x4000 /* value is in megabytes */\n#define GUC_UNIT_BYTE 0x8000 /* value is in bytes */\n#define GUC_UNIT_MEMORY 0xF000 /* mask for size-related units */\n\n#define GUC_UNIT_MS 0x10000 /* value is in milliseconds */\n#define GUC_UNIT_S 0x20000 /* value is in seconds */\n#define GUC_UNIT_MIN 0x30000 /* value is in minutes */\n#define GUC_UNIT_TIME 0xF0000 /* mask for time-related units */\n\n0x3000 and 0x30000 seemed wrong, since they're a combination of other flags\nrather than being an independent power of two.\n\nBut actually, these aren't flags: they're tested in a \"case\" statement for\nequality, not in a bitwise & test.\n\nSo the outlier is actually 0x8000, added at:\n|commit 6e7baa322773ff8c79d4d8883c99fdeff5bfa679\n|Author: Andres Freund <andres@anarazel.de>\n|Date: Tue Sep 12 12:13:12 2017 -0700\n|\n| Introduce BYTES unit for GUCs.\n\nIt looks like that originated here:\n\nhttps://www.postgresql.org/message-id/CAOG9ApEu8bXVwBxkOO9J7ZpM76TASK_vFMEEiCEjwhMmSLiaqQ%40mail.gmail.com\n\ncommit 162e4838103e7957cccfe7868fc28397b55ca1d7\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Wed Jul 20 09:27:24 2022 -0500\n\n Renumber confusing value for GUC_UNIT_BYTE\n \n It had a power-of-two value, which looks right, and causes the other values\n which aren't powers-of-two to look wrong. 
But this is tested for equality and\n not a bitwise test.\n \n See also:\n 6e7baa322773ff8c79d4d8883c99fdeff5bfa679\n https://www.postgresql.org/message-id/CAOG9ApEu8bXVwBxkOO9J7ZpM76TASK_vFMEEiCEjwhMmSLiaqQ%40mail.gmail.com\n\ndiff --git a/src/include/utils/guc.h b/src/include/utils/guc.h\nindex 4d0920c42e2..be928fac881 100644\n--- a/src/include/utils/guc.h\n+++ b/src/include/utils/guc.h\n@@ -219,11 +219,12 @@ typedef enum\n #define GUC_DISALLOW_IN_AUTO_FILE 0x0800\t/* can't set in\n \t\t\t\t\t\t\t\t\t\t\t * PG_AUTOCONF_FILENAME */\n \n+/* GUC_UNIT_* are not flags - they're tested for equality */\n #define GUC_UNIT_KB\t\t\t\t0x1000\t/* value is in kilobytes */\n #define GUC_UNIT_BLOCKS\t\t\t0x2000\t/* value is in blocks */\n #define GUC_UNIT_XBLOCKS\t\t0x3000\t/* value is in xlog blocks */\n #define GUC_UNIT_MB\t\t\t\t0x4000\t/* value is in megabytes */\n-#define GUC_UNIT_BYTE\t\t\t0x8000\t/* value is in bytes */\n+#define GUC_UNIT_BYTE\t\t\t0x5000\t/* value is in bytes */\n #define GUC_UNIT_MEMORY\t\t\t0xF000\t/* mask for size-related units */\n \n #define GUC_UNIT_MS\t\t\t 0x10000\t/* value is in milliseconds */\n\n\n",
"msg_date": "Wed, 20 Jul 2022 09:52:21 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Renumber confusing value for GUC_UNIT_BYTE"
},
{
"msg_contents": "On 20.07.22 16:52, Justin Pryzby wrote:\n> +/* GUC_UNIT_* are not flags - they're tested for equality */\n\nWell, there is GUC_UNIT_MEMORY, etc. so there is an additional \nconstraint beyond just \"pick any number\". I'm not sure that \"flag\" and \n\"tested for equality\" are really antonyms anyway.\n\nI think renumbering this makes sense. We could just leave the comment \nas is if we don't come up with a better wording.\n\n> #define GUC_UNIT_KB\t\t\t\t0x1000\t/* value is in kilobytes */\n> #define GUC_UNIT_BLOCKS\t\t\t0x2000\t/* value is in blocks */\n> #define GUC_UNIT_XBLOCKS\t\t0x3000\t/* value is in xlog blocks */\n> #define GUC_UNIT_MB\t\t\t\t0x4000\t/* value is in megabytes */\n> -#define GUC_UNIT_BYTE\t\t\t0x8000\t/* value is in bytes */\n> +#define GUC_UNIT_BYTE\t\t\t0x5000\t/* value is in bytes */\n> #define GUC_UNIT_MEMORY\t\t\t0xF000\t/* mask for size-related units */\n> \n> #define GUC_UNIT_MS\t\t\t 0x10000\t/* value is in milliseconds */\n> \n> \n\n\n\n",
"msg_date": "Tue, 6 Sep 2022 07:44:13 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Renumber confusing value for GUC_UNIT_BYTE"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I think renumbering this makes sense. We could just leave the comment \n> as is if we don't come up with a better wording.\n\n+1, I see no need to change the comment. We just need to establish\nthe precedent that values within the GUC_UNIT_MEMORY field can be\nchosen sequentially.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Sep 2022 01:57:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Renumber confusing value for GUC_UNIT_BYTE"
},
{
"msg_contents": "On Tue, Sep 06, 2022 at 01:57:53AM -0400, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> I think renumbering this makes sense. We could just leave the comment \n>> as is if we don't come up with a better wording.\n> \n> +1, I see no need to change the comment. We just need to establish\n> the precedent that values within the GUC_UNIT_MEMORY field can be\n> chosen sequentially.\n\n+1.\n--\nMichael",
"msg_date": "Tue, 6 Sep 2022 15:27:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Renumber confusing value for GUC_UNIT_BYTE"
},
{
"msg_contents": "On 06.09.22 08:27, Michael Paquier wrote:\n> On Tue, Sep 06, 2022 at 01:57:53AM -0400, Tom Lane wrote:\n>> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>>> I think renumbering this makes sense. We could just leave the comment\n>>> as is if we don't come up with a better wording.\n>>\n>> +1, I see no need to change the comment. We just need to establish\n>> the precedent that values within the GUC_UNIT_MEMORY field can be\n>> chosen sequentially.\n> \n> +1.\n\ncommitted without the comment change\n\n\n",
"msg_date": "Wed, 7 Sep 2022 11:11:37 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Renumber confusing value for GUC_UNIT_BYTE"
},
{
"msg_contents": "On Wed, Sep 07, 2022 at 11:11:37AM +0200, Peter Eisentraut wrote:\n> On 06.09.22 08:27, Michael Paquier wrote:\n> > On Tue, Sep 06, 2022 at 01:57:53AM -0400, Tom Lane wrote:\n> > > Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > > > I think renumbering this makes sense. We could just leave the comment\n> > > > as is if we don't come up with a better wording.\n> > > \n> > > +1, I see no need to change the comment. We just need to establish\n> > > the precedent that values within the GUC_UNIT_MEMORY field can be\n> > > chosen sequentially.\n> > \n> > +1.\n> \n> committed without the comment change\n\nThank you\n\n\n",
"msg_date": "Wed, 7 Sep 2022 04:17:49 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Renumber confusing value for GUC_UNIT_BYTE"
}
] |
[
{
"msg_contents": "make -C ./src/interfaces/libpq check\nPATH=... && @echo \"TAP tests not enabled. Try configuring with --enable-tap-tests\"\n/bin/sh: 1: @echo: not found\n\nmake is telling the shell to run \"@echo\" , rather than running \"echo\" silently.\n\nSince:\n\ncommit 6b04abdfc5e0653542ac5d586e639185a8c61a39\nAuthor: Andres Freund <andres@anarazel.de>\nDate: Sat Feb 26 16:51:47 2022 -0800\n\n Run tap tests in src/interfaces/libpq.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 20 Jul 2022 12:23:22 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "make -C libpq check fails obscurely if tap tests are disabled"
},
{
"msg_contents": "On 2022-07-20 We 13:23, Justin Pryzby wrote:\n> make -C ./src/interfaces/libpq check\n> PATH=... && @echo \"TAP tests not enabled. Try configuring with --enable-tap-tests\"\n> /bin/sh: 1: @echo: not found\n>\n> make is telling the shell to run \"@echo\" , rather than running \"echo\" silently.\n>\n> Since:\n>\n> commit 6b04abdfc5e0653542ac5d586e639185a8c61a39\n> Author: Andres Freund <andres@anarazel.de>\n> Date: Sat Feb 26 16:51:47 2022 -0800\n>\n> Run tap tests in src/interfaces/libpq.\n\n\n\nYeah. It's a bit ugly but I think the attached would fix it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 20 Jul 2022 15:00:28 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: make -C libpq check fails obscurely if tap tests are disabled"
},
{
"msg_contents": "On 2022-Jul-20, Andrew Dunstan wrote:\n\n> On 2022-07-20 We 13:23, Justin Pryzby wrote:\n> > PATH=... && @echo \"TAP tests not enabled. Try configuring with --enable-tap-tests\"\n> > /bin/sh: 1: @echo: not found\n> >\n> > make is telling the shell to run \"@echo\" , rather than running \"echo\" silently.\n\n> Yeah. It's a bit ugly but I think the attached would fix it.\n\nHere's a different take. Just assign the variable separately.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"No renuncies a nada. No te aferres a nada.\"",
"msg_date": "Thu, 21 Jul 2022 10:53:48 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: make -C libpq check fails obscurely if tap tests are disabled"
},
{
"msg_contents": "\nOn 2022-07-21 Th 04:53, Alvaro Herrera wrote:\n> On 2022-Jul-20, Andrew Dunstan wrote:\n>\n>> On 2022-07-20 We 13:23, Justin Pryzby wrote:\n>>> PATH=... && @echo \"TAP tests not enabled. Try configuring with --enable-tap-tests\"\n>>> /bin/sh: 1: @echo: not found\n>>>\n>>> make is telling the shell to run \"@echo\" , rather than running \"echo\" silently.\n>> Yeah. It's a bit ugly but I think the attached would fix it.\n> Here's a different take. Just assign the variable separately.\n\n\nNice, I didn't know you could do that.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 21 Jul 2022 16:00:17 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: make -C libpq check fails obscurely if tap tests are disabled"
},
{
"msg_contents": "On 2022-Jul-21, Andrew Dunstan wrote:\n\n> On 2022-07-21 Th 04:53, Alvaro Herrera wrote:\n\n> > Here's a different take. Just assign the variable separately.\n> \n> Nice, I didn't know you could do that.\n\nIt's not very common -- we do have some target-specific variable\nassignments, but none of them use 'export'. I saw somewhere that this\nworks from Make 3.77 onwards, and we require 3.80, so it should be okay.\nThe buildfarm will tell us ...\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Industry suffers from the managerial dogma that for the sake of stability\nand continuity, the company should be independent of the competence of\nindividual employees.\" (E. Dijkstra)\n\n\n",
"msg_date": "Fri, 22 Jul 2022 20:23:23 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: make -C libpq check fails obscurely if tap tests are disabled"
},
{
"msg_contents": "On 2022-Jul-22, Alvaro Herrera wrote:\n\n> It's not very common -- we do have some target-specific variable\n> assignments, but none of them use 'export'. I saw somewhere that this\n> works from Make 3.77 onwards, and we require 3.80, so it should be okay.\n> The buildfarm will tell us ...\n\nHm, so prairiedog didn't like this:\n\nmake -C libpq all\nMakefile:146: *** multiple target patterns. Stop.\n\nbut I don't understand which part it is upset about. The rules are:\n\ncheck installcheck: export PATH := $(CURDIR)/test:$(PATH)\n\ncheck: test-build all\n $(prove_check)\n\ninstallcheck: test-build all\n $(prove_installcheck)\n\nI think \"multiple target patterns\" means it doesn't like the fact that\nthere are two colons in the first line. But if I use a recursive\nassignment (PATH = ...), that of course doesn't work because PATH appears on\nboth sides of the assignment:\n\nMakefile:146: *** Recursive variable 'PATH' references itself (eventually). Stop.\n\nNow, maybe that colon is not the issue and perhaps the problem can be\nsolved by splitting the rule:\n\ncheck: export PATH := $(CURDIR)/test:$(PATH)\ninstallcheck: export PATH := $(CURDIR)/test:$(PATH)\n\nAccording to 32568.1536241083@sss.pgh.pa.us, prairiedog is on Make 3.80.\nHmmm.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nVoy a acabar con todos los humanos / con los humanos yo acabaré\nvoy a acabar con todos (bis) / con todos los humanos acabaré ¡acabaré! (Bender)\n\n\n",
"msg_date": "Fri, 22 Jul 2022 22:19:45 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: make -C libpq check fails obscurely if tap tests are disabled"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Jul-22, Alvaro Herrera wrote:\n>> It's not very common -- we do have some target-specific variable\n>> assignments, but none of them use 'export'. I saw somewhere that this\n>> works from Make 3.77 onwards, and we require 3.80, so it should be okay.\n>> The buildfarm will tell us ...\n\n> Hm, so prairiedog didn't like this:\n> According to 32568.1536241083@sss.pgh.pa.us, prairiedog is on Make 3.80.\n\nYeah, it is. I looked at the gmake manual on that machine, and its\ndescription of \"export\" seems about the same as what I see in a\nmodern version. So it should work ... but we've found bugs in 3.80\nbefore.\n\nLet me poke at it and see if there's a variant that works.\nThe wording of the message suggests that maybe breaking it into\ntwo separate rules would help.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 22 Jul 2022 16:40:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make -C libpq check fails obscurely if tap tests are disabled"
},
{
"msg_contents": "I wrote:\n> Yeah, it is. I looked at the gmake manual on that machine, and its\n> description of \"export\" seems about the same as what I see in a\n> modern version.\n\nUm ... I was not looking in the right place. The description of\n\"target-specific variables\" does not say you can use \"export\",\nwhereas the modern manual mentions that specifically. I found\na relevant entry in their changelog:\n\n2002-10-13 Paul D. Smith <psmith@gnu.org>\n\t...\n\t* read.c (eval): Fix Bug #1391: allow \"export\" keyword in\n\ttarget-specific variable definitions. Check for it and set an\n\t\"exported\" flag.\n\t* doc/make.texi (Target-specific): Document the ability to use\n\t\"export\".\n\nSo it'll work in 3.81 (released 2006) and later, but not 3.80.\n\nTBH my inclination here is to move our goalposts to say \"we support\ngmake 3.81 and later\". It's possible that prairiedog's copy of 3.80 is\nthe last one left in the wild, and nearly certain that it's the last\none left that anyone would try to build PG with. (I see gmake 3.81 in\nthe next macOS version, 10.5.) I doubt it'd take long to install a newer\nversion on prairiedog.\n\nAlternatively, we could use Andrew's hacky solution from upthread.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 22 Jul 2022 17:24:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make -C libpq check fails obscurely if tap tests are disabled"
},
{
"msg_contents": "I wrote:\n> So it'll work in 3.81 (released 2006) and later, but not 3.80.\n\nConfirmed that things are fine with 3.81.\n\n> TBH my inclination here is to move our goalposts to say \"we support\n> gmake 3.81 and later\".\n\nBarring objections, I'll push the attached patch. I suppose we\ncould undo whatever dumbing-down was done in _create_recursive_target,\nbut is it worth troubling with?\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 22 Jul 2022 19:50:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make -C libpq check fails obscurely if tap tests are disabled"
},
{
"msg_contents": "On 2022-Jul-22, Tom Lane wrote:\n\n> Barring objections, I'll push the attached patch. I suppose we\n> could undo whatever dumbing-down was done in _create_recursive_target,\n> but is it worth troubling with?\n\nExcellent, many thanks. I tried to get Make 3.80 built here, to no\navail. I have updated that to 3.81, but still haven't found a way past\nthe automake phase.\n\nAnyway, I tried a revert of 1bd201214965 -- I ended up with the\nattached. However, while a serial compile fails, parallel ones fail\nrandomly, and apparently because two submakes compete in building\nlibpq.a and each deletes the other's file. What I think this is saying\nis that the 3.80-induced wording of that function limits concurrency of\nthe generated recursive rules, which prevents the problem from\noccurring; and if we were to fix that bug we would probably end up with\nmore concurrency.\n\nHere's the bottom of the 'make -j8' log:\n\nrm -f libpq.a\nranlib libpq.a\nranlib: 'libpq.a': No such file\nmake[5]: *** [/pgsql/source/master/src/Makefile.shlib:261: libpq.a] Error 1\nmake[5]: *** Waiting for unfinished jobs....\nar crs libpq.a fe-auth-scram.o fe-connect.o fe-exec.o fe-lobj.o fe-misc.o fe-print.o fe-protocol3.o fe-secure.o fe-trace.o legacy-pqsignal.o libpq-events.o pqexpbuffer.o fe-auth.o fe-secure-common.o fe-secure-openssl.o\nranlib libpq.a\ntouch libpq.a\nrm -f libpq.so.5\nln -s libpq.so.5.16 libpq.so.5\nrm -f libpq.so\nln -s libpq.so.5.16 libpq.so\ntouch libpq-refs-stamp\nrm -f libpq.so.5\nln -s libpq.so.5.16 libpq.so.5\nrm -f libpq.so\nln -s libpq.so.5.16 libpq.so\nrm -f libpq.so.5\nln -s libpq.so.5.16 libpq.so.5\ntouch libpq-refs-stamp\nrm -f libpq.so\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -g -O2 -pthread 
-D_REENTRANT -D_THREAD_SAFE -fPIC -shared -Wl,-soname,libecpg.so.6 -Wl,--version-script=exports.list -o libecpg.so.6.16 connect.o data.o descriptor.o error.o execute.o memory.o misc.o prepare.o sqlda.o typename.o -L../../../../src/port -L../../../../src/common -L../pgtypeslib -lpgtypes -L../../../../src/common -lpgcommon_shlib -L../../../../src/port -lpgport_shlib -L../../../../src/interfaces/libpq -lpq -L/usr/lib/llvm-11/lib -Wl,--as-needed -Wl,-rpath,'/pgsql/install/master/lib',--enable-new-dtags -lm \nln -s libpq.so.5.16 libpq.so\nrm -f libecpg.a\nmake[4]: *** [../../../../src/Makefile.global:618: submake-libpq] Error 2\nar crs libecpg.a connect.o data.o descriptor.o error.o execute.o memory.o misc.o prepare.o sqlda.o typename.o\nmake[3]: *** [Makefile:17: all-ecpglib-recursive] Error 2\nmake[3]: *** Waiting for unfinished jobs....\nranlib libecpg.a\ntouch libecpg.a\nrm -f libecpg.so.6\nln -s libecpg.so.6.16 libecpg.so.6\nrm -f libecpg.so\nln -s libecpg.so.6.16 libecpg.so\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -g -O2 -pthread -D_REENTRANT -D_THREAD_SAFE -fPIC -shared -Wl,-soname,libecpg_compat.so.3 -Wl,--version-script=exports.list -o libecpg_compat.so.3.16 informix.o -L../../../../src/port -L../../../../src/common -L../ecpglib -lecpg -L../pgtypeslib -lpgtypes -L../../../../src/common -lpgcommon_shlib -L../../../../src/port -lpgport_shlib -L../../../../src/interfaces/libpq -lpq -L/usr/lib/llvm-11/lib -Wl,--as-needed -Wl,-rpath,'/pgsql/install/master/lib',--enable-new-dtags -lm \nrm -f libecpg_compat.a\nar crs libecpg_compat.a informix.o\nranlib libecpg_compat.a\ntouch libecpg_compat.a\nrm -f libecpg_compat.so.3\nln -s libecpg_compat.so.3.16 libecpg_compat.so.3\nrm -f libecpg_compat.so\nln -s 
libecpg_compat.so.3.16 libecpg_compat.so\nmake[2]: *** [Makefile:17: all-ecpg-recursive] Error 2\nmake[1]: *** [Makefile:42: all-interfaces-recursive] Error 2\nmake: *** [GNUmakefile:11: all-src-recursive] Error 2\n\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Los dioses no protegen a los insensatos. Éstos reciben protección de\notros insensatos mejor dotados\" (Luis Wu, Mundo Anillo)\n\n\n",
"msg_date": "Mon, 25 Jul 2022 12:32:55 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: make -C libpq check fails obscurely if tap tests are disabled"
},
{
"msg_contents": "On 2022-Jul-25, Alvaro Herrera wrote:\n\n> Anyway, I tried a revert of 1bd201214965 -- I ended up with the\n> attached.\n\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/",
"msg_date": "Mon, 25 Jul 2022 12:33:39 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: make -C libpq check fails obscurely if tap tests are disabled"
},
{
"msg_contents": "Ah, I found what the actual problem is: we have sprinkled a few\ndependencies on \"...-recurse\" throughout the tree, but the patch I posted\nyesterday changes the manufactured target to \"-recursive\", as it was\nprior to 1bd201214965; so essentially these manually added dependencies\nall became silent no-ops.\n\nWith this version I keep the target name as -recurse, and at least the\necpg<->libpq problem is no more AFAICT.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/",
"msg_date": "Tue, 26 Jul 2022 10:21:24 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: make -C libpq check fails obscurely if tap tests are disabled"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-22 19:50:02 -0400, Tom Lane wrote:\n> I wrote:\n> > So it'll work in 3.81 (released 2006) and later, but not 3.80.\n> \n> Confirmed that things are fine with 3.81.\n\nThanks for looking into this Alvaro, Andrew, Justin, Tom - I was on\nvacation...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 26 Jul 2022 04:28:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: make -C libpq check fails obscurely if tap tests are disabled"
},
{
"msg_contents": "On 2022-Jul-26, Alvaro Herrera wrote:\n\n> With this version I keep the target name as -recurse, and at least the\n> ecpg<->libpq problem is no more AFAICT.\n\n... but I think we're missing some more dependencies, because if I\nremove everything (beyond make clean), then a \"make -j8 world-bin\"\nfails, per Cirrus.\n https://cirrus-ci.com/build/6305635338813440\nWith that, I'm going to set this aside for the time being. If somebody\nwants to play with these Makefile rules, be my guest. It sounds like\nthere's some compile time gains to be had, but it may require some\nfiddling and it's not clear to me if the move to Meson is going to make\nthis moot.\n\nRunning it locally, I get this:\n\n[...]\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -g -O2 -DFRONTEND -I. 
-I/pgsql/source/master/src/common -I../../src/include -I/pgsql/source/master/src/include -D_GNU_SOURCE -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -g -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-L/usr/lib/llvm-11/lib -Wl,--as-needed -Wl,-rpath,'/pgsql/install/master/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -llz4 -lssl -lcrypto -lz -lreadline -lpthread -lrt -ldl -lm \\\"\" -c -o hashfn.o /pgsql/source/master/src/common/hashfn.c -MMD -MP -MF .deps/hashfn.Po\n[...]\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -g -O2 -DFRONTEND -I. 
-I/pgsql/source/master/src/common -I../../src/include -I/pgsql/source/master/src/include -D_GNU_SOURCE -DVAL_CC=\"\\\"gcc\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -g -O2\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\" -DVAL_LDFLAGS=\"\\\"-L/usr/lib/llvm-11/lib -Wl,--as-needed -Wl,-rpath,'/pgsql/install/master/lib',--enable-new-dtags\\\"\" -DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\" -DVAL_LIBS=\"\\\"-lpgcommon -lpgport -llz4 -lssl -lcrypto -lz -lreadline -lpthread -lrt -ldl -lm \\\"\" -c -o relpath.o /pgsql/source/master/src/common/relpath.c -MMD -MP -MF .deps/relpath.Po\n[...]\nIn file included from /pgsql/source/master/src/include/postgres.h:47,\n from /pgsql/source/master/src/common/hashfn.c:24:\n/pgsql/source/master/src/include/utils/elog.h:75:10: fatal error: utils/errcodes.h: No existe el fichero o el directorio\n 75 | #include \"utils/errcodes.h\"\n | ^~~~~~~~~~~~~~~~~~\ncompilation terminated.\n[...]\nmake[2]: *** [../../src/Makefile.global:945: hashfn.o] Error 1\nmake[2]: *** Se espera a que terminen otras tareas....\n/pgsql/source/master/src/common/relpath.c:21:10: fatal error: catalog/pg_tablespace_d.h: No existe el fichero o el directorio\n 21 | #include \"catalog/pg_tablespace_d.h\"\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~\ncompilation terminated.\nmake[2]: *** [../../src/Makefile.global:945: relpath.o] Error 1\nmake[2]: se sale del directorio '/home/alvherre/Code/pgsql-build/master/src/common'\nmake[1]: *** [Makefile:42: all-common-recurse] Error 2\nmake[1]: se sale del directorio '/home/alvherre/Code/pgsql-build/master/src'\nmake: *** [GNUmakefile:21: world-bin-src-recurse] Error 2\n\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — 
https://www.EnterpriseDB.com/\n\"I must say, I am absolutely impressed with what pgsql's implementation of\nVALUES allows me to do. It's kind of ridiculous how much \"work\" goes away in\nmy code. Too bad I can't do this at work (Oracle 8/9).\" (Tom Allison)\n http://archives.postgresql.org/pgsql-general/2007-06/msg00016.php\n\n\n",
"msg_date": "Tue, 26 Jul 2022 17:40:19 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: make -C libpq check fails obscurely if tap tests are disabled"
}
] |
[
{
"msg_contents": "Hi,\n\nI realized that standby_desc_running_xacts() in standbydesc.c doesn't\ndescribe subtransaction XIDs. I've attached the patch to improve the\ndescription. Here is an example by pg_waldump:\n\n* HEAD\nrmgr: Standby len (rec/tot): 58/ 58, tx: 0, lsn:\n0/01D0C608, prev 0/01D0C5D8, desc: RUNNING_XACTS nextXid 1050\nlatestCompletedXid 1047 oldestRunningXid 1048; 1 xacts: 1048\n\n* w/ patch\nrmgr: Standby len (rec/tot): 58/ 58, tx: 0, lsn:\n0/01D0C608, prev 0/01D0C5D8, desc: RUNNING_XACTS nextXid 1050\nlatestCompletedXid 1047 oldestRunningXid 1048; 1 xacts: 1048; 1\nsubxacts: 1049\n\nPlease review it.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Thu, 21 Jul 2022 10:13:25 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "\n\nOn 2022/07/21 10:13, Masahiko Sawada wrote:\n> Hi,\n> \n> I realized that standby_desc_running_xacts() in standbydesc.c doesn't\n> describe subtransaction XIDs. I've attached the patch to improve the\n> description.\n\n+1\n\nThe patch looks good to me.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 21 Jul 2022 11:21:09 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "At Thu, 21 Jul 2022 11:21:09 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2022/07/21 10:13, Masahiko Sawada wrote:\n> > Hi,\n> > I realized that standby_desc_running_xacts() in standbydesc.c doesn't\n> > describe subtransaction XIDs. I've attached the patch to improve the\n> > description.\n> \n> +1\n> \n> The patch looks good to me.\n\nThe subxids can reach TOTAL_MAX_CACHED_SUBXIDS =\nPGPROC_MAX_CACHED_SUBXIDS(=64) * PROCARRAY_MAXPROCS. xact_desc_commit\nalso shows subtransactions but they are at maximum 64. I feel like\n-0.3 if there's no obvious advantage in showing them.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 21 Jul 2022 16:29:44 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 4:29 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 21 Jul 2022 11:21:09 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n> >\n> >\n> > On 2022/07/21 10:13, Masahiko Sawada wrote:\n> > > Hi,\n> > > I realized that standby_desc_running_xacts() in standbydesc.c doesn't\n> > > describe subtransaction XIDs. I've attached the patch to improve the\n> > > description.\n> >\n> > +1\n> >\n> > The patch looks good to me.\n>\n> The subxids can reach TOTAL_MAX_CACHED_SUBXIDS =\n> PGPROC_MAX_CACHED_SUBXIDS(=64) * PROCARRAY_MAXPROCS. xact_desc_commit\n> also shows subtransactions but they are at maximum 64. I feel like\n> -0.3 if there's no obvious advantage showing them.\n\nxxx_desc() functions are debugging purpose functions as they are used\nby pg_waldump and pg_walinspect etc. I think such functions should\nshow all contents unless there is reason to hide them. Particularly,\nstandby_desc_running_xacts() currently shows subtransaction\ninformation only when subtransactions are overflowed, which got me\nconfused when inspecting WAL records.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 21 Jul 2022 16:58:45 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "At Thu, 21 Jul 2022 16:58:45 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \n> > > The patch looks good to me.\n\nBy the way +1 to this.\n\n> > The subxids can reach TOTAL_MAX_CACHED_SUBXIDS =\n> > PGPROC_MAX_CACHED_SUBXIDS(=64) * PROCARRAY_MAXPROCS. xact_desc_commit\n> > also shows subtransactions but they are at maximum 64. I feel like\n> > -0.3 if there's no obvious advantage showing them.\n> \n> xxx_desc() functions are debugging purpose functions as they are used\n> by pg_waldump and pg_walinspect etc. I think such functions should\n> show all contents unless there is reason to hide them. Particularly,\n> standby_desc_running_xacts() currently shows subtransaction\n> information only when subtransactions are overflowed, which got me\n> confused when inspecting WAL records.\n\nI'm not sure that confusion alone can justify it, but after finding\nthat logicalmsg_desc dumps the whole content, I no longer feel opposed to\nshowing subxacts. \n\nJust for information, but as far as I saw, relmap_desc shows only the\nlength of \"data\" but doesn't dump all of it. generic_desc behaves the\nsame way. Thus we could just show \"%d subxacts\" instead of dumping\nout all the subxact ids just to avoid that confusion.\n\nHowever, again, I no longer object to showing all subxids.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 21 Jul 2022 17:28:40 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "Hi\n\nOn Thu, Jul 21, 2022 at 6:44 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi,\n>\n> I realized that standby_desc_running_xacts() in standbydesc.c doesn't\n> describe subtransaction XIDs. I've attached the patch to improve the\n> description. Here is an example by pg_wlaldump:\n>\n> * HEAD\n> rmgr: Standby len (rec/tot): 58/ 58, tx: 0, lsn:\n> 0/01D0C608, prev 0/01D0C5D8, desc: RUNNING_XACTS nextXid 1050\n> latestCompletedXid 1047 oldestRunningXid 1048; 1 xacts: 1048\n>\n> * w/ patch\n> rmgr: Standby len (rec/tot): 58/ 58, tx: 0, lsn:\n> 0/01D0C608, prev 0/01D0C5D8, desc: RUNNING_XACTS nextXid 1050\n> latestCompletedXid 1047 oldestRunningXid 1048; 1 xacts: 1048; 1\n> subxacts: 1049\n>\n\nI think this is a good addition to debugging info. +1\n\nIf we are going to add 64 subxid numbers then it would help if we\ncould be more verbose and print \"subxid overflowed\" instead of \"subxid\novf\".\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 21 Jul 2022 18:42:53 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 10:13 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Hi\n>\n> On Thu, Jul 21, 2022 at 6:44 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > I realized that standby_desc_running_xacts() in standbydesc.c doesn't\n> > describe subtransaction XIDs. I've attached the patch to improve the\n> > description. Here is an example by pg_wlaldump:\n> >\n> > * HEAD\n> > rmgr: Standby len (rec/tot): 58/ 58, tx: 0, lsn:\n> > 0/01D0C608, prev 0/01D0C5D8, desc: RUNNING_XACTS nextXid 1050\n> > latestCompletedXid 1047 oldestRunningXid 1048; 1 xacts: 1048\n> >\n> > * w/ patch\n> > rmgr: Standby len (rec/tot): 58/ 58, tx: 0, lsn:\n> > 0/01D0C608, prev 0/01D0C5D8, desc: RUNNING_XACTS nextXid 1050\n> > latestCompletedXid 1047 oldestRunningXid 1048; 1 xacts: 1048; 1\n> > subxacts: 1049\n> >\n>\n> I think this is a good addition to debugging info. +1\n>\n> If we are going to add 64 subxid numbers then it would help if we\n> could be more verbose and print \"subxid overflowed\" instead of \"subxid\n> ovf\".\n\nYeah, it looks better so agreed. I've attached an updated patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Thu, 28 Jul 2022 11:11:15 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "Thanks Masahiko for the updated patch. It looks good to me.\n\nI wonder whether the logic should be, similar\nto ProcArrayApplyRecoveryInfo()\n if (xlrec->subxid_overflow)\n...\nelse if (xlrec->subxcnt > 0)\n...\n\nBut you may ignore it.\n\n\n--\nBest Wishes,\nAshutosh\n\nOn Thu, Jul 28, 2022 at 7:41 AM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> On Thu, Jul 21, 2022 at 10:13 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > Hi\n> >\n> > On Thu, Jul 21, 2022 at 6:44 AM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n> > >\n> > > Hi,\n> > >\n> > > I realized that standby_desc_running_xacts() in standbydesc.c doesn't\n> > > describe subtransaction XIDs. I've attached the patch to improve the\n> > > description. Here is an example by pg_wlaldump:\n> > >\n> > > * HEAD\n> > > rmgr: Standby len (rec/tot): 58/ 58, tx: 0, lsn:\n> > > 0/01D0C608, prev 0/01D0C5D8, desc: RUNNING_XACTS nextXid 1050\n> > > latestCompletedXid 1047 oldestRunningXid 1048; 1 xacts: 1048\n> > >\n> > > * w/ patch\n> > > rmgr: Standby len (rec/tot): 58/ 58, tx: 0, lsn:\n> > > 0/01D0C608, prev 0/01D0C5D8, desc: RUNNING_XACTS nextXid 1050\n> > > latestCompletedXid 1047 oldestRunningXid 1048; 1 xacts: 1048; 1\n> > > subxacts: 1049\n> > >\n> >\n> > I think this is a good addition to debugging info. +1\n> >\n> > If we are going to add 64 subxid numbers then it would help if we\n> > could be more verbose and print \"subxid overflowed\" instead of \"subxid\n> > ovf\".\n>\n> Yeah, it looks better so agreed. I've attached an updated patch.\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> EDB: https://www.enterprisedb.com/\n>",
"msg_date": "Thu, 28 Jul 2022 09:56:33 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "At Thu, 28 Jul 2022 09:56:33 +0530, Ashutosh Bapat <ashutosh.bapat@enterprisedb.com> wrote in \n> Thanks Masahiko for the updated patch. It looks good to me.\n> \n> I wonder whether the logic should be, similar\n> to ProcArrayApplyRecoveryInfo()\n> if (xlrec->subxid_overflow)\n> ...\n> else if (xlrec->subxcnt > 0)\n> ...\n> \n> But you may ignore it.\n\nEither is fine if we assume the record is sound, but since it is\ndebugging output, I think we should always output the information *for\nboth*. The following change doesn't change the output for a sound\nrecord.\n\n====\n\tif (xlrec->subxcnt > 0)\n\t{\n\t\tappendStringInfo(buf, \"; %d subxacts:\", xlrec->subxcnt);\n\t\tfor (i = 0; i < xlrec->subxcnt; i++)\n\t\t\tappendStringInfo(buf, \" %u\", xlrec->xids[xlrec->xcnt + i]);\n\t}\n-\telse if (xlrec->subxid_overflow)\n+\tif (xlrec->subxid_overflow)\n\t\tappendStringInfoString(buf, \"; subxid overflowed\");\n====\n\nAnother point is that if the xid/subxid lists get long, I see it annoying\nthat the \"overflowed\" message goes far away to the end of the long\nline. Couldn't we rearrange the item order of the line as follows?\n\nnextXid %u latestCompletedXid %u oldestRunningXid %u;[ subxid overflowed;][ %d xacts: %u %u ...;][ subxacts: %u %u ..]\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 28 Jul 2022 15:24:54 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 3:24 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 28 Jul 2022 09:56:33 +0530, Ashutosh Bapat <ashutosh.bapat@enterprisedb.com> wrote in\n> > Thanks Masahiko for the updated patch. It looks good to me.\n> >\n> > I wonder whether the logic should be, similar\n> > to ProcArrayApplyRecoveryInfo()\n> > if (xlrec->subxid_overflow)\n> > ...\n> > else if (xlrec->subxcnt > 0)\n> > ...\n> >\n> > But you may ignore it.\n>\n> Either is fine if we asuume the record is sound, but since it is\n> debugging output, I think we should always output the information *for\n> both* . The following change doesn't change the output for a sound\n> record.\n>\n> ====\n> if (xlrec->subxcnt > 0)\n> {\n> appendStringInfo(buf, \"; %d subxacts:\", xlrec->subxcnt);\n> for (i = 0; i < xlrec->subxcnt; i++)\n> appendStringInfo(buf, \" %u\", xlrec->xids[xlrec->xcnt + i]);\n> }\n> - else if (xlrec->subxid_overflow)\n> + if (xlrec->subxid_overflow)\n> appendStringInfoString(buf, \"; subxid overflowed\");\n> ====\n\nDo you mean that both could be true at the same time? If I read\nGetRunningTransactionData() correctly, that doesn't happen.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 28 Jul 2022 15:53:33 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "At Thu, 28 Jul 2022 15:53:33 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \n> >\n> Do you mean that both could be true at the same time? If I read\n> GetRunningTransactionData() correctly, that doesn't happen.\n\nSo, I wrote \"since it is debugging output\", and \"fine if we asuume the\nrecord is sound\". Is there any trouble with assuming that both *can*\nhappen at once? If something's broken, it will be reflected in the\noutput.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 28 Jul 2022 16:29:32 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "Sorry for the late reply.\n\nOn Thu, Jul 28, 2022 at 4:29 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 28 Jul 2022 15:53:33 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in\n> > >\n> > Do you mean that both could be true at the same time? If I read\n> > GetRunningTransactionData() correctly, that doesn't happen.\n>\n> So, I wrote \"since it is debugging output\", and \"fine if we asuume the\n> record is sound\". Is it any trouble with assuming the both *can*\n> happen at once? If something's broken, it will be reflected in the\n> output.\n\nFair point. We may not need to interpret the contents.\n\nOn Thu, Jul 28, 2022 at 3:24 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Another point is if the xid/subxid lists get long, I see it annoying\n> that the \"overflowed\" messages goes far away to the end of the long\n> line. Couldn't we rearrange the item order of the line as the follows?\n>\n> nextXid %u latestCompletedXid %u oldestRunningXid %u;[ subxid overflowed;][ %d xacts: %u %u ...;][ subxacts: %u %u ..]\n>\n\nI'm concerned that this keeps the two pieces of subxact information apart. Given\nthat showing both individual subxacts and \"overflow\" is a bug, I think\nwe can output like:\n\n if (xlrec->subxcnt > 0)\n {\n appendStringInfo(buf, \"; %d subxacts:\", xlrec->subxcnt);\n for (i = 0; i < xlrec->subxcnt; i++)\n appendStringInfo(buf, \" %u\", xlrec->xids[xlrec->xcnt + i]);\n }\n\n if (xlrec->subxid_overflow)\n appendStringInfoString(buf, \"; subxid overflowed\");\n\nOr we can output the \"subxid overflowed\" first.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 15 Aug 2022 11:16:56 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "At Mon, 15 Aug 2022 11:16:56 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \n> Sorry for the late reply.\n\nNo worries. Anyway I was in a long (as a Japanese:) vacation.\n\n> On Thu, Jul 28, 2022 at 4:29 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > record is sound\". Is it any trouble with assuming the both *can*\n> > happen at once? If something's broken, it will be reflected in the\n> > output.\n> \n> Fair point. We may not need to interpret the contents.\n\nYeah.\n\n> On Thu, Jul 28, 2022 at 3:24 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > Another point is if the xid/subxid lists get long, I see it annoying\n> > that the \"overflowed\" messages goes far away to the end of the long\n> > line. Couldn't we rearrange the item order of the line as the follows?\n> >\n> > nextXid %u latestCompletedXid %u oldestRunningXid %u;[ subxid overflowed;][ %d xacts: %u %u ...;][ subxacts: %u %u ..]\n> >\n> \n> I'm concerned that we have two information of subxact apart. Given\n> that showing both individual subxacts and \"overflow\" is a bug, I think\n\nBug or every kind of breakage of the file. So if \"overflow\"ed, we\ndon't need to show a subxid there but I thought that there's no need\nto change behavior in that case since it scarcely happens.\n\n> we can output like:\n> \n> if (xlrec->subxcnt > 0)\n> {\n> appendStringInfo(buf, \"; %d subxacts:\", xlrec->subxcnt);\n> for (i = 0; i < xlrec->subxcnt; i++)\n> appendStringInfo(buf, \" %u\", xlrec->xids[xlrec->xcnt + i]);\n> }\n> \n> if (xlrec->subxid_overflow)\n> appendStringInfoString(buf, \"; subxid overflowed\");\n\nYea, it seems like what I proposed upthread. 
I'm fine with that since\nit is an abnormal situation.\n\n> Or we can output the \"subxid overwlowed\" first.\n\n(I prefer this, as that doesn't change the output in the normal case\nbut the abnormality will be easily seen if it happens.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 23 Aug 2022 11:53:30 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 11:53 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 15 Aug 2022 11:16:56 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in\n> > Sorry for the late reply.\n>\n> No worries. Anyway I was in a long (as a Japanese:) vacation.\n>\n> > On Thu, Jul 28, 2022 at 4:29 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > record is sound\". Is it any trouble with assuming the both *can*\n> > > happen at once? If something's broken, it will be reflected in the\n> > > output.\n> >\n> > Fair point. We may not need to interpret the contents.\n>\n> Yeah.\n>\n> > On Thu, Jul 28, 2022 at 3:24 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > Another point is if the xid/subxid lists get long, I see it annoying\n> > > that the \"overflowed\" messages goes far away to the end of the long\n> > > line. Couldn't we rearrange the item order of the line as the follows?\n> > >\n> > > nextXid %u latestCompletedXid %u oldestRunningXid %u;[ subxid overflowed;][ %d xacts: %u %u ...;][ subxacts: %u %u ..]\n> > >\n> >\n> > I'm concerned that we have two information of subxact apart. Given\n> > that showing both individual subxacts and \"overflow\" is a bug, I think\n>\n> Bug or every kind of breakage of the file. So if \"overflow\"ed, we\n> don't need to show a subxid there but I thought that there's no need\n> to change behavior in that case since it scarcely happens.\n>\n> > we can output like:\n> >\n> > if (xlrec->subxcnt > 0)\n> > {\n> > appendStringInfo(buf, \"; %d subxacts:\", xlrec->subxcnt);\n> > for (i = 0; i < xlrec->subxcnt; i++)\n> > appendStringInfo(buf, \" %u\", xlrec->xids[xlrec->xcnt + i]);\n> > }\n> >\n> > if (xlrec->subxid_overflow)\n> > appendStringInfoString(buf, \"; subxid overflowed\");\n>\n> Yea, it seems like what I proposed upthread. 
I'm fine with that since\n> it is an abonormal situation.\n>\n> > Or we can output the \"subxid overwlowed\" first.\n>\n> (I prefer this, as that doesn't change the output in the normal case\n> but the anormality will be easilly seen if happens.)\n>\n\nUpdated the patch accordingly.\n\nRegards,\n\n-- \nMasahiko Sawada",
"msg_date": "Fri, 9 Sep 2022 09:48:05 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "At Fri, 9 Sep 2022 09:48:05 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \n> On Tue, Aug 23, 2022 at 11:53 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Mon, 15 Aug 2022 11:16:56 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in\n> > > Or we can output the \"subxid overwlowed\" first.\n> >\n> > (I prefer this, as that doesn't change the output in the normal case\n> > but the anormality will be easilly seen if happens.)\n> >\n> \n> Updated the patch accordingly.\n\nThanks! Considering the discussion so far, how about adding a comment\nlike this?\n\n\n +\t\tappendStringInfoString(buf, \"; subxid overflowed\");\n +\n++\t/*\n++\t * subxids and subxid_overflow are mutually exclusive, but we deliberately\n++\t * print both simultaneously in case the record is broken.\n++\t */\n +\tif (xlrec->subxcnt > 0)\n +\t{\n +\t\tappendStringInfo(buf, \"; %d subxacts:\", xlrec->subxcnt);\n +\t\tfor (i = 0; i < xlrec->subxcnt; i++)\n +\t\t\tappendStringInfo(buf, \" %u\", xlrec->xids[xlrec->xcnt + i]);\n +\t}\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 09 Sep 2022 16:02:01 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "On Fri, Sep 9, 2022 at 6:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Updated the patch accordingly.\n>\n\nI have created two xacts each with savepoints and after your patch,\nthe record will show xacts/subxacts information as below:\n\nrmgr: Standby len (rec/tot): 74/ 74, tx: 0, lsn:\n0/014AC238, prev 0/014AC1F8, desc: RUNNING_XACTS nextXid 733\nlatestCompletedXid 726 oldestRunningXid 727; 2 xacts: 729 727; 4\nsubxacts: 730 731 728 732\n\nThere is no way to associate which subxacts belong to which xact, so\nwill it be useful, and if so, how? I guess we probably don't need it\nhere because the describe records just display the record information.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 14 Sep 2022 15:03:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "Hi,\n\nWhen translating error messages, Alexander Lakhin \n(<exclusion@gmail.com>) noticed some inconsistencies so I prepared a \nsmall patch to fix those.\n\nPlease see attached.\n\n-- \nEkaterina Kiryanova\nTechnical Writer\nPostgres Professional\nthe Russian PostgreSQL Company",
"msg_date": "Wed, 14 Sep 2022 13:01:24 +0300",
"msg_from": "Ekaterina Kiryanova <e.kiryanova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Inconsistencies in error messages"
},
{
"msg_contents": "On Wed, Sep 14, 2022 at 5:01 PM Ekaterina Kiryanova\n<e.kiryanova@postgrespro.ru> wrote:\n>\n> Hi,\n>\n> When translating error messages, Alexander Lakhin\n> (<exclusion@gmail.com>) noticed some inconsistencies so I prepared a\n> small patch to fix those.\n\n+1\n\nThis one\n\n- errmsg(\"background worker \\\"%s\\\": background worker without shared\nmemory access are not supported\",\n+ errmsg(\"background worker \\\"%s\\\": background workers without shared\nmemory access are not supported\",\n\nis a grammar error so worth backpatching, but the rest are cosmetic.\n\nWill commit this way unless there are objections.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Sep 2022 17:25:57 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistencies in error messages"
},
{
"msg_contents": "On 2022-Sep-14, John Naylor wrote:\n\n> This one\n> \n> + errmsg(\"background worker \\\"%s\\\": background workers without shared\n> memory access are not supported\",\n> \n> is a grammar error so worth backpatching, but the rest are cosmetic.\n> \n> Will commit this way unless there are objections.\n\n+1\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"I dream about dreams about dreams\", sang the nightingale\nunder the pale moon (Sandman)\n\n\n",
"msg_date": "Wed, 14 Sep 2022 12:42:42 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistencies in error messages"
},
{
"msg_contents": "On Wed, Sep 14, 2022 at 6:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Sep 9, 2022 at 6:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Updated the patch accordingly.\n> >\n>\n> I have created two xacts each with savepoints and after your patch,\n> the record will show xacts/subxacts information as below:\n>\n> rmgr: Standby len (rec/tot): 74/ 74, tx: 0, lsn:\n> 0/014AC238, prev 0/014AC1F8, desc: RUNNING_XACTS nextXid 733\n> latestCompletedXid 726 oldestRunningXid 727; 2 xacts: 729 727; 4\n> subxacts: 730 731 728 732\n>\n> There is no way to associate which subxacts belong to which xact, so\n> will it be useful, and if so, how? I guess we probably don't need it\n> here because the describe records just display the record information.\n\nI think it's useful for debugging purposes. For instance, when I was\nworking on the fix 68dcce247f1a13318613a0e27782b2ca21a4ceb7\n(REL_14_STABLE), I checked if all initial running transactions\nincluding subtransactions are properly stored and purged by checking\nthe debug logs and pg_waldump output. Actually, until I realize that\nthe description of RUNNING_XACTS doesn't show subtransaction\ninformation, I was confused by the fact that the saved initial running\ntransactions didn't match the description shown by pg_waldump.\n\nRegards,\n\n-- \nMasahiko Sawada\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 15 Sep 2022 16:55:37 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "On Thu, Sep 15, 2022 at 1:26 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Sep 14, 2022 at 6:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Sep 9, 2022 at 6:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > Updated the patch accordingly.\n> > >\n> >\n> > I have created two xacts each with savepoints and after your patch,\n> > the record will show xacts/subxacts information as below:\n> >\n> > rmgr: Standby len (rec/tot): 74/ 74, tx: 0, lsn:\n> > 0/014AC238, prev 0/014AC1F8, desc: RUNNING_XACTS nextXid 733\n> > latestCompletedXid 726 oldestRunningXid 727; 2 xacts: 729 727; 4\n> > subxacts: 730 731 728 732\n> >\n> > There is no way to associate which subxacts belong to which xact, so\n> > will it be useful, and if so, how? I guess we probably don't need it\n> > here because the describe records just display the record information.\n>\n> I think it's useful for debugging purposes. For instance, when I was\n> working on the fix 68dcce247f1a13318613a0e27782b2ca21a4ceb7\n> (REL_14_STABLE), I checked if all initial running transactions\n> including subtransactions are properly stored and purged by checking\n> the debug logs and pg_waldump output. Actually, until I realize that\n> the description of RUNNING_XACTS doesn't show subtransaction\n> information, I was confused by the fact that the saved initial running\n> transactions didn't match the description shown by pg_waldump.\n>\n\nI see your point but I am still worried due to the concern raised by\nHoriguchi-San earlier in this thread that the total number could be as\nlarge as TOTAL_MAX_CACHED_SUBXIDS. I think if we want to include\ninformation only on the number of subxacts then that is clearly an\nimprovement without any disadvantage.\n\nDoes anyone else have an opinion on this matter?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 15 Sep 2022 17:39:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "At Thu, 15 Sep 2022 17:39:17 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> I see your point but I am still worried due to the concern raised by\n> Horiguchi-San earlier in this thread that the total number could be as\n> large as TOTAL_MAX_CACHED_SUBXIDS. I think if we want to include\n> information only on the number of subxacts then that is clearly an\n> improvement without any disadvantage.\n> \n> Does anyone else have an opinion on this matter?\n\nThe doesn't seem to work for Sawada-san's case, but I'm fine with\nthat:p\n\nPutting an arbitrary upper-bound on the number of subxids to print\nmight work? I'm not sure how we can determine the upper-bound, though.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 16 Sep 2022 10:55:53 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "On Wed, Sep 14, 2022 at 5:25 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> Will commit this way unless there are objections.\n\nI forgot to mention yesterday, but this is done.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 16 Sep 2022 09:48:03 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistencies in error messages"
},
{
"msg_contents": "On Fri, Sep 16, 2022 at 10:55:53AM +0900, Kyotaro Horiguchi wrote:\n> Putting an arbitrary upper-bound on the number of subxids to print\n> might work? I'm not sure how we can determine the upper-bound, though.\n\nYou could hardcode it so as it does not blow up the whole view, say\n20~30. Anyway, I agree with the concern raised upthread about the\namount of extra data this would add to the output, so having at least\nthe number of subxids would be better than the existing state of\nthings telling only if the list of overflowed. So let's stick to\nthat.\n\nHere is another idea for the list of subxids: show the full list of\nsubxids only when using --xid. We could always introduce an extra\nswitch, but that does not seem worth the extra workload here.\n--\nMichael",
"msg_date": "Mon, 3 Oct 2022 17:15:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "On Mon, Oct 3, 2022 at 5:15 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Sep 16, 2022 at 10:55:53AM +0900, Kyotaro Horiguchi wrote:\n> > Putting an arbitrary upper-bound on the number of subxids to print\n> > might work? I'm not sure how we can determine the upper-bound, though.\n>\n> You could hardcode it so as it does not blow up the whole view, say\n> 20~30. Anyway, I agree with the concern raised upthread about the\n> amount of extra data this would add to the output, so having at least\n> the number of subxids would be better than the existing state of\n> things telling only if the list of overflowed. So let's stick to\n> that.\n\nWhy are only subtransaction information in XLOG_RUNNING_XACTS limited?\nI think we have other information shown without bounds such as lock\ninformation in XLOG_STANDBY_LOCK and invalidation messages in\nXLOG_INVALIDATIONS.\n\nRegards,\n\n-- \nMasahiko Sawada\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 3 Oct 2022 17:49:23 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "On Mon, Oct 3, 2022 at 1:46 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Sep 16, 2022 at 10:55:53AM +0900, Kyotaro Horiguchi wrote:\n> > Putting an arbitrary upper-bound on the number of subxids to print\n> > might work? I'm not sure how we can determine the upper-bound, though.\n>\n> You could hardcode it so as it does not blow up the whole view, say\n> 20~30. Anyway, I agree with the concern raised upthread about the\n> amount of extra data this would add to the output, so having at least\n> the number of subxids would be better than the existing state of\n> things telling only if the list of overflowed. So let's stick to\n> that.\n\nI spent some time today reading this. As others said upthread, the\noutput can be more verbose if all the backends are running max\nsubtransactions or subtransactions overflow occurred in all the\nbackends. This can blow-up the output.\n\nHard-limiting the number of subxids isn't a good idea because the\nwhole purpose of it is gone.\n\nAs Amit said upthread, we can't really link subtxns with the\ncorresponding txns by looking at the output of the pg_waldump. And I\nunderstand that having the subxid info helped Mashaiko-san debug an\nissue. Wouldn't it be better to have a SQL-callable function that can\nreturn txn and all its subxid information? I'm not sure if it's useful\nor worth at all because the contents are so dynamic. I'm not sure if\nwe have one already or if it's possible to have one such function.\n\n> Here is another idea for the list of subxids: show the full list of\n> subxids only when using --xid. We could always introduce an extra\n> switch, but that does not seem worth the extra workload here.\n\nThis seems interesting, but I agree that the extra code isn't worth it.\n\nFWIW, I quickly looked at few other resource managers XXXX_desc\nfunctions to find if they output all the record's info:\n\nxlog_desc - doesn't show restart point timestamp for xl_restore_point\nrecord type and\nlogicalmsg_desc - doesn't show the database id that generated the record\nclog_desc - doesn't show oldest xact db of xl_clog_truncate record\nand there may be more.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 14 Oct 2022 15:38:20 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "On Fri, Oct 14, 2022 at 3:38 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Oct 3, 2022 at 1:46 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Fri, Sep 16, 2022 at 10:55:53AM +0900, Kyotaro Horiguchi wrote:\n> > > Putting an arbitrary upper-bound on the number of subxids to print\n> > > might work? I'm not sure how we can determine the upper-bound, though.\n> >\n> > You could hardcode it so as it does not blow up the whole view, say\n> > 20~30. Anyway, I agree with the concern raised upthread about the\n> > amount of extra data this would add to the output, so having at least\n> > the number of subxids would be better than the existing state of\n> > things telling only if the list of overflowed. So let's stick to\n> > that.\n>\n> I spent some time today reading this. As others said upthread, the\n> output can be more verbose if all the backends are running max\n> subtransactions or subtransactions overflow occurred in all the\n> backends.\n>\n\nAs far as I can understand, this contains subtransactions only when\nthey didn't overflow. The latest information provided by Sawada-San\nfor similar records (XLOG_STANDBY_LOCK and XLOG_INVALIDATIONS) made me\nthink that maybe we are just over-worried about the worst case.\n\n>\n This can blow-up the output.\n>\n\nIf we get some reports like that, then we can probably use Michael's\nidea of displaying additional information with a separate flag.\n\n> Hard-limiting the number of subxids isn't a good idea because the\n> whole purpose of it is gone.\n>\n\nAgreed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 15 Oct 2022 16:58:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "On Sat, Oct 15, 2022 at 4:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > I spent some time today reading this. As others said upthread, the\n> > output can be more verbose if all the backends are running max\n> > subtransactions or subtransactions overflow occurred in all the\n> > backends.\n> >\n>\n> As far as I can understand, this contains subtransactions only when\n> they didn't overflow. The latest information provided by Sawada-San\n> for similar records (XLOG_STANDBY_LOCK and XLOG_INVALIDATIONS) made me\n> think that maybe we are just over-worried about the worst case.\n\nAgreed. I see the below comment, which means when\nxlrec->subxid_overflow is set to true, there will not be any\nsubtransaction ids logged in the WAL record.\n\n * Note that if any transaction has overflowed its cached subtransactions\n * then there is no real need include any subtransactions.\n */\nRunningTransactions\nGetRunningTransactionData(void)\n\nIf my above understanding is correct, having something like below does\nno harm, like Masahiko-san's one of the initial patches, no? I'm also\nfine with the way it is in the v3 patch.\nif (xlrec->subxid_overflow)\n{\n /*\n * Server doesn't include any subtransactions if any transaction has\n * overflowed its cached subtransactions.\n */\n Assert(xlrec->subxcnt == 0)\n appendStringInfoString(buf, \"; subxid overflowed\");\n}\nelse if (xlrec->subxcnt > 0)\n{\n appendStringInfo(buf, \"; %d subxacts:\", xlrec->subxcnt);\n for (i = 0; i < xlrec->subxcnt; i++)\n appendStringInfo(buf, \" %u\", xlrec->xids[xlrec->xcnt + i]);\n}\n\nThe v3 patch posted upthread LGTM and I marked it as RfC. I'm just\nreattaching the v3 patch posted upthread herewith so that the cfbot\ncan test the right patch - https://commitfest.postgresql.org/40/3779/.\n\n> >\n> This can blow-up the output.\n> >\n>\n> If we get some reports like that, then we can probably use Michael's\n> idea of displaying additional information with a separate flag.\n\nAgreed.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sun, 16 Oct 2022 12:32:56 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "At Sun, 16 Oct 2022 12:32:56 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Sat, Oct 15, 2022 at 4:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > I spent some time today reading this. As others said upthread, the\n> > > output can be more verbose if all the backends are running max\n> > > subtransactions or subtransactions overflow occurred in all the\n> > > backends.\n> > >\n> >\n> > As far as I can understand, this contains subtransactions only when\n> > they didn't overflow. The latest information provided by Sawada-San\n> > for similar records (XLOG_STANDBY_LOCK and XLOG_INVALIDATIONS) made me\n> > think that maybe we are just over-worried about the worst case.\n> \n> Agreed. I see the below comment, which means when\n> xlrec->subxid_overflow is set to true, there will not be any\n> subtransaction ids logged in the WAL record.\n\nSince I categorized this tool as semi-debugging purpose so I'm fine\nthat sometimes very long lines are seen. In the first place it is\nalready seen in, for example, transaction commit records. They can be\n30k characters long by many relfile locators, stats locators,\ninvalidations and snapshots, when 100 relations are dropped.\n\n> If my above understanding is correct, having something like below does\n> no harm, like Masahiko-san's one of the initial patches, no? I'm also\n> fine with the way it is in the v3 patch.\n\nYeah, v3 works exactly the same way with the initial patch, except\nwhen something bad happens in that record. So *I* thought that it's\nrather better that the tool describes records as-is (even if only for\nthis record..:p) rather than how the broken records are recognized by\nthe recovery code.\n\n> The v3 patch posted upthread LGTM and I marked it as RfC. I'm just\n> reattaching the v3 patch posted upthread herewith so that the cfbot\n> can test the right patch - https://commitfest.postgresql.org/40/3779/.\n> \n> > >\n> > This can blow-up the output.\n> > >\n> >\n> > If we get some reports like that, then we can probably use Michael's\n> > idea of displaying additional information with a separate flag.\n> \n> Agreed.\n\nAgreed, but maybe we need to recheck what should be hidden (or\nabbreviated) in the concise (or terse?) mode.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 17 Oct 2022 10:16:25 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "On Mon, Oct 17, 2022 at 6:46 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Sun, 16 Oct 2022 12:32:56 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > On Sat, Oct 15, 2022 at 4:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > > I spent some time today reading this. As others said upthread, the\n> > > > output can be more verbose if all the backends are running max\n> > > > subtransactions or subtransactions overflow occurred in all the\n> > > > backends.\n> > > >\n> > >\n> > > As far as I can understand, this contains subtransactions only when\n> > > they didn't overflow. The latest information provided by Sawada-San\n> > > for similar records (XLOG_STANDBY_LOCK and XLOG_INVALIDATIONS) made me\n> > > think that maybe we are just over-worried about the worst case.\n> >\n> > Agreed. I see the below comment, which means when\n> > xlrec->subxid_overflow is set to true, there will not be any\n> > subtransaction ids logged in the WAL record.\n>\n> Since I categorized this tool as semi-debugging purpose so I'm fine\n> that sometimes very long lines are seen. In the first place it is\n> already seen in, for example, transaction commit records. They can be\n> 30k characters long by many relfile locators, stats locators,\n> invalidations and snapshots, when 100 relations are dropped.\n>\n> > If my above understanding is correct, having something like below does\n> > no harm, like Masahiko-san's one of the initial patches, no? I'm also\n> > fine with the way it is in the v3 patch.\n>\n> Yeah, v3 works exactly the same way with the initial patch, except\n> when something bad happens in that record. So *I* thought that it's\n> rather better that the tool describes records as-is (even if only for\n> this record..:p) rather than how the broken records are recognized by\n> the recovery code.\n>\n\nOkay, let's wait for two or three days and see if anyone thinks\ndifferently, otherwise, I'll push v3 after a bit more testing.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 17 Oct 2022 09:58:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "On Mon, Oct 17, 2022 at 09:58:31AM +0530, Amit Kapila wrote:\n> Okay, let's wait for two or three days and see if anyone thinks\n> differently, otherwise, I'll push v3 after a bit more testing.\n\nNo objections from here if you want to go ahead with v3 and print the\nfull set of subxids on top of the information about these\noverflowing.\n--\nMichael",
"msg_date": "Mon, 17 Oct 2022 14:53:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "On Mon, Oct 17, 2022 at 02:53:57PM +0900, Michael Paquier wrote:\n> No objections from here if you want to go ahead with v3 and print the\n> full set of subxids on top of the information about these\n> overflowing.\n\nWhile browsing the CF entries, this was still listed. Amit, any\nupdates?\n--\nMichael",
"msg_date": "Tue, 1 Nov 2022 16:22:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "On Tue, Nov 1, 2022 at 12:53 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Oct 17, 2022 at 02:53:57PM +0900, Michael Paquier wrote:\n> > No objections from here if you want to go ahead with v3 and print the\n> > full set of subxids on top of the information about these\n> > overflowing.\n>\n> While browsing the CF entries, this was still listed. Amit, any\n> updates?\n>\n\nI am planning to take care of this entry sometime this week.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 1 Nov 2022 16:33:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "On Tue, Nov 1, 2022 at 4:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Nov 1, 2022 at 12:53 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Mon, Oct 17, 2022 at 02:53:57PM +0900, Michael Paquier wrote:\n> > > No objections from here if you want to go ahead with v3 and print the\n> > > full set of subxids on top of the information about these\n> > > overflowing.\n> >\n> > While browsing the CF entries, this was still listed. Amit, any\n> > updates?\n> >\n>\n> I am planning to take care of this entry sometime this week.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 2 Nov 2022 11:54:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
},
{
"msg_contents": "2022年11月2日(水) 15:24 Amit Kapila <amit.kapila16@gmail.com>:\n>\n> On Tue, Nov 1, 2022 at 4:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Nov 1, 2022 at 12:53 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Mon, Oct 17, 2022 at 02:53:57PM +0900, Michael Paquier wrote:\n> > > > No objections from here if you want to go ahead with v3 and print the\n> > > > full set of subxids on top of the information about these\n> > > > overflowing.\n> > >\n> > > While browsing the CF entries, this was still listed. Amit, any\n> > > updates?\n> > >\n> >\n> > I am planning to take care of this entry sometime this week.\n> >\n>\n> Pushed.\n\nMarked as committed in the CF app, many thanks for closing this one out.\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Thu, 3 Nov 2022 16:37:38 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve description of XLOG_RUNNING_XACTS"
}
] |
[
{
"msg_contents": "Moving the report from security to -hackers on Noah's advice. Since\nthe function(s) involved in the crash are not present in any of the\nreleased versions, it is not considered a security issue.\n\nI can confirm that this is reproducible on the latest commit on\nmaster, 3c0bcdbc66. Below is the original analysis, followed by Noah's\nanalysis.\n\nTo be able to reproduce it, please note that perl support is required;\n hence `./configure --with-perl`.\n\nThe note about 'security concerns around on_plperl_init parameter',\nbelow, refers to now-fixed issue, at commit 13d8388151.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\nForwarded Conversation\nSubject: Unprivileged user can induce crash by using an SUSET param in PGOPTIONS\n------------------------\n\nFrom: Gurjeet Singh <gurjeet@singh.im>\nDate: Mon, Jul 4, 2022 at 10:24 AM\nTo: Postgres Security <security@postgresql.org>\nCc: Bossart, Nathan <bossartn@amazon.com>\n\n\nWhile poking at plperl's GUC in an internal discussion, I was able to\ninduce a crash (or an assertion failure in assert-enabled builds) as\nan unprivileged user.\n\nMy investigation so far has revealed that the code path for the\nfollowing condition has never been tested, and because of this, when a\nuser tries to override an SUSET param via PGOPTIONS, Postgres tries to\nperform a table lookup during process initialization. Because there's\nno transaction in progress, and because this table is not in the\nprimed caches, we end up with code trying to dereference an\nuninitialized CurrentResourceOwner.\n\nThe condition:\nUser specifies PGOPTIONS\"-c custom.param\"\n\"custom.param\" is used by an extension which is specified in\nsession_preload_libraries\nThe extension uses DefineCustom*Variable(\"custom.param\", PGC_SUSET)\ninside set_config_option()\n record->context == PGC_SUSET\n context == PGC_BACKEND\n calls pg_parameter_aclcheck() -> eventually leads to\nassertion-failure (or crash, when assertions are disabled)\n\nSee below for 1. How to reproduce, 2. Assertion failure stack, and 3.\nCrash stack\n\nWhen the user does not specify PGOPTIONS, the code in\ndefine_custom_variable() returns prematurely, after a failed bsearch()\nlookup, and hence avoids this bug.\n\nI think similar crash can be induced when the custom parameter is of\nkind PGC_SU_BACKEND, because the code to handle that also invokes\npg_parameter_aclcheck(). Also, I believe the same condition would\narise if the extension is specified local_preload_libraries.\n\nI haven't been able to think of an attack vector using this bug, but\nit can be used to cause a denial-of-service by an authenticated user.\nI'm sending this report to security list, instead of -hackers, out of\nabundance of caution; please feel free to move it to -hackers, if it's\nnot considered a security concern.\n\nI discovered this bug a couple of days ago, just before leaving on a\ntrip. But because of shortage of time over the weekend, I haven't been\nable to dig deeper into it. Since I don't think I'll be able to spend\nany significant time on it for at least another couple of days, I'm\nsending this report without a patch or a proposed fix.\n\nCC: Nathan, whose security concerns around on_plperl_init parameter\nlead to this discovery.\n\n[1]: How to reproduce\n\n$ psql -c 'create user test'\nCREATE ROLE\n\n$ psql -c \"alter system set session_preload_libraries='plperl'\"\nALTER SYSTEM\n\n$ # restart server\n\n$ psql -c 'show session_preload_libraries'\n session_preload_libraries\n---------------------------\n plperl\n(1 row)\n\n$ PGOPTIONS=\"-c plperl.on_plperl_init=\" psql -U test\npsql: error: connection to server on socket \"/tmp/.s.psql.5432\"\nfailed: server closed the connection unexpectedly\n ┆ This probably means the server terminated abnormally\n before or while processing the request.\n\n\n[2]: Assertion failure stack\n\nLOG: database system is ready to accept connections\nTRAP: FailedAssertion(\"IsTransactionState()\", File:\n\"../../../../../../POSTGRES/src/backend/utils/cache/catcache.c\", Line:\n1209, P\nID: 199868)\npostgres: test postgres [local]\nstartup(ExceptionalCondition+0xd0)[0x55e503a4e6c9]\npostgres: test postgres [local] startup(+0x7e069b)[0x55e503a2a69b]\npostgres: test postgres [local] startup(SearchCatCache1+0x3a)[0x55e503a2a56b]\npostgres: test postgres [local] startup(SearchSysCache1+0xc1)[0x55e503a46fe4]\npostgres: test postgres [local]\nstartup(pg_parameter_aclmask+0x6f)[0x55e50345f098]\npostgres: test postgres [local]\nstartup(pg_parameter_aclcheck+0x2d)[0x55e50346039c]\npostgres: test postgres [local] startup(set_config_option+0x450)[0x55e503a70727]\npostgres: test postgres [local] startup(+0x829ce8)[0x55e503a73ce8]\npostgres: test postgres [local]\nstartup(DefineCustomStringVariable+0xa4)[0x55e503a74306]\n/home/vagrant/dev/POSTGRES_builds/add_tz_param/db/lib/postgresql/plperl.so(_PG_init+0xd7)[0x7fed3d845425]\npostgres: test postgres [local] startup(+0x80cc50)[0x55e503a56c50]\npostgres: test postgres [local] startup(load_file+0x43)[0x55e503a566d9]\npostgres: test postgres [local] startup(+0x81ba89)[0x55e503a65a89]\npostgres: test postgres [local]\nstartup(process_session_preload_libraries+0x23)[0x55e503a65bc6]\npostgres: test postgres [local] startup(PostgresMain+0x23b)[0x55e50388a52a]\npostgres: test postgres [local] startup(+0x564c5d)[0x55e5037aec5d]\npostgres: test postgres [local] startup(+0x564542)[0x55e5037ae542]\npostgres: test postgres [local] startup(+0x560777)[0x55e5037aa777]\npostgres: test postgres [local] startup(PostmasterMain+0x1374)[0x55e5037a9f10]\npostgres: test postgres [local] startup(+0x451550)[0x55e50369b550]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3)[0x7fed46dac083]\npostgres: test postgres [local] startup(_start+0x2e)[0x55e503317eae]\nLOG: server process (PID 199868) was terminated by signal 6: Aborted\nLOG: terminating any other active server processes\n\n[3]: Crash stack\n\n(gdb) bt\n#0 0x0000560937b35206 in ResourceArrayEnlarge (resarr=0x80)\n at ../../../../../../POSTGRES/src/backend/utils/resowner/resowner.c:222\n#1 0x0000560937b36693 in ResourceOwnerEnlargeRelationRefs (owner=0x0)\n at ../../../../../../POSTGRES/src/backend/utils/resowner/resowner.c:1106\n#2 0x0000560937ae32ca in RelationIncrementReferenceCount (rel=0x7fb697a0b860)\n at ../../../../../../POSTGRES/src/backend/utils/cache/relcache.c:2128\n#3 0x0000560937ae322b in RelationIdGetRelation (relationId=6243)\n at ../../../../../../POSTGRES/src/backend/utils/cache/relcache.c:2074\n#4 0x00005609374758a3 in relation_open (relationId=6243, lockmode=1)\n at ../../../../../../POSTGRES/src/backend/access/common/relation.c:59\n#5 0x00005609375181ca in table_open (relationId=6243, lockmode=1)\n at ../../../../../../POSTGRES/src/backend/access/table/table.c:43\n#6 0x0000560937ad41dc in SearchCatCacheMiss (cache=0x560938b60500,\nnkeys=1, hashValue=658344123, hashIndex=3,\n v1=94597605943240, v2=0, v3=0, v4=0) at\n../../../../../../POSTGRES/src/backend/utils/cache/catcache.c:1353\n#7 0x0000560937ad40cc in SearchCatCacheInternal\n(cache=0x560938b60500, nkeys=1, v1=94597605943240, v2=0, v3=0, v4=0)\n at ../../../../../../POSTGRES/src/backend/utils/cache/catcache.c:1295\n#8 0x0000560937ad3de7 in SearchCatCache1 (cache=0x560938b60500,\nv1=94597605943240)\n at ../../../../../../POSTGRES/src/backend/utils/cache/catcache.c:1163\n#9 0x0000560937aedba7 in SearchSysCache1 (cacheId=41, key1=94597605943240)\n at ../../../../../../POSTGRES/src/backend/utils/cache/syscache.c:1180\n#10 0x00005609375658b0 in pg_parameter_aclmask (name=0x560938b26670\n\"plperl.on_plperl_init\", roleid=16384, mask=4096,\n how=ACLMASK_ANY) at\n../../../../../POSTGRES/src/backend/catalog/aclchk.c:4234\n#11 0x0000560937566b82 in pg_parameter_aclcheck (name=0x560938b26670\n\"plperl.on_plperl_init\", roleid=16384, mode=4096)\n at ../../../../../POSTGRES/src/backend/catalog/aclchk.c:5048\n#12 0x0000560937b14fbc in set_config_option (name=0x560938b26670\n\"plperl.on_plperl_init\", value=0x560938ba13a0 \"\",\n context=PGC_BACKEND, source=PGC_S_CLIENT, action=GUC_ACTION_SET,\nchangeVal=true, elevel=19, is_reload=false)\n at ../../../../../../POSTGRES/src/backend/utils/misc/guc.c:7735\n#13 0x0000560937b18408 in define_custom_variable (variable=0x560938b265c0)\n at ../../../../../../POSTGRES/src/backend/utils/misc/guc.c:9361\n#14 0x0000560937b189fa in DefineCustomStringVariable\n(name=0x7fb697963114 \"plperl.on_plperl_init\",\n short_desc=0x7fb6979630d0 \"Perl initialization code to execute\nonce when plperl is first used.\", long_desc=0x0,\n valueAddr=0x7fb697968730 <plperl_on_plperl_init>, bootValue=0x0,\ncontext=PGC_SUSET, flags=0, check_hook=0x0,\n assign_hook=0x0, show_hook=0x0) at\n../../../../../../POSTGRES/src/backend/utils/misc/guc.c:9589\n#15 0x00007fb69795234d in _PG_init () at\n../../../../../POSTGRES/src/pl/plperl/plperl.c:443\n#16 0x0000560937afcc8c in internal_load_library (\n--Type <RET> for more, q 
to quit, c to continue without paging--\n libname=0x560938b2e188\n\"/home/vagrant/dev/POSTGRES_builds/add_tz_param/db/lib/postgresql/plperl.so\")\n at ../../../../../../POSTGRES/src/backend/utils/fmgr/dfmgr.c:289\n#17 0x0000560937afc729 in load_file (filename=0x560938b2e748 \"plperl\",\nrestricted=false)\n at ../../../../../../POSTGRES/src/backend/utils/fmgr/dfmgr.c:156\n#18 0x0000560937b0ab02 in load_libraries (libraries=0x560938b1b5c0\n\"plperl\", gucname=0x560937cd3706 \"session_preload_libraries\",\n restricted=false) at\n../../../../../../POSTGRES/src/backend/utils/init/miscinit.c:1668\n#19 0x0000560937b0ac3f in process_session_preload_libraries ()\n at ../../../../../../POSTGRES/src/backend/utils/init/miscinit.c:1699\n#20 0x0000560937945908 in PostgresMain (dbname=0x560938af9ca8\n\"postgres\", username=0x560938b2b0d8 \"test\")\n at ../../../../../POSTGRES/src/backend/tcop/postgres.c:4170\n#21 0x00005609378827ce in BackendRun (port=0x560938b240e0) at\n../../../../../POSTGRES/src/backend/postmaster/postmaster.c:4504\n#22 0x00005609378820b3 in BackendStartup (port=0x560938b240e0)\n at ../../../../../POSTGRES/src/backend/postmaster/postmaster.c:4232\n#23 0x000056093787e5e3 in ServerLoop () at\n../../../../../POSTGRES/src/backend/postmaster/postmaster.c:1806\n#24 0x000056093787dd86 in PostmasterMain (argc=3, argv=0x560938af7be0)\n at ../../../../../POSTGRES/src/backend/postmaster/postmaster.c:1478\n#25 0x000056093778078f in main (argc=3, argv=0x560938af7be0) at\n../../../../../POSTGRES/src/backend/main/main.c:202\n(gdb)\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n----------\nFrom: Noah Misch <noah@leadboat.com>\nDate: Tue, Jul 5, 2022 at 10:50 AM\nTo: Gurjeet Singh <gurjeet@singh.im>\nCc: Postgres Security <security@postgresql.org>, Bossart, Nathan\n<bossartn@amazon.com>\n\n\nOn Mon, Jul 04, 2022 at 10:24:13AM -0700, Gurjeet Singh wrote:\n> calls pg_parameter_aclcheck() -> eventually leads to\n> assertion-failure (or crash, when assertions are disabled)\n\n> 
I'm sending this report to security list, instead of -hackers, out of\n> abundance of caution; please feel free to move it to -hackers, if it's\n> not considered a security concern.\n\nThanks for the report. v14 doesn't have pg_parameter_aclcheck(). If this is\nspecific to unreleased and beta versions, do use -hackers.\n\n\n",
"msg_date": "Wed, 20 Jul 2022 19:31:47 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": true,
"msg_subject": "Fwd: Unprivileged user can induce crash by using an SUSET param in\n PGOPTIONS"
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 07:31:47PM -0700, Gurjeet Singh wrote:\n> Moving the report from security to -hackers on Noah's advice. Since\n> the function(s) involved in the crash are not present in any of the\n> released versions, it is not considered a security issue.\n> \n> I can confirm that this is reproducible on the latest commit on\n> master, 3c0bcdbc66. Below is the original analysis, followed by Noah's\n> analysis.\n> \n> To be able to reproduce it, please note that perl support is required;\n> hence `./configure --with-perl`.\n> \n> The note about 'security concerns around on_plperl_init parameter',\n> below, refers to now-fixed issue, at commit 13d8388151.\n\nThis ACL lookup still happens when pre-loading libraries at session\nstartup with custom GUCs, as this checks if the GUC can be changed by\nthe user connecting or not. I am adding an open item to track that.\n--\nMichael",
"msg_date": "Thu, 21 Jul 2022 15:04:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Unprivileged user can induce crash by using an SUSET param\n in PGOPTIONS"
},
{
"msg_contents": "Gurjeet Singh <gurjeet@singh.im> writes:\n> While poking at plperl's GUC in an internal discussion, I was able to\n> induce a crash (or an assertion failure in assert-enabled builds) as\n> an unprivileged user.\n> My investigation so far has revealed that the code path for the\n> following condition has never been tested, and because of this, when a\n> user tries to override an SUSET param via PGOPTIONS, Postgres tries to\n> perform a table lookup during process initialization. Because there's\n> no transaction in progress, and because this table is not in the\n> primed caches, we end up with code trying to dereference an\n> uninitialized CurrentResourceOwner.\n\nRight. So there are basically two things we could do about this:\n\n1. set_config_option could decline to call pg_parameter_aclcheck\nif not IsTransactionState(), instead failing the assignment.\nThis isn't a great answer because it would result in disallowing\nGUC assignments that users might expect to work.\n\n2. We could move process_session_preload_libraries() to someplace\nwhere a transaction is in progress -- probably, relocate it to\ninside InitPostgres().\n\nI'm inclined to think that #2 is a better long-term solution,\nbecause it'd allow you to session-preload libraries that expect\nto be able to do database access during _PG_init. (Right now\nthat'd fail with the same sort of symptoms seen here.) But\nthere's no denying that this might have surprising side-effects\nfor extensions that weren't expecting such a change.\n\nIt could also be reasonable to do both #1 and #2, with the idea\nthat #1 might save us from crashing if there are any other\ncode paths where we can reach that pg_parameter_aclcheck call\noutside a transaction.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Jul 2022 17:44:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Unprivileged user can induce crash by using an SUSET param\n in PGOPTIONS"
},
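The ordering problem described above can be sketched with a toy model (Python, with entirely made-up names — none of this is real PostgreSQL code): the permission check for a SUSET parameter needs transaction state, so preloading libraries (which re-applies a pending PGOPTIONS setting for a library's custom GUC) is only safe inside the startup transaction, as in option #2:

```python
# Toy model of the startup-ordering bug discussed in this thread.
# All names here are invented for illustration; this models the
# control flow only, not PostgreSQL's actual implementation.

class Backend:
    def __init__(self):
        self.in_transaction = False

    def parameter_aclcheck(self, name):
        # Stand-in for the syscache lookup that the real ACL check
        # performs: it is only valid inside a transaction.
        if not self.in_transaction:
            raise AssertionError("IsTransactionState() failed")
        return False  # unprivileged user: permission denied

    def set_config_option(self, name, suset=True):
        if suset and not self.parameter_aclcheck(name):
            return "WARNING: permission denied to set parameter %s" % name
        return "SET"

    def preload_libraries(self, pending_options):
        # Loading the library defines its custom GUCs, which causes the
        # pending PGOPTIONS values to be (re)applied via set_config_option.
        return [self.set_config_option(n) for n in pending_options]

def start_session_buggy(pending_options):
    b = Backend()
    # Preload happens outside any transaction -> assertion failure/crash.
    return b.preload_libraries(pending_options)

def start_session_fixed(pending_options):
    b = Backend()
    b.in_transaction = True                      # option #2: run the preload
    result = b.preload_libraries(pending_options)  # inside the startup
    b.in_transaction = False                     # transaction
    return result
```

In the buggy ordering the stand-in assertion fires, mirroring the reported TRAP; in the fixed ordering the unprivileged setting is simply refused with a warning, matching the post-patch behavior shown later in the thread.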
{
"msg_contents": "On Thu, Jul 21, 2022 at 05:44:11PM -0400, Tom Lane wrote:\n> Right. So there are basically two things we could do about this:\n> \n> 1. set_config_option could decline to call pg_parameter_aclcheck\n> if not IsTransactionState(), instead failing the assignment.\n> This isn't a great answer because it would result in disallowing\n> GUC assignments that users might expect to work.\n> \n> 2. We could move process_session_preload_libraries() to someplace\n> where a transaction is in progress -- probably, relocate it to\n> inside InitPostgres().\n> \n> I'm inclined to think that #2 is a better long-term solution,\n> because it'd allow you to session-preload libraries that expect\n> to be able to do database access during _PG_init. (Right now\n> that'd fail with the same sort of symptoms seen here.) But\n> there's no denying that this might have surprising side-effects\n> for extensions that weren't expecting such a change.\n> \n> It could also be reasonable to do both #1 and #2, with the idea\n> that #1 might save us from crashing if there are any other\n> code paths where we can reach that pg_parameter_aclcheck call\n> outside a transaction.\n> \n> Thoughts?\n\nI wrote up a small patch along the same lines as #2 before seeing this\nmessage. It simply ensures that process_session_preload_libraries() is\ncalled within a transaction. I don't have a strong opinion about doing it\nthis way versus moving this call somewhere else as you proposed, but I'd\nagree that #2 is a better long-term solution than #1. AFAICT\nshared_preload_libraries, even with EXEC_BACKEND, should not have the same\nproblem.\n\nI'm not sure whether we should be worried about libraries that are already\ncreating transactions in their _PG_init() functions. Off the top of my\nhead, I don't recall seeing anything like that. 
Even if it does impact\nsome extensions, it doesn't seem like it'd be too much trouble to fix.\n\ndiff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c\nindex 8ba1c170f0..fd471d74a3 100644\n--- a/src/backend/tcop/postgres.c\n+++ b/src/backend/tcop/postgres.c\n@@ -4115,8 +4115,15 @@ PostgresMain(const char *dbname, const char *username)\n /*\n * process any libraries that should be preloaded at backend start (this\n * likewise can't be done until GUC settings are complete)\n+ *\n+ * If the user provided a setting at session startup for a custom GUC\n+ * defined by one of these libraries, we might need syscache access when\n+ * evaluating whether she has permission to set it, so do this step within\n+ * a transaction.\n */\n+ StartTransactionCommand();\n process_session_preload_libraries();\n+ CommitTransactionCommand();\n \n /*\n * Send this backend's cancellation info to the frontend.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 21 Jul 2022 15:29:00 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Unprivileged user can induce crash by using an SUSET param\n in PGOPTIONS"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> + StartTransactionCommand();\n> process_session_preload_libraries();\n> + CommitTransactionCommand();\n\nYeah, that way would avoid any questions about changing the order of\noperations, but it seems like a mighty expensive solution: it's\nadding a transaction to each backend start on the off chance that\n(a) session_preload_libraries/local_preload_libraries is nonempty and\n(b) the loaded libraries are going to do anything where it'd matter.\nSo that's why I thought of moving the call inside a pre-existing\ntransaction.\n\nIf we had to back-patch this into any released versions, I'd agree with\ntaking the performance hit in order to reduce the chance of side-effects.\nBut I think as long as we only have to do it in v15, it's not too late to\npossibly cause some compatibility issues for extensions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Jul 2022 19:30:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Unprivileged user can induce crash by using an SUSET param\n in PGOPTIONS"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 2:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Gurjeet Singh <gurjeet@singh.im> writes:\n> > While poking at plperl's GUC in an internal discussion, I was able to\n> > induce a crash (or an assertion failure in assert-enabled builds) as\n> > an unprivileged user.\n> > My investigation so far has revealed that the code path for the\n> > following condition has never been tested, and because of this, when a\n> > user tries to override an SUSET param via PGOPTIONS, Postgres tries to\n> > perform a table lookup during process initialization. Because there's\n> > no transaction in progress, and because this table is not in the\n> > primed caches, we end up with code trying to dereference an\n> > uninitialized CurrentResourceOwner.\n>\n> Right. So there are basically two things we could do about this:\n>\n> 1. set_config_option could decline to call pg_parameter_aclcheck\n> if not IsTransactionState(), instead failing the assignment.\n> This isn't a great answer because it would result in disallowing\n> GUC assignments that users might expect to work.\n>\n> 2. We could move process_session_preload_libraries() to someplace\n> where a transaction is in progress -- probably, relocate it to\n> inside InitPostgres().\n>\n> I'm inclined to think that #2 is a better long-term solution,\n> because it'd allow you to session-preload libraries that expect\n> to be able to do database access during _PG_init. (Right now\n> that'd fail with the same sort of symptoms seen here.) 
But\n> there's no denying that this might have surprising side-effects\n> for extensions that weren't expecting such a change.\n>\n> It could also be reasonable to do both #1 and #2, with the idea\n> that #1 might save us from crashing if there are any other\n> code paths where we can reach that pg_parameter_aclcheck call\n> outside a transaction.\n>\n> Thoughts?\n\nI had debated just wrapping the process_session_preload_libraries()\ncall with a transaction, like Nathan's patch posted earlier on this\nthread does. But I hesitated because of the sensitivity around the\norder of operations/calls during process initialization.\n\nI like the idea of performing library initialization in\nInitPostgres(), as it performs the first transaction of the\nconnection, and because of the libraries' ability to gin up new GUC\nvariables that might need special handling, and also because it would\nallow them to do database access.\n\nI think anywhere after the 'PostAuthDelay' check in InitPostgres()\nwould be a good place to perform process_session_preload_libraries().\nI'm inclined to invoke it as late as possible, before we commit the\ntransaction.\n\nAs for making set_config_option() throw an error if not in\ntransaction, I'm not a big fan of checks that break the flow, and of\nunrelated code showing up when reading a function. For a casual\nreader, such a check for transaction would make for a jarring\nexperience; \"why are we checking for active transaction in the guts\nof guc.c?\", they might think. If anything, such an error should be\nthrown from or below pg_parameter_aclcheck().\n\nBut I am not sure if it should be exposed as an error. A user\nencountering that error is not at fault. Hence I believe an assertion\ncheck is more suitable for catching code that invokes\nset_config_option() outside a transaction.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Thu, 21 Jul 2022 16:35:16 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: Unprivileged user can induce crash by using an SUSET param\n in PGOPTIONS"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 3:29 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Thu, Jul 21, 2022 at 05:44:11PM -0400, Tom Lane wrote:\n> > Right. So there are basically two things we could do about this:\n> >\n> > 1. set_config_option could decline to call pg_parameter_aclcheck\n> > if not IsTransactionState(), instead failing the assignment.\n> > This isn't a great answer because it would result in disallowing\n> > GUC assignments that users might expect to work.\n> >\n> > 2. We could move process_session_preload_libraries() to someplace\n> > where a transaction is in progress -- probably, relocate it to\n> > inside InitPostgres().\n> >\n> > I'm inclined to think that #2 is a better long-term solution,\n> > because it'd allow you to session-preload libraries that expect\n> > to be able to do database access during _PG_init. (Right now\n> > that'd fail with the same sort of symptoms seen here.) But\n> > there's no denying that this might have surprising side-effects\n> > for extensions that weren't expecting such a change.\n> >\n> > It could also be reasonable to do both #1 and #2, with the idea\n> > that #1 might save us from crashing if there are any other\n> > code paths where we can reach that pg_parameter_aclcheck call\n> > outside a transaction.\n> >\n> > Thoughts?\n>\n> I wrote up a small patch along the same lines as #2 before seeing this\n> message. It simply ensures that process_session_preload_libraries() is\n> called within a transaction. I don't have a strong opinion about doing it\n> this way versus moving this call somewhere else as you proposed, but I'd\n> agree that #2 is a better long-term solution than #1. AFAICT\n> shared_preload_libraries, even with EXEC_BACKEND, should not have the same\n> problem.\n>\n> I'm not sure whether we should be worried about libraries that are already\n> creating transactions in their _PG_init() functions. Off the top of my\n> head, I don't recall seeing anything like that. 
Even if it does impact\n> some extensions, it doesn't seem like it'd be too much trouble to fix.\n>\n> diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c\n> index 8ba1c170f0..fd471d74a3 100644\n> --- a/src/backend/tcop/postgres.c\n> +++ b/src/backend/tcop/postgres.c\n> @@ -4115,8 +4115,15 @@ PostgresMain(const char *dbname, const char *username)\n> /*\n> * process any libraries that should be preloaded at backend start (this\n> * likewise can't be done until GUC settings are complete)\n> + *\n> + * If the user provided a setting at session startup for a custom GUC\n> + * defined by one of these libraries, we might need syscache access when\n> + * evaluating whether she has permission to set it, so do this step within\n> + * a transaction.\n> */\n> + StartTransactionCommand();\n> process_session_preload_libraries();\n> + CommitTransactionCommand();\n>\n> /*\n> * Send this backend's cancellation info to the frontend.\n\n(none of the following is your patch's fault)\n\nI don't think that is a good call-site for\nprocess_session_preload_libraries(), because a library being loaded\ncan declare its own GUCs, hence I believe this should be called at\nleast before the call to BeginReportingGUCOptions().\n\nIf an extension creates a GUC with GUC_REPORT flag, it is violating\nexpectations. But since the DefineCustomXVariable() stack does not\nprevent the callers from doing so, we must still honor the protocol\nfollowed for all params with GUC_REPORT. And hence the need to load\nthese libraries before BeginReportingGUCOptions() is called.\n\nI think it'd be a good idea to ban the callers of\nDefineCustomXVariable() from declaring their variable GUC_REPORT, to\nensure that only core code can define such variables.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Thu, 21 Jul 2022 16:37:40 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: Unprivileged user can induce crash by using an SUSET param\n in PGOPTIONS"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 07:30:20PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> + StartTransactionCommand();\n>> process_session_preload_libraries();\n>> + CommitTransactionCommand();\n> \n> Yeah, that way would avoid any questions about changing the order of\n> operations, but it seems like a mighty expensive solution: it's\n> adding a transaction to each backend start on the off chance that\n> (a) session_preload_libraries/local_preload_libraries is nonempty and\n> (b) the loaded libraries are going to do anything where it'd matter.\n> So that's why I thought of moving the call inside a pre-existing\n> transaction.\n> \n> If we had to back-patch this into any released versions, I'd agree with\n> taking the performance hit in order to reduce the chance of side-effects.\n> But I think as long as we only have to do it in v15, it's not too late to\n> possibly cause some compatibility issues for extensions.\n\nYeah, fair point.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 21 Jul 2022 16:48:32 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Unprivileged user can induce crash by using an SUSET param\n in PGOPTIONS"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 4:35 PM Gurjeet Singh <gurjeet@singh.im> wrote:\n> I like the idea of performing library initialization in\n> InitPostgres(), as it performs the first transaction of the\n> connection, and because of the libraries' ability to gin up new GUC\n> variables that might need special handling, and also if it allows them\n> to do database access.\n>\n> I think anywhere after the 'PostAuthDelay' check in InitPostgres()\n> would be a good place to perform process_session_preload_libraries().\n> I'm inclined to invoke it as late as possible, before we commit the\n> transaction.\n>\n> As for making set_config_option() throw an error if not in\n> transaction, I'm not a big fan of checks that break the flow, and of\n> unrelated code showing up when reading a function. For a casual\n> reader, such a check for transaction would make for a jarring\n> experience; \"why are we checking for active transaction in the guts\n> of guc.c?\", they might think. If anything, such an error should be\n> thrown from or below pg_parameter_aclcheck().\n>\n> But I am not sure if it should be exposed as an error. A user\n> encountering that error is not at fault. Hence I believe an assertion\n> check is more suitable for catching code that invokes\n> set_config_option() outside a transaction.\n\nPlease see attached the patch that implements the above proposal.\n\nThe process_session_preload_libraries() call has been moved to the end\nof InitPostgres(), just before we report the backend to\nPgBackendStatus and commit the first transaction.\n\nOne notable side effect of this change is that\nprocess_session_preload_libraries() is now called _before_ we\nSetProcessingMode(NormalProcessing). Which means any database access\nperformed by _PG_init() of an extension will be doing it in\nInitProcessing mode. 
I'm not sure if that's problematic.\n\nThe patch also adds an assertion in pg_parameter_aclcheck() to ensure\nthat a transaction is in progress before it's called.\n\nThe patch now lets the user connect, throws a warning, and does not crash.\n\n$ PGOPTIONS=\"-c plperl.on_plperl_init=\" psql -U test\nWARNING: permission denied to set parameter \"plperl.on_plperl_init\"\nExpanded display is used automatically.\npsql (15beta1)\nType \"help\" for help.\n\npostgres@B:694512=>\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Thu, 21 Jul 2022 17:39:35 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: Unprivileged user can induce crash by using an SUSET param\n in PGOPTIONS"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 05:39:35PM -0700, Gurjeet Singh wrote:\n> One notable side effect of this change is that\n> process_session_preload_libraries() is now called _before_ we\n> SetProcessingMode(NormalProcessing). Which means any database access\n> performed by _PG_init() of an extension will be doing it in\n> InitProcessing mode. I'm not sure if that's problematic.\n\nI cannot see a reason why, off the top of my head. The restrictions of\nInitProcessing apply to two code paths of bgworkers connecting to a\ndatabase, and normal processing is used as a barrier to prevent the\ncreation of some objects.\n\n> The patch also adds an assertion in pg_parameter_aclcheck() to ensure\n> that a transaction is in progress before it's called.\n\n+ /* It's pointless to call this function, unless we're in a transaction. */\n+ Assert(IsTransactionState());\n\nThis can involve extension code, so I think that this should be at least\nan elog(ERROR) so as we have higher chances of knowing if something\nstill goes wrong in the wild.\n\n> The patch now lets the user connect, throws a warning, and does not crash.\n> \n> $ PGOPTIONS=\"-c plperl.on_plperl_init=\" psql -U test\n> WARNING: permission denied to set parameter \"plperl.on_plperl_init\"\n> Expanded display is used automatically.\n> psql (15beta1)\n> Type \"help\" for help.\n\nI am wondering whether we'd better have a test on this one with a\nnon-superuser. Except for a few tests in the unsafe section,\nsession_preload_libraries has a limited amount of coverage.\n\n+ /*\n+ * process any libraries that should be preloaded at backend start (this\n+ * can't be done until GUC settings are complete). Note that these libraries\n+ * can declare new GUC variables.\n+ */\n+ process_session_preload_libraries();\nThere is no point in doing that during bootstrap anyway, no?\n--\nMichael",
"msg_date": "Fri, 22 Jul 2022 10:00:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Unprivileged user can induce crash by using an SUSET param\n in PGOPTIONS"
},
{
"msg_contents": "At Fri, 22 Jul 2022 10:00:34 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, Jul 21, 2022 at 05:39:35PM -0700, Gurjeet Singh wrote:\n> > One notable side effect of this change is that\n> > process_session_preload_libraries() is now called _before_ we\n> > SetProcessingMode(NormalProcessing). Which means any database access\n> > performed by _PG_init() of an extension will be doing it in\n> > InitProcessing mode. I'm not sure if that's problematic.\n> \n> I cannot see a reason why, off the top of my head. The restrictions of\n> InitProcessing apply to two code paths of bgworkers connecting to a\n> database, and normal processing is used as a barrier to prevent the\n> creation of some objects.\n> \n> > The patch also adds an assertion in pg_parameter_aclcheck() to ensure\n> > that a transaction is in progress before it's called.\n> \n> + /* It's pointless to call this function, unless we're in a transaction. */\n> + Assert(IsTransactionState());\n> \n> This can involve extension code, so I think that this should be at least\n> an elog(ERROR) so as we have higher chances of knowing if something\n> still goes wrong in the wild.\n\npg_parameter_aclmask involves the same assertion, so the same\nbacktrace can be obtained without it. I think it is not the users'\nfault, so I'm not sure ERROR is appropriate even if we were to add\nsomething there.\n\n> > The patch now lets the user connect, throws a warning, and does not crash.\n> > \n> > $ PGOPTIONS=\"-c plperl.on_plperl_init=\" psql -U test\n> > WARNING: permission denied to set parameter \"plperl.on_plperl_init\"\n> > Expanded display is used automatically.\n> > psql (15beta1)\n> > Type \"help\" for help.\n> \n> I am wondering whether we'd better have a test on this one with a\n> non-superuser. 
Except for a few tests in the unsafe section,\n> session_preload_libraries has a limited amount of coverage.\n\n+1\n\n> + /*\n> + * process any libraries that should be preloaded at backend start (this\n> + * can't be done until GUC settings are complete). Note that these libraries\n> + * can declare new GUC variables.\n> + */\n> + process_session_preload_libraries();\n> There is no point in doing that during bootstrap anyway, no?\n\nThis patch makes process_session_preload_libraries() get called in\nthe autovacuum workers/launcher and background workers in addition to\nclient backends. It seems to me we also need to prevent that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 22 Jul 2022 15:16:00 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unprivileged user can induce crash by using an SUSET param in\n PGOPTIONS"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Fri, 22 Jul 2022 10:00:34 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n>> On Thu, Jul 21, 2022 at 05:39:35PM -0700, Gurjeet Singh wrote:\n>>> The patch also adds an assertion in pg_parameter_aclcheck() to ensure\n>>> that there's a transaction is in progress before it's called.\n\n>> This can involve extension code, I think that this should be at least\n>> an elog(ERROR) so as we have higher chances of knowing if something\n>> still goes wrong in the wild.\n\nThat assert strikes me as having been inserted with the advice of a\ndartboard. Why pg_parameter_aclcheck, and not any other aclchk.c\nfunctions? Why in the callers at all, rather than somewhere down\ninside the syscache code? And why isn't the existing Assert that\nyou started the thread with plenty sufficient for that already?\n\n> This patch makes process_session_preload_libraries called in\n> autovacuum worker/launcher and background worker in addition to client\n> backends. It seems to me we also need to prevent that.\n\nYeah. I think the definition of session/local_preload_libraries\nis that it loads libraries into *interactive* sessions. The\nexisting coding seems already buggy in that regard, because it\nwill load such libraries into walsenders as well; isn't that\na POLA violation?\n\nSo I propose the attached. I tested this to the extent of checking\nthat all our contrib modules can be loaded via session_preload_libraries.\nThat doesn't prove a whole lot about what outside extensions might do,\nbut it's some comfort.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 22 Jul 2022 14:56:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unprivileged user can induce crash by using an SUSET param in\n PGOPTIONS"
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 02:56:22PM -0400, Tom Lane wrote:\n> +\t/*\n> +\t * If this is an interactive session, load any libraries that should be\n> +\t * preloaded at backend start. Since those are determined by GUCs, this\n> +\t * can't happen until GUC settings are complete, but we want it to happen\n> +\t * during the initial transaction in case anything that requires database\n> +\t * access needs to be done.\n> +\t */\n> +\tif (!bootstrap &&\n> +\t\t!IsAutoVacuumWorkerProcess() &&\n> +\t\t!IsBackgroundWorker &&\n> +\t\t!am_walsender)\n> +\t\tprocess_session_preload_libraries();\n\nI worry that this will be easily missed when adding new types of\nnon-interactive sessions, but I can't claim to have a better idea.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 22 Jul 2022 15:33:59 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unprivileged user can induce crash by using an SUSET param in\n PGOPTIONS"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Fri, Jul 22, 2022 at 02:56:22PM -0400, Tom Lane wrote:\n>> +\tif (!bootstrap &&\n>> +\t\t!IsAutoVacuumWorkerProcess() &&\n>> +\t\t!IsBackgroundWorker &&\n>> +\t\t!am_walsender)\n>> +\t\tprocess_session_preload_libraries();\n\n> I worry that this will be easily missed when adding new types of\n> non-interactive sessions, but I can't claim to have a better idea.\n\nYeah, that bothered me too. A variant that I'd considered is to\ncreate a local variable \"bool interactive\" and set it properly\nin each of the arms of the if-chain dealing with authentication\n(starting about postinit.c:800). While that approach would cover\nmost of the tests shown above, it would not have exposed the issue\nof needing to check am_walsender, so I'm not very convinced that\nit'd be any better.\n\nAnother idea is to add a \"bool interactive\" parameter to InitPostgres,\nthereby shoving the issue out to the call sites. Still wouldn't\nexpose the am_walsender angle, but conceivably it'd be more\nfuture-proof anyway?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 22 Jul 2022 18:44:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unprivileged user can induce crash by using an SUSET param in\n PGOPTIONS"
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 06:44:04PM -0400, Tom Lane wrote:\n> Another idea is to add a \"bool interactive\" parameter to InitPostgres,\n> thereby shoving the issue out to the call sites. Still wouldn't\n> expose the am_walsender angle, but conceivably it'd be more\n> future-proof anyway?\n\nI hesitated to suggest this exactly because of the WAL sender problem, but\nit does seem slightly more future-proof, so +1 for this approach.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 22 Jul 2022 15:56:15 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unprivileged user can induce crash by using an SUSET param in\n PGOPTIONS"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Fri, Jul 22, 2022 at 06:44:04PM -0400, Tom Lane wrote:\n>> Another idea is to add a \"bool interactive\" parameter to InitPostgres,\n>> thereby shoving the issue out to the call sites. Still wouldn't\n>> expose the am_walsender angle, but conceivably it'd be more\n>> future-proof anyway?\n\n> I hesitated to suggest this exactly because of the WAL sender problem, but\n> it does seem slightly more future-proof, so +1 for this approach.\n\nSo about like this then. (I spent some effort on cleaning up the\ndisjointed-to-nonexistent presentation of InitPostgres' parameters.)\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 23 Jul 2022 13:23:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unprivileged user can induce crash by using an SUSET param in\n PGOPTIONS"
},
{
"msg_contents": "On Sat, Jul 23, 2022 at 01:23:24PM -0400, Tom Lane wrote:\n> -\t/*\n> -\t * process any libraries that should be preloaded at backend start (this\n> -\t * likewise can't be done until GUC settings are complete)\n> -\t */\n> -\tprocess_session_preload_libraries();\n\nThis patch essentially moves the call to\nprocess_session_preload_libraries() to earlier in PostgresMain(). The\ndiscussion upthread seems to indicate that this is okay. I did notice that\nthe log_disconnections handler won't be set up yet, so failures due to\nsession_preload_libraries won't be logged the same way as before. Also,\nthe call to pgstat_report_connect() won't happen. Neither of these strikes\nme as particularly bad, but it seemed worth noting.\n\n> + *\tload_session_libraries: TRUE to honor session_preload_libraries\n\nnitpick: Should we call out local_preload_libraries here, too?\n\n> +\tif (load_session_libraries)\n> +\t\tprocess_session_preload_libraries();\n\nI noticed that a couple of places check whether whereToSendOutput is set to\nDestRemote to determine if this is an interactive session. Maybe something\nlike\n\n\tif (whereToSendOutput == DestRemote && !am_walsender)\n\nwould be a reasonably future-proof way to avoid the need for a new\nInitPostgres() argument.\n\nOtherwise, the patch looks good to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 24 Jul 2022 20:40:40 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unprivileged user can induce crash by using an SUSET param in\n PGOPTIONS"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> I noticed that a couple of places check whether whereToSendOutput is set to\n> DestRemote to determine if this is an interactive session.\n\nIIRC, that would end in not loading the preload libraries in a standalone\nbackend. Perhaps that's what we want, but I'd supposed not. Discuss.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 24 Jul 2022 23:49:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unprivileged user can induce crash by using an SUSET param in\n PGOPTIONS"
},
{
"msg_contents": "On Sun, Jul 24, 2022 at 11:49:23PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> I noticed that a couple of places check whether whereToSendOutput is set to\n>> DestRemote to determine if this is an interactive session.\n> \n> IIRC, that would end in not loading the preload libraries in a standalone\n> backend. Perhaps that's what we want, but I'd supposed not. Discuss.\n\nAh, I see. There was a recent change to make sure shared_preload_libraries\nare loaded in single-user mode (6c31ac0), but those are for load at \"server\nstart\" instead of \"connection start.\" However, AFAICT\nsession_preload_libraries is loaded in single-user mode today, and\nsingle-user mode is arguably a connection, so my instinct is that we should\ncontinue to process it in single-user mode. I suppose we might be able to\nadd more hacks to load it in single-user mode without a new argument, but\nat that point, we're probably not too far from your original proposal.\nGiven all this, I think I'm inclined for the new argument.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 24 Jul 2022 21:30:44 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unprivileged user can induce crash by using an SUSET param in\n PGOPTIONS"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> Given all this, I think I'm inclined for the new argument.\n\nPushed like that then (after a bit more fooling with the comments).\n\nI haven't done anything about a test case. We can't rely on plperl\ngetting built, and even if we could, it doesn't have any TAP-style\ntests so it'd be hard to get it to test this scenario. However,\nI do see that we're not testing session_preload_libraries anywhere,\nwhich seems bad. I wonder if it'd be a good idea to convert\nauto_explain's TAP test to load auto_explain via session_preload_libraries\ninstead of shared_preload_libraries, and then pass in the settings for\neach test via PGOPTIONS instead of constantly rewriting postgresql.conf.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 25 Jul 2022 10:32:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unprivileged user can induce crash by using an SUSET param in\n PGOPTIONS"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> I wonder if it'd be a good idea to convert\n> auto_explain's TAP test to load auto_explain via session_preload_libraries\n> instead of shared_preload_libraries, and then pass in the settings for\n> each test via PGOPTIONS instead of constantly rewriting postgresql.conf.\n\nThat whole config-file rewriting did feel a bit icky when I added more\ntests recently, but I completely forgot about PGOPTIONS and -c.\nSomething like the attached is indeed much nicer.\n\n- ilmari",
"msg_date": "Mon, 25 Jul 2022 16:26:32 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: Unprivileged user can induce crash by using an SUSET param in\n PGOPTIONS"
},
{
"msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>> I wonder if it'd be a good idea to convert\n>> auto_explain's TAP test to load auto_explain via session_preload_libraries\n>> instead of shared_preload_libraries, and then pass in the settings for\n>> each test via PGOPTIONS instead of constantly rewriting postgresql.conf.\n\n> That whole config-file rewriting did feel a bit icky when I added more\n> tests recently, but I completely forgot about PGOPTIONS and -c.\n> Something like the attached is indeed much nicer.\n\nThanks! I added a test to verify the permissions-checking issue\nand pushed it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 25 Jul 2022 15:46:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unprivileged user can induce crash by using an SUSET param in\n PGOPTIONS"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> =?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n>> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>>> I wonder if it'd be a good idea to convert\n>>> auto_explain's TAP test to load auto_explain via session_preload_libraries\n>>> instead of shared_preload_libraries, and then pass in the settings for\n>>> each test via PGOPTIONS instead of constantly rewriting postgresql.conf.\n>\n>> That whole config-file rewriting did feel a bit icky when I added more\n>> tests recently, but I completely forgot about PGOPTIONS and -c.\n>> Something like the attached is indeed much nicer.\n>\n> Thanks! I added a test to verify the permissions-checking issue\n> and pushed it.\n\nThanks! Just one minor nitpick: setting an %ENV entry to `undef`\ndoesn't unset the environment variable, it sets it to the empty string.\nTo unset a variable it needs to be deleted from %ENV, i.e. `delete\n$ENV{PGUSER};`. Alternatively, wrap the relevant tests in a block and\nuse `local`, like in the `query_log` function.\n\n> \t\t\tregards, tom lane\n\n- ilmari\n\n\n",
"msg_date": "Tue, 26 Jul 2022 17:02:40 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: Unprivileged user can induce crash by using an SUSET param in\n PGOPTIONS"
},
{
"msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> Thanks! Just one minor nitpick: setting an %ENV entry to `undef`\n> doesn't unset the environment variable, it sets it to the empty string.\n> To unset a variable it needs to be deleted from %ENV, i.e. `delete\n> $ENV{PGUSER};`.\n\nAh. Still, libpq doesn't distinguish, so the test works anyway.\nNot sure if it's worth changing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 26 Jul 2022 12:04:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unprivileged user can induce crash by using an SUSET param in\n PGOPTIONS"
},
{
"msg_contents": "I wrote:\n> =?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n>> Thanks! Just one minor nitpick: setting an %ENV entry to `undef`\n>> doesn't unset the environment variable, it sets it to the empty string.\n>> To unset a variable it needs to be deleted from %ENV, i.e. `delete\n>> $ENV{PGUSER};`.\n\n> Ah. Still, libpq doesn't distinguish, so the test works anyway.\n> Not sure if it's worth changing.\n\nMeh ... I had to un-break the test for Windows, so did this while\nat it, using the local-in-block method. Thanks for the suggestion.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Jul 2022 19:01:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unprivileged user can induce crash by using an SUSET param in\n PGOPTIONS"
}
] |
[
{
"msg_contents": "There are some duplicate code in table.c, add a static inline function\nto eliminate the duplicates.\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Thu, 21 Jul 2022 16:26:38 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH v1] eliminate duplicate code in table.c"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 1:56 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n>\n> There are some duplicate code in table.c, add a static inline function\n> to eliminate the duplicates.\n>\n\nCan we name function as validate_object_type, or check_object_type?\n\nOtherwise, the patch looks fine to me. Let's see if others have\nsomething to say.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 21 Jul 2022 16:39:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] eliminate duplicate code in table.c"
},
{
"msg_contents": "Hi hackers,\n\n> > There are some duplicate code in table.c, add a static inline function\n> > to eliminate the duplicates.\n> >\n>\n> Can we name function as validate_object_type, or check_object_type?\n>\n> Otherwise, the patch looks fine to me. Let's see if others have\n> something to say.\n\nLGTM\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Thu, 21 Jul 2022 14:39:33 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] eliminate duplicate code in table.c"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 5:09 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> > > There are some duplicate code in table.c, add a static inline function\n> > > to eliminate the duplicates.\n> > >\n> >\n> > Can we name function as validate_object_type, or check_object_type?\n> >\n> > Otherwise, the patch looks fine to me. Let's see if others have\n> > something to say.\n>\n> LGTM\n>\n\n@@ -161,10 +121,32 @@ table_openrv_extended(const RangeVar *relation,\nLOCKMODE lockmode,\n *\n * Note that it is often sensible to hold a lock beyond relation_close;\n * in that case, the lock is released automatically at xact end.\n- * ----------------\n+ * ----------------\n */\n void\n table_close(Relation relation, LOCKMODE lockmode)\n\nI don't think this change should be part of this patch. Do you see a\nreason for doing this?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 21 Jul 2022 17:12:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] eliminate duplicate code in table.c"
},
{
"msg_contents": "Hi Amit,\n\n> I don't think this change should be part of this patch. Do you see a\n> reason for doing this?\n\nMy bad. I thought this was done by pgindent.\n\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Thu, 21 Jul 2022 14:49:26 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] eliminate duplicate code in table.c"
},
{
"msg_contents": "On 2022-Jul-21, Junwang Zhao wrote:\n\n> There are some duplicate code in table.c, add a static inline function\n> to eliminate the duplicates.\n\nHmm, but see commit 2ed532ee8c47 about this kind of check. Perhaps we\nshould change these error messages to conform to the same message style.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"I love the Postgres community. It's all about doing things _properly_. :-)\"\n(David Garamond)\n\n\n",
"msg_date": "Thu, 21 Jul 2022 14:18:24 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] eliminate duplicate code in table.c"
},
{
"msg_contents": "Hi Alvaro,\n\n> Hmm, but see commit 2ed532ee8c47 about this kind of check. Perhaps we\n> should change these error messages to conform to the same message style.\n\nGood point! Done.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Thu, 21 Jul 2022 15:41:42 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] eliminate duplicate code in table.c"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 6:12 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi Alvaro,\n>\n> > Hmm, but see commit 2ed532ee8c47 about this kind of check. Perhaps we\n> > should change these error messages to conform to the same message style.\n>\n> Good point! Done.\n>\n\nYeah, that's better. On again thinking about the function name, I\nwonder if validate_relation_type() suits here as there is no generic\nobject being passed?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 21 Jul 2022 19:17:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] eliminate duplicate code in table.c"
},
{
"msg_contents": "Hi Amit,\n\n> Yeah, that's better. On again thinking about the function name, I\n> wonder if validate_relation_type() suits here as there is no generic\n> object being passed?\n\nYep, validate_relation_type() sounds better.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Thu, 21 Jul 2022 17:05:30 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] eliminate duplicate code in table.c"
},
{
"msg_contents": "Hi Amit,\n\n> Yep, validate_relation_type() sounds better.\n\nOr maybe validate_relation_kind() after all?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 21 Jul 2022 17:21:35 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] eliminate duplicate code in table.c"
},
{
"msg_contents": "yeah, IMHO validate_relation_kind() is better ;)\n\nOn Thu, Jul 21, 2022 at 10:21 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi Amit,\n>\n> > Yep, validate_relation_type() sounds better.\n>\n> Or maybe validate_relation_kind() after all?\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Thu, 21 Jul 2022 23:05:38 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] eliminate duplicate code in table.c"
},
{
"msg_contents": "btw, there are some typos in Patch v5, %s/ralation/relation/g\n\nOn Thu, Jul 21, 2022 at 10:05 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi Amit,\n>\n> > Yeah, that's better. On again thinking about the function name, I\n> > wonder if validate_relation_type() suits here as there is no generic\n> > object being passed?\n>\n> Yep, validate_relation_type() sounds better.\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Thu, 21 Jul 2022 23:10:52 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] eliminate duplicate code in table.c"
},
{
"msg_contents": "Hi Junwang,\n\n> btw, there are some typos in Patch v5, %s/ralation/relation/g\n\nD'oh!\n\n> yeah, IMHO validate_relation_kind() is better ;)\n\nCool. Here is the corrected patch. Thanks!\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Thu, 21 Jul 2022 18:51:57 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] eliminate duplicate code in table.c"
},
{
"msg_contents": "LGTM\n\nOn Thu, Jul 21, 2022 at 11:52 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi Junwang,\n>\n> > btw, there are some typos in Patch v5, %s/ralation/relation/g\n>\n> D'oh!\n>\n> > yeah, IMHO validate_relation_kind() is better ;)\n>\n> Cool. Here is the corrected patch. Thanks!\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Thu, 21 Jul 2022 23:58:06 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] eliminate duplicate code in table.c"
},
{
"msg_contents": "Hi.\n\n+\t\t\t\t errmsg(\"cannot operate on relation \\\"%s\\\"\",\n\nOther callers of errdetail_relkind_not_supported() describing\noperations concretely. In that sense we I think should say \"cannot\nopen relation \\\"%s\\\"\" here.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n \n\n\n",
"msg_date": "Fri, 22 Jul 2022 11:09:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] eliminate duplicate code in table.c"
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 7:39 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> + errmsg(\"cannot operate on relation \\\"%s\\\"\",\n>\n> Other callers of errdetail_relkind_not_supported() describing\n> operations concretely. In that sense we I think should say \"cannot\n> open relation \\\"%s\\\"\" here.\n>\n\nSounds reasonable to me. This will give more precise information to the user.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 22 Jul 2022 10:44:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] eliminate duplicate code in table.c"
},
{
"msg_contents": "Here is the patch v7. Thanks!\n\nOn Fri, Jul 22, 2022 at 1:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 22, 2022 at 7:39 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > + errmsg(\"cannot operate on relation \\\"%s\\\"\",\n> >\n> > Other callers of errdetail_relkind_not_supported() describing\n> > operations concretely. In that sense we I think should say \"cannot\n> > open relation \\\"%s\\\"\" here.\n> >\n>\n> Sounds reasonable to me. This will give more precise information to the user.\n>\n> --\n> With Regards,\n> Amit Kapila.\n\n\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Fri, 22 Jul 2022 16:06:50 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] eliminate duplicate code in table.c"
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 1:37 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n>\n> Here is the patch v7. Thanks!\n>\n\nLGTM. I'll push this sometime early next week unless there are more\nsuggestions/comments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 22 Jul 2022 15:45:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] eliminate duplicate code in table.c"
}
] |
[
{
"msg_contents": "This is a minor fix that adds a missing space in file lockdefs.h\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Thu, 21 Jul 2022 16:38:21 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "add a missing space"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 2:08 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n>\n> This is a minor fix that adds a missing space in file lockdefs.h\n>\n\nLGTM. I'll push this in some time.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 21 Jul 2022 15:43:09 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: add a missing space"
},
{
"msg_contents": "Great, thanks!\n\nOn Thu, Jul 21, 2022 at 6:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jul 21, 2022 at 2:08 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n> >\n> > This is a minor fix that adds a missing space in file lockdefs.h\n> >\n>\n> LGTM. I'll push this in some time.\n>\n> --\n> With Regards,\n> Amit Kapila.\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Thu, 21 Jul 2022 18:24:29 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add a missing space"
}
] |
[
{
"msg_contents": "Hi,\n\nI found that fetch_more_data_begin() in postgres_fdw reports an error when PQsendQuery() returns the value less than 0 as follows though PQsendQuery() can return only 1 or 0. I think this is a bug. Attached is the patch that fixes this bug. This needs to be back-ported to v14 where async execution was supported in postgres_fdw.\n\n\tif (PQsendQuery(fsstate->conn, sql) < 0)\n\t\tpgfdw_report_error(ERROR, NULL, fsstate->conn, false, fsstate->query);\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Thu, 21 Jul 2022 23:22:26 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "postgres_fdw: Fix bug in checking of return value of PQsendQuery()."
},
{
"msg_contents": "\nOn Thu, 21 Jul 2022 at 22:22, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> Hi,\n>\n> I found that fetch_more_data_begin() in postgres_fdw reports an error when PQsendQuery() returns the value less than 0 as follows though PQsendQuery() can return only 1 or 0. I think this is a bug. Attached is the patch that fixes this bug. This needs to be back-ported to v14 where async execution was supported in postgres_fdw.\n>\n> \tif (PQsendQuery(fsstate->conn, sql) < 0)\n> \t\tpgfdw_report_error(ERROR, NULL, fsstate->conn, false, fsstate->query);\n>\n> Regards,\n\n+1. However, I think check whether the result equals 0 or 1 might be better.\nAnyway, the patch works correctly.\n\n\n$ grep 'PQsendQuery(' -rn . --include '*.c'\n./contrib/postgres_fdw/postgres_fdw.c:7073: if (PQsendQuery(fsstate->conn, sql) < 0)\n./contrib/postgres_fdw/connection.c:647: if (!PQsendQuery(conn, sql))\n./contrib/postgres_fdw/connection.c:782: if (!PQsendQuery(conn, query))\n./contrib/postgres_fdw/connection.c:1347: if (!PQsendQuery(conn, query))\n./contrib/postgres_fdw/connection.c:1575: if (PQsendQuery(entry->conn, \"DEALLOCATE ALL\"))\n./contrib/dblink/dblink.c:720: retval = PQsendQuery(conn, sql);\n./contrib/dblink/dblink.c:1146: if (!PQsendQuery(conn, sql))\n./src/test/isolation/isolationtester.c:669: if (!PQsendQuery(conn, step->sql))\n./src/test/modules/libpq_pipeline/libpq_pipeline.c:500: if (PQsendQuery(conn, \"SELECT 1; SELECT 2\") != 1)\n./src/test/modules/libpq_pipeline/libpq_pipeline.c:532: if (PQsendQuery(conn, \"SELECT 1.0/g FROM generate_series(3, -1, -1) g\") != 1)\n./src/test/modules/libpq_pipeline/libpq_pipeline.c:1000: if (PQsendQuery(conn, \"SELECT 1\") != 1)\n./src/test/modules/libpq_pipeline/libpq_pipeline.c:1046: if (PQsendQuery(conn, \"SELECT 1\") != 1)\n./src/test/modules/libpq_pipeline/libpq_pipeline.c:1084: if (PQsendQuery(conn, \"SELECT 1\") != 1)\n./src/test/modules/libpq_pipeline/libpq_pipeline.c:1094: if (PQsendQuery(conn, \"SELECT 2\") != 1)\n./src/test/modules/libpq_pipeline/libpq_pipeline.c:1118: if (PQsendQuery(conn, \"SELECT 1\") != 1)\n./src/test/modules/libpq_pipeline/libpq_pipeline.c:1132: if (PQsendQuery(conn, \"SELECT 2\") != 1)\n./src/test/modules/libpq_pipeline/libpq_pipeline.c:1159: if (PQsendQuery(conn, \"SELECT pg_catalog.pg_advisory_unlock(1,1)\") != 1)\n./src/bin/pg_basebackup/pg_basebackup.c:1921: if (PQsendQuery(conn, basebkp) == 0)\n./src/bin/pg_amcheck/pg_amcheck.c:891: if (PQsendQuery(slot->connection, sql) == 0)\n./src/bin/psql/common.c:1451: success = PQsendQuery(pset.db, query);\n./src/bin/scripts/reindexdb.c:551: status = PQsendQuery(conn, sql.data) == 1;\n./src/bin/scripts/vacuumdb.c:947: status = PQsendQuery(conn, sql) == 1;\n./src/bin/pgbench/pgbench.c:3089: r = PQsendQuery(st->con, sql);\n./src/bin/pgbench/pgbench.c:4012: if (!PQsendQuery(st->con, \"ROLLBACK\"))\n./src/backend/replication/libpqwalreceiver/libpqwalreceiver.c:663: if (!PQsendQuery(streamConn, query))\n./src/interfaces/libpq/fe-exec.c:1421:PQsendQuery(PGconn *conn, const char *query)\n./src/interfaces/libpq/fe-exec.c:2319: if (!PQsendQuery(conn, query))\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Thu, 21 Jul 2022 22:41:54 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: Fix bug in checking of return value of\n PQsendQuery()."
},
{
"msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> I found that fetch_more_data_begin() in postgres_fdw reports an error when PQsendQuery() returns the value less than 0 as follows though PQsendQuery() can return only 1 or 0. I think this is a bug. Attached is the patch that fixes this bug. This needs to be back-ported to v14 where async execution was supported in postgres_fdw.\n\nYup, clearly a thinko.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Jul 2022 11:22:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: Fix bug in checking of return value of\n PQsendQuery()."
},
{
"msg_contents": "\n\nOn 2022/07/21 23:41, Japin Li wrote:\n> \n> On Thu, 21 Jul 2022 at 22:22, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> Hi,\n>>\n>> I found that fetch_more_data_begin() in postgres_fdw reports an error when PQsendQuery() returns the value less than 0 as follows though PQsendQuery() can return only 1 or 0. I think this is a bug. Attached is the patch that fixes this bug. This needs to be back-ported to v14 where async execution was supported in postgres_fdw.\n>>\n>> \tif (PQsendQuery(fsstate->conn, sql) < 0)\n>> \t\tpgfdw_report_error(ERROR, NULL, fsstate->conn, false, fsstate->query);\n>>\n>> Regards,\n> \n> +1. However, I think check whether the result equals 0 or 1 might be better.\n\nMaybe. I just used \"if (!PQsendQuery())\" style because it's used in postgres_fdw elsewhere.\n\n> Anyway, the patch works correctly.\n\nThanks for the review! Pushed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 22 Jul 2022 12:07:28 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: Fix bug in checking of return value of\n PQsendQuery()."
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 12:07 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> Pushed.\n\nThis is my oversight in commit 27e1f1456. :-(\n\nThanks for the report and fix, Fujii-san!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 22 Jul 2022 17:03:19 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: Fix bug in checking of return value of\n PQsendQuery()."
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently, it's possible to remove the rolissuper bit from the\nbootstrap superuser, but this leaves that user - and the system in\ngeneral - in an odd state. The bootstrap user continues to own all of\nthe objects it owned before, e.g. all of the system catalogs. Direct\nDML on system catalogs is blocked by pg_class_aclmask_ext(), but it's\npossible to do things like rename a system function out of the way and\ncreate a new function with the same signature. Therefore, creating a\nnew superuser and making the original one a non-superuser is probably\nnot viable from a security perspective, because anyone who gained\naccess to that role would likely have little difficulty mounting a\nTrojan horse attack against the current superusers.\n\nThere are other problems, too. (1) pg_parameter_acl entries are\nconsidered to be owned by the bootstrap superuser, so while the\nbootstrap user loses the ability to directly ALTER SYSTEM SET\narchive_command, they can still grant that ability to some other user\n(possibly one they've just created, if they still have CREATEROLE)\nwhich pretty much gives the whole show away. (2) When a trusted\nextension is created, the extension objects are documented as ending\nup owned by the bootstrap superuser, and the bootstrap user will end\nup owning them even if they are no longer super. (3) Range\nconstructors end up getting owned by the bootstrap user, too. I\nhaven't really tried to verify whether ownership of trusted extension\nobjects or range constructors would allow the bootstrap\nnot-a-superuser to escalate back to superuser, but it seems fairly\nlikely. I believe these object ownership assignments were made with\nthe idea that the bootstrap user would always be a superuser.\n\npg_upgrade refers to the \"install user\" rather than the bootstrap\nsuperuser, but it's talking about the same thing. If you've made the\nbootstrap user non-super, pg_upgrade will fail. It is only able to\nconnect as the bootstrap user, and it must connect as superuser or it\ncan't do the things it needs to do.\n\nAll in all, it seems to me that various parts of the system are built\naround the assumption that you will not try to execute ALTER ROLE\nbootstrap_superuser NOSUPERUSER. I suggest that we formally prohibit\nthat, as per the attached patch. Otherwise, I suppose we need to\nprevent privilege escalation attacks from a bootstrap ex-superuser,\nwhich seems fairly impractical and a poor use of engineering\nresources. Or I suppose we could continue with the present state of\naffairs where our code and documentation assume you won't do that but\nnothing actually stops you from doing it, but that doesn't seem to\nhave much to recommend it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 21 Jul 2022 12:15:30 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "let's disallow ALTER ROLE bootstrap_superuser NOSUPERUSER"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Currently, it's possible to remove the rolissuper bit from the\n> bootstrap superuser, but this leaves that user - and the system in\n> general - in an odd state. The bootstrap user continues to own all of\n> the objects it owned before, e.g. all of the system catalogs. Direct\n> DML on system catalogs is blocked by pg_class_aclmask_ext(), but it's\n> possible to do things like rename a system function out of the way and\n> create a new function with the same signature. Therefore, creating a\n> new superuser and making the original one a non-superuser is probably\n> not viable from a security perspective, because anyone who gained\n> access to that role would likely have little difficulty mounting a\n> Trojan horse attack against the current superusers.\n\nTrue, but what if the idea is to have *no* superusers? I seem\nto recall people being interested in setups like that.\n\nOn the whole I don't have any objection to your proposal, I just\nworry that somebody else will.\n\nOf course there's always \"UPDATE pg_authid SET rolsuper = false\",\nwhich makes it absolutely clear that you're breaking the glass cover.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Jul 2022 12:28:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: let's disallow ALTER ROLE bootstrap_superuser NOSUPERUSER"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 9:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Currently, it's possible to remove the rolissuper bit from the\n> > bootstrap superuser, but this leaves that user - and the system in\n> > general - in an odd state. The bootstrap user continues to own all of\n> > the objects it owned before, e.g. all of the system catalogs. Direct\n> > DML on system catalogs is blocked by pg_class_aclmask_ext(), but it's\n> > possible to do things like rename a system function out of the way and\n> > create a new function with the same signature. Therefore, creating a\n> > new superuser and making the original one a non-superuser is probably\n> > not viable from a security perspective, because anyone who gained\n> > access to that role would likely have little difficulty mounting a\n> > Trojan horse attack against the current superusers.\n>\n> True, but what if the idea is to have *no* superusers? I seem\n> to recall people being interested in setups like that.\n>\n\n\n> On the whole I don't have any objection to your proposal, I just\n> worry that somebody else will.\n>\n> Of course there's always \"UPDATE pg_authid SET rolsuper = false\",\n> which makes it absolutely clear that you're breaking the glass cover.\n>\n>\nI would expect an initdb option (once this is possible) to specify this\ndesire and we just never set one up in the first place. It seems\nimpractical to remove one after it already exists. Though we could enable\nthe option (or a function) tied to the specific predefined role that, say,\npermits catalog changes, when that day comes.\n\nDavid J.",
"msg_date": "Thu, 21 Jul 2022 09:41:04 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: let's disallow ALTER ROLE bootstrap_superuser NOSUPERUSER"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Thu, Jul 21, 2022 at 9:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> True, but what if the idea is to have *no* superusers? I seem\n>> to recall people being interested in setups like that.\n\n> I would expect an initdb option (once this is possible) to specify this\n> desire and we just never set one up in the first place. It seems\n> impractical to remove one after it already exists.\n\nThere has to be a role that owns the built-in objects. Robert's point\nis that pretending that that role isn't high-privilege is silly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Jul 2022 12:46:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: let's disallow ALTER ROLE bootstrap_superuser NOSUPERUSER"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 12:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> True, but what if the idea is to have *no* superusers? I seem\n> to recall people being interested in setups like that.\n\nHmm, right. There's nothing that stops you from de-super-ing all of\nyour superusers today, and then if you ever need to do anything as\nsuperuser again, you have to start up in single-user mode, which will\ntreat your session as super regardless. But considering how much power\nthe bootstrap user still has, I'm not sure that's really buying you\nvery much. In particular, the new GRANT ALTER SYSTEM stuff looks\nsufficient to allow the bootstrap user to break out to the OS, so if\nwe want to regard no-superusers as a supported configuration, we\nprobably need to tighten that up. I think it's kind of hopeless,\nthough, because of the fact that you can also freely Trojan functions\nand operators in pg_catalog. Maybe that's insufficient to break out to\nthe OS or assume superuser privileges, but you should be able to at\nleast Trojan every other user on the system.\n\n> On the whole I don't have any objection to your proposal, I just\n> worry that somebody else will.\n\nOK, good to know. Thanks.\n\n> Of course there's always \"UPDATE pg_authid SET rolsuper = false\",\n> which makes it absolutely clear that you're breaking the glass cover.\n\nRight.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 21 Jul 2022 12:47:30 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: let's disallow ALTER ROLE bootstrap_superuser NOSUPERUSER"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> ... if\n> we want to regard no-superusers as a supported configuration, we\n> probably need to tighten that up. I think it's kind of hopeless,\n\nYeah, I agree. At least, I'm uninterested in spending any of my\nown time trying to make that usefully-more-secure than it is today.\nIf somebody else is interested enough to do the legwork, we can\nlook at what they come up with.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Jul 2022 13:02:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: let's disallow ALTER ROLE bootstrap_superuser NOSUPERUSER"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 01:02:50PM -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> ... if\n>> we want to regard no-superusers as a supported configuration, we\n>> probably need to tighten that up. I think it's kind of hopeless,\n> \n> Yeah, I agree. At least, I'm uninterested in spending any of my\n> own time trying to make that usefully-more-secure than it is today.\n> If somebody else is interested enough to do the legwork, we can\n> look at what they come up with.\n\nGiven the current assumptions the code makes about the bootstrap superuser,\nI think it makes sense to disallow removing its superuser attribute (at\nleast via ALTER ROLE NOSUPERUSER). It seems like there is much work to do\nbefore a no-superuser configuration could be formally supported. If/when\nsuch support materializes, it might be possible to remove the restriction\nthat Robert is proposing.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 21 Jul 2022 10:27:26 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: let's disallow ALTER ROLE bootstrap_superuser NOSUPERUSER"
},
{
"msg_contents": "On 7/21/22 12:46, Tom Lane wrote:\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n>> On Thu, Jul 21, 2022 at 9:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> True, but what if the idea is to have *no* superusers? I seem\n>>> to recall people being interested in setups like that.\n> \n>> I would expect an initdb option (once this is possible) to specify this\n>> desire and we just never set one up in the first place. It seems\n>> impractical to remove one after it already exists.\n> \n> There has to be a role that owns the built-in objects. Robert's point\n> is that pretending that that role isn't high-privilege is silly.\n\nMy strategy has been to ensure no other roles are members of the \nbootstrap superuser role, and then alter the bootstrap user to be \nNOLOGIN. E.g. in the example here:\n\nhttps://github.com/pgaudit/set_user/blob/1335cd34ca91b6bd19d5e910cc93c831d1ed0db0/README.md?plain=1#L589\n\nAnd checked here:\n\nhttps://github.com/pgaudit/set_user/blob/1335cd34ca91b6bd19d5e910cc93c831d1ed0db0/README.md?plain=1#L612\n\nhttps://github.com/pgaudit/set_user/blob/1335cd34ca91b6bd19d5e910cc93c831d1ed0db0/README.md?plain=1#L618\n\n-- \nJoe Conway\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 22 Jul 2022 13:21:54 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: let's disallow ALTER ROLE bootstrap_superuser NOSUPERUSER"
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 1:21 PM Joe Conway <mail@joeconway.com> wrote:\n> My strategy has been to ensure no other roles are members of the\n> bootstrap superuser role, and then alter the bootstrap user to be\n> NOLOGIN. E.g. in the example here:\n\nYeah, making the bootstrap role NOLOGIN seems more reasonable than\nmaking it NOSUPERUSER, at least to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 22 Jul 2022 16:40:48 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: let's disallow ALTER ROLE bootstrap_superuser NOSUPERUSER"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 1:27 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> Given the current assumptions the code makes about the bootstrap superuser,\n> I think it makes sense to disallow removing its superuser attribute (at\n> least via ALTER ROLE NOSUPERUSER). It seems like there is much work to do\n> before a no-superuser configuration could be formally supported. If/when\n> such support materializes, it might be possible to remove the restriction\n> that Robert is proposing.\n\nReaction to this patch seems tentatively positive so far, so I have\ncommitted it. Maybe someone will still show up to complain ... but I\nthink it's a good change, so I hope not.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 26 Jul 2022 14:40:49 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: let's disallow ALTER ROLE bootstrap_superuser NOSUPERUSER"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Reaction to this patch seems tentatively positive so far, so I have\n> committed it. Maybe someone will still show up to complain ... but I\n> think it's a good change, so I hope not.\n\nI had not actually read the patch, but now that I have, it's got\na basic typing error:\n\n+ bool should_be_super = BoolGetDatum(boolVal(dissuper->arg));\n+\n+ if (!should_be_super && roleid == BOOTSTRAP_SUPERUSERID)\n+ ereport(ERROR,\n\nThe result of BoolGetDatum is not bool, it's Datum. This is\nprobably harmless, but it's still a typing violation.\nYou want something like\n\n\tbool should_be_super = boolVal(dissuper->arg);\n\t...\n\tnew_record[Anum_pg_authid_rolsuper - 1] = BoolGetDatum(should_be_super);\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 26 Jul 2022 14:59:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: let's disallow ALTER ROLE bootstrap_superuser NOSUPERUSER"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 2:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I had not actually read the patch, but now that I have, it's got\n> a basic typing error:\n>\n> + bool should_be_super = BoolGetDatum(boolVal(dissuper->arg));\n> +\n> + if (!should_be_super && roleid == BOOTSTRAP_SUPERUSERID)\n> + ereport(ERROR,\n>\n> The result of BoolGetDatum is not bool, it's Datum. This is\n> probably harmless, but it's still a typing violation.\n> You want something like\n>\n> bool should_be_super = boolVal(dissuper->arg);\n> ...\n> new_record[Anum_pg_authid_rolsuper - 1] = BoolGetDatum(should_be_super);\n\nOops. Will fix.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 26 Jul 2022 15:06:08 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: let's disallow ALTER ROLE bootstrap_superuser NOSUPERUSER"
}
] |
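As an aside for readers without the PostgreSQL tree at hand, the typing point raised at the end of the thread above can be sketched as a standalone C program. The `Datum` typedef and `BoolGetDatum` macro below are deliberately simplified stand-ins, not the real `postgres.h` definitions:

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for the PostgreSQL definitions (assumption:
 * the real Datum is likewise an integer wide enough to hold a pointer). */
typedef uintptr_t Datum;
#define BoolGetDatum(X) ((Datum) ((X) ? 1 : 0))

/* Wrong: stores the Datum result of BoolGetDatum() in a bool.
 * It happens to behave correctly because the payload fits in a bool,
 * but it is still a typing violation. */
static bool
should_be_super_wrong(bool arg)
{
	bool		should_be_super = BoolGetDatum(arg);	/* Datum silently narrowed */

	return should_be_super;
}

/* Right: keep the bool as a bool and convert to Datum only at the
 * point where a Datum is actually required, e.g. a tuple column. */
static Datum
make_rolsuper_column(bool arg)
{
	bool		should_be_super = arg;

	return BoolGetDatum(should_be_super);
}
```

The "wrong" variant is harmless here only because a boolean payload survives the narrowing; with a wider Datum payload the same pattern would silently lose bits, which is why converting at the last moment is the safer idiom.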
[
{
"msg_contents": "Hey,\n\nThis came up today on twitter as a claimed POLA violation:\n\npostgres=# select random(), random() order by random();\n random | random\n---------------------+---------------------\n 0.08176638503720679 | 0.08176638503720679\n(1 row)\n\nWhich was explained long ago by Tom as:\n\nhttps://www.postgresql.org/message-id/9570.1193941378%40sss.pgh.pa.us\n\nThe parser makes it behave equivalent to:\n\nSELECT random() AS foo ORDER BY foo;\n\nWhich apparently extends to any column, even aliased ones, that use the\nsame expression:\n\npostgres=# select random() as foo, random() as foo2 order by foo;\n foo | foo2\n--------------------+--------------------\n 0.7334292196943459 | 0.7334292196943459\n(1 row)\n\nThe documentation does say:\n\n\"A query using a volatile function will re-evaluate the function at every\nrow where its value is needed.\"\n\nhttps://www.postgresql.org/docs/current/xfunc-volatility.html\n\nThat sentence is insufficient to explain why, without the order by, the\nsystem chooses to evaluate random() twice, while with order by it does so\nonly once.\n\nI propose extending the existing ORDER BY paragraph in the SELECT Command\nReference as follows:\n\n\"A limitation of this feature is that an ORDER BY clause applying to the\nresult of a UNION, INTERSECT, or EXCEPT clause can only specify an output\ncolumn name or number, not an expression.\"\n\nAdd:\n\nA side-effect of this feature is that ORDER BY expressions containing\nvolatile functions will execute the volatile function only once for the\nentire row; thus any column expressions using the same function will reuse\nthe same function result. 
By way of example, note the output differences\nfor the following two queries:\n\npostgres=# select random() as foo, random()*1 as foo2 from\ngenerate_series(1,2) order by foo;\n foo | foo2\n--------------------+--------------------\n 0.2631492904302788 | 0.2631492904302788\n 0.9019166692448664 | 0.9019166692448664\n(2 rows)\n\npostgres=# select random() as foo, random() as foo2 from\ngenerate_series(1,2);\n foo | foo2\n--------------------+--------------------\n 0.7763978178239725 | 0.3569212477832773\n 0.7360531822096732 | 0.7028952103643864\n(2 rows)\n\nDavid J.",
"msg_date": "Thu, 21 Jul 2022 13:20:01 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Undocumented Order By vs Target List Volatile Function Behavior"
}
] |
[
{
"msg_contents": "Hi all:\nHere's a patch to add counters about planned/executed for parallelism \nto pg_stat_statements, as a way to follow-up on if the queries are \nplanning/executing with parallelism, this can help to understand if you \nhave a good/bad configuration or if your hardware is enough\n\n\n\n\nWe decided to store, for each query, the number of times parallelism was \nplanned and the number of times it was executed\n\n\nRegards\n\nAnthony",
"msg_date": "Thu, 21 Jul 2022 18:26:58 -0400",
"msg_from": "Anthony Sotolongo <asotolongo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Expose Parallelism counters planned/execute in pg_stat_statements"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 06:26:58PM -0400, Anthony Sotolongo wrote:\n> Hi all:\n> Here's a patch to add counters about planned/executed for parallelism to\n> pg_stat_statements, as a way to follow-up on if the queries are\n> planning/executing with parallelism, this can help to understand if you have\n> a good/bad configuration or if your hardware is enough\n\n+1, I was missing something like this before, but it didn't occur to me to use\nPSS:\n\nhttps://www.postgresql.org/message-id/20200310190142.GB29065@telsasoft.com\n> My hope is to answer to questions like these:\n>\n> . is query (ever? usually?) using parallel paths?\n> . is query usefully using parallel paths?\n> . what queries are my max_parallel_workers(_per_process) being used for ?\n> . Are certain longrunning or frequently running queries which are using\n> parallel paths using all max_parallel_workers and precluding other queries\n> from using parallel query ? Or, are semi-short queries sometimes precluding\n> longrunning queries from using parallelism, when the long queries would\n> better benefit ?\n\nThis patch is storing the number of times the query was planned/executed using\nparallelism, but not the number of workers. Would it make sense to instead\nstore the *number* of workers launched/planned ? Otherwise, it might be\nthat a query is consistently planned to use a large number of workers, but then\nruns with few. I'm referring to the fields shown in \"explain/analyze\". (Then,\nthe 2nd field should be renamed to \"launched\").\n\n Workers Planned: 2\n Workers Launched: 2\n\nI don't think this is doing the right thing for prepared statements, like\nPQprepare()/PQexecPrepared(), or SQL: PREPARE p AS SELECT; EXECUTE p;\n\nRight now, the docs say that it shows the \"number of times the statement was\nplanned to use parallelism\", but the planning counter is incremented during\neach execution. PSS already shows \"calls\" and \"plans\" separately. The\ndocumentation doesn't mention prepared statements as a reason why they wouldn't\nmatch, which seems like a deficiency.\n\nThis currently doesn't count parallel workers used by utility statements, such\nas CREATE INDEX and VACUUM (see max_parallel_maintenance_workers). If that's\nnot easy to do, mention that in the docs as a limitation.\n\nYou should try to add some test to contrib/pg_stat_statements/sql, or add\na parallelism test to an existing test. Note that the number of parallel workers\nlaunched isn't stable, so you can't test that part..\n\nYou modified pgss_store() to take two booleans, but pass \"NULL\" instead of\n\"false\". Curiously, of all the compilers in cirrusci, only MSVC complained ..\n\n\"planed\" is actually spelled \"planned\", with two enns.\n\nThe patch has some leading/trailing whitespace (maybe shown by git log\ndepending on your configuration).\n\nPlease add this patch to the next commitfest.\nhttps://commitfest.postgresql.org/39/\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 21 Jul 2022 19:35:56 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Expose Parallelism counters planned/execute in pg_stat_statements"
},
{
"msg_contents": "\nOn 21-07-22 20:35, Justin Pryzby wrote:\n> On Thu, Jul 21, 2022 at 06:26:58PM -0400, Anthony Sotolongo wrote:\n>> Hi all:\n>> Here's a patch to add counters about planned/executed for parallelism to\n>> pg_stat_statements, as a way to follow-up on if the queries are\n>> planning/executing with parallelism, this can help to understand if you have\n>> a good/bad configuration or if your hardware is enough\n> +1, I was missing something like this before, but it didn't occur to me to use\n> PSS:\n\nFirst of all, thanks for reviewing the patch and for the comments\n\n\n> https://www.postgresql.org/message-id/20200310190142.GB29065@telsasoft.com\n>> My hope is to answer to questions like these:\n>>\n>> . is query (ever? usually?) using parallel paths?\n>> . is query usefully using parallel paths?\n>> . what queries are my max_parallel_workers(_per_process) being used for ?\n>> . Are certain longrunning or frequently running queries which are using\n>> parallel paths using all max_parallel_workers and precluding other queries\n>> from using parallel query ? Or, are semi-short queries sometimes precluding\n>> longrunning queries from using parallelism, when the long queries would\n>> better benefit ?\n> This patch is storing the number of times the query was planned/executed using\n> parallelism, but not the number of workers. Would it make sense to instead\n> store the *number* of workers launched/planned ? Otherwise, it might be\n> that a query is consistently planned to use a large number of workers, but then\n> runs with few. I'm referring to the fields shown in \"explain/analyze\". (Then,\n> the 2nd field should be renamed to \"launched\").\n>\n> Workers Planned: 2\n> Workers Launched: 2\n\nThe main idea of the patch is to store the number of times the \nstatements were planned and executed in parallel, not the number of \nworkers used in the execution. Of course, what you mention can be \nhelpful, and we will review how it can be achieved\n\n>\n> I don't think this is doing the right thing for prepared statements, like\n> PQprepare()/PQexecPrepared(), or SQL: PREPARE p AS SELECT; EXECUTE p;\n>\n> Right now, the docs say that it shows the \"number of times the statement was\n> planned to use parallelism\", but the planning counter is incremented during\n> each execution. PSS already shows \"calls\" and \"plans\" separately. The\n> documentation doesn't mention prepared statements as a reason why they wouldn't\n> match, which seems like a deficiency.\n\nWe will check it and see how to fix it\n\n>\n> This currently doesn't count parallel workers used by utility statements, such\n> as CREATE INDEX and VACUUM (see max_parallel_maintenance_workers). If that's\n> not easy to do, mention that in the docs as a limitation.\n\nWe will update the documentation with information related to this comment\n\n>\n> You should try to add some test to contrib/pg_stat_statements/sql, or add\n> parallelism test to an existing test. Note that the number of parallel workers\n> launched isn't stable, so you can't test that part..\n>\n> You modified pgss_store() to take two booleans, but pass \"NULL\" instead of\n> \"false\". Curiously, of all the compilers in cirrusci, only MSVC complained ..\n>\n> \"planed\" is actually spelled \"planned\", with two enns.\n>\n> The patch has some leading/trailing whitespace (maybe shown by git log\n> depending on your configuration).\n\nOK, we will fix it\n\n> Please add this patch to the next commitfest.\n> https://commitfest.postgresql.org/39/\n>\n\n\n",
"msg_date": "Fri, 22 Jul 2022 11:17:52 -0400",
"msg_from": "Anthony Sotolongo <asotolongo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Expose Parallelism counters planned/execute in pg_stat_statements"
},
{
"msg_contents": "Hi,\n\nOn Fri, Jul 22, 2022 at 11:17:52AM -0400, Anthony Sotolongo wrote:\n>\n> On 21-07-22 20:35, Justin Pryzby wrote:\n> > On Thu, Jul 21, 2022 at 06:26:58PM -0400, Anthony Sotolongo wrote:\n> > > Hi all:\n> > > Here's a patch to add counters about planned/executed for parallelism to\n> > > pg_stat_statements, as a way to follow-up on if the queries are\n> > > planning/executing with parallelism, this can help to understand if you have\n> > > a good/bad configuration or if your hardware is enough\n> > +1, I was missing something like this before, but it didn't occur to me to use\n> > PSS:\n>\n> First of all, thanks for reviewing the patch and for the comments\n>\n>\n> > https://www.postgresql.org/message-id/20200310190142.GB29065@telsasoft.com\n> > > My hope is to answer to questions like these:\n> > >\n> > > . is query (ever? usually?) using parallel paths?\n> > > . is query usefully using parallel paths?\n> > > . what queries are my max_parallel_workers(_per_process) being used for ?\n> > > . Are certain longrunning or frequently running queries which are using\n> > > parallel paths using all max_parallel_workers and precluding other queries\n> > > from using parallel query ? Or, are semi-short queries sometimes precluding\n> > > longrunning queries from using parallelism, when the long queries would\n> > > better benefit ?\n> > This patch is storing the number of times the query was planned/executed using\n> > parallelism, but not the number of workers. Would it make sense to instead\n> > store the *number* of workers launched/planned ? Otherwise, it might be\n> > that a query is consistently planned to use a large number of workers, but then\n> > runs with few. I'm referring to the fields shown in \"explain/analyze\". (Then,\n> > the 2nd field should be renamed to \"launched\").\n> >\n> > Workers Planned: 2\n> > Workers Launched: 2\n>\n> The main idea of the patch is to store the number of times the statements\n> were planned and executed in parallel, not the number of workers used in the\n> execution. Of course, what you mention can be helpful, it will be given a\n> review to see how it can be achieved\n\nI think you would need both pieces of information.\n\nWith your current patch it only says if the plan and execution had parallelism\nenabled, but not whether it could actually use parallelism at all. It gives\nsome information, but it's not that useful on its own.\n\nAlso, a cumulated number of workers isn't really useful if you don't know what\nfraction of the number of executions (or planning) they refer to.\n\nThat being said, I'm not sure how exactly the information about the number of\nworkers can be exposed, as there might be multiple gathers per plan and AFAIK\nthey can run at different parts of the query execution. So in some cases having\na total of 3 workers planned means that you ideally needed 3 workers available\nat the same time, and in some other cases it might be only 2 or even 1.\n\n\n",
"msg_date": "Sat, 23 Jul 2022 00:08:30 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Expose Parallelism counters planned/execute in pg_stat_statements"
},
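Julien's point — that a cumulative worker total is only interpretable next to a matching execution count — can be sketched with a hypothetical counter layout. The struct and field names below are illustrative assumptions for discussion, not actual pg_stat_statements columns:

```c
#include <stdint.h>

/* Hypothetical per-query counters in the spirit of the proposal. */
typedef struct ParallelCounters
{
	int64_t		calls;				/* total executions */
	int64_t		parallel_execs;		/* executions that ran with parallelism */
	int64_t		workers_planned;	/* cumulative workers planned */
	int64_t		workers_launched;	/* cumulative workers launched */
} ParallelCounters;

/*
 * Average workers launched per parallel execution. This is only
 * meaningful because parallel_execs is tracked alongside the
 * cumulative totals: a bare workers_launched sum cannot distinguish
 * "many executions with few workers" from "few executions with many".
 */
static double
avg_workers_launched(const ParallelCounters *c)
{
	if (c->parallel_execs == 0)
		return 0.0;
	return (double) c->workers_launched / (double) c->parallel_execs;
}
```

For example, a query with 4 parallel executions and 6 workers launched in total averaged only 1.5 workers per parallel run, even if each plan asked for 6 — which is exactly the launched-vs-planned gap the jdbc example below exhibits.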
{
"msg_contents": "On 22-07-22 12:08, Julien Rouhaud wrote:\n> Hi,\n>\n> On Fri, Jul 22, 2022 at 11:17:52AM -0400, Anthony Sotolongo wrote:\n>> On 21-07-22 20:35, Justin Pryzby wrote:\n>>> On Thu, Jul 21, 2022 at 06:26:58PM -0400, Anthony Sotolongo wrote:\n>>>> Hi all:\n>>>> Here's a patch to add counters about planned/executed for parallelism to\n>>>> pg_stat_statements, as a way to follow-up on if the queries are\n>>>> planning/executing with parallelism, this can help to understand if you have\n>>>> a good/bad configuration or if your hardware is enough\n>>> +1, I was missing something like this before, but it didn't occur to me to use\n>>> PSS:\n>> First of all, thanks for review the the patch and for the comments\n>>\n>>\n>>> https://www.postgresql.org/message-id/20200310190142.GB29065@telsasoft.com\n>>>> My hope is to answer to questions like these:\n>>>>\n>>>> . is query (ever? usually?) using parallel paths?\n>>>> . is query usefully using parallel paths?\n>>>> . what queries are my max_parallel_workers(_per_process) being used for ?\n>>>> . Are certain longrunning or frequently running queries which are using\n>>>> parallel paths using all max_parallel_workers and precluding other queries\n>>>> from using parallel query ? Or, are semi-short queries sometimes precluding\n>>>> longrunning queries from using parallelism, when the long queries would\n>>>> better benefit ?\n>>> This patch is storing the number of times the query was planned/executed using\n>>> parallelism, but not the number of workers. Would it make sense to instead\n>>> store the the *number* of workers launched/planned ? Otherwise, it might be\n>>> that a query is consistently planned to use a large number of workers, but then\n>>> runs with few. I'm referring to the fields shown in \"explain/analyze\". 
(Then,\n>>> the 2nd field should be renamed to \"launched\").\n>>>\n>>> Workers Planned: 2\n>>> Workers Launched: 2\n>> The main idea of the patch is to store the number of times the statements\n>> were planned and executed in parallel, not the number of workers used in the\n>> execution. Of course, what you mention can be helpful, it will be given a\n>> review to see how it can be achieved\n> I think you would need both information.\n>\n> With your current patch it only says if the plan and execution had parallelism\n> enabled, but not if it could actually use with parallelism at all. It gives\n> some information, but it's not that useful on its own.\n\nThe original idea of this patch was identify when occurred some of the \ncircumstances under which it was impossible to execute that plan in \nparallel at execution time\n\nas mentioned on the documentation at [1]\n\nFor example:\n\nDue to the different client configuration, the execution behavior can \nbe different , and can affect the performance:\n\nAs you can see in the above execution plan\n\n\n From psql\n\n -> Gather Merge (cost=779747.43..795700.62 rows=126492 \nwidth=40) (actual time=1109.515..1472.369 rows=267351 loops=1)\n Output: t.entity_node_id, t.configuration_id, \nt.stream_def_id, t.run_type_id, t.state_datetime, (PARTIAL count(1))\n Workers Planned: 6\n Workers Launched: 6\n -> Partial GroupAggregate (cost=778747.33..779327.09 \nrows=21082 width=40) (actual time=889.129..974.028 rows=38193 loops=7)\n\n From jdbc (from dbeaver)\n\n -> Gather Merge (cost=779747.43..795700.62 rows=126492 \nwidth=40) (actual time=4383.576..4385.856 rows=398 loops=1)\n Output: t.entity_node_id, t.configuration_id, \nt.stream_def_id, t.run_type_id, t.state_datetime, (PARTIAL count(1))\n Workers Planned: 6\n Workers Launched: 0\n -> Partial GroupAggregate (cost=778747.33..779327.09 \nrows=21082 width=40) (actual time=4383.574..4385.814 rows=398 loops=1)\n\nThis example was discussed also at this Thread [2]\n\nWith these 
PSS counters will be easily identified when some of these \ncauses are happening.\n\n [1] \nhttps://www.postgresql.org/docs/current/when-can-parallel-query-be-used.html\n\n [2] \nhttps://www.postgresql.org/message-id/flat/32277_1555482629_5CB6C805_32277_8_1_A971FB43DFBC3D4C859ACB3316C9FF4632D98B37%40OPEXCAUBM42.corporate.adroot.infra.ftgroup\n\n>\n> Also, a cumulated number of workers isn't really useful if you don't know what\n> fraction of the number of executions (or planning) they refer to.\n\nWe will try to investigate how to do this.\n\n>\n> That being said, I'm not sure how exactly the information about the number of\n> workers can be exposed, as there might be multiple gathers per plan and AKAIK\n> they can run at different part of the query execution. So in some case having\n> a total of 3 workers planned means that you ideally needed 3 workers available\n> at the same time, and in some other case it might be only 2 or even 1.",
"msg_date": "Fri, 22 Jul 2022 14:11:35 -0400",
"msg_from": "Anthony Sotolongo <asotolongo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Expose Parallelism counters planned/execute in pg_stat_statements"
},
{
"msg_contents": "Hi,\n\nOn Fri, Jul 22, 2022 at 02:11:35PM -0400, Anthony Sotolongo wrote:\n>\n> On 22-07-22 12:08, Julien Rouhaud wrote:\n> >\n> > With your current patch it only says if the plan and execution had parallelism\n> > enabled, but not if it could actually use with parallelism at all. It gives\n> > some information, but it's not that useful on its own.\n>\n> The original idea of this patch was identify when occurred some of the\n> circumstances under which it was impossible to execute that plan in\n> parallel at execution time\n>\n> as mentioned on the documentation at [1]\n>\n> For example:\n>\n> Due to the different client configuration, the execution behavior can be\n> different , and can affect the performance:\n>\n> As you can see in the above execution plan\n>\n>\n> From psql\n>\n>          ->  Gather Merge  (cost=779747.43..795700.62 rows=126492\n> width=40) (actual time=1109.515..1472.369 rows=267351 loops=1)\n>                Output: t.entity_node_id, t.configuration_id,\n> t.stream_def_id, t.run_type_id, t.state_datetime, (PARTIAL count(1))\n>                Workers Planned: 6\n>                Workers Launched: 6\n>                ->  Partial GroupAggregate (cost=778747.33..779327.09\n> rows=21082 width=40) (actual time=889.129..974.028 rows=38193 loops=7)\n>\n> From jdbc (from dbeaver)\n>\n>          ->  Gather Merge  (cost=779747.43..795700.62 rows=126492\n> width=40) (actual time=4383.576..4385.856 rows=398 loops=1)\n>                Output: t.entity_node_id, t.configuration_id,\n> t.stream_def_id, t.run_type_id, t.state_datetime, (PARTIAL count(1))\n>                Workers Planned: 6\n>                Workers Launched: 0\n>                ->  Partial GroupAggregate (cost=778747.33..779327.09\n> rows=21082 width=40) (actual time=4383.574..4385.814 rows=398 loops=1)\n>\n> This example was discussed also at this Thread [2]\n>\n> With these PSS counters will be easily identified when some of these causes\n> 
are happening.\n\nI agree it can be hard to identify, but I don't think that your proposed\napproach is enough to be able to do so. There's no guarantee of an exact 1:1\nmapping between planning and execution, so you could totally see the same value\nfor parallel_planned and parallel_exec and still have the dbeaver behavior\nhappening.\n\nIf you want to be able to distinguish \"plan was parallel but execution was\nforced to disable it\" from \"plan wasn't parallel, so was the execution\", you\nneed some specific counters for both situations.\n\n\n",
"msg_date": "Sat, 23 Jul 2022 12:03:34 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Expose Parallelism counters planned/execute in pg_stat_statements"
},
{
"msg_contents": "On 23-07-22 00:03, Julien Rouhaud wrote:\n> Hi,\n>\n> On Fri, Jul 22, 2022 at 02:11:35PM -0400, Anthony Sotolongo wrote:\n>> On 22-07-22 12:08, Julien Rouhaud wrote:\n>>> With your current patch it only says if the plan and execution had parallelism\n>>> enabled, but not if it could actually use with parallelism at all. It gives\n>>> some information, but it's not that useful on its own.\n>> The original idea of this patch was identify when occurred some of the\n>> circumstances under which it was impossible to execute that plan in\n>> parallel at execution time\n>>\n>> as mentioned on the documentation at [1]\n>>\n>> For example:\n>>\n>> Due to the different client configuration, the execution behavior can be\n>> different , and can affect the performance:\n>>\n>> As you can see in the above execution plan\n>>\n>>\n>> From psql\n>>\n>> -> Gather Merge (cost=779747.43..795700.62 rows=126492\n>> width=40) (actual time=1109.515..1472.369 rows=267351 loops=1)\n>> Output: t.entity_node_id, t.configuration_id,\n>> t.stream_def_id, t.run_type_id, t.state_datetime, (PARTIAL count(1))\n>> Workers Planned: 6\n>> Workers Launched: 6\n>> -> Partial GroupAggregate (cost=778747.33..779327.09\n>> rows=21082 width=40) (actual time=889.129..974.028 rows=38193 loops=7)\n>>\n>> From jdbc (from dbeaver)\n>>\n>> -> Gather Merge (cost=779747.43..795700.62 rows=126492\n>> width=40) (actual time=4383.576..4385.856 rows=398 loops=1)\n>> Output: t.entity_node_id, t.configuration_id,\n>> t.stream_def_id, t.run_type_id, t.state_datetime, (PARTIAL count(1))\n>> Workers Planned: 6\n>> Workers Launched: 0\n>> -> Partial GroupAggregate (cost=778747.33..779327.09\n>> rows=21082 width=40) (actual time=4383.574..4385.814 rows=398 loops=1)\n>>\n>> This example was discussed also at this Thread [2]\n>>\n>> With these PSS counters will be easily identified when some of these causes\n>> are happening.\n> I agree it can be hard to identify, but I don't think that your proposed\n> 
approach is enough to be able to do so. There's no guarantee of an exact 1:1\n> mapping between planning and execution, so you could totally see the same value\n> for parallel_planned and parallel_exec and still have the dbeaver behavior\n> happening.\n>\n> If you want to be able to distinguish \"plan was parallel but execution was\n> forced to disable it\" from \"plan wasn't parallel, so was the execution\", you\n> need some specific counters for both situations.\n\nThanks for your time and feedback, yes we were missing some details, so \nwe need to rethink some points to continue\n\n\n\n\n\n",
"msg_date": "Mon, 25 Jul 2022 15:19:22 -0400",
"msg_from": "Anthony Sotolongo <asotolongo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Expose Parallelism counters planned/execute in pg_stat_statements"
},
{
"msg_contents": "Hi,\n\nEl lun, 25 jul 2022 a la(s) 14:19, Anthony Sotolongo (asotolongo@gmail.com)\nescribió:\n\n> On 23-07-22 00:03, Julien Rouhaud wrote:\n> > Hi,\n> >\n> > On Fri, Jul 22, 2022 at 02:11:35PM -0400, Anthony Sotolongo wrote:\n> >> On 22-07-22 12:08, Julien Rouhaud wrote:\n> >>> With your current patch it only says if the plan and execution had\n> parallelism\n> >>> enabled, but not if it could actually use with parallelism at all. It\n> gives\n> >>> some information, but it's not that useful on its own.\n> >> The original idea of this patch was identify when occurred some of the\n> >> circumstances under which it was impossible to execute that plan in\n> >> parallel at execution time\n> >>\n> >> as mentioned on the documentation at [1]\n> >>\n> >> For example:\n> >>\n> >> Due to the different client configuration, the execution behavior can be\n> >> different , and can affect the performance:\n> >>\n> >> As you can see in the above execution plan\n> >>\n> >>\n> >> From psql\n> >>\n> >> -> Gather Merge (cost=779747.43..795700.62 rows=126492\n> >> width=40) (actual time=1109.515..1472.369 rows=267351 loops=1)\n> >> Output: t.entity_node_id, t.configuration_id,\n> >> t.stream_def_id, t.run_type_id, t.state_datetime, (PARTIAL count(1))\n> >> Workers Planned: 6\n> >> Workers Launched: 6\n> >> -> Partial GroupAggregate (cost=778747.33..779327.09\n> >> rows=21082 width=40) (actual time=889.129..974.028 rows=38193 loops=7)\n> >>\n> >> From jdbc (from dbeaver)\n> >>\n> >> -> Gather Merge (cost=779747.43..795700.62 rows=126492\n> >> width=40) (actual time=4383.576..4385.856 rows=398 loops=1)\n> >> Output: t.entity_node_id, t.configuration_id,\n> >> t.stream_def_id, t.run_type_id, t.state_datetime, (PARTIAL count(1))\n> >> Workers Planned: 6\n> >> Workers Launched: 0\n> >> -> Partial GroupAggregate (cost=778747.33..779327.09\n> >> rows=21082 width=40) (actual time=4383.574..4385.814 rows=398 loops=1)\n> >>\n> >> This example was discussed also at this Thread 
[2]\n> >>\n> >> With these PSS counters will be easily identified when some of these\n> causes\n> >> are happening.\n> > I agree it can be hard to identify, but I don't think that your proposed\n> > approach is enough to be able to do so. There's no guarantee of an\n> exact 1:1\n> > mapping between planning and execution, so you could totally see the\n> same value\n> > for parallel_planned and parallel_exec and still have the dbeaver\n> behavior\n> > happening.\n> >\n> > If you want to be able to distinguish \"plan was parallel but execution\n> was\n> > forced to disable it\" from \"plan wasn't parallel, so was the execution\",\n> you\n> > need some specific counters for both situations.\n>\n> Thanks for your time and feedback, yes we were missing some details, so\n> we need to rethink some points to continue\n>\n\nWe have rewritten the patch and added the necessary columns to have the\nnumber of times a parallel query plan was not executed using parallelism.\n\nWe are investigating how to add more information related to the workers\ncreated\nby the Gather/GatherMerge nodes, but it is not a trivial task.\n\nRegards.",
"msg_date": "Fri, 29 Jul 2022 08:36:44 -0500",
"msg_from": "Daymel Bonne Solís <daymelbonne@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Expose Parallelism counters planned/execute in pg_stat_statements"
},
{
"msg_contents": "Hi:\n\nWe have rewritten the patch and added the necessary columns to have the\n> number of times a parallel query plan was not executed using parallelism.\n>\n>\n This version includes comments on the source code and documentation.\n\nRegards",
"msg_date": "Mon, 15 Aug 2022 15:14:59 -0500",
"msg_from": "Daymel Bonne Solís <daymelbonne@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Expose Parallelism counters planned/execute in pg_stat_statements"
},
{
"msg_contents": "Hi,\n\nOn Fri, Jul 29, 2022 at 08:36:44AM -0500, Daymel Bonne Solís wrote:\n>\n> We have rewritten the patch and added the necessary columns to have the\n> number of times a parallel query plan was not executed using parallelism.\n>\n> We are investigating how to add more information related to the workers\n> created\n> by the Gather/GatherMerge nodes, but it is not a trivial task.\n\nAs far as I can see the scope of the counters is now different. You said you\nwanted to be able to identify when a parallel query plan cannot be executed\nwith parallelism, but what the fields are now showing is simply whether no\nworkers were launched at all. It could be because of the dbeaver behavior you\nmentioned (the !es_use_parallel_mode case), but also if the executor did try to\nlaunch parallel workers and didn't get any.\n\nI don't think that's an improvement. With this patch if you see the\n\"paral_planned_not_exec\" counter going up, you still don't know if this is\nbecause of the !es_use_parallel_mode or if you simply have too many parallel\nqueries running at the same time, or both, and therefore can't do much with\nthat information. Both situations are different and in my opinion require\ndifferent (and specialized) counters to properly handle them.\n\nAlso, I don't think that paral_planned_exec and paral_planned_not_exec are good\ncolumn (and variable) names. Maybe something like\n\"parallel_exec_count\" and \"forced_non_parallel_exec_count\" (assuming it's based\non a parallel plan and !es_use_parallel_mode).\n\n\n",
"msg_date": "Tue, 16 Aug 2022 14:58:43 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Expose Parallelism counters planned/execute in pg_stat_statements"
},
{
"msg_contents": "On Tue, Aug 16, 2022 at 02:58:43PM +0800, Julien Rouhaud wrote:\n> I don't think that's an improvement. With this patch if you see the\n> \"paral_planned_not_exec\" counter going up, you still don't know if this is\n> because of the !es_use_parallel_mode or if you simply have too many parallel\n> queries running at the same time, or both, and therefore can't do much with\n> that information. Both situations are different and in my opinion require\n> different (and specialized) counters to properly handle them.\n\nThis thread has been idle for a few weeks now, and this feedback has\nnot been answered to. This CF entry has been marked as RwF.\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 15:03:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Expose Parallelism counters planned/execute in pg_stat_statements"
}
] |
[
{
"msg_contents": "Hi,\n\nHere are some recent $SUBJECT on HEAD. Unfortunately we don't see the\nregression.diffs file :-(\n\nbfbot=> select make_snapshot_url(animal, snapshot) from run where\n'slot_creation_error' = any(fail_tests) order by snapshot desc;\n make_snapshot_url\n------------------------------------------------------------------------------------------------\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2022-07-21%2023:33:50\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2022-06-15%2003:12:54\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2022-05-10%2021:03:37\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2022-04-11%2021:04:15\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2022-04-08%2018:04:15\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2022-04-01%2018:27:29\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2022-03-08%2001:14:51\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2022-02-24%2015:17:30\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-02-15%2009:29:06\n(9 rows)\n\n\n",
"msg_date": "Fri, 22 Jul 2022 13:59:09 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "slot_creation_error failures"
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 10:59 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> Hi,\n>\n> Here are some recent $SUBJECT on HEAD. Unfortunately we don't see the\n> regression.diffs file :-(\n\nWe can see regression.diffs[1]:\n\n========================== output_iso/regression.diffs ================\ndiff -w -U3 C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql/contrib/test_decoding/expected/slot_creation_error.out\nC:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/contrib/test_decoding/output_iso/results/slot_creation_error.out\n--- C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql/contrib/test_decoding/expected/slot_creation_error.out\n2022-03-19 06:04:11.806604000 +0000\n+++ C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/contrib/test_decoding/output_iso/results/slot_creation_error.out\n2022-04-11 21:15:58.342700300 +0000\n@@ -92,23 +92,7 @@\n FROM pg_stat_activity\n WHERE application_name = 'isolation/slot_creation_error/s2';\n <waiting ...>\n-step s2_init: <... completed>\n-FATAL: terminating connection due to administrator command\n-server closed the connection unexpectedly\n+PQconsumeInput failed: server closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\n\n-step s1_terminate_s2: <... completed>\n-pg_terminate_backend\n---------------------\n-t\n-(1 row)\n-\n-step s1_c: COMMIT;\n-step s1_view_slot:\n- SELECT slot_name, slot_type, active FROM pg_replication_slots\nWHERE slot_name = 'slot_creation_error'\n-\n-slot_name|slot_type|active\n----------+---------+------\n-(0 rows)\n-\n\nRegards,\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=fairywren&dt=2022-04-11%2021%3A04%3A15&stg=test-decoding-check\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 22 Jul 2022 12:10:27 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: slot_creation_error failures"
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 3:11 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> We can see regression.diffs[1]:\n\nAhh, right, thanks. We see it when it fails in test-decoding-check on\nWindows, but not when it fails in MiscCheck, and I didn't check enough\nof them.\n\n\n",
"msg_date": "Fri, 22 Jul 2022 15:34:45 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: slot_creation_error failures"
}
] |
[
{
"msg_contents": "I notice that there are lots of *static inline functions* in header files,\nthe header file's content will go into each translation unit at preprocess\nphase, that means all the c file including the header will have a copy\nof the static inline function.\n\nThe inline keyword is a hint for compiler to inline the function, if the\ncompiler does inline the function, the definition could be optimized out\nby the compiler, but if the *inline function* can not be inlined, the function\nwill reside in each of the translation units that include the header file, which\nmeans the same static function compiled multiple times and may waste\nsome space?\n\nIMHO, the header files should only include the inline function's declaration,\nand the definition should be in c files.\n\nI am not sure why this kind of coding style came along, appreciate if\nsome one can give me some clue, thanks :)\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Fri, 22 Jul 2022 10:17:00 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "question about `static inline` functions in header files"
},
{
"msg_contents": "Junwang Zhao <zhjwpku@gmail.com> writes:\n> I notice that there are lots of *static inline functions* in header files,\n> the header file's content will go into each translation unit at preprocess\n> phase, that means all the c file including the header will have a copy\n> of the static inline function.\n\nWe are assuming that the compiler will not emit unused static functions.\nThis has been default behavior in gcc for ages. If you're unfortunate\nenough to have a compiler that won't do it, yes you're going to have a\nbloated binary.\n\n> IMHO, the header files should only include the inline function's declaration,\n> and the definition should be in c files.\n\nThen it couldn't be inlined, defeating the purpose.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Jul 2022 23:03:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: question about `static inline` functions in header files"
},
{
"msg_contents": "Ok, thanks for the clarification.\n\nOn Fri, Jul 22, 2022 at 11:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Junwang Zhao <zhjwpku@gmail.com> writes:\n> > I notice that there are lots of *static inline functions* in header files,\n> > the header file's content will go into each translation unit at preprocess\n> > phase, that means all the c file including the header will have a copy\n> > of the static inline function.\n>\n> We are assuming that the compiler will not emit unused static functions.\n> This has been default behavior in gcc for ages. If you're unfortunate\n> enough to have a compiler that won't do it, yes you're going to have a\n> bloated binary.\n>\n> > IMHO, the header files should only include the inline function's declaration,\n> > and the definition should be in c files.\n>\n> Then it couldn't be inlined, defeating the purpose.\n>\n> regards, tom lane\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Fri, 22 Jul 2022 11:08:54 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: question about `static inline` functions in header files"
}
] |
[
{
"msg_contents": "Hi,\nI recently find this problem while testing PG14 with sysbench.\nThen I look through the emails from pgsql-hackers and find a previous similary bug which is https://www.postgresql.org/message-id/flat/2247102.1618008027%40sss.pgh.pa.us. But the bugfix commit(34f581c39e97e2ea237255cf75cccebccc02d477) is already patched to PG14.\n\nThe following is the stack of coredump.\n#0 0x00007f2ba9bfa277 in raise () from /lib64/libc.so.6\n#1 0x00007f2ba9bfb968 in abort () from /lib64/libc.so.6\n#2 0x00000000009416bb in errfinish () at elog.c:717\n#3 0x000000000049518b in visibilitymap_clear (rel=<optimized out>, heapBlk=<optimized out>, buf=<optimized out>, flags=<optimized out>, polar_record=<optimized out>) at visibilitymap.c:142\n#4 0x000000000054c2df in heap_update () at heapam.c:3948\n#5 0x0000000000555538 in heapam_tuple_update (relation=0x7f2b909cfef8, otid=0x7fff930c577a, slot=0x2612528, cid=0, snapshot=<optimized out>, crosscheck=0x0, wait=true, tmfd=0x7fff930c5690, lockmode=0x7fff930c5684, update_indexes=0x7fff930c5681)\n at heapam_handler.c:327\n#6 0x00000000006e04f3 in table_tuple_update (update_indexes=0x7fff930c5681, lockmode=0x7fff930c5684, tmfd=0x7fff930c5690, wait=true, crosscheck=<optimized out>, snapshot=<optimized out>, cid=<optimized out>, slot=0x2612528, otid=0x26126d0, \n rel=0x7f2b909cfef8) at ../../../src/include/access/tableam.h:1509\n#7 ExecUpdate () at nodeModifyTable.c:1785\n#8 0x00000000006e0f2a in ExecModifyTable () at nodeModifyTable.c:2592\n#9 0x00000000006b909c in ExecProcNode (node=0x24d4eb0) at ../../../src/include/executor/executor.h:257\n#10 ExecutePlan (execute_once=<optimized out>, dest=0xaa52e0 <donothingDR>, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_UPDATE, use_parallel_mode=<optimized out>, planstate=0x24d4eb0, estate=0x24d4c08)\n at execMain.c:1553\n#11 standard_ExecutorRun () at execMain.c:363\n#12 0x00007f2babd893dd in pgss_ExecutorRun (queryDesc=0x25dcab8, 
direction=ForwardScanDirection, count=0, execute_once=<optimized out>) at pg_stat_statements.c:1018\n#13 0x00007f2babd816fa in explain_ExecutorRun (queryDesc=0x25dcab8, direction=ForwardScanDirection, count=0, execute_once=<optimized out>) at auto_explain.c:334\n#14 0x0000000000828ac8 in ProcessQuery (plan=0x25c9e88, sourceText=0x25dc9a8 \"UPDATE sbtest8 SET k=k+1 WHERE id=$1\", params=0x25dca28, queryEnv=0x0, dest=<optimized out>, qc=0x7fff930c5e40) at pquery.c:160\n#15 0x00000000008294d8 in PortalRunMulti (portal=portal@entry=0x25697c8, isTopLevel=isTopLevel@entry=true, setHoldSnapshot=setHoldSnapshot@entry=false, dest=0xaa52e0 <donothingDR>, dest@entry=0x24a6568, altdest=0xaa52e0 <donothingDR>, \n altdest@entry=0x24a6568, qc=qc@entry=0x7fff930c5e40) at pquery.c:1277\n#16 0x0000000000829929 in PortalRun () at pquery.c:797\n#17 0x0000000000826f57 in exec_execute_message (max_rows=9223372036854775807, portal_name=0x24a6158 \"\") at postgres.c:2306\n#18 PostgresMain () at postgres.c:4826\n#19 0x00000000007a1e8a in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4910\n#20 BackendStartup (port=<optimized out>) at postmaster.c:4621\n#21 ServerLoop () at postmaster.c:1823\n#22 0x00000000007a2c4b in PostmasterMain () at postmaster.c:1488#23 0x000000000050c5d0 in main (argc=3, argv=0x24a0f50) at main.c:209\n\nI'm wondering whether there's another code path to lead this problem happened. Since, I take a deep dig via gdb which turns out that newbuffer is not euqal to buffer. In other words, the function RelationGetBufferForTuple must have been called just now.\nBesides, why didn't we re-check the flag after RelationGetBufferForTuple was called?\nBut I'm confused about the heap_update and RelationGetBufferForTuple functions which are too long to understand for me. 
Can anyone give me a hand?\n\n--\nBest regards,\nrogers.ww",
"msg_date": "Fri, 22 Jul 2022 16:22:46 +0800",
"msg_from": "\"=?UTF-8?B?546L5LyfKOWtpuW8iCk=?=\" <rogers.ww@alibaba-inc.com>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?UEFOSUM6IHdyb25nIGJ1ZmZlciBwYXNzZWQgdG8gdmlzaWJpbGl0eW1hcF9jbGVhcg==?="
},
{
"msg_contents": "\n\nOn 7/22/22 10:22, 王伟(学弈) wrote:\n> Hi,\n> I recently find this problem while testing PG14 with sysbench.\n> Then I look through the emails from pgsql-hackers and find a previous\n> similary bug which\n> is https://www.postgresql.org/message-id/flat/2247102.1618008027%40sss.pgh.pa.us\n> <https://www.postgresql.org/message-id/flat/2247102.1618008027%40sss.pgh.pa.us>.\n> But the bugfix commit(34f581c39e97e2ea237255cf75cccebccc02d477) is\n> already patched to PG14.\n> \n\nWhich PG14 version / commit is this, exactly? What sysbench parameters\ndid you use, how likely is hitting the issue?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 22 Jul 2022 12:06:21 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: wrong buffer passed to visibilitymap_clear"
},
{
"msg_contents": "On 7/22/22 18:06, Tomas Vondra wrote:\n> Which PG14 version / commit is this, exactly? What sysbench parameters\n> did you use, how likely is hitting the issue?\nPG_VERSION is '14beta2'.\nThe head commit id is 'e1c1c30f635390b6a3ae4993e8cac213a33e6e3f'.\nI have run these sysbench commands for couple of days, but only two times to hit the issue.\nThese sysbench commands are:\nprepare:\nsysbench --tables=10 --table-size=1000000 --db-ps-mode=auto --pgsql-password=* --time=72000 --db-driver=pgsql --pgsql-port=* --threads=50 --thread-init-timeout=3000 --report-interval=5 --pgsql-user=* --pgsql-host=* --pgsql-db=* --events=0 --pgsql-ignore-errors=\"PX000,58M01\" ./src/lua/oltp_insert.lua prepare\nparallel execution: \nsysbench --tables=10 --table-size=1000000 --db-ps-mode=auto --pgsql-password=* --time=72000 --db-driver=pgsql --pgsql-port=* --threads=50 --thread-init-timeout=3000 --report-interval=5 --pgsql-user=* --pgsql-host=* --pgsql-db=* --events=0 --pgsql-ignore-errors=\"PX000,58M01\" ./src/lua/oltp_insert.lua run\nsysbench --tables=10 --table-size=1000000 --db-ps-mode=auto --pgsql-password=* --time=72000 --db-driver=pgsql --pgsql-port=* --threads=50 --thread-init-timeout=3000 --report-interval=5 --pgsql-user=* --pgsql-host=*--pgsql-db=* --events=0 --pgsql-ignore-errors=\"PX000,58M01\" ./src/lua/oltp_delete.lua run\nsysbench --tables=10 --table-size=1000000 --db-ps-mode=auto --pgsql-password=* --time=72000 --db-driver=pgsql --pgsql-port=* --threads=50 --thread-init-timeout=3000 --report-interval=5 --pgsql-user=* --pgsql-host=* --pgsql-db=* --events=0 --pgsql-ignore-errors=\"PX000,58M01\" ./src/lua/oltp_read_write.lua run\nsysbench --tables=10 --table-size=1000000 --db-ps-mode=auto --pgsql-password=* --time=72000 --db-driver=pgsql --pgsql-port=* --threads=50 --thread-init-timeout=3000 --report-interval=5 --pgsql-user=* --pgsql-host=* --pgsql-db=* --events=0 --pgsql-ignore-errors=\"PX000,58M01\" ./src/lua/oltp_update_index.lua 
run\n\n--\nregards,\nrogers.ww\n------------------------------------------------------------------\n发件人:Tomas Vondra<tomas.vondra@enterprisedb.com>\n日 期:2022年07月22日 18:06:21\n收件人:王伟(学弈)<rogers.ww@alibaba-inc.com>; pgsql-hackers<pgsql-hackers@lists.postgresql.org>\n主 题:Re: PANIC: wrong buffer passed to visibilitymap_clear\n\n\n\nOn 7/22/22 10:22, 王伟(学弈) wrote:\n> Hi,\n> I recently find this problem while testing PG14 with sysbench.\n> Then I look through the emails from pgsql-hackers and find a previous\n> similary bug which\n> is https://www.postgresql.org/message-id/flat/2247102.1618008027%40sss.pgh.pa.us\n> <https://www.postgresql.org/message-id/flat/2247102.1618008027%40sss.pgh.pa.us>.\n> But the bugfix commit(34f581c39e97e2ea237255cf75cccebccc02d477) is\n> already patched to PG14.\n> \n\nWhich PG14 version / commit is this, exactly? What sysbench parameters\ndid you use, how likely is hitting the issue?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nOn 7/22/22 18:06, Tomas Vondra wrote:> Which PG14 version / commit is this, exactly? 
What sysbench parameters> did you use, how likely is hitting the issue?PG_VERSION is '14beta2'.The head commit id is 'e1c1c30f635390b6a3ae4993e8cac213a33e6e3f'.I have run these sysbench commands for couple of days, but only two times to hit the issue.These sysbench commands are:prepare:sysbench --tables=10 --table-size=1000000 --db-ps-mode=auto --pgsql-password=* --time=72000 --db-driver=pgsql --pgsql-port=* --threads=50 --thread-init-timeout=3000 --report-interval=5 --pgsql-user=* --pgsql-host=* --pgsql-db=* --events=0 --pgsql-ignore-errors=\"PX000,58M01\" ./src/lua/oltp_insert.lua prepareparallel execution: sysbench --tables=10 --table-size=1000000 --db-ps-mode=auto --pgsql-password=* --time=72000 --db-driver=pgsql --pgsql-port=* --threads=50 --thread-init-timeout=3000 --report-interval=5 --pgsql-user=* --pgsql-host=* --pgsql-db=* --events=0 --pgsql-ignore-errors=\"PX000,58M01\" ./src/lua/oltp_insert.lua runsysbench --tables=10 --table-size=1000000 --db-ps-mode=auto --pgsql-password=* --time=72000 --db-driver=pgsql --pgsql-port=* --threads=50 --thread-init-timeout=3000 --report-interval=5 --pgsql-user=* --pgsql-host=*--pgsql-db=* --events=0 --pgsql-ignore-errors=\"PX000,58M01\" ./src/lua/oltp_delete.lua runsysbench --tables=10 --table-size=1000000 --db-ps-mode=auto --pgsql-password=* --time=72000 --db-driver=pgsql --pgsql-port=* --threads=50 --thread-init-timeout=3000 --report-interval=5 --pgsql-user=* --pgsql-host=* --pgsql-db=* --events=0 --pgsql-ignore-errors=\"PX000,58M01\" ./src/lua/oltp_read_write.lua runsysbench --tables=10 --table-size=1000000 --db-ps-mode=auto --pgsql-password=* --time=72000 --db-driver=pgsql --pgsql-port=* --threads=50 --thread-init-timeout=3000 --report-interval=5 --pgsql-user=* --pgsql-host=* --pgsql-db=* --events=0 --pgsql-ignore-errors=\"PX000,58M01\" ./src/lua/oltp_update_index.lua run--regards,rogers.ww------------------------------------------------------------------发件人:Tomas Vondra<tomas.vondra@enterprisedb.com>日 期:2022年07月22日 
18:06:21收件人:王伟(学弈)<rogers.ww@alibaba-inc.com>; pgsql-hackers<pgsql-hackers@lists.postgresql.org>主 题:Re: PANIC: wrong buffer passed to visibilitymap_clearOn 7/22/22 10:22, 王伟(学弈) wrote:> Hi,> I recently find this problem while testing PG14 with sysbench.> Then I look through the emails from pgsql-hackers and find a previous> similary bug which> is https://www.postgresql.org/message-id/flat/2247102.1618008027%40sss.pgh.pa.us> <https://www.postgresql.org/message-id/flat/2247102.1618008027%40sss.pgh.pa.us>.> But the bugfix commit(34f581c39e97e2ea237255cf75cccebccc02d477) is> already patched to PG14.> Which PG14 version / commit is this, exactly? What sysbench parametersdid you use, how likely is hitting the issue?regards-- Tomas VondraEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 22 Jul 2022 20:17:51 +0800",
"msg_from": "\"=?UTF-8?B?546L5LyfKOWtpuW8iCk=?=\" <rogers.ww@alibaba-inc.com>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?5Zue5aSN77yaUmU6IFBBTklDOiB3cm9uZyBidWZmZXIgcGFzc2VkIHRvIHZpc2liaWxpdHlt?=\n =?UTF-8?B?YXBfY2xlYXI=?="
},
{
"msg_contents": "On 7/22/22 14:17, 王伟(学弈) wrote:\n> On 7/22/22 18:06, Tomas Vondra wrote:\n>> Which PG14 version / commit is this, exactly? What sysbench parameters\n>> did you use, how likely is hitting the issue?\n> PG_VERSION is '14beta2'.\n> The head commit id is 'e1c1c30f635390b6a3ae4993e8cac213a33e6e3f'.\n\nWhy not current REL_14_STABLE? 14beta2 is pretty old, and while I\nhaven't checked, perhaps this was already fixed since then.\n\n> I have run these sysbench commands for couple of days, but only two times to hit the issue.\n> These sysbench commands are:\n> prepare:\n> sysbench --tables=10 --table-size=1000000 --db-ps-mode=auto --pgsql-password=* --time=72000 --db-driver=pgsql --pgsql-port=*\n> --threads=50 --thread-init-timeout=3000 --report-interval=5 --pgsql-user=* --pgsql-host=* --pgsql-db=* --events=0 --pgsql-ignore-errors=\"PX000,58M01\"\n> ./src/lua/oltp_insert.lua prepare\n> parallel execution: \n> sysbench --tables=10 --table-size=1000000 --db-ps-mode=auto --pgsql-password=* --time=72000 --db-driver=pgsql --pgsql-port=*\n> --threads=50 --thread-init-timeout=3000 --report-interval=5 --pgsql-user=* --pgsql-host=*\n> --pgsql-db=* --events=0 --pgsql-ignore-errors=\"PX000,58M01\"\n> ./src/lua/oltp_insert.lua run\n> sysbench --tables=10 --table-size=1000000 --db-ps-mode=auto --pgsql-password=* --time=72000 --db-driver=pgsql --pgsql-port=*\n> --threads=50 --thread-init-timeout=3000 --report-interval=5 --pgsql-user=* --pgsql-host=*--pgsql-db=* --events=0 --pgsql-ignore-errors=\"PX000,58M01\"\n> ./src/lua/oltp_delete.lua run\n> sysbench --tables=10 --table-size=1000000 --db-ps-mode=auto --pgsql-password=* --time=72000 --db-driver=pgsql --pgsql-port=*\n> --threads=50 --thread-init-timeout=3000 --report-interval=5 --pgsql-user=* --pgsql-host=*\n> --pgsql-db=* --events=0 --pgsql-ignore-errors=\"PX000,58M01\" ./src/lua/oltp_read_write.lua run\n> sysbench --tables=10 --table-size=1000000 --db-ps-mode=auto --pgsql-password=* --time=72000 --db-driver=pgsql 
--pgsql-port=*\n> --threads=50 --thread-init-timeout=3000 --report-interval=5 --pgsql-user=* --pgsql-host=*\n> --pgsql-db=* --events=0 --pgsql-ignore-errors=\"PX000,58M01\" ./src/lua/oltp_update_index.lua run\n> \n\nThanks. Not sure I'll be able to do such long sysbench runs, though. Can\nyou try reproducing this with current REL_14_STABLE? I wonder if dumping\nthe WAL (using pg_waldump) might tell us more about what happened.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 22 Jul 2022 23:08:34 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?B?UmU6IOWbnuWkje+8mlJlOiBQQU5JQzogd3JvbmcgYnVmZmVyIHBhc3Nl?=\n =?UTF-8?Q?d_to_visibilitymap=5fclear?="
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 1:22 AM 王伟(学弈) <rogers.ww@alibaba-inc.com> wrote:\n> I recently find this problem while testing PG14 with sysbench.\n\nThe line numbers from your stack trace don't match up with\nREL_14_STABLE. Is this actually a fork of Postgres 14? (Oh, looks like\nit's an old beta release.)\n\n> Then I look through the emails from pgsql-hackers and find a previous similary bug which is https://www.postgresql.org/message-id/flat/2247102.1618008027%40sss.pgh.pa.us. But the bugfix commit(34f581c39e97e2ea237255cf75cccebccc02d477) is already patched to PG14.\n\nIt does seem possible that there is another similar bug somewhere --\nanother case where we were protected by the fact that VACUUM acquired\na full cleanup lock (not just an exclusive buffer lock) during its\nsecond heap pass. That changed in Postgres 14 (commit 8523492d4e). But\nI really don't know -- almost anything is possible.\n\n> I'm wondering whether there's another code path to lead this problem happened. Since, I take a deep dig via gdb which turns out that newbuffer is not euqal to buffer. In other words, the function RelationGetBufferForTuple must have been called just now.\n> Besides, why didn't we re-check the flag after RelationGetBufferForTuple was called?\n\nRecheck what flag? And at what point? It's not easy to figure this out\nfrom your stack trace, because of the line number issues.\n\nIt would also be helpful if you told us about the specific table\ninvolved. Though the important thing (the essential thing) is to test\ntoday's REL_14_STABLE. There have been *lots* of bug fixes since\nPostgres 14 beta2 was current.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 22 Jul 2022 14:49:08 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: wrong buffer passed to visibilitymap_clear"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> It would also be helpful if you told us about the specific table\n> involved. Though the important thing (the essential thing) is to test\n> today's REL_14_STABLE. There have been *lots* of bug fixes since\n> Postgres 14 beta2 was current.\n\nYeah. To be blunt, you're wasting your time and ours by testing\na year-old beta version. The odds are respectable that the problem\nis already fixed. Even if it's not, the version skew between\nwhat you are looking at and what we are looking at creates lots of\nconfusion.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 22 Jul 2022 17:55:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PANIC: wrong buffer passed to visibilitymap_clear"
}
] |
[
{
"msg_contents": "Hi\n\nnow we have lot of nice json related functions and I think so can be nice\nif plpgsql's statement FOREACH can directly support json type. It can save\nsome CPY cycles by reducing some transformations.\n\nMy idea is following -\n\nnew syntax\n\nFOREACH targetvar IN JSON ARRAY json array expr\nLOOP\n ...\n\nand\n\nFOREACH targetvar, keyvar IN JSON OBJECT json record expr\nLOOP\n ...\n\n\nWhat do you think about this proposal? Comments, notes?\n\nRegards\n\nPavel\n\nHinow we have lot of nice json related functions and I think so can be nice if plpgsql's statement FOREACH can directly support json type. It can save some CPY cycles by reducing some transformations.My idea is following - new syntaxFOREACH targetvar IN JSON ARRAY json array exprLOOP ...andFOREACH targetvar, keyvar IN JSON OBJECT json record exprLOOP ...What do you think about this proposal? Comments, notes?RegardsPavel",
"msg_date": "Fri, 22 Jul 2022 15:04:11 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "proposal - enhancing plpgsql's FOREACH statement for support json\n type"
}
] |
[
{
"msg_contents": "Greetings,\n\nJack Christensen the author of the go pgx driver had suggested Default\nresult formats should be settable per session · Discussion #5 ·\npostgresql-interfaces/enhancement-ideas (github.com)\n<https://github.com/postgresql-interfaces/enhancement-ideas/discussions/5>\n\nThe JDBC driver has a similar problem and defers switching to binary format\nuntil a statement has been reused 5 times; at which point we create a named\nprepared statement and incur the overhead of an extra round trip for the\nDESCRIBE statement. Because the extra round trip generally negates any\nperformance enhancements that receiving the data in binary format may\nprovide, we avoid using binary and receive everything in text format until\nwe are sure the extra trip is worth it.\n\nConnection pools further complicate the issue: We can't use named\nstatements with connection pools since there is no binding of the\nconnection to the client. As such in the JDBC driver we recommend turning\noff the ability to create a named statement and thus binary formats.\n\nAs a proof of concept I provide the attached patch which implements the\nability to specify which oids will be returned in binary format per\nsession.\n\nIE set format_binary='20,21,25' for instance.\n\nAfter which the specified oids will be output in binary format if there is\nno describe statement or even using simpleQuery.\n\nBoth the JDBC driver and the go driver can exploit this change with no\nchanges. I haven't confirmed if other drivers would work without changes.\n\nFurthermore jackc/postgresql_simple_protocol_binary_format_bench\n(github.com)\n<https://github.com/jackc/postgresql_simple_protocol_binary_format_bench>\nsuggests\nthat there is a considerable performance benefit. To quote 'At 100 rows the\ntext format takes 48% longer than the binary format.'\n\nRegards,\nDave Cramer",
"msg_date": "Fri, 22 Jul 2022 11:00:18 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Proposal to provide the facility to set binary format output for\n specific OID's per session"
},
{
"msg_contents": "At Fri, 22 Jul 2022 11:00:18 -0400, Dave Cramer <davecramer@gmail.com> wrote in \n> As a proof of concept I provide the attached patch which implements the\n> ability to specify which oids will be returned in binary format per\n> session.\n...\n> Both the JDBC driver and the go driver can exploit this change with no\n> changes. I haven't confirmed if other drivers would work without changes.\n\nI'm not sure about the needs of that, but binary exchange format is\nnot the one that can be turned on ignoring the peer's capability. If\nJDBC driver wants some types be sent in binary format, it seems to be\nable to be specified in bind message.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 25 Jul 2022 12:02:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to provide the facility to set binary format output\n for specific OID's per session"
},
{
"msg_contents": "Dave Cramer\n\n\nOn Sun, 24 Jul 2022 at 23:02, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Fri, 22 Jul 2022 11:00:18 -0400, Dave Cramer <davecramer@gmail.com>\n> wrote in\n> > As a proof of concept I provide the attached patch which implements the\n> > ability to specify which oids will be returned in binary format per\n> > session.\n> ...\n> > Both the JDBC driver and the go driver can exploit this change with no\n> > changes. I haven't confirmed if other drivers would work without changes.\n>\n> I'm not sure about the needs of that, but binary exchange format is\n> not the one that can be turned on ignoring the peer's capability.\n\nI'm not sure what this means. The client is specifying which types it wants\nin binary format.\n\n> If\n> JDBC driver wants some types be sent in binary format, it seems to be\n> able to be specified in bind message.\n>\nTo be clear it's not just the JDBC client; the original idea came from the\nauthor of go driver.\nAnd yes you can specify it in the bind message but you have to specify it\nin *every* bind message which pretty much negates any advantage you might\nget out of binary format due to the extra round trip.\n\nRegards,\nDave\n\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n\nDave CramerOn Sun, 24 Jul 2022 at 23:02, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:At Fri, 22 Jul 2022 11:00:18 -0400, Dave Cramer <davecramer@gmail.com> wrote in \n> As a proof of concept I provide the attached patch which implements the\n> ability to specify which oids will be returned in binary format per\n> session.\n...\n> Both the JDBC driver and the go driver can exploit this change with no\n> changes. I haven't confirmed if other drivers would work without changes.\n\nI'm not sure about the needs of that, but binary exchange format is\nnot the one that can be turned on ignoring the peer's capability.I'm not sure what this means. 
The client is specifying which types it wants in binary format. If\nJDBC driver wants some types be sent in binary format, it seems to be\nable to be specified in bind message.To be clear it's not just the JDBC client; the original idea came from the author of go driver.And yes you can specify it in the bind message but you have to specify it in *every* bind message which pretty much negates any advantage you might get out of binary format due to the extra round trip. Regards,Dave \n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 25 Jul 2022 05:57:26 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to provide the facility to set binary format output for\n specific OID's per session"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 4:57 AM Dave Cramer <davecramer@gmail.com> wrote:\n\n>\n> Dave Cramer\n>\n>\n> On Sun, 24 Jul 2022 at 23:02, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n>\n>> At Fri, 22 Jul 2022 11:00:18 -0400, Dave Cramer <davecramer@gmail.com>\n>> wrote in\n>> > As a proof of concept I provide the attached patch which implements the\n>> > ability to specify which oids will be returned in binary format per\n>> > session.\n>> ...\n>> > Both the JDBC driver and the go driver can exploit this change with no\n>> > changes. I haven't confirmed if other drivers would work without\n>> changes.\n>>\n>> I'm not sure about the needs of that, but binary exchange format is\n>> not the one that can be turned on ignoring the peer's capability.\n>\n> I'm not sure what this means. The client is specifying which types it\n> wants in binary format.\n>\n>> If\n>> JDBC driver wants some types be sent in binary format, it seems to be\n>> able to be specified in bind message.\n>>\n> To be clear it's not just the JDBC client; the original idea came from the\n> author of go driver.\n> And yes you can specify it in the bind message but you have to specify it\n> in *every* bind message which pretty much negates any advantage you might\n> get out of binary format due to the extra round trip.\n>\n> Regards,\n> Dave\n>\n>>\n>> regards.\n>>\n>> --\n>> Kyotaro Horiguchi\n>> NTT Open Source Software Center\n>>\n>\nThe advantage is to be able to use the binary format with only a single\nnetwork round trip in cases where prepared statements are not possible.\ne.g. when using PgBouncer. Using the simple protocol with this patch lets\nusers of pgx (the Go driver mentioned above) and PgBouncer use the binary\nformat. 
The performance gains can be significant especially with types such\nas timestamptz that are very slow to parse.\n\nAs far as only sending binary types that the client can understand, the\nclient driver would call `set format_binary` at the beginning of the\nsession.\n\nJack Christensen",
"msg_date": "Mon, 25 Jul 2022 09:07:25 -0500",
"msg_from": "Jack Christensen <jack@jackchristensen.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to provide the facility to set binary format output for\n specific OID's per session"
},
{
"msg_contents": "On 7/25/22 10:07, Jack Christensen wrote:\n> The advantage is to be able to use the binary format with only a single \n> network round trip in cases where prepared statements are not possible. \n> e.g. when using PgBouncer. Using the simple protocol with this patch \n> lets users of pgx (the Go driver mentioned above) and PgBouncer use the \n> binary format. The performance gains can be significant especially with \n> types such as timestamptz that are very slow to parse.\n> \n> As far as only sending binary types that the client can understand, the \n> client driver would call `set format_binary` at the beginning of the \n> session.\n\n+1 makes a lot of sense to me.\n\nDave please add this to the open commitfest (202209)\n\n-- \nJoe Conway\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 25 Jul 2022 13:30:43 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to provide the facility to set binary format output for\n specific OID's per session"
},
{
"msg_contents": "Idea here makes sense and I've seen this brought up repeatedly on the JDBC\nlists.\n\nDoes the driver need to be aware that this SET command was executed? I'm\nwondering what happens if an end user executes this with an OID the driver\ndoes not actually know how to handle.\n\n> + Oid *tmpOids = palloc(length+1);\n> ...\n> + tmpOids = repalloc(tmpOids, length+1);\n\nThese should be: sizeof(Oid) * (length + 1)\n\nAlso, I think you need to specify an explicit context via\nMemoryContextAlloc or the allocated memory will be in the default context\nand released at the end of the command.\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. | https://www.jackdb.com/\n\nIdea here makes sense and I've seen this brought up repeatedly on the JDBC lists.Does the driver need to be aware that this SET command was executed? I'm wondering what happens if an end user executes this with an OID the driver does not actually know how to handle.> +\tOid *tmpOids = palloc(length+1);> ...> +\t\t\ttmpOids = repalloc(tmpOids, length+1);These should be: sizeof(Oid) * (length + 1)Also, I think you need to specify an explicit context via MemoryContextAlloc or the allocated memory will be in the default context and released at the end of the command.Regards,-- Sehrope SarkuniFounder & CEO | JackDB, Inc. | https://www.jackdb.com/",
"msg_date": "Mon, 25 Jul 2022 17:22:24 -0400",
"msg_from": "Sehrope Sarkuni <sehrope@jackdb.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to provide the facility to set binary format output for\n specific OID's per session"
},
{
"msg_contents": "Hi Sehrope,\n\n\nOn Mon, 25 Jul 2022 at 17:22, Sehrope Sarkuni <sehrope@jackdb.com> wrote:\n\n> Idea here makes sense and I've seen this brought up repeatedly on the JDBC\n> lists.\n>\n> Does the driver need to be aware that this SET command was executed? I'm\n> wondering what happens if an end user executes this with an OID the driver\n> does not actually know how to handle.\n>\nI suppose there would be a failure to read the attribute correctly.\n\n>\n> > + Oid *tmpOids = palloc(length+1);\n> > ...\n> > + tmpOids = repalloc(tmpOids, length+1);\n>\n> These should be: sizeof(Oid) * (length + 1)\n>\n\nYes they should, thanks!\n\n>\n> Also, I think you need to specify an explicit context via\n> MemoryContextAlloc or the allocated memory will be in the default context\n> and released at the end of the command.\n>\n\nAlso good catch\n\nThanks,\n\nDave\n\n>\n\nHi Sehrope,On Mon, 25 Jul 2022 at 17:22, Sehrope Sarkuni <sehrope@jackdb.com> wrote:Idea here makes sense and I've seen this brought up repeatedly on the JDBC lists.Does the driver need to be aware that this SET command was executed? I'm wondering what happens if an end user executes this with an OID the driver does not actually know how to handle.I suppose there would be a failure to read the attribute correctly.> +\tOid *tmpOids = palloc(length+1);> ...> +\t\t\ttmpOids = repalloc(tmpOids, length+1);These should be: sizeof(Oid) * (length + 1)Yes they should, thanks! Also, I think you need to specify an explicit context via MemoryContextAlloc or the allocated memory will be in the default context and released at the end of the command.Also good catch Thanks,Dave",
"msg_date": "Mon, 25 Jul 2022 17:53:10 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to provide the facility to set binary format output for\n specific OID's per session"
},
{
"msg_contents": "Hi Sehrope,\n\n\n\n\nOn Mon, 25 Jul 2022 at 17:53, Dave Cramer <davecramer@gmail.com> wrote:\n\n> Hi Sehrope,\n>\n>\n> On Mon, 25 Jul 2022 at 17:22, Sehrope Sarkuni <sehrope@jackdb.com> wrote:\n>\n>> Idea here makes sense and I've seen this brought up repeatedly on the\n>> JDBC lists.\n>>\n>> Does the driver need to be aware that this SET command was executed? I'm\n>> wondering what happens if an end user executes this with an OID the driver\n>> does not actually know how to handle.\n>>\n> I suppose there would be a failure to read the attribute correctly.\n>\n>>\n>> > + Oid *tmpOids = palloc(length+1);\n>> > ...\n>> > + tmpOids = repalloc(tmpOids, length+1);\n>>\n>> These should be: sizeof(Oid) * (length + 1)\n>>\n>\n> Yes they should, thanks!\n>\n>>\n>> Also, I think you need to specify an explicit context via\n>> MemoryContextAlloc or the allocated memory will be in the default context\n>> and released at the end of the command.\n>>\n>\n> Also good catch\n>\n> Thanks,\n>\n\nAttached patch to correct these deficiencies.\n\nThanks again,\n\n\n>\n> Dave\n>\n>>",
"msg_date": "Tue, 26 Jul 2022 08:11:04 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to provide the facility to set binary format output for\n specific OID's per session"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 08:11:04AM -0400, Dave Cramer wrote:\n> Attached patch to correct these deficiencies.\n\nYou sent a patch to be applied on top of the first patch, but cfbot doesn't\nknow that, so it says the patch doesn't apply.\nhttp://cfbot.cputube.org/dave-cramer.html\n\nBTW, a previous discussion about this idea is here:\nhttps://www.postgresql.org/message-id/flat/40cbb35d-774f-23ed-3079-03f938aacdae@2ndquadrant.com\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 5 Aug 2022 16:51:39 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to provide the facility to set binary format output for\n specific OID's per session"
},
{
"msg_contents": "On Fri, 5 Aug 2022 at 17:51, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Tue, Jul 26, 2022 at 08:11:04AM -0400, Dave Cramer wrote:\n> > Attached patch to correct these deficiencies.\n>\n> You sent a patch to be applied on top of the first patch, but cfbot doesn't\n> know that, so it says the patch doesn't apply.\n> http://cfbot.cputube.org/dave-cramer.html\n>\n> BTW, a previous discussion about this idea is here:\n>\n> https://www.postgresql.org/message-id/flat/40cbb35d-774f-23ed-3079-03f938aacdae@2ndquadrant.com\n\n\nsquashed patch attached\n\nDave",
"msg_date": "Fri, 12 Aug 2022 08:48:33 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to provide the facility to set binary format output for\n specific OID's per session"
},
{
"msg_contents": "On Fri, Aug 12, 2022 at 5:48 PM Dave Cramer <davecramer@gmail.com> wrote:\n\n>\n>\n> On Fri, 5 Aug 2022 at 17:51, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n>> On Tue, Jul 26, 2022 at 08:11:04AM -0400, Dave Cramer wrote:\n>> > Attached patch to correct these deficiencies.\n>>\n>> You sent a patch to be applied on top of the first patch, but cfbot\n>> doesn't\n>> know that, so it says the patch doesn't apply.\n>> http://cfbot.cputube.org/dave-cramer.html\n>>\n>> BTW, a previous discussion about this idea is here:\n>>\n>> https://www.postgresql.org/message-id/flat/40cbb35d-774f-23ed-3079-03f938aacdae@2ndquadrant.com\n>\n>\n> squashed patch attached\n>\n> Dave\n>\nThe patch does not apply successfully; a rebase is required.\n\n=== applying patch ./0001-add-format_binary.patch\npatching file src/backend/tcop/postgres.c\nHunk #1 succeeded at 97 (offset -8 lines).\npatching file src/backend/tcop/pquery.c\npatching file src/backend/utils/init/globals.c\npatching file src/backend/utils/misc/guc.c\nHunk #1 succeeded at 144 (offset 1 line).\nHunk #2 succeeded at 244 with fuzz 2 (offset 1 line).\nHunk #3 succeeded at 4298 (offset -1 lines).\nHunk #4 FAILED at 12906.\n1 out of 4 hunks FAILED -- saving rejects to file\nsrc/backend/utils/misc/guc.c.rej\npatching file src/include/miscadmin.h\n\n\n\n\n-- \nIbrar Ahmed\n\nOn Fri, Aug 12, 2022 at 5:48 PM Dave Cramer <davecramer@gmail.com> wrote:On Fri, 5 Aug 2022 at 17:51, Justin Pryzby <pryzby@telsasoft.com> wrote:On Tue, Jul 26, 2022 at 08:11:04AM -0400, Dave Cramer wrote:\n> Attached patch to correct these deficiencies.\n\nYou sent a patch to be applied on top of the first patch, but cfbot doesn't\nknow that, so it says the patch doesn't apply.\nhttp://cfbot.cputube.org/dave-cramer.html\n\nBTW, a previous discussion about this idea is here:\nhttps://www.postgresql.org/message-id/flat/40cbb35d-774f-23ed-3079-03f938aacdae@2ndquadrant.comsquashed patch attachedDaveThe patch does not apply successfully; a rebase 
is required.=== applying patch ./0001-add-format_binary.patch\npatching file src/backend/tcop/postgres.c\nHunk #1 succeeded at 97 (offset -8 lines).\npatching file src/backend/tcop/pquery.c\npatching file src/backend/utils/init/globals.c\npatching file src/backend/utils/misc/guc.c\nHunk #1 succeeded at 144 (offset 1 line).\nHunk #2 succeeded at 244 with fuzz 2 (offset 1 line).\nHunk #3 succeeded at 4298 (offset -1 lines).\nHunk #4 FAILED at 12906.\n1 out of 4 hunks FAILED -- saving rejects to file src/backend/utils/misc/guc.c.rej\npatching file src/include/miscadmin.h -- Ibrar Ahmed",
"msg_date": "Tue, 6 Sep 2022 11:30:00 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to provide the facility to set binary format output for\n specific OID's per session"
},
{
"msg_contents": "On Tue, 6 Sept 2022 at 02:30, Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n\n>\n>\n> On Fri, Aug 12, 2022 at 5:48 PM Dave Cramer <davecramer@gmail.com> wrote:\n>\n>>\n>>\n>> On Fri, 5 Aug 2022 at 17:51, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>>\n>>> On Tue, Jul 26, 2022 at 08:11:04AM -0400, Dave Cramer wrote:\n>>> > Attached patch to correct these deficiencies.\n>>>\n>>> You sent a patch to be applied on top of the first patch, but cfbot\n>>> doesn't\n>>> know that, so it says the patch doesn't apply.\n>>> http://cfbot.cputube.org/dave-cramer.html\n>>>\n>>> BTW, a previous discussion about this idea is here:\n>>>\n>>> https://www.postgresql.org/message-id/flat/40cbb35d-774f-23ed-3079-03f938aacdae@2ndquadrant.com\n>>\n>>\n>> squashed patch attached\n>>\n>> Dave\n>>\n> The patch does not apply successfully; a rebase is required.\n>\n> === applying patch ./0001-add-format_binary.patch\n> patching file src/backend/tcop/postgres.c\n> Hunk #1 succeeded at 97 (offset -8 lines).\n> patching file src/backend/tcop/pquery.c\n> patching file src/backend/utils/init/globals.c\n> patching file src/backend/utils/misc/guc.c\n> Hunk #1 succeeded at 144 (offset 1 line).\n> Hunk #2 succeeded at 244 with fuzz 2 (offset 1 line).\n> Hunk #3 succeeded at 4298 (offset -1 lines).\n> Hunk #4 FAILED at 12906.\n> 1 out of 4 hunks FAILED -- saving rejects to file src/backend/utils/misc/guc.c.rej\n> patching file src/include/miscadmin.h\n>\n>\n>\n>\n\nThanks,\n\nNew rebased patch attached\n\nDave\n\n>",
"msg_date": "Tue, 6 Sep 2022 08:31:47 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to provide the facility to set binary format output for\n specific OID's per session"
},
{
"msg_contents": "Waiting on the author to do what ? I'm waiting for a review.",
"msg_date": "Wed, 12 Oct 2022 11:52:35 +0000",
"msg_from": "Dave Cramer <davecramer@postgres.rocks>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to provide the facility to set binary format output for\n specific OID's per session"
},
{
"msg_contents": "2022年9月6日(火) 21:32 Dave Cramer <davecramer@gmail.com>:\n>\n>\n>\n>\n> On Tue, 6 Sept 2022 at 02:30, Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>>\n>>\n>>\n>> On Fri, Aug 12, 2022 at 5:48 PM Dave Cramer <davecramer@gmail.com> wrote:\n>>>\n>>>\n>>>\n>>> On Fri, 5 Aug 2022 at 17:51, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>>>>\n>>>> On Tue, Jul 26, 2022 at 08:11:04AM -0400, Dave Cramer wrote:\n>>>> > Attached patch to correct these deficiencies.\n>>>>\n>>>> You sent a patch to be applied on top of the first patch, but cfbot doesn't\n>>>> know that, so it says the patch doesn't apply.\n>>>> http://cfbot.cputube.org/dave-cramer.html\n>>>>\n>>>> BTW, a previous discussion about this idea is here:\n>>>> https://www.postgresql.org/message-id/flat/40cbb35d-774f-23ed-3079-03f938aacdae@2ndquadrant.com\n>>>\n>>>\n>>> squashed patch attached\n>>>\n>>> Dave\n>>\n>> The patch does not apply successfully; a rebase is required.\n>>\n>> === applying patch ./0001-add-format_binary.patch\n>> patching file src/backend/tcop/postgres.c\n>> Hunk #1 succeeded at 97 (offset -8 lines).\n>> patching file src/backend/tcop/pquery.c\n>> patching file src/backend/utils/init/globals.c\n>> patching file src/backend/utils/misc/guc.c\n>> Hunk #1 succeeded at 144 (offset 1 line).\n>> Hunk #2 succeeded at 244 with fuzz 2 (offset 1 line).\n>> Hunk #3 succeeded at 4298 (offset -1 lines).\n>> Hunk #4 FAILED at 12906.\n>> 1 out of 4 hunks FAILED -- saving rejects to file src/backend/utils/misc/guc.c.rej\n>> patching file src/include/miscadmin.h\n>>\n>\n> Thanks,\n>\n> New rebased patch attached\n\nHi\n\ncfbot reports the patch no longer applies [1]. As CommitFest 2022-11 is\ncurrently underway, this would be an excellent time to update the patch again.\n\n[1] http://cfbot.cputube.org/patch_40_3777.log\n\nThanks\n\nIan Barwick\n\n\n",
"msg_date": "Fri, 4 Nov 2022 10:35:57 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to provide the facility to set binary format output for\n specific OID's per session"
},
{
"msg_contents": "Hi Ian,\n\nThanks, will do\nDave Cramer\n\n\nOn Thu, 3 Nov 2022 at 21:36, Ian Lawrence Barwick <barwick@gmail.com> wrote:\n\n> 2022年9月6日(火) 21:32 Dave Cramer <davecramer@gmail.com>:\n> >\n> >\n> >\n> >\n> > On Tue, 6 Sept 2022 at 02:30, Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> >>\n> >>\n> >>\n> >> On Fri, Aug 12, 2022 at 5:48 PM Dave Cramer <davecramer@gmail.com>\n> wrote:\n> >>>\n> >>>\n> >>>\n> >>> On Fri, 5 Aug 2022 at 17:51, Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n> >>>>\n> >>>> On Tue, Jul 26, 2022 at 08:11:04AM -0400, Dave Cramer wrote:\n> >>>> > Attached patch to correct these deficiencies.\n> >>>>\n> >>>> You sent a patch to be applied on top of the first patch, but cfbot\n> doesn't\n> >>>> know that, so it says the patch doesn't apply.\n> >>>> http://cfbot.cputube.org/dave-cramer.html\n> >>>>\n> >>>> BTW, a previous discussion about this idea is here:\n> >>>>\n> https://www.postgresql.org/message-id/flat/40cbb35d-774f-23ed-3079-03f938aacdae@2ndquadrant.com\n> >>>\n> >>>\n> >>> squashed patch attached\n> >>>\n> >>> Dave\n> >>\n> >> The patch does not apply successfully; a rebase is required.\n> >>\n> >> === applying patch ./0001-add-format_binary.patch\n> >> patching file src/backend/tcop/postgres.c\n> >> Hunk #1 succeeded at 97 (offset -8 lines).\n> >> patching file src/backend/tcop/pquery.c\n> >> patching file src/backend/utils/init/globals.c\n> >> patching file src/backend/utils/misc/guc.c\n> >> Hunk #1 succeeded at 144 (offset 1 line).\n> >> Hunk #2 succeeded at 244 with fuzz 2 (offset 1 line).\n> >> Hunk #3 succeeded at 4298 (offset -1 lines).\n> >> Hunk #4 FAILED at 12906.\n> >> 1 out of 4 hunks FAILED -- saving rejects to file\n> src/backend/utils/misc/guc.c.rej\n> >> patching file src/include/miscadmin.h\n> >>\n> >\n> > Thanks,\n> >\n> > New rebased patch attached\n>\n> Hi\n>\n> cfbot reports the patch no longer applies [1]. 
As CommitFest 2022-11 is\n> currently underway, this would be an excellent time to update the patch\n> again.\n>\n> [1] http://cfbot.cputube.org/patch_40_3777.log\n>\n> Thanks\n>\n> Ian Barwick\n>\n\nHi Ian,Thanks, will doDave CramerOn Thu, 3 Nov 2022 at 21:36, Ian Lawrence Barwick <barwick@gmail.com> wrote:2022年9月6日(火) 21:32 Dave Cramer <davecramer@gmail.com>:\n>\n>\n>\n>\n> On Tue, 6 Sept 2022 at 02:30, Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>>\n>>\n>>\n>> On Fri, Aug 12, 2022 at 5:48 PM Dave Cramer <davecramer@gmail.com> wrote:\n>>>\n>>>\n>>>\n>>> On Fri, 5 Aug 2022 at 17:51, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>>>>\n>>>> On Tue, Jul 26, 2022 at 08:11:04AM -0400, Dave Cramer wrote:\n>>>> > Attached patch to correct these deficiencies.\n>>>>\n>>>> You sent a patch to be applied on top of the first patch, but cfbot doesn't\n>>>> know that, so it says the patch doesn't apply.\n>>>> http://cfbot.cputube.org/dave-cramer.html\n>>>>\n>>>> BTW, a previous discussion about this idea is here:\n>>>> https://www.postgresql.org/message-id/flat/40cbb35d-774f-23ed-3079-03f938aacdae@2ndquadrant.com\n>>>\n>>>\n>>> squashed patch attached\n>>>\n>>> Dave\n>>\n>> The patch does not apply successfully; a rebase is required.\n>>\n>> === applying patch ./0001-add-format_binary.patch\n>> patching file src/backend/tcop/postgres.c\n>> Hunk #1 succeeded at 97 (offset -8 lines).\n>> patching file src/backend/tcop/pquery.c\n>> patching file src/backend/utils/init/globals.c\n>> patching file src/backend/utils/misc/guc.c\n>> Hunk #1 succeeded at 144 (offset 1 line).\n>> Hunk #2 succeeded at 244 with fuzz 2 (offset 1 line).\n>> Hunk #3 succeeded at 4298 (offset -1 lines).\n>> Hunk #4 FAILED at 12906.\n>> 1 out of 4 hunks FAILED -- saving rejects to file src/backend/utils/misc/guc.c.rej\n>> patching file src/include/miscadmin.h\n>>\n>\n> Thanks,\n>\n> New rebased patch attached\n\nHi\n\ncfbot reports the patch no longer applies [1]. 
As CommitFest 2022-11 is\ncurrently underway, this would be an excellent time to update the patch again.\n\n[1] http://cfbot.cputube.org/patch_40_3777.log\n\nThanks\n\nIan Barwick",
"msg_date": "Fri, 4 Nov 2022 05:35:52 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to provide the facility to set binary format output for\n specific OID's per session"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nPresently, if a role has privileges to SET a parameter, it is able to ALTER\nROLE/DATABASE SET that parameter, provided it otherwise has permission to\nalter that role/database. This includes cases where the role only has SET\nprivileges via the new pg_parameter_acl catalog. For example, if a role is\ngranted the ability to SET a PGC_SUSET GUC, it also has the ability to\nALTER ROLE/DATABASE SET that GUC. A couple of recent threads have alluded\nto the possibility of introducing a new set of privileges for ALTER\nROLE/DATABASE SET [0] [1], so I thought I'd start the discussion.\n\nFirst, is it necessary to introduce new privileges, or should the ability\nto SET a parameter be enough to ALTER ROLE/DATABASE SET it? AFAICT this is\nroughly the behavior before v15, but it simply disallowed non-superusers\nfrom setting certain parameters.\n\nSecond, if new privileges are required, what would they look like? My\nfirst instinct is to add GRANT ALTER ROLE ON PARAMETER and GRANT ALTER\nDATABASE ON PARAMETER.\n\nThoughts?\n\n[0] https://postgr.es/m/1732511.1658332210%40sss.pgh.pa.us\n[1] https://postgr.es/m/20220714225735.GB3173833%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 22 Jul 2022 13:04:22 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "privileges for ALTER ROLE/DATABASE SET"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> Presently, if a role has privileges to SET a parameter, it is able to ALTER\n> ROLE/DATABASE SET that parameter, provided it otherwise has permission to\n> alter that role/database. This includes cases where the role only has SET\n> privileges via the new pg_parameter_acl catalog. For example, if a role is\n> granted the ability to SET a PGC_SUSET GUC, it also has the ability to\n> ALTER ROLE/DATABASE SET that GUC. A couple of recent threads have alluded\n> to the possibility of introducing a new set of privileges for ALTER\n> ROLE/DATABASE SET [0] [1], so I thought I'd start the discussion.\n\n> First, is it necessary to introduce new privileges, or should the ability\n> to SET a parameter be enough to ALTER ROLE/DATABASE SET it?\n\nClearly, you need enough privilege to SET the parameter, and you need\nsome sort of management privilege on the target role or DB. There\nmight be room to discuss what that per-role/DB privilege needs to be.\nBut I'm very skeptical that we need to manage this at the level\nof the cross product of GUCs and roles/DBs, which is what you seem\nto be proposing. That seems awfully unwieldy, and is there really\nany use-case for it?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 22 Jul 2022 16:16:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: privileges for ALTER ROLE/DATABASE SET"
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 04:16:14PM -0400, Tom Lane wrote:\n> Clearly, you need enough privilege to SET the parameter, and you need\n> some sort of management privilege on the target role or DB. There\n> might be room to discuss what that per-role/DB privilege needs to be.\n> But I'm very skeptical that we need to manage this at the level\n> of the cross product of GUCs and roles/DBs, which is what you seem\n> to be proposing. That seems awfully unwieldy, and is there really\n> any use-case for it?\n\nActually, I think my vote is to do nothing, except for perhaps updating the\ndocumentation to indicate that SET privileges on a parameter are sufficient\nfor ALTER ROLE/DATABASE SET (given you have the other required privileges\nfor altering the role/database). I can't think of a use-case for allowing\na role to SET a GUC but not change the session default for another role.\nAnd I agree that requiring extra permissions for this feels excessive.\nMaybe someone else has a use-case in mind, though. I figured it would be\ngood to hash this out prior to 15.0, at which point changing the behavior\nwould become substantially more difficult.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 22 Jul 2022 15:25:16 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: privileges for ALTER ROLE/DATABASE SET"
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 03:25:16PM -0700, Nathan Bossart wrote:\n> On Fri, Jul 22, 2022 at 04:16:14PM -0400, Tom Lane wrote:\n> > Clearly, you need enough privilege to SET the parameter, and you need\n> > some sort of management privilege on the target role or DB. There\n> > might be room to discuss what that per-role/DB privilege needs to be.\n> > But I'm very skeptical that we need to manage this at the level\n> > of the cross product of GUCs and roles/DBs, which is what you seem\n> > to be proposing. That seems awfully unwieldy, and is there really\n> > any use-case for it?\n> \n> Actually, I think my vote is to do nothing, except for perhaps updating the\n> documentation to indicate that SET privileges on a parameter are sufficient\n> for ALTER ROLE/DATABASE SET (given you have the other required privileges\n> for altering the role/database). I can't think of a use-case for allowing\n> a role to SET a GUC but not change the session default for another role.\n\nIf I wanted to argue for a use case, I'd point to ALTER ROLE/DATABASE SET\nsurviving REVOKE of SET privileges. Revoking a SET privilege promptly affects\nfuture SET statements, but the REVOKE issuer would need to take the separate\nstep of clearing unwanted pg_db_role_setting entries. Even so, ...\n\n> And I agree that requiring extra permissions for this feels excessive.\n> Maybe someone else has a use-case in mind, though.\n\n... I, too, vote to change nothing. We have lots of cases where REVOKE\ndoesn't reverse actions taken while the user held the privilege being revoked.\nChanging that isn't worth much.\n\n\n",
"msg_date": "Sun, 28 Aug 2022 21:37:52 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: privileges for ALTER ROLE/DATABASE SET"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nThe previous attempt to add a predefined role for VACUUM and ANALYZE [0]\nresulted in the new pg_checkpoint role in v15. I'd like to try again to\nadd a new role (or multiple new roles) for VACUUM and ANALYZE.\n\nThe primary motivation for this is to continue chipping away at things that\nrequire special privileges or even superuser. VACUUM and ANALYZE typically\nrequire table ownership, database ownership, or superuser. And only\nsuperusers can VACUUM/ANALYZE shared catalogs. A predefined role for these\noperations would allow delegating such tasks (e.g., a nightly VACUUM\nscheduled with pg_cron) to a role with fewer privileges.\n\nThe attached patch adds a pg_vacuum_analyze role that allows VACUUM and\nANALYZE commands on all relations. I started by trying to introduce\nseparate pg_vacuum and pg_analyze roles, but that quickly became\ncomplicated because the VACUUM and ANALYZE code is intertwined. To\ninitiate the discussion, here's the simplest thing I could think of.\n\nAn alternate approach might be to allow using GRANT to manage these\nprivileges, as suggested in the previous thread [1].\n\nThoughts?\n\n[0] https://postgr.es/m/67a1d667e8ec228b5e07f232184c80348c5d93f4.camel%40j-davis.com\n[1] https://postgr.es/m/20211104224636.5qg6cfyjkw52rh4d@alap3.anarazel.de\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 22 Jul 2022 13:37:35 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 01:37:35PM -0700, Nathan Bossart wrote:\n> The attached patch adds a pg_vacuum_analyze role that allows VACUUM and\n> ANALYZE commands on all relations. I started by trying to introduce\n> separate pg_vacuum and pg_analyze roles, but that quickly became\n> complicated because the VACUUM and ANALYZE code is intertwined. To\n> initiate the discussion, here's the simplest thing I could think of.\n\nAnd here's the same patch, but with docs that actually build.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sun, 24 Jul 2022 21:39:31 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Sat, Jul 23, 2022 at 2:07 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> Hi hackers,\n>\n> The previous attempt to add a predefined role for VACUUM and ANALYZE [0]\n> resulted in the new pg_checkpoint role in v15. I'd like to try again to\n> add a new role (or multiple new roles) for VACUUM and ANALYZE.\n>\n> The primary motivation for this is to continue chipping away at things that\n> require special privileges or even superuser. VACUUM and ANALYZE typically\n> require table ownership, database ownership, or superuser. And only\n> superusers can VACUUM/ANALYZE shared catalogs. A predefined role for these\n> operations would allow delegating such tasks (e.g., a nightly VACUUM\n> scheduled with pg_cron) to a role with fewer privileges.\n\nThanks. I'm personally happy with more granular levels of control (as\nwe don't have to give full superuser access to just run a few commands\nor maintenance operations) for various postgres commands. The only\nconcern is that we might eventually end up with many predefined roles\n(perhaps one predefined role per command), spreading all around the\ncode base and it might be difficult for the users to digest all of the\nroles in. It will be great if we can have some sort of rules or\nmethods to define a separate role for a command.\n\n> The attached patch adds a pg_vacuum_analyze role that allows VACUUM and\n> ANALYZE commands on all relations. I started by trying to introduce\n> separate pg_vacuum and pg_analyze roles, but that quickly became\n> complicated because the VACUUM and ANALYZE code is intertwined. 
To\n> initiate the discussion, here's the simplest thing I could think of.\n\npg_vacuum_analyze, immediately, makes me think if we need to have a\npredefined role for CLUSTER command and maybe for other commands as\nwell such as EXECUTE, CALL, ALTER SYSTEM SET, LOAD, COPY and so on.\n\n> An alternate approach might be to allow using GRANT to manage these\n> privileges, as suggested in the previous thread [1].\n>\n> Thoughts?\n>\n> [0] https://postgr.es/m/67a1d667e8ec228b5e07f232184c80348c5d93f4.camel%40j-davis.com\n> [1] https://postgr.es/m/20211104224636.5qg6cfyjkw52rh4d@alap3.anarazel.de\n\nI think GRANT approach [1] is worth considering or at least discussing\nits pros and cons might give us a better idea as to why we need\nseparate predefined roles.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 25 Jul 2022 12:58:36 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 12:58:36PM +0530, Bharath Rupireddy wrote:\n> Thanks. I'm personally happy with more granular levels of control (as\n> we don't have to give full superuser access to just run a few commands\n> or maintenance operations) for various postgres commands. The only\n> concern is that we might eventually end up with many predefined roles\n> (perhaps one predefined role per command), spreading all around the\n> code base and it might be difficult for the users to digest all of the\n> roles in. It will be great if we can have some sort of rules or\n> methods to define a separate role for a command.\n\nYeah, in the future, I could see this growing to a couple dozen predefined\nroles. Given they are relatively inexpensive and there are already 12 of\nthem, I'm personally not too worried about the list becoming too unwieldy.\nAnother way to help users might be to create additional aggregate\npredefined roles (like pg_monitor) for common combinations.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 25 Jul 2022 09:40:49 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "At Mon, 25 Jul 2022 09:40:49 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> On Mon, Jul 25, 2022 at 12:58:36PM +0530, Bharath Rupireddy wrote:\n> > Thanks. I'm personally happy with more granular levels of control (as\n> > we don't have to give full superuser access to just run a few commands\n> > or maintenance operations) for various postgres commands. The only\n> > concern is that we might eventually end up with many predefined roles\n> > (perhaps one predefined role per command), spreading all around the\n> > code base and it might be difficult for the users to digest all of the\n> > roles in. It will be great if we can have some sort of rules or\n> > methods to define a separate role for a command.\n> \n> Yeah, in the future, I could see this growing to a couple dozen predefined\n> roles. Given they are relatively inexpensive and there are already 12 of\n> them, I'm personally not too worried about the list becoming too unwieldy.\n> Another way to help users might be to create additional aggregate\n> predefined roles (like pg_monitor) for common combinations.\n\nI agree to the necessity of that execution control, but I fear a\nlittle how many similar roles will come in future (but it doesn't seem\nso much?). I didn't think so when pg_checkpoint was introdueced,\nthough. That being said, since we're going to control\nmaintenance'ish-command execution via predefined roles so it is fine\nin that criteria.\n\nOne arguable point would be whether we will need to put restriction\nthe target relations that Bob can vacuum/analyze. 
If we need that, the\nnew predeefined role is not sufficient then need a new syntax for that.\n\nGRANT EXECUTION COMMAND VACUUM ON TABLE rel1 TO bob.\nGRANT EXECUTION COMMAND VACUUM ON TABLES OWNED BY alice TO bob.\nGRANT EXECUTION COMMAND VACUUM ON ALL TABLES OWNED BY alice TO bob.\n\nHowever, one problem of these syntaxes is they cannot do something to\nfuture relations.\n\nSo, considering that aspect, I would finally agree to the proposal\nhere. (In short +1 for what the patch does.)\n\n\nAbout the patch, it seems fine as the whole except the change in error\nmessages.\n\n- (errmsg(\"skipping \\\"%s\\\" --- only superuser can analyze it\",\n+ (errmsg(\"skipping \\\"%s\\\" --- only superusers and roles with \"\n+ \"privileges of pg_vacuum_analyze can analyze it\",\n\nThe message looks a bit too verbose or lengty especially when many\nrelations are rejected.\n\nWARNING: skipping \"pg_statistic\" --- only superusers, roles with privileges of pg_vacuum_analyze, or the database owner can vacuum it\nWARNING: skipping \"pg_type\" --- only superusers, roles with privileges of pg_vacuum_analyze, or the database owner can vacuum it\n<snip many lines>\nWARNING: skipping \"user_mappings\" --- only table or database owner can vacuum it\nVACUUM\n\nCouldn't we simplify the message? For example \"skipping \\\"%s\\\" ---\ninsufficient priviledges\". We could add a detailed (not a DETAIL:)\nmessage at the end to cover all of the skipped relations, but it may\nbe too much.\n\n\nBy the way the patch splits an error message into several parts but\nthat later makes it harder to search for the message in the tree. *I*\nwould suggest not splitting message strings.\n\n\n# I refrain from suggesing removing parens surrounding errmsg() :p\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 26 Jul 2022 10:47:12 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "At Tue, 26 Jul 2022 10:47:12 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> WARNING: skipping \"pg_statistic\" --- only superusers, roles with privileges of pg_vacuum_analyze, or the database owner can vacuum it\n> WARNING: skipping \"pg_type\" --- only superusers, roles with privileges of pg_vacuum_analyze, or the database owner can vacuum it\n> <snip many lines>\n\n> WARNING: skipping \"user_mappings\" --- only table or database owner can vacuum it\n\nBy the way, the last error above dissapears by granting\npg_vacuum_analyze to the role. Is there a reason the message is left\nalone? And If I specified the view directly, I would get the\nfollowing message.\n\npostgres=> vacuum information_schema.user_mappings;\nWARNING: skipping \"user_mappings\" --- cannot vacuum non-tables or special system tables\n\nSo, \"VACUUM;\" does something wrong? Or is it the designed behavior?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 26 Jul 2022 10:58:14 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 9:47 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> One arguable point would be whether we will need to put restriction\n> the target relations that Bob can vacuum/analyze.\n\nYeah. pg_checkpoint makes sense because you can either CHECKPOINT or\nyou can't. But for a command with a target, you really ought to have a\npermission on the object, not just a general permission. On the other\nhand, we do have things like pg_read_all_tables, so we could have\npg_vacuum_all_tables too. Still, it seems somewhat appealing to give\npeople fine-grained control over this, rather than just \"on\" or \"off\".\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 26 Jul 2022 13:37:37 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 10:37 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Jul 25, 2022 at 9:47 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > One arguable point would be whether we will need to put restriction\n> > the target relations that Bob can vacuum/analyze.\n>\n\n\n> But for a command with a target, you really ought to have a\n> permission on the object, not just a general permission. On the other\n> hand, we do have things like pg_read_all_tables, so we could have\n> pg_vacuum_all_tables too.\n\n\nI'm still more likely to create a specific security definer function owned\nby the relevant table owner to give out ANALYZE (and maybe VACUUM)\npermission to ETL-performing roles.\n\nStill, it seems somewhat appealing to give\n> people fine-grained control over this, rather than just \"on\" or \"off\".\n>\n>\nAppealing enough to consume a couple of permission bits?\n\nhttps://www.postgresql.org/message-id/CAKFQuwZ6dhjTFV7Bwmehe1N3%3Dk484y4mM22zuYjVEU2dq9V1aQ%40mail.gmail.com\n\nDavid J.\n\nOn Tue, Jul 26, 2022 at 10:37 AM Robert Haas <robertmhaas@gmail.com> wrote:On Mon, Jul 25, 2022 at 9:47 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> One arguable point would be whether we will need to put restriction\n> the target relations that Bob can vacuum/analyze. But for a command with a target, you really ought to have a\npermission on the object, not just a general permission. 
On the other\nhand, we do have things like pg_read_all_tables, so we could have\npg_vacuum_all_tables too.I'm still more likely to create a specific security definer function owned by the relevant table owner to give out ANALYZE (and maybe VACUUM) permission to ETL-performing roles.Still, it seems somewhat appealing to give\npeople fine-grained control over this, rather than just \"on\" or \"off\".Appealing enough to consume a couple of permission bits?https://www.postgresql.org/message-id/CAKFQuwZ6dhjTFV7Bwmehe1N3%3Dk484y4mM22zuYjVEU2dq9V1aQ%40mail.gmail.comDavid J.",
"msg_date": "Tue, 26 Jul 2022 10:50:32 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 1:50 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>> Still, it seems somewhat appealing to give\n>> people fine-grained control over this, rather than just \"on\" or \"off\".\n> Appealing enough to consume a couple of permission bits?\n> https://www.postgresql.org/message-id/CAKFQuwZ6dhjTFV7Bwmehe1N3%3Dk484y4mM22zuYjVEU2dq9V1aQ%40mail.gmail.com\n\nI think we're down to 0 remaining now, so it'd be hard to justify\nconsuming 2 of 0 remaining bits. However, I maintain that the solution\nto this is either (1) change the aclitem representation to get another\n32 bits or (2) invent a different system for less-commonly used\npermission bits. Checking permissions for SELECT or UPDATE has to be\nreally fast, because most queries will need to do that sort of thing.\nIf we represented VACUUM or ANALYZE in some other way in the catalogs\nthat was more scalable but less efficient, it wouldn't be a big deal\n(although there's the issue of code duplication to consider).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 26 Jul 2022 13:54:38 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "At Tue, 26 Jul 2022 13:54:38 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Tue, Jul 26, 2022 at 1:50 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >> Still, it seems somewhat appealing to give\n> >> people fine-grained control over this, rather than just \"on\" or \"off\".\n> > Appealing enough to consume a couple of permission bits?\n> > https://www.postgresql.org/message-id/CAKFQuwZ6dhjTFV7Bwmehe1N3%3Dk484y4mM22zuYjVEU2dq9V1aQ%40mail.gmail.com\n> \n> I think we're down to 0 remaining now, so it'd be hard to justify\n> consuming 2 of 0 remaining bits. However, I maintain that the solution\n> to this is either (1) change the aclitem representation to get another\n> 32 bits or (2) invent a different system for less-commonly used\n> permission bits. Checking permissions for SELECT or UPDATE has to be\n> really fast, because most queries will need to do that sort of thing.\n> If we represented VACUUM or ANALYZE in some other way in the catalogs\n> that was more scalable but less efficient, it wouldn't be a big deal\n> (although there's the issue of code duplication to consider).\n\nI guess that we can use the last bit for ACL_SLOW_PATH or something\nlike. Furthermore we could move some existing ACL modeds to that slow\npath to vacate some fast-ACL bits.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 27 Jul 2022 14:24:29 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 01:54:38PM -0400, Robert Haas wrote:\n> I think we're down to 0 remaining now, so it'd be hard to justify\n> consuming 2 of 0 remaining bits.\n\nAFAICT there are 2 remaining. N_ACL_RIGHTS is only 14.\n\n> However, I maintain that the solution\n> to this is either (1) change the aclitem representation to get another\n> 32 bits or (2) invent a different system for less-commonly used\n> permission bits. Checking permissions for SELECT or UPDATE has to be\n> really fast, because most queries will need to do that sort of thing.\n> If we represented VACUUM or ANALYZE in some other way in the catalogs\n> that was more scalable but less efficient, it wouldn't be a big deal\n> (although there's the issue of code duplication to consider).\n\nPerhaps we could add something like a relacl_ext column to affected\ncatalogs with many more than 32 privilege bits. However, if we actually do\nhave 2 bits remaining, we wouldn't need to do that work now unless someone\nelse uses them first. That being said, it's certainly worth thinking about\nthe future of this stuff.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 28 Jul 2022 15:26:02 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Tue, Jul 26, 2022 at 1:50 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >> Still, it seems somewhat appealing to give\n> >> people fine-grained control over this, rather than just \"on\" or \"off\".\n> > Appealing enough to consume a couple of permission bits?\n> > https://www.postgresql.org/message-id/CAKFQuwZ6dhjTFV7Bwmehe1N3%3Dk484y4mM22zuYjVEU2dq9V1aQ%40mail.gmail.com\n> \n> I think we're down to 0 remaining now, so it'd be hard to justify\n> consuming 2 of 0 remaining bits. However, I maintain that the solution\n> to this is either (1) change the aclitem representation to get another\n> 32 bits or (2) invent a different system for less-commonly used\n> permission bits. Checking permissions for SELECT or UPDATE has to be\n> really fast, because most queries will need to do that sort of thing.\n> If we represented VACUUM or ANALYZE in some other way in the catalogs\n> that was more scalable but less efficient, it wouldn't be a big deal\n> (although there's the issue of code duplication to consider).\n\nI've long felt that we should redefine the way the ACLs work to have a\ndistinct set of bits for each object type. We don't need to support a\nCONNECT bit on a table, yet we do today and we expend quite a few bits\nin that way. Having that handled on a per-object-type basis instead\nwould allow us to get quite a bit more mileage out of the existing 32bit\nfield before having to introduce more complicated storage methods like\nusing a bit to tell us to go look up more ACLs somewhere else.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 23 Aug 2022 19:46:47 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "Here is a first attempt at allowing users to grant VACUUM or ANALYZE\nper-relation. Overall, this seems pretty straightforward. I needed to\nadjust the permissions logic for VACUUM/ANALYZE a bit, which causes some\nextra WARNING messages for VACUUM (ANALYZE) in some cases, but this didn't\nseem particularly worrisome. It may be desirable to allow granting ANALYZE\non specific columns or to allow granting VACUUM/ANALYZE at the schema or\ndatabase level, but that is left as a future exercise.\n\nOn Tue, Aug 23, 2022 at 07:46:47PM -0400, Stephen Frost wrote:\n> I've long felt that we should redefine the way the ACLs work to have a\n> distinct set of bits for each object type. We don't need to support a\n> CONNECT bit on a table, yet we do today and we expend quite a few bits\n> in that way. Having that handled on a per-object-type basis instead\n> would allow us to get quite a bit more mileage out of the existing 32bit\n> field before having to introduce more complicated storage methods like\n> using a bit to tell us to go look up more ACLs somewhere else.\n\nThere are 2 bits remaining at the moment, so I didn't redesign the ACL\nsystem in the attached patch. However, I did some research on a couple\noptions. Using a distinct set of bits for each catalog table should free\nup a handful of bits, which should indeed kick the can down the road a\nlittle. Another easy option is to simply make AclMode a uint64, which\nwould immediately free up another 16 privilege bits. I was able to get\nthis approach building and passing tests in a few minutes, but there might\nbe performance/space concerns.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 5 Sep 2022 11:56:30 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Mon, Sep 5, 2022 at 2:56 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> There are 2 bits remaining at the moment, so I didn't redesign the ACL\n> system in the attached patch. However, I did some research on a couple\n> options. Using a distinct set of bits for each catalog table should free\n> up a handful of bits, which should indeed kick the can down the road a\n> little. Another easy option is to simply make AclMode a uint64, which\n> would immediately free up another 16 privilege bits. I was able to get\n> this approach building and passing tests in a few minutes, but there might\n> be performance/space concerns.\n\nI believe Tom has expressed such concerns in the past, but it is not\nclear to me that they are well-founded. I don't think we have much\ncode that manipulates large numbers of aclitems, so I can't quite see\nwhere the larger size would be an issue. There may well be some\nplaces, so I'm not saying that Tom or anyone else with concerns is\nwrong, but I'm just having a hard time thinking of where it would be a\nreal issue.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 6 Sep 2022 08:26:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "Greetings,\n\n* Nathan Bossart (nathandbossart@gmail.com) wrote:\n> On Tue, Aug 23, 2022 at 07:46:47PM -0400, Stephen Frost wrote:\n> > I've long felt that we should redefine the way the ACLs work to have a\n> > distinct set of bits for each object type. We don't need to support a\n> > CONNECT bit on a table, yet we do today and we expend quite a few bits\n> > in that way. Having that handled on a per-object-type basis instead\n> > would allow us to get quite a bit more mileage out of the existing 32bit\n> > field before having to introduce more complicated storage methods like\n> > using a bit to tell us to go look up more ACLs somewhere else.\n> \n> There are 2 bits remaining at the moment, so I didn't redesign the ACL\n> system in the attached patch. However, I did some research on a couple\n> options. Using a distinct set of bits for each catalog table should free\n> up a handful of bits, which should indeed kick the can down the road a\n> little. Another easy option is to simply make AclMode a uint64, which\n> would immediately free up another 16 privilege bits. I was able to get\n> this approach building and passing tests in a few minutes, but there might\n> be performance/space concerns.\n\nConsidering our burn rate of ACL bits is really rather slow (2 this\nyear, but prior to that was TRUNCATE in 2008 and CONNECT in 2006), I'd\nargue that moving away from the current one-size-fits-all situation\nwould kick the can down the road more than just 'a little' and wouldn't\nhave any performance or space concerns. Once we actually get to the\npoint where we've burned through all of those after the next few decades\nthen we can move to a uint64 or something else more complicated,\nperhaps.\n\nIf we were to make the specific bits depend on the object type as I'm\nsuggesting, then we'd have 8 bits used for relations (10 with the vacuum\nand analyze bits), leaving us with 6 remaining inside the existing\nuint32, or more bits available than we've ever used since the original\nimplementation from what I can tell, or at least 15+ years. That seems\nlike pretty darn good future-proofing without a lot of complication or\nany change in physical size. We would also be able to get rid of the\nquestion of \"well, is it more valuable to add the ability to GRANT\nTRUNCATE on a relation, or GRANT CONNECT on databases\" or other rather\nodd debates between ultimately very different things.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 6 Sep 2022 11:11:51 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Tue, Sep 6, 2022 at 11:11 AM Stephen Frost <sfrost@snowman.net> wrote:\n> Considering our burn rate of ACL bits is really rather slow (2 this\n> year, but prior to that was TRUNCATE in 2008 and CONNECT in 2006), I'd\n> argue that moving away from the current one-size-fits-all situation\n> would kick the can down the road more than just 'a little' and wouldn't\n> have any performance or space concerns. Once we actually get to the\n> point where we've burned through all of those after the next few decades\n> then we can move to a uint64 or something else more complicated,\n> perhaps.\n\nOur burn rate is slow because there's been a lot of pushback - mostly\nfrom Tom - about consuming the remaining bits. It's not because people\nhaven't had ideas about how to use them up.\n\n> If we were to make the specific bits depend on the object type as I'm\n> suggesting, then we'd have 8 bits used for relations (10 with the vacuum\n> and analyze bits), leaving us with 6 remaining inside the existing\n> uint32, or more bits available than we've ever used since the original\n> implementation from what I can tell, or at least 15+ years. That seems\n> like pretty darn good future-proofing without a lot of complication or\n> any change in physical size. We would also be able to get rid of the\n> question of \"well, is it more valuable to add the ability to GRANT\n> TRUNCATE on a relation, or GRANT CONNECT on databases\" or other rather\n> odd debates between ultimately very different things.\n\nI mostly agree with this. I don't think it's entirely clear how we\nshould try to get more bits going forward, but it's clear that we\ncannot just forever hold our breath and refuse to find any more bits.\nAnd of the possible ways of doing it, this seems like the one with the\nlowest impact, so I think it likely makes sense to do this one first.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 6 Sep 2022 11:24:18 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Tue, Sep 06, 2022 at 11:24:18AM -0400, Robert Haas wrote:\n> On Tue, Sep 6, 2022 at 11:11 AM Stephen Frost <sfrost@snowman.net> wrote:\n>> If we were to make the specific bits depend on the object type as I'm\n>> suggesting, then we'd have 8 bits used for relations (10 with the vacuum\n>> and analyze bits), leaving us with 6 remaining inside the existing\n>> uint32, or more bits available than we've ever used since the original\n>> implementation from what I can tell, or at least 15+ years. That seems\n>> like pretty darn good future-proofing without a lot of complication or\n>> any change in physical size. We would also be able to get rid of the\n>> question of \"well, is it more valuable to add the ability to GRANT\n>> TRUNCATE on a relation, or GRANT CONNECT on databases\" or other rather\n>> odd debates between ultimately very different things.\n> \n> I mostly agree with this. I don't think it's entirely clear how we\n> should try to get more bits going forward, but it's clear that we\n> cannot just forever hold our breath and refuse to find any more bits.\n> And of the possible ways of doing it, this seems like the one with the\n> lowest impact, so I think it likely makes sense to do this one first.\n\n+1. My earlier note wasn't intended to suggest that one approach was\nbetter than the other, merely that there are a couple of options to choose\nfrom once we run out of bits. I don't think this work needs to be tied to\nthe VACUUM/ANALYZE stuff, but I am interested in it and hope to take it on\nat some point.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 6 Sep 2022 08:54:32 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Mon, Sep 05, 2022 at 11:56:30AM -0700, Nathan Bossart wrote:\n> Here is a first attempt at allowing users to grant VACUUM or ANALYZE\n> per-relation. Overall, this seems pretty straightforward. I needed to\n> adjust the permissions logic for VACUUM/ANALYZE a bit, which causes some\n> extra WARNING messages for VACUUM (ANALYZE) in some cases, but this didn't\n> seem particularly worrisome. It may be desirable to allow granting ANALYZE\n> on specific columns or to allow granting VACUUM/ANALYZE at the schema or\n> database level, but that is left as a future exercise.\n\nHere is a new patch set with some follow-up patches to implement $SUBJECT.\n0001 is the same as v3. 0002 simplifies some WARNING messages as suggested\nupthread [0]. 0003 adds the new pg_vacuum_all_tables and\npg_analyze_all_tables predefined roles. Instead of adjusting the\npermissions logic in vacuum.c, I modified pg_class_aclmask_ext() to return\nthe ACL_VACUUM and/or ACL_ANALYZE bits as appropriate.\n\n[0] https://postgr.es/m/20220726.104712.912995710251150228.horikyota.ntt%40gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 6 Sep 2022 10:47:49 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "Greetings,\n\n* Nathan Bossart (nathandbossart@gmail.com) wrote:\n> On Tue, Sep 06, 2022 at 11:24:18AM -0400, Robert Haas wrote:\n> > On Tue, Sep 6, 2022 at 11:11 AM Stephen Frost <sfrost@snowman.net> wrote:\n> >> If we were to make the specific bits depend on the object type as I'm\n> >> suggesting, then we'd have 8 bits used for relations (10 with the vacuum\n> >> and analyze bits), leaving us with 6 remaining inside the existing\n> >> uint32, or more bits available than we've ever used since the original\n> >> implementation from what I can tell, or at least 15+ years. That seems\n> >> like pretty darn good future-proofing without a lot of complication or\n> >> any change in physical size. We would also be able to get rid of the\n> >> question of \"well, is it more valuable to add the ability to GRANT\n> >> TRUNCATE on a relation, or GRANT CONNECT on databases\" or other rather\n> >> odd debates between ultimately very different things.\n> > \n> > I mostly agree with this. I don't think it's entirely clear how we\n> > should try to get more bits going forward, but it's clear that we\n> > cannot just forever hold our breath and refuse to find any more bits.\n> > And of the possible ways of doing it, this seems like the one with the\n> > lowest impact, so I think it likely makes sense to do this one first.\n> \n> +1. My earlier note wasn't intended to suggest that one approach was\n> better than the other, merely that there are a couple of options to choose\n> from once we run out of bits. I don't think this work needs to be tied to\n> the VACUUM/ANALYZE stuff, but I am interested in it and hope to take it on\n> at some point.\n\nI disagree that we should put the onus for addressing this on the next\nperson who wants to add bits and just willfully use up the last of them\nright now for what strikes me, at least, as a relatively marginal use\ncase. If we had plenty of bits then, sure, let's use a couple of them for\nthis, but that isn't currently the case. If you want this feature then\nthe onus is on you to do the legwork to make it such that we have plenty\nof bits.\n\nMy 2c anyway.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 7 Sep 2022 17:13:44 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "\n\n> On Jul 22, 2022, at 1:37 PM, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> \n> The primary motivation for this is to continue chipping away at things that\n> require special privileges or even superuser. VACUUM and ANALYZE typically\n> require table ownership, database ownership, or superuser. And only\n> superusers can VACUUM/ANALYZE shared catalogs. A predefined role for these\n> operations would allow delegating such tasks (e.g., a nightly VACUUM\n> scheduled with pg_cron) to a role with fewer privileges.\n> \n> The attached patch adds a pg_vacuum_analyze role that allows VACUUM and\n> ANALYZE commands on all relations.\n\nGranting membership in a role that can VACUUM and ANALYZE any relation seems to grant a subset of a more general category, the ability to perform modifying administrative operations on a relation without necessarily being able to read or modify the logical contents of that relation. That more general category would seem to also include CLUSTER, REINDEX, REFRESH MATERIALIZED VIEW and more broadly ALTER SUBSCRIPTION ... REFRESH PUBLICATION and ALTER DATABASE ... REFRESH COLLATION VERSION. These latter operations may be less critical to database maintenance than is VACUUM, but arguably ANALYZE isn't as critical as is VACUUM, either.\n\nAssuming for the sake of argument that we should create a role something like you propose, can you explain why we should draw the line around just VACUUM and ANALYZE? I am not arguing for including these other commands, but don't want to regret having drawn the line in the wrong place when later we decide to add more roles like the one you are proposing.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 7 Sep 2022 14:53:57 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Wed, Sep 07, 2022 at 05:13:44PM -0400, Stephen Frost wrote:\n> I disagree that we should put the onus for addressing this on the next\n> person who wants to add bits and just willfully use up the last of them\n> right now for what strikes me, at least, as a relatively marginal use\n> case. If we had plenty of bits then, sure, let's use a couple of for\n> this, but that isn't currently the case. If you want this feature then\n> the onus is on you to do the legwork to make it such that we have plenty\n> of bits.\n\nFWIW what I really want is the new predefined roles. I received feedback\nupthread that it might also make sense to give people more fine-grained\ncontrol, so I implemented that. And now you're telling me that I need to\nredesign the ACL system. :)\n\nI'm happy to give that project a try given there is agreement on the\ndirection and general interest in the patches. From the previous\ndiscussion, it sounds like we want to first use a distinct set of bits for\neach catalog table. Is that what I should proceed with?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 7 Sep 2022 15:11:03 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Wed, Sep 07, 2022 at 02:53:57PM -0700, Mark Dilger wrote:\n> Assuming for the sake of argument that we should create a role something like you propose, can you explain why we should draw the line around just VACUUM and ANALYZE? I am not arguing for including these other commands, but don't want to regret having drawn the line in the wrong place when later we decide to add more roles like the one you are proposing.\n\nThere was some previous discussion around adding a pg_maintenance role that\ncould perform all of these commands [0]. I didn't intend to draw a line\naround VACUUM and ANALYZE. Those are just the commands I started with.\nIf/when there are many of these roles, it might make sense to create a\npg_maintenance role that is a member of pg_vacuum_all_tables,\npg_analyze_all_tables, etc.\n\n[0] https://postgr.es/m/67a1d667e8ec228b5e07f232184c80348c5d93f4.camel%40j-davis.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 7 Sep 2022 15:21:29 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "\n\n> On Sep 7, 2022, at 3:21 PM, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> \n> There was some previous discussion around adding a pg_maintenance role that\n> could perform all of these commands [0]. I didn't intend to draw a line\n> around VACUUM and ANALYZE. Those are just the commands I started with.\n> If/when there are many of these roles, it might make sense to create a\n> pg_maintenance role that is a member of pg_vacuum_all_tables,\n> pg_analyze_all_tables, etc.\n\nThank you, that sounds fair enough.\n\nIt seems you've been pushed to make the patch-set more complicated, and now we're debating privilege bits, which seems pretty far off topic.\n\nI may be preaching to the choir here, but wouldn't it work to commit new roles pg_vacuum_all_tables and pg_analyze_all_tables with checks like you had in the original patch of this thread? That wouldn't block the later addition of finer grained controls allowing users to grant VACUUM or ANALYZE per-relation, would it? Something like what Stephen is requesting, and what you did with new privilege bits for VACUUM and ANALYZE could still be added, unless I'm missing something.\n\nI'd hate to see your patch set get further delayed by things that aren't logically prerequisites. The conversation upthread was useful to determine that they aren't prerequisites, but if anybody wants to explain to me why they are....\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 7 Sep 2022 15:50:13 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "Greetings,\n\nOn Wed, Sep 7, 2022 at 18:11 Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> On Wed, Sep 07, 2022 at 05:13:44PM -0400, Stephen Frost wrote:\n> > I disagree that we should put the onus for addressing this on the next\n> > person who wants to add bits and just willfully use up the last of them\n> > right now for what strikes me, at least, as a relatively marginal use\n> > case. If we had plenty of bits then, sure, let's use a couple of for\n> > this, but that isn't currently the case. If you want this feature then\n> > the onus is on you to do the legwork to make it such that we have plenty\n> > of bits.\n>\n> FWIW what I really want is the new predefined roles. I received feedback\n> upthread that it might also make sense to give people more fine-grained\n> control, so I implemented that. And now you're telling me that I need to\n> redesign the ACL system. :)\n\n\nCalling this a redesign is over-stating things, imv … and I’d much rather\nhave the per-relation granularity than predefined roles for this, so there\nis that to consider too, perhaps.\n\nI'm happy to give that project a try given there is agreement on the\n> direction and general interest in the patches. From the previous\n> discussion, it sounds like we want to first use a distinct set of bits for\n> each catalog table. Is that what I should proceed with?\n\n\nYes, that seems to be the consensus among those involved in this thread\nthus far. Basically, I imagine this involves passing around the object\ntype along with the acl info and then using that to check the bits and\nsuch. I doubt it’s worth inventing a new structure to combine the two …\nbut that’s just gut feeling and you may find it does make sense to once you\nget into it.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 7 Sep 2022 19:09:05 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "\n\n> On Sep 7, 2022, at 4:09 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> Calling this a redesign is over-stating things, imv … and I’d much rather have the per-relation granularity than predefined roles for this, so there is that to consider too, perhaps.\n\nOk, now I'm a bit lost. If I want to use Nathan's feature to create a role to vacuum and analyze my database on a regular basis, how does per-relation granularity help me? If somebody creates a new table and doesn't grant those privileges to the role, doesn't that break the usage case? To me, per-relation granularity sounds useful, but orthogonal, to this feature.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 7 Sep 2022 16:15:23 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Wed, Sep 07, 2022 at 07:09:05PM -0400, Stephen Frost wrote:\n> Yes, that seems to be the consensus among those involved in this thread\n> thus far. Basically, I imagine this involves passing around the object\n> type along with the acl info and then using that to check the bits and\n> such. I doubt it’s worth inventing a new structure to combine the two …\n> but that’s just gut feeling and you may find it does make sense to once you\n> get into it.\n\nI've done some preliminary research for this approach, and I've found some\ninteresting challenges.\n\n* aclparse() will need to handle ambiguous strings. For example, USAGE is\navailable for most catalogs, so which ACL bit should be chosen? One\npossible solution would be to make sure the common privilege types always\nuse the same bit.\n\n* When comparing ACLs, there probably should be some way to differentiate\noverloaded privilege bits, else ACLs for different catalogs that have\nnothing in common could evaluate as equal. Such comparisons may be\nunlikely, but this still doesn't strike me as acceptable.\n\n* aclitemout() needs some way to determine what privilege an ACL bit\nactually refers to. I can think of a couple of ways to do this: 1) we\ncould create different aclitem types for each catalog (or maybe just one\nfor pg_class and another for everything else), or 2) we could include the\ntype in AclItem, perhaps by adding a uint8 field. I noticed that Tom\ncalled out this particular challenge back in 2018 [0].\n\nAm I overlooking an easier way to handle these things? From my admittedly\nbrief analysis thus far, I'm worried this could devolve into something\noverly complex or magical, especially when simply moving to a uint64 might\nbe a reasonable way to significantly extend AclItem's life span. Robert\nsuggested upthread that Tom might have concerns with adding another 32 bits\nto AclItem, but the archives indicate he has previously proposed exactly\nthat [1]. Of course, I don't know how everyone feels about the uint64 idea\ntoday, but ISTM like it might be the path of least resistance.\n\nSo, here is a new patch set. 0001 expands AclMode to a uint64. 0002\nsimplifies some WARNING messages for VACUUM/ANALYZE. 0003 introduces\nprivilege bits for VACUUM and ANALYZE on relations. And 0004 introduces\nthe pg_vacuum/analyze_all_tables predefined roles.\n\n[0] https://postgr.es/m/18391.1521419120%40sss.pgh.pa.us\n[1] https://postgr.es/m/11414.1526422062%40sss.pgh.pa.us\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 7 Sep 2022 22:50:35 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Wed, Sep 7, 2022 at 7:09 PM Stephen Frost <sfrost@snowman.net> wrote:\n> Calling this a redesign is over-stating things, imv … and I’d much rather have the per-relation granularity than predefined roles for this, so there is that to consider too, perhaps.\n\nI also prefer the finer granularity.\n\nOn the question of whether freeing up more privilege bits is a\nprerequisite for this patch, I'm a bit on the fence about that. If I\nlook at the amount of extra work that your review comments have caused\nme to do over, let's say, the last three years, and I compare that to\nthe amount of extra work that the review comments of other people have\ncaused me to do in the same period of time, you win. In fact, you win\nagainst all of them added together and doubled. I think that as a\ngeneral matter you are far too willing to argue vigorously for people\nto do work that isn't closely related to their original goals, and\nwhich is at times even opposed to their original goals, and I think\nthe project would be better off if you tempered that urge.\n\nNow on the other hand, I also do think we need more privilege bits.\nYou're not alone in making the case that this is a problem which needs\nto be solved, and the set of other people who are also making that\nargument includes me. At the same time, there is certainly a double\nstandard here. When Andrew and Tom committed\nd11e84ea466b4e3855d7bd5142fb68f51c273567 and\na0ffa885e478f5eeacc4e250e35ce25a4740c487 respectively, we used up 2 of\nthe remaining 4 bits, bits which other people would have liked to have\nused up years ago and they were told \"no you can't.\" I don't believe I\nwould have been willing to commit those patches without doing\nsomething to solve this problem, because I would have been worried\nabout getting yelled at by Tom. But now here we are with only 2 bits\nleft instead of 4, and we're telling the next patch author - who is\nnot Tom - that he's on the hook to solve the problem.\n\nWell, we do need to solve the problem. But we're not necessarily being\nfair about how the work involved gets distributed. It's a heck of a\nlot easier for a committer to get something committed to address this\nissue than a non-committer, and it's a heck of a lot easier for a\ncommitter to ignore the fact that the problem hasn't been solved and\npress ahead anyway, and yet somehow we're trying to dump a problem\nthat's a decade in the making on Nathan. I'm not exactly sure what to\npropose as an alternative, but that doesn't seem quite fair.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Sep 2022 09:41:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Wed, Sep 07, 2022 at 04:15:23PM -0700, Mark Dilger wrote:\n> Ok, now I'm a bit lost. If I want to use Nathan's feature to create a role to vacuum and analyze my database on a regular basis, how does per-relation granularity help me? If somebody creates a new table and doesn't grant those privileges to the role, doesn't that break the usage case? To me, per-relation granularity sounds useful, but orthogonal, to this feature.\n\nI think there is room for both per-relation privileges and new predefined\nroles. My latest patch set [0] introduces both.\n\n[0] https://postgr.es/m/20220908055035.GA2100193%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 9 Sep 2022 15:10:04 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Thu, Sep 08, 2022 at 09:41:20AM -0400, Robert Haas wrote:\n> Now on the other hand, I also do think we need more privilege bits.\n> You're not alone in making the case that this is a problem which needs\n> to be solved, and the set of other people who are also making that\n> argument includes me. At the same time, there is certainly a double\n> standard here. When Andrew and Tom committed\n> d11e84ea466b4e3855d7bd5142fb68f51c273567 and\n> a0ffa885e478f5eeacc4e250e35ce25a4740c487 respectively, we used up 2 of\n> the remaining 4 bits, bits which other people would have liked to have\n> used up years ago and they were told \"no you can't.\" I don't believe I\n> would have been willing to commit those patches without doing\n> something to solve this problem, because I would have been worried\n> about getting yelled at by Tom. But now here we are with only 2 bits\n> left instead of 4, and we're telling the next patch author - who is\n> not Tom - that he's on the hook to solve the problem.\n> \n> Well, we do need to solve the problem. But we're not necessarily being\n> fair about how the work involved gets distributed. It's a heck of a\n> lot easier for a committer to get something committed to address this\n> issue than a non-committer, and it's a heck of a lot easier for a\n> committer to ignore the fact that the problem hasn't been solved and\n> press ahead anyway, and yet somehow we're trying to dump a problem\n> that's a decade in the making on Nathan. I'm not exactly sure what to\n> propose as an alternative, but that doesn't seem quite fair.\n\nAre there any concerns with simply expanding AclMode to 64 bits, as done in\nv5 [0]?\n\n[0] https://postgr.es/m/20220908055035.GA2100193%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 19 Sep 2022 20:51:47 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 08:51:47PM -0700, Nathan Bossart wrote:\n> Are there any concerns with simply expanding AclMode to 64 bits, as done in\n> v5 [0]?\n> \n> [0] https://postgr.es/m/20220908055035.GA2100193%40nathanxps13\n\nI have gone through the thread, and I'd agree with getting more\ngranularity when it comes to assigning ACLs to relations rather than\njust an on/off switch for the objects of a given type would be nice.\nI've been looking at the whole use of AclMode and AclItem in the code,\nand I don't quite see why a larger size could have a noticeable\nimpact. There are a few things that could handle a large number of\nAclItems, though, say for array operations like aclupdate(). These\ncould be easily checked with some micro-benchmarking or some SQL\nqueries that emulate a large number of items in aclitem[] arrays.\n\nAny impact for the column sizes of the catalogs holding ACL\ninformation? Just asking while browsing the patch set.\n\nSome comments in utils/acl.h need a refresh as the number of lower and\nupper bits looked at from ai_privs changes.\n--\nMichael",
"msg_date": "Tue, 20 Sep 2022 14:45:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 02:45:52PM +0900, Michael Paquier wrote:\n> I have gone through the thread, and I'd agree with getting more\n> granularity when it comes to assigning ACLs to relations rather than\n> just an on/off switch for the objects of a given type would be nice.\n> I've been looking at the whole use of AclMode and AclItem in the code,\n> and I don't quite see why a larger size could have a noticeable\n> impact. There are a few things that could handle a large number of\n> AclItems, though, say for array operations like aclupdate(). These\n> could be easily checked with some micro-benchmarking or some SQL\n> queries that emulate a large number of items in aclitem[] arrays.\n\nI performed a few quick tests with a couple thousand ACLs on my laptop, and\nI'm consistently seeing a 4.3% regression.\n\n> Any impact for the column sizes of the catalogs holding ACL\n> information? Just asking while browsing the patch set.\n\nSince each aclitem requires 16 bytes instead of 12, I assume so. However,\nin my testing, I hit a \"row is too big\" error with the same number of\naclitems in a pg_class row before and after the change. I might be missing\nsomething in my patch, or maybe I am misunderstanding how arrays of\naclitems are stored on disk.\n\n> Some comments in utils/acl.h need a refresh as the number of lower and\n> upper bits looked at from ai_privs changes.\n\nOops, I missed that one. I fixed it in the attached patch set.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 20 Sep 2022 11:05:33 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 11:05:33AM -0700, Nathan Bossart wrote:\n> On Tue, Sep 20, 2022 at 02:45:52PM +0900, Michael Paquier wrote:\n>> Any impact for the column sizes of the catalogs holding ACL\n>> information? Just asking while browsing the patch set.\n> \n> Since each aclitem requires 16 bytes instead of 12, I assume so. However,\n> in my testing, I hit a \"row is too big\" error with the same number of\n> aclitems in a pg_class row before and after the change. I might be missing\n> something in my patch, or maybe I am misunderstanding how arrays of\n> aclitems are stored on disk.\n\nAh, it looks like relacl is compressed. The column is marked \"extended,\"\nbut pg_class doesn't appear to have a TOAST table, so presumably no\nout-of-line storage can be used. I found a couple of threads about this\n[0] [1] [2].\n\n[0] https://postgr.es/m/17245.964897719%40sss.pgh.pa.us\n[1] https://postgr.es/m/200309040531.h845ViP05881%40candle.pha.pa.us\n[2] https://postgr.es/m/29061.1265327626%40sss.pgh.pa.us\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 20 Sep 2022 16:31:17 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 04:31:17PM -0700, Nathan Bossart wrote:\n> On Tue, Sep 20, 2022 at 11:05:33AM -0700, Nathan Bossart wrote:\n>> On Tue, Sep 20, 2022 at 02:45:52PM +0900, Michael Paquier wrote:\n>>> Any impact for the column sizes of the catalogs holding ACL\n>>> information? Just asking while browsing the patch set.\n>> \n>> Since each aclitem requires 16 bytes instead of 12, I assume so. However,\n>> in my testing, I hit a \"row is too big\" error with the same number of\n>> aclitems in a pg_class row before and after the change. I might be missing\n>> something in my patch, or maybe I am misunderstanding how arrays of\n>> aclitems are stored on disk.\n> \n> Ah, it looks like relacl is compressed. The column is marked \"extended,\"\n> but pg_class doesn't appear to have a TOAST table, so presumably no\n> out-of-line storage can be used. I found a couple of threads about this\n> [0] [1] [2].\n\nI suppose there is some risk that folks with really long aclitem arrays\nmight be unable to pg_upgrade to a version with uint64 AclModes, but I\nsuspect that risk is limited to extreme cases (i.e., multiple thousands of\naclitems). I'm not sure whether that's worth worrying about too much.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 20 Sep 2022 16:50:10 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 04:50:10PM -0700, Nathan Bossart wrote:\n> On Tue, Sep 20, 2022 at 04:31:17PM -0700, Nathan Bossart wrote:\n>> On Tue, Sep 20, 2022 at 11:05:33AM -0700, Nathan Bossart wrote:\n>>> On Tue, Sep 20, 2022 at 02:45:52PM +0900, Michael Paquier wrote:\n>>>> Any impact for the column sizes of the catalogs holding ACL\n>>>> information? Just asking while browsing the patch set.\n>>> \n>>> Since each aclitem requires 16 bytes instead of 12, I assume so. However,\n>>> in my testing, I hit a \"row is too big\" error with the same number of\n>>> aclitems in a pg_class row before and after the change. I might be missing\n>>> something in my patch, or maybe I am misunderstanding how arrays of\n>>> aclitems are stored on disk.\n>> \n>> Ah, it looks like relacl is compressed. The column is marked \"extended,\"\n>> but pg_class doesn't appear to have a TOAST table, so presumably no\n>> out-of-line storage can be used. I found a couple of threads about this\n>> [0] [1] [2].\n\nAdding a toast table to pg_class has been a sensitive topic over the\nyears. Based on my recollection of the events, there were worries\nabout the potential cross-dependencies with pg_class and pg_attribute\nthat this would create.\n\n> I suppose there is some risk that folks with really long aclitem arrays\n> might be unable to pg_upgrade to a version with uint64 AclModes, but I\n> suspect that risk is limited to extreme cases (i.e., multiple thousands of\n> aclitems). I'm not sure whether that's worth worrying about too much.\n\nDid you just run an aclupdate()? 4% for aclitem[] sounds like quite a\nnumber to me :/ It may be worth looking at if these operations could\nbe locally optimized more, as well. I'd like to think that we could\nlive with that to free up enough bits in AclItems for the next 20\nyears, anyway. Any opinions?\n\nFor the column sizes of the catalogs, I was wondering about how\npg_column_size() changes when they hold ACL information. 
Unoptimized\nalignment could cause an unnecessary increase in the structure sizes,\nso the addition of new fields or changes in object size could have\nunexpected side effects.\n--\nMichael",
"msg_date": "Wed, 21 Sep 2022 10:31:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Wed, Sep 21, 2022 at 10:31:47AM +0900, Michael Paquier wrote:\n> Did you just run an aclupdate()? 4% for aclitem[] sounds like quite a\n> number to me :/ It may be worth looking at if these operations could\n> be locally optimized more, as well. I'd like to think that we could\n> live with that to free up enough bits in AclItems for the next 20\n> years, anyway. Any opinions?\n\nYes, the test was mostly for aclupdate(). Looking at that function, I bet\nmost of its time is spent in palloc0() and memcpy(). It might be possible\nto replace the linear search if the array was sorted, but I'm skeptical\nthat will help much. In the end, I'm not it's worth worrying too much\nabout 2,000 calls to aclupdate() with an array of 2,000 ACLs taking 5.3\nseconds instead of 5.1 seconds.\n\nI bet a more pressing concern is the calls to aclmask() since checking\nprivileges is probably done more frequently than updating them. That\nappears to use a linear search, too, so maybe sorting the aclitem arrays is\nactually worth exploring. I still doubt there will be much noticeable\nimpact from expanding AclMode outside of the most extreme cases.\n\n> For the column sizes of the catalogs, I was wondering about how\n> pg_column_size() changes when they hold ACL information. Unoptimized\n> alignment could cause an unnecessary increase in the structure sizes,\n> so the addition of new fields or changes in object size could have\n> unexpected side effects.\n\nAfter a few tests, I haven't discovered any changes to the output of\npg_column_size().\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 20 Sep 2022 21:31:26 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
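The linear search in aclmask() discussed above can be sketched like this. Again the names are invented for illustration (the real aclmask() handles PUBLIC, role membership, and grant options), but the shape of the scan — and why sorting the array by grantee could replace it with a binary search — is the point:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint32_t Oid;
typedef uint64_t AclMode;

typedef struct
{
    Oid     ai_grantee;
    Oid     ai_grantor;
    AclMode ai_privs;
} AclItemSketch;   /* hypothetical stand-in for AclItem */

/*
 * aclmask()-style lookup, simplified: OR together every privilege bit
 * granted to "roleid", restricted to the bits the caller asked about.
 * This walks the whole array even after a match, since a role can hold
 * grants from several grantors.
 */
static AclMode
acl_mask_sketch(const AclItemSketch *items, size_t n,
                Oid roleid, AclMode mask)
{
    AclMode result = 0;

    for (size_t i = 0; i < n; i++)
    {
        if (items[i].ai_grantee == roleid)
            result |= items[i].ai_privs & mask;
    }
    return result;
}
```

Because privilege checks run far more often than grants change, this lookup path is the more plausible place for a long aclitem array to hurt, independent of whether AclMode is 32 or 64 bits wide.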
{
"msg_contents": "On Tue, Sep 20, 2022 at 09:31:26PM -0700, Nathan Bossart wrote:\n> I bet a more pressing concern is the calls to aclmask() since checking\n> privileges is probably done more frequently than updating them. That\n> appears to use a linear search, too, so maybe sorting the aclitem arrays is\n> actually worth exploring. I still doubt there will be much noticeable\n> impact from expanding AclMode outside of the most extreme cases.\n\nI've been testing aclmask() with long aclitem arrays (2,000 entries is\nclose to the limit for pg_class entries), and I haven't found any\nsignificant impact from bumping AclMode to 64 bits.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 28 Sep 2022 11:50:34 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "Greetings,\n\nOn Wed, Sep 28, 2022 at 14:50 Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> On Tue, Sep 20, 2022 at 09:31:26PM -0700, Nathan Bossart wrote:\n> > I bet a more pressing concern is the calls to aclmask() since checking\n> > privileges is probably done more frequently than updating them. That\n> > appears to use a linear search, too, so maybe sorting the aclitem arrays\n> is\n> > actually worth exploring. I still doubt there will be much noticeable\n> > impact from expanding AclMode outside of the most extreme cases.\n>\n> I've been testing aclmask() with long aclitem arrays (2,000 entries is\n> close to the limit for pg_class entries), and I haven't found any\n> significant impact from bumping AclMode to 64 bits.\n\n\nThe max is the same regardless of the size..? Considering the size is\ncapped since pg_class doesn’t (and isn’t likely to..) have a toast table,\nthat seems unlikely, so I’m asking for clarification on that. We may be\nable to get consensus that the difference isn’t material since no one is\nlikely to have such long lists, but we should at least be aware.\n\nThanks,\n\nStephen\n\n>\n\nGreetings,On Wed, Sep 28, 2022 at 14:50 Nathan Bossart <nathandbossart@gmail.com> wrote:On Tue, Sep 20, 2022 at 09:31:26PM -0700, Nathan Bossart wrote:\n> I bet a more pressing concern is the calls to aclmask() since checking\n> privileges is probably done more frequently than updating them. That\n> appears to use a linear search, too, so maybe sorting the aclitem arrays is\n> actually worth exploring. I still doubt there will be much noticeable\n> impact from expanding AclMode outside of the most extreme cases.\n\nI've been testing aclmask() with long aclitem arrays (2,000 entries is\nclose to the limit for pg_class entries), and I haven't found any\nsignificant impact from bumping AclMode to 64 bits.The max is the same regardless of the size..? Considering the size is capped since pg_class doesn’t (and isn’t likely to..) 
have a toast table, that seems unlikely, so I’m asking for clarification on that. We may be able to get consensus that the difference isn’t material since no one is likely to have such long lists, but we should at least be aware.Thanks,Stephen",
"msg_date": "Wed, 28 Sep 2022 15:09:46 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Wed, Sep 28, 2022 at 03:09:46PM -0400, Stephen Frost wrote:\n> On Wed, Sep 28, 2022 at 14:50 Nathan Bossart <nathandbossart@gmail.com>\n> wrote:\n>> I've been testing aclmask() with long aclitem arrays (2,000 entries is\n>> close to the limit for pg_class entries), and I haven't found any\n>> significant impact from bumping AclMode to 64 bits.\n> \n> The max is the same regardless of the size..? Considering the size is\n> capped since pg_class doesn’t (and isn’t likely to..) have a toast table,\n> that seems unlikely, so I’m asking for clarification on that. We may be\n> able to get consensus that the difference isn’t material since no one is\n> likely to have such long lists, but we should at least be aware.\n\nWhile pg_class doesn't have a TOAST table, that column is marked as\n\"extended,\" so I believe it is still compressed, and the maximum aclitem\narray length for pg_class.relacl would depend on how well the array\ncompresses.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 28 Sep 2022 13:12:22 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Wed, Sep 28, 2022 at 01:12:22PM -0700, Nathan Bossart wrote:\n> On Wed, Sep 28, 2022 at 03:09:46PM -0400, Stephen Frost wrote:\n>> The max is the same regardless of the size..? Considering the size is\n>> capped since pg_class doesn’t (and isn’t likely to..) have a toast table,\n>> that seems unlikely, so I’m asking for clarification on that. We may be\n>> able to get consensus that the difference isn’t material since no one is\n>> likely to have such long lists, but we should at least be aware.\n> \n> While pg_class doesn't have a TOAST table, that column is marked as\n> \"extended,\" so I believe it is still compressed, and the maximum aclitem\n> array length for pg_class.relacl would depend on how well the array\n> compresses.\n\nAre there any remaining concerns about this approach? I'm happy to do any\ntesting that folks deem necessary, or anything else really that might help\nmove this patch set forward. If we don't want to extend AclMode right\naway, we could also keep it in our back pocket for the next time someone\n(which may very well be me) wants to add privileges. That is, 0001 is not\nfundamentally a prerequisite for 0002-0004, but I recognize that freeing up\nsome extra bits would be the most courteous.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 30 Sep 2022 12:23:50 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> Are there any remaining concerns about this approach? I'm happy to do any\n> testing that folks deem necessary, or anything else really that might help\n> move this patch set forward. If we don't want to extend AclMode right\n> away, we could also keep it in our back pocket for the next time someone\n> (which may very well be me) wants to add privileges. That is, 0001 is not\n> fundamentally a prerequisite for 0002-0004, but I recognize that freeing up\n> some extra bits would be the most courteous.\n\nIn view of the recent mess around bigint relfilenodes, it seems to me\nthat we shouldn't move forward with widening AclMode unless somebody\nruns down which structs will get wider (or more aligned) and how much\nthat'll cost us. Maybe it's not a problem, but it could do with an\nexplicit look at the point.\n\nI do agree with the position that these features are not where to\nspend our last remaining privilege bits.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Sep 2022 16:15:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Fri, Sep 30, 2022 at 04:15:24PM -0400, Tom Lane wrote:\n> In view of the recent mess around bigint relfilenodes, it seems to me\n> that we shouldn't move forward with widening AclMode unless somebody\n> runs down which structs will get wider (or more aligned) and how much\n> that'll cost us. Maybe it's not a problem, but it could do with an\n> explicit look at the point.\n\nThe main one I see is AclItem, which increases from 12 bytes to 16 bytes.\nAFAICT all of the catalogs that store aclitem arrays have the aclitem[]\ncolumn marked extended, so they are compressed or moved out-of-line as\nneeded, too. The only other structs I've spotted that make use of AclMode\nare InternalGrant and InternalDefaultACL. I haven't identified anything\nthat leads me to believe there are alignment problems or anything else\ncomparable to the issues listed in the relfilenode thread [0], but I could\nbe missing something. Did you have something else in mind you think ought\nto be checked? I'm not sure my brief analysis here suffices.\n\n[0] https://postgr.es/m/CA%2BTgmoaa9Yc9O-FP4vS_xTKf8Wgy8TzHpjnjN56_ShKE%3DjrP-Q%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 30 Sep 2022 14:47:28 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> The main one I see is AclItem, which increases from 12 bytes to 16 bytes.\n\n... and now requires double alignment ... did you fix its typalign?\n\nWe could conceivably dodge the alignment increase by splitting the 64-bit\nfield into two 32-bit fields, one for base privileges and one for grant\noptions. That'd be rather invasive, so unless it leads to pleasant\nimprovements in readability (which it might, perhaps) I wouldn't advocate\nfor it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Sep 2022 18:00:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
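For a rough sense of why the typalign question matters, the two layouts under discussion can be sketched as below. These are not the real PostgreSQL definitions (the bit-split comments reflect the halves-of-ai_privs convention described in the thread), but on typical 64-bit platforms they show the 12-to-16-byte growth and the jump from 4-byte to 8-byte alignment that a uint64 mask brings:

```c
#include <assert.h>
#include <stdalign.h>
#include <stdint.h>

typedef uint32_t Oid;

/* Sketch of the pre-patch layout: 32-bit privilege mask */
typedef struct
{
    Oid      ai_grantee;
    Oid      ai_grantor;
    uint32_t ai_privs;      /* lower half: privileges, upper half: grant options */
} AclItem32Sketch;

/* Sketch of the post-patch layout: 64-bit privilege mask */
typedef struct
{
    Oid      ai_grantee;
    Oid      ai_grantor;
    uint64_t ai_privs;      /* halves widen to 32 bits each */
} AclItem64Sketch;
```

Note that the two leading Oids already occupy 8 bytes, so the wider mask adds no internal padding; the cost is the larger field itself plus the stricter alignment. Tom's alternative of two 32-bit fields (base privileges and grant options) would keep the old 4-byte alignment at the price of touching every place that manipulates the combined mask.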
{
"msg_contents": "On Fri, Sep 30, 2022 at 06:00:53PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> The main one I see is AclItem, which increases from 12 bytes to 16 bytes.\n> \n> ... and now requires double alignment ... did you fix its typalign?\n\nNope, I missed that, thanks for pointing it out. Should we move ai_privs\nto the beginning of the struct, too? The only other similar example I see\nis TimeTzADT, but that only consists of an int64 and an int32, while\nAclItem starts with 2 uint32s. While it might not be strictly necessary,\nit seems like there is a small chance it could become necessary in the\nfuture.\n\n> We could conceivably dodge the alignment increase by splitting the 64-bit\n> field into two 32-bit fields, one for base privileges and one for grant\n> options. That'd be rather invasive, so unless it leads to pleasant\n> improvements in readability (which it might, perhaps) I wouldn't advocate\n> for it.\n\nYeah, the invasiveness is the main reason I haven't tried this yet, but it\ndoes seem like it'd improve readability.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 30 Sep 2022 15:32:57 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Fri, Sep 30, 2022 at 06:00:53PM -0400, Tom Lane wrote:\n>> ... and now requires double alignment ... did you fix its typalign?\n\n> Nope, I missed that, thanks for pointing it out. Should we move ai_privs\n> to the beginning of the struct, too?\n\nDon't see any point, there won't be any padding. If we ever change the\nsizeof(Oid), or add more fields, we can consider what to do then.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Sep 2022 19:05:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Fri, Sep 30, 2022 at 07:05:38PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> On Fri, Sep 30, 2022 at 06:00:53PM -0400, Tom Lane wrote:\n>>> ... and now requires double alignment ... did you fix its typalign?\n> \n>> Nope, I missed that, thanks for pointing it out. Should we move ai_privs\n>> to the beginning of the struct, too?\n> \n> Don't see any point, there won't be any padding. If we ever change the\n> sizeof(Oid), or add more fields, we can consider what to do then.\n\nSounds good. Here's a new patch set with aclitem's typalign fixed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 30 Sep 2022 16:18:34 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": ">\n> Sounds good. Here's a new patch set with aclitem's typalign fixed.\n>\n\nPatch applies.\nPasses make check and make check-world.\nTest coverage seems adequate.\n\nCoding is very clear and very much in the style of the existing code. Any\nquibbles I have with the coding style are ones I have with the overall\npg-style, and this isn't the forum for that.\n\nI haven't done any benchmarking yet, but it seems that the main question\nwill be the impact on ordinary DML statements.\n\nI have no opinion about the design debate earlier in this thread, but I do\nthink that this patch is ready and adds something concrete to the ongoing\ndiscussion.\n\nSounds good. Here's a new patch set with aclitem's typalign fixed.Patch applies.Passes make check and make check-world.Test coverage seems adequate.Coding is very clear and very much in the style of the existing code. Any quibbles I have with the coding style are ones I have with the overall pg-style, and this isn't the forum for that.I haven't done any benchmarking yet, but it seems that the main question will be the impact on ordinary DML statements.I have no opinion about the design debate earlier in this thread, but I do think that this patch is ready and adds something concrete to the ongoing discussion.",
"msg_date": "Fri, 14 Oct 2022 19:37:38 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Fri, Oct 14, 2022 at 07:37:38PM -0400, Corey Huinker wrote:\n> Patch applies.\n> Passes make check and make check-world.\n> Test coverage seems adequate.\n> \n> Coding is very clear and very much in the style of the existing code. Any\n> quibbles I have with the coding style are ones I have with the overall\n> pg-style, and this isn't the forum for that.\n> \n> I haven't done any benchmarking yet, but it seems that the main question\n> will be the impact on ordinary DML statements.\n> \n> I have no opinion about the design debate earlier in this thread, but I do\n> think that this patch is ready and adds something concrete to the ongoing\n> discussion.\n\nThanks for taking a look! Here is a rebased version of the patch set.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 14 Nov 2022 15:40:04 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Mon, Nov 14, 2022 at 03:40:04PM -0800, Nathan Bossart wrote:\n> Thanks for taking a look! Here is a rebased version of the patch set.\n\nOops, apparently object_aclcheck() cannot be used for pg_class. Here is\nanother version that uses pg_class_aclcheck() instead. I'm not sure how I\nmissed this earlier.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 14 Nov 2022 21:08:13 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "\nOn 2022-11-15 Tu 00:08, Nathan Bossart wrote:\n> On Mon, Nov 14, 2022 at 03:40:04PM -0800, Nathan Bossart wrote:\n>> Thanks for taking a look! Here is a rebased version of the patch set.\n> Oops, apparently object_aclcheck() cannot be used for pg_class. Here is\n> another version that uses pg_class_aclcheck() instead. I'm not sure how I\n> missed this earlier.\n>\n\nOK, reading the history I think everyone is on board with expanding\nAclMode from uint32 to uint64. Is that right? If so I'm intending to\ncommit at least the first two of these patches fairly soon.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 16 Nov 2022 15:09:47 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Wed, Nov 16, 2022 at 03:09:47PM -0500, Andrew Dunstan wrote:\n> OK, reading the history I think everyone is on board with expanding\n> AclMode from uint32 to uint64. Is that right?\n\nI skimmed through this thread again, and AFAICT folks are okay with this\napproach. I'm not aware of any remaining concerns.\n\n> If so I'm intending to\n> commit at least the first two of these patches fairly soon.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 16 Nov 2022 20:39:52 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "rebased\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 18 Nov 2022 09:05:04 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Fri, Nov 18, 2022 at 09:05:04AM -0800, Nathan Bossart wrote:\n> rebased\n\nanother rebase\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 19 Nov 2022 10:50:04 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Sat, Nov 19, 2022 at 10:50:04AM -0800, Nathan Bossart wrote:\n> another rebase\n\nAnother rebase for cfbot.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sun, 20 Nov 2022 08:57:13 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "\nOn 2022-11-20 Su 11:57, Nathan Bossart wrote:\n> On Sat, Nov 19, 2022 at 10:50:04AM -0800, Nathan Bossart wrote:\n>> another rebase\n> Another rebase for cfbot.\n>\n\n\nI have committed the first couple of these to get them out of the way.\n\nBut I think we need a bit of cleanup in the next patch.\nvacuum_is_relation_owner() looks like it's now rather misnamed. Maybe\nvacuum_is_permitted_for_relation()? Also I think we need a more thorough\nreworking of the comments around line 566. And I think we need a more\ndetailed explanation of why the change in vacuum_rel is ok, and if it is\nOK we should adjust the head comment on the function.\n\nIn any case I think this comment would be better English with \"might\"\ninstead of \"may\":\n\n/* user may have the ANALYZE privilege */\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 23 Nov 2022 14:56:28 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Wed, Nov 23, 2022 at 02:56:28PM -0500, Andrew Dunstan wrote:\n> I have committed the first couple of these to get them out of the way.\n\nThanks!\n\n> But I think we need a bit of cleanup in the next patch.\n> vacuum_is_relation_owner() looks like it's now rather misnamed. Maybe\n> vacuum_is_permitted_for_relation()? Also I think we need a more thorough\n> reworking of the comments around line 566. And I think we need a more\n> detailed explanation of why the change in vacuum_rel is ok, and if it is\n> OK we should adjust the head comment on the function.\n> \n> In any case I think this comment would be better English with \"might\"\n> instead of \"may\":\n> \n> /* user may have the ANALYZE privilege */\n\nI've attempted to address all your feedback in v13. Please let me know if\nanything needs further reworking.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 23 Nov 2022 15:54:44 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "\nOn 2022-11-23 We 18:54, Nathan Bossart wrote:\n> On Wed, Nov 23, 2022 at 02:56:28PM -0500, Andrew Dunstan wrote:\n>> I have committed the first couple of these to get them out of the way.\n> Thanks!\n>\n>> But I think we need a bit of cleanup in the next patch.\n>> vacuum_is_relation_owner() looks like it's now rather misnamed. Maybe\n>> vacuum_is_permitted_for_relation()? Also I think we need a more thorough\n>> reworking of the comments around line 566. And I think we need a more\n>> detailed explanation of why the change in vacuum_rel is ok, and if it is\n>> OK we should adjust the head comment on the function.\n>>\n>> In any case I think this comment would be better English with \"might\"\n>> instead of \"may\":\n>>\n>> /* user may have the ANALYZE privilege */\n> I've attempted to address all your feedback in v13. Please let me know if\n> anything needs further reworking.\n\n\n\nThanks,\n\n\npushed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 28 Nov 2022 12:13:13 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Mon, Nov 28, 2022 at 12:13:13PM -0500, Andrew Dunstan wrote:\n> pushed.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 28 Nov 2022 10:52:47 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "Hello,\n\nWhile looking into the new feature, I found the following situation with \nthe \\dp command displaying privileges on the system tables:\n\nGRANT VACUUM, ANALYZE ON TABLE pg_type TO alice;\n\nSELECT relacl FROM pg_class WHERE oid = 'pg_type'::regclass;\n relacl\n-------------------------------------------------------------\n {=r/postgres,postgres=arwdDxtvz/postgres,alice=vz/postgres}\n(1 row)\n\nBut the \\dp command does not show the granted privileges:\n\n\\dp pg_type\n Access privileges\n Schema | Name | Type | Access privileges | Column privileges | Policies\n--------+------+------+-------------------+-------------------+----------\n(0 rows)\n\nThe comment in src/bin/psql/describe.c explains the situation:\n\n /*\n * Unless a schema pattern is specified, we suppress system and temp\n * tables, since they normally aren't very interesting from a \npermissions\n * point of view. You can see 'em by explicit request though, eg \nwith \\z\n * pg_catalog.*\n */\n\n\nSo to see the privileges you have to explicitly specify the schema name:\n\n\\dp pg_catalog.pg_type\n Access privileges\n Schema | Name | Type | Access privileges | Column \nprivileges | Policies\n------------+---------+-------+-----------------------------+-------------------+----------\n pg_catalog | pg_type | table | =r/postgres +| |\n | | | \npostgres=arwdDxtvz/postgres+| |\n | | | alice=vz/postgres | |\n(1 row)\n\nBut perhaps this behavior should be reviewed or at least documented?\n\n-----\nPavel Luzanov\n\n\n",
"msg_date": "Mon, 5 Dec 2022 23:21:08 +0300",
"msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Mon, Dec 05, 2022 at 11:21:08PM +0300, Pavel Luzanov wrote:\n> But perhaps this behavior should be reviewed or at least documented?\n\nI wonder why \\dpS wasn't added. I wrote up a patch to add it and the\ncorresponding documentation that other meta-commands already have.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 5 Dec 2022 16:04:14 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n\n> diff --git a/src/backend/catalog/aclchk.c b/src/backend/catalog/aclchk.c\n> index 3b5ea3c137..bd967eaa78 100644\n> --- a/src/backend/catalog/aclchk.c\n> +++ b/src/backend/catalog/aclchk.c\n> @@ -4202,6 +4202,26 @@ pg_class_aclmask_ext(Oid table_oid, Oid roleid, AclMode mask,\n> \t\thas_privs_of_role(roleid, ROLE_PG_WRITE_ALL_DATA))\n> \t\tresult |= (mask & (ACL_INSERT | ACL_UPDATE | ACL_DELETE));\n> \n> +\t/*\n> +\t * Check if ACL_VACUUM is being checked and, if so, and not already set as\n> +\t * part of the result, then check if the user is a member of the\n> +\t * pg_vacuum_all_tables role, which allows VACUUM on all relations.\n> +\t */\n> +\tif (mask & ACL_VACUUM &&\n> +\t\t!(result & ACL_VACUUM) &&\n> +\t\thas_privs_of_role(roleid, ROLE_PG_VACUUM_ALL_TABLES))\n> +\t\tresult |= ACL_VACUUM;\n> +\n> +\t/*\n> +\t * Check if ACL_ANALYZE is being checked and, if so, and not already set as\n> +\t * part of the result, then check if the user is a member of the\n> +\t * pg_analyze_all_tables role, which allows ANALYZE on all relations.\n> +\t */\n> +\tif (mask & ACL_ANALYZE &&\n> +\t\t!(result & ACL_ANALYZE) &&\n> +\t\thas_privs_of_role(roleid, ROLE_PG_ANALYZE_ALL_TABLES))\n> +\t\tresult |= ACL_ANALYZE;\n> +\n> \treturn result;\n> }\n\nThese checks are getting rather repetitive, how about a data-driven\napproach, along the lines of the below patch? I'm not quite happy with\nthe naming of the struct and its members (and maybe it should be in a\nheader?), suggestions welcome.\n\n- ilmari",
"msg_date": "Tue, 06 Dec 2022 11:47:50 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On 06.12.2022 03:04, Nathan Bossart wrote:\n> I wonder why \\dpS wasn't added. I wrote up a patch to add it and the\n> corresponding documentation that other meta-commands already have.\n\nYes, \\dpS command and clarification in the documentation is exactly what \nis needed.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n",
"msg_date": "Tue, 6 Dec 2022 16:57:37 +0300",
"msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Tue, Dec 06, 2022 at 04:57:37PM +0300, Pavel Luzanov wrote:\n> On 06.12.2022 03:04, Nathan Bossart wrote:\n>> I wonder why \\dpS wasn't added. I wrote up a patch to add it and the\n>> corresponding documentation that other meta-commands already have.\n> \n> Yes, \\dpS command and clarification in the documentation is exactly what is\n> needed.\n\nI created a new thread for this:\n\n\thttps://postgr.es/m/20221206193606.GB3078082%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 6 Dec 2022 11:38:09 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
},
{
"msg_contents": "On Tue, Dec 06, 2022 at 11:47:50AM +0000, Dagfinn Ilmari Manns�ker wrote:\n> These checks are getting rather repetitive, how about a data-driven\n> approach, along the lines of the below patch? I'm not quite happy with\n> the naming of the struct and its members (and maybe it should be in a\n> header?), suggestions welcome.\n\n+1. I wonder if we should also consider checking all the bits at once\nbefore we start checking for the predefined roles. I'm thinking of\nsomething a bit like this:\n\n\trole_mask = ACL_SELECT | ACL_INSERT | ACL_UPDATE |\n\t\t\t\tACL_DELETE | ACL_VACUUM | ACL_ANALYZE;\n\n\tif (mask & role_mask != result & role_mask)\n\t{\n\t\t... existing checks here ...\n\t}\n\nI'm skeptical this actually produces any measurable benefit, but presumably\nthe predefined roles list will continue to grow, so maybe it's still worth\nadding a fast path.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 6 Dec 2022 11:51:08 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: predefined role(s) for VACUUM and ANALYZE"
}
] |
[
{
"msg_contents": "This works:\n\nvagrant@vagrant:/usr/local/pgsql/bin$ echo 'value1' | ./psql -d postgres -c\n'\\copy csvimport from stdin;'\nCOPY 1\n\nHowever:\n\nFor \\copy ... from stdin, data rows are read from the same source that\nissued the command\n\nand\n\nWhen either -c or -f is specified, psql does not read commands from\nstandard input;\n\nSo the meta-command is not read from standard input, thus standard input is\nnot the source of the command, yet the copy data sitting on standard input\nis indeed read and used for the copy.\n\nThe behavior when the \\copy command is in --file does conform to the\ndescriptions. Thus one must write pstdin as the source to make the\nfollowing work:\n\nvagrant@vagrant:/usr/local/pgsql/bin$ echo 'value1' | ./psql -d postgres -f\n<(echo '\\copy csvimport from pstdin;')\nCOPY 1\n\nThis also shows up with SQL COPY ... FROM since one is able to write:\n\nvagrant@vagrant:/usr/local/pgsql/bin$ echo 'value1' | ./psql -d postgres -c\n'copy csvimport from stdin;'\nCOPY 1\n\nbut not:\n\nvagrant@vagrant:/usr/local/pgsql/bin$ echo 'value1' | ./psql -d postgres -f\n<(echo 'copy csvimport from stdin;')\nCOPY 0\n\nThis last form is especially useful for COPY ... TO STDOUT but considerably\nless so for COPY ... FROM; though you lose the flexibility to target\npstdout (which is likewise minor). It is an inconsistency but an\nunderstandable one.\n\nShould we amend \\copy to read:\n\n\"For \\copy ... from stdin, data rows are read from the same source that\nissued the command (for --command the related source is stdin, see Notes),\ncontinuing...\"\n\nand then in Notes:\n\n\"The accessibility of the psql command's standard input varies slightly\ndepending on whether --command or --file is specified as the source of\ncommands. For --command, the input is accessible both via pstdin and\nstdin, but when using --file it can only be accessed via pstdin. 
This most\noften arises when using the \\copy meta-command or SQL COPY, the latter\nbeing unable to access pstdin.\"\n\n?\n\nDavid J.\n\np.s. For now I've taken the position that figuring out what works when both\n--command and --file are specified is an exercise for the reader.",
"msg_date": "Fri, 22 Jul 2022 18:28:41 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Interpretation of docs for \\copy ... from stdin inaccurate when using -c"
}
] |
[
{
"msg_contents": "Hi,\nI was looking at pg_regcomp():\n\n re->re_guts = VS(MALLOC(sizeof(struct guts)));\n\nI did some search trying to find where re_guts is freed but haven't\nfound it.\nCan someone enlighten me?\n\nThanks",
"msg_date": "Fri, 22 Jul 2022 20:20:04 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "potential memory leak in pg_regcomp()"
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 08:20:04PM -0700, Zhihong Yu wrote:\n> Hi,\n> I was looking at pg_regcomp():\n> \n> re->re_guts = VS(MALLOC(sizeof(struct guts)));\n> \n> I did some search trying to find where re_guts is freed but haven't\n> found it.\n> Can someone enlighten me?\n\nOops. It seems that you are right and that there is room for\nimprovement here. regguts.h defines MALLOC() as a simple malloc()..\n--\nMichael",
"msg_date": "Sat, 23 Jul 2022 12:29:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: potential memory leak in pg_regcomp()"
},
{
"msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> I was looking at pg_regcomp():\n> re->re_guts = VS(MALLOC(sizeof(struct guts)));\n> I did some search trying to find where re_guts is freed but haven't\n> found it.\n\nIn rfree(), which is called from freev().\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 22 Jul 2022 23:49:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: potential memory leak in pg_regcomp()"
}
] |
[
{
"msg_contents": "Hi,\n\nVariableCacheData.nextFullXid is renamed to nextXid in commit https://github.com/postgres/postgres//commit/fea10a64340e529805609126740a540c8f9daab4 <https://github.com/postgres/postgres//commit/fea10a64340e529805609126740a540c8f9daab4>\n\nFix the annotations for less confusion.\n\nRegards,\n\nZhang Mingli",
"msg_date": "Sat, 23 Jul 2022 13:01:26 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix annotations nextFullXid"
},
{
"msg_contents": "\n\nOn 2022/07/23 14:01, Zhang Mingli wrote:\n> Hi,\n> \n> VariableCacheData.nextFullXid is renamed to nextXid in commit https://github.com/postgres/postgres//commit/fea10a64340e529805609126740a540c8f9daab4 <https://github.com/postgres/postgres//commit/fea10a64340e529805609126740a540c8f9daab4>\n> \n> Fix the annotations for less confusion.\n\nThanks for the patch! LGTM. I will commit it.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 26 Jul 2022 11:08:45 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix annotations nextFullXid"
},
{
"msg_contents": "Thanks!\n\nRegards,\nZhang Mingli\n\n\n\n> On Jul 26, 2022, at 10:08, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> \n> \n> \n> On 2022/07/23 14:01, Zhang Mingli wrote:\n>> Hi,\n>> VariableCacheData.nextFullXid is renamed to nextXid in commit https://github.com/postgres/postgres//commit/fea10a64340e529805609126740a540c8f9daab4 <https://github.com/postgres/postgres//commit/fea10a64340e529805609126740a540c8f9daab4>\n>> Fix the annotations for less confusion.\n> \n> Thanks for the patch! LGTM. I will commit it.\n> \n> Regards,\n> \n> -- \n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n\n\n\n",
"msg_date": "Wed, 27 Jul 2022 00:29:05 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix annotations nextFullXid"
},
{
"msg_contents": "\n\nOn 2022/07/27 1:29, Zhang Mingli wrote:\n> Thanks!\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 28 Jul 2022 15:00:44 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix annotations nextFullXid"
}
] |
[
{
"msg_contents": "Hi,\n\nRight now, the session that starts the backup with pg_backup_start()\nhas to end it with pg_backup_stop() which returns the backup_label and\ntablespace_map contents (commit 39969e2a1). If the backups were to be\ntaken using custom disk snapshot tools on production servers,\nfollowing are the high-level steps involved:\n1) open a session\n2) run pg_backup_start() using the same session opened in (1)\n3) run custom disk snapshot tools which may, many-a-times, will copy\nthe entire data directory over the network\n4) run pg_backup_stop() using the same session opened in (1)\n\nTypically, step (3) takes a good amount of time in production\nenvironments with terabytes or petabytes scale of data and keeping the\nsession alive from step (1) to (4) has overhead and it wastes the\nresources. And the session can get closed for various reasons - idle\nin session timeout, tcp/ip keepalive timeout, network problems etc.\nAll of these can render the backup useless.\n\nWhat if the backup started by a session can also be closed by another\nsession? This seems to be achievable, if we can place the\nbackup_label, tablespace_map and other required session/backend level\ncontents in shared memory with the key as backup_label name. It's a\nlong way to go. The idea may be naive at this stage and there might be\nsomething important that doesn't let us do the proposed solution. I\nwould like to hear more thoughts from the hackers.\n\nThanks to Sameer, Satya (cc-ed) for the offlist discussion.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Sat, 23 Jul 2022 14:58:53 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "A proposal for shared memory based backup infrastructure"
},
{
"msg_contents": "Hi Bharath,\n\n\n\n\n\n\n*\"Typically, step (3) takes a good amount of time in productionenvironments\nwith terabytes or petabytes scale of data and keeping thesession alive from\nstep (1) to (4) has overhead and it wastes theresources. And the session\ncan get closed for various reasons - idlein session timeout, tcp/ip\nkeepalive timeout, network problems etc.All of these can render the backup\nuseless.\"*\n\n>> this could be a common scenario and needs to be addressed.\n\n\n\n\n\n*\"What if the backup started by a session can also be closed by\nanothersession? This seems to be achievable, if we can place\nthebackup_label, tablespace_map and other required session/backend\nlevelcontents in shared memory with the key as backup_label name. It's\nalong way to go.\"*\n\n*>> * I think storing metadata about backup of a session in shared memory\nmay not work as it gets purged when the database goes for restart. We might\nrequire a separate catalogue table to handle the backup session.\n\nThanks,\nMahendrakar.\n\n\nOn Sat, 23 Jul 2022 at 14:59, Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> Hi,\n>\n> Right now, the session that starts the backup with pg_backup_start()\n> has to end it with pg_backup_stop() which returns the backup_label and\n> tablespace_map contents (commit 39969e2a1). If the backups were to be\n> taken using custom disk snapshot tools on production servers,\n> following are the high-level steps involved:\n> 1) open a session\n> 2) run pg_backup_start() using the same session opened in (1)\n> 3) run custom disk snapshot tools which may, many-a-times, will copy\n> the entire data directory over the network\n> 4) run pg_backup_stop() using the same session opened in (1)\n>\n> Typically, step (3) takes a good amount of time in production\n> environments with terabytes or petabytes scale of data and keeping the\n> session alive from step (1) to (4) has overhead and it wastes the\n> resources. 
And the session can get closed for various reasons - idle\n> in session timeout, tcp/ip keepalive timeout, network problems etc.\n> All of these can render the backup useless.\n>\n> What if the backup started by a session can also be closed by another\n> session? This seems to be achievable, if we can place the\n> backup_label, tablespace_map and other required session/backend level\n> contents in shared memory with the key as backup_label name. It's a\n> long way to go. The idea may be naive at this stage and there might be\n> something important that doesn't let us do the proposed solution. I\n> would like to hear more thoughts from the hackers.\n>\n> Thanks to Sameer, Satya (cc-ed) for the offlist discussion.\n>\n> Regards,\n> Bharath Rupireddy.\n>\n>\n>",
"msg_from": "mahendrakar s <mahendrakarforpg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A proposal for shared memory based backup infrastructure"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 10:03 AM mahendrakar s\n<mahendrakarforpg@gmail.com> wrote:\n>\n> Hi Bharath,\n\nThanks Mahendrakar for taking a look at the design.\n\n> \"Typically, step (3) takes a good amount of time in production\n> environments with terabytes or petabytes scale of data and keeping the\n> session alive from step (1) to (4) has overhead and it wastes the\n> resources. And the session can get closed for various reasons - idle\n> in session timeout, tcp/ip keepalive timeout, network problems etc.\n> All of these can render the backup useless.\"\n>\n> >> this could be a common scenario and needs to be addressed.\n\nHm. Additionally, the problem of keeping the session that starts the\nbackup open until the entire data directory is backed-up becomes more\nworrisome if we were to run backups for a huge number of servers at\nscale - the entity (control plane or whatever), that is responsible\nfor taking backups across huge fleet of postgres production servers,\nwill have tremendous amount of resources wasted and it's a problem for\nthat entity to keep the backup sessions active until the actual backup\nis finished.\n\n> \"What if the backup started by a session can also be closed by another\n> session? This seems to be achievable, if we can place the\n> backup_label, tablespace_map and other required session/backend level\n> contents in shared memory with the key as backup_label name. It's a\n> long way to go.\"\n>\n> >> I think storing metadata about backup of a session in shared memory may not work as it gets purged when the database goes for restart. 
We might require a separate catalogue table to handle the backup session.\n\nRight now, the non-exclusive (and we don't have exclusive backups now\nfrom postgres 15) backup will anyway become useless if the postgres\nrestarts, because there's no running backup state (backup_label,\ntablespace_map contents) that's persisted.\n\nFollowing are few more thoughts with the shared memory based backups\nas proposed in this thread:\n\n1) How many max backups do we want to allow? Right now, there's no\nlimit, I believe, max_connections number of concurrent backups can be\ntaken - we have XLogCtlInsert->runningBackups but no limit. If we were\nto use shared memory to track the backup state, we might or might not\nhave to decide on max backup limit to not preallocate and consume\nshared memory unnecessarily, otherwise, we could use something like\ndynamic shared memory hash table for storing backup state.\n\n2) How to deal with the backups that are started but no one is coming\nto stop them? Basically, when to declare that the backup is dead or\nexpired? Perhaps, we can have a max time limit after which if no stop\nbackup is issued for a backup, which is then marked as dead or\nexpired.\n\nWe may or may not want to think on the above points for now until the\nidea in general has some benefits over the current backup\ninfrastructure.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 25 Jul 2022 11:59:50 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A proposal for shared memory based backup infrastructure"
},
{
"msg_contents": "Hi Bharath,\n\nThere might be security concerns if the backup started by one user can be\nstopped by another user.\nThis is because the user who stops the backup will get the backup_label or\ntable space map file contents of other user.\nIsn't this a concern for non-exclusive backup?\n\nI think there should be role based control for backup related activity\nwhich can prevent other unprivileged users from stopping the backup.\n\nThoughts?\n\nThanks,\nMahendrakar.\n\n\nOn Mon, 25 Jul 2022 at 12:00, Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Mon, Jul 25, 2022 at 10:03 AM mahendrakar s\n> <mahendrakarforpg@gmail.com> wrote:\n> >\n> > Hi Bharath,\n>\n> Thanks Mahendrakar for taking a look at the design.\n>\n> > \"Typically, step (3) takes a good amount of time in production\n> > environments with terabytes or petabytes scale of data and keeping the\n> > session alive from step (1) to (4) has overhead and it wastes the\n> > resources. And the session can get closed for various reasons - idle\n> > in session timeout, tcp/ip keepalive timeout, network problems etc.\n> > All of these can render the backup useless.\"\n> >\n> > >> this could be a common scenario and needs to be addressed.\n>\n> Hm. Additionally, the problem of keeping the session that starts the\n> backup open until the entire data directory is backed-up becomes more\n> worrisome if we were to run backups for a huge number of servers at\n> scale - the entity (control plane or whatever), that is responsible\n> for taking backups across huge fleet of postgres production servers,\n> will have tremendous amount of resources wasted and it's a problem for\n> that entity to keep the backup sessions active until the actual backup\n> is finished.\n>\n> > \"What if the backup started by a session can also be closed by another\n> > session? 
This seems to be achievable, if we can place the\n> > backup_label, tablespace_map and other required session/backend level\n> > contents in shared memory with the key as backup_label name. It's a\n> > long way to go.\"\n> >\n> > >> I think storing metadata about backup of a session in shared memory\n> may not work as it gets purged when the database goes for restart. We might\n> require a separate catalogue table to handle the backup session.\n>\n> Right now, the non-exclusive (and we don't have exclusive backups now\n> from postgres 15) backup will anyway become useless if the postgres\n> restarts, because there's no running backup state (backup_label,\n> tablespace_map contents) that's persisted.\n>\n> Following are few more thoughts with the shared memory based backups\n> as proposed in this thread:\n>\n> 1) How many max backups do we want to allow? Right now, there's no\n> limit, I believe, max_connections number of concurrent backups can be\n> taken - we have XLogCtlInsert->runningBackups but no limit. If we were\n> to use shared memory to track the backup state, we might or might not\n> have to decide on max backup limit to not preallocate and consume\n> shared memory unnecessarily, otherwise, we could use something like\n> dynamic shared memory hash table for storing backup state.\n>\n> 2) How to deal with the backups that are started but no one is coming\n> to stop them? Basically, when to declare that the backup is dead or\n> expired? 
Perhaps, we can have a max time limit after which if no stop\n> backup is issued for a backup, which is then marked as dead or\n> expired.\n>\n> We may or may not want to think on the above points for now until the\n> idea in general has some benefits over the current backup\n> infrastructure.\n>\n> Regards,\n> Bharath Rupireddy.\n>\n\nHi Bharath,\n\nThere might be security concerns if the backup started by one user can be stopped by another user.\nThis is because the user who stops the backup will get the backup_label or table space map file contents of other user.\nIsn't this a concern for non-exclusive backup?\n\nI think there should be role based control for backup related activity which can prevent other unprivileged users from stopping the backup.\n\nThoughts?\n\nThanks,\nMahendrakar.",
"msg_date": "Sat, 30 Jul 2022 12:23:47 +0530",
"msg_from": "mahendrakar s <mahendrakarforpg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A proposal for shared memory based backup infrastructure"
},
{
"msg_contents": "On Sat, Jul 30, 2022 at 12:23 PM mahendrakar s\n<mahendrakarforpg@gmail.com> wrote:\n>\n> On Mon, 25 Jul 2022 at 12:00, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Mon, Jul 25, 2022 at 10:03 AM mahendrakar s\n>> <mahendrakarforpg@gmail.com> wrote:\n>> >\n>> > Hi Bharath,\n>>\n>> Thanks Mahendrakar for taking a look at the design.\n>>\n>> > \"Typically, step (3) takes a good amount of time in production\n>> > environments with terabytes or petabytes scale of data and keeping the\n>> > session alive from step (1) to (4) has overhead and it wastes the\n>> > resources. And the session can get closed for various reasons - idle\n>> > in session timeout, tcp/ip keepalive timeout, network problems etc.\n>> > All of these can render the backup useless.\"\n>> >\n>> > >> this could be a common scenario and needs to be addressed.\n>>\n>> Hm. Additionally, the problem of keeping the session that starts the\n>> backup open until the entire data directory is backed-up becomes more\n>> worrisome if we were to run backups for a huge number of servers at\n>> scale - the entity (control plane or whatever), that is responsible\n>> for taking backups across huge fleet of postgres production servers,\n>> will have tremendous amount of resources wasted and it's a problem for\n>> that entity to keep the backup sessions active until the actual backup\n>> is finished.\n>>\n>> > \"What if the backup started by a session can also be closed by another\n>> > session? This seems to be achievable, if we can place the\n>> > backup_label, tablespace_map and other required session/backend level\n>> > contents in shared memory with the key as backup_label name. It's a\n>> > long way to go.\"\n>> >\n>> > >> I think storing metadata about backup of a session in shared memory may not work as it gets purged when the database goes for restart. 
We might require a separate catalogue table to handle the backup session.\n>>\n>> Right now, the non-exclusive (and we don't have exclusive backups now\n>> from postgres 15) backup will anyway become useless if the postgres\n>> restarts, because there's no running backup state (backup_label,\n>> tablespace_map contents) that's persisted.\n>>\n>> Following are few more thoughts with the shared memory based backups\n>> as proposed in this thread:\n>>\n>> 1) How many max backups do we want to allow? Right now, there's no\n>> limit, I believe, max_connections number of concurrent backups can be\n>> taken - we have XLogCtlInsert->runningBackups but no limit. If we were\n>> to use shared memory to track the backup state, we might or might not\n>> have to decide on max backup limit to not preallocate and consume\n>> shared memory unnecessarily, otherwise, we could use something like\n>> dynamic shared memory hash table for storing backup state.\n>>\n>> 2) How to deal with the backups that are started but no one is coming\n>> to stop them? Basically, when to declare that the backup is dead or\n>> expired? 
Perhaps, we can have a max time limit after which if no stop\n>> backup is issued for a backup, which is then marked as dead or\n>> expired.\n>>\n>> We may or may not want to think on the above points for now until the\n>> idea in general has some benefits over the current backup\n>> infrastructure.\n>\n> Hi Bharath,\n>\n> There might be security concerns if the backup started by one user can be stopped by another user.\n> This is because the user who stops the backup will get the backup_label or table space map file contents of other user.\n> Isn't this a concern for non-exclusive backup?\n>\n> I think there should be role based control for backup related activity which can prevent other unprivileged users from stopping the backup.\n>\n> Thoughts?\n\nThe pg_backup_start() and pg_backup_stop() functions are role based -\nrestricted to superusers by default, but other users can be granted\nEXECUTE to run the functions - I think the existing behaviour would\nsuffice. However, the responsibility of not letting the users stop\nbackups started by other users (yes, just with the label name) can lie\nwith those who use these functions with the new shared memory based\nbackups, they have to ensure that whoever starts the backup, they only\nshould stop it. Perhaps, we can call that out in the documentations\nexplicitly.\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Thu, 4 Aug 2022 12:48:59 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A proposal for shared memory based backup infrastructure"
},
{
"msg_contents": "On Sat, Jul 30, 2022 at 2:54 AM mahendrakar s\n<mahendrakarforpg@gmail.com> wrote:\n> There might be security concerns if the backup started by one user can be stopped by another user.\n> This is because the user who stops the backup will get the backup_label or table space map file contents of other user.\n> Isn't this a concern for non-exclusive backup?\n\nThis doesn't seem like a real problem. If you can take a backup,\nyou're already a highly-privileged user.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Aug 2022 11:55:33 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A proposal for shared memory based backup infrastructure"
}
] |
[
{
"msg_contents": "If you happen to have noticed that you aren't getting any email\ndirectly from me, or other people who set an SPF policy for their\ndomain, the reason might be this:\n\n<redacted>: host gmail-smtp-in.l.google.com[74.125.140.26] said:\n550-5.7.26 The MAIL FROM domain [sss.pgh.pa.us] has an SPF record with a hard\n550-5.7.26 fail policy (-all) but it fails to pass SPF checks with the ip:\n550-5.7.26 [redacted]. To best protect our users from spam and phishing,\n550-5.7.26 the message has been blocked. Please visit\n550-5.7.26 https://support.google.com/mail/answer/81126#authentication for more\n550-5.7.26 information.\n\nI've been seeing these bounces from a number of PG people for a couple\nof months now. The messages didn't use to be quite this explicit, but\nit seems absolutely clear now that <redacted>'s private email domain is\ntrying to forward his email to a Gmail account, and it ain't working\nbecause the mail's envelope sender is still me. Gmail looks at my SPF\nrecord, notes that the mail is not coming from my IP address, and\nbounces it. Unfortunately it bounces it to me, who can't do anything\nabout the misconfiguration.\n\nIf you want to do this kind of forwarding, please fix your mail\nprocessing recipe so that the outgoing envelope sender is yourself,\nnot the incoming sender.\n\nFor extra credit, you could lobby Gmail to think a bit harder about\nwho they send bounces to. From my perspective this behavior is not\nmuch better than a spam amplifier.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 23 Jul 2022 13:42:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "PSA for folks forwarding personal email domains to Gmail"
}
] |
[
{
"msg_contents": "Hi,\nCurrently, in situation such as duplicate role creation, the server log\nwould show something such as the following:\n\n2022-07-22 13:48:18.251 UTC [330] STATEMENT: CREATE ROLE test WITH LOGIN\nPASSWORD 'foobar';\n\nThe password itself should be redacted before logging the statement.\n\nHere is sample output with the patch applied:\n\n2022-07-23 23:28:20.359 UTC [16850] ERROR: role \"test\" already exists\n2022-07-23 23:28:20.359 UTC [16850] STATEMENT: CREATE ROLE test WITH LOGIN\nPASSWORD\n\nPlease take a look at the short patch.\nI know variables should be declared at the start of the func - I can do\nthat once the approach is confirmed.\n\nCheers",
"msg_date": "Sat, 23 Jul 2022 16:44:58 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "redacting password in SQL statement in server log"
},
{
"msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> Currently, in situation such as duplicate role creation, the server log\n> would show something such as the following:\n\n> 2022-07-22 13:48:18.251 UTC [330] STATEMENT: CREATE ROLE test WITH LOGIN\n> PASSWORD 'foobar';\n\n> The password itself should be redacted before logging the statement.\n\nThis has been proposed multiple times, and rejected multiple times,\nprimarily because it offers only false security: you'll never cover\nall the cases. (The proposed patch manages to create a bunch of\nfalse positives to go along with its false negatives, too.)\n\nThe only safe answer is to be sure to keep the server log contents\nsecure. Please see prior discussions in the archives.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 23 Jul 2022 20:27:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: redacting password in SQL statement in server log"
},
{
"msg_contents": "On Sat, Jul 23, 2022 at 5:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Zhihong Yu <zyu@yugabyte.com> writes:\n> > Currently, in situation such as duplicate role creation, the server log\n> > would show something such as the following:\n\n> > 2022-07-22 13:48:18.251 UTC [330] STATEMENT: CREATE ROLE test WITH LOGIN\n> > PASSWORD 'foobar';\n\n> > The password itself should be redacted before logging the statement.\n>\n> This has been proposed multiple times, and rejected multiple times,\n> primarily because it offers only false security: you'll never cover\n> all the cases. (The proposed patch manages to create a bunch of\n> false positives to go along with its false negatives, too.)\n>\n> The only safe answer is to be sure to keep the server log contents\n> secure. Please see prior discussions in the archives.\n>\n> regards, tom lane\n>\n\nPardon my laziness.\n\nI will pay more attention.",
"msg_date": "Sat, 23 Jul 2022 18:27:59 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: redacting password in SQL statement in server log"
},
{
"msg_contents": "On Sat, Jul 23, 2022 at 5:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Zhihong Yu <zyu@yugabyte.com> writes:\n> > Currently, in situation such as duplicate role creation, the server log\n> > would show something such as the following:\n>\n> > 2022-07-22 13:48:18.251 UTC [330] STATEMENT: CREATE ROLE test WITH LOGIN\n> > PASSWORD 'foobar';\n>\n> > The password itself should be redacted before logging the statement.\n>\n> This has been proposed multiple times, and rejected multiple times,\n> primarily because it offers only false security: you'll never cover\n> all the cases. (The proposed patch manages to create a bunch of\n> false positives to go along with its false negatives, too.)\n>\n> The only safe answer is to be sure to keep the server log contents\n> secure. Please see prior discussions in the archives.\n>\n> regards, tom lane\n>\nHi,\nI am thinking of adding `if not exists` to `CREATE ROLE` statement:\n\nCREATE ROLE trustworthy if not exists;\n\nIn my previous example, if the user can issue the above command, there\nwould be no SQL statement logged.\n\nDo you think it is worth adding `if not exists` clause ?\n\nThanks",
"msg_date": "Sun, 24 Jul 2022 04:33:59 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: redacting password in SQL statement in server log"
},
{
"msg_contents": "Hi,\n\nOn Sun, Jul 24, 2022 at 04:33:59AM -0700, Zhihong Yu wrote:\n> I am thinking of adding `if not exists` to `CREATE ROLE` statement:\n>\n> CREATE ROLE trustworthy if not exists;\n>\n> In my previous example, if the user can issue the above command, there\n> would be no SQL statement logged.\n\nIt's not because there might not be an error that the password wouldn't end up\nin the logs (log_statement, log_min_duration_statement, typo in the\ncommand...).\n>\n> Do you think it is worth adding `if not exists` clause ?\n\nThis has already been discussed and isn't wanted. You can refer to the last\ndiscussion about that at:\nhttps://www.postgresql.org/message-id/flat/CAOxo6XJy5_fUT4uDo2251Z_9whzu0JJGbtDgZKqZtOT9KhOKiQ@mail.gmail.com\n\n\n",
"msg_date": "Sun, 24 Jul 2022 19:44:49 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: redacting password in SQL statement in server log"
}
] |
[
{
"msg_contents": "fairywren (msys2 animal) is currently hung in the pg_basebackup tests.\nHere's the bottom of the regress log. I don't have further info as yet,\nbut can dig is someone has a suggestion.\n\n\n### Starting node \"main\"\n# Running: pg_ctl -w -D\nC:/tools/nmsys64/home/pgrunner/bf/root/REL_15_STABLE/pgsql.build/src/bin/pg_basebackup/tmp_check/t_010_pg_basebackup_main_data/pgdata\n-l\nC:/tools/nmsys64/home/pgrunner/bf/root/REL_15_STABLE/pgsql.build/src/bin/pg_basebackup/tmp_check/log/010_pg_basebackup_main.log\n-o --cluster-name=main start\nwaiting for server to start.... done\nserver started\n# Postmaster PID for node \"main\" is 5368\nJunction created for C:\\tools\\nmsys64\\tmp\\pIBOsSp9se\\tempdir <<===>>\nC:\\tools\\nmsys64\\home\\pgrunner\\bf\\root\\REL_15_STABLE\\pgsql.build\\src\\bin\\pg_basebackup\\tmp_check\\tmp_test_jVCb\n# Taking pg_basebackup tarbackup2 from node \"main\"\n# Running: pg_basebackup -D\nC:/tools/nmsys64/home/pgrunner/bf/root/REL_15_STABLE/pgsql.build/src/bin/pg_basebackup/tmp_check/t_010_pg_basebackup_main_data/backup/tarbackup2\n-h 127.0.0.1 -p 52897 --checkpoint fast --no-sync -Ft\n# Backup finished\n[13:11:33.592](0.978s) ok 95 - backup tar was created\n[13:11:33.592](0.000s) ok 96 - WAL tar was created\n[13:11:33.593](0.000s) not ok 97 - one tablespace tar was created\n[13:11:33.593](0.000s)\n[13:11:33.593](0.000s) # Failed test 'one tablespace tar was created'\n# at t/010_pg_basebackup.pl line 352.\n[13:11:33.593](0.000s) # got: '0'\n# expected: '1'\n# Checking port 52898\n# Found port 52898\nName: replica\nData directory:\nC:/tools/nmsys64/home/pgrunner/bf/root/REL_15_STABLE/pgsql.build/src/bin/pg_basebackup/tmp_check/t_010_pg_basebackup_replica_data/pgdata\nBackup directory:\nC:/tools/nmsys64/home/pgrunner/bf/root/REL_15_STABLE/pgsql.build/src/bin/pg_basebackup/tmp_check/t_010_pg_basebackup_replica_data/backup\nArchive 
directory:\nC:/tools/nmsys64/home/pgrunner/bf/root/REL_15_STABLE/pgsql.build/src/bin/pg_basebackup/tmp_check/t_010_pg_basebackup_replica_data/archives\nConnection string: port=52898 host=127.0.0.1\nLog file:\nC:/tools/nmsys64/home/pgrunner/bf/root/REL_15_STABLE/pgsql.build/src/bin/pg_basebackup/tmp_check/log/010_pg_basebackup_replica.log\n# Initializing node \"replica\" from backup \"tarbackup2\" of node \"main\"\n# Running: C:/Windows/System32/tar xf\nC:/tools/nmsys64/home/pgrunner/bf/root/REL_15_STABLE/pgsql.build/src/bin/pg_basebackup/tmp_check/t_010_pg_basebackup_main_data/backup/tarbackup2/base.tar\n-C\nC:/tools/nmsys64/home/pgrunner/bf/root/REL_15_STABLE/pgsql.build/src/bin/pg_basebackup/tmp_check/t_010_pg_basebackup_replica_data/pgdata\n# Running: C:/Windows/System32/tar xf\nC:/tools/nmsys64/home/pgrunner/bf/root/REL_15_STABLE/pgsql.build/src/bin/pg_basebackup/tmp_check/t_010_pg_basebackup_main_data/backup/tarbackup2/pg_wal.tar\n-C\nC:/tools/nmsys64/home/pgrunner/bf/root/REL_15_STABLE/pgsql.build/src/bin/pg_basebackup/tmp_check/t_010_pg_basebackup_replica_data/pgdata/pg_wal\nUse of uninitialized value $_[2] in join or string at\nC:\\tools\\nmsys64\\home\\pgrunner\\bf\\root\\REL_15_STABLE\\pgsql\\src\\test\\perl/PostgreSQL/Test/Utils.pm\nline 337.\n# Running: C:/Windows/System32/tar xf -C\nC:\\tools\\nmsys64\\home\\pgrunner\\bf\\root\\REL_15_STABLE\\pgsql.build\\src\\bin\\pg_basebackup\\tmp_check\\tmp_test_jVCb/tblspc1replica\nUse of uninitialized value $_[2] in system at\nC:\\tools\\nmsys64\\home\\pgrunner\\bf\\root\\REL_15_STABLE\\pgsql\\src\\test\\perl/PostgreSQL/Test/Utils.pm\nline 338.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 24 Jul 2022 12:27:07 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "fairywren hung in pg_basebackup tests"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> fairywren (msys2 animal) is currently hung in the pg_basebackup tests.\n> Here's the bottom of the regress log. I don't have further info as yet,\n> but can dig is someone has a suggestion.\n\nHm, what's with the \"Use of uninitialized value\" warnings?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 24 Jul 2022 12:55:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fairywren hung in pg_basebackup tests"
},
{
"msg_contents": "On Sun, Jul 24, 2022 at 12:55:56PM -0400, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > fairywren (msys2 animal) is currently hung in the pg_basebackup tests.\n> > Here's the bottom of the regress log. I don't have further info as yet,\n> > but can dig is someone has a suggestion.\n> \n> Hm, what's with the \"Use of uninitialized value\" warnings?\n\nThe warnings are sequelae of:\n\n> > [13:11:33.593](0.000s) not ok 97 - one tablespace tar was created\n\n From that, it follows that $tblspc_tars[0] is undef at:\n\n\tPostgreSQL::Test::Utils::system_or_bail($tar, 'xf', $tblspc_tars[0],\n\t\t'-C', $repTsDir);\n\n> > # Running: C:/Windows/System32/tar xf -C\n> > C:\\tools\\nmsys64\\home\\pgrunner\\bf\\root\\REL_15_STABLE\\pgsql.build\\src\\bin\\pg_basebackup\\tmp_check\\tmp_test_jVCb/tblspc1replica\n\nI can confirm that Windows tar hangs when invoked that way. For preventing\nthe hang, the test file could die() or skip the tar-program-using section\nafter failing 'one tablespace tar was created'. That still leaves a question\nabout why pg_basebackup didn't make the tablespace tar file. I would check\n010_pg_basebackup_main.log for clues about that.\n\n\n",
"msg_date": "Sun, 24 Jul 2022 12:10:17 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: fairywren hung in pg_basebackup tests"
},
{
"msg_contents": "\nOn 2022-07-24 Su 15:10, Noah Misch wrote:\n> On Sun, Jul 24, 2022 at 12:55:56PM -0400, Tom Lane wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> fairywren (msys2 animal) is currently hung in the pg_basebackup tests.\n>>> Here's the bottom of the regress log. I don't have further info as yet,\n>>> but can dig is someone has a suggestion.\n>> Hm, what's with the \"Use of uninitialized value\" warnings?\n> The warnings are sequelae of:\n>\n>>> [13:11:33.593](0.000s) not ok 97 - one tablespace tar was created\n> >From that, it follows that $tblspc_tars[0] is undef at:\n>\n> \tPostgreSQL::Test::Utils::system_or_bail($tar, 'xf', $tblspc_tars[0],\n> \t\t'-C', $repTsDir);\n>\n>>> # Running: C:/Windows/System32/tar xf -C\n>>> C:\\tools\\nmsys64\\home\\pgrunner\\bf\\root\\REL_15_STABLE\\pgsql.build\\src\\bin\\pg_basebackup\\tmp_check\\tmp_test_jVCb/tblspc1replica\n\n\n\nPerhaps we should have a guard in system_or_bail() and/or system_log()\nwhich bails if some element of @_ is undefined.\n\n\n\n\n> I can confirm that Windows tar hangs when invoked that way. For preventing\n> the hang, the test file could die() or skip the tar-program-using section\n> after failing 'one tablespace tar was created'. That still leaves a question\n> about why pg_basebackup didn't make the tablespace tar file. I would check\n> 010_pg_basebackup_main.log for clues about that.\n\n\n\nThe same thing has happened again on HEAD. I don't see anything in that\nfile that gives any clue.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 25 Jul 2022 09:44:21 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: fairywren hung in pg_basebackup tests"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-07-24 Su 15:10, Noah Misch wrote:\n>> On Sun, Jul 24, 2022 at 12:55:56PM -0400, Tom Lane wrote:\n>>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>>> fairywren (msys2 animal) is currently hung in the pg_basebackup tests.\n>>>> Here's the bottom of the regress log. I don't have further info as yet,\n>>>> but can dig is someone has a suggestion.\n\n>>> Hm, what's with the \"Use of uninitialized value\" warnings?\n\n>> The warnings are sequelae of:\n>>> [13:11:33.593](0.000s) not ok 97 - one tablespace tar was created\n>> From that, it follows that $tblspc_tars[0] is undef at:\n>> PostgreSQL::Test::Utils::system_or_bail($tar, 'xf', $tblspc_tars[0],\n>> '-C', $repTsDir);\n\nRight, so the \"glob\" failed to find anything. Seeing that this test\nis new as of 534472375, which postdates fairywren's last successful\nrun, I'd guess that the \"glob\" needs adjustment for msys path names.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 25 Jul 2022 10:52:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fairywren hung in pg_basebackup tests"
},
{
"msg_contents": "I wrote:\n> Right, so the \"glob\" failed to find anything. Seeing that this test\n> is new as of 534472375, which postdates fairywren's last successful\n> run, I'd guess that the \"glob\" needs adjustment for msys path names.\n\nHmm ... an alternative theory is that the test is fine, and what\nit's telling us is that get_dirent_type() is still wrong on msys.\nWould that end in this symptom?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 25 Jul 2022 11:08:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fairywren hung in pg_basebackup tests"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 09:44:21AM -0400, Andrew Dunstan wrote:\n> On 2022-07-24 Su 15:10, Noah Misch wrote:\n> > On Sun, Jul 24, 2022 at 12:55:56PM -0400, Tom Lane wrote:\n> >> Andrew Dunstan <andrew@dunslane.net> writes:\n> >>> fairywren (msys2 animal) is currently hung in the pg_basebackup tests.\n> >>> Here's the bottom of the regress log. I don't have further info as yet,\n> >>> but can dig is someone has a suggestion.\n> >> Hm, what's with the \"Use of uninitialized value\" warnings?\n> > The warnings are sequelae of:\n> >\n> >>> [13:11:33.593](0.000s) not ok 97 - one tablespace tar was created\n> > >From that, it follows that $tblspc_tars[0] is undef at:\n> >\n> > \tPostgreSQL::Test::Utils::system_or_bail($tar, 'xf', $tblspc_tars[0],\n> > \t\t'-C', $repTsDir);\n> >\n> >>> # Running: C:/Windows/System32/tar xf -C\n> >>> C:\\tools\\nmsys64\\home\\pgrunner\\bf\\root\\REL_15_STABLE\\pgsql.build\\src\\bin\\pg_basebackup\\tmp_check\\tmp_test_jVCb/tblspc1replica\n> \n> Perhaps we should have a guard in system_or_bail() and/or system_log()\n> which bails if some element of @_ is undefined.\n\nThat would be reasonable. Also reasonable to impose some long timeout, maybe\n10x or 100x PG_TEST_TIMEOUT_DEFAULT, on calls to those functions.\n\n\n",
"msg_date": "Mon, 25 Jul 2022 08:16:43 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: fairywren hung in pg_basebackup tests"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 3:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > Right, so the \"glob\" failed to find anything. Seeing that this test\n> > is new as of 534472375, which postdates fairywren's last successful\n> > run, I'd guess that the \"glob\" needs adjustment for msys path names.\n\nThe test added by 534472375 is at the end, hundreds of lines later\nthan the one that appears to be failing.\n\n> Hmm ... an alternative theory is that the test is fine, and what\n> it's telling us is that get_dirent_type() is still wrong on msys.\n> Would that end in this symptom?\n\nHmm, possibly yes (if it sees a non-symlink, it'll skip it). If\nsomeone can run the test on an msys system, perhaps they could put a\ndebugging elog() into the code modified by 9d3444dc to log d_name and\nthe d_type that is returned? I'm struggling to understand why msys\nwould change the answer though.\n\n\n",
"msg_date": "Tue, 26 Jul 2022 03:24:13 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fairywren hung in pg_basebackup tests"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Mon, Jul 25, 2022 at 09:44:21AM -0400, Andrew Dunstan wrote:\n>> Perhaps we should have a guard in system_or_bail() and/or system_log()\n>> which bails if some element of @_ is undefined.\n\n+1, seeing how hard this is to diagnose.\n\n> That would be reasonable. Also reasonable to impose some long timeout, maybe\n> 10x or 100x PG_TEST_TIMEOUT_DEFAULT, on calls to those functions.\n\nWhy would it need to be more than PG_TEST_TIMEOUT_DEFAULT?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 25 Jul 2022 11:35:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fairywren hung in pg_basebackup tests"
},
{
"msg_contents": "\nOn 2022-07-25 Mo 11:24, Thomas Munro wrote:\n> On Tue, Jul 26, 2022 at 3:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I wrote:\n>>> Right, so the \"glob\" failed to find anything. Seeing that this test\n>>> is new as of 534472375, which postdates fairywren's last successful\n>>> run, I'd guess that the \"glob\" needs adjustment for msys path names.\n> The test added by 534472375 is at the end, hundreds of lines later\n> than the one that appears to be failing.\n\n\nRight.\n\n\n>\n>> Hmm ... an alternative theory is that the test is fine, and what\n>> it's telling us is that get_dirent_type() is still wrong on msys.\n>> Would that end in this symptom?\n> Hmm, possibly yes (if it sees a non-symlink, it'll skip it). If\n> someone can run the test on an msys system, perhaps they could put a\n> debugging elog() into the code modified by 9d3444dc to log d_name and\n> the d_type that is returned? I'm struggling to understand why msys\n> would change the answer though.\n\n\n\nI have no idea either. The link exists and it is a junction. I'll see\nabout logging details.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 25 Jul 2022 12:02:57 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: fairywren hung in pg_basebackup tests"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 11:35:12AM -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > On Mon, Jul 25, 2022 at 09:44:21AM -0400, Andrew Dunstan wrote:\n> >> Perhaps we should have a guard in system_or_bail() and/or system_log()\n> >> which bails if some element of @_ is undefined.\n> \n> +1, seeing how hard this is to diagnose.\n> \n> > That would be reasonable. Also reasonable to impose some long timeout, maybe\n> > 10x or 100x PG_TEST_TIMEOUT_DEFAULT, on calls to those functions.\n> \n> Why would it need to be more than PG_TEST_TIMEOUT_DEFAULT?\n\nWe run some long commands, like the parallel_schedule runs. Those currently\nuse plain system(), but they probably should have used system_log() from a\nlogging standpoint. If they had, PG_TEST_TIMEOUT_DEFAULT would have been too\nshort. One could argue that anything that slow should declare its intent to\nbe that slow, but that argument is getting into the territory of a policy\nchange rather than a backstop for clearly-unintended longevity.\n\n\n",
"msg_date": "Mon, 25 Jul 2022 21:53:47 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: fairywren hung in pg_basebackup tests"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 4:03 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> On 2022-07-25 Mo 11:24, Thomas Munro wrote:\n> > On Tue, Jul 26, 2022 at 3:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Hmm ... an alternative theory is that the test is fine, and what\n> >> it's telling us is that get_dirent_type() is still wrong on msys.\n> >> Would that end in this symptom?\n> > Hmm, possibly yes (if it sees a non-symlink, it'll skip it). If\n> > someone can run the test on an msys system, perhaps they could put a\n> > debugging elog() into the code modified by 9d3444dc to log d_name and\n> > the d_type that is returned? I'm struggling to understand why msys\n> > would change the answer though.\n>\n> I have no idea either. The link exists and it is a junction. I'll see\n> about logging details.\n\n From the clues so far, it seems like pgwin32_is_junction(fullpath) was\nreturning true, but the following code in get_dirent_type(), which was\nsupposed to be equivalent, is not reached on MSYS (though it\napparently does on MSVC?):\n\n+ if ((fd.dwFileAttributes & FILE_ATTRIBUTE_REPARSE_POINT) != 0 &&\n+ (fd.dwReserved0 == IO_REPARSE_TAG_MOUNT_POINT))\n d->ret.d_type = DT_LNK;\n\npgwin32_is_junction() uses GetFileAttributes() and tests (attr &\nFILE_ATTRIBUTE_REPARSE_POINT) == FILE_ATTRIBUTE_REPARSE_POINT, which\nis like the first condition but lacks the dwReserved0 part. What is\nthat part doing, and why would it be doing something different in MSVC\nand MSYS builds? 
That code came from 87e6ed7c, recently I was just\ntrying to fix it by reordering the checks; oh, there was some\ndiscussion about that field[1].\n\nOne idea is that something about dwReserved0 or\nIO_REPARSE_TAG_MOUNT_POINT is different in the open source replacement\nsystem headers supplied by the MinGW project used by MSYS builds\n(right?), compared to the \"real\" Windows SDK's headers used by MSVC\nbuilds.\n\nOr perhaps there is some other dumb mistake, or perhaps the reparse\npoint really is different, or ... I dunno, I'd probably shove a load\nof log messages in there and see what's going on.\n\n[1] https://www.postgresql.org/message-id/flat/CABUevEzURN%3DwC95JHvTKFJtEy0eY9rWO42yU%3D59-q8xSwm-Dug%40mail.gmail.com#ac54acd782fc849c0fe6c2c05db101dc\n\n\n",
"msg_date": "Wed, 27 Jul 2022 10:31:18 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fairywren hung in pg_basebackup tests"
},
{
"msg_contents": "\nOn 2022-07-26 Tu 18:31, Thomas Munro wrote:\n> On Tue, Jul 26, 2022 at 4:03 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> On 2022-07-25 Mo 11:24, Thomas Munro wrote:\n>>> On Tue, Jul 26, 2022 at 3:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>> Hmm ... an alternative theory is that the test is fine, and what\n>>>> it's telling us is that get_dirent_type() is still wrong on msys.\n>>>> Would that end in this symptom?\n>>> Hmm, possibly yes (if it sees a non-symlink, it'll skip it). If\n>>> someone can run the test on an msys system, perhaps they could put a\n>>> debugging elog() into the code modified by 9d3444dc to log d_name and\n>>> the d_type that is returned? I'm struggling to understand why msys\n>>> would change the answer though.\n>> I have no idea either. The link exists and it is a junction. I'll see\n>> about logging details.\n> >From the clues so far, it seems like pgwin32_is_junction(fullpath) was\n> returning true, but the following code in get_dirent_type(), which was\n> supposed to be equivalent, is not reached on MSYS (though it\n> apparently does on MSVC?):\n>\n> + if ((fd.dwFileAttributes & FILE_ATTRIBUTE_REPARSE_POINT) != 0 &&\n> + (fd.dwReserved0 == IO_REPARSE_TAG_MOUNT_POINT))\n> d->ret.d_type = DT_LNK;\n>\n> pgwin32_is_junction() uses GetFileAttributes() and tests (attr &\n> FILE_ATTRIBUTE_REPARSE_POINT) == FILE_ATTRIBUTE_REPARSE_POINT, which\n> is like the first condition but lacks the dwReserved0 part. What is\n> that part doing, and why would it be doing something different in MSVC\n> and MSYS builds? 
That code came from 87e6ed7c, recently I was just\n> trying to fix it by reordering the checks; oh, there was some\n> discussion about that field[1].\n>\n> One idea is that something about dwReserved0 or\n> IO_REPARSE_TAG_MOUNT_POINT is different in the open source replacement\n> system headers supplied by the MinGW project used by MSYS builds\n> (right?), compared to the \"real\" Windows SDK's headers used by MSVC\n> builds.\n>\n> Or perhaps there is some other dumb mistake, or perhaps the reparse\n> point really is different, or ... I dunno, I'd probably shove a load\n> of log messages in there and see what's going on.\n>\n> [1] https://www.postgresql.org/message-id/flat/CABUevEzURN%3DwC95JHvTKFJtEy0eY9rWO42yU%3D59-q8xSwm-Dug%40mail.gmail.com#ac54acd782fc849c0fe6c2c05db101dc\n\n\ndirent.c is not used on msys, only on MSVC. msys is apparently using\nopendir and friends supplied by the system.\n\nWhat it does if there's a junction I'll try to find out, but it appears\nthat 5344723755 was conceived under a misapprehension about the\nbehaviour of msys.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 27 Jul 2022 09:47:38 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: fairywren hung in pg_basebackup tests"
},
{
"msg_contents": "\nOn 2022-07-27 We 09:47, Andrew Dunstan wrote:\n> On 2022-07-26 Tu 18:31, Thomas Munro wrote:\n>> On Tue, Jul 26, 2022 at 4:03 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>> On 2022-07-25 Mo 11:24, Thomas Munro wrote:\n>>>> On Tue, Jul 26, 2022 at 3:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>>> Hmm ... an alternative theory is that the test is fine, and what\n>>>>> it's telling us is that get_dirent_type() is still wrong on msys.\n>>>>> Would that end in this symptom?\n>>>> Hmm, possibly yes (if it sees a non-symlink, it'll skip it). If\n>>>> someone can run the test on an msys system, perhaps they could put a\n>>>> debugging elog() into the code modified by 9d3444dc to log d_name and\n>>>> the d_type that is returned? I'm struggling to understand why msys\n>>>> would change the answer though.\n>>> I have no idea either. The link exists and it is a junction. I'll see\n>>> about logging details.\n>> >From the clues so far, it seems like pgwin32_is_junction(fullpath) was\n>> returning true, but the following code in get_dirent_type(), which was\n>> supposed to be equivalent, is not reached on MSYS (though it\n>> apparently does on MSVC?):\n>>\n>> + if ((fd.dwFileAttributes & FILE_ATTRIBUTE_REPARSE_POINT) != 0 &&\n>> + (fd.dwReserved0 == IO_REPARSE_TAG_MOUNT_POINT))\n>> d->ret.d_type = DT_LNK;\n>>\n>> pgwin32_is_junction() uses GetFileAttributes() and tests (attr &\n>> FILE_ATTRIBUTE_REPARSE_POINT) == FILE_ATTRIBUTE_REPARSE_POINT, which\n>> is like the first condition but lacks the dwReserved0 part. What is\n>> that part doing, and why would it be doing something different in MSVC\n>> and MSYS builds? 
That code came from 87e6ed7c, recently I was just\n>> trying to fix it by reordering the checks; oh, there was some\n>> discussion about that field[1].\n>>\n>> One idea is that something about dwReserved0 or\n>> IO_REPARSE_TAG_MOUNT_POINT is different in the open source replacement\n>> system headers supplied by the MinGW project used by MSYS builds\n>> (right?), compared to the \"real\" Windows SDK's headers used by MSVC\n>> builds.\n>>\n>> Or perhaps there is some other dumb mistake, or perhaps the reparse\n>> point really is different, or ... I dunno, I'd probably shove a load\n>> of log messages in there and see what's going on.\n>>\n>> [1] https://www.postgresql.org/message-id/flat/CABUevEzURN%3DwC95JHvTKFJtEy0eY9rWO42yU%3D59-q8xSwm-Dug%40mail.gmail.com#ac54acd782fc849c0fe6c2c05db101dc\n>\n> dirent.c is not used on msys, only on MSVC. msys is apparently using\n> opendir and friends supplied by the system.\n>\n> What it does if there's a junction I'll try to find out, but it appears\n> that 5344723755 was conceived under a misapprehension about the\n> behaviour of msys.\n>\n>\n\nThe msys dirent.h doesn't have a d_type field at all in a struct dirent.\nI can see a number of ways of dealing with this, but the simplest seems\nto be just to revert 5344723755, at least for msys, along with a comment\nabout why it's necessary.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 27 Jul 2022 10:15:54 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: fairywren hung in pg_basebackup tests"
},
{
"msg_contents": "On 2022-Jul-27, Andrew Dunstan wrote:\n\n> The msys dirent.h doesn't have a d_type field at all in a struct dirent.\n> I can see a number of ways of dealing with this, but the simplest seems\n> to be just to revert 5344723755, at least for msys, along with a comment\n> about why it's necessary.\n\nHmm, what other ways there are? I'm about to push a change that\nduplicates the get_dirent_type call pattern and I was happy about not\nhaving that #ifdef there. Not that it's critical, but ...\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 27 Jul 2022 16:24:31 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: fairywren hung in pg_basebackup tests"
},
{
"msg_contents": "\nOn 2022-07-27 We 10:24, Alvaro Herrera wrote:\n> On 2022-Jul-27, Andrew Dunstan wrote:\n>\n>> The msys dirent.h doesn't have a d_type field at all in a struct dirent.\n>> I can see a number of ways of dealing with this, but the simplest seems\n>> to be just to revert 5344723755, at least for msys, along with a comment\n>> about why it's necessary.\n> Hmm, what other ways there are? I'm about to push a change that\n> duplicates the get_dirent_type call pattern and I was happy about not\n> having that #ifdef there. Not that it's critical, but ...\n\n\n\nThe alternative I thought of would be to switch msys to using our\ndirent.c. Probably not too hard, but certainly more work than reverting.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 27 Jul 2022 10:32:29 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: fairywren hung in pg_basebackup tests"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> The alternative I thought of would be to switch msys to using our\n> dirent.c. Probably not too hard, but certainly more work than reverting.\n\nIf you ask me, the shortest-path general-purpose fix is to insert\n\n#if MSYS\n\tif (pgwin32_is_junction(path))\n\t return PGFILETYPE_DIR;\n#endif\n\nat the start of get_dirent_type. (I'm not sure how to spell the\n#if test.) We could look at using dirent.c later, but I think\nright now it's important to un-break the buildfarm ASAP.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Jul 2022 10:58:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fairywren hung in pg_basebackup tests"
},
{
"msg_contents": "\nOn 2022-07-27 We 10:58, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> The alternative I thought of would be to switch msys to using our\n>> dirent.c. Probably not too hard, but certainly more work than reverting.\n> If you ask me, the shortest-path general-purpose fix is to insert\n>\n> #if MSYS\n> \tif (pgwin32_is_junction(path))\n> \t return PGFILETYPE_DIR;\n> #endif\n>\n> at the start of get_dirent_type. (I'm not sure how to spell the\n> #if test.) We could look at using dirent.c later, but I think\n> right now it's important to un-break the buildfarm ASAP.\n>\n> \t\t\t\n\n\n\n+1. I think you spell it:\n\n\n#if defined(WIN32) && !defined(_MSC_VER)\n\n\n(c.f. libpq-be.h)\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 27 Jul 2022 11:21:15 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: fairywren hung in pg_basebackup tests"
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 3:21 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> On 2022-07-27 We 10:58, Tom Lane wrote:\n> > Andrew Dunstan <andrew@dunslane.net> writes:\n> >> The alternative I thought of would be to switch msys to using our\n> >> dirent.c. Probably not too hard, but certainly more work than reverting.\n\nThanks for figuring this out Andrew. Previously I thought of MSYS as\na way to use configure+make+gcc/clang but pure Windows C APIs (using\nMinGW's replacement Windows headers), but today I learned that\nMSYS/MinGW also supplies a small amount of POSIX stuff, including\nreaddir() etc, so we don't use our own emulation in that case.\n\nI suppose we could consider using own dirent.h/c with MinGW (and\nseeing if there are other similar hazards), to reduce the number of\nWindows/POSIX API combinations we have to fret about, but not today.\n\nAnother thought for the future is that lstat() + S_ISLNK() could\nprobably be made to fire for junction points on Windows (all build\nvariants), and then get_dirent_type()'s fallback code for DT_UNKNOWN\nwould have Just Worked (no extra system call required), and we could\nalso probably remove calls to pgwin32_is_junction() everywhere.\n\n> > If you ask me, the shortest-path general-purpose fix is to insert\n> >\n> > #if MSYS\n> > if (pgwin32_is_junction(path))\n> > return PGFILETYPE_DIR;\n> > #endif\n> >\n> > at the start of get_dirent_type. (I'm not sure how to spell the\n> > #if test.) We could look at using dirent.c later, but I think\n> > right now it's important to un-break the buildfarm ASAP.\n>\n> +1. I think you spell it:\n>\n> #if defined(WIN32) && !defined(_MSC_VER)\n\nI thought about putting it at the top, but don't we really only need\nto make an extra system call if MinGW's stat() told us it saw a\ndirectory? And what if you asked to look through symlinks? 
I thought\nabout putting it near the S_ISLNK() test, which is the usual pattern,\nbut then what if MinGW decides to add d_type support one day? Those\nthoughts led to the attached formulation. Untested. I'll go and try\nto see if I can run this with Melih's proposed MSYS CI support...",
"msg_date": "Thu, 28 Jul 2022 12:41:20 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fairywren hung in pg_basebackup tests"
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 12:41 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I thought about putting it at the top, but don't we really only need\n> to make an extra system call if MinGW's stat() told us it saw a\n> directory? And what if you asked to look through symlinks? I thought\n> about putting it near the S_ISLNK() test, which is the usual pattern,\n> but then what if MinGW decides to add d_type support one day? Those\n> thoughts led to the attached formulation. Untested. I'll go and try\n> to see if I can run this with Melih's proposed MSYS CI support...\n\nSuccess. I'll push this, and then hopefully those BF animals can be\nunwedged. (I see some unrelated warnings to look into.)\n\nhttps://cirrus-ci.com/task/5253183533481984?logs=tests#L835\n\n\n",
"msg_date": "Thu, 28 Jul 2022 14:12:13 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fairywren hung in pg_basebackup tests"
}
] |
[
{
"msg_contents": "ReadRecentBuffer() doesn't work for local buffers, i.e. for temp tables. \nThe bug is pretty clear if you look at the code:\n\n \tif (BufferIsLocal(recent_buffer))\n \t{\n-\t\tbufHdr = GetBufferDescriptor(-recent_buffer - 1);\n+\t\tbufHdr = GetLocalBufferDescriptor(-recent_buffer - 1);\n\nThe code after that looks suspicious, too. It increases the usage count \neven if the buffer was already pinned. That's different from what it \ndoes for a shared buffer, and different from LocalBufferAlloc(). That's \npretty harmless, just causes the usage count to be bumped more \nfrequently, but I don't think it was intentional. The ordering of \nbumping the usage count, the local ref count, and registration in the \nresource owner are different too. As far as I can see, that makes no \ndifference, but I think we should keep this code as close as possible to \nsimilar code used elsewhere, unless there's a particular reason to differ.\n\nI propose the attached to fix those things.\n\nI tested this by adding this little snippet to a random place where we \nhave just read a page with ReadBuffer:\n\ndiff --git a/src/backend/access/heap/heapam.c \nb/src/backend/access/heap/heapam.c\nindex aab8d6fa4e5..c4abdbc96dd 100644\n--- a/src/backend/access/heap/heapam.c\n+++ b/src/backend/access/heap/heapam.c\n@@ -403,6 +403,14 @@ heapgetpage(TableScanDesc sscan, BlockNumber page)\n \n RBM_NORMAL, scan->rs_strategy);\n scan->rs_cblock = page;\n\n+ {\n+ bool still_ok;\n+\n+ still_ok = \nReadRecentBuffer(scan->rs_base.rs_rd->rd_locator, MAIN_FORKNUM, page, \nscan->rs_cbuf);\n+ Assert(still_ok);\n+ ReleaseBuffer(scan->rs_cbuf);\n+ }\n+\n if (!(scan->rs_base.rs_flags & SO_ALLOW_PAGEMODE))\n return;\n\nWithout the fix, the assertion is fails quickly on \"make check\".\n\n- Heikki",
"msg_date": "Sun, 24 Jul 2022 21:22:02 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "ReadRecentBuffer() is broken for local buffer"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 6:22 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> ReadRecentBuffer() doesn't work for local buffers, i.e. for temp tables.\n> The bug is pretty clear if you look at the code:\n\n- bufHdr = GetBufferDescriptor(-recent_buffer - 1);\n+ int b = -recent_buffer - 1;\n+\n+ bufHdr = GetLocalBufferDescriptor(b);\n\nUgh, right. Obviously this code path is not reached currently. I\nadded the local path for completeness but I didn't think of the idea\nof testing it the way you suggested, hence thinko escaped into the\nwild. That way of testing seems good and the patch indeed fixes the\nproblem.\n\n- /* Bump local buffer's ref and usage counts. */\n+ /*\n+ * Bump buffer's ref and usage counts. This is equivalent of\n+ * PinBuffer for a shared buffer.\n+ */\n+ if (LocalRefCount[b] == 0)\n+ {\n+ if (BUF_STATE_GET_USAGECOUNT(buf_state) < BM_MAX_USAGE_COUNT)\n+ {\n+ buf_state += BUF_USAGECOUNT_ONE;\n+ pg_atomic_unlocked_write_u32(&bufHdr->state, buf_state);\n+ }\n+ }\n+ LocalRefCount[b]++;\n ResourceOwnerRememberBuffer(CurrentResourceOwner, recent_buffer);\n- LocalRefCount[-recent_buffer - 1]++;\n- if (BUF_STATE_GET_USAGECOUNT(buf_state) < BM_MAX_USAGE_COUNT)\n- pg_atomic_write_u32(&bufHdr->state,\n- buf_state + BUF_USAGECOUNT_ONE);\n\n+1, it makes sense to do it only if it wasn't pinned already, and it\nreally should look identical to the code in LocalBufferAlloc, and\nperhaps the comment should even say so.\n\nLGTM.\n\n\n",
"msg_date": "Mon, 25 Jul 2022 09:35:36 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ReadRecentBuffer() is broken for local buffer"
},
{
"msg_contents": "\nNice catch, LGTM.\n\n\n\n> On Jul 25, 2022, at 02:22, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> \n> ReadRecentBuffer() doesn't work for local buffers, i.e. for temp tables. The bug is pretty clear if you look at the code:\n> \n> \tif (BufferIsLocal(recent_buffer))\n> \t{\n> -\t\tbufHdr = GetBufferDescriptor(-recent_buffer - 1);\n> +\t\tbufHdr = GetLocalBufferDescriptor(-recent_buffer - 1);\n> \n> The code after that looks suspicious, too. It increases the usage count even if the buffer was already pinned. That's different from what it does for a shared buffer, and different from LocalBufferAlloc(). That's pretty harmless, just causes the usage count to be bumped more frequently, but I don't think it was intentional. The ordering of bumping the usage count, the local ref count, and registration in the resource owner are different too. As far as I can see, that makes no difference, but I think we should keep this code as close as possible to similar code used elsewhere, unless there's a particular reason to differ.\n> \n> I propose the attached to fix those things.\n> \n> I tested this by adding this little snippet to a random place where we have just read a page with ReadBuffer:\n> \n> diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c\n> index aab8d6fa4e5..c4abdbc96dd 100644\n> --- a/src/backend/access/heap/heapam.c\n> +++ b/src/backend/access/heap/heapam.c\n> @@ -403,6 +403,14 @@ heapgetpage(TableScanDesc sscan, BlockNumber page)\n> RBM_NORMAL, scan->rs_strategy);\n> scan->rs_cblock = page;\n> \n> + {\n> + bool still_ok;\n> +\n> + still_ok = ReadRecentBuffer(scan->rs_base.rs_rd->rd_locator, MAIN_FORKNUM, page, scan->rs_cbuf);\n> + Assert(still_ok);\n> + ReleaseBuffer(scan->rs_cbuf);\n> + }\n> +\n> if (!(scan->rs_base.rs_flags & SO_ALLOW_PAGEMODE))\n> return;\n> \n> Without the fix, the assertion is fails quickly on \"make check\".\n> \n> - Heikki<0001-Fix-ReadRecentBuffer-for-local-buffers.patch>\n\n\n\n",
"msg_date": "Mon, 25 Jul 2022 10:44:10 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ReadRecentBuffer() is broken for local buffer"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 2:22 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> if (BufferIsLocal(recent_buffer))\n> {\n> - bufHdr = GetBufferDescriptor(-recent_buffer - 1);\n> + bufHdr = GetLocalBufferDescriptor(-recent_buffer - 1);\n\n\nAha, we're using the wrong buffer descriptors here. Currently this\nfunction is only called in XLogReadBufferExtended(), so the branch for\nlocal buffer cannot be reached. Maybe that's why it is not identified\nuntil now.\n\n\n> The code after that looks suspicious, too. It increases the usage count\n> even if the buffer was already pinned. That's different from what it\n> does for a shared buffer, and different from LocalBufferAlloc(). That's\n> pretty harmless, just causes the usage count to be bumped more\n> frequently, but I don't think it was intentional. The ordering of\n> bumping the usage count, the local ref count, and registration in the\n> resource owner are different too. As far as I can see, that makes no\n> difference, but I think we should keep this code as close as possible to\n> similar code used elsewhere, unless there's a particular reason to differ.\n\n\nAgree. Maybe we can wrap the codes in an inline function or macro and\ncall that in both LocalBufferAlloc and here.\n\nThanks\nRichard\n\nOn Mon, Jul 25, 2022 at 2:22 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n if (BufferIsLocal(recent_buffer))\n {\n- bufHdr = GetBufferDescriptor(-recent_buffer - 1);\n+ bufHdr = GetLocalBufferDescriptor(-recent_buffer - 1);Aha, we're using the wrong buffer descriptors here. Currently thisfunction is only called in XLogReadBufferExtended(), so the branch forlocal buffer cannot be reached. Maybe that's why it is not identifieduntil now. \nThe code after that looks suspicious, too. It increases the usage count \neven if the buffer was already pinned. That's different from what it \ndoes for a shared buffer, and different from LocalBufferAlloc(). 
That's \npretty harmless, just causes the usage count to be bumped more \nfrequently, but I don't think it was intentional. The ordering of \nbumping the usage count, the local ref count, and registration in the \nresource owner are different too. As far as I can see, that makes no \ndifference, but I think we should keep this code as close as possible to \nsimilar code used elsewhere, unless there's a particular reason to differ.Agree. Maybe we can wrap the codes in an inline function or macro andcall that in both LocalBufferAlloc and here.ThanksRichard",
"msg_date": "Mon, 25 Jul 2022 11:51:40 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ReadRecentBuffer() is broken for local buffer"
},
{
"msg_contents": "On 25/07/2022 00:35, Thomas Munro wrote:\n> On Mon, Jul 25, 2022 at 6:22 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> ReadRecentBuffer() doesn't work for local buffers, i.e. for temp tables.\n>> The bug is pretty clear if you look at the code:\n> \n> - bufHdr = GetBufferDescriptor(-recent_buffer - 1);\n> + int b = -recent_buffer - 1;\n> +\n> + bufHdr = GetLocalBufferDescriptor(b);\n> \n> Ugh, right. Obviously this code path is not reached currently. I\n> added the local path for completeness but I didn't think of the idea\n> of testing it the way you suggested, hence thinko escaped into the\n> wild. That way of testing seems good and the patch indeed fixes the\n> problem.\n\nPushed, thanks for the reviews.\n\n- Heikki\n\n\n",
"msg_date": "Mon, 25 Jul 2022 09:10:45 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: ReadRecentBuffer() is broken for local buffer"
}
] |
[
{
"msg_contents": "I found that -fsanitize causes the test to fail, going back to REL_10_STABLE,\nfor any clang in:\n\n1:11.1.0-6\n1:12.0.1-19ubuntu3\n1:13.0.1-2ubuntu2\n1:14.0.0-1ubuntu1\n\n| time ./configure --enable-cassert --enable-debug --enable-tap-tests --with-CC=clang-13 CFLAGS='-fsanitize=undefined'\n| time { make -j4 clean; make -j4; } >/dev/null\n| time PROVE_TESTS=t/012_subtransactions.pl make -C ./src/test/recovery check\n| \n| t/012_subtransactions.pl .. 2/? \n| # Failed test 'Visible'\n| # at t/012_subtransactions.pl line 111.\n| # got: '-1'\n| # expected: '8128'\n| ...\n| # Looks like you failed 6 tests of 12.\n\nI haven't found any combination of options which cause it to fail differently,\nso I'm not even sure if the problem is in postgres, the test case, clang or\nlibubsan. Note that optimization seems to avoid the problem, which is why\n\"kestrel\" shows no issue.\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2022-07-23%2022%3A17%3A48\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 24 Jul 2022 15:59:37 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "012_subtransactions.pl vs clang -fsanitize=undefined"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 8:59 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I found that -fsanitize causes the test to fail, going back to REL_10_STABLE,\n> for any clang in:\n>\n> 1:11.1.0-6\n> 1:12.0.1-19ubuntu3\n> 1:13.0.1-2ubuntu2\n> 1:14.0.0-1ubuntu1\n>\n> | time ./configure --enable-cassert --enable-debug --enable-tap-tests --with-CC=clang-13 CFLAGS='-fsanitize=undefined'\n> | time { make -j4 clean; make -j4; } >/dev/null\n> | time PROVE_TESTS=t/012_subtransactions.pl make -C ./src/test/recovery check\n> |\n> | t/012_subtransactions.pl .. 2/?\n> | # Failed test 'Visible'\n> | # at t/012_subtransactions.pl line 111.\n> | # got: '-1'\n> | # expected: '8128'\n> | ...\n> | # Looks like you failed 6 tests of 12.\n>\n> I haven't found any combination of options which cause it to fail differently,\n> so I'm not even sure if the problem is in postgres, the test case, clang or\n> libubsan. Note that optimization seems to avoid the problem, which is why\n> \"kestrel\" shows no issue.\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2022-07-23%2022%3A17%3A48\n\nYeah I've seen this too... it'd be good to figure out how to fix it:\n\nhttps://www.postgresql.org/message-id/CA%2BhUKGLDA-GuQKRvDF3abHadDrrYZ33N9e4DEOGwKH3JqdYSCQ%40mail.gmail.com\n\n\n",
"msg_date": "Mon, 25 Jul 2022 09:19:37 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 012_subtransactions.pl vs clang -fsanitize=undefined"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Mon, Jul 25, 2022 at 8:59 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> I found that -fsanitize causes the test to fail, going back to REL_10_STABLE,\n>> for any clang in:\n\n> Yeah I've seen this too... it'd be good to figure out how to fix it:\n> https://www.postgresql.org/message-id/CA%2BhUKGLDA-GuQKRvDF3abHadDrrYZ33N9e4DEOGwKH3JqdYSCQ%40mail.gmail.com\n\nYeah, reproduces here too with RHEL8's clang 13.0.1. I also see\nthat the failures are due to \"stack depth exceeded\" errors from that\nrecursive hs_subxids() function. As best I can tell, the stack depth\nfailure is entirely honest:\n\n(gdb) p stack_base_ptr\n$1 = 0x7ffd92032100 \"\\360[\\\\\\006\"\n(gdb) p $sp\n$2 = (void *) 0x7ffd91e305a0\n(gdb) p 0x7ffd92032100 - 0x7ffd91e305a0\n$3 = 2104160\n\nI can get at most 82 recursion levels without failure.\nWith my normal build it can get to 678 levels before dying.\n\nI think what's happening is just that this build configuration\neats stack extravagantly. Maybe it keeps all locals on the stack\ninstead of in registers? I'm too lazy to check out the assembly\ncode.\n\nI thought for a moment of rewriting hs_subxids() to iterate instead\nof recurse, but it doesn't look like that's tremendously feasible in\nplpgsql --- the only way to make nested subtransactions is to recurse.\nIt could probably be done from the client by issuing a series of\nSAVEPOINT commands, but not nearly as elegantly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 24 Jul 2022 17:50:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 012_subtransactions.pl vs clang -fsanitize=undefined"
},
{
"msg_contents": "I wrote:\n> I think what's happening is just that this build configuration\n> eats stack extravagantly.\n\nThat's definitely it, but I don't entirely see why. Here are a\ncouple of major offenders though:\n\n(gdb) x/8i ExecInterpExpr\n 0x11a5530 <ExecInterpExpr>: push %rbp\n 0x11a5531 <ExecInterpExpr+1>: mov %rsp,%rbp\n 0x11a5534 <ExecInterpExpr+4>: sub $0x2f40,%rsp\n 0x11a553b <ExecInterpExpr+11>: mov %rdi,-0x10(%rbp)\n 0x11a553f <ExecInterpExpr+15>: mov %rsi,-0x18(%rbp)\n 0x11a5543 <ExecInterpExpr+19>: mov %rdx,-0x20(%rbp)\n 0x11a5547 <ExecInterpExpr+23>: jmpq 0x11a554c <ExecInterpExpr+28>\n 0x11a554c <ExecInterpExpr+28>: cmpq $0x0,-0x10(%rbp)\n\n(gdb) p 0x2f40\n$51 = 12096\n\n(gdb) x/8i ExecInitExprRec\n 0x11672e0 <ExecInitExprRec>: push %rbp\n 0x11672e1 <ExecInitExprRec+1>: mov %rsp,%rbp\n 0x11672e4 <ExecInitExprRec+4>: sub $0x3c80,%rsp\n 0x11672eb <ExecInitExprRec+11>: mov %rdi,-0x8(%rbp)\n 0x11672ef <ExecInitExprRec+15>: mov %rsi,-0x10(%rbp)\n 0x11672f3 <ExecInitExprRec+19>: mov %rdx,-0x18(%rbp)\n 0x11672f7 <ExecInitExprRec+23>: mov %rcx,-0x20(%rbp)\n 0x11672fb <ExecInitExprRec+27>: lea -0x60(%rbp),%rdi\n\n(gdb) p 0x3c80\n$52 = 15488\n\nIt looks like this build eats about 24K of stack per plpgsql recursion\nlevel, of which ExecInterpExpr alone accounts for half. Why is that?\nIt has no large local variables, mostly just ints and pointers.\nThere are a lot of them, but even if you presume that each one gets\nits own dedicated bit of the stack frame, it's hard to arrive at 12K.\n\nI'd almost call this a compiler bug.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 24 Jul 2022 18:18:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 012_subtransactions.pl vs clang -fsanitize=undefined"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 10:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > I think what's happening is just that this build configuration\n> > eats stack extravagantly.\n>\n> That's definitely it, but I don't entirely see why. Here are a\n> couple of major offenders though:\n\nInteresting. I wonder where we can read about what stuff clang puts\non the stack to implement the undefined behaviour checker (and what\nGCC does differently here), but today I will resist the urge to go\nlooking.\n\nAs for workarounds (and as a note for my future self next time I'm\ntesting with UBSan), this is enough for the test to pass on my dev box\n(4MB is not enough):\n\n--- a/src/test/recovery/t/012_subtransactions.pl\n+++ b/src/test/recovery/t/012_subtransactions.pl\n@@ -16,6 +16,7 @@ $node_primary->append_conf(\n 'postgresql.conf', qq(\n max_prepared_transactions = 10\n log_checkpoints = true\n+ max_stack_depth = 5MB\n ));\n\nIt's also possible to tell it to keep out of certain functions:\n\nhttps://github.com/llvm/llvm-project/blob/main/clang/docs/UndefinedBehaviorSanitizer.rst#disabling-instrumentation-with-attribute-no-sanitize-undefined\n\n\n",
"msg_date": "Mon, 25 Jul 2022 10:39:25 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 012_subtransactions.pl vs clang -fsanitize=undefined"
}
] |
[
{
"msg_contents": "Hi, there.\n\ncopy force null git commit\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3b5e03dca2afea7a2c12dbc8605175d0568b5555>\ndidn't attach a discussion link. So I don't know if it's already been\ndiscussed.\n\nCurrent seem you cannot do\n COPY forcetest FROM STDIN WITH (FORMAT csv, FORCE_NULL(*));\n\ncan we have FORCE_NULL(*)? Since We already have FORCE_QUOTE(*).\n\n\n-- \n I recommend David Deutsch's <<The Beginning of Infinity>>\n\n Jian\n\nHi, there. copy force null git commit didn't attach a discussion link. So I don't know if it's already been discussed.Current seem you cannot do COPY forcetest FROM STDIN WITH (FORMAT csv, FORCE_NULL(*));can we have \nFORCE_NULL(*)? Since We already have FORCE_QUOTE(*). \n\n\n\n-- I recommend David Deutsch's <<The Beginning of Infinity>> Jian",
"msg_date": "Mon, 25 Jul 2022 09:48:12 +0530",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": true,
"msg_subject": "COPY FROM FORMAT CSV FORCE_NULL(*) ?"
},
{
"msg_contents": "\nOn 2022-07-25 Mo 00:18, jian he wrote:\n> Hi, there.\n>\n> copy force null git commit\n> <https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3b5e03dca2afea7a2c12dbc8605175d0568b5555>\n> didn't attach a discussion link. So I don't know if it's already been\n> discussed.\n>\n> Current seem you cannot do\n> COPY forcetest FROM STDIN WITH (FORMAT csv, FORCE_NULL(*));\n>\n> can we have FORCE_NULL(*)? Since We already have FORCE_QUOTE(*). \n>\n\nWe only started adding discussion links in later years. Here's a link to\nthe original discussion.\n\n\n<https://www.postgresql.org/message-id/flat/CAB8KJ%3DjS-Um4TGwenS5wLUfJK6K4rNOm_V6GRUj%2BtcKekL2%3DGQ%40mail.gmail.com>\n\n\nOffhand I don't see why we shouldn't have this. Someone interested\nenough would need to submit a patch.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 25 Jul 2022 09:28:24 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: COPY FROM FORMAT CSV FORCE_NULL(*) ?"
},
{
"msg_contents": "Hi,\n\n\nAgree, FORCE_NULL(*) is useful as well as FORCE_NOT_NULL(*).\n\nWe can have them both.\n\nThey are useful when users copy tables that have many columns.\n\nRegards,\nZhang Mingli\nOn Jul 25, 2022, 21:28 +0800, Andrew Dunstan <andrew@dunslane.net>, wrote:\n>\n> On 2022-07-25 Mo 00:18, jian he wrote:\n> > Hi, there.\n> >\n> > copy force null git commit\n> > <https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3b5e03dca2afea7a2c12dbc8605175d0568b5555>\n> > didn't attach a discussion link. So I don't know if it's already been\n> > discussed.\n> >\n> > Current seem you cannot do\n> > COPY forcetest FROM STDIN WITH (FORMAT csv, FORCE_NULL(*));\n> >\n> > can we have FORCE_NULL(*)? Since We already have FORCE_QUOTE(*).\n> >\n>\n> We only started adding discussion links in later years. Here's a link to\n> the original discussion.\n>\n>\n> <https://www.postgresql.org/message-id/flat/CAB8KJ%3DjS-Um4TGwenS5wLUfJK6K4rNOm_V6GRUj%2BtcKekL2%3DGQ%40mail.gmail.com>\n>\n>\n> Offhand I don't see why we shouldn't have this. Someone interested\n> enough would need to submit a patch.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n>\n\n\n\n\n\n\n\nHi,\n\n\nAgree, FORCE_NULL(*) is useful as well as FORCE_NOT_NULL(*).\n\nWe can have them both.\n\nThey are useful when users copy tables that have many columns. \n\n\nRegards,\nZhang Mingli\n\n\nOn Jul 25, 2022, 21:28 +0800, Andrew Dunstan <andrew@dunslane.net>, wrote:\n\nOn 2022-07-25 Mo 00:18, jian he wrote:\nHi, there.\n\ncopy force null git commit\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3b5e03dca2afea7a2c12dbc8605175d0568b5555>\ndidn't attach a discussion link. So I don't know if it's already been\ndiscussed.\n\nCurrent seem you cannot do\n COPY forcetest FROM STDIN WITH (FORMAT csv, FORCE_NULL(*));\n\ncan we have FORCE_NULL(*)? Since We already have FORCE_QUOTE(*). 
\n\n\nWe only started adding discussion links in later years. Here's a link to\nthe original discussion.\n\n\n<https://www.postgresql.org/message-id/flat/CAB8KJ%3DjS-Um4TGwenS5wLUfJK6K4rNOm_V6GRUj%2BtcKekL2%3DGQ%40mail.gmail.com>\n\n\nOffhand I don't see why we shouldn't have this. Someone interested\nenough would need to submit a patch.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 28 Jul 2022 22:09:42 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FROM FORMAT CSV FORCE_NULL(*) ?"
}
] |
[
{
"msg_contents": "Hello\n\nI'm working on several databases where schemas are used to differentiate the \ntenants.\nThis is great for performance, but several tools are lacking around this \nusecase by not showing the schema, one of them being log_line_prefix.\nIt is possible to work around this using the application_name, but a mistake \non the application side would be fatal, while the search_path would still \nindicate the real tables used in a query.\nThe attached patch implements this, using %S. I've not written the \ndocumentation yet, since I'm not sure this would be acceptable as is, or if a \nmore \"generic\" method should be used (I thought of %{name} to fetch an \narbitrary GUC, but did not implement due to a lack of need for that feature)",
"msg_date": "Mon, 25 Jul 2022 09:37:52 +0200",
"msg_from": "Pierre Ducroquet <p.psql@pinaraf.info>",
"msg_from_op": true,
"msg_subject": "log_line_prefix: make it possible to add the search_path"
},
{
"msg_contents": "On 2022-Jul-25, Pierre Ducroquet wrote:\n\n> This is great for performance, but several tools are lacking around this \n> usecase by not showing the schema, one of them being log_line_prefix.\n\n> The attached patch implements this, using %S. I've not written the \n> documentation yet, since I'm not sure this would be acceptable as is, or if a \n> more \"generic\" method should be used (I thought of %{name} to fetch an \n> arbitrary GUC, but did not implement due to a lack of need for that feature)\n\nIt seems that this would be too noisy to be truly usable. What if we\nemitted a log line when the variable changed, and the value that's in\nuse when the connection starts?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"En las profundidades de nuestro inconsciente hay una obsesiva necesidad\nde un universo lógico y coherente. Pero el universo real se halla siempre\nun paso más allá de la lógica\" (Irulan)\n\n\n",
"msg_date": "Mon, 25 Jul 2022 11:52:41 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: log_line_prefix: make it possible to add the search_path"
},
{
"msg_contents": "On Monday, July 25, 2022 11:52:41 AM CEST Alvaro Herrera wrote:\n> On 2022-Jul-25, Pierre Ducroquet wrote:\n> > This is great for performance, but several tools are lacking around this\n> > usecase by not showing the schema, one of them being log_line_prefix.\n> > \n> > The attached patch implements this, using %S. I've not written the\n> > documentation yet, since I'm not sure this would be acceptable as is, or\n> > if a more \"generic\" method should be used (I thought of %{name} to fetch\n> > an arbitrary GUC, but did not implement due to a lack of need for that\n> > feature)\n> It seems that this would be too noisy to be truly usable. What if we\n> emitted a log line when the variable changed, and the value that's in\n> use when the connection starts?\n\nThen the log files would be filled by these messages, only to be able to make \nuse of the few slow queries that are logged during the day. Or it would need \nto be a log before each slow query. I'm not sure how well it would work.\n\n\n\n\n",
"msg_date": "Mon, 25 Jul 2022 15:38:55 +0200",
"msg_from": "Pierre <p.psql@pinaraf.info>",
"msg_from_op": false,
"msg_subject": "Re: log_line_prefix: make it possible to add the search_path"
},
{
"msg_contents": "On 2022-Jul-25, Pierre wrote:\n\n> On Monday, July 25, 2022 11:52:41 AM CEST Alvaro Herrera wrote:\n\n> > It seems that this would be too noisy to be truly usable. What if we\n> > emitted a log line when the variable changed, and the value that's in\n> > use when the connection starts?\n> \n> Then the log files would be filled by these messages, only to be able to make \n> use of the few slow queries that are logged during the day.\n\nAh, yeah, that's not useful for that case ...\n\n> Or it would need to be a log before each slow query. I'm not sure how\n> well it would work.\n\n... and this would probably be prohibitively complex to implement and\nuse, as well as too slow for the high traffic case.\n\n\nMaybe your idea of allowing arbitrary GUCs is not a bad one, something\nlike\n %{search_path}G\n(where we add a letter at the end just so we can add other things in the\nfuture that aren't GUCs.)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 25 Jul 2022 18:06:28 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: log_line_prefix: make it possible to add the search_path"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Maybe your idea of allowing arbitrary GUCs is not a bad one, something\n> like\n> %{search_path}G\n> (where we add a letter at the end just so we can add other things in the\n> future that aren't GUCs.)\n\nI'm pretty uncomfortable about the amount of code that could potentially\nbe reached during an error logging attempt if we do that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 25 Jul 2022 16:43:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: log_line_prefix: make it possible to add the search_path"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 12:38 AM Pierre Ducroquet <p.psql@pinaraf.info>\nwrote:\n\n> usecase by not showing the schema, one of them being log_line_prefix.\n> It is possible to work around this using the application_name, but a\n> mistake\n> on the application side would be fatal, while the search_path would still\n> indicate the real tables used in a query.\n>\n\nI'm assuming this is mostly referring to STATEMENT log lines and other\nsituations where the original query is output (e.g. auto_explain).\n\n+1 on the benefit of solving this (I've had this use case before), but I\nthink we can keep this more specific than a general log_line_prefix option.\nThe search_path isn't relevant to any log line that doesn't reference a\nquery, since e.g. autovacuum log output fully qualifies its relation names,\nand many other common log lines have nothing to do with tables or queries.\n\nWhat if we instead had something like this, as an extra CONTEXT (or DETAIL)\nlog line:\n\nLOG: duration: 4079.697 ms execute <unnamed>:\nSELECT * FROM x WHERE y = $1 LIMIT $2\nDETAIL: parameters: $1 = 'long string', $2 = '1'\nCONTEXT: settings: search_path = 'my_tenant_schema, \"$user\", public'\n\nThat way you could determine that the slow query was affecting the \"x\"\ntable in \"my_tenant_schema\".\n\nThis log output would be controlled by a new GUC, e.g.\n\"log_statement_search_path\" with three settings: (1) never, (2)\nnon_default, (3) always.\n\nThe default would be \"never\" (same as today). 
\"non_default\" would output\nthe search path when a SET has modified it in the current session (and so\nwe couldn't infer it from the config or the role/database overrides).\n\"always\" would always output the search path for statement-related log\nlines.\n\nThanks,\nLukas\n\n-- \nLukas Fittl\n\nOn Mon, Jul 25, 2022 at 12:38 AM Pierre Ducroquet <p.psql@pinaraf.info> wrote:\nusecase by not showing the schema, one of them being log_line_prefix.\nIt is possible to work around this using the application_name, but a mistake \non the application side would be fatal, while the search_path would still \nindicate the real tables used in a query.\nI'm assuming this is mostly referring to STATEMENT log lines and other situations where the original query is output (e.g. auto_explain).+1 on the benefit of solving this (I've had this use case before), but I think we can keep this more specific than a general log_line_prefix option. The search_path isn't relevant to any log line that doesn't reference a query, since e.g. autovacuum log output fully qualifies its relation names, and many other common log lines have nothing to do with tables or queries.What if we instead had something like this, as an extra CONTEXT (or DETAIL) log line:LOG: duration: 4079.697 ms execute <unnamed>:SELECT * FROM x WHERE y = $1 LIMIT $2DETAIL: parameters: $1 = 'long string', $2 = '1'CONTEXT: settings: search_path = 'my_tenant_schema, \"$user\", public'That way you could determine that the slow query was affecting the \"x\" table in \"my_tenant_schema\".This log output would be controlled by a new GUC, e.g. \"log_statement_search_path\" with three settings: (1) never, (2) non_default, (3) always.The default would be \"never\" (same as today). \"non_default\" would output the search path when a SET has modified it in the current session (and so we couldn't infer it from the config or the role/database overrides). 
\"always\" would always output the search path for statement-related log lines.Thanks,Lukas-- Lukas Fittl",
"msg_date": "Mon, 25 Jul 2022 18:08:01 -0700",
"msg_from": "Lukas Fittl <lukas@fittl.com>",
"msg_from_op": false,
"msg_subject": "Re: log_line_prefix: make it possible to add the search_path"
},
{
"msg_contents": "On Tuesday, July 26, 2022 3:08:01 AM CEST Lukas Fittl wrote:\n> On Mon, Jul 25, 2022 at 12:38 AM Pierre Ducroquet <p.psql@pinaraf.info>\n> \n> wrote:\n> > usecase by not showing the schema, one of them being log_line_prefix.\n> > It is possible to work around this using the application_name, but a\n> > mistake\n> > on the application side would be fatal, while the search_path would still\n> > indicate the real tables used in a query.\n> \n> I'm assuming this is mostly referring to STATEMENT log lines and other\n> situations where the original query is output (e.g. auto_explain).\n> \n> +1 on the benefit of solving this (I've had this use case before), but I\n> think we can keep this more specific than a general log_line_prefix option.\n> The search_path isn't relevant to any log line that doesn't reference a\n> query, since e.g. autovacuum log output fully qualifies its relation names,\n> and many other common log lines have nothing to do with tables or queries.\n> \n> What if we instead had something like this, as an extra CONTEXT (or DETAIL)\n> log line:\n> \n> LOG: duration: 4079.697 ms execute <unnamed>:\n> SELECT * FROM x WHERE y = $1 LIMIT $2\n> DETAIL: parameters: $1 = 'long string', $2 = '1'\n> CONTEXT: settings: search_path = 'my_tenant_schema, \"$user\", public'\n> \n> That way you could determine that the slow query was affecting the \"x\"\n> table in \"my_tenant_schema\".\n> \n> This log output would be controlled by a new GUC, e.g.\n> \"log_statement_search_path\" with three settings: (1) never, (2)\n> non_default, (3) always.\n> \n> The default would be \"never\" (same as today). \"non_default\" would output\n> the search path when a SET has modified it in the current session (and so\n> we couldn't infer it from the config or the role/database overrides).\n> \"always\" would always output the search path for statement-related log\n> lines.\n> \n> Thanks,\n> Lukas\n\nHi\n\nThis is a good idea. 
I've hacked a first implementation of it (lacking \ndocumentation, and several logs are still missing) attached to this email.\nThe biggest issue I had was with knowing where the setting come from since no \nguc.h function expose that information. I worked around this a bit, but I'm \nsure it would be preferable to do it otherwise.\n\nThanks for your feedbacks\n\nRegards\n\n Pierre",
"msg_date": "Tue, 26 Jul 2022 14:41:55 +0200",
"msg_from": "Pierre <p.psql@pinaraf.info>",
"msg_from_op": false,
"msg_subject": "Re: log_line_prefix: make it possible to add the search_path"
}
] |
[
{
"msg_contents": "Hi, hackers\n\nI found the misc_sanity has a SQL to check system catalogs that\ndo not have primary keys, however, in current exceptions it says\npg_depend, pg_shdepend don't have a unique key.\n\nShould we fix it?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Mon, 25 Jul 2022 15:54:07 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Typo in misc_sanity.sql?"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 1:24 PM Japin Li <japinli@hotmail.com> wrote:\n>\n>\n> Hi, hackers\n>\n> I found the misc_sanity has a SQL to check system catalogs that\n> do not have primary keys, however, in current exceptions it says\n> pg_depend, pg_shdepend don't have a unique key.\n>\n> Should we fix it?\n\nIndeed. There's a clear difference between primary key and unique key.\n\nThe patch LGTM.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 25 Jul 2022 13:32:35 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Typo in misc_sanity.sql?"
},
{
"msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> I found the misc_sanity has a SQL to check system catalogs that\n> do not have primary keys, however, in current exceptions it says\n> pg_depend, pg_shdepend don't have a unique key.\n\nAs indeed they do not:\n\nregression=# \\d pg_depend\n Table \"pg_catalog.pg_depend\"\n Column | Type | Collation | Nullable | Default \n-------------+---------+-----------+----------+---------\n classid | oid | | not null | \n objid | oid | | not null | \n objsubid | integer | | not null | \n refclassid | oid | | not null | \n refobjid | oid | | not null | \n refobjsubid | integer | | not null | \n deptype | \"char\" | | not null | \nIndexes:\n \"pg_depend_depender_index\" btree (classid, objid, objsubid)\n \"pg_depend_reference_index\" btree (refclassid, refobjid, refobjsubid)\n\nregression=# \\d pg_shdepend\n Table \"pg_catalog.pg_shdepend\"\n Column | Type | Collation | Nullable | Default \n------------+---------+-----------+----------+---------\n dbid | oid | | not null | \n classid | oid | | not null | \n objid | oid | | not null | \n objsubid | integer | | not null | \n refclassid | oid | | not null | \n refobjid | oid | | not null | \n deptype | \"char\" | | not null | \nIndexes:\n \"pg_shdepend_depender_index\" btree (dbid, classid, objid, objsubid), tablespace \"pg_global\"\n \"pg_shdepend_reference_index\" btree (refclassid, refobjid), tablespace \"pg_global\"\nTablespace: \"pg_global\"\n\nYour proposed wording seems to give strictly less information,\nso I do not see why it's an improvement.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 25 Jul 2022 07:39:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Typo in misc_sanity.sql?"
},
{
"msg_contents": "\nOn Mon, 25 Jul 2022 at 19:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Japin Li <japinli@hotmail.com> writes:\n>> I found the misc_sanity has a SQL to check system catalogs that\n>> do not have primary keys, however, in current exceptions it says\n>> pg_depend, pg_shdepend don't have a unique key.\n>\n> As indeed they do not:\n>\n> regression=# \\d pg_depend\n> Table \"pg_catalog.pg_depend\"\n> Column | Type | Collation | Nullable | Default \n> -------------+---------+-----------+----------+---------\n> classid | oid | | not null | \n> objid | oid | | not null | \n> objsubid | integer | | not null | \n> refclassid | oid | | not null | \n> refobjid | oid | | not null | \n> refobjsubid | integer | | not null | \n> deptype | \"char\" | | not null | \n> Indexes:\n> \"pg_depend_depender_index\" btree (classid, objid, objsubid)\n> \"pg_depend_reference_index\" btree (refclassid, refobjid, refobjsubid)\n>\n> regression=# \\d pg_shdepend\n> Table \"pg_catalog.pg_shdepend\"\n> Column | Type | Collation | Nullable | Default \n> ------------+---------+-----------+----------+---------\n> dbid | oid | | not null | \n> classid | oid | | not null | \n> objid | oid | | not null | \n> objsubid | integer | | not null | \n> refclassid | oid | | not null | \n> refobjid | oid | | not null | \n> deptype | \"char\" | | not null | \n> Indexes:\n> \"pg_shdepend_depender_index\" btree (dbid, classid, objid, objsubid), tablespace \"pg_global\"\n> \"pg_shdepend_reference_index\" btree (refclassid, refobjid), tablespace \"pg_global\"\n> Tablespace: \"pg_global\"\n>\n\nYeah, they do not have unique keys, however, here we check primary keys. So,\nIMO, the description exceptions should say they do not have primary keys,\nrather than do not have unique keys.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Mon, 25 Jul 2022 20:04:30 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Typo in misc_sanity.sql?"
},
{
"msg_contents": "On 25.07.22 14:04, Japin Li wrote:\n> Yeah, they do not have unique keys, however, here we check primary keys. So,\n> IMO, the description exceptions should say they do not have primary keys,\n> rather than do not have unique keys.\n\nThe context of that check is that for each system catalog we pick one of \nthe available unique keys and designate it as the one primary key. If a \nsystem catalog doesn't have a unique key to choose from, then we can't \ndo that, hence the comment. Changing the comment as suggested would \nessentially be saying, this catalog has no primary key because it has no \nprimary key, which wouldn't be helpful.\n\n\n",
"msg_date": "Wed, 10 Aug 2022 13:49:24 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Typo in misc_sanity.sql?"
}
] |
[
{
"msg_contents": "Hello,\n\nI have two very simple questions:\n\n1) I have an account at postgresql.org, but a link to a 'forgot password' seems to be missing on the login page. I have my password stored only on an old Fedora 32 computer. To change the password\nwhen logged in, you need to supply the old password. In short, I have no way to migrate this postgresql.org account to my new Fedora 35 and Fedora 36 computers. What can be done about this?\n\n2) I have three psql clients running, a version 12.6, a version 13.4 and a version 14.3. Until now a 'select * from table;' showed the output in 'less' or something alike and exited from 'less' when\nthe output was complete. Both version 12.6 and version 13.4 work that way. Version 14.3 does not exit from 'less' when the output is complete. Did anyone notice this already?\n\nBest regards,\nMischa Baars.\n\n\n\n",
"msg_date": "Mon, 25 Jul 2022 11:02:26 +0200",
"msg_from": "\"Michael J. Baars\" <mjbaars1977.pgsql.hackers@gmail.com>",
"msg_from_op": true,
"msg_subject": "Password reset link / 'less' does not exit in psql version 13.4"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 11:02:26AM +0200, Michael J. Baars wrote:\n> Hello,\n> \n> I have two very simple questions:\n> \n> 1) I have an account at postgresql.org, but a link to a 'forgot password' seems to be missing on the login page. I have my password stored only on an old Fedora 32 computer. To change the password\n> when logged in, you need to supply the old password. In short, I have no way to migrate this postgresql.org account to my new Fedora 35 and Fedora 36 computers. What can be done about this?\n\nIt say this:\n| If you have a postgresql.org community account with a password, please use the form below to sign in. If you have one but have lost your password, you can use the password reset form. \n\n(BTW, the development list isn't the right place; pgsql-www is better).\n\n> 2) I have three psql clients running, a version 12.6, a version 13.4 and a version 14.3. Until now a 'select * from table;' showed the output in 'less' or something alike and exited from 'less' when\n> the output was complete. Both version 12.6 and version 13.4 work that way. Version 14.3 does not exit from 'less' when the output is complete. Did anyone notice this already?\n\nIs it actually running less or some other pager ?\n\nDo you have all 3 versions of psql installed and the same (different) behavior\nhappening today ? How was postgres installed ? Compiled locally or from which\npackages ? Please show pg_config for each.\n\nCould you check how the pager is being run ?\nCheck the commandline in ps -fC less or similar, and the environment in \n\"cat /proc/PID/environ\" or \"ps wwe -C less\"\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 26 Jul 2022 12:20:00 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Password reset link / 'less' does not exit in psql version 13.4"
}
] |
[
{
"msg_contents": "This patch makes the backup history filename check more tight.\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Mon, 25 Jul 2022 19:31:08 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH v1] strengthen backup history filename check"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 5:01 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n>\n> This patch makes the backup history filename check more tight.\n\nCan you please elaborate a bit on the issue with existing\nIsBackupHistoryFileName(), if there's any?\n\nAlso, the patch does have hard coded numbers [1] which isn't good from\na readability perspective, adding macros and/or comments would help\nhere.\n\n[1]\n static inline bool\n IsBackupHistoryFileName(const char *fname)\n {\n- return (strlen(fname) > XLOG_FNAME_LEN &&\n+ return (strlen(fname) == XLOG_FNAME_LEN + 9 + strlen(\".backup\") &&\n strspn(fname, \"0123456789ABCDEF\") == XLOG_FNAME_LEN &&\n- strcmp(fname + strlen(fname) - strlen(\".backup\"), \".backup\") == 0);\n+ strspn(fname + XLOG_FNAME_LEN + 1, \"0123456789ABCDEF\") == 8 &&\n+ strcmp(fname + XLOG_FNAME_LEN + 9, \".backup\") == 0);\n }\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 25 Jul 2022 17:09:48 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] strengthen backup history filename check"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 7:39 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Jul 25, 2022 at 5:01 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n> >\n> > This patch makes the backup history filename check more tight.\n>\n> Can you please elaborate a bit on the issue with existing\n> IsBackupHistoryFileName(), if there's any?\n>\n\nThere are two call of this function, `CleanupBackupHistory` and\n`SetWALFileNameForCleanup`, there\nseems no issue of the existing IsBackupHistoryFileName() since the\ncreation of the backup history file\nwill make sure it is of format \"%08X%08X%08X.%08X.backup\".\n\nThe patch just makes `IsBackupHistoryFileName()` more match to\n`BackupHistoryFileName()`, thus\nmore easier to understand.\n\n> Also, the patch does have hard coded numbers [1] which isn't good from\n> a readability perspective, adding macros and/or comments would help\n> here.\n\nI'm not sure using macros is a good idea here, cause I noticed\n`IsTLHistoryFileName()` also uses\nsome hard code numbers [1].\n\n[1]\nstatic inline bool\nIsTLHistoryFileName(const char *fname)\n{\nreturn (strlen(fname) == 8 + strlen(\".history\") &&\nstrspn(fname, \"0123456789ABCDEF\") == 8 &&\nstrcmp(fname + 8, \".history\") == 0);\n}\n\n>\n> [1]\n> static inline bool\n> IsBackupHistoryFileName(const char *fname)\n> {\n> - return (strlen(fname) > XLOG_FNAME_LEN &&\n> + return (strlen(fname) == XLOG_FNAME_LEN + 9 + strlen(\".backup\") &&\n> strspn(fname, \"0123456789ABCDEF\") == XLOG_FNAME_LEN &&\n> - strcmp(fname + strlen(fname) - strlen(\".backup\"), \".backup\") == 0);\n> + strspn(fname + XLOG_FNAME_LEN + 1, \"0123456789ABCDEF\") == 8 &&\n> + strcmp(fname + XLOG_FNAME_LEN + 9, \".backup\") == 0);\n> }\n>\n> Regards,\n> Bharath Rupireddy.\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Mon, 25 Jul 2022 23:14:31 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] strengthen backup history filename check"
}
] |
[
{
"msg_contents": "Hi,\n\nwhile working on something else i encountered a bug in the trim_array() \nfunction. The bounds check fails for empty arrays without any \ndimensions. It reads the size of the non existing first dimension to \ndetermine the arrays length.\n\n select trim_array('{}'::int[], 10);\n ------------\n {}\n\n select trim_array('{}'::int[], 100);\n ERROR: number of elements to trim must be between 0 and 64\n\nThe attached patch fixes that check.\n\nMartin",
"msg_date": "Mon, 25 Jul 2022 16:40:51 +0200",
"msg_from": "Martin Kalcher <martin.kalcher@aboutsource.net>",
"msg_from_op": true,
"msg_subject": "[Patch] Fix bounds check in trim_array()"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 04:40:51PM +0200, Martin Kalcher wrote:\n> +SELECT trim_array(ARRAY[]::int[], 1); -- fail\n> +ERROR: number of elements to trim must be between 0 and 0\n\nCan we improve the error message? Maybe it should look something like\n\n\tERROR: number of elements to trim must be 0\n\nfor this case.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 28 Jul 2022 15:46:18 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Fix bounds check in trim_array()"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Mon, Jul 25, 2022 at 04:40:51PM +0200, Martin Kalcher wrote:\n>> +SELECT trim_array(ARRAY[]::int[], 1); -- fail\n>> +ERROR: number of elements to trim must be between 0 and 0\n\n> Can we improve the error message? Maybe it should look something like\n> \tERROR: number of elements to trim must be 0\n> for this case.\n\nHmm, I'm unexcited about making our long-suffering translators\ndeal with another translatable string for such a corner case.\nI think it's fine as-is.\n\nA bigger problem is that a little further down, there's an equally\nunprotected reference to ARR_LBOUND(v)[0]. Now, the fact that that\nexpression computes garbage doesn't matter too much, because AFAICS\nif the array is zero-D then array_get_slice is going to exit at\n\n\tif (ndim < nSubscripts || ndim <= 0 || ndim > MAXDIM)\n\t\treturn PointerGetDatum(construct_empty_array(elemtype));\n\nwithout ever examining its upperIndx[] argument. However,\nonce we put in a test case covering this behavior, I bet that\nvalgrind-using buildfarm animals will start to bleat about the\ninvalid memory access. I think the easiest fix is like\n\n\tif (ARR_NDIM(v) > 0)\n\t{\n\t\tupper[0] = ARR_LBOUND(v)[0] + array_length - n - 1;\n\t\tupperProvided[0] = true;\n\t}\n\nIt'd be good to get this fix into next week's minor releases,\nso I'll go push it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 31 Jul 2022 13:25:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] Fix bounds check in trim_array()"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen reviewing the postgres_fdw parallel-abort patch [1], I found that\nthere are several duplicate codes in postgres_fdw/connection.c.\nWhich seems to make it harder to review the patch changing connection.c.\nSo I'd like to remove such duplicate codes and refactor the functions\nin connection.c. I attached the following three patches.\n\nThere are two functions, pgfdw_get_result() and pgfdw_get_cleanup_result(),\nto get a query result. They have almost the same code, call PQisBusy(),\nWaitLatchOrSocket(), PQconsumeInput() and PQgetResult() in the loop,\nbut only pgfdw_get_cleanup_result() allows its callers to specify the timeout.\n0001 patch transforms pgfdw_get_cleanup_result() to the common function\nto get a query result and makes pgfdw_get_result() use it instead of\nits own (duplicate) code. The patch also renames pgfdw_get_cleanup_result()\nto pgfdw_get_result_timed().\n\npgfdw_xact_callback() and pgfdw_subxact_callback() have similar codes to\nissue COMMIT or RELEASE SAVEPOINT commands. 0002 patch adds the common function,\npgfdw_exec_pre_commit(), for that purpose, and changes those functions\nso that they use the common one.\n\npgfdw_finish_pre_commit_cleanup() and pgfdw_finish_pre_subcommit_cleanup()\nhave similar codes to wait for the results of COMMIT or RELEASE SAVEPOINT commands.\n0003 patch adds the common function, pgfdw_finish_pre_commit(), for that purpose,\nand replaces those functions with the common one.\nThat is, pgfdw_finish_pre_commit_cleanup() and pgfdw_finish_pre_subcommit_cleanup()\nare no longer necessary and 0003 patch removes them.\n\n[1] https://commitfest.postgresql.org/38/3392/\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Tue, 26 Jul 2022 00:54:47 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Refactoring postgres_fdw/connection.c"
},
{
"msg_contents": "At Tue, 26 Jul 2022 00:54:47 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> Hi,\n> \n> When reviewing the postgres_fdw parallel-abort patch [1], I found that\n> there are several duplicate codes in postgres_fdw/connection.c.\n> Which seems to make it harder to review the patch changing\n> connection.c.\n> So I'd like to remove such duplicate codes and refactor the functions\n> in connection.c. I attached the following three patches.\n> \n> There are two functions, pgfdw_get_result() and\n> pgfdw_get_cleanup_result(),\n> to get a query result. They have almost the same code, call\n> PQisBusy(),\n> WaitLatchOrSocket(), PQconsumeInput() and PQgetResult() in the loop,\n> but only pgfdw_get_cleanup_result() allows its callers to specify the\n> timeout.\n> 0001 patch transforms pgfdw_get_cleanup_result() to the common\n> function\n> to get a query result and makes pgfdw_get_result() use it instead of\n> its own (duplicate) code. The patch also renames\n> pgfdw_get_cleanup_result()\n> to pgfdw_get_result_timed().\n\nAgree to that refactoring. And it looks fine to me.\n\n> pgfdw_xact_callback() and pgfdw_subxact_callback() have similar codes\n> to\n> issue COMMIT or RELEASE SAVEPOINT commands. 0002 patch adds the common\n> function,\n> pgfdw_exec_pre_commit(), for that purpose, and changes those functions\n> so that they use the common one.\n\nI'm not sure the two are similar with each other. The new function\npgfdw_exec_pre_commit() looks like a merger of two isolated code paths\nintended to share a seven-line codelet. I feel the code gets a bit\nharder to understand after the change. 
I mildly oppose to this part.\n\n> pgfdw_finish_pre_commit_cleanup() and\n> pgfdw_finish_pre_subcommit_cleanup()\n> have similar codes to wait for the results of COMMIT or RELEASE\n> SAVEPOINT commands.\n> 0003 patch adds the common function, pgfdw_finish_pre_commit(), for\n> that purpose,\n> and replaces those functions with the common one.\n> That is, pgfdw_finish_pre_commit_cleanup() and\n> pgfdw_finish_pre_subcommit_cleanup()\n> are no longer necessary and 0003 patch removes them.\n\nIt gives the same feeling with 0002. Considering that\npending_deallocate becomes non-NIL only when toplevel, 38 lines out of\n66 lines of the function are the toplevel-dedicated stuff.\n\n> [1] https://commitfest.postgresql.org/38/3392/\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 26 Jul 2022 16:25:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring postgres_fdw/connection.c"
},
{
"msg_contents": "\n\nOn 2022/07/26 16:25, Kyotaro Horiguchi wrote:\n> Agree to that refactoring. And it looks fine to me.\n\nThanks for reviewing the patches!\n\n\n> I'm not sure the two are similar with each other. The new function\n> pgfdw_exec_pre_commit() looks like a merger of two isolated code paths\n> intended to share a seven-line codelet. I feel the code gets a bit\n> harder to understand after the change. I mildly oppose to this part.\n\nIf so, we can pgfdw_exec_pre_commit() into two, one is the common\nfunction that sends or executes the command (i.e., calls\ndo_sql_command_begin() or do_sql_command()), and another is\nthe function only for toplevel. The latter function calls\nthe common function and then executes DEALLOCATE ALL things.\n\nBut this is not the way that other functions like pgfdw_abort_cleanup()\nis implemented. Those functions have both codes for toplevel and\n!toplevel (i.e., subxact), and run the processings depending\non the argument \"toplevel\". So I'm thinking that\npgfdw_exec_pre_commit() implemented in the same way is better.\n\n\n> It gives the same feeling with 0002. Considering that\n> pending_deallocate becomes non-NIL only when toplevel, 38 lines out of\n> 66 lines of the function are the toplevel-dedicated stuff.\n\nSame as above.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 26 Jul 2022 18:33:04 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring postgres_fdw/connection.c"
},
{
"msg_contents": "Fujii-san,\n\nOn Tue, Jul 26, 2022 at 12:55 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> When reviewing the postgres_fdw parallel-abort patch [1], I found that\n> there are several duplicate codes in postgres_fdw/connection.c.\n> Which seems to make it harder to review the patch changing connection.c.\n> So I'd like to remove such duplicate codes and refactor the functions\n> in connection.c. I attached the following three patches.\n>\n> There are two functions, pgfdw_get_result() and pgfdw_get_cleanup_result(),\n> to get a query result. They have almost the same code, call PQisBusy(),\n> WaitLatchOrSocket(), PQconsumeInput() and PQgetResult() in the loop,\n> but only pgfdw_get_cleanup_result() allows its callers to specify the timeout.\n> 0001 patch transforms pgfdw_get_cleanup_result() to the common function\n> to get a query result and makes pgfdw_get_result() use it instead of\n> its own (duplicate) code. The patch also renames pgfdw_get_cleanup_result()\n> to pgfdw_get_result_timed().\n>\n> pgfdw_xact_callback() and pgfdw_subxact_callback() have similar codes to\n> issue COMMIT or RELEASE SAVEPOINT commands. 0002 patch adds the common function,\n> pgfdw_exec_pre_commit(), for that purpose, and changes those functions\n> so that they use the common one.\n>\n> pgfdw_finish_pre_commit_cleanup() and pgfdw_finish_pre_subcommit_cleanup()\n> have similar codes to wait for the results of COMMIT or RELEASE SAVEPOINT commands.\n> 0003 patch adds the common function, pgfdw_finish_pre_commit(), for that purpose,\n> and replaces those functions with the common one.\n> That is, pgfdw_finish_pre_commit_cleanup() and pgfdw_finish_pre_subcommit_cleanup()\n> are no longer necessary and 0003 patch removes them.\n>\n> [1] https://commitfest.postgresql.org/38/3392/\n\nThanks for working on this! I'd like to review this after the end of\nthe current CF. Could you add this to the next CF?\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Tue, 26 Jul 2022 19:26:20 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring postgres_fdw/connection.c"
},
{
"msg_contents": "\n\nOn 2022/07/26 19:26, Etsuro Fujita wrote:\n> Thanks for working on this! I'd like to review this after the end of\n> the current CF.\n\nThanks!\n\n\n> Could you add this to the next CF?\n\nYes.\nhttps://commitfest.postgresql.org/39/3782/\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 26 Jul 2022 19:46:52 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring postgres_fdw/connection.c"
},
{
"msg_contents": "At Tue, 26 Jul 2022 18:33:04 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> > I'm not sure the two are similar with each other. The new function\n> > pgfdw_exec_pre_commit() looks like a merger of two isolated code paths\n> > intended to share a seven-line codelet. I feel the code gets a bit\n> > harder to understand after the change. I mildly oppose to this part.\n> \n> If so, we can pgfdw_exec_pre_commit() into two, one is the common\n> function that sends or executes the command (i.e., calls\n> do_sql_command_begin() or do_sql_command()), and another is\n> the function only for toplevel. The latter function calls\n> the common function and then executes DEALLOCATE ALL things.\n> \n> But this is not the way that other functions like\n> pgfdw_abort_cleanup()\n> is implemented. Those functions have both codes for toplevel and\n> !toplevel (i.e., subxact), and run the processings depending\n> on the argument \"toplevel\". So I'm thinking that\n> pgfdw_exec_pre_commit() implemented in the same way is better.\n\nI didn't see it from that viewpoint but I don't think that\nunconditionally justifies other refactoring. If we merge\npgfdw_finish_pre_(sub)?commit_cleanup()s this way, in turn\npgfdw_subxact_callback() and pgfdw_xact_callback() are going to be\nalmost identical except event IDs to handle. But I don't think we\nwould want to merge them.\n\nA concern on 0002 is that it is hiding the subxact-specific steps from\nthe subxact callback. It would look reasonable if it were called from\ntwo or more places for each topleve and !toplevel, but actually it has\nonly one caller for each. So I think that pgfdw_exec_pre_commit\nshould not do that and should be renamed to pgfdw_commit_remote() or\nsomething. 
On the other hand pgfdw_finish_pre_commit() hides\ntoplevel-specific steps from the caller so the same argument holds.\n\nAnother point that makes me concern about the patch is the new\nfunction takes an SQL statement, along with the toplevel flag. I guess\nthe reason is that the command for subxact (RELEASE SAVEPOINT %d)\nrequires the current transaction level. However, the values\nisobtainable very cheap within the cleanup functions. So I propose to\nget rid of the parameter \"sql\" from the two functions.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 27 Jul 2022 10:36:12 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring postgres_fdw/connection.c"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 7:46 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2022/07/26 19:26, Etsuro Fujita wrote:\n> > Could you add this to the next CF?\n>\n> Yes.\n> https://commitfest.postgresql.org/39/3782/\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 27 Jul 2022 14:30:14 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring postgres_fdw/connection.c"
},
{
"msg_contents": "\n\nOn 2022/07/27 10:36, Kyotaro Horiguchi wrote:\n> At Tue, 26 Jul 2022 18:33:04 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>> I'm not sure the two are similar with each other. The new function\n>>> pgfdw_exec_pre_commit() looks like a merger of two isolated code paths\n>>> intended to share a seven-line codelet. I feel the code gets a bit\n>>> harder to understand after the change. I mildly oppose to this part.\n>>\n>> If so, we can pgfdw_exec_pre_commit() into two, one is the common\n>> function that sends or executes the command (i.e., calls\n>> do_sql_command_begin() or do_sql_command()), and another is\n>> the function only for toplevel. The latter function calls\n>> the common function and then executes DEALLOCATE ALL things.\n>>\n>> But this is not the way that other functions like\n>> pgfdw_abort_cleanup()\n>> is implemented. Those functions have both codes for toplevel and\n>> !toplevel (i.e., subxact), and run the processings depending\n>> on the argument \"toplevel\". So I'm thinking that\n>> pgfdw_exec_pre_commit() implemented in the same way is better.\n> \n> I didn't see it from that viewpoint but I don't think that\n> unconditionally justifies other refactoring. If we merge\n> pgfdw_finish_pre_(sub)?commit_cleanup()s this way, in turn\n> pgfdw_subxact_callback() and pgfdw_xact_callback() are going to be\n> almost identical except event IDs to handle. But I don't think we\n> would want to merge them.\n\nI don't think they are so identical because (as you say) they have to handle different event IDs. So I agree we don't want to merge them.\n\n\n> A concern on 0002 is that it is hiding the subxact-specific steps from\n> the subxact callback. It would look reasonable if it were called from\n> two or more places for each topleve and !toplevel, but actually it has\n> only one caller for each. So I think that pgfdw_exec_pre_commit\n> should not do that and should be renamed to pgfdw_commit_remote() or\n> something. 
On the other hand pgfdw_finish_pre_commit() hides\n> toplevel-specific steps from the caller so the same argument holds.\n\nSo you conclusion is to rename pgfdw_exec_pre_commit() to pgfdw_commit_remote() or something?\n\n\n> Another point that makes me concern about the patch is the new\n> function takes an SQL statement, along with the toplevel flag. I guess\n> the reason is that the command for subxact (RELEASE SAVEPOINT %d)\n> requires the current transaction level. However, the values\n> isobtainable very cheap within the cleanup functions. So I propose to\n> get rid of the parameter \"sql\" from the two functions.\n\nYes, that's possible. That is, pgfdw_exec_pre_commit() can construct the query string by executing the following codes, instead of accepting the query as an argument. But one downside of this approach is that the following codes are executed for every remote subtransaction entries. Maybe it's cheap to construct the query string as follows, but I'd like to avoid any unnecessray overhead if possible. So the patch makes the caller, pgfdw_subxact_callback(), construct the query string only once and give it to pgfdw_exec_pre_commit().\n\n\tcurlevel = GetCurrentTransactionNestLevel();\n\tsnprintf(sql, sizeof(sql), \"RELEASE SAVEPOINT s%d\", curlevel);\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 28 Jul 2022 15:26:42 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring postgres_fdw/connection.c"
},
{
"msg_contents": "At Thu, 28 Jul 2022 15:26:42 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2022/07/27 10:36, Kyotaro Horiguchi wrote:\n> > At Tue, 26 Jul 2022 18:33:04 +0900, Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote in\n> > I didn't see it from that viewpoint but I don't think that\n> > unconditionally justifies other refactoring. If we merge\n> > pgfdw_finish_pre_(sub)?commit_cleanup()s this way, in turn\n> > pgfdw_subxact_callback() and pgfdw_xact_callback() are going to be\n> > almost identical except event IDs to handle. But I don't think we\n> > would want to merge them.\n> \n> I don't think they are so identical because (as you say) they have to\n> handle different event IDs. So I agree we don't want to merge them.\n\nThe xact_callback and subxact_callback handles different sets of event\nIDs so they can be merged into one switch(). I don't think there's\nmuch difference from merging the functions for xact and subxact into\none rountine then calling it with a flag to chose one of the two\npaths. (Even though less-than-half lines of the fuction are shared..)\nHowever, still I don't think they ought to be merged.\n\n> > A concern on 0002 is that it is hiding the subxact-specific steps from\n> > the subxact callback. It would look reasonable if it were called from\n> > two or more places for each topleve and !toplevel, but actually it has\n> > only one caller for each. So I think that pgfdw_exec_pre_commit\n> > should not do that and should be renamed to pgfdw_commit_remote() or\n> > something. On the other hand pgfdw_finish_pre_commit() hides\n> > toplevel-specific steps from the caller so the same argument holds.\n> \n> So you conclusion is to rename pgfdw_exec_pre_commit() to\n> pgfdw_commit_remote() or something?\n\nAnd the remote stuff is removed from the function. 
That being said, I\ndon't mean to fight this no longer since that is rather a matter of\ntaste.\n\n> > Another point that makes me concern about the patch is the new\n> > function takes an SQL statement, along with the toplevel flag. I guess\n> > the reason is that the command for subxact (RELEASE SAVEPOINT %d)\n> > requires the current transaction level. However, the values\n> > isobtainable very cheap within the cleanup functions. So I propose to\n> > get rid of the parameter \"sql\" from the two functions.\n> \n> Yes, that's possible. That is, pgfdw_exec_pre_commit() can construct\n> the query string by executing the following codes, instead of\n> accepting the query as an argument. But one downside of this approach\n> is that the following codes are executed for every remote\n> subtransaction entries. Maybe it's cheap to construct the query string\n> as follows, but I'd like to avoid any unnecessray overhead if\n> possible. So the patch makes the caller, pgfdw_subxact_callback(),\n> construct the query string only once and give it to\n> pgfdw_exec_pre_commit().\n> \n> \tcurlevel = GetCurrentTransactionNestLevel();\n> \tsnprintf(sql, sizeof(sql), \"RELEASE SAVEPOINT s%d\", curlevel);\n\nThat *overhead* has been there and I'm not sure how much actual impact\nit gives on performance (comparing to the surrounding code). But I\nwould choose leaving it open-coded as-is than turning it into a\nfunction that need two tightly-bonded parameters passed and that also\ntightly bonded to the caller via the parameters. ...In other words,\nthe original code doesn't seem to meet the requirement for a function.\n\nHowever, it's okay if you prefer the functions than the open-coded\nlines based on the above discussion, I'd stop objecting.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 28 Jul 2022 16:27:14 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring postgres_fdw/connection.c"
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 11:56 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2022/07/27 10:36, Kyotaro Horiguchi wrote:\n> > At Tue, 26 Jul 2022 18:33:04 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n> >>> I'm not sure the two are similar with each other. The new function\n> >>> pgfdw_exec_pre_commit() looks like a merger of two isolated code paths\n> >>> intended to share a seven-line codelet. I feel the code gets a bit\n> >>> harder to understand after the change. I mildly oppose to this part.\n> >>\n> >> If so, we can pgfdw_exec_pre_commit() into two, one is the common\n> >> function that sends or executes the command (i.e., calls\n> >> do_sql_command_begin() or do_sql_command()), and another is\n> >> the function only for toplevel. The latter function calls\n> >> the common function and then executes DEALLOCATE ALL things.\n> >>\n> >> But this is not the way that other functions like\n> >> pgfdw_abort_cleanup()\n> >> is implemented. Those functions have both codes for toplevel and\n> >> !toplevel (i.e., subxact), and run the processings depending\n> >> on the argument \"toplevel\". So I'm thinking that\n> >> pgfdw_exec_pre_commit() implemented in the same way is better.\n> >\n> > I didn't see it from that viewpoint but I don't think that\n> > unconditionally justifies other refactoring. If we merge\n> > pgfdw_finish_pre_(sub)?commit_cleanup()s this way, in turn\n> > pgfdw_subxact_callback() and pgfdw_xact_callback() are going to be\n> > almost identical except event IDs to handle. But I don't think we\n> > would want to merge them.\n>\n> I don't think they are so identical because (as you say) they have to handle different event IDs. So I agree we don't want to merge them.\n>\n>\n> > A concern on 0002 is that it is hiding the subxact-specific steps from\n> > the subxact callback. 
It would look reasonable if it were called from\n> > two or more places for each topleve and !toplevel, but actually it has\n> > only one caller for each. So I think that pgfdw_exec_pre_commit\n> > should not do that and should be renamed to pgfdw_commit_remote() or\n> > something. On the other hand pgfdw_finish_pre_commit() hides\n> > toplevel-specific steps from the caller so the same argument holds.\n>\n> So you conclusion is to rename pgfdw_exec_pre_commit() to pgfdw_commit_remote() or something?\n>\n>\n> > Another point that makes me concern about the patch is the new\n> > function takes an SQL statement, along with the toplevel flag. I guess\n> > the reason is that the command for subxact (RELEASE SAVEPOINT %d)\n> > requires the current transaction level. However, the values\n> > isobtainable very cheap within the cleanup functions. So I propose to\n> > get rid of the parameter \"sql\" from the two functions.\n>\n> Yes, that's possible. That is, pgfdw_exec_pre_commit() can construct the query string by executing the following codes, instead of accepting the query as an argument. But one downside of this approach is that the following codes are executed for every remote subtransaction entries. Maybe it's cheap to construct the query string as follows, but I'd like to avoid any unnecessray overhead if possible. 
So the patch makes the caller, pgfdw_subxact_callback(), construct the query string only once and give it to pgfdw_exec_pre_commit().\n>\n> curlevel = GetCurrentTransactionNestLevel();\n> snprintf(sql, sizeof(sql), \"RELEASE SAVEPOINT s%d\", curlevel);\n>\n\nAnother possibility I can see is that instead of calling\npgfdw_exec_pre_commit() (similarly pgfdw_abort_cleanup) for every\nconnection entry, we should call that once from the callback function,\nand for that we need to move the hash table loop inside that function.\n\nThe structure of the callback function looks a little fuzzy to me\nwhere the same event is checked for every entry of the connection hash\ntable. Instead of simply move that loop should be inside those\nfunction (e.g. pgfdw_exec_pre_commit and pgfdw_abort_cleanup), and let\ncalled those function called once w.r.t to event and that function\nshould take care of every entry of the connection hash table. The\nbenefit is that we would save a few processing cycles that needed to\nmatch events and call the same function for each connection entry.\n\nI tried this refactoring in 0004 patch which is not complete, and\nreattaching other patches too to make CFboat happy.\n\nThoughts? Suggestions?\n\nRegards,\nAmul",
"msg_date": "Thu, 4 Aug 2022 09:41:57 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring postgres_fdw/connection.c"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 4:25 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Tue, 26 Jul 2022 00:54:47 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n> > There are two functions, pgfdw_get_result() and\n> > pgfdw_get_cleanup_result(),\n> > to get a query result. They have almost the same code, call\n> > PQisBusy(),\n> > WaitLatchOrSocket(), PQconsumeInput() and PQgetResult() in the loop,\n> > but only pgfdw_get_cleanup_result() allows its callers to specify the\n> > timeout.\n> > 0001 patch transforms pgfdw_get_cleanup_result() to the common\n> > function\n> > to get a query result and makes pgfdw_get_result() use it instead of\n> > its own (duplicate) code. The patch also renames\n> > pgfdw_get_cleanup_result()\n> > to pgfdw_get_result_timed().\n>\n> Agree to that refactoring.\n\n+1 for that refactoring. Here are a few comments about the 0001 patch:\n\nI'm not sure it's a good idea to change the function's name, because\nthat would make backpatching hard. To avoid that, how about\nintroducing a workhorse function for pgfdw_get_result and\npgfdw_get_cleanup_result, based on the latter function as you\nproposed, and modifying the two functions so that they call the\nworkhorse function?\n\n@@ -1599,13 +1572,9 @@ pgfdw_finish_pre_commit_cleanup(List *pending_entries)\n entry = (ConnCacheEntry *) lfirst(lc);\n\n /* Ignore errors (see notes in pgfdw_xact_callback) */\n- while ((res = PQgetResult(entry->conn)) != NULL)\n- {\n- PQclear(res);\n- /* Stop if the connection is lost (else we'll loop infinitely) */\n- if (PQstatus(entry->conn) == CONNECTION_BAD)\n- break;\n- }\n+ pgfdw_get_result_timed(entry->conn, 0, &res, NULL);\n+ PQclear(res);\n\nThe existing code prevents interruption, but this would change to\nallow it. Do we need this change?\n\n> > pgfdw_xact_callback() and pgfdw_subxact_callback() have similar codes\n> > to\n> > issue COMMIT or RELEASE SAVEPOINT commands. 
0002 patch adds the common\n> > function,\n> > pgfdw_exec_pre_commit(), for that purpose, and changes those functions\n> > so that they use the common one.\n>\n> I'm not sure the two are similar with each other. The new function\n> pgfdw_exec_pre_commit() looks like a merger of two isolated code paths\n> intended to share a seven-line codelet. I feel the code gets a bit\n> harder to understand after the change. I mildly oppose to this part.\n\nI have to agree with Horiguchi-san, because as mentioned by him, 1)\nthere isn't enough duplicate code in the two bits to justify merging\nthem into a single function, and 2) the 0002 patch rather just makes\ncode complicated. The current implementation is easy to understand,\nso I'd vote for leaving them alone for now.\n\n(I think the introduction of pgfdw_abort_cleanup is good, because that\nde-duplicated much code that existed both in pgfdw_xact_callback and\nin pgfdw_subxact_callback, which would outweigh the downside of\npgfdw_abort_cleanup that it made code somewhat complicated due to the\nlogic to handle both the transaction and subtransaction cases within\nthat single function. But 0002 is not the case, I think.)\n\n> > pgfdw_finish_pre_commit_cleanup() and\n> > pgfdw_finish_pre_subcommit_cleanup()\n> > have similar codes to wait for the results of COMMIT or RELEASE\n> > SAVEPOINT commands.\n> > 0003 patch adds the common function, pgfdw_finish_pre_commit(), for\n> > that purpose,\n> > and replaces those functions with the common one.\n> > That is, pgfdw_finish_pre_commit_cleanup() and\n> > pgfdw_finish_pre_subcommit_cleanup()\n> > are no longer necessary and 0003 patch removes them.\n>\n> It gives the same feeling with 0002.\n\nI have to agree with Horiguchi-san on this as well; the existing\nsingle-purpose functions are easy to understand, so I'd vote for\nleaving them alone.\n\nSorry for being late to the party.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Mon, 5 Sep 2022 15:17:51 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring postgres_fdw/connection.c"
},
{
"msg_contents": "\n\nOn 2022/09/05 15:17, Etsuro Fujita wrote:\n> +1 for that refactoring. Here are a few comments about the 0001 patch:\n\nThanks for reviewing the patch!\n\n\n> I'm not sure it's a good idea to change the function's name, because\n> that would make backpatching hard. To avoid that, how about\n> introducing a workhorse function for pgfdw_get_result and\n> pgfdw_get_cleanup_result, based on the latter function as you\n> proposed, and modifying the two functions so that they call the\n> workhorse function?\n\nThat's possible. We can revive pgfdw_get_cleanup_result() and\nmake it call pgfdw_get_result_timed(). Also, with the patch,\npgfdw_get_result() already works in that way. But I'm not sure\nhow much we should be concerned about back-patch \"issue\"\nin this case. We usually rename the internal functions\nif new names are better.\n\n> \n> @@ -1599,13 +1572,9 @@ pgfdw_finish_pre_commit_cleanup(List *pending_entries)\n> entry = (ConnCacheEntry *) lfirst(lc);\n> \n> /* Ignore errors (see notes in pgfdw_xact_callback) */\n> - while ((res = PQgetResult(entry->conn)) != NULL)\n> - {\n> - PQclear(res);\n> - /* Stop if the connection is lost (else we'll loop infinitely) */\n> - if (PQstatus(entry->conn) == CONNECTION_BAD)\n> - break;\n> - }\n> + pgfdw_get_result_timed(entry->conn, 0, &res, NULL);\n> + PQclear(res);\n> \n> The existing code prevents interruption, but this would change to\n> allow it. Do we need this change?\n\nYou imply that we intentially avoided calling CHECK_FOR_INTERRUPT()\nthere, don't you? But could you tell me why?\n\n \n> I have to agree with Horiguchi-san, because as mentioned by him, 1)\n> there isn't enough duplicate code in the two bits to justify merging\n> them into a single function, and 2) the 0002 patch rather just makes\n> code complicated. 
The current implementation is easy to understand,\n> so I'd vote for leaving them alone for now.\n> \n> (I think the introduction of pgfdw_abort_cleanup is good, because that\n> de-duplicated much code that existed both in pgfdw_xact_callback and\n> in pgfdw_subxact_callback, which would outweigh the downside of\n> pgfdw_abort_cleanup that it made code somewhat complicated due to the\n> logic to handle both the transaction and subtransaction cases within\n> that single function. But 0002 is not the case, I think.)\n\nThe function pgfdw_exec_pre_commit() that I newly introduced consists\nof two parts; issue the transaction-end command based on\nparallel_commit setting and issue DEALLOCATE ALL. The first part is\nduplicated between pgfdw_xact_callback() and pgfdw_subxact_callback(),\nbut the second not (i.e., it's used only by pgfdw_xact_callback()).\nSo how about getting rid of that non duplicated part from\npgfdw_exec_pre_commit()?\n\n\n>> It gives the same feeling with 0002.\n> \n> I have to agree with Horiguchi-san on this as well; the existing\n> single-purpose functions are easy to understand, so I'd vote for\n> leaving them alone.\n\nOk, I will reconsider 0003 patch. BTW, parallel abort patch that\nyou're proposing seems to add new function pgfdw_finish_abort_cleanup()\nwith the similar structure as the function added by 0003 patch.\nSo probably it's helpful for us to consider this together :)\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 15 Sep 2022 00:17:26 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring postgres_fdw/connection.c"
},
{
"msg_contents": "Hi Fujii-san,\n\nOn Thu, Sep 15, 2022 at 12:17 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> On 2022/09/05 15:17, Etsuro Fujita wrote:\n> > I'm not sure it's a good idea to change the function's name, because\n> > that would make backpatching hard. To avoid that, how about\n> > introducing a workhorse function for pgfdw_get_result and\n> > pgfdw_get_cleanup_result, based on the latter function as you\n> > proposed, and modifying the two functions so that they call the\n> > workhorse function?\n>\n> That's possible. We can revive pgfdw_get_cleanup_result() and\n> make it call pgfdw_get_result_timed(). Also, with the patch,\n> pgfdw_get_result() already works in that way. But I'm not sure\n> how much we should be concerned about back-patch \"issue\"\n> in this case. We usually rename the internal functions\n> if new names are better.\n\nI agree that if the name of an existing function was bad, we should\nrename it, but I do not think the name pgfdw_get_cleanup_result is\nbad; I think it is good in the sense that it well represents what the\nfunction waits for.\n\nThe patch you proposed changes pgfdw_get_cleanup_result to cover the\ntimed_out==NULL case, but I do not think it is a good idea to rename\nit for such a minor reason. That function is used in all supported\nversions, so that would just make back-patching hard.\n\n> > @@ -1599,13 +1572,9 @@ pgfdw_finish_pre_commit_cleanup(List *pending_entries)\n> > entry = (ConnCacheEntry *) lfirst(lc);\n> >\n> > /* Ignore errors (see notes in pgfdw_xact_callback) */\n> > - while ((res = PQgetResult(entry->conn)) != NULL)\n> > - {\n> > - PQclear(res);\n> > - /* Stop if the connection is lost (else we'll loop infinitely) */\n> > - if (PQstatus(entry->conn) == CONNECTION_BAD)\n> > - break;\n> > - }\n> > + pgfdw_get_result_timed(entry->conn, 0, &res, NULL);\n> > + PQclear(res);\n> >\n> > The existing code prevents interruption, but this would change to\n> > allow it. 
Do we need this change?\n>\n> You imply that we intentially avoided calling CHECK_FOR_INTERRUPT()\n> there, don't you?\n\nYeah, this is intentional; in commit 04e706d42, I coded this to match\nthe behavior in the non-parallel-commit mode, which does not call\nCHECK_FOR_INTERRUPT.\n\n> But could you tell me why?\n\nMy concern about doing so is that WaitLatchOrSocket is rather\nexpensive, so it might lead to useless overhead in most cases.\nAnyway, this changes the behavior, so you should show the evidence\nthat this is useful. I think this would be beyond refactoring,\nthough.\n\n> > I have to agree with Horiguchi-san, because as mentioned by him, 1)\n> > there isn't enough duplicate code in the two bits to justify merging\n> > them into a single function, and 2) the 0002 patch rather just makes\n> > code complicated. The current implementation is easy to understand,\n> > so I'd vote for leaving them alone for now.\n> >\n> > (I think the introduction of pgfdw_abort_cleanup is good, because that\n> > de-duplicated much code that existed both in pgfdw_xact_callback and\n> > in pgfdw_subxact_callback, which would outweigh the downside of\n> > pgfdw_abort_cleanup that it made code somewhat complicated due to the\n> > logic to handle both the transaction and subtransaction cases within\n> > that single function. But 0002 is not the case, I think.)\n>\n> The function pgfdw_exec_pre_commit() that I newly introduced consists\n> of two parts; issue the transaction-end command based on\n> parallel_commit setting and issue DEALLOCATE ALL. 
The first part is\n> duplicated between pgfdw_xact_callback() and pgfdw_subxact_callback(),\n> but the second not (i.e., it's used only by pgfdw_xact_callback()).\n> So how about getting rid of that non duplicated part from\n> pgfdw_exec_pre_commit()?\n\nSeems like a good idea.\n\n> > I have to agree with Horiguchi-san on this as well; the existing\n> > single-purpose functions are easy to understand, so I'd vote for\n> > leaving them alone.\n>\n> Ok, I will reconsider 0003 patch. BTW, parallel abort patch that\n> you're proposing seems to add new function pgfdw_finish_abort_cleanup()\n> with the similar structure as the function added by 0003 patch.\n> So probably it's helpful for us to consider this together :)\n\nOk, let us discuss this after the parallel-abort patch.\n\nSorry for the late reply again.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Sun, 29 Jan 2023 19:31:11 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring postgres_fdw/connection.c"
},
{
"msg_contents": "\n\nOn 2023/01/29 19:31, Etsuro Fujita wrote:\n> I agree that if the name of an existing function was bad, we should\n> rename it, but I do not think the name pgfdw_get_cleanup_result is\n> bad; I think it is good in the sense that it well represents what the\n> function waits for.\n> \n> The patch you proposed changes pgfdw_get_cleanup_result to cover the\n> timed_out==NULL case, but I do not think it is a good idea to rename\n> it for such a minor reason. That function is used in all supported\n> versions, so that would just make back-patching hard.\n\nAs far as I understand, the function name pgfdw_get_cleanup_result is\nused because it's used to get the result during abort cleanup as\nthe comment says. OTOH, the new function is used even when not doing abort cleanup,\ne.g., pgfdw_get_result() calls that new function. So I don't think that\npgfdw_get_cleanup_result is a good name in some places.\n\nIf you want to leave pgfdw_get_cleanup_result for the existing uses,\nwe can leave that and redefine it so that it just calls the workhorse\nfunction pgfdw_get_result_timed.\n\n\n> Yeah, this is intentional; in commit 04e706d42, I coded this to match\n> the behavior in the non-parallel-commit mode, which does not call\n> CHECK_FOR_INTERRUPT.\n> \n>> But could you tell me why?\n> \n> My concern about doing so is that WaitLatchOrSocket is rather\n> expensive, so it might lead to useless overhead in most cases.\n\npgfdw_get_result() and pgfdw_get_cleanup_result() already call\nWaitLatchOrSocket() and CHECK_FOR_INTERRUPTS(). That is, during\ncommit phase, they are currently called when receiving the result\nof COMMIT TRANSACTION command from remote server. Why do we need\nto worry about their overhead only when executing DEALLOCATE ALL?\n\n\n> Anyway, this changes the behavior, so you should show the evidence\n> that this is useful. 
I think this would be beyond refactoring,\n> though.\n\nIsn't it useful to react to interrupts (e.g., shutdown requests)\npromptly even while waiting for the result of DEALLOCATE ALL?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 31 Jan 2023 15:44:47 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring postgres_fdw/connection.c"
},
{
"msg_contents": "On Tue, Jan 31, 2023 at 3:44 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2023/01/29 19:31, Etsuro Fujita wrote:\n> > I agree that if the name of an existing function was bad, we should\n> > rename it, but I do not think the name pgfdw_get_cleanup_result is\n> > bad; I think it is good in the sense that it well represents what the\n> > function waits for.\n> >\n> > The patch you proposed changes pgfdw_get_cleanup_result to cover the\n> > timed_out==NULL case, but I do not think it is a good idea to rename\n> > it for such a minor reason. That function is used in all supported\n> > versions, so that would just make back-patching hard.\n>\n> As far as I understand, the function name pgfdw_get_cleanup_result is\n> used because it's used to get the result during abort cleanup as\n> the comment says. OTOH, the new function is used even when not doing abort cleanup,\n> e.g., pgfdw_get_result() calls that new function. So I don't think that\n> pgfdw_get_cleanup_result is a good name in some places.\n\nYeah, I agree on that point.\n\n> If you want to leave pgfdw_get_cleanup_result for the existing uses,\n> we can leave that and redefine it so that it just calls the workhorse\n> function pgfdw_get_result_timed.\n\n+1; that's actually what I proposed upthread. :-)\n\nBTW the name \"pgfdw_get_result_timed\" is a bit confusing to me,\nbecause the new function works *without* a timeout condition. 
We\nusually append the suffix \"_internal\", so how about\n\"pgfdw_get_result_internal\", to avoid that confusion?\n\n> > Yeah, this is intentional; in commit 04e706d42, I coded this to match\n> > the behavior in the non-parallel-commit mode, which does not call\n> > CHECK_FOR_INTERRUPT.\n> >\n> >> But could you tell me why?\n> >\n> > My concern about doing so is that WaitLatchOrSocket is rather\n> > expensive, so it might lead to useless overhead in most cases.\n>\n> pgfdw_get_result() and pgfdw_get_cleanup_result() already call\n> WaitLatchOrSocket() and CHECK_FOR_INTERRUPTS(). That is, during\n> commit phase, they are currently called when receiving the result\n> of COMMIT TRANSACTION command from remote server. Why do we need\n> to worry about their overhead only when executing DEALLOCATE ALL?\n\nDEALLOCATE ALL is a light operation and is issued immediately after\nexecuting COMMIT TRANSACTION successfully, so I thought that in most\ncases it too would be likely to be executed successfully and quickly;\nthere would be less need to do so for DEALLOCATE ALL.\n\n> > Anyway, this changes the behavior, so you should show the evidence\n> > that this is useful. I think this would be beyond refactoring,\n> > though.\n>\n> Isn't it useful to react to interrupts (e.g., shutdown requests)\n> promptly even while waiting for the result of DEALLOCATE ALL?\n\nThat might be useful, but another concern about this is error\nhandling. The existing code (both in parallel commit and non-parallel\ncommit) ignores any kind of error in libpq as well as interrupts\nwhen doing DEALLOCATE ALL, and then commits the local transaction,\nmaking the remote/local transaction states consistent, but IIUC the\npatch aborts the local transaction when doing the command, e.g., if\nWaitLatchOrSocket detected some kind of error in socket access, making\nthe transaction states *inconsistent*, which I don't think would be\ngreat. 
So I'm still not sure this would be acceptable.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 15 Feb 2023 20:53:00 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring postgres_fdw/connection.c"
}
] |
[
{
"msg_contents": "Hi,\n\nIn TAP tests for logical replication, we have the following code in many places:\n\n$node_publisher->wait_for_catchup('tap_sub');\nmy $synced_query =\n \"SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT\nIN ('r', 's');\";\n$node_subscriber->poll_query_until('postgres', $synced_query)\n or die \"Timed out while waiting for subscriber to synchronize data\";\n\nAlso, we sometimes forgot to check either one, like we fixed in commit\n1f50918a6fb02207d151e7cb4aae4c36de9d827c.\n\nI think we can have a new function to wait for all subscriptions to\nsynchronize data. The attached patch introduces a new function\nwait_for_subscription_sync(). With this function, we can replace the\nabove code with this one function as follows:\n\n$node_subscriber->wait_for_subscription_sync($node_publisher, 'tap_sub');\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 26 Jul 2022 10:36:39 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 7:07 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi,\n>\n> In tap tests for logical replication, we have the following code in many places:\n>\n> $node_publisher->wait_for_catchup('tap_sub');\n> my $synced_query =\n> \"SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT\n> IN ('r', 's');\";\n> $node_subscriber->poll_query_until('postgres', $synced_query)\n> or die \"Timed out while waiting for subscriber to synchronize data\";\n>\n> Also, we sometime forgot to check either one, like we fixed in commit\n> 1f50918a6fb02207d151e7cb4aae4c36de9d827c.\n>\n> I think we can have a new function to wait for all subscriptions to\n> synchronize data. The attached patch introduce a new function\n> wait_for_subscription_sync(). With this function, we can replace the\n> above code with this one function as follows:\n>\n> $node_subscriber->wait_for_subscription_sync($node_publisher, 'tap_sub');\n>\n\n+1. This reduces quite some code in various tests and will make it\neasier to write future tests.\n\nFew comments/questions:\n====================\n1.\n-$node_publisher->wait_for_catchup('mysub1');\n-\n-# Also wait for initial table sync to finish.\n-my $synced_query =\n- \"SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT\nIN ('r', 's');\";\n-$node_subscriber->poll_query_until('postgres', $synced_query)\n- or die \"Timed out while waiting for subscriber to synchronize data\";\n-\n # Also wait for initial table sync to finish.\n-$node_subscriber->poll_query_until('postgres', $synced_query)\n- or die \"Timed out while waiting for subscriber to synchronize data\";\n+$node_subscriber->wait_for_subscription_sync($node_publisher, 'mysub1');\n\nIt seems to me without your patch there is an extra poll in the above\ntest. 
If so, we can probably remove that in a separate patch?\n\n2.\n+ # wait for the replication to catchup if required.\n+ if (defined($publisher))\n+ {\n+ croak 'subscription name must be specified' unless defined($subname);\n+ $publisher->wait_for_catchup($subname, 'replay');\n+ }\n+\n+ # then, wait for all table states to be ready.\n+ print \"Waiting for all subscriptions in \\\"$name\\\" to synchronize data\\n\";\n+ my $query = qq[SELECT count(1) = 0\n+ FROM pg_subscription_rel\n+ WHERE srsubstate NOT IN ('r', 's');];\n+ $self->poll_query_until($dbname, $query)\n+ or croak \"timed out waiting for subscriber to synchronize data\";\n\nIn the tests, I noticed that a few places did wait_for_catchup after\nthe subscription check, and at other places, we did that check before\nas you have it here. Ideally, I think wait_for_catchup should be after\nconfirming the initial sync is over as without initial sync, the\npublisher node won't be completely in sync with the subscriber. What\ndo you think?\n\n3. In the code quoted in the previous point, why did you pass the\nsecond parameter as 'replay' when we have not used that in the tests\notherwise?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 26 Jul 2022 10:31:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 2:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jul 26, 2022 at 7:07 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > In tap tests for logical replication, we have the following code in many places:\n> >\n> > $node_publisher->wait_for_catchup('tap_sub');\n> > my $synced_query =\n> > \"SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT\n> > IN ('r', 's');\";\n> > $node_subscriber->poll_query_until('postgres', $synced_query)\n> > or die \"Timed out while waiting for subscriber to synchronize data\";\n> >\n> > Also, we sometime forgot to check either one, like we fixed in commit\n> > 1f50918a6fb02207d151e7cb4aae4c36de9d827c.\n> >\n> > I think we can have a new function to wait for all subscriptions to\n> > synchronize data. The attached patch introduce a new function\n> > wait_for_subscription_sync(). With this function, we can replace the\n> > above code with this one function as follows:\n> >\n> > $node_subscriber->wait_for_subscription_sync($node_publisher, 'tap_sub');\n> >\n>\n> +1. This reduces quite some code in various tests and will make it\n> easier to write future tests.\n>\n> Few comments/questions:\n> ====================\n> 1.\n> -$node_publisher->wait_for_catchup('mysub1');\n> -\n> -# Also wait for initial table sync to finish.\n> -my $synced_query =\n> - \"SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT\n> IN ('r', 's');\";\n> -$node_subscriber->poll_query_until('postgres', $synced_query)\n> - or die \"Timed out while waiting for subscriber to synchronize data\";\n> -\n> # Also wait for initial table sync to finish.\n> -$node_subscriber->poll_query_until('postgres', $synced_query)\n> - or die \"Timed out while waiting for subscriber to synchronize data\";\n> +$node_subscriber->wait_for_subscription_sync($node_publisher, 'mysub1');\n>\n> It seems to me without your patch there is an extra poll in the above\n> test. 
If so, we can probably remove that in a separate patch?\n\nAgreed.\n\n>\n> 2.\n> + # wait for the replication to catchup if required.\n> + if (defined($publisher))\n> + {\n> + croak 'subscription name must be specified' unless defined($subname);\n> + $publisher->wait_for_catchup($subname, 'replay');\n> + }\n> +\n> + # then, wait for all table states to be ready.\n> + print \"Waiting for all subscriptions in \\\"$name\\\" to synchronize data\\n\";\n> + my $query = qq[SELECT count(1) = 0\n> + FROM pg_subscription_rel\n> + WHERE srsubstate NOT IN ('r', 's');];\n> + $self->poll_query_until($dbname, $query)\n> + or croak \"timed out waiting for subscriber to synchronize data\";\n>\n> In the tests, I noticed that a few places did wait_for_catchup after\n> the subscription check, and at other places, we did that check before\n> as you have it here. Ideally, I think wait_for_catchup should be after\n> confirming the initial sync is over as without initial sync, the\n> publisher node won't be completely in sync with the subscriber.\n\nWhat do you mean by the last sentence? I thought the order doesn't\nmatter here. Even if we do wait_for_catchup first then the\nsubscription check, we can make sure that the apply worker caught up\nand table synchronization has been done, no?\n\n>\n> 3. In the code quoted in the previous point, why did you pass the\n> second parameter as 'replay' when we have not used that in the tests\n> otherwise?\n\nIt makes sure to use the (current) default value of $mode of\nwait_for_catchup(). But probably it's not necessary, I've removed it.\n\nI've attached an updated patch as well as a patch to remove duplicated\nwaits in 007_ddl.pl.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 26 Jul 2022 16:41:57 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "On Tue, Jul 26, 2022 3:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> I've attached an updated patch as well as a patch to remove duplicated\r\n> waits in 007_ddl.pl.\r\n> \r\n\r\nThanks for your patch. Here are some comments.\r\n\r\n1.\r\nI think some comments need to be changed in the patch.\r\nFor example:\r\n# Also wait for initial table sync to finish\r\n# Wait for initial sync to finish as well\r\n\r\nWords like \"Also\" and \"as well\" can be removed now, we originally used them\r\nbecause we wait for catchup and \"also\" wait for initial sync.\r\n\r\n2.\r\nIn the following places, we can remove wait_for_catchup() and then call it in\r\nwait_for_subscription_sync().\r\n\r\n2.1.\r\n030_origin.pl:\r\n@@ -128,8 +120,7 @@ $node_B->safe_psql(\r\n \r\n $node_C->wait_for_catchup($appname_B2);\r\n \r\n-$node_B->poll_query_until('postgres', $synced_query)\r\n- or die \"Timed out while waiting for subscriber to synchronize data\";\r\n+$node_B->wait_for_subscription_sync;\r\n \r\n2.2.\r\n031_column_list.pl:\r\n@@ -385,7 +373,7 @@ $node_subscriber->safe_psql(\r\n \tALTER SUBSCRIPTION sub1 SET PUBLICATION pub2, pub3\r\n ));\r\n \r\n-wait_for_subscription_sync($node_subscriber);\r\n+$node_subscriber->wait_for_subscription_sync;\r\n \r\n $node_publisher->wait_for_catchup('sub1');\r\n\r\n2.3.\r\n100_bugs.pl:\r\n@@ -281,8 +276,7 @@ $node_subscriber->safe_psql('postgres',\r\n $node_publisher->wait_for_catchup('tap_sub');\r\n \r\n # Also wait for initial table sync to finish\r\n-$node_subscriber->poll_query_until('postgres', $synced_query)\r\n- or die \"Timed out while waiting for subscriber to synchronize data\";\r\n+$node_subscriber->wait_for_subscription_sync;\r\n \r\n is( $node_subscriber->safe_psql(\r\n \t\t'postgres', \"SELECT * FROM tab_replidentity_index\"),\r\n\r\nRegards,\r\nShi yu\r\n",
"msg_date": "Wed, 27 Jul 2022 10:08:47 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 1:12 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Jul 26, 2022 at 2:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > 2.\n> > + # wait for the replication to catchup if required.\n> > + if (defined($publisher))\n> > + {\n> > + croak 'subscription name must be specified' unless defined($subname);\n> > + $publisher->wait_for_catchup($subname, 'replay');\n> > + }\n> > +\n> > + # then, wait for all table states to be ready.\n> > + print \"Waiting for all subscriptions in \\\"$name\\\" to synchronize data\\n\";\n> > + my $query = qq[SELECT count(1) = 0\n> > + FROM pg_subscription_rel\n> > + WHERE srsubstate NOT IN ('r', 's');];\n> > + $self->poll_query_until($dbname, $query)\n> > + or croak \"timed out waiting for subscriber to synchronize data\";\n> >\n> > In the tests, I noticed that a few places did wait_for_catchup after\n> > the subscription check, and at other places, we did that check before\n> > as you have it here. Ideally, I think wait_for_catchup should be after\n> > confirming the initial sync is over as without initial sync, the\n> > publisher node won't be completely in sync with the subscriber.\n>\n> What do you mean by the last sentence? I thought the order doesn't\n> matter here. Even if we do wait_for_catchup first then the\n> subscription check, we can make sure that the apply worker caught up\n> and table synchronization has been done, no?\n>\n\nThat's right. I thought we should first ensure the subscriber has\nfinished operations if possible, like in this case, it can ensure\ntable sync has finished and then we can ensure whether publisher and\nsubscriber are in sync. That sounds more logical to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 27 Jul 2022 17:24:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 7:08 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Tue, Jul 26, 2022 3:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached an updated patch as well as a patch to remove duplicated\n> > waits in 007_ddl.pl.\n> >\n>\n> Thanks for your patch. Here are some comments.\n\nThank you for the comments!\n\n>\n> 1.\n> I think some comments need to be changed in the patch.\n> For example:\n> # Also wait for initial table sync to finish\n> # Wait for initial sync to finish as well\n>\n> Words like \"Also\" and \"as well\" can be removed now, we originally used them\n> because we wait for catchup and \"also\" wait for initial sync.\n\nAgreed.\n\n>\n> 2.\n> In the following places, we can remove wait_for_catchup() and then call it in\n> wait_for_subscription_sync().\n>\n> 2.1.\n> 030_origin.pl:\n> @@ -128,8 +120,7 @@ $node_B->safe_psql(\n>\n> $node_C->wait_for_catchup($appname_B2);\n>\n> -$node_B->poll_query_until('postgres', $synced_query)\n> - or die \"Timed out while waiting for subscriber to synchronize data\";\n> +$node_B->wait_for_subscription_sync;\n>\n> 2.2.\n> 031_column_list.pl:\n> @@ -385,7 +373,7 @@ $node_subscriber->safe_psql(\n> ALTER SUBSCRIPTION sub1 SET PUBLICATION pub2, pub3\n> ));\n>\n> -wait_for_subscription_sync($node_subscriber);\n> +$node_subscriber->wait_for_subscription_sync;\n>\n> $node_publisher->wait_for_catchup('sub1');\n>\n> 2.3.\n> 100_bugs.pl:\n> @@ -281,8 +276,7 @@ $node_subscriber->safe_psql('postgres',\n> $node_publisher->wait_for_catchup('tap_sub');\n>\n> # Also wait for initial table sync to finish\n> -$node_subscriber->poll_query_until('postgres', $synced_query)\n> - or die \"Timed out while waiting for subscriber to synchronize data\";\n> +$node_subscriber->wait_for_subscription_sync;\n>\n> is( $node_subscriber->safe_psql(\n> 'postgres', \"SELECT * FROM tab_replidentity_index\"),\n\nAgreed.\n\nI've attached updated patches that incorporated the above 
comments as\nwell as the comment from Amit.\n\nBTW regarding 0001 patch to remove the duplicated wait, should we\nbackpatch to v15? I think we can do that as it's an obvious fix and it\nseems to be an oversight in 8f2e2bbf145.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Thu, 28 Jul 2022 10:06:41 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 8:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jul 26, 2022 at 1:12 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Jul 26, 2022 at 2:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > 2.\n> > > + # wait for the replication to catchup if required.\n> > > + if (defined($publisher))\n> > > + {\n> > > + croak 'subscription name must be specified' unless defined($subname);\n> > > + $publisher->wait_for_catchup($subname, 'replay');\n> > > + }\n> > > +\n> > > + # then, wait for all table states to be ready.\n> > > + print \"Waiting for all subscriptions in \\\"$name\\\" to synchronize data\\n\";\n> > > + my $query = qq[SELECT count(1) = 0\n> > > + FROM pg_subscription_rel\n> > > + WHERE srsubstate NOT IN ('r', 's');];\n> > > + $self->poll_query_until($dbname, $query)\n> > > + or croak \"timed out waiting for subscriber to synchronize data\";\n> > >\n> > > In the tests, I noticed that a few places did wait_for_catchup after\n> > > the subscription check, and at other places, we did that check before\n> > > as you have it here. Ideally, I think wait_for_catchup should be after\n> > > confirming the initial sync is over as without initial sync, the\n> > > publisher node won't be completely in sync with the subscriber.\n> >\n> > What do you mean by the last sentence? I thought the order doesn't\n> > matter here. Even if we do wait_for_catchup first then the\n> > subscription check, we can make sure that the apply worker caught up\n> > and table synchronization has been done, no?\n> >\n>\n> That's right. I thought we should first ensure the subscriber has\n> finished operations if possible, like in this case, it can ensure\n> table sync has finished and then we can ensure whether publisher and\n> subscriber are in sync. That sounds more logical to me.\n\nMake sense. I've incorporated it in the v3 patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 28 Jul 2022 10:07:36 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 6:37 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Jul 27, 2022 at 7:08 PM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n>\n> I've attached updated patches that incorporated the above comments as\n> well as the comment from Amit.\n>\n> BTW regarding 0001 patch to remove the duplicated wait, should we\n> backpatch to v15?\n>\n\nI think it is good to clean this test case even for PG15 even though\nthere is no major harm in keeping it. I'll push this early next week\nby Tuesday unless someone thinks otherwise.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 30 Jul 2022 12:25:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "On Sat, Jul 30, 2022 at 12:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jul 28, 2022 at 6:37 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Jul 27, 2022 at 7:08 PM shiy.fnst@fujitsu.com\n> > <shiy.fnst@fujitsu.com> wrote:\n> >\n> > I've attached updated patches that incorporated the above comments as\n> > well as the comment from Amit.\n> >\n> > BTW regarding 0001 patch to remove the duplicated wait, should we\n> > backpatch to v15?\n> >\n>\n> I think it is good to clean this test case even for PG15 even though\n> there is no major harm in keeping it. I'll push this early next week\n> by Tuesday unless someone thinks otherwise.\n>\n\nPushed this one and now I'll look at your other patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 3 Aug 2022 10:21:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "On Wed, Aug 3, 2022 at 1:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Jul 30, 2022 at 12:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Jul 28, 2022 at 6:37 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Jul 27, 2022 at 7:08 PM shiy.fnst@fujitsu.com\n> > > <shiy.fnst@fujitsu.com> wrote:\n> > >\n> > > I've attached updated patches that incorporated the above comments as\n> > > well as the comment from Amit.\n> > >\n> > > BTW regarding 0001 patch to remove the duplicated wait, should we\n> > > backpatch to v15?\n> > >\n> >\n> > I think it is good to clean this test case even for PG15 even though\n> > there is no major harm in keeping it. I'll push this early next week\n> > by Tuesday unless someone thinks otherwise.\n> >\n>\n> Pushed this one and now I'll look at your other patch.\n\nThanks!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 3 Aug 2022 15:46:02 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "On Wed, Aug 3, 2022 at 10:21 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Pushed this one and now I'll look at your other patch.\n>\n\nI have pushed the second patch as well after making minor changes in\nthe comments. Alvaro [1] and Tom [2] suggest to back-patch this and\nthey sound reasonable to me. Will you be able to produce back branch\npatches?\n\n[1] - https://www.postgresql.org/message-id/20220803104544.k2luy5hr2ugnhgr2%40alvherre.pgsql\n[2] - https://www.postgresql.org/message-id/2966703.1659535343%40sss.pgh.pa.us\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 4 Aug 2022 07:07:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 10:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Aug 3, 2022 at 10:21 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Pushed this one and now I'll look at your other patch.\n> >\n>\n> I have pushed the second patch as well after making minor changes in\n> the comments. Alvaro [1] and Tom [2] suggest to back-patch this and\n> they sound reasonable to me. Will you be able to produce back branch\n> patches?\n\nYes. I've attached patches for backbranches. The updates are\nstraightforward on v11 - v15. However, on v10, we don't use\nwait_for_catchup() in some logical replication test cases. The commit\nbbd3363e128dae refactored the tests to use wait_for_catchup but it's\nnot backpatched. So in the patch for v10, I didn't change the code\nthat was changed by the commit. Also, since wait_for_catchup requires\nto specify $target_lsn, unlike the one in v11 or later, I changed\nwait_for_subscription_sync() accordingly.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Thu, 4 Aug 2022 15:28:05 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "On Thu, Aug 4, 2022 2:28 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> On Thu, Aug 4, 2022 at 10:37 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Wed, Aug 3, 2022 at 10:21 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > Pushed this one and now I'll look at your other patch.\r\n> > >\r\n> >\r\n> > I have pushed the second patch as well after making minor changes in\r\n> > the comments. Alvaro [1] and Tom [2] suggest to back-patch this and\r\n> > they sound reasonable to me. Will you be able to produce back branch\r\n> > patches?\r\n> \r\n> Yes. I've attached patches for backbranches. The updates are\r\n> straightforward on v11 - v15. However, on v10, we don't use\r\n> wait_for_catchup() in some logical replication test cases. The commit\r\n> bbd3363e128dae refactored the tests to use wait_for_catchup but it's\r\n> not backpatched. So in the patch for v10, I didn't change the code\r\n> that was changed by the commit. Also, since wait_for_catchup requires\r\n> to specify $target_lsn, unlike the one in v11 or later, I changed\r\n> wait_for_subscription_sync() accordingly.\r\n> \r\n\r\nThanks for your patches.\r\n\r\nIn the patches for pg11 ~ pg14, it looks we need to add a \"=pod\" before the\r\ncurrent change in PostgresNode.pm. Right?\r\n\r\nRegards,\r\nShi yu\r\n",
"msg_date": "Thu, 4 Aug 2022 09:49:15 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> Yes. I've attached patches for backbranches.\n\nFWIW, I'd recommend waiting till after next week's wrap before\npushing these. While I'm definitely in favor of doing this,\nthe odds of introducing a bug are nonzero, so right before a\nrelease deadline doesn't seem like a good time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Aug 2022 09:43:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "On Thu, Aug 4, 2022 5:49 PM shiy.fnst@fujitsu.com <shiy.fnst@fujitsu.com> wrote:\r\n> \r\n> On Thu, Aug 4, 2022 2:28 PM Masahiko Sawada <sawada.mshk@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Thu, Aug 4, 2022 at 10:37 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> > wrote:\r\n> > >\r\n> > > On Wed, Aug 3, 2022 at 10:21 AM Amit Kapila\r\n> <amit.kapila16@gmail.com>\r\n> > wrote:\r\n> > > >\r\n> > > > Pushed this one and now I'll look at your other patch.\r\n> > > >\r\n> > >\r\n> > > I have pushed the second patch as well after making minor changes in\r\n> > > the comments. Alvaro [1] and Tom [2] suggest to back-patch this and\r\n> > > they sound reasonable to me. Will you be able to produce back branch\r\n> > > patches?\r\n> >\r\n> > Yes. I've attached patches for backbranches. The updates are\r\n> > straightforward on v11 - v15. However, on v10, we don't use\r\n> > wait_for_catchup() in some logical replication test cases. The commit\r\n> > bbd3363e128dae refactored the tests to use wait_for_catchup but it's\r\n> > not backpatched. So in the patch for v10, I didn't change the code\r\n> > that was changed by the commit. Also, since wait_for_catchup requires\r\n> > to specify $target_lsn, unlike the one in v11 or later, I changed\r\n> > wait_for_subscription_sync() accordingly.\r\n> >\r\n> \r\n> Thanks for your patches.\r\n> \r\n> In the patches for pg11 ~ pg14, it looks we need to add a \"=pod\" before the\r\n> current change in PostgresNode.pm. Right?\r\n> \r\n\r\nBy the way, I notice that in 002_types.pl (on master branch), it seems the \"as\r\nwell\" in the following comment should be removed. Is it worth being fixed?\r\n\r\n$node_subscriber->safe_psql('postgres',\r\n\t\"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr' PUBLICATION tap_pub WITH (slot_name = tap_sub_slot)\"\r\n);\r\n\r\n# Wait for initial sync to finish as well\r\n$node_subscriber->wait_for_subscription_sync($node_publisher, 'tap_sub');\r\n\r\nRegards,\r\nShi yu\r\n\r\n",
"msg_date": "Fri, 5 Aug 2022 01:39:49 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 7:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> > Yes. I've attached patches for backbranches.\n>\n> FWIW, I'd recommend waiting till after next week's wrap before\n> pushing these. While I'm definitely in favor of doing this,\n> the odds of introducing a bug are nonzero, so right before a\n> release deadline doesn't seem like a good time.\n>\n\nAgreed. I was planning to do it only after next week's wrap. Thanks\nfor your suggestion.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 5 Aug 2022 08:27:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "On Fri, Aug 5, 2022 at 10:39 AM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Thu, Aug 4, 2022 5:49 PM shiy.fnst@fujitsu.com <shiy.fnst@fujitsu.com> wrote:\n> >\n> > On Thu, Aug 4, 2022 2:28 PM Masahiko Sawada <sawada.mshk@gmail.com>\n> > wrote:\n> > >\n> > > On Thu, Aug 4, 2022 at 10:37 AM Amit Kapila <amit.kapila16@gmail.com>\n> > > wrote:\n> > > >\n> > > > On Wed, Aug 3, 2022 at 10:21 AM Amit Kapila\n> > <amit.kapila16@gmail.com>\n> > > wrote:\n> > > > >\n> > > > > Pushed this one and now I'll look at your other patch.\n> > > > >\n> > > >\n> > > > I have pushed the second patch as well after making minor changes in\n> > > > the comments. Alvaro [1] and Tom [2] suggest to back-patch this and\n> > > > they sound reasonable to me. Will you be able to produce back branch\n> > > > patches?\n> > >\n> > > Yes. I've attached patches for backbranches. The updates are\n> > > straightforward on v11 - v15. However, on v10, we don't use\n> > > wait_for_catchup() in some logical replication test cases. The commit\n> > > bbd3363e128dae refactored the tests to use wait_for_catchup but it's\n> > > not backpatched. So in the patch for v10, I didn't change the code\n> > > that was changed by the commit. Also, since wait_for_catchup requires\n> > > to specify $target_lsn, unlike the one in v11 or later, I changed\n> > > wait_for_subscription_sync() accordingly.\n> > >\n> >\n> > Thanks for your patches.\n> >\n> > In the patches for pg11 ~ pg14, it looks we need to add a \"=pod\" before the\n> > current change in PostgresNode.pm. Right?\n> >\n>\n> By the way, I notice that in 002_types.pl (on master branch), it seems the \"as\n> well\" in the following comment should be removed. 
Is it worth being fixed?\n>\n> $node_subscriber->safe_psql('postgres',\n> \"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr' PUBLICATION tap_pub WITH (slot_name = tap_sub_slot)\"\n> );\n>\n> # Wait for initial sync to finish as well\n> $node_subscriber->wait_for_subscription_sync($node_publisher, 'tap_sub');\n>\n\nThank you for the comments. I've attached updated version patches.\nPlease review them.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Wed, 10 Aug 2022 14:09:20 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "On Wed, Aug 10, 2022 at 10:39 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Aug 5, 2022 at 10:39 AM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n>\n> Thank you for the comments. I've attached updated version patches.\n> Please review them.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 12 Aug 2022 14:17:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> Pushed.\n\nRecently a number of buildfarm animals have failed at the same\nplace in src/test/subscription/t/100_bugs.pl [1][2][3][4]:\n\n# Failed test '2x3000 rows in t'\n# at t/100_bugs.pl line 149.\n# got: '9000'\n# expected: '6000'\n# Looks like you failed 1 test of 7.\n[09:30:56] t/100_bugs.pl ...................... \n\nThis was the last commit to touch that test script. I'm thinking\nmaybe it wasn't adjusted quite correctly? On the other hand, since\nI can't find any similar failures before the last 48 hours, maybe\nthere is some other more-recent commit to blame. Anyway, something\nis wrong there.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2022-09-09%2012%3A03%3A46\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2022-09-09%2011%3A16%3A36\n[3] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2022-09-09%2010%3A33%3A19\n[4] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=peripatus&dt=2022-09-08%2010%3A56%3A59\n\n\n",
"msg_date": "Fri, 09 Sep 2022 10:31:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "On Fri, Sep 9, 2022 at 11:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > Pushed.\n>\n> Recently a number of buildfarm animals have failed at the same\n> place in src/test/subscription/t/100_bugs.pl [1][2][3][4]:\n>\n> # Failed test '2x3000 rows in t'\n> # at t/100_bugs.pl line 149.\n> # got: '9000'\n> # expected: '6000'\n> # Looks like you failed 1 test of 7.\n> [09:30:56] t/100_bugs.pl ......................\n>\n> This was the last commit to touch that test script. I'm thinking\n> maybe it wasn't adjusted quite correctly? On the other hand, since\n> I can't find any similar failures before the last 48 hours, maybe\n> there is some other more-recent commit to blame. Anyway, something\n> is wrong there.\n\nIt seems that this commit is innocent as it changed only how to wait.\nRather, looking at the logs, the tablesync worker errored out at an\ninteresting point:\n\n022-09-09 09:30:19.630 EDT [631b3feb.840:13]\npg_16400_sync_16392_7141371862484106124 ERROR: could not find record\nwhile sending logically-decoded data: missing contrecord at 0/1D4FFF8\n2022-09-09 09:30:19.630 EDT [631b3feb.840:14]\npg_16400_sync_16392_7141371862484106124 STATEMENT: START_REPLICATION\nSLOT \"pg_16400_sync_16392_7141371862484106124\" LOGICAL 0/0\n(proto_version '3', origin 'any', publication_names '\"testpub\"')\nERROR: could not find record while sending logically-decoded data:\nmissing contrecord at 0/1D4FFF8\n2022-09-09 09:30:19.631 EDT [631b3feb.26e8:2] ERROR: error while\nshutting down streaming COPY: ERROR: could not find record while\nsending logically-decoded data: missing contrecord at 0/1D4FFF8\n\nIt's likely that the commit f6c5edb8abcac04eb3eac6da356e59d399b2bcef\nis relevant.\n\nRegards,\n\n-- \nMasahiko Sawada\n\n\n",
"msg_date": "Sat, 10 Sep 2022 06:33:51 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> On Fri, Sep 9, 2022 at 11:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Recently a number of buildfarm animals have failed at the same\n>> place in src/test/subscription/t/100_bugs.pl [1][2][3][4]:\n>> \n>> # Failed test '2x3000 rows in t'\n>> # at t/100_bugs.pl line 149.\n>> # got: '9000'\n>> # expected: '6000'\n>> # Looks like you failed 1 test of 7.\n>> [09:30:56] t/100_bugs.pl ......................\n>> \n>> This was the last commit to touch that test script. I'm thinking\n>> maybe it wasn't adjusted quite correctly? On the other hand, since\n>> I can't find any similar failures before the last 48 hours, maybe\n>> there is some other more-recent commit to blame. Anyway, something\n>> is wrong there.\n\n> It seems that this commit is innocent as it changed only how to wait.\n\nYeah. I was wondering if it caused us to fail to wait somewhere,\nbut I concur that's not all that likely.\n\n> It's likely that the commit f6c5edb8abcac04eb3eac6da356e59d399b2bcef\n> is relevant.\n\nNoting that the errors have only appeared in the past couple of\ndays, I'm now suspicious of adb466150b44d1eaf43a2d22f58ff4c545a0ed3f\n(Fix recovery_prefetch with low maintenance_io_concurrency).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Sep 2022 17:45:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "On Sat, Sep 10, 2022 at 6:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> > On Fri, Sep 9, 2022 at 11:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Recently a number of buildfarm animals have failed at the same\n> >> place in src/test/subscription/t/100_bugs.pl [1][2][3][4]:\n> >>\n> >> # Failed test '2x3000 rows in t'\n> >> # at t/100_bugs.pl line 149.\n> >> # got: '9000'\n> >> # expected: '6000'\n> >> # Looks like you failed 1 test of 7.\n> >> [09:30:56] t/100_bugs.pl ......................\n> >>\n> >> This was the last commit to touch that test script. I'm thinking\n> >> maybe it wasn't adjusted quite correctly? On the other hand, since\n> >> I can't find any similar failures before the last 48 hours, maybe\n> >> there is some other more-recent commit to blame. Anyway, something\n> >> is wrong there.\n>\n> > It seems that this commit is innocent as it changed only how to wait.\n>\n> Yeah. I was wondering if it caused us to fail to wait somewhere,\n> but I concur that's not all that likely.\n>\n> > It's likely that the commit f6c5edb8abcac04eb3eac6da356e59d399b2bcef\n> > is relevant.\n>\n> Noting that the errors have only appeared in the past couple of\n> days, I'm now suspicious of adb466150b44d1eaf43a2d22f58ff4c545a0ed3f\n> (Fix recovery_prefetch with low maintenance_io_concurrency).\n\nProbably I found the cause of this failure[1]. The commit\nf6c5edb8abcac04eb3eac6da356e59d399b2bcef didn't fix the problem\nproperly.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoAw0Oofi4kiDpJBOwpYyBBBkJj%3DsLUOn4Gd2GjUAKG-fw%40mail.gmail.com\n\n-- \nMasahiko Sawada\n\n\n",
"msg_date": "Sat, 10 Sep 2022 06:54:42 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "On Sat, Sep 10, 2022 at 9:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> > On Fri, Sep 9, 2022 at 11:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Recently a number of buildfarm animals have failed at the same\n> >> place in src/test/subscription/t/100_bugs.pl [1][2][3][4]:\n> >>\n> >> # Failed test '2x3000 rows in t'\n> >> # at t/100_bugs.pl line 149.\n> >> # got: '9000'\n> >> # expected: '6000'\n> >> # Looks like you failed 1 test of 7.\n> >> [09:30:56] t/100_bugs.pl ......................\n> >>\n> >> This was the last commit to touch that test script. I'm thinking\n> >> maybe it wasn't adjusted quite correctly? On the other hand, since\n> >> I can't find any similar failures before the last 48 hours, maybe\n> >> there is some other more-recent commit to blame. Anyway, something\n> >> is wrong there.\n>\n> > It seems that this commit is innocent as it changed only how to wait.\n>\n> Yeah. I was wondering if it caused us to fail to wait somewhere,\n> but I concur that's not all that likely.\n>\n> > It's likely that the commit f6c5edb8abcac04eb3eac6da356e59d399b2bcef\n> > is relevant.\n>\n> Noting that the errors have only appeared in the past couple of\n> days, I'm now suspicious of adb466150b44d1eaf43a2d22f58ff4c545a0ed3f\n> (Fix recovery_prefetch with low maintenance_io_concurrency).\n\nYeah, I also just spotted the coincidence of those failures while\nmonitoring the build farm. I'll look into this later today. My\ninitial suspicion is that there was pre-existing code here that was\n(incorrectly?) relying on the lack of error reporting in that case.\nBut maybe I misunderstood and it was incorrect to report the error for\nsome reason that was not robustly covered with tests.\n\n\n",
"msg_date": "Sat, 10 Sep 2022 10:00:51 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "On Sat, Sep 10, 2022 at 10:00 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, Sep 10, 2022 at 9:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> > > It's likely that the commit f6c5edb8abcac04eb3eac6da356e59d399b2bcef\n> > > is relevant.\n> >\n> > Noting that the errors have only appeared in the past couple of\n> > days, I'm now suspicious of adb466150b44d1eaf43a2d22f58ff4c545a0ed3f\n> > (Fix recovery_prefetch with low maintenance_io_concurrency).\n>\n> Yeah, I also just spotted the coincidence of those failures while\n> monitoring the build farm. I'll look into this later today. My\n> initial suspicion is that there was pre-existing code here that was\n> (incorrectly?) relying on the lack of error reporting in that case.\n> But maybe I misunderstood and it was incorrect to report the error for\n> some reason that was not robustly covered with tests.\n\nAfter I wrote that I saw Sawada-san's message and waited for more\ninformation, and I see there was now a commit. I noticed that\nperipatus was already logging the 'missing contrecord' error even when\nit didn't fail the test, and still does. I'm still looking into that\n(ie whether I need to take that new report_invalid_record() call out\nand replace it with errormsg_deferred = true so that XLogReadRecord()\nreturns NULL with no error message in this case).\n\n\n",
"msg_date": "Mon, 12 Sep 2022 22:42:31 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce wait_for_subscription_sync for TAP tests"
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 10:42 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, Sep 10, 2022 at 10:00 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Sat, Sep 10, 2022 at 9:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> > > > It's likely that the commit f6c5edb8abcac04eb3eac6da356e59d399b2bcef\n> > > > is relevant.\n> > >\n> > > Noting that the errors have only appeared in the past couple of\n> > > days, I'm now suspicious of adb466150b44d1eaf43a2d22f58ff4c545a0ed3f\n> > > (Fix recovery_prefetch with low maintenance_io_concurrency).\n> >\n> > Yeah, I also just spotted the coincidence of those failures while\n> > monitoring the build farm. I'll look into this later today. My\n> > initial suspicion is that there was pre-existing code here that was\n> > (incorrectly?) relying on the lack of error reporting in that case.\n> > But maybe I misunderstood and it was incorrect to report the error for\n> > some reason that was not robustly covered with tests.\n>\n> After I wrote that I saw Sawada-san's message and waited for more\n> information, and I see there was now a commit. I noticed that\n> peripatus was already logging the 'missing contrecord' error even when\n> it didn't fail the test, and still does. I'm still looking into that\n> (ie whether I need to take that new report_invalid_record() call out\n> and replace it with errormsg_deferred = true so that XLogReadRecord()\n> returns NULL with no error message in this case).\n\nI will go ahead and remove this new error message added by adb46615.\nThe message was correlated with the problem on peripatus fixed by\n88f48831, but not the cause of it -- but it's also not terribly\nhelpful and might be confusing. 
It might be reported: (1) in\npg_waldump when you hit the end of the segment with a missing\ncontrecord after the end, arguably rightfully, but then perhaps\nsomeone might complain that they expect an error from pg_waldump only\non the final live segment at the end of the WAL, and (2) in a\nwalsender that is asked to shut down while between reads of pages with\na spanning contrecord, reported by logical_read_xlog_page() with a\nmessageless error (presumably peripatus's case?), (3) in crash\nrecovery with wal_recycle off (whereas normally you'd expect to see\nthe page_addr message from a recycled file), maybe more legitimately\nthan the above. The problem I needed to solve can be solved without\nthe message just by setting that flag as mentioned above, so I'll do\nthat to remove the new noise.\n\n\n",
"msg_date": "Fri, 30 Sep 2022 15:11:49 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce wait_for_subscription_sync for TAP tests"
}
] |
[
{
"msg_contents": "Hi,\n\nI propose to acquire SI-read predicate locks on materialized views\nas the attached patch.\n\nCurrently, materialized views do not participate in predicate locking,\nbut I think this causes a serialization anomaly when `REFRESH\nMATERIALIZED VIEW CONCURRENTLY` is used.\n\nFor example, supporse that there is a table \"orders\" which contains\norder information and a materialized view \"order_summary\" which contains\nsummary of the order information. \n\n CREATE TABLE orders (date date, item text, num int);\n\n CREATE MATERIALIZED VIEW order_summary AS\n SELECT date, item, sum(num) FROM orders GROUP BY date, item;\n\n\"order_summary\" is refreshed once per day in the following transaction.\n\n T1:\n REFRESH MATERIALIZED VIEW CONCURRENTLY order_summary;\n\n\"orders\" has a date column, and when a new item is inserted, the date\nvalue is determined as the next day of the last date recorded in\n\"order_summary\" as in the following transaction.\n \n T2:\n SELECT max(date) + 1 INTO today FROM order_summary;\n INSERT INTO orders(date, item, num) VALUES (today, 'apple', 1);\n\nIf such two transactions run concurrently, a write skew anomaly occurs,\nand the result of order_summary refreshed in T1 will not contain the\nrecord inserted in T2. \n\nOn the other hand, if the materialized view participates in predicate\nlocking and the transaction isolation level is SELIALIZABLE, this\nanomaly can be avoided; one of the transaction will be aborted and\nsuggested to be retried.\n\nThe problem doesn't occur when we use REFRESH MATERIALIZED VIEW\n(not CONCURRENTLY) because it acquires the strongest lock and\nany concurrent transactions are prevent from reading the materialized view.\nI think this is the reason why materialized views didn't have to\nparticipate in predicate locking. However, this is no longer the case\nbecause now we support REFRESH ... 
CONCURRENTLY which refreshes the\nmaterialized view using DELETE and INSERT and also allow to read it\nfrom concurrent transactions. I think we can regard them as same as\nDELETE, INSERT, and SELECT on regular tables and acquire predicate\nlocks on materialized views as well.\n\nWhat do you think about it?\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Tue, 26 Jul 2022 16:44:34 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "SI-read predicate locks on materialized views"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 3:44 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> If such two transactions run concurrently, a write skew anomaly occurs,\n> and the result of order_summary refreshed in T1 will not contain the\n> record inserted in T2.\n\n\nIndeed we have write skew anomaly here between the two transactions.\n\n\n> On the other hand, if the materialized view participates in predicate\n> locking and the transaction isolation level is SELIALIZABLE, this\n> anomaly can be avoided; one of the transaction will be aborted and\n> suggested to be retried.\n\n\nThe idea works for me.\n\nThanks\nRichard\n\nOn Tue, Jul 26, 2022 at 3:44 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\nIf such two transactions run concurrently, a write skew anomaly occurs,\nand the result of order_summary refreshed in T1 will not contain the\nrecord inserted in T2.Indeed we have write skew anomaly here between the two transactions. \nOn the other hand, if the materialized view participates in predicate\nlocking and the transaction isolation level is SELIALIZABLE, this\nanomaly can be avoided; one of the transaction will be aborted and\nsuggested to be retried.The idea works for me.ThanksRichard",
"msg_date": "Tue, 26 Jul 2022 18:00:57 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SI-read predicate locks on materialized views"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 3:31 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n>\n> On Tue, Jul 26, 2022 at 3:44 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n>>\n>> If such two transactions run concurrently, a write skew anomaly occurs,\n>> and the result of order_summary refreshed in T1 will not contain the\n>> record inserted in T2.\n\nYes we do have write skew anomaly. I think the patch looks fine to me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 9 Sep 2022 16:27:45 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SI-read predicate locks on materialized views"
},
{
"msg_contents": "On Fri, 9 Sep 2022 16:27:45 +0530\nDilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Tue, Jul 26, 2022 at 3:31 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> >\n> >\n> > On Tue, Jul 26, 2022 at 3:44 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n> >>\n> >> If such two transactions run concurrently, a write skew anomaly occurs,\n> >> and the result of order_summary refreshed in T1 will not contain the\n> >> record inserted in T2.\n> \n> Yes we do have write skew anomaly. I think the patch looks fine to me.\n\nThank you for comment. Do you think it can be marked as Ready for Commiter?\n\nRegards,\nYugo Nagata\n\n> \n> -- \n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Fri, 30 Sep 2022 10:12:13 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: SI-read predicate locks on materialized views"
},
{
"msg_contents": "On Fri, Sep 30, 2022 at 10:12:13AM +0900, Yugo NAGATA wrote:\n> Thank you for comment. Do you think it can be marked as Ready for Commiter?\n\nMatviews have been discarded from needing predicate locks since\n3bf3ab8 and their introduction, where there was no concurrent flavor\nof refresh yet. Shouldn't this patch have at least an isolation test\nto show the difference in terms of read-write conflicts with some\nserializable transactions and REFRESH CONCURRENTLY?\n--\nMichael",
"msg_date": "Thu, 13 Oct 2022 17:02:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: SI-read predicate locks on materialized views"
},
{
"msg_contents": "Hello Micheal-san,\n\nOn Thu, 13 Oct 2022 17:02:06 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Sep 30, 2022 at 10:12:13AM +0900, Yugo NAGATA wrote:\n> > Thank you for comment. Do you think it can be marked as Ready for Commiter?\n> \n> Matviews have been discarded from needing predicate locks since\n> 3bf3ab8 and their introduction, where there was no concurrent flavor\n> of refresh yet. Shouldn't this patch have at least an isolation test\n> to show the difference in terms of read-write conflicts with some\n> serializable transactions and REFRESH CONCURRENTLY?\n\nThank you for your review. I agree that an isolation test is required.\nThe attached patch contains the test using the scenario as explained in\nthe previous post.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Tue, 18 Oct 2022 17:29:58 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: SI-read predicate locks on materialized views"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 05:29:58PM +0900, Yugo NAGATA wrote:\n> Thank you for your review. I agree that an isolation test is required.\n> The attached patch contains the test using the scenario as explained in\n> the previous post.\n\nCool, thanks. Sorry for my late reply here. I have put my head on\nthat for a few hours and could not see why we should not allow that.\nSo committed the change after a few tweaks to the tests with the use\nof custom permutations, mainly.\n\nWhile looking at all that, I have looked at the past threads like [1],\njust to note that this has never been really mentioned.\n\n[1]: https://www.postgresql.org/message-id/1371225929.28496.YahooMailNeo@web162905.mail.bf1.yahoo.com\n--\nMichael",
"msg_date": "Thu, 1 Dec 2022 15:48:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: SI-read predicate locks on materialized views"
},
{
"msg_contents": "On Thu, 1 Dec 2022 15:48:21 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Oct 18, 2022 at 05:29:58PM +0900, Yugo NAGATA wrote:\n> > Thank you for your review. I agree that an isolation test is required.\n> > The attached patch contains the test using the scenario as explained in\n> > the previous post.\n> \n> Cool, thanks. Sorry for my late reply here. I have put my head on\n> that for a few hours and could not see why we should not allow that.\n> So committed the change after a few tweaks to the tests with the use\n> of custom permutations, mainly.\n\nThank!\n\n> While looking at all that, I have looked at the past threads like [1],\n> just to note that this has never been really mentioned.\n> \n> [1]: https://www.postgresql.org/message-id/1371225929.28496.YahooMailNeo@web162905.mail.bf1.yahoo.com\n> --\n> Michael\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Thu, 1 Dec 2022 16:53:11 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: SI-read predicate locks on materialized views"
}
] |
[
{
"msg_contents": "A long time ago, Tom Lane came up with the idea that when tables get\nbloated, tables might be allowed to shrink down again in size\nnaturally by altering the way FSM allocates blocks. That's a very good\nidea, but we didn't implement it back then...\n\nThis patch allows the Heap to specify what FreeSpaceStrategy it would\nlike to see.\n\n(extract from attached patch...)\n+typedef enum FreeSpaceStrategy\n+{\n+ FREESPACE_STRATEGY_MAX_CONCURRENCY = 0,\n+ /*\n+ * Each time we ask for a new block with freespace this will set\n+ * the advancenext flag which increments the next block by one.\n+ * The effect of this is to ensure that all backends are given\n+ * a separate block, minimizing block contention and thereby\n+ * maximising concurrency. This is the default strategy used by\n+ * PostgreSQL since at least PostgreSQL 8.4.\n+ */\n+ FREESPACE_STRATEGY_MAX_COMPACT\n+ /*\n+ * All backends are given the earliest block in the table with\n+ * sufficient freespace for the insert. This could cause block\n+ * contention for concurrent inserts, but ensures maximum data\n+ * compaction, which will then allow vacuum truncation\nto release\n+ * as much space as possible. This strategy may be appropriate\n+ * for short periods if a table becomes bloated.\n+ */\n+} FreeSpaceStrategy;\n\nAll we need is a simple heuristic to allow us to choose between\nvarious strategies.\n\nYour input is welcome! Please read the short patch.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Tue, 26 Jul 2022 10:34:09 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Max compact as an FSM strategy"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 3:04 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> A long time ago, Tom Lane came up with the idea that when tables get\n> bloated, tables might be allowed to shrink down again in size\n> naturally by altering the way FSM allocates blocks. That's a very good\n> idea, but we didn't implement it back then...\n>\n> This patch allows the Heap to specify what FreeSpaceStrategy it would\n> like to see.\n>\n> (extract from attached patch...)\n> +typedef enum FreeSpaceStrategy\n> +{\n> + FREESPACE_STRATEGY_MAX_CONCURRENCY = 0,\n> + /*\n> + * Each time we ask for a new block with freespace this will set\n> + * the advancenext flag which increments the next block by one.\n> + * The effect of this is to ensure that all backends are given\n> + * a separate block, minimizing block contention and thereby\n> + * maximising concurrency. This is the default strategy used by\n> + * PostgreSQL since at least PostgreSQL 8.4.\n> + */\n> + FREESPACE_STRATEGY_MAX_COMPACT\n> + /*\n> + * All backends are given the earliest block in the table with\n> + * sufficient freespace for the insert. This could cause block\n> + * contention for concurrent inserts, but ensures maximum data\n> + * compaction, which will then allow vacuum truncation\n> to release\n> + * as much space as possible. This strategy may be appropriate\n> + * for short periods if a table becomes bloated.\n> + */\n> +} FreeSpaceStrategy;\n\nI think this is a really interesting idea. So IIUC this patch enables\nan option to select between the strategy but don't yet decide on that.\n\n> All we need is a simple heuristic to allow us to choose between\n> various strategies.\n\nI think it would be really interesting to see what would be the exact\ndeciding point between these strategies. Because when we switch from\nCONCURRENCY to COMPACT it would immediately affect the insert/update\nperformance but it would control the bloat. 
So I am not sure whether\nthe selection should be completely based on the heuristic or there\nshould be some GUC parameter where the user can decide at what point\nwe should switch to the COMPACT strategy or it should not at all\nswitch?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 26 Jul 2022 15:32:28 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Max compact as an FSM strategy"
},
{
"msg_contents": "On Tue, 26 Jul 2022 at 11:02, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Jul 26, 2022 at 3:04 PM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> >\n> > A long time ago, Tom Lane came up with the idea that when tables get\n> > bloated, tables might be allowed to shrink down again in size\n> > naturally by altering the way FSM allocates blocks. That's a very good\n> > idea, but we didn't implement it back then...\n> >\n> > This patch allows the Heap to specify what FreeSpaceStrategy it would\n> > like to see.\n\n> I think this is a really interesting idea. So IIUC this patch enables\n> an option to select between the strategy but don't yet decide on that.\n\nCorrect\n\n> > All we need is a simple heuristic to allow us to choose between\n> > various strategies.\n>\n> I think it would be really interesting to see what would be the exact\n> deciding point between these strategies. Because when we switch from\n> CONCURRENCY to COMPACT it would immediately affect the insert/update\n> performance but it would control the bloat. So I am not sure whether\n> the selection should be completely based on the heuristic or there\n> should be some GUC parameter where the user can decide at what point\n> we should switch to the COMPACT strategy or it should not at all\n> switch?\n\nHow and when is the right question. I am happy to hear thoughts and\nopinions from others before coming up with a specific scheme to do\nthat.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 26 Jul 2022 11:50:09 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Max compact as an FSM strategy"
}
] |
[
{
"msg_contents": "On Fri, Jul 22, 2022 at 14:49 Peter Geoghegan wrote:\n> The line numbers from your stack trace don't match up with> REL_14_STABLE. Is this actually a fork of Postgres 14? (Oh, looks like\n> it's an old beta release.)\n\nYeah, I was testing on 14beta2 branch once. So I considered your\nadvices and test on REL_14_STABLE branch. But I'm not lucky\nenough to repeat this core file during the past few days.\n\n> It would also be helpful if you told us about the specific table\n> involved. Though the important thing (the essential thing) is to test\n> today's REL_14_STABLE. There have been *lots* of bug fixes since\n> Postgres 14 beta2 was current.\n\nBesides, the logic of heap_update and RelationGetBufferForTuple functions\nare mostly same between 14beta2 and REL_14_STABLE versions. \nSo, I take a deep research on heap_update and RelationGetBufferForTuple\nfunctions and find one code path for missing recheck of all-visible flag after\nlocking buffer with BUFFER_LOCK_EXCLUSIVE.\n\nThe following is the code path:\n(1) heap_update calls RelationGetBufferForTuple to get new suitable buffer;\n(2) during 'loop' loop of RelationGetBufferForTuple, it looks up one suitable new\nbuffer via FSM. But we assume that it was failed to find one available buffer here\nand old buffer ('otherBuffer') was not set the all-visible flag.\n(3) Next, it decides to bulk-extend the relation via RelationAddExtraBlocks and\n\nget one new locked empty buffer via ReadBufferBI within ExclusiveLock.\n\n\n(4) Then, it's succeed to ConditionalLockBuffer old buffer ('otherBuffer') and\n\nreturns the new buffer number without rechecking the all-visibility flag of old buffer.\n(5) Finally, heap_update do the real update work and clear the all-visibility flag of\n both old buffer and new buffer. It finds that the old buffer was set all-visibility\nflag but vmbuffer is still 0. 
At last, visibilitymap_clear reports error 'wrong buffer\npassed to visibilitymap_clear'.\n\nI propose one patch which rechecks the all-visibility flag of old buffer after\nConditionalLockBuffer old buffer is successful. I start to test this patch for\ncouple of days and it seems to work well for me till now.\n\n--\nRegards,\nrogers.ww",
"msg_date": "Tue, 26 Jul 2022 17:51:07 +0800",
"msg_from": "\"=?UTF-8?B?546L5LyfKOWtpuW8iCk=?=\" <rogers.ww@alibaba-inc.com>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?5Zue5aSN77yaUmU6IFBBTklDOiB3cm9uZyBidWZmZXIgcGFzc2VkIHRvIHZpc2liaWxpdHlt?=\n =?UTF-8?B?YXBfY2xlYXI=?="
}
] |
[
{
"msg_contents": "Hi,\n\n\nFORCE_NOT_NULL and FORCE_NULL are only used when COPY FROM.\n\nAnd copyto.c and copyfrom.c are split in this commit https://github.com/postgres/postgres//commit/c532d15dddff14b01fe9ef1d465013cb8ef186df <https://github.com/postgres/postgres//commit/c532d15dddff14b01fe9ef1d465013cb8ef186df> .\n\nThere is no need to handle these options when COPY TO.\n\n\nRegards,\nZhang Mingli",
"msg_date": "Wed, 27 Jul 2022 00:16:33 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "[Refactor]Avoid to handle FORCE_NOT_NULL/FORCE_NULL options when COPY\n TO"
},
{
"msg_contents": "At Wed, 27 Jul 2022 00:16:33 +0800, Zhang Mingli <zmlpostgres@gmail.com> wrote in \n> FORCE_NOT_NULL and FORCE_NULL are only used when COPY FROM.\n> \n> And copyto.c and copyfrom.c are split in this commit https://github.com/postgres/postgres//commit/c532d15dddff14b01fe9ef1d465013cb8ef186df <https://github.com/postgres/postgres//commit/c532d15dddff14b01fe9ef1d465013cb8ef186df> .\n> \n> There is no need to handle these options when COPY TO.\n\nAgreed.\n\nProcessCopyOptions previously rejects force_quote_all for COPY FROM\nand copyfrom.c is not even conscious of the option (that is, even no\nassertion on it). The two options are rejected for COPY TO by the same\nfunction so it seems like a thinko of the commit. Getting rid of the\ncode would be good in the view of code coverage and maintenance.\n\nOn the otherhand I wonder if it is good that we have assertions on the\noption values.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 27 Jul 2022 13:55:49 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Refactor]Avoid to handle FORCE_NOT_NULL/FORCE_NULL options\n when COPY TO"
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 12:55 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> ProcessCopyOptions previously rejects force_quote_all for COPY FROM\n> and copyfrom.c is not even conscious of the option (that is, even no\n> assertion on it). The two options are rejected for COPY TO by the same\n> function so it seems like a thinko of the commit. Getting rid of the\n> code would be good in the view of code coverage and maintenance.\n\n\nYeah, ProcessCopyOptions() does have the check for force_notnull and\nforce_null whether it is using COPY FROM and whether it is in CSV mode.\nSo the codes in copyto.c processing force_notnull/force_null are\nactually dead codes.\n\n\n> On the otherhand I wonder if it is good that we have assertions on the\n> option values.\n\n\nAgree. Assertions would be better.\n\nThanks\nRichard\n\nOn Wed, Jul 27, 2022 at 12:55 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\nProcessCopyOptions previously rejects force_quote_all for COPY FROM\nand copyfrom.c is not even conscious of the option (that is, even no\nassertion on it). The two options are rejected for COPY TO by the same\nfunction so it seems like a thinko of the commit. Getting rid of the\ncode would be good in the view of code coverage and maintenance.Yeah, ProcessCopyOptions() does have the check for force_notnull andforce_null whether it is using COPY FROM and whether it is in CSV mode.So the codes in copyto.c processing force_notnull/force_null areactually dead codes. \nOn the otherhand I wonder if it is good that we have assertions on the\noption values.Agree. Assertions would be better.ThanksRichard",
"msg_date": "Wed, 27 Jul 2022 14:37:40 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Refactor]Avoid to handle FORCE_NOT_NULL/FORCE_NULL options when\n COPY TO"
},
{
"msg_contents": "Hi, all\n\nAssertions added.\n\nThanks for review.\n\nRegards,\n\nZhang Mingli\n\nSent with a Spark\nOn Jul 27, 2022, 14:37 +0800, Richard Guo <guofenglinux@gmail.com>, wrote:\n>\n> > On Wed, Jul 27, 2022 at 12:55 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > > ProcessCopyOptions previously rejects force_quote_all for COPY FROM\n> > > and copyfrom.c is not even conscious of the option (that is, even no\n> > > assertion on it). The two options are rejected for COPY TO by the same\n> > > function so it seems like a thinko of the commit. Getting rid of the\n> > > code would be good in the view of code coverage and maintenance.\n> >\n> > Yeah, ProcessCopyOptions() does have the check for force_notnull and\n> > force_null whether it is using COPY FROM and whether it is in CSV mode.\n> > So the codes in copyto.c processing force_notnull/force_null are\n> > actually dead codes.\n> >\n> > > On the otherhand I wonder if it is good that we have assertions on the\n> > > option values.\n> >\n> > Agree. Assertions would be better.\n> >\n> > Thanks\n> > Richard",
"msg_date": "Thu, 28 Jul 2022 21:04:04 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Refactor]Avoid to handle FORCE_NOT_NULL/FORCE_NULL options\n when COPY TO"
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 9:04 PM Zhang Mingli <zmlpostgres@gmail.com> wrote:\n\n> Assertions added.\n>\n\nCan we also add assertions to make sure force_quote, force_notnull and\nforce_null are available only in CSV mode?\n\nThanks\nRichard\n\nOn Thu, Jul 28, 2022 at 9:04 PM Zhang Mingli <zmlpostgres@gmail.com> wrote:\nAssertions added. Can we also add assertions to make sure force_quote, force_notnull andforce_null are available only in CSV mode?ThanksRichard",
"msg_date": "Fri, 29 Jul 2022 11:23:57 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Refactor]Avoid to handle FORCE_NOT_NULL/FORCE_NULL options when\n COPY TO"
},
{
"msg_contents": "HI,\n\nMore assertions added.\n\nThanks.\n\nRegards,\nZhang Mingli\nOn Jul 29, 2022, 11:24 +0800, Richard Guo <guofenglinux@gmail.com>, wrote:\n>\n> > On Thu, Jul 28, 2022 at 9:04 PM Zhang Mingli <zmlpostgres@gmail.com> wrote:\n> > > Assertions added.\n> >\n> > Can we also add assertions to make sure force_quote, force_notnull and\n> > force_null are available only in CSV mode?\n> >\n> > Thanks\n> > Richard",
"msg_date": "Mon, 1 Aug 2022 09:59:49 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Refactor]Avoid to handle FORCE_NOT_NULL/FORCE_NULL options\n when COPY TO"
},
{
"msg_contents": "At Mon, 1 Aug 2022 09:59:49 +0800, Zhang Mingli <zmlpostgres@gmail.com> wrote in \n> On Jul 29, 2022, 11:24 +0800, Richard Guo <guofenglinux@gmail.com>, wrote:\n> >\n> > > On Thu, Jul 28, 2022 at 9:04 PM Zhang Mingli <zmlpostgres@gmail.com> wrote:\n> > > > Assertions added.\n> > >\n> > > Can we also add assertions to make sure force_quote, force_notnull and\n> > > force_null are available only in CSV mode?\n> \n> More assertions added.\n\nAn empty List is not NULL but NIL (which values are identical,\nthough). There are some other option combinations that are rejected\nby ProcessCopyOptions. On the other hand *re*checking all\ncombinations that the function should have rejected is kind of silly.\nAddition to that, I doubt the assertions are really needed even though\nthe wrong values don't lead to any serious consequence.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 02 Aug 2022 13:30:50 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Refactor]Avoid to handle FORCE_NOT_NULL/FORCE_NULL options\n when COPY TO"
},
{
"msg_contents": "Regards,\nZhang Mingli\nOn Aug 2, 2022, 12:30 +0800, Kyotaro Horiguchi <horikyota.ntt@gmail.com>, wrote:\n> An empty List is not NULL but NIL (which values are identical,\n> though).\nThanks for pointing that out. Fix it in new patch.\n> There are some other option combinations that are rejected\n> by ProcessCopyOptions. On the other hand *re*checking all\n> combinations that the function should have rejected is kind of silly.\n> Addition to that, I doubt the assertions are really needed even though\n> the wrong values don't lead to any serious consequence.\n>\nAgree.\nProcessCopyOptions has rejected all invalid combinations and assertions are optional.\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center",
"msg_date": "Tue, 2 Aug 2022 16:13:30 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Refactor]Avoid to handle FORCE_NOT_NULL/FORCE_NULL options\n when COPY TO"
},
{
"msg_contents": "On Tue, Aug 02, 2022 at 04:13:30PM +0800, Zhang Mingli wrote:\n> On Aug 2, 2022, 12:30 +0800, Kyotaro Horiguchi <horikyota.ntt@gmail.com>, wrote:\n>> There are some other option combinations that are rejected\n>> by ProcessCopyOptions. On the other hand *re*checking all\n>> combinations that the function should have rejected is kind of silly.\n>> Addition to that, I doubt the assertions are really needed even though\n>> the wrong values don't lead to any serious consequence.\n>\n> ProcessCopyOptions has rejected all invalid combinations and assertions are optional.\n\nI agree with Horiguchi-san's point here: there is no real point in\nhaving these assertions, especially just after the options are\nprocessed. A few extensions in-core (or even outside core) that I\nknow of, could call BeginCopyTo() or BeginCopyFrom(), but the option\nprocessing is the same for all.\n\nThe point about cleaning up the attribute handling of FORCE_NOT_NULL\nand FORCE_NULL in the COPY TO path is a good catch, though, so let's\nremove all that. I'll go apply this part of the patch in a bit, or\ntomorrow.\n--\nMichael",
"msg_date": "Tue, 1 Nov 2022 16:41:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [Refactor]Avoid to handle FORCE_NOT_NULL/FORCE_NULL options when\n COPY TO"
},
{
"msg_contents": "On Tue, Nov 1, 2022 at 3:41 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Aug 02, 2022 at 04:13:30PM +0800, Zhang Mingli wrote:\n> > On Aug 2, 2022, 12:30 +0800, Kyotaro Horiguchi <horikyota.ntt@gmail.com>,\n> wrote:\n> >> There are some other option combinations that are rejected\n> >> by ProcessCopyOptions. On the other hand *re*checking all\n> >> combinations that the function should have rejected is kind of silly.\n> >> Addition to that, I doubt the assertions are really needed even though\n> >> the wrong values don't lead to any serious consequence.\n> >\n> > ProcessCopyOptions has rejected all invalid combinations and assertions\n> are optional.\n>\n> I agree with Horiguchi-san's point here: there is no real point in\n> having these assertions, especially just after the options are\n> processed. A few extensions in-core (or even outside core) that I\n> know of, could call BeginCopyTo() or BeginCopyFrom(), but the option\n> processing is the same for all.\n\n\nI'm OK with not having these assertions. I have to admit they look\nsomewhat redundant here, after what ProcessCopyOptions has done.\n\nThanks\nRichard\n\nOn Tue, Nov 1, 2022 at 3:41 PM Michael Paquier <michael@paquier.xyz> wrote:On Tue, Aug 02, 2022 at 04:13:30PM +0800, Zhang Mingli wrote:\n> On Aug 2, 2022, 12:30 +0800, Kyotaro Horiguchi <horikyota.ntt@gmail.com>, wrote:\n>> There are some other option combinations that are rejected\n>> by ProcessCopyOptions. On the other hand *re*checking all\n>> combinations that the function should have rejected is kind of silly.\n>> Addition to that, I doubt the assertions are really needed even though\n>> the wrong values don't lead to any serious consequence.\n>\n> ProcessCopyOptions has rejected all invalid combinations and assertions are optional.\n\nI agree with Horiguchi-san's point here: there is no real point in\nhaving these assertions, especially just after the options are\nprocessed. 
A few extensions in-core (or even outside core) that I\nknow of, could call BeginCopyTo() or BeginCopyFrom(), but the option\nprocessing is the same for all. I'm OK with not having these assertions. I have to admit they looksomewhat redundant here, after what ProcessCopyOptions has done.ThanksRichard",
"msg_date": "Tue, 1 Nov 2022 17:51:42 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Refactor]Avoid to handle FORCE_NOT_NULL/FORCE_NULL options when\n COPY TO"
},
{
"msg_contents": "Hi,\n\n>\n\n> The point about cleaning up the attribute handling of FORCE_NOT_NULL\n> and FORCE_NULL in the COPY TO path is a good catch, though, so let's\n> remove all that. I'll go apply this part of the patch in a bit, or\n> tomorrow.\n> --\n> Michael\n\n\nThanks for review!\n\nHi,\nThe point about cleaning up the attribute handling of FORCE_NOT_NULL\nand FORCE_NULL in the COPY TO path is a good catch, though, so let's\nremove all that. I'll go apply this part of the patch in a bit, or\ntomorrow.\n--\nMichaelThanks for review!",
"msg_date": "Tue, 1 Nov 2022 23:45:45 +0800",
"msg_from": "Mingli Zhang <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Refactor]Avoid to handle FORCE_NOT_NULL/FORCE_NULL options when\n COPY TO"
},
{
"msg_contents": "On Tue, Nov 01, 2022 at 05:51:42PM +0800, Richard Guo wrote:\n> I'm OK with not having these assertions. I have to admit they look\n> somewhat redundant here, after what ProcessCopyOptions has done.\n\nThanks, and done.\n\nWhile on it, I have noticed some gaps with the coverage of the code,\nwhere we did not check that FORCE_NULL & co are not allowed in some\ncases. With two tests for BINARY, that made a total of 8 patterns,\napplied as of 451d116.\n--\nMichael",
"msg_date": "Wed, 2 Nov 2022 10:15:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [Refactor]Avoid to handle FORCE_NOT_NULL/FORCE_NULL options when\n COPY TO"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI noticed that commit_ts.c has the following comment:\n\n * XLOG interactions: this module generates an XLOG record whenever a new\n * CommitTs page is initialized to zeroes. Also, one XLOG record is\n * generated for setting of values when the caller requests it; this allows\n * us to support values coming from places other than transaction commit.\n * Other writes of CommitTS come from recording of transaction commit in\n * xact.c, which generates its own XLOG records for these events and will\n * re-perform the status update on redo; so we need make no additional XLOG\n * entry here.\n\nIIUC the ability for callers to request WAL record generation is no longer\npossible as of 08aa89b [0]. Should the second sentence be removed?\n\n[0] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=08aa89b\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 26 Jul 2022 10:33:43 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "out of date comment in commit_ts.c"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 10:33:43AM -0700, Nathan Bossart wrote:\n> IIUC the ability for callers to request WAL record generation is no longer\n> possible as of 08aa89b [0]. Should the second sentence be removed?\n\nHere's a patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 27 Jul 2022 13:29:52 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: out of date comment in commit_ts.c"
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 8:30 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> On Tue, Jul 26, 2022 at 10:33:43AM -0700, Nathan Bossart wrote:\n> > IIUC the ability for callers to request WAL record generation is no longer\n> > possible as of 08aa89b [0]. Should the second sentence be removed?\n>\n> Here's a patch.\n\nPushed.\n\n\n",
"msg_date": "Tue, 9 Aug 2022 13:02:15 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: out of date comment in commit_ts.c"
},
{
"msg_contents": "On Tue, Aug 09, 2022 at 01:02:15PM +1200, Thomas Munro wrote:\n> Pushed.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 8 Aug 2022 20:29:15 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: out of date comment in commit_ts.c"
}
] |
[
{
"msg_contents": "Remove the restriction that the relmap must be 512 bytes.\n\nInstead of relying on the ability to atomically overwrite the\nentire relmap file in one shot, write a new one and durably\nrename it into place. Removing the struct padding and the\ncalculation showing why the map is exactly 512 bytes, and change\nthe maximum number of entries to a nearby round number.\n\nPatch by me, reviewed by Andres Freund and Dilip Kumar.\n\nDiscussion: http://postgr.es/m/CA+TgmoZq5%3DLWDK7kHaUbmWXxcaTuw_QwafgG9dr-BaPym_U8WQ%40mail.gmail.com\nDiscussion: http://postgr.es/m/CAFiTN-ttOXLX75k_WzRo9ar=VvxFhrHi+rJxns997F+yvkm==A@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/d8cd0c6c95c0120168df93aae095df4e0682a08a\n\nModified Files\n--------------\ndoc/src/sgml/monitoring.sgml | 4 +-\nsrc/backend/utils/activity/wait_event.c | 4 +-\nsrc/backend/utils/cache/relmapper.c | 94 +++++++++++++++++++--------------\nsrc/include/utils/wait_event.h | 2 +-\n4 files changed, 58 insertions(+), 46 deletions(-)",
"msg_date": "Tue, 26 Jul 2022 19:10:22 +0000",
"msg_from": "Robert Haas <rhaas@postgresql.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Remove the restriction that the relmap must be 512 bytes."
},
{
"msg_contents": "Hi Robert,\n\nOn Tue, Jul 26, 2022 at 07:10:22PM +0000, Robert Haas wrote:\n> Remove the restriction that the relmap must be 512 bytes.\n> \n> Instead of relying on the ability to atomically overwrite the\n> entire relmap file in one shot, write a new one and durably\n> rename it into place. Removing the struct padding and the\n> calculation showing why the map is exactly 512 bytes, and change\n> the maximum number of entries to a nearby round number.\n> \n> Patch by me, reviewed by Andres Freund and Dilip Kumar.\n> \n> Discussion: http://postgr.es/m/CA+TgmoZq5%3DLWDK7kHaUbmWXxcaTuw_QwafgG9dr-BaPym_U8WQ%40mail.gmail.com\n> Discussion: http://postgr.es/m/CAFiTN-ttOXLX75k_WzRo9ar=VvxFhrHi+rJxns997F+yvkm==A@mail.gmail.com\n\nThe CI on Windows is blowing up here and there after something that\nlooks to come from this commit, as of this backtrace:\n00000000`007fe300 00000001`405c62dd postgres!errfinish(\nchar * filename = 0x00000001`40bf1513 \"fd.c\",\nint lineno = 0n756,\nchar * funcname = 0x00000001`40bf14e0 \"durable_rename\")+0x41b\n[c:\\cirrus\\src\\backend\\utils\\error\\elog.c @ 683]\n00000000`007fe360 00000001`4081647b postgres!durable_rename(\nchar * oldfile = 0x00000000`007fe430 \"base/16384/pg_filenode.map.tmp\",\nchar * newfile = 0x00000000`007fe830 \"base/16384/pg_filenode.map\",\nint elevel = 0n21)+0x22d [c:\\cirrus\\src\\backend\\storage\\file\\fd.c @\n753]\n00000000`007fe3b0 00000001`408166c9 postgres!write_relmap_file(\nstruct RelMapFile * newmap = 0x00000000`007fecb0,\nbool write_wal = true,\nbool send_sinval = true,\nbool preserve_files = true,\nunsigned int dbid = 0x4000,\nunsigned int tsid = 0x67f,\nchar * dbpath = 0x00000000`0090b1c0 \"base/16384\")+0x38b\n[c:\\cirrus\\src\\backend\\utils\\cache\\relmapper.c @ 971]\n\nHere is one of them, kicked by the CF bot, but I have seen similar\ncrashes with some of my own things (see the txt file in crashlog, in a\nmanual 
VACUUM):\nhttps://cirrus-ci.com/task/5240408958566400\n\nThanks,\n--\nMichael",
"msg_date": "Wed, 27 Jul 2022 13:34:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove the restriction that the relmap must be 512 bytes."
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 4:35 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Jul 26, 2022 at 07:10:22PM +0000, Robert Haas wrote:\n> > Remove the restriction that the relmap must be 512 bytes.\n\n> The CI on Windows is blowing up here and there after something that\n> looks to come from this commit, as of this backtrace:\n> 00000000`007fe300 00000001`405c62dd postgres!errfinish(\n> char * filename = 0x00000001`40bf1513 \"fd.c\",\n> int lineno = 0n756,\n> char * funcname = 0x00000001`40bf14e0 \"durable_rename\")+0x41b\n> [c:\\cirrus\\src\\backend\\utils\\error\\elog.c @ 683]\n\nAnd here's what the error looks like:\n\n2022-07-26 19:38:04.321 GMT [8020][client backend]\n[pg_regress/vacuum][8/349:4527] PANIC: could not rename file\n\"global/pg_filenode.map.tmp\" to \"global/pg_filenode.map\": Permission\ndenied\n\nSomeone else still has the old file open, so we can't rename the new\none to its name? On Windows that should have gone through pgrename()\nin dirmod.c, which would retry 100 times with a 100ms sleep between.\nSince every backend reads that file (I added an elog() and saw it 2289\ntimes during make check), I guess you can run out of luck.\n\n/me thinks\n\n\n",
"msg_date": "Wed, 27 Jul 2022 17:01:09 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove the restriction that the relmap must be 512 bytes."
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 5:01 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Jul 27, 2022 at 4:35 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > On Tue, Jul 26, 2022 at 07:10:22PM +0000, Robert Haas wrote:\n> > > Remove the restriction that the relmap must be 512 bytes.\n>\n> > The CI on Windows is blowing up here and there after something that\n> > looks to come from this commit, as of this backtrace:\n> > 00000000`007fe300 00000001`405c62dd postgres!errfinish(\n> > char * filename = 0x00000001`40bf1513 \"fd.c\",\n> > int lineno = 0n756,\n> > char * funcname = 0x00000001`40bf14e0 \"durable_rename\")+0x41b\n> > [c:\\cirrus\\src\\backend\\utils\\error\\elog.c @ 683]\n>\n> And here's what the error looks like:\n>\n> 2022-07-26 19:38:04.321 GMT [8020][client backend]\n> [pg_regress/vacuum][8/349:4527] PANIC: could not rename file\n> \"global/pg_filenode.map.tmp\" to \"global/pg_filenode.map\": Permission\n> denied\n>\n> Someone else still has the old file open, so we can't rename the new\n> one to its name? On Windows that should have gone through pgrename()\n> in dirmod.c, which would retry 100 times with a 100ms sleep between.\n> Since every backend reads that file (I added an elog() and saw it 2289\n> times during make check), I guess you can run out of luck.\n>\n> /me thinks\n\nMaybe we just have to rearrange the locking slightly? Something like\nthe attached.",
"msg_date": "Wed, 27 Jul 2022 18:06:12 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove the restriction that the relmap must be 512 bytes."
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 6:06 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Jul 27, 2022 at 5:01 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Someone else still has the old file open, so we can't rename the new\n> > one to its name? On Windows that should have gone through pgrename()\n> > in dirmod.c, which would retry 100 times with a 100ms sleep between.\n> > Since every backend reads that file (I added an elog() and saw it 2289\n> > times during make check), I guess you can run out of luck.\n> >\n> > /me thinks\n>\n> Maybe we just have to rearrange the locking slightly? Something like\n> the attached.\n\nErm, let me try that again, this time with the CloseTransientFile()\nalso under the lock, so that we never have a file handle without a\nlock.",
"msg_date": "Wed, 27 Jul 2022 18:21:07 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove the restriction that the relmap must be 512 bytes."
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 06:21:07PM +1200, Thomas Munro wrote:\n> Erm, let me try that again, this time with the CloseTransientFile()\n> also under the lock, so that we never have a file handle without a\n> lock.\n\nRight. The whole write_relmap_file() already happens while taking\nRelationMappingLock, so that seems like a good idea for consistency at\nthe end (even if I remember that there is a patch floating around to\nimprove the concurrency of pgrename, which may become an easier move\nnow that we require Windows 10).\n\nI have tested three runs and that was working here even if the\nissue is sporadic, so more runs may be better to have more\nconfidence.\n--\nMichael",
"msg_date": "Wed, 27 Jul 2022 19:25:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove the restriction that the relmap must be 512 bytes."
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 6:25 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Right. The whole write_relmap_file() already happens while taking\n> RelationMappingLock, so that seems like a good idea for consistency at\n> the end (even if I remember that there is a patch floating around to\n> improve the concurrency of pgrename, which may become an easier move\n> now that we require Windows 10).\n>\n> I have tested three runs and that was working here even if the\n> issue is sporadic, so more runs may be better to have more\n> confidence.\n\nOK, I committed Thomas's patch, after taking the liberty of adding an\nexplanatory comment.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 27 Jul 2022 11:20:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove the restriction that the relmap must be 512 bytes."
},
{
"msg_contents": "On 2022-Jul-26, Robert Haas wrote:\n\n> Remove the restriction that the relmap must be 512 bytes.\n> \n> Instead of relying on the ability to atomically overwrite the\n> entire relmap file in one shot, write a new one and durably\n> rename it into place. Removing the struct padding and the\n> calculation showing why the map is exactly 512 bytes, and change\n> the maximum number of entries to a nearby round number.\n\nAnother thing that seems to have happened here is that catversion ought\nto have been touched and wasn't. Trying to start a cluster that was\ninitdb'd with the previous code enters an infinite loop that dies each\ntime with\n\n2022-07-27 19:17:27.589 CEST [2516547] LOG: database system is ready to accept connections\n2022-07-27 19:17:27.589 CEST [2516730] FATAL: could not read file \"global/pg_filenode.map\": read 512 of 524\n2022-07-27 19:17:27.589 CEST [2516731] FATAL: could not read file \"global/pg_filenode.map\": read 512 of 524\n2022-07-27 19:17:27.589 CEST [2516547] LOG: autovacuum launcher process (PID 2516730) exited with exit code 1\n2022-07-27 19:17:27.589 CEST [2516547] LOG: terminating any other active server processes\n\nPerhaps we should still do a catversion bump now, since one hasn't\nhappened since the commit.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"El número de instalaciones de UNIX se ha elevado a 10,\ny se espera que este número aumente\" (UPM, 1972)\n\n\n",
"msg_date": "Wed, 27 Jul 2022 19:19:39 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove the restriction that the relmap must be 512 bytes."
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 1:19 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Another thing that seems to have happened here is that catversion ought\n> to have been touched and wasn't. Trying to start a cluster that was\n> initdb'd with the previous code enters an infinite loop that dies each\n> time with\n>\n> 2022-07-27 19:17:27.589 CEST [2516547] LOG: database system is ready to accept connections\n> 2022-07-27 19:17:27.589 CEST [2516730] FATAL: could not read file \"global/pg_filenode.map\": read 512 of 524\n> 2022-07-27 19:17:27.589 CEST [2516731] FATAL: could not read file \"global/pg_filenode.map\": read 512 of 524\n> 2022-07-27 19:17:27.589 CEST [2516547] LOG: autovacuum launcher process (PID 2516730) exited with exit code 1\n> 2022-07-27 19:17:27.589 CEST [2516547] LOG: terminating any other active server processes\n>\n> Perhaps we should still do a catversion bump now, since one hasn't\n> happened since the commit.\n\nHmm, interesting. I didn't think about bumping catversion because I\ndidn't change anything in the catalogs. I did think about changing the\nmagic number for the file at one point, but unlike some of our other\nconstants, there's no indication that this one is intended to be used\nas a version number. But in retrospect it would have been good to\nchange something somewhere. If you want me to bump catversion now, I\ncan. If you or someone else wants to do it, that's also fine.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 27 Jul 2022 13:38:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove the restriction that the relmap must be 512 bytes."
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Jul 27, 2022 at 1:19 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> Another thing that seems to have happened here is that catversion ought\n>> to have been touched and wasn't.\n\n> Hmm, interesting. I didn't think about bumping catversion because I\n> didn't change anything in the catalogs. I did think about changing the\n> magic number for the file at one point, but unlike some of our other\n> constants, there's no indication that this one is intended to be used\n> as a version number. But in retrospect it would have been good to\n> change something somewhere. If you want me to bump catversion now, I\n> can. If you or someone else wants to do it, that's also fine.\n\nIf there's a magic number, then I'd (a) change that and (b) adjust\nwhatever comments led you to think you shouldn't. Bumping catversion\nis a good fallback choice when there's not any more-proximate version\nindicator, but here there is.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Jul 2022 13:42:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove the restriction that the relmap must be 512 bytes."
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 1:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If there's a magic number, then I'd (a) change that and (b) adjust\n> whatever comments led you to think you shouldn't. Bumping catversion\n> is a good fallback choice when there's not any more-proximate version\n> indicator, but here there is.\n\nMaybe I just got cold feet because it doesn't ever seem to have\nbeen changed before, because the definition says:\n\n#define RELMAPPER_FILEMAGIC 0x592717 /* version ID value */\n\nAnd the fact that \"version\" is in there sure makes it seem like a\nversion number, now that I look again.\n\nI'll add 1 to the value.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 27 Jul 2022 14:13:57 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove the restriction that the relmap must be 512 bytes."
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 2:13 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Jul 27, 2022 at 1:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > If there's a magic number, then I'd (a) change that and (b) adjust\n> > whatever comments led you to think you shouldn't. Bumping catversion\n> > is a good fallback choice when there's not any more-proximate version\n> > indicator, but here there is.\n>\n> Maybe I just got cold feet because it doesn't ever have seem to have\n> been changed before, because the definition says:\n>\n> #define RELMAPPER_FILEMAGIC 0x592717 /* version ID value */\n>\n> And the fact that \"version\" is in there sure makes it seem like a\n> version number, now that I look again.\n>\n> I'll add 1 to the value.\n\nHmm, but that doesn't actually produce nice behavior. It just goes\ninto an infinite loop, like this:\n\n2022-07-27 14:21:12.826 EDT [32849] LOG: database system was\ninterrupted; last known up at 2022-07-27 14:21:12 EDT\n2022-07-27 14:21:12.860 EDT [32849] LOG: database system was not\nproperly shut down; automatic recovery in progress\n2022-07-27 14:21:12.861 EDT [32849] LOG: invalid record length at\n0/14B3BB8: wanted 24, got 0\n2022-07-27 14:21:12.861 EDT [32849] LOG: redo is not required\n2022-07-27 14:21:12.864 EDT [32850] LOG: checkpoint starting:\nend-of-recovery immediate wait\n2022-07-27 14:21:12.865 EDT [32850] LOG: checkpoint complete: wrote 3\nbuffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled;\nwrite=0.001 s, sync=0.001 s, total=0.002 s; sync files=2,\nlongest=0.001 s, average=0.001 s; distance=0 kB, estimate=0 kB;\nlsn=0/14B3BB8, redo lsn=0/14B3BB8\n2022-07-27 14:21:12.868 EDT [31930] LOG: database system is ready to\naccept connections\n2022-07-27 14:21:12.869 EDT [32853] FATAL: relation mapping file\n\"global/pg_filenode.map\" contains invalid data\n2022-07-27 14:21:12.869 EDT [32854] FATAL: relation mapping file\n\"global/pg_filenode.map\" contains invalid data\n2022-07-27 14:21:12.870 EDT 
[31930] LOG: autovacuum launcher process\n(PID 32853) exited with exit code 1\n2022-07-27 14:21:12.870 EDT [31930] LOG: terminating any other active\nserver processes\n2022-07-27 14:21:12.870 EDT [31930] LOG: background worker \"logical\nreplication launcher\" (PID 32854) exited with exit code 1\n2022-07-27 14:21:12.871 EDT [31930] LOG: all server processes\nterminated; reinitializing\n\nWhile I agree that changing a version identifier that is more closely\nrelated to what got changed is better than changing a generic one, I\nthink people won't like an infinite loop that spews messages into the\nlog at top speed as a way of telling them about the problem.\n\nSo now I'm back to being unsure what to do here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 27 Jul 2022 14:24:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove the restriction that the relmap must be 512 bytes."
},
{
"msg_contents": "On 2022-Jul-27, Robert Haas wrote:\n\n> Hmm, but that doesn't actually produce nice behavior. It just goes\n> into an infinite loop, like this:\n\n> 2022-07-27 14:21:12.869 EDT [32853] FATAL: relation mapping file\n> \"global/pg_filenode.map\" contains invalid data\n\nThis seems almost identical to what happens without the version number\nchange, so I wouldn't call it much of an improvement.\n\n> While I agree that changing a version identifier that is more closely\n> related to what got changed is better than changing a generic one, I\n> think people won't like an infinite loop that spews messages into the\n> log at top speed as a way of telling them about the problem.\n> \n> So now I'm back to being unsure what to do here.\n\nI vote to go for the catversion bump for now. \n\nMaybe it's possible to patch the relmapper code afterwards, so that if a\nversion mismatch is detected the server is stopped with a reasonable\nmessage. A hypothetical improvement would be to keep the code to read\nthe old version and upgrade the file automatically, but given the number\nof times that this file has changed, it's likely pointless effort.\n\nTherefore, my proposal is to add a comment next to the struct definition\nsuggesting to bump catversion and call it a day.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 27 Jul 2022 20:45:23 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove the restriction that the relmap must be 512 bytes."
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> Hmm, but that doesn't actually produce nice behavior. It just goes\n>> into an infinite loop, like this:\n>> So now I'm back to being unsure what to do here.\n\n> I vote to go for the catversion bump for now. \n\nWhat this is showing us is that any form of corruption in the relmapper\nfile causes very unpleasant behavior. We probably had better do something\nabout that, independently of this issue.\n\nIn the meantime, I still think bumping the file magic number is a better\nanswer. It won't make the behavior any worse for un-updated code than\nit is already.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Jul 2022 15:17:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove the restriction that the relmap must be 512 bytes."
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 3:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> >> Hmm, but that doesn't actually produce nice behavior. It just goes\n> >> into an infinite loop, like this:\n> >> So now I'm back to being unsure what to do here.\n>\n> > I vote to go for the catversion bump for now.\n>\n> What this is showing us is that any form of corruption in the relmapper\n> file causes very unpleasant behavior. We probably had better do something\n> about that, independently of this issue.\n\nI'm not sure how important that is, but it certainly wouldn't hurt.\n\n> In the meantime, I still think bumping the file magic number is a better\n> answer. It won't make the behavior any worse for un-updated code than\n> it is already.\n\nBut it also won't make it any better, so why even bother? The goal of\ncatversion bumps is to replace crashes or unpredictable misbehavior\nwith a nice error message that tells you exactly what the problem is.\nHere we'd just be replacing an infinite series of crashes with an\ninfinite series of crashes with a slightly different error message.\nIt's probably worth comparing those error messages:\n\nFATAL: could not read file \"global/pg_filenode.map\": read 512 of 524\nFATAL: relation mapping file \"global/pg_filenode.map\" contains invalid data\n\nThe first message is what you get now. The second message is what you\nget with the proposed change to the magic number. I would argue that\nthe second message is actually worse than the first one, because the\nfirst one actually gives you some hint what the problem is, whereas\nthe second one really just says that an unspecified bad thing\nhappened.\n\nIn short, I think Alvaro's idea is unprincipled but solves the problem\nof making it clear that a new initdb is required. Your idea is\nprincipled but does not solve any problem.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 27 Jul 2022 15:39:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove the restriction that the relmap must be 512 bytes."
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> In short, I think Alvaro's idea is unprincipled but solves the problem\n> of making it clear that a new initdb is required. Your idea is\n> principled but does not solve any problem.\n\n[ shrug... ] Fair enough.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Jul 2022 16:05:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Remove the restriction that the relmap must be 512 bytes."
}
] |
[
{
"msg_contents": "Hello all,\n\nI'm making my way through some stalled patches in Waiting on Author. If\nnothing changes by the end of this CF, I'd recommend marking these\nas Returned with Feedback.\n\nPatch authors CC'd.\n\n- jsonpath syntax extensions\n https://commitfest.postgresql.org/38/2482/\n\n As a few people pointed out, this has not seen much review/interest in\n the roughly two years it's been posted, and it's needed a rebase since\n last CF. Daniel suggested that the featureset be split up for easier\n review during the 2021-11 triage. I recommend RwF with that\n suggestion.\n\n- Consider parallel for LATERAL subqueries having LIMIT/OFFSET\n https://commitfest.postgresql.org/38/2851/\n\n Does this have a path forward? It's been Waiting on Author since the\n beginning of last CF, with open concerns from Tom about safety.\n\n- Parallel INSERT SELECT take 2\n https://commitfest.postgresql.org/38/3143/\n\n There was a lot of discussion early on in this patchset's life, and\n then it got caught in a rebase loop without review in August 2021. The\n thread has mostly gone dark since then and the patch does not apply.\n\n- Add callback table access method to reset filenode when dropping relation\n https://commitfest.postgresql.org/38/3073/\n\n Heikki had some feedback back in February but it hasn't been answered\n yet. It needs a rebase too.\n\n- Avoid orphaned dependencies\n https://commitfest.postgresql.org/38/3106/\n\n Tom notes that this cannot be committed as-is; the thread has been\n silent since last CF. Last Author comment in January and needs a\n rebase.\n\n- Allow multiple recursive self-references\n https://commitfest.postgresql.org/38/3046/\n\n There appears to be agreement that this is useful, but it looks like\n the patch needs some changes before it's committable. Last post from\n the Author was in January.\n\n- Push down time-related SQLValue functions to foreign server\n https://commitfest.postgresql.org/38/3289/\n\n There's interest and engagement, but it's not committable as-is and\n needs a rebase. Last Author post in January.\n\n- Parallelize correlated subqueries that execute within each worker\n https://commitfest.postgresql.org/38/3246/\n\n Patch needs to be fixed for FreeBSD; last Author post in January.\n\n- libpq compression\n https://commitfest.postgresql.org/38/3499/\n\n Needs a rebase and response to feedback; mostly quiet since January.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Tue, 26 Jul 2022 12:26:59 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "[Commitfest 2022-07] Patch Triage: Waiting on Author"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 12:26:59PM -0700, Jacob Champion wrote:\n> Hello all,\n> \n> I'm making my way through some stalled patches in Waiting on Author. If\n> nothing changes by the end of this CF, I'd recommend marking these\n> as Returned with Feedback.\n\n+1\n\nI suggest that, if you send an email when marking as RWF, you mention that the\nexisting patch record can be re-opened and moved to the next CF.\n\nI'm aware that people may think that this isn't always a good idea, but it's\nnice to mention that it's possible. It's somewhat comparable to starting a new\nthread (preferably including a link to the earlier one).\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 26 Jul 2022 18:20:19 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [Commitfest 2022-07] Patch Triage: Waiting on Author"
},
{
"msg_contents": "On 7/26/22 16:20, Justin Pryzby wrote:\n> I suggest that, if you send an email when marking as RWF, to mention that the\n> existing patch record can be re-opened and moved to next CF.\n> \n> I'm aware that people may think that this isn't always a good idea, but it's\n> nice to mention that it's possible. It's somewhat comparable to starting a new\n> thread (preferably including a link to the earlier one).\n\nThanks, will do!\n\n--Jacob\n\n\n\n",
"msg_date": "Tue, 26 Jul 2022 16:30:01 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [Commitfest 2022-07] Patch Triage: Waiting on Author"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 3:27 PM Jacob Champion <jchampion@timescale.com> wrote:\n...\n> - Consider parallel for LATERAL subqueries having LIMIT/OFFSET\n> https://commitfest.postgresql.org/38/2851/\n>\n> Does this have a path forward? It's been Waiting on Author since the\n> beginning of last CF, with open concerns from Tom about safety.\n...\n> - Parallelize correlated subqueries that execute within each worker\n> https://commitfest.postgresql.org/38/3246/\n>\n> Patch needs to be fixed for FreeBSD; last Author post in January.\n\nThese are both mine, and I'd hoped to work on them this CF, but I've\nbeen sufficiently busy that that hasn't happened.\n\nI'd like to just move these to the next CF.\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Tue, 26 Jul 2022 19:47:31 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Commitfest 2022-07] Patch Triage: Waiting on Author"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 7:47 PM James Coleman <jtc331@gmail.com> wrote:\n> These are both mine, and I'd hoped to work on them this CF, but I've\n> been sufficiently busy that that hasn't happened.\n>\n> I'd like to just move these to the next CF.\n\nWell, if we mark them returned with feedback now, and you get time to\nwork on them, you can always change the status back to something else\nat that point.\n\nThat has the advantage that, if you don't get time to work on them,\nthey're not cluttering up the next CF in the meantime.\n\nWe're not doing a great job kicking things out of the CF when they are\nnon-actionable, and thereby we are making life harder for ourselves\ncollectively.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 27 Jul 2022 11:43:19 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Commitfest 2022-07] Patch Triage: Waiting on Author"
},
{
"msg_contents": "On Wednesday, July 27, 2022 3:27 AM Jacob Champion <jchampion@timescale.com> wrote:\r\n> \r\n> - Parallel INSERT SELECT take 2\r\n> https://commitfest.postgresql.org/38/3143/\r\n> \r\n> There was a lot of discussion early on in this patchset's life, and\r\n> then it got caught in a rebase loop without review in August 2021. The\r\n> thread has mostly gone dark since then and the patch does not apply.\r\n\r\nSorry, I think we don't have enough time to work on this recently. So please mark it as RWF and\r\nwe will get back to this in the future.\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Thu, 28 Jul 2022 02:09:43 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [Commitfest 2022-07] Patch Triage: Waiting on Author"
},
{
"msg_contents": "\n\n> On 27 Jul 2022, at 00:26, Jacob Champion <jchampion@timescale.com> wrote:\n> \n> - libpq compression\n> https://commitfest.postgresql.org/38/3499/\n> \n> Needs a rebase and response to feedback; mostly quiet since January.\n\nDaniil is working on this, but currently he's on vacation.\nI think we should not mark patch as RwF and move it to next CF instead.\n\nThank you!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Thu, 28 Jul 2022 16:46:01 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: [Commitfest 2022-07] Patch Triage: Waiting on Author"
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 7:09 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> Sorry, I think we don't enough time to work on this recently. So please mark it as RWF and\n> we will get back to this in the future.\n\nDone, thanks!\n\n--Jacob\n\n\n",
"msg_date": "Thu, 28 Jul 2022 08:52:24 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [Commitfest 2022-07] Patch Triage: Waiting on Author"
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 4:46 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> Daniil is working on this, but currently he's on vacation.\n> I think we should not mark patch as RwF and move it to next CF instead.\n\nIs there a downside to marking it RwF, from your perspective? As\nRobert pointed out upthread, it can be switched back at any time once\nDaniil's ready.\n\nLife happens; there isn't (or there shouldn't be) any shame in having\na patch returned temporarily. But it is important that patches which\naren't ready for review at the moment don't stick around for months.\nThey take up reviewer time and need to be triaged continually.\n\n--Jacob\n\n\n",
"msg_date": "Thu, 28 Jul 2022 08:58:31 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [Commitfest 2022-07] Patch Triage: Waiting on Author"
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 08:58:31AM -0700, Jacob Champion wrote:\n> On Thu, Jul 28, 2022 at 4:46 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > Daniil is working on this, but currently he's on vacation.\n> > I think we should not mark patch as RwF and move it to next CF instead.\n> \n> Is there a downside to marking it RwF, from your perspective? As\n> Robert pointed out upthread, it can be switched back at any time once\n> Daniil's ready.\n\nAs someone interested in seeing the patch progress, I still think it may be\nbetter to close the patch record, which can be re-opened when it's ready to be\nreviewed.\n\n> They take up reviewer time and need to be triaged continually.\n\nAlternately:\n\n@Jacob: Is there any reason why it's necessary to do anything at all ?\nDoes something bad happen if the patches are left in the current CF ?\nWhy not let patch authors (re)submit the patch for review when they're\nready? Someone went to the effort to move it to the current CF, even though the\npatch wasn't ready to be reviewed. It'd be less work and avoid the process of\n\"moving patches to the next CF\" even though (at least in this case) it maybe\nshouldn't have even been in the current CF.\n\nAlso, is there a place which lists all of an author's patches (current and\nhistoric)? I think people would be less averse to having their patches closed\nif 1) they knew they could re-open them; and, 2) there were a list of patches\nand their disposition (not a separate list per commitfest, and not showing each\npatch duplicated for each CF that a patch was opened in).\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 1 Aug 2022 10:51:05 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [Commitfest 2022-07] Patch Triage: Waiting on Author"
},
{
"msg_contents": "On 8/1/22 08:51, Justin Pryzby wrote:\n> @Jacob: Is there any reason why it's necessary to do anything at all ?\n> Does something bad happen if the patches are left in the current CF ?\n> Why make not let patch authors (re) submit the patch for review when they're\n> ready? Someone went to the effort to move it to the current CF, even though the\n> patch wasn't ready to be reviewed. It'd be less work and avoid the process of\n> \"moving patches to the next CF\" even though (at least in this case) it maybe\n> shouldn't have even been in the current CF.\n\nMaybe this is something to look into once we've implemented some more of\nthe low-hanging usability features that people have asked for. But if we\nstarted doing it now, I'd expect the CFM's job to simply change from\nmoving patches ahead to pinging people who have patches left behind,\nasking them if they meant to move the patches forward. I'm not convinced\nit'd be all that useful.\n\n> Also, is there a place which lists all of an author's patches (current and\n> historic)? I think people would be less adverse to having their patches closed\n> if 1) they knew they could re-open them; and, 2) there were a list of patches\n> and their disposition (not a separate list per commitfest, and not showing each\n> patch duplicated for each CF that a patch was opened in).\n\nThis would be great to have. I have a patch in progress that introduces\na \"deferred\" group, to make clearer the difference between a\npatch that has been Rejected and a patch that's simply Returned or\nMoved. Your suggestion would dovetail nicely with that, to be able to see\n\"all my deferred patches\".\n\n--Jacob\n\n\n",
"msg_date": "Mon, 1 Aug 2022 09:30:39 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [Commitfest 2022-07] Patch Triage: Waiting on Author"
},
{
"msg_contents": "On Mon, Aug 1, 2022 at 12:30 PM Jacob Champion <jchampion@timescale.com> wrote:\n> Maybe this is something to look into once we've implemented some more of\n> the low-hanging usability features that people have asked for. But if we\n> started doing it now, I'd expect the CFM's job to simply change from\n> moving patches ahead to pinging people who have patches left behind,\n> asking them if they meant to move the patches forward. I'm not convinced\n> it'd be all that useful.\n\nWe really need to move to a system where it's the patch author's job\nto take some action if the patch is alive, rather than having the CM\n(or any other human being) pinging to find out whether it's dead.\nHaving the default action for a patch be to carry it along to the next\nCF whether or not there are any signs of life is unproductive.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 1 Aug 2022 12:33:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Commitfest 2022-07] Patch Triage: Waiting on Author"
},
{
"msg_contents": "On 8/1/22 09:33, Robert Haas wrote:\n> We really need to move to a system where it's the patch author's job\n> to take some action if the patch is alive, rather than having the CM\n> (or any other human being) pinging to find out whether it's dead.\n> Having the default action for a patch be to carry it along to the next\n> CF whether or not there are any signs of life is unproductive.\n\nIn the medium to long term, I agree with you.\n\nIn the short term I want to see the features that help authors keep\ntheir patches alive (cfbot integration! automatic rebase reminders!\nautomated rebase?) so that we're not just artificially raising the\nbarrier to entry. People with plenty of time on their hands will be able\nto go through the motions of moving their patches ahead regardless of\nwhether or not the patch is dead.\n\n--Jacob\n\n\n",
"msg_date": "Mon, 1 Aug 2022 09:43:06 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [Commitfest 2022-07] Patch Triage: Waiting on Author"
},
{
"msg_contents": "Jacob Champion <jchampion@timescale.com> writes:\n> On 8/1/22 09:33, Robert Haas wrote:\n>> We really need to move to a system where it's the patch author's job\n>> to take some action if the patch is alive, rather than having the CM\n>> (or any other human being) pinging to find out whether it's dead.\n>> Having the default action for a patch be to carry it along to the next\n>> CF whether or not there are any signs of life is unproductive.\n\n> In the medium to long term, I agree with you.\n\n> In the short term I want to see the features that help authors keep\n> their patches alive (cfbot integration! automatic rebase reminders!\n> automated rebase?) so that we're not just artificially raising the\n> barrier to entry. People with plenty of time on their hands will be able\n> to go through the motions of moving their patches ahead regardless of\n> whether or not the patch is dead.\n\nYeah, I don't want to introduce make-work into the process; there's\nmore than enough real work involved. At minimum, a patch that's\nshown signs of life since the previous CF should be auto-advanced\nto the next one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Aug 2022 12:56:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Commitfest 2022-07] Patch Triage: Waiting on Author"
},
{
"msg_contents": "On Mon, Aug 1, 2022 at 12:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, I don't want to introduce make-work into the process; there's\n> more than enough real work involved. At minimum, a patch that's\n> shown signs of life since the previous CF should be auto-advanced\n> to the next one.\n\nMaybe so, but we routinely have situations where a patch hasn't been\nupdated in 3-6 months and we tentatively ask the author if it would be\nOK to mark it RwF, and they often say something like \"please keep it\nalive for one more CF to see if I have time to work on it.\" IMHO, that\ncreates the pretty ridiculous situation where CFMs are putting time\ninto patches that the author isn't working on and hasn't worked on in\na long time. The CF list isn't supposed to be a catalog of every patch\nsomebody's thought about working on at any point in the last few\nyears; it's supposed to be a list of things that need to be reviewed\nfor possible commit. That's why it's called a COMMIT-fest.\n\nBack in the day, I booted patches out of the CF if they weren't\nupdated within 4 days of a review being posted. I guess people found\nthat too harsh, but now it feels like we've gone awfully far towards\nthe other extreme.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 1 Aug 2022 13:20:48 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Commitfest 2022-07] Patch Triage: Waiting on Author"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Maybe so, but we routinely have situations where a patch hasn't been\n> updated in 3-6 months and we tentatively ask the author if it would be\n> OK to mark it RwF, and they often say something like \"please keep it\n> alive for one more CF to see if I have time to work on it.\"\n\nAgreed, we don't want that. IMO the CF list isn't primarily a to-do\nlist for patch authors; it's primarily a to-do list for reviewers and\ncommitters. The case that I'm concerned about here is where an author\nsubmits a patch and, through no fault of his/hers, it goes unreviewed\nfor multiple CFs. As long as the author is keeping the patch refreshed\nper CI testing, I don't think additional work to express interest should\nbe required from the author.\n\nNow admittedly, at some point we should decide that lack of review\nindicates that nobody else cares about this patch, in which case it\nshould get booted with a \"sorry, we're just not interested\" resolution.\nBut that can't happen quickly, because we're just drastically short\nof review manpower at all times.\n\nOn the other hand, I'm quite willing to convert WOA state into RWF\nstate quickly. The author can always resubmit, or resurrect the\nold CF entry, once they have something new for people to look at.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Aug 2022 13:33:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Commitfest 2022-07] Patch Triage: Waiting on Author"
},
{
"msg_contents": "On Mon, Aug 1, 2022 at 1:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> On the other hand, I'm quite willing to convert WOA state into RWF\n> state quickly. The author can always resubmit, or resurrect the\n> old CF entry, once they have something new for people to look at.\n\nRight. This is what I'm on about.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 1 Aug 2022 16:01:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Commitfest 2022-07] Patch Triage: Waiting on Author"
}
] |