| threads |
|---|
[
{
"msg_contents": "\nI don't know if this has been discussed before, but I mentioned recently\n(<https://www.postgresql.org/message-id/e4233934-98a6-6f76-46a0-992c0f4f1208%40dunslane.net>)\nthat I think the MSVC build system is too eager about installing\nexecutables it builds. In particular, it installs these binaries for\nwhich the analogs are not installed by the makefile system:\n\n\nisolationtester.exe\n\nlibpq_pipeline.exe\n\npg_isolation_regress.exe\n\npg_regress_ecpg.exe\n\npg_regress.exe\n\nzic.exe\n\n\nDo we want to do anything about that? ISTM we should be installing\nidentical sets of binaries as far as possible. The installation of\nlibpq_pipeline.exe is apparently what has led Justin Pryzby to think\nit's OK to undo the effect of commit f4ce6c4d3a on vcregress.pl.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 28 Feb 2022 17:59:04 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "MSVC build system installs extra executables"
}
] |
[
{
"msg_contents": "Consider this admittedly-rather-contrived example:\n\nregression=# create table foo(f1 int);\nCREATE TABLE\nregression=# alter table foo add column bar text default repeat('xyzzy', 1000000);\nERROR: row is too big: size 57416, maximum size 8160\n\nSince the table contains no rows at all, this is a surprising\nfailure. The reason for it of course is that pg_attribute\nhas no TOAST table, so it can't store indefinitely large\nattmissingval fields.\n\nI think the simplest answer, and likely the only feasible one for\nthe back branches, is to disable the attmissingval optimization\nif the proposed value is \"too large\". Not sure exactly where the\nthreshold for that ought to be, but maybe BLCKSZ/8 could be a\nstarting offer.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 28 Feb 2022 18:21:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Overflow of attmissingval is not handled gracefully"
},
{
"msg_contents": "\nOn 2/28/22 18:21, Tom Lane wrote:\n> Consider this admittedly-rather-contrived example:\n>\n> regression=# create table foo(f1 int);\n> CREATE TABLE\n> regression=# alter table foo add column bar text default repeat('xyzzy', 1000000);\n> ERROR: row is too big: size 57416, maximum size 8160\n>\n> Since the table contains no rows at all, this is a surprising\n> failure. The reason for it of course is that pg_attribute\n> has no TOAST table, so it can't store indefinitely large\n> attmissingval fields.\n>\n> I think the simplest answer, and likely the only feasible one for\n> the back branches, is to disable the attmissingval optimization\n> if the proposed value is \"too large\". Not sure exactly where the\n> threshold for that ought to be, but maybe BLCKSZ/8 could be a\n> starting offer.\n>\n> \t\t\t\n\n\nWFM. After all, it's taken several years for this to surface. Is it\nbased on actual field experience?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 28 Feb 2022 18:36:14 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Overflow of attmissingval is not handled gracefully"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2/28/22 18:21, Tom Lane wrote:\n>> regression=# create table foo(f1 int);\n>> CREATE TABLE\n>> regression=# alter table foo add column bar text default repeat('xyzzy', 1000000);\n>> ERROR: row is too big: size 57416, maximum size 8160\n>> \n>> I think the simplest answer, and likely the only feasible one for\n>> the back branches, is to disable the attmissingval optimization\n>> if the proposed value is \"too large\". Not sure exactly where the\n>> threshold for that ought to be, but maybe BLCKSZ/8 could be a\n>> starting offer.\n\n> WFM. After all, it's taken several years for this to surface. Is it\n> based on actual field experience?\n\nNo, it was an experiment that occurred to me while thinking about\nthe nearby proposal to add a TOAST table to pg_attribute [1].\nIf we do that, this restriction could be dropped. But I agree that\nthere's hardly any practical use-case for such default values,\nso I wouldn't mind living with the de-optimization either.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/1643112264.186902312@f325.i.mail.ru\n\n\n",
"msg_date": "Mon, 28 Feb 2022 18:46:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Overflow of attmissingval is not handled gracefully"
}
] |
[
{
"msg_contents": "Hi,\n\nI found that when we use pipeline mode, we can execute commands\nwhich is not allowed in a transaction block, for example\nCREATE DATABASE, in the same transaction with other commands.\n\nIn extended query protocol, a transaction starts when Parse, \nBind, Executor, or Describe message is received, and is closed\nwhen Sync message is received if COMMIT, ROLLBACK, or END is not\nsent. In a pipeline mode, Sync message is sent at the end of the\npipeline instead of for each query. Therefore, multiple queries\ncan be in the same transaction without using an explicit\ntransaction block.\n\nIt is similar to implicit transaction block which starts when\nmultiple statements are sent in simple query protocol, but the\nserver doesn't regard it as an implicit transaction block. \nTherefore, problems that would not occur in implicit transactions\ncould occur in transactions started in a pipeline mode.\n\nFor example, CREATE DATABASE or DROP DATABASE can be executed\nin the same transaction with other commands, and when the\ntransaction fails, this causes an inconsistency between the\nsystem catalog and base directory. \n\nDo you think we should prevent such problems from server side? or, \nit is user's responsible to avoid such problematic use of pipeline\nor protocol messages?\n\nIf we want to handle it from server side, I think a few ideas:\n\n1. \nIf the server receive more than one Execute messages before\nreceiving Sync, start an implicit transaction block. If the first\nExecute message is for a command not allowed in a transaction\n(CREATE DATABASE etc.), explicitly close the transaction after the\ncommand not to share the transaction with other commands.\n\n2.\nWhen a pipeline start by calling PQenterPipelineMode in libpq, \nstart an implicit transaction at the server. For this purpose, we\nwould need to add a new message to signal the start of pipeline mode\nto the protocol. It is user responsible to avoid the problematic\nprotocol use when libpq is not used.\n\nWhat do you think about it?\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Tue, 1 Mar 2022 15:17:04 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "pipeline mode and commands not allowed in a transaction block"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI needed an aggregate function similar to array_agg() but which\naggregates only unique values. As it turned out there is no convenient\nway of doing this. What I ended up doing instead was aggregating to\nJSONB keys and then converting a JSONB object to an array:\n\nSELECT array(select jsonb_object_keys(jsonb_object_agg(mycolumn, true)))\nFROM ...\n\nThis works but doesn't seem to be the greatest user experience. I\nwould like to submit a patch that adds array_unique_agg() function\nunless anyone has strong objections to this feature.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 1 Mar 2022 16:39:34 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Proposal: array_unique_agg() function"
},
{
"msg_contents": "Hi\n\nút 1. 3. 2022 v 14:39 odesílatel Aleksander Alekseev <\naleksander@timescale.com> napsal:\n\n> Hi hackers,\n>\n> I needed an aggregate function similar to array_agg() but which\n> aggregates only unique values. As it turned out there is no convenient\n> way of doing this. What I ended up doing instead was aggregating to\n> JSONB keys and then converting a JSONB object to an array:\n>\n> SELECT array(select jsonb_object_keys(jsonb_object_agg(mycolumn, true)))\n> FROM ...\n>\n> This works but doesn't seem to be the greatest user experience. I\n> would like to submit a patch that adds array_unique_agg() function\n> unless anyone has strong objections to this feature.\n>\n\nSELECT array_agg(DISTINCT ...) doesn't help?\n\nRegards\n\nPavel\n\n\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n>\n>\n\nHiút 1. 3. 2022 v 14:39 odesílatel Aleksander Alekseev <aleksander@timescale.com> napsal:Hi hackers,\n\nI needed an aggregate function similar to array_agg() but which\naggregates only unique values. As it turned out there is no convenient\nway of doing this. What I ended up doing instead was aggregating to\nJSONB keys and then converting a JSONB object to an array:\n\nSELECT array(select jsonb_object_keys(jsonb_object_agg(mycolumn, true)))\nFROM ...\n\nThis works but doesn't seem to be the greatest user experience. I\nwould like to submit a patch that adds array_unique_agg() function\nunless anyone has strong objections to this feature.SELECT array_agg(DISTINCT ...) doesn't help?RegardsPavel \n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 1 Mar 2022 14:46:53 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: array_unique_agg() function"
},
{
"msg_contents": "Hello\n\nselect array_agg(distinct mycolumn) from \n\nfrom the very beginning? Even the 7.1 manual describes such a syntax: https://www.postgresql.org/docs/7.1/sql-expressions.html\n\nregards, Sergei\n\n\n",
"msg_date": "Tue, 01 Mar 2022 16:48:48 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re:Proposal: array_unique_agg() function"
},
{
"msg_contents": "Pavel, Sergei,\n\n> SELECT array_agg(DISTINCT ...) doesn't help?\n\nIt works, many thanks!\n\n-- \nBest regards,\nAleksander Alekseev\n\nPavel, Sergei,> SELECT array_agg(DISTINCT ...) doesn't help?It works, many thanks!-- Best regards,Aleksander Alekseev",
"msg_date": "Tue, 1 Mar 2022 16:54:23 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: array_unique_agg() function"
}
] |
[
{
"msg_contents": "Last November Daniel Gustafsson did a patch triage. It took him three\nemails to get through the patches in the commitfest back then. Since\nthen we've had the November and the January commitfests so I was\ninterested to see how many of these patches had advanced....\n\nI'm only part way through the first email but so far only two patches\nhave changed status -- and both to \"Returned with feedback\" :(\n\nSo I'm going to post updates but I'm going to break it up into smaller\nbatches because otherwise it'll take me a month before I post\nanything.\n\n\n\n> 1608: schema variables, LET command\n> ===================================\n> After 18 CF's and two very long threads it seems to be nearing completion\n> judging by Tomas' review. There is an ask in that review for a second pass\n> over the docs by a native speaker, any takers?\n\nPatch has a new name, \"session variables, LET command\"\n\nThere's been a *lot* of work on this patch so I'm loath to bump it.\nThe last review was from Julien Rouhaud which had mostly code style\ncomments but it had one particular concern about using xact callback\nin core and about EOX cleanup.\n\nPavel, do you have a plan to improve this or are you looking for\nsuggestions from someone about how you should solve this problem?\n\n> 1741: Index Skip Scan\n> =====================\n> An often requested feature which has proven hard to reach consensus on an\n> implementation for. The thread(s) have stalled since May, is there any hope of\n> taking this further? Where do we go from here with this patch?\n\n\"Often requested indeed! I would love to be able to stop explaining to\npeople that Postgres can't handle this case well.\n\nIt seems there are multiple patch variants around and suggestions from\nHeikki and Peter G about fundamental interface choices. It would be\nnice to have a good summary from someone who is involved about what's\nactually left unresolved.\n\n\n> 1712: Remove self join on a unique column\n> =========================================\n> This has moved from \"don't write bad queries\" to \"maybe we should do something\n> about this\". It seems like there is concensus that it can be worth paying the\n> added planner cost for this, especially if gated by a GUC to keep it out of\n> installations where it doesn't apply. The regression on large join queries\n> hasn't been benchmarked it seems (or I missed it), but the patch seems to have\n> matured and be quite ready. Any takers on getting it across the finish line?\n\nThere hasn't been any review since the v29 patch was posted in July.\nThat said, to my eye it seemed like pretty basic functionality errors\nwere being corrected quite late. All the bugs got patch updates\nquickly but does this have enough tests to be sure it's working right?\n\nThe only real objection was about whether the planning time justified\nthe gains since the gains are small. But I think they were resolved by\nmaking the optimization optional. Do we have consensus that that\nresolved the issue or do we still need benchmarks showing the planning\ntime hit is reasonable?\n\n> 2161: standby recovery fails when re-replaying due to missing directory which\n> was removed in previous replay.\n> =============================================================================\n> Tom and Robert seem to be in agreement that parts of this patchset are good,\n> and that some parts are not. The thread has stalled and the patch no longer\n> apply, so unless someone feels like picking up the good pieces this seems like\n> a contender to close for now.\n\nTom's feedback seems to have been acted on last November. And the last\nupdate in January was that it was passing CI now. Is this ready to\ncommit now?\n\n\n> 2138: Incremental Materialized View Maintenance\n> ===============================================\n> The size of the\n> patchset and the length of the thread make it hard to gauge just far away it\n> is, maybe the author or a reviewer can summarize the current state and outline\n> what is left for it to be committable.\n\nThere is an updated patch set as of February but I have the same\ndifficulty wrapping my head around the amount of info here.\n\nIs this one really likely to be commitable in 15? If not I think we\nshould move this to 16 now and concentrate on patches that will be\ncommitable in this release.\n\n\n> 2218: Implement INSERT SET syntax\n> =================================\n> The author has kept this patch updated, and has seemingly updated according to\n> the review comments. Tom: do you have any opinions on whether the updated\n> version addresses your concerns wrt the SELECT rewrite?\n\nI don't see any discussion implying that Tom's concerns were met. I'm\nnot exactly clear why Tom's concerns are real problems though --\nwouldn't it be a *good* thing if we have a more expressive syntax? But\nthat's definitely what needs to be resolved before it can move ahead.\n\nSo unless there's objections I'm going to update this to \"waiting on author\".\n\n-- \ngreg\n\n\n",
"msg_date": "Tue, 1 Mar 2022 11:16:36 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Commitfest 2022-03 Patch Triage Part 1a.i"
},
{
"msg_contents": "Hi,\n\nOn Tue, Mar 01, 2022 at 11:16:36AM -0500, Greg Stark wrote:\n>\n> > 1608: schema variables, LET command\n> > ===================================\n> > After 18 CF's and two very long threads it seems to be nearing completion\n> > judging by Tomas' review. There is an ask in that review for a second pass\n> > over the docs by a native speaker, any takers?\n>\n> Patch has a new name, \"session variables, LET command\"\n>\n> There's been a *lot* of work on this patch so I'm loath to bump it.\n> The last review was from Julien Rouhaud which had mostly code style\n> comments but it had one particular concern about using xact callback\n> in core and about EOX cleanup.\n>\n> Pavel, do you have a plan to improve this or are you looking for\n> suggestions from someone about how you should solve this problem?\n\nThere has indeed been a lot of work done on the patch during the last commit\nfest, and Pavel always fixed all the reported issues promptly, which is why\napart from the EOX cleanup thing most of the last review was minor problems.\n\nPavel sent a new version today that address the EOX problem (and everything\nelse) so I'm now the one that needs to do my reviewer job (which I already\nstarted). I didn't get through all the changes yet but as far as I can\nsee the patch is in a very good shape. I'm quite optimistic about this patch\nbeing ready for committer very soon, so I think it would be good to keep it and\nsee if we can get it committed in pg 15. Note that some committers already\nshowed interest in the patch.\n\n\n",
"msg_date": "Wed, 2 Mar 2022 00:36:09 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2022-03 Patch Triage Part 1a.i"
},
{
"msg_contents": "Can I suggest to copy the patch authors on bulk emails like these ?\n\n(Obviously, an extended discussion about a particular patch should happen on\nits original thread, but that's not what this is about).\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 1 Mar 2022 10:37:58 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2022-03 Patch Triage Part 1a.i"
},
{
"msg_contents": "As Justin suggested I CC the authors from these patches I'm adding\nthem here. Some of the patches have multiple \"Authors\" listed in the\ncommitfest which may just be people who posted updated patches so I\nmay have added more people than necessary.\n[If you received two copies of this do not reply to the first one or\nyou'll get bounces. Hopefully I've cleaned the list now but we'll\nsee...]\n\n\n",
"msg_date": "Tue, 1 Mar 2022 14:59:46 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest 2022-03 Patch Triage Part 1a.i"
},
{
"msg_contents": "On Tue, 1 Mar 2022 at 14:59, Greg Stark <stark@mit.edu> wrote:\n>\n> As Justin suggested I CC the authors from these patches I'm adding\n> them here. Some of the patches have multiple \"Authors\" listed in the\n> commitfest which may just be people who posted updated patches so I\n> may have added more people than necessary.\n\n> [If you received two copies of this do not reply to the first one or\n> you'll get bounces. Hopefully I've cleaned the list now but we'll\n> see...]\n\nNope. One more address to remove from the CC.\n\n\n-- \ngreg\n\n\n",
"msg_date": "Tue, 1 Mar 2022 15:01:20 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest 2022-03 Patch Triage Part 1a.i"
},
{
"msg_contents": "> On Tue, Mar 01, 2022 at 11:16:36AM -0500, Greg Stark wrote:\n> Last November Daniel Gustafsson did a patch triage. It took him three\n> emails to get through the patches in the commitfest back then. Since\n> then we've had the November and the January commitfests so I was\n> interested to see how many of these patches had advanced....\n>\n> I'm only part way through the first email but so far only two patches\n> have changed status -- and both to \"Returned with feedback\" :(\n>\n> So I'm going to post updates but I'm going to break it up into smaller\n> batches because otherwise it'll take me a month before I post\n> anything.\n\nThanks for being proactive!\n\n> > 1741: Index Skip Scan\n> > =====================\n> > An often requested feature which has proven hard to reach consensus on an\n> > implementation for. The thread(s) have stalled since May, is there any hope of\n> > taking this further? Where do we go from here with this patch?\n>\n> \"Often requested indeed! I would love to be able to stop explaining to\n> people that Postgres can't handle this case well.\n>\n> It seems there are multiple patch variants around and suggestions from\n> Heikki and Peter G about fundamental interface choices. It would be\n> nice to have a good summary from someone who is involved about what's\n> actually left unresolved.\n\nI'm going to leave a summary for this one here, if you don't mind.\n\nI believe the design commentary from Heikki about using index_rescan was\nmore or less answered by Thomas, and having no follow up on that I'm\nassuming it was convincing enough.\n\nPeter G most recent suggestion about MDAM approach was interesting, but\nvery general, not sure what to make of it in absence of any feedback on\nfollow-up questions/proposed experimental changes.\n\nOn top of that a correlated patch [1] that supposed to get some\nimprovements for this feature on the planner side didn't get much\nfeedback either. The idea is that the feature could be done in much\nbetter way, but the alternative proposal is still not there and I think\ndoesn't even have a CF item.\n\nThe current state of things is that I've managed to prepare much smaller\nand less invasive standalone version of the patch for review, leaving\nmost questionable parts aside as optional.\n\nOverall it seems that the common agreement about the patch is \"the\ndesign could be better\", but no one have yet articulated in which way,\nor formulated what are the current issues. Having being through 19 CF\nthe common ground for folks, who were involved into it, is that with no\nfurther feedback the CF item could be closed. Sad but true :(\n\n[1]: https://commitfest.postgresql.org/37/2433/\n\n\n",
"msg_date": "Tue, 1 Mar 2022 21:26:11 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2022-03 Patch Triage Part 1a.i"
},
{
"msg_contents": "> On 1 Mar 2022, at 17:16, Greg Stark <stark@mit.edu> wrote:\n\n> Last November Daniel Gustafsson did a patch triage. It took him three\n> emails to get through the patches in the commitfest back then.\n\nIt should be noted that I only powered through the patches that had been in 3+\ncommitfests at the time..\n\n> Since then we've had the November and the January commitfests so I was\n> interested to see how many of these patches had advanced....\n\n..so there are new patches now that have crossed the (admittedly arbitrarily\nchosen) breakpoint of 3+ CF's.\n\n> So I'm going to post updates but I'm going to break it up into smaller\n> batches because otherwise it'll take me a month before I post\n> anything.\n\nThanks for picking it up and continuing with recent developments. Let me know\nif you want a hand in triaging patchsets.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 2 Mar 2022 13:11:27 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2022-03 Patch Triage Part 1a.i"
},
{
"msg_contents": "On Wed, 2 Mar 2022 at 07:12, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> Thanks for picking it up and continuing with recent developments. Let me know\n> if you want a hand in triaging patchsets.\n\nWhile I have the time there may be patches I may need help coming to\nthe right conclusions about what actions to take.\n\nI think the main thing I can do to help is to take patches that have\nno chance of making this release and taking them off our collective\nplates. -- Hopefully after they've received feedback but as this is\nthe last commitfest of the release that's a secondary concern.\n\nBut I'm unclear exactly what the consequences in the commitfest app\nare of specific state changes. As I understand it there are basically\ntwo alternatives:\n\n1) Returned with feedback -- does this make it harder for an author to\nresume work release? Can they simply reactivate the CF entry or do\nthey need to start a new one and then lose history in the app?\n\n2) Moved to next commitfest -- this seems to just drag the pain on.\nThen it has to get triaged again next commitfest and if it's actually\nstalled (or progressing fine without needing feedback) that's just\nextra work for nothing.\n\nDo I have this right? What is the right state to put a patch in that\nmeans \"this patch doesn't need to be triaged again unless the author\nactually feels progress has been made and needs new feedback or thinks\nits committable\"?\n\n-- \ngreg\n\n\n",
"msg_date": "Wed, 2 Mar 2022 11:58:28 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest 2022-03 Patch Triage Part 1a.i"
},
{
"msg_contents": "Greg Stark <stark@mit.edu> writes:\n> Do I have this right? What is the right state to put a patch in that\n> means \"this patch doesn't need to be triaged again unless the author\n> actually feels progress has been made and needs new feedback or thinks\n> its committable\"?\n\nBut that's not really the goal, is it? ISTM what you want to do is\nidentify patches that we're not going to try to get into v15, and\nthen push them out to the next CF so that we don't spend more time\non them this month. But that determination should not preclude them\nfrom being looked at on the normal basis once the next CF arrives.\nSo I'd say just push them forward with status \"Needs review\" or\n\"Waiting on author\", whichever seems more appropriate.\n\nIf a patch seems to have stalled to the point where neither of\nthose statuses is appropriate, then closing it RWF would be the\nthing to do; but that's not special to the last-CF situation.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Mar 2022 12:28:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2022-03 Patch Triage Part 1a.i"
},
{
"msg_contents": "On Wed, Mar 02, 2022 at 11:58:28AM -0500, Greg Stark wrote:\n>\n> But I'm unclear exactly what the consequences in the commitfest app\n> are of specific state changes. As I understand it there are basically\n> two alternatives:\n>\n> 1) Returned with feedback -- does this make it harder for an author to\n> resume work release? Can they simply reactivate the CF entry or do\n> they need to start a new one and then lose history in the app?\n\nAs far as I know they would need to create a new entry, and thus lose the\nhistory.\n\n> 2) Moved to next commitfest -- this seems to just drag the pain on.\n> Then it has to get triaged again next commitfest and if it's actually\n> stalled (or progressing fine without needing feedback) that's just\n> extra work for nothing.\n>\n> Do I have this right? What is the right state to put a patch in that\n> means \"this patch doesn't need to be triaged again unless the author\n> actually feels progress has been made and needs new feedback or thinks\n> its committable\"?\n\nI don't think that 2) means having to triage again. If a patch gets moved to\nthe next commitfest now, then clearly it's not ready and should be also\nswitched to Waiting on Author.\n\nIn the next commitfest, if the author doesn't address the problems raised\nduring review the patch will still be in Waiting for Author, and the only\nneeded triaging would be to close as Return With Feedback patches that looks\nabandoned. For now the arbitrary \"abandoned\" definition is usually \"patch in\nWaiting on Author for at least 2 weeks at the end of the commitfest with no\nsign of activity from the author\".\n\n\n",
"msg_date": "Thu, 3 Mar 2022 01:28:54 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2022-03 Patch Triage Part 1a.i"
},
{
"msg_contents": "On Tue, 1 Mar 2022 13:39:45 -0500\nGreg Stark <stark@mit.edu> wrote:\n\n> As Justin suggested I CC the authors from these patches I'm adding\n> them here. Some of the patches have multiple \"Authors\" listed in the\n> commitfest which may just be people who posted updated patches so I\n> may have added more people than necessary.\n> \n> On Tue, 1 Mar 2022 at 11:16, Greg Stark <stark@mit.edu> wrote:\n> >\n> > Last November Daniel Gustafsson did a patch triage. It took him three\n> > emails to get through the patches in the commitfest back then. Since\n> > then we've had the November and the January commitfests so I was\n> > interested to see how many of these patches had advanced....\n> >\n> > I'm only part way through the first email but so far only two patches\n> > have changed status -- and both to \"Returned with feedback\" :(\n> >\n> > So I'm going to post updates but I'm going to break it up into smaller\n> > batches because otherwise it'll take me a month before I post\n> > anything.\n> >\n> >\n> >\n> > > 1608: schema variables, LET command\n> > > ===================================\n> > > After 18 CF's and two very long threads it seems to be nearing completion\n> > > judging by Tomas' review. There is an ask in that review for a second pass\n> > > over the docs by a native speaker, any takers?\n> >\n> > Patch has a new name, \"session variables, LET command\"\n> >\n> > There's been a *lot* of work on this patch so I'm loath to bump it.\n> > The last review was from Julien Rouhaud which had mostly code style\n> > comments but it had one particular concern about using xact callback\n> > in core and about EOX cleanup.\n> >\n> > Pavel, do you have a plan to improve this or are you looking for\n> > suggestions from someone about how you should solve this problem?\n> >\n> > > 1741: Index Skip Scan\n> > > =====================\n> > > An often requested feature which has proven hard to reach consensus on an\n> > > implementation for. The thread(s) have stalled since May, is there any hope of\n> > > taking this further? Where do we go from here with this patch?\n> >\n> > \"Often requested indeed! I would love to be able to stop explaining to\n> > people that Postgres can't handle this case well.\n> >\n> > It seems there are multiple patch variants around and suggestions from\n> > Heikki and Peter G about fundamental interface choices. It would be\n> > nice to have a good summary from someone who is involved about what's\n> > actually left unresolved.\n> >\n> >\n> > > 1712: Remove self join on a unique column\n> > > =========================================\n> > > This has moved from \"don't write bad queries\" to \"maybe we should do something\n> > > about this\". It seems like there is concensus that it can be worth paying the\n> > > added planner cost for this, especially if gated by a GUC to keep it out of\n> > > installations where it doesn't apply. The regression on large join queries\n> > > hasn't been benchmarked it seems (or I missed it), but the patch seems to have\n> > > matured and be quite ready. Any takers on getting it across the finish line?\n> >\n> > There hasn't been any review since the v29 patch was posted in July.\n> > That said, to my eye it seemed like pretty basic functionality errors\n> > were being corrected quite late. All the bugs got patch updates\n> > quickly but does this have enough tests to be sure it's working right?\n> >\n> > The only real objection was about whether the planning time justified\n> > the gains since the gains are small. But I think they were resolved by\n> > making the optimization optional. Do we have consensus that that\n> > resolved the issue or do we still need benchmarks showing the planning\n> > time hit is reasonable?\n> >\n> > > 2161: standby recovery fails when re-replaying due to missing directory which\n> > > was removed in previous replay.\n> > > =============================================================================\n> > > Tom and Robert seem to be in agreement that parts of this patchset are good,\n> > > and that some parts are not. The thread has stalled and the patch no longer\n> > > apply, so unless someone feels like picking up the good pieces this seems like\n> > > a contender to close for now.\n> >\n> > Tom's feedback seems to have been acted on last November. And the last\n> > update in January was that it was passing CI now. Is this ready to\n> > commit now?\n> >\n> >\n> > > 2138: Incremental Materialized View Maintenance\n> > > ===============================================\n> > > The size of the\n> > > patchset and the length of the thread make it hard to gauge just far away it\n> > > is, maybe the author or a reviewer can summarize the current state and outline\n> > > what is left for it to be committable.\n> >\n> > There is an updated patch set as of February but I have the same\n> > difficulty wrapping my head around the amount of info here.\n> >\n> > Is this one really likely to be commitable in 15? If not I think we\n> > should move this to 16 now and concentrate on patches that will be\n> > commitable in this release.\n\nI think this patch set needs more reviews to be commitable in 15, so I\nreturned the target version to blank. I'll change it to 16 later.\n\n\n> >\n> > > 2218: Implement INSERT SET syntax\n> > > =================================\n> > > The author has kept this patch updated, and has seemingly updated according to\n> > > the review comments. Tom: do you have any opinions on whether the updated\n> > > version addresses your concerns wrt the SELECT rewrite?\n> >\n> > I don't see any discussion implying that Tom's concerns were met. I'm\n> > not exactly clear why Tom's concerns are real problems though --\n> > wouldn't it be a *good* thing if we have a more expressive syntax? But\n> > that's definitely what needs to be resolved before it can move ahead.\n> >\n> > So unless there's objections I'm going to update this to \"waiting on author\".\n> >\n> > --\n> > greg\n> \n> \n> \n> -- \n> greg\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Thu, 3 Mar 2022 18:11:44 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2022-03 Patch Triage Part 1a.i"
},
{
"msg_contents": "> On 3 Mar 2022, at 10:11, Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> I think this patch set needs more reviews to be commitable in 15, so I\n> returned the target version to blank. I'll change it to 16 later.\n\nI've added 16 as a target version in the CF app now.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 3 Mar 2022 13:50:44 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2022-03 Patch Triage Part 1a.i"
}
] |
[
{
"msg_contents": "The final commitfest of this release begins now.\n\nWhereas most commitfests are about getting feedback to authors so they\ncan advance the patch -- this one is about actually committing patches\nto wrap up the release.\n\nPlease when reviewing patches try to push yourself to make the\ndifficult call about whether a patch will actually be feasible to\ncommit in this commitfest. If not be up front about it and say so\nbecause we need to focus energy in this commitfest on patches that are\ncommittable in Postgres 15.\n\nThere are a lot of great patches in the commitfest that would really\nbe great to get committed!\n\n--\ngreg\n\n\n",
"msg_date": "Tue, 1 Mar 2022 11:25:19 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Commitfest 2022-03 Starts Now"
},
{
"msg_contents": "So it's 8 days into the commitfest. So far 3 patches have been committed:\n\n* parse/analyze API refactoring by Peter Eisentraut\n* FUNCAPI tuplestore helper function by Melanie Plagemen committed by\nMichael Paquier\n* Typo in pgbench messages by Kawamoto Masay committed by Tatsuo Ishii\n\n(There was also a flurry of five commits from Tom on Feb 28 including\none that needed to be backpatched subsequently)\n\nIn addition 5 patches have been moved moved to a future commitfest,\nwithdrawn, or rejected.\n\nSo that has removed 8 patches from the commitfest. There are now 189\n\"Needs Review\" and 24 \"Ready for Committer\" patches.\n\nOf those 24 \"Ready for Committer\" patches only 3 actually have\ncommitters assigned andonly 6 have had any emails since the beginning\nof the commitfest.\n\nIs there anything I can do to get committers assigned to these\npatches? Should I do a round-robin style assignment for any of these?\n\n\n",
"msg_date": "Wed, 9 Mar 2022 14:38:42 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest 2022-03 One Week in. 3 Commits 213 Patches Remaining"
},
{
"msg_contents": "On 3/9/22 13:38, Greg Stark wrote:\n> \n> Is there anything I can do to get committers assigned to these\n> patches? Should I do a round-robin style assignment for any of these?\n\nI don't think this is a good idea. Committers pick the patches they are \ngoing to commit.\n\nWhat prefer to do is bump any committers that have been involved in a \npatch thread to see if they are willing to commit it.\n\nRegards,\n-David\n\n\n\n",
"msg_date": "Wed, 9 Mar 2022 14:43:54 -0600",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2022-03 One Week in. 3 Commits 213 Patches Remaining"
},
{
"msg_contents": "On Wed, 9 Mar 2022 at 15:44, David Steele <david@pgmasters.net> wrote:\n>\n> On 3/9/22 13:38, Greg Stark wrote:\n> Should I do a round-robin style assignment for any of these?\n>\n> I don't think this is a good idea. Committers pick the patches they are\n> going to commit.\n>\n> What prefer to do is bump any committers that have been involved in a\n> patch thread to see if they are willing to commit it.\n\nWell yes, I suppose that's probably what I had in mind despite calling it that.\n\nBut I've been skimming the set of \"Ready for Committer\" patches and\nI'm a bit down on them. Many of them seem to mostly have gotten\nfeedback from committers already and the type of feedback that leads\nme to think it's ready for commit.\n\nI suppose people mark patches \"Ready for Committer\" when the level of\nfeedback they require is more in depth or more design feedback that\nthey think requires a committer even if it's not ready for commit.\n\nSo I'm going to go through the patches and ask the committers who have\nalready commented if they think the patch is on track to be committed\nthis release or should be pushed to the next commitfest.\n\n-- \ngreg\n\n\n",
"msg_date": "Wed, 9 Mar 2022 16:46:29 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest 2022-03 One Week in. 3 Commits 213 Patches Remaining"
},
{
"msg_contents": "On Wed, 9 Mar 2022 at 16:46, Greg Stark <stark@mit.edu> wrote:\n> Many of them seem to mostly have gotten\n> feedback from committers already and the type of feedback that leads\n> me to think it's ready for commit.\n\nEr. I meant *not* the type of feedback that leads me to think it's\nready for commit. I mostly see patches with early design feedback or\neven feedback questioning whether they're good ideas at all.\n\n-- \ngreg\n\n\n",
"msg_date": "Wed, 9 Mar 2022 16:57:24 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest 2022-03 One Week in. 3 Commits 213 Patches Remaining"
}
] |
[
{
"msg_contents": "Hi,\n\nFollowing are the files related to PostgreSQL statistics collector,\nbackend status reporting and command progress reporting.\n\npgstat.[ch] - Definitions for the PostgreSQL statistics collector daemon.\nbackend_status.[ch] - Definitions related to backend status reporting.\nbackend_progress.[ch] - Definitions related to command progress reporting.\nprogress.h - Constants used with the progress reporting facilities\ndefined in backend_status.h\npgstatfuncs.c - Functions for accessing the statistics collector data\n\nThere is a scope for some refactoring here even though the backend\nstatus and command progress related code is separated from pgstat.c.\n\n1) There is a lot of confusion between progress.h and\nbackend_progress.h. Why 2 header files required for the same\nfunctionality? I feel progress.h can be merged with\nbackend_progress.h.\n2) The access functions related to statistics collector are included\nin pgstatfuncs.c file. This also contains the access functions related\nto backend status and backend progress. I feel the access function\nrelated to backend status should go in backend_status.c and the access\nfunctions related to backend progress should go in backend progress.c\nfile. If the size grows in future then we can create new files for\naccess functions (like backend_status_funcs.c and\nbackend_progress_funcs.c).\n3) There is a dependency between backend status and command progress\nreporting but the corresponding functions are separated into 2\ndifferent files. It is better to define a new structure named\n'PgBackendProgress' in backend_progress.h which consists of\n'st_progress_command', 'st_progress_command_target' and\n'st_progress_param'. Include a variable of type 'PgBackendProgress' as\na member of 'PgBackendStatus' structure.\n\nPlease share your thoughts.\nIf required, I would like to work on the patch.\n\nThanks & Regards,\nNitin Jadhav\n\n\n",
"msg_date": "Tue, 1 Mar 2022 23:42:02 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Refactor statistics collector, backend status reporting and command\n progress reporting"
}
] |
[
{
"msg_contents": "> 2096: psql - add SHOW_ALL_RESULTS option\n> ========================================\n> Peter posted an updated version of Fabiens patch about a month ago (which at\n> this point no longer applies) fixing a few issues, but also point at old review\n> comments still unaddressed. Since this was pushed, but had to be reverted, I\n> assume there is a desire for the feature but it seems to need more work still.\n\nIt looks like Peter and Fabien were debating the merits of a libpq\nchange and probably that won't happen this release cycle. Is there a\nkernel of psql functionality that can be extracted from this without\nthe libpq change in this release cycle or should it wait until we add\nthe functionality to libpq?\n\nIf it's the latter then perhaps we should move this to 16?\n\n\n> 1651: GROUP BY optimization\n> ===========================\n> This is IMO a desired optimization, and after all the heavy lifting by Tomas\n> the patch seems to be in pretty good shape.\n\nThis is two patches and it sounds like the first one is ready for\ncommitter whereas the second one is less clear. Or is the second one\nmeant to be an alternative for the first one?\n\n>\n> 2377: pg_ls_* functions for showing metadata and recurse (pg_ls_tmpdir to show\n> shared filesets)\n> ==============================================================================\n> The question of what to do with lstat() on Windows is still left unanswered,\n> but the patchset has been split to up to be able to avoid it. Stephen and Tom,\n> having done prior reviews do you have any thoughts on this?\n\nIs this still blocked on lstat for windows? 
I couldn't tell, is there\nconsensus on a behaviour for windows even if that just means failing\nor returning partial results on windows?\n\nOther than that it seems like there's a lot of this patch that has\npositive reviews and is ready for committing.\n\n\n> 2349: global temporary table\n> ============================\n> GTT has been up for discussion numerous times in the past, and I can't judge\n> whether this proposal has a better chance than previous ones. I do note the\n> patch has a number of crashes reported lately, and no reviews from senior\n> contributors in a while, making it seem unlikely to be committed in this CF.\n> Since the patch is very big, can it be teased apart and committed separately\n> for easier review?\n\nI think Andres's review decisively makes it clear this is in an\nuncommittable state.\n\nhttps://www.postgresql.org/message-id/20220225074500.sfizxbmlrj2s6hp5%40alap3.anarazel.de\nhttps://www.postgresql.org/message-id/20220227041304.mnimeqkhwktrjyht%40alap3.anarazel.de\n\nIt's definitely not going to make it this release and will probably\nneed a significant amount of time next release cycle. IMHO dividing it\nup into smaller features does seem like it would be more effective at\ngetting things committed.\n\nShould we mark this returned with feedback or just move it to the next\ncommitfest as waiting on author?\n\n\n> 2433: Erase the distinctClause if the result is unique by definition\n> ====================================================================\n> (parts of) The approach taken in this patch has been objected against in favor\n> of work that Tom has proposed. Until that work materialize this patch is\n> blocked, and thus I think we are better of closing it and re-opening it when it\n> gets unstuck. Unless Tom has plans to hack on this shortly?\n\nUgh. This is a problematic dynamic. Tom has a different idea of what\ndirection to take this but hasn't had a chance to work on it. So\nwhat's Andy Fan supposed to do here? 
He can't read Tom's mind and\nnobody else can really help him. Ultimately we all have limited time\nso this is a thing that will happen but is there anything we can do to\nresolve it in this case?\n\nWe definitely shouldn't spend lots of time on this patch unless we're\ngoing to be ok going ahead without Tom's version of it. Is this\nsomething we can do using the Andy's data structure for now and change\nin the future?\n\nIt looks like the Skip Scan patch was related to this work in some\nway? Is it blocked on it?\n\n-- \ngreg\n\n\n",
"msg_date": "Tue, 1 Mar 2022 16:12:25 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Commitfest 2022-03 Patch Triage Part 1b"
},
{
"msg_contents": "Greg Stark <stark@mit.edu> writes:\n>> 2433: Erase the distinctClause if the result is unique by definition\n>> ====================================================================\n>> (parts of) The approach taken in this patch has been objected against in favor\n>> of work that Tom has proposed. Until that work materialize this patch is\n>> blocked, and thus I think we are better of closing it and re-opening it when it\n>> gets unstuck. Unless Tom has plans to hack on this shortly?\n\n> Ugh. This is a problematic dynamic. Tom has a different idea of what\n> direction to take this but hasn't had a chance to work on it. So\n> what's Andy Fan supposed to do here? He can't read Tom's mind and\n> nobody else can really help him. Ultimately we all have limited time\n> so this is a thing that will happen but is there anything we can do to\n> resolve it in this case?\n\n> We definitely shouldn't spend lots of time on this patch unless we're\n> going to be ok going ahead without Tom's version of it. Is this\n> something we can do using the Andy's data structure for now and change\n> in the future?\n\n> It looks like the Skip Scan patch was related to this work in some\n> way? Is it blocked on it?\n\nI did promise some time ago to get involved in the skip scan work.\nI've so far failed to make good on that promise, but I will make\nit a high priority to look at the area during this CF.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 01 Mar 2022 16:29:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2022-03 Patch Triage Part 1b"
},
{
"msg_contents": "Hello Greg,\n\n>> Peter posted an updated version of Fabiens patch about a month ago (which at\n>> this point no longer applies)\n\nAttached a v15 which is a rebase, after some minor changes in the source \nand some new test cases added (good!).\n\n>> fixing a few issues, but also point at old review comments still \n>> unaddressed.\n\nISTM that all comments have been addressed. However, the latest patch \nraises issues about work around libpq corner case behaviors which are \nreally just that, corner cases.\n\n>> Since this was pushed, but had to be reverted, I assume there is a \n>> desire for the feature but it seems to need more work still.\n\n\n> It looks like Peter and Fabien were debating the merits of a libpq\n> change and probably that won't happen this release cycle.\n\n\nISTM these are really very minor issues that could be resolved in this \ncycle.\n\n> Is there a kernel of psql functionality that can be extracted from this \n> without the libpq change in this release cycle or should it wait until \n> we add the functionality to libpq?\n\nThe patch can wait for the issues to be resolved one way or an other \nbefore proceeding, *or* it can be applied, maybe with a small tweak, and \nthe libpq issues be fixed separately.\n\nFor a reminder, there are two actual \"issues\"features\" or \"bug\" with \nlibpq, which are made visible by the patch, but are pre-existing:\n\n(1) under some circumstances a infinite stream of empty results is \nreturned, that has to be filtered out manually.\n\n(2) under some special circumstances some error messages may be output \ntwice because of when libpq decides to reset the error buffer.\n\n(1) has been there for ever, and (2) is under investigation to possibly \nimprove the situation, so as to remove a hack in the code to avoid it.\nThe alternative which IMO would be ok is to admit that under some very \nspecial conditions the same error message may be output twice, and if it \nis resolved later on then 
fine.\n\n> If it's the latter then perhaps we should move this to 16?\n\nI'm not that pessimistic! I may be proven wrong:-)\n\n-- \nFabien.",
"msg_date": "Thu, 3 Mar 2022 08:24:27 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2022-03 Patch Triage Part 1b"
},
{
"msg_contents": "Just FYI. Better to follow up to the thread for the patch that's\nalready in the CF. Otherwise your patch will missed by someone who\nlooks at the CF entry to see the latest patch.\n\n\n",
"msg_date": "Thu, 3 Mar 2022 14:10:07 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest 2022-03 Patch Triage Part 1b"
},
{
"msg_contents": "\n> Just FYI. Better to follow up to the thread for the patch that's\n> already in the CF. Otherwise your patch will missed by someone who\n> looks at the CF entry to see the latest patch.\n\nIndeed. Done.\n\n-- \nFabien.\n\n\n",
"msg_date": "Fri, 4 Mar 2022 14:49:14 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2022-03 Patch Triage Part 1b"
}
] |
[
{
"msg_contents": "Hi.\n\nPSA a PG docs patch that is associated with the logical replication\nRow Filters feature which was recently pushed [1].\n\nThis patch introduces a new \"Filtering\" page to give a common place\nwhere all kinds of logical replication filtering can be described.\n(e.g. It is envisaged that a \"Column Filters\" section can be added\nsometime in the future).\n\nThe main new content for this page is the \"Row Filters\" section. This\ngives a full overview of the new row filter feature, plus examples.\n\n------\n[1] https://github.com/postgres/postgres/commit/52e4f0cd472d39d07732b99559989ea3b615be78\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 2 Mar 2022 15:47:46 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "PG DOCS - logical replication filtering"
},
{
"msg_contents": "Hi Peter,\n\n> PSA a PG docs patch that is associated with the logical replication\n> Row Filters feature which was recently pushed [1].\n\nThe patch looks mostly OK, but I have several nitpicks.\n\n```\n By default, all data from all published tables will be replicated to the\n appropriate subscribers.\n[...]\n By default, all operation types are replicated.\n```\n\nThe second sentence seems to be redundant.\n\n```\n (This feature is available since PostgreSQL 15)\n```\n\nPlease correct me if I'm wrong, but I don't think we say that in the docs.\nWhen the user opens the documentation for version X he or she sees\neverything that is available in this version.\n\n```\n31.3. Filtering\n[...]\nThere are 3 different ways to filter what data gets replicated.\n31.3.1. Operation Filters\n[...]\n31.3.2. Row Filters\n[...]\n```\nIt looks like there are 2 different ways after all.\n\nI see that a large part of the documentation is commented and marked as TBA\n(Column Filters, Combining Different Kinds of Filters). Could you please\nclarify if it's a work-in-progress patch? If it's not, I believe the\ncommented part should be removed before committing.\n\n-- \nBest regards,\nAleksander Alekseev\n\nHi Peter,> PSA a PG docs patch that is associated with the logical replication> Row Filters feature which was recently pushed [1].The patch looks mostly OK, but I have several nitpicks.``` By default, all data from all published tables will be replicated to the appropriate subscribers.[...] By default, all operation types are replicated.```The second sentence seems to be redundant.``` (This feature is available since PostgreSQL 15)```Please correct me if I'm wrong, but I don't think we say that in the docs. When the user opens the documentation for version X he or she sees everything that is available in this version.```31.3. Filtering[...]There are 3 different ways to filter what data gets replicated.31.3.1. Operation Filters[...]31.3.2. 
Row Filters[...]```It looks like there are 2 different ways after all.I see that a large part of the documentation is commented and marked as TBA (Column Filters, Combining Different Kinds of Filters). Could you please clarify if it's a work-in-progress patch? If it's not, I believe the commented part should be removed before committing.-- Best regards,Aleksander Alekseev",
"msg_date": "Wed, 2 Mar 2022 12:07:09 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "Hi again,\n\n> The second sentence seems to be redundant.\n\nActually, I'm wrong on this one.\n\n>\n-- \nBest regards,\nAleksander Alekseev\n\nHi again,> The second sentence seems to be redundant.Actually, I'm wrong on this one.\n-- Best regards,Aleksander Alekseev",
"msg_date": "Wed, 2 Mar 2022 12:14:02 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On Wed, Mar 2, 2022 at 2:37 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n>\n> I see that a large part of the documentation is commented and marked as TBA (Column Filters, Combining Different Kinds of Filters). Could you please clarify if it's a work-in-progress patch? If it's not, I believe the commented part should be removed before committing.\n>\n\nI think we can remove any Column Filters related information\n(placeholders), if that patch gets committed, we can always extend the\nexisting docs.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 2 Mar 2022 14:55:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "Hi hackers,\n\n> I see that a large part of the documentation is commented and marked as\n> TBA (Column Filters, Combining Different Kinds of Filters). Could you\n> please clarify if it's a work-in-progress patch? If it's not, I believe the\n> commented part should be removed before committing.\n> >\n>\n> I think we can remove any Column Filters related information\n> (placeholders), if that patch gets committed, we can always extend the\n> existing docs.\n>\n\nHere is an updated version of the patch.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Wed, 2 Mar 2022 12:42:52 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On 02.03.22 05:47, Peter Smith wrote:\n> This patch introduces a new \"Filtering\" page to give a common place\n> where all kinds of logical replication filtering can be described.\n> (e.g. It is envisaged that a \"Column Filters\" section can be added\n> sometime in the future).\n\nThe pending feature to select a subset of table columns to replicate is \nnot \"column filtering\". The thread might still be still called that, \nbut we have changed the patch to not use that terminology.\n\nFiltering is a dynamic action based on actual values. The row filtering \nfeature does that. The column list feature is a static DDL-time \nconfiguration. It is no more filtering than specifying a list of tables \nin a publication is table filtering.\n\nSo please consider organizing the documentation differently to not \ncreate this confusion.\n\n\n\n",
"msg_date": "Wed, 2 Mar 2022 15:29:59 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On Wed, Mar 2, 2022 at 8:43 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n...\n> Here is an updated version of the patch.\n\nThanks for your review comments and fixes. The updated v2 patch looks\ngood to me.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 3 Mar 2022 11:52:16 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On Wed, Mar 2, 2022 at 8:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 2, 2022 at 2:37 PM Aleksander Alekseev\n> <aleksander@timescale.com> wrote:\n> >\n> >\n> > I see that a large part of the documentation is commented and marked as TBA (Column Filters, Combining Different Kinds of Filters). Could you please clarify if it's a work-in-progress patch? If it's not, I believe the commented part should be removed before committing.\n> >\n>\n> I think we can remove any Column Filters related information\n> (placeholders), if that patch gets committed, we can always extend the\n> existing docs.\n>\n\n+1\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Thu, 3 Mar 2022 11:56:27 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On Wed, Mar 2, 2022 at 8:00 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 02.03.22 05:47, Peter Smith wrote:\n> > This patch introduces a new \"Filtering\" page to give a common place\n> > where all kinds of logical replication filtering can be described.\n> > (e.g. It is envisaged that a \"Column Filters\" section can be added\n> > sometime in the future).\n>\n> The pending feature to select a subset of table columns to replicate is\n> not \"column filtering\". The thread might still be still called that,\n> but we have changed the patch to not use that terminology.\n>\n> Filtering is a dynamic action based on actual values. The row filtering\n> feature does that. The column list feature is a static DDL-time\n> configuration. It is no more filtering than specifying a list of tables\n> in a publication is table filtering.\n>\n> So please consider organizing the documentation differently to not\n> create this confusion.\n>\n\n+1. I think Row Filters can directly be a section just before\nConflicts on the logical replication page [1].\n\nSome comments on the patch:\n1. I think we can extend/add the example to have filters on more than\none table. This has been noticed multiple times during development\nthat people are not very clear on it.\n2. I think we can add an example or two for row filters actions (like\nInsert, Update).\n3.\n Publications can choose to limit the changes they produce to\n any combination of <command>INSERT</command>, <command>UPDATE</command>,\n- <command>DELETE</command>, and <command>TRUNCATE</command>,\nsimilar to how triggers are fired by\n- particular event types. By default, all operation types are replicated.\n+ <command>DELETE</command>, and <command>TRUNCATE</command> by using\n+ <quote>operation filters</quote>.\n\nBy this, one can imply that row filters are used for Truncate as well\nbut that is not true. 
I know that that patch later specifies that \"Row\nfilters have no effect for <command>TRUNCATE</command> commands.\" but\nthe above modification is not very clear.\n\n[1] - https://www.postgresql.org/docs/devel/logical-replication.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 3 Mar 2022 08:45:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On Thu, Mar 3, 2022 at 2:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 2, 2022 at 8:00 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> > On 02.03.22 05:47, Peter Smith wrote:\n> > > This patch introduces a new \"Filtering\" page to give a common place\n> > > where all kinds of logical replication filtering can be described.\n> > > (e.g. It is envisaged that a \"Column Filters\" section can be added\n> > > sometime in the future).\n> >\n> > The pending feature to select a subset of table columns to replicate is\n> > not \"column filtering\". The thread might still be still called that,\n> > but we have changed the patch to not use that terminology.\n> >\n> > Filtering is a dynamic action based on actual values. The row filtering\n> > feature does that. The column list feature is a static DDL-time\n> > configuration. It is no more filtering than specifying a list of tables\n> > in a publication is table filtering.\n> >\n> > So please consider organizing the documentation differently to not\n> > create this confusion.\n> >\n>\n> +1. I think Row Filters can directly be a section just before\n> Conflicts on the logical replication page [1].\n>\n\nOK. I will reorganize the page as suggested, and also attend to the\nother comments below.\n\n> Some comments on the patch:\n> 1. I think we can extend/add the example to have filters on more than\n> one table. This has been noticed multiple times during development\n> that people are not very clear on it.\n> 2. I think we can add an example or two for row filters actions (like\n> Insert, Update).\n> 3.\n> Publications can choose to limit the changes they produce to\n> any combination of <command>INSERT</command>, <command>UPDATE</command>,\n> - <command>DELETE</command>, and <command>TRUNCATE</command>,\n> similar to how triggers are fired by\n> - particular event types. 
By default, all operation types are replicated.\n> + <command>DELETE</command>, and <command>TRUNCATE</command> by using\n> + <quote>operation filters</quote>.\n>\n> By this, one can imply that row filters are used for Truncate as well\n> but that is not true. I know that that patch later specifies that \"Row\n> filters have no effect for <command>TRUNCATE</command> commands.\" but\n> the above modification is not very clear.\n>\n> [1] - https://www.postgresql.org/docs/devel/logical-replication.html\n>\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Thu, 3 Mar 2022 15:10:36 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "PSA patch v3 to address all comments received so far.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Fri, 4 Mar 2022 14:41:58 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On Thu, Mar 3, 2022 at 2:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 2, 2022 at 8:00 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> > On 02.03.22 05:47, Peter Smith wrote:\n> > > This patch introduces a new \"Filtering\" page to give a common place\n> > > where all kinds of logical replication filtering can be described.\n> > > (e.g. It is envisaged that a \"Column Filters\" section can be added\n> > > sometime in the future).\n> >\n> > The pending feature to select a subset of table columns to replicate is\n> > not \"column filtering\". The thread might still be still called that,\n> > but we have changed the patch to not use that terminology.\n> >\n> > Filtering is a dynamic action based on actual values. The row filtering\n> > feature does that. The column list feature is a static DDL-time\n> > configuration. It is no more filtering than specifying a list of tables\n> > in a publication is table filtering.\n> >\n> > So please consider organizing the documentation differently to not\n> > create this confusion.\n> >\n>\n> +1. I think Row Filters can directly be a section just before\n> Conflicts on the logical replication page [1].\n\nDone as suggested in v3. [1]\n\n>\n> Some comments on the patch:\n> 1. I think we can extend/add the example to have filters on more than\n> one table. This has been noticed multiple times during development\n> that people are not very clear on it.\n\nAdded example in v3 [1]\n\n> 2. I think we can add an example or two for row filters actions (like\n> Insert, Update).\n\nAdded examples of INSERT and UPDATE in v3 [1]\n\n> 3.\n> Publications can choose to limit the changes they produce to\n> any combination of <command>INSERT</command>, <command>UPDATE</command>,\n> - <command>DELETE</command>, and <command>TRUNCATE</command>,\n> similar to how triggers are fired by\n> - particular event types. 
By default, all operation types are replicated.\n> + <command>DELETE</command>, and <command>TRUNCATE</command> by using\n> + <quote>operation filters</quote>.\n>\n> By this, one can imply that row filters are used for Truncate as well\n> but that is not true. I know that that patch later specifies that \"Row\n> filters have no effect for <command>TRUNCATE</command> commands.\" but\n> the above modification is not very clear.\n>\n\nFixed in v3 [1]. Restored original text, and added a note about row filters.\n\n------\n[1] https://www.postgresql.org/message-id/CAHut%2BPtwp0FscVpmMjHLV6_%3DSHR5ndwvWdX_gb41_3H2UA9ecA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Fri, 4 Mar 2022 14:47:56 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On Fri, Mar 4, 2022, at 12:41 AM, Peter Smith wrote:\n> PSA patch v3 to address all comments received so far.\nPeter,\n\nI started reviewing this documentation for row filter and I noticed that I was\nchanging too much lines to submit a patch on the top of it. Hence, I'm\nattaching a new version based on this one.\n\nHere as some of the changes that I did:\n\n* links: I renamed the ids to use \"logical-replication-row-filter\" instead of\n \"logical-replication-rf\" because it is used in the anchors. A compound word\n is better than an abbreviation.\n* titles: I changed all titles because of some stylish issue (words are\n generally capitalized) or because it reads better.\n* sections: I moved the \"Restrictions\" section to the top but it seems\n important than the other sections. I removed the \"psql\" section. The psql\n commands are shown in the \"Examples\" section so I don't think we should\n provide a section for it.\n* sentences: I expanded several sentences to provide details about the specific\n topic. I also reordered some sentences and removed some duplicated sentences.\n* replica identity: I added a link to replica identity. It is a key concept for\n row filter.\n* transformations: I added a few sentences explaining when/why a transformation\n is required. I removed the \"Cases\" part (same as in the source code) because\n it is redundant with the new sentences.\n* partitioned table: the title should be _partitioned_ table. I replaced the\n bullets with sentences in the same paragraph.\n* initial data sync: I removed the \"subscriber\" from the section title. When\n you say \"initial data synchronization\" it seems clear we're talking about the\n subscriber. I also add a sentence saying why pre-15 does not consider row\n filters.\n* combining row filters: I renamed the section and decided to remove \"for the\n same table\". When reading the first sentences from this section, it is clear\n that row filtering is per table. 
So if you are combining multiple row\n filters, it should be for the same table. I also added a sentence saying why\n some clauses make the row filter irrelevant.\n* examples: I combined the psql commands that shows row filter information\n together (\\dRp+ and \\d). I changed to connection string to avoid \"localhost\".\n Why? It does not seem a separate service (there is no port) and setup pub/sub\n in the same cluster requires additional steps. It is better to illustrate\n different clusters (at least it seems so since we don't provide details from\n publisher). I changed a value in an UPDATE because both UPDATEs use 999.\n\nWe could probably reduce the number of rows in the example but I didn't bother\nto remove them.\n\nIt seems we can remove some sentences from the CREATE PUBLICATION because we\nhave a new section that explains all of it. I think the link that was added by\nthis patch is sufficient.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 10 Mar 2022 20:06:38 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On Fri, Mar 11, 2022 at 9:37 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Fri, Mar 4, 2022, at 12:41 AM, Peter Smith wrote:\n>\n> PSA patch v3 to address all comments received so far.\n>\n> Peter,\n>\n> I started reviewing this documentation for row filter and I noticed that I was\n> changing too much lines to submit a patch on the top of it. Hence, I'm\n> attaching a new version based on this one.\n\nSorry for my long delay in replying. I have been away.\n\nThanks very much for the updated patch with all your suggestions.\nThere were many changes in your v4. I have not merged everything\nexactly, but I did take the majority of your suggestions on-board in\nthe attached v5.\n\nI responded below to each change.\n\n>\n> Here as some of the changes that I did:\n>\n> * links: I renamed the ids to use \"logical-replication-row-filter\" instead of\n> \"logical-replication-rf\" because it is used in the anchors. A compound word\n> is better than an abbreviation.\n\nOK, done as suggested.\n\n> * titles: I changed all titles because of some stylish issue (words are\n> generally capitalized) or because it reads better.\n\nOK, most titles changed as suggested.\n\n> * sections: I moved the \"Restrictions\" section to the top but it seems\n> important than the other sections. I removed the \"psql\" section. The psql\n> commands are shown in the \"Examples\" section so I don't think we should\n> provide a section for it.\n\nOK, moved the \"Restrictions\" section and removed the \"psql\" section.\n\n> * sentences: I expanded several sentences to provide details about the specific\n> topic. I also reordered some sentences and removed some duplicated sentences.\n\nI did not take everything exactly as in your v4, but wherever your\nsuggested changes added some more information I tried to include them\nusing similar wording to yours.\n\n> * replica identity: I added a link to replica identity. 
It is a key concept for\n> row filter.\n\nOK, done as suggested.\n\n> * transformations: I added a few sentences explaining when/why a transformation\n> is required. I removed the \"Cases\" part (same as in the source code) because\n> it is redundant with the new sentences.\n\nOK, I incorporated most of your new descriptions for the\ntransformations, however I still wanted to keep the summary of \"cases\"\npart because IMO it makes the rules so much clearer than just having\nthe text descriptions.\n\n> * partitioned table: the title should be _partitioned_ table. I replaced the\n> bullets with sentences in the same paragraph.\n\nOK. The title was changed, but I kept the bullets.\n\n> * initial data sync: I removed the \"subscriber\" from the section title. When\n> you say \"initial data synchronization\" it seems clear we're talking about the\n> subscriber. I also add a sentence saying why pre-15 does not consider row\n> filters.\n\nOK. Title is changed. Also I added your sentence about the pre-15. But\nI kept all the HTML note rendering that you'd removed in v4. I think\nthis information was important enough to be a \"note\" instead of just\nburied in a paragraph.\n\n> * combining row filters: I renamed the section and decided to remove \"for the\n> same table\". When reading the first sentences from this section, it is clear\n> that row filtering is per table. So if you are combining multiple row\n> filters, it should be for the same table. I also added a sentence saying why\n> some clauses make the row filter irrelevant.\n\nOK. Title is changed. Your extra sentence was added.\n\n> * examples: I combined the psql commands that shows row filter information\n> together (\\dRp+ and \\d). I changed to connection string to avoid \"localhost\".\n> Why? It does not seem a separate service (there is no port) and setup pub/sub\n> in the same cluster requires additional steps. 
It is better to illustrate\n> different clusters (at least it seems so since we don't provide details from\n> publisher). I changed a value in an UPDATE because both UPDATEs use 999.\n>\n\nI did not combine those \\dRp+ and \\d. I kept them separate so I could\nwrite some separate notes about them.\n\nI'm unsure about the connection change. This documentation is for \"Row\nFilters\" so the examples were only intended to demonstrate row filters\nand nothing else. I wanted to have a trivial connection such that a\nuser can just cut/paste directly from the example and get something\nthey can immediately test without having to change it. I don't mind\nchanging this later but probably I'd like to get some other opinions\nabout it first.\n\nAbout the UPDATE (555 value) - OK, I changed that value as you suggested.\n\n> We could probably reduce the number of rows in the example but I didn't bother\n> to remove them.\n>\n> It seems we can remove some sentences from the CREATE PUBLICATION because we\n> have a new section that explains all of it. I think the link that was added by\n> this patch is sufficient.\n>\n\nYeah, maybe some sentences can be removed. But even though some\ninformation is duplicated it might be useful to have a few things\nstill mentioned on the CREATE PUBLICATION page just so the user does\nnot have to chase links too much. I will wait to see if other people\nhave an opinion about it before removing any content from that page.\nMeanwhile, I have made the create_publication.sgml identical to your\nv4.\n\n~~~\n\nThere should be much fewer v4/v5 differences now although there might\nbe a few things I missed that you want to re-comment on. Hopefully, it\nwill now be easier to post review comments as BEFORE/AFTER text\nfragments - that would help other people to see the suggestions more\neasily and give their opinions too.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.",
"msg_date": "Thu, 24 Mar 2022 16:47:46 +1030",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On Thu, Mar 24, 2022 at 11:48 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n\nReview comments:\n===============\n1.\n+ The <literal>WHERE</literal> clause expression is evaluated with the same\n+ role used for the replication connection. i.e. the role specified in the\n+ <literal>CONNECTION</literal> clause of the <xref\nlinkend=\"sql-createsubscription\"/>.\n\nCan we use () around i.e. sentence? It looks bit odd to me now.\nThe <literal>WHERE</literal> clause expression is evaluated with the\nsame role used for the replication connection (i.e., the role\nspecified in the <literal>CONNECTION</literal> clause of the <xref\nlinkend=\"sql-createsubscription\"/>).\n\n2.\n+ <para>\n+ Whenever an <command>UPDATE</command> is processed, the row filter\n+ expression is evaluated for both the old and new row (i.e. before\n+ and after the data is updated).\n\nCan we write the below part slightly differently?\nBefore:\n(i.e. before and after the data is updated).\nAfter:\n(i.e the data before and after the update).\n\n3.\n+ <para>\n+ Whenever an <command>UPDATE</command> is processed, the row filter\n+ expression is evaluated for both the old and new row (i.e. 
before\n+ and after the data is updated).\n+ </para>\n+\n+ <para>\n+ If both evaluations are <literal>true</literal>, it replicates the\n+ <command>UPDATE</command> change.\n+ </para>\n+\n+ <para>\n+ If both evaluations are <literal>false</literal>, it doesn't replicate\n+ the change.\n+ </para>\n\nI think we can combine these three separate paragraphs.\n\n4.\n+ <para>\n+ If the publication contains a partitioned table, the publication parameter\n+ <literal>publish_via_partition_root</literal> determines which row filter\n+ is used.\n+ <itemizedlist>\n+\n+ <listitem>\n+ <para>\n+ If <literal>publish_via_partition_root</literal> is\n<literal>false</literal>\n+ (default), each <emphasis>partition's</emphasis> row filter is used.\n+ </para>\n+ </listitem>\n+\n+ <listitem>\n+ <para>\n+ If <literal>publish_via_partition_root</literal> is\n<literal>true</literal>,\n+ the <emphasis>root partitioned table's</emphasis> row filter is used.\n+ </para>\n+ </listitem>\n+\n+ </itemizedlist>\n+ </para>\n\nI think we can combine this into single para as Euler had in his version.\n\n5.\n+ <note>\n+ <para>\n+ Publication <literal>publish</literal> operations are ignored\nwhen copying pre-existing table data.\n+ </para>\n+ </note>\n+\n+ <note>\n+ <para>\n+ If the subscriber is in a release prior to 15, copy pre-existing data\n+ doesn't use row filters even if they are defined in the publication.\n+ This is because old releases can only copy the entire table data.\n+ </para>\n+ </note>\n\nI don't see the need for the first <note> here, the second one seems\nto convey it.\n\n6.\n+ <para>\n+ Create some tables to be used in the following examples.\n+<programlisting>\n+testpub=# CREATE TABLE t1(a int, b int, c text, primary key(a,c));\n+CREATE TABLE\n+testpub=# CREATE TABLE t2(d int primary key, e int, f int);\n+CREATE TABLE\n+testpub=# CREATE TABLE t3(g int primary key, h int, i int);\n+CREATE TABLE\n+</programlisting>\n+ </para>\n+\n+ <para>\n+ Create some publications.\n+ </para>\n+ 
<para>\n+ - notice publication <literal>p1</literal> has 1 table with a row filter.\n+ </para>\n+ <para>\n+ - notice publication <literal>p2</literal> has 2 tables, one without a row\n+ filter, and one with a row filter.\n+ </para>\n+ <para>\n+ - notice publication <literal>p3</literal> has 2 tables, both\nwith row filters.\n+<programlisting>\n+testpub=# CREATE PUBLICATION p1 FOR TABLE t1 WHERE (a > 5 AND c = 'NSW');\n+CREATE PUBLICATION\n+testpub=# CREATE PUBLICATION p2 FOR TABLE t1, t2 WHERE (e = 99);\n+CREATE PUBLICATION\n+testpub=# CREATE PUBLICATION p3 FOR TABLE t2 WHERE (d = 10), t3 WHERE (g = 10);\n+CREATE PUBLICATION\n+</programlisting>\n+ </para>\n\nI think it is better to use the corresponding content from Euler's version.\n\n7.\n+ <para>\n+ The PSQL command <command>\\d</command> shows what publications the table is\n+ a member of, as well as that table's row filter expression (if defined) in\n+ those publications.\n+ </para>\n+ <para>\n+ - notice table <literal>t1</literal> is a member of 2 publications, but\n+ has a row filter only in <literal>p1</literal>.\n+ </para>\n+ <para>\n+ - notice table <literal>t2</literal> is a member of 2 publications, and\n+ has a different row filter in each of them.\n\nThis looks unnecessary to me. Let's remove this part.\n\n8.\n+ <para>\n+ - notice that only the rows satisfying the <literal>t1 WHERE</literal>\n+ clause of publication <literal>p1</literal> are replicated.\n\nAgain, it is better to use Euler's version for this and at the place,\nhe had in his version. Similarly, adjust other notices if any like\nthis one.\n\n9. I suggest adding an example for partition tables showing how the\nrow filter is used based on the 'publish_via_partition_root'\nparameter.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 6 Apr 2022 16:27:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "PSA patch v6 to address some of Amit's review comments [1].\n\n------\n[1] https://www.postgresql.org/message-id/CAA4eK1JdwQQsxa%2BzpsBW5rCxEfXopYx381nwcCyeXk6mpF8ChQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Fri, 8 Apr 2022 15:53:38 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On Wed, Apr 6, 2022 at 8:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Mar 24, 2022 at 11:48 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n>\n> Review comments:\n> ===============\n> 1.\n> + The <literal>WHERE</literal> clause expression is evaluated with the same\n> + role used for the replication connection. i.e. the role specified in the\n> + <literal>CONNECTION</literal> clause of the <xref\n> linkend=\"sql-createsubscription\"/>.\n>\n> Can we use () around i.e. sentence? It looks bit odd to me now.\n> The <literal>WHERE</literal> clause expression is evaluated with the\n> same role used for the replication connection (i.e., the role\n> specified in the <literal>CONNECTION</literal> clause of the <xref\n> linkend=\"sql-createsubscription\"/>).\n\nOK. Modified in v6 [1].\n\n> 2.\n> + <para>\n> + Whenever an <command>UPDATE</command> is processed, the row filter\n> + expression is evaluated for both the old and new row (i.e. before\n> + and after the data is updated).\n>\n> Can we write the below part slightly differently?\n> Before:\n> (i.e. before and after the data is updated).\n> After:\n> (i.e the data before and after the update).\n\nOK. Modified in v6 [1].\n\n> 3.\n> + <para>\n> + Whenever an <command>UPDATE</command> is processed, the row filter\n> + expression is evaluated for both the old and new row (i.e. before\n> + and after the data is updated).\n> + </para>\n> +\n> + <para>\n> + If both evaluations are <literal>true</literal>, it replicates the\n> + <command>UPDATE</command> change.\n> + </para>\n> +\n> + <para>\n> + If both evaluations are <literal>false</literal>, it doesn't replicate\n> + the change.\n> + </para>\n>\n> I think we can combine these three separate paragraphs.\n\nThe first sentence is the explanation, then there are the 3 separate\n“If …” cases mentioned. It doesn’t seem quite right to group just the\nfirst 2 “if…” cases into one paragraph, while leaving the 3rd one\nseparated. 
OTOH combining everything into one big paragraph seems\nworse. Please confirm if you still want this changed.\n\n> 4.\n> + <para>\n> + If the publication contains a partitioned table, the publication parameter\n> + <literal>publish_via_partition_root</literal> determines which row filter\n> + is used.\n> + <itemizedlist>\n> +\n> + <listitem>\n> + <para>\n> + If <literal>publish_via_partition_root</literal> is\n> <literal>false</literal>\n> + (default), each <emphasis>partition's</emphasis> row filter is used.\n> + </para>\n> + </listitem>\n> +\n> + <listitem>\n> + <para>\n> + If <literal>publish_via_partition_root</literal> is\n> <literal>true</literal>,\n> + the <emphasis>root partitioned table's</emphasis> row filter is used.\n> + </para>\n> + </listitem>\n> +\n> + </itemizedlist>\n> + </para>\n>\n> I think we can combine this into single para as Euler had in his version.\n\nWe can do it, but I am not sure if your review was looking at the\nrendered HTML or just looking at the SGML text? IMO using bullets here\nended up being more readable (it is also consistent with other bullet\nusages on this page). Please confirm if you still want this changed.\n\n> 5.\n> + <note>\n> + <para>\n> + Publication <literal>publish</literal> operations are ignored\n> when copying pre-existing table data.\n> + </para>\n> + </note>\n> +\n> + <note>\n> + <para>\n> + If the subscriber is in a release prior to 15, copy pre-existing data\n> + doesn't use row filters even if they are defined in the publication.\n> + This is because old releases can only copy the entire table data.\n> + </para>\n> + </note>\n>\n> I don't see the need for the first <note> here, the second one seems\n> to convey it.\n\nWell, the 2nd note is only about compatibility with older versions\ndoing the subscribe. But the 1st note is not version-specific at all.\nIt is saying that the COPY does not take the “publish” option into\naccount. 
If you know of some other docs already mentioning this subtle\nbehaviour of the COPY then I can remove this note and just\ncross-reference to the other place. But I did not know anywhere this\nis already mentioned, so that is why I wrote the note about it.\n\n> 6.\n> + <para>\n> + Create some tables to be used in the following examples.\n> +<programlisting>\n> +testpub=# CREATE TABLE t1(a int, b int, c text, primary key(a,c));\n> +CREATE TABLE\n> +testpub=# CREATE TABLE t2(d int primary key, e int, f int);\n> +CREATE TABLE\n> +testpub=# CREATE TABLE t3(g int primary key, h int, i int);\n> +CREATE TABLE\n> +</programlisting>\n> + </para>\n> +\n> + <para>\n> + Create some publications.\n> + </para>\n> + <para>\n> + - notice publication <literal>p1</literal> has 1 table with a row filter.\n> + </para>\n> + <para>\n> + - notice publication <literal>p2</literal> has 2 tables, one without a row\n> + filter, and one with a row filter.\n> + </para>\n> + <para>\n> + - notice publication <literal>p3</literal> has 2 tables, both\n> with row filters.\n> +<programlisting>\n> +testpub=# CREATE PUBLICATION p1 FOR TABLE t1 WHERE (a > 5 AND c = 'NSW');\n> +CREATE PUBLICATION\n> +testpub=# CREATE PUBLICATION p2 FOR TABLE t1, t2 WHERE (e = 99);\n> +CREATE PUBLICATION\n> +testpub=# CREATE PUBLICATION p3 FOR TABLE t2 WHERE (d = 10), t3 WHERE (g = 10);\n> +CREATE PUBLICATION\n> +</programlisting>\n> + </para>\n>\n> I think it is better to use the corresponding content from Euler's version.\n\nOK. Modified in v6 [1]. I changed the primary key syntax to be the\nsame as Euler had. 
I also moved all the 'notice' parts to below the\ncorresponding example and modified the text.\n\n>\n> 7.\n> + <para>\n> + The PSQL command <command>\\d</command> shows what publications the table is\n> + a member of, as well as that table's row filter expression (if defined) in\n> + those publications.\n> + </para>\n> + <para>\n> + - notice table <literal>t1</literal> is a member of 2 publications, but\n> + has a row filter only in <literal>p1</literal>.\n> + </para>\n> + <para>\n> + - notice table <literal>t2</literal> is a member of 2 publications, and\n> + has a different row filter in each of them.\n>\n> This looks unnecessary to me. Let's remove this part.\n\nI thought something is needed to explain/demonstrate how the user can\nknow the different row filters for all the publications that the same\ntable is a member of. Otherwise, the user has to guess (??) what\npublications are using their table and then use \\dRp+ to dig at all\nthose publications to find the row filters.\n\nI can remove all this part from the Examples, but I think at least the\n\\d should still be mentioned somewhere. IMO I should put that \"PSQL\ncommands\" section back (which existed in an earlier version) and just\nadd a sentence about this. Then this examples part can be removed.\nWhat do you think?\n\n> 8.\n> + <para>\n> + - notice that only the rows satisfying the <literal>t1 WHERE</literal>\n> + clause of publication <literal>p1</literal> are replicated.\n>\n> Again, it is better to use Euler's version for this and at the place,\n> he had in his version. Similarly, adjust other notices if any like\n> this one.\n\nOK. Modified in v6 [1]. Every “notice” has now been moved to follow\nthe associated example (how Euler had them)\n\n\n> 9. 
I suggest adding an example for partition tables showing how the\n> row filter is used based on the 'publish_via_partition_root'\n> parameter.\n\nOK - I am working on this and will add it in a future patch version.\n\n------\n[1] https://www.postgresql.org/message-id/CAHut%2BPvyxMedYY-jHaT9YSfEPHv0jU2-CZ8F_nPvhuP0b955og%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 8 Apr 2022 16:12:07 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On 2022-Apr-08, Peter Smith wrote:\n\n> PSA patch v6 to address some of Amit's review comments [1].\n\nI think the text is good (didn't read it all, just about the first\nhalf), but why is there one paragraph per sentence? Seems a bit too\nsparse.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"You're _really_ hosed if the person doing the hiring doesn't understand\nrelational systems: you end up with a whole raft of programmers, none of\nwhom has had a Date with the clue stick.\" (Andrew Sullivan)\n\n\n",
"msg_date": "Sat, 9 Apr 2022 12:45:31 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On Fri, Apr 8, 2022 at 11:42 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Apr 6, 2022 at 8:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> > 3.\n> > + <para>\n> > + Whenever an <command>UPDATE</command> is processed, the row filter\n> > + expression is evaluated for both the old and new row (i.e. before\n> > + and after the data is updated).\n> > + </para>\n> > +\n> > + <para>\n> > + If both evaluations are <literal>true</literal>, it replicates the\n> > + <command>UPDATE</command> change.\n> > + </para>\n> > +\n> > + <para>\n> > + If both evaluations are <literal>false</literal>, it doesn't replicate\n> > + the change.\n> > + </para>\n> >\n> > I think we can combine these three separate paragraphs.\n>\n> The first sentence is the explanation, then there are the 3 separate\n> “If …” cases mentioned. It doesn’t seem quite right to group just the\n> first 2 “if…” cases into one paragraph, while leaving the 3rd one\n> separated. OTOH combining everything into one big paragraph seems\n> worse. 
Please confirm if you still want this changed.\n>\n\nYeah, I think we should have something like what Euler's version had\nand maybe keep a summary section from the current patch.\n\n> > 4.\n> > + <para>\n> > + If the publication contains a partitioned table, the publication parameter\n> > + <literal>publish_via_partition_root</literal> determines which row filter\n> > + is used.\n> > + <itemizedlist>\n> > +\n> > + <listitem>\n> > + <para>\n> > + If <literal>publish_via_partition_root</literal> is\n> > <literal>false</literal>\n> > + (default), each <emphasis>partition's</emphasis> row filter is used.\n> > + </para>\n> > + </listitem>\n> > +\n> > + <listitem>\n> > + <para>\n> > + If <literal>publish_via_partition_root</literal> is\n> > <literal>true</literal>,\n> > + the <emphasis>root partitioned table's</emphasis> row filter is used.\n> > + </para>\n> > + </listitem>\n> > +\n> > + </itemizedlist>\n> > + </para>\n> >\n> > I think we can combine this into single para as Euler had in his version.\n>\n> We can do it, but I am not sure if your review was looking at the\n> rendered HTML or just looking at the SGML text? IMO using bullets here\n> ended up being more readable (it is also consistent with other bullet\n> usages on this page). Please confirm if you still want this changed.\n>\n\nFair enough. 
We can keep this part as it is.\n\n> > 5.\n> > + <note>\n> > + <para>\n> > + Publication <literal>publish</literal> operations are ignored\n> > when copying pre-existing table data.\n> > + </para>\n> > + </note>\n> > +\n> > + <note>\n> > + <para>\n> > + If the subscriber is in a release prior to 15, copy pre-existing data\n> > + doesn't use row filters even if they are defined in the publication.\n> > + This is because old releases can only copy the entire table data.\n> > + </para>\n> > + </note>\n> >\n> > I don't see the need for the first <note> here, the second one seems\n> > to convey it.\n>\n> Well, the 2nd note is only about compatibility with older versions\n> doing the subscribe. But the 1st note is not version-specific at all.\n> It is saying that the COPY does not take the “publish” option into\n> account. If you know of some other docs already mentioning this subtle\n> behaviour of the COPY then I can remove this note and just\n> cross-reference to the other place. But I did not know anywhere this\n> is already mentioned, so that is why I wrote the note about it.\n>\n\nI don't see the need to say about general initial sync (COPY) behavior\nhere. It is already defined at [1]. If we want to enhance, we can do\nthat as a separate patch to make changes where the initial sync is\nexplained. I am not sure that is required though.\n\n> >\n> > 7.\n> > + <para>\n> > + The PSQL command <command>\\d</command> shows what publications the table is\n> > + a member of, as well as that table's row filter expression (if defined) in\n> > + those publications.\n> > + </para>\n> > + <para>\n> > + - notice table <literal>t1</literal> is a member of 2 publications, but\n> > + has a row filter only in <literal>p1</literal>.\n> > + </para>\n> > + <para>\n> > + - notice table <literal>t2</literal> is a member of 2 publications, and\n> > + has a different row filter in each of them.\n> >\n> > This looks unnecessary to me. 
Let's remove this part.\n>\n> I thought something is needed to explain/demonstrate how the user can\n> know the different row filters for all the publications that the same\n> table is a member of. Otherwise, the user has to guess (??) what\n> publications are using their table and then use \\dRp+ to dig at all\n> those publications to find the row filters.\n>\n> I can remove all this part from the Examples, but I think at least the\n> \\d should still be mentioned somewhere. IMO I should put that \"PSQL\n> commands\" section back (which existed in an earlier version) and just\n> add a sentence about this. Then this examples part can be removed.\n> What do you think?\n>\n\nI think the way it is changed in the current patch by moving that\nexplanation down seems okay to me.\n\nI feel in the initial \"Row Filters\" and \"Row Filter Rules\" sections,\nwe don't need to have separate paragraphs. I think the same is pointed\nout by Alvaro as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 11 Apr 2022 08:57:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "PSA patch v7 which addresses all the remaining review comments (from\nAmit [1a][1b], and from Alvaro [2]).\n\n------\n[1a] https://www.postgresql.org/message-id/CAHut%2BPvDFWGUOBugYMtcXhAiViZu%2BQ6P-kxw2%2BU83VOGx0Osdg%40mail.gmail.com\n[1b] https://www.postgresql.org/message-id/CAA4eK1JPyVoc1dUjeqbPd9D0_uYxWyyx-8fcsrgiZ5Tpr9OAuw%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/202204091045.v2ei4yupxqso%40alvherre.pgsql\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Mon, 11 Apr 2022 17:03:55 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On Fri, Apr 8, 2022 at 4:12 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Apr 6, 2022 at 8:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Mar 24, 2022 at 11:48 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> >\n> > Review comments:\n> > ===============\n[snip]\n>\n> > 9. I suggest adding an example for partition tables showing how the\n> > row filter is used based on the 'publish_via_partition_root'\n> > parameter.\n>\n> OK - I am working on this and will add it in a future patch version.\n>\n\nOK. Added in v7 [1]\n\n------\n[1] https://www.postgresql.org/message-id/CAHut%2BPt1X%3D3VaWRbx3yHByEMC-GPh4oeeMeJKJeTmOELDxZJHQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 11 Apr 2022 17:09:02 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On Mon, Apr 11, 2022 at 1:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Apr 8, 2022 at 11:42 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Wed, Apr 6, 2022 at 8:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > > 3.\n> > > + <para>\n> > > + Whenever an <command>UPDATE</command> is processed, the row filter\n> > > + expression is evaluated for both the old and new row (i.e. before\n> > > + and after the data is updated).\n> > > + </para>\n> > > +\n> > > + <para>\n> > > + If both evaluations are <literal>true</literal>, it replicates the\n> > > + <command>UPDATE</command> change.\n> > > + </para>\n> > > +\n> > > + <para>\n> > > + If both evaluations are <literal>false</literal>, it doesn't replicate\n> > > + the change.\n> > > + </para>\n> > >\n> > > I think we can combine these three separate paragraphs.\n> >\n> > The first sentence is the explanation, then there are the 3 separate\n> > “If …” cases mentioned. It doesn’t seem quite right to group just the\n> > first 2 “if…” cases into one paragraph, while leaving the 3rd one\n> > separated. OTOH combining everything into one big paragraph seems\n> > worse. Please confirm if you still want this changed.\n> >\n>\n> Yeah, I think we should have something like what Euler's version had\n> and maybe keep a summary section from the current patch.\n>\n\nModified in v7 [1]. Removed the bullets. Structured the text\nparagraphs the same way that Euler had it. 
The summary is kept as-is.\n\n\n> > > 5.\n> > > + <note>\n> > > + <para>\n> > > + Publication <literal>publish</literal> operations are ignored\n> > > when copying pre-existing table data.\n> > > + </para>\n> > > + </note>\n> > > +\n> > > + <note>\n> > > + <para>\n> > > + If the subscriber is in a release prior to 15, copy pre-existing data\n> > > + doesn't use row filters even if they are defined in the publication.\n> > > + This is because old releases can only copy the entire table data.\n> > > + </para>\n> > > + </note>\n> > >\n> > > I don't see the need for the first <note> here, the second one seems\n> > > to convey it.\n> >\n> > Well, the 2nd note is only about compatibility with older versions\n> > doing the subscribe. But the 1st note is not version-specific at all.\n> > It is saying that the COPY does not take the “publish” option into\n> > account. If you know of some other docs already mentioning this subtle\n> > behaviour of the COPY then I can remove this note and just\n> > cross-reference to the other place. But I did not know anywhere this\n> > is already mentioned, so that is why I wrote the note about it.\n> >\n>\n> I don't see the need to say about general initial sync (COPY) behavior\n> here. It is already defined at [1]. If we want to enhance, we can do\n> that as a separate patch to make changes where the initial sync is\n> explained. I am not sure that is required though.\n>\n\nDid you miss providing the link URL? Anyway, I removed the note in v7\n[1]. This information can be done as a separate patch one day (or not\nat all).\n\n> I feel in the initial \"Row Filters\" and \"Row Filter Rules\" sections,\n> we don't need to have separate paragraphs. I think the same is pointed\n> out by Alvaro as well.\n>\n\nModified in v7 [1] those sections as suggested. 
I also assumed these\nwere the same sections that Alvaro was referring to.\n\n------\n[1] https://www.postgresql.org/message-id/CAHut%2BPt1X%3D3VaWRbx3yHByEMC-GPh4oeeMeJKJeTmOELDxZJHQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 11 Apr 2022 17:17:27 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
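[Editor's note] The old/new row evaluation being documented in the message above has four outcomes (both rows match, only the old row matches, only the new row matches, neither matches). A minimal sketch of that decision table, in illustrative Python — the function name and return values are hypothetical, not PostgreSQL internals:

```python
# Illustrative sketch of the four row-filter cases for an UPDATE:
# the publisher evaluates the filter against both the old and the new
# row, and transforms the operation accordingly.

def transform_update(old_matches: bool, new_matches: bool) -> str:
    """Return the operation replicated to the subscriber."""
    if old_matches and new_matches:
        return "UPDATE"   # both rows satisfy the filter: replicate as-is
    if old_matches and not new_matches:
        return "DELETE"   # new row is filtered out: remove the old row
    if not old_matches and new_matches:
        return "INSERT"   # row becomes visible: send the new row
    return "SKIP"         # neither row satisfies the filter: send nothing
```

This mirrors the summary table discussed later in the thread: the UPDATE is kept, transformed into DELETE or INSERT, or not replicated at all.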
{
"msg_contents": "On Mon, Apr 11, 2022 at 12:39 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Apr 8, 2022 at 4:12 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> OK. Added in v7 [1]\n>\n\nThanks, this looks mostly good to me. I didn't like the new section\nadded for partitioned tables examples, so I removed it and added some\nexplanation of the tests. I have slightly changed a few other lines. I\nam planning to commit the attached tomorrow unless there are more\ncomments.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 11 Apr 2022 16:10:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On Mon, Apr 11, 2022, at 7:40 AM, Amit Kapila wrote:\n> On Mon, Apr 11, 2022 at 12:39 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Fri, Apr 8, 2022 at 4:12 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > OK. Added in v7 [1]\n> >\n> \n> Thanks, this looks mostly good to me. I didn't like the new section\n> added for partitioned tables examples, so I removed it and added some\n> explanation of the tests. I have slightly changed a few other lines. I\n> am planning to commit the attached tomorrow unless there are more\n> comments.\nI have a few comments.\n\n> If a publication publishes UPDATE and/or DELETE operations ...\n>\n\nIf we are talking about operations, use lowercase like I suggested in the\nprevious version. See the publish parameter [1]. If we are talking about\ncommands, the word \"operations\" should be removed or replaced by \"commands\".\nThe \"and/or\" isn't required, \"or\" implies the same. If you prefer \"operations\",\nmy suggestion is to adjust the last sentence that says \"only INSERT\" to \"only\n<italic>insert</italic> operation\".\n\n> If the old row satisfies the row filter expression (it was sent to the\n> subscriber) but the new row doesn't, then from a data consistency perspective\n> the old row should be removed from the subscriber.\n>\n\nI suggested small additions to this sentence. We should at least add a comma\nafter \"then\" and \"perspective\". The same applies to the next paragraph too.\n\nRegarding the \"Summary\", it is redundant as I said before. We already described\nall 4 cases. I vote to remove it. However, if we would go with a table, I\nsuggest to improve the formatting: add borders, \"old row\" and \"new row\" should\nbe titles, \"no match\" and \"match\" should be represented by symbols (\"\" and \"X\",\nfor example), and \"Case X\" column should be removed because this extra column\nadds nothing.\n\nRegarding the \"Partitioned Tables\", I suggested to remove the bullets. 
We\ngenerally have paragraphs with a few sentences. I tend to avoid short\nparagraphs. I also didn't like the 2 bullets using almost the same words. In\nthe previous version, I suggested something like\n\nIf the publication contains a partitioned table, the parameter\npublish_via_partition_root determines which row filter expression is used. If\nthe parameter publish_via_partition_root is true, the row filter expression\nassociated with the partitioned table is used. Otherwise, the row filter\nexpression associated with the individual partition is used.\n\n> will be copied. (see Section 31.3.6 for details).\n\nThere is an extra period after \"copied\" that should be removed. The other\noption is to remove the parentheses and have another sentence for \"See ...\".\n\n> those expressions get OR'ed together\n\nI prefer plain English here. This part of the sentence is also redundant with\nthe rest of the sentence so I suggested to remove it in the previous version.\n\nrows that satisfy any of the row filter expressions is replicated.\n\ninstead of\n\nthose expressions get OR'ed together, so that rows satisfying any of the\nexpressions will be replicated.\n\nI also didn't use a different paragraph (like I suggested in the previous\nversion) because we are talking about the same thing.\n\nThe bullets in the example sounds strange, that's why I suggested removing it.\nWe can even combine the 3 sentences into one paragraph.\n\n> The PSQL command \\dRp+ shows the row filter expressions (if defined) for each\n> table of the publications.\n\nWell, we don't use PSQL (upppercase) in the documentation. I suggested a\ndifferent sentence:\n\nThe psql shows the row filter expressions (if defined) for each table.\n\n> The PSQL command \\d shows what publications the table is a member of, as well\n> as that table's row filter expression (if defined) in those publications.\n\nIt is not logical replication business to explain about psql behavior. 
If, for\nsome reason, someone decided to change it, this section will contain obsolete\ninformation. The psql output is fine, the explanation is not. That's why I\nsuggested this modification.\n\n> Only the rows satisfying the t1 WHERE clause of publication p1 are\n> replicated.\n\nAgain, no bullets. This sentence is useful *before* the psql output. We're\npresenting the results. Let's follow the pattern, describe the action and show\nthe results.\n\n> The UPDATE replicates the change as normal.\n\nThis sentence should be *before* the psql output (see my previous version).\n\nRegarding the new examples (for partitioned tables), shouldn't we move the\nparent / child definitions to the beginning of the Examples section? It seems\nconfusing use the same code snippet to show repeated table definitions\n(publisher and subscriber). I checked fast and after a few seconds I realized\nthat the example is not wrong but the database name has a small difference (one\nletter \"s\" x \"p\"). The publication and subscription definitions are fine there.\n\nI think reusing the same tables and publication introduces complexity.\nShouldn't we just use different tables and publication to provide an \"easy\"\nexample? It would avoid DROP PUBLICATION, ALTER SUBSCRIPTION and TRUNCATE.\n\n> Do the inserts same as before.\n\nWe should indicate the node (publisher) to be clear.\n\n[1] https://www.postgresql.org/docs/devel/sql-createpublication.html\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 11 Apr 2022 14:32:53 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
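[Editor's note] The publish_via_partition_root rule debated above amounts to a simple selection between the two filters. A hedged sketch, with hypothetical names and row filters represented as plain expression strings:

```python
# Illustrative sketch (not PostgreSQL internals): which row filter
# expression applies when a partitioned table is published, per the
# publish_via_partition_root parameter discussed above.

def effective_row_filter(publish_via_partition_root: bool,
                         root_filter: str,
                         partition_filter: str) -> str:
    """Pick the row filter expression used when changes are published."""
    if publish_via_partition_root:
        # Changes are published via the root, so its filter is used.
        return root_filter
    # Otherwise the individual leaf partition's own filter is used.
    return partition_filter
```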
{
"msg_contents": "On Mon, Apr 11, 2022 at 11:03 PM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Mon, Apr 11, 2022, at 7:40 AM, Amit Kapila wrote:\n>\n> Regarding the new examples (for partitioned tables), shouldn't we move the\n> parent / child definitions to the beginning of the Examples section?\n>\n\nI think that will make examples less clear.\n\n> It seems\n> confusing use the same code snippet to show repeated table definitions\n> (publisher and subscriber). I checked fast and after a few seconds I realized\n> that the example is not wrong but the database name has a small difference (one\n> letter \"s\" x \"p\").\n>\n\nCan you be more specific? AFAICS, dbname used (testpub) is same.\n\n> The publication and subscription definitions are fine there.\n>\n> I think reusing the same tables and publication introduces complexity.\n> Shouldn't we just use different tables and publication to provide an \"easy\"\n> example? It would avoid DROP PUBLICATION, ALTER SUBSCRIPTION and TRUNCATE.\n>\n\nI don't know. I find the current way understandable. I feel using\ndifferent names won't gain much and make the example difficult to\nfollow.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 12 Apr 2022 09:22:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "PSA patch v9 which addresses most of Euler's review comments [1]\n\n------\n[1] https://www.postgresql.org/message-id/1c78ebd4-b38d-4b5d-a6ea-d583efe87d97%40www.fastmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 12 Apr 2022 17:50:52 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On Tue, Apr 12, 2022 at 3:33 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Mon, Apr 11, 2022, at 7:40 AM, Amit Kapila wrote:\n>\n> On Mon, Apr 11, 2022 at 12:39 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Fri, Apr 8, 2022 at 4:12 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > OK. Added in v7 [1]\n> >\n>\n> Thanks, this looks mostly good to me. I didn't like the new section\n> added for partitioned tables examples, so I removed it and added some\n> explanation of the tests. I have slightly changed a few other lines. I\n> am planning to commit the attached tomorrow unless there are more\n> comments.\n>\n> I have a few comments.\n\nThanks for your review comments! I addressed most of them as suggested\n- see the details below.\n\n>\n> > If a publication publishes UPDATE and/or DELETE operations ...\n> >\n>\n> If we are talking about operations, use lowercase like I suggested in the\n> previous version. See the publish parameter [1]. If we are talking about\n> commands, the word \"operations\" should be removed or replaced by \"commands\".\n> The \"and/or\" isn't required, \"or\" implies the same. If you prefer \"operations\",\n> my suggestion is to adjust the last sentence that says \"only INSERT\" to \"only\n> <italic>insert</italic> operation\".\n\nNot changed. Because in fact, I copied most of this sentence\n(including the uppercase, \"operations\", \"and/or\") from existing\ndocumentation [1]\ne.g. see \"The tables added to a publication that publishes UPDATE\nand/or DELETE operations must ...\"\n[1] https://www.postgresql.org/docs/current/sql-createpublication.html\n\n\n>\n> > If the old row satisfies the row filter expression (it was sent to the\n> > subscriber) but the new row doesn't, then from a data consistency perspective\n> > the old row should be removed from the subscriber.\n> >\n>\n> I suggested small additions to this sentence. We should at least add a comma\n> after \"then\" and \"perspective\". 
The same applies to the next paragraph too.\n\nModified the commas in [v9] as suggested.\n\n>\n> Regarding the \"Summary\", it is redundant as I said before. We already described\n> all 4 cases. I vote to remove it. However, if we would go with a table, I\n> suggest to improve the formatting: add borders, \"old row\" and \"new row\" should\n> be titles, \"no match\" and \"match\" should be represented by symbols (\"\" and \"X\",\n> for example), and \"Case X\" column should be removed because this extra column\n> adds nothing.\n\nYeah, I know the information is the same in the summary and in the\ntext. Personally, I find big slabs of technical text difficult to\ndigest, so I'd have to spend 5 minutes of careful reading and drawing\nthe exact same summary on a piece of paper anyway just to visualize\nwhat the text says. The summary makes it easy to understand at a\nglance. But I have modified the summary in [v9] to remove the \"case\"\nand to add other column headings as suggested.\n\n>\n> Regarding the \"Partitioned Tables\", I suggested to remove the bullets. We\n> generally have paragraphs with a few sentences. I tend to avoid short\n> paragraphs. I also didn't like the 2 bullets using almost the same words. In\n> the previous version, I suggested something like\n>\n> If the publication contains a partitioned table, the parameter\n> publish_via_partition_root determines which row filter expression is used. If\n> the parameter publish_via_partition_root is true, the row filter expression\n> associated with the partitioned table is used. Otherwise, the row filter\n> expression associated with the individual partition is used.\n>\n\nModified in [v9] to remove the bullets.\n\n> > will be copied. (see Section 31.3.6 for details).\n>\n> There is an extra period after \"copied\" that should be removed. 
The other\n> option is to remove the parentheses and have another sentence for \"See ...\".\n>\n\nModified in [v9] as suggested.\n\n> > those expressions get OR'ed together\n>\n> I prefer plain English here. This part of the sentence is also redundant with\n> the rest of the sentence so I suggested to remove it in the previous version.\n>\n> rows that satisfy any of the row filter expressions is replicated.\n>\n> instead of\n>\n> those expressions get OR'ed together, so that rows satisfying any of the\n> expressions will be replicated.\n\nNot changed. The readers of this docs page are all users who will be\nfamiliar with the filter expressions, so I felt the \"OR'ed together\"\npart will be perfectly clear to the intended audience.\n\n>\n> I also didn't use a different paragraph (like I suggested in the previous\n> version) because we are talking about the same thing.\n>\n\nModified in [v9] to use a single paragraph.\n\n> The bullets in the example sounds strange, that's why I suggested removing it.\n> We can even combine the 3 sentences into one paragraph.\n\nModified [v9] so the whole example now has no bullets. Also combined\nall these 3 sentences as suggested.\n\n>\n> > The PSQL command \\dRp+ shows the row filter expressions (if defined) for each\n> > table of the publications.\n>\n> Well, we don't use PSQL (upppercase) in the documentation. I suggested a\n> different sentence:\n>\n> The psql shows the row filter expressions (if defined) for each table.\n>\n\nModified the sentence in [v9]. Now it uses lowercase psql.\n\n> > The PSQL command \\d shows what publications the table is a member of, as well\n> > as that table's row filter expression (if defined) in those publications.\n>\n> It is not logical replication business to explain about psql behavior. If, for\n> some reason, someone decided to change it, this section will contain obsolete\n> information. The psql output is fine, the explanation is not. 
That's why I\n> suggested this modification.\n\nModified [v9] this sentence also to use lowercase psql.\n\nBut I did not understand your point about “If, for some reason,\nsomeone decided to change it, this section will contain obsolete\ninformation”, because IIUC that will be equally true for both the\nexplanation and the output, so I did not understand why you say \"psql\noutput is fine, the explanation is not\". It is the business of this\ndocumentation to help the user to know how and where they can find the\nrow filter information they may need to know.\n\n>\n> > Only the rows satisfying the t1 WHERE clause of publication p1 are\n> > replicated.\n>\n> Again, no bullets. This sentence is useful *before* the psql output. We're\n> presenting the results. Let's follow the pattern, describe the action and show\n> the results.\n>\n\nOK. Modified all the [v9] example now has all the bullets removed and\nfollows the suggested pattern (e.g. where the notes always come\n*before* the results)\n\n> > The UPDATE replicates the change as normal.\n>\n> This sentence should be *before* the psql output (see my previous version).\n>\n\nModified [v9] as suggested.\n\n> Regarding the new examples (for partitioned tables), shouldn't we move the\n> parent / child definitions to the beginning of the Examples section?\n\nNot changed. IMO if we moved those CREATE TABLE as suggested then they\nwill then be too far away from where they are being used.\n\n\n> It seems\n> confusing use the same code snippet to show repeated table definitions\n> (publisher and subscriber). I checked fast and after a few seconds I realized\n> that the example is not wrong but the database name has a small difference (one\n> letter \"s\" x \"p\"). The publication and subscription definitions are fine there.\n>\n\nNot changed. The publisher and subscriber programlistings are always\nseparated. 
If you are looking at the rendered HTML I think it is quite\nclear that one is at the publisher and one is at the subscriber. OTOH,\nif we omitted creating the tables on the subscriber then I think that\nreally would cause some confusion.\n\n> I think reusing the same tables and publication introduces complexity.\n> Shouldn't we just use different tables and publication to provide an \"easy\"\n> example? It would avoid DROP PUBLICATION, ALTER SUBSCRIPTION and TRUNCATE.\n>\n\nNot changed. Those same tables were re-used *deliberately* so that the\nexamples could use identical inserts, and to emphasize that the\ndifferent behaviour was caused only by the\n\"publish_via_partition_root\" setting.\n\n\n> > Do the inserts same as before.\n>\n> We should indicate the node (publisher) to be clear.\n\nOK. Modified [v9] as suggested.\n\n------\n[v9] https://www.postgresql.org/message-id/CAHut%2BPvYqo77rwg_vHC%3DOyQ7hCHZGVm%3DNi%2BJQbf8VyBz8hoo2w%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 12 Apr 2022 18:30:40 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On Tue, Apr 12, 2022, at 5:30 AM, Peter Smith wrote:\n> Not changed. Because in fact, I copied most of this sentence\n> (including the uppercase, \"operations\", \"and/or\") from existing\n> documentation [1]\n> e.g. see \"The tables added to a publication that publishes UPDATE\n> and/or DELETE operations must ...\"\n> [1] https://www.postgresql.org/docs/current/sql-createpublication.html\nHmm. You are checking the Notes. I'm referring the the publish parameter. IMO\nthis sentence should use operations in lowercase letters too because even if\nyou create it with uppercase letters, Postgres will provide a lowercase word\nwhen you dump it.\n\n> Yeah, I know the information is the same in the summary and in the\n> text. Personally, I find big slabs of technical text difficult to\n> digest, so I'd have to spend 5 minutes of careful reading and drawing\n> the exact same summary on a piece of paper anyway just to visualize\n> what the text says. The summary makes it easy to understand at a\n> glance. But I have modified the summary in [v9] to remove the \"case\"\n> and to add other column headings as suggested.\nIsn't it better to use a table instead of synopsis?\n\n> Not changed. The readers of this docs page are all users who will be\n> familiar with the filter expressions, so I felt the \"OR'ed together\"\n> part will be perfectly clear to the intended audience.\nIf you want to keep it, change it to \"ORed\". It is used in indices.sgml. Let's\nkeep the consistence.\n\n> But I did not understand your point about “If, for some reason,\n> someone decided to change it, this section will contain obsolete\n> information”, because IIUC that will be equally true for both the\n> explanation and the output, so I did not understand why you say \"psql\n> output is fine, the explanation is not\". 
It is the business of this\n> documentation to help the user to know how and where they can find the\n> row filter information they may need to know.\nYou are describing a psql command here. My point is keep psql explanation in\nthe psql section. This section is to describe the row filter feature. If we\nstart describing features in other sections, we will have outdated information\nwhen the referred feature is changed and someone fails to find all references.\nI tend to concentrate detailed explanation in the feature section. If I have to\nadd links in other sections, I use \"Seee Section 1.23 for details ...\".\n\n> Not changed. The publisher and subscriber programlistings are always\n> separated. If you are looking at the rendered HTML I think it is quite\n> clear that one is at the publisher and one is at the subscriber. OTOH,\n> if we omitted creating the tables on the subscriber then I think that\n> really would cause some confusion.\nThe difference is extra space. By default, the CSS does not include a border\nfor programlisting. That's why I complained about it. I noticed that the\nwebsite CSS includes it. However, the PDF will not include the border. I would\nadd a separate description for the subscriber just to be clear.\n\nOne last suggestion, you are using identifiers in uppercase letters but\n\"primary key\" is in lowercase.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Tue, 12 Apr 2022 11:07:27 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
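[Editor's note] The "ORed" semantics mentioned above — when a table belongs to several publications of one subscription, a row is replicated if it satisfies any of the row filter expressions — can be sketched as follows. Filters are modeled as Python predicates purely for illustration, and None stands for a publication that defines no row filter (which matches every row):

```python
# Illustrative sketch of combining row filters from multiple publications:
# a row is replicated if at least one publication accepts it, either by
# having no filter (None) or by its filter expression evaluating to true.

def row_is_replicated(row: dict, filters: list) -> bool:
    """True if any publication's row filter (or lack of one) accepts row."""
    return any(f is None or f(row) for f in filters)
```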
{
"msg_contents": "PSA patch v10 which addresses the remaining review comments from Euler [1]\n\n------\n[1] https://www.postgresql.org/message-id/3cd8d622-6a26-4eaf-a5aa-ac78030e5f50%40www.fastmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 13 Apr 2022 13:24:53 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On Wed, Apr 13, 2022 at 12:08 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Tue, Apr 12, 2022, at 5:30 AM, Peter Smith wrote:\n>\n> Not changed. Because in fact, I copied most of this sentence\n> (including the uppercase, \"operations\", \"and/or\") from existing\n> documentation [1]\n> e.g. see \"The tables added to a publication that publishes UPDATE\n> and/or DELETE operations must ...\"\n> [1] https://www.postgresql.org/docs/current/sql-createpublication.html\n>\n> Hmm. You are checking the Notes. I'm referring the the publish parameter. IMO\n> this sentence should use operations in lowercase letters too because even if\n> you create it with uppercase letters, Postgres will provide a lowercase word\n> when you dump it.\n\nIIRC the row filter replication identity checking is at run-time based\non the operation (not at DDL time based on the publish_parameter). For\nthis reason, and also because this is the same format/wording used in\nmany places already on the create_publication.sgml, I did not change\nthis formatting. But I did modify [v10] as earlier suggested to\nreplace the “and/or” with “or”, and also added another word\n“operation”.\n\n>\n> Yeah, I know the information is the same in the summary and in the\n> text. Personally, I find big slabs of technical text difficult to\n> digest, so I'd have to spend 5 minutes of careful reading and drawing\n> the exact same summary on a piece of paper anyway just to visualize\n> what the text says. The summary makes it easy to understand at a\n> glance. But I have modified the summary in [v9] to remove the \"case\"\n> and to add other column headings as suggested.\n>\n> Isn't it better to use a table instead of synopsis?\n\nModified [v10] as suggested.\n\n>\n> Not changed. 
The readers of this docs page are all users who will be\n> familiar with the filter expressions, so I felt the \"OR'ed together\"\n> part will be perfectly clear to the intended audience.\n>\n> If you want to keep it, change it to \"ORed\". It is used in indices.sgml. Let's\n> keep the consistence.\n\nModified [v10] as suggested.\n\n>\n> But I did not understand your point about “If, for some reason,\n> someone decided to change it, this section will contain obsolete\n> information”, because IIUC that will be equally true for both the\n> explanation and the output, so I did not understand why you say \"psql\n> output is fine, the explanation is not\". It is the business of this\n> documentation to help the user to know how and where they can find the\n> row filter information they may need to know.\n>\n> You are describing a psql command here. My point is keep psql explanation in\n> the psql section. This section is to describe the row filter feature. If we\n> start describing features in other sections, we will have outdated information\n> when the referred feature is changed and someone fails to find all references.\n> I tend to concentrate detailed explanation in the feature section. If I have to\n> add links in other sections, I use \"Seee Section 1.23 for details ...\".\n\nModified [v10] so the psql descriptions are now very generic.\n\n>\n> Not changed. The publisher and subscriber programlistings are always\n> separated. If you are looking at the rendered HTML I think it is quite\n> clear that one is at the publisher and one is at the subscriber. OTOH,\n> if we omitted creating the tables on the subscriber then I think that\n> really would cause some confusion.\n>\n> The difference is extra space. By default, the CSS does not include a border\n> for programlisting. That's why I complained about it. I noticed that the\n> website CSS includes it. However, the PDF will not include the border. 
I would\n> add a separate description for the subscriber just to be clear.\n>\n\nModified [v10] as suggested to add a separate description for the subscriber.\n\n> One last suggestion, you are using identifiers in uppercase letters but\n> \"primary key\" is in lowercase.\n>\n\nModified [v10] as suggested to make this uppercase\n\n------\n[v10] https://www.postgresql.org/message-id/CAHut%2BPvMEYkCRWDoZSpFnP%2B5SExus7YzWAd%3D6ah9vwkfRhOnSg%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 13 Apr 2022 13:34:33 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On Wed, Apr 13, 2022, at 12:24 AM, Peter Smith wrote:\n> PSA patch v10 which addresses the remaining review comments from Euler [1]\nLooks good to me.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Wed, Apr 13, 2022, at 12:24 AM, Peter Smith wrote:PSA patch v10 which addresses the remaining review comments from Euler [1]Looks good to me.--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Wed, 13 Apr 2022 16:59:30 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "I've changed the CF entry [1] status to \"ready for committer\".\n\n------\n[1] https://commitfest.postgresql.org/38/3605/\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 14 Apr 2022 09:01:46 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On Wednesday, April 13, 2022 11:25 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> PSA patch v10 which addresses the remaining review comments from Euler [1]\r\n\r\nThanks for the patch, it looks good to me.\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Thu, 14 Apr 2022 00:58:42 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 1:29 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Wed, Apr 13, 2022, at 12:24 AM, Peter Smith wrote:\n>\n> PSA patch v10 which addresses the remaining review comments from Euler [1]\n>\n> Looks good to me.\n>\n\nThanks, this looks good to me as well. I'll check this again early\nnext week and push unless I find something or there are more comments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 14 Apr 2022 08:55:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 8:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 14, 2022 at 1:29 AM Euler Taveira <euler@eulerto.com> wrote:\n> >\n> > On Wed, Apr 13, 2022, at 12:24 AM, Peter Smith wrote:\n> >\n> > PSA patch v10 which addresses the remaining review comments from Euler [1]\n> >\n> > Looks good to me.\n> >\n>\n> Thanks, this looks good to me as well. I'll check this again early\n> next week and push unless I find something or there are more comments.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 18 Apr 2022 11:22:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PG DOCS - logical replication filtering"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 3:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 14, 2022 at 8:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Apr 14, 2022 at 1:29 AM Euler Taveira <euler@eulerto.com> wrote:\n> > >\n> > > On Wed, Apr 13, 2022, at 12:24 AM, Peter Smith wrote:\n> > >\n> > > PSA patch v10 which addresses the remaining review comments from Euler [1]\n> > >\n> > > Looks good to me.\n> > >\n> >\n> > Thanks, this looks good to me as well. I'll check this again early\n> > next week and push unless I find something or there are more comments.\n> >\n>\n> Pushed.\n>\n\nThanks for pushing. I updated the CF entry [1] to say \"committed'.\n\n------\n[1] https://commitfest.postgresql.org/38/3605/\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 19 Apr 2022 09:15:37 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG DOCS - logical replication filtering"
}
] |
[
{
"msg_contents": "Hi all,\n\nIn my hunt looking for incorrect SRFs, I have noticed a new case of a\nsystem function marked as proretset while it builds and returns only\none record. And this is a popular one: pg_stop_backup(), labelled\nv2.\n\nThis leads to a lot of unnecessary work, as the function creates a\ntuplestore it has no need for with the usual set of checks related to\nSRFs. The logic can be be simplified as of the attached.\n\nThoughts?\n--\nMichael",
"msg_date": "Wed, 2 Mar 2022 16:46:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "pg_stop_backup() v2 incorrectly marked as proretset"
},
{
"msg_contents": "At Wed, 2 Mar 2022 16:46:01 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> Hi all,\n> \n> In my hunt looking for incorrect SRFs, I have noticed a new case of a\n> system function marked as proretset while it builds and returns only\n> one record. And this is a popular one: pg_stop_backup(), labelled\n> v2.\n> \n> This leads to a lot of unnecessary work, as the function creates a\n> tuplestore it has no need for with the usual set of checks related to\n> SRFs. The logic can be be simplified as of the attached.\n> \n> Thoughts?\n\nThat direction seems find to me.\n\nBut the patch forgets to remove an useless variable.\n\n>\t/* Initialise attributes information in the tuple descriptor */\n>\ttupdesc = CreateTemplateTupleDesc(PG_STOP_BACKUP_V2_COLS);\n>\tTupleDescInitEntry(tupdesc, (AttrNumber) 1, \"lsn\",\n>\t\t\t\t\t PG_LSNOID, -1, 0);\n\nI think we can use get_call_resuilt_type here.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 02 Mar 2022 17:22:35 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stop_backup() v2 incorrectly marked as proretset"
},
{
"msg_contents": "On Wed, Mar 02, 2022 at 05:22:35PM +0900, Kyotaro Horiguchi wrote:\n> But the patch forgets to remove an useless variable.\n\nIndeed. I forgot to look at stderr.\n\n>>\t/* Initialise attributes information in the tuple descriptor */\n>>\ttupdesc = CreateTemplateTupleDesc(PG_STOP_BACKUP_V2_COLS);\n>>\tTupleDescInitEntry(tupdesc, (AttrNumber) 1, \"lsn\",\n>>\t\t\t\t\t PG_LSNOID, -1, 0);\n> \n> I think we can use get_call_resuilt_type here.\n\nYes, I don't mind doing so here.\n--\nMichael",
"msg_date": "Wed, 2 Mar 2022 19:04:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_stop_backup() v2 incorrectly marked as proretset"
},
{
"msg_contents": "Hi Michael,\n\n```\n Datum\n pg_stop_backup_v2(PG_FUNCTION_ARGS)\n {\n- ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;\n+#define PG_STOP_BACKUP_V2_COLS 3\n TupleDesc tupdesc;\n- Tuplestorestate *tupstore;\n- MemoryContext per_query_ctx;\n- MemoryContext oldcontext;\n- Datum values[3];\n- bool nulls[3];\n+ Datum values[PG_STOP_BACKUP_V2_COLS];\n+ bool nulls[PG_STOP_BACKUP_V2_COLS];\n```\n\nDeclaring a macro inside the procedure body is a bit unconventional.\nSince it doesn't seem to be used for anything except these two array\ndeclarations I suggest keeping simply \"3\" here.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 2 Mar 2022 13:25:17 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stop_backup() v2 incorrectly marked as proretset"
},
{
"msg_contents": "On Wed, Mar 2, 2022 at 5:25 AM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> ```\n> Datum\n> pg_stop_backup_v2(PG_FUNCTION_ARGS)\n> {\n> - ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;\n> +#define PG_STOP_BACKUP_V2_COLS 3\n> TupleDesc tupdesc;\n> - Tuplestorestate *tupstore;\n> - MemoryContext per_query_ctx;\n> - MemoryContext oldcontext;\n> - Datum values[3];\n> - bool nulls[3];\n> + Datum values[PG_STOP_BACKUP_V2_COLS];\n> + bool nulls[PG_STOP_BACKUP_V2_COLS];\n> ```\n>\n> Declaring a macro inside the procedure body is a bit unconventional.\n> Since it doesn't seem to be used for anything except these two array\n> declarations I suggest keeping simply \"3\" here.\n\nI think we do this kind of thing in various places in similar\nsituations, and I think it is good style. It makes it easier to catch\neverything if you ever need to update the code.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 2 Mar 2022 09:31:49 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stop_backup() v2 incorrectly marked as proretset"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Mar 2, 2022 at 5:25 AM Aleksander Alekseev\n> <aleksander@timescale.com> wrote:\n>> Declaring a macro inside the procedure body is a bit unconventional.\n>> Since it doesn't seem to be used for anything except these two array\n>> declarations I suggest keeping simply \"3\" here.\n\n> I think we do this kind of thing in various places in similar\n> situations, and I think it is good style. It makes it easier to catch\n> everything if you ever need to update the code.\n\nYeah, there's plenty of precedent for that coding if you look around.\nI've not read the whole patch, but this snippet seems fine to me\nif there's also an #undef at the end of the function.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Mar 2022 09:35:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_stop_backup() v2 incorrectly marked as proretset"
},
{
"msg_contents": "Hi Tom.\n\nYeah, there's plenty of precedent for that coding if you look around.\n> I've not read the whole patch, but this snippet seems fine to me\n> if there's also an #undef at the end of the function.\n>\n\nNo, there is no #undef. With #undef I don't mind it either.\n\n-- \nBest regards,\nAleksander Alekseev\n\nHi Tom.\nYeah, there's plenty of precedent for that coding if you look around.\nI've not read the whole patch, but this snippet seems fine to me\nif there's also an #undef at the end of the function.No, there is no #undef. With #undef I don't mind it either.-- Best regards,Aleksander Alekseev",
"msg_date": "Wed, 2 Mar 2022 17:40:00 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stop_backup() v2 incorrectly marked as proretset"
},
{
"msg_contents": "On Wed, Mar 02, 2022 at 05:40:00PM +0300, Aleksander Alekseev wrote:\n> Hi Tom.\n>\n> Yeah, there's plenty of precedent for that coding if you look around.\n> > I've not read the whole patch, but this snippet seems fine to me\n> > if there's also an #undef at the end of the function.\n>\n> No, there is no #undef. With #undef I don't mind it either.\n\nI don't see strong evidence for that pattern being wildly used with some naive\ngrepping:\n\n#define for such use without undef:\nPOSTGRES_FDW_GET_CONNECTIONS_COLS\nHEAP_TUPLE_INFOMASK_COLS\nCONNECTBY_NCOLS\nDBLINK_NOTIFY_COLS\nPG_STAT_STATEMENTS_COLS\nPG_STAT_STATEMENTS_INFO_COLS\nHEAPCHECK_RELATION_COLS\nPG_PARTITION_TREE_COLS\nPG_STAT_GET_ACTIVITY_COLS\nPG_STAT_GET_WAL_COLS\nPG_STAT_GET_SLRU_COLS\nPG_STAT_GET_REPLICATION_SLOT_COLS\nPG_STAT_GET_SUBSCRIPTION_STATS_COLS\nPG_GET_BACKEND_MEMORY_CONTEXTS_COLS\nPG_GET_SHMEM_SIZES_COLS\nPG_GET_REPLICATION_SLOTS_COLS\nREAD_REPLICATION_SLOT_COLS\nPG_STAT_GET_WAL_SENDERS_COLS\nPG_STAT_GET_SUBSCRIPTION_COLS\n\nWith an undef:\nREPLICATION_ORIGIN_PROGRESS_COLS\n\n\n",
"msg_date": "Thu, 3 Mar 2022 00:36:32 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stop_backup() v2 incorrectly marked as proretset"
},
{
"msg_contents": "On 03/02/22 02:46, Michael Paquier wrote:\n> system function marked as proretset while it builds and returns only\n> one record. And this is a popular one: pg_stop_backup(), labelled\n> v2.\n\nI had just recently noticed that while reviewing [0], but shrugged,\nas I didn't know what the history was.\n\nIs this best handled as a separate patch, or folded into [0], which is\ngoing to be altering and renaming that function anyway?\n\n\nOn 03/02/22 09:31, Robert Haas wrote:\n> On Wed, Mar 2, 2022 at 5:25 AM Aleksander Alekseev\n>> Since it doesn't seem to be used for anything except these two array\n>> declarations I suggest keeping simply \"3\" here.\n>\n> I think we do this kind of thing in various places in similar\n> situations, and I think it is good style. It makes it easier to catch\n> everything if you ever need to update the code.\n\n\nI've been known (in other projects) to sometimes accomplish the same\nthing with, e.g.,\n\nDatum values[3];\nbool nulls[sizeof values / sizeof *values];\n\n\nDoesn't win any beauty contests, but just one place to change the length\nif it needs changing. I see we define a lengthof in c.h, so could use:\n\nDatum values[3];\nbool nulls[lengthof(values)];\n\nRegards,\n-Chap\n\n\n[0] https://commitfest.postgresql.org/37/3436/\n\n\n",
"msg_date": "Wed, 2 Mar 2022 12:04:59 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_stop_backup() v2 incorrectly marked as proretset"
},
{
"msg_contents": "On 3/2/22 11:04, Chapman Flack wrote:\n> On 03/02/22 02:46, Michael Paquier wrote:\n>> system function marked as proretset while it builds and returns only\n>> one record. And this is a popular one: pg_stop_backup(), labelled\n>> v2.\n> \n> I had just recently noticed that while reviewing [0], but shrugged,\n> as I didn't know what the history was.\n> \n> Is this best handled as a separate patch, or folded into [0], which is\n> going to be altering and renaming that function anyway?\n> \n> \n> On 03/02/22 09:31, Robert Haas wrote:\n>> On Wed, Mar 2, 2022 at 5:25 AM Aleksander Alekseev\n>>> Since it doesn't seem to be used for anything except these two array\n>>> declarations I suggest keeping simply \"3\" here.\n>>\n>> I think we do this kind of thing in various places in similar\n>> situations, and I think it is good style. It makes it easier to catch\n>> everything if you ever need to update the code.\n> \n> \n> I've been known (in other projects) to sometimes accomplish the same\n> thing with, e.g.,\n> \n> Datum values[3];\n> bool nulls[sizeof values / sizeof *values];\n\nI also use this pattern, though I would generally write it as:\n\nbool nulls[sizeof(values) / sizeof(Datum)];\n\nChap's way makes it possible to use a macro, though, so that's a plus.\n\nRegards,\n-David\n\n\n",
"msg_date": "Wed, 2 Mar 2022 11:24:24 -0600",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_stop_backup() v2 incorrectly marked as proretset"
},
{
"msg_contents": "On Thu, Mar 03, 2022 at 12:36:32AM +0800, Julien Rouhaud wrote:\n> I don't see strong evidence for that pattern being wildly used with some naive\n> grepping:\n\nYes, I don't recall either seeing the style with an undef a lot when\nit came to system functions. I'll move on and apply the fix in a\nminute using this style.\n--\nMichael",
"msg_date": "Thu, 3 Mar 2022 10:10:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_stop_backup() v2 incorrectly marked as proretset"
},
{
"msg_contents": "On Wed, Mar 02, 2022 at 12:04:59PM -0500, Chapman Flack wrote:\n> I had just recently noticed that while reviewing [0], but shrugged,\n> as I didn't know what the history was.\n\nOkay. I did not see you mention it on the thread, but the discussion\nis long so it is easy to miss some of its details.\n\n> Is this best handled as a separate patch, or folded into [0], which is\n> going to be altering and renaming that function anyway?\n\nNo idea where this is leading, but I'd rather fix what is at hands now\nrather than assuming that something may or may not happen. If, as you\nsay, this code gets removed, rebasing this conflict is just a matter\nof removing the existing code again so that's trivial.\n--\nMichael",
"msg_date": "Thu, 3 Mar 2022 10:17:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_stop_backup() v2 incorrectly marked as proretset"
},
{
"msg_contents": "On Wed, Mar 2, 2022 at 9:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, there's plenty of precedent for that coding if you look around.\n> I've not read the whole patch, but this snippet seems fine to me\n> if there's also an #undef at the end of the function.\n\n From later emails, it sounds like that's not the common practice in\nsimilar cases, and I don't personally see the point.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 3 Mar 2022 16:09:39 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stop_backup() v2 incorrectly marked as proretset"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Mar 2, 2022 at 9:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I've not read the whole patch, but this snippet seems fine to me\n>> if there's also an #undef at the end of the function.\n\n>> From later emails, it sounds like that's not the common practice in\n> similar cases, and I don't personally see the point.\n\nThe point is to make it clear that the macro isn't intended to affect\ncode outside the function. Since C lacks block-scoped macros,\nthere's no other way to do that.\n\nI concede that a lot of our code is pretty sloppy about this, but\nthat doesn't make it a good practice.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Mar 2022 16:40:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_stop_backup() v2 incorrectly marked as proretset"
},
{
"msg_contents": "On 03/03/22 16:40, Tom Lane wrote:\n> The point is to make it clear that the macro isn't intended to affect\n> code outside the function. Since C lacks block-scoped macros,\n> there's no other way to do that.\n> \n> I concede that a lot of our code is pretty sloppy about this, but\n> that doesn't make it a good practice.\n\nWould the\n\n Datum values[3];\n bool nulls[ lengthof(values) ];\n\npattern be more notationally tidy? No macro to define or undefine,\nwe already define lengthof() in c.h, and it seems pretty much made\nfor the purpose, if the objective is to have just one 3 to change\nif it someday becomes not-3.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Thu, 3 Mar 2022 17:00:23 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_stop_backup() v2 incorrectly marked as proretset"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 03/03/22 16:40, Tom Lane wrote:\n>> The point is to make it clear that the macro isn't intended to affect\n>> code outside the function. Since C lacks block-scoped macros,\n>> there's no other way to do that.\n\n> Would the\n\n> Datum values[3];\n> bool nulls[ lengthof(values) ];\n\n> pattern be more notationally tidy?\n\nHm, I don't care for that particularly.\n\n(1) It *looks* asymmetrical, even if it isn't.\n\n(2) I think a lot of the benefit of the macro approach is to give a name\n(and thereby some free documentation, assuming you take some care in\nchoosing the name) to what would otherwise be a very anonymous constant.\n\nThere's an actual practical problem with the anonymous-constant approach,\nwhich is that if you have some other occurrence of \"3\" in the function,\nit's very hard to tell if that's indeed an independent value or it's\nsomething that should have been replaced by lengthof(values).\nNow admittedly the same complaint can be made against the macro\napproach, but at least there, you have some chance of the macro's name\nproviding enough docs to make it clear what the proper uses of it are.\n(I'd suggest that that other \"3\" should also have been made a named\nconstant in many cases.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Mar 2022 17:17:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_stop_backup() v2 incorrectly marked as proretset"
},
{
"msg_contents": "On Thu, Mar 03, 2022 at 04:40:42PM -0500, Tom Lane wrote:\n> The point is to make it clear that the macro isn't intended to affect\n> code outside the function. Since C lacks block-scoped macros,\n> there's no other way to do that.\n> \n> I concede that a lot of our code is pretty sloppy about this, but\n> that doesn't make it a good practice.\n\nWell, if we change that, better to do that in all the places where\nthis would be affected, but I am not sure to see a style appealing\nenough on this thread.\n\nFrom what I can see, history shows that the style of using a #define\nfor the number of columns originates from da2c1b8, aka 9.0. Its use\ninside a function originates from a755ea3 as of 9.1 and then it has\njust spread around without any undefs, so it looks like people like\nthat way of doing things.\n--\nMichael",
"msg_date": "Fri, 4 Mar 2022 10:09:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_stop_backup() v2 incorrectly marked as proretset"
},
{
"msg_contents": "At Fri, 4 Mar 2022 10:09:19 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, Mar 03, 2022 at 04:40:42PM -0500, Tom Lane wrote:\n> > The point is to make it clear that the macro isn't intended to affect\n> > code outside the function. Since C lacks block-scoped macros,\n> > there's no other way to do that.\n> > \n> > I concede that a lot of our code is pretty sloppy about this, but\n> > that doesn't make it a good practice.\n> \n> Well, if we change that, better to do that in all the places where\n> this would be affected, but I am not sure to see a style appealing\n> enough on this thread.\n> \n> From what I can see, history shows that the style of using a #define\n> for the number of columns originates from da2c1b8, aka 9.0. Its use\n> inside a function originates from a755ea3 as of 9.1 and then it has\n> just spread around without any undefs, so it looks like people like\n> that way of doing things.\n\nI'm one of them. Not unliking #undef, though.\n\nIt seems to me the name \"PG_STOP_BACKUP_V2_COLS\" alone is specific\nenough for the purpose of avoiding misuse.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 04 Mar 2022 10:32:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stop_backup() v2 incorrectly marked as proretset"
}
] |
[
{
"msg_contents": "Hi,\n\nwith reference to the discussion in docs: https://www.postgresql.org/message-id/flat/2221339.1645896597%40sss.pgh.pa.us#5a346c15ec2edbe8fcc93a1ffc2a7c7d\n\nHere is a patch that changes \"Hot Standby\" to \"hot standby\" in high-availability.sgml, so we have a consistent wording.\nThoughts?\n\nThere are other places where hot standby is capitalized, but I guess we should start here.\n\nRegards\nDaniel",
"msg_date": "Wed, 2 Mar 2022 10:16:27 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>",
"msg_from_op": true,
"msg_subject": "Changing \"Hot Standby\" to \"hot standby\""
},
{
"msg_contents": "Hi Daniel,\n\n> Here is a patch that changes \"Hot Standby\" to \"hot standby\" in\nhigh-availability.sgml, so we have a consistent wording.\n> Thoughts?\n\n```\n- <title>Hot Standby Parameter Reference</title>\n+ <title>hot standby Parameter Reference</title>\n```\n\nPretty sure that for titles we should keep English capitalization rules.\n\n-- \nBest regards,\nAleksander Alekseev\n\nHi Daniel,> Here is a patch that changes \"Hot Standby\" to \"hot standby\" in high-availability.sgml, so we have a consistent wording.> Thoughts?```- <title>Hot Standby Parameter Reference</title>+ <title>hot standby Parameter Reference</title>```Pretty sure that for titles we should keep English capitalization rules.-- Best regards,Aleksander Alekseev",
"msg_date": "Wed, 2 Mar 2022 16:44:19 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Changing \"Hot Standby\" to \"hot standby\""
},
{
"msg_contents": "Hi Aleksander,\n\n> Pretty sure that for titles we should keep English capitalization rules.\n\nDone like that. Thanks for taking a look.\n\nRegards\nDaniel",
"msg_date": "Wed, 2 Mar 2022 15:22:44 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>",
"msg_from_op": true,
"msg_subject": "Re: Changing \"Hot Standby\" to \"hot standby\""
},
{
"msg_contents": "At Wed, 2 Mar 2022 15:22:44 +0000, \"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com> wrote in \n> > Pretty sure that for titles we should keep English capitalization rules.\n> \n> Done like that. Thanks for taking a look.\n\n <para>\n- Hot Standby feedback propagates upstream, whatever the cascaded arrangement.\n+ hot standby feedback propagates upstream, whatever the cascaded arrangement\n\n <para>\n- Hot Standby is the term used to describe the ability to connect to\n+ hot standby is the term used to describe the ability to connect to\n\n\nThey look like decapitalizing the first word in a sentsnce.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 03 Mar 2022 10:24:03 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Changing \"Hot Standby\" to \"hot standby\""
},
{
"msg_contents": "Hi Kyotaro,\n\n>> <para>\n>>- Hot Standby is the term used to describe the ability to connect to\n>>+ hot standby is the term used to describe the ability to connect to\n\n>They look like decapitalizing the first word in a sentsnce.\n\nThanks for having a look. Are you suggesting to change it like this?\n- Hot Standby is the term used to describe the ability to connect to\n+ Hot standby is the term used to describe the ability to connect to\n\nRegards\nDaniel\n\n",
"msg_date": "Thu, 3 Mar 2022 06:55:43 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>",
"msg_from_op": true,
"msg_subject": "Re: Changing \"Hot Standby\" to \"hot standby\""
},
{
"msg_contents": "At Thu, 3 Mar 2022 06:55:43 +0000, \"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com> wrote in \n> Hi Kyotaro,\n> \n> >> <para>\n> >>- Hot Standby is the term used to describe the ability to connect to\n> >>+ hot standby is the term used to describe the ability to connect to\n> \n> >They look like decapitalizing the first word in a sentsnce.\n> \n> Thanks for having a look. Are you suggesting to change it like this?\n> - Hot Standby is the term used to describe the ability to connect to\n> + Hot standby is the term used to describe the ability to connect to\n\nYes. Isn't it the right form of a sentence?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 03 Mar 2022 16:17:43 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Changing \"Hot Standby\" to \"hot standby\""
},
{
"msg_contents": ">> Thanks for having a look. Are you suggesting to change it like this?\n>> - Hot Standby is the term used to describe the ability to connect to\n>> + Hot standby is the term used to describe the ability to connect to\n\n>Yes. Isn't it the right form of a sentence?\n\nDone like that.\n\nRegards\nDaniel",
"msg_date": "Thu, 3 Mar 2022 07:32:11 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>",
"msg_from_op": true,
"msg_subject": "Re: Changing \"Hot Standby\" to \"hot standby\""
},
{
"msg_contents": ">>> Thanks for having a look. Are you suggesting to change it like this?\n>>> - Hot Standby is the term used to describe the ability to connect to\n>>> + Hot standby is the term used to describe the ability to connect to\n\n>>Yes. Isn't it the right form of a sentence?\n\nI've created and entry in the Commitfest 2022-07 for this.\n\nRegards\nDaniel\n\n\n\n\n\n\n\n>>> Thanks for having a look. Are you suggesting to change it like this?\n\n\n>>> - Hot Standby is the term used to describe the ability to connect to\n>>> + Hot standby is the term used to describe the ability to connect to\n\n>>Yes. Isn't it the right form of a sentence?\n\n\n\nI've created and entry in the Commitfest 2022-07 for this.\n\n\nRegards\nDaniel",
"msg_date": "Mon, 7 Mar 2022 16:06:22 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>",
"msg_from_op": true,
"msg_subject": "Re: Changing \"Hot Standby\" to \"hot standby\""
},
{
"msg_contents": "On Mon, Mar 7, 2022 at 11:06 AM Daniel Westermann (DWE)\n<daniel.westermann@dbi-services.com> wrote:\n>\n> >>> Thanks for having a look. Are you suggesting to change it like this?\n> >>> - Hot Standby is the term used to describe the ability to connect to\n> >>> + Hot standby is the term used to describe the ability to connect to\n>\n> >>Yes. Isn't it the right form of a sentence?\n>\n> I've created and entry in the Commitfest 2022-07 for this.\n>\nI think one more small change...\n\n A standby server can also be used for read-only queries, in which case\n- it is called a Hot Standby server. See <xref linkend=\"hot-standby\"/> for\n+ it is called a hot standby server. See <xref linkend=\"hot-standby\"/> for\n more information.\n\n A standby server can also be used for read-only queries, in which case\n- it is called a Hot Standby server. See <xref linkend=\"hot-standby\"/> for\n+ it is called a <firstterm>hot standby</firstterm> server. See\n<xref linkend=\"hot-standby\"/> for\n more information.\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Tue, 8 Mar 2022 18:30:37 -0500",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: Changing \"Hot Standby\" to \"hot standby\""
},
{
"msg_contents": ">I think one more small change...\n\n> A standby server can also be used for read-only queries, in which case\n>- it is called a Hot Standby server. See <xref linkend=\"hot-standby\"/> for\n>+ it is called a hot standby server. See <xref linkend=\"hot-standby\"/> for\n> more information.\n\n> A standby server can also be used for read-only queries, in which case\n>- it is called a Hot Standby server. See <xref linkend=\"hot-standby\"/> for\n>+ it is called a <firstterm>hot standby</firstterm> server. See\n><xref linkend=\"hot-standby\"/> for\n> more information.\n\nThanks for having a look. Done that way.\n\nRegards\nDaniel",
"msg_date": "Wed, 9 Mar 2022 07:45:32 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>",
"msg_from_op": true,
"msg_subject": "Re: Changing \"Hot Standby\" to \"hot standby\""
},
{
"msg_contents": "On Wed, Mar 09, 2022 at 07:45:32AM +0000, Daniel Westermann (DWE) wrote:\n> Thanks for having a look. Done that way.\n\nHmm. Outside the title that had better use upper-case characters for\nthe first letter of each word, I can see references to the pattern you\nare trying to eliminate in amcheck.sgml (1), config.sgml (3),\nprotocol.sgml (3) and mvcc.sgml (1). Shouldn't you refresh these as\nwell if the point is to make the full set of docs consistent?\n\nAs of the full tree, I can see that:\n$ git grep \"hot standby\" | wc -l\n259\n$ git grep \"Hot Standby\" | wc -l\n73\n\nSo there is a trend for one of the two.\n--\nMichael",
"msg_date": "Wed, 9 Mar 2022 17:18:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Changing \"Hot Standby\" to \"hot standby\""
}
] |
[
{
"msg_contents": "Hi,\n\nI have noticed that the CHECKPOINT_REQUESTED flag information is not\npresent in the log message of LogCheckpointStart() function. I would\nlike to understand if it was missed or left intentionally. The log\nmessage describes all the possible checkpoint flags except\nCHECKPOINT_REQUESTED flag. I feel we should support this. Thoughts?\n\nPlease find the patch attached.\n\nThanks & Regards,\nNitin Jadhav",
"msg_date": "Wed, 2 Mar 2022 17:40:28 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add CHECKPOINT_REQUESTED flag to the log message in\n LogCheckpointStart()"
},
{
"msg_contents": "On Wed, Mar 2, 2022 at 5:41 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> I have noticed that the CHECKPOINT_REQUESTED flag information is not\n> present in the log message of LogCheckpointStart() function. I would\n> like to understand if it was missed or left intentionally. The log\n> message describes all the possible checkpoint flags except\n> CHECKPOINT_REQUESTED flag. I feel we should support this. Thoughts?\n\nI don't think that's useful. Being in LogCheckpointStart\n(CreateCheckPoint or CreateRestartPoint) itself means that somebody\nhas requested a checkpoint. Having CHECKPOINT_REQUESTED doesn't add\nany value.\n\nI would suggest removing the CHECKPOINT_REQUESTED flag as it's not\nbeing used anywhere instead CheckpointerShmem->ckpt_flags is used as\nan indication of the checkpoint requested in CheckpointerMain [1]. If\nothers don't agree to remove as it doesn't cause any harm, then, I\nwould add something like this for more readability:\n\n if ((((volatile CheckpointerShmemStruct *)\nCheckpointerShmem)->ckpt_flags) & CHECKPOINT_REQUESTED))\n {\n do_checkpoint = true;\n PendingCheckpointerStats.m_requested_checkpoints++;\n }\n\n[1]\n /*\n * Detect a pending checkpoint request by checking whether the flags\n * word in shared memory is nonzero. We shouldn't need to acquire the\n * ckpt_lck for this.\n */\n if (((volatile CheckpointerShmemStruct *)\nCheckpointerShmem)->ckpt_flags)\n {\n do_checkpoint = true;\n PendingCheckpointerStats.m_requested_checkpoints++;\n }\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 2 Mar 2022 18:18:10 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add CHECKPOINT_REQUESTED flag to the log message in\n LogCheckpointStart()"
},
{
"msg_contents": "At Wed, 2 Mar 2022 18:18:10 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Wed, Mar 2, 2022 at 5:41 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > I have noticed that the CHECKPOINT_REQUESTED flag information is not\n> > present in the log message of LogCheckpointStart() function. I would\n> > like to understand if it was missed or left intentionally. The log\n> > message describes all the possible checkpoint flags except\n> > CHECKPOINT_REQUESTED flag. I feel we should support this. Thoughts?\n> \n> I don't think that's useful. Being in LogCheckpointStart\n> (CreateCheckPoint or CreateRestartPoint) itself means that somebody\n> has requested a checkpoint. Having CHECKPOINT_REQUESTED doesn't add\n> any value.\n\nAgreed.\n\n> I would suggest removing the CHECKPOINT_REQUESTED flag as it's not\n> being used anywhere instead CheckpointerShmem->ckpt_flags is used as\n> an indication of the checkpoint requested in CheckpointerMain [1]. If\n\nActually no one does but RequestCheckpoint() accepts 0 as flags.\nCheckpointer would be a bit more complex without CHECKPOINT_REQUESTED.\nI don't think it does us any good to get rid of the flag value.\n\n> others don't agree to remove as it doesn't cause any harm, then, I\n> would add something like this for more readability:\n\n if (((volatile CheckpointerShmemStruct *)\n- CheckpointerShmem)->ckpt_flags)\n+ CheckpointerShmem)->ckpt_flags) & CHECKPOINT_REQUESTED))\n\nI don't particularly object to this, but I don't think that change\nmakes the code significantly easier to read either.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 03 Mar 2022 09:39:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add CHECKPOINT_REQUESTED flag to the log message in\n LogCheckpointStart()"
},
{
"msg_contents": "On Thu, Mar 03, 2022 at 09:39:37AM +0900, Kyotaro Horiguchi wrote:\n> At Wed, 2 Mar 2022 18:18:10 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n>> I don't think that's useful. Being in LogCheckpointStart\n>> (CreateCheckPoint or CreateRestartPoint) itself means that somebody\n>> has requested a checkpoint. Having CHECKPOINT_REQUESTED doesn't add\n>> any value.\n> \n> Agreed.\n\nExactly my impression. This would apply now to the WAL shutdown code\npaths, and I'd suspect that the callers of CreateCheckPoint() are not\ngoing to increase soon. The point is: the logs already provide some\ncontexts for any of those callers so I see no need for this additional\ninformation.\n\n> Actually no one does but RequestCheckpoint() accepts 0 as flags.\n> Checkpointer would be a bit more complex without CHECKPOINT_REQUESTED.\n> I don't think it does us any good to get rid of the flag value.\n\nI'd rather keep this code as-is.\n--\nMichael",
"msg_date": "Thu, 3 Mar 2022 10:27:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add CHECKPOINT_REQUESTED flag to the log message in\n LogCheckpointStart()"
},
{
"msg_contents": "At Thu, 3 Mar 2022 10:27:10 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, Mar 03, 2022 at 09:39:37AM +0900, Kyotaro Horiguchi wrote:\n> > At Wed, 2 Mar 2022 18:18:10 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> >> I don't think that's useful. Being in LogCheckpointStart\n> >> (CreateCheckPoint or CreateRestartPoint) itself means that somebody\n> >> has requested a checkpoint. Having CHECKPOINT_REQUESTED doesn't add\n> >> any value.\n> > \n> > Agreed.\n> \n> Exactly my impression. This would apply now to the WAL shutdown code\n> paths, and I'd suspect that the callers of CreateCheckPoint() are not\n> going to increase soon. The point is: the logs already provide some\n> contexts for any of those callers so I see no need for this additional\n> information.\n> \n> > Actually no one does but RequestCheckpoint() accepts 0 as flags.\n> > Checkpointer would be a bit more complex without CHECKPOINT_REQUESTED.\n> > I don't think it does us any good to get rid of the flag value.\n> \n> I'd rather keep this code as-is.\n\nI fail to identify the nuance of the phrase, so just for a\nclarification. In short, I think we should keep the exiting code\nas-is.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 03 Mar 2022 12:04:32 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add CHECKPOINT_REQUESTED flag to the log message in\n LogCheckpointStart()"
}
] |
[
{
"msg_contents": "This is not intended for PG15.\n\nAttached are a proof of concept patchset to implement multiple valid\npasswords, which have independent expirations, set by a GUC or SQL\nusing an interval.\n\nThis allows the superuser to set a password validity period of e.g.,\n60 days, and for users to create new passwords before the old ones\nexpire, and use both until the old one expires. This will aid in\npassword rollovers for apps and other systems that need to connect\nwith password authentication.\n\nThe first patch simply moves password to a new catalog, no functional changes.\nThe second patch allows multiple passwords to be used simultaneously.\nThe third adds per-password expiration, SQL grammar, and the GUC.\n\nSome future work intended to build on this includes:\n- disallowing password reuse\n- transitioning between password mechanisms\n\nExample output (note the NOTICES can go away, but are helpful for\ndemo/testing purposes):\n\npostgres=# alter system set password_valid_duration = '1 day';\nNOTICE: Setting password duration to \"1 day\"\nALTER SYSTEM\npostgres=# select pg_reload_conf();\n pg_reload_conf\n----------------\n t\n(1 row)\n\npostgres=# create user joshua password 'a' expires in '5 minutes';\nNOTICE: Setting password duration to \"1 day\"\nNOTICE: Password will expire at: \"2022-03-02 14:52:31.217193\" (from SQL)\nCREATE ROLE\n\n---\n\n$ psql -h 127.0.0.1 -U joshua postgres\nPassword for user joshua:\npsql (12.7, server 15devel)\nWARNING: psql major version 12, server major version 15.\n Some psql features might not work.\nType \"help\" for help.\n\npostgres=> alter role joshua passname 'newone' password 'asdf' expires\nin '1 year';\nERROR: must be superuser to override password_validity_duration GUC\npostgres=> alter role joshua passname 'newone' password 'asdf';\nNOTICE: Password will expire at: \"2022-03-03 14:47:53.728159\" (from GUC)\nALTER ROLE\npostgres=>\n\n--\n\npostgres=# select * from pg_auth_password ;\n roleid | name |\n 
password\n | expiration\n--------+---------+-------------------------------------------------------------------------------------------------------------------\n--------------------+-------------------------------\n 10 | __def__ |\nSCRAM-SHA-256$4096:yGiHIYPwc2az7xj/7TIyTA==$OQL/AEcEY1yOCNbrZEj4zDvNnOLpIqltOW1uQvosLvc=:9VRRppuIkSrwhiBN5ePy8wB1y\nzDa/2uX0WUx6gXi93E= |\n 16384 | __def__ |\nSCRAM-SHA-256$4096:AAAAAAAAAAAAAAAAAAAAAA==$1Ivp4d+vAWxowpuGEn05KR9lxyGOms3yy85k3D7XpBg=:k8xUjU6xrJG17PMGa/Zya6pAE\n/M7pEDaoIFmWvNIEUg= | 2022-03-02 06:52:31.217193-08\n 16384 | newone |\nSCRAM-SHA-256$4096:AAAAAAAAAAAAAAAAAAAAAA==$WK3+41CCGDognSnZrtpHhv00z9LuVUjHR1hWq8T1+iE=:w2C5GuhgiEB7wXqPxYfxBKB+e\nhm4h6Oeif1uzpPIFVk= | 2022-03-03 06:47:53.728159-08\n(3 rows)",
"msg_date": "Wed, 2 Mar 2022 09:58:16 -0500",
"msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "[PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Wed, Mar 2, 2022 at 9:58 AM Joshua Brindle\n<joshua.brindle@crunchydata.com> wrote:\n>\n> This is not intended for PG15.\n>\n> Attached are a proof of concept patchset to implement multiple valid\n> passwords, which have independent expirations, set by a GUC or SQL\n> using an interval.\n>\n\n<snip>\n\n> postgres=# select * from pg_auth_password ;\n> roleid | name |\n> password\n> | expiration\n> --------+---------+-------------------------------------------------------------------------------------------------------------------\n> --------------------+-------------------------------\n> 10 | __def__ |\n> SCRAM-SHA-256$4096:yGiHIYPwc2az7xj/7TIyTA==$OQL/AEcEY1yOCNbrZEj4zDvNnOLpIqltOW1uQvosLvc=:9VRRppuIkSrwhiBN5ePy8wB1y\n> zDa/2uX0WUx6gXi93E= |\n> 16384 | __def__ |\n> SCRAM-SHA-256$4096:AAAAAAAAAAAAAAAAAAAAAA==$1Ivp4d+vAWxowpuGEn05KR9lxyGOms3yy85k3D7XpBg=:k8xUjU6xrJG17PMGa/Zya6pAE\n> /M7pEDaoIFmWvNIEUg= | 2022-03-02 06:52:31.217193-08\n> 16384 | newone |\n> SCRAM-SHA-256$4096:AAAAAAAAAAAAAAAAAAAAAA==$WK3+41CCGDognSnZrtpHhv00z9LuVUjHR1hWq8T1+iE=:w2C5GuhgiEB7wXqPxYfxBKB+e\n> hm4h6Oeif1uzpPIFVk= | 2022-03-03 06:47:53.728159-08\n> (3 rows)\n\nThere's obviously a salt problem here that I'll need to fix that\napparently snuck in at the last rebase, but this brings up one aspect\nof the patchset I didn't mention in the original email:\n\nFor the SCRAM protocol to work as is with existing clients the salt\nfor each password must be the same. Right now ALTER USER will find and\nreuse the salt, but a user passing in a pre-computed SCRAM secret\ncurrently has no way to get the salt.\n\nfor \\password (we'll need a new one that takes a password name) I was\nthinking libpq could hold onto the salt that was used to log in, but\nfor outside computation we'll need some way for the client to request\nit.\n\nNone of that is done yet.\n\n\n",
"msg_date": "Wed, 2 Mar 2022 10:35:44 -0500",
"msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Wed, Mar 2, 2022 at 10:35 AM Joshua Brindle\n<joshua.brindle@crunchydata.com> wrote:\n>\n> On Wed, Mar 2, 2022 at 9:58 AM Joshua Brindle\n> <joshua.brindle@crunchydata.com> wrote:\n> >\n> > This is not intended for PG15.\n> >\n> > Attached are a proof of concept patchset to implement multiple valid\n> > passwords, which have independent expirations, set by a GUC or SQL\n> > using an interval.\n> >\n>\n> <snip>\n>\n> > postgres=# select * from pg_auth_password ;\n> > roleid | name |\n> > password\n> > | expiration\n> > --------+---------+-------------------------------------------------------------------------------------------------------------------\n> > --------------------+-------------------------------\n> > 10 | __def__ |\n> > SCRAM-SHA-256$4096:yGiHIYPwc2az7xj/7TIyTA==$OQL/AEcEY1yOCNbrZEj4zDvNnOLpIqltOW1uQvosLvc=:9VRRppuIkSrwhiBN5ePy8wB1y\n> > zDa/2uX0WUx6gXi93E= |\n> > 16384 | __def__ |\n> > SCRAM-SHA-256$4096:AAAAAAAAAAAAAAAAAAAAAA==$1Ivp4d+vAWxowpuGEn05KR9lxyGOms3yy85k3D7XpBg=:k8xUjU6xrJG17PMGa/Zya6pAE\n> > /M7pEDaoIFmWvNIEUg= | 2022-03-02 06:52:31.217193-08\n> > 16384 | newone |\n> > SCRAM-SHA-256$4096:AAAAAAAAAAAAAAAAAAAAAA==$WK3+41CCGDognSnZrtpHhv00z9LuVUjHR1hWq8T1+iE=:w2C5GuhgiEB7wXqPxYfxBKB+e\n> > hm4h6Oeif1uzpPIFVk= | 2022-03-03 06:47:53.728159-08\n> > (3 rows)\n>\n> There's obviously a salt problem here that I'll need to fix that\n> apparently snuck in at the last rebase, but this brings up one aspect\n> of the patchset I didn't mention in the original email:\n>\n\nAttached are fixed patches rebased against the lastest master.\n\n\n> For the SCRAM protocol to work as is with existing clients the salt\n> for each password must be the same. 
Right now ALTER USER will find and\n> reuse the salt, but a user passing in a pre-computed SCRAM secret\n> currently has no way to get the salt.\n>\n> for \\password (we'll need a new one that takes a password name) I was\n> thinking libpq could hold onto the salt that was used to log in, but\n> for outside computation we'll need some way for the client to request\n> it.\n>\n> None of that is done yet.",
"msg_date": "Wed, 2 Mar 2022 13:02:18 -0500",
"msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "> > On Wed, Mar 2, 2022 at 9:58 AM Joshua Brindle\n> > <joshua.brindle@crunchydata.com> wrote:\n> > >\n> > > This is not intended for PG15.\n> > >\n> > > Attached are a proof of concept patchset to implement multiple valid\n> > > passwords, which have independent expirations, set by a GUC or SQL\n> > > using an interval.\n> > >\n> >\n> > <snip>\n> >\n> > > postgres=# select * from pg_auth_password ;\n> > > roleid | name |\n> > > password\n> > > | expiration\n> > > --------+---------+-------------------------------------------------------------------------------------------------------------------\n> > > --------------------+-------------------------------\n> > > 10 | __def__ |\n> > > SCRAM-SHA-256$4096:yGiHIYPwc2az7xj/7TIyTA==$OQL/AEcEY1yOCNbrZEj4zDvNnOLpIqltOW1uQvosLvc=:9VRRppuIkSrwhiBN5ePy8wB1y\n> > > zDa/2uX0WUx6gXi93E= |\n> > > 16384 | __def__ |\n> > > SCRAM-SHA-256$4096:AAAAAAAAAAAAAAAAAAAAAA==$1Ivp4d+vAWxowpuGEn05KR9lxyGOms3yy85k3D7XpBg=:k8xUjU6xrJG17PMGa/Zya6pAE\n> > > /M7pEDaoIFmWvNIEUg= | 2022-03-02 06:52:31.217193-08\n> > > 16384 | newone |\n> > > SCRAM-SHA-256$4096:AAAAAAAAAAAAAAAAAAAAAA==$WK3+41CCGDognSnZrtpHhv00z9LuVUjHR1hWq8T1+iE=:w2C5GuhgiEB7wXqPxYfxBKB+e\n> > > hm4h6Oeif1uzpPIFVk= | 2022-03-03 06:47:53.728159-08\n> > > (3 rows)\n> >\n> > There's obviously a salt problem here that I'll need to fix that\n> > apparently snuck in at the last rebase, but this brings up one aspect\n> > of the patchset I didn't mention in the original email:\n> >\n>\n> Attached are fixed patches rebased against the lastest master.\n>\n>\n> > For the SCRAM protocol to work as is with existing clients the salt\n> > for each password must be the same. 
Right now ALTER USER will find and\n> > reuse the salt, but a user passing in a pre-computed SCRAM secret\n> > currently has no way to get the salt.\n> >\n> > for \\password (we'll need a new one that takes a password name) I was\n> > thinking libpq could hold onto the salt that was used to log in, but\n> > for outside computation we'll need some way for the client to request\n> > it.\n> >\n> > None of that is done yet.\n\nNow that the commitfest is over these are rebased on master.\n\nIt's unclear if I will be able to continue working on this featureset,\nthis email address will be inactive after today.",
"msg_date": "Fri, 8 Apr 2022 13:04:22 -0400",
"msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On 4/8/22 10:04, Joshua Brindle wrote:\n> It's unclear if I will be able to continue working on this featureset,\n> this email address will be inactive after today.\n\nI'm assuming the answer to this was \"no\". Is there any interest out\nthere to pick this up for the July CF?\n\n--Jacob\n\n\n",
"msg_date": "Wed, 29 Jun 2022 14:21:53 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "Greetings,\n\nOn Wed, Jun 29, 2022 at 17:22 Jacob Champion <jchampion@timescale.com>\nwrote:\n\n> On 4/8/22 10:04, Joshua Brindle wrote:\n> > It's unclear if I will be able to continue working on this featureset,\n> > this email address will be inactive after today.\n>\n> I'm assuming the answer to this was \"no\". Is there any interest out\n> there to pick this up for the July CF?\n\n\nShort answer to that is yes, I’m interested in continuing this (though\ncertainly would welcome it if there are others who are also interested, and\nmay be able to bring someone else to help work on it too but that might be\nmore August / September time frame).\n\nThanks,\n\nStephen\n\n>\n\nGreetings,On Wed, Jun 29, 2022 at 17:22 Jacob Champion <jchampion@timescale.com> wrote:On 4/8/22 10:04, Joshua Brindle wrote:\n> It's unclear if I will be able to continue working on this featureset,\n> this email address will be inactive after today.\n\nI'm assuming the answer to this was \"no\". Is there any interest out\nthere to pick this up for the July CF?Short answer to that is yes, I’m interested in continuing this (though certainly would welcome it if there are others who are also interested, and may be able to bring someone else to help work on it too but that might be more August / September time frame).Thanks,Stephen",
"msg_date": "Wed, 29 Jun 2022 17:27:37 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "I am planning on picking it up next week; right now picking up steam,\nand reviewing a different, smaller patch.\n\nAt his behest, I had a conversation with Joshua (OP), and have his\nsupport to pick up and continue working on this patch. I have a some\nideas of my own, on what this patch should do, but since I haven't\nfully reviewed the (bulky) patch, I'll reserve my proposals until I\nwrap my head around it.\n\nPlease expect some activity on this patch towards the end of next week.\n\nBCC: Joshua's new work email.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\nOn Wed, Jun 29, 2022 at 2:27 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> On Wed, Jun 29, 2022 at 17:22 Jacob Champion <jchampion@timescale.com> wrote:\n>>\n>> On 4/8/22 10:04, Joshua Brindle wrote:\n>> > It's unclear if I will be able to continue working on this featureset,\n>> > this email address will be inactive after today.\n>>\n>> I'm assuming the answer to this was \"no\". Is there any interest out\n>> there to pick this up for the July CF?\n>\n>\n> Short answer to that is yes, I’m interested in continuing this (though certainly would welcome it if there are others who are also interested, and may be able to bring someone else to help work on it too but that might be more August / September time frame).\n>\n> Thanks,\n>\n> Stephen\n\n\n",
"msg_date": "Thu, 30 Jun 2022 16:53:58 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "Greetings,\n\n* Gurjeet Singh (gurjeet@singh.im) wrote:\n> I am planning on picking it up next week; right now picking up steam,\n> and reviewing a different, smaller patch.\n\nGreat! Glad that others are interested in this.\n\n> At his behest, I had a conversation with Joshua (OP), and have his\n> support to pick up and continue working on this patch. I have a some\n> ideas of my own, on what this patch should do, but since I haven't\n> fully reviewed the (bulky) patch, I'll reserve my proposals until I\n> wrap my head around it.\n\nI'd be curious as to your thought as to what the patch should be doing.\nJoshua and I had discussed it at some length as he was working on it.\n\n> Please expect some activity on this patch towards the end of next week.\n\nI've gone ahead and updated it, cleaned up a couple things, and make it\nso that check-world actually passes with it. Attached is an updated\nversion and I'll add it to the July commitfest.\n\nThanks!\n\nStephen",
"msg_date": "Thu, 30 Jun 2022 20:20:34 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "\nOn 6/30/22 8:20 PM, Stephen Frost wrote:\n> Greetings,\n>\n> * Gurjeet Singh (gurjeet@singh.im) wrote:\n>> I am planning on picking it up next week; right now picking up steam,\n>> and reviewing a different, smaller patch.\n> Great! Glad that others are interested in this.\n>\n>> At his behest, I had a conversation with Joshua (OP), and have his\n>> support to pick up and continue working on this patch. I have a some\n>> ideas of my own, on what this patch should do, but since I haven't\n>> fully reviewed the (bulky) patch, I'll reserve my proposals until I\n>> wrap my head around it.\n> I'd be curious as to your thought as to what the patch should be doing.\n> Joshua and I had discussed it at some length as he was working on it.\n\n\nAdding myself to the CC list here /waves\n\n\nI gave Gurjeet a bit of a brain dump on what I had planned (and what \nwe'd talked about), though he's free to take it in a different direction \nif he wants.\n\n\n>> Please expect some activity on this patch towards the end of next week.\n> I've gone ahead and updated it, cleaned up a couple things, and make it\n> so that check-world actually passes with it. Attached is an updated\n> version and I'll add it to the July commitfest.\n\n\nAh, thanks. Hopefully it wasn't too horrible of a rebase.\n\n\n> Thanks!\n>\n> Stephen\n\n\n",
"msg_date": "Fri, 1 Jul 2022 10:50:56 -0400",
"msg_from": "\"Brindle, Joshua\" <joshuqbr@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "Greetings,\n\nOn Fri, Jul 1, 2022 at 10:51 Brindle, Joshua <joshuqbr@amazon.com> wrote:\n\n>\n> On 6/30/22 8:20 PM, Stephen Frost wrote:\n> > * Gurjeet Singh (gurjeet@singh.im) wrote:\n> >> I am planning on picking it up next week; right now picking up steam,\n> >> and reviewing a different, smaller patch.\n> > Great! Glad that others are interested in this.\n> >\n> >> At his behest, I had a conversation with Joshua (OP), and have his\n> >> support to pick up and continue working on this patch. I have a some\n> >> ideas of my own, on what this patch should do, but since I haven't\n> >> fully reviewed the (bulky) patch, I'll reserve my proposals until I\n> >> wrap my head around it.\n> > I'd be curious as to your thought as to what the patch should be doing.\n> > Joshua and I had discussed it at some length as he was working on it.\n>\n>\n> Adding myself to the CC list here /waves\n\n\nHi!\n\nI gave Gurjeet a bit of a brain dump on what I had planned (and what\n> we'd talked about), though he's free to take it in a different direction\n> if he wants.\n\n\nPerhaps though would certainly like this to patch to be useful for the\nuse-cases that we had discussed, naturally. :)\n\n>> Please expect some activity on this patch towards the end of next week.\n> > I've gone ahead and updated it, cleaned up a couple things, and make it\n> > so that check-world actually passes with it. Attached is an updated\n> > version and I'll add it to the July commitfest.\n>\n> Ah, thanks. Hopefully it wasn't too horrible of a rebase.\n\n\nWasn’t too bad.. needs more clean-up, there was some white space issues and\nsome simple re-base stuff, but then the support for “md5” pg_hba option was\nbroken for users with SCRAM passwords because we weren’t checking if there\nwas a SCRAM pw stored and upgrading to SCRAM in that case. That’s the main\ncase that I fixed. We will need to document this though, of course. 
The\npatch I submitted should basically do:\n\npg_hba md5 + md5-only pws -> md5 auth used\npg_hba md5 + scram-only pws -> scram\npg_hba md5 + md5 and scram pws -> scram\npg_hba scram -> scram\n\nNot sure if we need to try and do something to make it possible to have\npg_hba md5 + mixed pws and have md5 used but it’s tricky as we would have\nto know on the server side early on if that’s what we want to do. We could\nadd an option to md5 to say “only do md5” maybe but I’m also inclined to\nnot bother and tell people to just get moved to scram already.\n\nFor my 2c, I’d also like to move to having a separate column for the PW\ntype from the actual secret but that’s largely an independent change.\n\nThanks!\n\nStephen\n\n>\n\nGreetings,On Fri, Jul 1, 2022 at 10:51 Brindle, Joshua <joshuqbr@amazon.com> wrote:\nOn 6/30/22 8:20 PM, Stephen Frost wrote:\n> * Gurjeet Singh (gurjeet@singh.im) wrote:\n>> I am planning on picking it up next week; right now picking up steam,\n>> and reviewing a different, smaller patch.\n> Great! Glad that others are interested in this.\n>\n>> At his behest, I had a conversation with Joshua (OP), and have his\n>> support to pick up and continue working on this patch. I have a some\n>> ideas of my own, on what this patch should do, but since I haven't\n>> fully reviewed the (bulky) patch, I'll reserve my proposals until I\n>> wrap my head around it.\n> I'd be curious as to your thought as to what the patch should be doing.\n> Joshua and I had discussed it at some length as he was working on it.\n\n\nAdding myself to the CC list here /wavesHi!\nI gave Gurjeet a bit of a brain dump on what I had planned (and what \nwe'd talked about), though he's free to take it in a different direction \nif he wants.Perhaps though would certainly like this to patch to be useful for the use-cases that we had discussed, naturally. 
:)\n>> Please expect some activity on this patch towards the end of next week.\n> I've gone ahead and updated it, cleaned up a couple things, and make it\n> so that check-world actually passes with it. Attached is an updated\n> version and I'll add it to the July commitfest.\nAh, thanks. Hopefully it wasn't too horrible of a rebase.Wasn’t too bad.. needs more clean-up, there was some white space issues and some simple re-base stuff, but then the support for “md5” pg_hba option was broken for users with SCRAM passwords because we weren’t checking if there was a SCRAM pw stored and upgrading to SCRAM in that case. That’s the main case that I fixed. We will need to document this though, of course. The patch I submitted should basically do:pg_hba md5 + md5-only pws -> md5 auth usedpg_hba md5 + scram-only pws -> scrampg_hba md5 + md5 and scram pws -> scrampg_hba scram -> scramNot sure if we need to try and do something to make it possible to have pg_hba md5 + mixed pws and have md5 used but it’s tricky as we would have to know on the server side early on if that’s what we want to do. We could add an option to md5 to say “only do md5” maybe but I’m also inclined to not bother and tell people to just get moved to scram already. For my 2c, I’d also like to move to having a separate column for the PW type from the actual secret but that’s largely an independent change.Thanks!Stephen",
"msg_date": "Fri, 1 Jul 2022 11:13:35 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Fri, Jul 1, 2022 at 8:13 AM Stephen Frost <sfrost@snowman.net> wrote:\n> On Fri, Jul 1, 2022 at 10:51 Brindle, Joshua <joshuqbr@amazon.com> wrote:\n>> On 6/30/22 8:20 PM, Stephen Frost wrote:\n\n>> > I've gone ahead and updated it, cleaned up a couple things, and make it\n>> > so that check-world actually passes with it. Attached is an updated\n>> > version and I'll add it to the July commitfest.\n>>\n>> Ah, thanks. Hopefully it wasn't too horrible of a rebase.\n>\n> Wasn’t too bad.. needs more clean-up, there was some white space issues and some simple re-base stuff, but then the support for “md5” pg_hba option was broken for users with SCRAM passwords because we weren’t checking if there was a SCRAM pw stored and upgrading to SCRAM in that case. That’s the main case that I fixed. We will need to document this though, of course. The patch I submitted should basically do:\n>\n> pg_hba md5 + md5-only pws -> md5 auth used\n> pg_hba md5 + scram-only pws -> scram\n> pg_hba md5 + md5 and scram pws -> scram\n> pg_hba scram -> scram\n>\n> Not sure if we need to try and do something to make it possible to have pg_hba md5 + mixed pws and have md5 used but it’s tricky as we would have to know on the server side early on if that’s what we want to do. We could add an option to md5 to say “only do md5” maybe but I’m also inclined to not bother and tell people to just get moved to scram already.\n>\n> For my 2c, I’d also like to move to having a separate column for the PW type from the actual secret but that’s largely an independent change.\n\nThe docs say this about rolpassword in case it stores SCRAM-SHA-256\nencrypted password \"If the password is encrypted with SCRAM-SHA-256,\nit has the format SCRAM-SHA-256$... This format is the same as that\nspecified by RFC-5803\". So I believe our hands are tied, and we\ncannot change that without breaking compliance with RFC 5803.\n\nPlease see attached v4 of the patch. 
The patch takes care of rebase to\nthe master/17-devel branch, and includes some changes, too. The\nrebase/merge conflicts were quite involved, since some affected files\nhad been removed, or even split into multiple files over the course of\nthe last year; resolving merge conflicts was mostly grunt work.\n\nThe changes since V3 are (compare [1] vs. [2], Git branches linked below):\n- Remove TOAST table and corresponding index from pg_authid.\n- Fix memory leak/allocation bug; replace malloc() with guc_alloc().\n- Fix assumptions about passed-in double-pointers to GUC handling functions.\n- Remove the new function is_role_valid() and its call sites, because\nI believe it made a backward-incompatible change to authentication\nbehavior (see more below).\n- Improve error handling that was missing in a few places.\n- Remove unnecessary checks, like (*var != NULL) checks when we know\nall callers pass a NULL by convention.\n- Replace MemSet() calls with var={0} styled initialization.\n- Minor edits to docs to change them from pg_authid to pg_auth_password.\n\nMore details about why I chose to remove is_role_valid() and revert to\nthe old code:\nis_role_valid() was a new function that pulled out a small section of\ncode from get_role_passwords(). I don't think moving this code block\nto a new function gains us anything; in fact, it now forces us to call\nthe new function in two new locations, which we didn't have to do\nbefore. It has to throw the same error messages as before, to maintain\ncompatibility with external tools/libraries, hence it duplicates those\nmessages as well, which is not ideal.\n\nMoreover, before the patch, in case of CheckPasswordAuth(), the error\n(if any) would have been thrown _after_ network communication done by\nthe sendAuthRequest() call. But with this patch, the error is thrown\nbefore the network interaction, hence this changes the order of\nnetwork interaction and the error message. 
This may have security\nimplications, too, but I'm unable to articulate one right now.\n\nIf we really want the role-validity check to be a function of its own,\na separate patch can address that; this patch doesn't have to make\nthat decision.\n\nOpen question: If a client is capable of performing just the md5 password\nhandshake, and because of the pg_hba.conf setting, or because the role has\nat least one SCRAM password (essentially the 3rd case you mention\nabove: pg_hba md5 + md5 and scram pws -> scram), the server will\nrespond with a SASL/SCRAM authentication response, and that would\nbreak backwards compatibility and will deny access to the client.\nDoes this make it necessary to use a newer libpq/client library?\n\nBefore the patch, the rolvaliduntil was used to check and complain\nthat the password has expired, as the docs explicitly state that\nrolvaliduntil represents \"Password expiry time (only used for password\nauthentication); null if no expiration\". Keeping that column after\nthe introduction of per-password expiry times now separates the\nrole-validity from password validity. During an internal discussion the\nquestion arose of whether we can simply remove rolvaliduntil. And I\nbelieve the answer is yes, primarily because of how the docs describe\nthe column. So my proposal is to remove rolvaliduntil from pg_authid,\nand on a case-by-case basis, optionally replace its uses with\nmax(pg_auth_password.expiration).\n\nComments?\n\nNext steps:\n- Break the patch into smaller patches.\n- Address TODO items\n- Comment each new function\n- Add tests\n- Add/update documentation\n\nPS: Since this is a large patch, and because in some portions the code\nhas been indented by a level or two (e.g. 
to run a `for` loop over\nexisting code for single-password), I have found the following Git\ncommand to be helpful in reviewing the changes between master and this\nbranch: `git diff -b --color-words -U20 origin/master...HEAD -- `\n\n[1]: v3 patch, applied to a contemporary commit on master branch\nhttps://github.com/gurjeet/postgres/commits/multiple_passwords_v3\n\n[2]: main development branch, patch rebased to current master branch,\nfollowed by many changes\nhttps://github.com/gurjeet/postgres/commits/multiple_passwords\n\n\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Mon, 25 Sep 2023 00:31:43 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Mon, 2023-09-25 at 00:31 -0700, Gurjeet Singh wrote:\n\n> Please see attached v4 of the patch. The patch takes care of rebase\n> to\n> the master/17-devel branch, and includes some changes, too.\n\nFWIW I got some failures applying. I didn't investigate much, and\ninstead I looked at your git branch (7a35619e).\n\n> Moreover, before the patch, in case of CheckPasswordAuth(), the error\n> (if any) would have been thrown _after_ network communication done by\n> sendAuthRequest() call. But with this patch, the error is thrown\n> before the network interaction, hence this changes the order of\n> network interaction and the error message. This may have security\n> implications, too, but I'm unable to articulate one right now.\n\nYou mean before v3 or before v4? Is this currently a problem in v4?\n\n> Open question: If a client is capable of providing just md5 passwords\n> handshake, and because of pg_hba.conf setting, or because the role\n> has\n> at least one SCRAM password (essentially the 3rd case you mention\n> above: pg_hba md5 + md5 and scram pws -> scram), the server will\n> respond with a SASL/SCRAM authentication response, and that would\n> break the backwards compatibility and will deny access to the client.\n> Does this make it necessary to use a newer libpq/client library?\n\nPerhaps you can try the MD5 passwords first, and only if they fail,\nmove on to try scram passwords?\n\n> Comments?\n\nIIUC, for the case of multiple scram passwords, we use the salt to\nselect the right scram password, and then proceed from there?\n\nI'm not very excited about the idea of naming passwords, or having\npasswords with default names. I can't think of anything better right\nnow, so it might be OK.\n\n> - Add tests\n> - Add/update documentation\n\nThese are needed to provide better review.\n\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 26 Sep 2023 16:36:31 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "I had an idea to simplify this feature/patch and after some validation\nin internal discussions, I am posting the new approach here. I'd\nappreciate any feedback and comments.\n\nTo begin with, the feature we are chasing is to make it convenient for\nthe users to rollover their passwords. Currently there is no easy way\nto rollover passwords without increasing the risk of an application\noutage. After a password change, the users/admins have to rush to\nchange the password in all locations where it is stored. There is a\nwindow of time where if the application password is not changed to the\nnew one, and the application tries to connect/reconnect for any\nreason, the application will fail authentication and lead to an outage.\n\nI personally haven't seen any attempts by any\napplication/driver/framework to solve this problem in the wild, so\nfollowing is just me theorizing how one may solve this problem on the\napplication side; there may be other ways in which others may solve\nthis problem. The application may be written in such a way that upon\npassword authentication failure, it tries again with a second\npassword. The application's configuration file (or environment\nvariables) may allow specifying 2 passwords at the same time, and the\napplication will keep trying these 2 passwords alternatively until it\nsucceeds or the user restarts it with a new configuration. With such a\nlogic in place in their application, the users may first change the\nconfiguration of all the instances of the application to hold the new\npassword along with the old/current working password, and only then\nchange the password in the database. This way, in the event of an\napplication instance start/restart either the old password will\nsucceed, or the new password will.\n\nThere may be other ways to solve this problem, but I can't imagine any\nof those ways to be convenient and straightforward. 
At least not as\nconvenient as it can be if the database itself allowed for storing\nboth the passwords, and honored both passwords at the same time, while\nallowing to associate a separate validity period with each of the\npasswords.\n\nThe patches posted in this thread so far attempt to add the ability to\nallow the user to have an arbitrary number of passwords. I believe\nthat allowing arbitrary number of passwords is not only unnecessary,\nbut the need to name passwords, the need to store them in a shared\ncatalog, etc. may actually create problems in the field. The\nusers/admins will have to choose names for passwords, which they\ndidn't have to previously. The need to name them may also lead to\nusers storing password-hints in the password names (e.g. 'mom''s\nbirthday', 'ex''s phone number', 'third password'), rendering the\npasswords weak.\n\nMoreover, allowing an arbitrarily many number of passwords will\nrequire us to provide additional infrastructure to solve problems like\nobservability (which passwords are currently in use, and which ones\nhave been effectively forgotten by applications), or create a nuisance\nfor admins that can create more problems than it solves.\n\nSo I propose that the feature should allow no more than 2 passwords\nfor a role, each with their own validity periods. This eliminates the\nneed to name passwords, because at any given time there are no more\nthan 2 passwords; current one, and old one. This also eliminates the\nneed for a shared catalog to hold passwords, because with the limit of\n2 imposed, we can store the old password and its validity period in\nadditional columns in the pg_authid table.\n\nThe patches so far also add a notion of max validity period of\npasswords, which only a superuser can override. I believe this is a\nuseful feature, but that feature can be dealt with separately,\nindependent of password rollover feature. 
So in the newer patches I\nwill not include the relevant GUC and code.\n\nWith the above being said, following is the user interface I can think\nof that can allow for various operations that users may need to\nperform to rollover their passwords. The 'ADD PASSWORD' and 'ALL\nPASSWORD' are additions to the grammar. rololdpassword and\nrololdvaliduntil will be new columns in pg_authid that will hold the\nold password and its valid-until value.\n\nIn essence, we create a stack that can hold 2 passwords. Pushing an\nelement when it's full will make it forget the bottom element. Popping\nthe stack makes it forget the top element, and the only remaining\nelement, if any, becomes the top.\n\n-- Create a user, as usual\nCREATE ROLE u1 PASSWORD 'p1' VALID UNTIL '2020/01/01';\n\n-- Add another password that the user can use for authentication. This moves\n-- the 'p1' password hash and its valid-until value to rololdpassword and\n-- rololdvaliduntil, respectively.\nALTER ROLE u1 ADD PASSWORD 'p2' VALID UNTIL '2021/01/01';\n\n-- Change the password 'p2's (current password's) validity\nALTER ROLE u1 VALID UNTIL '2022/01/01';\n-- Note that currently I don't have a proposal for how to change the old\n-- password's validity period, without deleting the latest/main password. See\n-- PASSWORD NULL command below on how to delete the current password. It's very\n-- likely that in a password rollover use case it's unnecessary, even\n-- undesirable, to change the old password's validity period.\n\n-- If, for some reason, the user wants to get rid of the latest password added.\n-- Remove 'p2' (current password). The old password (p1), will be restored to\n-- rolpassword, along with its valid-until value.\nALTER ROLE u1 PASSWORD NULL;\n-- This may come as a surprise to some users, because currently they expect the\n-- user to completely lose the ability to use passwords for login after this\n-- command. 
To get the old behavior, the user must now use the ALL PASSWORD\n-- NULL incantation; see below.\n-- Issuing this command one more time will remove even the restored password,\n-- hence leaving the user with no passwords.\n\n-- Change the validity of the restored/current password (p1)\nALTER ROLE u1 VALID UNTIL '2022/01/01';\n\n-- Add a new password (p3) without affecting old password's (p1) validity\nALTER ROLE u1 ADD PASSWORD 'p3' VALID UNTIL '2023/01/01';\n\n-- Add a new password 'p4'. This will move 'p3' to rololdpassword, and hence\n-- 'p1' will be forgotten completely.\n-- After this command, user can use passwords 'p3' (old) and 'p4' (current) to\n-- login.\nALTER ROLE u1 ADD PASSWORD 'p4' VALID UNTIL '2024/01/01';\n\n-- Replace 'p4' (current) password with 'p5'. Note that this command is _not_\n-- using the ADD keyword, hence 'p4' is _not_ moved to rololdpassword column.\n-- After this command, user can use passwords 'p3' (old) and 'p5'\n-- (current) to login.\nALTER ROLE u1 PASSWORD 'p5' VALID UNTIL '2025/01/01';\n\n-- Using the old password to login will produce a warning, hopefully nudging\n-- the user to start using the new password.\nexport PGPASSWORD=p3\npsql -U u1\n...\nWARNING: Used old password to login\n\nexport PGPASSWORD=p5\npsql -U u1\n...\n=> (no warning)\n\n-- Remove all passwords from the role. Even old password, if any, is removed.\nALTER ROLE u1 ALL PASSWORD NULL;\n\nIn normal use, the users can simply keep ADDing new passwords, and the\ndatabase will promptly remember only one old password, and keep\nforgetting any passwords older than that. But on the off chance\nthat someone needs to forget the latest password they added, and\nrestore the old password to be the \"current\" password, they can use\nthe PASSWORD NULL incantation. 
Note that this will result in\nrol*old*password being set to NULL, because our password memory stack\ncannot hold more than 2 elements.\n\nSince this feature is targeted towards password rollovers, it's a legitimate\nquestion to ask if we should enforce that the new password being added has a\nvalid-until greater than the valid-until of the existing/old password. I don't\nthink we should enforce this, at least not in this patch, because the\nuser/admin may have a use case where they want a short-lived new password that\nthey intend/want to change very soon; I'm thinking of cases where passwords are\nbeing rolled over while they are also moving from older clients/libraries that\ndon't yet support scram-sha-256; keep using md5 and add passwords to honor\npassword rollover policy, but then as soon as all clients have been updated and\nhave the ability to use scram-sha-256, rollover the password again to utilize\nthe better mechanism.\n\nI realize that allowing for a maximum of 2 passwords goes against the\nzero-one-infinity rule [1], but I think in the case of password\nrollovers it's perfectly acceptable to limit the number of active\npasswords to just 2. If there are use cases, either related to password\nrollovers, or in its vicinity, that can be better addressed by\nallowing arbitrarily many passwords, I would love to learn about\nthem and change this design to accommodate those use cases, or\nperhaps revert to pursuing the multiple-passwords feature.\n\n[1]: https://en.wikipedia.org/wiki/Zero_one_infinity_rule\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Wed, 4 Oct 2023 22:41:15 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Wed, Oct 04, 2023 at 10:41:15PM -0700, Gurjeet Singh wrote:\n> The patches posted in this thread so far attempt to add the ability to\n> allow the user to have an arbitrary number of passwords. I believe\n> that allowing arbitrary number of passwords is not only unnecessary,\n> but the need to name passwords, the need to store them in a shared\n> catalog, etc. may actually create problems in the field. The\n> users/admins will have to choose names for passwords, which they\n> didn't have to previously. The need to name them may also lead to\n> users storing password-hints in the password names (e.g. 'mom''s\n> birthday', 'ex''s phone number', 'third password'), rendering the\n> passwords weak.\n> \n> Moreover, allowing an arbitrarily many number of passwords will\n> require us to provide additional infrastructure to solve problems like\n> observability (which passwords are currently in use, and which ones\n> have been effectively forgotten by applications), or create a nuisance\n> for admins that can create more problems than it solves.\n\nIMHO neither of these problems seems insurmountable. Besides advising\nagainst using hints as names, we could also automatically generate safe\nnames, or even disallow user-provided names entirely. And adding\nobservability for passwords seems worthwhile anyway.\n\n> So I propose that the feature should allow no more than 2 passwords\n> for a role, each with their own validity periods. This eliminates the\n> need to name passwords, because at any given time there are no more\n> than 2 passwords; current one, and old one. This also eliminates the\n> need for a shared catalog to hold passwords, because with the limit of\n> 2 imposed, we can store the old password and its validity period in\n> additional columns in the pg_authid table.\n\nAnother approach could be to allow an arbitrary number of passwords but to\nalso allow administrators to limit how many passwords can be associated to\neach role. 
That way, we needn't restrict this feature to 2 passwords for\neveryone. Perhaps 2 should be the default, but in any case, IMO we\nshouldn't design to only support 2.\n\n> In essence, we create a stack that can hold 2 passwords. Pushing an\n> element when it's full will make it forget the bottom element. Popping\n> the stack makes it forget the top element, and the only remaining\n> element, if any, becomes the top.\n\nI think this would be mighty confusing to users since it's not clear that\nadding a password will potentially invalidate a current password (which\nmight be actively in use), but only if there are already 2 in the stack. I\nworry that such a design might be too closely tailored to the\nimplementation details. If we proceed with this design, perhaps we should\nconsider ERROR-ing if a user tries to add a third password.\n\n> -- If, for some reason, the user wants to get rid of the latest password added.\n> -- Remove 'p2' (current password). The old password (p1), will be restored to\n> -- rolpassword, along with its valid-until value.\n> ALTER ROLE u1 PASSWORD NULL;\n> -- This may come as a surprise to some users, because currently they expect the\n> -- user to completely lose the ability to use passwords for login after this\n> -- command. To get the old behavior, the user must now use the ALL PASSWORD\n> -- NULL incantation; see below.\n> -- Issuing this command one more time will remove even the restored password,\n> -- hence leaving the user with no passwords.\n\nIs it possible to remove the oldest password added without removing the\nlatest password added?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 5 Oct 2023 14:04:53 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Thu, 2023-10-05 at 14:04 -0500, Nathan Bossart wrote:\n> IMHO neither of these problems seems insurmountable. Besides\n> advising\n> against using hints as names, we could also automatically generate\n> safe\n> names, or even disallow user-provided names entirely. \n\nI'd like to see what this looks like as a user-interface. Using a name\nseems weird because of the reasons Gurjeet mentioned.\n\nUsing a number seems weird to me because either:\n\n (a) if the number is always increasing you'd have to look to find the\nnumber of the new slot to add and the old slot to remove; or\n (b) if switched between two numbers (say 0 and 1), it would be error\nprone because you'd have to remember which is the old one that can be\nsafely replaced\n\nMaybe a password is best described by its validity period rather than a\nname? But what about passwords that don't expire?\n\n> And adding\n> observability for passwords seems worthwhile anyway.\n\nThat might be useful just to know whether a user's password is even\nbeing used -- in case the admin makes a mistake and some other auth\nmethod is being used. Also it would help to know when a password can\nsafely be removed.\n> \n\n> That way, we needn't restrict this feature to 2 passwords for\n> everyone. Perhaps 2 should be the default, but in any case, IMO we\n> shouldn't design to only support 2.\n\nAre there use cases for lots of passwords, or is it just a matter of\nnot introducing an artificial limitation?\n\nWould it ever make sense to have a role that has two permanent\npasswords, or would that be an abuse of this feature? Any use cases I\ncan think of would be better solved with multiple user that are part of\nthe same group.\n\n> \n> I think this would be mighty confusing to users since it's not clear\n> that\n> adding a password will potentially invalidate a current password\n> (which\n> might be actively in use), but only if there are already 2 in the\n> stack. 
I\n> worry that such a design might be too closely tailored to the\n> implementation details. If we proceed with this design, perhaps we\n> should\n> consider ERROR-ing if a user tries to add a third password.\n\nI agree that the proposed language is confusing, especially because ADD\ncauses a password to be added and another one to be removed. But\nperhaps there are some non-confusing ways to expose a similar idea.\n\nThe thing I like about Gurjeet's proposal is that it's well-targeted at\na specific use case rather than trying to be too general. That makes it\na little easier to avoid certain problems like having a process that\nadds passwords and never removes the old ones (leading to weird\nproblems like 47000 passwords for one user).\n\nBut it also feels strange to be limited to two -- perhaps the password\nrotation schedule or policy just doesn't work with a limit of two, or\nperhaps that introduces new kinds of mistakes.\n\nAnother idea: what if we introduce the notion of deprecating a\npassword? To remove a password, it would have to be deprecated first,\nand maybe that would cause a LOG or WARNING message to be emitted when\nused, or show up differently in some system view. And perhaps you could\nhave at most one non-deprecated password. That would give a workflow\nsomething like (I'm not proposing these exact keywords):\n\n CREATE USER foo PASSWORD 'secret1';\n ALTER USER foo DEPRECATE PASSWORD; -- 'secret1' still works\n ALTER USER foo PASSWORD 'secret2'; -- 'secret1' or 'secret2' works\n ... fix some applications\n SET log_deprecated_password_use = WARNING;\n\n ...\n WARNING: deprecated password used for user 'foo'\n ... fix some applications you forgot about\n ... 
warnings quiet down\n ALTER USER foo DROP DEPRECATED PASSWORD; -- only 'secret2' works\n\nIf the user wants to un-deprecate a password (before they drop it, of\ncourse), they can do something like:\n\n ALTER USER foo PASSWORD NULL; -- 'secret2' removed\n ALTER USER foo UNDEPRECATE PASSWORD; -- 'secret1' restored\n\nIf we allow multiple deprecated passwords, we'd still have to come up\nwith some way to address them (names, numbers, validity period,\nsomething). But by isolating the problem to deprecated passwords only,\nit feels like the system is still being restored to a clean state with\nat most one single current password. The awkwardness is contained to\nold passwords which will hopefully go away soon anyway and not\nrepresent permanent clutter.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 05 Oct 2023 13:09:36 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Thu, Oct 5, 2023 at 12:04 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Wed, Oct 04, 2023 at 10:41:15PM -0700, Gurjeet Singh wrote:\n> > The patches posted in this thread so far attempt to add the ability to\n> > allow the user to have an arbitrary number of passwords. I believe\n> > that allowing arbitrary number of passwords is not only unnecessary,\n> > but the need to name passwords, the need to store them in a shared\n> > catalog, etc. may actually create problems in the field. The\n> > users/admins will have to choose names for passwords, which they\n> > didn't have to previously. The need to name them may also lead to\n> > users storing password-hints in the password names (e.g. 'mom''s\n> > birthday', 'ex''s phone number', 'third password'), rendering the\n> > passwords weak.\n> >\n> > Moreover, allowing an arbitrarily many number of passwords will\n> > require us to provide additional infrastructure to solve problems like\n> > observability (which passwords are currently in use, and which ones\n> > have been effectively forgotten by applications), or create a nuisance\n> > for admins that can create more problems than it solves.\n>\n> IMHO neither of these problems seems insurmountable.\n\nAgreed.\n\n> Besides advising\n> against using hints as names, we could also automatically generate safe\n> names, or even disallow user-provided names entirely.\n\nSomehow naming passwords doesn't feel palatable to me.\n\n> And adding\n> observability for passwords seems worthwhile anyway.\n\nAgreed.\n\n> > So I propose that the feature should allow no more than 2 passwords\n> > for a role, each with their own validity periods. This eliminates the\n> > need to name passwords, because at any given time there are no more\n> > than 2 passwords; current one, and old one. 
This also eliminates the\n> > need for a shared catalog to hold passwords, because with the limit of\n> > 2 imposed, we can store the old password and its validity period in\n> > additional columns in the pg_authid table.\n>\n> Another approach could be to allow an arbitrary number of passwords but to\n> also allow administrators to limit how many passwords can be associated to\n> each role. That way, we needn't restrict this feature to 2 passwords for\n> everyone. Perhaps 2 should be the default, but in any case, IMO we\n> shouldn't design to only support 2.\n\nI don't see a real use case to support more than 2 passwords. Allowing\nan arbitrary number of passwords might look good and clean from an\naesthetics and documentation perspective (no artificially enforced\nlimits, as in the zero-one-infinity rule), but in the absence of real use\ncases for that many passwords, I'm afraid we might end up with a\nfeature that creates more and worse problems for the users than it\nsolves.\n\n> > In essence, we create a stack that can hold 2 passwords. Pushing an\n> > element when it's full will make it forget the bottom element. Popping\n> > the stack makes it forget the top element, and the only remaining\n> > element, if any, becomes the top.\n>\n> I think this would be mighty confusing to users since it's not clear that\n> adding a password will potentially invalidate a current password (which\n> might be actively in use), but only if there are already 2 in the stack.\n\nFair point. We can aid the user by emitting a NOTICE (or a WARNING)\nmessage whenever an old password is removed from the system because of\nthe addition of a new password.\n\n> I\n> worry that such a design might be too closely tailored to the\n> implementation details. If we proceed with this design, perhaps we should\n> consider ERROR-ing if a user tries to add a third password.\n\nI did not have a stack in mind when developing the use case and the\ngrammar, so implementation details did not drive this design. 
This new\ndesign was more of a response to the manageability nightmare that I\ncould see the old approach may lead to. When writing the email I\nthought mentioning the stack analogy may make it easier to develop a\nmental model. I certainly won't suggest using it in the docs for\nexplanation :-)\n\n> > -- If, for some reason, the user wants to get rid of the latest password added.\n> > -- Remove 'p2' (current password). The old password (p1), will be restored to\n> > -- rolpassword, along with its valid-until value.\n> > ALTER ROLE u1 PASSWORD NULL;\n> > -- This may come as a surprise to some users, because currently they expect the\n> > -- user to completely lose the ability to use passwords for login after this\n> > -- command. To get the old behavior, the user must now use the ALL PASSWORD\n> > -- NULL incantation; see below.\n> > -- Issuing this command one more time will remove even the restored password,\n> > -- hence leaving the user with no passwords.\n>\n> Is it possible to remove the oldest password added without removing the\n> latest password added?\n\nIn the patch I have so far, ALTER ROLE u1 ADD PASSWORD '' (empty\nstring) will drop the old password (what you asked for), and move the\ncurrent password to rololdpassword (which is not exactly what you\nasked for :-). Hence the oldest password will be forgotten, and the\ncurrent password will continue to work;\n\nPerhaps an explicit syntax like ALTER ROLE u1 DROP OLD PASSWORD can be\nused for this.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Thu, 5 Oct 2023 13:55:26 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Thu, Oct 5, 2023 at 1:09 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> >\n> > I think this would be mighty confusing to users since it's not clear\n> > that\n> > adding a password will potentially invalidate a current password\n> > (which\n> > might be actively in use), but only if there are already 2 in the\n> > stack. I\n> > worry that such a design might be too closely tailored to the\n> > implementation details. If we proceed with this design, perhaps we\n> > should\n> > consider ERROR-ing if a user tries to add a third password.\n>\n> I agree that the proposed language is confusing, especially because ADD\n> causes a password to be added and another one to be removed. But\n> perhaps there are some non-confusing ways to expose a similar idea.\n\nHow about a language like the following (I haven't checked whether this will\nwork in the grammar we have):\n\nCREATE ROLE u1 PASSWORD 'p1';\n\nALTER ROLE u1 ADD NEW PASSWORD 'p2';\n\nALTER ROLE u1 ADD NEW PASSWORD 'p3';\nERROR: Cannot have more than 2 passwords at the same time.\n\nALTER ROLE u1 DROP OLD PASSWORD;\n\nALTER ROLE u1 ADD NEW PASSWORD 'p3';\n-- succeeds; forgets password 'p1'; p2 and p3 can be used to login\n\nALTER ROLE u1 DROP NEW PASSWORD;\n-- forgets password 'p3'. Only 'p2' can be used to login\n\nALTER ROLE u1 ADD NEW PASSWORD 'p4';\n-- succeeds; 'p2' and 'p4' can be used to login\n\n-- Set the valid-until of the 'new' (p4) password\nALTER ROLE u1 VALID UNTIL '2024/01/01';\n\n-- If we need the ability to change the valid-until of both old and new,\nwe may allow something like the following.\nALTER ROLE u1 [_NEW_ | OLD] VALID UNTIL '2024/01/01';\n\nThis way there's a notion of 'new' and 'old' passwords. The user cannot\nadd a third password without explicitly dropping one of the existing\npasswords (either old or new). At any time the user can choose to drop\nthe old or the new password. 
Adding a new password will mark the\ncurrent password as 'old'; if there's only an old password (because 'new'\nwas dropped) the 'old' password will remain intact and the new one\nwill be placed in the 'current'/new spot.\n\nSo in the normal course of operation, even for automated jobs, the\nexpected flow to roll over the passwords would be:\n\nALTER USER u1 DROP OLD PASSWORD;\n-- success if there is an old password; otherwise NOTICE: no old password\nALTER USER u1 ADD NEW PASSWORD 'new-secret';\n\n> The thing I like about Gurjeet's proposal is that it's well-targeted at\n> a specific use case rather than trying to be too general. That makes it\n> a little easier to avoid certain problems like having a process that\n> adds passwords and never removes the old ones (leading to weird\n> problems like 47000 passwords for one user).\n>\n> But it also feels strange to be limited to two -- perhaps the password\n> rotation schedule or policy just doesn't work with a limit of two, or\n> perhaps that introduces new kinds of mistakes.\n>\n> Another idea: what if we introduce the notion of deprecating a\n> password?\n\nI'll have to think more about it, but perhaps my above proposal\naddresses the use case you describe.\n\n> if we allow multiple deprecated passwords, we'd still have to come up\n> with some way to address them (names, numbers, validity period,\n> something). But by isolating the problem to deprecated passwords only,\n> it feels like the system is still being restored to a clean state with\n> at most one single current password. The awkwardness is contained to\n> old passwords which will hopefully go away soon anyway and not\n> represent permanent clutter.\n\n+1\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Thu, 5 Oct 2023 14:28:17 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Thu, Oct 05, 2023 at 01:09:36PM -0700, Jeff Davis wrote:\n> On Thu, 2023-10-05 at 14:04 -0500, Nathan Bossart wrote:\n>> That way, we needn't restrict this feature to 2 passwords for\n>> everyone. Perhaps 2 should be the default, but in any case, IMO we\n>> shouldn't design to only support 2.\n> \n> Are there use cases for lots of passwords, or is it just a matter of\n> not introducing an artificial limitation?\n\nI guess it's more of the latter. Perhaps one potential use case would be\nshort-lived credentials that are created on demand. Such a password might\nonly be valid for something like 15 minutes, and many users might have the\nability to request a password for the database role. I don't know whether\nthere is a ton of demand for such a use case, and it might already be\nsolvable by just creating separate roles. In any case, if there's general\nagreement that we only want to target the rotation use case, that's fine by\nme.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 6 Oct 2023 14:26:31 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Fri, 2023-10-06 at 14:26 -0500, Nathan Bossart wrote:\n> I guess it's more of the latter. Perhaps one potential use case\n> would be\n> short-lived credentials that are created on demand. Such a password\n> might\n> only be valid for something like 15 minutes, and many users might\n> have the\n> ability to request a password for the database role. I don't know\n> whether\n> there is a ton of demand for such a use case, and it might already be\n> solvable by just creating separate roles. In any case, if there's\n> general\n> agreement that we only want to target the rotation use case, that's\n> fine by\n> me.\n\nThe basic problem, as I see it, is: how do we keep users from\naccidentally dropping the wrong password? Generated unique names or\nnumbers don't solve that problem. Auto-incrementing or a created-at\ntimestamp solves it in the sense that you can at least look at a system\nview and see if there's a newer one, but it's a little awkward. A\nvalidity period is a good fit if all passwords have a validity period\nand we don't change it, but gets awkward otherwise.\n\nI'm also worried about two kinds of clutter:\n\n* old passwords not being garbage-collected\n* the identifier of the current password always changing (perhaps fine\nif it's a \"created at\" ID?)\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 06 Oct 2023 13:20:03 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Thu, 2023-10-05 at 14:28 -0700, Gurjeet Singh wrote:\n\n> This way there's a notion of a 'new' and 'old' passwords.\n\nIIUC, you are proposing that there are exactly two slots, NEW and OLD.\nWhen adding a password, OLD must be unset and it moves NEW to OLD, and\nadds the new password in NEW. DROP only works on OLD. Is that right?\n\nIt's close to the idea of deprecation, except that adding a new\npassword implicitly deprecates the existing one. I'm not sure about\nthat -- it could be confusing.\n\nWe could also try using a verb like \"expire\" that could be coupled with\na date, and that way all old passwords would always have some validity\nperiod. That might make it a bit easier to manage if we do need more\nthan two passwords.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 06 Oct 2023 13:29:24 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Fri, Oct 6, 2023 at 01:20:03PM -0700, Jeff Davis wrote:\n> The basic problem, as I see it, is: how do we keep users from\n> accidentally dropping the wrong password? Generated unique names or\n\nI thought we could auto-remove old password if the valid-until date is\nin the past. You would need a separate ALTER command to set its date\nin the past without that. Also, defining a new password could require\nsetting the expiration date of the old password to make future additions\neasier.\n\nFor pg_authid, I was thinking of columns:\n\n\tADD\trolpassword_old\n\tADD\trolvaliduntil_old\n\tEXISTS\trolpassword\n\tEXISTS\trolvaliduntil\n\nI did blog about the password rotation problem and suggested\ncertificates:\n\n\thttps://momjian.us/main/blogs/pgblog/2020.html#July_17_2020\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 6 Oct 2023 16:46:01 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Fri, Oct 6, 2023 at 1:46 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Fri, Oct 6, 2023 at 01:20:03PM -0700, Jeff Davis wrote:\n> > The basic problem, as I see it, is: how do we keep users from\n> > accidentally dropping the wrong password? Generated unique names or\n>\n> I thought we could auto-remove old password if the valid-until date is\n> in the past.\n\nAutoremoving expired passwords will surprise users, and not in a good\nway. Making a password, even an expired one, disappear from the system\nwill lead to astonishment. Among uses of an expired password are cases\nof it acting like a tombstone, and the case where the user may want to\nextend the validity of a password, instead of having to create a new\none and change application configuration(s) to specify the new\npassword.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Sun, 8 Oct 2023 10:24:42 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Sun, Oct 8, 2023 at 10:24:42AM -0700, Gurjeet Singh wrote:\n> On Fri, Oct 6, 2023 at 1:46 PM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Fri, Oct 6, 2023 at 01:20:03PM -0700, Jeff Davis wrote:\n> > > The basic problem, as I see it, is: how do we keep users from\n> > > accidentally dropping the wrong password? Generated unique names or\n> >\n> > I thought we could auto-remove old password if the valid-until date is\n> > in the past.\n> \n> Autoremoving expired passwords will surprise users, and not in a good\n> way. Making a password, even an expired one, disappear from the system\n> will lead to astonishment. Among uses of an expired password are cases\n> of it acting like a tombstone, and the case where the user may want to\n> extend the validity of a password, instead of having to create a new\n> one and change application configuration(s) to specify the new\n> password.\n\nI was speaking of autoremoving in cases where we are creating a new one,\nand taking the previous new one and making it the old one, if that was\nnot clear.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Sun, 8 Oct 2023 13:29:14 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Sun, Oct 8, 2023 at 10:29 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> I was speaking of autoremoving in cases where we are creating a new one,\n> and taking the previous new one and making it the old one, if that was\n> not clear.\n\nYes, I think I understood it differently. I understood it to mean that\nthis behaviour would apply to all passwords, those created by existing\ncommands, as well as to those created by new commands for rollover use\ncase. Whereas you meant this autoremove behaviour to apply only to\nthose passwords created by/for rollover related commands. I hope I've\nunderstood your proposal correctly this time around :-)\n\nI believe the passwords created by the rollover feature should\nfollow the same rules as passwords created by\nexisting CREATE/ALTER ROLE commands. If we implement the behaviour to\ndelete expired passwords, then I believe that behaviour should apply\nto all passwords, irrespective of which command/feature was used to\ncreate a password.\n\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Sun, 8 Oct 2023 10:50:15 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Sun, Oct 8, 2023 at 10:50:15AM -0700, Gurjeet Singh wrote:\n> On Sun, Oct 8, 2023 at 10:29 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > I was speaking of autoremoving in cases where we are creating a new one,\n> > and taking the previous new one and making it the old one, if that was\n> > not clear.\n> \n> Yes, I think I understood it differently. I understood it to mean that\n> this behaviour would apply to all passwords, those created by existing\n> commands, as well as to those created by new commands for rollover use\n> case. Whereas you meant this autoremove behaviour to apply only to\n> those passwords created by/for rollover related commands. I hope I've\n> understood your proposal correctly this time around :-)\n\nYes, it is only during the addition of a new password when the previous\nnew password becomes the new old password. The previous old password\nwould need to have a rolvaliduntil in the past.\n\n> I believe the passwords created by the rollover feature should\n> follow the same rules as passwords created by\n> existing CREATE/ALTER ROLE commands. If we implement the behaviour to\n> delete expired passwords, then I believe that behaviour should apply\n> to all passwords, irrespective of which command/feature was used to\n> create a password.\n\nThis would only apply when we are moving the previous new password to\nold and the old one is removed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Sun, 8 Oct 2023 14:55:19 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Fri, Oct 6, 2023 at 1:29 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Thu, 2023-10-05 at 14:28 -0700, Gurjeet Singh wrote:\n>\n> > This way there's a notion of a 'new' and 'old' passwords.\n>\n> IIUC, you are proposing that there are exactly two slots, NEW and OLD.\n> When adding a password, OLD must be unset and it moves NEW to OLD, and\n> adds the new password in NEW. DROP only works on OLD. Is that right?\n\nYes, that's what I was proposing. But thinking a bit more about it,\nthe _implicit_ shuffling of labels 'new' and 'old' doesn't feel right\nto me. The password that used to be referred to as 'new' now\nautomatically gets labeled 'old'.\n\n> It's close to the idea of deprecation, except that adding a new\n> password implicitly deprecates the existing one. I'm not sure about\n> that -- it could be confusing.\n\n+1\n\n> We could also try using a verb like \"expire\" that could be coupled with\n> a date, and that way all old passwords would always have some validity\n> period.\n\nForcing the users to pick an expiry date for a password they intend to\nroll over, when an expiry date did not exist before for that password,\nfeels like adding more burden to their password rollover decision\nmaking. The dates and rules of password rollover may be a part of a\nsystem external to their database (wiki, docs, calendar, etc.), which\nnow they will be forced to translate into a timestamp to specify in\nthe rollover commands.\n\nI believe we should fix the _names_ of the slots the 2 passwords are\nstored in, and provide commands that manipulate those slots by\nrespective names; the commands should not implicitly move the\npasswords between the slots. Additionally, we may provide functions\nthat provide observability info for these slots. 
I propose the slot\nnames FIRST and SECOND (I picked these because these keywords/tokens\nalready exist in the grammar, but not yet sure if the grammar rules\nwould allow their use; feel free to propose better names). FIRST\nrefers to the existing slot, namely rolpassword. SECOND refers to\nthe new slot we'd add, that is, a pgauthid column named\nrolsecondpassword. The existing commands, or when neither FIRST nor\nSECOND are specified, the commands apply to the existing slot, a.k.a.\nFIRST.\n\nThe user interface might look like the following:\n\n-- Create a user, as usual\nCREATE ROLE u1 PASSWORD 'p1' VALID UNTIL '2020/01/01';\n-- This automatically occupies the 'first' slot\n\n-- Add another password that the user can use for authentication.\nALTER ROLE u1 ADD SECOND PASSWORD 'p2' VALID UNTIL '2021/01/01';\n\n-- Change both the passwords' validity independently; this solves the\n-- problem with the previous '2-element stack' approach, where we\n-- could not address the password at the bottom of the stack.\nALTER ROLE u1 SECOND PASSWORD VALID UNTIL '2022/01/01';\n\nALTER ROLE u1 [ [ FIRST ] PASSWORD ] VALID UNTIL '2022/01/01';\n\n-- If, for some reason, the user wants to get rid of the latest password added.\nALTER ROLE u1 DROP SECOND PASSWORD;\n\n-- Add a new password (p3) in 'second' slot\nALTER ROLE u1 ADD SECOND PASSWORD 'p3' VALID UNTIL '2023/01/01';\n\n-- Attempting to add a password while the respective slot is occupied\n-- results in error\nALTER ROLE u1 ADD [ [ FIRST ] PASSWORD ] 'p4' VALID UNTIL '2024/01/01';\nERROR: first password already exists\n\nALTER ROLE u1 ADD SECOND PASSWORD 'p4' VALID UNTIL '2024/01/01';\nERROR: second password already exists\n\n-- Users can use this function to check whether a password slot is occupied\n=> select password_exists('u1', 'first');\npassword_exists\n-----\nt\n\n=> select password_exists('u1', 'second');\npassword_exists\n-----\nt\n\n-- Remove all passwords from the role. 
Both the 'first' and 'second'\npasswords are removed.\nALTER ROLE u1 DROP ALL PASSWORD;\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Sun, 8 Oct 2023 13:01:00 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Sun, Oct 8, 2023 at 1:01 PM Gurjeet Singh <gurjeet@singh.im> wrote:\n>\n> On Fri, Oct 6, 2023 at 1:29 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> >\n> > On Thu, 2023-10-05 at 14:28 -0700, Gurjeet Singh wrote:\n> >\n> > > This way there's a notion of a 'new' and 'old' passwords.\n> >\n> > IIUC, you are proposing that there are exactly two slots, NEW and OLD.\n> > When adding a password, OLD must be unset and it moves NEW to OLD, and\n> > adds the new password in NEW. DROP only works on OLD. Is that right?\n>\n> Yes, that's what I was proposing. But thinking a bit more about it,\n> the _implicit_ shuffling of labels 'new' and 'old' doesn't feel right\n> to me. The password that used to be referred to as 'new' now\n> automatically gets labeled 'old'.\n>\n> > It's close to the idea of deprecation, except that adding a new\n> > password implicitly deprecates the existing one. I'm not sure about\n> > that -- it could be confusing.\n>\n> +1\n\n> I believe we should fix the _names_ of the slots the 2 passwords are\n> stored in, and provide commands that manipulate those slots by\n> respective names; the commands should not implicitly move the\n> passwords between the slots. Additionally, we may provide functions\n> that provide observability info for these slots. I propose the slot\n> names FIRST and SECOND (I picked these because these keywords/tokens\n> already exist in the grammar, but not yet sure if the grammar rules\n> would allow their use; feel free to propose better names). FIRST\n> refers to the existing slot, namely rolpassword. SECOND refers to\n> the new slot we'd add, that is, a pgauthid column named\n> rolsecondpassword. The existing commands, or when neither FIRST nor\n> SECOND are specified, the commands apply to the existing slot, a.k.a.\n> FIRST.\n\nPlease see attached the patch that implements this proposal. 
The patch\nis named password_rollover_v3.diff, breaking from the name\n'multiple_passwords...', since this patch limits itself to address the\npassword-rollover feature.\n\nThe multiple_password* series of patches had removed a critical\nfunctionality, which I believe is crucial from security perspective.\nWhen a user does not exist, or has no passwords, or has passwords that\nhave expired, we must pretend to perform authentication (network\npacket exchanges) as normally as possible, so that the absence of\nuser, or lack of (or expiration of) passwords is not revealed to an\nattacker. I have restored the original behaviour in the\nCheckPWChallengeAuth() function; see commit aba99df407 [2].\n\nI looked for any existing keywords that may better fit the purpose of\nnaming the slots, better than FIRST and SECOND, but I could not find\nany. Instead of DROP to remove the passwords, I tried DELETE and the\ngrammar/bison did not complain about it; so DELETE is an option, too,\nbut I feel DROP FIRST/SECOND/ALL PASSWORD is a better companion of ADD\nFIRST/SECOND PASSWORD syntax, in the same vein as ADD/DROP COLUMN.\n\nThe doc changes are still missing, but the regression tests and the\ncomments therein should provide a good idea of the user interface of\nthis new feature. Documenting this behaviour in a succinct manner\nfeels difficult; so ideas welcome for how to inform the reader that\nnow a role is accompanied by two slots to store the passwords, and\nthat the old commands operate on the first slot, and to operate on the\nsecond password slot one must use the new syntax. I guess it would be\nbest to start the relevant section with \"To support gradual password\nrollovers, Postgres provides the ability to store 2 active passwords\nat the same time. The passwords are referred to as FIRST and SECOND\npassword. 
Each of these passwords can be changed independently, and\neach of these can have an associated expiration time, if necessary.\"\n\nSince these new commands are only available to ALTER ROLE (and not to\nCREATE ROLE), the corresponding command doc page also needs to be\nupdated.\n\nNext steps:\n- Break the patch into a series of smaller patches.\n- Add TAP tests (test the ability to actually login with these passwords)\n- Add/update documentation\n- Add more regression tests\n\nThe patch progress can be followed on the Git branch\npassword_rollover_v3 [1]. This branch uses\nmultiple_passwords_v4 as its starting point, and removes unnecessary code\n(commit 326f60225f [3])\n\nThe v1 (and tombstone of v2) patches of password_rollover never\nfinished as the consensus changed while they were in progress, but\nthey exist as sibling branches of the v3 branch.\n\n[1]: https://github.com/gurjeet/postgres/commits/password_rollover_v3\n[2]: https://github.com/gurjeet/postgres/commit/aba99df407a523357db2813f0eea0b45dbeb6006\n[3]: https://github.com/gurjeet/postgres/commit/326f60225f0e660338fc9c276c8728dc10db435b\n\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Mon, 9 Oct 2023 02:31:30 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Mon, Oct 9, 2023 at 2:31 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n>\n> Next steps:\n> - Break the patch into a series of smaller patches.\n\nPlease see attached the same v3 patch, but now split into 3 separate\npatches. Each patch in the series depends on the previous patch to\nhave been applied. I have made sure that each patch passes `make\ncheck` individually.\n\nFirst patch adds the two new columns, rolsecondpassword and\nrolsecondvaliduntil to the pg_authid shared catalog. This patch also\nupdates the corresponding pg_authid.dat file to set these values to\nnull for the rows populated during bootstrap. Finally, it adds code to\nCreateRole() to set these columns' values to NULL for a role being\ncreated.\n\nThe second patch updates the password extraction, verification\nfunctions as well as authentication functions to honor the second\npassword, if any. There is more detailed description in the commit\nmessage/body of the patch.\n\nThe third patch adds the SQL support to the ALTER ROLE command which\nallows manipulation of both, the rolpassword and rolsecondpassword,\ncolumns and their respective expiration timestamps,\nrol[second]validuntil. This patch also adds regression tests for the\nnew SQL command, demonstrating the use of the new grammar.\n\nv3-0001-Add-new-columns-to-pg_authid.patch\nv3-0002-Update-password-verification-infrastructure-to-ha.patch\nv3-0003-Added-SQL-support-for-ALTER-ROLE-to-manage-two-pa.patch\n\n\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Mon, 9 Oct 2023 12:53:39 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "> On Mon, Oct 9, 2023 at 2:31 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n> >\n> > Next steps:\n> > - Break the patch into a series of smaller patches.\n> > - Add TAP tests (test the ability to actually login with these passwords)\n> > - Add/update documentation\n> > - Add more regression tests\n\nPlease see attached the v4 of the patchset that introduces the notion\nof named password slots, namely 'first' and 'second' passwords, and\nallows users to address each of these passwords separately for the\npurposes of adding, dropping, or assigning expiration times.\n\nApart from the changes described by each patch's commit title, one\nsignificant change since v3 is that now (included in v4-0002...patch)\nit is not allowed for a role to have a mix of types of passwords.\nWhen adding a password, the patch ensures that the password being\nadded uses the same hashing algorithm (md5 or scram-sha-256) as the\nexisting password, if any. Having all passwords of the same type\nhelps the server pick the corresponding authentication method during\nconnection attempt.\n\nThe v3 patch also had a few bugs that were exposed by cfbot's\nautomatic run. 
All those bugs have now been fixed, and the latest run\non the v4 branch [1] on my private Git repo shows a clean run [2].\n\nThe list of patches and their commit titles are as follows:\n\n> v4-0001-...patch Add new columns to pg_authid\n> v4-0002-...patch Update password verification infrastructure to handle two passwords\n> v4-0003-...patch Added SQL support for ALTER ROLE to manage two passwords\n> v4-0004-...patch Updated pg_dumpall to support exporting a role's second password\n> v4-0005-...patch Update system views pg_roles and pg_shadow\n> v4-0006-...patch Updated pg_authid catalog documentation\n> v4-0007-...patch Updated psql's describe-roles meta-command\n> v4-0008-...patch Added documentation for ALTER ROLE command\n> v4-0009-...patch Added TAP tests to prove that a role can use two passwords to login\n> v4-0010-...patch pgindent run\n> v4-0011-...patch Run pgperltidy on files changed by this patchset\n\nRunning pgperltidy updated many perl files unrelated to this patch, so\nin the last patch I chose to include only the one perl file that is\naffected by this patchset.\n\n[1]: password_rollover_v4 (910f81be54)\nhttps://github.com/gurjeet/postgres/commits/password_rollover_v4\n\n[2]: https://cirrus-ci.com/build/4675613999497216\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Tue, 10 Oct 2023 04:03:15 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "Greetings,\n\n* Nathan Bossart (nathandbossart@gmail.com) wrote:\n> On Wed, Oct 04, 2023 at 10:41:15PM -0700, Gurjeet Singh wrote:\n> > The patches posted in this thread so far attempt to add the ability to\n> > allow the user to have an arbitrary number of passwords. I believe\n> > that allowing arbitrary number of passwords is not only unnecessary,\n> > but the need to name passwords, the need to store them in a shared\n> > catalog, etc. may actually create problems in the field. The\n> > users/admins will have to choose names for passwords, which they\n> > didn't have to previously. The need to name them may also lead to\n> > users storing password-hints in the password names (e.g. 'mom''s\n> > birthday', 'ex''s phone number', 'third password'), rendering the\n> > passwords weak.\n> > \n> > Moreover, allowing an arbitrarily many number of passwords will\n> > require us to provide additional infrastructure to solve problems like\n> > observability (which passwords are currently in use, and which ones\n> > have been effectively forgotten by applications), or create a nuisance\n> > for admins that can create more problems than it solves.\n> \n> IMHO neither of these problems seems insurmountable. Besides advising\n> against using hints as names, we could also automatically generate safe\n> names, or even disallow user-provided names entirely. 
And adding\n> observability for passwords seems worthwhile anyway.\n\nAgreed, particularly on adding observability for password use.\nRegardless of what we do, I feel pretty strongly that we need that.\nThat said, having this handled in a separate catalog feels like just a\ngenerally better idea than shoving it all into pg_authid as we extend\nthings to include information like \"last used date\", \"last used source\nIP\", etc.\n\nProviding this observability purely through logging strikes me as a\nterrible idea.\n\nI don't find the concern about names as 'hints' to be at all compelling.\nHaving a way to avoid having names may have some value, but only if we\ncan come up with something reasonable.\n\n> > So I propose that the feature should allow no more than 2 passwords\n> > for a role, each with their own validity periods. This eliminates the\n> > need to name passwords, because at any given time there are no more\n> > than 2 passwords; current one, and old one. This also eliminates the\n> > need for a shared catalog to hold passwords, because with the limit of\n> > 2 imposed, we can store the old password and its validity period in\n> > additional columns in the pg_authid table.\n> \n> Another approach could be to allow an arbitrary number of passwords but to\n> also allow administrators to limit how many passwords can be associated to\n> each role. That way, we needn't restrict this feature to 2 passwords for\n> everyone. Perhaps 2 should be the default, but in any case, IMO we\n> shouldn't design to only support 2.\n\nAgreed that it's a bad idea to design to support 2 and only 2. If\nnothing else, there's the very simple case that the user needs to do\nanother password rotation ... 
and they look and discover that the old\npassword is still being used and that if they took it away, things would\nbreak, but they need time to run down which system is still using it\nwhile still needing to perform the migration for the other systems that\nare correctly being updated- boom, need 3 for that case. There's other\nuse-cases that could be interesting though- presumably we'd log which\npassword is used to authenticate and then users could have a fleet of\nweb servers which each have their own password but log into the same PG\nuser and they could happily rotate the passwords independently for all\nof those systems.\n\nI don't propose this as some use-case just for the purpose of argument-\nnot sharing passwords across a bunch of systems is absolutely a good\nstance when it comes to security, and due to the way permissions and\nroles work in PG, being able to have both distinct passwords with\nexplicitly logged indication of which system used what password to log\nin while not having to deal with possibly permission differences due to\nusing actually independent roles is valuable. That is, each server\nusing a separate role to log in could lead to some servers having access\nto something or other while others don't pretty easily- if they're all\nlogging in with the same role and just a different password, that's not\ngoing to happen.\n\n* Jeff Davis (pgsql@j-davis.com) wrote:\n> Using a number seems weird to me because either:\n> \n> (a) if the number is always increasing you'd have to look to find the\n> number of the new slot to add and the old slot to remove; or\n> (b) if switched between two numbers (say 0 and 1), it would be error\n> prone because you'd have to remember which is the old one that can be\n> safely replaced\n\nYeah, a number doesn't strike me as very good either.\n\n> Maybe a password is best described by its validity period rather than a\n> name? 
But what about passwords that don't expire?\n\nThe validity idea is interesting but falls down when you want multiple\npasswords that have the same validity period (or which all don't have\nany expiration).\n\nGiving users the option of not having to specify a name and letting the\nsystem come up with one (similar to what we do for indexes and such)\ncould work out pretty decently, imv. I'd have that be optional though-\nif the user wants to specify a name, then they should be allowed to do\nso.\n\n> > And adding\n> > observability for passwords seems worthwhile anyway.\n> \n> That might be useful just to know whether a user's password is even\n> being used -- in case the admin makes a mistake and some other auth\n> method is being used. Also it would help to know when a password can\n> safely be removed.\n\nYup, +100 on this.\n\n> Another idea: what if we introduce the notion of deprecating a\n> password? To remove a password, it would have to be deprecated first,\n> and maybe that would cause a LOG or WARNING message to be emitted when\n> used, or show up differently in some system view. And perhaps you could\n> have at most one non-deprecated password. That would give a workflow\n> something like (I'm not proposing these exact keywords):\n\nI don't see logs or warnings as being at all useful for this kind of\nthing. Making it available through a view would work- but I don't think\nwe need to go into the deprecation language ourselves as part of the\ngrammar and instead should let users write their own queries against the\nviews we provide to see what passwords are being used and what aren't.\n\n* Nathan Bossart (nathandbossart@gmail.com) wrote:\n> On Thu, Oct 05, 2023 at 01:09:36PM -0700, Jeff Davis wrote:\n> > On Thu, 2023-10-05 at 14:04 -0500, Nathan Bossart wrote:\n> >> That way, we needn't restrict this feature to 2 passwords for\n> >> everyone. 
Perhaps 2 should be the default, but in any case, IMO we\n> >> shouldn't design to only support 2.\n> > \n> > Are there use cases for lots of passwords, or is it just a matter of\n> > not introducing an artificial limitation?\n> \n> I guess it's more of the latter. Perhaps one potential use case would be\n> short-lived credentials that are created on demand. Such a password might\n> only be valid for something like 15 minutes, and many users might have the\n> ability to request a password for the database role. I don't know whether\n> there is a ton of demand for such a use case, and it might already be\n> solvable by just creating separate roles. In any case, if there's general\n> agreement that we only want to target the rotation use case, that's fine by\n> me.\n\nAgreed, this seems like another good example of why we shouldn't design\nthis for just 2.\n\n* Jeff Davis (pgsql@j-davis.com) wrote:\n> The basic problem, as I see it, is: how do we keep users from\n> accidentally dropping the wrong password? Generated unique names or\n> numbers don't solve that problem. Auto-incrementing or a created-at\n> timestamp solves it in the sense that you can at least look at a system\n> view and see if there's a newer one, but it's a little awkward. A\n> validity period is a good fit if all passwords have a validity period\n> and we don't change it, but gets awkward otherwise.\n\nProviding admins a way of seeing which passwords have been recently used\nseems like a clear way to minimize, at least, the risk of the wrong\npassword being dropped. Allowing them to control the name feels like\nit's another good way to minimize the risk since it's something they can\ndefine and they can include in the name whatever info they need to\nfigure out if it's the one that's no longer being used, or not. 
Forcing\nthe use of numbers or of validation periods seems like it'd make it\nharder on users to avoid this risk, not easier.\n\nAllowing per-password expiration is another way to address the issue of\nthe wrong password being removed by essentially taking the password away\nand seeing what breaks while having an easy way to bring it back if\nsomething important stops working.\n\n> I'm also worried about two kinds of clutter:\n> \n> * old passwords not being garbage-collected\n\nI'm not terribly concerned with this if the password's validity has\npassed and therefore it can't be used any longer. I could agree with\nthe concern about it being an issue if the passwords all stay valid\nforever. I'll point out that we arguably have this problem with roles\nalready today and it doesn't seem to be a huge issue- PG has no idea how\nlong that user is actually valid for, the admin needs to have something\nexternal to PG to deal with this.\n\n> * the identifier of the current password always changing (perhaps fine\n> if it'a a \"created at\" ID?)\n\nI'm not following what the issue is you're getting at here.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 17 Oct 2023 16:20:58 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Tue, 2023-10-17 at 16:20 -0400, Stephen Frost wrote:\n> Agreed that it's a bad idea to design to support 2 and only 2.\n\nI don't disagree, but it's difficult to come up with syntax that:\n\n (a) supports N passwords\n (b) makes the ordinary cases simple and documentable\n (c) helps users avoid mistakes (at least in the simple cases)\n (d) makes sense passwords with and without validity period\n (e) handles the challenging cases\n\nOne challenging case is that we cannot allow the mixing of password\nprotocols (e.g. MD5 & SCRAM), because the authentication exchange only\ngets one chance at success. If someone ends up with 7 MD5 passwords,\nand they'd like to migrate to SCRAM, then we can't allow them to\nmigrate one password at a time (because then the other passwords would\nbreak). I'd like to see what the SQL for doing this should look like.\n\n> If\n> nothing else, there's the very simple case that the user needs to do\n> another password rotation ... and they look and discover that the old\n> password is still being used and that if they took it away, things\n> would\n> break, but they need time to run down which system is still using it\n> while still needing to perform the migration for the other systems\n> that\n> are correctly being updated- boom, need 3 for that case.\n\nThat sounds like a reasonable use case. I don't know if we should make\nit a requirement, but if we come up with something reasonable that\nsupports this case I'm fine with it. Ideally, it would still be easy to\nsee when you are making a mistake (e.g. 
forgetting to ever remove\npasswords).\n\n> There's other\n> use-cases that could be interesting though- presumably we'd log which\n> password is used to authenticate and then users could have a fleet of\n> web servers which each have their own password but log into the same\n> PG\n> user and they could happily rotate the passwords independently for\n> all\n> of those systems.\n> \n> if they're all\n> logging in with the same role and just a different password, that's\n> not\n> going to happen.\n\nI'm not sure this is a great idea. Can you point to some precedent\nhere?\n\n\n> Giving users the option of not having to specify a name and letting\n> the\n> system come up with one (similar to what we do for indexes and such)\n> could work out pretty decently, imv. I'd have that be optional\n> though-\n> if the user wants to specify a name, then they should be allowed to\n> do\n> so.\n\nCan you describe how some basic use cases should work with example SQL?\n\n> \n> > * the identifier of the current password always changing (perhaps\n> > fine\n> > if it'a a \"created at\" ID?)\n> \n> I'm not following what the issue is you're getting at here.\n\nI just meant that when rotating passwords, you have to keep coming up\nwith new names, so the \"current\" or \"primary\" one would not have a\nconsistent name.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 17 Oct 2023 16:40:45 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "Greetings,\n\n* Jeff Davis (pgsql@j-davis.com) wrote:\n> On Tue, 2023-10-17 at 16:20 -0400, Stephen Frost wrote:\n> > Agreed that it's a bad idea to design to support 2 and only 2.\n> \n> I don't disagree, but it's difficult to come up with syntax that:\n> \n> (a) supports N passwords\n> (b) makes the ordinary cases simple and documentable\n> (c) helps users avoid mistakes (at least in the simple cases)\n> (d) makes sense passwords with and without validity period\n> (e) handles the challenging cases\n\nUndoubtably biased ... but I don't agree that this is so difficult.\nWhat points have been raised as a downside of the originally proposed\napproach, specifically?\n\nReading back through the thread, from a user perspective, the primary\none seems to be that passwords are expected to be named. I'm surprised\nthis is being brought up as such a serious concern. Lots and lots and\nlots of things in the system require naming, after all, and the idea\nthat this is somehow harder or more of an issue is quite odd to me.\n\n> One challenging case is that we cannot allow the mixing of password\n> protocols (e.g. MD5 & SCRAM), because the authentication exchange only\n> gets one chance at success. If someone ends up with 7 MD5 passwords,\n> and they'd like to migrate to SCRAM, then we can't allow them to\n> migrate one password at a time (because then the other passwords would\n> break). I'd like to see what the SQL for doing this should look like.\n\nI've got absolutely no interest in supporting md5- it's high time to rip\nthat out and be happy that it's gone. We nearly did it last year and\nI'm really hoping we accomplish that this year.\n\nI'm open to the idea that we may need to support some new SCRAM version\nor an alternative mechanism in the future, but it's long past time to\nspend any time worrying about md5. 
As for how difficult it is to deal\nwith supporting an alternative in the future- that's going to depend a\ngreat deal on what that alternative is and I don't know that we can\nreally code to handle that as part of this effort in a sensible way, and\ntrying to code for \"anything\" is going to likely make this much more\ncomplicated, and probably rife with security issues too.\n\n> > If\n> > nothing else, there's the very simple case that the user needs to do\n> > another password rotation ... and they look and discover that the old\n> > password is still being used and that if they took it away, things\n> > would\n> > break, but they need time to run down which system is still using it\n> > while still needing to perform the migration for the other systems\n> > that\n> > are correctly being updated- boom, need 3 for that case.\n> \n> That sounds like a reasonable use case. I don't know if we should make\n> it a requirement, but if we come up with something reasonable that\n> supports this case I'm fine with it. Ideally, it would still be easy to\n> see when you are making a mistake (e.g. forgetting to ever remove\n> passwords).\n\nWe have monitoring for many, many parts of the system and this would be\na good thing to monitor also- not just at a per-password level but also\nat an overall role/user level, as you have a similar issue there and we\ndon't properly provide users with any way to check reasonably \"hey, when\nwas the last time this user logged in?\". 
No, trawling through ancient\nlogs, if you even have them, isn't a proper solution to this problem.\n\n> > There's other\n> > use-cases that could be interesting though- presumably we'd log which\n> > password is used to authenticate and then users could have a fleet of\n> > web servers which each have their own password but log into the same\n> > PG\n> > user and they could happily rotate the passwords independently for\n> > all\n> > of those systems.\n> > \n> > if they're all\n> > logging in with the same role and just a different password, that's\n> > not\n> > going to happen.\n> \n> I'm not sure this is a great idea. Can you point to some precedent\n> here?\n\nIt's already the case that lots and lots and lots of systems out there\nlog into PG using the same username/password.  With this, we're at least\noffering them the ability to vary the password and keep the user the\nsame.  We've even seen this be asked for in other ways- the ability to\nuse distinct Kerberos or LDAP identities to log into the same user in\nthe database, see pg_ident.conf and various suggestions for how to bring\nthat to LDAP too.  Other systems also support the ability to have a\ngroup of users in LDAP, or such, be allowed to log into a specific\ndatabase user.  One big reason for this is that then you know everyone\nlogging into that account has exactly the same access to things- because\nthat's the lowest level at which access can be granted.  Having a way to\nsupport this similar capability but for passwords is certainly useful.\n\n> > Giving users the option of not having to specify a name and letting\n> > the\n> > system come up with one (similar to what we do for indexes and such)\n> > could work out pretty decently, imv. 
I'd have that be optional\n> > though-\n> > if the user wants to specify a name, then they should be allowed to\n> > do\n> > so.\n> \n> Can you describe how some basic use cases should work with example SQL?\n\nAs mentioned, we have this general idea already; a simplified CREATE\nINDEX syntax can be used as an example:\n\nCREATE INDEX [ [ IF NOT EXISTS ] name ] ON table_name ...\n\nand so we could have:\n\nCREATE PASS [ [ IF NOT EXISTS ] name ] FOR role ...\n\nGiving the user the option of providing a name, but just generating one\nif the user declines to include a name.  Naturally, users would have to\nhave some way to look up the names but that doesn't seem difficult to\nprovide such as through some \\du command, along with appropriate views,\njust like we have for indexes.\n\nI do generally feel like the original patch was lacking when it came to\nsyntax and password management functions, but I don't think the answer\nto that is to just throw away the whole concept of multiple passwords in\nfavor of only supporting a very limited number of them.  I'd think we'd\nwork towards improving on the syntax to support what we think users will\nneed to make all of this work smoothly.\n\n> > > * the identifier of the current password always changing (perhaps\n> > > fine\n> > > if it'a a \"created at\" ID?)\n> > \n> > I'm not following what the issue is you're getting at here.\n> \n> I just meant that when rotating passwords, you have to keep coming up\n> with new names, so the \"current\" or \"primary\" one would not have a\n> consistent name.\n\nDo we need a current or primary?  How is that defined, exactly?  If we\nwant it, we could surely create it and make it work, but we'd need to\nhave a good understanding of just what it is and why it's the 'current' or\n'primary' and I'm not sure that we've actually got a good definition for\nthat and I'm not convinced yet that we should.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 17 Oct 2023 22:52:05 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Tue, 2023-10-17 at 22:52 -0400, Stephen Frost wrote:\n\n> Reading back through the thread, from a user perspective, the primary\n> one seems to be that passwords are expected to be named. I'm\n> surprised\n> this is being brought up as such a serious concern. Lots and lots\n> and\n> lots of things in the system require naming, after all, and the idea\n> that this is somehow harder or more of an issue is quite odd to me.\n\nIn the simplest intended use case, the names will be arbitrary and\ntemporary. It's easy for me to imagine someone wondering \"was I\nsupposed to delete 'bear' or 'lion'?\". For indexes and other objects,\nthere's a lot more to go on, easily visible with \\d.\n\nNow, obviously that is not the end of the world, and the user could\nprevent that problem a number of different ways. And we can do things\nlike improve the monitoring of password use, and store the password\ncreation time, to help users if they are confused. So I don't raise\nconcerns about naming as an objection to the feature overall, but\nrather a concern that we aren't getting it quite right.\n\nMaybe a name should be entirely optional, more like a comment, and the\npasswords can be referenced by the salt? The salt needs to be unique\nfor a given user anyway.\n\n(Aside: is the uniqueness of the salt enforced in the current patch?)\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 18 Oct 2023 09:40:02 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "Greetings,\n\n* Jeff Davis (pgsql@j-davis.com) wrote:\n> On Tue, 2023-10-17 at 22:52 -0400, Stephen Frost wrote:\n> \n> > Reading back through the thread, from a user perspective, the primary\n> > one seems to be that passwords are expected to be named. I'm\n> > surprised\n> > this is being brought up as such a serious concern. Lots and lots\n> > and\n> > lots of things in the system require naming, after all, and the idea\n> > that this is somehow harder or more of an issue is quite odd to me.\n> \n> In the simplest intended use case, the names will be arbitrary and\n> temporary. It's easy for me to imagine someone wondering \"was I\n> supposed to delete 'bear' or 'lion'?\". For indexes and other objects,\n> there's a lot more to go on, easily visible with \\d.\n\nSure, agreed.\n\n> Now, obviously that is not the end of the world, and the user could\n> prevent that problem a number of different ways. And we can do things\n> like improve the monitoring of password use, and store the password\n> creation time, to help users if they are confused. So I don't raise\n> concerns about naming as an objection to the feature overall, but\n> rather a concern that we aren't getting it quite right.\n\nRight, we need more observability, agreed, but that's not strictly\nnecessary of this patch and could certainly be added independently. Is\nthere really a need to make this observability a requirement of this\nparticular change?\n\n> Maybe a name should be entirely optional, more like a comment, and the\n> passwords can be referenced by the salt? The salt needs to be unique\n> for a given user anyway.\n\nI proposed an approach in the email you replied to explicitly suggesting\na way we could make the name be optional, so, sure, I'm generally on\nboard with that idea. 
Note that it'd be optional for the user to\nprovide and then we'd simply generate one for them and then that name is\nwhat would be used to refer to that password later.\n\n> (Aside: is the uniqueness of the salt enforced in the current patch?)\n\nErr, the salt has to be *identical* for each password of a given user,\nnot unique, so I'm a bit confused here.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 18 Oct 2023 14:48:22 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Wed, 2023-10-18 at 14:48 -0400, Stephen Frost wrote:\n> Right, we need more observability, agreed, but that's not strictly\n> necessary of this patch and could certainly be added independently. \n> Is\n> there really a need to make this observability a requirement of this\n> particular change?\n\nI won't draw a line in the sand, but it feels like something should be\nthere to help the user keep track of which password they might want to\nkeep. At least a \"created on\" date or something.\n\n> > (Aside: is the uniqueness of the salt enforced in the current\n> > patch?)\n> \n> Err, the salt has to be *identical* for each password of a given\n> user,\n> not unique, so I'm a bit confused here.\n\nSorry, my mistake.\n\nIf the client needs to use the same salt as existing passwords, can you\nstill use PQencryptPasswordConn() on the client to avoid sending the\nplaintext password to the server?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 19 Oct 2023 19:22:07 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "Greetings,\n\n* Jeff Davis (pgsql@j-davis.com) wrote:\n> On Wed, 2023-10-18 at 14:48 -0400, Stephen Frost wrote:\n> > Right, we need more observability, agreed, but that's not strictly\n> > necessary of this patch and could certainly be added independently. \n> > Is\n> > there really a need to make this observability a requirement of this\n> > particular change?\n> \n> I won't draw a line in the sand, but it feels like something should be\n> there to help the user keep track of which password they might want to\n> keep. At least a \"created on\" date or something.\n\nSure, no objection to adding that and seems like it should be fairly\neasy ... but then again, I tend to feel that we should do that for all\nof the objects in the system and we've got some strong feelings against\ndoing that from others. Perhaps this case is different to them, in\nwhich case, great, but if it's not, it'd be unfortunate for this feature\nto get bogged down due to that.\n\n> > > (Aside: is the uniqueness of the salt enforced in the current\n> > > patch?)\n> > \n> > Err, the salt has to be *identical* for each password of a given\n> > user,\n> > not unique, so I'm a bit confused here.\n> \n> Sorry, my mistake.\n\nSure, no worries.\n\n> If the client needs to use the same salt as existing passwords, can you\n> still use PQencryptPasswordConn() on the client to avoid sending the\n> plaintext password to the server?\n\nShort answer, yes ... but seems that wasn't actually done yet. Requires\na bit of additional work, since the client needs to get the existing\nsalt (note that as part of the SCRAM typical exchange, the client is\nprovided with the salt, so this isn't exposing anything new) to use to\nconstruct what is then sent to the server to store. We'll also need to\ndecide how to handle the case if the client tries to send a password\nthat doesn't have the same salt as the existing ones (regardless of how\nmany passwords we end up supporting). 
Perhaps we require the user,\nthrough the grammar, to make clear if they want to add a password, and\nthen error if they don't provide a matching salt, or if they want to\nremove existing passwords and replace with the new one.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 20 Oct 2023 08:35:04 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Tue, 10 Oct 2023 at 16:37, Gurjeet Singh <gurjeet@singh.im> wrote:\n>\n> > On Mon, Oct 9, 2023 at 2:31 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n> > >\n> > > Next steps:\n> > > - Break the patch into a series of smaller patches.\n> > > - Add TAP tests (test the ability to actually login with these passwords)\n> > > - Add/update documentation\n> > > - Add more regression tests\n>\n> Please see attached the v4 of the patchset that introduces the notion\n> of named passwords slots, namely 'first' and 'second' passwords, and\n> allows users to address each of these passwords separately for the\n> purposes of adding, dropping, or assigning expiration times.\n>\n> Apart from the changes described by each patch's commit title, one\n> significant change since v3 is that now (included in v4-0002...patch)\n> it is not allowed for a role to have a mix of a types of passwords.\n> When adding a password, the patch ensures that the password being\n> added uses the same hashing algorithm (md5 or scram-sha-256) as the\n> existing password, if any. Having all passwords of the same type\n> helps the server pick the corresponding authentication method during\n> connection attempt.\n>\n> The v3 patch also had a few bugs that were exposed by cfbot's\n> automatic run. 
All those bugs have now been fixed, and the latest run\n> on the v4 branch [1] on my private Git repo shows a clean run [1].\n>\n> The list of patches, and their commit titles are as follows:\n>\n> > v4-0001-...patch Add new columns to pg_authid\n> > v4-0002-...patch Update password verification infrastructure to handle two passwords\n> > v4-0003-...patch Added SQL support for ALTER ROLE to manage two passwords\n> > v4-0004-...patch Updated pg_dumpall to support exporting a role's second password\n> > v4-0005-...patch Update system views pg_roles and pg_shadow\n> > v4-0006-...patch Updated pg_authid catalog documentation\n> > v4-0007-...patch Updated psql's describe-roles meta-command\n> > v4-0008-...patch Added documentation for ALTER ROLE command\n> > v4-0009-...patch Added TAP tests to prove that a role can use two passwords to login\n> > v4-0010-...patch pgindent run\n> > v4-0011-...patch Run pgperltidy on files changed by this patchset\n>\n> Running pgperltidy updated many perl files unrelated to this patch, so\n> in the last patch I chose to include only the one perl file that is\n> affected by this patchset.\n\nCFBot shows that the patch does not apply anymore as in [1]:\n=== Applying patches on top of PostgreSQL commit ID\n4d969b2f85e1fd00e860366f101fd3e3160aab41 ===\n=== applying patch\n./v4-0002-Update-password-verification-infrastructure-to-ha.patch\n...\npatching file src/backend/libpq/auth.c\nHunk #4 FAILED at 828.\nHunk #5 succeeded at 886 (offset -2 lines).\nHunk #6 succeeded at 907 (offset -2 lines).\n1 out of 6 hunks FAILED -- saving rejects to file src/backend/libpq/auth.c.rej\n\nPlease post an updated version for the same.\n\n[1] - http://cfbot.cputube.org/patch_46_4432.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 27 Jan 2024 07:18:59 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "On Sat, 27 Jan 2024 at 07:18, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, 10 Oct 2023 at 16:37, Gurjeet Singh <gurjeet@singh.im> wrote:\n> >\n> > > On Mon, Oct 9, 2023 at 2:31 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n> > > >\n> > > > Next steps:\n> > > > - Break the patch into a series of smaller patches.\n> > > > - Add TAP tests (test the ability to actually login with these passwords)\n> > > > - Add/update documentation\n> > > > - Add more regression tests\n> >\n> > Please see attached the v4 of the patchset that introduces the notion\n> > of named passwords slots, namely 'first' and 'second' passwords, and\n> > allows users to address each of these passwords separately for the\n> > purposes of adding, dropping, or assigning expiration times.\n> >\n> > Apart from the changes described by each patch's commit title, one\n> > significant change since v3 is that now (included in v4-0002...patch)\n> > it is not allowed for a role to have a mix of a types of passwords.\n> > When adding a password, the patch ensures that the password being\n> > added uses the same hashing algorithm (md5 or scram-sha-256) as the\n> > existing password, if any. Having all passwords of the same type\n> > helps the server pick the corresponding authentication method during\n> > connection attempt.\n> >\n> > The v3 patch also had a few bugs that were exposed by cfbot's\n> > automatic run. 
All those bugs have now been fixed, and the latest run\n> > on the v4 branch [1] on my private Git repo shows a clean run [1].\n> >\n> > The list of patches, and their commit titles are as follows:\n> >\n> > > v4-0001-...patch Add new columns to pg_authid\n> > > v4-0002-...patch Update password verification infrastructure to handle two passwords\n> > > v4-0003-...patch Added SQL support for ALTER ROLE to manage two passwords\n> > > v4-0004-...patch Updated pg_dumpall to support exporting a role's second password\n> > > v4-0005-...patch Update system views pg_roles and pg_shadow\n> > > v4-0006-...patch Updated pg_authid catalog documentation\n> > > v4-0007-...patch Updated psql's describe-roles meta-command\n> > > v4-0008-...patch Added documentation for ALTER ROLE command\n> > > v4-0009-...patch Added TAP tests to prove that a role can use two passwords to login\n> > > v4-0010-...patch pgindent run\n> > > v4-0011-...patch Run pgperltidy on files changed by this patchset\n> >\n> > Running pgperltidy updated many perl files unrelated to this patch, so\n> > in the last patch I chose to include only the one perl file that is\n> > affected by this patchset.\n>\n> CFBot shows that the patch does not apply anymore as in [1]:\n> === Applying patches on top of PostgreSQL commit ID\n> 4d969b2f85e1fd00e860366f101fd3e3160aab41 ===\n> === applying patch\n> ./v4-0002-Update-password-verification-infrastructure-to-ha.patch\n> ...\n> patching file src/backend/libpq/auth.c\n> Hunk #4 FAILED at 828.\n> Hunk #5 succeeded at 886 (offset -2 lines).\n> Hunk #6 succeeded at 907 (offset -2 lines).\n> 1 out of 6 hunks FAILED -- saving rejects to file src/backend/libpq/auth.c.rej\n>\n> Please post an updated version for the same.\n\nThe patch which you submitted has been awaiting your attention for\nquite some time now. As such, we have moved it to \"Returned with\nFeedback\" and removed it from the reviewing queue. Depending on\ntiming, this may be reversible. 
Kindly address the feedback you have\nreceived, and resubmit the patch to the next CommitFest.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 1 Feb 2024 16:34:56 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
},
{
"msg_contents": "Hi!\n\nI'm interested in this feature presence in the PostgreSQL core. Will\ntry to provide valuable review/comments/suggestions and other help.\n\nOn Tue, 10 Oct 2023 at 16:17, Gurjeet Singh <gurjeet@singh.im> wrote:\n>\n> > On Mon, Oct 9, 2023 at 2:31 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n> > >\n> > > Next steps:\n> > > - Break the patch into a series of smaller patches.\n> > > - Add TAP tests (test the ability to actually login with these passwords)\n> > > - Add/update documentation\n> > > - Add more regression tests\n>\n> Please see attached the v4 of the patchset that introduces the notion\n> of named passwords slots, namely 'first' and 'second' passwords, and\n> allows users to address each of these passwords separately for the\n> purposes of adding, dropping, or assigning expiration times.\n>\n> Apart from the changes described by each patch's commit title, one\n> significant change since v3 is that now (included in v4-0002...patch)\n> it is not allowed for a role to have a mix of a types of passwords.\n> When adding a password, the patch ensures that the password being\n> added uses the same hashing algorithm (md5 or scram-sha-256) as the\n> existing password, if any. Having all passwords of the same type\n> helps the server pick the corresponding authentication method during\n> connection attempt.\n>\n> The v3 patch also had a few bugs that were exposed by cfbot's\n> automatic run. 
All those bugs have now been fixed, and the latest run\n> on the v4 branch [1] on my private Git repo shows a clean run [1].\n>\n> The list of patches, and their commit titles are as follows:\n>\n> > v4-0001-...patch Add new columns to pg_authid\n> > v4-0002-...patch Update password verification infrastructure to handle two passwords\n> > v4-0003-...patch Added SQL support for ALTER ROLE to manage two passwords\n> > v4-0004-...patch Updated pg_dumpall to support exporting a role's second password\n> > v4-0005-...patch Update system views pg_roles and pg_shadow\n> > v4-0006-...patch Updated pg_authid catalog documentation\n> > v4-0007-...patch Updated psql's describe-roles meta-command\n> > v4-0008-...patch Added documentation for ALTER ROLE command\n> > v4-0009-...patch Added TAP tests to prove that a role can use two passwords to login\n> > v4-0010-...patch pgindent run\n> > v4-0011-...patch Run pgperltidy on files changed by this patchset\n>\n> Running pgperltidy updated many perl files unrelated to this patch, so\n> in the last patch I chose to include only the one perl file that is\n> affected by this patchset.\n>\n> [1]: password_rollover_v4 (910f81be54)\n> https://github.com/gurjeet/postgres/commits/password_rollover_v4\n>\n> [2]: https://cirrus-ci.com/build/4675613999497216\n>\n> Best regards,\n> Gurjeet\n> http://Gurje.et\n\n\nLatest attachment does not apply to HEAD anymore. I have rebased\nthem. While rebasing, a couple of minor changes were done:\n\n1) Little correction in the `plain_crypt_verify` comment. 
IMO this\nsounds a little better and comprehensible, is it?\n\n> - * 'shadow_pass' is the user's correct password hash, as stored in\n> - * pg_authid's rolpassword or rolsecondpassword.\n> + * 'shadow_pass' is one of the user's correct password hashes, as stored in\n> + * pg_authid's.\n\n2) in v4-0004:\n\n> /* note: rolconfig is dumped later */\n> - if (server_version >= 90600)\n> + if (server_version >= 170000)\n> printfPQExpBuffer(buf,\n> \"SELECT oid, rolname, rolsuper, rolinherit, \"\n> \"rolcreaterole, rolcreatedb, \"\n> - \"rolcanlogin, rolconnlimit, rolpassword, \"\n> - \"rolvaliduntil, rolreplication, rolbypassrls, \"\n> + \"rolcanlogin, rolconnlimit, \"\n> + \"rolpassword, rolvaliduntil, \"\n> + \"rolsecondpassword, rolsecondvaliduntil, \"\n> + \"rolreplication, rolbypassrls, \"\n> + \"pg_catalog.shobj_description(oid, '%s') as rolcomment, \"\n> + \"rolname = current_user AS is_current_user \"\n> + \"FROM %s \"\n> + \"WHERE rolname !~ '^pg_' \"\n> + \"ORDER BY 2\", role_catalog, role_catalog);\n> + else if (server_version >= 90600)\n> + printfPQExpBuffer(buf,\n> + \"SELECT oid, rolname, rolsuper, rolinherit, \"\n> + \"rolcreaterole, rolcreatedb, \"\n> + \"rolcanlogin, rolconnlimit, \"\n> + \"rolpassword, rolvaliduntil, \"\n> + \"rolsecondpassword, rolsecodnvaliduntil, \"\n> + \"null as rolsecondpassword, null as rolsecondvaliduntil, \"\n> + \"rolreplication, rolbypassrls, \"\n\nIs it a bug in server_version < 17000 && server_version >= 90600 case?\nwe are trying to get \"rolsecondpassword, rolsecodnvaliduntil, \" in\nthis case? I have removed this, let me know if there is my\nmisunderstanding.\nAlso, if stmt changed to ` if (server_version >= 180000)` since pg17\nfeature freeze.\n\nv4-0005 - v4-000 Applied cleanly, didn't touch them. But, I haven't\nreviewed them yet either.\n\nv4-0010-pgindent-run.patch - is it actually needed?\n\nOverall comments:\n\n1) AFAIU we are forcing all passwords to have/interact with the same\nsalt. 
We really have no choice here, because in the case of SCRAM auth the\nsalt is used client-side to authenticate the server. I don't feel this\nis the best approach to restrict the auth process this much, though I\ndidn't come up with strong objections to it. Anyway, what if we give\nthe server a hint, which password to use in the startup message (in\ncase users have more than one password)? This way we still can use\ndifferent salts.\n\n2) I'm also not a big fan of max-2-password restriction. There are\nobjections in the thread that having multiple passwords and named\npasswords leads to problems on the server-side, but I think that at\nleast named passwords are a useful feature for those who use external\nVault systems for password management. In this case, you can set the\nname of your password to the version of Vault secret, which is a very\nnatural thing to do (and, thus, support it).\n\n3) Should we unite 0001 & 0004 patches in one? Or maybe 0003 & 0004?\nIt is not a good idea to commit one of these patches without\nanother.... (Since we can reach a database state that cannot be\n`pg_dump`-ed)\n\n\nSo, that's it.",
"msg_date": "Wed, 10 Apr 2024 17:44:10 +0500",
"msg_from": "Kirill Reshke <reshkekirill@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC/RFC] Multiple passwords, interval expirations"
}
] |
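The salt constraint debated in the thread above comes from how SCRAM-SHA-256 derives its stored keys: the server sends exactly one salt in its challenge, so every verifier it might check the client's proof against must have been built with that salt. A minimal Python sketch of the derivation (following RFC 7677 and the `rolpassword` layout; the function name and sample values below are invented for illustration — this is not PostgreSQL's scram-common.c):

```python
import base64
import hashlib
import hmac


def scram_sha256_secret(password: str, salt: bytes, iterations: int = 4096) -> str:
    """Build a SCRAM-SHA-256 secret in the rolpassword layout:
    SCRAM-SHA-256$<iterations>:<salt>$<StoredKey>:<ServerKey>."""
    # SaltedPassword = PBKDF2-HMAC-SHA-256(password, salt, iterations)
    salted = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
    stored_key = hashlib.sha256(client_key).digest()
    server_key = hmac.new(salted, b"Server Key", hashlib.sha256).digest()

    def b64(data: bytes) -> str:
        return base64.b64encode(data).decode("ascii")

    return (f"SCRAM-SHA-256${iterations}:{b64(salt)}"
            f"${b64(stored_key)}:{b64(server_key)}")
```

With the same password but two different salts, the StoredKey/ServerKey halves differ completely, so a single SCRAM exchange cannot validate the client's proof against both — hence either one shared salt for all of a role's passwords, or a startup-message hint about which password to use, as the reviewer suggests above.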
[
{
"msg_contents": "If you enable the CHECK_WRITE_VS_EXTEND-protected assert in mdwrite(),\nyou'll trip it anytime you exercise btbuildempty(), blbuildempty(), or\nspgbuildempty().\n\nIn this case, it may not make any meaningful difference if smgrwrite()\nor smgrextend() is called (_mdfd_getseg() behavior won't differ even\nwith the different flags, so really only the FileWrite() wait event will\nbe different).\nHowever, it seems like it should still be changed to call smgrextend().\nOr, since they only write a few pages, why not use shared buffers like\ngistbuildempty() and brinbuildempty() do?\n\nI've attached a patch to move these three into shared buffers.\n\nAnd, speaking of spgbuildempty(), there is no test exercising it in\ncheck-world, so I've attached a patch to do so. I wasn't sure if it\nbelonged in spgist.sql or create_index_spgist.sql (on name alone, seems\nlike the latter but based on content, seems like the former).\n\n- Melanie",
"msg_date": "Wed, 2 Mar 2022 20:07:14 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Why do spgbuildempty(), btbuildempty(),\n and blbuildempty() use smgrwrite()?"
},
{
"msg_contents": "At Wed, 2 Mar 2022 20:07:14 -0500, Melanie Plageman <melanieplageman@gmail.com> wrote in \n> If you enable the CHECK_WRITE_VS_EXTEND-protected assert in mdwrite(),\n> you'll trip it anytime you exercise btbuildempty(), blbuildempty(), or\n> spgbuildempty().\n> \n> In this case, it may not make any meaningful difference if smgrwrite()\n> or smgrextend() is called (_mdfd_getseg() behavior won't differ even\n> with the different flags, so really only the FileWrite() wait event will\n> be different).\n> However, it seems like it should still be changed to call smgrextend().\n> Or, since they only write a few pages, why not use shared buffers like\n> gistbuildempty() and brinbuildempty() do?\n> \n> I've attached a patch to move these three into shared buffers.\n> \n> And, speaking of spgbuildempty(), there is no test exercising it in\n> check-world, so I've attached a patch to do so. I wasn't sure if it\n> belonged in spgist.sql or create_index_spgist.sql (on name alone, seems\n> like the latter but based on content, seems like the former).\n> \n> - Melanie\n\nI didn't dig into your specific isssue, but I'm mildly opposed to\nmoving them onto shared buffers. They are cold images of init-fork,\nwhich is actually no-use for active servers. Rather I'd like to move\nbrinbuildempty out of shared buffers considering one of my proposing\npatches..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 03 Mar 2022 11:57:23 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why do spgbuildempty(), btbuildempty(), and blbuildempty() use\n smgrwrite()?"
}
] |
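The md-layer rule the report above trips can be modeled in a few lines. This is a toy sketch with invented names (`ToyFork` is not PostgreSQL code), just to show why writing block 0 of an empty init fork is really an extension:

```python
class ToyFork:
    """Toy model of an md.c relation fork: write() may only overwrite
    blocks that already exist (the CHECK_WRITE_VS_EXTEND idea), while
    extend() is the operation that grows the file."""

    def __init__(self):
        self.blocks = []

    def nblocks(self):
        return len(self.blocks)

    def write(self, blocknum, page):   # mdwrite() analogue
        # This is the assertion btbuildempty() & co. would trip:
        assert blocknum < self.nblocks(), "write past EOF; use extend()"
        self.blocks[blocknum] = page

    def extend(self, blocknum, page):  # mdextend() analogue (simplified)
        assert blocknum == self.nblocks()
        self.blocks.append(page)


# An empty init fork, as in btbuildempty(): block 0 does not exist yet,
# so only extend() is legal for the metapage.
fork = ToyFork()
fork.extend(0, "metapage")    # what smgrextend() would do
fork.write(0, "metapage v2")  # fine now: the block exists
```

The alternative fix the patch proposes — going through shared buffers instead of raw smgr calls, as gistbuildempty() and brinbuildempty() already do — sidesteps the write-vs-extend distinction entirely, since the buffer manager handles extension itself.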
[
{
"msg_contents": "Hi, hackers\n\nWhen I try to change wal_level to minimal and restart the database, it complains\nmax_wal_senders > 0.\n\n2022-03-03 10:10:16.938 CST [6389] FATAL: WAL streaming (max_wal_senders > 0) requires wal_level \"replica\" or \"logical\"\n\nHowever, the documentation about wal_level [1] doesn't mentation this.\nHow about adding a sentence to describe how to set max_wal_senders when\nsetting wal_level to minimal?\n\n[1] https://www.postgresql.org/docs/devel/runtime-config-wal.html\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Thu, 03 Mar 2022 10:44:25 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Doc about how to set max_wal_senders when setting minimal wal_level"
},
{
"msg_contents": "On Wed, Mar 2, 2022 at 7:44 PM Japin Li <japinli@hotmail.com> wrote:\n\n>\n> Hi, hackers\n>\n> When I try to change wal_level to minimal and restart the database, it\n> complains\n> max_wal_senders > 0.\n>\n> 2022-03-03 10:10:16.938 CST [6389] FATAL: WAL streaming (max_wal_senders\n> > 0) requires wal_level \"replica\" or \"logical\"\n>\n> However, the documentation about wal_level [1] doesn't mentation this.\n> How about adding a sentence to describe how to set max_wal_senders when\n> setting wal_level to minimal?\n>\n> [1] https://www.postgresql.org/docs/devel/runtime-config-wal.html\n>\n>\nI would suggest a wording more like:\n\n\"A precondition for using minimal WAL is to disable WAL archiving and\nstreaming replication by setting max_wal_senders to 0, and archive_mode to\noff.\"\n\nWhile accurate, the phrase \"you must set\" just doesn't feel right to me. I\nalso don't like how the proposed sentence (either one) is added separately\nas opposed to being included in the immediately preceding paragraph. While\nthis limited patch is probably sufficient I would suggest trying to work\nout a slightly larger patch the improves the wording on the entire existing\nparagraph while incorporating the reference to max_wal_senders.\n\nNote, we seem to be missing the documentation of the default setting for\narchive_mode.\n\nIn addition, max_wal_senders could also be changed, adding a sentence like:\n\n\"If setting max_wal_senders to 0 consider also reducing the amount of WAL\nproduced by changing wal_level to minimal.\"\n\nAt least insofar as core is concerned without a wal sender the additional\nwal of replica is not actually able to be leveraged as pg_basebackup will\nnot work (at noted in its own description).\n\nDavid J.",
"msg_date": "Wed, 2 Mar 2022 20:25:52 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Doc about how to set max_wal_senders when setting minimal\n wal_level"
},
{
"msg_contents": "On Thu, 03 Mar 2022 at 11:25, David G. Johnston <david.g.johnston@gmail.com> wrote:\n> I would suggest a wording more like:\n>\n> \"A precondition for using minimal WAL is to disable WAL archiving and\n> streaming replication by setting max_wal_senders to 0, and archive_mode to\n> off.\"\n>\n> While accurate, the phrase \"you must set\" just doesn't feel right to me. I\n> also don't like how the proposed sentence (either one) is added separately\n> as opposed to being included in the immediately preceding paragraph. While\n> this limited patch is probably sufficient I would suggest trying to work\n> out a slightly larger patch the improves the wording on the entire existing\n> paragraph while incorporating the reference to max_wal_senders.\n>\n\nThanks for your review. Modified as your suggestion.\n\n> Note, we seem to be missing the documentation of the default setting for\n> archive_mode.\n>\n\nAdd the default value for archive_mode.\n\n> In addition, max_wal_senders could also be changed, adding a sentence like:\n>\n> \"If setting max_wal_senders to 0 consider also reducing the amount of WAL\n> produced by changing wal_level to minimal.\"\n>\n\nModified.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Thu, 03 Mar 2022 12:10:38 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc about how to set max_wal_senders when setting minimal\n wal_level"
},
{
"msg_contents": "On Thu, 03 Mar 2022 at 12:10, Japin Li <japinli@hotmail.com> wrote:\n> On Thu, 03 Mar 2022 at 11:25, David G. Johnston <david.g.johnston@gmail.com> wrote:\n>> I would suggest a wording more like:\n>>\n>> \"A precondition for using minimal WAL is to disable WAL archiving and\n>> streaming replication by setting max_wal_senders to 0, and archive_mode to\n>> off.\"\n>>\n>> While accurate, the phrase \"you must set\" just doesn't feel right to me. I\n>> also don't like how the proposed sentence (either one) is added separately\n>> as opposed to being included in the immediately preceding paragraph. While\n>> this limited patch is probably sufficient I would suggest trying to work\n>> out a slightly larger patch the improves the wording on the entire existing\n>> paragraph while incorporating the reference to max_wal_senders.\n>>\n>\n> Thanks for your review. Modified as your suggestion.\n>\n>> Note, we seem to be missing the documentation of the default setting for\n>> archive_mode.\n>>\n>\n> Add the default value for archive_mode.\n>\n>> In addition, max_wal_senders could also be changed, adding a sentence like:\n>>\n>> \"If setting max_wal_senders to 0 consider also reducing the amount of WAL\n>> produced by changing wal_level to minimal.\"\n>>\n>\n> Modified.\n\nAttach v3 patch to fix missing close varname tag.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Fri, 04 Mar 2022 12:18:29 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc about how to set max_wal_senders when setting minimal\n wal_level"
},
{
"msg_contents": "At Fri, 04 Mar 2022 12:18:29 +0800, Japin Li <japinli@hotmail.com> wrote in \n> \n> On Thu, 03 Mar 2022 at 12:10, Japin Li <japinli@hotmail.com> wrote:\n>\n> Attach v3 patch to fix missing close varname tag.\n\n+ A precondition for using minimal WAL is to disable WAL archiving and\n+ streaming replication by setting <xref linkend=\"guc-max-wal-senders\"/>\n+ to <literal>0</literal>, and <varname>archive_mode</varname>\n+ to <literal>off</literal>.\n\nIt is a bit odd that the features to stop and the corresponding GUCs\nare written irrespectively. It would be better they're in the same\norder.\n\n+ servers. If setting <varname>max_wal_senders</varname> to\n+ <literal>0</literal> consider also reducing the amount of WAL produced\n+ by changing <varname>wal_level</varname> to <literal>minimal</literal>.\n\nThose who anively follow this suggestion may bump into failure when\narhive_mode is on. Thus archive_mode is also worth referred to. But,\nanyway, IMHO, it is mere a performance tips that is not necessarily\nrequired in this section, or even in this documentaiotn. Addtion to\nthat, if we write this for max_wal_senders, archive_mode will deserve\nthe similar tips but I think it is too verbose. In short, I think I\nwould not add that description at all.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 04 Mar 2022 15:05:35 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Doc about how to set max_wal_senders when setting minimal\n wal_level"
},
{
"msg_contents": "On Fri, 04 Mar 2022 at 14:05, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> At Fri, 04 Mar 2022 12:18:29 +0800, Japin Li <japinli@hotmail.com> wrote in \n>> \n>> On Thu, 03 Mar 2022 at 12:10, Japin Li <japinli@hotmail.com> wrote:\n>>\n>> Attach v3 patch to fix missing close varname tag.\n>\n> + A precondition for using minimal WAL is to disable WAL archiving and\n> + streaming replication by setting <xref linkend=\"guc-max-wal-senders\"/>\n> + to <literal>0</literal>, and <varname>archive_mode</varname>\n> + to <literal>off</literal>.\n>\n> It is a bit odd that the features to stop and the corresponding GUCs\n> are written irrespectively. It would be better they're in the same\n> order.\n>\n\nThanks for your review. Modified.\n\n> + servers. If setting <varname>max_wal_senders</varname> to\n> + <literal>0</literal> consider also reducing the amount of WAL produced\n> + by changing <varname>wal_level</varname> to <literal>minimal</literal>.\n>\n> Those who anively follow this suggestion may bump into failure when\n> arhive_mode is on. Thus archive_mode is also worth referred to. But,\n> anyway, IMHO, it is mere a performance tips that is not necessarily\n> required in this section, or even in this documentaiotn. Addtion to\n> that, if we write this for max_wal_senders, archive_mode will deserve\n> the similar tips but I think it is too verbose. In short, I think I\n> would not add that description at all.\n>\n\nIt already has a tip about wal_level for archive_mode [1], IIUC.\n\n\tarchive_mode cannot be enabled when wal_level is set to minimal.\n\n\n[1] https://www.postgresql.org/docs/devel/runtime-config-wal.html#GUC-ARCHIVE-MODE\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Fri, 04 Mar 2022 17:49:35 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc about how to set max_wal_senders when setting minimal\n wal_level"
},
{
"msg_contents": "On Thu, Mar 3, 2022 at 11:05 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> But,\n> anyway, IMHO, it is mere a performance tips that is not necessarily\n> required in this section, or even in this documentaiotn. Addtion to\n> that, if we write this for max_wal_senders, archive_mode will deserve\n> the similar tips but I think it is too verbose. In short, I think I\n> would not add that description at all.\n>\n>\nI wrote it as a performance tip but it is documenting that when set to 0 no\nfeatures of the server require more information than is captured in the\nminimal wal. That fact seems worthy of noting. Even at the cost of a bit\nof verbosity. These features interact with each other and that interaction\nshould be adequately described. While subjective, this dynamic seems to\nwarrant inclusion.\n\nDavid J.",
"msg_date": "Mon, 7 Mar 2022 14:54:59 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Doc about how to set max_wal_senders when setting minimal\n wal_level"
},
{
"msg_contents": "On Fri, Mar 4, 2022 at 2:49 AM Japin Li <japinli@hotmail.com> wrote:\n\n> Thanks for your review. Modified.\n>\n\nWorks for me. I have some additional sparks of ideas but nothing that need\nhold this up.\n\nDavid J.",
"msg_date": "Mon, 7 Mar 2022 14:56:34 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Doc about how to set max_wal_senders when setting minimal\n wal_level"
},
{
"msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> [ v4-wal-level-documentation.patch ]\n\nHm, I don't care for the wording here:\n\n+ A precondition for using minimal WAL is to disable WAL archiving and\n+ streaming replication by setting <varname>archive_mode</varname> to\n+ <literal>off</literal>, and <xref linkend=\"guc-max-wal-senders\"/> to\n+ <literal>0</literal>.\n\n\"Precondition\" is an overly fancy word that makes things less clear\nnot more so. Does it mean that setting wal_level = minimal will fail\nif you don't do these other things, or does it just mean that you\nwon't be getting the absolute minimum WAL volume? If the former,\nI think it'd be better to say something like \"To set wal_level to minimal,\nyou must also set [these variables], which has the effect of disabling\nboth WAL archiving and streaming replication.\"\n\n+ servers. If setting <varname>max_wal_senders</varname> to\n+ <literal>0</literal> consider also reducing the amount of WAL produced\n+ by changing <varname>wal_level</varname> to <literal>minimal</literal>.\n\nI don't think this is great advice. It will encourage people to use\nwal_level = minimal even if they have other requirements that weigh\nagainst it. If they feel that their system is producing too much\nWAL, I doubt they'll have a hard time finding the wal_level knob.\n\nTBH, I think the real problem with the docs in this area is that\nthe first para about wal_level is disjointed and unclear; we ought\nto nuke and rewrite that. In particular it fails to provide adequate\ncontext about what \"logical decoding\" means. I think possibly what\nit needs to say is that replica mode supports *physical* replication\nbut if you want to use *logical* replication you need logical mode;\nif you need neither, and are not doing WAL archiving either, you\ncan get away with minimal mode.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Jul 2022 20:02:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Doc about how to set max_wal_senders when setting minimal\n wal_level"
},
{
"msg_contents": "On Tue, Jul 5, 2022 at 08:02:33PM -0400, Tom Lane wrote:\n> \"Precondition\" is an overly fancy word that makes things less clear\n> not more so. Does it mean that setting wal_level = minimal will fail\n> if you don't do these other things, or does it just mean that you\n> won't be getting the absolute minimum WAL volume? If the former,\n> I think it'd be better to say something like \"To set wal_level to minimal,\n> you must also set [these variables], which has the effect of disabling\n> both WAL archiving and streaming replication.\"\n\nI have created the attached patch to try to improve this text.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Thu, 14 Jul 2022 20:49:36 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Doc about how to set max_wal_senders when setting minimal\n wal_level"
},
{
"msg_contents": "\nSorry for the late reply.\n\nOn Wed, 06 Jul 2022 at 08:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Japin Li <japinli@hotmail.com> writes:\n>> [ v4-wal-level-documentation.patch ]\n>\n> Hm, I don't care for the wording here:\n>\n> + A precondition for using minimal WAL is to disable WAL archiving and\n> + streaming replication by setting <varname>archive_mode</varname> to\n> + <literal>off</literal>, and <xref linkend=\"guc-max-wal-senders\"/> to\n> + <literal>0</literal>.\n>\n> \"Precondition\" is an overly fancy word that makes things less clear\n> not more so. Does it mean that setting wal_level = minimal will fail\n> if you don't do these other things, or does it just mean that you\n> won't be getting the absolute minimum WAL volume? If the former,\n> I think it'd be better to say something like \"To set wal_level to minimal,\n> you must also set [these variables], which has the effect of disabling\n> both WAL archiving and streaming replication.\"\n\nYeah, it's the former case.\n\n>\n> + servers. If setting <varname>max_wal_senders</varname> to\n> + <literal>0</literal> consider also reducing the amount of WAL produced\n> + by changing <varname>wal_level</varname> to <literal>minimal</literal>.\n>\n> I don't think this is great advice. It will encourage people to use\n> wal_level = minimal even if they have other requirements that weigh\n> against it. If they feel that their system is producing too much\n> WAL, I doubt they'll have a hard time finding the wal_level knob.\n>\n\nAgreed. It isn't good advice. We can remove the suggestion.\n\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Fri, 15 Jul 2022 21:23:02 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc about how to set max_wal_senders when setting minimal\n wal_level"
},
{
"msg_contents": "\nOn Fri, 15 Jul 2022 at 08:49, Bruce Momjian <bruce@momjian.us> wrote:\n> On Tue, Jul 5, 2022 at 08:02:33PM -0400, Tom Lane wrote:\n>> \"Precondition\" is an overly fancy word that makes things less clear\n>> not more so. Does it mean that setting wal_level = minimal will fail\n>> if you don't do these other things, or does it just mean that you\n>> won't be getting the absolute minimum WAL volume? If the former,\n>> I think it'd be better to say something like \"To set wal_level to minimal,\n>> you must also set [these variables], which has the effect of disabling\n>> both WAL archiving and streaming replication.\"\n>\n> I have created the attached patch to try to improve this text.\n\nIMO we can add the following sentence for wal_level description, since\nif wal_level = minimal and max_wal_senders > 0, we cannot start the database.\n\nTo set wal_level to minimal, you must also set max_wal_senders to 0,\nwhich has the effect of disabling both WAL archiving and streaming\nreplication.\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Fri, 15 Jul 2022 21:29:20 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc about how to set max_wal_senders when setting minimal\n wal_level"
},
{
"msg_contents": "On Fri, Jul 15, 2022 at 6:27 AM Japin Li <japinli@hotmail.com> wrote:\n\n>\n> >\n> > + servers. If setting <varname>max_wal_senders</varname> to\n> > + <literal>0</literal> consider also reducing the amount of WAL\n> produced\n> > + by changing <varname>wal_level</varname> to\n> <literal>minimal</literal>.\n> >\n> > I don't think this is great advice. It will encourage people to use\n> > wal_level = minimal even if they have other requirements that weigh\n> > against it. If they feel that their system is producing too much\n> > WAL, I doubt they'll have a hard time finding the wal_level knob.\n> >\n>\n> Agreed. It isn't good advice. We can remove the suggestion.\n>\n>\nYeah, I wrote that thinking that max_wal_senders being set to 0 implied\narchive_mode = off, but that isn't the case.\n\nDavid J.",
"msg_date": "Fri, 15 Jul 2022 07:33:17 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Doc about how to set max_wal_senders when setting minimal\n wal_level"
},
{
"msg_contents": "On Fri, Jul 15, 2022 at 09:29:20PM +0800, Japin Li wrote:\n> \n> On Fri, 15 Jul 2022 at 08:49, Bruce Momjian <bruce@momjian.us> wrote:\n> > On Tue, Jul 5, 2022 at 08:02:33PM -0400, Tom Lane wrote:\n> >> \"Precondition\" is an overly fancy word that makes things less clear\n> >> not more so. Does it mean that setting wal_level = minimal will fail\n> >> if you don't do these other things, or does it just mean that you\n> >> won't be getting the absolute minimum WAL volume? If the former,\n> >> I think it'd be better to say something like \"To set wal_level to minimal,\n> >> you must also set [these variables], which has the effect of disabling\n> >> both WAL archiving and streaming replication.\"\n> >\n> > I have created the attached patch to try to improve this text.\n> \n> IMO we can add the following sentence for wal_level description, since\n> if wal_level = minimal and max_wal_senders > 0, we cannot start the database.\n> \n> To set wal_level to minimal, you must also set max_wal_senders to 0,\n> which has the effect of disabling both WAL archiving and streaming\n> replication.\n\nOkay, text added in the attached patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Mon, 18 Jul 2022 15:58:17 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Doc about how to set max_wal_senders when setting minimal\n wal_level"
},
{
"msg_contents": "\nOn Tue, 19 Jul 2022 at 03:58, Bruce Momjian <bruce@momjian.us> wrote:\n> On Fri, Jul 15, 2022 at 09:29:20PM +0800, Japin Li wrote:\n>> \n>> On Fri, 15 Jul 2022 at 08:49, Bruce Momjian <bruce@momjian.us> wrote:\n>> > On Tue, Jul 5, 2022 at 08:02:33PM -0400, Tom Lane wrote:\n>> >> \"Precondition\" is an overly fancy word that makes things less clear\n>> >> not more so. Does it mean that setting wal_level = minimal will fail\n>> >> if you don't do these other things, or does it just mean that you\n>> >> won't be getting the absolute minimum WAL volume? If the former,\n>> >> I think it'd be better to say something like \"To set wal_level to minimal,\n>> >> you must also set [these variables], which has the effect of disabling\n>> >> both WAL archiving and streaming replication.\"\n>> >\n>> > I have created the attached patch to try to improve this text.\n>> \n>> IMO we can add the following sentence for wal_level description, since\n>> if wal_level = minimal and max_wal_senders > 0, we cannot start the database.\n>> \n>> To set wal_level to minimal, you must also set max_wal_senders to 0,\n>> which has the effect of disabling both WAL archiving and streaming\n>> replication.\n>\n> Okay, text added in the attached patch.\n\nThanks for updating the patch! LGTM.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Tue, 19 Jul 2022 09:27:31 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc about how to set max_wal_senders when setting minimal\n wal_level"
},
{
"msg_contents": "On Mon, Jul 18, 2022 at 6:27 PM Japin Li <japinli@hotmail.com> wrote:\n\n>\n> On Tue, 19 Jul 2022 at 03:58, Bruce Momjian <bruce@momjian.us> wrote:\n> > On Fri, Jul 15, 2022 at 09:29:20PM +0800, Japin Li wrote:\n> >>\n> >> On Fri, 15 Jul 2022 at 08:49, Bruce Momjian <bruce@momjian.us> wrote:\n> >> > On Tue, Jul 5, 2022 at 08:02:33PM -0400, Tom Lane wrote:\n> >> >> \"Precondition\" is an overly fancy word that makes things less clear\n> >> >> not more so. Does it mean that setting wal_level = minimal will fail\n> >> >> if you don't do these other things, or does it just mean that you\n> >> >> won't be getting the absolute minimum WAL volume? If the former,\n> >> >> I think it'd be better to say something like \"To set wal_level to\n> minimal,\n> >> >> you must also set [these variables], which has the effect of\n> disabling\n> >> >> both WAL archiving and streaming replication.\"\n> >> >\n> >> > I have created the attached patch to try to improve this text.\n> >>\n> >> IMO we can add the following sentence for wal_level description, since\n> >> if wal_level = minimal and max_wal_senders > 0, we cannot start the\n> database.\n> >>\n> >> To set wal_level to minimal, you must also set max_wal_senders to 0,\n> >> which has the effect of disabling both WAL archiving and streaming\n> >> replication.\n> >\n> > Okay, text added in the attached patch.\n>\n> Thanks for updating the patch! LGTM.\n>\n>\n+0.90\n\nConsider changing:\n\n\"makes any base backups taken before this unusable\"\n\nto:\n\n\"makes existing base backups unusable\"\n\nAs I try to justify this, though, it isn't quite true, maybe:\n\n\"makes point-in-time recovery, using existing base backups, unable to\nreplay future WAL.\"\n\nDavid J.",
"msg_date": "Mon, 18 Jul 2022 19:39:55 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Doc about how to set max_wal_senders when setting minimal\n wal_level"
},
{
"msg_contents": "On Mon, Jul 18, 2022 at 07:39:55PM -0700, David G. Johnston wrote:\n> On Mon, Jul 18, 2022 at 6:27 PM Japin Li <japinli@hotmail.com> wrote:\n> \n> \n> +0.90\n> \n> Consider changing:\n> \n> \"makes any base backups taken before this unusable\"\n> \n> to:\n> \n> \"makes existing base backups unusable\"\n> \n> As I try to justify this, though, it isn't quite true, maybe:\n> \n> \"makes point-in-time recovery, using existing base backups, unable to replay\n> future WAL.\"\n\nI went with simpler wording.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Mon, 18 Jul 2022 23:16:15 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Doc about how to set max_wal_senders when setting minimal\n wal_level"
},
{
"msg_contents": "On Mon, Jul 18, 2022 at 8:16 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Mon, Jul 18, 2022 at 07:39:55PM -0700, David G. Johnston wrote:\n> > On Mon, Jul 18, 2022 at 6:27 PM Japin Li <japinli@hotmail.com> wrote:\n> >\n> >\n> > +0.90\n> >\n> > Consider changing:\n> >\n> > \"makes any base backups taken before this unusable\"\n> >\n> > to:\n> >\n> > \"makes existing base backups unusable\"\n> >\n> > As I try to justify this, though, it isn't quite true, maybe:\n> >\n> > \"makes point-in-time recovery, using existing base backups, unable to\n> replay\n> > future WAL.\"\n>\n> I went with simpler wording.\n>\n>\n+1\n\nThanks!\n\nDavid J.",
"msg_date": "Mon, 18 Jul 2022 20:22:44 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Doc about how to set max_wal_senders when setting minimal\n wal_level"
},
{
"msg_contents": "On Mon, Jul 18, 2022 at 11:16:15PM -0400, Bruce Momjian wrote:\n> On Mon, Jul 18, 2022 at 07:39:55PM -0700, David G. Johnston wrote:\n> > On Mon, Jul 18, 2022 at 6:27 PM Japin Li <japinli@hotmail.com> wrote:\n> > \n> > \n> > +0.90\n> > \n> > Consider changing:\n> > \n> > \"makes any base backups taken before this unusable\"\n> > \n> > to:\n> > \n> > \"makes existing base backups unusable\"\n> > \n> > As I try to justify this, though, it isn't quite true, maybe:\n> > \n> > \"makes point-in-time recovery, using existing base backups, unable to replay\n> > future WAL.\"\n> \n> I went with simpler wording.\n\nPatch applied back to PG 14, and partial to PG 13. Docs before that\nwere too different to be safe.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 12 Aug 2022 10:30:52 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Doc about how to set max_wal_senders when setting minimal\n wal_level"
}
] |
[
{
"msg_contents": "Hi, hackers!\n\nAround 2 months ago I've noticed a problem that messages containing patches\nin the thread [1] were always processed with manual moderation. They appear\nin hackers' thread hours after posting. None of them are from new CF members\nand personally, I don't see a reason for such inconvenience. The problem\nstill exists as of today.\n\nCan someone make changes in a moderation engine to make it more liberal and\nconvenient for authors?\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Thu, 3 Mar 2022 16:31:14 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Problem with moderation of messages with patched attached."
},
{
"msg_contents": "On Thu, Mar 3, 2022 at 3:31 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n\n> Hi, hackers!\n>\n> Around 2 months ago I've noticed a problem that messages containing\n> patches in the thread [1] were always processed with manual moderation.\n> They appear in hackers' thread hours after posting None of them are from\n> new CF members and personally, I don't see a reason for such inconvenience.\n> The problem still exists as of today.\n>\n> Can someone make changes in a moderation engine to make it more liberal\n> and convenient for authors?\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com\n>\n> --\n> Best regards,\n> Pavel Borisov\n>\n> Postgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n>\n\nConfirm\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Thu, 3 Mar 2022 15:33:40 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with moderation of messages with patched attached."
},
{
"msg_contents": "Hi hackers,\n\n> Confirm\n\nHere are my two cents.\n\nMy last email to pgsql-jobs@ was moderated in a similar fashion. To my\nknowledge that mailing list is not pre-moderated. So it may have the same\nproblem, and not only with patches. (We use regular Google Workspace.)\n\nThe pgsql-hackers@ thread under question seems to have two mailing list\naddresses in cc:. Maybe this is the reason [1]:\n\n> Cross-posted emails will be moderated and therefore will also take longer\nto reach the subscribers if approved.\n\nAlthough it's strange that only emails with attachments seem to be affected.\n\n[1]: https://www.postgresql.org/list/\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Thu, 3 Mar 2022 16:00:06 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with moderation of messages with patched attached."
},
{
"msg_contents": "Hi\n\nOn Thu, 3 Mar 2022 at 12:31, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n\n> Hi, hackers!\n>\n> Around 2 months ago I've noticed a problem that messages containing\n> patches in the thread [1] were always processed with manual moderation.\n> They appear in hackers' thread hours after posting None of them are from\n> new CF members and personally, I don't see a reason for such inconvenience.\n> The problem still exists as of today.\n>\n> Can someone make changes in a moderation engine to make it more liberal\n> and convenient for authors?\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com\n>\n\nHere's the moderation reason for that message:\n\nMessage to list pgsql-hackers held for moderation due to 'Size 1MB (1061796\nbytes) is larger than threshold 1000KB (1024000 bytes)', notice queued for\n2 moderators\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 3 Mar 2022 13:18:14 +0000",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": false,
"msg_subject": "Re: Problem with moderation of messages with patched attached."
},
{
"msg_contents": ">\n> Message to list pgsql-hackers held for moderation due to 'Size 1MB\n> (1061796 bytes) is larger than threshold 1000KB (1024000 bytes)', notice\n> queued for 2 moderators\n>\nCould you make this limit 2MB at least for authorized commitfest members?\nThanks!\n\n--\nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Thu, 3 Mar 2022 17:22:39 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Problem with moderation of messages with patched attached."
},
{
"msg_contents": "Hi Dave,\n\n> Message to list pgsql-hackers held for moderation due to 'Size 1MB (1061796 bytes) is larger than threshold 1000KB (1024000 bytes)', notice queued for 2 moderators\n\nThanks! Does anyone know if cfbot understands .patch.gz and/or .tgz ?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 3 Mar 2022 16:24:03 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with moderation of messages with patched attached."
},
{
"msg_contents": "On Thu, 3 Mar 2022 at 13:22, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n\n> Message to list pgsql-hackers held for moderation due to 'Size 1MB\n>> (1061796 bytes) is larger than threshold 1000KB (1024000 bytes)', notice\n>> queued for 2 moderators\n>>\n> Could you make this limit 2MB at least for authorized commitfest members?\n> Thanks!\n>\n\nThe mail system doesn't have the capability to apply different moderation\nrules for people in that way I'm afraid.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 3 Mar 2022 13:24:27 +0000",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": false,
"msg_subject": "Re: Problem with moderation of messages with patched attached."
},
{
"msg_contents": ">\n> The mail system doesn't have the capability to apply different moderation\n> rules for people in that way I'm afraid.\n>\nMaybe then 2MB for everyone? Otherwise it's not so convenient. Lead to\nanswers before the questions in the thread [1], seems weird.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com\n\n\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Thu, 3 Mar 2022 17:27:50 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Problem with moderation of messages with patched attached."
},
{
"msg_contents": "On Thu, Mar 03, 2022 at 04:24:03PM +0300, Aleksander Alekseev wrote:\n> \n> Thanks! Does anyone know if cfbot understands .patch.gz and/or .tgz ?\n\nThere's a FAQ link on the cfbot main page that answers this kind of question.\n\n\n",
"msg_date": "Thu, 3 Mar 2022 21:31:37 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with moderation of messages with patched attached."
},
{
"msg_contents": ">\n> There's a FAQ link on the cfbot main page that answers this kind of\n> questions.\n>\nGood to know! I'll try [.gz] next time then.\n\nThanks!",
"msg_date": "Thu, 3 Mar 2022 17:35:01 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Problem with moderation of messages with patched attached."
},
{
"msg_contents": "On Thu, 3 Mar 2022 at 13:28, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n\n> The mail system doesn't have the capability to apply different moderation\n>> rules for people in that way I'm afraid.\n>>\n> Maybe then 2MB for everyone? Otherwise it's not so convenient. Lead to\n> answers before the questions in the thread [1], seems weird.\n>\n\nThen someone will complain if their patch is 2.1MB! How often are messages\nlegitimately over 1MB anyway, even with a patch? I don't usually moderate\n-hackers, so I don't know if this is a common thing or not.\n\nI'll ping a message across to the sysadmin team anyway; I can't just change\nthat setting without buy-in from the rest of the team.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com\n\nOn Thu, 3 Mar 2022 at 13:28, Pavel Borisov <pashkin.elfe@gmail.com> wrote:The mail system doesn't have the capability to apply different moderation rules for people in that way I'm afraid.Maybe then 2MB for everyone? Otherwise it's not so convenient. Lead to answers before the questions in the thread [1], seems weird.Then someone will complain if their patch is 2.1MB! How often are messages legitimately over 1MB anyway, even with a patch? I don't usually moderate -hackers, so I don't know if this is a common thing or not.I'll ping a message across to the sysadmin team anyway; I can't just change that setting without buy-in from the rest of the team. -- Dave PageBlog: https://pgsnake.blogspot.comTwitter: @pgsnakeEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 3 Mar 2022 13:37:35 +0000",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": false,
"msg_subject": "Re: Problem with moderation of messages with patched attached."
},
{
"msg_contents": "Hi Dave,\n\n> Then someone will complain if their patch is 2.1MB! How often are messages legitimately over 1MB anyway, even with a patch? I don't usually moderate -hackers, so I don't know if this is a common thing or not.\n>\n> I'll ping a message across to the sysadmin team anyway; I can't just change that setting without buy-in from the rest of the team.\n\nIMO, current limits are OK. The actual problem is that when the\nmessage gets into moderation, the notice to the author doesn't contain\nthe reason:\n\n> Your message to pgsql-hackers with subject\n> \"Re: Add 64-bit XIDs into PostgreSQL 15\"\n> has been held for moderation.\n>\n> It will be delivered to the list recipients as soon as it has been\n> approved by a moderator.\n>\n> If you wish to cancel the message without delivery, please click\n> this link: ....\n\nAny chance we could include the reason in the message? I foresee that\notherwise such kinds of questions will be asked over and over again.\n\n--\nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 3 Mar 2022 16:45:55 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with moderation of messages with patched attached."
},
{
"msg_contents": "On Thu, Mar 03, 2022 at 01:37:35PM +0000, Dave Page wrote:\n>\n> Then someone will complain if their patch is 2.1MB! How often are messages\n> legitimately over 1MB anyway, even with a patch? I don't usually moderate\n> -hackers, so I don't know if this is a common thing or not.\n\nIt's not common, most people send compressed versions.\n\nAlso, gigantic patchsets tend to be hard to maintain and rot pretty fast, so\nauthors also sometimes maintain a branch on some external repository and just\nsend newer versions on the ML infrequently.\n\n\n",
"msg_date": "Thu, 3 Mar 2022 21:46:47 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with moderation of messages with patched attached."
},
{
"msg_contents": "Hi again,\n\n> Any chance we could include the reason in the message? I foresee that\n> otherwise such kinds of questions will be asked over and over again.\n\nA link to the list of common reasons should work too.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 3 Mar 2022 16:47:40 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with moderation of messages with patched attached."
},
{
"msg_contents": ">\n> On Thu, 3 Mar 2022 at 13:28, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>\n>> The mail system doesn't have the capability to apply different moderation\n>>> rules for people in that way I'm afraid.\n>>>\n>> Maybe then 2MB for everyone? Otherwise it's not so convenient. Lead to\n>> answers before the questions in the thread [1], seems weird.\n>>\n>\n> Then someone will complain if their patch is 2.1MB! How often are messages\n> legitimately over 1MB anyway, even with a patch? I don't usually moderate\n> -hackers, so I don't know if this is a common thing or not.\n>\nHi, Dave!\nAuthors in the mentioned thread [1] bump into this issue while posting all\n11 versions of a patchset. It is a little bit more than 1MB. We can try to\nuse .gz and if this doesn't work we report it again.\n\nI'll ping a message across to the sysadmin team anyway; I can't just change\n> that setting without buy-in from the rest of the team.\n>\nThanks! Maybe this will solve the issue.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Thu, 3 Mar 2022 17:47:43 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Problem with moderation of messages with patched attached."
},
{
"msg_contents": "Pavel Borisov <pashkin.elfe@gmail.com> writes:\n>> The mail system doesn't have the capability to apply different moderation\n>> rules for people in that way I'm afraid.\n\n> Maybe then 2MB for everyone?\n\nMaybe your patch needs to be split up? You're going to have a hard time\nfinding people who want to review or commit such large chunks.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Mar 2022 10:17:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Problem with moderation of messages with patched attached."
},
{
"msg_contents": "On Thu, Mar 03, 2022 at 10:17:06AM -0500, Tom Lane wrote:\n> Pavel Borisov <pashkin.elfe@gmail.com> writes:\n> >> The mail system doesn't have the capability to apply different moderation\n> >> rules for people in that way I'm afraid.\n> \n> > Maybe then 2MB for everyone?\n> \n> Maybe your patch needs to be split up? You're going to have a hard time\n> finding people who want to review or commit such large chunks.\n\nI think it's the total attachment size, not a single file. So while splitting\nup the patchset even more would still be a good idea, compressing the files\nbefore sending them to hundred of people would be an even better one.\n\n\n",
"msg_date": "Thu, 3 Mar 2022 23:40:26 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with moderation of messages with patched attached."
},
{
"msg_contents": "Greetings,\n\n* Aleksander Alekseev (aleksander@timescale.com) wrote:\n> My last email to pgsql-jobs@ was moderated in a similar fashion. To my\n> knowledge that mailing list is not pre-moderated. So it may have the same\n> problem, and not only with patches. (We use regular Google Workspace.)\n\n-jobs is moderated.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 3 Mar 2022 12:40:47 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Problem with moderation of messages with patched attached."
},
{
"msg_contents": "Greetings,\n\n* Pavel Borisov (pashkin.elfe@gmail.com) wrote:\n> > On Thu, 3 Mar 2022 at 13:28, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> >> The mail system doesn't have the capability to apply different moderation\n> >>> rules for people in that way I'm afraid.\n> >>>\n> >> Maybe then 2MB for everyone? Otherwise it's not so convenient. Lead to\n> >> answers before the questions in the thread [1], seems weird.\n> >\n> > Then someone will complain if their patch is 2.1MB! How often are messages\n> > legitimately over 1MB anyway, even with a patch? I don't usually moderate\n> > -hackers, so I don't know if this is a common thing or not.\n\nI do pay attention to -hackers and no, it doesn't come up very often.\n\n> Authors in the mentioned thread [1] bump into this issue while posting all\n> 11 versions of a patchset. It is little bit more than 1MB. We can try to\n> use .gz and if this doesn't work we report it again.\n\nThis patch set really should be broken down into smaller independent\npieces that attack different parts and not be all one big series of\npatches.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 3 Mar 2022 12:44:12 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Problem with moderation of messages with patched attached."
},
{
"msg_contents": "Hi,\n\nOn 2022-03-03 13:37:35 +0000, Dave Page wrote:\n> On Thu, 3 Mar 2022 at 13:28, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> \n> > The mail system doesn't have the capability to apply different moderation\n> >> rules for people in that way I'm afraid.\n> >>\n> > Maybe then 2MB for everyone? Otherwise it's not so convenient. Lead to\n> > answers before the questions in the thread [1], seems weird.\n> >\n> \n> Then someone will complain if their patch is 2.1MB! How often are messages\n> legitimately over 1MB anyway, even with a patch? I don't usually moderate\n> -hackers, so I don't know if this is a common thing or not.\n\nI don't think it's actually that rare. But most contributors writing that\nlarge patchsets know about the limit and work around it - I gzip patches when\nI see the email getting too large. But it's more annoying to work with for\nreviewers.\n\nIt's somewhat annoying. If you e.g. append a few graphs of performance changes\nand a patch it's pretty easy to get into the range where compressing won't\nhelp anymore.\n\nAnd sure, any limit may be hit by somebody. But 1MB across the whole email\nseems pretty low these days.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 19 Mar 2022 11:48:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Problem with moderation of messages with patched attached."
},
{
"msg_contents": "\nOn 3/19/22 14:48, Andres Freund wrote:\n> Hi,\n>\n> On 2022-03-03 13:37:35 +0000, Dave Page wrote:\n>> On Thu, 3 Mar 2022 at 13:28, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>>\n>>> The mail system doesn't have the capability to apply different moderation\n>>>> rules for people in that way I'm afraid.\n>>>>\n>>> Maybe then 2MB for everyone? Otherwise it's not so convenient. Lead to\n>>> answers before the questions in the thread [1], seems weird.\n>>>\n>> Then someone will complain if their patch is 2.1MB! How often are messages\n>> legitimately over 1MB anyway, even with a patch? I don't usually moderate\n>> -hackers, so I don't know if this is a common thing or not.\n> I don't think it's actually that rare. But most contributors writing that\n> large patchsets know about the limit and work around it - I gzip patches when\n> I see the email getting too large. But it's more annoying to work with for\n> reviewers.\n>\n> It's somewhat annoying. If you e.g. append a few graphs of performance changes\n> and a patch it's pretty easy to get into the range where compressing won't\n> help anymore.\n>\n> And sure, any limit may be hit by somebody. But 1MB across the whole email\n> seems pretty low these days.\n>\n\nOf course we could get complaints no matter what level we set the limit\nat. I think raising it to 2Mb would be a reasonable experiment. If no\nobservable evil ensues then leave it that way. If it does then roll it\nback. I agree that plain uncompressed patches are easier to deal with in\ngeneral.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 20 Mar 2022 09:52:29 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Problem with moderation of messages with patched attached."
},
{
"msg_contents": ">\n> Of course we could get complaints no matter what level we set the limit\n> at. I think raising it to 2Mb would be a reasonable experiment. If no\n> observable evil ensues then leave it that way. If it does then roll it\n> back. I agree that plain uncompressed patches are easier to deal with in\n> general.\n>\nThanks, Andrew! I think it will be more comfortable now.\n\nPavel.",
"msg_date": "Sun, 20 Mar 2022 21:02:02 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Problem with moderation of messages with patched attached."
},
{
"msg_contents": "On Sun, 20 Mar 2022 at 13:52, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 3/19/22 14:48, Andres Freund wrote:\n> > Hi,\n> >\n> > On 2022-03-03 13:37:35 +0000, Dave Page wrote:\n> >> On Thu, 3 Mar 2022 at 13:28, Pavel Borisov <pashkin.elfe@gmail.com>\n> wrote:\n> >>\n> >>> The mail system doesn't have the capability to apply different\n> moderation\n> >>>> rules for people in that way I'm afraid.\n> >>>>\n> >>> Maybe then 2MB for everyone? Otherwise it's not so convenient. Lead to\n> >>> answers before the questions in the thread [1], seems weird.\n> >>>\n> >> Then someone will complain if their patch is 2.1MB! How often are\n> messages\n> >> legitimately over 1MB anyway, even with a patch? I don't usually\n> moderate\n> >> -hackers, so I don't know if this is a common thing or not.\n> > I don't think it's actually that rare. But most contributors writing that\n> > large patchsets know about the limit and work around it - I gzip patches\n> when\n> > I see the email getting too large. But it's more annoying to work with\n> for\n> > reviewers.\n> >\n> > It's somewhat annoying. If you e.g. append a few graphs of performance\n> changes\n> > and a patch it's pretty easy to get into the range where compressing\n> won't\n> > help anymore.\n> >\n> > And sure, any limit may be hit by somebody. But 1MB across the whole\n> email\n> > seems pretty low these days.\n> >\n>\n> Of course we could get complaints no matter what level we set the limit\n> at. I think raising it to 2Mb would be a reasonable experiment. If no\n> observable evil ensues then leave it that way. If it does then roll it\n> back. I agree that plain uncompressed patches are easier to deal with in\n> general.\n>\n\nThanks for the reminder :-)\n\nI've bumped the limit to 2MB.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 21 Mar 2022 09:41:57 +0000",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": false,
"msg_subject": "Re: Problem with moderation of messages with patched attached."
}
] |
[
{
"msg_contents": "Hi,\nIn test output, I saw:\n\nsrc/backend/utils/adt/tid.c:112:16: runtime error: left shift of 65535 by\n16 places cannot be represented in type 'int'\n\nI think this was due to the left shift in BlockIdGetBlockNumber not\nproperly casting its operand.\n\nPlease see the proposed change in patch.\n\nThanks",
"msg_date": "Thu, 3 Mar 2022 07:34:08 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "casting operand to proper type in BlockIdGetBlockNumber"
},
{
"msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> In test output, I saw:\n> src/backend/utils/adt/tid.c:112:16: runtime error: left shift of 65535 by\n> 16 places cannot be represented in type 'int'\n\nWhat compiler is that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Mar 2022 10:44:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: casting operand to proper type in BlockIdGetBlockNumber"
},
{
    "msg_contents": "On Thu, Mar 3, 2022 at 7:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Zhihong Yu <zyu@yugabyte.com> writes:\n> > In test output, I saw:\n> > src/backend/utils/adt/tid.c:112:16: runtime error: left shift of 65535 by\n> > 16 places cannot be represented in type 'int'\n>\n> What compiler is that?\n>\n> regards, tom lane\n>\nHi,\nJenkins build is alma8-clang12-asan\n\nSo it is clang12 on alma.\n\nCheers",
"msg_date": "Thu, 3 Mar 2022 07:57:27 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: casting operand to proper type in BlockIdGetBlockNumber"
},
{
"msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> On Thu, Mar 3, 2022 at 7:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Zhihong Yu <zyu@yugabyte.com> writes:\n>>> In test output, I saw:\n>>> src/backend/utils/adt/tid.c:112:16: runtime error: left shift of 65535 by\n>>> 16 places cannot be represented in type 'int'\n\n> Jenkins build is alma8-clang12-asan\n\nOh, I misread this as a compile-time warning, but it must be from ASAN.\nWas the test case one of your own, or just our normal regression tests?\n\n(I think the code is indeed incorrect, but I'm wondering why this hasn't\nbeen reported before. It's been like that for a long time.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Mar 2022 11:24:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: casting operand to proper type in BlockIdGetBlockNumber"
},
{
    "msg_contents": "On Thu, Mar 3, 2022 at 8:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Zhihong Yu <zyu@yugabyte.com> writes:\n> > On Thu, Mar 3, 2022 at 7:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Zhihong Yu <zyu@yugabyte.com> writes:\n> >>> In test output, I saw:\n> >>> src/backend/utils/adt/tid.c:112:16: runtime error: left shift of 65535\n> by\n> >>> 16 places cannot be represented in type 'int'\n>\n> > Jenkins build is alma8-clang12-asan\n>\n> Oh, I misread this as a compile-time warning, but it must be from ASAN.\n> Was the test case one of your own, or just our normal regression tests?\n>\n> (I think the code is indeed incorrect, but I'm wondering why this hasn't\n> been reported before. It's been like that for a long time.)\n>\n> regards, tom lane\n>\nHi,\nThe Jenkins test is ported from contrib/postgres_fdw/sql/postgres_fdw.sql -\nso theoretically PG would see the same error for clang12 on Alma.\n\nHere were a few lines prior to the sanitizer complaint:\n\nts1|pid123867|:30045 2022-03-02 01:47:57.098 UTC [124161] STATEMENT:\n  CREATE TRIGGER trig_row_before\nts1|pid123867|:30045 BEFORE INSERT OR UPDATE OR DELETE ON rem1\nts1|pid123867|:30045 FOR EACH ROW EXECUTE PROCEDURE\ntrigger_data(23,'skidoo');\nts1|pid123867|:30045 2022-03-02 01:47:57.106 UTC [124161] ERROR:  function\ntrigger_data() does not exist\nts1|pid123867|:30045 2022-03-02 01:47:57.106 UTC [124161] STATEMENT:\n  CREATE TRIGGER trig_row_after\nts1|pid123867|:30045 AFTER INSERT OR UPDATE OR DELETE ON rem1\nts1|pid123867|:30045 FOR EACH ROW EXECUTE PROCEDURE\ntrigger_data(23,'skidoo');\n\nI think the ASAN build on Alma is able to detect errors such as this.\n\nCheers",
"msg_date": "Thu, 3 Mar 2022 08:34:33 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: casting operand to proper type in BlockIdGetBlockNumber"
},
{
"msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> On Thu, Mar 3, 2022 at 8:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Oh, I misread this as a compile-time warning, but it must be from ASAN.\n>> Was the test case one of your own, or just our normal regression tests?\n\n> The Jenkins test is ported from contrib/postgres_fdw/sql/postgres_fdw.sql -\n> so theoretically PG would see the same error for clang12 on Alma.\n\nHmph. I tried enabling -fsanitize=undefined here, and I get some\ncomplaints about passing null pointers to memcmp and the like, but\nnothing about this shift (tested with clang 12.0.1 on RHEL8 as well\nas clang 13.0.0 on Fedora 35). What compiler switches are being\nused exactly?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Mar 2022 12:13:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: casting operand to proper type in BlockIdGetBlockNumber"
},
{
"msg_contents": "On Thu, Mar 3, 2022 at 9:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Zhihong Yu <zyu@yugabyte.com> writes:\n> > On Thu, Mar 3, 2022 at 8:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Oh, I misread this as a compile-time warning, but it must be from ASAN.\n> >> Was the test case one of your own, or just our normal regression tests?\n>\n> > The Jenkins test is ported from\n> contrib/postgres_fdw/sql/postgres_fdw.sql -\n> > so theoretically PG would see the same error for clang12 on Alma.\n>\n> Hmph. I tried enabling -fsanitize=undefined here, and I get some\n> complaints about passing null pointers to memcmp and the like, but\n> nothing about this shift (tested with clang 12.0.1 on RHEL8 as well\n> as clang 13.0.0 on Fedora 35). What compiler switches are being\n> used exactly?\n>\n> regards, tom lane\n>\nHi,\nThis is from (internal Jenkins) build log:\n\nCMAKE_C_FLAGS -Werror -fno-strict-aliasing -Wall -msse4.2 -Winvalid-pch\n-pthread -DBOOST_BIND_NO_PLACEHOLDERS\n-DBOOST_UUID_RANDOM_PROVIDER_FORCE_POSIX -DROCKSDB_PLATFORM_POSIX\n-DBOOST_ERROR_CODE_HEADER_ONLY -march=ivybridge -mcx16\n-DYB_COMPILER_TYPE=clang12 -DYB_COMPILER_VERSION=12.0.1\n-DROCKSDB_LIB_IO_POSIX -DSNAPPY -DLZ4 -DZLIB -mno-avx -mno-bmi -mno-bmi2\n-mno-fma -D__STDC_FORMAT_MACROS -Wno-deprecated-declarations\n-DGFLAGS=gflags -Werror=enum-compare -Werror=switch -Werror=return-type\n -Werror=string-plus-int -Werror=return-stack-address\n-Werror=implicit-fallthrough -D_LIBCPP_ENABLE_THREAD_SAFETY_ANNOTATIONS\n-Wthread-safety-analysis -Wshorten-64-to-32 -ggdb -O1\n-fno-omit-frame-pointer -DFASTDEBUG -Wno-ambiguous-member-template\n-Wimplicit-fallthrough -Qunused-arguments -stdlib=libc++\n-D_GLIBCXX_EXTERN_TEMPLATE=0 -nostdinc++ -stdlib=libc++\n-D_GLIBCXX_EXTERN_TEMPLATE=0 -nostdinc++ -shared-libasan -fsanitize=address\n-DADDRESS_SANITIZER -fsanitize=undefined -fno-sanitize-recover=all\n-fno-sanitize=alignment,vptr -fsanitize-recover=float-cast-overflow\n-fsanitize-blacklist=... 
-fPIC\n\nI would suggest trying out the build on Alma Linux.\n\nFYI",
"msg_date": "Thu, 3 Mar 2022 09:28:57 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: casting operand to proper type in BlockIdGetBlockNumber"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-03 12:13:40 -0500, Tom Lane wrote:\n> Zhihong Yu <zyu@yugabyte.com> writes:\n> > On Thu, Mar 3, 2022 at 8:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Oh, I misread this as a compile-time warning, but it must be from ASAN.\n> >> Was the test case one of your own, or just our normal regression tests?\n> \n> > The Jenkins test is ported from contrib/postgres_fdw/sql/postgres_fdw.sql -\n> > so theoretically PG would see the same error for clang12 on Alma.\n> \n> Hmph. I tried enabling -fsanitize=undefined here, and I get some\n> complaints about passing null pointers to memcmp and the like, but\n> nothing about this shift (tested with clang 12.0.1 on RHEL8 as well\n> as clang 13.0.0 on Fedora 35).\n\nWe should fix these passing-null-pointer cases...\n\n\n> What compiler switches are being used exactly?\n\nFWIW, I've successfully used:\n-Og -fsanitize=alignment,undefined -fno-sanitize=nonnull-attribute -fno-sanitize=float-cast-overflow -fno-sanitize-recover=all\n\nNeed to manually add -ldl, because -fsanitize breaks our dl test (it uses\ndlopen, but not dlsym). Was planning to submit a fix for that...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 3 Mar 2022 09:29:37 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: casting operand to proper type in BlockIdGetBlockNumber"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-03-03 12:13:40 -0500, Tom Lane wrote:\n>> Hmph. I tried enabling -fsanitize=undefined here, and I get some\n>> complaints about passing null pointers to memcmp and the like, but\n>> nothing about this shift (tested with clang 12.0.1 on RHEL8 as well\n>> as clang 13.0.0 on Fedora 35).\n\n> We should fix these passing-null-pointer cases...\n\nYeah, working on that now. But I'm pretty confused about why I can't\nduplicate this shift complaint. Alma is a Red Hat clone no? Why\ndoesn't its compiler act the same as RHEL8's?\n\n> Need to manually add -ldl, because -fsanitize breaks our dl test (it uses\n> dlopen, but not dlsym). Was planning to submit a fix for that...\n\nHmm ... didn't get through check-world yet, but I don't see that\nso far.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Mar 2022 12:45:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: casting operand to proper type in BlockIdGetBlockNumber"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-03 12:45:22 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-03-03 12:13:40 -0500, Tom Lane wrote:\n> >> Hmph. I tried enabling -fsanitize=undefined here, and I get some\n> >> complaints about passing null pointers to memcmp and the like, but\n> >> nothing about this shift (tested with clang 12.0.1 on RHEL8 as well\n> >> as clang 13.0.0 on Fedora 35).\n> \n> > We should fix these passing-null-pointer cases...\n> \n> Yeah, working on that now. But I'm pretty confused about why I can't\n> duplicate this shift complaint. Alma is a Red Hat clone no? Why\n> doesn't its compiler act the same as RHEL8's?\n\nI didn't see that either. It could be a question of building with full\noptimizations / asserts vs without?\n\n\n> > Need to manually add -ldl, because -fsanitize breaks our dl test (it uses\n> > dlopen, but not dlsym). Was planning to submit a fix for that...\n> \n> Hmm ... didn't get through check-world yet, but I don't see that\n> so far.\n\nOh, for me it doesn't even build. Perhaps one of the dependencies injects it\nas well?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 3 Mar 2022 09:50:22 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: casting operand to proper type in BlockIdGetBlockNumber"
},
{
"msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> We should fix these passing-null-pointer cases...\n\n> Yeah, working on that now.\n\nThe attached is enough to get through check-world with\n\"-fsanitize=undefined\" using RHEL8's clang 12.0.1.\nMost of it is the same old null-pointer-with-zero-count\nbusiness, but the change in numeric.c is a different\nissue: \"ln(-1.0)\" ends up computing log10(0), which\nproduces -Inf, and then tries to assign that to an integer.\nWe don't actually care about the garbage result in that case,\nso it's only a sanitizer complaint not a live bug.\n\nI'm not sure whether to back-patch --- looking through the\ngit logs, it seems we've back-patched some fixes like these\nand not others. Thoughts?\n\nIn any case, if we're going to take this seriously it seems\nlike we need a buildfarm machine or two testing this option.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 03 Mar 2022 14:00:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: casting operand to proper type in BlockIdGetBlockNumber"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-03 14:00:14 -0500, Tom Lane wrote:\n> The attached is enough to get through check-world with\n> \"-fsanitize=undefined\" using RHEL8's clang 12.0.1.\n\nCool.\n\n\n> I'm not sure whether to back-patch --- looking through the\n> git logs, it seems we've back-patched some fixes like these\n> and not others. Thoughts?\n\nIt'd be easier to run a BF animal if we fixed it everywhere.\n\n\n> In any case, if we're going to take this seriously it seems like we need a\n> buildfarm machine or two testing this option.\n\nI was planning to add it to the CI runs, just didn't have energy to fix the\nfailures yet. But you just did (although I think there might be failure or two\nmore on new-ish debians).\n\nFor the buildfarm, I could enable it on flaviventris? That runs an\nexperimental gcc, without optimization (whereas serinus runs with\noptimization). Which seems reasonable to combine with sanitizers?\n\nFor CI I compared the cost of the different sanitizers. It looks like\nalignment sanitizer is almost free, undefined is pretty cheap, and address\nsanitizer is pretty expensive (but still much cheaper than valgrind).\n\nGreetings,\n\nAndres Freund\n\nPS: Hm, seems mylodon died a while ago... Need to check what's up with that.\n\n\n",
"msg_date": "Thu, 3 Mar 2022 11:46:00 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: casting operand to proper type in BlockIdGetBlockNumber"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-03-03 14:00:14 -0500, Tom Lane wrote:\n>> I'm not sure whether to back-patch --- looking through the\n>> git logs, it seems we've back-patched some fixes like these\n>> and not others. Thoughts?\n\n> It'd be easier to run a BF animal if we fixed it everywhere.\n\nFair enough, will BP.\n\n>> In any case, if we're going to take this seriously it seems like we need a\n>> buildfarm machine or two testing this option.\n\n> For the buildfarm, I could enable it on flaviventris? That runs an\n> experimental gcc, without optimization (whereas serinus runs with\n> optimization). Which seems reasonable to combine with sanitizers?\n\nDunno. I already found out that my Mac laptop (w/ clang 13) detects\nthe numeric.c problem but not any of the other ones. The messages\non RHEL8 cite where the system headers declare memcmp and friends\nwith \"attribute nonnull\", so I'm betting that Apple's headers lack\nthat annotation.\n\nI also tried adding the various -m switches shown in Zhihong's\nCFLAGS setting, but that still didn't repro the Alma warning\nfor me.\n\nSo it definitely seems like it's *real* system dependent which of\nthese warnings you get :-(.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Mar 2022 15:31:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: casting operand to proper type in BlockIdGetBlockNumber"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-03 15:31:51 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-03-03 14:00:14 -0500, Tom Lane wrote:\n> > For the buildfarm, I could enable it on flaviventris? That runs an\n> > experimental gcc, without optimization (whereas serinus runs with\n> > optimization). Which seems reasonable to combine with sanitizers?\n> \n> Dunno. I already found out that my Mac laptop (w/ clang 13) detects\n> the numeric.c problem but not any of the other ones. The messages\n> on RHEL8 cite where the system headers declare memcmp and friends\n> with \"attribute nonnull\", so I'm betting that Apple's headers lack\n> that annotation.\n\nThe sanitizers are documented to work best on linux... As flaviventris runs\nlinux, so I'm not sure what your concern is?\n\nI think basically newer glibc versions have more annotations, so ubsan will\nhave more things to fail against. So it'd be good to have a fairly regularly\nupdated OS.\n\n\n> I also tried adding the various -m switches shown in Zhihong's\n> CFLAGS setting, but that still didn't repro the Alma warning\n> for me.\n\nThe compilation flags make it look like it's from a run of yugabyte's fork,\nrather than plain postgres.\n\nThe message says:\nsrc/backend/utils/adt/tid.c:112:16: runtime error: left shift of 65535 by 16 places cannot be represented in type 'int'\n\nAfaics that means bi_hi is 65535. So either we're dealing with a very large\nrelation or BlockIdGetBlockNumber() is getting passed InvalidBlockNumber?\n\nIt might be enough to do something like\nSELECT * FROM pg_class WHERE ctid = '(65535, 17)';\nto trigger the problem?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 3 Mar 2022 13:11:27 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: casting operand to proper type in BlockIdGetBlockNumber"
},
{
"msg_contents": "On Thu, Mar 3, 2022 at 1:11 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-03-03 15:31:51 -0500, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > On 2022-03-03 14:00:14 -0500, Tom Lane wrote:\n> > > For the buildfarm, I could enable it on flaviventris? That runs an\n> > > experimental gcc, without optimization (whereas serinus runs with\n> > > optimization). Which seems reasonable to combine with sanitizers?\n> >\n> > Dunno. I already found out that my Mac laptop (w/ clang 13) detects\n> > the numeric.c problem but not any of the other ones. The messages\n> > on RHEL8 cite where the system headers declare memcmp and friends\n> > with \"attribute nonnull\", so I'm betting that Apple's headers lack\n> > that annotation.\n>\n> The sanitizers are documented to work best on linux... As flaviventris runs\n> linux, so I'm not sure what your concern is?\n>\n> I think basically newer glibc versions have more annotations, so ubsan will\n> have more things to fail against. So it'd be good to have a fairly\n> regularly\n> updated OS.\n>\n>\n> > I also tried adding the various -m switches shown in Zhihong's\n> > CFLAGS setting, but that still didn't repro the Alma warning\n> > for me.\n>\n> The compilation flags make it look like it's from a run of yugabyte's fork,\n> rather than plain postgres.\n>\nHi,\nI should mention that, the PG subtree in yugabyte is currently aligned with\nPG 11.\nThere have been backports from PG 12, but code related to tid.c\nand block.h, etc is the same with upstream PG.\n\nThe fdw tests are backported from PG as well.\n\n\n>\n> The message says:\n> src/backend/utils/adt/tid.c:112:16: runtime error: left shift of 65535 by\n> 16 places cannot be represented in type 'int'\n>\n> Afaics that means bi_hi is 65535. 
So either we're dealing with a very large\n> relation or BlockIdGetBlockNumber() is getting passed InvalidBlockNumber?\n>\n> It might be enough to do something like\n> SELECT * FROM pg_class WHERE ctid = '(65535, 17)';\n> to trigger the problem?\n>\n\nThe above syntax is not currently supported in yugabyte.\n\nFYI",
"msg_date": "Thu, 3 Mar 2022 13:21:39 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: casting operand to proper type in BlockIdGetBlockNumber"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> The message says:\n> src/backend/utils/adt/tid.c:112:16: runtime error: left shift of 65535 by 16 places cannot be represented in type 'int'\n\n> Afaics that means bi_hi is 65535. So either we're dealing with a very large\n> relation or BlockIdGetBlockNumber() is getting passed InvalidBlockNumber?\n\nPresumably the latter, since we surely aren't using any terabyte-size\nrelations in our tests.\n\n> It might be enough to do something like\n> SELECT * FROM pg_class WHERE ctid = '(65535, 17)';\n> to trigger the problem?\n\nI tried to provoke it with cases like\n\n# select '(-1,0)'::tid;\n tid \n----------------\n (4294967295,0)\n(1 row)\n\n# select '(4000000000,1)'::tid;\n tid \n----------------\n (4000000000,1)\n(1 row)\n\nwithout success.\n\nOn a nearby topic, I see that tidin's overflow checks are somewhere\nbetween sloppy and nonexistent:\n\n# select '(40000000000,1)'::tid;\n tid \n----------------\n (1345294336,1)\n(1 row)\n\nI think I'll fix that while I'm looking at it ... but it still\ndoesn't explain why no complaint in tidout.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Mar 2022 16:45:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: casting operand to proper type in BlockIdGetBlockNumber"
}
] |
[
{
    "msg_contents": "Hello.\n\nThe CF-CI complained on one of my patches for a reason seemingly\nunrelated to the patch.\n\nhttps://cirrus-ci.com/task/5544213843542016?logs=test_world#L1666\n\n> diff -U3 /tmp/cirrus-ci-build/contrib/test_decoding/expected/slot_creation_error.out /tmp/cirrus-ci-build/contrib/test_decoding/output_iso/results/slot_creation_error.out\n> --- /tmp/cirrus-ci-build/contrib/test_decoding/expected/slot_creation_error.out\t2022-03-03 22:45:04.708072000 +0000\n> +++ /tmp/cirrus-ci-build/contrib/test_decoding/output_iso/results/slot_creation_error.out\t2022-03-03 22:54:49.621351000 +0000\n> @@ -96,13 +96,13 @@\n>  t \n> (1 row)\n> \n> +step s1_c: COMMIT;\n> step s2_init: <... completed>\n> FATAL:  terminating connection due to administrator command\n> server closed the connection unexpectedly\n> \tThis probably means the server terminated abnormally\n> \tbefore or while processing the request.\n> \n> -step s1_c: COMMIT;\n> step s1_view_slot: \n>     SELECT slot_name, slot_type, active FROM pg_replication_slots WHERE slot_name = 'slot_creation_error'\n\nThis comes from the permutation 'permutation s1_b s1_xid s2_init\ns1_terminate_s2 s1_c s1_view_slot'. That means the process\ntermination by s1_terminate_s2 is a bit delayed until the next s1_c\nends. So it is a rare false failure, but it is annoying enough on the\nCI. It seems to me we need to wait for process termination at that\npoint. postgres_fdw does that in its regression test.\n\nThoughts?\n\nSimilar use is found in temp-schema-cleanup. There's another possible\ninstability between s2_advisory and s2_check_schema, but this change\nalone reduces the chance for false failures.\n\nThe attached fixes both points.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 04 Mar 2022 11:33:47 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
    "msg_subject": "false failure of test_decoding regression test"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile playing with tablespaces and recovery in a TAP test, I have\nnoticed that retrieving the location of a tablespace created with\nallow_in_place_tablespaces enabled fails in pg_tablespace_location(),\nbecause readlink() sees a directory in this case.\n\nThe use may be limited to any automated testing and\nallow_in_place_tablespaces is a developer GUC, still it seems to me\nthat there is an argument to allow the case rather than tweak any\ntests to hardcode a path with the tablespace OID. And any other code\npaths are able to handle such tablespaces, be they in recovery or in\ntablespace create/drop.\n\nA junction point is a directory on WIN32 as far as I recall, but\npgreadlink() is here to ensure that we get the correct path on\na source found as pgwin32_is_junction(), so we can rely on that. This\nstuff has led me to the attached.\n\nThoughts?\n--\nMichael",
"msg_date": "Fri, 4 Mar 2022 15:44:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
    "msg_contents": "At Fri, 4 Mar 2022 15:44:22 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> Hi all,\n> \n> While playing with tablespaces and recovery in a TAP test, I have\n> noticed that retrieving the location of a tablespace created with\n> allow_in_place_tablespaces enabled fails in pg_tablespace_location(),\n> because readlink() sees a directory in this case.\n\nERROR:  could not read symbolic link \"pg_tblspc/16407\": Invalid argument\n\n> The use may be limited to any automated testing and\n> allow_in_place_tablespaces is a developer GUC, still it seems to me\n> that there is an argument to allow the case rather than tweak any\n> tests to hardcode a path with the tablespace OID.  And any other code\n> paths are able to handle such tablespaces, be they in recovery or in\n> tablespace create/drop.\n\n+1\n\n> A junction point is a directory on WIN32 as far as I recall, but\n> pgreadlink() is here to ensure that we get the correct path on\n> a source found as pgwin32_is_junction(), so we can rely on that.  This\n> stuff has led me to the attached.\n> \n> Thoughts?\n\nThe function I think is expected to return an absolute path but it\nreturns a relative path for in-place tablespaces. While it is\napparently inconvenient for general use, there might be a case where we\nwant to know whether the tablespace is in-place or not. So I'm not\nsure which is better..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 04 Mar 2022 16:41:03 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_tablespace_location() failure with\n allow_in_place_tablespaces"
},
{
"msg_contents": "At Fri, 04 Mar 2022 16:41:03 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Fri, 4 Mar 2022 15:44:22 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> > The use may be limited to any automated testing and\n> > allow_in_place_tablespaces is a developer GUC, still it seems to me\n> > that there is an argument to allow the case rather than tweak any\n> > tests to hardcode a path with the tablespace OID. And any other code\n> > paths are able to handle such tablespaces, be they in recovery or in\n> > tablespace create/drop.\n> \n> +1\n\nBy the way, regardless of the patch, I got an error from pg_basebackup\nfor an in-place tablespace. pg_do_start_backup calls readlink\nbelieving pg_tblspc/* is always a symlink.\n\n# Running: pg_basebackup -D /home/horiguti/work/worktrees/tsp_replay_2/src/test/recovery/tmp_check/t_029_replay_tsp_drops_primary1_data/backup/my_backup -h /tmp/X8E4nbF4en -p 51584 --checkpoint fast --no-sync\nWARNING: could not read symbolic link \"pg_tblspc/16384\": Invalid argument\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 04 Mar 2022 16:54:49 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_tablespace_location() failure with\n allow_in_place_tablespaces"
},
{
"msg_contents": "At Fri, 04 Mar 2022 16:54:49 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Fri, 04 Mar 2022 16:41:03 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > At Fri, 4 Mar 2022 15:44:22 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> > > The use may be limited to any automated testing and\n> > > allow_in_place_tablespaces is a developer GUC, still it seems to me\n> > > that there is an argument to allow the case rather than tweak any\n> > > tests to hardcode a path with the tablespace OID. And any other code\n> > > paths are able to handle such tablespaces, be they in recovery or in\n> > > tablespace create/drop.\n> > \n> > +1\n> \n> By the way, regardless of the patch, I got an error from pg_basebackup\n> for an in-place tablespace. pg_do_start_backup calls readlink\n> believing pg_tblspc/* is always a symlink.\n> \n> # Running: pg_basebackup -D /home/horiguti/work/worktrees/tsp_replay_2/src/test/recovery/tmp_check/t_029_replay_tsp_drops_primary1_data/backup/my_backup -h /tmp/X8E4nbF4en -p 51584 --checkpoint fast --no-sync\n> WARNING: could not read symbolic link \"pg_tblspc/16384\": Invalid argument\n\nSo now we know that there are three places that needs the same\nprocessing.\n\npg_tablespace_location: this patch tries to fix\nsendDir: it already supports in-place tsp\ndo_pg_start_backup: not supports in-place tsp yet.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 04 Mar 2022 17:28:45 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_tablespace_location() failure with\n allow_in_place_tablespaces"
},
{
"msg_contents": "At Fri, 04 Mar 2022 17:28:45 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > By the way, regardless of the patch, I got an error from pg_basebackup\n> > for an in-place tablespace. pg_do_start_backup calls readlink\n> > believing pg_tblspc/* is always a symlink.\n> > \n> > # Running: pg_basebackup -D /home/horiguti/work/worktrees/tsp_replay_2/src/test/recovery/tmp_check/t_029_replay_tsp_drops_primary1_data/backup/my_backup -h /tmp/X8E4nbF4en -p 51584 --checkpoint fast --no-sync\n> > WARNING: could not read symbolic link \"pg_tblspc/16384\": Invalid argument\n> \n> So now we know that there are three places that needs the same\n> processing.\n> \n> pg_tablespace_location: this patch tries to fix\n> sendDir: it already supports in-place tsp\n> do_pg_start_backup: not supports in-place tsp yet.\n\nAnd I made a quick hack on do_pg_start_backup. And I found that\npg_basebackup copies in-place tablespaces under the *current\ndirectory*, which is not ok at all:(\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 04 Mar 2022 18:04:00 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_tablespace_location() failure with\n allow_in_place_tablespaces"
},
{
"msg_contents": "On Fri, Mar 4, 2022 at 10:04 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> And I made a quick hack on do_pg_start_backup. And I found that\n> pg_basebackup copies in-place tablespaces under the *current\n> directory*, which is not ok at all:(\n\nHmm. Which OS are you on? Looks OK here -- the \"in place\" tablespace\ngets copied as a directory under pg_tblspc, no symlink:\n\npostgres=# set allow_in_place_tablespaces = on;\nSET\npostgres=# create tablespace ts1 location '';\nCREATE TABLESPACE\npostgres=# create table t (i int) tablespace ts1;\nCREATE TABLE\npostgres=# insert into t values (1), (2);\nINSERT 0 2\npostgres=# create user replication replication;\nCREATE ROLE\n\n$ pg_basebackup --user replication -D pgdata2\n$ ls -slaph pgdata/pg_tblspc/\ntotal 4.0K\n 0 drwx------ 3 tmunro tmunro 19 Mar 4 23:16 ./\n4.0K drwx------ 19 tmunro tmunro 4.0K Mar 4 23:16 ../\n 0 drwx------ 3 tmunro tmunro 29 Mar 4 23:16 16384/\n$ ls -slaph pgdata2/pg_tblspc/\ntotal 4.0K\n 0 drwx------ 3 tmunro tmunro 19 Mar 4 23:16 ./\n4.0K drwx------ 19 tmunro tmunro 4.0K Mar 4 23:16 ../\n 0 drwx------ 3 tmunro tmunro 29 Mar 4 23:16 16384/\n$ ls -slaph pgdata/pg_tblspc/16384/PG_15_202203031/5/\ntotal 8.0K\n 0 drwx------ 2 tmunro tmunro 19 Mar 4 23:16 ./\n 0 drwx------ 3 tmunro tmunro 15 Mar 4 23:16 ../\n8.0K -rw------- 1 tmunro tmunro 8.0K Mar 4 23:16 16385\n$ ls -slaph pgdata2/pg_tblspc/16384/PG_15_202203031/5/\ntotal 8.0K\n 0 drwx------ 2 tmunro tmunro 19 Mar 4 23:16 ./\n 0 drwx------ 3 tmunro tmunro 15 Mar 4 23:16 ../\n8.0K -rw------- 1 tmunro tmunro 8.0K Mar 4 23:16 16385\n\nThe warning from readlink() while making the mapping file isn't ideal,\nand perhaps we should suppress that with something like the attached.\nOr does the missing map file entry break something on Windows?",
"msg_date": "Fri, 4 Mar 2022 23:26:43 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "On Fri, Mar 04, 2022 at 06:04:00PM +0900, Kyotaro Horiguchi wrote:\n> And I made a quick hack on do_pg_start_backup. And I found that\n> pg_basebackup copies in-place tablespaces under the *current\n> directory*, which is not ok at all:(\n\nYeah, I have noticed that as well while testing such configurations a\ncouple of hours ago, but I am not sure yet how much we need to care\nabout that as in-place tablespaces are included in the main data\ndirectory anyway, which would be fine for most test purposes we\nusually care about. Perhaps this has an impact on the patch posted on\nthe thread that wants to improve the guarantees around tablespace\ndirectory structures, but I have not studied this thread much to have\nan opinion. And it is Friday.\n--\nMichael",
"msg_date": "Fri, 4 Mar 2022 19:40:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "At Fri, 4 Mar 2022 23:26:43 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Fri, Mar 4, 2022 at 10:04 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > And I made a quick hack on do_pg_start_backup. And I found that\n> > pg_basebackup copies in-place tablespaces under the *current\n> > directory*, which is not ok at all:(\n> \n> Hmm. Which OS are you on? Looks OK here -- the \"in place\" tablespace\n> gets copied as a directory under pg_tblspc, no symlink:\n\n> The warning from readlink() while making the mapping file isn't ideal,\n> and perhaps we should suppress that with something like the attached.\n> Or does the missing map file entry break something on Windows?\n\nAh.. Ok, somehow I thought that pg_basebackup failed for readlink\nfailure and the tweak I made made things worse. I got to make it\nwork.\n\nThanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 07 Mar 2022 16:42:26 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_tablespace_location() failure with\n allow_in_place_tablespaces"
},
{
"msg_contents": "On Fri, Mar 04, 2022 at 03:44:22PM +0900, Michael Paquier wrote:\n> The use may be limited to any automated testing and\n> allow_in_place_tablespaces is a developer GUC, still it seems to me\n> that there is an argument to allow the case rather than tweak any\n> tests to hardcode a path with the tablespace OID. And any other code\n> paths are able to handle such tablespaces, be they in recovery or in\n> tablespace create/drop.\n> \n> A junction point is a directory on WIN32 as far as I recall, but\n> pgreadlink() is here to ensure that we get the correct path on\n> a source found as pgwin32_is_junction(), so we can rely on that. This\n> stuff has led me to the attached.\n\nThomas, I'd rather fix this for the sake of the tests. One point is\nthat the function returns a relative path for in-place tablespaces,\nbut it would be easy enough to append a DataDir. What do you think?\n--\nMichael",
"msg_date": "Mon, 7 Mar 2022 20:36:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "On Fri, Mar 04, 2022 at 11:26:43PM +1300, Thomas Munro wrote:\n> The warning from readlink() while making the mapping file isn't ideal,\n> and perhaps we should suppress that with something like the attached.\n> Or does the missing map file entry break something on Windows?\n\n> @@ -8292,6 +8293,10 @@ do_pg_start_backup(const char *backupidstr, bool fast, TimeLineID *starttli_p,\n> \n> \tsnprintf(fullpath, sizeof(fullpath), \"pg_tblspc/%s\", de->d_name);\n> \n> +\t/* Skip in-place tablespaces (testing use only) */\n> +\tif (get_dirent_type(fullpath, de, false, ERROR) == PGFILETYPE_DIR)\n> +\t\tcontinue;\n\nI saw the warning when testing base backups with in-place tablespaces\nand it did not annoy me much, but, yes, that can be confusing.\n\nJunction points are directories, no? Are you sure that this works\ncorrectly on WIN32? It seems to me that we'd better use readlink()\nonly for entries in pg_tlbspc/ that are PGFILETYPE_LNK on non-WIN32\nand pgwin32_is_junction() on WIN32.\n--\nMichael",
"msg_date": "Mon, 7 Mar 2022 20:58:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "On Tue, Mar 8, 2022 at 12:58 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Fri, Mar 04, 2022 at 11:26:43PM +1300, Thomas Munro wrote:\n> > + /* Skip in-place tablespaces (testing use only) */\n> > + if (get_dirent_type(fullpath, de, false, ERROR) == PGFILETYPE_DIR)\n> > + continue;\n>\n> I saw the warning when testing base backups with in-place tablespaces\n> and it did not annoy me much, but, yes, that can be confusing.\n>\n> Junction points are directories, no? Are you sure that this works\n> correctly on WIN32? It seems to me that we'd better use readlink()\n> only for entries in pg_tlbspc/ that are PGFILETYPE_LNK on non-WIN32\n> and pgwin32_is_junction() on WIN32.\n\nThanks, you're right. Test on a Win10 VM. Here's a new version.",
"msg_date": "Tue, 8 Mar 2022 10:39:06 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "On Tue, Mar 8, 2022 at 10:39 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Test on a Win10 VM.\n\nErm, \"Tested\" (as in, I tested), I meant to write...\n\n\n",
"msg_date": "Tue, 8 Mar 2022 10:43:57 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "At Tue, 8 Mar 2022 10:39:06 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Tue, Mar 8, 2022 at 12:58 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > On Fri, Mar 04, 2022 at 11:26:43PM +1300, Thomas Munro wrote:\n> > > + /* Skip in-place tablespaces (testing use only) */\n> > > + if (get_dirent_type(fullpath, de, false, ERROR) == PGFILETYPE_DIR)\n> > > + continue;\n> >\n> > I saw the warning when testing base backups with in-place tablespaces\n> > and it did not annoy me much, but, yes, that can be confusing.\n> >\n> > Junction points are directories, no? Are you sure that this works\n> > correctly on WIN32? It seems to me that we'd better use readlink()\n> > only for entries in pg_tlbspc/ that are PGFILETYPE_LNK on non-WIN32\n> > and pgwin32_is_junction() on WIN32.\n> \n> Thanks, you're right. Test on a Win10 VM. Here's a new version.\n\nThanks! It works for me on CentOS8 and Windows11.\n\nFYI, on Windows11, pg_basebackup didn't work correctly without the\npatch. So this looks like fixing an undiscovered bug as well.\n\n===\n> pg_basebackup -D copy\nWARNING: could not read symbolic link \"pg_tblspc/16384\": Invalid argument\npg_basebackup: error: tar member has empty name\n\n> dir copy\n Volume in drive C has no label.\n Volume serial number: 10C6-4BA6\n\n Directory of c:\\..\\copy\n\n2022/03/08 09:53 <DIR> .\n2022/03/08 09:53 <DIR> ..\n2022/03/08 09:53 0 nbase.tar\n2022/03/08 09:53 <DIR> pg_wal\n 1 File(s) 0 bytes\n\t\t\t 3 Dir(s) 171,920,613,376 bytes free\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 08 Mar 2022 10:06:50 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_tablespace_location() failure with\n allow_in_place_tablespaces"
},
{
"msg_contents": "On Tue, Mar 08, 2022 at 10:06:50AM +0900, Kyotaro Horiguchi wrote:\n> At Tue, 8 Mar 2022 10:39:06 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n>> Thanks, you're right. Test on a Win10 VM. Here's a new version.\n\nLooks fine to me.\n\n> FYI, on Windows11, pg_basebackup didn't work correctly without the\n> patch. So this looks like fixing an undiscovered bug as well.\n\nWell, that's not really a long-time bug but just a side effect of\nin-place tablespaces because we don't use them in many test cases \nyet, is it?\n\n>> pg_basebackup -D copy\n> WARNING: could not read symbolic link \"pg_tblspc/16384\": Invalid argument\n> pg_basebackup: error: tar member has empty name\n> \n> 1 File(s) 0 bytes\n> \t\t\t 3 Dir(s) 171,920,613,376 bytes free\n\nThat's a lot of free space.\n--\nMichael",
"msg_date": "Tue, 8 Mar 2022 10:28:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "At Tue, 8 Mar 2022 10:28:46 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Mar 08, 2022 at 10:06:50AM +0900, Kyotaro Horiguchi wrote:\n> > At Tue, 8 Mar 2022 10:39:06 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n> >> Thanks, you're right. Test on a Win10 VM. Here's a new version.\n> \n> Looks fine to me.\n> \n> > FYI, on Windows11, pg_basebackup didn't work correctly without the\n> > patch. So this looks like fixing an undiscovered bug as well.\n> \n> Well, that's not really a long-time bug but just a side effect of\n> in-place tablespaces because we don't use them in many test cases \n> yet, is it?\n\nNo, we don't. So just FYI.\n\n> >> pg_basebackup -D copy\n> > WARNING: could not read symbolic link \"pg_tblspc/16384\": Invalid argument\n> > pg_basebackup: error: tar member has empty name\n> > \n> > 1 File(s) 0 bytes\n> > \t\t\t 3 Dir(s) 171,920,613,376 bytes free\n> \n> That's a lot of free space.\n\nThe laptop has a 512GB storage, so 160GB is pretty normal, maybe.\n129GB of the storage is used by some VMs..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 08 Mar 2022 12:01:03 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_tablespace_location() failure with\n allow_in_place_tablespaces"
},
{
"msg_contents": "On Tue, Mar 8, 2022 at 4:01 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Tue, 8 Mar 2022 10:28:46 +0900, Michael Paquier <michael@paquier.xyz> wrote in\n> > On Tue, Mar 08, 2022 at 10:06:50AM +0900, Kyotaro Horiguchi wrote:\n> > > At Tue, 8 Mar 2022 10:39:06 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in\n> > >> Thanks, you're right. Test on a Win10 VM. Here's a new version.\n> >\n> > Looks fine to me.\n> >\n> > > FYI, on Windows11, pg_basebackup didn't work correctly without the\n> > > patch. So this looks like fixing an undiscovered bug as well.\n> >\n> > Well, that's not really a long-time bug but just a side effect of\n> > in-place tablespaces because we don't use them in many test cases\n> > yet, is it?\n>\n> No, we don't. So just FYI.\n\nOk, I pushed the fix for pg_basebackup.\n\nAs for the complaint about pg_tablespace_location() failing, would it\nbe better to return an empty string? That's what was passed in as\nLOCATION. Something like the attached.",
"msg_date": "Tue, 15 Mar 2022 14:33:17 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "On Tue, Mar 15, 2022 at 2:33 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> As for the complaint about pg_tablespace_location() failing, would it\n> be better to return an empty string? That's what was passed in as\n> LOCATION. Something like the attached.\n\n(Hrrmm, the contract for pgwin32_is_junction() is a little weird:\nfalse means \"success, but no\" and also \"failure, you should check\nerrno\". But we never do.)\n\n\n",
"msg_date": "Tue, 15 Mar 2022 14:44:03 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "On Tue, Mar 15, 2022 at 02:33:17PM +1300, Thomas Munro wrote:\n> Ok, I pushed the fix for pg_basebackup.\n> \n> As for the complaint about pg_tablespace_location() failing, would it\n> be better to return an empty string? That's what was passed in as\n> LOCATION. Something like the attached.\n\nHmm, I don't think so. The point of the function is to be able to\nknow the location of a tablespace at SQL level so as we don't have any\nneed to hardcode its location within any external tests (be it a\npg_regress test or a TAP test) based on how in-place tablespace paths\nare built in the backend, so I think that we'd better report either a\nrelative path from data_directory or an absolute path, but not an\nempty string.\n\nIn any case, I'd suggest to add a regression test. What I have sent\nupthread would be portable enough. \n--\nMichael",
"msg_date": "Tue, 15 Mar 2022 10:50:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "On Tue, Mar 15, 2022 at 2:50 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Mar 15, 2022 at 02:33:17PM +1300, Thomas Munro wrote:\n> > As for the complaint about pg_tablespace_location() failing, would it\n> > be better to return an empty string? That's what was passed in as\n> > LOCATION. Something like the attached.\n>\n> Hmm, I don't think so. The point of the function is to be able to\n> know the location of a tablespace at SQL level so as we don't have any\n> need to hardcode its location within any external tests (be it a\n> pg_regress test or a TAP test) based on how in-place tablespace paths\n> are built in the backend, so I think that we'd better report either a\n> relative path from data_directory or an absolute path, but not an\n> empty string.\n>\n> In any case, I'd suggest to add a regression test. What I have sent\n> upthread would be portable enough.\n\nFair enough. No objections here.\n\n\n",
"msg_date": "Tue, 15 Mar 2022 15:55:56 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "On Tue, Mar 15, 2022 at 03:55:56PM +1300, Thomas Munro wrote:\n> On Tue, Mar 15, 2022 at 2:50 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> On Tue, Mar 15, 2022 at 02:33:17PM +1300, Thomas Munro wrote:\n>> > As for the complaint about pg_tablespace_location() failing, would it\n>> > be better to return an empty string? That's what was passed in as\n>> > LOCATION. Something like the attached.\n>>\n>> Hmm, I don't think so. The point of the function is to be able to\n>> know the location of a tablespace at SQL level so as we don't have any\n>> need to hardcode its location within any external tests (be it a\n>> pg_regress test or a TAP test) based on how in-place tablespace paths\n>> are built in the backend, so I think that we'd better report either a\n>> relative path from data_directory or an absolute path, but not an\n>> empty string.\n>>\n>> In any case, I'd suggest to add a regression test. What I have sent\n>> upthread would be portable enough.\n> \n> Fair enough. No objections here.\n\nSo, which one of a relative path or an absolute path do you think\nwould be better for the user? My preference tends toward the relative\npath, as we know that all those tablespaces stay in pg_tblspc/ so one\ncan make the difference with normal tablespaces more easily. The\nbarrier is thin, though :p\n--\nMichael",
"msg_date": "Tue, 15 Mar 2022 18:30:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "On Tue, Mar 15, 2022 at 10:30 PM Michael Paquier <michael@paquier.xyz> wrote:\n> So, which one of a relative path or an absolute path do you think\n> would be better for the user? My preference tends toward the relative\n> path, as we know that all those tablespaces stay in pg_tblspc/ so one\n> can make the difference with normal tablespaces more easily. The\n> barrier is thin, though :p\n\nSounds good to me.\n\n\n",
"msg_date": "Tue, 15 Mar 2022 23:16:52 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "At Tue, 15 Mar 2022 23:16:52 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Tue, Mar 15, 2022 at 10:30 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > So, which one of a relative path or an absolute path do you think\n> > would be better for the user? My preference tends toward the relative\n> > path, as we know that all those tablespaces stay in pg_tblspc/ so one\n> > can make the difference with normal tablespaces more easily. The\n> > barrier is thin, though :p\n> \n> Sounds good to me.\n\n+1. Desn't the doc need to mention that?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 16 Mar 2022 10:34:15 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_tablespace_location() failure with\n allow_in_place_tablespaces"
},
{
"msg_contents": "On Wed, Mar 16, 2022 at 10:34:15AM +0900, Kyotaro Horiguchi wrote:\n> +1. Desn't the doc need to mention that?\n\nYes, I agree that it makes sense to add a note, even if\nallow_in_place_tablespaces is a developer option. I have added the\nfollowing paragraph in the docs:\n+ A full path of the symbolic link in <filename>pg_tblspc/</filename>\n+ is returned. A relative path to the data directory is returned\n+ for tablespaces created with\n+ <xref linkend=\"guc-allow-in-place-tablespaces\"/> enabled.\n\nAnother thing that was annoying in the first version of the patch is\nthe useless call to lstat() on Windows, not needed because it is\npossible to rely just on pgwin32_is_junction() to check if readlink()\nshould be called or not.\n\nThis leads me to the revised version attached. What do you think?\n--\nMichael",
"msg_date": "Wed, 16 Mar 2022 15:42:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "At Wed, 16 Mar 2022 15:42:52 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, Mar 16, 2022 at 10:34:15AM +0900, Kyotaro Horiguchi wrote:\n> > +1. Desn't the doc need to mention that?\n> \n> Yes, I agree that it makes sense to add a note, even if\n> allow_in_place_tablespaces is a developer option. I have added the\n> following paragraph in the docs:\n> + A full path of the symbolic link in <filename>pg_tblspc/</filename>\n> + is returned. A relative path to the data directory is returned\n> + for tablespaces created with\n> + <xref linkend=\"guc-allow-in-place-tablespaces\"/> enabled.\n\nI'm not sure that the \"of the symbolic link in pg_tblspc/\" is\nneeded. And allow_in_place_tablespaces alone doesn't create in-place\ntablesapce. So this might need rethink at least for the second point.\n\n> Another thing that was annoying in the first version of the patch is\n> the useless call to lstat() on Windows, not needed because it is\n> possible to rely just on pgwin32_is_junction() to check if readlink()\n> should be called or not.\n\nAgreed. And v2 looks cleaner.\n\nThe test detects the lack of the feature.\nIt successfully builds and runs on Rocky8 and Windows11.\n\n> This leads me to the revised version attached. What do you think?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 16 Mar 2022 17:15:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_tablespace_location() failure with\n allow_in_place_tablespaces"
},
{
"msg_contents": "On Wed, Mar 16, 2022 at 05:15:58PM +0900, Kyotaro Horiguchi wrote:\n> I'm not sure that the \"of the symbolic link in pg_tblspc/\" is\n> needed. And allow_in_place_tablespaces alone doesn't create in-place\n> tablespace. So this might need rethink at least for the second point.\n\nSurely this can be improved. I was not satisfied with this paragraph\nafter re-reading it this morning, so I have just removed it, rewording\nslightly the part for in-place tablespaces that is still necessary.\n\n> Agreed. And v2 looks cleaner.\n> \n> The test detects the lack of the feature.\n> It successfully builds and runs on Rocky8 and Windows11.\n\nThanks for the review. After a second look, it seemed fine so I have\napplied it. (I'll try to jump on the tablespace patch for recovery\nsoon-ish-ly if nobody beats me to it.)\n--\nMichael",
"msg_date": "Thu, 17 Mar 2022 11:52:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "On Thu, Mar 17, 2022 at 3:53 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Mar 16, 2022 at 05:15:58PM +0900, Kyotaro Horiguchi wrote:\n> > I'm not sure that the \"of the symbolic link in pg_tblspc/\" is\n> > needed. And allow_in_place_tablespaces alone doesn't create in-place\n> > tablespace. So this might need rethink at least for the second point.\n>\n> Surely this can be improved. I was not satisfied with this paragraph\n> after re-reading it this morning, so I have just removed it, rewording\n> slightly the part for in-place tablespaces that is still necessary.\n\n+ <para>\n+ A relative path to the data directory is returned for tablespaces\n+ created when <xref linkend=\"guc-allow-in-place-tablespaces\"/> is\n+ enabled.\n+ </para>\n+ </entry>\n\nI think what Horiguchi-san was pointing out above is that you need to\nenable the GUC *and* say LOCATION '', which the new paragraph doesn't\ncapture. What do you think about this:\n\nA path relative to the data directory is returned for in-place\ntablespaces (see <xref ...>).\n\n\n",
"msg_date": "Thu, 17 Mar 2022 16:34:30 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "On Thu, Mar 17, 2022 at 04:34:30PM +1300, Thomas Munro wrote:\n> I think what Horiguchi-san was pointing out above is that you need to\n> enable the GUC *and* say LOCATION '', which the new paragraph doesn't\n> capture. What do you think about this:\n> \n> A path relative to the data directory is returned for in-place\n> tablespaces (see <xref ...>).\n\nAn issue I have with this wording is that we give nowhere in the docs\nan explanation of about the term \"in-place tablespace\", even if it can\nbe guessed from the name of the GUC.\n\nAnother idea would be something like that:\n\"A relative path to the data directory is returned for tablespaces\ncreated with an empty location string specified in the CREATE\nTABLESPACE query when allow_in_place_tablespaces is enabled (see link\nblah).\"\n\nBut perhaps that's just me being overly pedantic :p\n--\nMichael",
"msg_date": "Thu, 17 Mar 2022 15:18:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "On Thu, Mar 17, 2022 at 7:18 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Thu, Mar 17, 2022 at 04:34:30PM +1300, Thomas Munro wrote:\n> > I think what Horiguchi-san was pointing out above is that you need to\n> > enable the GUC *and* say LOCATION '', which the new paragraph doesn't\n> > capture. What do you think about this:\n> >\n> > A path relative to the data directory is returned for in-place\n> > tablespaces (see <xref ...>).\n>\n> An issue I have with this wording is that we give nowhere in the docs\n> an explanation of about the term \"in-place tablespace\", even if it can\n> be guessed from the name of the GUC.\n\nMaybe we don't need this paragraph at all. Who is it aimed at?\n\n> Another idea would be something like that:\n> \"A relative path to the data directory is returned for tablespaces\n> created with an empty location string specified in the CREATE\n> TABLESPACE query when allow_in_place_tablespaces is enabled (see link\n> blah).\"\n>\n> But perhaps that's just me being overly pedantic :p\n\nI don't really want to spill details of this developer-only stuff onto\nmore manual sections... It's not really helping users if we confuse\nthem with irrelevant details of a feature they shouldn't be using, is\nit? And the existing treatment \"Returns the file system path that\nthis tablespace is located in\" is not invalidated by this special\ncase, so maybe we shouldn't mention it?\n\n\n",
"msg_date": "Thu, 17 Mar 2022 19:55:30 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "On Thu, Mar 17, 2022 at 07:55:30PM +1300, Thomas Munro wrote:\n> I don't really want to spill details of this developer-only stuff onto\n> more manual sections... It's not really helping users if we confuse\n> them with irrelevant details of a feature they shouldn't be using, is\n> it? And the existing treatment \"Returns the file system path that\n> this tablespace is located in\" is not invalidated by this special\n> case, so maybe we shouldn't mention it?\n\nRight, I see your point. The existing description is not wrong\neither. Fine by me to just drop all that.\n--\nMichael",
"msg_date": "Thu, 17 Mar 2022 16:39:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "At Thu, 17 Mar 2022 16:39:52 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, Mar 17, 2022 at 07:55:30PM +1300, Thomas Munro wrote:\n> > I don't really want to spill details of this developer-only stuff onto\n> > more manual sections... It's not really helping users if we confuse\n> > them with irrelevant details of a feature they shouldn't be using, is\n> > it? And the existing treatment \"Returns the file system path that\n> > this tablespace is located in\" is not invalidated by this special\n> > case, so maybe we shouldn't mention it?\n> \n> Right, I see your point. The existing description is not wrong\n> either. Fine by me to just drop all that.\n\n+1. Sorry for my otiose comment..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 17 Mar 2022 17:07:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_tablespace_location() failure with\n allow_in_place_tablespaces"
},
{
"msg_contents": "On Tue, Mar 8, 2022 at 12:58 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Junction points are directories, no? Are you sure that this works\n> correctly on WIN32? It seems to me that we'd better use readlink()\n> only for entries in pg_tlbspc/ that are PGFILETYPE_LNK on non-WIN32\n> and pgwin32_is_junction() on WIN32.\n\nHmm. So the code we finished up with in the tree looks like this:\n\n#ifdef WIN32\n if (!pgwin32_is_junction(fullpath))\n continue;\n#else\n if (get_dirent_type(fullpath, de, false, ERROR) != PGFILETYPE_LNK)\n continue;\n#endif\n\nAs mentioned, I was unhappy with the lack of error checking for that\ninterface, and I've started a new thread about that, but then I\nstarted wondering if we missed a trick here: get_dirent_type() contain\ncode that wants to return PGFILETYPE_LNK for reparse points. Clearly\nit's not working, based on results reported in this thread. Is that\nexplained by your comment above, \"junction points _are_ directories\",\nand we're testing the attribute flags in the wrong order here?\n\n if ((fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) != 0)\n d->ret.d_type = DT_DIR;\n /* For reparse points dwReserved0 field will contain the ReparseTag */\n else if ((fd.dwFileAttributes & FILE_ATTRIBUTE_REPARSE_POINT) != 0 &&\n (fd.dwReserved0 == IO_REPARSE_TAG_MOUNT_POINT))\n d->ret.d_type = DT_LNK;\n else\n d->ret.d_type = DT_REG;\n\n\n",
"msg_date": "Thu, 24 Mar 2022 16:41:30 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "On Thu, Mar 24, 2022 at 04:41:30PM +1300, Thomas Munro wrote:\n> As mentioned, I was unhappy with the lack of error checking for that\n> interface, and I've started a new thread about that, but then I\n> started wondering if we missed a trick here: get_dirent_type() contain\n> code that wants to return PGFILETYPE_LNK for reparse points. Clearly\n> it's not working, based on results reported in this thread. Is that\n> explained by your comment above, \"junction points _are_ directories\",\n> and we're testing the attribute flags in the wrong order here?\n> \n> if ((fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) != 0)\n> d->ret.d_type = DT_DIR;\n> /* For reparse points dwReserved0 field will contain the ReparseTag */\n> else if ((fd.dwFileAttributes & FILE_ATTRIBUTE_REPARSE_POINT) != 0 &&\n> (fd.dwReserved0 == IO_REPARSE_TAG_MOUNT_POINT))\n> d->ret.d_type = DT_LNK;\n> else\n> d->ret.d_type = DT_REG;\n\nAh, good point. I have not tested on Windows so I am not 100% sure,\nbut indeed it would make sense to reverse both conditions if a\njunction point happens to be marked as both FILE_ATTRIBUTE_DIRECTORY\nand FILE_ATTRIBUTE_REPARSE_POINT when scanning a directory. Based on\na read of the the upstream docs, I guess that this is the case:\nhttps://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-fscc/ca28ec38-f155-4768-81d6-4bfeb8586fc9\n\nNote the \"A file or directory that has an associated reparse point.\"\nfor the description of FILE_ATTRIBUTE_REPARSE_POINT.\n--\nMichael",
"msg_date": "Sat, 26 Mar 2022 14:33:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "On Sat, Mar 26, 2022 at 6:33 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Ah, good point. I have not tested on Windows so I am not 100% sure,\n> but indeed it would make sense to reverse both conditions if a\n> junction point happens to be marked as both FILE_ATTRIBUTE_DIRECTORY\n> and FILE_ATTRIBUTE_REPARSE_POINT when scanning a directory. Based on\n> a read of the the upstream docs, I guess that this is the case:\n> https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-fscc/ca28ec38-f155-4768-81d6-4bfeb8586fc9\n>\n> Note the \"A file or directory that has an associated reparse point.\"\n> for the description of FILE_ATTRIBUTE_REPARSE_POINT.\n\nThat leads to the attached patches, the first of which I'd want to back-patch.\n\nUnfortunately while testing this I realised there is something else\nwrong here: if you take a basebackup using tar format, in-place\ntablespaces are skipped (they should get their own OID.tar file, but\nthey don't, also no error). While it wasn't one of my original goals\nfor in-place tablespaces to work in every way (and I'm certain some\nexternal tools would be confused by them), it seems we're pretty close\nso we should probably figure out that piece of the puzzle. It may be\nobvious why but I didn't have time to dig into that today... perhaps\ninstead of just skipping the readlink() we should be writing something\ndifferent into the mapfile and then restoring as appropriate...",
"msg_date": "Wed, 30 Mar 2022 20:23:25 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "On Wed, Mar 30, 2022 at 08:23:25PM +1300, Thomas Munro wrote:\n> That leads to the attached patches, the first of which I'd want to back-patch.\n\nMakes sense.\n\n- if ((fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) != 0)\n- d->ret.d_type = DT_DIR;\n /* For reparse points dwReserved0 field will contain the ReparseTag */\n- else if ((fd.dwFileAttributes & FILE_ATTRIBUTE_REPARSE_POINT) != 0 &&\n- (fd.dwReserved0 == IO_REPARSE_TAG_MOUNT_POINT))\n+ if ((fd.dwFileAttributes & FILE_ATTRIBUTE_REPARSE_POINT) != 0 &&\n+ (fd.dwReserved0 == IO_REPARSE_TAG_MOUNT_POINT))\n d->ret.d_type = DT_LNK;\n+ else if ((fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) != 0)\n+ d->ret.d_type = DT_DIR;\n\nThis should also work for plain files, so that looks fine to me.\n\n> Unfortunately while testing this I realised there is something else\n> wrong here: if you take a basebackup using tar format, in-place\n> tablespaces are skipped (they should get their own OID.tar file, but\n> they don't, also no error). While it wasn't one of my original goals\n> for in-place tablespaces to work in every way (and I'm certain some\n> external tools would be confused by them), it seems we're pretty close\n> so we should probably figure out that piece of the puzzle. It may be\n> obvious why but I didn't have time to dig into that today... perhaps\n> instead of just skipping the readlink() we should be writing something\n> different into the mapfile and then restoring as appropriate...\n\nYeah, I saw that in-place tablespaces were part of the main tarball in\nbase backups as we rely on the existence of a link to decide if the\ncontents of a path should be separated in a different tarball or not.\nThis does not strike me as a huge problem in itself, TBH, as the\nimprovement would be limited to make sure that the base backups could\nbe correctly restored with multiple tablespaces. 
And you can get\npretty much the same amount of coverage to make sure that the backup\ncontents are correct without fully restoring them.\n--\nMichael",
"msg_date": "Thu, 31 Mar 2022 13:00:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "On Thu, Mar 31, 2022 at 5:01 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Mar 30, 2022 at 08:23:25PM +1300, Thomas Munro wrote:\n> > That leads to the attached patches, the first of which I'd want to back-patch.\n>\n> Makes sense.\n> ...\n> This should also work for plain files, so that looks fine to me.\n\nThanks. Pushed. Also CC'ing Alvaro who expressed an interest in this\nproblem[1].\n\n> > Unfortunately while testing this I realised there is something else\n> > wrong here: if you take a basebackup using tar format, in-place\n> > tablespaces are skipped (they should get their own OID.tar file, but\n> > they don't, also no error). While it wasn't one of my original goals\n> > for in-place tablespaces to work in every way (and I'm certain some\n> > external tools would be confused by them), it seems we're pretty close\n> > so we should probably figure out that piece of the puzzle. It may be\n> > obvious why but I didn't have time to dig into that today... perhaps\n> > instead of just skipping the readlink() we should be writing something\n> > different into the mapfile and then restoring as appropriate...\n>\n> Yeah, I saw that in-place tablespaces were part of the main tarball in\n> base backups as we rely on the existence of a link to decide if the\n> contents of a path should be separated in a different tarball or not.\n> This does not strike me as a huge problem in itself, TBH, as the\n> improvement would be limited to make sure that the base backups could\n> be correctly restored with multiple tablespaces. And you can get\n> pretty much the same amount of coverage to make sure that the backup\n> contents are correct without fully restoring them.\n\nAre they in the main tar file, or are they just missing?\n\n[1] https://postgr.es/m/20220721111751.x7hod2xgrd76xr5c%40alvherre.pgsql\n\n\n",
"msg_date": "Fri, 22 Jul 2022 17:50:58 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "On 2022-Jul-22, Thomas Munro wrote:\n\n> Thanks. Pushed. Also CC'ing Alvaro who expressed an interest in this\n> problem[1].\n\n> [1] https://postgr.es/m/20220721111751.x7hod2xgrd76xr5c%40alvherre.pgsql\n\nYay! Thanks.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Los dioses no protegen a los insensatos. Éstos reciben protección de\notros insensatos mejor dotados\" (Luis Wu, Mundo Anillo)\n\n\n",
"msg_date": "Fri, 22 Jul 2022 08:49:31 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 05:50:58PM +1200, Thomas Munro wrote:\n> On Thu, Mar 31, 2022 at 5:01 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> Yeah, I saw that in-place tablespaces were part of the main tarball in\n>> base backups as we rely on the existence of a link to decide if the\n>> contents of a path should be separated in a different tarball or not.\n>> This does not strike me as a huge problem in itself, TBH, as the\n>> improvement would be limited to make sure that the base backups could\n>> be correctly restored with multiple tablespaces. And you can get\n>> pretty much the same amount of coverage to make sure that the backup\n>> contents are correct without fully restoring them.\n> \n> Are they in the main tar file, or are they just missing?\n\nSo, coming back to this thread.. Sorry for the late reply.\n\nSomething is still broken here with in-place tablespaces on HEAD.\nWhen taking a base backup in plain format, in-place tablespaces are\ncorrectly in the stream. However, when using the tar format, these\nare not streamed. c6f2f01 has cleaned the WARNING \"could not read\nsymbolic link\", still we have the following, when having an in-place\ntablespace on a primary:\n- For a base backup in plain format, the in-place tablespace path is\nincluded in the base backup.\n- For a base backup in tar format, the in-place tablespace path is not\nincluded in the base backup. It is not in base.tar, and there is no\nadditional tar file. c6f2f01 does not change this result.\n\nSo they are missing, to answer your question.\n--\nMichael",
"msg_date": "Sat, 23 Jul 2022 11:58:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_tablespace_location() failure with allow_in_place_tablespaces"
}
] |
[
{
"msg_contents": "Hi,\n\n\nWhen we want to vacuum and/or analyze all tables in a dedicated schema, \nlet's say pg_catalog for example, there is no easy way to do that. The \nVACUUM command doesn't allow it so we have to use \\gexec or a SQL script \nto do that. We have an external command vacuumdb that could be used to \nsimplify this task. For example the following command can be used to \nclean all tables stored in the pg_catalog schema:\n\n vacuumdb --schema pg_catalog -d foo\n\nThe attached patch implements that. Option -n | --schema can be used \nmultiple time and can not be used together with options -a or -t.\n\n\nCommon use cases are an application that creates lot of temporary \nobjects then drop them which can bloat a lot the catalog or which have \nheavy work in some schemas only. Of course the good practice is to find \nthe bloated tables and execute VACUUM on each table but if most of the \ntables in the schema are regularly bloated the use of the vacuumdb \n--schema script can save time.\n\n\nI do not propose to extend the VACUUM and ANALYZE commands because their \ncurrent syntax doesn't allow me to see an easy way to do that and also \nbecause I'm not really in favor of such change. But if there is interest \nin improving these commands I will be pleased to do that, with the \nsyntax suggested.\n\n\nBest regards,\n\n-- \nGilles Darold",
"msg_date": "Fri, 4 Mar 2022 10:11:28 +0100",
"msg_from": "Gilles Darold <gilles@migops.com>",
"msg_from_op": true,
"msg_subject": "[Proposal] vacuumdb --schema only"
},
{
"msg_contents": "On Fri, Mar 04, 2022 at 10:11:28AM +0100, Gilles Darold wrote:\n> The attached patch implements that. Option -n | --schema can be used\n> multiple time and can not be used together with options -a or -t.\n\nYes, thanks.\n\nI suggest there should also be an --exclude-schema.\n\n> I do not propose to extend the VACUUM and ANALYZE commands because their\n> current syntax doesn't allow me to see an easy way to do that\n\nI think this would be easy with the parenthesized syntax.\nI'm not suggesting to do it there, though.\n\n> +\t/*\n> +\t * When filtereing on schema name, filter by table is not allowed.\n> +\t * The schema name can already be set in a fqdn table name.\n\nset *to*\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 4 Mar 2022 04:56:40 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "On Fri, 4 Mar 2022 at 14:41, Gilles Darold <gilles@migops.com> wrote:\n\n> Hi,\n>\n>\n> When we want to vacuum and/or analyze all tables in a dedicated schema,\n> let's say pg_catalog for example, there is no easy way to do that. The\n> VACUUM command doesn't allow it so we have to use \\gexec or a SQL script\n> to do that. We have an external command vacuumdb that could be used to\n> simplify this task. For example the following command can be used to\n> clean all tables stored in the pg_catalog schema:\n>\n> vacuumdb --schema pg_catalog -d foo\n>\n>\n+1\nThis gives much better flexibility to users.\n\n\n\n\n> The attached patch implements that. Option -n | --schema can be used\n> multiple time and can not be used together with options -a or -t.\n>\n>\n> Common use cases are an application that creates lot of temporary\n> objects then drop them which can bloat a lot the catalog or which have\n> heavy work in some schemas only. Of course the good practice is to find\n> the bloated tables and execute VACUUM on each table but if most of the\n> tables in the schema are regularly bloated the use of the vacuumdb\n> --schema script can save time.\n>\n>\n> I do not propose to extend the VACUUM and ANALYZE commands because their\n> current syntax doesn't allow me to see an easy way to do that and also\n> because I'm not really in favor of such change. But if there is interest\n> in improving these commands I will be pleased to do that, with the\n> syntax suggested.\n>\n>\n> Best regards,\n>\n> --\n> Gilles Darold\n>\n\nOn Fri, 4 Mar 2022 at 14:41, Gilles Darold <gilles@migops.com> wrote:Hi,\n\n\nWhen we want to vacuum and/or analyze all tables in a dedicated schema, \nlet's say pg_catalog for example, there is no easy way to do that. The \nVACUUM command doesn't allow it so we have to use \\gexec or a SQL script \nto do that. We have an external command vacuumdb that could be used to \nsimplify this task. 
For example the following command can be used to \nclean all tables stored in the pg_catalog schema:\n\n vacuumdb --schema pg_catalog -d foo\n+1This gives much better flexibility to users. \nThe attached patch implements that. Option -n | --schema can be used \nmultiple time and can not be used together with options -a or -t.\n\n\nCommon use cases are an application that creates lot of temporary \nobjects then drop them which can bloat a lot the catalog or which have \nheavy work in some schemas only. Of course the good practice is to find \nthe bloated tables and execute VACUUM on each table but if most of the \ntables in the schema are regularly bloated the use of the vacuumdb \n--schema script can save time.\n\n\nI do not propose to extend the VACUUM and ANALYZE commands because their \ncurrent syntax doesn't allow me to see an easy way to do that and also \nbecause I'm not really in favor of such change. But if there is interest \nin improving these commands I will be pleased to do that, with the \nsyntax suggested.\n\n\nBest regards,\n\n-- \nGilles Darold",
"msg_date": "Fri, 4 Mar 2022 16:57:56 +0530",
"msg_from": "Dinesh Chemuduru <dinesh.kumar@migops.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "Le 04/03/2022 à 11:56, Justin Pryzby a écrit :\n> On Fri, Mar 04, 2022 at 10:11:28AM +0100, Gilles Darold wrote:\n>> The attached patch implements that. Option -n | --schema can be used\n>> multiple time and can not be used together with options -a or -t.\n> Yes, thanks.\n>\n> I suggest there should also be an --exclude-schema.\n\n\nOk, I will add it too.\n\n\n>\n>> I do not propose to extend the VACUUM and ANALYZE commands because their\n>> current syntax doesn't allow me to see an easy way to do that\n> I think this would be easy with the parenthesized syntax.\n> I'm not suggesting to do it there, though.\n\n\nYes this is what I've though, something a la EXPLAIN, for example : \n\"VACUUM (ANALYZE, SCHEMA foo)\" but this is a change in the VACUUM syntax \nthat needs to keep the compatibility with the current syntax. We will \nhave two syntax something like \"VACUUM ANALYZE FULL dbname\" and \"VACUUM \n(ANALYZE, FULL) dbname\". The other syntax \"problem\" is to be able to use \nmultiple schema values in the VACUUM command, perhaps \"VACUUM (ANALYZE, \nSCHEMA (foo,bar))\".\n\n\n>> +\t/*\n>> +\t * When filtereing on schema name, filter by table is not allowed.\n>> +\t * The schema name can already be set in a fqdn table name.\n> set *to*\n\nThanks, will be fixed in next patch version.\n\n\n-- \nGilles Darold\n\n\n\n",
"msg_date": "Fri, 4 Mar 2022 13:35:36 +0100",
"msg_from": "Gilles Darold <gilles@migops.com>",
"msg_from_op": true,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "Le 04/03/2022 à 11:56, Justin Pryzby a écrit :\n> On Fri, Mar 04, 2022 at 10:11:28AM +0100, Gilles Darold wrote:\n>> The attached patch implements that. Option -n | --schema can be used\n>> multiple time and can not be used together with options -a or -t.\n> Yes, thanks.\n>\n> I suggest there should also be an --exclude-schema.\n>\n>> I do not propose to extend the VACUUM and ANALYZE commands because their\n>> current syntax doesn't allow me to see an easy way to do that\n> I think this would be easy with the parenthesized syntax.\n> I'm not suggesting to do it there, though.\n>\n>> +\t/*\n>> +\t * When filtereing on schema name, filter by table is not allowed.\n>> +\t * The schema name can already be set in a fqdn table name.\n> set *to*\n>\n\nAttached a new patch version that adds the -N | --exclude-schema option\nto the vacuumdb command as suggested. Documentation updated too.\n\n\nI will add this patch to the commitfest unless there is cons about\nadding these options.\n\n\n-- \nGilles Darold",
"msg_date": "Sun, 6 Mar 2022 09:39:37 +0100",
"msg_from": "Gilles Darold <gilles@migops.com>",
"msg_from_op": true,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "On Sun, Mar 06, 2022 at 09:39:37AM +0100, Gilles Darold wrote:\n> Attached a new patch version that adds the -N | --exclude-schema option\n> to the vacuumdb command as suggested. Documentation updated too.\n> \n> +\t\tpg_log_error(\"cannot vacuum all tables in schema(s) and and exclude specific schema(s) at the same time\");\n\nand and\n\nIt's odd that schema_exclusion is a global var, but schemas/excluded are not.\n\nAlso, it seems unnecessary to have two schemas vars, since they can't be used\ntogether. Maybe there's a better way than what I did in 003.\n\n> +\t\tfor (cell = schemas ? schemas->head : NULL; cell; cell = cell->next)\n\nIt's preferred to write cell != NULL\n\n> + bool schemas_listed = false;\n...\n> +\t\tfor (cell = schemas ? schemas->head : NULL; cell; cell = cell->next)\n> +\t\t{\n> +\t\t\tif (!schemas_listed) {\n> +\t\t\t\tappendPQExpBufferStr(&catalog_query,\n> +\t\t\t\t\t\t\t\t\t \" AND pg_catalog.quote_ident(ns.nspname)\");\n> +\t\t\t\tif (schema_exclusion)\n> +\t\t\t\t\tappendPQExpBufferStr(&catalog_query, \" NOT IN (\");\n> +\t\t\t\telse\n> +\t\t\t\t\tappendPQExpBufferStr(&catalog_query, \" IN (\");\n> +\n> +\t\t\t\tschemas_listed = true;\n> +\t\t\t}\n> +\t\t\telse\n> +\t\t\t\tappendPQExpBufferStr(&catalog_query, \", \");\n> +\n> +\t\t\tappendStringLiteralConn(&catalog_query, cell->val, conn);\n> +\t\t\tappendPQExpBufferStr(&catalog_query, \"::pg_catalog.regnamespace::pg_catalog.name\");\n> +\n> +\t\t}\n> +\t\t/* Finish formatting schema filter */\n> +\t\tif (schemas_listed)\n> +\t\t\tappendPQExpBufferStr(&catalog_query, \")\\n\");\n> \t}\n\nMaybe it's clearer to write this with =ANY() / != ALL() ?\nSee 002.\n\n-- \nJustin",
"msg_date": "Sun, 6 Mar 2022 09:04:17 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "Le 06/03/2022 à 16:04, Justin Pryzby a écrit :\n> On Sun, Mar 06, 2022 at 09:39:37AM +0100, Gilles Darold wrote:\n>> Attached a new patch version that adds the -N | --exclude-schema option\n>> to the vacuumdb command as suggested. Documentation updated too.\n>>\n>> +\t\tpg_log_error(\"cannot vacuum all tables in schema(s) and and exclude specific schema(s) at the same time\");\n> and and\n>\n> It's odd that schema_exclusion is a global var, but schemas/excluded are not.\n>\n> Also, it seems unnecessary to have two schemas vars, since they can't be used\n> together. Maybe there's a better way than what I did in 003.\n>\n>> +\t\tfor (cell = schemas ? schemas->head : NULL; cell; cell = cell->next)\n> It's preferred to write cell != NULL\n>\n>> + bool schemas_listed = false;\n> ...\n>> +\t\tfor (cell = schemas ? schemas->head : NULL; cell; cell = cell->next)\n>> +\t\t{\n>> +\t\t\tif (!schemas_listed) {\n>> +\t\t\t\tappendPQExpBufferStr(&catalog_query,\n>> +\t\t\t\t\t\t\t\t\t \" AND pg_catalog.quote_ident(ns.nspname)\");\n>> +\t\t\t\tif (schema_exclusion)\n>> +\t\t\t\t\tappendPQExpBufferStr(&catalog_query, \" NOT IN (\");\n>> +\t\t\t\telse\n>> +\t\t\t\t\tappendPQExpBufferStr(&catalog_query, \" IN (\");\n>> +\n>> +\t\t\t\tschemas_listed = true;\n>> +\t\t\t}\n>> +\t\t\telse\n>> +\t\t\t\tappendPQExpBufferStr(&catalog_query, \", \");\n>> +\n>> +\t\t\tappendStringLiteralConn(&catalog_query, cell->val, conn);\n>> +\t\t\tappendPQExpBufferStr(&catalog_query, \"::pg_catalog.regnamespace::pg_catalog.name\");\n>> +\n>> +\t\t}\n>> +\t\t/* Finish formatting schema filter */\n>> +\t\tif (schemas_listed)\n>> +\t\t\tappendPQExpBufferStr(&catalog_query, \")\\n\");\n>> \t}\n> Maybe it's clearer to write this with =ANY() / != ALL() ?\n> See 002.\n>\n\nI have applied your changes and produced a new version v3 of the patch,\nthanks for the improvements. 
The patch have been added to commitfest\ninterface, see here https://commitfest.postgresql.org/38/3587/\n\n\n-- \nGilles Darold",
"msg_date": "Mon, 7 Mar 2022 08:38:04 +0100",
"msg_from": "Gilles Darold <gilles@migops.com>",
"msg_from_op": true,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "Hi,\n\nNew version v4 of the patch to fix a typo in a comment.\n\n-- \nGilles Darold",
"msg_date": "Wed, 9 Mar 2022 20:53:48 +0100",
"msg_from": "Gilles Darold <gilles@migops.com>",
"msg_from_op": true,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "On Mon, Mar 07, 2022 at 08:38:04AM +0100, Gilles Darold wrote:\n> > Maybe it's clearer to write this with =ANY() / != ALL() ?\n> > See 002.\n> \n> I have applied your changes and produced a new version v3 of the patch,\n> thanks for the improvements. The patch have been added to commitfest\n> interface, see here https://commitfest.postgresql.org/38/3587/\n\nI wondered whether my patches were improvements, and it occurred to me that\nyour patch didn't fail if the specified schema didn't exist. That's arguably\npreferable, but that's the pre-existing behavior for tables. So I think the\nbehavior of my patch is more consistent.\n\n$ ./src/bin/scripts/vacuumdb -h /tmp -d postgres --table foo\nvacuumdb: vacuuming database \"postgres\"\n2022-03-09 15:04:06.922 CST client backend[25540] vacuumdb ERROR: relation \"foo\" does not exist at character 60\n\n$ ./src/bin/scripts/vacuumdb -h /tmp -d postgres --schema foo\nvacuumdb: vacuuming database \"postgres\"\n2022-03-09 15:02:59.926 CST client backend[23516] vacuumdb ERROR: schema \"foo\" does not exist at character 335\n\n\n",
"msg_date": "Wed, 9 Mar 2022 15:10:33 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "Le 09/03/2022 à 22:10, Justin Pryzby a écrit :\n> On Mon, Mar 07, 2022 at 08:38:04AM +0100, Gilles Darold wrote:\n>>> Maybe it's clearer to write this with =ANY() / != ALL() ?\n>>> See 002.\n>> I have applied your changes and produced a new version v3 of the patch,\n>> thanks for the improvements. The patch have been added to commitfest\n>> interface, see here https://commitfest.postgresql.org/38/3587/\n> I wondered whether my patches were improvements, and it occurred to me that\n> your patch didn't fail if the specified schema didn't exist. That's arguably\n> preferable, but that's the pre-existing behavior for tables. So I think the\n> behavior of my patch is more consistent.\n\n+1\n\n-- \nGilles Darold\n\n\n\n\n\n\nLe 09/03/2022 à 22:10, Justin Pryzby a\n écrit :\n\n\nOn Mon, Mar 07, 2022 at 08:38:04AM +0100, Gilles Darold wrote:\n\n\n\nMaybe it's clearer to write this with =ANY() / != ALL() ?\nSee 002.\n\n\nI have applied your changes and produced a new version v3 of the patch,\nthanks for the improvements. The patch have been added to commitfest\ninterface, see here https://commitfest.postgresql.org/38/3587/\n\n\nI wondered whether my patches were improvements, and it occurred to me that\nyour patch didn't fail if the specified schema didn't exist. That's arguably\npreferable, but that's the pre-existing behavior for tables. So I think the\nbehavior of my patch is more consistent.\n\n+1\n\n-- \nGilles Darold",
"msg_date": "Thu, 10 Mar 2022 07:32:28 +0100",
"msg_from": "Gilles Darold <gilles@migops.com>",
"msg_from_op": true,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "On Thu, Mar 10, 2022 at 1:32 AM Gilles Darold <gilles@migops.com> wrote:\n>\n> Le 09/03/2022 à 22:10, Justin Pryzby a écrit :\n>\n> On Mon, Mar 07, 2022 at 08:38:04AM +0100, Gilles Darold wrote:\n>\n> Maybe it's clearer to write this with =ANY() / != ALL() ?\n> See 002.\n>\n> I have applied your changes and produced a new version v3 of the patch,\n> thanks for the improvements. The patch have been added to commitfest\n> interface, see here https://commitfest.postgresql.org/38/3587/\n>\n> I wondered whether my patches were improvements, and it occurred to me that\n> your patch didn't fail if the specified schema didn't exist. That's arguably\n> preferable, but that's the pre-existing behavior for tables. So I think the\n> behavior of my patch is more consistent.\n>\n> +1\n>\n\n+1 for consistency.\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Thu, 10 Mar 2022 18:02:13 -0500",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "I took a look at the v4 patch.\n\n'git-apply' complains about whitespace errors:\n\n\t0001-vacuumdb-schema-only-v4.patch:17: tab in indent.\n\t\t <arg choice=\"plain\">\n\t0001-vacuumdb-schema-only-v4.patch:18: tab in indent.\n\t\t <arg choice=\"opt\">\n\t0001-vacuumdb-schema-only-v4.patch:19: tab in indent.\n\t\t <group choice=\"plain\">\n\t0001-vacuumdb-schema-only-v4.patch:20: tab in indent.\n\t\t <arg choice=\"plain\"><option>-n</option></arg>\n\t0001-vacuumdb-schema-only-v4.patch:21: tab in indent.\n\t\t <arg choice=\"plain\"><option>--schema</option></arg>\n\twarning: squelched 13 whitespace errors\n\twarning: 18 lines add whitespace errors.\n\n+ printf(_(\" -N, --exclude-schema=PATTERN do NOT vacuum tables in the specified schema(s)\\n\"));\n\nI'm personally -1 for the --exclude-schema option. I don't see any\nexisting \"exclude\" options in vacuumdb, and the uses for such an option\nseem rather limited. If we can point to specific use-cases for this\noption, I might be willing to change my vote.\n\n+ <para>\n+ To clean all tables in the <literal>Foo</literal> and <literal>bar</literal> schemas\n+ only in a database named <literal>xyzzy</literal>:\n+<screen>\n+<prompt>$ </prompt><userinput>vacuumdb --schema='\"Foo\"' --schema='bar' xyzzy</userinput>\n+</screen></para>\n\nnitpicks: I think the phrasing should be \"To only clean tables in the...\".\nAlso, is there any reason to use a schema name with a capital letter as an\nexample? IMO that just adds unnecessary complexity to the example.\n\n+$node->issues_sql_like(\n+ [ 'vacuumdb', '--schema', '\"Foo\"', 'postgres' ],\n+ qr/VACUUM \"Foo\".*/,\n+ 'vacuumdb --schema schema only');\n\nIIUC there should only be one table in the schema. Can we avoid matching\n\"*\" and check for the exact command instead?\n\nI think there should be a few more test cases. 
For example, we should test\nusing -n and -N at the same time, and we should test what happens when\nthose options are used for missing schemas.\n\n+ /*\n+ * When filtering on schema name, filter by table is not allowed.\n+ * The schema name can already be set to a fqdn table name.\n+ */\n+ if (tbl_count && (schemas.head != NULL))\n+ {\n+ pg_log_error(\"cannot vacuum all tables in schema(s) and specific table(s) at the same time\");\n+ exit(1);\n+ }\n\nI think there might be some useful refactoring we can do that would\nsimplify adding similar options in the future. Specifically, can we have a\nglobal variable that stores the type of vacuumdb command (e.g., all,\ntables, or schemas)? If so, perhaps the tables list could be renamed and\nreused for schemas (and any other objects that need listing in the future).\n\n+ if (schemas != NULL && schemas->head != NULL)\n+ {\n+ appendPQExpBufferStr(&catalog_query,\n+ \" AND c.relnamespace\");\n+ if (schema_is_exclude == TRI_YES)\n+ appendPQExpBufferStr(&catalog_query,\n+ \" OPERATOR(pg_catalog.!=) ALL (ARRAY[\");\n+ else if (schema_is_exclude == TRI_NO)\n+ appendPQExpBufferStr(&catalog_query,\n+ \" OPERATOR(pg_catalog.=) ANY (ARRAY[\");\n+\n+ for (cell = schemas->head; cell != NULL; cell = cell->next)\n+ {\n+ appendStringLiteralConn(&catalog_query, cell->val, conn);\n+\n+ if (cell->next != NULL)\n+ appendPQExpBufferStr(&catalog_query, \", \");\n+ }\n+\n+ /* Finish formatting schema filter */\n+ appendPQExpBufferStr(&catalog_query, \"]::pg_catalog.regnamespace[])\\n\");\n+ }\n\nIMO we should use a CTE for specified schemas like we do for the specified\ntables. I wonder if we could even have a mostly-shared CTE code path for\nall vacuumdb commands with a list of names.\n\n- /*\n- * If no tables were listed, filter for the relevant relation types. 
If\n- * tables were given via --table, don't bother filtering by relation type.\n- * Instead, let the server decide whether a given relation can be\n- * processed in which case the user will know about it.\n- */\n- if (!tables_listed)\n+ else\n {\n+ /*\n+ * If no tables were listed, filter for the relevant relation types. If\n+ * tables were given via --table, don't bother filtering by relation type.\n+ * Instead, let the server decide whether a given relation can be\n+ * processed in which case the user will know about it.\n+ */\n\nnitpick: This change seems unnecessary.\n\nI noticed upthread that there was some discussion around adding a way to\nspecify a schema in VACUUM and ANALYZE commands. I think this patch is\nuseful even if such an option is eventually added, as we'll still want\nvacuumdb to obtain the full list of tables to process so that it can\neffectively parallelize.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 30 Mar 2022 14:22:58 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "On Wed, Mar 30, 2022 at 02:22:58PM -0700, Nathan Bossart wrote:\n> I'm personally -1 for the --exclude-schema option. I don't see any\n> existing \"exclude\" options in vacuumdb, and the uses for such an option\n> seem rather limited. If we can point to specific use-cases for this\n> option, I might be willing to change my vote.\n\nI suggested it because I would consider using it, even though I don't currently\nuse the vacuumdb script at all. I think this would allow partially\nretiring/simplifying our existing vacuum script.\n\nWe 1) put all our partitions in a separate \"child\" schema (so \\d is more\nusable), and also 2) put some short-lived tables into their own schemas. Some\nof those tables may only exist for ~1 day so I'd prefer to neither vacuum nor\nanalyze them (they're only used for SELECT *). But there can be a lot of them,\nso a nightly job could do something like vacuumdb --schema public or vacuumdb\n--exclude-schema ephemeral.\n\nEverything would be processed nightly using vacuumdb --min-xid (to keep the\nmonitoring system happy).\n\nThe non-partitioned tables could be vacuumed nightly (without min-xid), with\n--exclude ephemeral.\n\nThe partitioned tables could be processed monthly with vacuumdb --analyze.\n\nI'd also want to be able to run vacuumdb --analyze nightly, but I'd want to\nexclude the schema with short-lived tables. I'd also need a way to exclude\nour partitioned tables from nightly analyze (they should run monthly only).\n\nMaybe this could share something with this patch:\nhttps://commitfest.postgresql.org/37/2573/\npg_dump - read data for some options from external file\n\nThe goal of that patch was to put it in a file, which isn't really needed here.\nBut if there were common infrastructure for matching tables, it could be\nshared.
The interesting part for this patch is to avoid adding separate\ncommand-line arguments for --include-table, --exclude-table, --include-schema,\n--exclude-schema (and anything else?)\n\n\n",
"msg_date": "Fri, 1 Apr 2022 10:01:28 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "On Fri, Apr 01, 2022 at 10:01:28AM -0500, Justin Pryzby wrote:\n> On Wed, Mar 30, 2022 at 02:22:58PM -0700, Nathan Bossart wrote:\n>> I'm personally -1 for the --exclude-schema option. I don't see any\n>> existing \"exclude\" options in vacuumdb, and the uses for such an option\n>> seem rather limited. If we can point to specific use-cases for this\n>> option, I might be willing to change my vote.\n> \n> I suggested it because I would consider using it, even though I don't currently\n> use the vacuumdb script at all. I think this would allow partially\n> retiring/simplifying our existing vacuum script.\n> \n> We 1) put all our partitions in a separate \"child\" schema (so \\d is more\n> usable), and also 2) put some short-lived tables into their own schemas. Some\n> of those tables may only exist for ~1 day so I'd perfer to neither vacuum nor\n> analyze them (they're only used for SELECT *). But there can be a lot of them,\n> so a nightly job could do something like vacuumdb --schema public or vacuumdb\n> --exclude-schema ephemeral.\n> \n> Everything would be processed nightly using vacuumdb --min-xid (to keep the\n> monitoring system happy).\n> \n> The non-partitioned tables could be vacuumed nightly (without min-xid), with\n> --exclude ephemeral.\n> \n> The partitioned tables could be processed monthly with vacuumdb --analyze.\n> \n> I'd also want to be able to run vacuumdb --analyze nightly, but I'd want to\n> exclude the schema with short-lived tables. I'd also need a way to exclude\n> our partitioned tables from nightly analyze (they should run monthly only).\n\nThanks for elaborating. I retract my -1 vote.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 1 Apr 2022 15:24:22 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "Le 30/03/2022 à 23:22, Nathan Bossart a écrit :\n> I took a look at the v4 patch.\n>\n> 'git-apply' complains about whitespace errors:\n\nFixed.\n\n\n> + <para>\n> + To clean all tables in the <literal>Foo</literal> and <literal>bar</literal> schemas\n> + only in a database named <literal>xyzzy</literal>:\n> +<screen>\n> +<prompt>$ </prompt><userinput>vacuumdb --schema='\"Foo\"' --schema='bar' xyzzy</userinput>\n> +</screen></para>\n>\n> nitpicks: I think the phrasing should be \"To only clean tables in the...\".\n> Also, is there any reason to use a schema name with a capital letter as an\n> example? IMO that just adds unnecessary complexity to the example.\n\nI had thought that an example of a schema with case sensitivity was\nmissing in the documentation, but I agree with your comment; this is\nprobably not the best place to do that. Fixed.\n\n\n> +$node->issues_sql_like(\n> + [ 'vacuumdb', '--schema', '\"Foo\"', 'postgres' ],\n> + qr/VACUUM \"Foo\".*/,\n> + 'vacuumdb --schema schema only');\n>\n> IIUC there should only be one table in the schema. Can we avoid matching\n> \"*\" and check for the exact command instead?\n\nFixed.\n\n\n> I think there should be a few more test cases. For example, we should test\n> using -n and -N at the same time, and we should test what happens when\n> those options are used for missing schemas.\n\nFixed\n\n\n> + /*\n> + * When filtering on schema name, filter by table is not allowed.\n> + * The schema name can already be set to a fqdn table name.\n> + */\n> + if (tbl_count && (schemas.head != NULL))\n> + {\n> + pg_log_error(\"cannot vacuum all tables in schema(s) and specific table(s) at the same time\");\n> + exit(1);\n> + }\n>\n> I think there might be some useful refactoring we can do that would\n> simplify adding similar options in the future. Specifically, can we have a\n> global variable that stores the type of vacuumdb command (e.g., all,\n> tables, or schemas)? 
If so, perhaps the tables list could be renamed and\n> reused for schemas (and any other objects that need listing in the future).\n\nI don't think many more options like this one will be\nadded to this command, but I have changed the patch that way anyway.\n\n\n> + if (schemas != NULL && schemas->head != NULL)\n> + {\n> + appendPQExpBufferStr(&catalog_query,\n> + \" AND c.relnamespace\");\n> + if (schema_is_exclude == TRI_YES)\n> + appendPQExpBufferStr(&catalog_query,\n> + \" OPERATOR(pg_catalog.!=) ALL (ARRAY[\");\n> + else if (schema_is_exclude == TRI_NO)\n> + appendPQExpBufferStr(&catalog_query,\n> + \" OPERATOR(pg_catalog.=) ANY (ARRAY[\");\n> +\n> + for (cell = schemas->head; cell != NULL; cell = cell->next)\n> + {\n> + appendStringLiteralConn(&catalog_query, cell->val, conn);\n> +\n> + if (cell->next != NULL)\n> + appendPQExpBufferStr(&catalog_query, \", \");\n> + }\n> +\n> + /* Finish formatting schema filter */\n> + appendPQExpBufferStr(&catalog_query, \"]::pg_catalog.regnamespace[])\\n\");\n> + }\n>\n> IMO we should use a CTE for specified schemas like we do for the specified\n> tables. I wonder if we could even have a mostly-shared CTE code path for\n> all vacuumdb commands with a list of names.\n\nFixed\n\n\n> - /*\n> - * If no tables were listed, filter for the relevant relation types. 
If\n> + * tables were given via --table, don't bother filtering by relation type.\n> + * Instead, let the server decide whether a given relation can be\n> + * processed in which case the user will know about it.\n> + */\n> nitpick: This change seems unnecessary.\n\nFixed\n\n\nThanks for the review, all these changes are available in new version v6\nof the patch and attached here.\n\n\nBest regards,\n\n-- \nGilles Darold",
"msg_date": "Wed, 6 Apr 2022 19:43:42 +0200",
"msg_from": "Gilles Darold <gilles@migops.com>",
"msg_from_op": true,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "On Wed, Apr 06, 2022 at 07:43:42PM +0200, Gilles Darold wrote:\n> Thanks for the review, all these changes are available in new version v6\n> of the patch and attached here.\n\nThis is failing in CI (except on macos, which is strangely passing).\nhttp://cfbot.cputube.org/gilles-darold.html\n\nhttps://api.cirrus-ci.com/v1/artifact/task/5379693443547136/log/src/bin/scripts/tmp_check/log/regress_log_100_vacuumdb\n\nnot ok 59 - vacuumdb --schema \"Foo\" postgres exit code 0\n\n# Failed test 'vacuumdb --schema \"Foo\" postgres exit code 0'\n# at t/100_vacuumdb.pl line 151.\nnot ok 60 - vacuumdb --schema schema only: SQL found in server log\n\n# Failed test 'vacuumdb --schema schema only: SQL found in server log'\n# at t/100_vacuumdb.pl line 151.\n# '2022-04-06 18:15:36.313 UTC [34857][not initialized] [[unknown]][:0] LOG: connection received: host=[local]\n# 2022-04-06 18:15:36.314 UTC [34857][client backend] [[unknown]][3/2801:0] LOG: connection authorized: user=postgres database=postgres application_name=100_vacuumdb.pl\n# 2022-04-06 18:15:36.318 UTC [34857][client backend] [100_vacuumdb.pl][3/2802:0] LOG: statement: SELECT pg_catalog.set_config('search_path', '', false);\n# 2022-04-06 18:15:36.586 UTC [34857][client backend] [100_vacuumdb.pl][:0] LOG: disconnection: session time: 0:00:00.273 user=postgres database=postgres host=[local]\n# '\n# doesn't match '(?^:VACUUM \"Foo\".bar)'\n\n\n",
"msg_date": "Thu, 7 Apr 2022 19:46:13 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "Le 08/04/2022 à 02:46, Justin Pryzby a écrit :\n> On Wed, Apr 06, 2022 at 07:43:42PM +0200, Gilles Darold wrote:\n>> Thanks for the review, all these changes are available in new version v6\n>> of the patch and attached here.\n> This is failing in CI (except on macos, which is strangely passing).\n> http://cfbot.cputube.org/gilles-darold.html\n>\n> https://api.cirrus-ci.com/v1/artifact/task/5379693443547136/log/src/bin/scripts/tmp_check/log/regress_log_100_vacuumdb\n>\n> not ok 59 - vacuumdb --schema \"Foo\" postgres exit code 0\n>\n> # Failed test 'vacuumdb --schema \"Foo\" postgres exit code 0'\n> # at t/100_vacuumdb.pl line 151.\n> not ok 60 - vacuumdb --schema schema only: SQL found in server log\n>\n> # Failed test 'vacuumdb --schema schema only: SQL found in server log'\n> # at t/100_vacuumdb.pl line 151.\n> # '2022-04-06 18:15:36.313 UTC [34857][not initialized] [[unknown]][:0] LOG: connection received: host=[local]\n> # 2022-04-06 18:15:36.314 UTC [34857][client backend] [[unknown]][3/2801:0] LOG: connection authorized: user=postgres database=postgres application_name=100_vacuumdb.pl\n> # 2022-04-06 18:15:36.318 UTC [34857][client backend] [100_vacuumdb.pl][3/2802:0] LOG: statement: SELECT pg_catalog.set_config('search_path', '', false);\n> # 2022-04-06 18:15:36.586 UTC [34857][client backend] [100_vacuumdb.pl][:0] LOG: disconnection: session time: 0:00:00.273 user=postgres database=postgres host=[local]\n> # '\n> # doesn't match '(?^:VACUUM \"Foo\".bar)'\n\nI'm surprised because make check does not report errors running on\nUbuntu 20.04 and CentOS 8:\n\n\nt/010_clusterdb.pl ........ ok\nt/011_clusterdb_all.pl .... ok\nt/020_createdb.pl ......... ok\nt/040_createuser.pl ....... ok\nt/050_dropdb.pl ........... ok\nt/070_dropuser.pl ......... ok\nt/080_pg_isready.pl ....... ok\nt/090_reindexdb.pl ........ ok\nt/091_reindexdb_all.pl .... ok\nt/100_vacuumdb.pl ......... ok\nt/101_vacuumdb_all.pl ..... ok\nt/102_vacuumdb_stages.pl .. 
ok\nt/200_connstr.pl .......... ok\nAll tests successful.\nFiles=13, Tests=233, 17 wallclock secs ( 0.09 usr 0.02 sys + 6.63 cusr \n2.68 csys = 9.42 CPU)\nResult: PASS\n\n\nIn tmp_check/log/regress_log_100_vacuumdb:\n\n# Running: vacuumdb --schema \"Foo\" postgres\nvacuumdb: vacuuming database \"postgres\"\nok 59 - vacuumdb --schema \"Foo\" postgres exit code 0\nok 60 - vacuumdb --schema schema only: SQL found in server log\n\nIn PG log:\n\n2022-04-08 11:01:44.519 CEST [17223] 100_vacuumdb.pl LOG: statement: \nRESET search_path;\n2022-04-08 11:01:44.519 CEST [17223] 100_vacuumdb.pl LOG: statement: \nWITH listed_objects (object_oid, column_list) AS (\n VALUES ('\"Foo\"'::pg_catalog.regnamespace::pg_catalog.oid, \nNULL::pg_catalog.text)\n )\n SELECT c.relname, ns.nspname, listed_objects.column_list FROM \npg_catalog.pg_class c\n JOIN pg_catalog.pg_namespace ns ON c.relnamespace \nOPERATOR(pg_catalog.=) ns.oid\n LEFT JOIN pg_catalog.pg_class t ON c.reltoastrelid \nOPERATOR(pg_catalog.=) t.oid\n JOIN listed_objects ON listed_objects.object_oid \nOPERATOR(pg_catalog.=) ns.oid\n WHERE c.relkind OPERATOR(pg_catalog.=) ANY (array['r', 'm'])\n ORDER BY c.relpages DESC;\n2022-04-08 11:01:44.521 CEST [17223] 100_vacuumdb.pl LOG: statement: \nSELECT pg_catalog.set_config('search_path', '', false);\n2022-04-08 11:01:44.521 CEST [17223] 100_vacuumdb.pl LOG: statement: \nVACUUM \"Foo\".bar;\n\nAnd if I run the command manually:\n\n$ /usr/local/pgsql/bin/vacuumdb -e -h localhost --schema '\"Foo\"' -d \ncontrib_regress -U postgres\nSELECT pg_catalog.set_config('search_path', '', false);\nvacuumdb: vacuuming database \"contrib_regress\"\nRESET search_path;\nWITH listed_objects (object_oid, column_list) AS (\n VALUES ('\"Foo\"'::pg_catalog.regnamespace::pg_catalog.oid, \nNULL::pg_catalog.text)\n)\nSELECT c.relname, ns.nspname, listed_objects.column_list FROM \npg_catalog.pg_class c\n JOIN pg_catalog.pg_namespace ns ON c.relnamespace \nOPERATOR(pg_catalog.=) ns.oid\n LEFT JOIN 
pg_catalog.pg_class t ON c.reltoastrelid \nOPERATOR(pg_catalog.=) t.oid\n JOIN listed_objects ON listed_objects.object_oid \nOPERATOR(pg_catalog.=) ns.oid\n WHERE c.relkind OPERATOR(pg_catalog.=) ANY (array['r', 'm'])\n ORDER BY c.relpages DESC;\nSELECT pg_catalog.set_config('search_path', '', false);\n\nVACUUM \"Foo\".bar;\n\n\n$ echo $?\n0\n\nI don't know what happens on cfbot, investigating...\n\n\n-- \nGilles Darold\n\n\n\n",
"msg_date": "Fri, 8 Apr 2022 11:16:52 +0200",
"msg_from": "Gilles Darold <gilles@migops.com>",
"msg_from_op": true,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "Le 08/04/2022 à 02:46, Justin Pryzby a écrit :\n> On Wed, Apr 06, 2022 at 07:43:42PM +0200, Gilles Darold wrote:\n>> Thanks for the review, all these changes are available in new version v6\n>> of the patch and attached here.\n> This is failing in CI (except on macos, which is strangely passing).\n> http://cfbot.cputube.org/gilles-darold.html\n>\n> https://api.cirrus-ci.com/v1/artifact/task/5379693443547136/log/src/bin/scripts/tmp_check/log/regress_log_100_vacuumdb\n>\n> not ok 59 - vacuumdb --schema \"Foo\" postgres exit code 0\n>\n> # Failed test 'vacuumdb --schema \"Foo\" postgres exit code 0'\n> # at t/100_vacuumdb.pl line 151.\n> not ok 60 - vacuumdb --schema schema only: SQL found in server log\n>\n> # Failed test 'vacuumdb --schema schema only: SQL found in server log'\n> # at t/100_vacuumdb.pl line 151.\n> # '2022-04-06 18:15:36.313 UTC [34857][not initialized] [[unknown]][:0] LOG: connection received: host=[local]\n> # 2022-04-06 18:15:36.314 UTC [34857][client backend] [[unknown]][3/2801:0] LOG: connection authorized: user=postgres database=postgres application_name=100_vacuumdb.pl\n> # 2022-04-06 18:15:36.318 UTC [34857][client backend] [100_vacuumdb.pl][3/2802:0] LOG: statement: SELECT pg_catalog.set_config('search_path', '', false);\n> # 2022-04-06 18:15:36.586 UTC [34857][client backend] [100_vacuumdb.pl][:0] LOG: disconnection: session time: 0:00:00.273 user=postgres database=postgres host=[local]\n> # '\n> # doesn't match '(?^:VACUUM \"Foo\".bar)'\n\n\nOk, got it with the help of rjuju. Actually it was compiling fine with\ngcc, but clang gave some warnings. Fixing those warnings makes CI happy.\n\n\nAttached v7 of the patch that should pass cfbot.\n\n-- \nGilles Darold",
"msg_date": "Fri, 8 Apr 2022 17:16:06 +0200",
"msg_from": "Gilles Darold <gilles@migops.com>",
"msg_from_op": true,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "On Fri, Apr 08, 2022 at 05:16:06PM +0200, Gilles Darold wrote:\n> Attached v7 of the patch that should pass cfbot.\n\nThanks for the new patch! Unfortunately, it looks like some recent changes\nhave broken it again.\n\n> +enum trivalue schema_is_exclude = TRI_DEFAULT;\n> +\n> +/*\n> + * The kind of object filter to use. '0': none, 'n': schema, 't': table\n> + * these values correspond to the -n | -N and -t command line options.\n> + */\n> +char objfilter = '0';\n\nI think these should be combined into a single enum for simplicity and\nreadability (e.g., OBJFILTER_NONE, OBJFILTER_INCLUDE_SCHEMA,\nOBJFILTER_EXCLUDE_SCHEMA, OBJFILTER_TABLE).\n\n> \t * Instead, let the server decide whether a given relation can be\n> \t * processed in which case the user will know about it.\n> \t */\n> -\tif (!tables_listed)\n> +\tif (!objects_listed || objfilter == 'n')\n> \t{\n> \t\tappendPQExpBufferStr(&catalog_query, \" WHERE c.relkind OPERATOR(pg_catalog.=) ANY (array[\"\n> \t\t\t\t\t\t\t CppAsString2(RELKIND_RELATION) \", \"\n\nI think this deserves a comment.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 11 Apr 2022 11:37:07 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "Le 11/04/2022 à 20:37, Nathan Bossart a écrit :\n> On Fri, Apr 08, 2022 at 05:16:06PM +0200, Gilles Darold wrote:\n>> Attached v7 of the patch that should pass cfbot.\n> Thanks for the new patch! Unfortunately, it looks like some recent changes\n> have broken it again.\n>\n>> +enum trivalue schema_is_exclude = TRI_DEFAULT;\n>> +\n>> +/*\n>> + * The kind of object filter to use. '0': none, 'n': schema, 't': table\n>> + * these values correspond to the -n | -N and -t command line options.\n>> + */\n>> +char objfilter = '0';\n> I think these should be combined into a single enum for simplicity and\n> readability (e.g., OBJFILTER_NONE, OBJFILTER_INCLUDE_SCHEMA,\n> OBJFILTER_EXCLUDE_SCHEMA, OBJFILTER_TABLE).\n>\n>> \t * Instead, let the server decide whether a given relation can be\n>> \t * processed in which case the user will know about it.\n>> \t */\n>> -\tif (!tables_listed)\n>> +\tif (!objects_listed || objfilter == 'n')\n>> \t{\n>> \t\tappendPQExpBufferStr(&catalog_query, \" WHERE c.relkind OPERATOR(pg_catalog.=) ANY (array[\"\n>> \t\t\t\t\t\t\t CppAsString2(RELKIND_RELATION) \", \"\n> I think this deserves a comment.\n>\n\nAttached v8 of the patch that tries to address the remarks above, fixes\nthe patch apply failure on master, and replaces calls to pg_log_error+exit\nwith pg_fatal.\n\n\nThanks.\n\n-- \nGilles Darold",
"msg_date": "Thu, 14 Apr 2022 22:27:46 +0200",
"msg_from": "Gilles Darold <gilles@migops.com>",
"msg_from_op": true,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 10:27:46PM +0200, Gilles Darold wrote:\n> Attached v8 of the patch that tries to address the remarks above, fixes\n> patch apply failure to master and replace calls to pg_log_error+exit\n> with pg_fatal.\n\nThanks for the new patch.\n\n> +enum trivalue schema_is_exclude = TRI_DEFAULT;\n> +\n> +/*\n> + * The kind of object use in the command line filter.\n> + * OBJFILTER_NONE: no filter used\n> + * OBJFILTER_SCHEMA: -n | --schema or -N | --exclude-schema\n> + * OBJFILTER_TABLE: -t | --table\n> + */\n> +enum VacObjectFilter\n> +{\n> +\tOBJFILTER_NONE,\n> +\tOBJFILTER_TABLE,\n> +\tOBJFILTER_SCHEMA\n> +};\n> +\n> +enum VacObjectFilter objfilter = OBJFILTER_NONE;\n\nI still think we ought to remove schema_is_exclude in favor of adding\nOBJFILTER_SCHEMA_EXCLUDE to the enum. I think that would simplify some of\nthe error handling and improve readability. IMO we should add\nOBJFILTER_ALL, too.\n\n> -\t\t\t\t\tsimple_string_list_append(&tables, optarg);\n> +\t\t\t\t\t/* When filtering on schema name, filter by table is not allowed. */\n> +\t\t\t\t\tif (schema_is_exclude != TRI_DEFAULT)\n> +\t\t\t\t\t\tpg_fatal(\"cannot vacuum all tables in schema(s) and specific table(s) at the same time\");\n> +\t\t\t\t\tsimple_string_list_append(&objects, optarg);\n> +\t\t\t\t\tobjfilter = OBJFILTER_TABLE;\n> \t\t\t\t\ttbl_count++;\n> \t\t\t\t\tbreak;\n> \t\t\t\t}\n> @@ -202,6 +224,32 @@ main(int argc, char *argv[])\n> \t\t\t\t\t\t\t\t\t &vacopts.parallel_workers))\n> \t\t\t\t\texit(1);\n> \t\t\t\tbreak;\n> +\t\t\tcase 'n':\t\t\t/* include schema(s) */\n> +\t\t\t\t{\n> +\t\t\t\t\t/* When filtering on schema name, filter by table is not allowed. 
*/\n> +\t\t\t\t\tif (tbl_count)\n> +\t\t\t\t\t\tpg_fatal(\"cannot vacuum all tables in schema(s) and specific table(s) at the same time\");\n> +\n> +\t\t\t\t\tif (schema_is_exclude == TRI_YES)\n> +\t\t\t\t\t\tpg_fatal(\"cannot vacuum all tables in schema(s) and exclude specific schema(s) at the same time\");\n> +\t\t\t\t\tsimple_string_list_append(&objects, optarg);\n> +\t\t\t\t\tobjfilter = OBJFILTER_SCHEMA;\n> +\t\t\t\t\tschema_is_exclude = TRI_NO;\n> +\t\t\t\t\tbreak;\n> +\t\t\t\t}\n> +\t\t\tcase 'N':\t\t\t/* exclude schema(s) */\n> +\t\t\t\t{\n> +\t\t\t\t\t/* When filtering on schema name, filter by table is not allowed. */\n> +\t\t\t\t\tif (tbl_count)\n> +\t\t\t\t\t\tpg_fatal(\"cannot vacuum all tables in schema(s) and specific table(s) at the same time\");\n> +\t\t\t\t\tif (schema_is_exclude == TRI_NO)\n> +\t\t\t\t\t\tpg_fatal(\"cannot vacuum all tables in schema(s) and exclude specific schema(s) at the same time\");\n> +\n> +\t\t\t\t\tsimple_string_list_append(&objects, optarg);\n> +\t\t\t\t\tobjfilter = OBJFILTER_SCHEMA;\n> +\t\t\t\t\tschema_is_exclude = TRI_YES;\n> +\t\t\t\t\tbreak;\n\nI was expecting these to check objfilter. For example:\n\n\tcase 'N':\n\t\t{\n\t\t\tif (objfilter == OBJFILTER_TABLE)\n\t\t\t\tpg_fatal(\"...\");\n\t\t\telse if (objfilter == OBJFILTER_SCHEMA)\n\t\t\t\tpg_fatal(\"...\");\n\t\t\telse if (objfilter == OBJFILTER_ALL)\n\t\t\t\tpg_fatal(\"...\");\n\n\t\t\tsimple_string_list_append(&objects, optarg);\n\t\t\tobjfilter = OBJFILTER_SCHEMA_EXCLUDE;\n\t\t\tbreak;\n\t\t}\n\nAnother possible improvement could be to move the pg_fatal() calls to a\nhelper function that generates the message based on the current objfilter\nsetting and the current option. 
I'm envisioning something like\ncheck_objfilter(VacObjFilter curr_objfilter, VacObjFilter curr_option).\n\n> +\t/*\n> +\t * When filtering on schema name, filter by table is not allowed.\n> +\t * The schema name can already be set to a fqdn table name.\n> +\t */\n> +\tif (tbl_count && objfilter == OBJFILTER_SCHEMA && objects.head != NULL)\n> +\t\tpg_fatal(\"cannot vacuum all tables in schema(s) and specific table(s) at the same time\");\n\nIsn't this redundant with the error in the option handling?\n\n> -\t\tif (tables.head != NULL)\n> +\t\tif (objfilter == OBJFILTER_SCHEMA && objects.head != NULL)\n> +\t\t{\n> +\t\t\tif (schema_is_exclude == TRI_YES)\n> +\t\t\t\tpg_fatal(\"cannot exclude from vacuum specific schema(s) in all databases\");\n> +\t\t\telse if (schema_is_exclude == TRI_NO)\n> +\t\t\t\tpg_fatal(\"cannot vacuum specific schema(s) in all databases\");\n> +\t\t}\n> +\n> +\t\tif (objfilter == OBJFILTER_TABLE && objects.head != NULL)\n> \t\t\tpg_fatal(\"cannot vacuum specific table(s) in all databases\");\n\nI think we could move all these into check_objfilter(), too.\n\nnitpick: Why do we need to check that objects.head is not NULL? Isn't the\nobjfilter check enough?\n\n> \t/*\n> -\t * If no tables were listed, filter for the relevant relation types. If\n> -\t * tables were given via --table, don't bother filtering by relation type.\n> -\t * Instead, let the server decide whether a given relation can be\n> -\t * processed in which case the user will know about it.\n> +\t * If no tables were listed or that a filter by schema is used, filter\n> +\t * for the relevant relation types. If tables were given via --table,\n> +\t * don't bother filtering by relation type. Instead, let the server\n> +\t * decide whether a given relation can be processed in which case the\n> +\t * user will know about it. 
If there is a filter by schema the use of\n> +\t * --table is not possible so we have to filter by relation type too.\n> \t */\n> -\tif (!tables_listed)\n> +\tif (!objects_listed || objfilter == OBJFILTER_SCHEMA)\n\nDo we need to check for objects_listed here? IIUC we can just check for\nobjfilter != OBJFILTER_TABLE.\n\nUnless I'm missing something, schema_is_exclude appears to only be used for\nerror checking and doesn't impact the generated catalog query. It looks\nlike the relevant logic disappeared after v4 of the patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 18 Apr 2022 14:56:15 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "Le 18/04/2022 à 23:56, Nathan Bossart a écrit :\n> On Thu, Apr 14, 2022 at 10:27:46PM +0200, Gilles Darold wrote:\n>> Attached v8 of the patch that tries to address the remarks above, fixes\n>> patch apply failure to master and replace calls to pg_log_error+exit\n>> with pg_fatal.\n> Thanks for the new patch.\n>\n>> +enum trivalue schema_is_exclude = TRI_DEFAULT;\n>> +\n>> +/*\n>> + * The kind of object use in the command line filter.\n>> + * OBJFILTER_NONE: no filter used\n>> + * OBJFILTER_SCHEMA: -n | --schema or -N | --exclude-schema\n>> + * OBJFILTER_TABLE: -t | --table\n>> + */\n>> +enum VacObjectFilter\n>> +{\n>> +\tOBJFILTER_NONE,\n>> +\tOBJFILTER_TABLE,\n>> +\tOBJFILTER_SCHEMA\n>> +};\n>> +\n>> +enum VacObjectFilter objfilter = OBJFILTER_NONE;\n> I still think we ought to remove schema_is_exclude in favor of adding\n> OBJFILTER_SCHEMA_EXCLUDE to the enum. I think that would simplify some of\n> the error handling and improve readability. IMO we should add\n> OBJFILTER_ALL, too.\n\nFixed.\n\n\n>> -\t\t\t\t\tsimple_string_list_append(&tables, optarg);\n>> +\t\t\t\t\t/* When filtering on schema name, filter by table is not allowed. */\n>> +\t\t\t\t\tif (schema_is_exclude != TRI_DEFAULT)\n>> +\t\t\t\t\t\tpg_fatal(\"cannot vacuum all tables in schema(s) and specific table(s) at the same time\");\n>> +\t\t\t\t\tsimple_string_list_append(&objects, optarg);\n>> +\t\t\t\t\tobjfilter = OBJFILTER_TABLE;\n>> \t\t\t\t\ttbl_count++;\n>> \t\t\t\t\tbreak;\n>> \t\t\t\t}\n>> @@ -202,6 +224,32 @@ main(int argc, char *argv[])\n>> \t\t\t\t\t\t\t\t\t &vacopts.parallel_workers))\n>> \t\t\t\t\texit(1);\n>> \t\t\t\tbreak;\n>> +\t\t\tcase 'n':\t\t\t/* include schema(s) */\n>> +\t\t\t\t{\n>> +\t\t\t\t\t/* When filtering on schema name, filter by table is not allowed. 
*/\n>> +\t\t\t\t\tif (tbl_count)\n>> +\t\t\t\t\t\tpg_fatal(\"cannot vacuum all tables in schema(s) and specific table(s) at the same time\");\n>> +\n>> +\t\t\t\t\tif (schema_is_exclude == TRI_YES)\n>> +\t\t\t\t\t\tpg_fatal(\"cannot vacuum all tables in schema(s) and exclude specific schema(s) at the same time\");\n>> +\t\t\t\t\tsimple_string_list_append(&objects, optarg);\n>> +\t\t\t\t\tobjfilter = OBJFILTER_SCHEMA;\n>> +\t\t\t\t\tschema_is_exclude = TRI_NO;\n>> +\t\t\t\t\tbreak;\n>> +\t\t\t\t}\n>> +\t\t\tcase 'N':\t\t\t/* exclude schema(s) */\n>> +\t\t\t\t{\n>> +\t\t\t\t\t/* When filtering on schema name, filter by table is not allowed. */\n>> +\t\t\t\t\tif (tbl_count)\n>> +\t\t\t\t\t\tpg_fatal(\"cannot vacuum all tables in schema(s) and specific table(s) at the same time\");\n>> +\t\t\t\t\tif (schema_is_exclude == TRI_NO)\n>> +\t\t\t\t\t\tpg_fatal(\"cannot vacuum all tables in schema(s) and exclude specific schema(s) at the same time\");\n>> +\n>> +\t\t\t\t\tsimple_string_list_append(&objects, optarg);\n>> +\t\t\t\t\tobjfilter = OBJFILTER_SCHEMA;\n>> +\t\t\t\t\tschema_is_exclude = TRI_YES;\n>> +\t\t\t\t\tbreak;\n> I was expecting these to check objfilter.\n\nFixed.\n\n\n> Another possible improvement could be to move the pg_fatal() calls to a\n> helper function that generates the message based on the current objfilter\n> setting and the current option. 
I'm envisioning something like\n> check_objfilter(VacObjFilter curr_objfilter, VacObjFilter curr_option).\n\nI agree, done.\n\n\n>> +\t/*\n>> +\t * When filtering on schema name, filter by table is not allowed.\n>> +\t * The schema name can already be set to a fqdn table name.\n>> +\t */\n>> +\tif (tbl_count && objfilter == OBJFILTER_SCHEMA && objects.head != NULL)\n>> +\t\tpg_fatal(\"cannot vacuum all tables in schema(s) and specific table(s) at the same time\");\n> Isn't this redundant with the error in the option handling?\n\nFixed.\n\n\n>> -\t\tif (tables.head != NULL)\n>> +\t\tif (objfilter == OBJFILTER_SCHEMA && objects.head != NULL)\n>> +\t\t{\n>> +\t\t\tif (schema_is_exclude == TRI_YES)\n>> +\t\t\t\tpg_fatal(\"cannot exclude from vacuum specific schema(s) in all databases\");\n>> +\t\t\telse if (schema_is_exclude == TRI_NO)\n>> +\t\t\t\tpg_fatal(\"cannot vacuum specific schema(s) in all databases\");\n>> +\t\t}\n>> +\n>> +\t\tif (objfilter == OBJFILTER_TABLE && objects.head != NULL)\n>> \t\t\tpg_fatal(\"cannot vacuum specific table(s) in all databases\");\n> I think we could move all these into check_objfilter(), too.\n>\n> nitpick: Why do we need to check that objects.head is not NULL? Isn't the\n> objfilter check enough?\n\nDone.\n\n\n>> \t/*\n>> -\t * If no tables were listed, filter for the relevant relation types. If\n>> -\t * tables were given via --table, don't bother filtering by relation type.\n>> -\t * Instead, let the server decide whether a given relation can be\n>> -\t * processed in which case the user will know about it.\n>> +\t * If no tables were listed or that a filter by schema is used, filter\n>> +\t * for the relevant relation types. If tables were given via --table,\n>> +\t * don't bother filtering by relation type. Instead, let the server\n>> +\t * decide whether a given relation can be processed in which case the\n>> +\t * user will know about it. 
If there is a filter by schema the use of\n>> +\t * --table is not possible so we have to filter by relation type too.\n>> \t */\n>> -\tif (!tables_listed)\n>> +\tif (!objects_listed || objfilter == OBJFILTER_SCHEMA)\n> Do we need to check for objects_listed here? IIUC we can just check for\n> objfilter != OBJFILTER_TABLE.\n\nYes, we need it; otherwise the 'vacuumdb with view' test fails because we are\nnot trying to vacuum the view, so PG doesn't report:\n\n WARNING: cannot vacuum non-tables or special system tables\n\n\n> Unless I'm missing something, schema_is_exclude appears to only be used for\n> error checking and doesn't impact the generated catalog query. It looks\n> like the relevant logic disappeared after v4 of the patch.\n\nRight, removed.\n\n\nNew patch attached v9.\n\n\n-- \nGilles Darold",
"msg_date": "Wed, 20 Apr 2022 17:15:02 +0200",
"msg_from": "Gilles Darold <gilles@migops.com>",
"msg_from_op": true,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "Thanks for the new patch! I think this is on the right track.\n\nOn Wed, Apr 20, 2022 at 05:15:02PM +0200, Gilles Darold wrote:\n> Le 18/04/2022 à 23:56, Nathan Bossart a écrit :\n>> > -\tif (!tables_listed)\n>> > +\tif (!objects_listed || objfilter == OBJFILTER_SCHEMA)\n>> Do we need to check for objects_listed here? IIUC we can just check for\n>> objfilter != OBJFILTER_TABLE.\n> \n> Yes we need it otherwise test 'vacuumdb with view' fail because we are not\n> trying to vacuum the view so the PG doesn't report:\n> \n>    WARNING:  cannot vacuum non-tables or special system tables\n\nMy point is that the only time we don't want to filter for relevant\nrelation types is when the user provides a list of tables. So my\nsuggestion would be to simplify this to the following:\n\n\tif (objfilter != OBJFILTER_TABLE)\n\t{\n\t\tappendPQExpBufferStr(...);\n\t\thas_where = true;\n\t}\n\n>> Unless I'm missing something, schema_is_exclude appears to only be used for\n>> error checking and doesn't impact the generated catalog query. It looks\n>> like the relevant logic disappeared after v4 of the patch.\n> \n> Right, removed.\n\nI don't think -N works at the moment. I tested it out, and vacuumdb was\nstill processing tables in schemas I excluded. Can we add a test case for\nthis, too?\n\n> +/*\n> + * Verify that the filters used at command line are compatible\n> + */\n> +void\n> +check_objfilter(VacObjectFilter curr_objfilter, VacObjectFilter curr_option)\n> +{\n> +\tswitch (curr_option)\n> +\t{\n> +\t\tcase OBJFILTER_NONE:\n> +\t\t\tbreak;\n> +\t\tcase OBJFILTER_DATABASE:\n> +\t\t\t/* When filtering on database name, vacuum on all database is not allowed. */\n> +\t\t\tif (curr_objfilter == OBJFILTER_ALL)\n> +\t\t\t\tpg_fatal(\"cannot vacuum all databases and a specific one at the same time\");\n> +\t\t\tbreak;\n> +\t\tcase OBJFILTER_ALL:\n> +\t\t\t/* When vacuuming all database, filter on database name is not allowed. */\n> +\t\t\tif (curr_objfilter == OBJFILTER_DATABASE)\n> +\t\t\t\tpg_fatal(\"cannot vacuum all databases and a specific one at the same time\");\n> +\t\t\t/* When vacuuming all database, filter on schema name is not allowed. */\n> +\t\t\tif (curr_objfilter == OBJFILTER_SCHEMA)\n> +\t\t\t\tpg_fatal(\"cannot vacuum specific schema(s) in all databases\");\n> +\t\t\t/* When vacuuming all database, schema exclusion is not allowed. */\n> +\t\t\tif (curr_objfilter == OBJFILTER_SCHEMA_EXCLUDE)\n> +\t\t\t\tpg_fatal(\"cannot exclude from vacuum specific schema(s) in all databases\");\n> +\t\t\t/* When vacuuming all database, filter on table name is not allowed. */\n> +\t\t\tif (curr_objfilter == OBJFILTER_TABLE)\n> +\t\t\t\tpg_fatal(\"cannot vacuum specific table(s) in all databases\");\n> +\t\t\tbreak;\n> +\t\tcase OBJFILTER_TABLE:\n> +\t\t\t/* When filtering on table name, filter by schema is not allowed. */\n> +\t\t\tif (curr_objfilter == OBJFILTER_SCHEMA)\n> +\t\t\t\tpg_fatal(\"cannot vacuum all tables in schema(s) and specific table(s) at the same time\");\n> +\t\t\t/* When filtering on table name, schema exclusion is not allowed. 
*/\n> +\t\t\tif (curr_objfilter == OBJFILTER_DATABASE)\n> +\t\t\t\tpg_fatal(\"cannot vacuum all databases and a specific one at the same time\");\n> +\t\t\t/* When vacuuming all database, filter on schema name is not allowed. */\n> +\t\t\tif (curr_objfilter == OBJFILTER_SCHEMA)\n> +\t\t\t\tpg_fatal(\"cannot vacuum specific schema(s) in all databases\");\n> +\t\t\t/* When vacuuming all database, schema exclusion is not allowed. */\n> +\t\t\tif (curr_objfilter == OBJFILTER_SCHEMA_EXCLUDE)\n> +\t\t\t\tpg_fatal(\"cannot exclude from vacuum specific schema(s) in all databases\");\n> +\t\t\t/* When vacuuming all database, filter on table name is not allowed. */\n> +\t\t\tif (curr_objfilter == OBJFILTER_TABLE)\n> +\t\t\t\tpg_fatal(\"cannot vacuum specific table(s) in all databases\");\n> +\t\t\tbreak;\n> +\t\tcase OBJFILTER_TABLE:\n> +\t\t\t/* When filtering on table name, filter by schema is not allowed. */\n> +\t\t\tif (curr_objfilter == OBJFILTER_SCHEMA)\n> +\t\t\t\tpg_fatal(\"cannot vacuum all tables in schema(s) and specific table(s) at the same time\");\n> +\t\t\t/* When filtering on table name, schema exclusion is not allowed. */\n> +\t\t\tif (curr_objfilter == OBJFILTER_SCHEMA_EXCLUDE)\n> +\t\t\t\tpg_fatal(\"cannot vacuum specific table(s) and exclude specific schema(s) at the same time\");\n> +\t\t\tbreak;\n> +\t\tcase OBJFILTER_SCHEMA:\n> +\t\t\t/* When filtering on schema name, filter by table is not allowed. */\n> +\t\t\tif (curr_objfilter == OBJFILTER_TABLE)\n> +\t\t\t\tpg_fatal(\"cannot vacuum all tables in schema(s) and specific table(s) at the same time\");\n> +\t\t\t/* When filtering on schema name, schema exclusion is not allowed. */\n> +\t\t\tif (curr_objfilter == OBJFILTER_SCHEMA_EXCLUDE)\n> +\t\t\t\tpg_fatal(\"cannot vacuum all tables in schema(s) and exclude specific schema(s) at the same time\");\n> +\t\t\t/* filtering on schema name can not be use on all database. 
*/\n> +\t\t\tif (curr_objfilter == OBJFILTER_ALL)\n> +\t\t\t\tpg_fatal(\"cannot vacuum specific schema(s) in all databases\");\n> +\t\t\tbreak;\n> +\t\tcase OBJFILTER_SCHEMA_EXCLUDE:\n> +\t\t\t/* When filtering on schema exclusion, filter by table is not allowed. */\n> +\t\t\tif (curr_objfilter == OBJFILTER_TABLE)\n> +\t\t\t\tpg_fatal(\"cannot vacuum all tables in schema(s) and specific table(s) at the same time\");\n> +\t\t\t/* When filetring on schema exclusion, filter by schema is not allowed. */\n> +\t\t\tif (curr_objfilter == OBJFILTER_SCHEMA)\n> +\t\t\t\tpg_fatal(\"cannot vacuum all tables in schema(s) and exclude specific schema(s) at the same time\");\n> +\t\t\tbreak;\n> +\t}\n> +}\n\nI don't think this handles all combinations. For example, the following\ncommand does not fail:\n\n\tvacuumdb -a -N test postgres\n\nFurthermore, do you think it'd be possible to dynamically generate the\nmessage? If it doesn't add too much complexity, this might be a nice way\nto simplify this function.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 20 Apr 2022 10:38:46 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "On Wed, Apr 20, 2022 at 10:38:46AM -0700, Nathan Bossart wrote:\n> > +void\n> > +check_objfilter(VacObjectFilter curr_objfilter, VacObjectFilter curr_option)\n> > +{\n> > +\tswitch (curr_option)\n> > +\t{\n> > +\t\tcase OBJFILTER_NONE:\n> > +\t\t\tbreak;\n> > +\t\tcase OBJFILTER_DATABASE:\n> > +\t\t\t/* When filtering on database name, vacuum on all database is not allowed. */\n> > +\t\t\tif (curr_objfilter == OBJFILTER_ALL)\n> > +\t\t\t\tpg_fatal(\"cannot vacuum all databases and a specific one at the same time\");\n> > +\t\t\tbreak;\n> > +\t\tcase OBJFILTER_ALL:\n> > +\t\t\t/* When vacuuming all database, filter on database name is not allowed. */\n> > +\t\t\tif (curr_objfilter == OBJFILTER_DATABASE)\n> > +\t\t\t\tpg_fatal(\"cannot vacuum all databases and a specific one at the same time\");\n> > +\t\t\t/* When vacuuming all database, filter on schema name is not allowed. */\n> > +\t\t\tif (curr_objfilter == OBJFILTER_SCHEMA)\n> > +\t\t\t\tpg_fatal(\"cannot vacuum specific schema(s) in all databases\");\n> > +\t\t\t/* When vacuuming all database, schema exclusion is not allowed. */\n> > +\t\t\tif (curr_objfilter == OBJFILTER_SCHEMA_EXCLUDE)\n> > +\t\t\t\tpg_fatal(\"cannot exclude from vacuum specific schema(s) in all databases\");\n> > +\t\t\t/* When vacuuming all database, filter on table name is not allowed. */\n> > +\t\t\tif (curr_objfilter == OBJFILTER_TABLE)\n> > +\t\t\t\tpg_fatal(\"cannot vacuum specific table(s) in all databases\");\n> > +\t\t\tbreak;\n> > +\t\tcase OBJFILTER_TABLE:\n> > +\t\t\t/* When filtering on table name, filter by schema is not allowed. */\n> > +\t\t\tif (curr_objfilter == OBJFILTER_SCHEMA)\n> > +\t\t\t\tpg_fatal(\"cannot vacuum all tables in schema(s) and specific table(s) at the same time\");\n> > +\t\t\t/* When filtering on table name, schema exclusion is not allowed. 
*/\n> > +\t\t\tif (curr_objfilter == OBJFILTER_SCHEMA_EXCLUDE)\n> > +\t\t\t\tpg_fatal(\"cannot vacuum specific table(s) and exclude specific schema(s) at the same time\");\n> > +\t\t\tbreak;\n> > +\t\tcase OBJFILTER_SCHEMA:\n> > +\t\t\t/* When filtering on schema name, filter by table is not allowed. */\n> > +\t\t\tif (curr_objfilter == OBJFILTER_TABLE)\n> > +\t\t\t\tpg_fatal(\"cannot vacuum all tables in schema(s) and specific table(s) at the same time\");\n> > +\t\t\t/* When filtering on schema name, schema exclusion is not allowed. */\n> > +\t\t\tif (curr_objfilter == OBJFILTER_SCHEMA_EXCLUDE)\n> > +\t\t\t\tpg_fatal(\"cannot vacuum all tables in schema(s) and exclude specific schema(s) at the same time\");\n> > +\t\t\t/* filtering on schema name can not be use on all database. */\n> > +\t\t\tif (curr_objfilter == OBJFILTER_ALL)\n> > +\t\t\t\tpg_fatal(\"cannot vacuum specific schema(s) in all databases\");\n> > +\t\t\tbreak;\n> > +\t\tcase OBJFILTER_SCHEMA_EXCLUDE:\n> > +\t\t\t/* When filtering on schema exclusion, filter by table is not allowed. */\n> > +\t\t\tif (curr_objfilter == OBJFILTER_TABLE)\n> > +\t\t\t\tpg_fatal(\"cannot vacuum all tables in schema(s) and specific table(s) at the same time\");\n> > +\t\t\t/* When filetring on schema exclusion, filter by schema is not allowed. */\n> > +\t\t\tif (curr_objfilter == OBJFILTER_SCHEMA)\n> > +\t\t\t\tpg_fatal(\"cannot vacuum all tables in schema(s) and exclude specific schema(s) at the same time\");\n> > +\t\t\tbreak;\n> > +\t}\n> > +}\n> \n> I don't think this handles all combinations. For example, the following\n> command does not fail:\n> \n> \tvacuumdb -a -N test postgres\n> \n> Furthermore, do you think it'd be possible to dynamically generate the\n> message?\n\nNot in the obvious way, because that breaks translatability.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 20 Apr 2022 12:40:52 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "On Wed, Apr 20, 2022 at 12:40:52PM -0500, Justin Pryzby wrote:\n> On Wed, Apr 20, 2022 at 10:38:46AM -0700, Nathan Bossart wrote:\n>> Furthermore, do you think it'd be possible to dynamically generate the\n>> message?\n> \n> Not in the obvious way, because that breaks translatability.\n\nAh, right.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 20 Apr 2022 10:43:57 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "Le 20/04/2022 à 19:38, Nathan Bossart a écrit :\n> Thanks for the new patch! I think this is on the right track.\n>\n> On Wed, Apr 20, 2022 at 05:15:02PM +0200, Gilles Darold wrote:\n>> Le 18/04/2022 à 23:56, Nathan Bossart a écrit :\n>>>> -\tif (!tables_listed)\n>>>> +\tif (!objects_listed || objfilter == OBJFILTER_SCHEMA)\n>>> Do we need to check for objects_listed here? IIUC we can just check for\n>>> objfilter != OBJFILTER_TABLE.\n>> Yes we need it otherwise test 'vacuumdb with view' fail because we are not\n>> trying to vacuum the view so the PG doesn't report:\n>>\n>> WARNING: cannot vacuum non-tables or special system tables\n> My point is that the only time we don't want to filter for relevant\n> relation types is when the user provides a list of tables. So my\n> suggestion would be to simplify this to the following:\n>\n> \tif (objfilter != OBJFILTER_TABLE)\n> \t{\n> \t\tappendPQExpBufferStr(...);\n> \t\thas_where = true;\n> \t}\n\n\nRight, I must have gotten mixed up in the test results. Fixed.\n\n\n>>> Unless I'm missing something, schema_is_exclude appears to only be used for\n>>> error checking and doesn't impact the generated catalog query. It looks\n>>> like the relevant logic disappeared after v4 of the patch.\n>> Right, removed.\n> I don't think -N works at the moment. I tested it out, and vacuumdb was\n> still processing tables in schemas I excluded. Can we add a test case for\n> this, too?\n\n\nFixed and regression tests added as well as some others to test the \nfilter options compatibility.\n\n\n> +/*\n> + * Verify that the filters used at command line are compatible\n> + */\n> +void\n> +check_objfilter(VacObjectFilter curr_objfilter, VacObjectFilter curr_option)\n> +{\n> +\tswitch (curr_option)\n> +\t{\n> +\t\tcase OBJFILTER_NONE:\n> +\t\t\tbreak;\n> +\t\tcase OBJFILTER_DATABASE:\n> +\t\t\t/* When filtering on database name, vacuum on all database is not allowed. 
*/\n> +\t\t\tif (curr_objfilter == OBJFILTER_ALL)\n> +\t\t\t\tpg_fatal(\"cannot vacuum all databases and a specific one at the same time\");\n> +\t\t\tbreak;\n> [...]\n> +\t}\n> +}\n> I don't think this handles all combinations. For example, the following\n> command does not fail:\n>\n> \tvacuumdb -a -N test postgres\n\n\nRight, I have fix them all in this new patch.\n\n\n> Furthermore, do you think it'd be possible to dynamically generate the\n> message? If it doesn't add too much complexity, this might be a nice way\n> to simplify this function.\n\n\nI have tried to avoid reusing the same error message several time by \nusing a new enum and function filter_error(). I also use the same \nmessages with --schema and --exclude-schema related errors.\n\n\nPatch v10 attached.\n\n\n-- \nGilles Darold",
"msg_date": "Fri, 22 Apr 2022 11:57:05 +0200",
"msg_from": "Gilles Darold <gilles@migops.com>",
"msg_from_op": true,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "On Fri, Apr 22, 2022 at 11:57:05AM +0200, Gilles Darold wrote:\n> Patch v10 attached.\n\nThanks! I've attached a v11 with some minor editorialization. I think I\nwas able to improve the error handling for invalid combinations of\ncommand-line options a bit, but please let me know what you think.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 22 Apr 2022 22:57:46 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "On Fri, Apr 22, 2022 at 10:57:46PM -0700, Nathan Bossart wrote:\n> On Fri, Apr 22, 2022 at 11:57:05AM +0200, Gilles Darold wrote:\n>> Patch v10 attached.\n> \n> Thanks! I've attached a v11 with some minor editorialization. I think I\n> was able to improve the error handling for invalid combinations of\n> command-line options a bit, but please let me know what you think.\n\nI've attached a v12 of the patch that further simplifies check_objfilter().\nApologies for the noise.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sun, 24 Apr 2022 18:27:22 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "Le 25/04/2022 à 03:27, Nathan Bossart a écrit :\n> On Fri, Apr 22, 2022 at 10:57:46PM -0700, Nathan Bossart wrote:\n>> On Fri, Apr 22, 2022 at 11:57:05AM +0200, Gilles Darold wrote:\n>>> Patch v10 attached.\n>> Thanks! I've attached a v11 with some minor editorialization. I think I\n>> was able to improve the error handling for invalid combinations of\n>> command-line options a bit, but please let me know what you think.\n> I've attached a v12 of the patch that further simplifies check_objfilter().\n> Apologies for the noise.\n>\n\nLooks good to me; there is a failure on cfbot on FreeBSD, but I have run\na CI with latest master and it passes.\n\n\nCan I change the commitfest status to ready for committers?\n\n\n-- \nGilles Darold\n\n\n\n",
"msg_date": "Mon, 25 Apr 2022 08:50:09 +0200",
"msg_from": "Gilles Darold <gilles@migops.com>",
"msg_from_op": true,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "On Mon, Apr 25, 2022 at 08:50:09AM +0200, Gilles Darold wrote:\n> Can I change the commitfest status to ready for committers?\n\nI've marked it as ready-for-committer.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 25 Apr 2022 09:18:53 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "On Mon, Apr 25, 2022 at 09:18:53AM -0700, Nathan Bossart wrote:\n> I've marked it as ready-for-committer.\n\nThe refactoring logic to build the queries is clear to follow. I have\na few comments about the shape of the patch, though.\n\n case 'a':\n- alldb = true;\n+ check_objfilter(OBJFILTER_ALL_DBS);\n break;\nThe cross-option checks are usually done after all the options\nswitches are check. Why does this need to be different? It does not\nstrike me as a huge problem to do one filter check at the end.\n\n+void\n+check_objfilter(VacObjFilter curr_option)\n+{\n+ objfilter |= curr_option;\n+\n+ if ((objfilter & OBJFILTER_ALL_DBS) &&\n+ (objfilter & OBJFILTER_DATABASE))\n+ pg_fatal(\"cannot vacuum all databases and a specific one at the same time\");\nThe addition of more OBJFILTER_* (unlikely going to happen, but who\nknows) would make it hard to know which option should not interact\nwith each other. Wouldn't it be better to use a kind of compatibility\ntable for that? As one OBJFILTER_* matches with one option, you could\nsimplify the number of strings in need of translation by switching to\nan error message like \"cannot use options %s and %s together\", or\nsomething like that?\n\n+$node->command_fails(\n+ [ 'vacuumdb', '-a', '-d', 'postgres' ],\n+ 'cannot use options -a and -d at the same time');\nThis set of tests had better use command_fails_like() to make sure\nthat the correct error patterns from check_objfilter() show up?\n--\nMichael",
"msg_date": "Tue, 26 Apr 2022 11:36:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "On Tue, Apr 26, 2022 at 11:36:02AM +0900, Michael Paquier wrote:\n> The refactoring logic to build the queries is clear to follow. I have\n> a few comments about the shape of the patch, though.\n\nThanks for taking a look!\n\n> case 'a':\n> - alldb = true;\n> + check_objfilter(OBJFILTER_ALL_DBS);\n> break;\n> The cross-option checks are usually done after all the options\n> switches are check. Why does this need to be different? It does not\n> strike me as a huge problem to do one filter check at the end.\n\nMakes sense. I fixed this in v13.\n\n> +void\n> +check_objfilter(VacObjFilter curr_option)\n> +{\n> + objfilter |= curr_option;\n> +\n> + if ((objfilter & OBJFILTER_ALL_DBS) &&\n> + (objfilter & OBJFILTER_DATABASE))\n> + pg_fatal(\"cannot vacuum all databases and a specific one at the same time\");\n> The addition of more OBJFILTER_* (unlikely going to happen, but who\n> knows) would make it hard to know which option should not interact\n> with each other. Wouldn't it be better to use a kind of compatibility\n> table for that? As one OBJFILTER_* matches with one option, you could\n> simplify the number of strings in need of translation by switching to\n> an error message like \"cannot use options %s and %s together\", or\n> something like that?\n\nI think this might actually make things more complicated. In addition to\nthe compatibility table, we'd need to define the strings to use in the\nerror message somewhere. I can give this a try if you feel strongly about\nit.\n\n> +$node->command_fails(\n> + [ 'vacuumdb', '-a', '-d', 'postgres' ],\n> + 'cannot use options -a and -d at the same time');\n> This set of tests had better use command_fails_like() to make sure\n> that the correct error patterns from check_objfilter() show up?\n\nYes. I did this in v13.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 25 Apr 2022 21:46:55 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
},
{
"msg_contents": "\nOn 2022-04-26 Tu 00:46, Nathan Bossart wrote:\n> On Tue, Apr 26, 2022 at 11:36:02AM +0900, Michael Paquier wrote:\n>> The refactoring logic to build the queries is clear to follow. I have\n>> a few comments about the shape of the patch, though.\n> Thanks for taking a look!\n>\n>> case 'a':\n>> - alldb = true;\n>> + check_objfilter(OBJFILTER_ALL_DBS);\n>> break;\n>> The cross-option checks are usually done after all the options\n>> switches are check. Why does this need to be different? It does not\n>> strike me as a huge problem to do one filter check at the end.\n> Makes sense. I fixed this in v13.\n>\n>> +void\n>> +check_objfilter(VacObjFilter curr_option)\n>> +{\n>> + objfilter |= curr_option;\n>> +\n>> + if ((objfilter & OBJFILTER_ALL_DBS) &&\n>> + (objfilter & OBJFILTER_DATABASE))\n>> + pg_fatal(\"cannot vacuum all databases and a specific one at the same time\");\n>> The addition of more OBJFILTER_* (unlikely going to happen, but who\n>> knows) would make it hard to know which option should not interact\n>> with each other. Wouldn't it be better to use a kind of compatibility\n>> table for that? As one OBJFILTER_* matches with one option, you could\n>> simplify the number of strings in need of translation by switching to\n>> an error message like \"cannot use options %s and %s together\", or\n>> something like that?\n> I think this might actually make things more complicated. In addition to\n> the compatibility table, we'd need to define the strings to use in the\n> error message somewhere. I can give this a try if you feel strongly about\n> it.\n>\n>> +$node->command_fails(\n>> + [ 'vacuumdb', '-a', '-d', 'postgres' ],\n>> + 'cannot use options -a and -d at the same time');\n>> This set of tests had better use command_fails_like() to make sure\n>> that the correct error patterns from check_objfilter() show up?\n> Yes. 
I did this in v13.\n\n\n\ncommitted.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 31 Jul 2022 16:50:31 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] vacuumdb --schema only"
}
]
[
{
"msg_contents": "Hi,\n\nIt looks like regression tests are failing on Windows Server 2019 [1].\nIgnore if it's reported elsewhere.\n\n[1] https://github.com/postgres/postgres/runs/5419324953\n❌ 00:02 test_pl\n\n[08:28:53.758]\n[08:28:53.758] c:\\cirrus>call \"C:/Program\nFiles/Git/usr/bin/timeout.exe\" -v -k60s 15m perl\nsrc/tools/msvc/vcregress.pl plcheck\n[08:28:53.953] ============================================================\n[08:28:53.953] Checking plpgsql\n[08:28:53.977] (using postmaster on Unix socket, default port)\n[08:28:53.977] ============== dropping database \"pl_regression\"\n==============\n[08:28:56.044] psql: error: connection to server at \"localhost\" (::1),\nport 5432 failed: Connection refused (0x0000274D/10061)\n[08:28:56.044] Is the server running on that host and accepting TCP/IP\nconnections?\n[08:28:56.044] connection to server at \"localhost\" (127.0.0.1), port\n5432 failed: Connection refused (0x0000274D/10061)\n[08:28:56.044] Is the server running on that host and accepting TCP/IP\nconnections?\n[08:28:56.050] command failed: \"c:/cirrus/Debug/psql/psql\" -X -c \"SET\nclient_min_messages = warning\" -c \"DROP DATABASE IF EXISTS\n\\\"pl_regression\\\"\" \"postgres\"\n[08:28:56.063]\n[08:28:56.063] c:\\cirrus>if 2 NEQ 0 exit /b 2\n[08:28:56.084]\n[08:28:56.084] Exit status: 2\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 4 Mar 2022 15:07:27 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "On Fri, Mar 4, 2022 at 10:37 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> It looks like regression tests are failing on Windows Server 2019 [1].\n> Ignore if it's reported elsewhere.\n\nIt's broken since 46ab07ff \"Clean up assorted failures under clang's\n-fsanitize=undefined checks.\":\n\nhttps://github.com/postgres/postgres/commits/master\n\nI don't see what that patch has to do with the symptoms. It looks a\nbit like the cluster started by the \"startcreate\" step (\"pg_ctl.exe\nstart ...\") is mysteriously no longer running when we get to the\n\"test_pl\" step (\"Connection refused\").\n\n\n",
"msg_date": "Fri, 4 Mar 2022 23:50:46 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Fri, Mar 4, 2022 at 10:37 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> It looks like regression tests are failing on Windows Server 2019 [1].\n>> Ignore if it's reported elsewhere.\n\n> It's broken since 46ab07ff \"Clean up assorted failures under clang's\n> -fsanitize=undefined checks.\":\n\n> https://github.com/postgres/postgres/commits/master\n\n> I don't see what that patch has to do with the symptoms.\n\nThe buildfarm is not unhappy, so I'd be looking at what changed\nrecently in the cfbot's setup.\n\nI find it a little suspicious that startcreate is merrily starting\nthe postmaster on port 5432, without concern for the possibility\nthat that port is in use or blocked by a packet filter.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Mar 2022 09:54:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-04 09:54:28 -0500, Tom Lane wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > https://github.com/postgres/postgres/commits/master\n> \n> > I don't see what that patch has to do with the symptoms.\n> \n> The buildfarm is not unhappy, so I'd be looking at what changed\n> recently in the cfbot's setup.\n\nVery odd.\n\n\n> I find it a little suspicious that startcreate is merrily starting\n> the postmaster on port 5432, without concern for the possibility\n> that that port is in use or blocked by a packet filter.\n\nIt's a container that's just created for that run, so there shouldn't be\nanything else running. We also see that the server successfully starts:\n\n2022-03-03 23:24:15.261 GMT [5504][postmaster] LOG: starting PostgreSQL 15devel, compiled by Visual C++ build 1929, 64-bit\n2022-03-03 23:24:15.270 GMT [5504][postmaster] LOG: listening on IPv6 address \"::1\", port 5432\n2022-03-03 23:24:15.270 GMT [5504][postmaster] LOG: listening on IPv4 address \"127.0.0.1\", port 5432\n2022-03-03 23:24:15.321 GMT [5808][startup] LOG: database system was shut down at 2022-03-03 23:24:15 GMT\n2022-03-03 23:24:15.341 GMT [5504][postmaster] LOG: database system is ready to accept connections\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 4 Mar 2022 09:10:14 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-03-04 09:54:28 -0500, Tom Lane wrote:\n>> I find it a little suspicious that startcreate is merrily starting\n>> the postmaster on port 5432, without concern for the possibility\n>> that that port is in use or blocked by a packet filter.\n\n> It's a container that's just created for that run, so there shouldn't be\n> anything else running. We also see that the server successfully starts:\n\nTrue, so the port was free. The postmaster wouldn't know whether there's\na relevant packet filter though. The reason I'm harping on this is that\nit's hard to see why this postmaster start would fail before anything\nwas actually done, when several previous starts on other ports behaved\njust fine.\n\nActually ... looking closer at the failure log:\n\n[09:41:50.589] Checking plpgsql\n[09:41:50.608] (using postmaster on Unix socket, default port)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[09:41:50.608] ============== dropping database \"pl_regression\" ==============\n[09:41:52.731] psql: error: connection to server at \"localhost\" (::1), port 5432 failed: Connection refused (0x0000274D/10061)\n[09:41:52.731] \tIs the server running on that host and accepting TCP/IP connections?\n[09:41:52.731] connection to server at \"localhost\" (127.0.0.1), port 5432 failed: Connection refused (0x0000274D/10061)\n[09:41:52.731] \tIs the server running on that host and accepting TCP/IP connections?\n\npg_regress thinks the postmaster is listening to a Unix socket.\nMaybe it's *only* listening to a Unix socket. In any case,\npsql has very clearly not been told to use a Unix socket.\nSomething is not wired up correctly there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Mar 2022 12:23:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-04 09:10:14 -0800, Andres Freund wrote:\n> > I find it a little suspicious that startcreate is merrily starting\n> > the postmaster on port 5432, without concern for the possibility\n> > that that port is in use or blocked by a packet filter.\n> \n> It's a container that's just created for that run, so there shouldn't be\n> anything else running. We also see that the server successfully starts:\n> \n> 2022-03-03 23:24:15.261 GMT [5504][postmaster] LOG: starting PostgreSQL 15devel, compiled by Visual C++ build 1929, 64-bit\n> 2022-03-03 23:24:15.270 GMT [5504][postmaster] LOG: listening on IPv6 address \"::1\", port 5432\n> 2022-03-03 23:24:15.270 GMT [5504][postmaster] LOG: listening on IPv4 address \"127.0.0.1\", port 5432\n> 2022-03-03 23:24:15.321 GMT [5808][startup] LOG: database system was shut down at 2022-03-03 23:24:15 GMT\n> 2022-03-03 23:24:15.341 GMT [5504][postmaster] LOG: database system is ready to accept connections\n\nOh, huh. I added a printout of the running tasks, and it sure looks like\npostgres is not running anymore when plcheck is run, without anything ending\nup in the server log...\n\nIt looks like the CI agent had some changes about stdin / stdout of the shells\nused to start scripts. The version between the last successful run and the\nfirst failing run changed.\nhttps://github.com/cirruslabs/cirrus-ci-agent/commits/master\n\nI wonder if we're missing some steps, at least on windows, to make pg_ctl\nstart independent of the starting shell?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 4 Mar 2022 09:30:37 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-04 12:23:43 -0500, Tom Lane wrote:\n> Actually ... looking closer at the failure log:\n> \n> [09:41:50.589] Checking plpgsql\n> [09:41:50.608] (using postmaster on Unix socket, default port)\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> [09:41:50.608] ============== dropping database \"pl_regression\" ==============\n> [09:41:52.731] psql: error: connection to server at \"localhost\" (::1), port 5432 failed: Connection refused (0x0000274D/10061)\n> [09:41:52.731] \tIs the server running on that host and accepting TCP/IP connections?\n> [09:41:52.731] connection to server at \"localhost\" (127.0.0.1), port 5432 failed: Connection refused (0x0000274D/10061)\n> [09:41:52.731] \tIs the server running on that host and accepting TCP/IP connections?\n> \n> pg_regress thinks the postmaster is listening to a Unix socket.\n> Maybe it's *only* listening to a Unix socket. In any case,\n> psql has very clearly not been told to use a Unix socket.\n> Something is not wired up correctly there.\n\nI saw that too, but I don't think it's the primary problem. The server is\nlistening on ::1 as we know from the log, and quite evidently psql is trying\nto connect to that too. This output also looked this way before the failures.\n\npg_regress outputs the above message when neither PGHOST, PGPORT or\n--host/--port are set. On windows that nevertheless ends up with connecting to\nlocalhost.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 4 Mar 2022 09:37:51 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-03-04 12:23:43 -0500, Tom Lane wrote:\n>> pg_regress thinks the postmaster is listening to a Unix socket.\n\n> pg_regress outputs the above message when neither PGHOST, PGPORT or\n> --host/--port are set. On windows that nevertheless ends up with connecting to\n> localhost.\n\nYeah, I just traced that down. pg_regress was not updated when libpq's\nbehavior was changed for platform-varying DEFAULT_PGSOCKET_DIR.\nI'll go fix that, but you're right that it's cosmetic.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Mar 2022 12:40:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-04 09:30:37 -0800, Andres Freund wrote:\n> I wonder if we're missing some steps, at least on windows, to make pg_ctl\n> start independent of the starting shell?\n\nSure looks that way. On windows, if I do pg_ctl start, then hit ctrl-c, the\nserver shuts down.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 4 Mar 2022 09:46:44 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "On 2022-03-04 09:46:44 -0800, Andres Freund wrote:\n> On 2022-03-04 09:30:37 -0800, Andres Freund wrote:\n> > I wonder if we're missing some steps, at least on windows, to make pg_ctl\n> > start independent of the starting shell?\n>\n> Sure looks that way. On windows, if I do pg_ctl start, then hit ctrl-c, the\n> server shuts down.\n\nShort term the easiest fix might be to start postgres for those tests as a\nservice. But it seems we should fix whatever the cause of that\nterminal-connectedness behaviour is.\n\nI'm out for ~2-3h. I started a test run with using a service just now:\nhttps://cirrus-ci.com/task/5519573792325632 but I very well might have typoed\nsomething...\n\n\n",
"msg_date": "Fri, 4 Mar 2022 09:57:35 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-04 09:57:35 -0800, Andres Freund wrote:\n> On 2022-03-04 09:46:44 -0800, Andres Freund wrote:\n> > On 2022-03-04 09:30:37 -0800, Andres Freund wrote:\n> > > I wonder if we're missing some steps, at least on windows, to make pg_ctl\n> > > start independent of the starting shell?\n> >\n> > Sure looks that way. On windows, if I do pg_ctl start, then hit ctrl-c, the\n> > server shuts down.\n> \n> Short term the easiest fix might be to start postgres for those tests as a\n> service. But it seems we should fix whatever the cause of that\n> terminal-connectedness behaviour is.\n> \n> I'm out for ~2-3h. I started a test run with using a service just now:\n> https://cirrus-ci.com/task/5519573792325632 but I very well might have typoed\n> something...\n\nSeems to have worked for the first few tests at least. Unless somebody wants\nto clean up that commit and push it, I'll do so once I'm back.\n\n\nPerhaps pg_ctl needs to call FreeConsole() or such?\n\nhttps://docs.microsoft.com/en-us/windows/console/closing-a-console\n\nOr perhaps pg_ctl ought to pass CREATE_NEW_PROCESS_GROUP to CreateProcess()?\nThe lack of a process group would explain why we're getting signalled on\nctrl-c...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 4 Mar 2022 10:10:24 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "On Sat, Mar 5, 2022 at 7:10 AM Andres Freund <andres@anarazel.de> wrote:\n> Or perhaps pg_ctl ought to pass CREATE_NEW_PROCESS_GROUP to CreateProcess()?\n> The lack of a process group would explain why we're getting signalled on\n> ctrl-c...\n\nI thought that sounded promising, given that the recent Cirrus agent\ncommit you pointed to says \"Always kill spawned shell's process group\nto avoid pipe FD hangs\", and given that we do something conceptually\nsimilar on Unix. It doesn't seem to help, though...\n\nhttps://cirrus-ci.com/task/5572163880091648\n\n\n",
"msg_date": "Sat, 5 Mar 2022 09:29:01 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-04 09:57:35 -0800, Andres Freund wrote:\n> On 2022-03-04 09:46:44 -0800, Andres Freund wrote:\n> > On 2022-03-04 09:30:37 -0800, Andres Freund wrote:\n> > > I wonder if we're missing some steps, at least on windows, to make pg_ctl\n> > > start independent of the starting shell?\n> >\n> > Sure looks that way. On windows, if I do pg_ctl start, then hit ctrl-c, the\n> > server shuts down.\n> \n> Short term the easiest fix might be to start postgres for those tests as a\n> service. But it seems we should fix whatever the cause of that\n> terminal-connectedness behaviour is.\n> \n> I'm out for ~2-3h. I started a test run with using a service just now:\n> https://cirrus-ci.com/task/5519573792325632 but I very well might have typoed\n> something...\n\nThat fixed the immediate problem, but dblink, postgres_fdw tests failed:\n\nhttps://api.cirrus-ci.com/v1/artifact/task/5519573792325632/log/contrib/dblink/regression.diffs\n FROM dblink(connection_parameters(),'SELECT * FROM foo') AS t(a int, b text, c text[])\n WHERE t.a > 7;\n- a | b | c \n----+---+------------\n- 8 | i | {a8,b8,c8}\n- 9 | j | {a9,b9,c9}\n-(2 rows)\n-\n+ERROR: could not establish connection\n+DETAIL: connection to server at \"localhost\" (::1), port 5432 failed: FATAL:\n role \"SYSTEM\" does not exist\n\n\nhttps://api.cirrus-ci.com/v1/artifact/task/5519573792325632/log/contrib/postgres_fdw/regression.diffs\n+ERROR: could not connect to server \"loopback\"\n\nand it also seems to redirect logging to the event log without further\nconfiguration...\n\nThe dblink and fdw failures can presumably be fixed by passing current_user as\nthe username. That seems like a good idea anyway?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 4 Mar 2022 13:42:06 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> That fixed the immediate problem, but dblink, postgres_fdw tests failed:\n> +ERROR: could not establish connection\n> +DETAIL: connection to server at \"localhost\" (::1), port 5432 failed: FATAL:\n> role \"SYSTEM\" does not exist\n\n[ scratches head... ] Where is libpq getting that username from?\nWhy is it different from whatever we'd determined during initdb?\n(Maybe a case-folding issue??)\n\n> The dblink and fdw failures can presumably be fixed by passing current_user as\n> the username. That seems like a good idea anyway?\n\nI don't think it's a good idea to hack that without understanding\nwhy it's suddenly going wrong.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Mar 2022 16:51:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-05 09:29:01 +1300, Thomas Munro wrote:\n> On Sat, Mar 5, 2022 at 7:10 AM Andres Freund <andres@anarazel.de> wrote:\n> > Or perhaps pg_ctl ought to pass CREATE_NEW_PROCESS_GROUP to CreateProcess()?\n> > The lack of a process group would explain why we're getting signalled on\n> > ctrl-c...\n> \n> I thought that sounded promising, given that the recent Cirrus agent\n> commit you pointed to says \"Always kill spawned shell's process group\n> to avoid pipe FD hangs\", and given that we do something conceptually\n> similar on Unix. It doesn't seem to help, though...\n> \n> https://cirrus-ci.com/task/5572163880091648\n\nI suspect one also needs the console detach thing.\n\nI don't really understand why start_postmaster() bothers to wrap postmaster\nstart through cmd.exe, particularly when it prevents us from getting\npostmaster's pid. Also the caveats around cmd.exe and sharing mode.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 4 Mar 2022 13:56:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-04 16:51:32 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > That fixed the immediate problem, but dblink, postgres_fdw tests failed:\n> > +ERROR: could not establish connection\n> > +DETAIL: connection to server at \"localhost\" (::1), port 5432 failed: FATAL:\n> > role \"SYSTEM\" does not exist\n>\n> [ scratches head... ] Where is libpq getting that username from?\n> Why is it different from whatever we'd determined during initdb?\n> (Maybe a case-folding issue??)\n\nWhen running as a service (via pg_ctl register) the default username to run\nunder appears to be SYSTEM. Which then differs from the user that vcregress.pl\nruns under. Trying to make it use the current user now - I don't know what\npermissions services are needed to run a service as a user or such.\n\n\n> > The dblink and fdw failures can presumably be fixed by passing current_user as\n> > the username. That seems like a good idea anyway?\n>\n> I don't think it's a good idea to hack that without understanding\n> why it's suddenly going wrong.\n\nI think I understand why - seems more a question of whether we want to support\nrunning tests against a server running as a different user.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 4 Mar 2022 14:04:44 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I don't really understand why start_postmaster() bothers to wrap postmaster\n> start through cmd.exe, particularly when it prevents us from getting\n> postmaster's pid. Also the caveats around cmd.exe and sharing mode.\n\nI think the basic idea is to avoid having to write our own code to do\nthe I/O redirections --- that's certainly why we use a shell on the\nUnix side. But it might well be that biting the bullet and handling\nredirection ourselves would be easier than coping with all these\nother problems.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Mar 2022 17:06:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "On 2022-03-04 14:04:44 -0800, Andres Freund wrote:\n> Trying to make it use the current user now - I don't know what\n> permissions services are needed to run a service as a user or such.\n\nMy first attempt of using %USERNAME% failed:\n\n[22:10:20.862] c:\\cirrus>call tmp_install\\bin\\pg_ctl.exe register -D tmp_check/db \"-UContainerAdministrator\"\n[22:10:20.889] pg_ctl: could not register service \"PostgreSQL\": error code 1057\n\nseems to require a domain for some reason:\n[22:33:54.599] c:\\cirrus>call tmp_install\\bin\\pg_ctl.exe register -Dtmp_check/db -U \"User Manager\\ContainerAdministrator\"\n\nbut that then doesn't start:\n[22:33:54.660] c:\\cirrus>call net start PostgreSQL\n[22:33:55.887] System error 1068 has occurred.\n\nSo it indeed seems hard to start a service as the current user. At least with\nmy limited windows knowledge.\n\n\n",
"msg_date": "Fri, 4 Mar 2022 14:44:12 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "On Sat, Mar 5, 2022 at 10:56 AM Andres Freund <andres@anarazel.de> wrote:\n> I suspect one also needs the console detach thing.\n\nI tried adding DETACHED_PROCESS (which should be like calling\nFreeConsole() in child process) and then I tried CREATE_NEW_CONSOLE\ninstead, on top of CREATE_NEW_PROCESS_GROUP. Neither helped, though I\nlost the postmaster's output.\n\nThings I tried and their output are here:\nhttps://github.com/macdice/postgres/commits/hack3\n\n\n",
"msg_date": "Sat, 5 Mar 2022 13:21:26 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-05 13:21:26 +1300, Thomas Munro wrote:\n> On Sat, Mar 5, 2022 at 10:56 AM Andres Freund <andres@anarazel.de> wrote:\n> > I suspect one also needs the console detach thing.\n>\n> I tried adding DETACHED_PROCESS (which should be like calling\n> FreeConsole() in child process) and then I tried CREATE_NEW_CONSOLE\n> instead, on top of CREATE_NEW_PROCESS_GROUP. Neither helped, though I\n> lost the postmaster's output.\n\nI think the issue with the process group is real, but independent of the\nfailing test.Locally just specifying CREATE_NEW_PROCESS_GROUP fixes the\nproblem of a pg_ctl start'ed database being stopped after ctrl-c.\n\nI think CI failing is due to cirrus bug, assuming a bit too much similarity\nbetween unix and windows....\n\nhttps://github.com/cirruslabs/cirrus-ci-agent/issues/218#issuecomment-1059657781\n\nAs indicated in the message above it, there's a workaround. Not sure if worth\ncommitting, if they were to fix it in a few days? cfbot could be repaired by\njust adding a repo environment variable of CIRRUS_AGENT_VERSION 1.73.2...\nOTOH, committing and reverting a single line + comment is cheap.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 4 Mar 2022 19:19:41 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "On Sat, Mar 5, 2022 at 4:19 PM Andres Freund <andres@anarazel.de> wrote:\n> https://github.com/cirruslabs/cirrus-ci-agent/issues/218#issuecomment-1059657781\n\nOof. Nice detective work.\n\n> As indicated in the message above it, there's a workaround. Not sure if worth\n> committing, if they were to fix it in a few days? cfbot could be repaired by\n> just adding a repo environment variable of CIRRUS_AGENT_VERSION 1.73.2...\n> OTOH, committing and reverting a single line + comment is cheap.\n\nI vote for committing that workaround into the tree temporarily,\nbecause it's not just cfbot, it's also everyone's dev branches on\nGithub + the official mirror that are red.\n\n\n",
"msg_date": "Sat, 5 Mar 2022 16:39:21 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-05 16:39:21 +1300, Thomas Munro wrote:\n> On Sat, Mar 5, 2022 at 4:19 PM Andres Freund <andres@anarazel.de> wrote:\n> > https://github.com/cirruslabs/cirrus-ci-agent/issues/218#issuecomment-1059657781\n> \n> Oof. Nice detective work.\n\nThanks.\n\n> > As indicated in the message above it, there's a workaround. Not sure if worth\n> > committing, if they were to fix it in a few days? cfbot could be repaired by\n> > just adding a repo environment variable of CIRRUS_AGENT_VERSION 1.73.2...\n> > OTOH, committing and reverting a single line + comment is cheap.\n> \n> I vote for committing that workaround into the tree temporarily,\n> because it's not just cfbot, it's also everyone's dev branches on\n> Github + the official mirror that are red.\n\nI'll do so after making dinner, unless you want to do so sooner. It did fix\nthe problem (intermixed with a few irrelevant changes): https://cirrus-ci.com/task/4928987829895168\n\n- Andres\n\n\n",
"msg_date": "Fri, 4 Mar 2022 20:06:43 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "On 2022-03-04 20:06:43 -0800, Andres Freund wrote:\n> On 2022-03-05 16:39:21 +1300, Thomas Munro wrote:\n> > I vote for committing that workaround into the tree temporarily,\n> > because it's not just cfbot, it's also everyone's dev branches on\n> > Github + the official mirror that are red.\n> \n> I'll do so after making dinner, unless you want to do so sooner. It did fix\n> the problem (intermixed with a few irrelevant changes): https://cirrus-ci.com/task/4928987829895168\n\nPushed.\n\n\n",
"msg_date": "Fri, 4 Mar 2022 22:01:22 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "\nOn 3/4/22 17:04, Andres Freund wrote:\n> Hi,\n>\n> On 2022-03-04 16:51:32 -0500, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> That fixed the immediate problem, but dblink, postgres_fdw tests failed:\n>>> +ERROR: could not establish connection\n>>> +DETAIL: connection to server at \"localhost\" (::1), port 5432 failed: FATAL:\n>>> role \"SYSTEM\" does not exist\n>> [ scratches head... ] Where is libpq getting that username from?\n>> Why is it different from whatever we'd determined during initdb?\n>> (Maybe a case-folding issue??)\n> When running as a service (via pg_ctl register) the default username to run\n> under appears to be SYSTEM. Which then differs from the user that vcregress.pl\n> runs under. Trying to make it use the current user now - I don't know what\n> permissions services are needed to run a service as a user or such.\n\n\nSeServiceLogonRight is what the user needs to run the service.\n\n\nTo manage it (e.g. stop or start it) they need some extra permissions,\nsee for example <http://get-carbon.org/Grant-ServicePermission.html>\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 5 Mar 2022 07:40:18 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-04 22:01:22 -0800, Andres Freund wrote:\n> On 2022-03-04 20:06:43 -0800, Andres Freund wrote:\n> > On 2022-03-05 16:39:21 +1300, Thomas Munro wrote:\n> > > I vote for committing that workaround into the tree temporarily,\n> > > because it's not just cfbot, it's also everyone's dev branches on\n> > > Github + the official mirror that are red.\n> >\n> > I'll do so after making dinner, unless you want to do so sooner. It did fix\n> > the problem (intermixed with a few irrelevant changes): https://cirrus-ci.com/task/4928987829895168\n>\n> Pushed.\n\nCirrus now provides a way to get the old behaviour back without pinning an old\nagent version. See attached and a run passing the problematic steps [1]:\n\nThe way this is intended to be done in cirrus (rather than preventing it from\nkilling \"escaped\" processes) would be to use 'background_script' to run\nsomething longer running.\n\nUnfortunately that's surprisingly hard with our tooling, or maybe I'm just\ndaft:\n\n1) We don't have a way to wait for server startup to finish if we don't block\n on pg_ctl start. So we'd have to use 'sleep', write a loop around\n pq_isready. Perhaps pg_ctl should have an option to wait for server startup\n / shutdown without doing the starting/stopping itself?\n\n2) There's no trivial way of starting postgres with pg_ctl and then waiting\n for the server to be shut down in the background script. The easiest would\n be to start psql just wait for it to be killed by an immediate shutdown :/.\n\n3) We can't just start postgres in the foreground to get around 2), because\n pg_ctl does the dropping of permissions we need, rather than postgres\n itself. It'd also need 1).\n\nSeems like we're missing some fairly basic tooling somehow...\n\nRegards,\n\nAndres\n\n[1] https://cirrus-ci.com/task/5810196135018496",
"msg_date": "Sat, 19 Mar 2022 11:16:48 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
},
{
"msg_contents": "On 2022-03-19 11:16:48 -0700, Andres Freund wrote:\n> Cirrus now provides a way to get the old behaviour back without pinning an old\n> agent version. See attached\n\nI've pushed that now. If we can come up with a better recipe for starting\npostgres in the background, cool. But until then...\n\n\n",
"msg_date": "Sat, 19 Mar 2022 11:51:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Regression tests failures on Windows Server 2019 - on master at\n commit # d816f366b"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile using pg_rewind, I found that it is a bit difficult to use pg_rewind\nas it seems to copy even the configuration files and also remove some of\nthe files created on the old primary which may not be present on the new\nprimary. Similarly it copies files under the data directory of the new\nprimary which may not be needed or which possibly could be junk files.\n\nI would propose to have a couple of new command line arguments to\npg_rewind. One, a comma separated list of files which should be preserved\non the old primary, in other words which shouldn't be overwritten from the\nnew primary. Second, a comma separated list of files which should be\nexcluded while copying files from the new primary onto the old primary.\n\nWould like to invite more thoughts from the hackers.\n\nRegards,\nRKN\n\nHi,While using pg_rewind, I found that it is a bit difficult to use pg_rewind as it seems to copy even the configuration files and also remove some of the files created on the old primary which may not be present on the new primary. Similarly it copies files under the data directory of the new primary which may not be needed or which possibly could be junk files.I would propose to have a couple of new command line arguments to pg_rewind. One, a comma separated list of files which should be preserved on the old primary, in other words which shouldn't be overwritten from the new primary. Second, a comma separated list of files which should be excluded while copying files from the new primary onto the old primary.Would like to invite more thoughts from the hackers.Regards,RKN",
"msg_date": "Fri, 4 Mar 2022 19:49:51 +0530",
"msg_from": "RKN Sai Krishna <rknsaiforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_rewind enhancements"
},
{
"msg_contents": "On Fri, Mar 4, 2022 at 7:50 PM RKN Sai Krishna\n<rknsaiforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> While using pg_rewind, I found that it is a bit difficult to use pg_rewind as it seems to copy even the configuration files and also remove some of the files created on the old primary which may not be present on the new primary. Similarly it copies files under the data directory of the new primary which may not be needed or which possibly could be junk files.\n\nIt's possible that the postgres vendors can have their own\nfiles/directories in the data directory which they may not want to be\noverwritten by the pg_rewind. Also, if the source server is\ncompromised (somebody put in some junk file) for whatever reasons,\nnobody wants those files to pass over to the target server.\n\n> I would propose to have a couple of new command line arguments to pg_rewind. One, a comma separated list of files which should be preserved on the old primary, in other words which shouldn't be overwritten from the new primary.\n\n+1 from my end to have a new pg_rewind option such as --skip-file-list\nor --skip-list which is basically a list of files that pg_rewind will\nnot overwrite in the target directory.\n\n> Second, a comma separated list of files which should be excluded while copying files from the new primary onto the old primary.\n\nI'm not sure how it is different from the above option\n--skip-file-list or --skip-list?\n\nAnother idea I can think of is to be able to tell pg_rewind \"don't\ncopy/bring in any non-postgres files/directories from source server to\ntarget server\". This requires pg_rewind to know what are\npostgres/non-postgres files/directories. Probably, we could define a\nstatic list of what a postgres files/directories constitute, but this\ncreates tight-coupling with the core, say a new directory or\nconfiguration file gets added to the core, this static list in\npg_rewind needs to be updated. 
Having said that initdb.c already has\nthis sort of list [1], we need similar kind of structures and probably\nanother structure for files (postgresql.auto.conf, postgresql.conf,\npg_ident.conf, pg_hba.conf, postmaster.opts, backup_label,\nstandby.signal, recovery.signal etc.).\n\nAbove option seems an overkill, but --skip-file-list or --skip-list is\ndefinitely an improvement IMO.\n\n[1] static const char *const subdirs[] = {\n \"global\",\n \"pg_wal/archive_status\",\n \"pg_commit_ts\",\n \"pg_dynshmem\",\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 11 Mar 2022 09:42:50 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind enhancements"
}
] |
[
{
"msg_contents": "Hello,\r\n\r\nTL;DR: this patch lets you specify exactly one authentication method in\r\nthe connection string, and libpq will fail the connection if the server\r\ndoesn't use that method.\r\n\r\n(This is not intended for PG15. I'm generally anxious about posting\r\nexperimental work during a commitfest, but there's been enough\r\nconversation about this topic recently that I felt like it'd be useful\r\nto have code to point to.)\r\n\r\n== Proposal and Alternatives ==\r\n\r\n$subject keeps coming up in threads. I think my first introduction to\r\nit was after the TLS injection CVE, and then it came up again in the\r\npluggable auth thread. It's hard for me to generalize based on \"sound\r\nbites\", but among the proposals I've seen are\r\n\r\n1. reject plaintext passwords\r\n2. reject a configurable list of unacceptable methods\r\n3. allow client and server to negotiate a method\r\n\r\nAll of them seem to have merit. I'm personally motivated by the case\r\nbrought up by the CVE: if I'm expecting client certificate\r\nauthentication, it's not acceptable for the server to extract _any_\r\ninformation about passwords from my system, whether they're plaintext,\r\nhashed, or SCRAM-protected. So I chose not to implement option 1. And\r\noption 3 looked like a lot of work to take on in an experiment without\r\na clear consensus.\r\n\r\nHere is my take on option 2, then: you get to choose exactly one method\r\nthat the client will accept. If you want to use client certificates,\r\nuse require_auth=cert. If you want to force SCRAM, use\r\nrequire_auth=scram-sha-256. If the server asks for something different,\r\nlibpq will fail. If the server tries to get away without asking you for\r\nauthentication, libpq will fail. There is no negotiation.\r\n\r\n== Why Force Authn? ==\r\n\r\nI think my decision to fail if the server doesn't authenticate might be\r\ncontroversial. 
It doesn't provide additional protection against active\r\nattack unless you're using a mutual authentication method (SCRAM),\r\nbecause you can't prove that the server actually did anything with its\r\nside of the handshake. But this approach grew on me for a few reasons:\r\n\r\n- When using SCRAM, it allows the client to force a server to\r\nauthenticate itself, even when channel bindings aren't being used. (I\r\nreally think it's weird that we let the server get away with that\r\ntoday.)\r\n\r\n- In cases where you want to ensure that your actions are logged for\r\nlater audit, you can be reasonably sure that you're connecting to a\r\ndatabase that hasn't been configured with a `trust` setting.\r\n\r\n- For cert authentication, it ensures that the server asked for a cert\r\nand that you actually sent one. This is more forward-looking; today, we\r\nalways ask for a certificate from the client even if we don't use it.\r\nBut if implicit TLS takes off, I'd expect to see more middleware, with\r\nmore potential for misconfiguration.\r\n\r\n== General Thoughts ==\r\n\r\nI like that this approach fits nicely into the existing code. The\r\nmajority of the patch just beefs up check_expected_areq(). The new flag\r\nthat tracks whether or not we've authenticated is scattered around more\r\nthan I would like, but I'm hopeful that some of the pluggable auth\r\nconversations will lead to nice refactoring opportunities for those\r\ninternal helpers.\r\n\r\nThere's currently no way to prohibit client certificates from being\r\nsent. If my use case is \"servers shouldn't be able to extract password\r\ninfo if the client expects certificates\", then someone else may very\r\nwell say \"servers shouldn't be able to extract my client certificate if\r\nI wanted to use SCRAM\". 
Likewise, this feature won't prevent a GSS\r\nauthenticated channel -- but we do have gssencmode=disable, so I'm less\r\nconcerned there.\r\n\r\nI made the assumption that a GSS encrypted channel authenticates both\r\nparties to each other, but I don't actually know what guarantees are\r\nmade there. I have not tested SSPI.\r\n\r\nI'm not a fan of the multiple spellings of \"password\" (\"ldap\", \"pam\",\r\net al). My initial thought was that it'd be nice to match the client\r\nsetting to the HBA setting in the server. But I don't think it's really\r\nall that helpful; worst-case, it pretends to do something it can't,\r\nsince the client can't determine what the plaintext password is\r\nactually used for on the backend. And if someone devises (say) a SASL\r\nscheme for proxied LDAP authentication, that spelling becomes obsolete.\r\n\r\nSpeaking of obsolete, the current implementation assumes that any SASL\r\nexchange must be for SCRAM. That won't be anywhere close to future-\r\nproof.\r\n\r\nThanks,\r\n--Jacob",
"msg_date": "Sat, 5 Mar 2022 01:04:05 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": true,
"msg_subject": "[PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "Jacob Champion <pchampion@vmware.com> writes:\n> $subject keeps coming up in threads. I think my first introduction to\n> it was after the TLS injection CVE, and then it came up again in the\n> pluggable auth thread. It's hard for me to generalize based on \"sound\n> bites\", but among the proposals I've seen are\n\n> 1. reject plaintext passwords\n> 2. reject a configurable list of unacceptable methods\n> 3. allow client and server to negotiate a method\n\n> All of them seem to have merit.\n\nAgreed.\n\n> Here is my take on option 2, then: you get to choose exactly one method\n> that the client will accept. If you want to use client certificates,\n> use require_auth=cert. If you want to force SCRAM, use\n> require_auth=scram-sha-256. If the server asks for something different,\n> libpq will fail. If the server tries to get away without asking you for\n> authentication, libpq will fail. There is no negotiation.\n\nSeems reasonable, but I bet that for very little more code you could\naccept a comma-separated list of allowed methods; libpq already allows\ncomma-separated lists for some other connection options. That seems\nlike it'd be a useful increment of flexibility.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Mar 2022 20:19:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Fri, Mar 04, 2022 at 08:19:26PM -0500, Tom Lane wrote:\n> Jacob Champion <pchampion@vmware.com> writes:\n>> Here is my take on option 2, then: you get to choose exactly one method\n>> that the client will accept. If you want to use client certificates,\n>> use require_auth=cert. If you want to force SCRAM, use\n>> require_auth=scram-sha-256. If the server asks for something different,\n>> libpq will fail. If the server tries to get away without asking you for\n>> authentication, libpq will fail. There is no negotiation.\n\nFine by me to put all the control on the client-side, that makes the\nwhole much simpler to reason about.\n\n> Seems reasonable, but I bet that for very little more code you could\n> accept a comma-separated list of allowed methods; libpq already allows\n> comma-separated lists for some other connection options. That seems\n> like it'd be a useful increment of flexibility.\n\nSame impression here, so +1 for supporting a comma-separated list of\nvalues here. This is already handled in parse_comma_separated_list(),\nnow used for multiple hosts and hostaddrs.\n--\nMichael",
"msg_date": "Sat, 5 Mar 2022 15:20:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "\nOn 3/4/22 20:19, Tom Lane wrote:\n> Jacob Champion <pchampion@vmware.com> writes:\n>> $subject keeps coming up in threads. I think my first introduction to\n>> it was after the TLS injection CVE, and then it came up again in the\n>> pluggable auth thread. It's hard for me to generalize based on \"sound\n>> bites\", but among the proposals I've seen are\n>> 1. reject plaintext passwords\n>> 2. reject a configurable list of unacceptable methods\n>> 3. allow client and server to negotiate a method\n>> All of them seem to have merit.\n> Agreed.\n>\n>> Here is my take on option 2, then: you get to choose exactly one method\n>> that the client will accept. If you want to use client certificates,\n>> use require_auth=cert. If you want to force SCRAM, use\n>> require_auth=scram-sha-256. If the server asks for something different,\n>> libpq will fail. If the server tries to get away without asking you for\n>> authentication, libpq will fail. There is no negotiation.\n> Seems reasonable, but I bet that for very little more code you could\n> accept a comma-separated list of allowed methods; libpq already allows\n> comma-separated lists for some other connection options. That seems\n> like it'd be a useful increment of flexibility.\n>\n> \t\t\t\n\n\nJust about necessary I guess, since you can specify that a client cert\nis required in addition to some other auth method, so for such cases you\nmight want something like \"required_auth=cert,scram-sha-256\"? Or do we\nneed a way of specifying the combination?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 5 Mar 2022 07:46:55 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 3/4/22 20:19, Tom Lane wrote:\n>> Seems reasonable, but I bet that for very little more code you could\n>> accept a comma-separated list of allowed methods; libpq already allows\n>> comma-separated lists for some other connection options. That seems\n>> like it'd be a useful increment of flexibility.\n\n> Just about necessary I guess, since you can specify that a client cert\n> is required in addition to some other auth method, so for such cases you\n> might want something like \"required_auth=cert,scram-sha-256\"? Or do we\n> need a way of specifying the combination?\n\nI'd view the comma as strictly meaning OR, so that you might need some\nnotation like \"required_auth=cert+scram-sha-256\" if you want to demand\nANDed conditions. It might be better to handle TLS-specific\nconditions orthogonally to the authentication exchange, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 05 Mar 2022 10:12:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Sat, 2022-03-05 at 01:04 +0000, Jacob Champion wrote:\n> TL;DR: this patch lets you specify exactly one authentication method in\n> the connection string, and libpq will fail the connection if the server\n> doesn't use that method.\n> \n> (This is not intended for PG15. I'm generally anxious about posting\n> experimental work during a commitfest, but there's been enough\n> conversation about this topic recently that I felt like it'd be useful\n> to have code to point to.)\n> \n> == Proposal and Alternatives ==\n> \n> $subject keeps coming up in threads. I think my first introduction to\n> it was after the TLS injection CVE, and then it came up again in the\n> pluggable auth thread. It's hard for me to generalize based on \"sound\n> bites\", but among the proposals I've seen are\n> \n> 1. reject plaintext passwords\n> 2. reject a configurable list of unacceptable methods\n> 3. allow client and server to negotiate a method\n> \n> All of them seem to have merit. I'm personally motivated by the case\n> brought up by the CVE: if I'm expecting client certificate\n> authentication, it's not acceptable for the server to extract _any_\n> information about passwords from my system, whether they're plaintext,\n> hashed, or SCRAM-protected. So I chose not to implement option 1. And\n> option 3 looked like a lot of work to take on in an experiment without\n> a clear consensus.\n> \n> Here is my take on option 2, then: you get to choose exactly one method\n> that the client will accept.\n\nI am all for the idea, but you implemented the reverse of proposal 2.\n\nWouldn't it be better to list the *rejected* authentication methods?\nThen we could have \"password\" on there by default.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 07 Mar 2022 11:44:00 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Mon, 2022-03-07 at 11:44 +0100, Laurenz Albe wrote:\r\n> I am all for the idea, but you implemented the reverse of proposal 2.\r\n\r\n(This email was caught in my spam filter; sorry for the delay.)\r\n\r\n> Wouldn't it be better to list the *rejected* authentication methods?\r\n> Then we could have \"password\" on there by default.\r\n\r\nSpecifying the allowed list rather than the denied list tends to have\r\nbetter security properties.\r\n\r\nIn the case I'm pursuing (the attack vector from the CVE), the end user\r\nexpects certificates to be used. Any other authentication method --\r\nplaintext, hashed, SCRAM, Kerberos -- is unacceptable; it shouldn't be\r\npossible for the server to extract any information about the client\r\nenvironment other than the cert. And I don't want to have to specify\r\nthe whole list of things that _aren't_ allowed, and keep that list\r\nupdated as we add new fancy auth methods, if I just want certs to be\r\nused. So that's my argument for making the methods opt-in rather than\r\nopt-out.\r\n\r\nBut that doesn't help your case; you want to choose a good default, and\r\nI agree that's important. Since there are arguments already for\r\naccepting a OR in the list, and -- if we couldn't find a good\r\northogonal method for certs, like Tom suggested -- an AND, maybe it\r\nwouldn't be so bad to accept a NOT as well?\r\n\r\n require_auth=cert # certs only\r\n require_auth=cert+scram-sha-256 # SCRAM wrapped by certs\r\n require_auth=cert,scram-sha-256 # SCRAM or certs (or both)\r\n require_auth=!password # anything but plaintext\r\n require_auth=!password,!md5 # no plaintext or MD5\r\n\r\nBut it doesn't ever make sense to mix them:\r\n\r\n require_auth=cert,!password # error: !password is useless\r\n require_auth=!password,password # error: nonsense\r\n\r\n--Jacob\r\n",
"msg_date": "Wed, 23 Mar 2022 21:31:32 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
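The grammar floated in the message above (commas for OR, a leading `!` for NOT, and rejection of nonsense mixes like `cert,!password`) can be prototyped as a small standalone validator. This is only an illustrative sketch of the proposed semantics with hypothetical names; it is not libpq's actual option parsing:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Hypothetical validator for the proposed require_auth grammar: a
 * comma-separated OR list of method names, each optionally negated with
 * a leading '!'.  Mixing negated and plain entries, an empty entry, or
 * a duplicated method is rejected, mirroring the "error" cases above.
 */
static bool
require_auth_valid(const char *spec)
{
	char		buf[256];
	const char *seen[32];
	int			nseen = 0;
	bool		negated_any = false;
	bool		plain_any = false;
	char	   *tok;

	if (spec == NULL || strlen(spec) >= sizeof(buf))
		return false;
	strcpy(buf, spec);

	for (tok = strtok(buf, ","); tok != NULL; tok = strtok(NULL, ","))
	{
		bool		negated = (tok[0] == '!');
		const char *name = negated ? tok + 1 : tok;

		if (*name == '\0')
			return false;		/* "" or a bare "!" */
		if (negated)
			negated_any = true;
		else
			plain_any = true;

		/* a duplicated method probably indicates a typo */
		for (int i = 0; i < nseen; i++)
			if (strcmp(seen[i], name) == 0)
				return false;
		if (nseen < 32)
			seen[nseen++] = name;
	}

	if (nseen == 0)
		return false;			/* empty list */
	return !(negated_any && plain_any); /* e.g. "cert,!password" */
}
```

Under these rules, `require_auth_valid("!password,!md5")` succeeds while `require_auth_valid("cert,!password")` and `require_auth_valid("!password,password")` fail, matching the examples in the message.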
{
"msg_contents": "On Wed, 2022-03-23 at 21:31 +0000, Jacob Champion wrote:\n> On Mon, 2022-03-07 at 11:44 +0100, Laurenz Albe wrote:\n> > I am all for the idea, but you implemented the reverse of proposal 2.\n> >\n> > Wouldn't it be better to list the *rejected* authentication methods?\n> > Then we could have \"password\" on there by default.\n> \n> Specifying the allowed list rather than the denied list tends to have\n> better security properties.\n> \n> In the case I'm pursuing (the attack vector from the CVE), the end user\n> expects certificates to be used. Any other authentication method --\n> plaintext, hashed, SCRAM, Kerberos -- is unacceptable;\n\nThat makes sense.\n\n> But that doesn't help your case; you want to choose a good default, and\n> I agree that's important. Since there are arguments already for\n> accepting a OR in the list, and -- if we couldn't find a good\n> orthogonal method for certs, like Tom suggested -- an AND, maybe it\n> wouldn't be so bad to accept a NOT as well?\n> \n> require_auth=cert # certs only\n> require_auth=cert+scram-sha-256 # SCRAM wrapped by certs\n> require_auth=cert,scram-sha-256 # SCRAM or certs (or both)\n> require_auth=!password # anything but plaintext\n> require_auth=!password,!md5 # no plaintext or MD5\n\nGreat, if there is a !something syntax, then I have nothing left to wish.\nIt may not be the most secure way do do it, but it sure is convenient.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Thu, 24 Mar 2022 06:17:04 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Sat, Mar 05, 2022 at 01:04:05AM +0000, Jacob Champion wrote:\n> the connection string, and libpq will fail the connection if the server\n> doesn't use that method.\n> \n> (This is not intended for PG15. I'm generally anxious about posting\n> experimental work during a commitfest, but there's been enough\n> conversation about this topic recently that I felt like it'd be useful\n> to have code to point to.)\n\nJacob, do you still have plans to work on this patch?\n--\nMichael",
"msg_date": "Wed, 1 Jun 2022 16:55:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Wed, Jun 1, 2022 at 12:55 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Jacob, do you still have plans to work on this patch?\n\nYes, definitely. That said, the more the merrier if there are others\ninterested in taking a shot at it. There are a large number of\nalternative implementation proposals.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Wed, 1 Jun 2022 08:11:28 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "v2 rebases over latest, removes the alternate spellings of \"password\",\nand implements OR operations with a comma-separated list. For example:\n\n- require_auth=cert means that the server must ask for, and the client\nmust provide, a client certificate.\n- require_auth=password,md5 means that the server must ask for a\nplaintext password or an MD5 hash.\n- require_auth=scram-sha-256,gss means that one of SCRAM, Kerberos\nauthentication, or GSS transport encryption must be successfully\nnegotiated.\n- require_auth=scram-sha-256,cert means that either a SCRAM handshake\nmust be completed, or the server must request a client certificate. It\nhas a potential pitfall in that it allows a partial SCRAM handshake,\nas long as a certificate is requested and sent.\n\nAND and NOT, discussed upthread, are not yet implemented. I tied\nmyself up in knots trying to make AND generic, so I think I'm going to\ntackle NOT first, instead. The problem with AND is that it only makes\nsense when one (and only one) of the options is a form of transport\nauthentication. (E.g. password+md5 never makes sense.) And although\ncert+<something> and gss+<something> could be useful, the latter case\nis already handled by gssencmode=require, and the gssencmode option is\nmore powerful since you can disable it (or set it to don't-care).\n\nI'm not generally happy with how the \"cert\" option is working. With\nthe other methods, if you don't include a method in the list, then the\nconnection fails if the server tries to negotiate it. But if you don't\ninclude the cert method in the list, we don't forbid the server from\nasking for a cert, because the server always asks for a client\ncertificate via TLS whether it needs one or not. Behaving in the\nintuitive way here would effectively break all use of TLS.\n\nSo I think Tom's recommendation that the cert method be handled by an\northogonal option was a good one, and if that works then maybe we\ndon't need an AND syntax at all. Presumably I can just add an option\nthat parallels gssencmode, and then the current don't-care semantics\ncan be explicitly controlled. Skipping AND also means that I don't\nhave to create a syntax that can handle AND and NOT at the same time,\nwhich I was dreading.\n\n--Jacob",
"msg_date": "Tue, 7 Jun 2022 14:22:28 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Tue, Jun 07, 2022 at 02:22:28PM -0700, Jacob Champion wrote:\n> v2 rebases over latest, removes the alternate spellings of \"password\",\n> and implements OR operations with a comma-separated list. For example:\n> \n> - require_auth=cert means that the server must ask for, and the client\n> must provide, a client certificate.\n\nHmm.. Nya.\n\n> - require_auth=password,md5 means that the server must ask for a\n> plaintext password or an MD5 hash.\n> - require_auth=scram-sha-256,gss means that one of SCRAM, Kerberos\n> authentication, or GSS transport encryption must be successfully\n> negotiated.\n\nMakes sense.\n\n> - require_auth=scram-sha-256,cert means that either a SCRAM handshake\n> must be completed, or the server must request a client certificate. It\n> has a potential pitfall in that it allows a partial SCRAM handshake,\n> as long as a certificate is requested and sent.\n\nEr, this one could be a problem protocol-wise for SASL, because that\nwould mean that the authentication gets completed but that the\nexchange has begun and is not finished?\n\n> AND and NOT, discussed upthread, are not yet implemented. I tied\n> myself up in knots trying to make AND generic, so I think I'm going to\n> tackle NOT first, instead. The problem with AND is that it only makes\n> sense when one (and only one) of the options is a form of transport\n> authentication. (E.g. password+md5 never makes sense.) And although\n> cert+<something> and gss+<something> could be useful, the latter case\n> is already handled by gssencmode=require, and the gssencmode option is\n> more powerful since you can disable it (or set it to don't-care).\n\nI am on the edge regarding NOT as well, and I am unsure of the actual\nbenefits we could get from it as long as one can provide a white list\nof auth methods. If we don't see an immediate benefit in that, I'd\nrather choose a minimal, still useful, design. As a whole, I would\nvote with adding only support for OR and a comma-separated list like\nyour proposal.\n\n> I'm not generally happy with how the \"cert\" option is working. With\n> the other methods, if you don't include a method in the list, then the\n> connection fails if the server tries to negotiate it. But if you don't\n> include the cert method in the list, we don't forbid the server from\n> asking for a cert, because the server always asks for a client\n> certificate via TLS whether it needs one or not. Behaving in the\n> intuitive way here would effectively break all use of TLS.\n\nAgreed. Looking at what you are doing with allowed_auth_method_cert,\nthis makes the code harder to follow, which is risky for any\nsecurity-related feature, and that's different than the other methods\nwhere we have the AUTH_REQ_* codes. This leads to extra complications\nin the shape of ssl_cert_requested and ssl_cert_sent, as well. This\nstrongly looks like what we do for channel binding as it has\nrequirements separated from the actual auth methods but has dependency\nwith them, so a different parameter, as suggested, would make sense.\nIf we are not sure about this part, we could discard it in the first\ninstance of the patch.\n\n> So I think Tom's recommendation that the cert method be handled by an\n> orthogonal option was a good one, and if that works then maybe we\n> don't need an AND syntax at all. Presumably I can just add an option\n> that parallels gssencmode, and then the current don't-care semantics\n> can be explicitly controlled. Skipping AND also means that I don't\n> have to create a syntax that can handle AND and NOT at the same time,\n> which I was dreading.\n\nI am not convinced that we have any need for the AND grammar within\none parameter, as that's basically the same thing as defining multiple\nconnection parameters, isn't it? This is somewhat a bit similar to\nthe interactions of channel binding with this new parameter and what\nyou have implemented. For example, channel_binding=require\nrequire_auth=md5 would imply both and should fail, even if it makes\nlittle sense because MD5 has no idea of channel binding. One\ninteresting case comes down to stuff like channel_binding=require\nrequire_auth=\"md5,scram-sha-256\", where I think that we should still\nfail even if the server asks for MD5 and enforce an equivalent of an\nAND grammar through the parameters. This reasoning limits the\ndependencies between each parameter and treats the areas where these\nare checked independently, which is what check_expected_areq() does\nfor channel binding. So that's more robust at the end.\n\nSpeaking of which, I would add tests to check some combinations of\nchannel_binding and require_auth.\n\n+ appendPQExpBuffer(&conn->errorMessage,\n+ libpq_gettext(\"auth method \\\"%s\\\" required, but %s\\n\"),\n+ conn->require_auth, reason);\nThis one is going to make translation impossible. One way to tackle\nthis issue is to use \"auth method \\\"%s\\\" required: %s\".\n\n+ {\"require_auth\", NULL, NULL, NULL,\n+ \"Require-Auth\", \"\", 14, /* sizeof(\"scram-sha-256\") == 14 */\n+ offsetof(struct pg_conn, require_auth)},\nWe could have an environment variable for that.\n\n+/*\n+ * Convenience macro for checking the allowed_auth_methods bitmask. Caller must\n+ * ensure that type is not greater than 31 (high bit of the bitmask).\n+ */\n+#define auth_allowed(conn, type) \\\n+ (((conn)->allowed_auth_methods & (1 << (type))) != 0)\nBetter to add a compile-time check with StaticAssertDecl() then? Or\nadd a note about that in pqcomm.h?\n\n+ else if (auth_allowed(conn, AUTH_REQ_GSS) && conn->gssenc)\n+ {\nThis field is only available under ENABLE_GSS, so this would fail to\ncompile when building without it?\n\n+ method = parse_comma_separated_list(&s, &more);\n+ if (method == NULL)\n+ goto oom_error;\nThis should free the malloc'd copy of the element parsed, no? That\nmeans a free at the end of the while loop processing the options.\n\n+ /*\n+ * Sanity check; a duplicated method probably indicates a typo in a\n+ * setting where typos are extremely risky.\n+ */\nNot sure why this is a problem. Fine by me to check that, but this is\nan OR list, so specifying one element twice means the same as once.\n\nI get that it is not the priority yet as long as the design is not\ncompletely clear, but having some docs would be nice :)\n--\nMichael",
"msg_date": "Thu, 9 Jun 2022 13:58:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
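The review above quotes the patch's `auth_allowed()` bitmask macro and suggests a compile-time guard so that no AUTH_REQ code can silently overflow the 32-bit mask. A minimal standalone sketch of that idea follows; the `AUTH_REQ_*` values and helper names here are illustrative stand-ins (the authoritative codes live in `src/include/libpq/pqcomm.h`), not libpq's code:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of a 32-bit auth-method allowlist, modeled on the
 * auth_allowed() macro quoted in the review.  Because each protocol
 * code doubles as a bit position, a compile-time check that the
 * largest code fits in the mask (as suggested above) is cheap
 * insurance against future additions.
 */
#define AUTH_REQ_OK			0
#define AUTH_REQ_PASSWORD	3
#define AUTH_REQ_MD5		5
#define AUTH_REQ_GSS		7
#define AUTH_REQ_SASL		10
#define AUTH_REQ_MAX		AUTH_REQ_SASL

_Static_assert(AUTH_REQ_MAX < 32,
			   "AUTH_REQ codes must fit in a 32-bit bitmask");

#define auth_allow(mask, type)		((mask) | (UINT32_C(1) << (type)))
#define auth_allowed(mask, type)	(((mask) & (UINT32_C(1) << (type))) != 0)

/* e.g. what require_auth=scram-sha-256,gss might compile down to */
static uint32_t
scram_or_gss_mask(void)
{
	uint32_t	mask = 0;

	mask = auth_allow(mask, AUTH_REQ_SASL);
	mask = auth_allow(mask, AUTH_REQ_GSS);
	return mask;
}
```

With this shape, a server request carrying any code outside the mask (say `AUTH_REQ_MD5`) fails the `auth_allowed()` test, which is the OR-list behavior described in the thread.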
{
"msg_contents": "On Wed, Jun 8, 2022 at 9:58 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > - require_auth=scram-sha-256,cert means that either a SCRAM handshake\n> > must be completed, or the server must request a client certificate. It\n> > has a potential pitfall in that it allows a partial SCRAM handshake,\n> > as long as a certificate is requested and sent.\n>\n> Er, this one could be a problem protocol-wise for SASL, because that\n> would mean that the authentication gets completed but that the\n> exchange has begun and is not finished?\n\nI think it's already a problem, if you're not using channel_binding.\nThe cert behavior here makes it even less intuitive.\n\n> > AND and NOT, discussed upthread, are not yet implemented. I tied\n> > myself up in knots trying to make AND generic, so I think I'm going to\n> > tackle NOT first, instead. The problem with AND is that it only makes\n> > sense when one (and only one) of the options is a form of transport\n> > authentication. (E.g. password+md5 never makes sense.) And although\n> > cert+<something> and gss+<something> could be useful, the latter case\n> > is already handled by gssencmode=require, and the gssencmode option is\n> > more powerful since you can disable it (or set it to don't-care).\n>\n> I am on the edge regarding NOT as well, and I am unsure of the actual\n> benefits we could get from it as long as one can provide a white list\n> of auth methods. If we don't see an immediate benefit in that, I'd\n> rather choose a minimal, still useful, design. As a whole, I would\n> vote with adding only support for OR and a comma-separated list like\n> your proposal.\n\nPersonally I think the ability to provide a default of `!password` is\nvery compelling. Any allowlist we hardcode won't be future-proof (see\nalso my response to Laurenz upthread [1]).\n\n> > I'm not generally happy with how the \"cert\" option is working. With\n> > the other methods, if you don't include a method in the list, then the\n> > connection fails if the server tries to negotiate it. But if you don't\n> > include the cert method in the list, we don't forbid the server from\n> > asking for a cert, because the server always asks for a client\n> > certificate via TLS whether it needs one or not. Behaving in the\n> > intuitive way here would effectively break all use of TLS.\n>\n> Agreed. Looking at what you are doing with allowed_auth_method_cert,\n> this makes the code harder to follow, which is risky for any\n> security-related feature, and that's different than the other methods\n> where we have the AUTH_REQ_* codes. This leads to extra complications\n> in the shape of ssl_cert_requested and ssl_cert_sent, as well. This\n> strongly looks like what we do for channel binding as it has\n> requirements separated from the actual auth methods but has dependency\n> with them, so a different parameter, as suggested, would make sense.\n> If we are not sure about this part, we could discard it in the first\n> instance of the patch.\n\nI'm pretty motivated to provide the ability to say \"I want cert auth\nonly, nothing else.\" Using a separate parameter would mean we'd need\nsomething like `require_auth=none`, but I think that makes a certain\namount of sense.\n\n> I am not convinced that we have any need for the AND grammar within\n> one parameter, as that's basically the same thing as defining multiple\n> connection parameters, isn't it? This is somewhat a bit similar to\n> the interactions of channel binding with this new parameter and what\n> you have implemented. For example, channel_binding=require\n> require_auth=md5 would imply both and should fail, even if it makes\n> little sense because MD5 has no idea of channel binding. One\n> interesting case comes down to stuff like channel_binding=require\n> require_auth=\"md5,scram-sha-256\", where I think that we should still\n> fail even if the server asks for MD5 and enforce an equivalent of an\n> AND grammar through the parameters. This reasoning limits the\n> dependencies between each parameter and treats the areas where these\n> are checked independently, which is what check_expected_areq() does\n> for channel binding. So that's more robust at the end.\n\nAgreed.\n\n> Speaking of which, I would add tests to check some combinations of\n> channel_binding and require_auth.\n\nSounds good.\n\n> + appendPQExpBuffer(&conn->errorMessage,\n> + libpq_gettext(\"auth method \\\"%s\\\" required, but %s\\n\"),\n> + conn->require_auth, reason);\n> This one is going to make translation impossible. One way to tackle\n> this issue is to use \"auth method \\\"%s\\\" required: %s\".\n\nYeah, I think I have a TODO somewhere about that somewhere. I'm\nconfused about your suggested fix though; can you elaborate?\n\n> + {\"require_auth\", NULL, NULL, NULL,\n> + \"Require-Auth\", \"\", 14, /* sizeof(\"scram-sha-256\") == 14 */\n> + offsetof(struct pg_conn, require_auth)},\n> We could have an environment variable for that.\n\nI think that'd be a good idea. It'd be nice to have the option of\nforcing a particular auth type across a process tree.\n\n> +/*\n> + * Convenience macro for checking the allowed_auth_methods bitmask. Caller must\n> + * ensure that type is not greater than 31 (high bit of the bitmask).\n> + */\n> +#define auth_allowed(conn, type) \\\n> + (((conn)->allowed_auth_methods & (1 << (type))) != 0)\n> Better to add a compile-time check with StaticAssertDecl() then? Or\n> add a note about that in pqcomm.h?\n\nIf we only passed constants, that would work, but we also determine\nthe type at runtime, from the server message. Or am I missing\nsomething?\n\n> + else if (auth_allowed(conn, AUTH_REQ_GSS) && conn->gssenc)\n> + {\n> This field is only available under ENABLE_GSS, so this would fail to\n> compile when building without it?\n\nYes, thank you for the catch. Will fix.\n\n> + method = parse_comma_separated_list(&s, &more);\n> + if (method == NULL)\n> + goto oom_error;\n> This should free the malloc'd copy of the element parsed, no? That\n> means a free at the end of the while loop processing the options.\n\nGood catch again, thanks!\n\n> + /*\n> + * Sanity check; a duplicated method probably indicates a typo in a\n> + * setting where typos are extremely risky.\n> + */\n> Not sure why this is a problem. Fine by me to check that, but this is\n> an OR list, so specifying one element twice means the same as once.\n\nSince this is likely to be a set-and-forget sort of option, and it\nneeds to behave correctly across server upgrades, I'd personally\nprefer that the client tell me immediately if I've made a silly\nmistake. Even for something relatively benign like this (but arguably,\nit's more important for the NOT case).\n\n> I get that it is not the priority yet as long as the design is not\n> completely clear, but having some docs would be nice :)\n\nAgreed; I will tackle that soon.\n\nThanks!\n--Jacob\n\n[1] https://www.postgresql.org/message-id/a14b1f89dcde75fb20afa7a1ffd2c2587b8d1a08.camel%40vmware.com\n\n\n",
"msg_date": "Thu, 9 Jun 2022 16:29:38 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Thu, Jun 09, 2022 at 04:29:38PM -0700, Jacob Champion wrote:\n> On Wed, Jun 8, 2022 at 9:58 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> Er, this one could be a problem protocol-wise for SASL, because that\n>> would mean that the authentication gets completed but that the\n>> exchange has begun and is not finished?\n> \n> I think it's already a problem, if you're not using channel_binding.\n> The cert behavior here makes it even less intuitive.\n\nAh right. I forgot about the part where we need to have the backend\npublish the set of supported mechanisms. If we don't get back\nSCRAM-SHA-256-PLUS we are currently complaining after the exchange has\nbeen initialized, true. Maybe I should look at the RFC more closely.\nThe backend is very strict regarding that and needs to return an error\nback to the client only when the exchange is done, but I don't recall\nall the bits about the client handling.\n\n> Personally I think the ability to provide a default of `!password` is\n> very compelling. Any allowlist we hardcode won't be future-proof (see\n> also my response to Laurenz upthread [1]).\n\nHm, perhaps. You could use that as a default at application level,\nbut the default set in libpq should be backward-compatible (aka allow\neverything, even trust where the backend just sends AUTH_REQ_OK).\nThis is unfortunate but there is a point in not breaking any user's\napplication, as well, particularly with ldap, and note that we have to\nkeep a certain amount of things backward-compatible as a very common\npractice of packaging with Postgres is to have libpq link to binaries\nacross *multiple* major versions (Debian is one if I recall), with the\nnewest version of libpq used for all. One argument in favor of\n!password would be to control whether one does not want to use ldap,\nbut I'd still see most users just specify one or more methods in line\nwith their HBA policy as an approved list.\n\n> I'm pretty motivated to provide the ability to say \"I want cert auth\n> only, nothing else.\" Using a separate parameter would mean we'd need\n> something like `require_auth=none`, but I think that makes a certain\n> amount of sense.\n\nIf the default of require_auth is backward-compatible and allows\neverything, using a different parameter for the certs won't matter\nanyway?\n\n>> + appendPQExpBuffer(&conn->errorMessage,\n>> + libpq_gettext(\"auth method \\\"%s\\\" required, but %s\\n\"),\n>> + conn->require_auth, reason);\n>> This one is going to make translation impossible. One way to tackle\n>> this issue is to use \"auth method \\\"%s\\\" required: %s\".\n> \n> Yeah, I think I have a TODO somewhere about that somewhere. I'm\n> confused about your suggested fix though; can you elaborate?\n\nMy suggestion is to reword the error message so as the reason and the\nmain error message can be treated as two independent things. You are\nright to apply two times libpq_gettext(), once to \"reason\" and once to\nthe main string.\n\n>> +/*\n>> + * Convenience macro for checking the allowed_auth_methods bitmask. Caller must\n>> + * ensure that type is not greater than 31 (high bit of the bitmask).\n>> + */\n>> +#define auth_allowed(conn, type) \\\n>> + (((conn)->allowed_auth_methods & (1 << (type))) != 0)\n>> Better to add a compile-time check with StaticAssertDecl() then? Or\n>> add a note about that in pqcomm.h?\n> \n> If we only passed constants, that would work, but we also determine\n> the type at runtime, from the server message. Or am I missing\n> something?\n\nMy point would be to either register a max flag in the set of\nAUTH_REQ_* in pqcomm.h so as we never go above 32 with an assertion to\nmake sure that this would never overflow, but I would add a comment in\npqcomm.h telling that we also do bitwise operations, relying on the\nnumber of AUTH_REQ_* flags, and that we'd better be careful once the\nnumber of flags gets higher than 32. There is some margin, still that\ncould be easily forgotten.\n\n>> + /*\n>> + * Sanity check; a duplicated method probably indicates a typo in a\n>> + * setting where typos are extremely risky.\n>> + */\n>> Not sure why this is a problem. Fine by me to check that, but this is\n>> an OR list, so specifying one element twice means the same as once.\n> \n> Since this is likely to be a set-and-forget sort of option, and it\n> needs to behave correctly across server upgrades, I'd personally\n> prefer that the client tell me immediately if I've made a silly\n> mistake. Even for something relatively benign like this (but arguably,\n> it's more important for the NOT case).\n\nThis is just a couple of lines. Fine by me to keep it if you feel\nthat's better in the long run.\n--\nMichael",
"msg_date": "Tue, 14 Jun 2022 13:59:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
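The point argued in the message above — that channel_binding and require_auth are validated in separate code paths, so setting both parameters already yields AND semantics without any AND syntax — can be reduced to a toy model. The names below are hypothetical simplifications for illustration, not libpq's actual checks:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of two independently enforced connection options.  Each
 * check knows nothing about the other, yet setting both gives AND
 * behavior: e.g. channel_binding=require with require_auth=md5 always
 * fails, because MD5 can never satisfy the channel-binding check.
 */
typedef struct ConnOptions
{
	bool		channel_binding_required;	/* channel_binding=require */
	bool		method_in_allowlist;		/* require_auth was satisfied */
} ConnOptions;

static bool
connection_acceptable(const ConnOptions *opt, bool method_supports_binding)
{
	/* check 1: the require_auth allowlist, evaluated on its own */
	if (!opt->method_in_allowlist)
		return false;
	/* check 2: channel binding, check_expected_areq()-style, on its own */
	if (opt->channel_binding_required && !method_supports_binding)
		return false;
	return true;
}
```

Under this model, channel_binding=require combined with require_auth="md5,scram-sha-256" still fails when the server asks for MD5, exactly the case discussed above, even though neither check references the other.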
{
"msg_contents": "On Mon, Jun 13, 2022 at 10:00 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > Personally I think the ability to provide a default of `!password` is\n> > very compelling. Any allowlist we hardcode won't be future-proof (see\n> > also my response to Laurenz upthread [1]).\n>\n> Hm, perhaps. You could use that as a default at application level,\n> but the default set in libpq should be backward-compatible (aka allow\n> everything, even trust where the backend just sends AUTH_REQ_OK).\n> This is unfortunate but there is a point in not breaking any user's\n> application, as well, particularly with ldap, and note that we have to\n> keep a certain amount of things backward-compatible as a very common\n> practice of packaging with Postgres is to have libpq link to binaries\n> across *multiple* major versions (Debian is one if I recall), with the\n> newest version of libpq used for all.\n\nI am 100% with you on this, but there's been a lot of chatter around\nremoving plaintext password support from libpq. I would much rather\nsee them rejected by default than removed entirely. !password would\nprovide an easy path for that.\n\n> > I'm pretty motivated to provide the ability to say \"I want cert auth\n> > only, nothing else.\" Using a separate parameter would mean we'd need\n> > something like `require_auth=none`, but I think that makes a certain\n> > amount of sense.\n>\n> If the default of require_auth is backward-compatible and allows\n> everything, using a different parameter for the certs won't matter\n> anyway?\n\nIf you want cert authentication only, allowing \"everything\" will let\nthe server extract a password and then you're back at square one.\nThere needs to be a way to prohibit all explicit authentication\nrequests.\n\n> My suggestion is to reword the error message so as the reason and the\n> main error message can be treated as two independent things. You are\n> right to apply two times libpq_gettext(), once to \"reason\" and once to\n> the main string.\n\nAh, thanks for the clarification. Done that way in v3.\n\n> My point would be to either register a max flag in the set of\n> AUTH_REQ_* in pqcomm.h so as we never go above 32 with an assertion to\n> make sure that this would never overflow, but I would add a comment in\n> pqcomm.h telling that we also do bitwise operations, relying on the\n> number of AUTH_REQ_* flags, and that we'd better be careful once the\n> number of flags gets higher than 32. There is some margin, still that\n> could be easily forgotten.\n\nMakes sense; done.\n\nv3 also removes \"cert\" from require_auth while I work on a replacement\nconnection option, fixes the bugs/suggestions pointed out upthread,\nand adds a documentation first draft. I tried combining this with the\nNOT work but it was too much to juggle, so that'll wait for a v4+,\nalong with require_auth=none and \"cert mode\".\n\nThanks for the detailed review!\n--Jacob",
"msg_date": "Wed, 22 Jun 2022 16:36:00 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Thu, Jun 9, 2022 at 4:30 PM Jacob Champion <jchampion@timescale.com>\nwrote:\n\n> On Wed, Jun 8, 2022 at 9:58 PM Michael Paquier <michael@paquier.xyz>\n> wrote:\n>\n> > One\n> > interesting case comes down to stuff like channel_binding=require\n> > require_auth=\"md5,scram-sha-256\", where I think that we should still\n> > fail even if the server asks for MD5 and enforce an equivalent of an\n> > AND grammar through the parameters. This reasoning limits the\n> > dependencies between each parameter and treats the areas where these\n> > are checked independently, which is what check_expected_areq() does\n> > for channel binding. So that's more robust at the end.\n>\n> Agreed.\n>\n>\nThat just makes me want to not implement OR'ing...\n\nThe existing set of individual parameters doesn't work as an API for\ntry-and-fallback.\n\nSomething like would be less problematic when it comes to setting multiple\nrelated options:\n\n--auth-try\n\"1;sslmode=require;channel_binding=require;method=scram-sha-256;sslcert=/tmp/machine.cert;sslkey=/tmp/machine.key\"\n--auth-try\n\"2;sslmode=require;method=cert;sslcert=/tmp/me.cert;sslkey=/tmp/me.key\"\n--auth-try \"3;sslmode=prefer;method=md5\"\n\nAbsent that radical idea, require_auth probably shouldn't change any\nbehavior that exists today without having specified require_auth and having\nthe chosen method happen anyway. So whatever happens today with an md5\npassword prompt while channel_binding is set to require (not in the mood\nright now to figure out how to test that on a compiled against HEAD\ninstance).\n\nDavid J.",
"msg_date": "Wed, 22 Jun 2022 17:56:18 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Wed, Jun 22, 2022 at 5:56 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> That just makes me want to not implement OR'ing...\n>\n> The existing set of individual parameters doesn't work as an API for try-and-fallback.\n>\n> Something like would be less problematic when it comes to setting multiple related options:\n>\n> --auth-try \"1;sslmode=require;channel_binding=require;method=scram-sha-256;sslcert=/tmp/machine.cert;sslkey=/tmp/machine.key\"\n> --auth-try \"2;sslmode=require;method=cert;sslcert=/tmp/me.cert;sslkey=/tmp/me.key\"\n> --auth-try \"3;sslmode=prefer;method=md5\"\n\nI think that's a fair point, and your --auth-try example definitely\nillustrates why having require_auth try to do everything is probably\nnot a viable strategy. My arguments for keeping OR in spite of that\nare\n\n- the default is effectively an OR of all available methods (plus \"none\");\n- I think NOT is a important case in practice, which is effectively a\nnegative OR (\"anything but this/these\"); and\n- not providing an explicit, positive OR to complement the above seems\nlike it would be a frustrating user experience once you want to get\njust a little bit more creative.\n\nIt's also low-hanging fruit that doesn't require multiple connections\nto the server per attempt (which I think your --auth-try proposal\nmight, if I understand it correctly).\n\n> Absent that radical idea, require_auth probably shouldn't change any behavior that exists today without having specified require_auth and having the chosen method happen anyway. So whatever happens today with an md5 password prompt while channel_binding is set to require (not in the mood right now to figure out how to test that on a compiled against HEAD instance).\n\nI think the newest tests in v3 should enforce that, but let me know if\nI've missed something.\n\n--Jacob\n\n\n",
"msg_date": "Thu, 23 Jun 2022 10:33:57 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Thu, Jun 23, 2022 at 10:33 AM Jacob Champion <jchampion@timescale.com> wrote:\n> - I think NOT is a important case in practice, which is effectively a\n> negative OR (\"anything but this/these\")\n\nBoth NOT (via ! negation) and \"none\" are implemented in v4.\n\nExamples:\n\n# The server must use SCRAM.\nrequire_auth=scram-sha-256\n# The server must use SCRAM or Kerberos.\nrequire_auth=scram-sha-256,gss,sspi\n# The server may optionally use SCRAM.\nrequire_auth=none,scram-sha-256\n# The server must not use any application-level authentication.\nrequire_auth=none\n# The server may optionally use authentication, except plaintext\n# passwords.\nrequire_auth=!password\n# The server may optionally use authentication, except weaker password\n# challenges.\nrequire_auth=!password,!md5\n# The server must use an authentication method.\nrequire_auth=!none\n# The server must use a non-plaintext authentication method.\nrequire_auth=!none,!password\n\nNote that `require_auth=none,scram-sha-256` allows the server to\nabandon a SCRAM exchange early, same as it can today. That might be a\nbit surprising.\n\n--Jacob",
"msg_date": "Fri, 24 Jun 2022 12:17:08 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Fri, Jun 24, 2022 at 12:17 PM Jacob Champion <jchampion@timescale.com> wrote:\n> Both NOT (via ! negation) and \"none\" are implemented in v4.\n\nv5 adds a second patch which implements a client-certificate analogue\nto gssencmode; I've named it sslcertmode. This takes the place of the\nrequire_auth=[!]cert setting implemented previously.\n\nAs I mentioned upthread, I think sslcertmode=require is the weakest\nfeature here, since the server always sends a certificate request if\nyou are using TLS. It would potentially be more useful if we start\nexpanding TLS setups and middlebox options, but I still only see it as\na troubleshooting feature for administrators. By contrast,\nsslcertmode=disable lets you turn off the use of the certificate, no\nmatter what libpq is able to find in your environment or home\ndirectory. That seems more immediately useful.\n\nWith this addition, I'm wondering if GSS encrypted transport should be\nremoved from the definition/scope of require_auth=gss. We already have\ngssencmode to control that, and it would remove an ugly special case\nfrom the patch.\n\nI'll add this patchset to the commitfest.\n\n--Jacob",
"msg_date": "Mon, 27 Jun 2022 12:05:57 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Mon, Jun 27, 2022 at 12:05 PM Jacob Champion <jchampion@timescale.com> wrote:\n> v5 adds a second patch which implements a client-certificate analogue\n> to gssencmode; I've named it sslcertmode.\n\n...and v6 fixes check-world, because I always forget about postgres_fdw.\n\n--Jacob",
"msg_date": "Mon, 27 Jun 2022 14:40:01 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On 27.06.22 23:40, Jacob Champion wrote:\n> -HINT: Valid options in this context are: service, passfile, channel_binding, connect_timeout, dbname, host, hostaddr, port, options, application_name, keepalives, keepalives_idle, keepalives_interval, keepalives_count, tcp_user_timeout, sslmode, sslcompression, sslcert, sslkey, sslrootcert, sslcrl, sslcrldir, sslsni, requirepeer, ssl_min_protocol_version, ssl_max_protocol_version, gssencmode, krbsrvname, gsslib, target_session_attrs, use_remote_estimate, fdw_startup_cost, fdw_tuple_cost, extensions, updatable, truncatable, fetch_size, batch_size, async_capable, parallel_commit, keep_connections\n> +HINT: Valid options in this context are: service, passfile, channel_binding, connect_timeout, dbname, host, hostaddr, port, options, application_name, keepalives, keepalives_idle, keepalives_interval, keepalives_count, tcp_user_timeout, sslmode, sslcompression, sslcert, sslkey, sslcertmode, sslrootcert, sslcrl, sslcrldir, sslsni, requirepeer, require_auth, ssl_min_protocol_version, ssl_max_protocol_version, gssencmode, krbsrvname, gsslib, target_session_attrs, use_remote_estimate, fdw_startup_cost, fdw_tuple_cost, extensions, updatable, truncatable, fetch_size, batch_size, async_capable, parallel_commit, keep_connections\n\nIt's not strictly related to your patch, but maybe this hint has \noutlived its usefulness? I mean, we don't list all available tables \nwhen you try to reference a table that doesn't exist. And unordered on \ntop of that.\n\n\n\n\n",
"msg_date": "Wed, 29 Jun 2022 14:36:08 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Wed, Jun 29, 2022 at 6:36 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> It's not strictly related to your patch, but maybe this hint has\n> outlived its usefulness? I mean, we don't list all available tables\n> when you try to reference a table that doesn't exist. And unordered on\n> top of that.\n\nYeah, maybe it'd be better to tell the user the correct context for an\notherwise-valid option (\"the 'sslcert' option may only be applied to\nUSER MAPPING\"), and avoid the option dump entirely?\n\n--\n\nv7, attached, fixes configuration on Windows.\n\n--Jacob",
"msg_date": "Thu, 30 Jun 2022 16:26:54 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Thu, Jun 30, 2022 at 04:26:54PM -0700, Jacob Champion wrote:\n> Yeah, maybe it'd be better to tell the user the correct context for an\n> otherwise-valid option (\"the 'sslcert' option may only be applied to\n> USER MAPPING\"), and avoid the option dump entirely?\n\nYes, that would be nice. Now, this HINT has been an annoyance in the\ncontext of the regression tests when adding features entirely\nunrelated to postgres_fdw, at least for me. I would be more tempted\nto get rid of it entirely, FWIW.\n--\nMichael",
"msg_date": "Tue, 19 Jul 2022 15:37:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "I'm wondering about making the list of things you can specify in \nrequire_auth less confusing and future proof.\n\nFor example, before long someone is going to try putting \"ldap\" into \nrequire_auth. The fact that the methods in pg_hba.conf are not what \nlibpq sees is not something that was really exposed to users until now. \n\"none\" vs. \"trust\" takes advantage of that. But then I think we could \nalso make \"password\" clearer, which surely sounds like any kind of \npassword, encrypted or not, and that's also how pg_hba.conf behaves. \nThe protocol specification calls that \"AuthenticationCleartextPassword\"; \nmaybe we could pick a name based on that.\n\nAnd then, what if we add a new method in the future, and someone puts \nthat into their connection string. Old clients will just refuse to \nparse that. Ok, that effectively gives you the same behavior as \nrejecting the server's authentication offer. But what about the negated \nversion? Also, what if we add new SASL methods. How would we modify \nthis code to be able to pick and choose and also have backward and \nforward compatible behavior?\n\nIn general, I like this. We just need to think about the above things a \nbit more.\n\n\n",
"msg_date": "Thu, 8 Sep 2022 15:25:01 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Thu, Sep 8, 2022 at 6:25 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> For example, before long someone is going to try putting \"ldap\" into\n> require_auth. The fact that the methods in pg_hba.conf are not what\n> libpq sees is not something that was really exposed to users until now.\n> \"none\" vs. \"trust\" takes advantage of that. But then I think we could\n> also make \"password\" clearer, which surely sounds like any kind of\n> password, encrypted or not, and that's also how pg_hba.conf behaves.\n> The protocol specification calls that \"AuthenticationCleartextPassword\";\n> maybe we could pick a name based on that.\n\nSounds fair. \"cleartext\"? \"plaintext\"? \"plain\" (like SASL's PLAIN)?\n\n> And then, what if we add a new method in the future, and someone puts\n> that into their connection string. Old clients will just refuse to\n> parse that. Ok, that effectively gives you the same behavior as\n> rejecting the server's authentication offer. But what about the negated\n> version?\n\nI assume the alternative behavior you're thinking of is to ignore\nnegated \"future methods\"? I think the danger with that (for a feature\nthat's supposed to be locking communication down) is that it's not\npossible to differentiate between a maybe-future method and a typo. If\nI want \"!password\" because my intention is to disallow a plaintext\nexchange, I really don't want \"!pasword\" to silently allow anything.\n\n> Also, what if we add new SASL methods. How would we modify\n> this code to be able to pick and choose and also have backward and\n> forward compatible behavior?\n\nOn the SASL front: In the back of my head I'd been considering adding\na \"sasl:\" prefix to \"scram-sha-256\", so that we have a namespace for\nnew SASL methods. That would also give us a jumping-off point in the\nfuture if we decide to add SASL method negotiation to the protocol.\nWhat do you think about that?\n\nBackwards compatibility will, I think, be handled trivially by a newer\nclient. The only way to break backwards compatibility would be to\nremove support for a method, which I assume would be independent of\nthis feature.\n\nForwards compatibility doesn't seem like something this feature can\nadd by itself (old clients can't speak new methods). Though we could\nbackport new method names to allow them to be used in negations, if\nmaintaining that aspect of compatibility is worth the effort.\n\n> In general, I like this. We just need to think about the above things a\n> bit more.\n\nThanks!\n\n--Jacob",
"msg_date": "Thu, 8 Sep 2022 11:18:42 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On 08.09.22 20:18, Jacob Champion wrote:\n> Sounds fair. \"cleartext\"? \"plaintext\"? \"plain\" (like SASL's PLAIN)?\n\n> On the SASL front: In the back of my head I'd been considering adding\n> a \"sasl:\" prefix to \"scram-sha-256\", so that we have a namespace for\n> new SASL methods. That would also give us a jumping-off point in the\n> future if we decide to add SASL method negotiation to the protocol.\n> What do you think about that?\n\nAfter thinking about this a bit more, I think it would be best if the \nwords used here match exactly with what is used in pg_hba.conf. That's \nthe only thing the user cares about: reject \"password\", reject \"trust\", \nrequire \"scram-sha-256\", etc. How this maps to the protocol and that \nsome things are SASL or not is not something they have needed to care \nabout and don't really need to know for this. So I would suggest to \norganize it that way.\n\nAnother idea: Maybe instead of the \"!\" syntax, use two settings, \nrequire_auth and reject_auth? Might be simpler?\n\n\n\n",
"msg_date": "Fri, 16 Sep 2022 16:56:20 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Fri, Sep 16, 2022 at 7:56 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> On 08.09.22 20:18, Jacob Champion wrote:\n> After thinking about this a bit more, I think it would be best if the\n> words used here match exactly with what is used in pg_hba.conf. That's\n> the only thing the user cares about: reject \"password\", reject \"trust\",\n> require \"scram-sha-256\", etc. How this maps to the protocol and that\n> some things are SASL or not is not something they have needed to care\n> about and don't really need to know for this. So I would suggest to\n> organize it that way.\n\nI tried that in v1, if you'd like to see what that ends up looking\nlike. As a counterexample, I believe `cert` auth looks identical to\n`trust` on the client side. (The server always asks for a client\ncertificate even if it doesn't use it. Otherwise, this proposal would\nprobably have looked different.) And `ldap` auth is indistinguishable\nfrom `password`, etc. In my opinion, how it maps to the protocol is\nmore honest to the user than how it maps to HBA, because the auth\nrequest sent by the protocol determines your level of risk.\n\nI also like `none` over `trust` because you don't have to administer a\nserver to understand what it means. That's why I was on board with\nyour proposal to change the name of `password`. And you don't have to\nignore the natural meaning of client-side \"trust\", which IMO means\n\"trust the server.\" There's opportunity for confusion either way,\nunfortunately, but naming them differently may help make it clear that\nthey _are_ different.\n\nThis problem overlaps a bit with the last remaining TODO in the code.\nI treat gssenc tunnels as satisfying require_auth=gss. Maybe that's\nuseful, because it kind of maps to how HBA treats it? But it's not\nconsistent with the TLS side of things, and it overlaps with\ngssencmode=require, complicating the relationship with the new\nsslcertmode.\n\n> Another idea: Maybe instead of the \"!\" syntax, use two settings,\n> require_auth and reject_auth? Might be simpler?\n\nMight be. If that means we have to handle the case where both are set\nto something, though, it might make things harder.\n\nWe can error out if they conflict, which adds a decent but not huge\namount of complication. Or we can require that only one is set, which\nis both easy and overly restrictive. But either choice makes it harder\nto adopt a `reject password` default, as many people seem to be\ninterested in doing. Because if you want to override that default,\nthen you have to first unset reject_auth and then set require_auth, as\nopposed to just saying require_auth=something and being done with it.\nI'm not sure that's worth it. Thoughts?\n\nI'm happy to implement proofs of concept for that, or any other ideas,\ngiven the importance of getting this \"right enough\" the first time.\nJust let me know.\n\nThanks,\n--Jacob",
"msg_date": "Fri, 16 Sep 2022 13:29:29 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Fri, Sep 16, 2022 at 1:29 PM Jacob Champion <jchampion@timescale.com> wrote:\n> I'm happy to implement proofs of concept for that, or any other ideas,\n> given the importance of getting this \"right enough\" the first time.\n> Just let me know.\n\nv8 rebases over the postgres_fdw HINT changes; there are no functional\ndifferences.\n\n--Jacob",
"msg_date": "Wed, 21 Sep 2022 08:33:55 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On 21.09.22 17:33, Jacob Champion wrote:\n> On Fri, Sep 16, 2022 at 1:29 PM Jacob Champion <jchampion@timescale.com> wrote:\n>> I'm happy to implement proofs of concept for that, or any other ideas,\n>> given the importance of getting this \"right enough\" the first time.\n>> Just let me know.\n> \n> v8 rebases over the postgres_fdw HINT changes; there are no functional\n> differences.\n\nSo let's look at the two TODO comments you have:\n\n * TODO: how should !auth_required interact with an incomplete\n * SCRAM exchange?\n\nWhat specific combination of events are you thinking of here?\n\n\n /*\n * If implicit GSS auth has already been performed via GSS\n * encryption, we don't need to have performed an\n * AUTH_REQ_GSS exchange.\n *\n * TODO: check this assumption. What mutual auth guarantees\n * are made in this case?\n */\n\nI don't understand the details involved here, but I would be surprised \nif this assumption is true. For example, does GSS encryption deal with \nuser names and a user name map? I don't see how these can be \nequivalent. In any case, it seems to me that it would be safer to *not* \nmake this assumption at first and then have someone more knowledgeable \nmake the argument that it would be safe.\n\n\n\n",
"msg_date": "Wed, 21 Sep 2022 18:36:41 -0400",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Wed, Sep 21, 2022 at 3:36 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> So let's look at the two TODO comments you have:\n>\n> * TODO: how should !auth_required interact with an incomplete\n> * SCRAM exchange?\n>\n> What specific combination of events are you thinking of here?\n\nLet's say the client is using `require_auth=!password`. If the server\nstarts a SCRAM exchange. but doesn't finish it, the connection will\nstill succeed with the current implementation (because it satisfies\nthe \"none\" case). This is also true for a client setting of\n`require_auth=scram-sha-256,none`. I think this is potentially\ndangerous, but it mirrors the current behavior of libpq and I'm not\nsure that we should change it as part of this patch.\n\n> /*\n> * If implicit GSS auth has already been performed via GSS\n> * encryption, we don't need to have performed an\n> * AUTH_REQ_GSS exchange.\n> *\n> * TODO: check this assumption. What mutual auth guarantees\n> * are made in this case?\n> */\n>\n> I don't understand the details involved here, but I would be surprised\n> if this assumption is true. For example, does GSS encryption deal with\n> user names and a user name map?\n\nTo my understanding, yes. There are explicit tests for it.\n\n> In any case, it seems to me that it would be safer to *not*\n> make this assumption at first and then have someone more knowledgeable\n> make the argument that it would be safe.\n\nI think I'm okay with that, regardless. Here's one of the wrinkles:\nright now, both of the following connstrings work:\n\n require_auth=gss gssencmode=require\n require_auth=gss gssencmode=prefer\n\nIf we don't treat gssencmode as providing GSS auth, then the first\ncase will always fail, because there will be no GSS authentication\npacket over an encrypted connection. Likewise, the second case will\nalmost always fail, unless the server doesn't support gssencmode at\nall (so why are you using prefer?).\n\nIf you're okay with those limitations, I will rip out the code. The\nreason I'm not too worried about it is, I don't think it makes much\nsense to be strict about your authentication requirements while at the\nsame time leaving the choice of transport encryption up to the server.\n\nThanks,\n--Jacob",
"msg_date": "Wed, 21 Sep 2022 16:37:58 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On 22.09.22 01:37, Jacob Champion wrote:\n> On Wed, Sep 21, 2022 at 3:36 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> So let's look at the two TODO comments you have:\n>>\n>> * TODO: how should !auth_required interact with an incomplete\n>> * SCRAM exchange?\n>>\n>> What specific combination of events are you thinking of here?\n> \n> Let's say the client is using `require_auth=!password`. If the server\n> starts a SCRAM exchange. but doesn't finish it, the connection will\n> still succeed with the current implementation (because it satisfies\n> the \"none\" case). This is also true for a client setting of\n> `require_auth=scram-sha-256,none`. I think this is potentially\n> dangerous, but it mirrors the current behavior of libpq and I'm not\n> sure that we should change it as part of this patch.\n\nIt might be worth reviewing that behavior for other reasons, but I think \nsemantics of your patch are correct.\n\n>> In any case, it seems to me that it would be safer to *not*\n>> make this assumption at first and then have someone more knowledgeable\n>> make the argument that it would be safe.\n> \n> I think I'm okay with that, regardless. Here's one of the wrinkles:\n> right now, both of the following connstrings work:\n> \n> require_auth=gss gssencmode=require\n> require_auth=gss gssencmode=prefer\n> \n> If we don't treat gssencmode as providing GSS auth, then the first\n> case will always fail, because there will be no GSS authentication\n> packet over an encrypted connection. Likewise, the second case will\n> almost always fail, unless the server doesn't support gssencmode at\n> all (so why are you using prefer?).\n> \n> If you're okay with those limitations, I will rip out the code. The\n> reason I'm not too worried about it is, I don't think it makes much\n> sense to be strict about your authentication requirements while at the\n> same time leaving the choice of transport encryption up to the server.\n\nThe way I understand what you explained here is that it would be more \nsensible to leave that code in. I would be okay with that.\n\n\n\n",
"msg_date": "Thu, 22 Sep 2022 07:52:23 -0400",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Thu, Sep 22, 2022 at 4:52 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> On 22.09.22 01:37, Jacob Champion wrote:\n> > I think this is potentially\n> > dangerous, but it mirrors the current behavior of libpq and I'm not\n> > sure that we should change it as part of this patch.\n>\n> It might be worth reviewing that behavior for other reasons, but I think\n> semantics of your patch are correct.\n\nSounds good. v9 removes the TODO and adds a better explanation.\n\n> > If you're okay with those [GSS] limitations, I will rip out the code. The\n> > reason I'm not too worried about it is, I don't think it makes much\n> > sense to be strict about your authentication requirements while at the\n> > same time leaving the choice of transport encryption up to the server.\n>\n> The way I understand what you explained here is that it would be more\n> sensible to leave that code in. I would be okay with that.\n\nI've added a comment there explaining the gssencmode interaction. That\nleaves no TODOs inside the code itself.\n\nI removed the commit message note about not being able to prevent\nunexpected client cert requests or GSS encryption, since we've decided\nto handle those cases outside of require_auth.\n\nI'm not able to test SSPI easily at the moment; if anyone is able to\ntry it out, that'd be really helpful. There's also the question of\nSASL forwards compatibility -- if someone adds a new SASL mechanism,\nthe code will treat it like scram-sha-256 until it's changed, and\nthere will be no test to catch it. Should we leave that to the future\nmechanism implementer to fix, or add a mechanism check now so the\nclient is safe even if they forget?\n\nThanks!\n--Jacob",
"msg_date": "Thu, 22 Sep 2022 17:02:30 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On 23.09.22 02:02, Jacob Champion wrote:\n> On Thu, Sep 22, 2022 at 4:52 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> On 22.09.22 01:37, Jacob Champion wrote:\n>>> I think this is potentially\n>>> dangerous, but it mirrors the current behavior of libpq and I'm not\n>>> sure that we should change it as part of this patch.\n>>\n>> It might be worth reviewing that behavior for other reasons, but I think\n>> semantics of your patch are correct.\n> \n> Sounds good. v9 removes the TODO and adds a better explanation.\n\nI'm generally okay with these patches now.\n\n> I'm not able to test SSPI easily at the moment; if anyone is able to\n> try it out, that'd be really helpful. There's also the question of\n> SASL forwards compatibility -- if someone adds a new SASL mechanism,\n> the code will treat it like scram-sha-256 until it's changed, and\n> there will be no test to catch it. Should we leave that to the future\n> mechanism implementer to fix, or add a mechanism check now so the\n> client is safe even if they forget?\n\nI think it would be good to put some provisions in place here, even if \nthey are elementary. Otherwise, there will be a significant burden on \nthe person who implements the next SASL method (i.e., you ;-) ) to \nfigure that out then.\n\nI think you could just stick a string list of allowed SASL methods into \nPGconn.\n\nBy the way, I'm not sure all the bit fiddling is really worth it. An \narray of integers (or unsigned char or whatever) would work just as \nwell. Especially if you are going to have a string list for SASL \nanyway. You're not really saving any bits or bytes either way in the \nnormal case.\n\nMinor comments:\n\nPasting together error messages like with auth_description() isn't going \nto work. You either need to expand the whole message in \ncheck_expected_areq(), or perhaps rephrase the message like\n\nlibpq_gettext(\"auth method \\\"%s\\\" required, but server requested \\\"%s\\\"\\n\"),\n conn->require_auth,\n auth_description(areq)\n\nand make auth_description() just return a single word not subject to \ntranslation.\n\nspurious whitespace change in fe-secure-openssl.c\n\nwhitespace error in patch:\n\n.git/rebase-apply/patch:109: tab in indent.\n via TLS, nor GSS authentication via its \nencrypted transport.)\n\nIn the 0002 patch, the configure test needs to be added to meson.build.\n\n\n\n",
"msg_date": "Wed, 5 Oct 2022 15:33:45 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On 10/5/22 06:33, Peter Eisentraut wrote:\n> I think it would be good to put some provisions in place here, even if \n> they are elementary. Otherwise, there will be a significant burden on \n> the person who implements the next SASL method (i.e., you ;-) ) to \n> figure that out then.\n\nSounds good, I'll work on that. v10 does not yet make changes in this area.\n\n> I think you could just stick a string list of allowed SASL methods into \n> PGconn.\n> \n> By the way, I'm not sure all the bit fiddling is really worth it. An \n> array of integers (or unsigned char or whatever) would work just as \n> well. Especially if you are going to have a string list for SASL \n> anyway. You're not really saving any bits or bytes either way in the \n> normal case.\n\nYeah, with the SASL case added in, the bitmasks might not be long for\nthis world. It is nice to be able to invert the whole thing, but a\nseparate boolean saying \"invert the list\" could accomplish the same goal\nand I think we'll need to have that for the SASL mechanism list anyway.\n\n> Minor comments:\n> \n> Pasting together error messages like with auth_description() isn't going \n> to work. You either need to expand the whole message in \n> check_expected_areq(), or perhaps rephrase the message like\n> \n> libpq_gettext(\"auth method \\\"%s\\\" required, but server requested \\\"%s\\\"\\n\"),\n> conn->require_auth,\n> auth_description(areq)\n> \n> and make auth_description() just return a single word not subject to \n> translation.\n\nRight. Michael tried to warn me about that upthread, but I only ended up\nfixing one of the two error cases for some reason. 
I've merged the two\ninto one code path for v10.\n\nQuick error messaging bikeshed: do you prefer\n\n auth method \"!password,!md5\" requirement failed: ...\n\nor\n\n auth method requirement \"!password,!md5\" failed: ...\n\n?\n\n> spurious whitespace change in fe-secure-openssl.c\n\nFixed.\n\n> whitespace error in patch:\n> \n> .git/rebase-apply/patch:109: tab in indent.\n> via TLS, nor GSS authentication via its \n> encrypted transport.)\n\nFixed.\n\n> In the 0002 patch, the configure test needs to be added to meson.build.\n\nAdded.\n\nThanks,\n--Jacob",
"msg_date": "Wed, 12 Oct 2022 09:40:05 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Wed, Oct 12, 2022 at 9:40 AM Jacob Champion <jchampion@timescale.com> wrote:\n> On 10/5/22 06:33, Peter Eisentraut wrote:\n> > I think it would be good to put some provisions in place here, even if\n> > they are elementary. Otherwise, there will be a significant burden on\n> > the person who implements the next SASL method (i.e., you ;-) ) to\n> > figure that out then.\n>\n> Sounds good, I'll work on that. v10 does not yet make changes in this area.\n\nv11 makes an attempt at this (see 0003), using the proposed string list.\n\nPersonally I'm not happy with the amount of complexity it adds in\nexchange for flexibility we can't use yet. Maybe there's a way to\nsimplify it, but I think the two-tiered approach of the patch has to\nremain, unless we find a way to move SASL mechanism selection to a\ndifferent part of the code. I'm not sure that'd be helpful.\n\nMaybe I should just add a basic Assert here, to trip if someone adds a\nnew SASL mechanism, and point that lucky person to this thread with a\ncomment?\n\n--Jacob",
"msg_date": "Thu, 20 Oct 2022 11:36:34 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "Hi Jacob,\n\n> v11 makes an attempt at this (see 0003), using the proposed string list.\n\nI noticed that this patchset stuck a bit so I decided to take a look.\n\nIn 0001:\n\n```\n+ conn->auth_required = false;\n+ conn->allowed_auth_methods = -1;\n...\n+ uint32 allowed_auth_methods; /* bitmask of acceptable\nAuthRequest codes */\n```\n\nAssigning a negative number to uint32 doesn't necessarily work on all\nplatforms. I suggest using PG_UINT32_MAX.\n\nIn 0002:\n\n```\n+ <term><literal>require</literal></term>\n+ <listitem>\n+ <para>\n+ the server <emphasis>must</emphasis> request a certificate. The\n+ connection will fail if the server authenticates the client despite\n+ not requesting or receiving one.\n```\n\nThe commit message IMO has a better description of \"require\". I\nsuggest adding the part about \"This doesn't add any additional\nsecurity ...\" to the documentation.\n\n```\n+ * hard-coded certificate via sslcert, so we don't actually set any\ncertificates\n+ * here; we just it to record whether or not the server has actually asked for\n```\n\nSomething is off with the wording here in the \"we just it to ...\" part.\n\nThe patchset seems to be in very good shape except for these few\nnitpicks. I'm inclined to change its status to \"Ready for Committer\"\nas soon as the new version will pass cfbot unless there are going to\nbe any objections from the community.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Fri, 11 Nov 2022 16:52:56 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On 11/11/22 05:52, Aleksander Alekseev wrote:\n> I noticed that this patchset stuck a bit so I decided to take a look.\n\nThanks!\n\n> Assigning a negative number to uint32 doesn't necessarily work on all\n> platforms. I suggest using PG_UINT32_MAX.\n\nHmm -- on which platforms is \"-1 converted to unsigned\" not equivalent\nto the maximum value? Are they C-compliant?\n\n> The commit message IMO has a better description of \"require\". I\n> suggest adding the part about \"This doesn't add any additional\n> security ...\" to the documentation.\n\nSounds good; see what you think of v12.\n\n> ```\n> + * hard-coded certificate via sslcert, so we don't actually set any\n> certificates\n> + * here; we just it to record whether or not the server has actually asked for\n> ```\n> \n> Something is off with the wording here in the \"we just it to ...\" part.\n\nFixed.\n\n> The patchset seems to be in very good shape except for these few\n> nitpicks. I'm inclined to change its status to \"Ready for Committer\"\n> as soon as the new version will pass cfbot unless there are going to\n> be any objections from the community.\n\nThank you! I expect a maintainer will need to weigh in on the\ncost/benefit of 0003 either way.\n\n--Jacob",
"msg_date": "Fri, 11 Nov 2022 16:11:07 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "Hi Jacob,\n\n> > Assigning a negative number to uint32 doesn't necessarily work on all\n> > platforms. I suggest using PG_UINT32_MAX.\n>\n> Hmm -- on which platforms is \"-1 converted to unsigned\" not equivalent\n> to the maximum value? Are they C-compliant?\n\nI did a little more research and I think you are right. What happens\naccording to the C standard:\n\n\"\"\"\nthe value is converted to unsigned by adding to it one greater than the largest\nnumber that can be represented in the unsigned integer type\n\"\"\"\n\nso this is effectively -1 + (PG_UINT32_MAX + 1).\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Sat, 12 Nov 2022 09:57:06 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On 11/11/22 22:57, Aleksander Alekseev wrote:\n> I did a little more research and I think you are right. What happens\n> according to the C standard:\n\nThanks for confirming! (I personally prefer -1 to a *MAX macro, because\nit works regardless of the length of the type.)\n\n--Jacob\n\n\n",
"msg_date": "Mon, 14 Nov 2022 11:01:34 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Thu, Oct 20, 2022 at 11:36:34AM -0700, Jacob Champion wrote:\n> Maybe I should just add a basic Assert here, to trip if someone adds a\n> new SASL mechanism, and point that lucky person to this thread with a\n> comment?\n\nI am beginning to look at the last version proposed, which has been\nmarked as RfC. Does this patch need a refresh in light of a9e9a9f and\n0873b2d? The changes for libpq_append_conn_error() should be\nstraight-forward.\n\nThe CF bot is still happy.\n--\nMichael",
"msg_date": "Wed, 16 Nov 2022 16:06:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Tue, Nov 15, 2022 at 11:07 PM Michael Paquier <michael@paquier.xyz> wrote:\n> I am beginning to look at the last version proposed, which has been\n> marked as RfC. Does this patch need a refresh in light of a9e9a9f and\n> 0873b2d? The changes for libpq_append_conn_error() should be\n> straight-forward.\n\nUpdated in v13, thanks!\n\n--Jacob",
"msg_date": "Wed, 16 Nov 2022 09:26:01 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On 16.11.22 18:26, Jacob Champion wrote:\n> On Tue, Nov 15, 2022 at 11:07 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> I am beginning to look at the last version proposed, which has been\n>> marked as RfC. Does this patch need a refresh in light of a9e9a9f and\n>> 0873b2d? The changes for libpq_append_conn_error() should be\n>> straight-forward.\n> \n> Updated in v13, thanks!\n\nWhat is the status of this patch set? Michael had registered himself as \ncommitter and then removed himself again. So I hadn't been paying much \nattention myself. Was there anything left to discuss?\n\n\n\n",
"msg_date": "Tue, 31 Jan 2023 10:59:10 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "Hi Peter,\n\n> > Updated in v13, thanks!\n>\n> What is the status of this patch set? Michael had registered himself as\n> committer and then removed himself again. So I hadn't been paying much\n> attention myself. Was there anything left to discuss?\n\nPreviously I marked the patch as RfC. Although it's been a few months\nago and I don't recall all the details, it should have been in good\nshape (in my personal opinion at least). The commits a9e9a9f and\n0873b2d Michael referred to are message refactorings so I doubt Jacob\nhad serious problems with them.\n\nOf course, I'll take another fresh look and let you know my findings in a bit.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 31 Jan 2023 14:03:54 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "Hi Peter,\n\n> > What is the status of this patch set? Michael had registered himself as\n> > committer and then removed himself again. So I hadn't been paying much\n> > attention myself. Was there anything left to discuss?\n>\n> Previously I marked the patch as RfC. Although it's been a few months\n> ago and I don't recall all the details, it should have been in good\n> shape (in my personal opinion at least). The commits a9e9a9f and\n> 0873b2d Michael referred to are message refactorings so I doubt Jacob\n> had serious problems with them.\n>\n> Of course, I'll take another fresh look and let you know my findings in a bit.\n\nThe code is well written, documented and test-covered. All the tests\npass. To my knowledge there are no open questions left. I think the\npatch is as good as it will ever get.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 31 Jan 2023 16:19:50 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Tue, Jan 31, 2023 at 5:20 AM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> To my knowledge there are no open questions left. I think the\n> patch is as good as it will ever get.\n\nA committer will need to decide whether they're willing to maintain\n0003 or not, as mentioned with the v11 post. Which I suppose is the\nlast open question, but not one I can answer from here.\n\nThanks!\n--Jacob\n\n\n",
"msg_date": "Tue, 31 Jan 2023 16:00:42 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Tue, Jan 31, 2023 at 02:03:54PM +0300, Aleksander Alekseev wrote:\n>> What is the status of this patch set? Michael had registered himself as\n>> committer and then removed himself again. So I hadn't been paying much\n>> attention myself. Was there anything left to discuss?\n\nYes, sorry about not following up on that. I was registered as such\nfor a few weeks, but I have not been able to follow up. It did not\nseem fair for this patch to wait on only me, which is why I have\nremoved my name, at least temporarily, so as somebody may be able to\ncome back to it before me. I am not completely sure whether I will be\nable to come back and dive deeply into this thread soon, TBH :/\n\n> Previously I marked the patch as RfC. Although it's been a few months\n> ago and I don't recall all the details, it should have been in good\n> shape (in my personal opinion at least). The commits a9e9a9f and\n> 0873b2d Michael referred to are message refactorings so I doubt Jacob\n> had serious problems with them.\n> \n> Of course, I'll take another fresh look and let you know my findings in a bit.\n\n(There were a few things around certificate handling that need careful\nconsideration, at least that was my impression.)\n--\nMichael",
"msg_date": "Wed, 1 Feb 2023 09:06:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "v14 rebases over the test and solution conflicts from 9244c11afe2.\n\nThanks,\n--Jacob",
"msg_date": "Thu, 16 Feb 2023 10:57:55 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On 2/16/23 10:57, Jacob Champion wrote:\n> v14 rebases over the test and solution conflicts from 9244c11afe2.\n\nSince we're to the final CF for PG16, here's a rough summary.\n\nThis patchset provides two related features: 1) the ability for a client\nto explicitly allow or deny particular methods of in-band authentication\n(that is, things like password exchange), and 2) the ability to withhold\na client certificate from a server that asks for it.\n\nFeature 1 was originally proposed to mitigate abuse where a successful\nMITM attack can then be used to fish for client credentials [1]. It also\nlets users disable undesirable authentication types (like plaintext) by\ndefault, which seems to be a common interest. Both features came up\nagain in the context of proxies such as postgres_fdw, where it's\nsometimes important that users authenticate using only their credentials\nand not piggyback on the authority of the proxy host [2]. And another\nuse case for feature 2 just came up independently [3], to fix\nconnections where the default client certificate isn't valid for a\nparticular server.\n\nSince this is all client-side, it's compatible with existing servers.\nAlso since it's client-side, it can't prevent connections from being\nestablished by an eager server; it can only drop the connection once it\nsees that its requirement was not met, similar to how we handle\ntarget_session_attrs. That means it can't prevent a login trigger from\nbeing processed on behalf of a confused proxy. (I think that would\nrequire server-side support.)\n\n0001 and 0002 are the core features. 
0003 is a more future-looking\nrefactoring of the internals, to make it easier to handle more SASL\nmechanisms, but it's not required and contains some unexercised code.\n\nThanks,\n--Jacob\n\n[1]\nhttps://www.postgresql.org/message-id/fcc3ebeb7f05775b63f3207ed52a54ea5d17fb42.camel%40vmware.com\n[2]\nhttps://www.postgresql.org/message-id/20230123015255.h3jro3yyitlsqykp%40awork3.anarazel.de\n[3]\nhttps://www.postgresql.org/message-id/CAAWbhmh_QqCnRVV8ct3gJULReQjWxLTaTBqs%2BfV7c7FpH0zbew%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 28 Feb 2023 15:38:21 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 03:38:21PM -0800, Jacob Champion wrote:\n> 0001 and 0002 are the core features. 0003 is a more future-looking\n> refactoring of the internals, to make it easier to handle more SASL\n> mechanisms, but it's not required and contains some unexercised code.\n\nI was refreshing my mind with 0001 yesterday, and except for the two\nparts where we need to worry about AUTH_REQ_OK being sent too early\nand the business with gssenc, this is a rather straight-forward. It\nalso looks like the the participants of the thread are OK with the\ndesign you are proposing (list of keywords, potentially negative \npatterns). I think that I can get this part merged for this CF, at\nleast, not sure about the rest :p\n--\nMichael",
"msg_date": "Sat, 4 Mar 2023 11:35:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Fri, Mar 3, 2023 at 6:35 PM Michael Paquier <michael@paquier.xyz> wrote:\n> I was refreshing my mind with 0001 yesterday, and except for the two\n> parts where we need to worry about AUTH_REQ_OK being sent too early\n> and the business with gssenc, this is a rather straight-forward. It\n> also looks like the the participants of the thread are OK with the\n> design you are proposing (list of keywords, potentially negative\n> patterns). I think that I can get this part merged for this CF, at\n> least, not sure about the rest :p\n\nThanks! Is there anything that would make the sslcertmode patch more\npalatable? Or any particular areas of concern?\n\n--Jacob\n\n\n",
"msg_date": "Mon, 6 Mar 2023 16:02:25 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Mon, Mar 06, 2023 at 04:02:25PM -0800, Jacob Champion wrote:\n> On Fri, Mar 3, 2023 at 6:35 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> I was refreshing my mind with 0001 yesterday, and except for the two\n>> parts where we need to worry about AUTH_REQ_OK being sent too early\n>> and the business with gssenc, this is a rather straight-forward. It\n>> also looks like the the participants of the thread are OK with the\n>> design you are proposing (list of keywords, potentially negative\n>> patterns). I think that I can get this part merged for this CF, at\n>> least, not sure about the rest :p\n> \n> Thanks! Is there anything that would make the sslcertmode patch more\n> palatable? Or any particular areas of concern?\n\nI have been reviewing 0001, finishing with the attached, and that's\nnice work. My notes are below.\n\npqDropServerData() is in charge of cleaning up the transient data of a\nconnection between different attempts. Shouldn't client_finished_auth\nbe reset to false there? No parameters related to the connection\nparameters should be reset in this code path, but this state is\ndifferent. It does not seem possible that we could reach\npqDropServerData() after client_finished_auth has been set to true,\nbut that feels safer. I was tempted first to do that as well in\nmakeEmptyPGconn(), but we do a memset(0) there, so there is no point\nin doing that anyway ;)\n\nrequire_auth needs a cleanup in freePGconn().\n\n+ case AUTH_REQ_SCM_CREDS:\n+ return libpq_gettext(\"server requested UNIX socket credentials\");\nI am not really cool with the fact that this would fail and that we\noffer no options to control that. Now, this involves servers up to\n9.1, which is also a very good to rip of this code entirely. 
For now,\nI think that we'd better allow this option, and discuss the removal of\nthat in a separate thread.\n\npgindent has been complaining on the StaticAssertDecl() in fe-auth.c:\nsrc/interfaces/libpq/fe-auth.c: Error@847: Unbalanced parens\nWarning@847: Extra )\nWarning@847: Extra )\nWarning@848: Extra )\n\nFrom what I can see, this comes from the use of {0} within the\nexpression itself. I don't really want to dig into why pg_bsd_indent\nthinks this is a bad idea, so let's just move the StaticAssertDecl() a\nbit, like in the attached. The result is the same.\n\nAs of the \"sensitive\" cases of the patch:\n- I don't really think that we have to care much of the cases like\n\"none,scram\" meaning that a SASL exchange hastily downgraded to\nAUTH_REQ_OK by the server would be a success, as \"none\" means that the\nclient is basically OK with trust-level. This said, \"none\" could be a\ndangerous option in some cases, while useful in others.\n- SSPI is the default connection setup for the TAP tests on Windows.\nWe could stick a small test somewhere, perhaps, certainly not in\nsrc/test/authentication/.\n- SASL/SCRAM is indeed a problem on its own. My guess is that we\nshould let channel_binding do the job for SASL, or introduce a new\noption to decide which sasl mechanisms are authorized. At the end,\nusing \"scram-sha-256\" as the keyword is fine by me as we use that even\nfor HBA files, so that's quite known now, I hope.\n--\nMichael",
"msg_date": "Thu, 9 Mar 2023 15:35:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On 3/8/23 22:35, Michael Paquier wrote:\n> I have been reviewing 0001, finishing with the attached, and that's\n> nice work. My notes are below.\n\nThanks!\n\n> pqDropServerData() is in charge of cleaning up the transient data of a\n> connection between different attempts. Shouldn't client_finished_auth\n> be reset to false there? No parameters related to the connection\n> parameters should be reset in this code path, but this state is\n> different. It does not seem possible that we could reach\n> pqDropServerData() after client_finished_auth has been set to true,\n> but that feels safer.\n\nYeah, that seems reasonable.\n\n> + case AUTH_REQ_SCM_CREDS:\n> + return libpq_gettext(\"server requested UNIX socket credentials\");\n> I am not really cool with the fact that this would fail and that we\n> offer no options to control that. Now, this involves servers up to\n> 9.1, which is also a very good to rip of this code entirely. For now,\n> I think that we'd better allow this option, and discuss the removal of\n> that in a separate thread.\n\nFair enough.\n\n> pgindent has been complaining on the StaticAssertDecl() in fe-auth.c:\n> src/interfaces/libpq/fe-auth.c: Error@847: Unbalanced parens\n> Warning@847: Extra )\n> Warning@847: Extra )\n> Warning@848: Extra )\n> \n> From what I can see, this comes from the use of {0} within the\n> expression itself. I don't really want to dig into why pg_bsd_indent\n> thinks this is a bad idea, so let's just move the StaticAssertDecl() a\n> bit, like in the attached. The result is the same.\n\nWorks for me. I wonder if\n\n sizeof(((PGconn*) 0)->allowed_auth_methods)\n\nwould make pgindent any happier? That'd let you keep the assertion local\nto auth_method_allowed, but it looks scarier. 
:)\n\n> As of the \"sensitive\" cases of the patch:\n> - I don't really think that we have to care much of the cases like\n> \"none,scram\" meaning that a SASL exchange hastily downgraded to\n> AUTH_REQ_OK by the server would be a success, as \"none\" means that the\n> client is basically OK with trust-level. This said, \"none\" could be a\n> dangerous option in some cases, while useful in others.\n\nYeah. I think a server shouldn't be allowed to abandon a SCRAM exchange\npartway through, but that's completely independent of this patchset.\n\n> - SSPI is the default connection setup for the TAP tests on Windows.\n\nOh, I don't think I ever noticed that.\n\n> We could stick a small test somewhere, perhaps, certainly not in\n> src/test/authentication/.\n\nWhere were you thinking? (Would it be so bad to have a tiny\nt/005_sspi.pl that's just skipped on *nix?)\n\n> - SASL/SCRAM is indeed a problem on its own. My guess is that we\n> should let channel_binding do the job for SASL, or introduce a new\n> option to decide which sasl mechanisms are authorized. At the end,\n> using \"scram-sha-256\" as the keyword is fine by me as we use that even\n> for HBA files, so that's quite known now, I hope.\n\nDid you have any thoughts about the 0003 generalization attempt?\n\n> -+ if (conn->require_auth)\n> ++ if (conn->require_auth && conn->require_auth[0])\n\nThank you for that catch. I guess we should test somewhere that\n`require_auth=` behaves normally?\n\n> + reason = libpq_gettext(\"server did not complete authentication\"),\n> -+ result = false;\n> ++ result = false;\n> + }\n\nThis reindentation looks odd.\n\nnit: some of the new TAP test names have been rewritten with commas,\nothers with colons.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Fri, 10 Mar 2023 14:32:17 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 02:32:17PM -0800, Jacob Champion wrote:\n> On 3/8/23 22:35, Michael Paquier wrote:\n> Works for me. I wonder if\n>\n> sizeof(((PGconn*) 0)->allowed_auth_methods)\n>\n> would make pgindent any happier? That'd let you keep the assertion local\n> to auth_method_allowed, but it looks scarier. :)\n\nI can check that, now it's not bad to keep the assertion as it is,\neither.\n\n>> As of the \"sensitive\" cases of the patch:\n>> - I don't really think that we have to care much of the cases like\n>> \"none,scram\" meaning that a SASL exchange hastily downgraded to\n>> AUTH_REQ_OK by the server would be a success, as \"none\" means that the\n>> client is basically OK with trust-level. This said, \"none\" could be a\n>> dangerous option in some cases, while useful in others.\n>\n> Yeah. I think a server shouldn't be allowed to abandon a SCRAM exchange\n> partway through, but that's completely independent of this patchset.\n\nAgreed.\n\n>> We could stick a small test somewhere, perhaps, certainly not in\n>> src/test/authentication/.\n>\n> Where were you thinking? (Would it be so bad to have a tiny\n> t/005_sspi.pl that's just skipped on *nix?)\n\nHmm, OK. It may be worth having a 005_sspi.pl in\nsrc/test/authentication/ specifically for Windows. This patch gives\nat least one reason to do so. Looking at pg_regress.c, we have that:\n if (config_auth_datadir)\n {\n#ifdef ENABLE_SSPI\n if (!use_unix_sockets)\n config_sspi_auth(config_auth_datadir, user);\n#endif\n exit(0);\n }\n\nSo applying a check on $use_unix_sockets should be OK, I hope.\n\n>> - SASL/SCRAM is indeed a problem on its own. My guess is that we\n>> should let channel_binding do the job for SASL, or introduce a new\n>> option to decide which sasl mechanisms are authorized. 
At the end,\n>> using \"scram-sha-256\" as the keyword is fine by me as we use that even\n>> for HBA files, so that's quite known now, I hope.\n>\n> Did you have any thoughts about the 0003 generalization attempt?\n\nNot yet, unfortunately.\n\n> > -+ if (conn->require_auth)\n> > ++ if (conn->require_auth && conn->require_auth[0])\n>\n> Thank you for that catch. I guess we should test somewhere that\n> `require_auth=` behaves normally?\n\nYeah, that seems like an idea. That would be cheap enough.\n\n>> + reason = libpq_gettext(\"server did not complete authentication\"),\n>> -+ result = false;\n>> ++ result = false;\n>> + }\n>\n> This reindentation looks odd.\n\nThat's because the previous line has a comma. So the reindent is\nright, not the code.\n\n> nit: some of the new TAP test names have been rewritten with commas,\n> others with colons.\n\nIndeed, I thought to have caught all of them, but you wrote a lot of\ntests :)\n\nCould you send a new patch with all these adjustments? That would\nhelp a lot.\n--\nMichael",
"msg_date": "Sat, 11 Mar 2023 08:09:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 3:09 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >> + reason = libpq_gettext(\"server did not complete authentication\"),\n> >> -+ result = false;\n> >> ++ result = false;\n> >> + }\n> >\n> > This reindentation looks odd.\n>\n> That's because the previous line has a comma. So the reindent is\n> right, not the code.\n\nWhoops. :(\n\n> Could you send a new patch with all these adjustments? That would\n> help a lot.\n\nWill do!\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Fri, 10 Mar 2023 15:16:20 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 3:16 PM Jacob Champion <jchampion@timescale.com> wrote:\n> > Could you send a new patch with all these adjustments? That would\n> > help a lot.\n>\n> Will do!\n\nHere's a v16:\n- updated 0001 patch message\n- all test names should have commas rather than colons now\n- new test for an empty require_auth\n- new SSPI suite (note that it doesn't run by default on Cirrus, due\nto the use of PG_TEST_USE_UNIX_SOCKETS)\n- fixed errant comma at EOL\n\nThanks,\n--Jacob",
"msg_date": "Mon, 13 Mar 2023 12:38:10 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 12:38:10PM -0700, Jacob Champion wrote:\n> Here's a v16:\n> - updated 0001 patch message\n> - all test names should have commas rather than colons now\n> - new test for an empty require_auth\n> - new SSPI suite (note that it doesn't run by default on Cirrus, due\n> to the use of PG_TEST_USE_UNIX_SOCKETS)\n> - fixed errant comma at EOL\n\n0001 was looking fine enough seen from here, so applied it after\ntweaking a few comments. That's enough to cover most of the needs of\nthis thread.\n\n0002 looks pretty simple as well, I think that's worth a look for this\nCF. I am not sure about 0003, to be honest, as I am wondering if\nthere could be a better solution than tying more the mechanism names\nwith the expected AUTH_REQ_* values..\n--\nMichael",
"msg_date": "Tue, 14 Mar 2023 14:39:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 10:39 PM Michael Paquier <michael@paquier.xyz> wrote:\n> 0001 was looking fine enough seen from here, so applied it after\n> tweaking a few comments. That's enough to cover most of the needs of\n> this thread.\n\nThank you very much!\n\n> 0002 looks pretty simple as well, I think that's worth a look for this\n> CF.\n\nCool. v17 just rebases the set over HEAD, then, for cfbot.\n\n> I am not sure about 0003, to be honest, as I am wondering if\n> there could be a better solution than tying more the mechanism names\n> with the expected AUTH_REQ_* values..\n\nYeah, I'm not particularly excited about the approach I took. It'd be\neasier if we had a second SASL method to verify the implementation...\nI'd also proposed just adding an Assert, as a third option, to guide\nthe eventual SASL implementer back to this conversation?\n\n--Jacob",
"msg_date": "Tue, 14 Mar 2023 12:14:40 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 12:14:40PM -0700, Jacob Champion wrote:\n> On Mon, Mar 13, 2023 at 10:39 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> 0002 looks pretty simple as well, I think that's worth a look for this\n>> CF.\n> \n> Cool. v17 just rebases the set over HEAD, then, for cfbot.\n\nI have looked at 0002, and I am on board with using a separate\nconnection parameter for this case, orthogonal to require_auth, with\nthe three value \"allow\", \"disable\" and \"require\". So that's one thing\n:)\n\n- # Function introduced in OpenSSL 1.0.2.\n+ # Functions introduced in OpenSSL 1.0.2. LibreSSL doesn't have all of these.\n ['X509_get_signature_nid'],\n+ ['SSL_CTX_set_cert_cb'],\n\nFrom what I can see, X509_get_signature_nid() is in LibreSSL, but not\nSSL_CTX_set_cert_cb(). Perhaps that's worth having two different\ncomments?\n\n+ <para>\n+ a certificate may be sent, if the server requests one and it has\n+ been provided via <literal>sslcert</literal>\n+ </para>\n\nIt seems to me that this description is not completely exact. The\ndefault is to look at ~/.postgresql/postgresql.crt, so sslcert is not\nmandatory. There could be a certificate even without sslcert set.\n\n+ libpq_append_conn_error(conn, \"sslcertmode value \\\"%s\\\" invalid when SSL support is not compiled in\",\n+ conn->sslcertmode);\n\nThis string could be combined with the same one used for sslmode,\nsaving a bit in translation effortm by making the connection parameter\nname a value of the string (\"%s value \\\"%s\\\" invalid ..\"). The second\nstring where HAVE_SSL_CTX_SET_CERT_CB is not set could be refactored\nthe same way, I guess.\n\n+ * figure out if a certficate was actually requested, so \"require\" is\ns/certficate/certificate/.\n\ncontrib/sslinfo/ has ssl_client_cert_present(), that we could use in\nthe tests to make sure that the client has actually sent a\ncertificate? How about adding some of these tests to 003_sslinfo.pl\nfor the \"allow\" and \"require\" cases? Even for \"disable\", we could\ncheck check that ssl_client_cert_present() returns false? That would\nmake four tests if everything is covered:\n- \"allow\" without a certificate sent.\n- \"allow\" with a certificate sent.\n- \"disable\".\n- \"require\"\n\n+ if (!conn->ssl_cert_requested)\n+ {\n+ libpq_append_conn_error(conn, \"server did not request a certificate\");\n+ return false;\n+ }\n+ else if (!conn->ssl_cert_sent)\n+ {\n+ libpq_append_conn_error(conn, \"server accepted connection without a valid certificate\");\n+ return false;\n+ }\nPerhaps useless question: should this say \"SSL certificate\"?\n\nfreePGconn() is missing a free(sslcertmode).\n--\nMichael",
"msg_date": "Wed, 22 Mar 2023 15:00:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Tue, Mar 21, 2023 at 11:01 PM Michael Paquier <michael@paquier.xyz> wrote:\n> - # Function introduced in OpenSSL 1.0.2.\n> + # Functions introduced in OpenSSL 1.0.2. LibreSSL doesn't have all of these.\n> ['X509_get_signature_nid'],\n> + ['SSL_CTX_set_cert_cb'],\n>\n> From what I can see, X509_get_signature_nid() is in LibreSSL, but not\n> SSL_CTX_set_cert_cb(). Perhaps that's worth having two different\n> comments?\n\nI took a stab at that in v18. I diverged a bit between Meson and\nAutoconf, which you may not care for.\n\n> + <para>\n> + a certificate may be sent, if the server requests one and it has\n> + been provided via <literal>sslcert</literal>\n> + </para>\n>\n> It seems to me that this description is not completely exact. The\n> default is to look at ~/.postgresql/postgresql.crt, so sslcert is not\n> mandatory. There could be a certificate even without sslcert set.\n\nReworded.\n\n> + libpq_append_conn_error(conn, \"sslcertmode value \\\"%s\\\" invalid when SSL support is not compiled in\",\n> + conn->sslcertmode);\n>\n> This string could be combined with the same one used for sslmode,\n> saving a bit in translation effortm by making the connection parameter\n> name a value of the string (\"%s value \\\"%s\\\" invalid ..\").\n\nDone.\n\n> + * figure out if a certficate was actually requested, so \"require\" is\n> s/certficate/certificate/.\n\nHeh, fixed. I need new glasses, clearly.\n\n> contrib/sslinfo/ has ssl_client_cert_present(), that we could use in\n> the tests to make sure that the client has actually sent a\n> certificate? How about adding some of these tests to 003_sslinfo.pl\n> for the \"allow\" and \"require\" cases?\n\nAdded; see what you think.\n\n> + if (!conn->ssl_cert_requested)\n> + {\n> + libpq_append_conn_error(conn, \"server did not request a certificate\");\n> + return false;\n> + }\n> + else if (!conn->ssl_cert_sent)\n> + {\n> + libpq_append_conn_error(conn, \"server accepted connection without a valid certificate\");\n> + return false;\n> + }\n> Perhaps useless question: should this say \"SSL certificate\"?\n\nI have no objection, so done that way.\n\n> freePGconn() is missing a free(sslcertmode).\n\nArgh, I keep forgetting that. Fixed, thanks!\n\n--Jacob",
"msg_date": "Thu, 23 Mar 2023 15:40:55 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Thu, Mar 23, 2023 at 03:40:55PM -0700, Jacob Champion wrote:\n> On Tue, Mar 21, 2023 at 11:01 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> contrib/sslinfo/ has ssl_client_cert_present(), that we could use in\n>> the tests to make sure that the client has actually sent a\n>> certificate? How about adding some of these tests to 003_sslinfo.pl\n>> for the \"allow\" and \"require\" cases?\n> \n> Added; see what you think.\n\nThat's a pretty good test design, covering all 4 cases. Nice.\n\n>> freePGconn() is missing a free(sslcertmode).\n> \n> Argh, I keep forgetting that. Fixed, thanks!\n\nI have spent a couple of hours looking at the whole again today,\ntesting that with OpenSSL to make sure that everything was OK. Apart\nfrom a few tweaks, that seemed pretty good. So, applied.\n--\nMichael",
"msg_date": "Fri, 24 Mar 2023 14:18:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Thu, Mar 23, 2023 at 10:18 PM Michael Paquier <michael@paquier.xyz> wrote:\n> I have spent a couple of hours looking at the whole again today,\n> testing that with OpenSSL to make sure that everything was OK. Apart\n> from a few tweaks, that seemed pretty good. So, applied.\n\nThank you!\n\n--Jacob\n\n\n",
"msg_date": "Fri, 24 Mar 2023 09:30:06 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 09:30:06AM -0700, Jacob Champion wrote:\n> On Thu, Mar 23, 2023 at 10:18 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> I have spent a couple of hours looking at the whole again today,\n>> testing that with OpenSSL to make sure that everything was OK. Apart\n>> from a few tweaks, that seemed pretty good. So, applied.\n> \n> Thank you!\n\nPlease note that the CF entry has been marked as committed. We should\nreally do something about having a cleaner separation between SASL,\nthe mechanisms and the AUTH_REQ_* codes, in the long term, though\nhonestly I don't know yet what would be the most elegant and the least\nerror-prone approach. And for anything that touches authentication,\nsimpler means better.\n--\nMichael",
"msg_date": "Sat, 25 Mar 2023 11:59:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Let libpq reject unexpected authentication requests"
}
] |
[
{
"msg_contents": "Hi,\nLooking at pg_stat_statements, there doesn't seem to be timestamp column\nfor when the underlying query is performed.\nSince the same query can be run multiple times, the absence of timestamp\ncolumn makes finding the most recent invocation of the query difficult.\n\nDoes it make sense to add such a column ?\n\nThanks\n\nHi,Looking at pg_stat_statements, there doesn't seem to be timestamp column for when the underlying query is performed.Since the same query can be run multiple times, the absence of timestamp column makes finding the most recent invocation of the query difficult.Does it make sense to add such a column ?Thanks",
"msg_date": "Sat, 5 Mar 2022 18:10:44 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "timestamp for query in pg_stat_statements"
},
{
"msg_contents": "On Sat, Mar 05, 2022 at 06:10:44PM -0800, Zhihong Yu wrote:\n>\n> Looking at pg_stat_statements, there doesn't seem to be timestamp column\n> for when the underlying query is performed.\n> Since the same query can be run multiple times, the absence of timestamp\n> column makes finding the most recent invocation of the query difficult.\n>\n> Does it make sense to add such a column ?\n\nI don't think it would be that helpful. Why do you need to only know when the\nlast execution was, but no other details among every other cumulated counters?\n\nYou should consider using some other tools on top of pg_stat_statements (and\npossibly other extensions) that performs snapshot regularly and can show you\nall the details at the given frequency.\n\n\n",
"msg_date": "Sun, 6 Mar 2022 12:17:09 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: timestamp for query in pg_stat_statements"
},
{
"msg_contents": "On Sat, Mar 5, 2022 at 8:17 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Sat, Mar 05, 2022 at 06:10:44PM -0800, Zhihong Yu wrote:\n> >\n> > Looking at pg_stat_statements, there doesn't seem to be timestamp column\n> > for when the underlying query is performed.\n> > Since the same query can be run multiple times, the absence of timestamp\n> > column makes finding the most recent invocation of the query difficult.\n> >\n> > Does it make sense to add such a column ?\n>\n> I don't think it would be that helpful. Why do you need to only know when\n> the\n> last execution was, but no other details among every other cumulated\n> counters?\n>\n> You should consider using some other tools on top of pg_stat_statements\n> (and\n> possibly other extensions) that performs snapshot regularly and can show\n> you\n> all the details at the given frequency.\n>\nHi,\nThe current design of pg_stat_statements doesn't have the concept of\nobservation.\n\nBy observation I mean scenarios where pg_stat_statements is read by people\ndoing performance tuning.\n\nHere is one example (same query, q, is concerned).\nAt t1, q is performed, leaving one row in pg_stat_statements with mean_time\nof 10.\nAt t2, operator examines pg_stat_statements and provides some suggestion\nfor tuning q (which is carried out).\nAt t3, q is run again leaving the row with mean_time of 9.\nNow with two rows for q, how do we know whether the row written at t3 is\nprior to or after implementing the suggestion made at t2 ?\n\nUsing other tools, a lot of the information in pg_stat_statements would be\nduplicated to distinguish the counters recorded w.r.t. tuning operation.\n\nI think pg_stat_statements can do better in this regard.\n\nCheers\n\nOn Sat, Mar 5, 2022 at 8:17 PM Julien Rouhaud <rjuju123@gmail.com> wrote:On Sat, Mar 05, 2022 at 06:10:44PM -0800, Zhihong Yu wrote:\n>\n> Looking at pg_stat_statements, there doesn't seem to be timestamp column\n> for when the underlying query is performed.\n> Since the same query can be run multiple times, the absence of timestamp\n> column makes finding the most recent invocation of the query difficult.\n>\n> Does it make sense to add such a column ?\n\nI don't think it would be that helpful. Why do you need to only know when the\nlast execution was, but no other details among every other cumulated counters?\n\nYou should consider using some other tools on top of pg_stat_statements (and\npossibly other extensions) that performs snapshot regularly and can show you\nall the details at the given frequency.Hi,The current design of pg_stat_statements doesn't have the concept of observation.By observation I mean scenarios where pg_stat_statements is read by people doing performance tuning.Here is one example (same query, q, is concerned).At t1, q is performed, leaving one row in pg_stat_statements with mean_time of 10.At t2, operator examines pg_stat_statements and provides some suggestion for tuning q (which is carried out).At t3, q is run again leaving the row with mean_time of 9.Now with two rows for q, how do we know whether the row written at t3 is prior to or after implementing the suggestion made at t2 ?Using other tools, a lot of the information in pg_stat_statements would be duplicated to distinguish the counters recorded w.r.t. tuning operation.I think pg_stat_statements can do better in this regard.Cheers",
"msg_date": "Sun, 6 Mar 2022 12:37:00 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: timestamp for query in pg_stat_statements"
},
{
"msg_contents": "On Sun, Mar 06, 2022 at 12:37:00PM -0800, Zhihong Yu wrote:\n> The current design of pg_stat_statements doesn't have the concept of\n> observation.\n>\n> By observation I mean scenarios where pg_stat_statements is read by people\n> doing performance tuning.\n>\n> Here is one example (same query, q, is concerned).\n> At t1, q is performed, leaving one row in pg_stat_statements with mean_time\n> of 10.\n> At t2, operator examines pg_stat_statements and provides some suggestion\n> for tuning q (which is carried out).\n> At t3, q is run again leaving the row with mean_time of 9.\n> Now with two rows for q, how do we know whether the row written at t3 is\n> prior to or after implementing the suggestion made at t2 ?\n\nWell, if pg_stat_statements is read by people doing performance tuning\nshouldn't they be able to distinguish which query text is the one they just\nrewrote?\n\n> Using other tools, a lot of the information in pg_stat_statements would be\n> duplicated to distinguish the counters recorded w.r.t. tuning operation.\n\nYes, which is good. Your example was about rewriting a query, but what about\nother possibilities like creating an index, changing hash_mem_multiplier...?\nYou won't get a new record and the mean_time will mostly be useless.\n\nIf you take regular snapshot, then you will be able to compute the mean_time\nfor each interval, and that will answer bot this scenario and the one in your\nexample (since the 2nd row won't exist in the earlier snapshots).\n\n\n",
"msg_date": "Mon, 7 Mar 2022 10:23:35 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: timestamp for query in pg_stat_statements"
},
{
"msg_contents": "On Sun, Mar 6, 2022 at 6:23 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Sun, Mar 06, 2022 at 12:37:00PM -0800, Zhihong Yu wrote:\n> > The current design of pg_stat_statements doesn't have the concept of\n> > observation.\n> >\n> > By observation I mean scenarios where pg_stat_statements is read by\n> people\n> > doing performance tuning.\n> >\n> > Here is one example (same query, q, is concerned).\n> > At t1, q is performed, leaving one row in pg_stat_statements with\n> mean_time\n> > of 10.\n> > At t2, operator examines pg_stat_statements and provides some suggestion\n> > for tuning q (which is carried out).\n> > At t3, q is run again leaving the row with mean_time of 9.\n> > Now with two rows for q, how do we know whether the row written at t3 is\n> > prior to or after implementing the suggestion made at t2 ?\n>\n> Well, if pg_stat_statements is read by people doing performance tuning\n> shouldn't they be able to distinguish which query text is the one they just\n> rewrote?\n>\nDid I mention rewriting ?\nAs you said below, adding index is one way of tuning which doesn't involve\nrewriting.\n\nPlease also note that the person tuning the query may be different from the\nperson writing the query.\nSo some information in pg_stat_statements (or related table) is needed to\ndisambiguate.\n\n\n> > Using other tools, a lot of the information in pg_stat_statements would\n> be\n> > duplicated to distinguish the counters recorded w.r.t. tuning operation.\n>\n> Yes, which is good. Your example was about rewriting a query, but what\n> about\n> other possibilities like creating an index, changing\n> hash_mem_multiplier...?\n> You won't get a new record and the mean_time will mostly be useless.\n>\n> If you take regular snapshot, then you will be able to compute the\n> mean_time\n> for each interval, and that will answer bot this scenario and the one in\n> your\n> example (since the 2nd row won't exist in the earlier snapshots).\n>\n\nOn Sun, Mar 6, 2022 at 6:23 PM Julien Rouhaud <rjuju123@gmail.com> wrote:On Sun, Mar 06, 2022 at 12:37:00PM -0800, Zhihong Yu wrote:\n> The current design of pg_stat_statements doesn't have the concept of\n> observation.\n>\n> By observation I mean scenarios where pg_stat_statements is read by people\n> doing performance tuning.\n>\n> Here is one example (same query, q, is concerned).\n> At t1, q is performed, leaving one row in pg_stat_statements with mean_time\n> of 10.\n> At t2, operator examines pg_stat_statements and provides some suggestion\n> for tuning q (which is carried out).\n> At t3, q is run again leaving the row with mean_time of 9.\n> Now with two rows for q, how do we know whether the row written at t3 is\n> prior to or after implementing the suggestion made at t2 ?\n\nWell, if pg_stat_statements is read by people doing performance tuning\nshouldn't they be able to distinguish which query text is the one they just\nrewrote?Did I mention rewriting ?As you said below, adding index is one way of tuning which doesn't involve rewriting.Please also note that the person tuning the query may be different from the person writing the query.So some information in pg_stat_statements (or related table) is needed to disambiguate.\n\n> Using other tools, a lot of the information in pg_stat_statements would be\n> duplicated to distinguish the counters recorded w.r.t. tuning operation.\n\nYes, which is good. Your example was about rewriting a query, but what about\nother possibilities like creating an index, changing hash_mem_multiplier...?\nYou won't get a new record and the mean_time will mostly be useless.\n\nIf you take regular snapshot, then you will be able to compute the mean_time\nfor each interval, and that will answer bot this scenario and the one in your\nexample (since the 2nd row won't exist in the earlier snapshots).",
"msg_date": "Sun, 6 Mar 2022 19:10:49 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: timestamp for query in pg_stat_statements"
},
{
"msg_contents": "On Sun, Mar 06, 2022 at 07:10:49PM -0800, Zhihong Yu wrote:\n> On Sun, Mar 6, 2022 at 6:23 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> > On Sun, Mar 06, 2022 at 12:37:00PM -0800, Zhihong Yu wrote:\n> > >\n> > > Here is one example (same query, q, is concerned).\n> > > At t1, q is performed, leaving one row in pg_stat_statements with\n> > mean_time\n> > > of 10.\n> > > At t2, operator examines pg_stat_statements and provides some suggestion\n> > > for tuning q (which is carried out).\n> > > At t3, q is run again leaving the row with mean_time of 9.\n> > > Now with two rows for q, how do we know whether the row written at t3 is\n> > > prior to or after implementing the suggestion made at t2 ?\n> >\n> > Well, if pg_stat_statements is read by people doing performance tuning\n> > shouldn't they be able to distinguish which query text is the one they just\n> > rewrote?\n> >\n> Did I mention rewriting ?\n\nHow else would you end up with two entries in pg_stat_statements?\n\n> As you said below, adding index is one way of tuning which doesn't involve\n> rewriting.\n\nYes, and in that case you have a single row for that query, and mean_time is\nuseless. You need to compute it yourself using snapshots of\npg_stat_statements if you want to know how that query performed since the\noptimization.\n\n> So some information in pg_stat_statements (or related table) is needed to\n> disambiguate.\n\nIn my opinion that's not pg_stat_statements' job. Like all other similar\ninfrastructure in postgres it only provides cumulated counters. You would\nhave exactly the same issue with e.g. pg_stat_user_indexes or pg_stat_bgwriter.\n\n\n",
"msg_date": "Mon, 7 Mar 2022 11:21:58 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: timestamp for query in pg_stat_statements"
}
] |
[
{
"msg_contents": "PSA patch to fix a comment typo.\n\n(The 'OR' should not be uppercase - that keyword is irrelevant here).\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Mon, 7 Mar 2022 09:31:33 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Comment typo in CheckCmdReplicaIdentity"
},
{
"msg_contents": "On Mon, Mar 07, 2022 at 09:31:33AM +1100, Peter Smith wrote:\n> PSA patch to fix a comment typo.\n> \n> (The 'OR' should not be uppercase - that keyword is irrelevant here).\n\nI was looking at the whole routine, and your suggestion looks like an\nimprovement to me. Will apply if there are no objections.\n--\nMichael",
"msg_date": "Mon, 7 Mar 2022 10:36:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Comment typo in CheckCmdReplicaIdentity"
},
{
"msg_contents": "On Mon, Mar 07, 2022 at 10:36:24AM +0900, Michael Paquier wrote:\n> On Mon, Mar 07, 2022 at 09:31:33AM +1100, Peter Smith wrote:\n> > PSA patch to fix a comment typo.\n> > \n> > (The 'OR' should not be uppercase - that keyword is irrelevant here).\n> \n> I was looking at the whole routine, and your suggestion looks like an\n> improvement to me. Will apply if there are no objections.\n\n+1\n\n\n",
"msg_date": "Mon, 7 Mar 2022 10:28:08 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Comment typo in CheckCmdReplicaIdentity"
},
{
"msg_contents": "On Mon, Mar 07, 2022 at 10:28:08AM +0800, Julien Rouhaud wrote:\n> +1\n\nAnd done.\n--\nMichael",
"msg_date": "Tue, 8 Mar 2022 14:31:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Comment typo in CheckCmdReplicaIdentity"
},
{
"msg_contents": "On Tue, Mar 8, 2022 at 4:31 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Mar 07, 2022 at 10:28:08AM +0800, Julien Rouhaud wrote:\n> > +1\n>\n> And done.\n> --\n> Michael\n\nThanks!\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 8 Mar 2022 16:50:39 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Comment typo in CheckCmdReplicaIdentity"
}
] |
[
{
"msg_contents": "Hi,\nCurrently the query id for pg_stat_statements gets calculated based on the\nparse nodes specifics.\nThis means that the user cannot add a comment to a SQL query to test\nsomething. (though some other RDBMS allows this practice).\n\nConsider this use case: for query q, admin looks at stats and performs some\noptimization (without changing the query). Admin adds / modifies the\ncomment for q - now the query becomes q'. If query id doesn't change, there\nstill would be one row in pg_stat_statements which makes it difficult to\ngauge the effectiveness of the tuning.\n\nI want to get opinion from the community whether adding / changing comment\nin SQL query should result in new query id for pg_stat_statements.\n\nCheers\n\nHi,Currently the query id for pg_stat_statements gets calculated based on the parse nodes specifics. This means that the user cannot add a comment to a SQL query to test something. (though some other RDBMS allows this practice).Consider this use case: for query q, admin looks at stats and performs some optimization (without changing the query). Admin adds / modifies the comment for q - now the query becomes q'. If query id doesn't change, there still would be one row in pg_stat_statements which makes it difficult to gauge the effectiveness of the tuning.I want to get opinion from the community whether adding / changing comment in SQL query should result in new query id for pg_stat_statements.Cheers",
"msg_date": "Mon, 7 Mar 2022 09:42:26 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "refreshing query id for pg_stat_statements based on comment in sql"
},
{
"msg_contents": "On Mon, Mar 7, 2022 at 09:42:26AM -0800, Zhihong Yu wrote:\n> Hi,\n> Currently the query id for pg_stat_statements gets calculated based on the\n> parse nodes specifics. \n> This means that the user cannot add a comment to a SQL query to test something.\n> (though some other RDBMS allows this practice).\n> \n> Consider this use case: for query q, admin looks at stats and performs some\n> optimization (without changing the query). Admin adds / modifies the comment\n> for q - now the query becomes q'. If query id doesn't change, there still would\n> be one row in pg_stat_statements which makes it difficult to gauge the\n> effectiveness of the tuning.\n> \n> I want to get opinion from the community whether adding / changing comment in\n> SQL query should result in new query id for pg_stat_statements.\n\nUh, we don't have a parse node for comments, and I didn't think comments\nwere part of the query id, and my testing confirms that:\n\n\tpsql -c \"SET log_statement = 'all'\" -c \"select pg_sleep(10000) -- test1;\" test\n\tpsql -c \"SET log_statement = 'all'\" -c \"select pg_sleep(10000) -- test2;\" test\n\nshows the comment in the logs:\n\n\n\t2022-03-07 19:02:19.509 EST [1075860] LOG: statement: select pg_sleep(10000) -- test1;\n\t2022-03-07 19:02:24.389 EST [1075860] ERROR: canceling statement due to user request\n\t2022-03-07 19:02:24.389 EST [1075860] STATEMENT: select pg_sleep(10000) -- test1;\n\t2022-03-07 19:02:27.029 EST [1075893] LOG: statement: select pg_sleep(10000) -- test2;\n\t2022-03-07 19:02:47.915 EST [1075893] ERROR: canceling statement due to user request\n\t2022-03-07 19:02:47.915 EST [1075893] STATEMENT: select pg_sleep(10000) -- test2;\n\nand I see the same query_id for both:\n\n\ttest=> select query, query_id from pg_stat_activity;\n\t query | query_id\n\t-----------------------------------------------+----------------------\n\t |\n\t |\n-->\t select pg_sleep(10000) -- test1; | 2920433178127795318\n\t select query, query_id from pg_stat_activity; | -8032661921273433383\n\t |\n\t |\n\t |\n\t(7 rows)\n\t\n\ttest=> select query, query_id from pg_stat_activity;\n\t query | query_id\n\t-----------------------------------------------+----------------------\n\t |\n\t |\n-->\t select pg_sleep(10000) -- test2; | 2920433178127795318\n\t select query, query_id from pg_stat_activity; | -8032661921273433383\n\nI think you need to show us the problem you are having.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Mon, 7 Mar 2022 19:06:08 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: refreshing query id for pg_stat_statements based on comment in\n sql"
},
{
"msg_contents": "Hi,\n\nOn Mon, Mar 07, 2022 at 09:42:26AM -0800, Zhihong Yu wrote:\n> Hi,\n> Currently the query id for pg_stat_statements gets calculated based on the\n> parse nodes specifics.\n> This means that the user cannot add a comment to a SQL query to test\n> something. (though some other RDBMS allows this practice).\n>\n> Consider this use case: for query q, admin looks at stats and performs some\n> optimization (without changing the query). Admin adds / modifies the\n> comment for q - now the query becomes q'. If query id doesn't change, there\n> still would be one row in pg_stat_statements which makes it difficult to\n> gauge the effectiveness of the tuning.\n>\n> I want to get opinion from the community whether adding / changing comment\n> in SQL query should result in new query id for pg_stat_statements.\n\nAre you talking about optimizer hint with something like pg_hint_plan, or just\nrandom comment like \"/* we now added index blabla */ SELECT ...\"?\n\nIf the former, then such an extension can already provide its own queryid\ngenerator which can chose to ignore part or all of the comments or not.\n\nIf the latter, then it seems shortsighted to me. At the very least not all\napplication can be modified to have a specific comment attached to a query.\n\nAlso, if you want check how a query if performing after doing some\nmodifications, you should start with some EXPLAIN ANALYZE first (or even a\nsimple EXPLAIN if you want to validate some new index using hypothetical\nindexes). If this is some more general change (e.g. shared_buffers,\nwork_mem...) then the whole system is going to perform differently, and you\ncertainly won't add a new comment to every single query executed.\n\nSo again it seems to me that doing pg_stat_statement snapshots and comparing\nthe diff between each to see how the whole workload, or specific queries, is\nbehaving is still the best answer here.\n\n\n",
"msg_date": "Tue, 8 Mar 2022 09:42:33 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: refreshing query id for pg_stat_statements based on comment in\n sql"
}
] |
[
{
"msg_contents": "Hi,\n\nI created a .patch that will allow me to recover the stat files after a\npotential crash.\nDepending on the traffic on the server some records might be lost (0.5 sec\nof records / more or less ? ).\n From what I read it is still better than no stat files at all.\n\nI restricted it to the default recovery scenario only\n(RECOVERY_TARGET_TIMELINE_LATEST) to avoid having invalid stats files with\nother recovery options.\n\nAm I missing something ? File integrity should be fine because of renaming.",
"msg_date": "Tue, 8 Mar 2022 00:05:16 +0100",
"msg_from": "Marek Kulik <mkulik@redhat.com>",
"msg_from_op": true,
"msg_subject": "Recovering stat file from crash"
}
] |
[
{
"msg_contents": "Hi,\nI just added some tests for the pg_freespacemap extension because the test\ncoverage was 0 percent.\nBut I don't know if I did it correctly.\n\n---\nRegards\nLee Dong Wook",
"msg_date": "Tue, 8 Mar 2022 23:39:08 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add pg_freespacemap extension sql test"
},
{
"msg_contents": "I'm sorry for attaching the wrong patch file.\n\n2022년 3월 8일 (화) 오후 11:39, Dong Wook Lee <sh95119@gmail.com>님이 작성:\n\n> Hi,\n> I just added some tests for the pg_freespacemap extension because the test\n> coverage was 0 percent.\n> But I don't know if I did it correctly.\n>\n> ---\n> Regards\n> Lee Dong Wook\n>",
"msg_date": "Tue, 8 Mar 2022 23:43:47 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add pg_freespacemap extension sql test"
},
{
"msg_contents": "On Tue, Mar 08, 2022 at 11:39:08PM +0900, Dong Wook Lee wrote:\n> Hi,\n> I just added some tests for the pg_freespacemap extension because the test\n> coverage was 0 percent.\n> But I don't know if I did it correctly.\n\nThe patch only touches doc/*.sgml.\nI suppose you forgot to use \"git add\".\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 8 Mar 2022 08:45:13 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_freespacemap extension sql test"
},
{
"msg_contents": "That's right, so I attached the correct file again.\n\n\n2022년 3월 8일 (화) 오후 11:45, Justin Pryzby <pryzby@telsasoft.com>님이 작성:\n\n> On Tue, Mar 08, 2022 at 11:39:08PM +0900, Dong Wook Lee wrote:\n> > Hi,\n> > I just added some tests for the pg_freespacemap extension because the\n> test\n> > coverage was 0 percent.\n> > But I don't know if I did it correctly.\n>\n> The patch only touches doc/*.sgml.\n> I suppose you forgot to use \"git add\".\n>\n> --\n> Justin\n>\n\nThat's right, so I attached the correct file again.2022년 3월 8일 (화) 오후 11:45, Justin Pryzby <pryzby@telsasoft.com>님이 작성:On Tue, Mar 08, 2022 at 11:39:08PM +0900, Dong Wook Lee wrote:\n> Hi,\n> I just added some tests for the pg_freespacemap extension because the test\n> coverage was 0 percent.\n> But I don't know if I did it correctly.\n\nThe patch only touches doc/*.sgml.\nI suppose you forgot to use \"git add\".\n\n-- \nJustin",
"msg_date": "Tue, 8 Mar 2022 23:54:12 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add pg_freespacemap extension sql test"
},
{
"msg_contents": "Dong Wook Lee <sh95119@gmail.com> writes:\n> [ 0001_add_test_pg_fsm.patch ]\n\nI think having some coverage here would be great, but I'm concerned that\nthis patch doesn't look very portable. Aren't the numbers liable to\nchange on 32-bit machines, in particular?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 08 Mar 2022 11:19:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_freespacemap extension sql test"
},
{
"msg_contents": "2022년 3월 9일 (수) 오전 1:19, Tom Lane <tgl@sss.pgh.pa.us>님이 작성:\n\n> Dong Wook Lee <sh95119@gmail.com> writes:\n> > [ 0001_add_test_pg_fsm.patch ]\n>\n> I think having some coverage here would be great, but I'm concerned that\n> this patch doesn't look very portable. Aren't the numbers liable to\n> change on 32-bit machines, in particular?\n>\n> regards, tom lane\n>\n\nI agree with you, but I have no good idea how to deal with it.\nCan the Perl TAP test be a good way?\nThought?\n\n2022년 3월 9일 (수) 오전 1:19, Tom Lane <tgl@sss.pgh.pa.us>님이 작성:Dong Wook Lee <sh95119@gmail.com> writes:\n> [ 0001_add_test_pg_fsm.patch ]\n\nI think having some coverage here would be great, but I'm concerned that\nthis patch doesn't look very portable. Aren't the numbers liable to\nchange on 32-bit machines, in particular?\n\n regards, tom laneI agree with you, but I have no good idea how to deal with it.Can the Perl TAP test be a good way?Thought?",
"msg_date": "Wed, 9 Mar 2022 20:13:15 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add pg_freespacemap extension sql test"
},
{
"msg_contents": "On Wed, Mar 09, 2022 at 08:13:15PM +0900, Dong Wook Lee wrote:\n> I agree with you, but I have no good idea how to deal with it.\n\nWell, my guess is that you basically just care about being able to\ndetect if there is free space in the map or not, which goes down to\ndetecting if pg_freespace() returns 0 or a number strictly higher than\n0, so wouldn't it be enough to stick some > 0 in your test queries?\nBtw, if you want to test 32-bit builds, gcc allows that by passing\ndown -m32.\n\n> Can the Perl TAP test be a good way?\n\nThat does not seem necessary here.\n--\nMichael",
"msg_date": "Fri, 11 Mar 2022 14:51:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_freespacemap extension sql test"
},
{
"msg_contents": "2022년 3월 11일 (금) 오후 2:51, Michael Paquier <michael@paquier.xyz>님이 작성:\n>\n> On Wed, Mar 09, 2022 at 08:13:15PM +0900, Dong Wook Lee wrote:\n> > I agree with you, but I have no good idea how to deal with it.\n>\n> Well, my guess is that you basically just care about being able to\n> detect if there is free space in the map or not, which goes down to\n> detecting if pg_freespace() returns 0 or a number strictly higher than\n> 0, so wouldn't it be enough to stick some > 0 in your test queries?\n> Btw, if you want to test 32-bit builds, gcc allows that by passing\n> down -m32.\n>\n> > Can the Perl TAP test be a good way?\n>\n> That does not seem necessary here.\n> --\n> Michael\n\nso, you mean it's not necessary to add cases for negative numbers or\nbeyond the range?\nI just wrote down testable cases, and if it doesn't have a big\nadvantage, I don't mind not adding that case.\n\n\n",
"msg_date": "Tue, 15 Mar 2022 22:50:25 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add pg_freespacemap extension sql test"
},
{
"msg_contents": "> Well, my guess is that you basically just care about being able to\n> detect if there is free space in the map or not, which goes down to\n> detecting if pg_freespace() returns 0 or a number strictly higher than\n> 0, so wouldn't it be enough to stick some > 0 in your test queries?\n\nI edited the previous patch file.\nAm I correct in understanding that?",
"msg_date": "Sun, 20 Mar 2022 01:18:26 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add pg_freespacemap extension sql test"
},
{
"msg_contents": "On Sat, Mar 19, 2022 at 1:18 PM Dong Wook Lee <sh95119@gmail.com> wrote:\n>\n> > Well, my guess is that you basically just care about being able to\n> > detect if there is free space in the map or not, which goes down to\n> > detecting if pg_freespace() returns 0 or a number strictly higher than\n> > 0, so wouldn't it be enough to stick some > 0 in your test queries?\n>\n> I edited the previous patch file.\n> Am I correct in understanding that?\n>\n\nI think what Michael meant is something like attached.\n\nRegards,\n\n--\nFabrízio de Royes Mello",
"msg_date": "Sat, 19 Mar 2022 15:13:38 -0300",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_freespacemap extension sql test"
},
{
"msg_contents": "2022년 3월 20일 (일) 03:13, Fabrízio de Royes Mello <fabriziomello@gmail.com>님이\n작성:\n\n>\n>\n> On Sat, Mar 19, 2022 at 1:18 PM Dong Wook Lee <sh95119@gmail.com> wrote:\n> >\n> > > Well, my guess is that you basically just care about being able to\n> > > detect if there is free space in the map or not, which goes down to\n> > > detecting if pg_freespace() returns 0 or a number strictly higher than\n> > > 0, so wouldn't it be enough to stick some > 0 in your test queries?\n> >\n> > I edited the previous patch file.\n> > Am I correct in understanding that?\n> >\n>\n> I think what Michael meant is something like attached.\n>\n> Regards,\n>\n> --\n> Fabrízio de Royes Mello\n>\n\nI think you’re right, thank you for sending it instead of me.\n\n>\n\n2022년 3월 20일 (일) 03:13, Fabrízio de Royes Mello <fabriziomello@gmail.com>님이 작성:On Sat, Mar 19, 2022 at 1:18 PM Dong Wook Lee <sh95119@gmail.com> wrote:>> > Well, my guess is that you basically just care about being able to> > detect if there is free space in the map or not, which goes down to> > detecting if pg_freespace() returns 0 or a number strictly higher than> > 0, so wouldn't it be enough to stick some > 0 in your test queries?>> I edited the previous patch file.> Am I correct in understanding that?>I think what Michael meant is something like attached.Regards,--Fabrízio de Royes MelloI think you’re right, thank you for sending it instead of me.",
"msg_date": "Mon, 21 Mar 2022 21:12:37 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add pg_freespacemap extension sql test"
},
{
"msg_contents": "On Mon, Mar 21, 2022 at 09:12:37PM +0900, Dong Wook Lee wrote:\n> 2022년 3월 20일 (일) 03:13, Fabrízio de Royes Mello <fabriziomello@gmail.com>님이\n> 작성:\n>> On Sat, Mar 19, 2022 at 1:18 PM Dong Wook Lee <sh95119@gmail.com> wrote:\n>>>> Well, my guess is that you basically just care about being able to\n>>>> detect if there is free space in the map or not, which goes down to\n>>>> detecting if pg_freespace() returns 0 or a number strictly higher than\n>>>> 0, so wouldn't it be enough to stick some > 0 in your test queries?\n>>>\n>>> I edited the previous patch file.\n>>> Am I correct in understanding that?\n>>>\n>>\n>> I think what Michael meant is something like attached.\n> \n> I think you’re right, thank you for sending it instead of me.\n\nYes, something like v3 was what I was referring to as we cannot rely\non exact numbers for this test suite. At least, we can check if there\nis a FSM for a given block, even if that can be limited.\n\nAfter review, I don't like much the idea of allowing concurrent\nautovacuums to run in parallel of the table(s) of this test, so we'd\nbetter disable it explicitely. \"t1\" is also a very generic name to\nuse in a regression test. Another thing that itched me is that we\ncould also test more with indexes, particularly with btree, BRIN and\nhash (the latter should not have a FSM with 10 pages as per the first\ngroup batch, and each one has a stable an initial state). Finally,\nmaking the tests stable across 32-bit compilations (say gcc -m32) is\nproving to be tricky, but it should be safe enough to check if the FSM\nis computed or not with a minimal number of tuples.\n\nBtw, a .gitignore was also forgotten.\n\nI have extended the set of tests as of the attached, running these\nacross everything I could (CI, all my hosts including Windows, macos,\nLinux). We could do more later, of course, but this looks enough to\nme as a first step. And I think that this will not upset the\nbuildfarm.\n--\nMichael",
"msg_date": "Wed, 23 Mar 2022 15:05:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_freespacemap extension sql test"
},
{
"msg_contents": "On Wed, Mar 23, 2022 at 3:05 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> After review, I don't like much the idea of allowing concurrent\n> autovacuums to run in parallel of the table(s) of this test, so we'd\n> better disable it explicitely.\n\nMake sense.\n\n> \"t1\" is also a very generic name to use in a regression test.\n\nAgreed!\n\n> Another thing that itched me is that we\n> could also test more with indexes, particularly with btree, BRIN and\n> hash (the latter should not have a FSM with 10 pages as per the first\n> group batch, and each one has a stable an initial state).\n\nWhat about GIN/GIST indexes?\n\n> I have extended the set of tests as of the attached, running these\n> across everything I could (CI, all my hosts including Windows, macos,\n> Linux). We could do more later, of course, but this looks enough to\n> me as a first step. And I think that this will not upset the\n> buildfarm.\n\nAlso LGTM.\n\nRegards,\n\n--\nFabrízio de Royes Mello\n\nOn Wed, Mar 23, 2022 at 3:05 AM Michael Paquier <michael@paquier.xyz> wrote:>> After review, I don't like much the idea of allowing concurrent> autovacuums to run in parallel of the table(s) of this test, so we'd> better disable it explicitely.Make sense.> \"t1\" is also a very generic name to use in a regression test.Agreed!> Another thing that itched me is that we> could also test more with indexes, particularly with btree, BRIN and> hash (the latter should not have a FSM with 10 pages as per the first> group batch, and each one has a stable an initial state).What about GIN/GIST indexes?> I have extended the set of tests as of the attached, running these> across everything I could (CI, all my hosts including Windows, macos,> Linux). We could do more later, of course, but this looks enough to> me as a first step. And I think that this will not upset the> buildfarm.Also LGTM.Regards,--Fabrízio de Royes Mello",
"msg_date": "Wed, 23 Mar 2022 10:45:19 -0300",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_freespacemap extension sql test"
},
{
"msg_contents": "On Wed, Mar 23, 2022 at 10:45:19AM -0300, Fabrízio de Royes Mello wrote:\n> On Wed, Mar 23, 2022 at 3:05 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> Another thing that itched me is that we\n>> could also test more with indexes, particularly with btree, BRIN and\n>> hash (the latter should not have a FSM with 10 pages as per the first\n>> group batch, and each one has a stable an initial state).\n> \n> What about GIN/GIST indexes?\n\nYes, we could extend that more. For now, I am curious to see what the\nbuildfarm has to say with the current contents of the patch, and I can\nkeep an eye on the buildfarm today, so I have applied it.\n--\nMichael",
"msg_date": "Thu, 24 Mar 2022 09:40:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_freespacemap extension sql test"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Yes, we could extend that more. For now, I am curious to see what the\n> buildfarm has to say with the current contents of the patch, and I can\n> keep an eye on the buildfarm today, so I have applied it.\n\nIt seems this is unstable under valgrind [1]:\n\n--- /mnt/resource/bf/build/skink-master/HEAD/pgsql/contrib/pg_freespacemap/expected/pg_freespacemap.out\t2022-03-24 09:39:43.974477703 +0000\n+++ /mnt/resource/bf/build/skink-master/HEAD/pgsql.build/contrib/pg_freespacemap/results/pg_freespacemap.out\t2022-03-27 17:07:23.896287669 +0000\n@@ -60,6 +60,7 @@\n ORDER BY 1, 2;\n id | blkno | is_avail \n -----------------+-------+----------\n+ freespace_tab | 0 | t\n freespace_brin | 0 | f\n freespace_brin | 1 | f\n freespace_brin | 2 | t\n@@ -75,7 +76,7 @@\n freespace_hash | 7 | f\n freespace_hash | 8 | f\n freespace_hash | 9 | f\n-(15 rows)\n+(16 rows)\n \n -- failures with incorrect block number\n SELECT * FROM pg_freespace('freespace_tab', -1);\n\nskink has passed several runs since the commit went in, so it's\n\"unstable\" not \"fails consistently\". I see the test tries to\ndisable autovacuum on that table, so that doesn't seem to be\nthe problem ... what is?\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-03-27%2008%3A26%3A20\n\n\n",
"msg_date": "Sun, 27 Mar 2022 13:18:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_freespacemap extension sql test"
},
{
"msg_contents": "On Sun, Mar 27, 2022 at 01:18:46PM -0400, Tom Lane wrote:\n> skink has passed several runs since the commit went in, so it's\n> \"unstable\" not \"fails consistently\". I see the test tries to\n> disable autovacuum on that table, so that doesn't seem to be\n> the problem ... what is?\n\nThis is a race condition, directly unrelated to valgrind but easier to\ntrigger under it because things get slower. It takes me a dozen of\ntries to be able to reproduce the failure locally, but I can wiht\nvalgrind enabled.\n\nSo, the output of the test is simply telling us that the FSM of the\nmain table is not getting truncated. From what I can see, the\ndifference is in should_attempt_truncation(), where we finish with\nnonempty_pages set to 1 rather than 0 on failure. And it just takes\none autovacuum to run in parallel of the manual VACUUM after the\nDELETE to prevent the removal of those tuples, which is what I can see\nfrom the logs on failure:\nLOG: statement: DELETE FROM freespace_tab;\nDEBUG: autovacuum: processing database \"contrib_regression\"\nLOG: statement: VACUUM freespace_tab;\n\nIt seems to me here that the snapshot hold by autovacuum during the\nscan of pg_database to find the relations to process is enough to\nprevent the FSM truncation, as the tuples cleaned up by the DELETE\nquery still need to be visible. One simple way to keep this test\nwould be a custom configuration file with autovacuum disabled and\nNO_INSTALLCHECK. Any better ideas?\n--\nMichael",
"msg_date": "Mon, 28 Mar 2022 12:12:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_freespacemap extension sql test"
},
{
"msg_contents": "On Mon, Mar 28, 2022 at 12:12:48PM +0900, Michael Paquier wrote:\n> It seems to me here that the snapshot hold by autovacuum during the\n> scan of pg_database to find the relations to process is enough to\n> prevent the FSM truncation, as the tuples cleaned up by the DELETE\n> query still need to be visible. One simple way to keep this test\n> would be a custom configuration file with autovacuum disabled and\n> NO_INSTALLCHECK.\n\nWell, done this way. We already do that in other tests that rely on a\nFSM truncation to happen, like 008_fsm_truncation.pl.\n--\nMichael",
"msg_date": "Tue, 29 Mar 2022 14:05:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_freespacemap extension sql test"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen running cpluspluscheck I get many many complaints like\n\nIn file included from /tmp/pg-test-repo/src/include/port/atomics.h:70,\n from /tmp/pg-test-repo/src/include/utils/dsa.h:17,\n from /tmp/pg-test-repo/src/include/nodes/tidbitmap.h:26,\n from /tmp/pg-test-repo/src/include/nodes/execnodes.h:24,\n from /tmp/pg-test-repo/src/include/commands/trigger.h:17,\n from /tmp/pg-test-repo/src/pl/plpgsql/src/plpgsql.h:21,\n from /tmp/cpluspluscheck.qOi18T/test.cpp:3:\n/tmp/pg-test-repo/src/include/port/atomics/arch-x86.h: In function ‘bool pg_atomic_test_set_flag_impl(volatile pg_atomic_flag*)’:\n/tmp/pg-test-repo/src/include/port/atomics/arch-x86.h:143:23: warning: ISO C++17 does not allow ‘register’ storage class specifier [-Wregister]\n 143 | register char _res = 1;\n | ^~~~\n\nIt seems we should just remove the use of register? It's currently only used\nin\nsrc/include/storage/s_lock.h\nsrc/include/port/atomics/arch-x86.h\n\n From what I understand compilers essentially have been ignoring it for quite a\nwhile...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 8 Mar 2022 10:18:37 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "cpluspluscheck complains about use of register"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> When running cpluspluscheck I get many many complaints like\n> /tmp/pg-test-repo/src/include/port/atomics/arch-x86.h:143:23: warning: ISO C++17 does not allow ‘register’ storage class specifier [-Wregister]\n\nInteresting, I don't see that here.\n\n> It seems we should just remove the use of register?\n\nI have a vague idea that it was once important to say \"register\" if\nyou are going to use the variable in an asm snippet that requires it\nto be in a register. That might be wrong, or it might be obsolete\neven if once true. We could try taking these out and seeing if the\nbuildfarm complains. (If so, maybe -Wno-register would help?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 08 Mar 2022 13:46:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cpluspluscheck complains about use of register"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-08 13:46:36 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > When running cpluspluscheck I get many many complaints like\n> > /tmp/pg-test-repo/src/include/port/atomics/arch-x86.h:143:23: warning: ISO C++17 does not allow ‘register’ storage class specifier [-Wregister]\n>\n> Interesting, I don't see that here.\n\nProbably a question of the gcc version. I think starting with 11 g++ defaults\nto C++ 17.\n\n\n> > It seems we should just remove the use of register?\n>\n> I have a vague idea that it was once important to say \"register\" if\n> you are going to use the variable in an asm snippet that requires it\n> to be in a register. That might be wrong, or it might be obsolete\n> even if once true. We could try taking these out and seeing if the\n> buildfarm complains.\n\nWe have several inline asm statements not using register despite using\nvariables in a register (e.g. pg_atomic_compare_exchange_u32_impl()), so I\nwouldn't expect a problem with compilers we support.\n\nShould we make configure test for -Wregister? There's at least one additional\nuse of register that we'd have to change (pg_regexec).\n\n\n> (If so, maybe -Wno-register would help?)\n\nThat's what I did to work around the flood of warnings locally, so it'd\nwork.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 8 Mar 2022 10:59:02 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: cpluspluscheck complains about use of register"
},
{
"msg_contents": "\n>>> It seems we should just remove the use of register?\n>>\n>> I have a vague idea that it was once important to say \"register\" if\n>> you are going to use the variable in an asm snippet that requires it\n>> to be in a register. That might be wrong, or it might be obsolete\n>> even if once true. We could try taking these out and seeing if the\n>> buildfarm complains.\n>\n> We have several inline asm statements not using register despite using\n> variables in a register (e.g. pg_atomic_compare_exchange_u32_impl()), so I\n> wouldn't expect a problem with compilers we support.\n>\n> Should we make configure test for -Wregister? There's at least one additional\n> use of register that we'd have to change (pg_regexec).\n\n From a compilation perspective, \"register\" tells the compiler that you \ncannot have a pointer on a variable, i.e. it generates an error if someone \nadds something like:\n\n void * p = ®ister_variable;\n\nRemoving the \"register\" declaration means that such protection would be \nremoved, and creating such a pointer could reduce drastically compiler \noptimization opportunities.\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 9 Mar 2022 11:08:57 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: cpluspluscheck complains about use of register"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-08 10:59:02 -0800, Andres Freund wrote:\n> On 2022-03-08 13:46:36 -0500, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > When running cpluspluscheck I get many many complaints like\n> > > /tmp/pg-test-repo/src/include/port/atomics/arch-x86.h:143:23: warning: ISO C++17 does not allow ‘register’ storage class specifier [-Wregister]\n> >\n> > Interesting, I don't see that here.\n> \n> Probably a question of the gcc version. I think starting with 11 g++ defaults\n> to C++ 17.\n> \n> \n> > > It seems we should just remove the use of register?\n> >\n> > I have a vague idea that it was once important to say \"register\" if\n> > you are going to use the variable in an asm snippet that requires it\n> > to be in a register. That might be wrong, or it might be obsolete\n> > even if once true. We could try taking these out and seeing if the\n> > buildfarm complains.\n> \n> We have several inline asm statements not using register despite using\n> variables in a register (e.g. pg_atomic_compare_exchange_u32_impl()), so I\n> wouldn't expect a problem with compilers we support.\n> \n> Should we make configure test for -Wregister? There's at least one additional\n> use of register that we'd have to change (pg_regexec).\n> \n> \n> > (If so, maybe -Wno-register would help?)\n> \n> That's what I did to work around the flood of warnings locally, so it'd\n> work.\n\nI hit this again while porting cplupluscheck to be invoked by meson as\nwell. ISTM that we should just remove the uses of register. 
Yes, some very old\ncompilers might generate worse code without register, but I don't think we\nneed to care about peak efficiency with neolithic compilers.\n\nFabien raised the concern that removing register might lead to accidentally\nadding pointers to such variables - I don't find that convincing, because a)\nsuch code is typically inside a helper inline anyway b) we don't use register\nwidely enough to ensure this.\n\n\nAttached is a patch removing uses of register. The use in regexec.c could\nremain, since we only try to keep headers C++ clean. But there really doesn't\nseem to be a good reason to use register in that spot.\n\nI tried to use -Wregister to keep us honest going forward, but unfortunately\nit only works with a C++ compiler...\n\nI tested this by redefining register to something else, and I grepped for\nnon-comment uses of register. Entirely possible that I missed something.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sat, 24 Sep 2022 12:11:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: cpluspluscheck complains about use of register"
},
{
"msg_contents": "On Sat, Sep 24, 2022 at 12:11 PM Andres Freund <andres@anarazel.de> wrote:\n> I hit this again while porting cplupluscheck to be invoked by meson as\n> well. ISTM that we should just remove the uses of register. Yes, some very old\n> compilers might generate worse code without register, but I don't think we\n> need to care about peak efficiency with neolithic compilers.\n\n+1. I seem to recall reading that the register keyword was basically\nuseless as long as 15 years ago.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 24 Sep 2022 12:59:30 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: cpluspluscheck complains about use of register"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I hit this again while porting cplupluscheck to be invoked by meson as\n> well. ISTM that we should just remove the uses of register.\n\nOK by me.\n\n> I tried to use -Wregister to keep us honest going forward, but unfortunately\n> it only works with a C++ compiler...\n\nI think we only really care about stuff that cpluspluscheck would spot,\nso I don't feel a need to mess with the standard compilation flags.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 24 Sep 2022 16:01:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cpluspluscheck complains about use of register"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-24 16:01:25 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I hit this again while porting cplupluscheck to be invoked by meson as\n> > well. ISTM that we should just remove the uses of register.\n> \n> OK by me.\n\nDone. Thanks Tom, Peter.\n\n\n",
"msg_date": "Sat, 24 Sep 2022 15:13:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: cpluspluscheck complains about use of register"
},
{
"msg_contents": "Re: Tom Lane\n> > I hit this again while porting cplupluscheck to be invoked by meson as\n> > well. ISTM that we should just remove the uses of register.\n> \n> OK by me.\n> \n> > I tried to use -Wregister to keep us honest going forward, but unfortunately\n> > it only works with a C++ compiler...\n> \n> I think we only really care about stuff that cpluspluscheck would spot,\n> so I don't feel a need to mess with the standard compilation flags.\n\nThis has started to hurt: postgresql-debversion (a Debian version number\ndata type written in C++) failed to build against Postgresql <= 15 on\nUbuntu's next LTS release (24.04):\n\nIn file included from /usr/include/postgresql/15/server/port/atomics.h:70:\n/usr/include/postgresql/15/server/port/atomics/arch-x86.h:143:2: error: ISO C++17 does not allow 'register' storage class specifier [-Wregister]\n 143 | register char _res = 1;\n\nI managed to work around it by putting `#define register` before\nincluding the PG headers.\n\nShould the removal of \"register\" be backported to support that better?\n\nChristoph\n\n\n",
"msg_date": "Mon, 12 Feb 2024 12:03:01 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: cpluspluscheck complains about use of register"
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> Should the removal of \"register\" be backported to support that better?\n\nPerhaps. It's early days yet, but nobody has complained that that\nbroke anything in v16, so I'm guessing it'd be fine.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Feb 2024 11:08:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cpluspluscheck complains about use of register"
}
] |
[
{
"msg_contents": "Hi,\nI was looking at commit db632fbca and noticed that,\nin create_list_bounds(), if index is added to boundinfo->interleaved_parts\nin the first if statement, there is no need to perform the second check\ninvolving call to partition_bound_accepts_nulls().\n\nHere is a short patch.\n\nCheers",
"msg_date": "Tue, 8 Mar 2022 11:05:10 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "minor change for create_list_bounds()"
},
{
"msg_contents": "On Tue, Mar 08, 2022 at 11:05:10AM -0800, Zhihong Yu wrote:\n> I was looking at commit db632fbca and noticed that,\n> in create_list_bounds(), if index is added to boundinfo->interleaved_parts\n> in the first if statement, there is no need to perform the second check\n> involving call to partition_bound_accepts_nulls().\n\nGiven this change probably doesn't meaningfully impact performance or code\nclarity, I'm personally -1 for this patch. Is there another motivation\nthat I am missing?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 29 Jun 2022 16:41:01 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: minor change for create_list_bounds()"
},
{
"msg_contents": "On Thu, 30 Jun 2022 at 11:41, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Tue, Mar 08, 2022 at 11:05:10AM -0800, Zhihong Yu wrote:\n> > I was looking at commit db632fbca and noticed that,\n> > in create_list_bounds(), if index is added to boundinfo->interleaved_parts\n> > in the first if statement, there is no need to perform the second check\n> > involving call to partition_bound_accepts_nulls().\n>\n> Given this change probably doesn't meaningfully impact performance or code\n> clarity, I'm personally -1 for this patch. Is there another motivation\n> that I am missing?\n\nWhile I agree that the gains on making this change are small. It just\naccounts to saving a call to bms_add_member() when we've already found\nthe partition to be interleaved due to interleaved Datum values, I\njust disagree with not doing anything about it. My reasons are:\n\n1. This code is new to PG15. We have the opportunity now to make a\nmeaningful improvement and backpatch it. When PG15 is out, the bar is\nset significantly higher for fixing this type of thing due to having\nto consider the additional cost of backpatching conflicts with other\nfuture fixes in that area.\n2. I think the code as I just pushed it is easier to understand than\nwhat was there before.\n3. I'd like to encourage people to look at and critique our newly\nadded code. Having a concern addressed seems like a good reward for\nthe work.\n\nI've now pushed the patch along with some other minor adjustments in the area.\n\nThanks for the report/patch.\n\nDavid\n\n\n",
"msg_date": "Wed, 13 Jul 2022 17:07:53 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: minor change for create_list_bounds()"
},
{
"msg_contents": "On Wed, Jul 13, 2022 at 05:07:53PM +1200, David Rowley wrote:\n> While I agree that the gains on making this change are small. It just\n> accounts to saving a call to bms_add_member() when we've already found\n> the partition to be interleaved due to interleaved Datum values, I\n> just disagree with not doing anything about it. My reasons are:\n> \n> 1. This code is new to PG15. We have the opportunity now to make a\n> meaningful improvement and backpatch it. When PG15 is out, the bar is\n> set significantly higher for fixing this type of thing due to having\n> to consider the additional cost of backpatching conflicts with other\n> future fixes in that area.\n> 2. I think the code as I just pushed it is easier to understand than\n> what was there before.\n\nFair enough.\n\n> 3. I'd like to encourage people to look at and critique our newly\n> added code. Having a concern addressed seems like a good reward for\n> the work.\n\n+1\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 13 Jul 2022 11:30:08 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: minor change for create_list_bounds()"
}
] |
[
{
"msg_contents": "Hi,\n\nOne thing I'm not yet happy around the shared memory stats patch is\nnaming. Currently a lot of comments say things like:\n\n * [...] We convert to\n * microseconds in PgStat_Counter format when transmitting to the collector.\n\nor\n\n# - Query and Index Statistics Collector -\n\nor\n\n/* ----------\n * pgstat_report_subscription_drop() -\n *\n * Tell the collector about dropping the subscription.\n * ----------\n */\n\n\nthe immediate question for the patch is what to replace \"collector\" with.\n\n\"stats subsystem\" is too general, because that could be about\nbackend_activity.c, or pg_statistic, or ...\n\n\"shared memory stats\" seems too focussed on the manner of storage, rather than\nthe kind of stats.\n\nThe patch currently uses \"activity statistics\" in a number of places, but that\nis confusing too, because pg_stat_activity is a different kind of stats.\n\nAny ideas?\n\n\nThe postgresql.conf.sample section header seems particularly odd - \"index\nstatistics\"? We collect more data about tables etc.\n\n\nA more general point: Our naming around different types of stats is horribly\nconfused. We have stats describing the current state (e.g. pg_stat_activity,\npg_stat_replication, pg_stat_progress_*, ...) and accumulated stats\n(pg_stat_user_tables, pg_stat_database, etc) in the same namespace. Should we\ntry to move towards something more coherent, at least going forward?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 8 Mar 2022 12:53:51 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Naming of the different stats systems / \"stats collector\""
},
{
"msg_contents": "On Tue, Mar 8, 2022 at 1:54 PM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> One thing I'm not yet happy around the shared memory stats patch is\n> naming. Currently a lot of comments say things like:\n>\n> * [...] We convert to\n> * microseconds in PgStat_Counter format when transmitting to the\n> collector.\n>\n> or\n>\n> # - Query and Index Statistics Collector -\n>\n> or\n>\n> /* ----------\n> * pgstat_report_subscription_drop() -\n> *\n> * Tell the collector about dropping the subscription.\n> * ----------\n> */\n>\n>\n> the immediate question for the patch is what to replace \"collector\" with.\n>\n>\nNot really following the broader context here so this came out of nowhere\nfor me. What is the argument for changing the status quo here? Collector\nseems like good term.\n\n\n>\n> The patch currently uses \"activity statistics\" in a number of places, but\n> that\n> is confusing too, because pg_stat_activity is a different kind of stats.\n>\n> Any ideas?\n>\n\nIf the complaint is that not all of these statistics modules use the\nstatistics collector then maybe we say each non-collector module defines an\n\"Event Listener\". Or, and without looking at the source code, have the\ncollector simply forward events like \"reset now\" to the appropriate module\nbut keep the collector as the single point of message interchange for all.\nAnd so \"tell the collector about\" is indeed the correct phrasing of what\nhappens.\n\n\n>\n> The postgresql.conf.sample section header seems particularly odd - \"index\n> statistics\"? We collect more data about tables etc.\n>\n\nNo argument for bringing the header current.\n\n>\n> A more general point: Our naming around different types of stats is\n> horribly\n> confused. We have stats describing the current state (e.g.\n> pg_stat_activity,\n> pg_stat_replication, pg_stat_progress_*, ...) 
and accumulated stats\n> (pg_stat_user_tables, pg_stat_database, etc) in the same namespace.\n> Should we\n> try to move towards something more coherent, at least going forward?\n>\n>\nI'm not sure trying to improve this going forward, and thus having at least\nthree categories, is particularly desirable. While it is unfortunate that\nwe don't have separate pg_metric and pg_status namespaces (combining\npg_stat with pg_status or pg_state, the two obvious choices, would be\nundesirable being they all have a shared leading character sequence) that\nis where we are today. We are probably stuck with just using the pg_stat\nnamespace and doing a better job of letting users know about the underlying\nimplementation choice each pg_stat relation took in order to know whether\nwhat is being reported is considered reliable (self-managed shared memory)\nor not (leverages the unreliable collector). In short, deal with this\nmainly in documentation/comments and implementation details but leave the\npublic facing naming alone.\n\nDavid J.",
"msg_date": "Tue, 8 Mar 2022 15:55:04 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Naming of the different stats systems / \"stats collector\""
},
{
"msg_contents": "At Tue, 8 Mar 2022 15:55:04 -0700, \"David G. Johnston\" <david.g.johnston@gmail.com> wrote in \n> On Tue, Mar 8, 2022 at 1:54 PM Andres Freund <andres@anarazel.de> wrote:\n> > the immediate question for the patch is what to replace \"collector\" with.\n> >\n> >\n> Not really following the broader context here so this came out of nowhere\n> for me. What is the argument for changing the status quo here? Collector\n> seems like good term.\n\nThe name \"stats collector\" is tied to the story that \"there is a\nprocess that only collects stats data that arrive from working\nprocesses\". We have modules like bgwriter, checkpointer,\nwalwriter and so on. On the other hand we have many features that have no\ndedicated process and instead work on a shared storage area as part of\nthe working processes: table/column statistics, XLOG, heap, SLRU and so on.\n\nIn the world where every working process writes statistics to a shared\nmemory area on its own, no such process exists. I think we can no longer\nname it \"stats collector\".\n\n> > The patch currently uses \"activity statistics\" in a number of places, but\n> > that\n> > is confusing too, because pg_stat_activity is a different kind of stats.\n> >\n> > Any ideas?\n> >\n> \n> If the complaint is that not all of these statistics modules use the\n> statistics collector then maybe we say each non-collector module defines an\n> \"Event Listener\". Or, and without looking at the source code, have the\n> collector simply forward events like \"reset now\" to the appropriate module\n> but keep the collector as the single point of message interchange for all.\n> And so \"tell the collector about\" is indeed the correct phrasing of what\n> happens.\n\nSo the collector as a process is going to die. We need an alternative\nname for the non-collector. Metrics, as you mentioned below, sounds\ngood to me. 
The name \"activity stat(istics)?s\" is an answer to my\ndesire to discriminate it from \"table/column statistics\" but I have to\nadmit that it is still not great.\n\n> > The postgresql.conf.sample section header seems particularly odd - \"index\n> > statistics\"? We collect more data about tables etc.\n> >\n> \n> No argument for bringing the header current.\n> \n> >\n> > A more general point: Our naming around different types of stats is\n> > horribly\n> > confused. We have stats describing the current state (e.g.\n> > pg_stat_activity,\n> > pg_stat_replication, pg_stat_progress_*, ...) and accumulated stats\n> > (pg_stat_user_tables, pg_stat_database, etc) in the same namespace.\n> > Should we\n> > try to move towards something more coherent, at least going forward?\n> >\n> >\n> I'm not sure trying to improve this going forward, and thus having at least\n> three categories, is particularly desirable. While it is unfortunate that\n> we don't have separate pg_metric and pg_status namespaces (combining\n> pg_stat with pg_status or pg_state, the two obvious choices, would be\n> undesirable being they all have a shared leading character sequence) that\n> is where we are today. We are probably stuck with just using the pg_stat\n> namespace and doing a better job of letting users know about the underlying\n> implementation choice each pg_stat relation took in order to know whether\n> what is being reported is considered reliable (self-managed shared memory)\n> or not (leverages the unreliable collector). In short, deal with this\n> mainly in documentation/comments and implementation details but leave the\n> public facing naming alone.\n> \n> David J.\n\nIf we could, I like the namings like pg_metrics.process,\npg_metrics.replication, pg_progress.vacuum, pg_progress.basebackup,\nand pg_stats.database, pg_stats.user_tables.. 
Seen that way, it\nlooks somewhat odd that the pg_stat_* views belong to the\npg_catalog namespace.\n\nIf we had system table aliases, people who insist on the good old\nnames could live with that. Even if there aren't, we can instead provide\nviews with the old names.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 09 Mar 2022 10:34:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Naming of the different stats systems / \"stats collector\""
},
{
"msg_contents": "On 2022-03-08 15:55:04 -0700, David G. Johnston wrote:\n> On Tue, Mar 8, 2022 at 1:54 PM Andres Freund <andres@anarazel.de> wrote:\n> > One thing I'm not yet happy around the shared memory stats patch is\n> > naming. Currently a lot of comments say things like:\n> >\n> > * [...] We convert to\n> > * microseconds in PgStat_Counter format when transmitting to the\n> > collector.\n> >\n> > or\n> >\n> > # - Query and Index Statistics Collector -\n> >\n> > or\n> >\n> > /* ----------\n> > * pgstat_report_subscription_drop() -\n> > *\n> > * Tell the collector about dropping the subscription.\n> > * ----------\n> > */\n> >\n> >\n> > the immediate question for the patch is what to replace \"collector\" with.\n> >\n> >\n> Not really following the broader context here so this came out of nowhere\n> for me. What is the argument for changing the status quo here? Collector\n> seems like good term.\n\nSorry, probably should have shared a bit more context. The shared memory stats\npatch removes the stats collector process - which seems to make 'collector'\nnot descriptive anymore...\n\nIt's still lossy in the sense that a crash will result in stats being lost and\ninprecise in that counter updates can be delayed, but there won't be lost\nstats due to UDP messages being thrown away under load anymore.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 8 Mar 2022 17:50:14 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Naming of the different stats systems / \"stats collector\""
},
{
"msg_contents": "On Tue, Mar 8, 2022 at 6:50 PM Andres Freund <andres@anarazel.de> wrote:\n\n> On 2022-03-08 15:55:04 -0700, David G. Johnston wrote:\n> > On Tue, Mar 8, 2022 at 1:54 PM Andres Freund <andres@anarazel.de> wrote:\n> > > One thing I'm not yet happy around the shared memory stats patch is\n> > > naming. Currently a lot of comments say things like:\n> > >\n> > > * [...] We convert to\n> > > * microseconds in PgStat_Counter format when transmitting to the\n> > > collector.\n> > >\n>\n\n\"...format for writing to the statistics datastore\"\n\n\n> > > or\n> > >\n> > > # - Query and Index Statistics Collector -\n>\n\n\"...Statistics Collection\"\n\n\n\n> > >\n> > > or\n> > >\n> > > /* ----------\n> > > * pgstat_report_subscription_drop() -\n> > > *\n> > > * Tell the collector about dropping the subscription.\n> > > * ----------\n> > > */\n>\n\nI would expect that either the function gets renamed or just goes away.\nJust changing the word \"collector\" isn't going to be a good change, the new\ndescription should describe whatever the new behavior is.\n\n\n> > >\n> > > the immediate question for the patch is what to replace \"collector\"\n> with.\n> > >\n> > >\n> > Not really following the broader context here so this came out of nowhere\n> > for me. What is the argument for changing the status quo here?\n> Collector\n> > seems like good term.\n>\n> Sorry, probably should have shared a bit more context. The shared memory\n> stats\n> patch removes the stats collector process - which seems to make 'collector'\n> not descriptive anymore...\n>\n>\nAs shown above I don't see that there is a single word that will simply\nreplace \"collector\". We are changing a core design of the system and each\ndependent system will need to be tweaked in a context-appropriate manner.\n\nAs the process goes away we are now dealing directly with a conceptual\ndatastore. 
And instead of referring to the implementation detail of how\nstatistics are collected we can just refer to the \"collection\" behavior\ngenerically. Whether we funnel through a process or write directly to the\ndatastore it is still statistics collection.\n\nDavid J.",
"msg_date": "Tue, 8 Mar 2022 19:13:45 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Naming of the different stats systems / \"stats collector\""
},
{
"msg_contents": "Hi,\n\nOn 2022-03-08 19:13:45 -0700, David G. Johnston wrote:\n> On Tue, Mar 8, 2022 at 6:50 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> > On 2022-03-08 15:55:04 -0700, David G. Johnston wrote:\n> > > On Tue, Mar 8, 2022 at 1:54 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > One thing I'm not yet happy around the shared memory stats patch is\n> > > > naming. Currently a lot of comments say things like:\n> > > >\n> > > > * [...] We convert to\n> > > > * microseconds in PgStat_Counter format when transmitting to the\n> > > > collector.\n> > > >\n> >\n>\n> \"...format for writing to the statistics datastore\"\n\nThat could also describe pg_statistic. Nor are we writing during normal\noperation (stats are first accumulated locally and subsequently a shared\nmemory hash table is updated), the on-disk file is only written at shutdown.\n\nWhich is my problem with this - we need a descriptive term / shorthand that\ndescribes the type of statistics we currently send to the stats collector.\n\n\"cumulative stats subsystem\"?\n\n\n> > > > /* ----------\n> > > > * pgstat_report_subscription_drop() -\n> > > > *\n> > > > * Tell the collector about dropping the subscription.\n> > > > * ----------\n> > > > */\n> >\n>\n> I would expect that either the function gets renamed or just goes away.\n> Just changing the word \"collector\" isn't going to be a good change, the new\n> description should describe whatever the new behavior is.\n\nIt currently has the same signature in the patch, and I don't forsee that\nchanging.\n\n\n> As the process goes away we are now dealing directly with a conceptual\n> datastore. And instead of referring to the implementation detail of how\n> statistics are collected we can just refer to the \"collection\" behavior\n> generically. 
Whether we funnel through a process or write directly to the\n> datastore it is still statistics collection.\n\nWe have many other types of stats that we collect, so yes, it's statistic\ncollection, but that's not descriptive enough imo. \"Stats collector\" somewhat\nworked because the fact that the collector process was involved served to\ndistinguish from other types of stats.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 8 Mar 2022 18:32:35 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Naming of the different stats systems / \"stats collector\""
},
{
"msg_contents": "On Tue, Mar 8, 2022 at 7:32 PM Andres Freund <andres@anarazel.de> wrote:\n\n> we need a descriptive term / shorthand that\n> describes the type of statistics we currently send to the stats collector.\n>\n> \"cumulative stats subsystem\"?\n>\n>\nI'm growing fond of \"cumulative\". It is more precise (and restrictive)\nthan \"metric\" but that is beneficial here so long as it is indeed true\n(which a quick skim of Table 28.2. Collected Statistics Views [1] leads me\nto believe it is).\n\nI'd be concerned that subsystem implies a collection process in a manner\nsimilar to how you associated datastore with a physical file on disk. But\nI'd pick subsystem over datastore here in any case.\n\nDavid J.\n\n[1] https://www.postgresql.org/docs/current/monitoring-stats.html",
"msg_date": "Tue, 8 Mar 2022 20:17:06 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Naming of the different stats systems / \"stats collector\""
},
{
"msg_contents": "Hi,\n\nOn 2022-03-08 20:17:06 -0700, David G. Johnston wrote:\n> On Tue, Mar 8, 2022 at 7:32 PM Andres Freund <andres@anarazel.de> wrote:\n> \n> > we need a descriptive term / shorthand that\n> > describes the type of statistics we currently send to the stats collector.\n> >\n> > \"cumulative stats subsystem\"?\n> >\n> >\n> I'm growing fond of \"cumulative\". It is more precise (and restrictive)\n> than \"metric\" but that is beneficial here so long as it is indeed true\n> (which a quick skim of Table 28.2. Collected Statistics Views [1] leads me\n> to believe it is).\n\nI did go for that one - I think it looks better than the other\nalternatives. Should you be interested, I posted a version using that name at\n[1]. The majority of the changes related to the naming are in 0005, 0026,\n0028...\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20220404041516.cctrvpadhuriawlq%40alap3.anarazel.de\n\n\n",
"msg_date": "Sun, 3 Apr 2022 21:22:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Naming of the different stats systems / \"stats collector\""
}
] |
[
{
"msg_contents": "Hi, hackers!\nI've noticed that check_ok() in pg_upgrade.h has been declared two times.\nHere's a one-line patch correcting this.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Wed, 9 Mar 2022 14:29:09 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Double declaration in pg_upgrade.h"
},
{
"msg_contents": "On 09.03.22 11:29, Pavel Borisov wrote:\n> I've noticed that check_ok() in pg_upgrade.h has been declared two times.\n> Here's a one-line patch correcting this.\n\nFixed, thanks.\n\n\n",
"msg_date": "Wed, 9 Mar 2022 12:14:10 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Double declaration in pg_upgrade.h"
}
] |
[
{
"msg_contents": "Hi Michael,\n\n>On Wed, Mar 09, 2022 at 07:45:32AM +0000, Daniel Westermann (DWE) wrote:\n>> Thanks for having a look. Done that way.\n\n>Hmm. Outside the title that had better use upper-case characters for\n>the first letter of each word, I can see references to the pattern you\n>are trying to eliminate in amcheck.sgml (1), config.sgml (3),\n>protocol.sgml (3) and mvcc.sgml (1). Shouldn't you refresh these as\n>well if the point is to make the full set of docs consistent?\n\n>As of the full tree, I can see that:\n>\n>$ git grep \"hot standby\" | wc -l\n>259\n\n>$ git grep \"Hot Standby\" | wc -l\n>73\n\n>So there is a trend for one of the two.\n\nThanks for looking at it. Yes, I am aware there are other places which would need to be changed and I think I mentioned that in an earlier Email. Are you suggesting to change all at once? I wanted to start with the documentation and then continue with the other places.\n\nRegards\nDaniel\n\n",
"msg_date": "Wed, 9 Mar 2022 11:29:53 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>",
"msg_from_op": true,
"msg_subject": "Re: Changing \"Hot Standby\" to \"hot standby\""
},
{
"msg_contents": ">>Hmm. Outside the title that had better use upper-case characters for\n>>the first letter of each word, I can see references to the pattern you\n>>are trying to eliminate in amcheck.sgml (1), config.sgml (3),\n>>protocol.sgml (3) and mvcc.sgml (1). Shouldn't you refresh these as\n>>well if the point is to make the full set of docs consistent?\n\n>>As of the full tree, I can see that:\n>>\n>>$ git grep \"hot standby\" | wc -l\n>>259\n\n>>$ git grep \"Hot Standby\" | wc -l\n>>73\n\n>>So there is a trend for one of the two.\n\n>Thanks for looking at it. Yes, I am aware there are other places which would need to be changed and I think I mentioned that in an >earlier Email. Are you suggesting to change all at once? I wanted to start with the documentation and then continue with the other >places.\n\nAttached a new version which also modifies amcheck.sgml, config.sgml, protocol.sgml, and mvcc.sgml accordingly.\n\nRegards\nDaniel",
"msg_date": "Wed, 9 Mar 2022 14:15:53 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>",
"msg_from_op": true,
"msg_subject": "Re: Changing \"Hot Standby\" to \"hot standby\""
},
{
"msg_contents": ">>>Hmm. Outside the title that had better use upper-case characters for\n>>>the first letter of each word, I can see references to the pattern you\n>>>are trying to eliminate in amcheck.sgml (1), config.sgml (3),\n>>>protocol.sgml (3) and mvcc.sgml (1). Shouldn't you refresh these as\n>>>well if the point is to make the full set of docs consistent?\n\n>>>As of the full tree, I can see that:\n>>>\n>>$ git grep \"hot standby\" | wc -l\n>>259\n\n>>$ git grep \"Hot Standby\" | wc -l\n>>73\n\n>>>So there is a trend for one of the two.\n\n>>Thanks for looking at it. Yes, I am aware there are other places which would need to be changed and I think I mentioned that in an >>earlier Email. Are you suggesting to change all at once? I wanted to start with the documentation and then continue with the other >>places.\n\n>Attached a new version which also modifies amcheck.sgml, config.sgml, protocol.sgml, and mvcc.sgml accordingly.\n\nRegards\nDaniel\n\n\nFrom: Daniel Westermann (DWE) <daniel.westermann@dbi-services.com>\nSent: Wednesday, March 9, 2022 15:15\nTo: Michael Paquier <michael@paquier.xyz>\nCc: Robert Treat <rob@xzilla.net>; Kyotaro Horiguchi <horikyota.ntt@gmail.com>; aleksander@timescale.com <aleksander@timescale.com>; pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Changing \"Hot Standby\" to \"hot standby\" \n \n>>Hmm. Outside the title that had better use upper-case characters for\n>>the first letter of each word, I can see references to the pattern you\n>>are trying to eliminate in amcheck.sgml (1), config.sgml (3),\n>>protocol.sgml (3) and mvcc.sgml (1). Shouldn't you refresh these as\n>>well if the point is to make the full set of docs consistent?\n\n>>As of the full tree, I can see that:\n>>\n>>$ git grep \"hot standby\" | wc -l\n>>259\n\n>>$ git grep \"Hot Standby\" | wc -l\n>>73\n\n>>So there is a trend for one of the two.\n\n>>Thanks for looking at it. 
Yes, I am aware there are other places which would need to be changed and I think I mentioned that in an >>earlier Email. Are you suggesting to change all at once? I wanted to start with the documentation and then continue with the other >>places.\n\n>Attached a new version which also modifies amcheck.sgml, config.sgml, protocol.sgml, and mvcc.sgml accordingly.\n\nSending this again as my last two mails did not seem to reach the archives or the commitfest. Or do they need moderation somehow?\n\nRegards\nDaniel",
"msg_date": "Thu, 10 Mar 2022 13:45:55 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>",
"msg_from_op": true,
"msg_subject": "Re: Changing \"Hot Standby\" to \"hot standby\""
},
{
"msg_contents": "On Thu, Mar 10, 2022 at 8:45 AM Daniel Westermann (DWE)\n<daniel.westermann@dbi-services.com> wrote:\n>\n> >>>Hmm. Outside the title that had better use upper-case characters for\n> >>>the first letter of each word, I can see references to the pattern you\n> >>>are trying to eliminate in amcheck.sgml (1), config.sgml (3),\n> >>>protocol.sgml (3) and mvcc.sgml (1). Shouldn't you refresh these as\n> >>>well if the point is to make the full set of docs consistent?\n>\n> >>>As of the full tree, I can see that:\n> >>>\n> >>$ git grep \"hot standby\" | wc -l\n> >>259\n>\n> >>$ git grep \"Hot Standby\" | wc -l\n> >>73\n>\n> >>>So there is a trend for one of the two.\n>\n> >>Thanks for looking at it. Yes, I am aware there are other places which would need to be changed and I think I mentioned that in an >>earlier Email. Are you suggesting to change all at once? I wanted to start with the documentation and then continue with the other >>places.\n>\n> >Attached a new version which also modifies amcheck.sgml, config.sgml, protocol.sgml, and mvcc.sgml accordingly.\n>\n> Regards\n> Daniel\n>\n>\n> From: Daniel Westermann (DWE) <daniel.westermann@dbi-services.com>\n> Sent: Wednesday, March 9, 2022 15:15\n> To: Michael Paquier <michael@paquier.xyz>\n> Cc: Robert Treat <rob@xzilla.net>; Kyotaro Horiguchi <horikyota.ntt@gmail.com>; aleksander@timescale.com <aleksander@timescale.com>; pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\n> Subject: Re: Changing \"Hot Standby\" to \"hot standby\"\n>\n> >>Hmm. Outside the title that had better use upper-case characters for\n> >>the first letter of each word, I can see references to the pattern you\n> >>are trying to eliminate in amcheck.sgml (1), config.sgml (3),\n> >>protocol.sgml (3) and mvcc.sgml (1). Shouldn't you refresh these as\n> >>well if the point is to make the full set of docs consistent?\n>\n> >>As of the full tree, I can see that:\n> >>\n> >>$ git grep \"hot standby\" | wc -l\n> >>259\n>\n> >>$ git grep \"Hot Standby\" | wc -l\n> >>73\n>\n> >>So there is a trend for one of the two.\n>\n> >>Thanks for looking at it. Yes, I am aware there are other places which would need to be changed and I think I mentioned that in an >>earlier Email. Are you suggesting to change all at once? I wanted to start with the documentation and then continue with the other >>places.\n>\n> >Attached a new version which also modifies amcheck.sgml, config.sgml, protocol.sgml, and mvcc.sgml accordingly.\n>\n> Sending this again as my last two mails did not seem to reach the archives or the commitfest. Or do they need moderation somehow?\n>\n\nNot sure why the previous emails didn't go through, and still doesn't\nlook like they were picked up. In the interest of progress though,\nattaching an updated patch with some minor wordsmithing; lmk if you'd\nprefer this differently\n\nRobert Treat\nhttps://xzilla.net",
"msg_date": "Thu, 10 Mar 2022 17:58:05 -0500",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: Changing \"Hot Standby\" to \"hot standby\""
},
{
"msg_contents": "On Thu, Mar 10, 2022 at 05:58:05PM -0500, Robert Treat wrote:\n> Not sure why the previous emails didn't go through, and still doesn't\n> look like they were picked up. In the interest of progress though,\n> attaching an updated patch with some minor wordsmithing; lmk if you'd\n> prefer this differently\n\nLooks the same as v5 for me, that applies the same consistency rules\neverywhere in the docs. So applied this one.\n--\nMichael",
"msg_date": "Fri, 11 Mar 2022 15:18:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Changing \"Hot Standby\" to \"hot standby\""
}
] |
[
{
"msg_contents": "On 03/09/22 12:19, Stephen Frost wrote:\n> Let's avoid hijacking [thread about other patch] [1]\n> for an independent debate about what our documentation should or\n> shouldn't include.\n\nAgreed. New thread here.\n\nStephen wrote:\n> Documenting everything that pg_basebackup does to make sure that the\n> backup is viable might be something to work on if someone is really\n> excited about this, but it's not 'dead-simple' and it's darn close to\n> the bare minimum,\n\nI wrote:\n> if the claim is that an admin who relies on pg_basebackup is relying\n> on essential things pg_basebackup does that have not been enumerated\n> in our documentation yet, I would argue they should be.\n\nMagnus wrote:\n> For the people who want to drive their backups from a shellscript and\n> for some reason *don't* want to use pg_basebackup, we need to come up\n> with a different API or a different set of tools. That is not a\n> documentation task. That is a \"start from a list of which things\n> pg_basebackup cannot do that are still simple, or that tools like\n> pgbackrest cannot do if they're complicated\". And then design an API\n> that's actually safe and easy to use *for that usecase*.\n\nI wrote:\n> That might also be a good thing, but I don't see it as a substitute\n> for documenting the present reality of what the irreducibly essential\n> behaviors of pg_basebackup (or of third-party tools like pgbackrest)\n> are, and why they are so.\n\nStephen wrote:\n> I disagree. If we provided a tool then we'd document that tool and how\n> users can use it, not every single step that it does (see also:\n> pg_basebackup).\n\n\nI could grant, arguendo, that for most cases where we've \"provided a tool\"\nthat's enough, and still distinguish pg_basebackup from those. In no\nparticular order:\n\n- pg_basebackup comes late to the party. It appears in 9.1 as a tool that\n  conveniently automates a process (performing an online base backup)\n  that has already been documented since 8.0 six and a half years earlier.\n  (While, yes, it streams the file contents over a newly-introduced\n  protocol, I don't think anyone has called that one of its irreducibly\n  essential behaviors, or claimed that any other way of reliably copying\n  those contents during the backup window would be inherently flawed.)\n\n- By the release where pg_basebackup appears, anyone who is doing\n  online backup and PITR is already using some other tooling (third-party\n  or locally developed) to do so. There may be benefits and costs in\n  migrating those procedures to pg_basebackup. If one of the benefits is\n  \"your current procedures may be missing essential steps we built into\n  pg_basebackup but left out of our documentation\" then that is important\n  to know for an admin who is making that decision. Even better, knowing\n  what those essential steps are will allow that admin to make an informed\n  assessment of whether the existing procedures are broken or not.\n\n- Typical tools are easy for an admin to judge the fitness of.\n  The tool does a thing, and you can tell right away if it did the thing\n  you needed or not. pg_basebackup, like any backup tool, does a thing,\n  and you don't find out if that was the thing you needed until later,\n  when failure isn't an option. That's a less-typical kind of a tool,\n  for which it's less ok to be a black box.\n\n- Ultimately, an admin's job isn't \"use pg_basebackup\" (or \"use pgbackrest\"\n  or \"use barman\"). The job is \"be certain that this cluster is recoverably\n  backed up, and for any tool you may be using to do it, that you have the\n  same grasp of what the tool has done as if you had done it yourself.\"\n\n\nIn view of all that, I would think it perfectly reasonable to present\npg_basebackup as one convenient and included reference implementation\nof the irreducibly essential steps of an online base backup, which we\nseparately document.\n\nI don't think it is as reasonable to say, effectively, that you learn\nwhat the irreducibly essential steps of an online base backup are by\nreading the source of pg_basebackup, and then intuiting which of the\ndetails you find there are the essential ones and which are outgrowths\nof its particular design choices.\n\nRegards,\n-Chap\n\n\n[1] https://www.postgresql.org/message-id/20220221172306.GA3698472%40nathanxps13",
"msg_date": "Wed, 9 Mar 2022 14:28:36 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": true,
"msg_subject": "Document what is essential and undocumented in pg_basebackup"
},
{
"msg_contents": "Greetings,\n\n* Chapman Flack (chap@anastigmatix.net) wrote:\n> On 03/09/22 12:19, Stephen Frost wrote:\n> > Let's avoid hijacking [thread about other patch] [1]\n> > for an independent debate about what our documentation should or\n> > shouldn't include.\n> \n> Agreed. New thread here.\n\nThanks.\n\n> Stephen wrote:\n> > Documenting everything that pg_basebackup does to make sure that the\n> > backup is viable might be something to work on if someone is really\n> > excited about this, but it's not 'dead-simple' and it's darn close to\n> > the bare minimum,\n> \n> I wrote:\n> > if the claim is that an admin who relies on pg_basebackup is relying\n> > on essential things pg_basebackup does that have not been enumerated\n> > in our documentation yet, I would argue they should be.\n> \n> Magnus wrote:\n> > For the people who want to drive their backups from a shellscript and\n> > for some reason *don't* want to use pg_basebackup, we need to come up\n> > with a different API or a different set of tools. That is not a\n> > documentation task. That is a \"start from a list of which things\n> > pg_basebackup cannot do that are still simple, or that tools like\n> > pgbackrest cannot do if they're complicated\". And then design an API\n> > that's actually safe and easy to use *for that usecase*.\n> \n> I wrote:\n> > That might also be a good thing, but I don't see it as a substitute\n> > for documenting the present reality of what the irreducibly essential\n> > behaviors of pg_basebackup (or of third-party tools like pgbackrest)\n> > are, and why they are so.\n> \n> Stephen wrote:\n> > I disagree. If we provided a tool then we'd document that tool and how\n> > users can use it, not every single step that it does (see also:\n> > pg_basebackup).\n> \n> \n> I could grant, arguendo, that for most cases where we've \"provided a tool\"\n> that's enough, and still distinguish pg_basebackup from those. In no\n> particular order:\n> \n> - pg_basebackup comes late to the party. It appears in 9.1 as a tool that\n>   conveniently automates a process (performing an online base backup)\n>   that has already been documented since 8.0 six and a half years earlier.\n>   (While, yes, it streams the file contents over a newly-introduced\n>   protocol, I don't think anyone has called that one of its irreducibly\n>   essential behaviors, or claimed that any other way of reliably copying\n>   those contents during the backup window would be inherently flawed.)\n> \n> - By the release where pg_basebackup appears, anyone who is doing\n>   online backup and PITR is already using some other tooling (third-party\n>   or locally developed) to do so. There may be benefits and costs in\n>   migrating those procedures to pg_basebackup. If one of the benefits is\n>   \"your current procedures may be missing essential steps we built into\n>   pg_basebackup but left out of our documentation\" then that is important\n>   to know for an admin who is making that decision. Even better, knowing\n>   what those essential steps are will allow that admin to make an informed\n>   assessment of whether the existing procedures are broken or not.\n> \n> - Typical tools are easy for an admin to judge the fitness of.\n>   The tool does a thing, and you can tell right away if it did the thing\n>   you needed or not. pg_basebackup, like any backup tool, does a thing,\n>   and you don't find out if that was the thing you needed until later,\n>   when failure isn't an option. That's a less-typical kind of a tool,\n>   for which it's less ok to be a black box.\n> \n> - Ultimately, an admin's job isn't \"use pg_basebackup\" (or \"use pgbackrest\"\n>   or \"use barman\"). The job is \"be certain that this cluster is recoverably\n>   backed up, and for any tool you may be using to do it, that you have the\n>   same grasp of what the tool has done as if you had done it yourself.\"\n> \n> \n> In view of all that, I would think it perfectly reasonable to present\n> pg_basebackup as one convenient and included reference implementation\n> of the irreducibly essential steps of an online base backup, which we\n> separately document.\n\n... except that pg_basebackup isn't quite that, it just happens to do\nthe things that *it* needs to do to give some level of confidence that\nthe backup it took will be useable later.\n\n> I don't think it is as reasonable to say, effectively, that you learn\n> what the irreducibly essential steps of an online base backup are by\n> reading the source of pg_basebackup, and then intuiting which of the\n> details you find there are the essential ones and which are outgrowths\n> of its particular design choices.\n\nWhile reading the pg_basebackup source would be helpful to someone\ndeveloping a new backup tool for PG, it's not the only source you'd need\nto read- you also need to read the PG source for things like what return\ncodes from archive_command and restore_command mean to PG or how a\npromoted system finds a new timeline or what .partial or .backup files\nmean. Further, you'd need to understand that it's essential that all of\nthe files from the backup are fsync'd to disk along with the directories\nthat they're in (which is something you might glean from reading the\npg_basebackup source) as otherwise they might disappear if a crash\nhappened shortly after the backup was taken. Same for how\narchive_command has to handle that same concern for WAL files. Not to\nmention the considerations around how to deal with page-level checksums\nwhen reading from an actively-being-modified PG data directory.\n\nDocumenting absolutely everything needed to write a good backup tool for\nPG strikes me as unlikely to end up actually being useful. Those who\nwrite backup tools for PG are reading the source for PG and likely\nwouldn't find such documentation helpful as not everything needed would\nbe included even if we did try to document everything, making such an\neffort a waste of time. The idea that we could document everything\nneeded and that someone could then write a simple shell script or even a\nsimple perl script (as pgbackrest started out as ...) from that\ndocumentation that did everything necessary is a fiction that we need to\naccept as such and move on from.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 9 Mar 2022 14:46:00 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Document what is essential and undocumented in pg_basebackup"
},
{
"msg_contents": "On 3/9/22 13:46, Stephen Frost wrote:\n> \n>> I don't think it is as reasonable to say, effectively, that you learn\n>> what the irreducibly essential steps of an online base backup are by\n>> reading the source of pg_basebackup, and then intuiting which of the\n>> details you find there are the essential ones and which are outgrowths\n>> of its particular design choices.\n> \n> Documenting absolutely everything needed to write a good backup tool for\n> PG strikes me as unlikely to end up actually being useful. Those who\n> write backup tools for PG are reading the source for PG and likely\n> wouldn't find such documentation helpful as not everything needed would\n> be included even if we did try to document everything, making such an\n> effort a waste of time. The idea that we could document everything\n> needed and that someone could then write a simple shell script or even a\n> simple perl script (as pgbackrest started out as ...) from that\n> documentation that did everything necessary is a fiction that we need to\n> accept as such and move on from.\n\nI would argue that the \"Making a Non-Exclusive Low-Level Backup\" and \n\"Backing Up the Data Directory\" sections do contain the minimal \ninformation you need to create a valid backup. I (and others) work hard \nto keep these sections up to date.\n\nArguably it is a bit confusing that \"Backing Up the Data Directory\" is a \nseparate section, but that's because we have two backup methods and it \nneeds to be kept separate. But since it is linked in the appropriate \npart of \"Making a Non-Exclusive Low-Level Backup\" I don't think it is \ntoo big a deal.\n\nIf you see something missing then let's add it. But I agree with Stephen \nthat it is not a good idea to include a simplistic pseudo-solution to a \nproblem that is anything but simple.\n\nRegards,\n-David\n\n\n",
"msg_date": "Wed, 9 Mar 2022 14:39:37 -0600",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Document what is essential and undocumented in pg_basebackup"
}
] |
[
{
"msg_contents": "dshash: Add sequential scan support.\n\nAdd ability to scan all entries sequentially to dshash. The interface is\nsimilar but a bit different both from that of dynahash and simple dshash\nsearch functions. The most significant differences is that dshash's interfac\nalways needs a call to dshash_seq_term when scan ends. Another is\nlocking. Dshash holds partition lock when returning an entry,\ndshash_seq_next() also holds lock when returning an entry but callers\nshouldn't release it, since the lock is essential to continue a scan. The\nseqscan interface allows entry deletion while a scan is in progress using\ndshash_delete_current().\n\nReviewed-By: Andres Freund <andres@anarazel.de>\nAuthor: Kyotaro Horiguchi <horikyoga.ntt@gmail.com>\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/352d297dc74feb0bf0dcb255cc0dfaaed2b96c1e\n\nModified Files\n--------------\nsrc/backend/lib/dshash.c | 163 ++++++++++++++++++++++++++++++++++++++-\nsrc/include/lib/dshash.h | 23 ++++++\nsrc/tools/pgindent/typedefs.list | 1 +\n3 files changed, 186 insertions(+), 1 deletion(-)",
"msg_date": "Fri, 11 Mar 2022 01:02:51 +0000",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "pgsql: dshash: Add sequential scan support."
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> dshash: Add sequential scan support.\n> Add ability to scan all entries sequentially to dshash. The interface is\n> similar but a bit different both from that of dynahash and simple dshash\n> search functions. The most significant differences is that dshash's interfac\n> always needs a call to dshash_seq_term when scan ends.\n\nUmm ... what about error recovery? Or have you just cemented the\nproposition that long-lived dshashes are unsafe?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 10 Mar 2022 20:09:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: dshash: Add sequential scan support."
},
{
"msg_contents": "Hi,\n\nOn 2022-03-10 20:09:56 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > dshash: Add sequential scan support.\n> > Add ability to scan all entries sequentially to dshash. The interface is\n> > similar but a bit different both from that of dynahash and simple dshash\n> > search functions. The most significant differences is that dshash's interfac\n> > always needs a call to dshash_seq_term when scan ends.\n> \n> Umm ... what about error recovery? Or have you just cemented the\n> proposition that long-lived dshashes are unsafe?\n\nI don't think this commit made it worse. dshash_seq_term() releases an lwlock\n(which will be released in case of an error) and unsets\nhash_table->find_[exclusively_]locked. The latter weren't introduced by this\npatch, and are also set by dshash_find().\n\nI agree that ->find_[exclusively_]locked are problematic from an error\nrecovery perspective.\n\nIt's per-backend state at least and just used for assertions. We could remove\nit. Or stop checking it in places where it could be set wrongly: dshash_find()\nand dshash_detach() couldn't check anymore, but the rest of the assertions\nwould still be valid afaics?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 10 Mar 2022 17:27:12 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: dshash: Add sequential scan support."
},
{
"msg_contents": "[Re-directing to -hackers]\n\nOn Fri, Mar 11, 2022 at 2:27 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-03-10 20:09:56 -0500, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > dshash: Add sequential scan support.\n> > > Add ability to scan all entries sequentially to dshash. The interface is\n> > > similar but a bit different both from that of dynahash and simple dshash\n> > > search functions. The most significant differences is that dshash's interfac\n> > > always needs a call to dshash_seq_term when scan ends.\n> >\n> > Umm ... what about error recovery? Or have you just cemented the\n> > proposition that long-lived dshashes are unsafe?\n>\n> I don't think this commit made it worse. dshash_seq_term() releases an lwlock\n> (which will be released in case of an error) and unsets\n> hash_table->find_[exclusively_]locked. The latter weren't introduced by this\n> patch, and are also set by dshash_find().\n>\n> I agree that ->find_[exclusively_]locked are problematic from an error\n> recovery perspective.\n\nRight, as seen in the build farm at [1]. Also reproducible with something like:\n\n@@ -269,6 +269,14 @@ dsm_impl_posix(dsm_op op, dsm_handle handle, Size\nrequest_size,\n return false;\n }\n\n+ /* XXX random fault injection */\n+ if (op == DSM_OP_ATTACH && random() < RAND_MAX / 8)\n+ {\n+ close(fd);\n+ elog(ERROR, \"chaos\");\n+ return false;\n+ }\n+\n\nI must have thought that it was easy and practical to write no-throw\nstraight-line code and be sure to reach dshash_release_lock(), but I\nconcede that it was a bad idea: even dsa_get_address() can throw*, and\nyou're often likely to need to call that while accessing dshash\nelements. For example, in lookup_rowtype_tupdesc_internal(), there is\na sequence dshash_find(), ..., dsa_get_address(), ...,\ndshash_release_lock(), and I must have considered the range of code\nbetween find and release to be no-throw, but now I know that it is\nnot.\n\n> It's per-backend state at least and just used for assertions. We could remove\n> it. Or stop checking it in places where it could be set wrongly: dshash_find()\n> and dshash_detach() couldn't check anymore, but the rest of the assertions\n> would still be valid afaics?\n\nYeah, it's all for assertions... let's just remove it. Those\nassertions were useful to me at some stage in development but won't\nhold as well as I thought, at least without widespread PG_FINALLY(),\nwhich wouldn't be nice.\n\n*dsa_get_address() might need to adjust the memory map with system\ncalls, which might fail. If you think of DSA as not only an allocator\nbut also a poor man's user level virtual memory scheme to tide us over\nuntil we get threads, then this is a pretty low level kind of\nshould-not-happen failure that is analogous on some level to SIGBUS or\nSIGSEGV or something like that, and we should PANIC. Then we could\nclaim that dsa_get_address() is no-throw. At least, that was one\nargument I had with myself while investigating that strange Solaris\nshm_open() failure, but ... I lost the argument. It's quite an\nextreme position to take just to support these assertions, which are\nof pretty limited value.\n\n[1] https://www.postgresql.org/message-id/20220701232009.jcwxpl45bptaxv5n%40alap3.anarazel.de",
"msg_date": "Mon, 4 Jul 2022 14:55:43 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: dshash: Add sequential scan support."
},
{
"msg_contents": "At Mon, 4 Jul 2022 14:55:43 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in \n> [Re-directing to -hackers]\n> \n> On Fri, Mar 11, 2022 at 2:27 PM Andres Freund <andres@anarazel.de> wrote:\n> > It's per-backend state at least and just used for assertions. We could remove\n> > it. Or stop checking it in places where it could be set wrongly: dshash_find()\n> > and dshash_detach() couldn't check anymore, but the rest of the assertions\n> > would still be valid afaics?\n> \n> Yeah, it's all for assertions... let's just remove it. Those\n> assertions were useful to me at some stage in development but won't\n> hold as well as I thought, at least without widespread PG_FINALLY(),\n> which wouldn't be nice.\n> \n> *dsa_get_address() might need to adjust the memory map with system\n> calls, which might fail. If you think of DSA as not only an allocator\n> but also a poor man's user level virtual memory scheme to tide us over\n> until we get threads, then this is a pretty low level kind of\n> should-not-happen failure that is analogous on some level to SIGBUS or\n> SIGSEGV or something like that, and we should PANIC. Then we could\n> claim that dsa_get_address() is no-throw. At least, that was one\n> argument I had with myself while investigating that strange Solaris\n> shm_open() failure, but ... I lost the argument. It's quite an\n> extreme position to take just to support these assertions, which are\n> of pretty limited value.\n> \n> [1] https://www.postgresql.org/message-id/20220701232009.jcwxpl45bptaxv5n%40alap3.anarazel.de\n\nFWIW, the discussion above is convincing to me and the patch looks good.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 04 Jul 2022 17:31:46 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: dshash: Add sequential scan support."
},
{
"msg_contents": "On Sun, Jul 3, 2022 at 7:56 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> [Re-directing to -hackers]\n>\n> On Fri, Mar 11, 2022 at 2:27 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-03-10 20:09:56 -0500, Tom Lane wrote:\n> > > Andres Freund <andres@anarazel.de> writes:\n> > > > dshash: Add sequential scan support.\n> > > > Add ability to scan all entries sequentially to dshash. The\n> interface is\n> > > > similar but a bit different both from that of dynahash and simple\n> dshash\n> > > > search functions. The most significant differences is that dshash's\n> interfac\n> > > > always needs a call to dshash_seq_term when scan ends.\n> > >\n> > > Umm ... what about error recovery? Or have you just cemented the\n> > > proposition that long-lived dshashes are unsafe?\n> >\n> > I don't think this commit made it worse. dshash_seq_term() releases an\n> lwlock\n> > (which will be released in case of an error) and unsets\n> > hash_table->find_[exclusively_]locked. The latter weren't introduced by\n> this\n> > patch, and are also set by dshash_find().\n> >\n> > I agree that ->find_[exclusively_]locked are problematic from an error\n> > recovery perspective.\n>\n> Right, as seen in the build farm at [1]. Also reproducible with something\n> like:\n>\n> @@ -269,6 +269,14 @@ dsm_impl_posix(dsm_op op, dsm_handle handle, Size\n> request_size,\n> return false;\n> }\n>\n> + /* XXX random fault injection */\n> + if (op == DSM_OP_ATTACH && random() < RAND_MAX / 8)\n> + {\n> + close(fd);\n> + elog(ERROR, \"chaos\");\n> + return false;\n> + }\n> +\n>\n> I must have thought that it was easy and practical to write no-throw\n> straight-line code and be sure to reach dshash_release_lock(), but I\n> concede that it was a bad idea: even dsa_get_address() can throw*, and\n> you're often likely to need to call that while accessing dshash\n> elements. For example, in lookup_rowtype_tupdesc_internal(), there is\n> a sequence dshash_find(), ..., dsa_get_address(), ...,\n> dshash_release_lock(), and I must have considered the range of code\n> between find and release to be no-throw, but now I know that it is\n> not.\n>\n> > It's per-backend state at least and just used for assertions. We could\n> remove\n> > it. Or stop checking it in places where it could be set wrongly:\n> dshash_find()\n> > and dshash_detach() couldn't check anymore, but the rest of the\n> assertions\n> > would still be valid afaics?\n>\n> Yeah, it's all for assertions... let's just remove it. Those\n> assertions were useful to me at some stage in development but won't\n> hold as well as I thought, at least without widespread PG_FINALLY(),\n> which wouldn't be nice.\n>\n> *dsa_get_address() might need to adjust the memory map with system\n> calls, which might fail. If you think of DSA as not only an allocator\n> but also a poor man's user level virtual memory scheme to tide us over\n> until we get threads, then this is a pretty low level kind of\n> should-not-happen failure that is analogous on some level to SIGBUS or\n> SIGSEGV or something like that, and we should PANIC. Then we could\n> claim that dsa_get_address() is no-throw. At least, that was one\n> argument I had with myself while investigating that strange Solaris\n> shm_open() failure, but ... I lost the argument. It's quite an\n> extreme position to take just to support these assertions, which are\n> of pretty limited value.\n>\n> [1]\n> https://www.postgresql.org/message-id/20220701232009.jcwxpl45bptaxv5n%40alap3.anarazel.de\n\nHi,\nIn the description,\n\n`new shared memory stats system in 15`\n\nIt would be clearer to add `release` before `15`.\n\nCheers",
"msg_date": "Mon, 4 Jul 2022 03:46:11 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: dshash: Add sequential scan support."
},
{
"msg_contents": "Hi,\n\nOn 2022-07-04 14:55:43 +1200, Thomas Munro wrote:\n> Right, as seen in the build farm at [1]. Also reproducible with something like:\n>\n> @@ -269,6 +269,14 @@ dsm_impl_posix(dsm_op op, dsm_handle handle, Size\n> request_size,\n> return false;\n> }\n>\n> + /* XXX random fault injection */\n> + if (op == DSM_OP_ATTACH && random() < RAND_MAX / 8)\n> + {\n> + close(fd);\n> + elog(ERROR, \"chaos\");\n> + return false;\n> + }\n> +\n>\n> I must have thought that it was easy and practical to write no-throw\n> straight-line code and be sure to reach dshash_release_lock(), but I\n> concede that it was a bad idea: even dsa_get_address() can throw*, and\n> you're often likely to need to call that while accessing dshash\n> elements. For example, in lookup_rowtype_tupdesc_internal(), there is\n> a sequence dshash_find(), ..., dsa_get_address(), ...,\n> dshash_release_lock(), and I must have considered the range of code\n> between find and release to be no-throw, but now I know that it is\n> not.\n\nYea - I'd go as far as saying that it's almost never feasible.\n\n\n> > It's per-backend state at least and just used for assertions. We could remove\n> > it. Or stop checking it in places where it could be set wrongly: dshash_find()\n> > and dshash_detach() couldn't check anymore, but the rest of the assertions\n> > would still be valid afaics?\n>\n> Yeah, it's all for assertions... let's just remove it. Those\n> assertions were useful to me at some stage in development but won't\n> hold as well as I thought, at least without widespread PG_FINALLY(),\n> which wouldn't be nice.\n\nHm. I'd be inclined to at least add a few more\nAssert(!LWLockHeldByMe[InMode]()) style assertions. E.g. to\ndshash_find_or_insert().\n\n\n\n> @@ -572,13 +552,8 @@ dshash_release_lock(dshash_table *hash_table, void *entry)\n> \tsize_t\t\tpartition_index = PARTITION_FOR_HASH(item->hash);\n>\n> \tAssert(hash_table->control->magic == DSHASH_MAGIC);\n> -\tAssert(hash_table->find_locked);\n> -\tAssert(LWLockHeldByMeInMode(PARTITION_LOCK(hash_table, partition_index),\n> -\t\t\t\t\t\t\t\thash_table->find_exclusively_locked\n> -\t\t\t\t\t\t\t\t? LW_EXCLUSIVE : LW_SHARED));\n> +\tAssert(LWLockHeldByMe(PARTITION_LOCK(hash_table, partition_index)));\n>\n> -\thash_table->find_locked = false;\n> -\thash_table->find_exclusively_locked = false;\n> \tLWLockRelease(PARTITION_LOCK(hash_table, partition_index));\n> }\n\nThis LWLockHeldByMe() doesn't add much - the LWLockRelease() will error out if\nwe don't hold the lock.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 4 Jul 2022 13:54:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: dshash: Add sequential scan support."
},
{
"msg_contents": "On Tue, Jul 5, 2022 at 8:54 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-07-04 14:55:43 +1200, Thomas Munro wrote:\n> > > It's per-backend state at least and just used for assertions. We could remove\n> > > it. Or stop checking it in places where it could be set wrongly: dshash_find()\n> > > and dshash_detach() couldn't check anymore, but the rest of the assertions\n> > > would still be valid afaics?\n> >\n> > Yeah, it's all for assertions... let's just remove it. Those\n> > assertions were useful to me at some stage in development but won't\n> > hold as well as I thought, at least without widespread PG_FINALLY(),\n> > which wouldn't be nice.\n>\n> Hm. I'd be inclined to at least add a few more\n> Assert(!LWLockHeldByMe[InMode]()) style assertions. E.g. to\n> dshash_find_or_insert().\n\nYeah, I was wondering about that, but it needs to check the whole 128\nelement lock array. Hmm, yeah that seems OK for assertion builds.\nSince there were 6 places with I-hold-no-lock assertions, I shoved the\nloop into a function so I could do:\n\n- Assert(!status->hash_table->find_locked);\n+ assert_no_lock_held_by_me(hash_table);\n\n> > + Assert(LWLockHeldByMe(PARTITION_LOCK(hash_table, partition_index)));\n> >\n> > - hash_table->find_locked = false;\n> > - hash_table->find_exclusively_locked = false;\n> > LWLockRelease(PARTITION_LOCK(hash_table, partition_index));\n\n> This LWLockHeldByMe() doesn't add much - the LWLockRelease() will error out if\n> we don't hold the lock.\n\nDuh. Removed.",
"msg_date": "Tue, 5 Jul 2022 11:20:54 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: dshash: Add sequential scan support."
},
{
"msg_contents": "Hi,\n\nOn 2022-07-05 11:20:54 +1200, Thomas Munro wrote:\n> On Tue, Jul 5, 2022 at 8:54 AM Andres Freund <andres@anarazel.de> wrote:\n> > > Yeah, it's all for assertions... let's just remove it. Those\n> > > assertions were useful to me at some stage in development but won't\n> > > hold as well as I thought, at least without widespread PG_FINALLY(),\n> > > which wouldn't be nice.\n> >\n> > Hm. I'd be inclined to at least add a few more\n> > Assert(!LWLockHeldByMe[InMode]()) style assertions. E.g. to\n> > dshash_find_or_insert().\n> \n> Yeah, I was wondering about that, but it needs to check the whole 128\n> element lock array.\n\nI think it'd be ok to just check the current partition - yes, it'd not catch\ncases where we're still holding a lock on another partition, but that's imo\nnot too bad?\n\n\n> Hmm, yeah that seems OK for assertion builds.\n> Since there were 6 places with I-hold-no-lock assertions, I shoved the\n> loop into a function so I could do:\n> \n> - Assert(!status->hash_table->find_locked);\n> + assert_no_lock_held_by_me(hash_table);\n\nI am a *bit* wary about the costs of that, even in assert builds - each of the\npartition checks in the loop will in turn need to iterate through\nheld_lwlocks. But I guess we can also just later weaken them if it turns out\nto be a problem.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 4 Jul 2022 16:25:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: dshash: Add sequential scan support."
},
{
"msg_contents": "On Tue, Jul 5, 2022 at 11:25 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-07-05 11:20:54 +1200, Thomas Munro wrote:\n> > Since there were 6 places with I-hold-no-lock assertions, I shoved the\n> > loop into a function so I could do:\n> >\n> > - Assert(!status->hash_table->find_locked);\n> > + assert_no_lock_held_by_me(hash_table);\n>\n> I am a *bit* wary about the costs of that, even in assert builds - each of the\n> partition checks in the loop will in turn need to iterate through\n> held_lwlocks. But I guess we can also just later weaken them if it turns out\n> to be a problem.\n\nMaybe we should add assertion support for arrays of locks, so we don't\nneed two levels of loop? Something like the attached?",
"msg_date": "Tue, 5 Jul 2022 15:21:40 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: dshash: Add sequential scan support."
},
{
"msg_contents": "On Tue, Jul 5, 2022 at 3:21 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Jul 5, 2022 at 11:25 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-07-05 11:20:54 +1200, Thomas Munro wrote:\n> > > Since there were 6 places with I-hold-no-lock assertions, I shoved the\n> > > loop into a function so I could do:\n> > >\n> > > - Assert(!status->hash_table->find_locked);\n> > > + assert_no_lock_held_by_me(hash_table);\n> >\n> > I am a *bit* wary about the costs of that, even in assert builds - each of the\n> > partition checks in the loop will in turn need to iterate through\n> > held_lwlocks. But I guess we can also just later weaken them if it turns out\n> > to be a problem.\n>\n> Maybe we should add assertion support for arrays of locks, so we don't\n> need two levels of loop? Something like the attached?\n\nPushed.\n\n\n",
"msg_date": "Mon, 11 Jul 2022 16:50:25 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: dshash: Add sequential scan support."
}
] |
[
{
"msg_contents": ">Looks the same as v5 for me, that applies the same consistency rules\n>everywhere in the docs. So applied this one.\n\nThank you, Michael\n\n\n",
"msg_date": "Fri, 11 Mar 2022 06:24:15 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>",
"msg_from_op": true,
"msg_subject": "Re: Changing \"Hot Standby\" to \"hot standby\""
}
] |
[
{
"msg_contents": "Hi,\n\nI have observed that the table naming conventions used in\n'progress-reporting.html' are not consistent across different\nsections. For some cases \"Phases\" (Table 28.37. CREATE INDEX Phases)\nis used and for some cases \"phases\" (Table 28.35. ANALYZE phases) is\nused. I have attached a patch to correct this.\n\nThanks and Regards,\nNitin Jahav",
"msg_date": "Fri, 11 Mar 2022 16:07:32 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix typo in progress reporting doc"
},
{
"msg_contents": "On Fri, Mar 11, 2022 at 04:07:32PM +0530, Nitin Jadhav wrote:\n> Hi,\n> \n> I have observed that the table naming conventions used in\n> 'progress-reporting.html' are not consistent across different\n> sections. For some cases \"Phases\" (Table 28.37. CREATE INDEX Phases)\n> is used and for some cases \"phases\" (Table 28.35. ANALYZE phases) is\n> used. I have attached a patch to correct this.\n\nPatch applied to PG 13, where it first appeared, and all later releases.\nThanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 14 Jul 2022 20:01:41 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo in progress reporting doc"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 17434\nLogged by: Yugo Nagata\nEmail address: nagata@sraoss.co.jp\nPostgreSQL version: 14.2\nOperating system: Ubuntu\nDescription: \n\nCREATE/DROP DATABASE can be executed in the same transaction with other\ncommands when we use pipeline mode in pgbench or libpq API. If the\ntransaction aborts, this causes an inconsistency between the system catalog\nand base directory.\r\n\r\nHere is an example using the pgbench /startpipeline meta command.\r\n\r\n----------------------------------------------------\r\n(1) Confirm that there are four databases from psql and directories in\nbase.\r\n\r\n$ psql -l\r\n List of databases\r\n Name | Owner | Encoding | Collate | Ctype | Access\nprivileges \r\n-----------+--------+----------+-------------+-------------+-----------------------\r\n postgres | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | \r\n template0 | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | =c/\"yugo-n\" \n +\r\n | | | | |\n\"yugo-n\"=CTc/\"yugo-n\"\r\n template1 | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | =c/\"yugo-n\" \n +\r\n | | | | |\n\"yugo-n\"=CTc/\"yugo-n\"\r\n test0 | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | \r\n(4 rows)\r\n\r\n$ ls data/base/\r\n1 13014 13015 16409 pgsql_tmp\r\n\r\n(2) Execute CREATE DATABASE in a transaction, and the transaction fails.\r\n\r\n$ cat pipeline_createdb.sql \r\n\\startpipeline\r\ncreate database test;\r\nselect 1/0;\r\n\\endpipeline\r\n\r\n$ pgbench -t 1 -f pipeline_createdb.sql -M extended\r\npgbench (14.2)\r\nstarting vacuum...end.\r\npgbench: error: client 0 script 0 aborted in command 3 query 0: \r\n....\r\n\r\n(3) There are still four databases but a new directory was created in\nbase.\r\n\r\n$ psql -l\r\n List of databases\r\n Name | Owner | Encoding | Collate | Ctype | Access\nprivileges \r\n-----------+--------+----------+-------------+-------------+-----------------------\r\n postgres | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | 
\r\n template0 | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | =c/\"yugo-n\" \n +\r\n | | | | |\n\"yugo-n\"=CTc/\"yugo-n\"\r\n template1 | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | =c/\"yugo-n\" \n +\r\n | | | | |\n\"yugo-n\"=CTc/\"yugo-n\"\r\n test0 | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | \r\n(4 rows)\r\n\r\n$ ls data/base/\r\n1 13014 13015 16409 16411 pgsql_tmp\r\n\r\n(4) Next, execute DROP DATABASE in a transaction, and the transaction\nfails.\r\n\r\n$ cat pipeline_dropdb.sql \r\n\\startpipeline\r\ndrop database test0;\r\nselect 1/0;\r\n\\endpipeline\r\n\r\n$ pgbench -t 1 -f pipeline_dropdb.sql -M extended\r\npgbench (14.2)\r\nstarting vacuum...end.\r\npgbench: error: client 0 script 0 aborted in command 3 query 0:\r\n...\r\n\r\n(5) There are still four databases but the corresponding directory was\ndeleted in base.\r\n\r\n$ psql -l\r\n List of databases\r\n Name | Owner | Encoding | Collate | Ctype | Access\nprivileges \r\n-----------+--------+----------+-------------+-------------+-----------------------\r\n postgres | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | \r\n template0 | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | =c/\"yugo-n\" \n +\r\n | | | | |\n\"yugo-n\"=CTc/\"yugo-n\"\r\n template1 | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | =c/\"yugo-n\" \n +\r\n | | | | |\n\"yugo-n\"=CTc/\"yugo-n\"\r\n test0 | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | \r\n(4 rows)\r\n\r\n$ ls data/base/\r\n1 13014 13015 16411 pgsql_tmp\r\n\r\n(6) We cannot connect the database \"test0\".\r\n\r\n$ psql test0\r\npsql: error: connection to server on socket \"/tmp/.s.PGSQL.25435\" failed:\nFATAL: database \"test0\" does not exist\r\nDETAIL: The database subdirectory \"base/16409\" is missing.\r\n----------------------------------------------------\r\n\r\nDetailed discussions are here;\r\nhttps://www.postgresql.org/message-id/20220301151704.76adaaefa8ed5d6c12ac3079@sraoss.co.jp",
"msg_date": "Fri, 11 Mar 2022 11:11:54 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "\nDid we make any decision on this?\n\n---------------------------------------------------------------------------\n\nOn Fri, Mar 11, 2022 at 11:11:54AM +0000, PG Bug reporting form wrote:\n> The following bug has been logged on the website:\n> \n> Bug reference: 17434\n> Logged by: Yugo Nagata\n> Email address: nagata@sraoss.co.jp\n> PostgreSQL version: 14.2\n> Operating system: Ubuntu\n> Description: \n> \n> CREATE/DROP DATABASE can be executed in the same transaction with other\n> commands when we use pipeline mode in pgbench or libpq API. If the\n> transaction aborts, this causes an inconsistency between the system catalog\n> and base directory.\n> \n> Here is an example using the pgbench /startpipeline meta command.\n> \n> ----------------------------------------------------\n> (1) Confirm that there are four databases from psql and directories in\n> base.\n> \n> $ psql -l\n> List of databases\n> Name | Owner | Encoding | Collate | Ctype | Access\n> privileges \n> -----------+--------+----------+-------------+-------------+-----------------------\n> postgres | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | \n> template0 | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | =c/\"yugo-n\" \n> +\n> | | | | |\n> \"yugo-n\"=CTc/\"yugo-n\"\n> template1 | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | =c/\"yugo-n\" \n> +\n> | | | | |\n> \"yugo-n\"=CTc/\"yugo-n\"\n> test0 | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | \n> (4 rows)\n> \n> $ ls data/base/\n> 1 13014 13015 16409 pgsql_tmp\n> \n> (2) Execute CREATE DATABASE in a transaction, and the transaction fails.\n> \n> $ cat pipeline_createdb.sql \n> \\startpipeline\n> create database test;\n> select 1/0;\n> \\endpipeline\n> \n> $ pgbench -t 1 -f pipeline_createdb.sql -M extended\n> pgbench (14.2)\n> starting vacuum...end.\n> pgbench: error: client 0 script 0 aborted in command 3 query 0: \n> ....\n> \n> (3) There are still four databases but a new directory was created in\n> base.\n> \n> $ psql -l\n> List of 
databases\n> Name | Owner | Encoding | Collate | Ctype | Access\n> privileges \n> -----------+--------+----------+-------------+-------------+-----------------------\n> postgres | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | \n> template0 | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | =c/\"yugo-n\" \n> +\n> | | | | |\n> \"yugo-n\"=CTc/\"yugo-n\"\n> template1 | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | =c/\"yugo-n\" \n> +\n> | | | | |\n> \"yugo-n\"=CTc/\"yugo-n\"\n> test0 | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | \n> (4 rows)\n> \n> $ ls data/base/\n> 1 13014 13015 16409 16411 pgsql_tmp\n> \n> (4) Next, execute DROP DATABASE in a transaction, and the transaction\n> fails.\n> \n> $ cat pipeline_dropdb.sql \n> \\startpipeline\n> drop database test0;\n> select 1/0;\n> \\endpipeline\n> \n> $ pgbench -t 1 -f pipeline_dropdb.sql -M extended\n> pgbench (14.2)\n> starting vacuum...end.\n> pgbench: error: client 0 script 0 aborted in command 3 query 0:\n> ...\n> \n> (5) There are still four databases but the corresponding directory was\n> deleted in base.\n> \n> $ psql -l\n> List of databases\n> Name | Owner | Encoding | Collate | Ctype | Access\n> privileges \n> -----------+--------+----------+-------------+-------------+-----------------------\n> postgres | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | \n> template0 | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | =c/\"yugo-n\" \n> +\n> | | | | |\n> \"yugo-n\"=CTc/\"yugo-n\"\n> template1 | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | =c/\"yugo-n\" \n> +\n> | | | | |\n> \"yugo-n\"=CTc/\"yugo-n\"\n> test0 | yugo-n | UTF8 | ja_JP.UTF-8 | ja_JP.UTF-8 | \n> (4 rows)\n> \n> $ ls data/base/\n> 1 13014 13015 16411 pgsql_tmp\n> \n> (6) We cannot connect the database \"test0\".\n> \n> $ psql test0\n> psql: error: connection to server on socket \"/tmp/.s.PGSQL.25435\" failed:\n> FATAL: database \"test0\" does not exist\n> DETAIL: The database subdirectory \"base/16409\" is missing.\n> 
----------------------------------------------------\n> \n> Detailed discussions are here;\n> https://www.postgresql.org/message-id/20220301151704.76adaaefa8ed5d6c12ac3079@sraoss.co.jp\n> \n\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 14 Jul 2022 19:49:32 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> Did we make any decision on this?\n\nHmm, that one seems to have slipped past me. I agree it doesn't\nlook good. But why isn't the PreventInTransactionBlock() check\nblocking the command from even starting?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Jul 2022 20:36:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "On Thu, Jul 14, 2022 at 5:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Bruce Momjian <bruce@momjian.us> writes:\n> > Did we make any decision on this?\n>\n> Hmm, that one seems to have slipped past me. I agree it doesn't\n> look good. But why isn't the PreventInTransactionBlock() check\n> blocking the command from even starting?\n>\n>\nI assume because pgbench never sends a BEGIN command so the create database\nsees itself in an implicit transaction and happily goes about its business,\nexpecting the system to commit its work immediately after it says it is\ndone. But that never happens, instead the next command comes along and\ncrashes the implicit transaction it is now sharing with the create database\ncommand. Create database understands how to rollback if it is the one that\ncauses the failure but isn't designed to operate in a situation where it\nhas to rollback because of someone else. That isn't how implicit\ntransactions are supposed to work, whether in the middle of a pipeline or\notherwise. Or at least that is my, and apparently CREATE DATABASE's,\nunderstanding of implicit transactions: one top-level command only.\n\nSlight tangent, but while I'm trying to get my own head around this I just\nwant to point out that the first sentence of the following doesn't make\nsense given the above understanding of implicit transactions, and the\nparagraph as a whole is tough to comprehend.\n\nIf the pipeline used an implicit transaction, then operations that have\nalready executed are rolled back and operations that were queued to follow\nthe failed operation are skipped entirely. The same behavior holds if the\npipeline starts and commits a single explicit transaction (i.e. the first\nstatement is BEGIN and the last is COMMIT) except that the session remains\nin an aborted transaction state at the end of the pipeline. 
If a pipeline\ncontains multiple explicit transactions, all transactions that committed\nprior to the error remain committed, the currently in-progress transaction\nis aborted, and all subsequent operations are skipped completely, including\nsubsequent transactions. If a pipeline synchronization point occurs with an\nexplicit transaction block in aborted state, the next pipeline will become\naborted immediately unless the next command puts the transaction in normal\nmode with ROLLBACK.\n\nhttps://www.postgresql.org/docs/current/libpq-pipeline-mode.html#LIBPQ-PIPELINE-USING\n\nI don't know what the answer is here but I don't think \"tell the user not\nto do that\" is appropriate.\n\nDavid J.",
"msg_date": "Thu, 14 Jul 2022 19:14:33 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Thu, Jul 14, 2022 at 5:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hmm, that one seems to have slipped past me. I agree it doesn't\n>> look good. But why isn't the PreventInTransactionBlock() check\n>> blocking the command from even starting?\n\n> I assume because pgbench never sends a BEGIN command so the create database\n> sees itself in an implicit transaction and happily goes about its business,\n> expecting the system to commit its work immediately after it says it is\n> done.\n\nYeah. Upon inspection, the fundamental problem here is that in extended\nquery protocol we typically don't issue finish_xact_command() until we\nget a Sync message. So even though everything looks kosher when\nPreventInTransactionBlock() runs, the client can send another statement\nwhich will be executed in the same transaction, risking trouble.\n\nHere's a draft patch to fix this. We basically just need to force\nfinish_xact_command() in the same way as we do for transaction control\nstatements. I considered using the same technology as the code uses\nfor transaction control --- that is, statically check for the types of\nstatements that are trouble --- but after reviewing the set of callers\nof PreventInTransactionBlock() I gave that up as unmaintainable. So\nwhat this does is make PreventInTransactionBlock() set a flag to be\nchecked later, back in exec_execute_message. I was initially going\nto make that be a new boolean global, but I happened to notice the\nMyXactFlags variable which seems entirely suited to this use-case.\n\nOne thing that I'm dithering over is whether to add a check of the\nnew flag in exec_simple_query. As things currently stand that would\nbe redundant, but it seems like doing things the same way in both\nof those functions might be more future-proof and understandable.\n(Note the long para I added to justify not doing it ;-))\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 15 Jul 2022 17:06:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "On Fri, Jul 15, 2022 at 2:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > On Thu, Jul 14, 2022 at 5:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Hmm, that one seems to have slipped past me. I agree it doesn't\n> >> look good. But why isn't the PreventInTransactionBlock() check\n> >> blocking the command from even starting?\n>\n> > I assume because pgbench never sends a BEGIN command so the create\n> database\n> > sees itself in an implicit transaction and happily goes about its\n> business,\n> > expecting the system to commit its work immediately after it says it is\n> > done.\n>\n> Yeah. Upon inspection, the fundamental problem here is that in extended\n> query protocol we typically don't issue finish_xact_command() until we\n> get a Sync message. So even though everything looks kosher when\n> PreventInTransactionBlock() runs, the client can send another statement\n> which will be executed in the same transaction, risking trouble.\n\n\n> Here's a draft patch to fix this. We basically just need to force\n> finish_xact_command() in the same way as we do for transaction control\n> statements. I considered using the same technology as the code uses\n> for transaction control --- that is, statically check for the types of\n> statements that are trouble --- but after reviewing the set of callers\n> of PreventInTransactionBlock() I gave that up as unmaintainable.\n\n\nThis seems like too narrow a fix though. The fact that a sync message is\nthe thing causing the commit of the implicit transaction in the extended\nquery protocol has been exposed as a latent bug in the system by the\nintroduction of the Pipeline functionality in libpq that relies on the\n\"should\" in message protocol's:\n\n\"At completion of each series of extended-query messages, the frontend\nshould issue a Sync message. 
This parameterless message causes the backend\nto close the current transaction if it's not inside a BEGIN/COMMIT\ntransaction block (“close” meaning to commit if no error, or roll back if\nerror).\" [1]\n\nHowever, the implicit promise of the extended query protocol, which only\nallows one command to be executed at a time, is that each command, no\nmatter whether it must execute \"outside of a transaction\", that executes in\nthe implicit transaction block will commit at the end of the command.\n\nI don't see needing to update simple_query_exec to recognize this flag, if\nit survives, so long as we describe the flag as an implementation detail\nrelated to the extended query protocol promise to commit implicit\ntransactions regardless of when the sync command arrives.\n\nPlus, the simple query protocol doesn't have the same one command per\ntransaction promise. Any attempts at equivalency between the two really\ndoesn't have a strong foundation to work from. I could see that code\ncomment you wrote being part of the commit message for why\nexec_simple_query was not touched but I don't find any particular value in\nhaving it remain as presented. If anything, a comment like that would be\nREADME scoped describing the differences between the simply and extended\nprotocol.\n\nDavid J.\n\n[1]\nhttps://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY",
"msg_date": "Mon, 18 Jul 2022 12:55:48 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Fri, Jul 15, 2022 at 2:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Here's a draft patch to fix this. We basically just need to force\n>> finish_xact_command() in the same way as we do for transaction control\n>> statements. I considered using the same technology as the code uses\n>> for transaction control --- that is, statically check for the types of\n>> statements that are trouble --- but after reviewing the set of callers\n>> of PreventInTransactionBlock() I gave that up as unmaintainable.\n\n> This seems like too narrow a fix though.\n\nI read this, and I have absolutely no idea what you're talking about\nor what you concretely want to do differently. If it's just a\ndocumentation question, I agree that I didn't address docs yet.\nProbably we do need to put something in the protocol chapter\npointing out that some commands will commit immediately.\n\nI'm not sure I buy your argument that there's a fundamental\ndifference between simple and extended query protocol in this\narea. In simple protocol you can wrap an \"implicit transaction\"\naround several commands by sending them in one query message.\nWhat we've got here is that you can do the same thing in\nextended protocol by omitting Syncs. Extended protocol's\nskip-till-Sync-after-error behavior is likewise very much like\nthe fact that simple protocol abandons the rest of the query\nstring after an error.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Jul 2022 16:20:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
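[Editor's note] Tom's parallel between the two protocols (several statements in one simple-protocol Query message, versus several extended-protocol Execute messages with the Syncs omitted) can be illustrated with a toy simulation. All class and method names here are invented for illustration; this is not backend code. In both flows the commands share one implicit transaction: an error rolls the whole unsynced batch back, and absent an error everything commits together at the end of the message (or at Sync).

```python
# Toy model of implicit-transaction behavior in the two protocols.
# Statements are (key, value) writes, or the token "ERROR" to simulate
# a failing statement such as SELECT 1/0.

class ToyBackend:
    def __init__(self):
        self.committed = {}   # durable state
        self.pending = {}     # changes in the open implicit transaction

    def _execute(self, stmt):
        if stmt == "ERROR":
            raise RuntimeError("statement failed")
        key, value = stmt
        self.pending[key] = value

    def simple_query(self, stmts):
        # Simple protocol: all statements in one Query message share an
        # implicit transaction; an error rolls back the whole batch.
        try:
            for s in stmts:
                self._execute(s)
        except RuntimeError:
            self.pending.clear()              # roll back
            return
        self.committed.update(self.pending)   # commit at end of message
        self.pending.clear()

    def execute_message(self, stmt):
        # Extended protocol: Execute without Sync leaves the implicit
        # transaction open.
        try:
            self._execute(stmt)
        except RuntimeError:
            self.pending.clear()              # abort; skip till Sync

    def sync(self):
        # Sync closes the implicit transaction (commit if no error).
        self.committed.update(self.pending)
        self.pending.clear()


# An error in either flow leaves no trace of the earlier write.
b1 = ToyBackend()
b1.simple_query([("a", 1), "ERROR", ("b", 2)])

b2 = ToyBackend()
b2.execute_message(("a", 1))
b2.execute_message("ERROR")   # pipeline error before Sync
b2.sync()

# Without an error, unsynced commands commit together at Sync.
b3 = ToyBackend()
b3.execute_message(("a", 1))
b3.execute_message(("b", 2))
b3.sync()
```

Both error cases end with nothing committed, matching Tom's point that the two protocols behave analogously here.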
{
"msg_contents": "On Mon, Jul 18, 2022 at 1:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > On Fri, Jul 15, 2022 at 2:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Here's a draft patch to fix this. We basically just need to force\n> >> finish_xact_command() in the same way as we do for transaction control\n> >> statements. I considered using the same technology as the code uses\n> >> for transaction control --- that is, statically check for the types of\n> >> statements that are trouble --- but after reviewing the set of callers\n> >> of PreventInTransactionBlock() I gave that up as unmaintainable.\n>\n> > This seems like too narrow a fix though.\n>\n> I read this, and I have absolutely no idea what you're talking about\n> or what you concretely want to do differently. If it's just a\n> documentation question, I agree that I didn't address docs yet.\n> Probably we do need to put something in the protocol chapter\n> pointing out that some commands will commit immediately.\n>\n\nI guess I am expecting exec_execute_message to have:\n\nif (completed && use_implicit_block)\n{\n EndImplicitTransactionBlock();\n finish_xact_command();\n} else if (completed) [existing code continues]\nOr, in terms of the protocol,\n\n\"Therefore, an Execute phase is always terminated by the appearance of\nexactly one of these messages: CommandComplete, EmptyQueryResponse (if the\nportal was created from an empty query string), ErrorResponse, or\nPortalSuspended.\"\n\nCommandComplete includes an implied commit when the implicit transaction\nblock is in use; which basically means sending Execute while using the\nimplicit transaction block will cause a commit to happen.\n\nI don't fully understand PortalSuspended but it seems like it is indeed a\nvalid exception to this rule.\n\nEmptyQueryResponse seems like it should be immaterial.\n\nErrorResponse seems to preempt all of these.\n\nThe implied transaction block does not count 
for purposes of determining\nwhether a command that must not be executed in a transaction block can be\nexecuted.\n\nNow, as you say below, the \"multiple commands per implicit transaction\nblock in extended query mode\" is an intentional design choice so the above\nwould indeed be incorrect. However, there is still something fishy here,\nso please read below.\n\n\n> I'm not sure I buy your argument that there's a fundamental\n> difference between simple and extended query protocol in this\n> area. In simple protocol you can wrap an \"implicit transaction\"\n> around several commands by sending them in one query message.\n> What we've got here is that you can do the same thing in\n> extended protocol by omitting Syncs. Extended protocol's\n> skip-till-Sync-after-error behavior is likewise very much like\n> the fact that simple protocol abandons the rest of the query\n> string after an error.\n>\n\nThe fact that SYNC has the side effect of ending the implicit transaction\nblock is a POLA violation to me and the root of my misunderstanding here.\nI suppose it is too late to change at this point. I can at least see that\ngiving the client control of the implicit transaction block, even if not\nthrough SQL (which I suppose comes with implicit), has merit, even if this\nchoice of implementation is unintuitive.\n\nIn any case, I tried to extend the pgbench exercise but don't know what\nwent wrong. I will explain what I think would happen:\n\nFor the script:\n\ndrop table if exists performupdate;\ncreate table performupdate (val integer);\ninsert into performupdate values (2);\n\\startpipeline\nupdate performupdate set val = val * 2;\n--create database benchtest;\nselect 1/0;\n--rollback\n\\endpipeline\nDO $$BEGIN RAISE NOTICE 'Value = %', (select val from performupdate); END;\n$$\n\nI get this result - the post-pipeline DO block never executes and I\nexpected that it would. Uncommenting the rollback made no difference. 
I\nsuppose this is just because we are abusing the tool in lieu of writing C\ncode. That's fine.\n\npgbench: client 0 executing script \"/home/vagrant/pipebench.sql\"\npgbench: client 0 sending drop table if exists performupdate;\npgbench: client 0 receiving\npgbench: client 0 receiving\npgbench: client 0 sending create table performupdate (val integer);\npgbench: client 0 receiving\npgbench: client 0 receiving\npgbench: client 0 sending insert into performupdate values (2);\npgbench: client 0 receiving\npgbench: client 0 receiving\npgbench: client 0 executing \\startpipeline\npgbench: client 0 sending update performupdate set val = val * 2;\npgbench: client 0 sending select 1/0;\npgbench: client 0 executing \\endpipeline\npgbench: client 0 receiving\npgbench: client 0 receiving\npgbench: client 0 receiving\npgbench: error: client 0 script 0 aborted in command 6 query 0:\ntransaction type: /home/vagrant/pipebench.sql\nscaling factor: 1\nquery mode: extended\nnumber of clients: 1\nnumber of threads: 1\nmaximum number of tries: 1\nnumber of transactions per client: 1\nnumber of transactions actually processed: 0/1\nnumber of failed transactions: 0 (NaN%)\npgbench: error: Run was aborted; the above results are incomplete.\n\nIn any case, for the above script, given the definition of pipeline mode, I\nwould expect that the value reported to be 2. This assumes that when\ncoming out of pipeline mode the system basically goes back to ReadyForQuery.\n\nHowever, if I now uncomment the create database command the expectation is\neither:\n\n1. It fails to execute since an existing command is sharing the implicit\ntransaction, and fails the implicit transaction block, thus the reported\nvalue is still 2\n2. 
It succeeds, the next command executes and fails, the database creation\nis undone and the update is undone, thus the reported value is still 2\n\nWhat does happen, IIUC, is that both the preceding update command and the\ncreate database are now committed and the returned value is 4\n\nIn short, we are saying that issuing a command that cannot be executed in a\ntransaction block within the middle of the implicit transaction block will\ncause the block to implicitly commit if the command completes successfully.\n\n From this it seems that not only should we issue a commit after executing\ncreate database in the implicit transaction block but we also need to\ncommit before attempting to execute the command in the first place. The\nmere presence of a such a command basically means:\n\nCOMMIT;\nCREATE DATABASE...;\nCOMMIT;\n\nThat is what it means to be unable to be executed in a transaction block -\nwith an outright error if an explicit transaction block has already been\nestablished.\n\nDavid J.",
"msg_date": "Mon, 18 Jul 2022 15:17:11 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
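[Editor's note] The pipeline David describes (UPDATE, then CREATE DATABASE, then an erroring SELECT, all before Sync) can be sketched as a small simulation of the behavior the thread converges on: a command that cannot run in a transaction block forces an immediate commit of itself and the preceding unsynced commands when it succeeds, and the later error only aborts what follows. This is an illustrative model with invented names, not the server's code.

```python
# Toy pipeline: a command flagged forces_commit behaves like CREATE
# DATABASE under the fix discussed in this thread.

class PipelineBackend:
    def __init__(self):
        self.committed = {}
        self.pending = {}
        self.skipping = False   # extended protocol skips till Sync after error

    def execute(self, stmt, forces_commit=False):
        if self.skipping:
            return
        if stmt == "ERROR":
            self.pending.clear()   # abort the (now empty) implicit transaction
            self.skipping = True
            return
        key, value = stmt
        self.pending[key] = value
        if forces_commit:
            # e.g. CREATE DATABASE: commit itself *and* the earlier
            # unsynced commands immediately, so a later failure cannot
            # pretend to roll it back.
            self.committed.update(self.pending)
            self.pending.clear()

    def sync(self):
        if not self.skipping:
            self.committed.update(self.pending)
        self.pending.clear()
        self.skipping = False


pb = PipelineBackend()
pb.execute(("val", 4))                               # UPDATE ... val = val * 2
pb.execute(("db", "benchtest"), forces_commit=True)  # CREATE DATABASE
pb.execute("ERROR")                                  # SELECT 1/0
pb.sync()
```

Both the UPDATE and the CREATE DATABASE survive the later error, i.e. "the returned value is 4" in David's scenario.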
{
"msg_contents": "\tDavid G. Johnston wrote:\n\n\n> drop table if exists performupdate;\n> create table performupdate (val integer);\n> insert into performupdate values (2);\n> \\startpipeline\n> update performupdate set val = val * 2;\n> --create database benchtest;\n> select 1/0;\n> --rollback\n> \\endpipeline\n> DO $$BEGIN RAISE NOTICE 'Value = %', (select val from performupdate); END;\n> $$\n> \n> I get this result - the post-pipeline DO block never executes and I\n> expected that it would. \n\npgbench stops the script on errors. If the script was reduced to\n\n select 1/0;\n DO $$BEGIN RAISE NOTICE 'print this; END; $$\n\nthe DO statement would not be executed either.\nWhen the error happens inside a pipeline section, it's the same.\nThe pgbench code collects the results sent by the server to clear up,\nbut the script is aborted at this point, and the DO block is not going\nto be sent to the server.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Tue, 19 Jul 2022 13:16:21 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> I guess I am expecting exec_execute_message to have:\n\n> if (completed && use_implicit_block)\n> {\n> EndImplicitTransactionBlock();\n> finish_xact_command();\n> } else if (completed) [existing code continues]\n\nThe problem with that is \"where do we get use_implicit_block from\"?\nIn simple query mode it's set if the simple-query message contains\nmore than one statement. But the issue we face in extended mode is\nprecisely that we don't know if the client will try to send another\nstatement before Sync.\n\nI spent some time thinking about alternative solutions for this.\nAFAICS the only other feasible approach is to continue to not do\nfinish_xact_command() until Sync, but change state so that any\nmessage that tries to do other work will be rejected. But that's\nnot really at all attractive, for these reasons:\n\n1. Rejecting other message types implies an error (unless we\nget REALLY weird), which implies a rollback, which gets us into\nthe same inconsistent state as a user-issued rollback.\n\n2. Once we've completed the CREATE DATABASE or whatever, we really\nhave got to commit or we end with inconsistent state. So it does\nnot seem like a good plan to sit and wait for the client, even if\nwe were certain that it'd eventually issue Sync. The longer we\nsit, the more chance of something interfering --- database shutdown,\nnetwork connection drop, etc.\n\n3. This approach winds up throwing errors for cases that used\nto work, eg multiple CREATE DATABASE commands before Sync.\nThe immediate-silent-commit approach doesn't. The only compatibility\nbreak is that you can't ROLLBACK after CREATE DATABASE ... 
but that's\nprecisely the case that doesn't work anyway.\n\nIdeally we'd dodge all of this mess by making all our DDL fully\ntransactional and getting rid of PreventInTransactionBlock.\nI'm not sure that will ever happen; but I am sad that so many\nnew calls of it have been introduced by the logical replication\nstuff. (Doesn't look like anybody bothered to teach psql's\ncommand_no_begin() about those, either.) In any case, that's a\nlong-term direction to pursue, not something that could yield\na back-patchable fix.\n\nAnyway, here's an updated patch, now with docs. I was surprised\nto realize that protocol.sgml has no explicit mention of pipelining,\neven though extended query protocol was intentionally set up to make\nthat possible. So I added a <sect2> about that, which provides a home\nfor the caveat about immediate-commit commands.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 26 Jul 2022 11:08:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
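[Editor's note] The shape of Tom's fix, as described above and in Yugo's later reading of the commit, is that PreventInTransactionBlock() sets a flag on success and postgres.c forces finish_xact_command() when the statement ends, instead of statically listing troublesome statement types. The following sketch is loosely modeled on the real names (MyXactFlags, XACT_FLAGS_NEEDIMMEDIATECOMMIT, finish_xact_command) but is a simplified illustration, not the patch itself.

```python
# Flag-based immediate commit, modeled on the approach in Tom's patch.

XACT_FLAGS_NEEDIMMEDIATECOMMIT = 0x01

class Session:
    def __init__(self):
        self.xact_flags = 0
        self.log = []

    def prevent_in_transaction_block(self):
        # Called by CREATE DATABASE etc.; in an explicit transaction
        # block it would raise an error instead.  On success it now also
        # requests an immediate commit.
        self.xact_flags |= XACT_FLAGS_NEEDIMMEDIATECOMMIT

    def finish_xact_command(self):
        self.log.append("COMMIT")

    def exec_execute_message(self, stmt, cannot_run_in_block=False):
        self.log.append(stmt)
        if cannot_run_in_block:
            self.prevent_in_transaction_block()
        # postgres.c side: commit right away if the statement demanded
        # it, rather than waiting for a Sync that might never arrive.
        if self.xact_flags & XACT_FLAGS_NEEDIMMEDIATECOMMIT:
            self.finish_xact_command()
            self.xact_flags = 0


s = Session()
s.exec_execute_message("UPDATE t SET ...")
s.exec_execute_message("CREATE DATABASE d", cannot_run_in_block=True)
```

The ordinary UPDATE does not trigger a commit by itself; the CREATE DATABASE does, immediately, committing the UPDATE along with it.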
{
"msg_contents": "On Tue, Jul 26, 2022 at 8:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > I guess I am expecting exec_execute_message to have:\n>\n> > if (completed && use_implicit_block)\n> > {\n> > EndImplicitTransactionBlock();\n> > finish_xact_command();\n> > } else if (completed) [existing code continues]\n>\n> The problem with that is \"where do we get use_implicit_block from\"?\n> In simple query mode it's set if the simple-query message contains\n> more than one statement. But the issue we face in extended mode is\n> precisely that we don't know if the client will try to send another\n> statement before Sync.\n> [...]\n> Anyway, here's an updated patch, now with docs. I was surprised\n> to realize that protocol.sgml has no explicit mention of pipelining,\n> even though extended query protocol was intentionally set up to make\n> that possible. So I added a <sect2> about that, which provides a home\n> for the caveat about immediate-commit commands.\n>\n>\nThanks! This added section is clear and now affirms the understanding I've\ncome to with this thread, mostly. I'm still of the opinion that the\ndefinition of \"cannot be executed inside a transaction block\" means that we\nmust \"auto-sync\" (implicit commit) before and after the restricted command,\nnot just after, and that the new section should cover this - whether we do\nor do not - explicitly.\n\nDavid J.\n\nOn Tue, Jul 26, 2022 at 8:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> I guess I am expecting exec_execute_message to have:\n\n> if (completed && use_implicit_block)\n> {\n> EndImplicitTransactionBlock();\n> finish_xact_command();\n> } else if (completed) [existing code continues]\n\nThe problem with that is \"where do we get use_implicit_block from\"?\nIn simple query mode it's set if the simple-query message contains\nmore than one statement. 
But the issue we face in extended mode is\nprecisely that we don't know if the client will try to send another\nstatement before Sync.[...]\nAnyway, here's an updated patch, now with docs. I was surprised\nto realize that protocol.sgml has no explicit mention of pipelining,\neven though extended query protocol was intentionally set up to make\nthat possible. So I added a <sect2> about that, which provides a home\nfor the caveat about immediate-commit commands.Thanks! This added section is clear and now affirms the understanding I've come to with this thread, mostly. I'm still of the opinion that the definition of \"cannot be executed inside a transaction block\" means that we must \"auto-sync\" (implicit commit) before and after the restricted command, not just after, and that the new section should cover this - whether we do or do not - explicitly.David J.",
"msg_date": "Tue, 26 Jul 2022 08:22:38 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> Thanks! This added section is clear and now affirms the understanding I've\n> come to with this thread, mostly. I'm still of the opinion that the\n> definition of \"cannot be executed inside a transaction block\" means that we\n> must \"auto-sync\" (implicit commit) before and after the restricted command,\n> not just after, and that the new section should cover this - whether we do\n> or do not - explicitly.\n\nI'm not excited about your proposal to auto-commit before starting\nthe command. In the first place, we can't: we do not know whether\nthe command will call PreventInTransactionBlock. Restructuring to\nchange that seems untenable in view of past cowboy decisions about\nuse of PreventInTransactionBlock in the replication logic. In the\nsecond place, it'd be a deviation from the current behavior (namely\nthat a failure in CREATE DATABASE et al rolls back previous un-synced\ncommands) that is not necessary to fix a bug, so changing that in\nthe back branches would be a hard sell. I don't even agree that\nit's obviously better than the current behavior, so I'm not much\non board with changing it in HEAD either.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 26 Jul 2022 11:37:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 8:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > Thanks! This added section is clear and now affirms the understanding\n> I've\n> > come to with this thread, mostly. I'm still of the opinion that the\n> > definition of \"cannot be executed inside a transaction block\" means that\n> we\n> > must \"auto-sync\" (implicit commit) before and after the restricted\n> command,\n> > not just after, and that the new section should cover this - whether we\n> do\n> > or do not - explicitly.\n>\n> I'm not excited about your proposal to auto-commit before starting\n> the command. In the first place, we can't: we do not know whether\n> the command will call PreventInTransactionBlock. Restructuring to\n> change that seems untenable in view of past cowboy decisions about\n> use of PreventInTransactionBlock in the replication logic. In the\n> second place, it'd be a deviation from the current behavior (namely\n> that a failure in CREATE DATABASE et al rolls back previous un-synced\n> commands) that is not necessary to fix a bug, so changing that in\n> the back branches would be a hard sell. I don't even agree that\n> it's obviously better than the current behavior, so I'm not much\n> on board with changing it in HEAD either.\n>\n>\nThat leaves us with changing the documentation then, from:\n\nCREATE DATABASE cannot be executed inside a transaction block.\n\nto:\n\nCREATE DATABASE cannot be executed inside an explicit transaction block (it\nwill error in this case), and will commit (or rollback on failure) any\nimplicit transaction it is a part of.\n\nThe content of the section you added works fine so long as we are clear\nregarding the fact it can be executed in a transaction so long as it is\nimplicit.\n\nDavid J.\n\nOn Tue, Jul 26, 2022 at 8:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> Thanks! 
This added section is clear and now affirms the understanding I've\n> come to with this thread, mostly. I'm still of the opinion that the\n> definition of \"cannot be executed inside a transaction block\" means that we\n> must \"auto-sync\" (implicit commit) before and after the restricted command,\n> not just after, and that the new section should cover this - whether we do\n> or do not - explicitly.\n\nI'm not excited about your proposal to auto-commit before starting\nthe command. In the first place, we can't: we do not know whether\nthe command will call PreventInTransactionBlock. Restructuring to\nchange that seems untenable in view of past cowboy decisions about\nuse of PreventInTransactionBlock in the replication logic. In the\nsecond place, it'd be a deviation from the current behavior (namely\nthat a failure in CREATE DATABASE et al rolls back previous un-synced\ncommands) that is not necessary to fix a bug, so changing that in\nthe back branches would be a hard sell. I don't even agree that\nit's obviously better than the current behavior, so I'm not much\non board with changing it in HEAD either.That leaves us with changing the documentation then, from:CREATE DATABASE cannot be executed inside a transaction block.to:CREATE DATABASE cannot be executed inside an explicit transaction block (it will error in this case), and will commit (or rollback on failure) any implicit transaction it is a part of.The content of the section you added works fine so long as we are clear regarding the fact it can be executed in a transaction so long as it is implicit.David J.",
"msg_date": "Tue, 26 Jul 2022 08:48:23 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> That leaves us with changing the documentation then, from:\n> CREATE DATABASE cannot be executed inside a transaction block.\n> to:\n> CREATE DATABASE cannot be executed inside an explicit transaction block (it\n> will error in this case), and will commit (or rollback on failure) any\n> implicit transaction it is a part of.\n\nThat's not going to help anybody unless we also provide a definition of\n\"implicit transaction\", which is a bit far afield for that man page.\n\nI did miss a bet in the proposed pipeline addendum, though.\nI should have written\n\n ... However, there\n are a few DDL commands (such as <command>CREATE DATABASE</command>)\n that cannot be executed inside a transaction block. If one of\n these is executed in a pipeline, it will, upon success, force an\n immediate commit to preserve database consistency.\n\nThat ties the info to our standard wording in the per-command man\npages.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 26 Jul 2022 12:03:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 9:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > That leaves us with changing the documentation then, from:\n> > CREATE DATABASE cannot be executed inside a transaction block.\n> > to:\n> > CREATE DATABASE cannot be executed inside an explicit transaction block\n> (it\n> > will error in this case), and will commit (or rollback on failure) any\n> > implicit transaction it is a part of.\n>\n> That's not going to help anybody unless we also provide a definition of\n> \"implicit transaction\", which is a bit far afield for that man page.\n>\n> I did miss a bet in the proposed pipeline addendum, though.\n> I should have written\n>\n> ... However, there\n> are a few DDL commands (such as <command>CREATE DATABASE</command>)\n> that cannot be executed inside a transaction block. If one of\n> these is executed in a pipeline, it will, upon success, force an\n> immediate commit to preserve database consistency.\n>\n> That ties the info to our standard wording in the per-command man\n> pages.\n>\n>\nAnd we are back around to the fact that only by using libpq directly, or\nvia the pipeline feature of pgbench, can one actually exert control over\nthe implicit transaction. The psql and general SQL interface\nimplementation are just going to Sync after each command and so everything\nlooks like one transaction per command to them and only explicit\ntransactions matter. From that, the adjustment you describe above is\nsufficient for me.\n\nDavid J.\n\nOn Tue, Jul 26, 2022 at 9:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\"David G. 
Johnston\" <david.g.johnston@gmail.com> writes:\n> That leaves us with changing the documentation then, from:\n> CREATE DATABASE cannot be executed inside a transaction block.\n> to:\n> CREATE DATABASE cannot be executed inside an explicit transaction block (it\n> will error in this case), and will commit (or rollback on failure) any\n> implicit transaction it is a part of.\n\nThat's not going to help anybody unless we also provide a definition of\n\"implicit transaction\", which is a bit far afield for that man page.\n\nI did miss a bet in the proposed pipeline addendum, though.\nI should have written\n\n ... However, there\n are a few DDL commands (such as <command>CREATE DATABASE</command>)\n that cannot be executed inside a transaction block. If one of\n these is executed in a pipeline, it will, upon success, force an\n immediate commit to preserve database consistency.\n\nThat ties the info to our standard wording in the per-command man\npages.And we are back around to the fact that only by using libpq directly, or via the pipeline feature of pgbench, can one actually exert control over the implicit transaction. The psql and general SQL interface implementation are just going to Sync after each command and so everything looks like one transaction per command to them and only explicit transactions matter. From that, the adjustment you describe above is sufficient for me.David J.",
"msg_date": "Tue, 26 Jul 2022 09:11:52 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> And we are back around to the fact that only by using libpq directly, or\n> via the pipeline feature of pgbench, can one actually exert control over\n> the implicit transaction. The psql and general SQL interface\n> implementation are just going to Sync after each command and so everything\n> looks like one transaction per command to them and only explicit\n> transactions matter.\n\nRight.\n\n> From that, the adjustment you describe above is sufficient for me.\n\nCool, I'll set about back-patching.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 26 Jul 2022 12:14:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "Hi,\n\nThank you for treating this bug report!\n\nOn Tue, 26 Jul 2022 12:14:19 -0400\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > And we are back around to the fact that only by using libpq directly, or\n> > via the pipeline feature of pgbench, can one actually exert control over\n> > the implicit transaction. The psql and general SQL interface\n> > implementation are just going to Sync after each command and so everything\n> > looks like one transaction per command to them and only explicit\n> > transactions matter.\n> \n> Right.\n> \n> > From that, the adjustment you describe above is sufficient for me.\n> \n> Cool, I'll set about back-patching.\n> \n> \t\t\tregards, tom lane\n\nI've looked at the commited fix. What I wonder is whether a change in\nIsInTransactionBlock() is necessary or not.\n\n + /*\n + * If we tell the caller we're not in a transaction block, then inform\n + * postgres.c that it had better commit when the statement is done.\n + * Otherwise our report could be a lie.\n + */\n + MyXactFlags |= XACT_FLAGS_NEEDIMMEDIATECOMMIT;\n +\n return false;\n\nThe comment says that is required to prevent the report from being\na lie. Indeed, after this function returns false, it is guaranteed\nthat following statements are executed in a separate transaction from\nthat of the current statement. However, there is no guarantee that the\ncurrent statement is running in a separate transaction from that of\nthe previous statements. The only caller of this function is ANALYZE\ncommand, and this is used for the latter purpose. That is, if we are\nnot in a transaction block, ANALYZE can close the current transaction\nand restart another one without affecting previous transactions. \n(At least, ANALYZE command seems to assume it.) 
So,\nI think the fix does not seem to make a sense.\n\nIn fact, the result of IsInTransactionBlock does not make senses at\nall in pipe-line mode regardless to the fix. ANALYZE could commit all\nprevious commands in pipelining, and this may not be user expected\nbehaviour. Moreover, before the fix ANALYZE didn't close and open a\ntransaction if the target is only one table, but after the fix ANALYZE\nalways issues commit regardless to the number of table.\n\nI am not sure if we should fix it to prevent such confusing behavior\nbecause this breaks back-compatibility, but I prefer to fixing it. \n\nThe idea is to start an implicit transaction block if the server receive\nmore than one Execute messages before receiving Sync as discussed in [1]. \nI attached the patch for this fix. \n\nIf the first command in a pipeline is DDL commands such as CREATE\nDATABASE, this is allowed and immediately committed after success, as\nsame as the current behavior. Executing such commands in the middle of\npipeline is not allowed because the pipeline is regarded as \"an implicit\ntransaction block\" at that time. Similarly, ANALYZE in the middle of\npipeline can not close and open transaction.\n\n[1] https://www.postgresql.org/message-id/20220301151704.76adaaefa8ed5d6c12ac3079@sraoss.co.jp\n\n\nRegards,\nYugo Nagata\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Thu, 28 Jul 2022 10:51:34 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "Yugo NAGATA <nagata@sraoss.co.jp> writes:\n> I've looked at the commited fix. What I wonder is whether a change in\n> IsInTransactionBlock() is necessary or not.\n\nI've not examined ANALYZE's dependencies on this closely, but it doesn't\nmatter really, because I'm not willing to assume that ANALYZE is the\nonly caller. There could be external modules with stronger assumptions\nthat IsInTransactionBlock() yielding false provides guarantees equivalent\nto PreventInTransactionBlock(). It did before this patch, so I think\nit needs to still do so after.\n\n> In fact, the result of IsInTransactionBlock does not make senses at\n> all in pipe-line mode regardless to the fix. ANALYZE could commit all\n> previous commands in pipelining, and this may not be user expected\n> behaviour.\n\nThis seems pretty much isomorphic to the fact that CREATE DATABASE\nwill commit preceding steps in the pipeline. That's not great,\nI admit; we'd not have designed it like that if we'd had complete\nunderstanding of the behavior at the beginning. But it's acted\nlike that for a couple of decades now, so changing it seems far\nmore likely to make people unhappy than happy. The same for\nANALYZE in a pipeline.\n\n> If the first command in a pipeline is DDL commands such as CREATE\n> DATABASE, this is allowed and immediately committed after success, as\n> same as the current behavior. Executing such commands in the middle of\n> pipeline is not allowed because the pipeline is regarded as \"an implicit\n> transaction block\" at that time. Similarly, ANALYZE in the middle of\n> pipeline can not close and open transaction.\n\nI'm not going there. If you can persuade some other committer that\nthis is worth breaking backward compatibility for, fine; the user\ncomplaints will be their problem.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Jul 2022 22:50:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 7:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Yugo NAGATA <nagata@sraoss.co.jp> writes:\n> > I've looked at the commited fix. What I wonder is whether a change in\n> > IsInTransactionBlock() is necessary or not.\n>\n> > In fact, the result of IsInTransactionBlock does not make senses at\n> > all in pipe-line mode regardless to the fix. ANALYZE could commit all\n> > previous commands in pipelining, and this may not be user expected\n> > behaviour.\n>\n> This seems pretty much isomorphic to the fact that CREATE DATABASE\n> will commit preceding steps in the pipeline. That's not great,\n> I admit; we'd not have designed it like that if we'd had complete\n> understanding of the behavior at the beginning. But it's acted\n> like that for a couple of decades now, so changing it seems far\n> more likely to make people unhappy than happy. The same for\n> ANALYZE in a pipeline.\n>\n>\nI agreed to leaving the description of CREATE DATABASE simplified by not\nintroducing the idea of implicit transactions or, equivalently,\n\"autocommit\".\n\nJust tossing out there that we should acknowledge that our wording in the\nBEGIN Reference should remain status quo based upon the same reasoning.\n\n\"By default (without BEGIN), PostgreSQL executes transactions in\n“autocommit” mode, that is, each statement is executed in its own\ntransaction and a commit is implicitly performed at the end of the\nstatement (if execution was successful, otherwise a rollback is done).\"\n\nhttps://www.postgresql.org/docs/current/sql-begin.html\n\nMaybe write instead:\n\n\"By default (without BEGIN), PostgreSQL creates transactions based upon the\nunderlying messages passed between the client and server. Typically this\nmeans each statement ends up having its own transaction. 
In any case,\nstatements that must not execute in a transaction (like CREATE DATABASE)\nmust use the default, and will always cause a commit or rollback to happen\nupon completion.\"\n\nIt feels a bit out-of-place, maybe if the content scope is acceptable we\ncan work it better into the Tutorial-Advanced Features-Transaction section\nand just replace the existing sentence with a link to there?\n\nDavid J.\n\nOn Wed, Jul 27, 2022 at 7:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Yugo NAGATA <nagata@sraoss.co.jp> writes:\n> I've looked at the commited fix. What I wonder is whether a change in\n> IsInTransactionBlock() is necessary or not.\n> In fact, the result of IsInTransactionBlock does not make senses at\n> all in pipe-line mode regardless to the fix. ANALYZE could commit all\n> previous commands in pipelining, and this may not be user expected\n> behaviour.\n\nThis seems pretty much isomorphic to the fact that CREATE DATABASE\nwill commit preceding steps in the pipeline. That's not great,\nI admit; we'd not have designed it like that if we'd had complete\nunderstanding of the behavior at the beginning. But it's acted\nlike that for a couple of decades now, so changing it seems far\nmore likely to make people unhappy than happy. 
The same for\nANALYZE in a pipeline.I agreed to leaving the description of CREATE DATABASE simplified by not introducing the idea of implicit transactions or, equivalently, \"autocommit\".Just tossing out there that we should acknowledge that our wording in the BEGIN Reference should remain status quo based upon the same reasoning.\"By default (without BEGIN), PostgreSQL executes transactions in “autocommit” mode, that is, each statement is executed in its own transaction and a commit is implicitly performed at the end of the statement (if execution was successful, otherwise a rollback is done).\"https://www.postgresql.org/docs/current/sql-begin.htmlMaybe write instead:\"By default (without BEGIN), PostgreSQL creates transactions based upon the underlying messages passed between the client and server. Typically this means each statement ends up having its own transaction. In any case, statements that must not execute in a transaction (like CREATE DATABASE) must use the default, and will always cause a commit or rollback to happen upon completion.\"It feels a bit out-of-place, maybe if the content scope is acceptable we can work it better into the Tutorial-Advanced Features-Transaction section and just replace the existing sentence with a link to there?David J.",
"msg_date": "Thu, 28 Jul 2022 09:13:05 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "On Wed, 27 Jul 2022 22:50:55 -0400\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Yugo NAGATA <nagata@sraoss.co.jp> writes:\n> > I've looked at the commited fix. What I wonder is whether a change in\n> > IsInTransactionBlock() is necessary or not.\n> \n> I've not examined ANALYZE's dependencies on this closely, but it doesn't\n> matter really, because I'm not willing to assume that ANALYZE is the\n> only caller. There could be external modules with stronger assumptions\n> that IsInTransactionBlock() yielding false provides guarantees equivalent\n> to PreventInTransactionBlock(). It did before this patch, so I think\n> it needs to still do so after.\n\nThank you for your explanation. I understood that IsInTransactionBlock()\nand PreventInTransactionBlock() share the equivalent assumption.\n\nAs to ANALYZE, after investigating the code more, I found that setting XACT_FLAGS_NEEDIMMEDIATECOMMIT in IsInTransactionBlock() is needed indeed.\nThat is, some flags in pg_class such as relhasindex can be safely updated\nonly if ANALYZE is not in a transaction block and never rolled back. So,\nin a pipeline, ANALYZE must be immediately committed.\n\nHowever, I think we need more comments on these functions to clarify what\nusers can expect or not for them. It is ensured that the statement that\ncalls PreventInTransactionBlock() or receives false from\nIsInTransactionBlock() never be rolled back if it finishes successfully.\nThis can eliminate the harmful influence of non-rollback-able side effects.\n\nOn the other hand, it cannot ensure that the statement calling these\nfunctions is the first or only one in the transaction in pipelining. If\nthere are preceding statements in a pipeline, they are committed in the\nsame transaction of the current statement.\n\nThe attached patch tries to add comments explaining it on the functions.\n\n> > In fact, the result of IsInTransactionBlock does not make senses at\n> > all in pipe-line mode regardless to the fix. 
ANALYZE could commit all\n> > previous commands in pipelining, and this may not be user expected\n> > behaviour.\n> \n> This seems pretty much isomorphic to the fact that CREATE DATABASE\n> will commit preceding steps in the pipeline. \n\nI am not sure if we can think CREATE DATABASE case and ANLALYZE case\nsimilarly. First, CREATE DATABASE is one of the commands that cannot be\nexecuted inside a transaction block, but ANALYZE can be. So, users would\nnot be able to know ANALYZE in a pipeline causes a commit from the\ndocumentation. Second, ANALYZE issues a commit internally in an early\nstage not only after it finished successfully. For example, even if\nANALYZE is failing because a not-existing column name is specified, it\nissues a commit before checking the column name. This makes more hard\nto know which statements will be committed and which statements not\ncommitted in a pipeline. Also, as you know, there are other commands\nthat issue internal commits.\n\n> That's not great,\n> I admit; we'd not have designed it like that if we'd had complete\n> understanding of the behavior at the beginning. But it's acted\n> like that for a couple of decades now, so changing it seems far\n> more likely to make people unhappy than happy. The same for\n> ANALYZE in a pipeline.\n\n> > If the first command in a pipeline is DDL commands such as CREATE\n> > DATABASE, this is allowed and immediately committed after success, as\n> > same as the current behavior. Executing such commands in the middle of\n> > pipeline is not allowed because the pipeline is regarded as \"an implicit\n> > transaction block\" at that time. Similarly, ANALYZE in the middle of\n> > pipeline can not close and open transaction.\n> \n> I'm not going there. 
If you can persuade some other committer that\n> this is worth breaking backward compatibility for, fine; the user\n> complaints will be their problem.\n\nI don't have no idea how to reduce the complexity explained above and\nclarify the transactional behavior of pipelining to users other than the\nfix I proposed in the previous post. However, I also agree that such\nchanging may make some people unhappy. If there is no good way and we\nwould not like to change the behavior, I think it is better to mention\nthe effects of commands that issue internal commits in the documentation\nat least.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Tue, 9 Aug 2022 00:21:02 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "Hi,\n\nOn Tue, 9 Aug 2022 00:21:02 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> On Wed, 27 Jul 2022 22:50:55 -0400\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > Yugo NAGATA <nagata@sraoss.co.jp> writes:\n> > > I've looked at the commited fix. What I wonder is whether a change in\n> > > IsInTransactionBlock() is necessary or not.\n> > \n> > I've not examined ANALYZE's dependencies on this closely, but it doesn't\n> > matter really, because I'm not willing to assume that ANALYZE is the\n> > only caller. There could be external modules with stronger assumptions\n> > that IsInTransactionBlock() yielding false provides guarantees equivalent\n> > to PreventInTransactionBlock(). It did before this patch, so I think\n> > it needs to still do so after.\n> \n> Thank you for your explanation. I understood that IsInTransactionBlock()\n> and PreventInTransactionBlock() share the equivalent assumption.\n> \n> As to ANALYZE, after investigating the code more, I found that setting XACT_FLAGS_NEEDIMMEDIATECOMMIT in IsInTransactionBlock() is needed indeed.\n> That is, some flags in pg_class such as relhasindex can be safely updated\n> only if ANALYZE is not in a transaction block and never rolled back. So,\n> in a pipeline, ANALYZE must be immediately committed.\n> \n> However, I think we need more comments on these functions to clarify what\n> users can expect or not for them. It is ensured that the statement that\n> calls PreventInTransactionBlock() or receives false from\n> IsInTransactionBlock() never be rolled back if it finishes successfully.\n> This can eliminate the harmful influence of non-rollback-able side effects.\n> \n> On the other hand, it cannot ensure that the statement calling these\n> functions is the first or only one in the transaction in pipelining. 
If\n> there are preceding statements in a pipeline, they are committed in the\n> same transaction of the current statement.\n> \n> The attached patch tries to add comments explaining it on the functions.\n\nI forward it to the hackers list because the patch is to fix comments.\nAlso, I'll register it to commitfest.\n\nThe past discussion is here.\nhttps://www.postgresql.org/message-id/flat/17434-d9f7a064ce2a88a3%40postgresql.org\n\n> \n> > > In fact, the result of IsInTransactionBlock does not make senses at\n> > > all in pipe-line mode regardless to the fix. ANALYZE could commit all\n> > > previous commands in pipelining, and this may not be user expected\n> > > behaviour.\n> > \n> > This seems pretty much isomorphic to the fact that CREATE DATABASE\n> > will commit preceding steps in the pipeline. \n> \n> I am not sure if we can think CREATE DATABASE case and ANLALYZE case\n> similarly. First, CREATE DATABASE is one of the commands that cannot be\n> executed inside a transaction block, but ANALYZE can be. So, users would\n> not be able to know ANALYZE in a pipeline causes a commit from the\n> documentation. Second, ANALYZE issues a commit internally in an early\n> stage not only after it finished successfully. For example, even if\n> ANALYZE is failing because a not-existing column name is specified, it\n> issues a commit before checking the column name. This makes more hard\n> to know which statements will be committed and which statements not\n> committed in a pipeline. Also, as you know, there are other commands\n> that issue internal commits.\n> \n> > That's not great,\n> > I admit; we'd not have designed it like that if we'd had complete\n> > understanding of the behavior at the beginning. But it's acted\n> > like that for a couple of decades now, so changing it seems far\n> > more likely to make people unhappy than happy. 
The same for\n> > ANALYZE in a pipeline.\n> \n> > > If the first command in a pipeline is DDL commands such as CREATE\n> > > DATABASE, this is allowed and immediately committed after success, as\n> > > same as the current behavior. Executing such commands in the middle of\n> > > pipeline is not allowed because the pipeline is regarded as \"an implicit\n> > > transaction block\" at that time. Similarly, ANALYZE in the middle of\n> > > pipeline can not close and open transaction.\n> > \n> > I'm not going there. If you can persuade some other committer that\n> > this is worth breaking backward compatibility for, fine; the user\n> > complaints will be their problem.\n> \n> I don't have no idea how to reduce the complexity explained above and\n> clarify the transactional behavior of pipelining to users other than the\n> fix I proposed in the previous post. However, I also agree that such\n> changing may make some people unhappy. If there is no good way and we\n> would not like to change the behavior, I think it is better to mention\n> the effects of commands that issue internal commits in the documentation\n> at least.\n> \n> Regards,\n> Yugo Nagata\n> \n> -- \n> Yugo NAGATA <nagata@sraoss.co.jp>\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Fri, 30 Sep 2022 10:23:42 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "Yugo NAGATA <nagata@sraoss.co.jp> writes:\n>> The attached patch tries to add comments explaining it on the functions.\n\n> I forward it to the hackers list because the patch is to fix comments.\n\nWhat do you think of the attached wording?\n\nI don't think the pipeline angle is of concern to anyone who might be\nreading these comments with the aim of understanding what guarantees\nthey have. Perhaps there should be more about that in the user-facing\ndocs, though.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 06 Nov 2022 12:54:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "On Sun, 06 Nov 2022 12:54:17 -0500\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Yugo NAGATA <nagata@sraoss.co.jp> writes:\n> >> The attached patch tries to add comments explaining it on the functions.\n> \n> > I forward it to the hackers list because the patch is to fix comments.\n> \n> What do you think of the attached wording?\n\nIt looks good to me. That describes the expected behaviour exactly.\n\n> I don't think the pipeline angle is of concern to anyone who might be\n> reading these comments with the aim of understanding what guarantees\n> they have. Perhaps there should be more about that in the user-facing\n> docs, though.\n\nI agree with that we don't need to mention pipelining in these comments,\nand that we need more in the documentation. I attached a doc patch to add\na mention of commands that do internal commit to the pipelining section.\nAlso, this adds a reference for the pipelining protocol to the libpq doc.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Wed, 9 Nov 2022 19:01:14 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "Yugo NAGATA <nagata@sraoss.co.jp> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What do you think of the attached wording?\n\n> It looks good to me. That describes the expected behaviour exactly.\n\nPushed that, then.\n\n>> I don't think the pipeline angle is of concern to anyone who might be\n>> reading these comments with the aim of understanding what guarantees\n>> they have. Perhaps there should be more about that in the user-facing\n>> docs, though.\n\n> I agree with that we don't need to mention pipelining in these comments,\n> and that we need more in the documentation. I attached a doc patch to add\n> a mention of commands that do internal commit to the pipelining section.\n> Also, this adds a reference for the pipelining protocol to the libpq doc.\n\nHmm ... I don't really find either of these changes to be improvements.\nThe fact that, say, multi-table ANALYZE uses multiple transactions\nseems to me to be a property of that statement, not of the protocol.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 09 Nov 2022 11:17:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "On 08.08.22 17:21, Yugo NAGATA wrote:\n>>> In fact, the result of IsInTransactionBlock does not make senses at\n>>> all in pipe-line mode regardless to the fix. ANALYZE could commit all\n>>> previous commands in pipelining, and this may not be user expected\n>>> behaviour.\n>> This seems pretty much isomorphic to the fact that CREATE DATABASE\n>> will commit preceding steps in the pipeline.\n> I am not sure if we can think CREATE DATABASE case and ANLALYZE case\n> similarly. First, CREATE DATABASE is one of the commands that cannot be\n> executed inside a transaction block, but ANALYZE can be. So, users would\n> not be able to know ANALYZE in a pipeline causes a commit from the\n> documentation. Second, ANALYZE issues a commit internally in an early\n> stage not only after it finished successfully. For example, even if\n> ANALYZE is failing because a not-existing column name is specified, it\n> issues a commit before checking the column name. This makes more hard\n> to know which statements will be committed and which statements not\n> committed in a pipeline. Also, as you know, there are other commands\n> that issue internal commits.\n\nThis has broken the following use:\n\nparse: create temporary table t1 (a int) on commit drop\nbind\nexecute\nparse: analyze t1\nbind\nexecute\nparse: select * from t1\nbind\nexecute\nsync\n\nI think the behavior of IsInTransactionBlock() needs to be further \nrefined to support this. If we are worried about external callers, \nmaybe we need to provide a separate version. AFAICT, all the callers of \nIsInTransactionBlock() over time have been in vacuum/analyze-related \ncode, so perhaps in master we should just move it there.\n\n\n\n",
"msg_date": "Wed, 9 Nov 2022 17:24:43 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> This has broken the following use:\n\n> parse: create temporary table t1 (a int) on commit drop\n> bind\n> execute\n> parse: analyze t1\n> bind\n> execute\n> parse: select * from t1\n> bind\n> execute\n> sync\n\n> I think the behavior of IsInTransactionBlock() needs to be further \n> refined to support this.\n\nHmm. Maybe the right way to think about this is \"if we have completed an\nEXECUTE, and not yet received a following SYNC, then report that we are in\na transaction block\"? But I'm not sure if that breaks any other cases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 09 Nov 2022 11:38:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "On Wed, 09 Nov 2022 11:17:29 -0500\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Yugo NAGATA <nagata@sraoss.co.jp> writes:\n> > Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> What do you think of the attached wording?\n> \n> > It looks good to me. That describes the expected behaviour exactly.\n> \n> Pushed that, then.\n\nThank you.\n\n> >> I don't think the pipeline angle is of concern to anyone who might be\n> >> reading these comments with the aim of understanding what guarantees\n> >> they have. Perhaps there should be more about that in the user-facing\n> >> docs, though.\n> \n> > I agree with that we don't need to mention pipelining in these comments,\n> > and that we need more in the documentation. I attached a doc patch to add\n> > a mention of commands that do internal commit to the pipelining section.\n> > Also, this adds a reference for the pipelining protocol to the libpq doc.\n> \n> Hmm ... I don't really find either of these changes to be improvements.\n> The fact that, say, multi-table ANALYZE uses multiple transactions\n> seems to me to be a property of that statement, not of the protocol.\n\nOk. Then, if we want to notice users that commands using internal commits\ncould unexpectedly close a transaction in pipelining, the proper place is\nlibpq section?\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Thu, 10 Nov 2022 13:49:19 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "On Wed, 09 Nov 2022 11:38:05 -0500\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > This has broken the following use:\n> \n> > parse: create temporary table t1 (a int) on commit drop\n> > bind\n> > execute\n> > parse: analyze t1\n> > bind\n> > execute\n> > parse: select * from t1\n> > bind\n> > execute\n> > sync\n> \n> > I think the behavior of IsInTransactionBlock() needs to be further \n> > refined to support this.\n> \n> Hmm. Maybe the right way to think about this is \"if we have completed an\n> EXECUTE, and not yet received a following SYNC, then report that we are in\n> a transaction block\"? But I'm not sure if that breaks any other cases.\n\nOr, in that case, regarding it as an implicit transaction if multiple commands\nare executed in a pipeline as proposed in [1] could be another solution, \nalthough I have once withdrawn this for not breaking backward compatibility.\nAttached is the same patch of [1].\n\n[1] https://www.postgresql.org/message-id/20220728105134.d5ce51dd756b3149e9b9c52c%40sraoss.co.jp\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Thu, 10 Nov 2022 14:48:59 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "Yugo NAGATA <nagata@sraoss.co.jp> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hmm. Maybe the right way to think about this is \"if we have completed an\n>> EXECUTE, and not yet received a following SYNC, then report that we are in\n>> a transaction block\"? But I'm not sure if that breaks any other cases.\n\n> Or, in that case, regarding it as an implicit transaction if multiple commands\n> are executed in a pipeline as proposed in [1] could be another solution, \n> although I have once withdrawn this for not breaking backward compatibility.\n\nI didn't like that patch then and I still don't. In particular, it's\nmighty weird to be issuing BeginImplicitTransactionBlock after we've\nalready completed one command of the pipeline. If that works without\nobvious warts, it's only accidental.\n\nAttached is a draft patch along the lines I speculated about above.\nIt breaks backwards compatibility in that PreventInTransactionBlock\ncommands will now be rejected if they're a non-first command in a\npipeline. I think that's okay, and arguably desirable, for HEAD\nbut I'm pretty uncomfortable about back-patching it.\n\nI thought of a variant idea that I think would significantly reduce\nthe risk of breaking working applications, which is to restrict things\nonly in the case of pipelines with previous data-modifying commands.\nI tried to implement that by having PreventInTransactionBlock test\n\n\tif (GetTopTransactionIdIfAny() != InvalidTransactionId)\n\nbut it blew up, because we have various (mostly partitioning-related)\nDDL commands that run PreventInTransactionBlock only after they've\nacquired an exclusive lock on something, and LogAccessExclusiveLock\ngets an XID. (That was always a horrid POLA-violating kluge that\nwould bite us on the rear someday, and now it has. 
But I can't see\ntrying to change that in back branches.)\n\nSomething could still be salvaged of the idea, perhaps: we could\nadjust this patch so that the tests are like\n\n\tif ((MyXactFlags & XACT_FLAGS_PIPELINING) &&\n\t GetTopTransactionIdIfAny() != InvalidTransactionId)\n\nMaybe that makes it a small enough hazard to be back-patchable.\n\nAnother objection that could be raised is the same one I made\nalready, that !IsInTransactionBlock() doesn't provide the same\nguarantee as PreventInTransactionBlock. I'm not too happy\nabout that either, but given that we know of no other uses of\nIsInTransactionBlock besides ANALYZE, maybe it's okay. I'm\nnot sure it's worth trying to avoid it anyway --- we'd just\nend up with a probably-dead backwards compatibility stub.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 10 Nov 2022 15:33:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "On Thu, 10 Nov 2022 15:33:37 -0500\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Yugo NAGATA <nagata@sraoss.co.jp> writes:\n> > Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Hmm. Maybe the right way to think about this is \"if we have completed an\n> >> EXECUTE, and not yet received a following SYNC, then report that we are in\n> >> a transaction block\"? But I'm not sure if that breaks any other cases.\n> \n> > Or, in that case, regarding it as an implicit transaction if multiple commands\n> > are executed in a pipeline as proposed in [1] could be another solution, \n> > although I have once withdrawn this for not breaking backward compatibility.\n> \n> I didn't like that patch then and I still don't. In particular, it's\n> mighty weird to be issuing BeginImplicitTransactionBlock after we've\n> already completed one command of the pipeline. If that works without\n> obvious warts, it's only accidental.\n\nOk, I agree with that ugly part of my proposal, so I withdraw it again\nif there is another acceptable solution.\n\n> Attached is a draft patch along the lines I speculated about above.\n> It breaks backwards compatibility in that PreventInTransactionBlock\n> commands will now be rejected if they're a non-first command in a\n> pipeline. I think that's okay, and arguably desirable, for HEAD\n\nThat patch seems good to me. It fixes the problem reported from\nPeter Eisentraut. Also, this seems simple way to define what is\n\"pipelining\" in the code. 
\n\n> but I'm pretty uncomfortable about back-patching it.\n\nIf we want to fix the ANALYZE problem without breaking backward\ncompatibility for back-patching, maybe we could fix only\nIsInTransactionBlock and remain PreventInTransactionBlock as it is.\nObviously, this will break consistency of guarantee between those\nfunctions, but if we are abandoning it eventually, it might be okay.\n\nAnyway, if we change PreventInTransactionBlock to forbid execute\nsome DDLs in a pipeline, we also need to modify the doc.\n\n> I thought of a variant idea that I think would significantly reduce\n> the risk of breaking working applications, which is to restrict things\n> only in the case of pipelines with previous data-modifying commands.\n> I tried to implement that by having PreventInTransactionBlock test\n> \n> \tif (GetTopTransactionIdIfAny() != InvalidTransactionId)\n> \n> but it blew up, because we have various (mostly partitioning-related)\n> DDL commands that run PreventInTransactionBlock only after they've\n> acquired an exclusive lock on something, and LogAccessExclusiveLock\n> gets an XID. (That was always a horrid POLA-violating kluge that\n> would bite us on the rear someday, and now it has. 
But I can't see\n> trying to change that in back branches.)\n> \n> Something could still be salvaged of the idea, perhaps: we could\n> adjust this patch so that the tests are like\n> \n> \tif ((MyXactFlags & XACT_FLAGS_PIPELINING) &&\n> \t GetTopTransactionIdIfAny() != InvalidTransactionId)\n> \n> Maybe that makes it a small enough hazard to be back-patchable.\n\nIn this case, DDLs that call PreventInTransactionBlock would be\nallowed in a pipeline as long as no data-modifying commands have been\nexecuted yet. This specification is a bit complicated and I'm not sure how many\ncases are salvaged by this, but I agree that this will reduce the\nhazard of breaking backward-compatibility.\n\n> Another objection that could be raised is the same one I made\n> already, that !IsInTransactionBlock() doesn't provide the same\n> guarantee as PreventInTransactionBlock. I'm not too happy\n> about that either, but given that we know of no other uses of\n> IsInTransactionBlock besides ANALYZE, maybe it's okay. I'm\n> not sure it's worth trying to avoid it anyway --- we'd just\n> end up with a probably-dead backwards compatibility stub.\n\nOne way to fix the ANALYZE problem while maintaining the\nbackward-compatibility for third-party tools using IsInTransactionBlock\nmight be to rename the function (ex. IsInTransactionBlockWithoutCommit)\nand define a new function with the original name.\n\nFor example, define the following for third-party tools,\n\nbool IsInTransactionBlock()\n{\n if (!IsInTransactionBlockWithoutCommit())\n {\n MyXactFlags |= XACT_FLAGS_NEEDIMMEDIATECOMMIT;\n return false;\n }\n else\n return true;\n}\n\nand use IsInTransactionBlockWithoutCommit in ANALYZE.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Wed, 16 Nov 2022 19:53:02 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "> Attached is a draft patch along the lines I speculated about above.\n> It breaks backwards compatibility in that PreventInTransactionBlock\n> commands will now be rejected if they're a non-first command in a\n> pipeline. I think that's okay, and arguably desirable, for HEAD\n> but I'm pretty uncomfortable about back-patching it.\n\nI attempted to run these using HEAD, and it fails:\n\n parse: create temporary table t1 (a int) on commit drop\n bind\n execute\n parse: analyze t1\n bind\n execute\n parse: select * from t1\n bind\n execute\n sync\n\nIt then works fine after applying your patch!\n\nJust for some context, this was brought by Peter E. based on an issue\nreported by a customer. They are using PostgreSQL 11, and the issue\nwas observed after upgrading to PostgreSQL 11.17, which includes the\ncommit 9e3e1ac458abcda5aa03fa2a136e6fa492d58bd6. As a workaround\nthey downgraded the binaries to 11.16.\n\nIt would be great if we can back-patch this to all supported versions,\nas the issue itself is currently affecting them all.\n\nRegards,\nIsrael.",
"msg_date": "Fri, 25 Nov 2022 12:17:02 -0300",
"msg_from": "Israel Barth Rubio <barthisrael@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "Israel Barth Rubio <barthisrael@gmail.com> writes:\n> It would be great if we can back-patch this to all supported versions,\n> as the issue itself is currently affecting them all.\n\nIn my mind, this is waiting for Peter to opine on whether it satisfies\nhis concern.\n\nI'm also looking for input on whether to reject if\n\n if ((MyXactFlags & XACT_FLAGS_PIPELINING) &&\n GetTopTransactionIdIfAny() != InvalidTransactionId)\n\nrather than just the bare\n\n if (MyXactFlags & XACT_FLAGS_PIPELINING)\n\ntests in the patch-as-posted.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 25 Nov 2022 12:06:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "On 25.11.22 18:06, Tom Lane wrote:\n> Israel Barth Rubio <barthisrael@gmail.com> writes:\n>> It would be great if we can back-patch this to all supported versions,\n>> as the issue itself is currently affecting them all.\n> \n> In my mind, this is waiting for Peter to opine on whether it satisfies\n> his concern.\n\nThe case I was working on is the same as Israel's. He has confirmed \nthat this fixes the issue we have been working on.\n\n\n\n",
"msg_date": "Mon, 12 Dec 2022 20:16:32 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 25.11.22 18:06, Tom Lane wrote:\n>> In my mind, this is waiting for Peter to opine on whether it satisfies\n>> his concern.\n\n> The case I was working on is the same as Israel's. He has confirmed \n> that this fixes the issue we have been working on.\n\nOK, I'll make this happen soon.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Dec 2022 16:22:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
},
{
"msg_contents": "I wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> The case I was working on is the same as Israel's. He has confirmed \n>> that this fixes the issue we have been working on.\n\n> OK, I'll make this happen soon.\n\nPushed. I left out the idea of making this conditional on whether\nany preceding command had performed data modification, as that seemed\nto greatly complicate the explanation (since \"have we performed any\ndata modification\" is a rather squishy question from a user's viewpoint).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Dec 2022 14:26:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17434: CREATE/DROP DATABASE can be executed in the same\n transaction with other commands"
}
],
[
{
"msg_contents": "Hi,\n\nXLogReader limits the size of what it considers valid xlog records to\nMaxAllocSize, but this is not currently enforced in the\nXLogRecAssemble API. This means it is possible to assemble a record\nthat PostgreSQL cannot replay.\nSimilarly, it is possible to repeatedly call XLogRegisterData() so as\nto overflow rec->xl_tot_len, resulting in out-of-bounds reads and\nwrites while processing record data.\n\nPFA a patch that attempts to fix both of these issues in the insertion\nAPI, by checking against overflows and other incorrectly large values\nin the relevant functions in xloginsert.c. In this patch, I've also\nadded a comment to the XLogRecord spec to document that xl_tot_len\nshould not be larger than 1GB - 1B, and why that limit exists.\n\nKind regards,\n\nMatthias van de Meent",
"msg_date": "Fri, 11 Mar 2022 16:42:23 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On 11/03/2022 17:42, Matthias van de Meent wrote:\n> Hi,\n> \n> Xlogreader limits the size of what it considers valid xlog records to\n> MaxAllocSize; but this is not currently enforced in the\n> XLogRecAssemble API. This means it is possible to assemble a record\n> that postgresql cannot replay.\n\nOops, that would be nasty.\n\n> Similarly; it is possible to repeatedly call XlogRegisterData() so as\n> to overflow rec->xl_tot_len; resulting in out-of-bounds reads and\n> writes while processing record data;\n\nAnd that too.\n\nHave you been able to create a test case for that? The largest record I \ncan think of is a commit record with a huge number of subtransactions, \ndropped relations, and shared inval messages. I'm not sure if you can \noverflow a uint32 with that, but exceeding MaxAllocSize seems possible.\n\n> PFA a patch that attempts to fix both of these issues in the insertion\n> API; by checking against overflows and other incorrectly large values\n> in the relevant functions in xloginsert.c. 
In this patch, I've also\n> added a comment to the XLogRecord spec to document that xl_tot_len\n> should not be larger than 1GB - 1B; and why that limit exists.\n> diff --git a/src/backend/access/transam/xloginsert.c b/src/backend/access/transam/xloginsert.c\n> index c260310c4c..ae654177de 100644\n> --- a/src/backend/access/transam/xloginsert.c\n> +++ b/src/backend/access/transam/xloginsert.c\n> @@ -342,6 +342,11 @@ XLogRegisterData(char *data, int len)\n> \n> \tif (num_rdatas >= max_rdatas)\n> \t\telog(ERROR, \"too much WAL data\");\n> +\n> +\t/* protect against overflow */\n> +\tif (unlikely((uint64) mainrdata_len + (uint64) len > UINT32_MAX))\n> +\t\telog(ERROR, \"too much WAL data\");\n> +\n> \trdata = &rdatas[num_rdatas++];\n> \n> \trdata->data = data;\n\nCould check for just AllocSizeValid(mainrdata_len), if we're only \nworried about the total size of the data to exceed the limit, and assume \nthat each individual piece of data is smaller.\n\nWe also don't check for negative 'len'. I think that's fine, the caller \nbears some responsibility for passing valid arguments too. But maybe \nuint32 or size_t would be more appropriate here.\n\nI wonder if these checks hurt performance. These are very cheap, but \nthen again, this codepath is very hot. It's probably fine, but it still \nworries me a little. Maybe some of these could be Asserts.\n\n> @@ -387,6 +392,11 @@ XLogRegisterBufData(uint8 block_id, char *data, int len)\n> \n> \tif (num_rdatas >= max_rdatas)\n> \t\telog(ERROR, \"too much WAL data\");\n> +\n> +\t/* protect against overflow */\n> +\tif (unlikely((uint64) regbuf->rdata_len + (uint64) len > UINT32_MAX))\n> +\t\telog(ERROR, \"too much WAL data\");\n> +\n> \trdata = &rdatas[num_rdatas++];\n> \n> \trdata->data = data;\n\nCould check \"len > UINT16_MAX\". As you noted in XLogRecordAssemble, \nthat's the real limit. 
And if you check for that here, you don't need to \ncheck it in XLogRecordAssemble.\n\n> @@ -505,7 +515,7 @@ XLogRecordAssemble(RmgrId rmid, uint8 info,\n> \t\t\t\t XLogRecPtr *fpw_lsn, int *num_fpi, bool *topxid_included)\n> {\n> \tXLogRecData *rdt;\n> -\tuint32\t\ttotal_len = 0;\n> +\tuint64\t\ttotal_len = 0;\n> \tint\t\t\tblock_id;\n> \tpg_crc32c\trdata_crc;\n> \tregistered_buffer *prev_regbuf = NULL;\n\nI don't think the change to uint64 is necessary. If all the data blocks \nare limited to 64 kB, and the number of blocks is limited, and the \nnumber of blocks is limited too.\n\n> @@ -734,6 +744,10 @@ XLogRecordAssemble(RmgrId rmid, uint8 info,\n> \n> \t\tif (needs_data)\n> \t\t{\n> +\t\t\t/* protect against overflow */\n> +\t\t\tif (unlikely(regbuf->rdata_len > UINT16_MAX))\n> +\t\t\t\telog(ERROR, \"too much WAL data for registered buffer\");\n> +\n> \t\t\t/*\n> \t\t\t * Link the caller-supplied rdata chain for this buffer to the\n> \t\t\t * overall list.\n> @@ -836,6 +850,13 @@ XLogRecordAssemble(RmgrId rmid, uint8 info,\n> \tfor (rdt = hdr_rdt.next; rdt != NULL; rdt = rdt->next)\n> \t\tCOMP_CRC32C(rdata_crc, rdt->data, rdt->len);\n> \n> +\t/*\n> +\t * Ensure that xlogreader.c can read the record; and check that we don't\n> +\t * accidentally overflow the size of the record.\n> +\t * */\n> +\tif (unlikely(!AllocSizeIsValid(total_len) || total_len > UINT32_MAX))\n> +\t\telog(ERROR, \"too much registered data for WAL record\");\n> +\n> \t/*\n> \t * Fill in the fields in the record header. Prev-link is filled in later,\n> \t * once we know where in the WAL the record will be inserted. The CRC does\n\nIt's enough to check AllocSizeIsValid(total_len), no need to also check \nagainst UINT32_MAX.\n\n- Heikki\n\n\n",
"msg_date": "Fri, 11 Mar 2022 22:42:42 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Fri, Mar 11, 2022 at 3:42 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> Have you been able to create a test case for that? The largest record I\n> can think of is a commit record with a huge number of subtransactions,\n> dropped relations, and shared inval messages. I'm not sure if you can\n> overflow a uint32 with that, but exceeding MaxAllocSize seems possible.\n\nI believe that wal_level=logical can generate very large update and\ndelete records, especially with REPLICA IDENTITY FULL.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 11 Mar 2022 16:12:30 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-11 22:42:42 +0200, Heikki Linnakangas wrote:\n> Have you been able to create a test case for that? The largest record I can\n> think of is a commit record with a huge number of subtransactions, dropped\n> relations, and shared inval messages. I'm not sure if you can overflow a\n> uint32 with that, but exceeding MaxAllocSize seems possible.\n\nMaxAllocSize is pretty easy:\nSELECT pg_logical_emit_message(false, long, long) FROM repeat(repeat(' ', 1024), 1024*1023) as l(long);\n\non a standby:\n\n2022-03-11 16:41:59.336 PST [3639744][startup][1/0:0] LOG: record length 2145386550 at 0/3000060 too long\n\n\n\n> I wonder if these checks hurt performance. These are very cheap, but then\n> again, this codepath is very hot. It's probably fine, but it still worries\n> me a little. Maybe some of these could be Asserts.\n\nI wouldn't expect the added branch itself to hurt much in XLogRegisterData() -\nit should be statically predicted to be not taken with the unlikely. I don't\nthink it's quite inner-loop enough for the instructions or the number of\n\"concurrently out of order branches\" to be a problem.\n\nFWIW, often the added elog()s are worse, because they require a decent amount\nof code and restrict the optimizer somewhat (e.g. no sibling calls, more local\nvariables etc). 
They can't even be deduplicated because of the line-numbers\nembedded.\n\nSo maybe just collapse the new elog() with the previous elog, with a common\nunlikely()?\n\n\n> > @@ -734,6 +744,10 @@ XLogRecordAssemble(RmgrId rmid, uint8 info,\n> > \t\tif (needs_data)\n> > \t\t{\n> > +\t\t\t/* protect against overflow */\n> > +\t\t\tif (unlikely(regbuf->rdata_len > UINT16_MAX))\n> > +\t\t\t\telog(ERROR, \"too much WAL data for registered buffer\");\n> > +\n> > \t\t\t/*\n> > \t\t\t * Link the caller-supplied rdata chain for this buffer to the\n> > \t\t\t * overall list.\n\nFWIW, this branch I'm a tad more concerned about - it's in a loop body where\nplausibly a lot of branches could be outstanding at the same time.\n\nISTM that this could just be an assert?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 11 Mar 2022 17:03:15 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "Thank you all for the feedback. Please find attached v2 of the\npatchset, which contains updated comments and applies the suggested\nchanges.\n\nOn Sat, 12 Mar 2022 at 02:03, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-03-11 22:42:42 +0200, Heikki Linnakangas wrote:\n> > Have you been able to create a test case for that? The largest record I can\n> > think of is a commit record with a huge number of subtransactions, dropped\n> > relations, and shared inval messages. I'm not sure if you can overflow a\n> > uint32 with that, but exceeding MaxAllocSize seems possible.\n>\n> MaxAllocSize is pretty easy:\n> SELECT pg_logical_emit_message(false, long, long) FROM repeat(repeat(' ', 1024), 1024*1023) as l(long);\n>\n> on a standby:\n>\n> 2022-03-11 16:41:59.336 PST [3639744][startup][1/0:0] LOG: record length 2145386550 at 0/3000060 too long\n\nThanks for the reference. I was already playing around with 2PC log\nrecords (which can theoretically contain >4GB of data); but your\nexample is much easier and takes significantly less time.\n\nI'm not sure whether or not to include this in the test suite, though,\nas this would require a machine with at least 1GB of memory available\nfor this test alone, and I don't know the current requirements for\nrunning the test suite.\n\n> > I wonder if these checks hurt performance. These are very cheap, but then\n> > again, this codepath is very hot. It's probably fine, but it still worries\n> > me a little. Maybe some of these could be Asserts.\n>\n> I wouldn't expect the added branch itself to hurt much in XLogRegisterData() -\n> it should be statically predicted to be not taken with the unlikely. I don't\n> think it's quite inner-loop enough for the instructions or the number of\n> \"concurrently out of order branches\" to be a problem.\n>\n> FWIW, often the added elog()s are worse, because they require a decent amount\n> of code and restrict the optimizer somewhat (e.g. 
no sibling calls, more local\n> variables etc). They can't even be deduplicated because of the line-numbers\n> embedded.\n>\n> So maybe just collapse the new elog() with the previous elog, with a common\n> unlikely()?\n\nUpdated.\n\n> > > @@ -734,6 +744,10 @@ XLogRecordAssemble(RmgrId rmid, uint8 info,\n> > > if (needs_data)\n> > > {\n> > > + /* protect against overflow */\n> > > + if (unlikely(regbuf->rdata_len > UINT16_MAX))\n> > > + elog(ERROR, \"too much WAL data for registered buffer\");\n> > > +\n> > > /*\n> > > * Link the caller-supplied rdata chain for this buffer to the\n> > > * overall list.\n>\n> FWIW, this branch I'm a tad more concerned about - it's in a loop body where\n> plausibly a lot of branches could be outstanding at the same time.\n>\n> ISTM that this could just be an assert?\n\nThis specific location has been replaced with an Assert, while\nXLogRegisterBufData always does the unlikely()-ed bounds check.\n\nKind regards,\n\nMatthias",
"msg_date": "Mon, 14 Mar 2022 17:57:23 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "Hi\n\nA random thought I had while thinking about the size limits: We could use the\nlow bits of the length and xl_prev to store XLR_SPECIAL_REL_UPDATE |\nXLR_CHECK_CONSISTENCY and give rmgrs the full 8 bit of xl_info. Which would\nallow us to e.g. get away from needing Heap2. Which would aestethically be\npleasing.\n\n\n\nOn 2022-03-14 17:57:23 +0100, Matthias van de Meent wrote:\n> I'm not sure whether or not to include this in the test suite, though,\n> as this would require a machine with at least 1GB of memory available\n> for this test alone, and I don't know the current requirements for\n> running the test suite.\n\nWe definitely shouldn't require this much RAM for the tests.\n\nIt might be worth adding tests exercising edge cases around segment boundaries\n(and perhaps page boundaries) though. E.g. record headers split across pages\nand segments.\n\n\n\n> --- a/src/backend/access/transam/xloginsert.c\n> +++ b/src/backend/access/transam/xloginsert.c\n> @@ -338,10 +338,16 @@ XLogRegisterData(char *data, int len)\n> {\n> \tXLogRecData *rdata;\n> \n> -\tAssert(begininsert_called);\n> +\tAssert(begininsert_called && len >= 0 && AllocSizeIsValid(len));\n\nShouldn't we just make the length argument unsigned?\n\n\n> -\tif (num_rdatas >= max_rdatas)\n> +\t/*\n> +\t * Check against max_rdatas; and ensure we don't fill a record with\n> +\t * more data than can be replayed\n> +\t */\n> +\tif (unlikely(num_rdatas >= max_rdatas ||\n> +\t\t\t\t !AllocSizeIsValid((uint64) mainrdata_len + (uint64) len)))\n> \t\telog(ERROR, \"too much WAL data\");\n> +\n> \trdata = &rdatas[num_rdatas++];\n\nPersonally I'd write it as unlikely(num_rdatas >= max_rdatas) || unlikely(...)\nbut I doubt if it makes an actual difference to the compiler.\n\n\n> \trdata->data = data;\n> @@ -377,7 +383,7 @@ XLogRegisterBufData(uint8 block_id, char *data, int len)\n> \tregistered_buffer *regbuf;\n> \tXLogRecData *rdata;\n> \n> -\tAssert(begininsert_called);\n> 
+\tAssert(begininsert_called && len >= 0 && len <= UINT16_MAX);\n> \n> \t/* find the registered buffer struct */\n> \tregbuf = ®istered_buffers[block_id];\n> @@ -385,8 +391,14 @@ XLogRegisterBufData(uint8 block_id, char *data, int len)\n> \t\telog(ERROR, \"no block with id %d registered with WAL insertion\",\n> \t\t\t block_id);\n> \n> -\tif (num_rdatas >= max_rdatas)\n> +\t/*\n> +\t * Check against max_rdatas; and ensure we don't register more data per\n> +\t * buffer than can be handled by the physical record format.\n> +\t */\n> +\tif (unlikely(num_rdatas >= max_rdatas ||\n> +\t\t\t\t regbuf->rdata_len + len > UINT16_MAX))\n> \t\telog(ERROR, \"too much WAL data\");\n> +\n> \trdata = &rdatas[num_rdatas++];\n\nGiven the repeated check it might be worth to just put it in a static inline\nused from the relevant places (which'd generate less code because the same\nline number would be used for all the checks).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 14 Mar 2022 10:14:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Mon, 14 Mar 2022 at 18:14, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi\n>\n> A random thought I had while thinking about the size limits: We could use the\n> low bits of the length and xl_prev to store XLR_SPECIAL_REL_UPDATE |\n> XLR_CHECK_CONSISTENCY and give rmgrs the full 8 bit of xl_info. Which would\n> allow us to e.g. get away from needing Heap2. Which would aestethically be\n> pleasing.\n\nThat would be interesting; though out of scope for this bug I'm trying to fix.\n\n> On 2022-03-14 17:57:23 +0100, Matthias van de Meent wrote:\n> > I'm not sure whether or not to include this in the test suite, though,\n> > as this would require a machine with at least 1GB of memory available\n> > for this test alone, and I don't know the current requirements for\n> > running the test suite.\n>\n> We definitely shouldn't require this much RAM for the tests.\n>\n> It might be worth adding tests exercising edge cases around segment boundaries\n> (and perhaps page boundaries) though. E.g. 
record headers split across pages\n> and segments.\n>\n>\n>\n> > --- a/src/backend/access/transam/xloginsert.c\n> > +++ b/src/backend/access/transam/xloginsert.c\n> > @@ -338,10 +338,16 @@ XLogRegisterData(char *data, int len)\n> > {\n> > XLogRecData *rdata;\n> >\n> > - Assert(begininsert_called);\n> > + Assert(begininsert_called && len >= 0 && AllocSizeIsValid(len));\n>\n> Shouldn't we just make the length argument unsigned?\n\nI've applied that in the attached revision; but I'd like to note that\nthis makes the fix less straightforward to backpatch; as the changes\nto the public function signatures shouldn't be applied in older\nversions.\n\n> > - if (num_rdatas >= max_rdatas)\n> > + /*\n> > + * Check against max_rdatas; and ensure we don't fill a record with\n> > + * more data than can be replayed\n> > + */\n> > + if (unlikely(num_rdatas >= max_rdatas ||\n> > + !AllocSizeIsValid((uint64) mainrdata_len + (uint64) len)))\n> > elog(ERROR, \"too much WAL data\");\n> > +\n> > rdata = &rdatas[num_rdatas++];\n>\n> Personally I'd write it as unlikely(num_rdatas >= max_rdatas) || unlikely(...)\n> but I doubt if it makes an actual difference to the compiler.\n\nAgreed, updated.\n\n> > rdata->data = data;\n> > @@ -377,7 +383,7 @@ XLogRegisterBufData(uint8 block_id, char *data, int len)\n> > registered_buffer *regbuf;\n> > XLogRecData *rdata;\n> >\n> > - Assert(begininsert_called);\n> > + Assert(begininsert_called && len >= 0 && len <= UINT16_MAX);\n> >\n> > /* find the registered buffer struct */\n> > regbuf = ®istered_buffers[block_id];\n> > @@ -385,8 +391,14 @@ XLogRegisterBufData(uint8 block_id, char *data, int len)\n> > elog(ERROR, \"no block with id %d registered with WAL insertion\",\n> > block_id);\n> >\n> > - if (num_rdatas >= max_rdatas)\n> > + /*\n> > + * Check against max_rdatas; and ensure we don't register more data per\n> > + * buffer than can be handled by the physical record format.\n> > + */\n> > + if (unlikely(num_rdatas >= max_rdatas ||\n> > + 
regbuf->rdata_len + len > UINT16_MAX))\n> > elog(ERROR, \"too much WAL data\");\n> > +\n> > rdata = &rdatas[num_rdatas++];\n>\n> Given the repeated check it might be worth to just put it in a static inline\n> used from the relevant places (which'd generate less code because the same\n> line number would be used for all the checks).\n\nThe check itself is slightly different in those 3 places; but the\nerror message is shared. Do you mean to extract the elog() into a\nstatic inline function (as attached), or did I misunderstand?\n\n\n-Matthias",
"msg_date": "Tue, 15 Mar 2022 20:48:58 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "Apart from registering this in CF 2022-07 last Friday, I've also just\nadded this issue to the Open Items list for PG15 under \"Older bugs\naffecting stable branches\"; as a precaution to not lose track of this\nissue in the buzz of the upcoming feature freeze.\n\n-Matthias\n\n\n",
"msg_date": "Tue, 15 Mar 2022 23:57:48 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "Seeing that the busiest time for PG15 - the last commitfest before the\nfeature freeze - has passed, could someone take another look at this?\n\nThe changes that were requested by Heikki and Andres have been merged\ninto patch v3, and I think it would be nice to fix this security issue\nin the upcoming minor release(s).\n\n\n-Matthias\n\n\n",
"msg_date": "Mon, 18 Apr 2022 17:48:50 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 05:48:50PM +0200, Matthias van de Meent wrote:\n> Seeing that the busiest time for PG15 - the last commitfest before the\n> feature freeze - has passed, could someone take another look at this?\n\nThe next minor release is three weeks away, so now would be a good\ntime to get that addressed. Heikki, Andres, are you planning to look\nmore at what has been proposed here?\n--\nMichael",
"msg_date": "Tue, 19 Apr 2022 14:19:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "Hi,\n\n > > MaxAllocSize is pretty easy:\n > > SELECT pg_logical_emit_message(false, long, long) FROM \nrepeat(repeat(' ', 1024), 1024*1023) as l(long);\n > >\n > > on a standby:\n > >\n > > 2022-03-11 16:41:59.336 PST [3639744][startup][1/0:0] LOG: record \nlength 2145386550 at 0/3000060 too long\n >\n > Thanks for the reference. I was already playing around with 2PC log\n > records (which can theoretically contain >4GB of data); but your\n > example is much easier and takes significantly less time.\n\nA little confused here, does this patch V3 intend to solve this problem \n\"record length 2145386550 at 0/3000060 too long\"?\n\nI set up a simple Primary and Standby stream replication environment, \nand use the above query to run the test for before and after patch v3. \nThe error message still exist, but with different message.\n\nBefore patch v3, the error is showing below,\n\n2022-06-10 15:32:25.307 PDT [4253] LOG: record length 2145386550 at \n0/3000060 too long\n2022-06-10 15:32:47.763 PDT [4257] FATAL: terminating walreceiver \nprocess due to administrator command\n2022-06-10 15:32:47.763 PDT [4253] LOG: record length 2145386550 at \n0/3000060 too long\n\nAfter patch v3, the error displays differently\n\n2022-06-10 15:53:53.397 PDT [12848] LOG: record length 2145386550 at \n0/3000060 too long\n2022-06-10 15:54:07.249 PDT [12852] FATAL: could not receive data from \nWAL stream: ERROR: requested WAL segment 000000010000000000000045 has \nalready been removed\n2022-06-10 15:54:07.275 PDT [12848] LOG: record length 2145386550 at \n0/3000060 too long\n\nAnd once the error happens, then the Standby can't continue the replication.\n\n\nIs a particular reason to say \"more datas\" at line 52 in patch v3?\n\n+ * more datas than are being accounted for by the XLog infrastructure.\n\n\nOn 2022-04-18 10:19 p.m., Michael Paquier wrote:\n> On Mon, Apr 18, 2022 at 05:48:50PM +0200, Matthias van de Meent wrote:\n>> Seeing that the busiest time for PG15 - 
the last commitfest before the\n>> feature freeze - has passed, could someone take another look at this?\n> The next minor release is three weeks away, so now would be a good\n> time to get that addressed. Heikki, Andres, are you planning to look\n> more at what has been proposed here?\n> --\n> Michael\n\nThank you,\n\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca",
"msg_date": "Fri, 10 Jun 2022 16:31:53 -0700",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Sat, 11 Jun 2022 at 01:32, David Zhang <david.zhang@highgo.ca> wrote:\n>\n> Hi,\n>\n> > > MaxAllocSize is pretty easy:\n> > > SELECT pg_logical_emit_message(false, long, long) FROM repeat(repeat(' ', 1024), 1024*1023) as l(long);\n> > >\n> > > on a standby:\n> > >\n> > > 2022-03-11 16:41:59.336 PST [3639744][startup][1/0:0] LOG: record length 2145386550 at 0/3000060 too long\n> >\n> > Thanks for the reference. I was already playing around with 2PC log\n> > records (which can theoretically contain >4GB of data); but your\n> > example is much easier and takes significantly less time.\n>\n> A little confused here, does this patch V3 intend to solve this problem \"record length 2145386550 at 0/3000060 too long\"?\n\nNo, not once the record exists. But it does remove Postgres' ability\nto create such records, thereby solving the problem for all systems\nthat generate WAL through Postgres' WAL writing APIs.\n\n> I set up a simple Primary and Standby stream replication environment, and use the above query to run the test for before and after patch v3. The error message still exist, but with different message.\n>\n> Before patch v3, the error is showing below,\n>\n> 2022-06-10 15:32:25.307 PDT [4253] LOG: record length 2145386550 at 0/3000060 too long\n> 2022-06-10 15:32:47.763 PDT [4257] FATAL: terminating walreceiver process due to administrator command\n> 2022-06-10 15:32:47.763 PDT [4253] LOG: record length 2145386550 at 0/3000060 too long\n>\n> After patch v3, the error displays differently\n>\n> 2022-06-10 15:53:53.397 PDT [12848] LOG: record length 2145386550 at 0/3000060 too long\n> 2022-06-10 15:54:07.249 PDT [12852] FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000000000045 has already been removed\n> 2022-06-10 15:54:07.275 PDT [12848] LOG: record length 2145386550 at 0/3000060 too long\n>\n> And once the error happens, then the Standby can't continue the replication.\n\nDid you initiate a new cluster or otherwise skip the invalid record\nyou generated when running the instance based on master? It seems to\nme you're trying to replay the invalid record (len > MaxAllocSize),\nand this patch does not try to fix that issue. This patch just tries\nto forbid emitting records larger than MaxAllocSize, as per the check\nin XLogRecordAssemble, so that we wont emit unreadable records into\nthe WAL anymore.\n\nReading unreadable records still won't be possible, but that's also\nnot something I'm trying to fix.\n\n> Is a particular reason to say \"more datas\" at line 52 in patch v3?\n>\n> + * more datas than are being accounted for by the XLog infrastructure.\n\nYes. This error is thrown when you try to register a 34th block, or an\nNth rdata where the caller previously only reserved n - 1 data slots.\nAs such 'datas', for the num_rdatas and max_rdatas variables.\n\nThanks for looking at the patch.\n\n- Matthias\n\n\n",
"msg_date": "Sat, 11 Jun 2022 21:25:48 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "\n>> A little confused here, does this patch V3 intend to solve this problem \"record length 2145386550 at 0/3000060 too long\"?\n> No, not once the record exists. But it does remove Postgres' ability\n> to create such records, thereby solving the problem for all systems\n> that generate WAL through Postgres' WAL writing APIs.\n>\n>> I set up a simple Primary and Standby stream replication environment, and use the above query to run the test for before and after patch v3. The error message still exist, but with different message.\n>>\n>> Before patch v3, the error is showing below,\n>>\n>> 2022-06-10 15:32:25.307 PDT [4253] LOG: record length 2145386550 at 0/3000060 too long\n>> 2022-06-10 15:32:47.763 PDT [4257] FATAL: terminating walreceiver process due to administrator command\n>> 2022-06-10 15:32:47.763 PDT [4253] LOG: record length 2145386550 at 0/3000060 too long\n>>\n>> After patch v3, the error displays differently\n>>\n>> 2022-06-10 15:53:53.397 PDT [12848] LOG: record length 2145386550 at 0/3000060 too long\n>> 2022-06-10 15:54:07.249 PDT [12852] FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000000000045 has already been removed\n>> 2022-06-10 15:54:07.275 PDT [12848] LOG: record length 2145386550 at 0/3000060 too long\n>>\n>> And once the error happens, then the Standby can't continue the replication.\n> Did you initiate a new cluster or otherwise skip the invalid record\n> you generated when running the instance based on master? It seems to\n> me you're trying to replay the invalid record (len > MaxAllocSize),\n> and this patch does not try to fix that issue. This patch just tries\n> to forbid emitting records larger than MaxAllocSize, as per the check\n> in XLogRecordAssemble, so that we wont emit unreadable records into\n> the WAL anymore.\n>\n> Reading unreadable records still won't be possible, but that's also\n> not something I'm trying to fix.\n\nThanks a lot for the clarification. My testing environment is pretty \nsimple, initdb for Primary, run basebackup and set the connection string \nfor Standby, then run the \"pg_logical_emit_message\" query and tail the \nlog on standby side.\n\nBest regards,\n\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca\n\n\n",
"msg_date": "Mon, 13 Jun 2022 15:18:45 -0700",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Sat, Jun 11, 2022 at 09:25:48PM +0200, Matthias van de Meent wrote:\n> Did you initiate a new cluster or otherwise skip the invalid record\n> you generated when running the instance based on master? It seems to\n> me you're trying to replay the invalid record (len > MaxAllocSize),\n> and this patch does not try to fix that issue. This patch just tries\n> to forbid emitting records larger than MaxAllocSize, as per the check\n> in XLogRecordAssemble, so that we wont emit unreadable records into\n> the WAL anymore.\n> \n> Reading unreadable records still won't be possible, but that's also\n> not something I'm trying to fix.\n\nAs long as you cannot generate such WAL records that should be fine as\nwAL is not reused across upgrades, so this kind of restriction is a\nno-brainer on HEAD. The back-patching argument is not on the table\nanyway, as some of the routine signatures change with the unsigned\narguments, because of those safety checks.\n\n+ if (unlikely(num_rdatas >= max_rdatas) ||\n+ unlikely(!AllocSizeIsValid((uint64) mainrdata_len + (uint64) len)))\n+ XLogErrorDataLimitExceeded();\n[...]\n+inline void\n+XLogErrorDataLimitExceeded()\n+{\n+ elog(ERROR, \"too much WAL data\");\n+}\nThe three checks are different, OK.. Note that static is missing.\n\n+ if (unlikely(!AllocSizeIsValid(total_len)))\n+ XLogErrorDataLimitExceeded();\nRather than a single check at the end of XLogRecordAssemble(), you'd\nbetter look after that each time total_len is added up?\n--\nMichael",
"msg_date": "Mon, 20 Jun 2022 14:02:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Mon, 20 Jun 2022 at 07:02, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Jun 11, 2022 at 09:25:48PM +0200, Matthias van de Meent wrote:\n> > Did you initiate a new cluster or otherwise skip the invalid record\n> > you generated when running the instance based on master? It seems to\n> > me you're trying to replay the invalid record (len > MaxAllocSize),\n> > and this patch does not try to fix that issue. This patch just tries\n> > to forbid emitting records larger than MaxAllocSize, as per the check\n> > in XLogRecordAssemble, so that we wont emit unreadable records into\n> > the WAL anymore.\n> >\n> > Reading unreadable records still won't be possible, but that's also\n> > not something I'm trying to fix.\n>\n> As long as you cannot generate such WAL records that should be fine as\n> wAL is not reused across upgrades, so this kind of restriction is a\n> no-brainer on HEAD. The back-patching argument is not on the table\n> anyway, as some of the routine signatures change with the unsigned\n> arguments, because of those safety checks.\n\nThe signature change is mostly ornamental, see attached v4.backpatch.\nThe main reason for changing the signature is to make sure nobody can\nprovide a negative value, but it's not important to the patch.\n\n>\n> + if (unlikely(num_rdatas >= max_rdatas) ||\n> + unlikely(!AllocSizeIsValid((uint64) mainrdata_len + (uint64) len)))\n> + XLogErrorDataLimitExceeded();\n> [...]\n> +inline void\n> +XLogErrorDataLimitExceeded()\n> +{\n> + elog(ERROR, \"too much WAL data\");\n> +}\n> The three checks are different, OK..\n\nThey each check slightly different things, but with the same error. In\nRegisterData, it checks that the data can still be allocated and does\nnot overflow the register, in RegisterBlock it checks that the total\nlength of data registered to the block does not exceed the max value\nof XLogRecordBlockHeader->data_length. I've updated the comments above\nthe checks so that this distinction is more clear.\n\n> Note that static is missing.\n\nFixed in attached v4.patch\n\n> + if (unlikely(!AllocSizeIsValid(total_len)))\n> + XLogErrorDataLimitExceeded();\n> Rather than a single check at the end of XLogRecordAssemble(), you'd\n> better look after that each time total_len is added up?\n\nI was doing so previously, but there were some good arguments against that:\n\n- Performance of XLogRecordAssemble should be impacted as little as\npossible. XLogRecordAssemble is in many hot paths, and it is highly\nunlikely this check will be hit, because nobody else has previously\nreported this issue. Any check, however unlikely, will add some\noverhead, so removing check counts reduces overhead of this patch.\n\n- The user or system is unlikely to care about which specific check\nwas hit, and only needs to care _that_ the check was hit. An attached\ndebugger will be able to debug the internals of the xlog machinery and\nfind out the specific reasons for the error, but I see no specific\nreason why the specific reason would need to be reported to the\nconnection.\n\nKind regards,\n\nMatthias van de Meent",
"msg_date": "Mon, 20 Jun 2022 11:01:51 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Mon, Jun 20, 2022 at 11:01:51AM +0200, Matthias van de Meent wrote:\n> On Mon, 20 Jun 2022 at 07:02, Michael Paquier <michael@paquier.xyz> wrote:\n>> + if (unlikely(!AllocSizeIsValid(total_len)))\n>> + XLogErrorDataLimitExceeded();\n>> Rather than a single check at the end of XLogRecordAssemble(), you'd\n>> better look after that each time total_len is added up?\n>\n> I was doing so previously, but there were some good arguments against that:\n> \n> - Performance of XLogRecordAssemble should be impacted as little as\n> possible. XLogRecordAssemble is in many hot paths, and it is highly\n> unlikely this check will be hit, because nobody else has previously\n> reported this issue. Any check, however unlikely, will add some\n> overhead, so removing check counts reduces overhead of this patch.\n\nSome macro-benchmarking could be in place here, and this would most\nlikely become noticeable when assembling a bunch of little records?\n\n> - The user or system is unlikely to care about which specific check\n> was hit, and only needs to care _that_ the check was hit. An attached\n> debugger will be able to debug the internals of the xlog machinery and\n> find out the specific reasons for the error, but I see no specific\n> reason why the specific reason would need to be reported to the\n> connection.\n\nOkay.\n\n+ /*\n+ * Ensure that xlogreader.c can read the record by ensuring that the\n+ * data section of the WAL record can be allocated.\n+ */\n+ if (unlikely(!AllocSizeIsValid(total_len)))\n+ XLogErrorDataLimitExceeded();\n\nBy the way, while skimming through the patch, the WAL reader seems to\nbe a bit more pessimistic than this estimation, calculating the amount\nto allocate as of DecodeXLogRecordRequiredSpace(), based on the\nxl_tot_len given by a record.\n--\nMichael",
"msg_date": "Tue, 21 Jun 2022 10:44:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Tue, 21 Jun 2022 at 03:45, Michael Paquier <michael@paquier.xyz> wrote:\n> + /*\n> + * Ensure that xlogreader.c can read the record by ensuring that the\n> + * data section of the WAL record can be allocated.\n> + */\n> + if (unlikely(!AllocSizeIsValid(total_len)))\n> + XLogErrorDataLimitExceeded();\n>\n> By the way, while skimming through the patch, the WAL reader seems to\n> be a bit more pessimistic than this estimation, calculating the amount\n> to allocate as of DecodeXLogRecordRequiredSpace(), based on the\n> xl_tot_len given by a record.\n\nI see, thanks for notifying me about that.\n\nPFA a correction for that issue. It does copy over the value for\nMaxAllocSize from memutils.h into xlogreader.h, because we need that\nvalue in FRONTEND builds too, and memutils.h can't be included in\nFRONTEND builds. One file suffixed with .backpatch that doesn't\ninclude the function signature changes, but it is not optimized for\nany stable branch[15].\n\n-Matthias\n\nPS. I'm not amused by the double copy we do in the xlogreader, as I\nhad expected we'd just read the record and point into that single\nxl_rec_len-sized buffer. Apparently that's not how it works...\n\n[15] it should apply to stable branches all the way back to\nREL_15_STABLE and still work as expected. Any older than that I\nhaven't tested, but probably only require some updates for\nXLogRecMaxLength in xlogreader.h.",
"msg_date": "Fri, 1 Jul 2022 17:11:05 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "Hi,\n\nI tried to apply this patch v5 to current master branch but it complains,\n\"git apply --check \nv5-0001-Add-protections-in-xlog-record-APIs-against-large.patch\nerror: patch failed: src/include/access/xloginsert.h:43\nerror: src/include/access/xloginsert.h: patch does not apply\"\n\nthen I checked it out before the commit \n`b0a55e43299c4ea2a9a8c757f9c26352407d0ccc` and applied this v5 patch.\n\n1) both make check and make installcheck passed.\n\n2) and I can also see this patch v5 prevents the error happens previously,\n\n\"postgres=# SELECT pg_logical_emit_message(false, long, long) FROM \nrepeat(repeat(' ', 1024), 1024*1023) as l(long);\nERROR: too much WAL data\"\n\n3) without this v5 patch, the same test will cause the standby crash \nlike below, and the standby not be able to boot up after this crash.\n\n\"2022-07-08 12:28:16.425 PDT [2363] FATAL: invalid memory alloc request \nsize 2145388995\n2022-07-08 12:28:16.426 PDT [2360] LOG: startup process (PID 2363) \nexited with exit code 1\n2022-07-08 12:28:16.426 PDT [2360] LOG: terminating any other active \nserver processes\n2022-07-08 12:28:16.427 PDT [2360] LOG: shutting down due to startup \nprocess failure\n2022-07-08 12:28:16.428 PDT [2360] LOG: database system is shut down\"\n\n\nBest regards,\n\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca\n\n\n",
"msg_date": "Fri, 8 Jul 2022 12:35:22 -0700",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Fri, 8 Jul 2022 at 21:35, David Zhang <david.zhang@highgo.ca> wrote:\n>\n> Hi,\n>\n> I tried to apply this patch v5 to current master branch but it complains,\n> \"git apply --check\n> v5-0001-Add-protections-in-xlog-record-APIs-against-large.patch\n> error: patch failed: src/include/access/xloginsert.h:43\n> error: src/include/access/xloginsert.h: patch does not apply\"\n>\n> then I checked it out before the commit\n> `b0a55e43299c4ea2a9a8c757f9c26352407d0ccc` and applied this v5 patch.\n\nThe attached rebased patchset should work with master @ 2cd2569c and\nREL_15_STABLE @ 53df1e28. I've also added a patch that works for PG14\nand earlier, which should be correct for all versions that include\ncommit 2c03216d (that is, all versions back to 9.5).\n\n> 1) both make check and make installcheck passed.\n>\n> 2) and I can also see this patch v5 prevents the error happens previously,\n>\n> \"postgres=# SELECT pg_logical_emit_message(false, long, long) FROM\n> repeat(repeat(' ', 1024), 1024*1023) as l(long);\n> ERROR: too much WAL data\"\n>\n> 3) without this v5 patch, the same test will cause the standby crash\n> like below, and the standby not be able to boot up after this crash.\n\nThanks for reviewing.\n\nKind regards,\n\nMatthias van de Meent",
"msg_date": "Mon, 11 Jul 2022 14:26:46 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Mon, Jul 11, 2022 at 02:26:46PM +0200, Matthias van de Meent wrote:\n> Thanks for reviewing.\n\nI think that v6 is over-engineered because there should be no need to\nadd a check in xlogreader.c as long as the origin of the problem is\nblocked, no? And the origin here is when the record is assembled. At\nleast this is the cleanest solution for HEAD, but not in the\nback-branches if we'd care about doing something with records already\ngenerated, and I am not sure that we need to care about other things\nthan HEAD, TBH. So it seems to me that there is no need to create a\nXLogRecMaxLength which is close to a duplicate of\nDecodeXLogRecordRequiredSpace().\n\n@@ -519,7 +549,7 @@ XLogRecordAssemble(RmgrId rmid, uint8 info,\n XLogRecPtr *fpw_lsn, int *num_fpi, bool *topxid_included)\n\n {\n XLogRecData *rdt;\n- uint32 total_len = 0;\n+ uint64 total_len = 0;\nThis has no need to change.\n\nMy suggestion from upthread was close to what you proposed, but I had\nin mind something simpler, as of:\n\n+ /*\n+ * Ensure that xlogreader.c can read the record.\n+ */\n+ if (unlikely(!AllocSizeIsValid(DecodeXLogRecordRequiredSpace(total_len))))\n+ elog(ERROR, \"too much WAL data\");\n\nThis would be the amount of data allocated by the WAL reader when it\nis possible to allocate an oversized record, related to the business\nof the circular buffer depending on if the read is blocking or not.\n\nAmong the two problems to solve at hand, the parts where the APIs are\nchanged and made more robust with unsigned types and where block data\nis not overflowed with its 16-byte limit are committable, so I'd like\nto do that first (still need to check its performance with some micro\nbenchmark on XLogRegisterBufData()). The second part to block the\ncreation of the assembled record is simpler, now\nDecodeXLogRecordRequiredSpace() would make the path a bit hotter,\nthough we could inline it in the worst case?\n--\nMichael",
"msg_date": "Wed, 13 Jul 2022 14:54:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Mon, 14 Mar 2022 at 18:14, Andres Freund <andres@anarazel.de> wrote:\n>\n> A random thought I had while thinking about the size limits: We could use the\n> low bits of the length and xl_prev to store XLR_SPECIAL_REL_UPDATE |\n> XLR_CHECK_CONSISTENCY and give rmgrs the full 8 bit of xl_info. Which would\n> allow us to e.g. get away from needing Heap2. Which would aestethically be\n> pleasing.\n\nI just remembered your comment while going through the xlog code and\nthought this about the same issue: We still have 2 bytes of padding in\nXLogRecord, between xl_rmid and xl_crc. Can't we instead use that\nspace for rmgr-specific flags, as opposed to stealing bits from\nxl_info?\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Fri, 15 Jul 2022 11:25:54 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-15 11:25:54 +0200, Matthias van de Meent wrote:\n> On Mon, 14 Mar 2022 at 18:14, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > A random thought I had while thinking about the size limits: We could use the\n> > low bits of the length and xl_prev to store XLR_SPECIAL_REL_UPDATE |\n> > XLR_CHECK_CONSISTENCY and give rmgrs the full 8 bit of xl_info. Which would\n> > allow us to e.g. get away from needing Heap2. Which would aestethically be\n> > pleasing.\n> \n> I just remembered your comment while going through the xlog code and\n> thought this about the same issue: We still have 2 bytes of padding in\n> XLogRecord, between xl_rmid and xl_crc. Can't we instead use that\n> space for rmgr-specific flags, as opposed to stealing bits from\n> xl_info?\n\nSounds like a good idea to me. I'm not sure who is stealing bits from what\nright now, but it clearly seems worthwhile to separate \"flags\" from \"record\ntype within rmgr\".\n\nI think we should split it at least into three things:\n\n1) generic per-record flags for xlog machinery (ie. XLR_SPECIAL_REL_UPDATE, XLR_CHECK_CONSISTENCY)\n2) rmgr record type identifier (e.g. XLOG_HEAP_*)\n2) rmgr specific flags (e.g. XLOG_HEAP_INIT_PAGE)\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 15 Jul 2022 10:37:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On 13/07/2022 08:54, Michael Paquier wrote:\n> I think that v6 is over-engineered because there should be no need to\n> add a check in xlogreader.c as long as the origin of the problem is\n> blocked, no? And the origin here is when the record is assembled. At\n> least this is the cleanest solution for HEAD, but not in the\n> back-branches if we'd care about doing something with records already\n> generated, and I am not sure that we need to care about other things\n> than HEAD, TBH. So it seems to me that there is no need to create a\n> XLogRecMaxLength which is close to a duplicate of\n> DecodeXLogRecordRequiredSpace().\n> \n> @@ -519,7 +549,7 @@ XLogRecordAssemble(RmgrId rmid, uint8 info,\n> XLogRecPtr *fpw_lsn, int *num_fpi, bool *topxid_included)\n> \n> {\n> XLogRecData *rdt;\n> - uint32 total_len = 0;\n> + uint64 total_len = 0;\n> This has no need to change.\n> \n> My suggestion from upthread was close to what you proposed, but I had\n> in mind something simpler, as of:\n> \n> + /*\n> + * Ensure that xlogreader.c can read the record.\n> + */\n> + if (unlikely(!AllocSizeIsValid(DecodeXLogRecordRequiredSpace(total_len))))\n> + elog(ERROR, \"too much WAL data\");\n> \n> This would be the amount of data allocated by the WAL reader when it\n> is possible to allocate an oversized record, related to the business\n> of the circular buffer depending on if the read is blocking or not.\n\nThe way this is written, it would change whenever we add/remove fields \nin DecodedBkpBlock, for example. That's fragile; if you added a field in \na back-branch, you could accidentally make the new minor version unable \nto read maximum-sized WAL records generated with an older version. I'd \nlike the maximum to be more explicit.\n\nHow large exactly is the maximum size that this gives? I'd prefer to set \nthe limit conservatively to 1020 MB, for example, with a compile-time \nstatic assertion that \nAllocSizeIsValid(DecodeXLogRecordRequiredSpace(1020 MB)).\n\n> Among the two problems to solve at hand, the parts where the APIs are\n> changed and made more robust with unsigned types and where block data\n> is not overflowed with its 16-byte limit are committable, so I'd like\n> to do that first (still need to check its performance with some micro\n> benchmark on XLogRegisterBufData()).\n\n+1. I'm not excited about adding the \"unlikely()\" hints, though. We have \na pg_attribute_cold hint in ereport(), that should be enough.\n\n- Heikki\n\n\n",
"msg_date": "Mon, 25 Jul 2022 14:12:05 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Wed, 13 Jul 2022 at 07:54, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Jul 11, 2022 at 02:26:46PM +0200, Matthias van de Meent wrote:\n> > Thanks for reviewing.\n>\n> I think that v6 is over-engineered because there should be no need to\n> add a check in xlogreader.c as long as the origin of the problem is\n> blocked, no? And the origin here is when the record is assembled. At\n> least this is the cleanest solution for HEAD, but not in the\n> back-branches if we'd care about doing something with records already\n> generated, and I am not sure that we need to care about other things\n> than HEAD, TBH.\n\nI would prefer it if we would fix the \"cannot catch up to primary\nbecause of oversized WAL record\" issue in backbranches too. Rather\nthan failing to recover after failure or breaking replication streams,\nI'd rather be unable to write the singular offending WAL record and\nbreak up to one transaction.\n\n> So it seems to me that there is no need to create a\n> XLogRecMaxLength which is close to a duplicate of\n> DecodeXLogRecordRequiredSpace().\n>\n> @@ -519,7 +549,7 @@ XLogRecordAssemble(RmgrId rmid, uint8 info,\n> XLogRecPtr *fpw_lsn, int *num_fpi, bool *topxid_included)\n>\n> {\n> XLogRecData *rdt;\n> - uint32 total_len = 0;\n> + uint64 total_len = 0;\n> This has no need to change.\n>\n> My suggestion from upthread was close to what you proposed, but I had\n> in mind something simpler, as of:\n>\n> + /*\n> + * Ensure that xlogreader.c can read the record.\n> + */\n> + if (unlikely(!AllocSizeIsValid(DecodeXLogRecordRequiredSpace(total_len))))\n> + elog(ERROR, \"too much WAL data\");\n\nHuh, yeah, I hadn't thought of that, but that's much simpler indeed.\n\n> This would be the amount of data allocated by the WAL reader when it\n> is possible to allocate an oversized record, related to the business\n> of the circular buffer depending on if the read is blocking or not.\n\nYes, I see your point.\n\n> Among the two problems to solve at hand, the parts where the APIs are\n> changed and made more robust with unsigned types and where block data\n> is not overflowed with its 16-byte limit are committable, so I'd like\n> to do that first (still need to check its performance with some micro\n> benchmark on XLogRegisterBufData()).\n\n> The second part to block the\n> creation of the assembled record is simpler, now\n> DecodeXLogRecordRequiredSpace() would make the path a bit hotter,\n> though we could inline it in the worst case?\n\nI think that would be better for performance, yes.\nDecodeXLogRecordRequiredSpace will already be optimized to just a\nsingle addition by any of `-O[123]`, so keeping this indirection is\nquite expensive (relative to the operation being performed).\n\nAs for your patch patch:\n\n> +XLogRegisterData(char *data, uint32 len)\n> {\n> XLogRecData *rdata;\n>\n> Assert(begininsert_called);\n>\n> - if (num_rdatas >= max_rdatas)\n> + if (unlikely(num_rdatas >= max_rdatas))\n> elog(ERROR, \"too much WAL data\");\n> rdata = &rdatas[num_rdatas++];\n\nXLogRegisterData is designed to be called multiple times for each\nrecord, and this allows the user of the API to overflow the internal\nmainrdata_len field if we don't check that the field does not exceed\nthe maximum record length (or overflow the 32-bit field). As such, I'd\nstill want a len-check in that function.\n\nI'll send an updated patch by tomorrow.\n\n- Matthias\n\n\n",
"msg_date": "Mon, 25 Jul 2022 13:17:21 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 02:12:05PM +0300, Heikki Linnakangas wrote:\n> The way this is written, it would change whenever we add/remove fields in\n> DecodedBkpBlock, for example. That's fragile; if you added a field in a\n> back-branch, you could accidentally make the new minor version unable to\n> read maximum-sized WAL records generated with an older version. I'd like the\n> maximum to be more explicit.\n\nThat's a good point.\n\n> How large exactly is the maximum size that this gives? I'd prefer to set the\n> limit conservatively to 1020 MB, for example, with a compile-time static\n> assertion that AllocSizeIsValid(DecodeXLogRecordRequiredSpace(1020 MB)).\n\nSomething like that would work, I guess.\n\n> Among the two problems to solve at hand, the parts where the APIs are\n> changed and made more robust with unsigned types and where block data\n> is not overflowed with its 16-byte limit are committable, so I'd like\n> to do that first (still need to check its performance with some micro\n> benchmark on XLogRegisterBufData()).\n> \n> +1. I'm not excited about adding the \"unlikely()\" hints, though. We have a\n> pg_attribute_cold hint in ereport(), that should be enough.\n\nOkay, that makes sense. FWIW, I have been wondering about the\naddition of the extra condition in XLogRegisterBufData() and I did not\nsee a difference on HEAD in terms of execution time or profile, with a\nmicro-benchmark doing a couple of million calls in a row as of the\nfollowing, roughly:\n // Can be anything, really..\n rel = relation_open(RelationRelationId, AccessShareLock);\n buffer = ReadBuffer(rel, 0);\n for (i = 0 ; i < WAL_MAX_CALLS ; i++)\n {\n XLogBeginInsert();\n XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);\n XLogRegisterBufData(0, buf, 10);\n XLogResetInsertion();\n }\n ReleaseBuffer(buffer);\n relation_close(rel, AccessShareLock);\n--\nMichael",
"msg_date": "Tue, 26 Jul 2022 16:20:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Tue, 26 Jul 2022 at 09:20, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Jul 25, 2022 at 02:12:05PM +0300, Heikki Linnakangas wrote:\n> > Among the two problems to solve at hand, the parts where the APIs are\n> > changed and made more robust with unsigned types and where block data\n> > is not overflowed with its 16-byte limit are committable, so I'd like\n> > to do that first (still need to check its performance with some micro\n> > benchmark on XLogRegisterBufData()).\n> >\n> > +1. I'm not excited about adding the \"unlikely()\" hints, though. We have a\n> > pg_attribute_cold hint in ereport(), that should be enough.\n>\n> Okay, that makes sense. FWIW, I have been wondering about the\n> addition of the extra condition in XLogRegisterBufData() and I did not\n> see a difference on HEAD in terms of execution time or profile, with a\n> micro-benchmark doing a couple of million calls in a row as of the\n> following, roughly:\n> [...]\n\nThanks for testing.\n\n> > How large exactly is the maximum size that this gives? I'd prefer to set the\n> > limit conservatively to 1020 MB, for example, with a compile-time static\n> > assertion that AllocSizeIsValid(DecodeXLogRecordRequiredSpace(1020 MB)).\n>\n> Something like that would work, I guess.\n\nI've gone over the patch and reviews again, and updated those places\nthat received comments:\n\n- updated the MaxXLogRecordSize and XLogRecordLengthIsValid(len)\nmacros (now in xlogrecord.h), with a max length of the somewhat\narbitrary 1020MiB.\n This leaves room for approx. 4MiB of per-record allocation overhead\nbefore you'd hit MaxAllocSize, and also detaches the dependency on\nmemutils.h.\n\n- Retained the check in XLogRegisterData, so that we check against\ninteger overflows in the registerdata code instead of only an assert\nin XLogRecordAssemble where it might be too late.\n- Kept the inline static elog-ing function (as per Andres' suggestion\non 2022-03-14; this decreases binary sizes)\n- Dropped any changes in xlogreader.h/c\n\nKind regards,\n\nMatthias van de Meent",
"msg_date": "Tue, 26 Jul 2022 18:58:02 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 06:58:02PM +0200, Matthias van de Meent wrote:\n> - Retained the check in XLogRegisterData, so that we check against\n> integer overflows in the registerdata code instead of only an assert\n> in XLogRecordAssemble where it might be too late.\n\nWhy? The record has not been inserted yet. I would tend to keep only\nthe check at the bottom of XLogRecordAssemble(), for simplicity, and\ncall it a day.\n\n> - Kept the inline static elog-ing function (as per Andres' suggestion\n> on 2022-03-14; this decreases binary sizes)\n\nI am not really convinced that this one is worth doing.\n\n+#define MaxXLogRecordSize (1020 * 1024 * 1024)\n+\n+#define XLogRecordLengthIsValid(len) ((len) >= 0 && (len) < MaxXLogRecordSize)\n\nThese are used only in xloginsert.c, so we could keep them isolated.\n\n+ * To accommodate some overhead, hhis MaxXLogRecordSize value allows for\ns/hhis/this/.\n\nFor now, I have extracted from the patch the two API changes and the\nchecks for the block information for uint16, and applied this part.\nThat's one less thing to worry about.\n--\nMichael",
"msg_date": "Wed, 27 Jul 2022 18:09:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Wed, 27 Jul 2022 at 11:09, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jul 26, 2022 at 06:58:02PM +0200, Matthias van de Meent wrote:\n> > - Retained the check in XLogRegisterData, so that we check against\n> > integer overflows in the registerdata code instead of only an assert\n> > in XLogRecordAssemble where it might be too late.\n>\n> Why? The record has not been inserted yet. I would tend to keep only\n> the check at the bottom of XLogRecordAssemble(), for simplicity, and\n> call it a day.\n\nBecause the sum value main_rdatalen can easily overflow in both the\ncurrent and the previous APIs, which then corrupts the WAL - one of\nthe two issues that I mentioned when I started the thread.\n\nWe don't re-summarize the lengths of all XLogRecData segments for the\nmain record data when assembling a record to keep the performance of\nRecordAssemble (probably to limit the complexity when many data\nsegments are registered), and because I didn't want to add more\nchanges than necessary this check will need to be done in the place\nwhere the overflow may occur, which is in XLogRegisterData.\n\n> > - Kept the inline static elog-ing function (as per Andres' suggestion\n> > on 2022-03-14; this decreases binary sizes)\n>\n> I am not really convinced that this one is worth doing.\n\nI'm not married to that change, but I also don't see why this can't be\nupdated while this code is already being touched.\n\n> +#define MaxXLogRecordSize (1020 * 1024 * 1024)\n> +\n> +#define XLogRecordLengthIsValid(len) ((len) >= 0 && (len) < MaxXLogRecordSize)\n>\n> These are used only in xloginsert.c, so we could keep them isolated.\n\nThey might be only used in xloginsert right now, but that's not the\npoint. This is now advertised as part of the record API spec: A record\nlarger than 1020MB is explicitly not supported. If it was kept\ninternal to xloginsert, that would be implicit and other people might\nstart hitting issues similar to those we're hitting right now -\nrecords that are too large to read. Although PostgreSQL is usually the\nonly one generating WAL, we do support physical replication from\narbitrary PG-compatible WAL streams, which means that any compatible\nWAL source could be the origin of our changes - and those need to be\naware of the assumptions we make about the WAL format.\n\nI'm fine with also updating xlogreader.c to check this while reading\nrecords to clarify the limits there as well, if so desired.\n\n> + * To accommodate some overhead, hhis MaxXLogRecordSize value allows for\n> s/hhis/this/.\n\nWill be included in the next update..\n\n> For now, I have extracted from the patch the two API changes and the\n> checks for the block information for uint16, and applied this part.\n> That's one less thing to worry about.\n\nThanks.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Wed, 27 Jul 2022 14:07:05 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "Hi Matthias,\n\nOn Wed, Jul 27, 2022 at 02:07:05PM +0200, Matthias van de Meent wrote:\n\nMy apologies for the time it took me to come back to this thread.\n> > + * To accommodate some overhead, hhis MaxXLogRecordSize value allows for\n> > s/hhis/this/.\n> \n> Will be included in the next update..\n\nv8 fails to apply. Could you send a rebased version?\n\nAs far as I recall the problems with the block image sizes are solved,\nbut we still have a bit more to do in terms of the overall record\nsize. Perhaps there are some parts of the patch you'd like to\nrevisit?\n\nFor now, I have switched the back as waiting on author, and moved it\nto the next CF.\n--\nMichael",
"msg_date": "Wed, 5 Oct 2022 16:46:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Wed, 5 Oct 2022 at 16:46, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Hi Matthias,\n>\n> On Wed, Jul 27, 2022 at 02:07:05PM +0200, Matthias van de Meent wrote:\n>\n> My apologies for the time it took me to come back to this thread.\n> > > + * To accommodate some overhead, hhis MaxXLogRecordSize value allows for\n> > > s/hhis/this/.\n> >\n> > Will be included in the next update..\n>\n> v8 fails to apply. Could you send a rebased version?\n>\n> As far as I recall the problems with the block image sizes are solved,\n> but we still have a bit more to do in terms of the overall record\n> size. Perhaps there are some parts of the patch you'd like to\n> revisit?\n>\n> For now, I have switched the back as waiting on author, and moved it\n> to the next CF.\n\nHi Matthias\n\nCommitFest 2022-11 is currently underway, so if you are interested\nin moving this patch forward, now would be a good time to update it.\n\nThanks\n\nIan Barwick\n\n\n",
"msg_date": "Fri, 4 Nov 2022 09:52:39 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Fri, Nov 04, 2022 at 09:52:39AM +0900, Ian Lawrence Barwick wrote:\n> CommitFest 2022-11 is currently underway, so if you are interested\n> in moving this patch forward, now would be a good time to update it.\n\nNo replies after 4 weeks, so I have marked this entry as returned\nwith feedback. I am still wondering what would be the best thing to\ndo here..\n--\nMichael",
"msg_date": "Fri, 2 Dec 2022 14:22:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-02 14:22:55 +0900, Michael Paquier wrote:\n> On Fri, Nov 04, 2022 at 09:52:39AM +0900, Ian Lawrence Barwick wrote:\n> > CommitFest 2022-11 is currently underway, so if you are interested\n> > in moving this patch forward, now would be a good time to update it.\n> \n> No replies after 4 weeks, so I have marked this entry as returned\n> with feedback. I am still wondering what would be the best thing to\n> do here..\n\nIMO this a bugfix, I don't think we can just close the entry, even if Matthias\ndoesn't have time / energy to push it forward.\n\n\nI think the big issue with the patch as it stands is that it will typically\ncause PANICs on failure, because the record-too-large ERROR be a in a critical\nsection. That's still better than generating a record that can't be replayed,\nbut it's not good.\n\nThere's not all that many places with potentially huge records. I wonder if we\nought to modify at least the most prominent ones to prepare the record before\nthe critical section. I think the by far most prominent real-world case is\nRecordTransactionCommit(). I think we could rename XactLogCommitRecord() to\nXactBuildCommitRecord() build the commit record, then have the caller do\nSTART_CRIT_SECTION(), set DELAY_CHKPT_START, and only then do the\nXLogInsert().\n\nThat'd even have the nice side-effect of reducing the window in which\nDELAY_CHKPT_START is set a bit.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Dec 2022 08:57:17 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-26 18:58:02 +0200, Matthias van de Meent wrote:\n> - updated the MaxXLogRecordSize and XLogRecordLengthIsValid(len)\n> macros (now in xlogrecord.h), with a max length of the somewhat\n> arbitrary 1020MiB.\n> This leaves room for approx. 4MiB of per-record allocation overhead\n> before you'd hit MaxAllocSize, and also detaches the dependency on\n> memutils.h.\n> \n> - Retained the check in XLogRegisterData, so that we check against\n> integer overflows in the registerdata code instead of only an assert\n> in XLogRecordAssemble where it might be too late.\n> - Kept the inline static elog-ing function (as per Andres' suggestion\n> on 2022-03-14; this decreases binary sizes)\n\nI don't think it should be a static inline. It should to be a *non* inlined\nfunction, so we don't include the code for the elog in the callers.\n\n\n> +/*\n> + * Error due to exceeding the maximum size of a WAL record, or registering\n> + * more datas than are being accounted for by the XLog infrastructure.\n> + */\n> +static inline void\n> +XLogErrorDataLimitExceeded()\n> +{\n> +\telog(ERROR, \"too much WAL data\");\n> +}\n\nI think this should be pg_noinline, as mentioned above.\n\n\n> /*\n> * Begin constructing a WAL record. This must be called before the\n> * XLogRegister* functions and XLogInsert().\n> @@ -348,14 +359,29 @@ XLogRegisterBlock(uint8 block_id, RelFileLocator *rlocator, ForkNumber forknum,\n> * XLogRecGetData().\n> */\n> void\n> -XLogRegisterData(char *data, int len)\n> +XLogRegisterData(char *data, uint32 len)\n> {\n> \tXLogRecData *rdata;\n> \n> -\tAssert(begininsert_called);\n> +\tAssert(begininsert_called && XLogRecordLengthIsValid(len));\n> +\n> +\t/*\n> +\t * Check against max_rdatas; and ensure we don't fill a record with\n> +\t * more data than can be replayed. Records are allocated in one chunk\n> +\t * with some overhead, so ensure XLogRecordLengthIsValid() for that\n> +\t * size of record.\n> +\t *\n> +\t * Additionally, check that we don't accidentally overflow the\n> +\t * intermediate sum value on 32-bit systems by ensuring that the\n> +\t * sum of the two inputs is no less than one of the inputs.\n> +\t */\n> +\tif (num_rdatas >= max_rdatas ||\n> +#if SIZEOF_SIZE_T == 4\n> +\t\t mainrdata_len + len < len ||\n> +#endif\n> +\t\t!XLogRecordLengthIsValid((size_t) mainrdata_len + (size_t) len))\n> +\t\tXLogErrorDataLimitExceeded();\n\nThis is quite a complicated check, and the SIZEOF_SIZE_T == 4 bit is fairly\nugly.\n\nI think we should make mainrdata_len a uint64, then we don't have to worry\nabout it overflowing on 32bit systems. And TBH, we don't care about some minor\ninefficiency on 32bit systems.\n\n\n\n> @@ -399,8 +425,16 @@ XLogRegisterBufData(uint8 block_id, char *data, int len)\n> \t\telog(ERROR, \"no block with id %d registered with WAL insertion\",\n> \t\t\t block_id);\n> \n> -\tif (num_rdatas >= max_rdatas)\n> -\t\telog(ERROR, \"too much WAL data\");\n> +\t/*\n> +\t * Check against max_rdatas; and ensure we don't register more data per\n> +\t * buffer than can be handled by the physical data format; \n> +\t * i.e. that regbuf->rdata_len does not grow beyond what\n> +\t * XLogRecordBlockHeader->data_length can hold.\n> +\t */\n> +\tif (num_rdatas >= max_rdatas ||\n> +\t\tregbuf->rdata_len + len > UINT16_MAX)\n> +\t\tXLogErrorDataLimitExceeded();\n> +\n> \trdata = &rdatas[num_rdatas++];\n> \n> \trdata->data = data;\n\nThis partially has been applied in ffd1b6bb6f8, I think we should consider\nadding XLogErrorDataLimitExceeded() separately too.\n\n\n> \t\t\trdt_datas_last->next = regbuf->rdata_head;\n> @@ -858,6 +907,16 @@ XLogRecordAssemble(RmgrId rmid, uint8 info,\n> \tfor (rdt = hdr_rdt.next; rdt != NULL; rdt = rdt->next)\n> \t\tCOMP_CRC32C(rdata_crc, rdt->data, rdt->len);\n> \n> +\t/*\n> +\t * Ensure that the XLogRecord is not too large.\n> +\t *\n> +\t * XLogReader machinery is only able to handle records up to a certain\n> +\t * size (ignoring machine resource limitations), so make sure we will\n> +\t * not emit records larger than those sizes we advertise we support.\n> +\t */\n> +\tif (!XLogRecordLengthIsValid(total_len))\n> +\t\tXLogErrorDataLimitExceeded();\n> +\n> \t/*\n> \t * Fill in the fields in the record header. Prev-link is filled in later,\n> \t * once we know where in the WAL the record will be inserted. The CRC does\n\nI think this needs to mention that it'll typically cause a PANIC.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Dec 2022 09:09:13 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On 2022-Dec-02, Andres Freund wrote:\n\n> Hi,\n> \n> On 2022-12-02 14:22:55 +0900, Michael Paquier wrote:\n> > On Fri, Nov 04, 2022 at 09:52:39AM +0900, Ian Lawrence Barwick wrote:\n> > > CommitFest 2022-11 is currently underway, so if you are interested\n> > > in moving this patch forward, now would be a good time to update it.\n> > \n> > No replies after 4 weeks, so I have marked this entry as returned\n> > with feedback. I am still wondering what would be the best thing to\n> > do here..\n> \n> IMO this a bugfix, I don't think we can just close the entry, even if Matthias\n> doesn't have time / energy to push it forward.\n\nI have created one in the January commitfest,\nhttps://commitfest.postgresql.org/41/\nand rebased the patch on current master. (I have not reviewed this.)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La gente vulgar sólo piensa en pasar el tiempo;\nel que tiene talento, en aprovecharlo\"",
"msg_date": "Mon, 19 Dec 2022 12:37:19 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 12:37:19PM +0100, Alvaro Herrera wrote:\n> I have created one in the January commitfest,\n> https://commitfest.postgresql.org/41/\n> and rebased the patch on current master. (I have not reviewed this.)\n\nI have spent some time on that, and here are some comments with an\nupdated version of the patch attached.\n\nThe checks in XLogRegisterData() seemed overcomplicated to me. In\nthis context, I think that we should just care about making sure that\nmainrdata_len does not overflow depending on the length given by the\ncaller, which is where pg_add_u32_overflow() becomes handy.\n\nXLogRegisterBufData() added a check on UINT16_MAX in an assert, though\nwe already check for overflow a couple of lines down. This is not\nnecessary, it seems.\n\n@@ -535,6 +567,9 @@ XLogRecordAssemble(RmgrId rmid, uint8 info,\n XLogRecord *rechdr;\n char *scratch = hdr_scratch;\n \n+ /* ensure that any assembled record can be decoded */\n+ Assert(AllocSizeIsValid(DecodeXLogRecordRequiredSpace(MaxXLogRecordSize)));\n\nA hardcoded check like that has no need to be in a code path triggered\neach time a WAL record is assembled. One place where this could be is\nInitXLogInsert(). It still means that it is called one time for each\nbackend, but seeing it where the initialization of xloginsert.c feels\nnatural, at least. A postmaster location would be enough, as well.\n\nXLogRecordMaxSize just needs to be checked once IMO, around the end of\nXLogRecordAssemble() once we know the total size of the record that\nwill be fed to a XLogReader. One thing that we should be more careful\nof is to make sure that total_len does not overflow its uint32 value\nwhile assembling the record, as well.\n\nI have removed XLogErrorDataLimitExceeded(), replacing it with more\ncontext about the errors happening. Perhaps this has no need to be\nthat much verbose, but it can be really useful for developers.\n\nSome comments had no need to be updated, and there were some typos.\n\nI am on board with the idea of a XLogRecordMaxSize that's bounded at\n1020MB, leaving 4MB as room for the extra data needed by a\nXLogReader.\n\nAt the end, I think that this is quite interesting long-term. For\nexample, if we lift up XLogRecordMaxSize, we can evaluate the APIs\nadding buffer data or main data separately.\n\nThoughts about this version?\n--\nMichael",
"msg_date": "Tue, 28 Mar 2023 20:42:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Tue, 28 Mar 2023 at 13:42, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Dec 19, 2022 at 12:37:19PM +0100, Alvaro Herrera wrote:\n> > I have created one in the January commitfest,\n> > https://commitfest.postgresql.org/41/\n> > and rebased the patch on current master. (I have not reviewed this.)\n>\n> I have spent some time on that, and here are some comments with an\n> updated version of the patch attached.\n>\n> The checks in XLogRegisterData() seemed overcomplicated to me. In\n> this context, I think that we should just care about making sure that\n> mainrdata_len does not overflow depending on the length given by the\n> caller, which is where pg_add_u32_overflow() becomes handy.\n>\n> XLogRegisterBufData() added a check on UINT16_MAX in an assert, though\n> we already check for overflow a couple of lines down. This is not\n> necessary, it seems.\n>\n> @@ -535,6 +567,9 @@ XLogRecordAssemble(RmgrId rmid, uint8 info,\n> XLogRecord *rechdr;\n> char *scratch = hdr_scratch;\n>\n> + /* ensure that any assembled record can be decoded */\n> + Assert(AllocSizeIsValid(DecodeXLogRecordRequiredSpace(MaxXLogRecordSize)));\n>\n> A hardcoded check like that has no need to be in a code path triggered\n> each time a WAL record is assembled. One place where this could be is\n> InitXLogInsert(). It still means that it is called one time for each\n> backend, but seeing it where the initialization of xloginsert.c feels\n> natural, at least. A postmaster location would be enough, as well.\n>\n> XLogRecordMaxSize just needs to be checked once IMO, around the end of\n> XLogRecordAssemble() once we know the total size of the record that\n> will be fed to a XLogReader. One thing that we should be more careful\n> of is to make sure that total_len does not overflow its uint32 value\n> while assembling the record, as well.\n>\n> I have removed XLogErrorDataLimitExceeded(), replacing it with more\n> context about the errors happening. Perhaps this has no need to be\n> that much verbose, but it can be really useful for developers.\n>\n> Some comments had no need to be updated, and there were some typos.\n>\n> I am on board with the idea of a XLogRecordMaxSize that's bounded at\n> 1020MB, leaving 4MB as room for the extra data needed by a\n> XLogReader.\n>\n> At the end, I think that this is quite interesting long-term. For\n> example, if we lift up XLogRecordMaxSize, we can evaluate the APIs\n> adding buffer data or main data separately.\n>\n> Thoughts about this version?\n\nI thought that the plan was to use int64 to skip checking for most\noverflows and to do a single check at the end in XLogRecordAssemble,\nso that the checking has minimal overhead in the performance-critical\nlog record assembling path and reduced load on the branch predictor.\n\nOne more issue that Andres was suggesting we'd fix was to allow XLog\nassembly separate from the actual XLog insertion:\nCurrently you can't pre-assemble a record outside a critical section\nif the record must be inserted in a critical section, which makes e.g.\ncommit records problematic due to the potentially oversized data\nresulting in ERRORs during record assembly. This would crash postgres\nbecause commit xlog insertion happens in a critical section. Having a\npre-assembled record would greatly improve the ergonomics in that path\nand reduce the length of the critical path.\n\nI think it was something along the lines of the attached; 0001\ncontains separated Commit/Abort record construction and insertion like\nAndres suggested, 0002 does the size checks with updated error\nmessages.\n\n\nKind regards,\n\nMatthias van de Meent",
"msg_date": "Wed, 5 Apr 2023 16:35:37 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Wed, Apr 05, 2023 at 04:35:37PM +0200, Matthias van de Meent wrote:\n> I thought that the plan was to use int64 to skip checking for most\n> overflows and to do a single check at the end in XLogRecordAssemble,\n> so that the checking has minimal overhead in the performance-critical\n> log record assembling path and reduced load on the branch predictor.\n\nAnd that's the reason why your v11-0002 is better and simpler than the\nv10-0001 I posted a few days ago.\n\n+ if (regbuf->rdata_len + len > UINT16_MAX || len > UINT16_MAX)\n+ ereport(ERROR,\n+ (errmsg_internal(\"too much WAL data\"),\n+ errdetail_internal(\"Registering more than max %u bytes total to block %u: current %uB, adding %uB\",\n+ UINT16_MAX, block_id, regbuf->rdata_len, len)));\n\nI was wondering for a few minutes about the second part of this\ncheck.. But you are worried about the case where len is too large\nthat it would overflow rdata_len if calling XLogRegisterBufData() more\nthan once on the same block, if len is between\n(UINT32_MAX-UINT16_MAX,UINT32_MAX) on the second call.\n\nThe extra errdetail_internal() could be tweaked a bit more, but I'm\nalso OK with your proposal, overall. One thing is \"current %uB,\nadding %uB\" would be better using \"bytes\".\n\n> One more issue that Andres was suggesting we'd fix was to allow XLog\n> assembly separate from the actual XLog insertion:\n> Currently you can't pre-assemble a record outside a critical section\n> if the record must be inserted in a critical section, which makes e.g.\n> commit records problematic due to the potentially oversized data\n> resulting in ERRORs during record assembly. This would crash postgres\n> because commit xlog insertion happens in a critical section. Having a\n> pre-assembled record would greatly improve the ergonomics in that path\n> and reduce the length of the critical path.\n>\n> I think it was something along the lines of the attached; 0001\n> contains separated Commit/Abort record construction and insertion like\n> Andres suggested,\n\nI am honestly not sure whether we should complicate xloginsert.c this\nway, but we could look at that for v17.\n\n> 0002 does the size checks with updated error messages.\n\n0002 can also be done before 0001, so I'd like to get that part\napplied on HEAD before the feature freeze and close this thread. If\nthere are any objections, please feel free..\n--\nMichael",
"msg_date": "Thu, 6 Apr 2023 10:54:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Thu, Apr 06, 2023 at 10:54:43AM +0900, Michael Paquier wrote:\n> 0002 can also be done before 0001, so I'd like to get that part\n> applied on HEAD before the feature freeze and close this thread. If\n> there are any objections, please feel free..\n\nI was doing a pre-commit review of the patch, and double-checked the\nuses of mainrdata_len. And there is this part:\n /* followed by main data, if any */\n if (mainrdata_len > 0)\n {\n if (mainrdata_len > 255)\n {\n *(scratch++) = (char) XLR_BLOCK_ID_DATA_LONG;\n memcpy(scratch, &mainrdata_len, sizeof(uint32));\n scratch += sizeof(uint32);\n }\n else\n {\n *(scratch++) = (char) XLR_BLOCK_ID_DATA_SHORT;\n *(scratch++) = (uint8) mainrdata_len;\n }\n rdt_datas_last->next = mainrdata_head;\n rdt_datas_last = mainrdata_last;\n total_len += mainrdata_len;\n }\n rdt_datas_last->next = NULL;\n\nSo bumping mainrdata_len to uint64 is actually not entirely in line\nwith this code. Well, it will work because we'd still fail a couple\nof lines down, but perhaps its readability should be improved so as\nwe have an extra check in this code path to make sure that\nmainrdata_len is not higher than PG_UINT32_MAX, then use an\nintermediate casted variable before saving the length in the record\ndata to make clear that the type of the main static length in\nxloginsert.c is not the same as what a record has? The v10 I sent\npreviously blocked this possibility, but not v11.\n--\nMichael",
"msg_date": "Fri, 7 Apr 2023 08:08:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Fri, Apr 07, 2023 at 08:08:34AM +0900, Michael Paquier wrote:\n> So bumping mainrdata_len to uint64 is actually not entirely in line\n> with this code. Well, it will work because we'd still fail a couple\n> of lines down, but perhaps its readability should be improved so as\n> we have an extra check in this code path to make sure that\n> mainrdata_len is not higher than PG_UINT32_MAX, then use an\n> intermediate casted variable before saving the length in the record\n> data to make clear that the type of the main static length in\n> xloginsert.c is not the same as what a record has? The v10 I sent\n> previously blocked this possibility, but not v11.\n\nSo, I was thinking about something like the attached tweaking this\npoint, the error details a bit, applying an indentation and writing a\ncommit message... Matthias?\n--\nMichael",
"msg_date": "Fri, 7 Apr 2023 08:35:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Fri, 7 Apr 2023, 01:35 Michael Paquier, <michael@paquier.xyz> wrote:\n\n> On Fri, Apr 07, 2023 at 08:08:34AM +0900, Michael Paquier wrote:\n> > So bumping mainrdata_len to uint64 is actually not entirely in line\n> > with this code. Well, it will work because we'd still fail a couple\n> > of lines down, but perhaps its readability should be improved so as\n> > we have an extra check in this code path to make sure that\n> > mainrdata_len is not higher than PG_UINT32_MAX, then use an\n> > intermediate casted variable before saving the length in the record\n> > data to make clear that the type of the main static length in\n> > xloginsert.c is not the same as what a record has? The v10 I sent\n> > previously blocked this possibility, but not v11.\n>\n\nYes, that was a bad oversight, which would've shown up in tests on a system\nwith an endianness that my computer doesn't have...\n\n\n> So, I was thinking about something like the attached tweaking this\n> point, the error details a bit, applying an indentation and writing a\n> commit message... Matthias?\n>\n\nThat looks fine to me. Thanks for picking this up and fixing the issue.\n\n\n\nKind regards,\n\nMatthias van de Meent",
"msg_date": "Fri, 7 Apr 2023 01:50:00 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Fri, Apr 07, 2023 at 01:50:00AM +0200, Matthias van de Meent wrote:\n> Yes, that was a bad oversight, which would've shown up in tests on a system\n> with an endianness that my computer doesn't have...\n\nI don't think that we have many bigendian animals in the buildfarm,\neither.. \n\n> That looks fine to me. Thanks for picking this up and fixing the issue.\n\nOkay, cool!\n--\nMichael",
"msg_date": "Fri, 7 Apr 2023 08:59:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Fri, Apr 07, 2023 at 08:59:22AM +0900, Michael Paquier wrote:\n> Okay, cool!\n\nDone this one with 8fcb32d.\n--\nMichael",
"msg_date": "Fri, 7 Apr 2023 15:05:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Fri, 7 Apr 2023 at 08:05, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Apr 07, 2023 at 08:59:22AM +0900, Michael Paquier wrote:\n> > Okay, cool!\n>\n> Done this one with 8fcb32d.\n\nThanks a lot! I'll post the separation of record construction and\nwrite-out to xlog in a future thread for 17.\n\nOne remaining question: Considering that the changes and checks of\nthat commit are mostly internal to xloginsert.c (or xlog.c in older\nreleases), and that no special public-facing changes were made, would\nit be safe to backport this to older releases?\n\nPostgreSQL 15 specifically would benefit from this as it supports\nexternal rmgrs which may generate WAL records and would benefit from\nthese additional checks, but all supported releases of PostgreSQL have\npg_logical_emit_message and are thus easily subject to the issue of\nwriting oversized WAL records and subsequent recovery- and replication\nstream failures.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Sat, 8 Apr 2023 16:24:35 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
},
{
"msg_contents": "On Sat, Apr 08, 2023 at 04:24:35PM +0200, Matthias van de Meent wrote:\n> Thanks a lot! I'll post the separation of record construction and\n> write-out to xlog in a future thread for 17.\n\nThanks! Creating a new thread makes sense.\n\n> One remaining question: Considering that the changes and checks of\n> that commit are mostly internal to xloginsert.c (or xlog.c in older\n> releases), and that no special public-facing changes were made, would\n> it be safe to backport this to older releases?\n\nThe routine changes done in ffd1b6b cannot be backpatched on ABI\ngrounds, still you would propose to have protection around\nneeds_data as well as the whole record length.\n\n> PostgreSQL 15 specifically would benefit from this as it supports\n> external rmgrs which may generate WAL records and would benefit from\n> these additional checks, but all supported releases of PostgreSQL have\n> pg_logical_emit_message and are thus easily subject to the issue of\n> writing oversized WAL records and subsequent recovery- and replication\n> stream failures.\n\nCustom RMGRs are a good argument, though I don't really see an urgent\nargument about doing something in REL_15_STABLE. For one, it would\nmean more backpatching conflicts with ~14. Another argument is that\nXLogRecordMaxSize is not an exact science, either. In ~15, a record\nwith a total size between XLogRecordMaxSize and\nDecodeXLogRecordRequiredSpace(MaxAllocSize) would work, though it\nwould not in 16~ because we have the 4MB margin given as room for the\nper-record allocation in the XLogReader. A record of such a size\nwould not be generated anymore after a minor release update of 15.3~\nif we were to do something about that by May on REL_15_STABLE.\n--\nMichael",
"msg_date": "Mon, 10 Apr 2023 08:31:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Non-replayable WAL records through overflows and >MaxAllocSize\n lengths"
}
] |
[
{
"msg_contents": "So, I noticed that pg_stat_reset_subscription_stats() wasn't working\nproperly, and, upon further investigation, I'm not sure the view\npg_stat_subscription_stats is being properly populated.\n\nI don't think subscriptionStatHash will be created properly and that the\nreset timestamp won't be initialized without the following code:\n\ndiff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c\nindex 53ddd930e6..0b8c5436e9 100644\n--- a/src/backend/postmaster/pgstat.c\n+++ b/src/backend/postmaster/pgstat.c\n@@ -3092,7 +3092,7 @@ pgstat_fetch_stat_subscription(Oid subid)\n /* Load the stats file if needed */\n backend_read_statsfile();\n\n- return pgstat_get_subscription_entry(subid, false);\n+ return pgstat_get_subscription_entry(subid, true);\n }\n\n /*\n@@ -6252,7 +6252,7 @@ pgstat_get_subscription_entry(Oid subid, bool create)\n\n /* If not found, initialize the new one */\n if (!found)\n- pgstat_reset_subscription(subentry, 0);\n+ pgstat_reset_subscription(subentry, GetCurrentTimestamp());\n\n return subentry;\n }\n\n- melanie\n\n\n",
"msg_date": "Fri, 11 Mar 2022 15:44:15 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On Sat, Mar 12, 2022 at 2:14 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> So, I noticed that pg_stat_reset_subscription_stats() wasn't working\n> properly, and, upon further investigation, I'm not sure the view\n> pg_stat_subscription_stats is being properly populated.\n>\n\nI have tried the below scenario based on this:\nStep:1 Create some data that generates conflicts and lead to apply\nfailures and then check in the view:\n\npostgres=# select * from pg_stat_subscription_stats;\n subid | subname | apply_error_count | sync_error_count | stats_reset\n-------+---------+-------------------+------------------+-------------\n 16389 | sub1 | 4 | 0 |\n(1 row)\n\nStep-2: Reset the view\npostgres=# select * from pg_stat_reset_subscription_stats(16389);\n pg_stat_reset_subscription_stats\n----------------------------------\n\n(1 row)\n\nStep-3: Again, check the view:\npostgres=# select * from pg_stat_subscription_stats;\n subid | subname | apply_error_count | sync_error_count | stats_reset\n-------+---------+-------------------+------------------+----------------------------------\n 16389 | sub1 | 0 | 0 | 2022-03-12\n08:21:39.156971+05:30\n(1 row)\n\nThe stats_reset time seems to be populated. Similarly, I have tried by\npassing NULL to pg_stat_reset_subscription_stats and it works. I think\nI am missing something here, can you please explain the exact\nscenario/steps where you observed that this API is not working.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 12 Mar 2022 08:28:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-12 08:28:35 +0530, Amit Kapila wrote:\n> On Sat, Mar 12, 2022 at 2:14 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> >\n> > So, I noticed that pg_stat_reset_subscription_stats() wasn't working\n> > properly, and, upon further investigation, I'm not sure the view\n> > pg_stat_subscription_stats is being properly populated.\n> >\n> \n> I have tried the below scenario based on this:\n> Step:1 Create some data that generates conflicts and lead to apply\n> failures and then check in the view:\n\nI think the problem is present when there was *no* conflict\npreviously. Because nothing populates the stats entry without an error, the\nreset doesn't have anything to set the stats_reset field in, which then means\nthat the stats_reset field is NULL even though stats have been reset.\n\nI'll just repeat what I've said before: Making variable numbered stats\nindividiually resettable is a bad idea.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 12 Mar 2022 12:15:17 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On Sat, Mar 12, 2022 at 3:15 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-03-12 08:28:35 +0530, Amit Kapila wrote:\n> > On Sat, Mar 12, 2022 at 2:14 AM Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> > >\n> > > So, I noticed that pg_stat_reset_subscription_stats() wasn't working\n> > > properly, and, upon further investigation, I'm not sure the view\n> > > pg_stat_subscription_stats is being properly populated.\n> > >\n> >\n> > I have tried the below scenario based on this:\n> > Step:1 Create some data that generates conflicts and lead to apply\n> > failures and then check in the view:\n>\n> I think the problem is present when there was *no* conflict\n> previously. Because nothing populates the stats entry without an error, the\n> reset doesn't have anything to set the stats_reset field in, which then means\n> that the stats_reset field is NULL even though stats have been reset.\n\nYes, this is what I meant. stats_reset is not initialized and without\nany conflict happening to populate the stats, after resetting the stats,\nthe field still does not get populated. I think this is a bit\nunexpected.\n\npsql (15devel)\nType \"help\" for help.\n\nmplageman=# select * from pg_stat_subscription_stats ;\n subid | subname | apply_error_count | sync_error_count | stats_reset\n-------+---------+-------------------+------------------+-------------\n 16398 | mysub | 0 | 0 |\n(1 row)\n\nmplageman=# select pg_stat_reset_subscription_stats(16398);\n pg_stat_reset_subscription_stats\n----------------------------------\n\n(1 row)\n\nmplageman=# select * from pg_stat_subscription_stats ;\n subid | subname | apply_error_count | sync_error_count | stats_reset\n-------+---------+-------------------+------------------+-------------\n 16398 | mysub | 0 | 0 |\n(1 row)\n\n- Melanie\n\n\n",
"msg_date": "Sun, 13 Mar 2022 13:05:27 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On Mon, Mar 14, 2022 at 2:05 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Sat, Mar 12, 2022 at 3:15 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2022-03-12 08:28:35 +0530, Amit Kapila wrote:\n> > > On Sat, Mar 12, 2022 at 2:14 AM Melanie Plageman\n> > > <melanieplageman@gmail.com> wrote:\n> > > >\n> > > > So, I noticed that pg_stat_reset_subscription_stats() wasn't working\n> > > > properly, and, upon further investigation, I'm not sure the view\n> > > > pg_stat_subscription_stats is being properly populated.\n> > > >\n> > >\n> > > I have tried the below scenario based on this:\n> > > Step:1 Create some data that generates conflicts and lead to apply\n> > > failures and then check in the view:\n> >\n> > I think the problem is present when there was *no* conflict\n> > previously. Because nothing populates the stats entry without an error, the\n> > reset doesn't have anything to set the stats_reset field in, which then means\n> > that the stats_reset field is NULL even though stats have been reset.\n>\n> Yes, this is what I meant. stats_reset is not initialized and without\n> any conflict happening to populate the stats, after resetting the stats,\n> the field still does not get populated. 
I think this is a bit\n> unexpected.\n>\n> psql (15devel)\n> Type \"help\" for help.\n>\n> mplageman=# select * from pg_stat_subscription_stats ;\n> subid | subname | apply_error_count | sync_error_count | stats_reset\n> -------+---------+-------------------+------------------+-------------\n> 16398 | mysub | 0 | 0 |\n> (1 row)\n>\n> mplageman=# select pg_stat_reset_subscription_stats(16398);\n> pg_stat_reset_subscription_stats\n> ----------------------------------\n>\n> (1 row)\n>\n> mplageman=# select * from pg_stat_subscription_stats ;\n> subid | subname | apply_error_count | sync_error_count | stats_reset\n> -------+---------+-------------------+------------------+-------------\n> 16398 | mysub | 0 | 0 |\n> (1 row)\n>\n\nLooking at other statistics such as replication slots, shared stats,\nand SLRU stats, it makes sense that resetting it populates the stats.\nSo we need to fix this issue.\n\nHowever, I think the proposed fix has two problems; it can create an\nentry for non-existing subscriptions if the user directly calls\nfunction pg_stat_get_subscription_stats(), and stats_reset value is\nnot updated in the stats file as it is not done by the stats\ncollector.\n\nAn alternative solution would be to send the message for creating the\nsubscription at the end of CRAETE SUBSCRIPTION which basically\nresolves them. A caveat is that if CREATE SUBSCRIPTION (that doesn't\ninvolve replication slot creation) is rolled back, the first problem\nstill occurs. But it should not practically matter as a similar thing\nis possible via existing table-related functions for dropped tables.\nAlso, we normally don't know the OID of subscription that is rolled\nback. I've attached a patch for that.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 14 Mar 2022 17:02:10 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On Mon, Mar 14, 2022 at 4:02 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Mar 14, 2022 at 2:05 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> >\n> > On Sat, Mar 12, 2022 at 3:15 PM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > Hi,\n> > >\n> > > On 2022-03-12 08:28:35 +0530, Amit Kapila wrote:\n> > > > On Sat, Mar 12, 2022 at 2:14 AM Melanie Plageman\n> > > > <melanieplageman@gmail.com> wrote:\n> > > > >\n> > > > > So, I noticed that pg_stat_reset_subscription_stats() wasn't working\n> > > > > properly, and, upon further investigation, I'm not sure the view\n> > > > > pg_stat_subscription_stats is being properly populated.\n> > > > >\n> > > >\n> > > > I have tried the below scenario based on this:\n> > > > Step:1 Create some data that generates conflicts and lead to apply\n> > > > failures and then check in the view:\n> > >\n> > > I think the problem is present when there was *no* conflict\n> > > previously. Because nothing populates the stats entry without an error, the\n> > > reset doesn't have anything to set the stats_reset field in, which then means\n> > > that the stats_reset field is NULL even though stats have been reset.\n> >\n> > Yes, this is what I meant. stats_reset is not initialized and without\n> > any conflict happening to populate the stats, after resetting the stats,\n> > the field still does not get populated. 
I think this is a bit\n> > unexpected.\n> >\n> > psql (15devel)\n> > Type \"help\" for help.\n> >\n> > mplageman=# select * from pg_stat_subscription_stats ;\n> > subid | subname | apply_error_count | sync_error_count | stats_reset\n> > -------+---------+-------------------+------------------+-------------\n> > 16398 | mysub | 0 | 0 |\n> > (1 row)\n> >\n> > mplageman=# select pg_stat_reset_subscription_stats(16398);\n> > pg_stat_reset_subscription_stats\n> > ----------------------------------\n> >\n> > (1 row)\n> >\n> > mplageman=# select * from pg_stat_subscription_stats ;\n> > subid | subname | apply_error_count | sync_error_count | stats_reset\n> > -------+---------+-------------------+------------------+-------------\n> > 16398 | mysub | 0 | 0 |\n> > (1 row)\n> >\n>\n> Looking at other statistics such as replication slots, shared stats,\n> and SLRU stats, it makes sense that resetting it populates the stats.\n> So we need to fix this issue.\n>\n> However, I think the proposed fix has two problems; it can create an\n> entry for non-existing subscriptions if the user directly calls\n> function pg_stat_get_subscription_stats(), and stats_reset value is\n> not updated in the stats file as it is not done by the stats\n> collector.\n\nYou are right. My initial patch was incorrect.\n\nThinking about it more, the initial behavior is technically the same for\npg_stat_database. 
It is just that I didn't notice because you end up\ncreating stats for pg_stat_database so quickly that you usually never\nsee them before.\n\nIn pg_stat_get_db_stat_reset_time():\n\n if ((dbentry = pgstat_fetch_stat_dbentry(dbid)) == NULL)\n result = 0;\n else\n result = dbentry->stat_reset_timestamp;\n\n if (result == 0)\n PG_RETURN_NULL();\n else\n PG_RETURN_TIMESTAMPTZ(result);\n\nand in pgstat_recv_resetcounter():\n\n dbentry = pgstat_get_db_entry(msg->m_databaseid, false);\n\n if (!dbentry)\n return;\n\nThinking about it now, though, maybe an alternative solution would be to\nhave all columns or all columns except the subid/subname or dbname/dboid\nbe NULL until the statistics have been created, at which point the\nreset_timestamp is populated with the current timestamp.\n\nmplageman=# select * from pg_stat_subscription_stats ;\n subid | subname | apply_error_count | sync_error_count | stats_reset\n-------+---------+-------------------+------------------+-------------\n 16397 | foosub | | |\n 16408 | barsub | | |\n(2 rows)\n\nAll resetting before the stats are created would be a no-op.\n\n- Melanie\n\n\n",
"msg_date": "Mon, 14 Mar 2022 14:34:42 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On Sun, Mar 13, 2022 at 1:45 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-03-12 08:28:35 +0530, Amit Kapila wrote:\n> > On Sat, Mar 12, 2022 at 2:14 AM Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> > >\n> > > So, I noticed that pg_stat_reset_subscription_stats() wasn't working\n> > > properly, and, upon further investigation, I'm not sure the view\n> > > pg_stat_subscription_stats is being properly populated.\n> > >\n> >\n> > I have tried the below scenario based on this:\n> > Step:1 Create some data that generates conflicts and lead to apply\n> > failures and then check in the view:\n>\n> I think the problem is present when there was *no* conflict\n> previously. Because nothing populates the stats entry without an error, the\n> reset doesn't have anything to set the stats_reset field in, which then means\n> that the stats_reset field is NULL even though stats have been reset.\n>\n> I'll just repeat what I've said before: Making variable numbered stats\n> individiually resettable is a bad idea.\n>\n\nIIUC correctly, we are doing this via\npg_stat_reset_single_table_counters(),\npg_stat_reset_single_function_counters(),\npg_stat_reset_replication_slot(), pg_stat_reset_subscription_stats().\nSo, if we want to do something in this regrard then it is probably\nbetter to do for all.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 15 Mar 2022 09:34:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On Tue, Mar 15, 2022 at 3:34 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Mon, Mar 14, 2022 at 4:02 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Mar 14, 2022 at 2:05 AM Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> > >\n> > > On Sat, Mar 12, 2022 at 3:15 PM Andres Freund <andres@anarazel.de> wrote:\n> > > >\n> > > > Hi,\n> > > >\n> > > > On 2022-03-12 08:28:35 +0530, Amit Kapila wrote:\n> > > > > On Sat, Mar 12, 2022 at 2:14 AM Melanie Plageman\n> > > > > <melanieplageman@gmail.com> wrote:\n> > > > > >\n> > > > > > So, I noticed that pg_stat_reset_subscription_stats() wasn't working\n> > > > > > properly, and, upon further investigation, I'm not sure the view\n> > > > > > pg_stat_subscription_stats is being properly populated.\n> > > > > >\n> > > > >\n> > > > > I have tried the below scenario based on this:\n> > > > > Step:1 Create some data that generates conflicts and lead to apply\n> > > > > failures and then check in the view:\n> > > >\n> > > > I think the problem is present when there was *no* conflict\n> > > > previously. Because nothing populates the stats entry without an error, the\n> > > > reset doesn't have anything to set the stats_reset field in, which then means\n> > > > that the stats_reset field is NULL even though stats have been reset.\n> > >\n> > > Yes, this is what I meant. stats_reset is not initialized and without\n> > > any conflict happening to populate the stats, after resetting the stats,\n> > > the field still does not get populated. 
I think this is a bit\n> > > unexpected.\n> > >\n> > > psql (15devel)\n> > > Type \"help\" for help.\n> > >\n> > > mplageman=# select * from pg_stat_subscription_stats ;\n> > > subid | subname | apply_error_count | sync_error_count | stats_reset\n> > > -------+---------+-------------------+------------------+-------------\n> > > 16398 | mysub | 0 | 0 |\n> > > (1 row)\n> > >\n> > > mplageman=# select pg_stat_reset_subscription_stats(16398);\n> > > pg_stat_reset_subscription_stats\n> > > ----------------------------------\n> > >\n> > > (1 row)\n> > >\n> > > mplageman=# select * from pg_stat_subscription_stats ;\n> > > subid | subname | apply_error_count | sync_error_count | stats_reset\n> > > -------+---------+-------------------+------------------+-------------\n> > > 16398 | mysub | 0 | 0 |\n> > > (1 row)\n> > >\n> >\n> > Looking at other statistics such as replication slots, shared stats,\n> > and SLRU stats, it makes sense that resetting it populates the stats.\n> > So we need to fix this issue.\n> >\n> > However, I think the proposed fix has two problems; it can create an\n> > entry for non-existing subscriptions if the user directly calls\n> > function pg_stat_get_subscription_stats(), and stats_reset value is\n> > not updated in the stats file as it is not done by the stats\n> > collector.\n>\n> You are right. My initial patch was incorrect.\n>\n> Thinking about it more, the initial behavior is technically the same for\n> pg_stat_database. 
It is just that I didn't notice because you end up\n> creating stats for pg_stat_database so quickly that you usually never\n> see them before.\n>\n> In pg_stat_get_db_stat_reset_time():\n>\n> if ((dbentry = pgstat_fetch_stat_dbentry(dbid)) == NULL)\n> result = 0;\n> else\n> result = dbentry->stat_reset_timestamp;\n>\n> if (result == 0)\n> PG_RETURN_NULL();\n> else\n> PG_RETURN_TIMESTAMPTZ(result);\n>\n> and in pgstat_recv_resetcounter():\n>\n> dbentry = pgstat_get_db_entry(msg->m_databaseid, false);\n>\n> if (!dbentry)\n> return;\n>\n> Thinking about it now, though, maybe an alternative solution would be to\n> have all columns or all columns except the subid/subname or dbname/dboid\n> be NULL until the statistics have been created, at which point the\n> reset_timestamp is populated with the current timestamp.\n\nIt's true that stats_reset is NULL if the statistics of database are\nnot created yet. But looking at other columns such as tup_deleted,\nthey show 0 in the case. So having all columns or all counter columns\nin pg_stat_subscription_stats be NULL would not be consistent with\nother statistics, which I think is not a good idea.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 15 Mar 2022 13:38:26 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On Tue, Mar 15, 2022 at 10:09 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Mar 15, 2022 at 3:34 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> >\n> > On Mon, Mar 14, 2022 at 4:02 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, Mar 14, 2022 at 2:05 AM Melanie Plageman\n> > > <melanieplageman@gmail.com> wrote:\n> > > >\n> > > > On Sat, Mar 12, 2022 at 3:15 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > >\n> > > > > Hi,\n> > > > >\n> > > > > On 2022-03-12 08:28:35 +0530, Amit Kapila wrote:\n> > > > > > On Sat, Mar 12, 2022 at 2:14 AM Melanie Plageman\n> > > > > > <melanieplageman@gmail.com> wrote:\n> > > > > > >\n> > > > > > > So, I noticed that pg_stat_reset_subscription_stats() wasn't working\n> > > > > > > properly, and, upon further investigation, I'm not sure the view\n> > > > > > > pg_stat_subscription_stats is being properly populated.\n> > > > > > >\n> > > > > >\n> > > > > > I have tried the below scenario based on this:\n> > > > > > Step:1 Create some data that generates conflicts and lead to apply\n> > > > > > failures and then check in the view:\n> > > > >\n> > > > > I think the problem is present when there was *no* conflict\n> > > > > previously. Because nothing populates the stats entry without an error, the\n> > > > > reset doesn't have anything to set the stats_reset field in, which then means\n> > > > > that the stats_reset field is NULL even though stats have been reset.\n> > > >\n> > > > Yes, this is what I meant. stats_reset is not initialized and without\n> > > > any conflict happening to populate the stats, after resetting the stats,\n> > > > the field still does not get populated. 
I think this is a bit\n> > > > unexpected.\n> > > >\n> > > > psql (15devel)\n> > > > Type \"help\" for help.\n> > > >\n> > > > mplageman=# select * from pg_stat_subscription_stats ;\n> > > > subid | subname | apply_error_count | sync_error_count | stats_reset\n> > > > -------+---------+-------------------+------------------+-------------\n> > > > 16398 | mysub | 0 | 0 |\n> > > > (1 row)\n> > > >\n> > > > mplageman=# select pg_stat_reset_subscription_stats(16398);\n> > > > pg_stat_reset_subscription_stats\n> > > > ----------------------------------\n> > > >\n> > > > (1 row)\n> > > >\n> > > > mplageman=# select * from pg_stat_subscription_stats ;\n> > > > subid | subname | apply_error_count | sync_error_count | stats_reset\n> > > > -------+---------+-------------------+------------------+-------------\n> > > > 16398 | mysub | 0 | 0 |\n> > > > (1 row)\n> > > >\n> > >\n> > > Looking at other statistics such as replication slots, shared stats,\n> > > and SLRU stats, it makes sense that resetting it populates the stats.\n> > > So we need to fix this issue.\n> > >\n> > > However, I think the proposed fix has two problems; it can create an\n> > > entry for non-existing subscriptions if the user directly calls\n> > > function pg_stat_get_subscription_stats(), and stats_reset value is\n> > > not updated in the stats file as it is not done by the stats\n> > > collector.\n> >\n> > You are right. My initial patch was incorrect.\n> >\n> > Thinking about it more, the initial behavior is technically the same for\n> > pg_stat_database. 
It is just that I didn't notice because you end up\n> > creating stats for pg_stat_database so quickly that you usually never\n> > see them before.\n> >\n> > In pg_stat_get_db_stat_reset_time():\n> >\n> > if ((dbentry = pgstat_fetch_stat_dbentry(dbid)) == NULL)\n> > result = 0;\n> > else\n> > result = dbentry->stat_reset_timestamp;\n> >\n> > if (result == 0)\n> > PG_RETURN_NULL();\n> > else\n> > PG_RETURN_TIMESTAMPTZ(result);\n> >\n> > and in pgstat_recv_resetcounter():\n> >\n> > dbentry = pgstat_get_db_entry(msg->m_databaseid, false);\n> >\n> > if (!dbentry)\n> > return;\n> >\n> > Thinking about it now, though, maybe an alternative solution would be to\n> > have all columns or all columns except the subid/subname or dbname/dboid\n> > be NULL until the statistics have been created, at which point the\n> > reset_timestamp is populated with the current timestamp.\n>\n> It's true that stats_reset is NULL if the statistics of database are\n> not created yet.\n>\n\nSo, if the behavior is the same as pg_stat_database, do we really want\nto change anything in this regard?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 16 Mar 2022 17:21:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On Wed, Mar 16, 2022 at 8:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 15, 2022 at 10:09 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Mar 15, 2022 at 3:34 AM Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> > >\n> > > On Mon, Mar 14, 2022 at 4:02 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Mon, Mar 14, 2022 at 2:05 AM Melanie Plageman\n> > > > <melanieplageman@gmail.com> wrote:\n> > > > >\n> > > > > On Sat, Mar 12, 2022 at 3:15 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > > >\n> > > > > > Hi,\n> > > > > >\n> > > > > > On 2022-03-12 08:28:35 +0530, Amit Kapila wrote:\n> > > > > > > On Sat, Mar 12, 2022 at 2:14 AM Melanie Plageman\n> > > > > > > <melanieplageman@gmail.com> wrote:\n> > > > > > > >\n> > > > > > > > So, I noticed that pg_stat_reset_subscription_stats() wasn't working\n> > > > > > > > properly, and, upon further investigation, I'm not sure the view\n> > > > > > > > pg_stat_subscription_stats is being properly populated.\n> > > > > > > >\n> > > > > > >\n> > > > > > > I have tried the below scenario based on this:\n> > > > > > > Step:1 Create some data that generates conflicts and lead to apply\n> > > > > > > failures and then check in the view:\n> > > > > >\n> > > > > > I think the problem is present when there was *no* conflict\n> > > > > > previously. Because nothing populates the stats entry without an error, the\n> > > > > > reset doesn't have anything to set the stats_reset field in, which then means\n> > > > > > that the stats_reset field is NULL even though stats have been reset.\n> > > > >\n> > > > > Yes, this is what I meant. stats_reset is not initialized and without\n> > > > > any conflict happening to populate the stats, after resetting the stats,\n> > > > > the field still does not get populated. 
I think this is a bit\n> > > > > unexpected.\n> > > > >\n> > > > > psql (15devel)\n> > > > > Type \"help\" for help.\n> > > > >\n> > > > > mplageman=# select * from pg_stat_subscription_stats ;\n> > > > > subid | subname | apply_error_count | sync_error_count | stats_reset\n> > > > > -------+---------+-------------------+------------------+-------------\n> > > > > 16398 | mysub | 0 | 0 |\n> > > > > (1 row)\n> > > > >\n> > > > > mplageman=# select pg_stat_reset_subscription_stats(16398);\n> > > > > pg_stat_reset_subscription_stats\n> > > > > ----------------------------------\n> > > > >\n> > > > > (1 row)\n> > > > >\n> > > > > mplageman=# select * from pg_stat_subscription_stats ;\n> > > > > subid | subname | apply_error_count | sync_error_count | stats_reset\n> > > > > -------+---------+-------------------+------------------+-------------\n> > > > > 16398 | mysub | 0 | 0 |\n> > > > > (1 row)\n> > > > >\n> > > >\n> > > > Looking at other statistics such as replication slots, shared stats,\n> > > > and SLRU stats, it makes sense that resetting it populates the stats.\n> > > > So we need to fix this issue.\n> > > >\n> > > > However, I think the proposed fix has two problems; it can create an\n> > > > entry for non-existing subscriptions if the user directly calls\n> > > > function pg_stat_get_subscription_stats(), and stats_reset value is\n> > > > not updated in the stats file as it is not done by the stats\n> > > > collector.\n> > >\n> > > You are right. My initial patch was incorrect.\n> > >\n> > > Thinking about it more, the initial behavior is technically the same for\n> > > pg_stat_database. 
It is just that I didn't notice because you end up\n> > > creating stats for pg_stat_database so quickly that you usually never\n> > > see them before.\n> > >\n> > > In pg_stat_get_db_stat_reset_time():\n> > >\n> > > if ((dbentry = pgstat_fetch_stat_dbentry(dbid)) == NULL)\n> > > result = 0;\n> > > else\n> > > result = dbentry->stat_reset_timestamp;\n> > >\n> > > if (result == 0)\n> > > PG_RETURN_NULL();\n> > > else\n> > > PG_RETURN_TIMESTAMPTZ(result);\n> > >\n> > > and in pgstat_recv_resetcounter():\n> > >\n> > > dbentry = pgstat_get_db_entry(msg->m_databaseid, false);\n> > >\n> > > if (!dbentry)\n> > > return;\n> > >\n> > > Thinking about it now, though, maybe an alternative solution would be to\n> > > have all columns or all columns except the subid/subname or dbname/dboid\n> > > be NULL until the statistics have been created, at which point the\n> > > reset_timestamp is populated with the current timestamp.\n> >\n> > It's true that stats_reset is NULL if the statistics of database are\n> > not created yet.\n> >\n>\n> So, if the behavior is the same as pg_stat_database, do we really want\n> to change anything in this regard?\n\nBoth pg_stat_database and pg_stat_subscription_stats work similarly in\nprinciple but they work differently in practice since there are more\nchances to create the database stats entry such as connections,\ndisconnections, and autovacuum than the subscription stats entry. I\nthink that the issue reported by Melanie is valid and perhaps most\nusers would expect the same behavior as other statistics.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 16 Mar 2022 23:34:35 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": " Hi,\n\nOn Wed, Mar 16, 2022 at 11:34 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Mar 16, 2022 at 8:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Mar 15, 2022 at 10:09 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Mar 15, 2022 at 3:34 AM Melanie Plageman\n> > > <melanieplageman@gmail.com> wrote:\n> > > >\n> > > > On Mon, Mar 14, 2022 at 4:02 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, Mar 14, 2022 at 2:05 AM Melanie Plageman\n> > > > > <melanieplageman@gmail.com> wrote:\n> > > > > >\n> > > > > > On Sat, Mar 12, 2022 at 3:15 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > > > >\n> > > > > > > Hi,\n> > > > > > >\n> > > > > > > On 2022-03-12 08:28:35 +0530, Amit Kapila wrote:\n> > > > > > > > On Sat, Mar 12, 2022 at 2:14 AM Melanie Plageman\n> > > > > > > > <melanieplageman@gmail.com> wrote:\n> > > > > > > > >\n> > > > > > > > > So, I noticed that pg_stat_reset_subscription_stats() wasn't working\n> > > > > > > > > properly, and, upon further investigation, I'm not sure the view\n> > > > > > > > > pg_stat_subscription_stats is being properly populated.\n> > > > > > > > >\n> > > > > > > >\n> > > > > > > > I have tried the below scenario based on this:\n> > > > > > > > Step:1 Create some data that generates conflicts and lead to apply\n> > > > > > > > failures and then check in the view:\n> > > > > > >\n> > > > > > > I think the problem is present when there was *no* conflict\n> > > > > > > previously. Because nothing populates the stats entry without an error, the\n> > > > > > > reset doesn't have anything to set the stats_reset field in, which then means\n> > > > > > > that the stats_reset field is NULL even though stats have been reset.\n> > > > > >\n> > > > > > Yes, this is what I meant. stats_reset is not initialized and without\n> > > > > > any conflict happening to populate the stats, after resetting the stats,\n> > > > > > the field still does not get populated. I think this is a bit\n> > > > > > unexpected.\n> > > > > >\n> > > > > > psql (15devel)\n> > > > > > Type \"help\" for help.\n> > > > > >\n> > > > > > mplageman=# select * from pg_stat_subscription_stats ;\n> > > > > > subid | subname | apply_error_count | sync_error_count | stats_reset\n> > > > > > -------+---------+-------------------+------------------+-------------\n> > > > > > 16398 | mysub | 0 | 0 |\n> > > > > > (1 row)\n> > > > > >\n> > > > > > mplageman=# select pg_stat_reset_subscription_stats(16398);\n> > > > > > pg_stat_reset_subscription_stats\n> > > > > > ----------------------------------\n> > > > > >\n> > > > > > (1 row)\n> > > > > >\n> > > > > > mplageman=# select * from pg_stat_subscription_stats ;\n> > > > > > subid | subname | apply_error_count | sync_error_count | stats_reset\n> > > > > > -------+---------+-------------------+------------------+-------------\n> > > > > > 16398 | mysub | 0 | 0 |\n> > > > > > (1 row)\n> > > > > >\n> > > > >\n> > > > > Looking at other statistics such as replication slots, shared stats,\n> > > > > and SLRU stats, it makes sense that resetting it populates the stats.\n> > > > > So we need to fix this issue.\n> > > > >\n> > > > > However, I think the proposed fix has two problems; it can create an\n> > > > > entry for non-existing subscriptions if the user directly calls\n> > > > > function pg_stat_get_subscription_stats(), and stats_reset value is\n> > > > > not updated in the stats file as it is not done by the stats\n> > > > > collector.\n> > > >\n> > > > You are right. My initial patch was incorrect.\n> > > >\n> > > > Thinking about it more, the initial behavior is technically the same for\n> > > > pg_stat_database. It is just that I didn't notice because you end up\n> > > > creating stats for pg_stat_database so quickly that you usually never\n> > > > see them before.\n> > > >\n> > > > In pg_stat_get_db_stat_reset_time():\n> > > >\n> > > > if ((dbentry = pgstat_fetch_stat_dbentry(dbid)) == NULL)\n> > > > result = 0;\n> > > > else\n> > > > result = dbentry->stat_reset_timestamp;\n> > > >\n> > > > if (result == 0)\n> > > > PG_RETURN_NULL();\n> > > > else\n> > > > PG_RETURN_TIMESTAMPTZ(result);\n> > > >\n> > > > and in pgstat_recv_resetcounter():\n> > > >\n> > > > dbentry = pgstat_get_db_entry(msg->m_databaseid, false);\n> > > >\n> > > > if (!dbentry)\n> > > > return;\n> > > >\n> > > > Thinking about it now, though, maybe an alternative solution would be to\n> > > > have all columns or all columns except the subid/subname or dbname/dboid\n> > > > be NULL until the statistics have been created, at which point the\n> > > > reset_timestamp is populated with the current timestamp.\n> > >\n> > > It's true that stats_reset is NULL if the statistics of database are\n> > > not created yet.\n> > >\n> >\n> > So, if the behavior is the same as pg_stat_database, do we really want\n> > to change anything in this regard?\n>\n> Both pg_stat_database and pg_stat_subscription_stats work similarly in\n> principle but they work differently in practice since there are more\n> chances to create the database stats entry such as connections,\n> disconnections, and autovacuum than the subscription stats entry. I\n> think that the issue reported by Melanie is valid and perhaps most\n> users would expect the same behavior as other statistics.\n\nWhile looking at this issue again, I realized there seems to be two\nproblems with subscription stats on shmem stats:\n\nFirstly, we call pgstat_create_subscription() when creating a\nsubscription but the subscription stats are reported by apply workers.\nAnd pgstat_create_subscription() just calls\npgstat_create_transactional():\n\nvoid\npgstat_create_subscription(Oid subid)\n{\n pgstat_create_transactional(PGSTAT_KIND_SUBSCRIPTION,\n InvalidOid, subid);\n}\n\nI guess calling pgstat_create_subscription() is not necessary for the\ncurrent usage. On the other hand, if we create the subscription stats\nthere we can resolve the issue Melanie reported in this thread.\n\nThe second problem is that the following code in DropSubscription()\nshould be updated:\n\n /*\n * Tell the cumulative stats system that the subscription is getting\n * dropped. We can safely report dropping the subscription statistics here\n * if the subscription is associated with a replication slot since we\n * cannot run DROP SUBSCRIPTION inside a transaction block. Subscription\n * statistics will be removed later by (auto)vacuum either if it's not\n * associated with a replication slot or if the message for dropping the\n * subscription gets lost.\n */\n if (slotname)\n pgstat_drop_subscription(subid);\n\nI think we can call pgstat_drop_subscription() even if slotname is\nNULL and need to update the comment. IIUC autovacuum is no longer\nresponsible for garbage collection.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 1 Jul 2022 10:41:55 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On Fri, Jul 1, 2022 at 7:12 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Mar 16, 2022 at 11:34 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n>\n> While looking at this issue again, I realized there seems to be two\n> problems with subscription stats on shmem stats:\n>\n> Firstly, we call pgstat_create_subscription() when creating a\n> subscription but the subscription stats are reported by apply workers.\n> And pgstat_create_subscription() just calls\n> pgstat_create_transactional():\n>\n> void\n> pgstat_create_subscription(Oid subid)\n> {\n> pgstat_create_transactional(PGSTAT_KIND_SUBSCRIPTION,\n> InvalidOid, subid);\n> }\n>\n> I guess calling pgstat_create_subscription() is not necessary for the\n> current usage. On the other hand, if we create the subscription stats\n> there we can resolve the issue Melanie reported in this thread.\n>\n\nIt won't create the stats entry in the shared hash table, so the\nbehavior should be the same as without shared stats. I am not sure we\nneed to do anything for this one.\n\n> The second problem is that the following code in DropSubscription()\n> should be updated:\n>\n> /*\n> * Tell the cumulative stats system that the subscription is getting\n> * dropped. We can safely report dropping the subscription statistics here\n> * if the subscription is associated with a replication slot since we\n> * cannot run DROP SUBSCRIPTION inside a transaction block. Subscription\n> * statistics will be removed later by (auto)vacuum either if it's not\n> * associated with a replication slot or if the message for dropping the\n> * subscription gets lost.\n> */\n> if (slotname)\n> pgstat_drop_subscription(subid);\n>\n> I think we can call pgstat_drop_subscription() even if slotname is\n> NULL and need to update the comment.\n>\n\n+1.\n\n> IIUC autovacuum is no longer\n> responsible for garbage collection.\n>\n\nRight, this is my understanding as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 1 Jul 2022 11:31:09 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On Fri, Jul 1, 2022 at 3:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 1, 2022 at 7:12 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Mar 16, 2022 at 11:34 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> >\n> > While looking at this issue again, I realized there seems to be two\n> > problems with subscription stats on shmem stats:\n> >\n> > Firstly, we call pgstat_create_subscription() when creating a\n> > subscription but the subscription stats are reported by apply workers.\n> > And pgstat_create_subscription() just calls\n> > pgstat_create_transactional():\n> >\n> > void\n> > pgstat_create_subscription(Oid subid)\n> > {\n> > pgstat_create_transactional(PGSTAT_KIND_SUBSCRIPTION,\n> > InvalidOid, subid);\n> > }\n> >\n> > I guess calling pgstat_create_subscription() is not necessary for the\n> > current usage. On the other hand, if we create the subscription stats\n> > there we can resolve the issue Melanie reported in this thread.\n> >\n>\n> It won't create the stats entry in the shared hash table, so the\n> behavior should be the same as without shared stats.\n\nYes, my point is that it may be misleading that the subscription stats\nare created when a subscription is created. The initial behavior is\ntechnically the same for pg_stat_database. That is, we don't create\nthe stats entry for them when creating the object. But we don’t call\npgstat_create_transactional when creating a database (we don’t have a\nfunction like pgstat_create_database()) whereas we do for subscription\nstats.\n\nOn the other hand, I'm not sure we agreed that the behavior that\nMelanie reported is not a problem. The user might get confused since\nthe subscription stats works differently than other stats when a\nreset. Previously, the primary reason why I hesitated to create the\nsubscription stats when creating a subscription is that CREATE\nSUBSCRIPTION (with create_slot = false) can be rolled back. But with\nthe shmem stats, we can easily resolve it by using\npgstat_create_transactional().\n\n>\n> > The second problem is that the following code in DropSubscription()\n> > should be updated:\n> >\n> > /*\n> > * Tell the cumulative stats system that the subscription is getting\n> > * dropped. We can safely report dropping the subscription statistics here\n> > * if the subscription is associated with a replication slot since we\n> > * cannot run DROP SUBSCRIPTION inside a transaction block. Subscription\n> > * statistics will be removed later by (auto)vacuum either if it's not\n> > * associated with a replication slot or if the message for dropping the\n> > * subscription gets lost.\n> > */\n> > if (slotname)\n> > pgstat_drop_subscription(subid);\n> >\n> > I think we can call pgstat_drop_subscription() even if slotname is\n> > NULL and need to update the comment.\n> >\n>\n> +1.\n>\n> > IIUC autovacuum is no longer\n> > responsible for garbage collection.\n> >\n>\n> Right, this is my understanding as well.\n\nThank you for the confirmation.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 1 Jul 2022 16:08:48 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-01 10:41:55 +0900, Masahiko Sawada wrote:\n> While looking at this issue again, I realized there seems to be two\n> problems with subscription stats on shmem stats:\n>\n> Firstly, we call pgstat_create_subscription() when creating a\n> subscription but the subscription stats are reported by apply workers.\n\nWhy is it relevant where the stats are reported?\n\n\n> And pgstat_create_subscription() just calls\n> pgstat_create_transactional():\n>\n> void\n> pgstat_create_subscription(Oid subid)\n> {\n> pgstat_create_transactional(PGSTAT_KIND_SUBSCRIPTION,\n> InvalidOid, subid);\n> }\n>\n> I guess calling pgstat_create_subscription() is not necessary for the\n> current usage.\n\nIt ensures that the stats are dropped if the subscription fails to be created\npartway through / the transaction is aborted. There's probably no way for that\nto happen today, but it still seems the right thing.\n\n\n> On the other hand, if we create the subscription stats\n> there we can resolve the issue Melanie reported in this thread.\n\nI am confused what the place of creation addresses?\n\n\n> The second problem is that the following code in DropSubscription()\n> should be updated:\n>\n> /*\n> * Tell the cumulative stats system that the subscription is getting\n> * dropped. We can safely report dropping the subscription statistics here\n> * if the subscription is associated with a replication slot since we\n> * cannot run DROP SUBSCRIPTION inside a transaction block. Subscription\n> * statistics will be removed later by (auto)vacuum either if it's not\n> * associated with a replication slot or if the message for dropping the\n> * subscription gets lost.\n> */\n> if (slotname)\n> pgstat_drop_subscription(subid);\n>\n> I think we can call pgstat_drop_subscription() even if slotname is\n> NULL and need to update the comment. IIUC autovacuum is no longer\n> responsible for garbage collection.\n\nYep, that needs to be updated.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 1 Jul 2022 10:48:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-01 16:08:48 +0900, Masahiko Sawada wrote:\n> Yes, my point is that it may be misleading that the subscription stats\n> are created when a subscription is created.\n\nI think it's important to create stats at that time, because otherwise it's\nbasically impossible to ensure that stats are dropped when a transaction rolls\nback. If some / all columns should return something else before stats are\nreported that can be addressed easily by tracking that in a separate field.\n\n\n> On the other hand, I'm not sure we agreed that the behavior that\n> Melanie reported is not a problem. The user might get confused since\n> the subscription stats works differently than other stats when a\n> reset. Previously, the primary reason why I hesitated to create the\n> subscription stats when creating a subscription is that CREATE\n> SUBSCRIPTION (with create_slot = false) can be rolled back. But with\n> the shmem stats, we can easily resolve it by using\n> pgstat_create_transactional().\n\nYep.\n\n\n> > > The second problem is that the following code in DropSubscription()\n> > > should be updated:\n> > >\n> > > /*\n> > > * Tell the cumulative stats system that the subscription is getting\n> > > * dropped. We can safely report dropping the subscription statistics here\n> > > * if the subscription is associated with a replication slot since we\n> > > * cannot run DROP SUBSCRIPTION inside a transaction block. Subscription\n> > > * statistics will be removed later by (auto)vacuum either if it's not\n> > > * associated with a replication slot or if the message for dropping the\n> > > * subscription gets lost.\n> > > */\n> > > if (slotname)\n> > > pgstat_drop_subscription(subid);\n> > >\n> > > I think we can call pgstat_drop_subscription() even if slotname is\n> > > NULL and need to update the comment.\n> > >\n> >\n> > +1.\n> >\n> > > IIUC autovacuum is no longer\n> > > responsible for garbage collection.\n> > >\n> >\n> > Right, this is my understanding as well.\n> \n> Thank you for the confirmation.\n\nWant to propose a patch?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 1 Jul 2022 10:53:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On Sat, Jul 2, 2022 at 2:53 Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-07-01 16:08:48 +0900, Masahiko Sawada wrote:\n> > Yes, my point is that it may be misleading that the subscription stats\n> > are created when a subscription is created.\n>\n> I think it's important to create stats at that time, because otherwise it's\n> basically impossible to ensure that stats are dropped when a transaction\n> rolls\n> back. If some / all columns should return something else before stats are\n> reported that can be addressed easily by tracking that in a separate field.\n>\n>\n> > On the other hand, I'm not sure we agreed that the behavior that\n> > Melanie reported is not a problem. The user might get confused since\n> > the subscription stats works differently than other stats when a\n> > reset. Previously, the primary reason why I hesitated to create the\n> > subscription stats when creating a subscription is that CREATE\n> > SUBSCRIPTION (with create_slot = false) can be rolled back. But with\n> > the shmem stats, we can easily resolve it by using\n> > pgstat_create_transactional().\n>\n> Yep.\n>\n>\n> > > > The second problem is that the following code in DropSubscription()\n> > > > should be updated:\n> > > >\n> > > > /*\n> > > > * Tell the cumulative stats system that the subscription is\n> getting\n> > > > * dropped. We can safely report dropping the subscription\n> statistics here\n> > > > * if the subscription is associated with a replication slot\n> since we\n> > > > * cannot run DROP SUBSCRIPTION inside a transaction block.\n> Subscription\n> > > > * statistics will be removed later by (auto)vacuum either if\n> it's not\n> > > > * associated with a replication slot or if the message for\n> dropping the\n> > > > * subscription gets lost.\n> > > > */\n> > > > if (slotname)\n> > > > pgstat_drop_subscription(subid);\n> > > >\n> > > > I think we can call pgstat_drop_subscription() even if slotname is\n> > > > NULL and need to update the comment.\n> > > >\n> > >\n> > > +1.\n> > >\n> > > > IIUC autovacuum is no longer\n> > > > responsible for garbage collection.\n> > > >\n> > >\n> > > Right, this is my understanding as well.\n> >\n> > Thank you for the confirmation.\n>\n> Want to propose a patch?\n\n\nYes, I’ll propose a patch.\n\nRegards,\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Sat, 2 Jul 2022 09:52:41 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On Sat, Jul 2, 2022 at 9:52 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n>\n>\n> On Sat, Jul 2, 2022 at 2:53 Andres Freund <andres@anarazel.de> wrote:\n>>\n>> Hi,\n>>\n>> On 2022-07-01 16:08:48 +0900, Masahiko Sawada wrote:\n>> > Yes, my point is that it may be misleading that the subscription stats\n>> > are created when a subscription is created.\n>>\n>> I think it's important to create stats at that time, because otherwise it's\n>> basically impossible to ensure that stats are dropped when a transaction rolls\n>> back. If some / all columns should return something else before stats are\n>> reported that can be addressed easily by tracking that in a separate field.\n>>\n>>\n>> > On the other hand, I'm not sure we agreed that the behavior that\n>> > Melanie reported is not a problem. The user might get confused since\n>> > the subscription stats works differently than other stats when a\n>> > reset. Previously, the primary reason why I hesitated to create the\n>> > subscription stats when creating a subscription is that CREATE\n>> > SUBSCRIPTION (with create_slot = false) can be rolled back. But with\n>> > the shmem stats, we can easily resolve it by using\n>> > pgstat_create_transactional().\n>>\n>> Yep.\n>>\n>>\n>> > > > The second problem is that the following code in DropSubscription()\n>> > > > should be updated:\n>> > > >\n>> > > > /*\n>> > > > * Tell the cumulative stats system that the subscription is getting\n>> > > > * dropped. We can safely report dropping the subscription statistics here\n>> > > > * if the subscription is associated with a replication slot since we\n>> > > > * cannot run DROP SUBSCRIPTION inside a transaction block. Subscription\n>> > > > * statistics will be removed later by (auto)vacuum either if it's not\n>> > > > * associated with a replication slot or if the message for dropping the\n>> > > > * subscription gets lost.\n>> > > > */\n>> > > > if (slotname)\n>> > > > pgstat_drop_subscription(subid);\n>> > > >\n>> > > > I think we can call pgstat_drop_subscription() even if slotname is\n>> > > > NULL and need to update the comment.\n>> > > >\n>> > >\n>> > > +1.\n>> > >\n>> > > > IIUC autovacuum is no longer\n>> > > > responsible for garbage collection.\n>> > > >\n>> > >\n>> > > Right, this is my understanding as well.\n>> >\n>> > Thank you for the confirmation.\n>>\n>> Want to propose a patch?\n>\n>\n> Yes, I’ll propose a patch.\n>\n\nI've attached the patch, fix_drop_subscriptions_stats.patch, to fix it.\n\nI've also attached another PoC patch,\npoc_create_subscription_stats.patch, to create the stats entry when\ncreating the subscription, which address the issue reported in this\nthread; the pg_stat_reset_subscription_stats() doesn't update the\nstats_reset if no error is reported yet.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 4 Jul 2022 11:01:01 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-04 11:01:01 +0900, Masahiko Sawada wrote:\n> I've attached the patch, fix_drop_subscriptions_stats.patch, to fix it.\n\nLGTM. Unless somebody sees a reason not to, I'm planning to commit that to 15\nand HEAD.\n\n\n> I've also attached another PoC patch,\n> poc_create_subscription_stats.patch, to create the stats entry when\n> creating the subscription, which address the issue reported in this\n> thread; the pg_stat_reset_subscription_stats() doesn't update the\n> stats_reset if no error is reported yet.\n\nIt'd be good for this to include a test.\n\n\n> diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c\n> index e1072bd5ba..ef318b7422 100644\n> --- a/src/backend/utils/activity/pgstat_subscription.c\n> +++ b/src/backend/utils/activity/pgstat_subscription.c\n> @@ -47,8 +47,20 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)\n> void\n> pgstat_create_subscription(Oid subid)\n> {\n> +\tPgStat_EntryRef *entry_ref;\n> +\tPgStatShared_Subscription *shstatent;\n> +\n> \tpgstat_create_transactional(PGSTAT_KIND_SUBSCRIPTION,\n> \t\t\t\t\t\t\t\tInvalidOid, subid);\n> +\n> +\tentry_ref = pgstat_get_entry_ref_locked(PGSTAT_KIND_SUBSCRIPTION,\n> +\t\t\t\t\t\t\t\t\t\t\tInvalidOid, subid,\n> +\t\t\t\t\t\t\t\t\t\t\tfalse);\n> +\tshstatent = (PgStatShared_Subscription *) entry_ref->shared_stats;\n> +\n> +\tmemset(&shstatent->stats, 0, sizeof(shstatent->stats));\n> +\n> +\tpgstat_unlock_entry(entry_ref);\n> }\n> \n> /*\n\nI think most of this could just be pgstat_reset_entry().\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 5 Jul 2022 14:52:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On Wed, Jul 6, 2022 at 6:52 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-07-04 11:01:01 +0900, Masahiko Sawada wrote:\n> > I've attached the patch, fix_drop_subscriptions_stats.patch, to fix it.\n>\n> LGTM. Unless somebody sees a reason not to, I'm planning to commit that to 15\n> and HEAD.\n>\n>\n> > I've also attached another PoC patch,\n> > poc_create_subscription_stats.patch, to create the stats entry when\n> > creating the subscription, which address the issue reported in this\n> > thread; the pg_stat_reset_subscription_stats() doesn't update the\n> > stats_reset if no error is reported yet.\n>\n> It'd be good for this to include a test.\n\nAgreed.\n\n>\n>\n> > diff --git a/src/backend/utils/activity/pgstat_subscription.c b/src/backend/utils/activity/pgstat_subscription.c\n> > index e1072bd5ba..ef318b7422 100644\n> > --- a/src/backend/utils/activity/pgstat_subscription.c\n> > +++ b/src/backend/utils/activity/pgstat_subscription.c\n> > @@ -47,8 +47,20 @@ pgstat_report_subscription_error(Oid subid, bool is_apply_error)\n> > void\n> > pgstat_create_subscription(Oid subid)\n> > {\n> > + PgStat_EntryRef *entry_ref;\n> > + PgStatShared_Subscription *shstatent;\n> > +\n> > pgstat_create_transactional(PGSTAT_KIND_SUBSCRIPTION,\n> > InvalidOid, subid);\n> > +\n> > + entry_ref = pgstat_get_entry_ref_locked(PGSTAT_KIND_SUBSCRIPTION,\n> > + InvalidOid, subid,\n> > + false);\n> > + shstatent = (PgStatShared_Subscription *) entry_ref->shared_stats;\n> > +\n> > + memset(&shstatent->stats, 0, sizeof(shstatent->stats));\n> > +\n> > + pgstat_unlock_entry(entry_ref);\n> > }\n> >\n> > /*\n>\n> I think most of this could just be pgstat_reset_entry().\n\nI think pgstat_reset_entry() doesn't work for this case as it skips\nresetting the entry if it doesn't exist.\n\nI've attached an updated patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Wed, 6 Jul 2022 10:25:02 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On 2022-07-06 10:25:02 +0900, Masahiko Sawada wrote:\n> > I think most of this could just be pgstat_reset_entry().\n> \n> I think pgstat_reset_entry() doesn't work for this case as it skips\n> resetting the entry if it doesn't exist.\n\nTrue - but a pgstat_get_entry_ref(create = true); pgstat_reset_entry(); would\nstill be shorter?\n\n\n",
"msg_date": "Tue, 5 Jul 2022 18:48:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On Wed, Jul 6, 2022 at 10:48 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-07-06 10:25:02 +0900, Masahiko Sawada wrote:\n> > > I think most of this could just be pgstat_reset_entry().\n> >\n> > I think pgstat_reset_entry() doesn't work for this case as it skips\n> > resetting the entry if it doesn't exist.\n>\n> True - but a pgstat_get_entry_ref(create = true); pgstat_reset_entry(); would\n> still be shorter?\n\nIndeed. I've updated the patch.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Wed, 6 Jul 2022 11:41:46 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-06 11:41:46 +0900, Masahiko Sawada wrote:\n> diff --git a/src/test/regress/sql/subscription.sql b/src/test/regress/sql/subscription.sql\n> index 74c38ead5d..6a46956f6e 100644\n> --- a/src/test/regress/sql/subscription.sql\n> +++ b/src/test/regress/sql/subscription.sql\n> @@ -30,6 +30,12 @@ CREATE SUBSCRIPTION regress_testsub CONNECTION 'dbname=regress_doesnotexist' PUB\n> COMMENT ON SUBSCRIPTION regress_testsub IS 'test subscription';\n> SELECT obj_description(s.oid, 'pg_subscription') FROM pg_subscription s;\n> \n> +-- Check if the subscription stats are created and stats_reset is updated\n> +-- by pg_stat_reset_subscription_stats().\n> +SELECT subname, stats_reset IS NULL stats_reset_is_null FROM pg_stat_subscription_stats ORDER BY 1;\n\nWhy use ORDER BY 1 instead of just getting the stats for the subscription we\nwant to test? Seems a bit more robust to show only that one, so we don't get\nunnecessary changes if the test needs to create another subscription or such.\n\n\n> +SELECT pg_stat_reset_subscription_stats(oid) FROM pg_subscription;\n> +SELECT subname, stats_reset IS NULL stats_reset_is_null FROM pg_stat_subscription_stats ORDER BY 1;\n> +\n\nPerhaps worth resetting again and checking that the timestamp is bigger than\nthe previous timestamp? You can do that with \\gset etc.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 6 Jul 2022 08:53:06 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On 2022-07-05 14:52:45 -0700, Andres Freund wrote:\n> On 2022-07-04 11:01:01 +0900, Masahiko Sawada wrote:\n> > I've attached the patch, fix_drop_subscriptions_stats.patch, to fix it.\n> \n> LGTM. Unless somebody sees a reason not to, I'm planning to commit that to 15\n> and HEAD.\n\nPushed.\n\n\n",
"msg_date": "Wed, 6 Jul 2022 09:28:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On Thu, Jul 7, 2022 at 12:53 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-07-06 11:41:46 +0900, Masahiko Sawada wrote:\n> > diff --git a/src/test/regress/sql/subscription.sql b/src/test/regress/sql/subscription.sql\n> > index 74c38ead5d..6a46956f6e 100644\n> > --- a/src/test/regress/sql/subscription.sql\n> > +++ b/src/test/regress/sql/subscription.sql\n> > @@ -30,6 +30,12 @@ CREATE SUBSCRIPTION regress_testsub CONNECTION 'dbname=regress_doesnotexist' PUB\n> > COMMENT ON SUBSCRIPTION regress_testsub IS 'test subscription';\n> > SELECT obj_description(s.oid, 'pg_subscription') FROM pg_subscription s;\n> >\n> > +-- Check if the subscription stats are created and stats_reset is updated\n> > +-- by pg_stat_reset_subscription_stats().\n> > +SELECT subname, stats_reset IS NULL stats_reset_is_null FROM pg_stat_subscription_stats ORDER BY 1;\n>\n> Why use ORDER BY 1 instead of just getting the stats for the subscription we\n> want to test? Seems a bit more robust to show only that one, so we don't get\n> unnecessary changes if the test needs to create another subscription or such.\n\nRight, it's more robust. I've updated the patch accordingly.\n\n>\n>\n> > +SELECT pg_stat_reset_subscription_stats(oid) FROM pg_subscription;\n> > +SELECT subname, stats_reset IS NULL stats_reset_is_null FROM pg_stat_subscription_stats ORDER BY 1;\n> > +\n>\n> Perhaps worth resetting again and checking that the timestamp is bigger than\n> the previous timestamp? You can do that with \\gset etc.\n\nAgreed.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Thu, 7 Jul 2022 10:50:27 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On Thu, Jul 7, 2022 at 1:28 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-07-05 14:52:45 -0700, Andres Freund wrote:\n> > On 2022-07-04 11:01:01 +0900, Masahiko Sawada wrote:\n> > > I've attached the patch, fix_drop_subscriptions_stats.patch, to fix it.\n> >\n> > LGTM. Unless somebody sees a reason not to, I'm planning to commit that to 15\n> > and HEAD.\n>\n> Pushed.\n\nThanks!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 7 Jul 2022 10:50:39 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On 2022-07-07 10:50:27 +0900, Masahiko Sawada wrote:\n> Right, it's more robust. I've updated the patch accordingly.\n\nDo others have thoughts about backpatching this to 15 or not?\n\n\n",
"msg_date": "Mon, 11 Jul 2022 14:56:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On Tue, Jul 12, 2022 at 3:26 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-07-07 10:50:27 +0900, Masahiko Sawada wrote:\n> > Right, it's more robust. I've updated the patch accordingly.\n>\n> Do others have thoughts about backpatching this to 15 or not?\n>\n\nI am not against backpatching this but OTOH it doesn't appear critical\nenough to block one's work, so not backpatching should be fine.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 12 Jul 2022 09:31:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On Tue, Jul 12, 2022 at 09:31:16AM +0530, Amit Kapila wrote:\n> I am not against backpatching this but OTOH it doesn't appear critical\n> enough to block one's work, so not backpatching should be fine.\n\nWe are just talking about the reset timestamp not being set at\nwhen the object is created, right? This does not strike me as\ncritical, so applying it only on HEAD is fine IMO. A few months ago,\nwhile in beta, I would have been fine with something applied to\nREL_15_STABLE. Now that we are in RC, that's not worth taking a risk\nin my opinion.\n\nAmit or Andres, are you planning to double-check and perhaps merge\nthis patch to take care of the inconsistency?\n--\nMichael",
"msg_date": "Thu, 6 Oct 2022 14:10:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-06 14:10:56 +0900, Michael Paquier wrote:\n> On Tue, Jul 12, 2022 at 09:31:16AM +0530, Amit Kapila wrote:\n> > I am not against backpatching this but OTOH it doesn't appear critical\n> > enough to block one's work, so not backpatching should be fine.\n> \n> We are just talking about the reset timestamp not being set at\n> when the object is created, right? This does not strike me as\n> critical, so applying it only on HEAD is fine IMO. A few months ago,\n> while in beta, I would have been fine with something applied to\n> REL_15_STABLE. Now that we are in RC, that's not worth taking a risk\n> in my opinion.\n\nAgreed.\n\n> Amit or Andres, are you planning to double-check and perhaps merge\n> this patch to take care of the inconsistency?\n\nI'll run it through CI and then to master unless somebody pipes up in the\nmeantime.\n\nThanks for bringing this thread up, I'd lost track of it.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 Oct 2022 16:43:43 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On 2022-10-06 16:43:43 -0700, Andres Freund wrote:\n> On 2022-10-06 14:10:56 +0900, Michael Paquier wrote:\n> > On Tue, Jul 12, 2022 at 09:31:16AM +0530, Amit Kapila wrote:\n> > > I am not against backpatching this but OTOH it doesn't appear critical\n> > > enough to block one's work, so not backpatching should be fine.\n> > \n> > We are just talking about the reset timestamp not being set at\n> > when the object is created, right? This does not strike me as\n> > critical, so applying it only on HEAD is fine IMO. A few months ago,\n> > while in beta, I would have been fine with something applied to\n> > REL_15_STABLE. Now that we are in RC, that's not worth taking a risk\n> > in my opinion.\n> \n> Agreed.\n> \n> > Amit or Andres, are you planning to double-check and perhaps merge\n> > this patch to take care of the inconsistency?\n> \n> I'll run it through CI and then to master unless somebody pipes up in the\n> meantime.\n\nAnd pushed. Thanks all!\n\n\n",
"msg_date": "Thu, 6 Oct 2022 17:27:06 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On Fri, Oct 7, 2022 at 9:27 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-10-06 16:43:43 -0700, Andres Freund wrote:\n> > On 2022-10-06 14:10:56 +0900, Michael Paquier wrote:\n> > > On Tue, Jul 12, 2022 at 09:31:16AM +0530, Amit Kapila wrote:\n> > > > I am not against backpatching this but OTOH it doesn't appear critical\n> > > > enough to block one's work, so not backpatching should be fine.\n> > >\n> > > We are just talking about the reset timestamp not being set at\n> > > when the object is created, right? This does not strike me as\n> > > critical, so applying it only on HEAD is fine IMO. A few months ago,\n> > > while in beta, I would have been fine with something applied to\n> > > REL_15_STABLE. Now that we are in RC, that's not worth taking a risk\n> > > in my opinion.\n> >\n> > Agreed.\n> >\n> > > Amit or Andres, are you planning to double-check and perhaps merge\n> > > this patch to take care of the inconsistency?\n> >\n> > I'll run it through CI and then to master unless somebody pipes up in the\n> > meantime.\n>\n> And pushed. Thanks all!\n\nThanks!\n\nRegards,\n\n-- \nMasahiko Sawada\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 7 Oct 2022 09:33:57 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
},
{
"msg_contents": "On Thu, Oct 06, 2022 at 04:43:43PM -0700, Andres Freund wrote:\n> Thanks for bringing this thread up, I'd lost track of it.\n\nThe merit goes to Sawada-san here, who has poked me about this thread\n:p\n--\nMichael",
"msg_date": "Fri, 7 Oct 2022 10:08:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Issue with pg_stat_subscription_stats"
}
] |
[
{
"msg_contents": "Hi,\nw.r.t. v5-0003-Teach-AcquireExecutorLocks-to-skip-locking-pruned.patch :\n\n(pruning steps containing expressions that can be computed before\nbefore the executor proper has started)\n\nthe word 'before' was repeated.\n\nFor ExecInitParallelPlan():\n\n+ char *execlockrelsinfo_data;\n+ char *execlockrelsinfo_space;\n\nthe content of execlockrelsinfo_data is copied into execlockrelsinfo_space.\nI wonder if having one of execlockrelsinfo_data and\nexeclockrelsinfo_space suffices.\n\nCheers\n\nHi,w.r.t. v5-0003-Teach-AcquireExecutorLocks-to-skip-locking-pruned.patch :(pruning steps containing expressions that can be computed beforebefore the executor proper has started)the word 'before' was repeated.For ExecInitParallelPlan():+ char *execlockrelsinfo_data;+ char *execlockrelsinfo_space;the content of execlockrelsinfo_data is copied into execlockrelsinfo_space.I wonder if having one of execlockrelsinfo_data and execlockrelsinfo_space suffices.Cheers",
"msg_date": "Fri, 11 Mar 2022 14:09:16 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: generic plans and \"initial\" pruning"
}
] |
[
{
"msg_contents": "Hello PGSQL Mailing List,\nI am Arjun a 2nd year CSE Student and I am eager to contribute to this\ncommunity and was interested in the GUI System Admin Dashboard project\nmentioned here\n<https://wiki.postgresql.org/wiki/GSoC_2022#GUI_representation_of_monitoring_System_Activity_with_the_system_stats_Extension_in_pgAdmin_4>\n .\n\nI am a React-Nextjs Web developer and am familiar with typescript.\nI have created a project with React and is available on my github page\n<https://github.com/Arjun31415/amazon-clone> and hosted with Firebase\n<https://clone-5943a.web.app>.\nI am also quite familiar with the basics of python-flask and jinja2\ntemplating. I also know SQL(SQL plus) and No-SQL (mongoDB) for querying\ndatabases.\nI have built couple of projects with flask -\n\n 1. Microblogging Application (Github\n <https://github.com/Arjun31415/Microblog> | Heroku\n <https://arj-microblog.herokuapp.com>)\n 2. Price Drop Notifier (Github\n <https://github.com/Arjun31415/Price-Drop-Notifier> | Heroku\n <https://arj-price-alerts.herokuapp.com>)\n\n\nI am willing to take up a proficiency test if required.\n\nRegarding this project it would be helpful If someone could direct me to\nresources to utilize so that i can familiarize myself with it.\n\nHoping to hear from you soon.\nYours Sincerely\nArjun\n\nHello PGSQL Mailing List,I am Arjun a 2nd year CSE Student and I am eager to contribute to this community and was interested in the GUI System Admin Dashboard project mentioned here .I am a React-Nextjs Web developer and am familiar with typescript.I have created a project with React and is available on my github page and hosted with Firebase.I am also quite familiar with the basics of python-flask and jinja2 templating. I also know SQL(SQL plus) and No-SQL (mongoDB) for querying databases. 
I have built couple of projects with flask -Microblogging Application (Github | Heroku)Price Drop Notifier (Github | Heroku)I am willing to take up a proficiency test if required.Regarding this project it would be helpful If someone could direct me to resources to utilize so that i can familiarize myself with it. Hoping to hear from you soon.Yours SincerelyArjun",
"msg_date": "Sat, 12 Mar 2022 21:42:07 +0530",
"msg_from": "Arjun Prashanth <arjunp0710@gmail.com>",
"msg_from_op": true,
"msg_subject": "[GSOC 22] GUI representation of monitoring System Activity with the\n system_stats Extension in pgAdmin 4"
}
] |
[
{
"msg_contents": "Hi,\n\nI noticed that the following commands \"CREATE PUBLICATION pub1 FOR ALL\nTABLES IN SCHEMA\" and \"ALTER PUBLICATION pub1 ADD ALL TABLES IN\nSCHEMA\" does not complete with the schema list. I feel this is because\nof the following code in tab-complete.c:\n.........\nCOMPLETE_WITH_QUERY_PLUS(Query_for_list_of_schemas\n\" AND nspname NOT LIKE E'pg\\\\\\\\_%'\",\n\"CURRENT_SCHEMA\");\n.........\nHere \"pg\\\\\\\\_%\" should be \"pg\\\\\\\\_%%\".\nAttached a patch to handle this.\nThoughts?\n\nRegards,\nVignesh",
"msg_date": "Sun, 13 Mar 2022 22:33:43 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Tab completion not listing schema list for create/alter publication\n for all tables in schema"
},
{
"msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n> Here \"pg\\\\\\\\_%\" should be \"pg\\\\\\\\_%%\".\n\nRight you are. Patch pushed, thanks!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 13 Mar 2022 19:53:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion not listing schema list for create/alter\n publication for all tables in schema"
},
{
"msg_contents": "On Mon, Mar 14, 2022 at 5:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> vignesh C <vignesh21@gmail.com> writes:\n> > Here \"pg\\\\\\\\_%\" should be \"pg\\\\\\\\_%%\".\n>\n> Right you are. Patch pushed, thanks!\n\nThanks for pushing the patch.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 14 Mar 2022 17:47:16 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tab completion not listing schema list for create/alter\n publication for all tables in schema"
}
] |
[
{
"msg_contents": "Hello!\r\n\r\nI propose the attached patch to be applied on the 'master' branch of PostgreSQL\r\nto fix various spelling errors.\r\n\r\nMost fixes are in comments and have no effect on functionality. Some fixes are\r\nalso in variable names but they should be safe to change, as the change is\r\nconsistent in all occurrences of the variable.\r\n\r\nOtto Kekalainen\r\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 14 Mar 2022 23:03:50 +0000",
"msg_from": "\"Kekalainen, Otto\" <ottoke@amazon.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Fix various spelling errors"
},
{
"msg_contents": "On Mon, Mar 14, 2022 at 11:03:50PM +0000, Kekalainen, Otto wrote:\n> Hello!\n> \n> I propose the attached patch to be applied on the 'master' branch of PostgreSQL\n> to fix various spelling errors.\n> \n> Most fixes are in comments and have no effect on functionality. Some fixes are\n> also in variable names but they should be safe to change, as the change is\n> consistent in all occurrences of the variable.\n\nLGTM - I found a few of these myself.\nAttached now, in case it's useful to handle them together.",
"msg_date": "Mon, 14 Mar 2022 18:49:07 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix various spelling errors"
},
{
"msg_contents": "On Mon, Mar 14, 2022 at 06:49:07PM -0500, Justin Pryzby wrote:\n> On Mon, Mar 14, 2022 at 11:03:50PM +0000, Kekalainen, Otto wrote:\n>> I propose the attached patch to be applied on the 'master' branch of PostgreSQL\n>> to fix various spelling errors.\n>> \n>> Most fixes are in comments and have no effect on functionality. Some fixes are\n>> also in variable names but they should be safe to change, as the change is\n>> consistent in all occurrences of the variable.\n> \n> LGTM - I found a few of these myself.\n> Attached now, in case it's useful to handle them together.\n\nIt is useful to group that together. I have gathered everything that\nlooked like a typo or a grammar mistake, and applied the fixes.\nThanks!\n--\nMichael",
"msg_date": "Tue, 15 Mar 2022 11:46:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix various spelling errors"
},
{
"msg_contents": "\r\nOn 2022-03-14, 19:47, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> It is useful to group that together. I have gathered everything that\r\n> looked like a typo or a grammar mistake, and applied the fixes.\r\n\r\nThat was quick, thanks!\r\n\r\nPlease next time use `git am` to import the patch so that author and other\r\ncommit metadata is kept, or if you apply patches manually then commit with `git\r\n--author` so that original author will be correct in the commit and your name\r\nwill be only in the committer field.\r\n\r\nThis was just a spelling fix so I don't care, but for other people doing more\r\nsignificant contributions, getting the git authorship and having a PostgreSQL\r\ncontribution show up on their Gihub (or Gitlab profile, or in other tools that\r\nread git commits) can be important and maybe the only reward/credit for those\r\ndoing open source on their free time.\r\n\r\nAgain, personally I don't care and I don't need Github credits for this, just\r\nstating as general advice for a better contribution process in the future.\r\n\r\n- Otto\r\n\r\n",
"msg_date": "Tue, 15 Mar 2022 16:40:42 +0000",
"msg_from": "\"Kekalainen, Otto\" <ottoke@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Fix various spelling errors"
},
{
"msg_contents": "\"Kekalainen, Otto\" <ottoke@amazon.com> writes:\n> Please next time use `git am` to import the patch so that author and other\n> commit metadata is kept, or if you apply patches manually then commit with `git\n> --author` so that original author will be correct in the commit and your name\n> will be only in the committer field.\n\nThis is not our practice. We credit authors in the body of the commit\nmessage, but we don't worry about the git metadata. git is a tool we\nhappen to be using at the moment, but it doesn't run the project,\nand in any case it's far short of being adequate for such a purpose.\nWhat would you do with multiple-author patches, for a start?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 15 Mar 2022 12:46:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix various spelling errors"
}
] |
[
{
"msg_contents": "Dear Hackers\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nWhen I audit the Postgresql database recently, I found that after configuring the log type as csv, the output log content is as follows: \"database \"\"lp_db1\"\" does not exist\",,,,,\"DROP DATABASE lp_db1;\",,\"dropdb, dbcommands.c:841\",\"\",\"client backend\",,0 It is very inconvenient to understand the real meaning of each field. And in the log content,\" is escaped as \"\", which is not friendly to regular expression matching. Therefore, I want to modify the csv log function, change its format to key:value, assign the content of the non-existing field to NULL, and at the same time, \" will be escaped as \\\" in the log content. After the modification, the above log format is as follows: Log_time:\"2022-03-15 09:17:55.289 CST\",User_name:\"postgres\",Database_name:\"lp_db\",Process_id:\"17995\",Remote_host:\"192.168.88.130\",Remote_port:\"38402\",Line_number: \"622fe941.464b\",PS_display:\"DROP DATABASE\",Session_start_timestamp:\"2022-03-15 09:17:53 CST\",Virtual_transaction_id:\"3/2\",Transaction_id:\"NULL\",Error_severity:\"ERROR\",SQL_state_code :\"3D000\",Errmessage:\"database \\\"lp_db1\\\" does not exist\",Errdetail:\"NULL\",Errhint:\"NULL\",Internal_query:\"NULL\",Internal_pos:\"0\",Errcontext:\"NULL\",User_query :\"DROP DATABASE lp_db1;\",Cursorpos:\"NULL\",File_location:\"dropdb, dbcommands.c:841\",Application_name:\"NULL\",Backend_type:\"client backend\",Leader_PID:\"0\",Query_id:\"0\"\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nRegards,\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n-- \r\n-lupeng\nDear HackersWhen I audit the Postgresql database recently, I found that after configuring the log type as csv, the output log content is as follows:\n\"database \"\"lp_db1\"\" does not 
exist\",,,,,\"DROP DATABASE lp_db1;\",,\"dropdb, dbcommands.c:841\",\"\",\"client backend\",,0\nIt is very inconvenient to understand the real meaning of each field. And in the log content,\" is escaped as \"\", which is not friendly to regular expression matching.\nTherefore, I want to modify the csv log function, change its format to key:value, assign the content of the non-existing field to NULL, and at the same time, \" will be escaped as \\\" in the log content. After the modification, the above log format is as follows:\nLog_time:\"2022-03-15 09:17:55.289 CST\",User_name:\"postgres\",Database_name:\"lp_db\",Process_id:\"17995\",Remote_host:\"192.168.88.130\",Remote_port:\"38402\",Line_number: \"622fe941.464b\",PS_display:\"DROP DATABASE\",Session_start_timestamp:\"2022-03-15 09:17:53 CST\",Virtual_transaction_id:\"3/2\",Transaction_id:\"NULL\",Error_severity:\"ERROR\",SQL_state_code :\"3D000\",Errmessage:\"database \\\"lp_db1\\\" does not exist\",Errdetail:\"NULL\",Errhint:\"NULL\",Internal_query:\"NULL\",Internal_pos:\"0\",Errcontext:\"NULL\",User_query :\"DROP DATABASE lp_db1;\",Cursorpos:\"NULL\",File_location:\"dropdb, dbcommands.c:841\",Application_name:\"NULL\",Backend_type:\"client backend\",Leader_PID:\"0\",Query_id:\"0\"Regards,-- -lupeng",
"msg_date": "Tue, 15 Mar 2022 09:31:19 +0800",
"msg_from": "\"=?gb18030?B?bHVwZW5n?=\" <lpmstsc@foxmail.com>",
"msg_from_op": true,
"msg_subject": "Change the csv log to 'key:value' to facilitate the user to\n understanding and processing of logs"
},
{
"msg_contents": "Hi,\n\nOn Tue, Mar 15, 2022 at 09:31:19AM +0800, lupeng wrote:\n>\n> When I audit the Postgresql database recently, I found that after configuring\n> the log type as csv, the output log content is as follows: \"database\n> \"\"lp_db1\"\" does not exist\",,,,,\"DROP DATABASE lp_db1;\",,\"dropdb,\n> dbcommands.c:841\",\"\",\"client backend\",,0 It is very inconvenient to\n> understand the real meaning of each field. And in the log content,\" is\n> escaped as \"\", which is not friendly to regular expression matching.\n> Therefore, I want to modify the csv log function, change its format to\n> key:value, assign the content of the non-existing field to NULL, and at the\n> same time, \" will be escaped as \\\" in the log content. After the\n> modification, the above log format is as follows: Log_time:\"2022-03-15\n> 09:17:55.289\n> CST\",User_name:\"postgres\",Database_name:\"lp_db\",Process_id:\"17995\", [...]\n\nThis would make the logs a lot more verbose, and a lot less easy to process if\nyou process them with tools intended for csv files.\n\nYou should consider using the newly introduced jsonlog format (as soon as pg15\nis released), which seems closer to what you want.\n\n\n",
"msg_date": "Tue, 15 Mar 2022 20:38:34 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Change the csv log to 'key:value' to facilitate the user to\n understanding and processing of logs"
},
{
"msg_contents": "On Tue, Mar 15, 2022 at 09:31:19AM +0800, lupeng wrote:\n> Dear Hackers\n> When I audit the Postgresql database recently, I found that after\n> configuring the log type as csv, the output log content is as follows:\n> \"database \"\"lp_db1\"\" does not exist\",,,,,\"DROP DATABASE\n> lp_db1;\",,\"dropdb, dbcommands.c:841\",\"\",\"client backend\",,0 It is very\n> inconvenient to understand the real meaning of each field. And in the\n> log content,\" is escaped as \"\", which is not friendly to regular\n> expression matching. Therefore, I want to modify the csv log function,\n> change its format to key:value, assign the content of the non-existing\n> field to NULL, and at the same time, \" will be escaped as \\\" in the\n> log content. After the modification, the above log format is as\n> follows: Log_time:\"2022-03-15 09:17:55.289\n> CST\",User_name:\"postgres\",Database_name:\"lp_db\",Process_id:\"17995\",Remote_host:\"192.168.88.130\",Remote_port:\"38402\",Line_number:\n> \"622fe941.464b\",PS_display:\"DROP\n> DATABASE\",Session_start_timestamp:\"2022-03-15 09:17:53\n> CST\",Virtual_transaction_id:\"3/2\",Transaction_id:\"NULL\",Error_severity:\"ERROR\",SQL_state_code\n> :\"3D000\",Errmessage:\"database \\\"lp_db1\\\" does not\n> exist\",Errdetail:\"NULL\",Errhint:\"NULL\",Internal_query:\"NULL\",Internal_pos:\"0\",Errcontext:\"NULL\",User_query\n> :\"DROP DATABASE lp_db1;\",Cursorpos:\"NULL\",File_location:\"dropdb,\n> dbcommands.c:841\",Application_name:\"NULL\",Backend_type:\"client\n> backend\",Leader_PID:\"0\",Query_id:\"0\"\n\nCSV format is well documented\n(https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-CSVLOG).\n\nIf you want named fields you can wait for pg15 and its jsonlog\n(https://www.depesz.com/2022/01/17/waiting-for-postgresql-15-introduce-log_destinationjsonlog/).\n\nI, for one, wouldn't want to have to deal with field names repeated in\nevery single record.\n\ndepesz\n\n\n",
"msg_date": "Tue, 15 Mar 2022 14:30:32 +0100",
"msg_from": "hubert depesz lubaczewski <depesz@depesz.com>",
"msg_from_op": false,
"msg_subject": "Re: Change the csv log to 'key:value' to facilitate the user to\n understanding and processing of logs"
},
{
"msg_contents": "\nOn 3/15/22 09:30, hubert depesz lubaczewski wrote:\n> On Tue, Mar 15, 2022 at 09:31:19AM +0800, lupeng wrote:\n>> Dear Hackers\n>> When I audit the Postgresql database recently, I found that after\n>> configuring the log type as csv, the output log content is as follows:\n>> \"database \"\"lp_db1\"\" does not exist\",,,,,\"DROP DATABASE\n>> lp_db1;\",,\"dropdb, dbcommands.c:841\",\"\",\"client backend\",,0 It is very\n>> inconvenient to understand the real meaning of each field. And in the\n>> log content,\" is escaped as \"\", which is not friendly to regular\n>> expression matching. Therefore, I want to modify the csv log function,\n>> change its format to key:value, assign the content of the non-existing\n>> field to NULL, and at the same time, \" will be escaped as \\\" in the\n>> log content. After the modification, the above log format is as\n>> follows: Log_time:\"2022-03-15 09:17:55.289\n>> CST\",User_name:\"postgres\",Database_name:\"lp_db\",Process_id:\"17995\",Remote_host:\"192.168.88.130\",Remote_port:\"38402\",Line_number:\n>> \"622fe941.464b\",PS_display:\"DROP\n>> DATABASE\",Session_start_timestamp:\"2022-03-15 09:17:53\n>> CST\",Virtual_transaction_id:\"3/2\",Transaction_id:\"NULL\",Error_severity:\"ERROR\",SQL_state_code\n>> :\"3D000\",Errmessage:\"database \\\"lp_db1\\\" does not\n>> exist\",Errdetail:\"NULL\",Errhint:\"NULL\",Internal_query:\"NULL\",Internal_pos:\"0\",Errcontext:\"NULL\",User_query\n>> :\"DROP DATABASE lp_db1;\",Cursorpos:\"NULL\",File_location:\"dropdb,\n>> dbcommands.c:841\",Application_name:\"NULL\",Backend_type:\"client\n>> backend\",Leader_PID:\"0\",Query_id:\"0\"\n> CSV format is well documented\n> (https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-CSVLOG).\n>\n> If you want named fields you can wait for pg15 and its jsonlog\n> (https://www.depesz.com/2022/01/17/waiting-for-postgresql-15-introduce-log_destinationjsonlog/).\n>\n> I, for one, wouldn't want to have to 
deal with field names repeated in\n> every single record.\n>\n\nIndeed. And even if this were a good idea, which it's not, it would be\n15 years too late.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 15 Mar 2022 10:12:02 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Change the csv log to 'key:value' to facilitate the user to\n understanding and processing of logs"
},
{
"msg_contents": "On 3/15/22 10:12, Andrew Dunstan wrote:\n> \n> On 3/15/22 09:30, hubert depesz lubaczewski wrote:\n>> On Tue, Mar 15, 2022 at 09:31:19AM +0800, lupeng wrote:\n>>> Dear Hackers\n>>> When I audit the Postgresql database recently, I found that after\n>>> configuring the log type as csv, the output log content is as follows:\n>>> \"database \"\"lp_db1\"\" does not exist\",,,,,\"DROP DATABASE\n>>> lp_db1;\",,\"dropdb, dbcommands.c:841\",\"\",\"client backend\",,0 It is very\n>>> inconvenient to understand the real meaning of each field. And in the\n>>> log content,\" is escaped as \"\", which is not friendly to regular\n>>> expression matching. Therefore, I want to modify the csv log function,\n>>> change its format to key:value, assign the content of the non-existing\n>>> field to NULL, and at the same time, \" will be escaped as \\\" in the\n>>> log content. After the modification, the above log format is as\n>>> follows: Log_time:\"2022-03-15 09:17:55.289\n>>> CST\",User_name:\"postgres\",Database_name:\"lp_db\",Process_id:\"17995\",Remote_host:\"192.168.88.130\",Remote_port:\"38402\",Line_number:\n>>> \"622fe941.464b\",PS_display:\"DROP\n>>> DATABASE\",Session_start_timestamp:\"2022-03-15 09:17:53\n>>> CST\",Virtual_transaction_id:\"3/2\",Transaction_id:\"NULL\",Error_severity:\"ERROR\",SQL_state_code\n>>> :\"3D000\",Errmessage:\"database \\\"lp_db1\\\" does not\n>>> exist\",Errdetail:\"NULL\",Errhint:\"NULL\",Internal_query:\"NULL\",Internal_pos:\"0\",Errcontext:\"NULL\",User_query\n>>> :\"DROP DATABASE lp_db1;\",Cursorpos:\"NULL\",File_location:\"dropdb,\n>>> dbcommands.c:841\",Application_name:\"NULL\",Backend_type:\"client\n>>> backend\",Leader_PID:\"0\",Query_id:\"0\"\n>> CSV format is well documented\n>> (https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-CSVLOG).\n>>\n>> If you want named fields you can wait for pg15 and its jsonlog\n>> 
(https://www.depesz.com/2022/01/17/waiting-for-postgresql-15-introduce-log_destinationjsonlog/).\n>>\n>> I, for one, wouldn't want to have to deal with field names repeated in\n>> every single record.\n>>\n> \n> Indeed. And even if this were a good idea, which it's not, it would be\n> 15 years too late.\n\nAlso, the CSV format, while human readable to a degree, wasn't meant for \ndirect, human consumption. It was meant to be read by programs and at \nthe time, CSV made the most sense.\n\n\nRegards, Jan\n\n\n",
"msg_date": "Tue, 15 Mar 2022 10:33:42 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: Change the csv log to 'key:value' to facilitate the user to\n understanding and processing of logs"
},
{
"msg_contents": "On Tue, Mar 15, 2022 at 10:33:42AM -0400, Jan Wieck wrote:\n> Also, the CSV format, while human readable to a degree, wasn't meant for\n> direct, human consumption. It was meant to be read by programs and at the\n> time, CSV made the most sense.\n\nFWIW, I have noticed that this patch was still listed in the next CF,\nwith a reference to an incorrect thread:\nhttps://commitfest.postgresql.org/38/3591/\n\nI have updated the CF entry to poin to this thread, and it is clear\nthat csvlog is not going to change now so this patch status has been\nswitched to rejected.\n--\nMichael",
"msg_date": "Thu, 7 Apr 2022 14:40:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Change the csv log to 'key:value' to facilitate the user to\n understanding and processing of logs"
}
] |
[
{
"msg_contents": "Hi All,\nKindly check the below scenario with INTERVAL datatype.\n\npostgres=# select interval '01 20:59:59' + interval '00 05:00:01' as\ninterval;\n interval\n----------------\n 1 day 26:00:00\n(1 row)\n\nAny operation with INTERVAL data, We are changing the interval values as\n\"60 sec\" as \"next minute\"\n\"60 min\" as \"next hour\"\n*Similarly can't we consider \"24 Hours\" for \"next day\" ?*\nIs there any specific purpose we are holding the hours as an increasing\nnumber beyond 24 hours also?\n\nBut when we are dealing with TIMESTAMP with INTERVAL values it's considered\nthe \"24 Hours\" for \"next day\".\n\npostgres=# select timestamp '01-MAR-22 20:59:59' + interval '00 05:00:01'\n as interval;\n interval\n---------------------\n 2022-03-02 02:00:00\n(1 row)\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com\n\nHi All,Kindly check the below scenario with INTERVAL datatype.postgres=# select interval '01 20:59:59' + interval '00 05:00:01' as interval; interval ---------------- 1 day 26:00:00(1 row)Any operation with INTERVAL data, We are changing the interval values as \"60 sec\" as \"next minute\"\"60 min\" as \"next hour\"Similarly can't we consider \"24 Hours\" for \"next day\" ?Is there any specific purpose we are holding the hours as an increasing number beyond 24 hours also?But when we are dealing with TIMESTAMP with INTERVAL values it's considered the \"24 Hours\" for \"next day\".postgres=# select timestamp '01-MAR-22 20:59:59' + interval '00 05:00:01' as interval; interval --------------------- 2022-03-02 02:00:00(1 row)-- \nWith Regards,Prabhat Kumar SahuEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 15 Mar 2022 12:54:58 +0530",
"msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Can we consider \"24 Hours\" for \"next day\" in INTERVAL datatype ?"
},
{
"msg_contents": "On Tue, 2022-03-15 at 12:54 +0530, Prabhat Sahu wrote:\n> Kindly check the below scenario with INTERVAL datatype.\n> \n> postgres=# select interval '01 20:59:59' + interval '00 05:00:01' as interval;\n> interval \n> ----------------\n> 1 day 26:00:00\n> (1 row)\n> \n> Any operation with INTERVAL data, We are changing the interval values as \n> \"60 sec\" as \"next minute\"\n> \"60 min\" as \"next hour\"\n> Similarly can't we consider \"24 Hours\" for \"next day\" ?\n> Is there any specific purpose we are holding the hours as an increasing number beyond 24 hours also?\n> \n> But when we are dealing with TIMESTAMP with INTERVAL values it's considered the \"24 Hours\" for \"next day\".\n> \n> postgres=# select timestamp '01-MAR-22 20:59:59' + interval '00 05:00:01' as interval;\n> interval \n> ---------------------\n> 2022-03-02 02:00:00\n> (1 row)\n\nThe case is different with days:\n\ntest=> SELECT TIMESTAMPTZ '2022-03-26 20:00:00 Europe/Vienna' + INTERVAL '12 hours' + INTERVAL '12 hours';\n ?column? \n════════════════════════\n 2022-03-27 21:00:00+02\n(1 row)\n\ntest=> SELECT TIMESTAMPTZ '2022-03-26 20:00:00 Europe/Vienna' + INTERVAL '1 day';\n ?column? \n════════════════════════\n 2022-03-27 20:00:00+02\n(1 row)\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Tue, 15 Mar 2022 08:40:12 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Can we consider \"24 Hours\" for \"next day\" in INTERVAL datatype ?"
},
{
"msg_contents": "Hi,\n\nOn Tue, Mar 15, 2022 at 12:54:58PM +0530, Prabhat Sahu wrote:\n>\n> Kindly check the below scenario with INTERVAL datatype.\n> \n> postgres=# select interval '01 20:59:59' + interval '00 05:00:01' as\n> interval;\n> interval\n> ----------------\n> 1 day 26:00:00\n> (1 row)\n> \n> Any operation with INTERVAL data, We are changing the interval values as\n> \"60 sec\" as \"next minute\"\n> \"60 min\" as \"next hour\"\n> *Similarly can't we consider \"24 Hours\" for \"next day\" ?*\n> Is there any specific purpose we are holding the hours as an increasing\n> number beyond 24 hours also?\n\nYes, you can't blindly assume that adding 24 hours will always be the same as\nadding a day. You can just use justify_days if you want to force that behavior.\n\n\n",
"msg_date": "Tue, 15 Mar 2022 15:46:11 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Can we consider \"24 Hours\" for \"next day\" in INTERVAL datatype ?"
},
{
"msg_contents": "On Tue, Mar 15, 2022 at 3:46 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Tue, Mar 15, 2022 at 12:54:58PM +0530, Prabhat Sahu wrote:\n> >\n> > Is there any specific purpose we are holding the hours as an increasing\n> > number beyond 24 hours also?\n>\n> Yes, you can't blindly assume that adding 24 hours will always be the same as\n> adding a day. You can just justify_days if you want to force that behavior.\n\nThe specific purpose by the way, at least according to the docs [1],\nis daylights savings time:\n> Internally interval values are stored as months, days, and microseconds. This is done because\n> the number of days in a month varies, and a day can have 23 or 25 hours if a daylight savings\n> time adjustment is involved.\nThough I suppose leap seconds may also follow similar logic.\n\n[1] https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-INTERVAL-INPUT\n\n- Joe Koshakow\n\n\n",
"msg_date": "Tue, 15 Mar 2022 08:14:18 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Can we consider \"24 Hours\" for \"next day\" in INTERVAL datatype ?"
}
] |
[
{
"msg_contents": "Add 'basebackup_to_shell' contrib module.\n\nAs a demonstration of the sort of thing that can be done by adding a\ncustom backup target, this defines a 'shell' target which executes a\ncommand defined by the system administrator. The command is executed\nonce for each tar archive generated by the backup and once for the\nbackup manifest, if any. Each time the command is executed, it\nreceives the contents of the file for which it is executed via standard\ninput.\n\nThe configured command can use %f to refer to the name of the archive\n(e.g. base.tar, $TABLESPACE_OID.tar, backup_manifest) and %d to refer\nto the target detail (pg_basebackup --target shell:DETAIL). A target\ndetail is required if %d appears in the configured command and\nforbidden if it does not.\n\nPatch by me, reviewed by Abhijit Menon-Sen.\n\nDiscussion: http://postgr.es/m/CA+TgmoaqvdT-u3nt+_kkZ7bgDAyqDB0i-+XOMmr5JN2Rd37hxw@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/c6306db24bd913375f99494e38ab315befe44e11\n\nModified Files\n--------------\ncontrib/Makefile | 1 +\ncontrib/basebackup_to_shell/Makefile | 19 +\ncontrib/basebackup_to_shell/basebackup_to_shell.c | 419 ++++++++++++++++++++++\ndoc/src/sgml/basebackup-to-shell.sgml | 69 ++++\ndoc/src/sgml/contrib.sgml | 1 +\ndoc/src/sgml/filelist.sgml | 1 +\n6 files changed, 510 insertions(+)",
"msg_date": "Tue, 15 Mar 2022 17:33:12 +0000",
"msg_from": "Robert Haas <rhaas@postgresql.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Hi,\n\nOn 2022-03-15 17:33:12 +0000, Robert Haas wrote:\n> Add 'basebackup_to_shell' contrib module.\n> \n> As a demonstration of the sort of thing that can be done by adding a\n> custom backup target, this defines a 'shell' target which executes a\n> command defined by the system administrator. The command is executed\n> once for each tar archive generate by the backup and once for the\n> backup manifest, if any. Each time the command is executed, it\n> receives the contents of th file for which it is executed via standard\n> input.\n> \n> The configured command can use %f to refer to the name of the archive\n> (e.g. base.tar, $TABLESPACE_OID.tar, backup_manifest) and %d to refer\n> to the target detail (pg_basebackup --target shell:DETAIL). A target\n> detail is required if %d appears in the configured command and\n> forbidden if it does not.\n> \n> Patch by me, reviewed by Abhijit Menon-Sen.\n\n> Modified Files\n> --------------\n> contrib/Makefile | 1 +\n> contrib/basebackup_to_shell/Makefile | 19 +\n> contrib/basebackup_to_shell/basebackup_to_shell.c | 419 ++++++++++++++++++++++\n> doc/src/sgml/basebackup-to-shell.sgml | 69 ++++\n> doc/src/sgml/contrib.sgml | 1 +\n> doc/src/sgml/filelist.sgml | 1 +\n> 6 files changed, 510 insertions(+)\n\nSeems like this ought to have at least some basic test to make sure it\nactually works / keeps working?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 15 Mar 2022 12:04:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Tue, Mar 15, 2022 at 3:04 PM Andres Freund <andres@anarazel.de> wrote:\n> Seems like this ought to have at least some basic test to make sure it\n> actually works / keeps working?\n\nWouldn't hurt, although it may be a little bit tricky to getting it\nwork portably. I'll try to take a look at it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 17 Mar 2022 11:52:09 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Thu, Mar 17, 2022 at 11:52 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Mar 15, 2022 at 3:04 PM Andres Freund <andres@anarazel.de> wrote:\n> > Seems like this ought to have at least some basic test to make sure it\n> > actually works / keeps working?\n>\n> Wouldn't hurt, although it may be a little bit tricky to getting it\n> work portably. I'll try to take a look at it.\n\nHere is a basic test. I am unable to verify whether it works on Windows.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 25 Mar 2022 12:22:09 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Hi, \n\nOn March 25, 2022 9:22:09 AM PDT, Robert Haas <robertmhaas@gmail.com> wrote:\n>On Thu, Mar 17, 2022 at 11:52 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> On Tue, Mar 15, 2022 at 3:04 PM Andres Freund <andres@anarazel.de> wrote:\n>> > Seems like this ought to have at least some basic test to make sure it\n>> > actually works / keeps working?\n>>\n>> Wouldn't hurt, although it may be a little bit tricky to getting it\n>> work portably. I'll try to take a look at it.\n>\n>Here is a basic test. I am unable to verify whether it works on Windows.\n\nCreate a CF entry for it, or enable CI on a github repo?\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Fri, 25 Mar 2022 09:36:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Fri, Mar 25, 2022 at 12:36 PM Andres Freund <andres@anarazel.de> wrote:\n> Create a CF entry for it, or enable CI on a github repo?\n\nI created a CF entry for it. Then I had to try to Google around to\nfind the URL from the cfbot, because it's not even linked from\ncommitfest.postgresql.org for some reason. #blamemagnus\n\nI don't think that the Windows CI is running the TAP tests for\ncontrib. At least, I can't find any indication of it in the output. So\nit doesn't really help to assess how portable this test is, unless I'm\nmissing something.\n\nI looked through the Linux output. It looks to me like that does run\nthe TAP tests for contrib. Unfortunately, the output is not in order\nand is also not labelled, so it's hard to tell what output goes with\nwhat contrib module. I named my test 001_basic.pl, but there are 12 of\nthose already. I see that 13 copies of 001_basic.pl seem to have\npassed CI on Linux, so I guess the test ran and passed there. It seems\nlike it would be an awfully good idea to mention the subdirectory name\nbefore each dump of output.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 25 Mar 2022 13:52:11 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "\nOn 3/25/22 13:52, Robert Haas wrote:\n> On Fri, Mar 25, 2022 at 12:36 PM Andres Freund <andres@anarazel.de> wrote:\n>> Create a CF entry for it, or enable CI on a github repo?\n> I created a CF entry for it. Then I had to try to Google around to\n> find the URL from the cfbot, because it's not even linked from\n> commitfest.postgresql.org for some reason. #blamemagnus\n>\n> I don't think that the Windows CI is running the TAP tests for\n> contrib. At least, I can't find any indication of it in the output. So\n> it doesn't really help to assess how portable this test is, unless I'm\n> missing something.\n>\n> I looked through the Linux output. It looks to me like that does run\n> the TAP tests for contrib. Unfortunately, the output is not in order\n> and is also not labelled, so it's hard to tell what output goes with\n> what contrib module. I named my test 001_basic.pl, but there are 12 of\n> those already. I see that 13 copies of 001_basic.pl seem to have\n> passed CI on Linux, so I guess the test ran and passed there. It seems\n> like it would be an awfully good idea to mention the subdirectory name\n> before each dump of output.\n\n\n\nDuplication of TAP test names has long been something that's annoyed me.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 25 Mar 2022 16:09:04 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Sat, Mar 26, 2022 at 6:52 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I don't think that the Windows CI is running the TAP tests for\n> contrib. At least, I can't find any indication of it in the output. So\n> it doesn't really help to assess how portable this test is, unless I'm\n> missing something.\n\nYeah :-( vcregress.pl doesn't yet have an easy way to run around and\nfind contrib modules with tap tests and run them, for the CI script to\ncall. (I think there was a patch somewhere? I've been bitten by the\nlack of this recently...)\n\nIn case it's helpful, here's how to run a specific contrib module's\nTAP test by explicitly adding it. That'll run once I post this email,\nbut I already ran in it my own github account and it looks like this:\n\nhttps://cirrus-ci.com/task/5637156969381888\nhttps://api.cirrus-ci.com/v1/artifact/task/5637156969381888/log/contrib/basebackup_to_shell/tmp_check/log/regress_log_001_basic\nhttps://api.cirrus-ci.com/v1/artifact/task/5637156969381888/log/contrib/basebackup_to_shell/tmp_check/log/001_basic_primary.log",
"msg_date": "Sat, 26 Mar 2022 09:55:47 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Hi,\n\nOn 2022-03-25 13:52:11 -0400, Robert Haas wrote:\n> On Fri, Mar 25, 2022 at 12:36 PM Andres Freund <andres@anarazel.de> wrote:\n> > Create a CF entry for it, or enable CI on a github repo?\n>\n> I created a CF entry for it. Then I had to try to Google around to\n> find the URL from the cfbot, because it's not even linked from\n> commitfest.postgresql.org for some reason. #blamemagnus\n\nYea, we really need to improve on that. I think Thomas has some hope of\nimproving things after the release...\n\n\n> I don't think that the Windows CI is running the TAP tests for\n> contrib. At least, I can't find any indication of it in the output. So\n> it doesn't really help to assess how portable this test is, unless I'm\n> missing something.\n\nYea. It's really unfortunate how vcregress.pl makes it hard to run all\ntests. And we're kind of stuck finding a way forward. It's easy enough to work\naround for individual tests by just adding them to the test file (like Thomas\ndid nearby), but clearly that doesn't scale. Andrew wasn't happy with\nadditional vcregress commands. The fact that vcregress doesn't run tests in\nparallel makes things take forever. And so it goes on.\n\n\n> I looked through the Linux output. It looks to me like that does run\n> the TAP tests for contrib. Unfortunately, the output is not in order\n> and is also not labelled, so it's hard to tell what output goes with\n> what contrib module. I named my test 001_basic.pl, but there are 12 of\n> those already. I see that 13 copies of 001_basic.pl seem to have\n> passed CI on Linux, so I guess the test ran and passed there. It seems\n> like it would be an awfully good idea to mention the subdirectory name\n> before each dump of output.\n\nYea, the current output is *awful*.\n\nFWIW, the way it's hard to run tests the same way across platforms, the crappy\noutput etc was one of the motivations behind the meson effort. If you just\ncompare the output from both *nix and windows runs today with the meson\noutput, it's imo night and day:\n\nhttps://cirrus-ci.com/task/5869668815601664?logs=check_world#L67\n\nThat's a recent run where I'd not properly mirrored 7c51b7f7cc0, leading to a\nfailure on windows. Though it'd be more interesting to see a run with a\nfailure.\n\nIf one wants one can also see the test output of individual tests (it's always\nlogged to a file). But I typically find that not useful for a 'general' test\nrun, too much output. In that case there's a nice list of failed tests at the\nend:\n\nSummary of Failures:\n\n144/219 postgresql:tap+vacuumlo / vacuumlo/t/001_basic.pl ERROR 0.48s (exit status 255 or signal 127 SIGinvalid)\n\n\nOk: 218\nExpected Fail: 0\nFail: 1\nUnexpected Pass: 0\nSkipped: 0\nTimeout: 0\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 25 Mar 2022 14:27:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Sat, Mar 26, 2022 at 9:55 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> https://api.cirrus-ci.com/v1/artifact/task/5637156969381888/log/contrib/basebackup_to_shell/tmp_check/log/regress_log_001_basic\n\nThis line doesn't look too healthy:\n\npg_basebackup: error: backup failed: ERROR: shell command \"type con >\n\"C:cirruscontrib asebackup_to_shell mp_check mp_test_tch3\\base.tar\"\"\nfailed\n\nI guess it's an escaping problem around \\ characters.\n\n\n",
"msg_date": "Sat, 26 Mar 2022 10:52:03 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Fri, Mar 25, 2022 at 02:27:07PM -0700, Andres Freund wrote:\n> On 2022-03-25 13:52:11 -0400, Robert Haas wrote:\n> > On Fri, Mar 25, 2022 at 12:36 PM Andres Freund <andres@anarazel.de> wrote:\n> > > Create a CF entry for it, or enable CI on a github repo?\n> >\n> > I created a CF entry for it. Then I had to try to Google around to\n> > find the URL from the cfbot, because it's not even linked from\n> > commitfest.postgresql.org for some reason. #blamemagnus\n\nI see it here (and in cfbot), although I'm not sure how you created a new\npatch for the active CF, and not for the next CF.\nhttps://commitfest.postgresql.org/37/\n\n> > I don't think that the Windows CI is running the TAP tests for\n> > contrib. At least, I can't find any indication of it in the output. So\n> > it doesn't really help to assess how portable this test is, unless I'm\n> > missing something.\n> \n> Yea. It's really unfortunate how vcregress.pl makes it hard to run all\n> tests. And we're kind of stuck finding a way forward. It's easy enough to work\n> around for individual tests by just adding them to the test file (like Thomas\n> did nearby), but clearly that doesn't scale. Andrew wasn't happy with\n> additional vcregress commands. The fact that vcregress doesn't run tests in\n> parallel makes things take forever. And so it goes on.\n\nI have a patch to add alltaptests target to vcregress. But I don't recall\nhearing any objection to new targets until now.\n\nhttps://github.com/justinpryzby/postgres/runs/5174877506\n\n\n",
"msg_date": "Fri, 25 Mar 2022 22:13:53 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Fri, Mar 25, 2022 at 4:09 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> Duplication of TAP test names has long been something that's annoyed me.\n\nWell, I think that's unwarranted. Many years ago, people discovered\nthat it was annoying if you had to distinguish files solely based on\nname, and so they invented directories and pathnames. That was a good\ncall. Displaying that information in the buildfarm output would be a\ngood call, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 26 Mar 2022 14:40:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Hi,\n\nOn 2022-03-26 14:40:06 -0400, Robert Haas wrote:\n> Well, I think that's unwarranted. Many years ago, people discovered\n> that it was annoying if you had to distinguish files solely based on\n> name, and so they invented directories and pathnames. That was a good\n> call.\n\nYea. I have no problem naming tests the same way, particularly if they do\nsimilar things. But we should show the path.\n\n\n> Displaying that information in the buildfarm output would be a good call,\n> too.\n\nI would find it very useful locally when running the tests too. A very simple\napproach would be to invoke prove with absolute paths to the tests. But that's\nnot particularly pretty. But unless we change the directory that prove is run\nin away from the directory that contains t/ (there's a thread about that, but\nmore to do), I don't think we can do better on an individual test basis?\n\nWe could just make prove_[install]check echo the $(subdir) it's about to run\ntests for? Certainly looks better to me:\n\nmake -j48 -Otarget -s -C src/bin/ check NO_TEMP_INSTALL=1\n...\n=== tap tests in src/bin/pg_resetwal ===\nt/001_basic.pl ...... ok\nt/002_corrupted.pl .. ok\nAll tests successful.\nFiles=2, Tests=18, 3 wallclock secs ( 0.01 usr 0.01 sys + 2.39 cusr 0.31 csys = 2.72 CPU)\nResult: PASS\n=== tap tests in src/bin/pg_checksums ===\nt/001_basic.pl .... ok\nt/002_actions.pl .. ok\nAll tests successful.\nFiles=2, Tests=74, 4 wallclock secs ( 0.02 usr 0.01 sys + 1.57 cusr 0.42 csys = 2.02 CPU)\nResult: PASS\n=== tap tests in src/bin/psql ===\nt/001_basic.pl ........... ok\nt/010_tab_completion.pl .. ok\nt/020_cancel.pl .......... ok\nAll tests successful.\nFiles=3, Tests=125, 6 wallclock secs ( 0.03 usr 0.00 sys + 3.65 cusr 0.56 csys = 4.24 CPU)\nResult: PASS\n...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 26 Mar 2022 12:35:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Sat, Mar 26, 2022 at 3:35 PM Andres Freund <andres@anarazel.de> wrote:\n> === tap tests in src/bin/pg_resetwal ===\n> t/001_basic.pl ...... ok\n> t/002_corrupted.pl .. ok\n> All tests successful.\n> Files=2, Tests=18, 3 wallclock secs ( 0.01 usr 0.01 sys + 2.39 cusr 0.31 csys = 2.72 CPU)\n> Result: PASS\n> === tap tests in src/bin/pg_checksums ===\n> t/001_basic.pl .... ok\n> t/002_actions.pl .. ok\n> All tests successful.\n> Files=2, Tests=74, 4 wallclock secs ( 0.02 usr 0.01 sys + 1.57 cusr 0.42 csys = 2.02 CPU)\n> Result: PASS\n\nYeah, this certainly seems like an improvement to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 26 Mar 2022 16:03:24 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "> On 26 Mar 2022, at 21:03, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Sat, Mar 26, 2022 at 3:35 PM Andres Freund <andres@anarazel.de> wrote:\n>> === tap tests in src/bin/pg_resetwal ===\n>> t/001_basic.pl ...... ok\n>> t/002_corrupted.pl .. ok\n>> All tests successful.\n>> Files=2, Tests=18, 3 wallclock secs ( 0.01 usr 0.01 sys + 2.39 cusr 0.31 csys = 2.72 CPU)\n>> Result: PASS\n>> === tap tests in src/bin/pg_checksums ===\n>> t/001_basic.pl .... ok\n>> t/002_actions.pl .. ok\n>> All tests successful.\n>> Files=2, Tests=74, 4 wallclock secs ( 0.02 usr 0.01 sys + 1.57 cusr 0.42 csys = 2.02 CPU)\n>> Result: PASS\n> \n> Yeah, this certainly seems like an improvement to me.\n\n+1, that's clearly better.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Sat, 26 Mar 2022 21:09:44 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sat, Mar 26, 2022 at 3:35 PM Andres Freund <andres@anarazel.de> wrote:\n>> === tap tests in src/bin/pg_resetwal ===\n>> t/001_basic.pl ...... ok\n>> t/002_corrupted.pl .. ok\n>> All tests successful.\n\n> Yeah, this certainly seems like an improvement to me.\n\n+1, but will it help for CI or buildfarm cases?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 26 Mar 2022 16:24:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Hi,\n\nOn 2022-03-26 16:24:32 -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Sat, Mar 26, 2022 at 3:35 PM Andres Freund <andres@anarazel.de> wrote:\n> >> === tap tests in src/bin/pg_resetwal ===\n> >> t/001_basic.pl ...... ok\n> >> t/002_corrupted.pl .. ok\n> >> All tests successful.\n>\n> > Yeah, this certainly seems like an improvement to me.\n\nDo we want to do the same for regress and isolation tests? They're mostly a bit\neasier to place, but it's still a memory retention game. Using the above\nformat for all looks a tad weird, due to pg_regress' output having kinda\nsimilar markers.\n\n...\n======================\n All 22 tests passed.\n======================\n\n=== regress tests in contrib/ltree_plpython\" ===\n============== creating temporary instance ==============\n============== initializing database system ==============\n============== starting postmaster ==============\nrunning on port 51696 with PID 3905518\n============== creating database \"contrib_regression\" ==============\nCREATE DATABASE\nALTER DATABASE\n============== installing ltree ==============\nCREATE EXTENSION\n============== running regression test queries ==============\ntest ltree_plpython ... ok 51 ms\n============== shutting down postmaster ==============\n============== removing temporary instance ==============\n...\n\n\nCould just use a different character. +++ doesn't look bad:\n+++ tap tests in contrib/test_decoding +++\nt/001_repl_stats.pl .. ok\nAll tests successful.\nFiles=1, Tests=2, 3 wallclock secs ( 0.02 usr 0.00 sys + 1.74 cusr 0.28 csys = 2.04 CPU)\nResult: PASS\n\n\nWould we want to do this in all branches? I'd vote for yes, but ...\n\nPrototype patch attached. I looked through the uses of\n pg_(isolation_)?regress_(install)?check'\nand didn't find any that'd have a problem with turning the invocation into a\nmulti-command one.\n\n\n> +1, but will it help for CI\n\nYes, it should make it considerably better (for everything but windows, but\nthat outputs separators already).\n\n\n> or buildfarm cases?\n\nProbably not much, because that largely runs tests serially with \"stage\" names\ncorresponding to the test. And when it runs multiple tests in a row it adds\nsomething similar to the above, e.g.:\n=========== Module pg_stat_statements check =============\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=peripatus&dt=2022-03-26%2000%3A20%3A30&stg=misc-check\n\nBut I think it'll still be a tad better when it runs a single test:\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=snapper&dt=2022-03-26%2018%3A46%3A28&stg=subscription-check\n\n\nMight make it more realistic to make -s, at least to run tests? The reams of\noutput like:\ngmake -C ../../../../src/test/regress pg_regress\ngmake[1]: Entering directory '/home/pgbuildfarm/buildroot/HEAD/pgsql.build/src/test/regress'\ngmake -C ../../../src/port all\ngmake[2]: Entering directory '/home/pgbuildfarm/buildroot/HEAD/pgsql.build/src/port'\ngmake[2]: Nothing to be done for 'all'.\ngmake[2]: Leaving directory '/home/pgbuildfarm/buildroot/HEAD/pgsql.build/src/port'\ngmake -C ../../../src/common all\ngmake[2]: Entering directory '/home/pgbuildfarm/buildroot/HEAD/pgsql.build/src/common'\ngmake[2]: Nothing to be done for 'all'.\n\nare quite clutter-y.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sat, 26 Mar 2022 13:51:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "> On 26 Mar 2022, at 21:24, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> +1, but will it help for CI or buildfarm cases?\n\nIsn't it both, but mostly for CI since the buildfarm already prints the path\nwhen dumping the logfile. Below is a random example snippet from the buildfarm\nwhere it's fairly easy to see 001_basic.pl being the pg_test_fsync test:\n\n/bin/prove -I ../../../src/test/perl/ -I . --timer t/*.pl\n[20:31:18] t/001_basic.pl .. ok 224 ms ( 0.00 usr 0.01 sys + 0.18 cusr 0.01 csys = 0.20 CPU)\n[20:31:18]\nAll tests successful.\nFiles=1, Tests=12, 0 wallclock secs ( 0.05 usr 0.02 sys + 0.18 cusr 0.01 csys = 0.26 CPU)\nResult: PASS\n\n\n================== pgsql.build/src/bin/pg_test_fsync/tmp_check/log/regress_log_001_basic ===================\n..\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Sat, 26 Mar 2022 21:53:38 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Sat, Mar 26, 2022 at 4:14 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I see it here (and in cfbot), although I'm not sure how you created a new\n> patch for the active CF, and not for the next CF.\n\nAnyone who has ever been a CF manager has this power, it seems. I did\nit myself once, by accident, and got told off by the active CF\nmanager.\n\n\n",
"msg_date": "Sun, 27 Mar 2022 12:12:07 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sat, Mar 26, 2022 at 4:14 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> I see it here (and in cfbot), although I'm not sure how you created a new\n>> patch for the active CF, and not for the next CF.\n\n> Anyone who has ever been a CF manager has this power, it seems. I did\n> it myself once, by accident, and got told off by the active CF\n> manager.\n\nI'm not sure what the policy is for that. I have done it myself,\nalthough I've never been a CF manager, so maybe it was granted\nto all committers?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 26 Mar 2022 19:28:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Sun, Mar 27, 2022 at 12:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Sat, Mar 26, 2022 at 4:14 PM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n> >> I see it here (and in cfbot), although I'm not sure how you created a\n> new\n> >> patch for the active CF, and not for the next CF.\n>\n> > Anyone who has ever been a CF manager has this power, it seems. I did\n> > it myself once, by accident, and got told off by the active CF\n> > manager.\n>\n> I'm not sure what the policy is for that. I have done it myself,\n> although I've never been a CF manager, so maybe it was granted\n> to all committers?\n>\n\nIt is not. In fact, you have some strange half-between power that is only\nyou and those pginfra members that are *not* developers in it... I've made\nyou a \"full cf manager\" now so it's at least consistent :)\n\nAnd yes, the way it works now is once a cf manager always a cf manager. We\nhaven't had enough of them that it's been something worth considering yet.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Mon, 28 Mar 2022 18:27:29 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Fri, Mar 25, 2022 at 5:52 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, Mar 26, 2022 at 9:55 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > https://api.cirrus-ci.com/v1/artifact/task/5637156969381888/log/contrib/basebackup_to_shell/tmp_check/log/regress_log_001_basic\n>\n> This line doesn't look too healthy:\n>\n> pg_basebackup: error: backup failed: ERROR: shell command \"type con >\n> \"C:cirruscontrib asebackup_to_shell mp_check mp_test_tch3\\base.tar\"\"\n> failed\n>\n> I guess it's an escaping problem around \\ characters.\n\nOh, right. I didn't copy the usual incantation as completely as I\nshould have done.\n\nHere's a new version, hopefully rectifying that deficiency. I also add\na second patch here, documenting basebackup_to_shell.required_role,\nbecause Joe Conway observed elsewhere that I forgot to do that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 29 Mar 2022 10:08:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Wed, Mar 30, 2022 at 3:08 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Mar 25, 2022 at 5:52 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Sat, Mar 26, 2022 at 9:55 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > This line doesn't look too healthy:\n> >\n> > pg_basebackup: error: backup failed: ERROR: shell command \"type con >\n> > \"C:cirruscontrib asebackup_to_shell mp_check mp_test_tch3\\base.tar\"\"\n> > failed\n> >\n> > I guess it's an escaping problem around \\ characters.\n>\n> Oh, right. I didn't copy the usual incantation as completely as I\n> should have done.\n>\n> Here's a new version, hopefully rectifying that deficiency. I also add\n> a second patch here, documenting basebackup_to_shell.required_role,\n> because Joe Conway observed elsewhere that I forgot to do that.\n\nHere are your patches again, plus that kludge to make the CI run your\nTAP test on Windows.",
"msg_date": "Wed, 30 Mar 2022 08:30:27 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Wed, Mar 30, 2022 at 8:30 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Mar 30, 2022 at 3:08 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Here's a new version, hopefully rectifying that deficiency. I also add\n> > a second patch here, documenting basebackup_to_shell.required_role,\n> > because Joe Conway observed elsewhere that I forgot to do that.\n>\n> Here are your patches again, plus that kludge to make the CI run your\n> TAP test on Windows.\n\nIt failed:\n\nhttps://cirrus-ci.com/task/5567070686412800\nhttps://api.cirrus-ci.com/v1/artifact/task/5567070686412800/log/contrib/basebackup_to_shell/tmp_check/log/001_basic_primary.log\nhttps://api.cirrus-ci.com/v1/artifact/task/5567070686412800/log/contrib/basebackup_to_shell/tmp_check/log/regress_log_001_basic\n\n\n",
"msg_date": "Wed, 30 Mar 2022 09:35:31 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Tue, Mar 29, 2022 at 4:36 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> It failed:\n>\n> https://cirrus-ci.com/task/5567070686412800\n> https://api.cirrus-ci.com/v1/artifact/task/5567070686412800/log/contrib/basebackup_to_shell/tmp_check/log/001_basic_primary.log\n> https://api.cirrus-ci.com/v1/artifact/task/5567070686412800/log/contrib/basebackup_to_shell/tmp_check/log/regress_log_001_basic\n\nHmm. The log here is not very informative. It just says that the first\ntime we tried to use the 'shell' target, it timed out. I suppose the\nmost obvious explanation for that is that the shell command we\nexecuted timed out:\n\nqq{type con > \"$escaped_backup_path\\\\\\\\%f\"}\n\nBut why should that be so? Does 'type con' not respond to EOF? I don't\nsee how that can be the case. Is our implementation of pclose broken?\nIf so, then I think COPY TO/FROM PROGRAM would be broken on Windows.\n\nAny ideas?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 29 Mar 2022 17:19:44 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Hi,\n\nOn 2022-03-29 17:19:44 -0400, Robert Haas wrote:\n> On Tue, Mar 29, 2022 at 4:36 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > It failed:\n> >\n> > https://cirrus-ci.com/task/5567070686412800\n> > https://api.cirrus-ci.com/v1/artifact/task/5567070686412800/log/contrib/basebackup_to_shell/tmp_check/log/001_basic_primary.log\n> > https://api.cirrus-ci.com/v1/artifact/task/5567070686412800/log/contrib/basebackup_to_shell/tmp_check/log/regress_log_001_basic\n> \n> Hmm. The log here is not very informative. It just says that the first\n> time we tried to use the 'shell' target, it timed out. I suppose the\n> most obvious explanation for that is that the shell command we\n> executed timed out:\n> \n> qq{type con > \"$escaped_backup_path\\\\\\\\%f\"}\n> \n> But why should that be so? Does 'type con' not respond to EOF?\n\nThis is trying to write stdin into a file? I think the problem may be that con\ndoesn't represent stdin, it it's console input. I think consoles are a\nseparate thing from stdin on windows - you can have console input, even while\nstdin is coming from a file or such.\n\nDidn't immediate find a reference to a cat equivalent. Maybe just gzip the\nfile? That can read from stdin across platforms afaict.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 29 Mar 2022 15:25:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "\nOn 3/29/22 17:19, Robert Haas wrote:\n> On Tue, Mar 29, 2022 at 4:36 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> It failed:\n>>\n>> https://cirrus-ci.com/task/5567070686412800\n>> https://api.cirrus-ci.com/v1/artifact/task/5567070686412800/log/contrib/basebackup_to_shell/tmp_check/log/001_basic_primary.log\n>> https://api.cirrus-ci.com/v1/artifact/task/5567070686412800/log/contrib/basebackup_to_shell/tmp_check/log/regress_log_001_basic\n> Hmm. The log here is not very informative. It just says that the first\n> time we tried to use the 'shell' target, it timed out. I suppose the\n> most obvious explanation for that is that the shell command we\n> executed timed out:\n>\n> qq{type con > \"$escaped_backup_path\\\\\\\\%f\"}\n>\n> But why should that be so? Does 'type con' not respond to EOF? I don't\n> see how that can be the case. Is our implementation of pclose broken?\n> If so, then I think COPY TO/FROM PROGRAM would be broken on Windows.\n>\n\nAIUI 'type con' is not the equivalent of Unix cat, especially w.r.t.\nstdin. It's going to try to read from the console, not from stdin. It's\nmore the equivalent of 'cat /dev/tty'. So it's not at all surprising\nthat it hangs. I don't know of a Windows builtin that is equivalent to cat.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 29 Mar 2022 18:34:17 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Wed, Mar 30, 2022 at 11:25 AM Andres Freund <andres@anarazel.de> wrote:\n> Didn't immediate find a reference to a cat equivalent. Maybe just gzip the\n> file? That can read from stdin across platforms afaict.\n\n. o O ( gzip | gzip -d )\n\n\n",
"msg_date": "Wed, 30 Mar 2022 13:48:25 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "\nOn 3/29/22 20:48, Thomas Munro wrote:\n> On Wed, Mar 30, 2022 at 11:25 AM Andres Freund <andres@anarazel.de> wrote:\n>> Didn't immediate find a reference to a cat equivalent. Maybe just gzip the\n>> file? That can read from stdin across platforms afaict.\n> . o O ( gzip | gzip -d )\n>\n\n\nTriple bleah. If we have to do that at least we should probably use\n`gzip --fast`\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 30 Mar 2022 07:01:01 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Wed, Mar 30, 2022 at 7:01 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> Triple bleah. If we have to do that at least we should probably use\n> `gzip --fast`\n\nI'm not sure it's going to make enough difference to get fussed about,\nbut sure. Here's a new series, adjusted to use 'gzip' instead of 'cat'\nand 'type'.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 30 Mar 2022 08:53:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Hi,\n\nOn 2022-03-26 13:51:35 -0700, Andres Freund wrote:\n> Prototype patch attached.\n\nBecause I forgot about cfbot when attaching the patch, cfbot actually ran with\nit under this thread's cf entry. It does look like an improvement to me:\nhttps://cirrus-ci.com/task/6397292629458944?logs=test_world#L900\n\nWe certainly can do better, but it's sufficiently better than what we have\nright now. So I'd like to commit it?\n\n\n> Would we want to do this in all branches? I'd vote for yes, but ...\n\nUnless somebody speaks in favor of doing this across branches, I'd just go for\nHEAD.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 30 Mar 2022 09:23:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On 2022-03-30 08:53:43 -0400, Robert Haas wrote:\n> Here's a new series, adjusted to use 'gzip' instead of 'cat' and 'type'.\n\nSeems to have done the trick: https://cirrus-ci.com/task/6474955838717952?logs=test_contrib_basebackup_to_shell#L1\n\n\n# Reconfigure to restrict access and require a detail.\n$shell_command =\n\t$PostgreSQL::Test::Utils::windows_os\n\t? qq{$gzip --fast > \"$escaped_backup_path\\\\\\\\%d.%f.gz\"}\n : qq{$gzip --fast > \"$escaped_backup_path/%d.%f.gz\"};\n\nI don't think the branch is needed anymore, forward slashes should work for\noutput redirection.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 30 Mar 2022 09:30:24 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Because I forgot about cfbot when attaching the patch, cfbot actually ran with\n> it under this thread's cf entry. It does look like an improvement to me:\n> https://cirrus-ci.com/task/6397292629458944?logs=test_world#L900\n> We certainly can do better, but it's sufficiently better than what we have\n> right now. So I'd like to commit it?\n\nNo objection here.\n\n>> Would we want to do this in all branches? I'd vote for yes, but ...\n\n> Unless somebody speaks in favor of doing this across branches, I'd just go for\n> HEAD.\n\n+1 for HEAD only, especially if we think we might change it some more\nlater. It seems possible this might break somebody's tooling if we\ndrop it into minor releases.\n\nOne refinement that comes to mind as I look at the patch is to distinguish\nbetween \"check\" and \"installcheck\". Not sure that's worthwhile, but not\nsure it isn't, either.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 30 Mar 2022 12:34:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Wed, Mar 30, 2022 at 12:30 PM Andres Freund <andres@anarazel.de> wrote:\n> # Reconfigure to restrict access and require a detail.\n> $shell_command =\n> $PostgreSQL::Test::Utils::windows_os\n> ? qq{$gzip --fast > \"$escaped_backup_path\\\\\\\\%d.%f.gz\"}\n> : qq{$gzip --fast > \"$escaped_backup_path/%d.%f.gz\"};\n>\n> I don't think the branch is needed anymore, forward slashes should work for\n> output redirection.\n\nWe have similar things in src/test/perl/PostgreSQL/Test/Cluster.pm. Do\nyou think those can also be removed? I'm not sure it's the place of\nthis patch to introduce a mix of styles.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 30 Mar 2022 12:42:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Hi,\n\nOn 2022-03-30 12:34:34 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Unless somebody speaks in favor of doing this across branches, I'd just go for\n> > HEAD.\n> \n> +1 for HEAD only, especially if we think we might change it some more\n> later. It seems possible this might break somebody's tooling if we\n> drop it into minor releases.\n\nYea. I certainly have written scripts that parse check-world output - they\ndidn't break, but...\n\n\n> One refinement that comes to mind as I look at the patch is to distinguish\n> between \"check\" and \"installcheck\". Not sure that's worthwhile, but not\n> sure it isn't, either.\n\nAs it's just about \"free\" to do so, I see no reason not to go for showing that\ndifference. How about:\n\necho \"+++ (tap|regress|isolation) [install-]check in $(subdir) +++\" && \\\n\nI see no reason to distinguish the PGXS / non-PGXs tap installcheck cases?\n\n\nRandom aside: Am I the only one bothered by a bunch of places in\nMakefile.global.in quoting like\n $(MAKE) -C '$(top_builddir)' DESTDIR='$(abs_top_builddir)'/tmp_install install >'$(abs_top_builddir)'/tmp_install/log/install.log 2>&1\nand\n rm -rf '$(CURDIR)'/tmp_check &&\netc\nyielding commands like:\n make -C '.' DESTDIR='/home/andres/build/postgres/dev-assert/vpath'/tmp_install install >'/home/andres/build/postgres/dev-assert/vpath'/tmp_install/log/install.log 2>&1\nand\n rm -rf '/home/andres/build/postgres/dev-assert/vpath/contrib/test_decoding'/tmp_check &\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 30 Mar 2022 09:50:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Hi,\n\nOn 2022-03-30 12:42:50 -0400, Robert Haas wrote:\n> On Wed, Mar 30, 2022 at 12:30 PM Andres Freund <andres@anarazel.de> wrote:\n> > # Reconfigure to restrict access and require a detail.\n> > $shell_command =\n> > $PostgreSQL::Test::Utils::windows_os\n> > ? qq{$gzip --fast > \"$escaped_backup_path\\\\\\\\%d.%f.gz\"}\n> > : qq{$gzip --fast > \"$escaped_backup_path/%d.%f.gz\"};\n> >\n> > I don't think the branch is needed anymore, forward slashes should work for\n> > output redirection.\n> \n> We have similar things in src/test/perl/PostgreSQL/Test/Cluster.pm.\n\nThere are some commandline utilities (including copy) where backward slashes\nin arguments are necessary, to separate options from paths :/. Those are the\nextent of backslash use in Cluster.pm that I could see quickly.\n\n\n> I'm not sure it's the place of this patch to introduce a mix of styles.\n\nFair enough. I found it a bit grating to read in the test, that's why I\nmentioned it...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 30 Mar 2022 09:54:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-03-30 12:34:34 -0400, Tom Lane wrote:\n>> One refinement that comes to mind as I look at the patch is to distinguish\n>> between \"check\" and \"installcheck\". Not sure that's worthwhile, but not\n>> sure it isn't, either.\n\n> As it's just about \"free\" to do so, I see no reason not to go for showing that\n> difference. How about:\n\n> echo \"+++ (tap|regress|isolation) [install-]check in $(subdir) +++\" && \\\n\nWFM.\n\n> I see no reason to distinguish the PGXS / non-PGXs tap installcheck cases?\n\nAgreed.\n\n> Random aside: Am I the only one bothered by a bunch of places in\n> Makefile.global.in quoting like\n> $(MAKE) -C '$(top_builddir)' DESTDIR='$(abs_top_builddir)'/tmp_install install >'$(abs_top_builddir)'/tmp_install/log/install.log 2>&1\n> and\n> rm -rf '$(CURDIR)'/tmp_check &&\n> etc\n\nDon't we need that to handle, say, build paths with spaces in them?\nAdmittedly we're probably not completely clean for such paths,\nbut that's not an excuse to break the places that do it right.\n\n(I occasionally think about setting up a BF animal configured\nlike that, but haven't tried yet.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 30 Mar 2022 12:58:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Wed, Mar 30, 2022 at 12:54 PM Andres Freund <andres@anarazel.de> wrote:\n> There are some commandline utilities (including copy) where backward slashes\n> in arguments are necessary, to separate options from paths :/. Those are the\n> extent of backslash use in Cluster.pm that I could see quickly.\n\nI just copied this logic from that file:\n\n $path =~ s{\\\\}{\\\\\\\\}g if ($PostgreSQL::Test::Utils::windows_os);\n my $copy_command =\n $PostgreSQL::Test::Utils::windows_os\n ? qq{copy \"$path\\\\\\\\%f\" \"%p\"}\n : qq{cp \"$path/%f\" \"%p\"};\n\nIn the first version of the patch I neglected the first of those lines\nand it broke, so the s{\\\\}{\\\\\\\\}g thing is definitely needed. It's\npossible that / would be as good as \\\\\\\\ in the command text itself,\nbut it doesn't seem worth getting excited about. It'd be best if any\nunnecessary garbage of this sort got cleaned up by someone who has a\ntest environment locally, rather than me testing by sending emails to\na mailing list which Thomas then downloads into a sandbox and executes\nwhich you then send me links to what broke on the mailing list and I\ntry again.\n\n> Fair enough. I found it a bit grating to read in the test, that's why I\n> mentioned it...\n\nI'm going to go ahead and commit this test script later on this\nafternoon unless there are vigorous objections real soon now, and then\nif somebody wants to improve it, great!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 30 Mar 2022 13:04:21 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Hi, \n\nOn March 30, 2022 9:58:26 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Andres Freund <andres@anarazel.de> writes:\n>> Random aside: Am I the only one bothered by a bunch of places in\n>> Makefile.global.in quoting like\n>> $(MAKE) -C '$(top_builddir)' DESTDIR='$(abs_top_builddir)'/tmp_install install >'$(abs_top_builddir)'/tmp_install/log/install.log 2>&1\n>> and\n>> rm -rf '$(CURDIR)'/tmp_check &&\n>> etc\n>\n>Don't we need that to handle, say, build paths with spaces in them?\n\nMy concern is about the quote in the middle of the path, not about quoting at all... I.e. the ' should be after /tmp_install, not before.\n\n\n>Admittedly we're probably not completely clean for such paths,\n>but that's not an excuse to break the places that do it right.\n>\n>(I occasionally think about setting up a BF animal configured\n>like that, but haven't tried yet.)\n\nThat might be a fun exercise. Not so much for the build aspect, but to make sure our tools handle it.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Wed, 30 Mar 2022 10:11:49 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Wed, Mar 30, 2022 at 1:11 PM Andres Freund <andres@anarazel.de> wrote:\n> On March 30, 2022 9:58:26 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >Andres Freund <andres@anarazel.de> writes:\n> >> Random aside: Am I the only one bothered by a bunch of places in\n> >> Makefile.global.in quoting like\n> >> $(MAKE) -C '$(top_builddir)' DESTDIR='$(abs_top_builddir)'/tmp_install install >'$(abs_top_builddir)'/tmp_install/log/install.log 2>&1\n> >> and\n> >> rm -rf '$(CURDIR)'/tmp_check &&\n> >> etc\n> >\n> >Don't we need that to handle, say, build paths with spaces in them?\n>\n> My concern is about the quote in the middle of the path, not about quoting at all... I.e. the ' should be after /tmp_install, not before.\n\nMakes no difference. We know that the string /tmp_install contains no\nshell metacharacters, so why does it need to be in quotes? I would've\nprobably written it the way it is here, rather than what you are\nproposing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 30 Mar 2022 13:16:47 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Thu, Mar 31, 2022 at 5:23 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-03-26 13:51:35 -0700, Andres Freund wrote:\n> > Prototype patch attached.\n>\n> Because I forgot about cfbot when attaching the patch, cfbot actually ran with\n> it under this thread's cf entry. It does look like an improvement to me:\n> https://cirrus-ci.com/task/6397292629458944?logs=test_world#L900\n>\n> We certainly can do better, but it's sufficiently better than what we have\n> right now. So I'd like to commit it?\n\nNice, this will save a lot of time scrolling around trying to figure\nout what broke.\n\n> > Would we want to do this in all branches? I'd vote for yes, but ...\n>\n> Unless somebody speaks in favor of doing this across branches, I'd just go for\n> HEAD.\n\nI don't see any reason not to do it on all branches. If anyone is\nmachine-processing the output and cares about format changes they will\nbe happy about the improvement.\n\n\n",
"msg_date": "Thu, 31 Mar 2022 07:07:21 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Hi,\n\nOn 2022-03-30 13:16:47 -0400, Robert Haas wrote:\n> On Wed, Mar 30, 2022 at 1:11 PM Andres Freund <andres@anarazel.de> wrote:\n> > On March 30, 2022 9:58:26 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >Andres Freund <andres@anarazel.de> writes:\n> > >> Random aside: Am I the only one bothered by a bunch of places in\n> > >> Makefile.global.in quoting like\n> > >> $(MAKE) -C '$(top_builddir)' DESTDIR='$(abs_top_builddir)'/tmp_install install >'$(abs_top_builddir)'/tmp_install/log/install.log 2>&1\n> > >> and\n> > >> rm -rf '$(CURDIR)'/tmp_check &&\n> > >> etc\n> > >\n> > >Don't we need that to handle, say, build paths with spaces in them?\n> >\n> > My concern is about the quote in the middle of the path, not about quoting at all... I.e. the ' should be after /tmp_install, not before.\n> \n> Makes no difference. We know that the string /tmp_install contains no\n> shell metacharacters, so why does it need to be in quotes? I would've\n> probably written it the way it is here, rather than what you are\n> proposing.\n\nIt looks ugly, and it can't be copy-pasted as easily. Seems I'm alone on this,\nso I'll leave it be...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 30 Mar 2022 12:12:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-03-30 13:16:47 -0400, Robert Haas wrote:\n>> On Wed, Mar 30, 2022 at 1:11 PM Andres Freund <andres@anarazel.de> wrote:\n>>> My concern is about the quote in the middle of the path, not about quoting at all... I.e. the ' should be after /tmp_install, not before.\n\n>> Makes no difference. We know that the string /tmp_install contains no\n>> shell metacharacters, so why does it need to be in quotes? I would've\n>> probably written it the way it is here, rather than what you are\n>> proposing.\n\n> It looks ugly, and it can't be copy-pasted as easily. Seems I'm alone on this,\n> so I'll leave it be...\n\nFWIW, I agree with Andres that I'd probably have put the quote\nat the end. But Robert is right that it's functionally equivalent;\nso I doubt it's worth changing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 30 Mar 2022 15:19:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I'm going to go ahead and commit this test script later on this\n> afternoon unless there are vigorous objections real soon now, and then\n> if somebody wants to improve it, great!\n\nI see you did that, but the CF entry is still open?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 30 Mar 2022 16:22:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Wed, Mar 30, 2022 at 4:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I'm going to go ahead and commit this test script later on this\n> > afternoon unless there are vigorous objections real soon now, and then\n> > if somebody wants to improve it, great!\n>\n> I see you did that, but the CF entry is still open?\n\nFixed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 30 Mar 2022 17:44:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "I think we should give this module a .gitignore file. Patch attached.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 30 Mar 2022 15:39:43 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> I think we should give this module a .gitignore file. Patch attached.\n\nIndeed, before somebody accidentally commits the cruft that\ncheck-world is leaving around. Pushed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 30 Mar 2022 20:01:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "So ... none of the Windows buildfarm members actually like this\ntest script. They're all showing failures along the lines of\n\nnot ok 2 - fails if basebackup_to_shell.command is not set: matches\n\n# Failed test 'fails if basebackup_to_shell.command is not set: matches'\n# at t/001_basic.pl line 38.\n# 'pg_basebackup: error: connection to server at \"127.0.0.1\", port 55358 failed: FATAL: SSPI authentication failed for user \"backupuser\"\n# '\n# doesn't match '(?^:shell command for backup is not configured)'\n\nDoes the CI setup not account for this issue?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 30 Mar 2022 22:07:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Hi,\n\nOn 2022-03-30 22:07:48 -0400, Tom Lane wrote:\n> So ... none of the Windows buildfarm members actually like this\n> test script. They're all showing failures along the lines of\n> \n> not ok 2 - fails if basebackup_to_shell.command is not set: matches\n> \n> # Failed test 'fails if basebackup_to_shell.command is not set: matches'\n> # at t/001_basic.pl line 38.\n> # 'pg_basebackup: error: connection to server at \"127.0.0.1\", port 55358 failed: FATAL: SSPI authentication failed for user \"backupuser\"\n> # '\n> # doesn't match '(?^:shell command for backup is not configured)'\n> \n> Does the CI setup not account for this issue?\n\nOn windows CI sets\n # Avoids port conflicts between concurrent tap test runs\n PG_TEST_USE_UNIX_SOCKETS: 1\n\nbecause I've otherwise seen a lot of spurious tap test failures - Cluster.pm\nget_free_port() is racy, as it admits:\nXXX A port available now may become unavailable by the time we start\nthe desired service.\n\nThe only alternative is to not use parallelism when running tap tests, but\nthat makes test runs even slower - and windows is already the bottleneck for\ncfbot.\n\nI assume SSPI doesn't work over unix sockets? Oh. Maybe it's not even that -\nwe only enable it when not using unix sockets:\n\n# Internal method to set up trusted pg_hba.conf for replication. 
Not\n# documented because you shouldn't use it, it's called automatically if needed.\nsub set_replication_conf\n{\n\tmy ($self) = @_;\n\tmy $pgdata = $self->data_dir;\n\n\t$self->host eq $test_pghost\n\t or croak \"set_replication_conf only works with the default host\";\n\n\topen my $hba, '>>', \"$pgdata/pg_hba.conf\";\n\tprint $hba \"\\n# Allow replication (set up by PostgreSQL::Test::Cluster.pm)\\n\";\n\tif ($PostgreSQL::Test::Utils::windows_os && !$PostgreSQL::Test::Utils::use_unix_sockets)\n\t{\n\t\tprint $hba\n\t\t \"host replication all $test_localhost/32 sspi include_realm=1 map=regress\\n\";\n\t}\n\tclose $hba;\n\treturn;\n}\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 30 Mar 2022 19:17:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-03-30 22:07:48 -0400, Tom Lane wrote:\n>> So ... none of the Windows buildfarm members actually like this\n>> test script.\n\n> On windows CI sets\n> # Avoids port conflicts between concurrent tap test runs\n> PG_TEST_USE_UNIX_SOCKETS: 1\n\nOk ...\n\n> I assume SSPI doesn't work over unix sockets? Oh. Maybe it's not even that -\n> we only enable it when not using unix sockets:\n\nDuh. But can it work over unix sockets?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 31 Mar 2022 00:08:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Hi,\n\nOn 2022-03-31 00:08:00 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I assume SSPI doesn't work over unix sockets? Oh. Maybe it's not even that -\n> > we only enable it when not using unix sockets:\n> \n> Duh. But can it work over unix sockets?\n\nI wonder if we should go the other way and use unix sockets by default in the\ntests. Even if CI windows could be made to use SSPI, it'd still be further\naway from the environment most of us write tests in.\n\nAfaics the only reason we use SSPI is to secure the tests, because they run\nover tcp by default. But since we have unix socket support for windows now,\nthat shouldn't really be needed anymore.\n\nThe only animal that might not be new enough for it is hamerkop. I don't\nreally understand when windows features end up in which release.\n\nLooking at 8f3ec75de40 it seems we just assume unix sockets are available, we\ndon't have a version / feature test (win32.h just defines\nHAVE_STRUCT_SOCKADDR_UN).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 30 Mar 2022 22:25:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On 31.03.22 07:25, Andres Freund wrote:\n> Looking at 8f3ec75de40 it seems we just assume unix sockets are available, we\n> don't have a version / feature test (win32.h just defines\n> HAVE_STRUCT_SOCKADDR_UN).\n\nI think you have to handle that dynamically at run time, a bit like \nIPv6: The build environment might provide symbols, structs, etc., but \nthe kernel might return an error when you try to create a socket.\n\n\n",
"msg_date": "Thu, 31 Mar 2022 09:43:47 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "\nOn 3/30/22 22:07, Tom Lane wrote:\n> So ... none of the Windows buildfarm members actually like this\n> test script. They're all showing failures along the lines of\n>\n> not ok 2 - fails if basebackup_to_shell.command is not set: matches\n>\n> # Failed test 'fails if basebackup_to_shell.command is not set: matches'\n> # at t/001_basic.pl line 38.\n> # 'pg_basebackup: error: connection to server at \"127.0.0.1\", port 55358 failed: FATAL: SSPI authentication failed for user \"backupuser\"\n> # '\n> # doesn't match '(?^:shell command for backup is not configured)'\n>\n> Does the CI setup not account for this issue?\n>\n> \t\t\t\n\n\nI have configured fairywren and drongo to use Unix sockets., and they\nhave turned green Here are the settings I'm using in the config's\nbuild_env section:\n\n PG_TEST_USE_UNIX_SOCKETS => 1,\n PG_REGRESS_SOCK_DIR =>\n'C:/Users/pgrunner/AppData/Local/Temp',\n\nWe should probably fix the test though, so it doesn't require Unix\nsockets. It should be possible, although I haven't looked yet to see how.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 31 Mar 2022 10:51:53 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Thu, Mar 31, 2022 at 10:52 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> I have configured fairywren and drongo to use Unix sockets., and they\n> have turned green Here are the settings I'm using in the config's\n> build_env section:\n>\n> PG_TEST_USE_UNIX_SOCKETS => 1,\n> PG_REGRESS_SOCK_DIR =>\n> 'C:/Users/pgrunner/AppData/Local/Temp',\n>\n> We should probably fix the test though, so it doesn't require Unix\n> sockets. It should be possible, although I haven't looked yet to see how.\n\nOur mutual colleague Neha Sharma pointed out this email message to me:\n\nhttp://postgr.es/m/106926.1643842376@sss.pgh.pa.us\n\nI actually don't understand why using pg_regress --auth-extra would\nfix it, or what that option does, or why we're even running pg_regress\nat all in PostgreSQL::Test::Cluster::init. I think it might be to fix\nthis exact issue, but there's no SGML documentation for pg_regress,\nand the output of pg_regress -h isn't really clear enough to\nunderstand what's going on here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 31 Mar 2022 11:12:07 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Mar 31, 2022 at 10:52 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> We should probably fix the test though, so it doesn't require Unix\n>> sockets. It should be possible, although I haven't looked yet to see how.\n\n> Our mutual colleague Neha Sharma pointed out this email message to me:\n> http://postgr.es/m/106926.1643842376@sss.pgh.pa.us\n\nAh, right.\n\n> I actually don't understand why using pg_regress --auth-extra would\n> fix it, or what that option does, or why we're even running pg_regress\n> at all in PostgreSQL::Test::Cluster::init. I think it might be to fix\n> this exact issue, but there's no SGML documentation for pg_regress,\n\nI'm not volunteering to fix that, but this comment in pg_regress.c\nis probably adequately illuminating:\n\n * Rewrite pg_hba.conf and pg_ident.conf to use SSPI authentication. Permit\n * the current OS user to authenticate as the bootstrap superuser and as any\n * user named in a --create-role option.\n\nThis script is creating users manually rather than letting the TAP\ninfrastructure do it, which is an antipattern.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 31 Mar 2022 11:32:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
    "msg_contents": "On 2022-Mar-30, Andres Freund wrote:\n\n> It looks ugly, and it can't be copy-pasted as easily. Seems I'm alone on this,\n> so I'll leave it be...\n\nI'm bothered by that quote-in-the-middle occasionally as well (requires\nmore clicks to paste).\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 31 Mar 2022 17:32:23 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Thu, Mar 31, 2022 at 11:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm not volunteering to fix that, but this comment in pg_regress.c\n> is probably adequately illuminating:\n>\n> * Rewrite pg_hba.conf and pg_ident.conf to use SSPI authentication. Permit\n> * the current OS user to authenticate as the bootstrap superuser and as any\n> * user named in a --create-role option.\n>\n> This script is creating users manually rather than letting the TAP\n> infrastructure do it, which is an antipattern.\n\nWell, first, I don't really think it's great if you have to try to\nfigure out what a tool does by reading the comments in the source\ncode. I grant that it's a step above trying to interpret the source\ncode itself, but it's still not great. Second, I think your diagnosis\nof the problem is slightly incorrect, because your comment seems to\nimply that this change ought to work:\n\ndiff --git a/contrib/basebackup_to_shell/t/001_basic.pl\nb/contrib/basebackup_to_shell/t/001_basic.pl\nindex 57534b62c8..1fc0d9ab15 100644\n--- a/contrib/basebackup_to_shell/t/001_basic.pl\n+++ b/contrib/basebackup_to_shell/t/001_basic.pl\n@@ -17,11 +17,11 @@ if (!defined $gzip || $gzip eq '')\n }\n\n my $node = PostgreSQL::Test::Cluster->new('primary');\n-$node->init('allows_streaming' => 1);\n+$node->init('allows_streaming' => 1, auth_extra => [ '--create-role',\n'backupuser' ]);\n $node->append_conf('postgresql.conf',\n \"shared_preload_libraries = 'basebackup_to_shell'\");\n $node->start;\n-$node->safe_psql('postgres', 'CREATE USER backupuser REPLICATION');\n+#$node->safe_psql('postgres', 'CREATE USER backupuser REPLICATION');\n $node->safe_psql('postgres', 'CREATE ROLE trustworthy');\n\n # For nearly all pg_basebackup invocations some options should be specified,\n\nBut it doesn't -- with that change, the test fails on Linux,\ncomplaining that the backupuser user does not exist. 
That's because\n--create-role doesn't actually create a role at all, and in fact\nabsolutely couldn't, because the server isn't even started at the\npoint where we're running pg_regress. I think we need to both tell\npg_regress to \"create the role\" and also actually create it. Which is\nmaybe not a great sign that everything here is totally clear and\ncomprehensible...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 31 Mar 2022 12:08:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Hi,\n\nOn 2022-03-31 11:32:15 -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> * Rewrite pg_hba.conf and pg_ident.conf to use SSPI authentication. Permit\n> * the current OS user to authenticate as the bootstrap superuser and as any\n> * user named in a --create-role option.\n> \n> This script is creating users manually rather than letting the TAP\n> infrastructure do it, which is an antipattern.\n\nSeems like Cluster.pm should have a helper for creating roles, which then\nwould use --create-role internally. So there's at least something to find when\nlooking through Cluster.pm...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 31 Mar 2022 09:10:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On 3/31/22 11:32, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> On Thu, Mar 31, 2022 at 10:52 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>> We should probably fix the test though, so it doesn't require Unix\n>>> sockets. It should be possible, although I haven't looked yet to see how.\n>> Our mutual colleague Neha Sharma pointed out this email message to me:\n>> http://postgr.es/m/106926.1643842376@sss.pgh.pa.us\n\n\nYep, that's kinda what I was expecting.\n\n\n>> I actually don't understand why using pg_regress --auth-extra would\n>> fix it, or what that option does, or why we're even running pg_regress\n>> at all in PostgreSQL::Test::Cluster::init. I think it might be to fix\n>> this exact issue, but there's no SGML documentation for pg_regress,\n\n\n\nI really don't know why this stuff is in pg_regress at all. It seems\nrather odd to me and it's annoyed me for a while. But that's a fight for\nanother day.\n\n\n> I'm not volunteering to fix that, but this comment in pg_regress.c\n> is probably adequately illuminating:\n>\n> * Rewrite pg_hba.conf and pg_ident.conf to use SSPI authentication. Permit\n> * the current OS user to authenticate as the bootstrap superuser and as any\n> * user named in a --create-role option.\n>\n> This script is creating users manually rather than letting the TAP\n> infrastructure do it, which is an antipattern.\n>\n> \t\t\t\n\n\nYeah, I think the fix is as simple as the attached.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 31 Mar 2022 12:25:38 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Thu, Mar 31, 2022 at 12:25 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> Yeah, I think the fix is as simple as the attached.\n\nWell, that does not work because you added an extra parenthesis which\nmakes Perl barf. If you fix that, then the test does not pass because,\nas I just explained to Tom, the flag we call --create-role doesn't\ncreate a role:\n\nerror running SQL: 'psql:<stdin>:1: ERROR: role \"backupuser\" does not exist'\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 31 Mar 2022 12:30:08 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Mar 31, 2022 at 12:25 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> Yeah, I think the fix is as simple as the attached.\n\n> Well, that does not work because you added an extra parenthesis which\n> makes Perl barf. If you fix that, then the test does not pass because,\n> as I just explained to Tom, the flag we call --create-role doesn't\n> create a role:\n\n> error running SQL: 'psql:<stdin>:1: ERROR: role \"backupuser\" does not exist'\n\nOn looking closer, the combination of --config-auth and --create-role\n*only* fixes the config files for SSPI, it doesn't expect the server\nto be running.\n\nI agree that the documentation of this is nonexistent and the design\nis probably questionable, but I'm not volunteering to fix either.\nIf you are, step right up. In the meantime, I believe (without\nhaving tested) that the correct incantation is to use auth_extra\nbut *also* create the user further down.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 31 Mar 2022 12:45:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
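[Editorial sketch] Putting Tom's "correct incantation" together with Robert's earlier diff, the working pattern for a TAP script that needs its own role looks roughly like this. It is a fragment, not runnable standalone: it assumes the in-tree PostgreSQL::Test::Cluster module, and `backupuser` is simply the role name from the basebackup_to_shell test.

```perl
use PostgreSQL::Test::Cluster;

my $node = PostgreSQL::Test::Cluster->new('primary');

# auth_extra passes --create-role through to "pg_regress --config-auth",
# which only rewrites pg_hba.conf/pg_ident.conf so that the OS user may
# authenticate (via SSPI on Windows) as this role -- it does not create
# the role, and the server is not even running at that point.
$node->init(
	allows_streaming => 1,
	auth_extra       => [ '--create-role', 'backupuser' ]);
$node->start;

# The role itself must still be created once the server is up.
$node->safe_psql('postgres', 'CREATE USER backupuser REPLICATION');
```

On non-Windows platforms the `auth_extra` half is a no-op in practice, which is why the omission only showed up on the SSPI-using buildfarm animals.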
{
"msg_contents": "On Thu, Mar 31, 2022 at 12:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I agree that the documentation of this is nonexistent and the design\n> is probably questionable, but I'm not volunteering to fix either.\n> If you are, step right up. In the meantime, I believe (without\n> having tested) that the correct incantation is to use auth_extra\n> but *also* create the user further down.\n\nI agree. That's exactly what I said in\nhttp://postgr.es/m/CA+TgmoasOhqLR=TSYmHd4TyX-qnfwtde_u19ZphKunpSCkh_iw@mail.gmail.com\n...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 31 Mar 2022 12:56:31 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "On Thu, Mar 31, 2022 at 12:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I agree. That's exactly what I said in\n> http://postgr.es/m/CA+TgmoasOhqLR=TSYmHd4TyX-qnfwtde_u19ZphKunpSCkh_iw@mail.gmail.com\n> ...\n\nOK, so I pushed a commit adding that incantation to the test script,\nand also a comment explaining why it's there. Possibly we ought to go\nadd similar comments to other places where this incantation is used,\nor find a way to make this all a bit more self-documenting, but that\ndoesn't necessarily need to be done today.\n\nThe buildfarm does look rather green at the moment, though, so I'm not\nsure how I know whether this \"worked\".\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 31 Mar 2022 14:12:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "\nOn 3/31/22 14:12, Robert Haas wrote:\n> On Thu, Mar 31, 2022 at 12:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> I agree. That's exactly what I said in\n>> http://postgr.es/m/CA+TgmoasOhqLR=TSYmHd4TyX-qnfwtde_u19ZphKunpSCkh_iw@mail.gmail.com\n>> ...\n> OK, so I pushed a commit adding that incantation to the test script,\n> and also a comment explaining why it's there. Possibly we ought to go\n> add similar comments to other places where this incantation is used,\n> or find a way to make this all a bit more self-documenting, but that\n> doesn't necessarily need to be done today.\n>\n> The buildfarm does look rather green at the moment, though, so I'm not\n> sure how I know whether this \"worked\".\n\n\n\nYou should know when jacana reports next (in the next hour or three), as\nit's not set up for Unix sockets.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 31 Mar 2022 14:27:04 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "\nOn 3/31/22 12:45, Tom Lane wrote:\n>\n> On looking closer, the combination of --config-auth and --create-role\n> *only* fixes the config files for SSPI, it doesn't expect the server\n> to be running.\n>\n> I agree that the documentation of this is nonexistent and the design\n> is probably questionable, but I'm not volunteering to fix either.\n> If you are, step right up. In the meantime, I believe (without\n> having tested) that the correct incantation is to use auth_extra\n> but *also* create the user further down.\n>\n\nI will take fixing it as a TODO. But not until after feature freeze.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 31 Mar 2022 14:31:55 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
},
{
"msg_contents": "Hi,\n\nOn 2022-03-30 12:58:26 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-03-30 12:34:34 -0400, Tom Lane wrote:\n> >> One refinement that comes to mind as I look at the patch is to distinguish\n> >> between \"check\" and \"installcheck\". Not sure that's worthwhile, but not\n> >> sure it isn't, either.\n> \n> > As it's just about \"free\" to do so, I see no reason not to go for showing that\n> > difference. How about:\n> \n> > echo \"+++ (tap|regress|isolation) [install-]check in $(subdir) +++\" && \\\n> \n> WFM.\n\nPushed like that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 31 Mar 2022 11:44:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add 'basebackup_to_shell' contrib module."
}
] |
[
{
"msg_contents": "Hello, (Cc:ed Fujii-san)\n\nThis is a diverged topic from [1], which is summarized as $SUBJECT.\n\nTo recap:\n\nWhile discussing on additional LSNs in checkpoint log message,\nFujii-san pointed out [2] that there is a case where\nCreateRestartPoint leaves unrecoverable database when concurrent\npromotion happens. That corruption is \"fixed\" by the next checkpoint\nso it is not a severe corruption.\n\nAFAICS since 9.5, no check(/restart)pionts won't run concurrently with\nrestartpoint [3]. So I propose to remove the code path as attached.\n\nregards.\n\n\n[1] https://www.postgresql.org/message-id/20220316.091913.806120467943749797.horikyota.ntt%40gmail.com\n\n[2] https://www.postgresql.org/message-id/7bfad665-db9c-0c2a-2604-9f54763c5f9e%40oss.nttdata.com\n\n[3] https://www.postgresql.org/message-id/20220222.174401.765586897814316743.horikyota.ntt%40gmail.com\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n From e983f3d4c2dbeea742aed0ef1e209e7821f6687f Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Mon, 14 Feb 2022 13:04:33 +0900\nSubject: [PATCH v2] Correctly update contfol file at the end of archive recovery\n\nCreateRestartPoint runs WAL file cleanup basing on the checkpoint just\nhave finished in the function. If the database has exited\nDB_IN_ARCHIVE_RECOVERY state when the function is going to update\ncontrol file, the function refrains from updating the file at all then\nproceeds to WAL cleanup having the latest REDO LSN, which is now\ninconsistent with the control file. As the result, the succeeding\ncleanup procedure overly removes WAL files against the control file\nand leaves unrecoverable database until the next checkpoint finishes.\n\nAlong with that fix, we remove a dead code path for the case some\nother process ran a simultaneous checkpoint. 
It seems like just a\npreventive measure but it's no longer useful because we are sure that\ncheckpoint is performed only by checkpointer except single process\nmode.\n---\n src/backend/access/transam/xlog.c | 73 ++++++++++++++++++++-----------\n 1 file changed, 47 insertions(+), 26 deletions(-)\n\ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex 6208e123e5..ff4a90eacc 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -9587,6 +9587,9 @@ CreateRestartPoint(int flags)\n \tXLogSegNo\t_logSegNo;\n \tTimestampTz xtime;\n \n+\t/* we don't assume concurrent checkpoint/restartpoint to run */\n+\tAssert (!IsUnderPostmaster || MyBackendType == B_CHECKPOINTER);\n+\n \t/* Get a local copy of the last safe checkpoint record. */\n \tSpinLockAcquire(&XLogCtl->info_lck);\n \tlastCheckPointRecPtr = XLogCtl->lastCheckPointRecPtr;\n@@ -9653,7 +9656,7 @@ CreateRestartPoint(int flags)\n \n \t/* Also update the info_lck-protected copy */\n \tSpinLockAcquire(&XLogCtl->info_lck);\n-\tXLogCtl->RedoRecPtr = lastCheckPoint.redo;\n+\tXLogCtl->RedoRecPtr = RedoRecPtr;\n \tSpinLockRelease(&XLogCtl->info_lck);\n \n \t/*\n@@ -9672,7 +9675,10 @@ CreateRestartPoint(int flags)\n \t/* Update the process title */\n \tupdate_checkpoint_display(flags, true, false);\n \n-\tCheckPointGuts(lastCheckPoint.redo, flags);\n+\tCheckPointGuts(RedoRecPtr, flags);\n+\n+\t/* Update pg_control */\n+\tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n \n \t/*\n \t * Remember the prior checkpoint's redo ptr for\n@@ -9680,31 +9686,29 @@ CreateRestartPoint(int flags)\n \t */\n \tPriorRedoPtr = ControlFile->checkPointCopy.redo;\n \n+\tAssert (PriorRedoPtr < RedoRecPtr);\n+\n+\tControlFile->checkPoint = lastCheckPointRecPtr;\n+\tControlFile->checkPointCopy = lastCheckPoint;\n+\n+\t/* Update control file using current time */\n+\tControlFile->time = (pg_time_t) time(NULL);\n+\n \t/*\n-\t * Update pg_control, using current time. 
Check that it still shows\n-\t * DB_IN_ARCHIVE_RECOVERY state and an older checkpoint, else do nothing;\n-\t * this is a quick hack to make sure nothing really bad happens if somehow\n-\t * we get here after the end-of-recovery checkpoint.\n+\t * Ensure minRecoveryPoint is past the checkpoint record while archive\n+\t * recovery is still ongoing. Normally, this will have happened already\n+\t * while writing out dirty buffers, but not necessarily - e.g. because no\n+\t * buffers were dirtied. We do this because a non-exclusive base backup\n+\t * uses minRecoveryPoint to determine which WAL files must be included in\n+\t * the backup, and the file (or files) containing the checkpoint record\n+\t * must be included, at a minimum. Note that for an ordinary restart of\n+\t * recovery there's no value in having the minimum recovery point any\n+\t * earlier than this anyway, because redo will begin just after the\n+\t * checkpoint record. This is a quick hack to make sure nothing really bad\n+\t * happens if somehow we get here after the end-of-recovery checkpoint.\n \t */\n-\tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n-\tif (ControlFile->state == DB_IN_ARCHIVE_RECOVERY &&\n-\t\tControlFile->checkPointCopy.redo < lastCheckPoint.redo)\n+\tif (ControlFile->state == DB_IN_ARCHIVE_RECOVERY)\n \t{\n-\t\tControlFile->checkPoint = lastCheckPointRecPtr;\n-\t\tControlFile->checkPointCopy = lastCheckPoint;\n-\t\tControlFile->time = (pg_time_t) time(NULL);\n-\n-\t\t/*\n-\t\t * Ensure minRecoveryPoint is past the checkpoint record. Normally,\n-\t\t * this will have happened already while writing out dirty buffers,\n-\t\t * but not necessarily - e.g. because no buffers were dirtied. We do\n-\t\t * this because a non-exclusive base backup uses minRecoveryPoint to\n-\t\t * determine which WAL files must be included in the backup, and the\n-\t\t * file (or files) containing the checkpoint record must be included,\n-\t\t * at a minimum. 
Note that for an ordinary restart of recovery there's\n-\t\t * no value in having the minimum recovery point any earlier than this\n-\t\t * anyway, because redo will begin just after the checkpoint record.\n-\t\t */\n \t\tif (ControlFile->minRecoveryPoint < lastCheckPointEndPtr)\n \t\t{\n \t\t\tControlFile->minRecoveryPoint = lastCheckPointEndPtr;\n@@ -9716,8 +9720,25 @@ CreateRestartPoint(int flags)\n \t\t}\n \t\tif (flags & CHECKPOINT_IS_SHUTDOWN)\n \t\t\tControlFile->state = DB_SHUTDOWNED_IN_RECOVERY;\n-\t\tUpdateControlFile();\n \t}\n+\telse\n+\t{\n+\t\t/* recovery mode is not supposed to end during shutdown restartpoint */\n+\t\tAssert((flags & CHECKPOINT_IS_SHUTDOWN) == 0);\n+\n+\t\t/*\n+\t\t * Aarchive recovery has ended. Crash recovery ever after should\n+\t\t * always recover to the end of WAL\n+\t\t */\n+\t\tControlFile->minRecoveryPoint = InvalidXLogRecPtr;\n+\t\tControlFile->minRecoveryPointTLI = 0;\n+\n+\t\t/* also update local copy */\n+\t\tminRecoveryPoint = InvalidXLogRecPtr;\n+\t\tminRecoveryPointTLI = 0;\n+\t}\n+\n+\tUpdateControlFile();\n \tLWLockRelease(ControlFileLock);\n \n \t/*\n@@ -9804,7 +9825,7 @@ CreateRestartPoint(int flags)\n \txtime = GetLatestXTime();\n \tereport((log_checkpoints ? LOG : DEBUG2),\n \t\t\t(errmsg(\"recovery restart point at %X/%X\",\n-\t\t\t\t\tLSN_FORMAT_ARGS(lastCheckPoint.redo)),\n+\t\t\t\t\tLSN_FORMAT_ARGS(RedoRecPtr)),\n \t\t\t xtime ? errdetail(\"Last completed transaction was at log time %s.\",\n \t\t\t\t\t\t\t timestamptz_to_str(xtime)) : 0));\n \n-- \n2.27.0\n\n\n From 13329169b996509a3a853afb9c283c3b27e0eab7 Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Fri, 25 Feb 2022 14:46:41 +0900\nSubject: [PATCH v2] Correctly update contfol file at the end of archive \n recovery\n\nCreateRestartPoint runs WAL file cleanup basing on the checkpoint just\nhave finished in the function. 
If the database has exited\nDB_IN_ARCHIVE_RECOVERY state when the function is going to update\ncontrol file, the function refrains from updating the file at all then\nproceeds to WAL cleanup having the latest REDO LSN, which is now\ninconsistent with the control file. As the result, the succeeding\ncleanup procedure overly removes WAL files against the control file\nand leaves unrecoverable database until the next checkpoint finishes.\n\nAlong with that fix, we remove a dead code path for the case some\nother process ran a simultaneous checkpoint. It seems like just a\npreventive measure but it's no longer useful because we are sure that\ncheckpoint is performed only by checkpointer except single process\nmode.\n---\n src/backend/access/transam/xlog.c | 73 ++++++++++++++++++++-----------\n 1 file changed, 47 insertions(+), 26 deletions(-)\n\ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex 3d76fad128..3670ff81e7 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -9376,6 +9376,9 @@ CreateRestartPoint(int flags)\n \t */\n \tLWLockAcquire(CheckpointLock, LW_EXCLUSIVE);\n \n+\t/* we don't assume concurrent checkpoint/restartpoint to run */\n+\tAssert (!IsUnderPostmaster || MyBackendType == B_CHECKPOINTER);\n+\n \t/* Get a local copy of the last safe checkpoint record. 
*/\n \tSpinLockAcquire(&XLogCtl->info_lck);\n \tlastCheckPointRecPtr = XLogCtl->lastCheckPointRecPtr;\n@@ -9445,7 +9448,7 @@ CreateRestartPoint(int flags)\n \n \t/* Also update the info_lck-protected copy */\n \tSpinLockAcquire(&XLogCtl->info_lck);\n-\tXLogCtl->RedoRecPtr = lastCheckPoint.redo;\n+\tXLogCtl->RedoRecPtr = RedoRecPtr;\n \tSpinLockRelease(&XLogCtl->info_lck);\n \n \t/*\n@@ -9461,7 +9464,10 @@ CreateRestartPoint(int flags)\n \tif (log_checkpoints)\n \t\tLogCheckpointStart(flags, true);\n \n-\tCheckPointGuts(lastCheckPoint.redo, flags);\n+\tCheckPointGuts(RedoRecPtr, flags);\n+\n+\t/* Update pg_control */\n+\tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n \n \t/*\n \t * Remember the prior checkpoint's redo ptr for\n@@ -9469,31 +9475,29 @@ CreateRestartPoint(int flags)\n \t */\n \tPriorRedoPtr = ControlFile->checkPointCopy.redo;\n \n+\tAssert (PriorRedoPtr < RedoRecPtr);\n+\n+\tControlFile->checkPoint = lastCheckPointRecPtr;\n+\tControlFile->checkPointCopy = lastCheckPoint;\n+\n+\t/* Update control file using current time */\n+\tControlFile->time = (pg_time_t) time(NULL);\n+\n \t/*\n-\t * Update pg_control, using current time. Check that it still shows\n-\t * DB_IN_ARCHIVE_RECOVERY state and an older checkpoint, else do nothing;\n-\t * this is a quick hack to make sure nothing really bad happens if somehow\n-\t * we get here after the end-of-recovery checkpoint.\n+\t * Ensure minRecoveryPoint is past the checkpoint record while archive\n+\t * recovery is still ongoing. Normally, this will have happened already\n+\t * while writing out dirty buffers, but not necessarily - e.g. because no\n+\t * buffers were dirtied. We do this because a non-exclusive base backup\n+\t * uses minRecoveryPoint to determine which WAL files must be included in\n+\t * the backup, and the file (or files) containing the checkpoint record\n+\t * must be included, at a minimum. 
Note that for an ordinary restart of\n+\t * recovery there's no value in having the minimum recovery point any\n+\t * earlier than this anyway, because redo will begin just after the\n+\t * checkpoint record. This is a quick hack to make sure nothing really bad\n+\t * happens if somehow we get here after the end-of-recovery checkpoint.\n \t */\n-\tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n-\tif (ControlFile->state == DB_IN_ARCHIVE_RECOVERY &&\n-\t\tControlFile->checkPointCopy.redo < lastCheckPoint.redo)\n+\tif (ControlFile->state == DB_IN_ARCHIVE_RECOVERY)\n \t{\n-\t\tControlFile->checkPoint = lastCheckPointRecPtr;\n-\t\tControlFile->checkPointCopy = lastCheckPoint;\n-\t\tControlFile->time = (pg_time_t) time(NULL);\n-\n-\t\t/*\n-\t\t * Ensure minRecoveryPoint is past the checkpoint record. Normally,\n-\t\t * this will have happened already while writing out dirty buffers,\n-\t\t * but not necessarily - e.g. because no buffers were dirtied. We do\n-\t\t * this because a non-exclusive base backup uses minRecoveryPoint to\n-\t\t * determine which WAL files must be included in the backup, and the\n-\t\t * file (or files) containing the checkpoint record must be included,\n-\t\t * at a minimum. Note that for an ordinary restart of recovery there's\n-\t\t * no value in having the minimum recovery point any earlier than this\n-\t\t * anyway, because redo will begin just after the checkpoint record.\n-\t\t */\n \t\tif (ControlFile->minRecoveryPoint < lastCheckPointEndPtr)\n \t\t{\n \t\t\tControlFile->minRecoveryPoint = lastCheckPointEndPtr;\n@@ -9505,8 +9509,25 @@ CreateRestartPoint(int flags)\n \t\t}\n \t\tif (flags & CHECKPOINT_IS_SHUTDOWN)\n \t\t\tControlFile->state = DB_SHUTDOWNED_IN_RECOVERY;\n-\t\tUpdateControlFile();\n \t}\n+\telse\n+\t{\n+\t\t/* recovery mode is not supposed to end during shutdown restartpoint */\n+\t\tAssert((flags & CHECKPOINT_IS_SHUTDOWN) == 0);\n+\n+\t\t/*\n+\t\t * Aarchive recovery has ended. 
Crash recovery ever after should\n+\t\t * always recover to the end of WAL\n+\t\t */\n+\t\tControlFile->minRecoveryPoint = InvalidXLogRecPtr;\n+\t\tControlFile->minRecoveryPointTLI = 0;\n+\n+\t\t/* also update local copy */\n+\t\tminRecoveryPoint = InvalidXLogRecPtr;\n+\t\tminRecoveryPointTLI = 0;\n+\t}\n+\n+\tUpdateControlFile();\n \tLWLockRelease(ControlFileLock);\n \n \t/*\n@@ -9590,7 +9611,7 @@ CreateRestartPoint(int flags)\n \txtime = GetLatestXTime();\n \tereport((log_checkpoints ? LOG : DEBUG2),\n \t\t\t(errmsg(\"recovery restart point at %X/%X\",\n-\t\t\t\t\t(uint32) (lastCheckPoint.redo >> 32), (uint32) lastCheckPoint.redo),\n+\t\t\t\t\t(uint32) (RedoRecPtr >> 32), (uint32) RedoRecPtr),\n \t\t\t xtime ? errdetail(\"Last completed transaction was at log time %s.\",\n \t\t\t\t\t\t\t timestamptz_to_str(xtime)) : 0));\n \n-- \n2.27.0\n\n\n From c89e2b509723b68897f2af49a154af2a69f0747b Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Fri, 25 Feb 2022 15:04:00 +0900\nSubject: [PATCH v3] Correctly update contfol file at the end of archive\n recovery\n\nCreateRestartPoint runs WAL file cleanup basing on the checkpoint just\nhave finished in the function. If the database has exited\nDB_IN_ARCHIVE_RECOVERY state when the function is going to update\ncontrol file, the function refrains from updating the file at all then\nproceeds to WAL cleanup having the latest REDO LSN, which is now\ninconsistent with the control file. As the result, the succeeding\ncleanup procedure overly removes WAL files against the control file\nand leaves unrecoverable database until the next checkpoint finishes.\n\nAlong with that fix, we remove a dead code path for the case some\nother process ran a simultaneous checkpoint. 
It seems like just a\npreventive measure but it's no longer useful because we are sure that\ncheckpoint is performed only by checkpointer except single process\nmode.\n---\n src/backend/access/transam/xlog.c | 71 +++++++++++++++++++------------\n 1 file changed, 44 insertions(+), 27 deletions(-)\n\ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex 885558f291..2b2568c475 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -9334,7 +9334,7 @@ CreateRestartPoint(int flags)\n \n \t/* Also update the info_lck-protected copy */\n \tSpinLockAcquire(&XLogCtl->info_lck);\n-\tXLogCtl->RedoRecPtr = lastCheckPoint.redo;\n+\tXLogCtl->RedoRecPtr = RedoRecPtr;\n \tSpinLockRelease(&XLogCtl->info_lck);\n \n \t/*\n@@ -9350,7 +9350,10 @@ CreateRestartPoint(int flags)\n \tif (log_checkpoints)\n \t\tLogCheckpointStart(flags, true);\n \n-\tCheckPointGuts(lastCheckPoint.redo, flags);\n+\tCheckPointGuts(RedoRecPtr, flags);\n+\n+\t/* Update pg_control */\n+\tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n \n \t/*\n \t * Remember the prior checkpoint's redo ptr for\n@@ -9358,31 +9361,28 @@ CreateRestartPoint(int flags)\n \t */\n \tPriorRedoPtr = ControlFile->checkPointCopy.redo;\n \n+\tAssert (PriorRedoPtr < RedoRecPtr);\n+\n+\tControlFile->checkPoint = lastCheckPointRecPtr;\n+\tControlFile->checkPointCopy = lastCheckPoint;\n+\n+\t/* Update control file using current time */\n+\tControlFile->time = (pg_time_t) time(NULL);\n+\n \t/*\n-\t * Update pg_control, using current time. Check that it still shows\n-\t * IN_ARCHIVE_RECOVERY state and an older checkpoint, else do nothing;\n-\t * this is a quick hack to make sure nothing really bad happens if somehow\n-\t * we get here after the end-of-recovery checkpoint.\n+\t * Ensure minRecoveryPoint is past the checkpoint record while archive\n+\t * recovery is still ongoing. Normally, this will have happened already\n+\t * while writing out dirty buffers, but not necessarily - e.g. because no\n+\t * buffers were dirtied. We do this because a non-exclusive base backup\n+\t * uses minRecoveryPoint to determine which WAL files must be included in\n+\t * the backup, and the file (or files) containing the checkpoint record\n+\t * must be included, at a minimum. Note that for an ordinary restart of\n+\t * recovery there's no value in having the minimum recovery point any\n+\t * earlier than this anyway, because redo will begin just after the\n+\t * checkpoint record.\n \t */\n-\tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n-\tif (ControlFile->state == DB_IN_ARCHIVE_RECOVERY &&\n-\t\tControlFile->checkPointCopy.redo < lastCheckPoint.redo)\n+\tif (ControlFile->state == DB_IN_ARCHIVE_RECOVERY)\n \t{\n-\t\tControlFile->checkPoint = lastCheckPointRecPtr;\n-\t\tControlFile->checkPointCopy = lastCheckPoint;\n-\t\tControlFile->time = (pg_time_t) time(NULL);\n-\n-\t\t/*\n-\t\t * Ensure minRecoveryPoint is past the checkpoint record. Normally,\n-\t\t * this will have happened already while writing out dirty buffers,\n-\t\t * but not necessarily - e.g. because no buffers were dirtied. We do\n-\t\t * this because a non-exclusive base backup uses minRecoveryPoint to\n-\t\t * determine which WAL files must be included in the backup, and the\n-\t\t * file (or files) containing the checkpoint record must be included,\n-\t\t * at a minimum. Note that for an ordinary restart of recovery there's\n-\t\t * no value in having the minimum recovery point any earlier than this\n-\t\t * anyway, because redo will begin just after the checkpoint record.\n-\t\t */\n \t\tif (ControlFile->minRecoveryPoint < lastCheckPointEndPtr)\n \t\t{\n \t\t\tControlFile->minRecoveryPoint = lastCheckPointEndPtr;\n@@ -9393,9 +9393,26 @@ CreateRestartPoint(int flags)\n \t\t\tminRecoveryPointTLI = ControlFile->minRecoveryPointTLI;\n \t\t}\n \t\tif (flags & CHECKPOINT_IS_SHUTDOWN)\n-\t\t\tControlFile->state = DB_SHUTDOWNED_IN_RECOVERY;\n-\t\tUpdateControlFile();\n+\t\tControlFile->state = DB_SHUTDOWNED_IN_RECOVERY;\n \t}\n+\telse\n+\t{\n+\t\t/* recovery mode is not supposed to end during shutdown restartpoint */\n+\t\tAssert((flags & CHECKPOINT_IS_SHUTDOWN) == 0);\n+\n+\t\t/*\n+\t\t * Aarchive recovery has ended. Crash recovery ever after should\n+\t\t * always recover to the end of WAL\n+\t\t */\n+\t\tControlFile->minRecoveryPoint = InvalidXLogRecPtr;\n+\t\tControlFile->minRecoveryPointTLI = 0;\n+\n+\t\t/* also update local copy */\n+\t\tminRecoveryPoint = InvalidXLogRecPtr;\n+\t\tminRecoveryPointTLI = 0;\n+\t}\n+\n+\tUpdateControlFile();\n \tLWLockRelease(ControlFileLock);\n \n \t/*\n@@ -9470,7 +9487,7 @@ CreateRestartPoint(int flags)\n \txtime = GetLatestXTime();\n \tereport((log_checkpoints ? LOG : DEBUG2),\n \t\t\t(errmsg(\"recovery restart point at %X/%X\",\n-\t\t\t\t\t(uint32) (lastCheckPoint.redo >> 32), (uint32) lastCheckPoint.redo),\n+\t\t\t\t\t(uint32) (RedoRecPtr >> 32), (uint32) RedoRecPtr),\n \t\t\t xtime ? errdetail(\"Last completed transaction was at log time %s.\",\n \t\t\t\t\t\t\t timestamptz_to_str(xtime)) : 0));\n \n-- \n2.27.0\n\n\n From 7dd174d165b3639b573bfc47c2e8b2fba61395c5 Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Fri, 25 Feb 2022 16:35:16 +0900\nSubject: [PATCH v3] Correctly update contfol file at the end of archive\n recovery\n\nCreateRestartPoint runs WAL file cleanup basing on the checkpoint just\nhave finished in the function. If the database has exited\nDB_IN_ARCHIVE_RECOVERY state when the function is going to update\ncontrol file, the function refrains from updating the file at all then\nproceeds to WAL cleanup having the latest REDO LSN, which is now\ninconsistent with the control file. As the result, the succeeding\ncleanup procedure overly removes WAL files against the control file\nand leaves unrecoverable database until the next checkpoint finishes.\n\nAlong with that fix, we remove a dead code path for the case some\nother process ran a simultaneous checkpoint. It seems like just a\npreventive measure but it's no longer useful because we are sure that\ncheckpoint is performed only by checkpointer except single process\nmode.\n---\n src/backend/access/transam/xlog.c | 73 +++++++++++++++++++------------\n 1 file changed, 45 insertions(+), 28 deletions(-)\n\ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex c64febdb53..9fb66ad7d5 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -9434,7 +9434,7 @@ CreateRestartPoint(int flags)\n \n \t/* Also update the info_lck-protected copy */\n \tSpinLockAcquire(&XLogCtl->info_lck);\n-\tXLogCtl->RedoRecPtr = lastCheckPoint.redo;\n+\tXLogCtl->RedoRecPtr = RedoRecPtr;\n \tSpinLockRelease(&XLogCtl->info_lck);\n \n \t/*\n@@ -9450,7 +9450,10 @@ CreateRestartPoint(int flags)\n \tif (log_checkpoints)\n \t\tLogCheckpointStart(flags, true);\n \n-\tCheckPointGuts(lastCheckPoint.redo, flags);\n+\tCheckPointGuts(RedoRecPtr, flags);\n+\n+\t/* Update pg_control */\n+\tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n \n \t/*\n \t * Remember the prior checkpoint's redo pointer, used later to determine\n@@ -9458,32 +9461,29 @@ CreateRestartPoint(int flags)\n \t */\n \tPriorRedoPtr = ControlFile->checkPointCopy.redo;\n \n+\tAssert (PriorRedoPtr < RedoRecPtr);\n+\n+\tControlFile->prevCheckPoint = ControlFile->checkPoint;\n+\tControlFile->checkPoint = lastCheckPointRecPtr;\n+\tControlFile->checkPointCopy = lastCheckPoint;\n+\n+\t/* Update control file using current time */\n+\tControlFile->time = (pg_time_t) time(NULL);\n+\n \t/*\n-\t * Update pg_control, using current time. Check that it still shows\n-\t * IN_ARCHIVE_RECOVERY state and an older checkpoint, else do nothing;\n-\t * this is a quick hack to make sure nothing really bad happens if somehow\n-\t * we get here after the end-of-recovery checkpoint.\n+\t * Ensure minRecoveryPoint is past the checkpoint record while archive\n+\t * recovery is still running. Normally, this will have happened already\n+\t * while writing out dirty buffers, but not necessarily - e.g. because no\n+\t * buffers were dirtied. We do this because a non-exclusive base backup\n+\t * uses minRecoveryPoint to determine which WAL files must be included in\n+\t * the backup, and the file (or files) containing the checkpoint record\n+\t * must be included, at a minimum. Note that for an ordinary restart of\n+\t * recovery there's no value in having the minimum recovery point any\n+\t * earlier than this anyway, because redo will begin just after the\n+\t * checkpoint record.\n \t */\n-\tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n-\tif (ControlFile->state == DB_IN_ARCHIVE_RECOVERY &&\n-\t\tControlFile->checkPointCopy.redo < lastCheckPoint.redo)\n+\tif (ControlFile->state == DB_IN_ARCHIVE_RECOVERY)\n \t{\n-\t\tControlFile->prevCheckPoint = ControlFile->checkPoint;\n-\t\tControlFile->checkPoint = lastCheckPointRecPtr;\n-\t\tControlFile->checkPointCopy = lastCheckPoint;\n-\t\tControlFile->time = (pg_time_t) time(NULL);\n-\n-\t\t/*\n-\t\t * Ensure minRecoveryPoint is past the checkpoint record. Normally,\n-\t\t * this will have happened already while writing out dirty buffers,\n-\t\t * but not necessarily - e.g. because no buffers were dirtied. We do\n-\t\t * this because a non-exclusive base backup uses minRecoveryPoint to\n-\t\t * determine which WAL files must be included in the backup, and the\n-\t\t * file (or files) containing the checkpoint record must be included,\n-\t\t * at a minimum. Note that for an ordinary restart of recovery there's\n-\t\t * no value in having the minimum recovery point any earlier than this\n-\t\t * anyway, because redo will begin just after the checkpoint record.\n-\t\t */\n \t\tif (ControlFile->minRecoveryPoint < lastCheckPointEndPtr)\n \t\t{\n \t\t\tControlFile->minRecoveryPoint = lastCheckPointEndPtr;\n@@ -9494,9 +9494,26 @@ CreateRestartPoint(int flags)\n \t\t\tminRecoveryPointTLI = ControlFile->minRecoveryPointTLI;\n \t\t}\n \t\tif (flags & CHECKPOINT_IS_SHUTDOWN)\n-\t\t\tControlFile->state = DB_SHUTDOWNED_IN_RECOVERY;\n-\t\tUpdateControlFile();\n+\t\tControlFile->state = DB_SHUTDOWNED_IN_RECOVERY;\n \t}\n+\telse\n+\t{\n+\t\t/* recovery mode is not supposed to end during shutdown restartpoint */\n+\t\tAssert((flags & CHECKPOINT_IS_SHUTDOWN) == 0);\n+\n+\t\t/*\n+\t\t * Aarchive recovery has ended. Crash recovery ever after should\n+\t\t * always recover to the end of WAL\n+\t\t */\n+\t\tControlFile->minRecoveryPoint = InvalidXLogRecPtr;\n+\t\tControlFile->minRecoveryPointTLI = 0;\n+\n+\t\t/* also update local copy */\n+\t\tminRecoveryPoint = InvalidXLogRecPtr;\n+\t\tminRecoveryPointTLI = 0;\n+\t}\n+\n+\tUpdateControlFile();\n \tLWLockRelease(ControlFileLock);\n \n \t/*\n@@ -9579,7 +9596,7 @@ CreateRestartPoint(int flags)\n \txtime = GetLatestXTime();\n \tereport((log_checkpoints ? LOG : DEBUG2),\n \t\t\t(errmsg(\"recovery restart point at %X/%X\",\n-\t\t\t\t\t(uint32) (lastCheckPoint.redo >> 32), (uint32) lastCheckPoint.redo),\n+\t\t\t\t\t(uint32) (RedoRecPtr >> 32), (uint32) RedoRecPtr),\n \t\t\t xtime ? errdetail(\"last completed transaction was at log time %s\",\n \t\t\t\t\t\t\t timestamptz_to_str(xtime)) : 0));\n \n-- \n2.27.0",
"msg_date": "Wed, 16 Mar 2022 10:24:44 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Possible corruption by CreateRestartPoint at promotion"
},
{
"msg_contents": "Just for the record.\n\nAn instance of the corruption showed up in this mailing list [1].\n\n[1] https://www.postgresql.org/message-id/flat/9EB4CF63-1107-470E-B5A4-061FB9EF8CC8%40outlook.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 16 Mar 2022 17:36:18 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Possible corruption by CreateRestartPoint at promotion"
},
{
"msg_contents": "On Wed, Mar 16, 2022 at 10:24:44AM +0900, Kyotaro Horiguchi wrote:\n> While discussing on additional LSNs in checkpoint log message,\n> Fujii-san pointed out [2] that there is a case where\n> CreateRestartPoint leaves unrecoverable database when concurrent\n> promotion happens. That corruption is \"fixed\" by the next checkpoint\n> so it is not a severe corruption.\n\nI suspect we'll start seeing this problem more often once end-of-recovery\ncheckpoints are removed [0]. Would you mind creating a commitfest entry\nfor this thread? I didn't see one.\n\n> AFAICS since 9.5, no check(/restart)pionts won't run concurrently with\n> restartpoint [3]. So I propose to remove the code path as attached.\n\nYeah, this \"quick hack\" has been around for some time (2de48a8), and I\nbelieve much has changed since then, so something like what you're\nproposing is probably the right thing to do.\n\n> \t/* Also update the info_lck-protected copy */\n> \tSpinLockAcquire(&XLogCtl->info_lck);\n> -\tXLogCtl->RedoRecPtr = lastCheckPoint.redo;\n> +\tXLogCtl->RedoRecPtr = RedoRecPtr;\n> \tSpinLockRelease(&XLogCtl->info_lck);\n> \n> \t/*\n> @@ -6984,7 +6987,10 @@ CreateRestartPoint(int flags)\n> \t/* Update the process title */\n> \tupdate_checkpoint_display(flags, true, false);\n> \n> -\tCheckPointGuts(lastCheckPoint.redo, flags);\n> +\tCheckPointGuts(RedoRecPtr, flags);\n\nI don't understand the purpose of these changes. Are these related to the\nfix, or is this just tidying up?\n\n[0] https://postgr.es/m/CA%2BTgmoY%2BSJLTjma4Hfn1sA7S6CZAgbihYd%3DKzO6srd7Ut%3DXVBQ%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 26 Apr 2022 11:33:49 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Possible corruption by CreateRestartPoint at promotion"
},
{
"msg_contents": "At Tue, 26 Apr 2022 11:33:49 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> On Wed, Mar 16, 2022 at 10:24:44AM +0900, Kyotaro Horiguchi wrote:\n> > While discussing on additional LSNs in checkpoint log message,\n> > Fujii-san pointed out [2] that there is a case where\n> > CreateRestartPoint leaves unrecoverable database when concurrent\n> > promotion happens. That corruption is \"fixed\" by the next checkpoint\n> > so it is not a severe corruption.\n> \n> I suspect we'll start seeing this problem more often once end-of-recovery\n> checkpoints are removed [0]. Would you mind creating a commitfest entry\n> for this thread? I didn't see one.\n\nI'm not sure the patch makes any change here, because restart points\ndon't run while crash recovery, since no checkpoint records seen\nduring a crash recovery. Anyway the patch doesn't apply anymore so\nrebased, but only the one for master for the lack of time for now.\n\n> > AFAICS since 9.5, no check(/restart)pionts won't run concurrently with\n> > restartpoint [3]. So I propose to remove the code path as attached.\n> \n> Yeah, this \"quick hack\" has been around for some time (2de48a8), and I\n> believe much has changed since then, so something like what you're\n> proposing is probably the right thing to do.\n\nThanks for checking!\n\n> > \t/* Also update the info_lck-protected copy */\n> > \tSpinLockAcquire(&XLogCtl->info_lck);\n> > -\tXLogCtl->RedoRecPtr = lastCheckPoint.redo;\n> > +\tXLogCtl->RedoRecPtr = RedoRecPtr;\n> > \tSpinLockRelease(&XLogCtl->info_lck);\n> > \n> > \t/*\n> > @@ -6984,7 +6987,10 @@ CreateRestartPoint(int flags)\n> > \t/* Update the process title */\n> > \tupdate_checkpoint_display(flags, true, false);\n> > \n> > -\tCheckPointGuts(lastCheckPoint.redo, flags);\n> > +\tCheckPointGuts(RedoRecPtr, flags);\n> \n> I don't understand the purpose of these changes. Are these related to the\n> fix, or is this just tidying up?\n\nThe latter, since the mixed use of two not-guaranteed-to-be-same\nvariables at the same time for the same purpose made me perplexed (but\nI feel the change can hardly incorporated alone). However, you're\nright that it is irrelevant to the fix, so removed including other\ninstances of the same.\n\n> [0] https://postgr.es/m/CA%2BTgmoY%2BSJLTjma4Hfn1sA7S6CZAgbihYd%3DKzO6srd7Ut%3DXVBQ%40mail.gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 27 Apr 2022 10:43:53 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Possible corruption by CreateRestartPoint at promotion"
},
{
"msg_contents": "On Wed, Apr 27, 2022 at 10:43:53AM +0900, Kyotaro Horiguchi wrote:\n> At Tue, 26 Apr 2022 11:33:49 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n>> I suspect we'll start seeing this problem more often once end-of-recovery\n>> checkpoints are removed [0]. Would you mind creating a commitfest entry\n>> for this thread? I didn't see one.\n> \n> I'm not sure the patch makes any change here, because restart points\n> don't run while crash recovery, since no checkpoint records seen\n> during a crash recovery. Anyway the patch doesn't apply anymore so\n> rebased, but only the one for master for the lack of time for now.\n\nThanks for the new patch! Yeah, it wouldn't affect crash recovery, but\nIIUC Robert's patch also applies to archive recovery.\n\n> +\t/* Update pg_control */\n> +\tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n> +\n> \t/*\n> \t * Remember the prior checkpoint's redo ptr for\n> \t * UpdateCheckPointDistanceEstimate()\n> \t */\n> \tPriorRedoPtr = ControlFile->checkPointCopy.redo;\n\nnitpick: Why move the LWLockAcquire() all the way up here?\n\n> +\tAssert (PriorRedoPtr < RedoRecPtr);\n\nI think this could use a short explanation.\n\n> +\t * Ensure minRecoveryPoint is past the checkpoint record while archive\n> +\t * recovery is still ongoing. Normally, this will have happened already\n> +\t * while writing out dirty buffers, but not necessarily - e.g. because no\n> +\t * buffers were dirtied. We do this because a non-exclusive base backup\n> +\t * uses minRecoveryPoint to determine which WAL files must be included in\n> +\t * the backup, and the file (or files) containing the checkpoint record\n> +\t * must be included, at a minimum. Note that for an ordinary restart of\n> +\t * recovery there's no value in having the minimum recovery point any\n> +\t * earlier than this anyway, because redo will begin just after the\n> +\t * checkpoint record.\n\nnitpick: Since exclusive backup mode is now removed, we don't need to\nspecify that the base backup is non-exclusive.\n\n> +\t\t/*\n> +\t\t * Aarchive recovery has ended. Crash recovery ever after should\n> +\t\t * always recover to the end of WAL\n> +\t\t */\n> +\t\tControlFile->minRecoveryPoint = InvalidXLogRecPtr;\n> +\t\tControlFile->minRecoveryPointTLI = 0;\n> +\n> +\t\t/* also update local copy */\n> +\t\tLocalMinRecoveryPoint = InvalidXLogRecPtr;\n> +\t\tLocalMinRecoveryPointTLI = 0;\n\nShould this be handled by the code that changes the control file state to\nDB_IN_PRODUCTION instead? It looks like this is ordinarily done in the\nnext checkpoint. It's not clear to me why it is done this way.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 26 Apr 2022 20:26:09 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Possible corruption by CreateRestartPoint at promotion"
},
{
"msg_contents": "On Tue, Apr 26, 2022 at 08:26:09PM -0700, Nathan Bossart wrote:\n> On Wed, Apr 27, 2022 at 10:43:53AM +0900, Kyotaro Horiguchi wrote:\n>> At Tue, 26 Apr 2022 11:33:49 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n>>> I suspect we'll start seeing this problem more often once end-of-recovery\n>>> checkpoints are removed [0]. Would you mind creating a commitfest entry\n>>> for this thread? I didn't see one.\n>> \n>> I'm not sure the patch makes any change here, because restart points\n>> don't run while crash recovery, since no checkpoint records seen\n>> during a crash recovery. Anyway the patch doesn't apply anymore so\n>> rebased, but only the one for master for the lack of time for now.\n> \n> Thanks for the new patch! Yeah, it wouldn't affect crash recovery, but\n> IIUC Robert's patch also applies to archive recovery.\n> \n>> +\t/* Update pg_control */\n>> +\tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n>> +\n>> \t/*\n>> \t * Remember the prior checkpoint's redo ptr for\n>> \t * UpdateCheckPointDistanceEstimate()\n>> \t */\n>> \tPriorRedoPtr = ControlFile->checkPointCopy.redo;\n> \n> nitpick: Why move the LWLockAcquire() all the way up here?\n\nYeah, that should not be necessary. InitWalRecovery() is the only\nplace outside the checkpointer that would touch this field, but that\nhappens far too early in the startup sequence to matter with the\ncheckpointer.\n\n>> +\tAssert (PriorRedoPtr < RedoRecPtr);\n> \n> I think this could use a short explanation.\n\nThat's just to make sure that the current redo LSN is always older\nthan the one prior that. It does not seem really necessary to me to\nadd that.\n\n>> +\t\t/*\n>> +\t\t * Aarchive recovery has ended. Crash recovery ever after should\n>> +\t\t * always recover to the end of WAL\n>> +\t\t */\n\ns/Aarchive/Archive/.\n\n>> +\t\tControlFile->minRecoveryPoint = InvalidXLogRecPtr;\n>> +\t\tControlFile->minRecoveryPointTLI = 0;\n>> +\n>> +\t\t/* also update local copy */\n>> +\t\tLocalMinRecoveryPoint = InvalidXLogRecPtr;\n>> +\t\tLocalMinRecoveryPointTLI = 0;\n> \n> Should this be handled by the code that changes the control file state to\n> DB_IN_PRODUCTION instead? It looks like this is ordinarily done in the\n> next checkpoint. It's not clear to me why it is done this way.\n\nAnyway, that would be the work of the end-of-recovery checkpoint\nrequested at the end of StartupXLOG() once a promotion happens or of\nthe checkpoint requested by PerformRecoveryXLogAction() in the second\ncase, no? So, I don't quite see why we need to update\nminRecoveryPoint and minRecoveryPointTLI in the control file here, as\nmuch as this does not have to be part of the end-of-recovery code\nthat switches the control file to DB_IN_PRODUCTION.\n\n- if (ControlFile->state == DB_IN_ARCHIVE_RECOVERY &&\n- ControlFile->checkPointCopy.redo < lastCheckPoint.redo)\n- {\n7ff23c6 has removed the last call to CreateCheckpoint() outside the\ncheckpointer, meaning that there is one less concurrent race to worry\nabout, but I have to admit that this change, to update the control\nfile's checkPoint and checkPointCopy even if we don't check after\nControlFile->checkPointCopy.redo < lastCheckPoint.redo would make the\ncode less robust in ~14. So I am questioning whether a backpatch\nis actually worth the risk here.\n--\nMichael",
"msg_date": "Wed, 27 Apr 2022 14:16:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Possible corruption by CreateRestartPoint at promotion"
},
{
"msg_contents": "On Wed, Apr 27, 2022 at 02:16:01PM +0900, Michael Paquier wrote:\n> On Tue, Apr 26, 2022 at 08:26:09PM -0700, Nathan Bossart wrote:\n>> On Wed, Apr 27, 2022 at 10:43:53AM +0900, Kyotaro Horiguchi wrote:\n>>> +\t\tControlFile->minRecoveryPoint = InvalidXLogRecPtr;\n>>> +\t\tControlFile->minRecoveryPointTLI = 0;\n>>> +\n>>> +\t\t/* also update local copy */\n>>> +\t\tLocalMinRecoveryPoint = InvalidXLogRecPtr;\n>>> +\t\tLocalMinRecoveryPointTLI = 0;\n>> \n>> Should this be handled by the code that changes the control file state to\n>> DB_IN_PRODUCTION instead? It looks like this is ordinarily done in the\n>> next checkpoint. It's not clear to me why it is done this way.\n> \n> Anyway, that would be the work of the end-of-recovery checkpoint\n> requested at the end of StartupXLOG() once a promotion happens or of\n> the checkpoint requested by PerformRecoveryXLogAction() in the second\n> case, no? So, I don't quite see why we need to update\n> minRecoveryPoint and minRecoveryPointTLI in the control file here, as\n> much as this does not have to be part of the end-of-recovery code\n> that switches the control file to DB_IN_PRODUCTION.\n\n+1. We probably don't need to reset minRecoveryPoint here.\n\n> - if (ControlFile->state == DB_IN_ARCHIVE_RECOVERY &&\n> - ControlFile->checkPointCopy.redo < lastCheckPoint.redo)\n> - {\n> 7ff23c6 has removed the last call to CreateCheckpoint() outside the\n> checkpointer, meaning that there is one less concurrent race to worry\n> about, but I have to admit that this change, to update the control\n> file's checkPoint and checkPointCopy even if we don't check after\n> ControlFile->checkPointCopy.redo < lastCheckPoint.redo would make the\n> code less robust in ~14. So I am questioning whether a backpatch\n> is actually worth the risk here.\n\nIMO we should still check this before updating ControlFile to be safe.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 27 Apr 2022 11:09:45 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Possible corruption by CreateRestartPoint at promotion"
},
{
"msg_contents": "On Wed, Apr 27, 2022 at 11:09:45AM -0700, Nathan Bossart wrote:\n> On Wed, Apr 27, 2022 at 02:16:01PM +0900, Michael Paquier wrote:\n>> - if (ControlFile->state == DB_IN_ARCHIVE_RECOVERY &&\n>> - ControlFile->checkPointCopy.redo < lastCheckPoint.redo)\n>> - {\n>> 7ff23c6 has removed the last call to CreateCheckpoint() outside the\n>> checkpointer, meaning that there is one less concurrent race to worry\n>> about, but I have to admit that this change, to update the control\n>> file's checkPoint and checkPointCopy even if we don't check after\n>> ControlFile->checkPointCopy.redo < lastCheckPoint.redo would make the\n>> code less robust in ~14. So I am questioning whether a backpatch\n>> is actually worth the risk here.\n> \n> IMO we should still check this before updating ControlFile to be safe.\n\nSure. Fine by me to play it safe.\n--\nMichael",
"msg_date": "Thu, 28 Apr 2022 09:12:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Possible corruption by CreateRestartPoint at promotion"
},
{
"msg_contents": "At Wed, 27 Apr 2022 14:16:01 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Apr 26, 2022 at 08:26:09PM -0700, Nathan Bossart wrote:\n> > On Wed, Apr 27, 2022 at 10:43:53AM +0900, Kyotaro Horiguchi wrote:\n> >> At Tue, 26 Apr 2022 11:33:49 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> >>> I suspect we'll start seeing this problem more often once end-of-recovery\n> >>> checkpoints are removed [0]. Would you mind creating a commitfest entry\n> >>> for this thread? I didn't see one.\n> >> \n> >> I'm not sure the patch makes any change here, because restart points\n> >> don't run while crash recovery, since no checkpoint records seen\n> >> during a crash recovery. Anyway the patch doesn't apply anymore so\n> >> rebased, but only the one for master for the lack of time for now.\n> > \n> > Thanks for the new patch! Yeah, it wouldn't affect crash recovery, but\n> > IIUC Robert's patch also applies to archive recovery.\n> > \n> >> +\t/* Update pg_control */\n> >> +\tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n> >> +\n> >> \t/*\n> >> \t * Remember the prior checkpoint's redo ptr for\n> >> \t * UpdateCheckPointDistanceEstimate()\n> >> \t */\n> >> \tPriorRedoPtr = ControlFile->checkPointCopy.redo;\n> > \n> > nitpick: Why move the LWLockAcquire() all the way up here?\n> \n> Yeah, that should not be necessary. InitWalRecovery() is the only\n> place outside the checkpointer that would touch this field, but that\n> happens far too early in the startup sequence to matter with the\n> checkpointer.\n\nYes it is not necessary. I just wanted to apparently ensure not to\naccess ControlFile outside ControlFileLoc. So I won't be opposed to\nreverting it since, as you say, it is *actuall* safe..\n\n> >> +\tAssert (PriorRedoPtr < RedoRecPtr);\n> > \n> > I think this could use a short explanation.\n> \n> That's just to make sure that the current redo LSN is always older\n> than the one prior that. It does not seem really necessary to me to\n> add that.\n\nJust after we call UpdateCheckPointDistanceEstimate(RedoRecPtr -\nPriorRedoPtr). Don't we really need any safeguard against giving a\nwrap-arounded (in other words, treamendously large) value to the\nfunction? Actually it doesn't seem to happen now, but I don't\nconfident that that never ever happens.\n\nThat being said, I'm a minority here, so removed it.\n\n> >> +\t\t/*\n> >> +\t\t * Aarchive recovery has ended. Crash recovery ever after should\n> >> +\t\t * always recover to the end of WAL\n> >> +\t\t */\n> \n> s/Aarchive/Archive/.\n\nOops! Fixed.\n\n> >> +\t\tControlFile->minRecoveryPoint = InvalidXLogRecPtr;\n> >> +\t\tControlFile->minRecoveryPointTLI = 0;\n> >> +\n> >> +\t\t/* also update local copy */\n> >> +\t\tLocalMinRecoveryPoint = InvalidXLogRecPtr;\n> >> +\t\tLocalMinRecoveryPointTLI = 0;\n> > \n> > Should this be handled by the code that changes the control file state to\n> > DB_IN_PRODUCTION instead? It looks like this is ordinarily done in the\n> > next checkpoint. It's not clear to me why it is done this way.\n> \n> Anyway, that would be the work of the end-of-recovery checkpoint\n> requested at the end of StartupXLOG() once a promotion happens or of\n> the checkpoint requested by PerformRecoveryXLogAction() in the second\n> case, no? So, I don't quite see why we need to update\n\nEventually the work is done by StartupXLOG(). So we don't need to do\nthat at all even in CreateCheckPoint(). If we expect that the\nend-of-recovery checkpoint clears it, that won't happen if if the last\nrestart point takes so long time that the end-of-recovery checkpoint\nrequest is ignored. If DB_IN_ARCHIVE_RECOVERY ended while the restart\npoint is running, it is highly possible that the end-of-recovery\ncheckpoint trigger is ignored. In that case the values are cleard at\nthe next checkpoint.\n\nIn short, if we want to reset them at so-called end-of-recovery\ncheckpoint, we should do that also in CreateRecoveryPoint.\n\nSo, it is not removed in this version.\n\n> minRecoveryPoint and minRecoveryPointTLI in the control file here, as\n> much as this does not have to be part of the end-of-recovery code\n> that switches the control file to DB_IN_PRODUCTION.\n> \n> - if (ControlFile->state == DB_IN_ARCHIVE_RECOVERY &&\n> - ControlFile->checkPointCopy.redo < lastCheckPoint.redo)\n> - {\n> 7ff23c6 has removed the last call to CreateCheckpoint() outside the\n> checkpointer, meaning that there is one less concurrent race to worry\n> about, but I have to admit that this change, to update the control\n\nSure. In very early stage the reasoning to rmove the code was it.\nAnd the rason for proposing to back-patch the same to older versions\nis basing on the further investigation, and I'm not fully confident\nthat for the earlier versions.\n\n> file's checkPoint and checkPointCopy even if we don't check after\n> ControlFile->checkPointCopy.redo < lastCheckPoint.redo would make the\n> code less robust in ~14. So I am questioning whether a backpatch\n> is actually worth the risk here.\n\nAgreed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 28 Apr 2022 11:39:38 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Possible corruption by CreateRestartPoint at promotion"
},
{
"msg_contents": "At Thu, 28 Apr 2022 09:12:13 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, Apr 27, 2022 at 11:09:45AM -0700, Nathan Bossart wrote:\n> > On Wed, Apr 27, 2022 at 02:16:01PM +0900, Michael Paquier wrote:\n> >> - if (ControlFile->state == DB_IN_ARCHIVE_RECOVERY &&\n> >> - ControlFile->checkPointCopy.redo < lastCheckPoint.redo)\n> >> - {\n> >> 7ff23c6 has removed the last call to CreateCheckpoint() outside the\n> >> checkpointer, meaning that there is one less concurrent race to worry\n> >> about, but I have to admit that this change, to update the control\n> >> file's checkPoint and checkPointCopy even if we don't check after\n> >> ControlFile->checkPointCopy.redo < lastCheckPoint.redo would make the\n> >> code less robust in ~14. So I am questioning whether a backpatch\n> >> is actually worth the risk here.\n> > \n> > IMO we should still check this before updating ControlFile to be safe.\n> \n> Sure. Fine by me to play it safe.\n\nWhy do we consider concurrent check/restart points here while we don't\nconsider the same for ControlFile->checkPointCopy?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 28 Apr 2022 11:43:57 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Possible corruption by CreateRestartPoint at promotion"
},
{
"msg_contents": "On Thu, Apr 28, 2022 at 11:43:57AM +0900, Kyotaro Horiguchi wrote:\n> At Thu, 28 Apr 2022 09:12:13 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n>> On Wed, Apr 27, 2022 at 11:09:45AM -0700, Nathan Bossart wrote:\n>>> On Wed, Apr 27, 2022 at 02:16:01PM +0900, Michael Paquier wrote:\n>>>> - if (ControlFile->state == DB_IN_ARCHIVE_RECOVERY &&\n>>>> - ControlFile->checkPointCopy.redo < lastCheckPoint.redo)\n>>>> - {\n>>>> 7ff23c6 has removed the last call to CreateCheckpoint() outside the\n>>>> checkpointer, meaning that there is one less concurrent race to worry\n>>>> about, but I have to admit that this change, to update the control\n>>>> file's checkPoint and checkPointCopy even if we don't check after\n>>>> ControlFile->checkPointCopy.redo < lastCheckPoint.redo would make the\n>>>> code less robust in ~14. So I am questioning whether a backpatch\n>>>> is actually worth the risk here.\n>>> \n>>> IMO we should still check this before updating ControlFile to be safe.\n>> \n>> Sure. Fine by me to play it safe.\n> \n> Why do we consider concurrent check/restart points here while we don't\n> consider the same for ControlFile->checkPointCopy?\n\nI am not sure what you mean here. FWIW, I am translating the\nsuggestion of Nathan to split the existing check in\nCreateRestartPoint() that we are discussing here into two if blocks,\ninstead of just one:\n- Move the update of checkPoint and checkPointCopy into its own if\nblock, controlled only by the check on\n(ControlFile->checkPointCopy.redo < lastCheckPoint.redo)\n- Keep the code updating minRecoveryPoint and minRecoveryPointTLI\nmostly as-is, but just do the update when the control file state is\nDB_IN_ARCHIVE_RECOVERY. Of course we need to keep the check on\n(minRecoveryPoint < lastCheckPointEndPtr).\n\nv5 is mostly doing that.\n--\nMichael",
"msg_date": "Thu, 28 Apr 2022 15:49:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Possible corruption by CreateRestartPoint at promotion"
},
{
"msg_contents": "On Thu, Apr 28, 2022 at 03:49:42PM +0900, Michael Paquier wrote:\n> I am not sure what you mean here. FWIW, I am translating the\n> suggestion of Nathan to split the existing check in\n> CreateRestartPoint() that we are discussing here into two if blocks,\n> instead of just one:\n> - Move the update of checkPoint and checkPointCopy into its own if\n> block, controlled only by the check on\n> (ControlFile->checkPointCopy.redo < lastCheckPoint.redo)\n> - Keep the code updating minRecoveryPoint and minRecoveryPointTLI\n> mostly as-is, but just do the update when the control file state is\n> DB_IN_ARCHIVE_RECOVERY. Of course we need to keep the check on\n> (minRecoveryPoint < lastCheckPointEndPtr).\n\nAnd I have spent a bit of this stuff to finish with the attached. It\nwill be a plus to get that done on HEAD for beta1, so I'll try to deal\nwith it on Monday. I am still a bit stressed about the back branches\nas concurrent checkpoints are possible via CreateCheckPoint() from the\nstartup process (not the case of HEAD), but the stable branches will\nhave a new point release very soon so let's revisit this choice there\nlater. v6 attached includes a TAP test, but I don't intend to include\nit as it is expensive.\n--\nMichael",
"msg_date": "Fri, 6 May 2022 19:58:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Possible corruption by CreateRestartPoint at promotion"
},
{
"msg_contents": "On Fri, May 06, 2022 at 07:58:43PM +0900, Michael Paquier wrote:\n> And I have spent a bit of this stuff to finish with the attached. It\n> will be a plus to get that done on HEAD for beta1, so I'll try to deal\n> with it on Monday. I am still a bit stressed about the back branches\n> as concurrent checkpoints are possible via CreateCheckPoint() from the\n> startup process (not the case of HEAD), but the stable branches will\n> have a new point release very soon so let's revisit this choice there\n> later. v6 attached includes a TAP test, but I don't intend to include\n> it as it is expensive.\n\nI was looking at other changes in this area (e.g., 3c64dcb), and now I'm\nwondering if we actually should invalidate the minRecoveryPoint when the\ncontrol file no longer indicates archive recovery. Specifically, what\nhappens if a base backup of a newly promoted standby is used for a\npoint-in-time restore? If the checkpoint location is updated and all\nprevious segments have been recycled/removed, it seems like the\nminRecoveryPoint might point to a missing segment.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 6 May 2022 08:52:45 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Possible corruption by CreateRestartPoint at promotion"
},
{
"msg_contents": "On Fri, May 06, 2022 at 08:52:45AM -0700, Nathan Bossart wrote:\n> I was looking at other changes in this area (e.g., 3c64dcb), and now I'm\n> wondering if we actually should invalidate the minRecoveryPoint when the\n> control file no longer indicates archive recovery. Specifically, what\n> happens if a base backup of a newly promoted standby is used for a\n> point-in-time restore? If the checkpoint location is updated and all\n> previous segments have been recycled/removed, it seems like the\n> minRecoveryPoint might point to a missing segment.\n\nA new checkpoint is enforced at the beginning of the backup which\nwould update minRecoveryPoint and minRecoveryPointTLI, while we don't\na allow a backup to finish if it began on a standby has just promoted\nin-between.\n--\nMichael",
"msg_date": "Sat, 7 May 2022 08:43:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Possible corruption by CreateRestartPoint at promotion"
},
{
"msg_contents": "On Fri, May 06, 2022 at 07:58:43PM +0900, Michael Paquier wrote:\n> And I have spent a bit of this stuff to finish with the attached. It\n> will be a plus to get that done on HEAD for beta1, so I'll try to deal\n> with it on Monday. I am still a bit stressed about the back branches\n> as concurrent checkpoints are possible via CreateCheckPoint() from the\n> startup process (not the case of HEAD), but the stable branches will\n> have a new point release very soon so let's revisit this choice there\n> later. v6 attached includes a TAP test, but I don't intend to include\n> it as it is expensive.\n\nOkay, applied this one on HEAD after going back-and-forth on it for\nthe last couple of days. I have found myself shaping the patch in\nwhat looks like its simplest form, by applying the check based on an\nolder checkpoint to all the fields updated in the control file, with\nthe check on DB_IN_ARCHIVE_RECOVERY applying to the addition of\nDB_SHUTDOWNED_IN_RECOVERY (got initialially surprised that this was\nhaving side effects on pg_rewind) and the minRecoveryPoint\ncalculations.\n\nNow, it would be nice to get a test for this stuff, and we are going\nto need something cheaper than what's been proposed upthread. This\ncomes down to the point of being able to put a deterministic stop\nin a restart point while it is processing, meaning that we need to\ninteract with one of the internal routines of CheckPointGuts(). One\nfancy way to do so would be to forcibly take a LWLock to stuck the\nrestart point until it is released. Using a SQL function for that\nwould be possible, if not artistic. Perhaps we don't need such a\nfunction though, if we could stuck arbitrarily the internals of a \ncheckpoint? Any ideas?\n--\nMichael",
"msg_date": "Mon, 9 May 2022 09:24:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Possible corruption by CreateRestartPoint at promotion"
},
{
"msg_contents": "On Mon, May 09, 2022 at 09:24:06AM +0900, Michael Paquier wrote:\n> Okay, applied this one on HEAD after going back-and-forth on it for\n> the last couple of days. I have found myself shaping the patch in\n> what looks like its simplest form, by applying the check based on an\n> older checkpoint to all the fields updated in the control file, with\n> the check on DB_IN_ARCHIVE_RECOVERY applying to the addition of\n> DB_SHUTDOWNED_IN_RECOVERY (got initialially surprised that this was\n> having side effects on pg_rewind) and the minRecoveryPoint\n> calculations.\n\nIt took me some time to make sure that this would be safe, but done\nnow for all the stable branches.\n\n> Now, it would be nice to get a test for this stuff, and we are going\n> to need something cheaper than what's been proposed upthread. This\n> comes down to the point of being able to put a deterministic stop\n> in a restart point while it is processing, meaning that we need to\n> interact with one of the internal routines of CheckPointGuts(). One\n> fancy way to do so would be to forcibly take a LWLock to stuck the\n> restart point until it is released. Using a SQL function for that\n> would be possible, if not artistic. Perhaps we don't need such a\n> function though, if we could stuck arbitrarily the internals of a \n> checkpoint? Any ideas?\n\nOne thing that we could do here is to resurrect the patch that adds\nsupport for stop points in the code..\n--\nMichael",
"msg_date": "Mon, 16 May 2022 12:47:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Possible corruption by CreateRestartPoint at promotion"
}
] |
[
{
"msg_contents": "Hello, this is a follow-up topic of [1] (add LSNs to checkpint logs).\n\nMany user-facing texts contains wording like \"WAL location\" or such\nlike. The attached is WIP patches that fixes such wordings to \"WAL\nLSN\" or alikes.\n\nThe attached patches is v13 but it is not changed at all from v12.\nMy lastest comments on this version are as follows.\n\nhttps://www.postgresql.org/message-id/20220209.115204.1794224638476710282.horikyota.ntt@gmail.com\n\n> The old 0003 (attached 0004):\n> \n> \n> \n> +++ b/src/backend/access/rmgrdesc/xlogdesc.c\n> -\t\tappendStringInfo(buf, \"redo %X/%X; \"\n> +\t\tappendStringInfo(buf, \"redo lsn %X/%X; \"\n> \n> \n> \n> It is shown in the context of a checkpoint record, so I think it is\n> not needed or rather lengthning the dump line uselessly. \n> \n> \n> \n> +++ b/src/backend/access/transam/xlog.c\n> -\t\t\t\t(errmsg(\"request to flush past end of generated WAL; request %X/%X, current position %X/%X\",\n> +\t\t\t\t(errmsg(\"request to flush past end of generated WAL; request lsn %X/%X, current lsn %X/%X\",\n> \n> \n> \n> +++ b/src/backend/replication/walsender.c\n> -\t\t\t\t\t(errmsg(\"requested starting point %X/%X is ahead of the WAL flush position of this server %X/%X\",\n> +\t\t\t\t\t(errmsg(\"requested starting point %X/%X is ahead of the WAL flush LSN of this server %X/%X\",\n> \n> \n> \n> \"WAL\" is upper-cased. So it seems rather strange that the \"lsn\" is\n> lower-cased. 
In the first place the message doesn't look like a\n> user-facing error message and I feel we don't need position or lsn\n> there..\n> \n> \n> \n> +++ b/src/bin/pg_rewind/pg_rewind.c\n> -\t\tpg_log_info(\"servers diverged at WAL location %X/%X on timeline %u\",\n> +\t\tpg_log_info(\"servers diverged at WAL LSN %X/%X on timeline %u\",\n> \n> \n> \n> I feel that we don't need \"WAL\" there.\n> \n> \n> \n> +++ b/src/bin/pg_waldump/pg_waldump.c\n> -\tprintf(_(\" -e, --end=RECPTR stop reading at WAL location RECPTR\\n\"));\n> +\tprintf(_(\" -e, --end=RECPTR stop reading at WAL LSN RECPTR\\n\"));\n> \n> \n> \n> Mmm.. \"WAL LSN RECPTR\" looks strange to me. In the first place I\n> don't think \"RECPTR\" is a user-facing term. Doesn't something like the\n> follows work?\n> \n> \n> \n> +\tprintf(_(\" -e, --end=WAL-LSN stop reading at WAL-LSN\\n\"));\n> \n> \n> \n> In some changes in this patch shorten the main message text of\n> fprintf-ish functions. That makes the succeeding parameters can be\n> inlined.\nregards.\n\n[1] https://www.postgresql.org/message-id/20220316.091913.806120467943749797.horikyota.ntt%40gmail.com\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 16 Mar 2022 10:24:53 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add checkpoint and redo LSN to LogCheckpointEnd log message"
},
{
"msg_contents": "Uh.. Sorry, I forgot to change the subject. Resent with the correct\nsubject.\n\n=====\nHello, this is a follow-up topic of [1] (add LSNs to checkpint logs).\n\nMany user-facing texts contains wording like \"WAL location\" or such\nlike. The attached is WIP patches that fixes such wordings to \"WAL\nLSN\" or alikes.\n\nThe attached patches is v13 but it is not changed at all from v12.\nMy lastest comments on this version are as follows.\n\nhttps://www.postgresql.org/message-id/20220209.115204.1794224638476710282.horikyota.ntt@gmail.com\n\n> The old 0003 (attached 0004):\n> \n> \n> \n> +++ b/src/backend/access/rmgrdesc/xlogdesc.c\n> -\t\tappendStringInfo(buf, \"redo %X/%X; \"\n> +\t\tappendStringInfo(buf, \"redo lsn %X/%X; \"\n> \n> \n> \n> It is shown in the context of a checkpoint record, so I think it is\n> not needed or rather lengthning the dump line uselessly. \n> \n> \n> \n> +++ b/src/backend/access/transam/xlog.c\n> -\t\t\t\t(errmsg(\"request to flush past end of generated WAL; request %X/%X, current position %X/%X\",\n> +\t\t\t\t(errmsg(\"request to flush past end of generated WAL; request lsn %X/%X, current lsn %X/%X\",\n> \n> \n> \n> +++ b/src/backend/replication/walsender.c\n> -\t\t\t\t\t(errmsg(\"requested starting point %X/%X is ahead of the WAL flush position of this server %X/%X\",\n> +\t\t\t\t\t(errmsg(\"requested starting point %X/%X is ahead of the WAL flush LSN of this server %X/%X\",\n> \n> \n> \n> \"WAL\" is upper-cased. So it seems rather strange that the \"lsn\" is\n> lower-cased. 
In the first place the message doesn't look like a\n> user-facing error message and I feel we don't need position or lsn\n> there..\n> \n> \n> \n> +++ b/src/bin/pg_rewind/pg_rewind.c\n> -\t\tpg_log_info(\"servers diverged at WAL location %X/%X on timeline %u\",\n> +\t\tpg_log_info(\"servers diverged at WAL LSN %X/%X on timeline %u\",\n> \n> \n> \n> I feel that we don't need \"WAL\" there.\n> \n> \n> \n> +++ b/src/bin/pg_waldump/pg_waldump.c\n> -\tprintf(_(\" -e, --end=RECPTR stop reading at WAL location RECPTR\\n\"));\n> +\tprintf(_(\" -e, --end=RECPTR stop reading at WAL LSN RECPTR\\n\"));\n> \n> \n> \n> Mmm.. \"WAL LSN RECPTR\" looks strange to me. In the first place I\n> don't think \"RECPTR\" is a user-facing term. Doesn't something like the\n> follows work?\n> \n> \n> \n> +\tprintf(_(\" -e, --end=WAL-LSN stop reading at WAL-LSN\\n\"));\n> \n> \n> \n> In some changes in this patch shorten the main message text of\n> fprintf-ish functions. That makes the succeeding parameters can be\n> inlined.\nregards.\n\n[1] https://www.postgresql.org/message-id/20220316.091913.806120467943749797.horikyota.ntt%40gmail.com\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 16 Mar 2022 10:29:00 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Reword \"WAL location\" as \"WAL LSN\""
}
] |
[
{
"msg_contents": "Hello, this is a derived topic from [1], summarized as $SUBJECT.\n\nThis just removes useless hyphens from the words\n\"(crash|emergency)-recovery\". We don't have such wordings for \"archive\nrecovery\" This patch fixes non-user-facing texts as well as\nuser-facing ones.\n\nregards.\n\n[1] https://www.postgresql.org/message-id/20220316.091913.806120467943749797.horikyota.ntt%40gmail.com\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 16 Mar 2022 10:25:09 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Unhyphenation of crash-recovery"
},
{
"msg_contents": "Hi\n\nOn March 15, 2022 6:25:09 PM PDT, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>Hello, this is a derived topic from [1], summarized as $SUBJECT.\n>\n>This just removes useless hyphens from the words\n>\"(crash|emergency)-recovery\". We don't have such wordings for \"archive\n>recovery\" This patch fixes non-user-facing texts as well as\n>user-facing ones.\n\nI don't see the point of this kind of change. \n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Tue, 15 Mar 2022 18:39:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Unhyphenation of crash-recovery"
},
{
"msg_contents": "On Tue, Mar 15, 2022 at 9:39 PM Andres Freund <andres@anarazel.de> wrote:\n> On March 15, 2022 6:25:09 PM PDT, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> >Hello, this is a derived topic from [1], summarized as $SUBJECT.\n> >\n> >This just removes useless hyphens from the words\n> >\"(crash|emergency)-recovery\". We don't have such wordings for \"archive\n> >recovery\" This patch fixes non-user-facing texts as well as\n> >user-facing ones.\n>\n> I don't see the point of this kind of change.\n\nIt seems like better grammar to me, although whether it is worth the\neffort to go fix everything of this kind is certainly debatable.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 16 Mar 2022 10:33:39 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhyphenation of crash-recovery"
},
{
"msg_contents": "On 16.03.22 02:25, Kyotaro Horiguchi wrote:\n> Hello, this is a derived topic from [1], summarized as $SUBJECT.\n> \n> This just removes useless hyphens from the words\n> \"(crash|emergency)-recovery\". We don't have such wordings for \"archive\n> recovery\" This patch fixes non-user-facing texts as well as\n> user-facing ones.\n\nMost changes in this patch are not the correct direction. The hyphens \nare used to group compound adjectives before nouns. For example,\n\n simple crash-recovery cases\n\nmeans\n\n simple (crash recovery) cases\n\nrather than\n\n simple crash (recovery cases)\n\nif it were without hyphens.\n\n\n",
"msg_date": "Thu, 17 Mar 2022 07:42:42 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhyphenation of crash-recovery"
},
{
"msg_contents": "At Thu, 17 Mar 2022 07:42:42 +0100, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote in \n> On 16.03.22 02:25, Kyotaro Horiguchi wrote:\n> > Hello, this is a derived topic from [1], summarized as $SUBJECT.\n> > This just removes useless hyphens from the words\n> > \"(crash|emergency)-recovery\". We don't have such wordings for \"archive\n> > recovery\" This patch fixes non-user-facing texts as well as\n> > user-facing ones.\n> \n> Most changes in this patch are not the correct direction. The hyphens\n> are used to group compound adjectives before nouns. For example,\n> \n> simple crash-recovery cases\n> \n> means\n> \n> simple (crash recovery) cases\n> \n> rather than\n> \n> simple crash (recovery cases)\n> \n> if it were without hyphens.\n\nReally? The latter recognization doesn't seem to make sense. I might\nbe too-trained so that I capture \"(crash|archive|blah) recovery\" as\nimplicit compound words. But anyway there's no strong reason to be\naggressive to unhyphenate compound words.\n\n\"point-in-time-recovery\" and \"(during) emergency-recovery operations\"\nseem like better be unhyphnated, but now I'm not sure it is really so.\n\nThanks for the comments.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 17 Mar 2022 16:45:35 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Unhyphenation of crash-recovery"
},
{
"msg_contents": "On Thu, Mar 17, 2022 at 2:42 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> On 16.03.22 02:25, Kyotaro Horiguchi wrote:\n> > Hello, this is a derived topic from [1], summarized as $SUBJECT.\n> >\n> > This just removes useless hyphens from the words\n> > \"(crash|emergency)-recovery\". We don't have such wordings for \"archive\n> > recovery\" This patch fixes non-user-facing texts as well as\n> > user-facing ones.\n>\n> Most changes in this patch are not the correct direction. The hyphens\n> are used to group compound adjectives before nouns. For example,\n>\n> simple crash-recovery cases\n>\n> means\n>\n> simple (crash recovery) cases\n>\n> rather than\n>\n> simple crash (recovery cases)\n>\n> if it were without hyphens.\n\nI agree with that as a general principle, but I also think the\nparticular case you've chosen here is a good example of another\nprinciple: sometimes it just doesn't matter very much. A case of crash\nrecovery that happens to be simple is pretty much the same thing as a\ncase of recovery that is simple and involves a crash. My understanding\nof English grammar is that one typically does not hyphenate unless it\nis required to avoid confusion. A quick Google search suggests this\nexample:\n\nMr Harper had a funny smelling dog\n\nWe must try to figure out whether the smell of the dog is funny or\nwhether the dog itself is both funny and smelling. If we hyphenate\nfunny-smelling, then it's clear that it is the smell of the dog that\nis funny and not the dog itself. But in your example I cannot see that\nthere is any similar ambiguity. Recovery cases can involve a crash,\nand crash recovery can have cases, and what's the difference, anyway?\nSo I wouldn't hyphenate it, but I also wouldn't spend a lot of time\narguing if someone else did. Except maybe that's exactly what I am\ndoing. Perhaps I should find something else to do.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 17 Mar 2022 12:56:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unhyphenation of crash-recovery"
}
] |
[
{
"msg_contents": "Generally, we should define struct in the header file(.h). But I found struct \"DR_intorel\" in createas.c and it doesn't seem to be properly defined. May be it should define in createas.h.\r\nBesides, this is my first contribution. If there are any submitted questions, please let me know. Thank you~ :)",
"msg_date": "Wed, 16 Mar 2022 11:16:58 +0800",
"msg_from": "\"=?ISO-8859-1?B?emsud2FuZw==?=\" <zackery.wang@qq.com>",
"msg_from_op": true,
"msg_subject": "Move the \"DR_intorel\" struct to a more suitable position"
},
{
"msg_contents": "Hi, \n\nOn March 15, 2022 8:16:58 PM PDT, \"zk.wang\" <zackery.wang@qq.com> wrote:\n>Generally, we should define struct in the header file(.h).\n\nWhy? It's perfectly sensible to define types in .c files when they're not used elsewhere.\n\nGreetings,\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Tue, 15 Mar 2022 20:24:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Move the \"DR_intorel\" struct to a more suitable position"
},
{
"msg_contents": "Hi,\n\nOn Wed, Mar 16, 2022 at 11:16:58AM +0800, zk.wang wrote:\n> Generally, we should define struct in the header file(.h). But I found struct\n> \"DR_intorel\" in createas.c and it doesn't seem to be properly defined. May be\n> it should define in createas.h.\n\nWe put struct declarations in header files when multiple other files needs to\nknow about it. For DR_intorel it's a private struct that isn't needed outside\ncreateas.c, so it's defined in the file. You can find a lot of similar usage\nin the source.\n\n> Besides, this is my first\n> contribution. If there are any submitted questions, please let me know.\n> Thank you~ :)\n\nWelcome! For the record, it's usually better to provide a patch. You can\nrefer to https://wiki.postgresql.org/wiki/Submitting_a_Patch for more\ninformation.\n\n\n",
"msg_date": "Wed, 16 Mar 2022 11:25:46 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Move the \"DR_intorel\" struct to a more suitable position"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nSET ACCESS METHOD is supported in ALTER TABLE since the commit\nb0483263dd. Since that time, this also has be allowed SET ACCESS\nMETHOD in ALTER MATERIALIZED VIEW. Although it is not documented,\nthis works.\n\nI cannot found any reasons to prohibit SET ACCESS METHOD in ALTER\nMATERIALIZED VIEW, so I think it is better to support this in psql\ntab-completion and be documented.\n\nI attached a patch to fix the tab-completion and the documentation\nabout this syntax. Also, I added description about SET TABLESPACE\nsyntax that would have been overlooked.\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Wed, 16 Mar 2022 13:33:37 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Tab completion for ALTER MATERIALIZED VIEW ... SET ACCESS METHOD"
},
{
"msg_contents": "Hi Nagata-san,\n\nOn Wed, Mar 16, 2022 at 01:33:37PM +0900, Yugo NAGATA wrote:\n> SET ACCESS METHOD is supported in ALTER TABLE since the commit\n> b0483263dd. Since that time, this also has be allowed SET ACCESS\n> METHOD in ALTER MATERIALIZED VIEW. Although it is not documented,\n> this works.\n\nYes, that's an oversight. I see no reason to not authorize that, and\nthe rewrite path in tablecmds.c is the same as for plain tables.\n\n> I cannot found any reasons to prohibit SET ACCESS METHOD in ALTER\n> MATERIALIZED VIEW, so I think it is better to support this in psql\n> tab-completion and be documented.\n\nI think that we should have some regression tests about those command\nflavors. How about adding a couple of queries to create_am.sql for\nSET ACCESS METHOD and to tablespace.sql for SET TABLESPACE?\n--\nMichael",
"msg_date": "Wed, 16 Mar 2022 16:18:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for ALTER MATERIALIZED VIEW ... SET ACCESS METHOD"
},
{
"msg_contents": "Hi Michael-san,\n\nOn Wed, 16 Mar 2022 16:18:09 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> Hi Nagata-san,\n> \n> On Wed, Mar 16, 2022 at 01:33:37PM +0900, Yugo NAGATA wrote:\n> > SET ACCESS METHOD is supported in ALTER TABLE since the commit\n> > b0483263dd. Since that time, this also has be allowed SET ACCESS\n> > METHOD in ALTER MATERIALIZED VIEW. Although it is not documented,\n> > this works.\n> \n> Yes, that's an oversight. I see no reason to not authorize that, and\n> the rewrite path in tablecmds.c is the same as for plain tables.\n> \n> > I cannot found any reasons to prohibit SET ACCESS METHOD in ALTER\n> > MATERIALIZED VIEW, so I think it is better to support this in psql\n> > tab-completion and be documented.\n> \n> I think that we should have some regression tests about those command\n> flavors. How about adding a couple of queries to create_am.sql for\n> SET ACCESS METHOD and to tablespace.sql for SET TABLESPACE?\n\nThank you for your review!\n\nI added some queries in the regression test. Attached is the updated patch.\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Fri, 18 Mar 2022 10:27:41 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Tab completion for ALTER MATERIALIZED VIEW ... SET ACCESS\n METHOD"
},
{
"msg_contents": "On Fri, Mar 18, 2022 at 10:27:41AM +0900, Yugo NAGATA wrote:\n> I added some queries in the regression test. Attached is the updated patch.\n\nThanks. This looks rather sane to me. I'll split things into 3\ncommits in total, as of the psql completion, SET TABLESPACE and SET\nACCESS METHOD. The first and third patches are only for HEAD, while\nthe documentation hole with SET TABLESPACE should go down to v10.\nBackpatching the tests of ALTER MATERIALIZED VIEW ALL IN TABLESPACE\nwould not hurt, either, as there is zero coverage for that now.\n--\nMichael",
"msg_date": "Fri, 18 Mar 2022 15:13:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for ALTER MATERIALIZED VIEW ... SET ACCESS METHOD"
},
{
"msg_contents": "On Fri, Mar 18, 2022 at 03:13:05PM +0900, Michael Paquier wrote:\n> Thanks. This looks rather sane to me. I'll split things into 3\n> commits in total, as of the psql completion, SET TABLESPACE and SET\n> ACCESS METHOD. The first and third patches are only for HEAD, while\n> the documentation hole with SET TABLESPACE should go down to v10.\n> Backpatching the tests of ALTER MATERIALIZED VIEW ALL IN TABLESPACE\n> would not hurt, either, as there is zero coverage for that now.\n\nI have applied the set, after splitting things mostly as mentioned\nupthread:\n- The doc change for SET TABLESPACE on ALTER MATVIEW has been\nbackpatched.\n- The regression tests for SET TABLESPACE have been applied on HEAD,\nas these are independent of the rest, good on their own.\n- All the remaining parts for SET ACCESS METHOD (psql tab completion,\ntests and docs) have been merged together on HEAD. I could not\nunderstand why the completion done after SET ACCESS METHOD was not\ngrouped with the other parts for ALTER MATVIEW, so I have moved the\nnew entry there, and I have added a test checking after an error for\nmultiple subcommands, while on it.\n\nThanks, Nagata-san!\n--\nMichael",
"msg_date": "Sat, 19 Mar 2022 19:31:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for ALTER MATERIALIZED VIEW ... SET ACCESS METHOD"
},
{
"msg_contents": "On Sat, 19 Mar 2022 19:31:59 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Mar 18, 2022 at 03:13:05PM +0900, Michael Paquier wrote:\n> > Thanks. This looks rather sane to me. I'll split things into 3\n> > commits in total, as of the psql completion, SET TABLESPACE and SET\n> > ACCESS METHOD. The first and third patches are only for HEAD, while\n> > the documentation hole with SET TABLESPACE should go down to v10.\n> > Backpatching the tests of ALTER MATERIALIZED VIEW ALL IN TABLESPACE\n> > would not hurt, either, as there is zero coverage for that now.\n> \n> I have applied the set, after splitting things mostly as mentioned\n> upthread:\n> - The doc change for SET TABLESPACE on ALTER MATVIEW has been\n> backpatched.\n> - The regression tests for SET TABLESPACE have been applied on HEAD,\n> as these are independent of the rest, good on their own.\n> - All the remaining parts for SET ACCESS METHOD (psql tab completion,\n> tests and docs) have been merged together on HEAD. I could not\n> understand why the completion done after SET ACCESS METHOD was not\n> grouped with the other parts for ALTER MATVIEW, so I have moved the\n> new entry there, and I have added a test checking after an error for\n> multiple subcommands, while on it.\n> \n> Thanks, Nagata-san!\n\nThank you!\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Tue, 22 Mar 2022 17:53:17 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Tab completion for ALTER MATERIALIZED VIEW ... SET ACCESS\n METHOD"
}
] |
[
{
"msg_contents": "Hi, pgsql-hackers,\r\n\r\nI think I found a case that database is not recoverable, would you please give a look?\r\n\r\nHere is how it happens:\r\n\r\n- setup primary/standby\r\n- do a lots INSERT at primary\r\n- create a checkpoint at primary\r\n- wait until standby start doing restart point, it take about 3mins syncing buffers to complete\r\n- before the restart point update ControlFile, promote the standby, that changed ControlFile\r\n ->state to DB_IN_PRODUCTION, this will skip update to ControlFile, leaving the ControlFile\r\n ->checkPoint pointing to a removed file\r\n- before the promoted standby request the post-recovery checkpoint (fast promoted), \r\n one backend crashed, it will kill other server process, so the post-recovery checkpoint skipped\r\n- the database restart startup process, which report: \"could not locate a valid checkpoint record\"\r\n\r\nI attached a test to reproduce it, it does not fail every time, it fails every 10 times to me.\r\nTo increase the chance CreateRestartPoint skip update ControlFile and to simulate a crash,\r\nthe patch 0001 is needed.\r\n\r\nBest Regard.\r\n\r\nHarry Hao",
"msg_date": "Wed, 16 Mar 2022 07:16:16 +0000",
"msg_from": "hao harry <harry-hao@outlook.com>",
"msg_from_op": true,
"msg_subject": "Standby got invalid primary checkpoint after crashed right after\n promoted."
},
{
"msg_contents": "Found this issue is duplicated to [1], after applied that patch, I cannot reproduce it anymore.\r\n\r\n[1] https://www.postgresql.org/message-id/flat/20220316.102444.2193181487576617583.horikyota.ntt%40gmail.com<https://www.postgresql.org/message-id/flat/20220316.102444.2193181487576617583.horikyota.ntt@gmail.com>\r\n\r\n2022年3月16日 下午3:16,hao harry <harry-hao@outlook.com<mailto:harry-hao@outlook.com>> 写道:\r\n\r\nHi, pgsql-hackers,\r\n\r\nI think I found a case that database is not recoverable, would you please give a look?\r\n\r\nHere is how it happens:\r\n\r\n- setup primary/standby\r\n- do a lots INSERT at primary\r\n- create a checkpoint at primary\r\n- wait until standby start doing restart point, it take about 3mins syncing buffers to complete\r\n- before the restart point update ControlFile, promote the standby, that changed ControlFile\r\n ->state to DB_IN_PRODUCTION, this will skip update to ControlFile, leaving the ControlFile\r\n ->checkPoint pointing to a removed file\r\n- before the promoted standby request the post-recovery checkpoint (fast promoted),\r\n one backend crashed, it will kill other server process, so the post-recovery checkpoint skipped\r\n- the database restart startup process, which report: \"could not locate a valid checkpoint record\"\r\n\r\nI attached a test to reproduce it, it does not fail every time, it fails every 10 times to me.\r\nTo increase the chance CreateRestartPoint skip update ControlFile and to simulate a crash,\r\nthe patch 0001 is needed.\r\n\r\nBest Regard.\r\n\r\nHarry Hao\r\n\r\n<0001-Patched-CreateRestartPoint-to-reproduce-invalid-chec.patch><reprod_crash_right_after_promoted.pl>\r\n\r\n\n\n\n\n\n\r\nFound this issue is duplicated to [1], after applied that patch, I cannot reproduce it anymore.\r\n\n\n\n\n\n[1] https://www.postgresql.org/message-id/flat/20220316.102444.2193181487576617583.horikyota.ntt%40gmail.com\n\n\n\n\n2022年3月16日 下午3:16,hao harry <harry-hao@outlook.com> 写道:\n\n\nHi, 
pgsql-hackers,\n\r\nI think I found a case that database is not recoverable, would you please give a look?\n\r\nHere is how it happens:\n\r\n- setup primary/standby\r\n- do a lots INSERT at primary\r\n- create a checkpoint at primary\r\n- wait until standby start doing restart point, it take about 3mins syncing buffers to complete\r\n- before the restart point update ControlFile, promote the standby, that changed ControlFile\r\n ->state to DB_IN_PRODUCTION, this will skip update to ControlFile, leaving the ControlFile\r\n ->checkPoint pointing to a removed file\r\n- before the promoted standby request the post-recovery checkpoint (fast promoted),\r\n\r\n one backend crashed, it will kill other server process, so the post-recovery checkpoint skipped\r\n- the database restart startup process, which report: \"could not locate a valid checkpoint record\"\n\r\nI attached a test to reproduce it, it does not fail every time, it fails every 10 times to me.\r\nTo increase the chance CreateRestartPoint skip update ControlFile and to simulate a crash,\r\nthe patch 0001 is needed.\n\r\nBest Regard.\n\r\nHarry Hao\n\n<0001-Patched-CreateRestartPoint-to-reproduce-invalid-chec.patch><reprod_crash_right_after_promoted.pl>",
"msg_date": "Wed, 16 Mar 2022 08:21:46 +0000",
"msg_from": "hao harry <harry-hao@outlook.com>",
"msg_from_op": true,
"msg_subject": "Re: Standby got invalid primary checkpoint after crashed right after\n promoted."
},
{
"msg_contents": "At Wed, 16 Mar 2022 07:16:16 +0000, hao harry <harry-hao@outlook.com> wrote in \n> Hi, pgsql-hackers,\n> \n> I think I found a case that database is not recoverable, would you please give a look?\n> \n> Here is how it happens:\n> \n> - setup primary/standby\n> - do a lots INSERT at primary\n> - create a checkpoint at primary\n> - wait until standby start doing restart point, it take about 3mins syncing buffers to complete\n> - before the restart point update ControlFile, promote the standby, that changed ControlFile\n> ->state to DB_IN_PRODUCTION, this will skip update to ControlFile, leaving the ControlFile\n> ->checkPoint pointing to a removed file\n\nYeah, it seems like exactly the same issue pointed in [1]. A fix is\nproposed in [1]. Maybe I can remove \"possible\" from the mail subject:p\n\n[1] https://www.postgresql.org/message-id/7bfad665-db9c-0c2a-2604-9f54763c5f9e%40oss.nttdata.com\n[2] https://www.postgresql.org/message-id/20220316.102444.2193181487576617583.horikyota.ntt@gmail.com\n\n> - before the promoted standby request the post-recovery checkpoint (fast promoted), \n> one backend crashed, it will kill other server process, so the post-recovery checkpoint skipped\n> - the database restart startup process, which report: \"could not locate a valid checkpoint record\"\n> \n> I attached a test to reproduce it, it does not fail every time, it fails every 10 times to me.\n> To increase the chance CreateRestartPoint skip update ControlFile and to simulate a crash,\n> the patch 0001 is needed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 16 Mar 2022 17:28:45 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby got invalid primary checkpoint after crashed right\n after promoted."
},
{
"msg_contents": "(My previous mail hass crossed with this one)\n\nAt Wed, 16 Mar 2022 08:21:46 +0000, hao harry <harry-hao@outlook.com> wrote in \n> Found this issue is duplicated to [1], after applied that patch, I cannot reproduce it anymore.\n> \n> [1] https://www.postgresql.org/message-id/flat/20220316.102444.2193181487576617583.horikyota.ntt%40gmail.com<https://www.postgresql.org/message-id/flat/20220316.102444.2193181487576617583.horikyota.ntt@gmail.com>\n\nGlad to hear that. Thanks for checking it!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 16 Mar 2022 17:31:44 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby got invalid primary checkpoint after crashed right\n after promoted."
}
] |
[
{
"msg_contents": "Hello.\n\n003_sslinfo.pl fails for me.\n\nok 6 - ssl_client_cert_present() for connection with cert\nconnection error: 'psql: error: connection to server at \"127.0.0.1\", port 61688 failed: could not read certificate file \"/home/horiguti/.postgresql/postgresql.crt\": no start line'\nwhile running 'psql -XAtq -d sslrootcert=ssl/root+server_ca.crt sslmode=require dbname=trustdb hostaddr=127.0.0.1 user=ssltestuser host=localhost -f - -v ON_ERR\n\nI think we don't want this behavior.\n\nThe attached fixes that and make-world successfully finished even if I\nhave a cert file in my home directory.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 16 Mar 2022 16:36:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Out-of-tree certificate interferes ssltest"
},
{
"msg_contents": "On Wed, Mar 16, 2022 at 04:36:58PM +0900, Kyotaro Horiguchi wrote:\n> ok 6 - ssl_client_cert_present() for connection with cert\n> connection error: 'psql: error: connection to server at \"127.0.0.1\", port 61688 failed: could not read certificate file \"/home/horiguti/.postgresql/postgresql.crt\": no start line'\n> while running 'psql -XAtq -d sslrootcert=ssl/root+server_ca.crt sslmode=require dbname=trustdb hostaddr=127.0.0.1 user=ssltestuser host=localhost -f - -v ON_ERR\n> \n> I think we don't want this behavior.\n> \n> The attached fixes that and make-world successfully finished even if I\n> have a cert file in my home direcotory.\n\nThat's the same issue as the one fixed in dd87799, using the same\nmethod. I'll double-check on top of looking at what you are\nsuggesting here.\n--\nMichael",
"msg_date": "Wed, 16 Mar 2022 16:49:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Out-of-tree certificate interferes ssltest"
},
{
"msg_contents": "> On 16 Mar 2022, at 08:36, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> I think we don't want this behavior.\n\nAgreed.\n\n> The attached fixes that and make-world successfully finished even if I\n> have a cert file in my home direcotory.\n\nSeems correct to me, thanks!\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 16 Mar 2022 11:45:39 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Out-of-tree certificate interferes ssltest"
},
{
"msg_contents": "On Wed, Mar 16, 2022 at 11:45:39AM +0100, Daniel Gustafsson wrote:\n> On 16 Mar 2022, at 08:36, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>> The attached fixes that and make-world successfully finished even if I\n>> have a cert file in my home direcotory.\n> \n> Seems correct to me, thanks!\n\nThe ultimate test I can think about to stress the robustness of this\ntest suite is to generate various certs and keys using \"make\nsslfiles\", save them into a ~/.postgresql/ (postgresql.crt,\npostgresql.key, root.crl and root.crt), and then run the tests to see\nhow much junk data the SSL scripts would feed on. With this method, I\nhave caught a total of 71 failures, much more than reported upthread.\n\nWe should really put more attention to set invalid default values for\nsslcert, sslkey, sslcrl, sslcrldir and sslrootcert, rather than\nhardcoding a couple of them in only a few places, opening ourselves to\nthe same problem, again, each time a new test is added. The best way\nI can think about here is to use a string that includes all the\ndefault SSL settings, appending that at the beginning of each\n$common_connstr. This takes care of most the failures, except two\ncases related to expected failures for sslcrldir:\n- directory CRL belonging to a different CA\n- does not connect with client-side CRL directory\n\nIn both cases, enforcing sslcrl to a value of \"invalid\" interferes\nwith the failure scenario we expect from sslcrldir. It is possible to\nbypass that with something like the attached, but that's a kind of\nugly hack. Another alternative would be to drop those two tests, and\nI am not sure how much we care about these two negative scenarios.\n\nThoughts?\n--\nMichael",
"msg_date": "Thu, 17 Mar 2022 14:59:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Out-of-tree certificate interferes ssltest"
},
{
"msg_contents": "On Thu, Mar 17, 2022 at 02:59:26PM +0900, Michael Paquier wrote:\n> In both cases, enforcing sslcrl to a value of \"invalid\" interferes\n> with the failure scenario we expect from sslcrldir. It is possible to\n> bypass that with something like the attached, but that's a kind of\n> ugly hack. Another alternative would be to drop those two tests, and\n> I am not sure how much we care about these two negative scenarios.\n\nActually, there is a trick I have recalled here: we can enforce sslcrl\nto an empty value in the connection string after the default. This\nstill ensures that the test won't pick up any SSL data from the local\nenvironment and avoids any interferences of OpenSSL's\nX509_STORE_load_locations(). This gives a much simpler and cleaner\npatch.\n\nThoughts?\n--\nMichael",
"msg_date": "Thu, 17 Mar 2022 16:22:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Out-of-tree certificate interferes ssltest"
},
{
"msg_contents": "At Thu, 17 Mar 2022 16:22:14 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, Mar 17, 2022 at 02:59:26PM +0900, Michael Paquier wrote:\n> > In both cases, enforcing sslcrl to a value of \"invalid\" interferes\n> > with the failure scenario we expect from sslcrldir. It is possible to\n> > bypass that with something like the attached, but that's a kind of\n> > ugly hack. Another alternative would be to drop those two tests, and\n> > I am not sure how much we care about these two negative scenarios.\n> \n> Actually, there is a trick I have recalled here: we can enforce sslcrl\n> to an empty value in the connection string after the default. This\n> still ensures that the test won't pick up any SSL data from the local\n> environment and avoids any interferences of OpenSSL's\n> X509_STORE_load_locations(). This gives a much simpler and cleaner\n> patch.\n> \n> Thoughts?\n\nAh! I didn't have a thought that we can specify the same parameter\ntwice. It looks like clean and robust. $default_ssl_connstr contains\nall required options in PQconninfoOptions[].\n\nThe same method worked for 003_sslinfo.pl, too. (of course).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 17 Mar 2022 17:05:10 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Out-of-tree certificate interferes ssltest"
},
{
"msg_contents": "> On 17 Mar 2022, at 09:05, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> \n> At Thu, 17 Mar 2022 16:22:14 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n>> On Thu, Mar 17, 2022 at 02:59:26PM +0900, Michael Paquier wrote:\n>>> In both cases, enforcing sslcrl to a value of \"invalid\" interferes\n>>> with the failure scenario we expect from sslcrldir. It is possible to\n>>> bypass that with something like the attached, but that's a kind of\n>>> ugly hack. Another alternative would be to drop those two tests, and\n>>> I am not sure how much we care about these two negative scenarios.\n\nI really don't think we should drop tests based on these premises, at least not\nuntil it's raised as a problem/inconvenience by more hackers. I would prefer\nto instead extend the error message with hints that ~/.postgresql contents\ncould've affected test outcome. But, as the v2 patch handles it, this is mostly\nacademic at this point.\n\n>> Actually, there is a trick I have recalled here: we can enforce sslcrl\n>> to an empty value in the connection string after the default. This\n>> still ensures that the test won't pick up any SSL data from the local\n>> environment and avoids any interferences of OpenSSL's\n>> X509_STORE_load_locations(). This gives a much simpler and cleaner\n>> patch.\n\n> Ah! I didn't have a thought that we can specify the same parameter\n> twice. It looks like clean and robust. $default_ssl_connstr contains\n> all required options in PQconninfoOptions[].\n\n+1\n\nOne small concern though. 
This hunk:\n\n+my $default_ssl_connstr = \"sslkey=invalid sslcert=invalid sslrootcert=invalid sslcrl=invalid sslcrldir=invalid\";\n+\n $common_connstr =\n- \"user=ssltestuser dbname=trustdb sslcert=invalid hostaddr=$SERVERHOSTADDR host=common-name.pg-ssltest.test\";\n+ \"$default_ssl_connstr user=ssltestuser dbname=trustdb hostaddr=$SERVERHOSTADDR host=common-name.pg-ssltest.test\";\n \n..together with the following changes along the lines of:\n\n-\t\"$common_connstr sslrootcert=invalid sslmode=require\",\n+\t\"$common_connstr sslmode=require\",\n\n..is making it fairly hard to read the test and visualize what the connection\nstring is and how the test should behave. I don't have a better idea off the\ntop of my head right now, but I think this is an area to revisit and improve\non.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 17 Mar 2022 14:28:49 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Out-of-tree certificate interferes ssltest"
},
{
"msg_contents": "On Thu, Mar 17, 2022 at 02:28:49PM +0100, Daniel Gustafsson wrote:\n> One small concern though. This hunk:\n> \n> +my $default_ssl_connstr = \"sslkey=invalid sslcert=invalid sslrootcert=invalid sslcrl=invalid sslcrldir=invalid\";\n> +\n> $common_connstr =\n> - \"user=ssltestuser dbname=trustdb sslcert=invalid hostaddr=$SERVERHOSTADDR host=common-name.pg-ssltest.test\";\n> + \"$default_ssl_connstr user=ssltestuser dbname=trustdb hostaddr=$SERVERHOSTADDR host=common-name.pg-ssltest.test\";\n> \n> ..together with the following changes along the lines of:\n> \n> -\t\"$common_connstr sslrootcert=invalid sslmode=require\",\n> +\t\"$common_connstr sslmode=require\",\n> \n> ..is making it fairly hard to read the test and visualize what the connection\n> string is and how the test should behave. I don't have a better idea off the\n> top of my head right now, but I think this is an area to revisit and improve\n> on.\n\nI agree that this makes this set of three tests harder to follow, as\nwe expect a root cert to *not* be set locally. Keeping the behavior\ndocumented in each individual string would be better, even if that\nduplicates more the keys in those final strings.\n\nAnother thing that Horiguchi-san has pointed out upthread (?) is 003,\nwhere it is also possible to trigger failures once the environment is\nhijacked. The attached allows the full test suite to pass without\nissues on my side.\n--\nMichael",
"msg_date": "Fri, 18 Mar 2022 10:02:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Out-of-tree certificate interferes ssltest"
},
{
"msg_contents": "\nOn 3/17/22 21:02, Michael Paquier wrote:\n> On Thu, Mar 17, 2022 at 02:28:49PM +0100, Daniel Gustafsson wrote:\n>> One small concern though. This hunk:\n>>\n>> +my $default_ssl_connstr = \"sslkey=invalid sslcert=invalid sslrootcert=invalid sslcrl=invalid sslcrldir=invalid\";\n>> +\n>> $common_connstr =\n>> - \"user=ssltestuser dbname=trustdb sslcert=invalid hostaddr=$SERVERHOSTADDR host=common-name.pg-ssltest.test\";\n>> + \"$default_ssl_connstr user=ssltestuser dbname=trustdb hostaddr=$SERVERHOSTADDR host=common-name.pg-ssltest.test\";\n>> \n>> ..together with the following changes along the lines of:\n>>\n>> -\t\"$common_connstr sslrootcert=invalid sslmode=require\",\n>> +\t\"$common_connstr sslmode=require\",\n>>\n>> ..is making it fairly hard to read the test and visualize what the connection\n>> string is and how the test should behave. I don't have a better idea off the\n>> top of my head right now, but I think this is an area to revisit and improve\n>> on.\n> I agree that this makes this set of three tests harder to follow, as\n> we expect a root cert to *not* be set locally. Keeping the behavior\n> documented in each individual string would be better, even if that\n> duplicates more the keys in those final strings.\n>\n> Another thing that Horiguchi-san has pointed out upthread (?) is 003,\n> where it is also possible to trigger failures once the environment is\n> hijacked. The attached allows the full test suite to pass without\n> issues on my side.\n\n\n\nLGTM\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 18 Mar 2022 18:15:28 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Out-of-tree certificate interferes ssltest"
},
{
"msg_contents": "On Fri, Mar 18, 2022 at 06:15:28PM -0400, Andrew Dunstan wrote:\n> On 3/17/22 21:02, Michael Paquier wrote:\n>> Another thing that Horiguchi-san has pointed out upthread (?) is 003,\n>> where it is also possible to trigger failures once the environment is\n>> hijacked. The attached allows the full test suite to pass without\n>> issues on my side.\n> \n> LGTM\n\nThanks for looking at it. I have been able to check this stuff across\nall the supported branches, and failures happen down to 10. That's\neasy enough to see once you know how to break the tests.\n\nThere were a couple of conflicts, but nothing impossible to fix, so\napplied down to v10. REL_11_STABLE had one extra failure in\n002_scram.pl that was already fixed in v12~.\n--\nMichael",
"msg_date": "Tue, 22 Mar 2022 13:27:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Out-of-tree certificate interferes ssltest"
}
] |
[
{
"msg_contents": "Hi All,\nAt the beginning of LogicalIncreaseRestartDecodingForSlot(), we have code\n```\n1623 /* don't overwrite if have a newer restart lsn */\n1624 if (restart_lsn <= slot->data.restart_lsn)\n1625 {\n1626 }\n1627\n1628 /*\n1629 * We might have already flushed far enough to directly\naccept this lsn,\n1630 * in this case there is no need to check for existing candidate LSNs\n1631 */\n1632 else if (current_lsn <= slot->data.confirmed_flush)\n1633 {\n1634 slot->candidate_restart_valid = current_lsn;\n1635 slot->candidate_restart_lsn = restart_lsn;\n1636\n1637 /* our candidate can directly be used */\n1638 updated_lsn = true;\n1639 }\n```\nThis code avoids directly writing slot's persistent data multiple\ntimes if the restart_lsn does not change between successive running\ntransactions WAL records and the confirmed flush LSN is higher than\nall of those.\n\nBut if the downstream is fast enough to send at least one newer\nconfirmed flush between every two running transactions WAL records, we\nend up overwriting slot's restart LSN even if it does not change\nbecause of the following code\n```\n1641 /*\n1642 * Only increase if the previous values have been applied, otherwise we\n1643 * might never end up updating if the receiver acks too\nslowly. A missed\n1644 * value here will just cause some extra effort after reconnecting.\n1645 */\n1646 if (slot->candidate_restart_valid == InvalidXLogRecPtr)\n1647 {\n1648 slot->candidate_restart_valid = current_lsn;\n1649 slot->candidate_restart_lsn = restart_lsn;\n1650 SpinLockRelease(&slot->mutex);\n1651\n1652 elog(LOG, \"got new restart lsn %X/%X at %X/%X\",\n1653 LSN_FORMAT_ARGS(restart_lsn),\n1654 LSN_FORMAT_ARGS(current_lsn));\n1655 }\n```\n\nLet's say RLSN is the restart LSN computed when processing successive\nrunning transaction WAL records at RT_LSN1, RT_LSN2, RT_LSN3 ....\nLet's say downstream sends confirmed flush LSNs CF_LSN1, CF_LSN2,\nCF_LSN3, ... such that RT_LSNx <= CF_LSNx <= RT_LSN(x+1). 
CF_LSNx\narrives between processing records at RT_LSNx and RT_LSN(x+1). So the\ncandidate_restart_lsn is always InvalidXlogRecPtr and we install the\nsame RLSN as candidate_restart_lsn again and again with a different\ncandidate_restart_valid.\n\nEvery installed candidate is updated in the slot by\nLogicalConfirmReceivedLocation() when the next confirmed flush\narrives. Such an update also causes a disk write which looks\nunnecessary.\n\nI think the function should ignore a restart_lsn older than\ndata.restart_lsn right away at the beginning of this function.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 16 Mar 2022 17:01:42 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": true,
"msg_subject": "unnecessary (same) restart_lsn handing in\n LogicalIncreaseRestartDecodingForSlot"
},
{
"msg_contents": "On Wed, Mar 16, 2022 at 5:02 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Hi All,\n> At the beginning of LogicalIncreaseRestartDecodingForSlot(), we have codeine\n> ```\n> 1623 /* don't overwrite if have a newer restart lsn */\n> 1624 if (restart_lsn <= slot->data.restart_lsn)\n> 1625 {\n> 1626 }\n> 1627\n> 1628 /*\n> 1629 * We might have already flushed far enough to directly\n> accept this lsn,\n> 1630 * in this case there is no need to check for existing candidate LSNs\n> 1631 */\n> 1632 else if (current_lsn <= slot->data.confirmed_flush)\n> 1633 {\n> 1634 slot->candidate_restart_valid = current_lsn;\n> 1635 slot->candidate_restart_lsn = restart_lsn;\n> 1636\n> 1637 /* our candidate can directly be used */\n> 1638 updated_lsn = true;\n> 1639 }\n> ```\n> This code avoids directly writing slot's persistent data multiple\n> times if the restart_lsn does not change between successive running\n> transactions WAL records and the confirmed flush LSN is higher than\n> all of those.\n>\n> But if the downstream is fast enough to send at least one newer\n> confirmed flush between every two running transactions WAL records, we\n> end up overwriting slot's restart LSN even if it does not change\n> because of following code\n> ```\n> 1641 /*\n> 1642 * Only increase if the previous values have been applied, otherwise we\n> 1643 * might never end up updating if the receiver acks too\n> slowly. 
A missed\n> 1644 * value here will just cause some extra effort after reconnecting.\n> 1645 */\n> 1646 if (slot->candidate_restart_valid == InvalidXLogRecPtr)\n> 1647 {\n> 1648 slot->candidate_restart_valid = current_lsn;\n> 1649 slot->candidate_restart_lsn = restart_lsn;\n> 1650 SpinLockRelease(&slot->mutex);\n> 1651\n> 1652 elog(LOG, \"got new restart lsn %X/%X at %X/%X\",\n> 1653 LSN_FORMAT_ARGS(restart_lsn),\n> 1654 LSN_FORMAT_ARGS(current_lsn));\n> 1655 }\n> ```\n>\n\nAre you worried that in corner cases we will update the persistent\ncopy of slot with same restart_lsn multiple times?\n\nAFAICS, we update persistent copy via LogicalConfirmReceivedLocation()\nwhich is called only when 'updated_lsn' is set and that doesn't get\nset in the if check (slot->candidate_restart_valid ==\nInvalidXLogRecPtr) you quoted. It is not very clear to me after\nreading your email what exactly is your concern, so I might be missing\nsomething.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 1 Apr 2022 10:54:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: unnecessary (same) restart_lsn handing in\n LogicalIncreaseRestartDecodingForSlot"
}
] |
[
{
"msg_contents": "Good day, hackers.\n\nArchitecture Reference Manual for ARMv8 B2.2.1 [1] states:\n\n For explicit memory effects generated from an Exception level the\n following rules apply:\n - A read that is generated by a load instruction that loads a single\n general-purpose register and is aligned to the size of the read in the\n instruction is single-copy atomic.\n - A write that is generated by a store instruction that stores a single\n general-purpose register and is aligned to the size of the write in the\n instruction is single-copy atomic.\n\nSo I believe it is safe to define PG_HAVE_8BYTE_SINGLE_COPY_ATOMICITY\nfor aarch64\n\n[1] https://documentation-service.arm.com/static/61fbe8f4fa8173727a1b734e\nhttps://developer.arm.com/documentation/ddi0487/latest\n\n-------\n\nregards\n\nYura Sokolov\nPostgres Professional\ny.sokolov@postgrespro.ru\nfunny.falcon@gmail.com",
"msg_date": "Wed, 16 Mar 2022 15:32:35 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Declare PG_HAVE_8BYTE_SINGLE_COPY_ATOMICITY for aarch64"
},
{
"msg_contents": "On Thu, Mar 17, 2022 at 1:32 AM Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> So I believe it is safe to define PG_HAVE_8BYTE_SINGLE_COPY_ATOMICITY\n> for aarch64\n\nAgreed, and pushed. There was another thread that stalled, so I added\na reference and a reviewer from that to your commit message.\n\nThis should probably also be set for RISCV64.\n\n\n",
"msg_date": "Fri, 18 Mar 2022 13:50:49 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Declare PG_HAVE_8BYTE_SINGLE_COPY_ATOMICITY for aarch64"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI noticed there was no tab completion for time zones in psql, so here's\na patch that implements that. I chose lower-casing the names since they\nare case insensitive, and verbatim since ones without slashes can be\nentered without quotes, and (at least my version of) readline completes\nthem correctly if the entered value starts with a single quote.\n\n- ilmari",
"msg_date": "Wed, 16 Mar 2022 13:22:07 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Tab completion for SET TimeZone"
},
{
"msg_contents": "Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> writes:\n\n> Hi hackers,\n>\n> I noticed there was no tab completion for time zones in psql, so here's\n> a patch that implements that.\n\nI just noticed I left out the = in the match check, here's an updated\npatch that fixes that.\n\n- ilmari",
"msg_date": "Wed, 16 Mar 2022 13:26:57 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Re: Tab completion for SET TimeZone"
},
{
"msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> I just noticed I left out the = in the match check, here's an updated\n> patch that fixes that.\n\nHmm .... is that actually going to be useful in that form?\nMost time zone names contain slashes and will therefore require\nsingle-quoting. I think you might need pushups comparable to\nCOMPLETE_WITH_ENUM_VALUE.\n\nAlso, personally, I'd rather not smash the names to lower case.\nI think that's a significant decrement of readability.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 16 Mar 2022 09:40:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for SET TimeZone"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> =?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n>> I just noticed I left out the = in the match check, here's an updated\n>> patch that fixes that.\n>\n> Hmm .... is that actually going to be useful in that form?\n> Most time zone names contain slashes and will therefore require\n> single-quoting. I think you might need pushups comparable to\n> COMPLETE_WITH_ENUM_VALUE.\n\nWith readline (which is what I tested on) the completion works with or\nwithout a single quote, but the user has to supply the quote themselves\nfor non-identifier-syntax timezone names.\n\nI wasn't aware of the difference in behaviour with libedit, but now that\nI've tested I agree that quoting things even when not strictly needed is\nbetter.\n\nThis does however have the unfortunate side effect that on readline it\nwill suggest DEFAULT even after a single quote, which is not valid.\n\n> Also, personally, I'd rather not smash the names to lower case.\n> I think that's a significant decrement of readability.\n\nThat was mainly for convenience of not having to capitalise the place\nnames when typing (since they are accepted case insensitively). A\ncompromise would be to lower-case it in the WHERE, but not in the\nSELECT, as in the attached v3 patch.\n\nI've tested this version on Debian stable with both readline 8.1-1 and\nlibedit 3.1-20191231-2.\n\n- ilmari",
"msg_date": "Fri, 18 Mar 2022 00:35:10 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Re: Tab completion for SET TimeZone"
},
{
"msg_contents": "Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> writes:\n\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>\n>> Also, personally, I'd rather not smash the names to lower case.\n>> I think that's a significant decrement of readability.\n>\n> That was mainly for convenience of not having to capitalise the place\n> names when typing (since they are accepted case insensitively). A\n> compromise would be to lower-case it in the WHERE, but not in the\n> SELECT, as in the attached v3 patch.\n\nI just realised there's no point in the subselect when I'm not applying\nthe same function in the WHERE and the SELECT, so here's an updated\nversion that simplifies that. It also fixes a typo in the commit\nmessage.\n\n- ilmari",
"msg_date": "Fri, 18 Mar 2022 10:17:00 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Re: Tab completion for SET TimeZone"
},
{
"msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> I just realised there's no point in the subselect when I'm not applying\n> the same function in the WHERE and the SELECT, so here's an updated\n> version that simplifies that. It also fixes a typo in the commit\n> message.\n\nThis doesn't work right for me under libedit -- it will correctly\ncomplete \"am<TAB>\" to \"'America/\", but then it fails to complete\nanything past that. The reason seems to be that once we have a\nleading single quote, libedit will include that in the text passed\nto future completion attempts, while readline won't. I ended up\nneeding three query variants, as attached (bikeshedding welcome).\n\nI think the reason the COMPLETE_WITH_ENUM_VALUE macro doesn't look\nsimilar is that it hasn't made an attempt to work with input that\nthe user didn't quote --- that is, if you type\n\talter type planets rename value ur<TAB>\nit just fails to match anything, instead of providing \"'uranus'\".\nShould we upgrade that likewise? Not sure it's worth the trouble\nthough; I think COMPLETE_WITH_ENUM_VALUE is there more as a finger\nexercise than because people use it regularly.\n\nI added a regression test case too.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 18 Mar 2022 14:53:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for SET TimeZone"
},
{
"msg_contents": "... btw, I forgot to mention that I don't see any problem with\nthe patch's behavior for DEFAULT. What I see with both readline\nand libedit is that if you type something reasonable, like\n set timezone to d<TAB>\nthen it will correctly complete \"efault \", without any extra\nquotes. Now, if you're silly and do\n set timezone to 'd<TAB>\nthen readline gives you \"efault' \" and libedit gives you nothing.\nBut I think that's your own, um, fault. We don't have any other\nplaces where tab completion is so aggressive as to remove quotes\nfrom a keyword.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 18 Mar 2022 15:01:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for SET TimeZone"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> =?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n>> I just realised there's no point in the subselect when I'm not applying\n>> the same function in the WHERE and the SELECT, so here's an updated\n>> version that simplifies that. It also fixes a typo in the commit\n>> message.\n>\n> This doesn't work right for me under libedit -- it will correctly\n> complete \"am<TAB>\" to \"'America/\", but then it fails to complete\n> anything past that. The reason seems to be that once we have a\n> leading single quote, libedit will include that in the text passed\n> to future completion attempts, while readline won't. I ended up\n> needing three query variants, as attached (bikeshedding welcome).\n\nWell spotted, I must have just tested that it completed something on the\nfirst tab. The fix looks reasonable to me, and I have no better ideas\nfor the names of the query string #defines.\n\n> I think the reason the COMPLETE_WITH_ENUM_VALUE macro doesn't look\n> similar is that it hasn't made an attempt to work with input that\n> the user didn't quote --- that is, if you type\n> \talter type planets rename value ur<TAB>\n> it just fails to match anything, instead of providing \"'uranus'\".\n> Should we upgrade that likewise?\n\nThe comment says it will add the quote before text if it's not there, so\nmaybe we should adjust that to say that it will only add the quote if\nthe user hasn't typed anything?\n\n> Not sure it's worth the trouble though; I think\n> COMPLETE_WITH_ENUM_VALUE is there more as a finger exercise than\n> because people use it regularly.\n\nI agree, I mostly implemented that for completeness after adding ALTER\nTYPE … RENAME VALUE. Also, enum values always need quoting, while SET\nTIMEZONE doesn't if the zone name follows identifier syntax and people\nmight start typing it without quotes if they intend to set one of those, so\nthe extra effort to make that work even though we can't suggest a mix of\nquoted and non-quoted names is worth it.\n\n> I added a regression test case too.\n\nGood idea, I keep forgetting that we actually have tests for tab\ncompletion, since most cases are so simple and obviously correct that we\ndon't bother with tests for them.\n\n> ... btw, I forgot to mention that I don't see any problem with\n> the patch's behavior for DEFAULT. What I see with both readline\n> and libedit is that if you type something reasonable, like\n> set timezone to d<TAB>\n> then it will correctly complete \"efault \", without any extra\n> quotes. Now, if you're silly and do\n> set timezone to 'd<TAB>\n> then readline gives you \"efault' \" and libedit gives you nothing.\n> But I think that's your own, um, fault.\n\nI agree, it was just a quirk noticed while testing. Also, it helps that\nthere aren't any actual time zone names starting with 'd'.\n\n> We don't have any other places where tab completion is so aggressive\n> as to remove quotes from a keyword.\n\nI noticed afterwards that other config variables behave the same, so even\nif we wanted to improve that, it's outwith the scope of this patch.\n\n> \t\t\tregards, tom lane\n\n- ilmari\n\n\n",
"msg_date": "Sun, 20 Mar 2022 19:47:58 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Re: Tab completion for SET TimeZone"
},
{
"msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>> I think the reason the COMPLETE_WITH_ENUM_VALUE macro doesn't look\n>> similar is that it hasn't made an attempt to work with input that\n>> the user didn't quote --- that is, if you type\n>> alter type planets rename value ur<TAB>\n>> it just fails to match anything, instead of providing \"'uranus'\".\n>> Should we upgrade that likewise?`\n\n> The comment says it will add the quote before text if it's not there, so\n> maybe we should adjust that to say that it will only add the quote if\n> the user hasn't typed anything?\n\nAfter thinking a bit harder, I realized that the SchemaQuery\ninfrastructure has no way to deal with the case of the input text not\nbeing a prefix of what we want the output to be, so it can't do something\ncomparable to Query_for_list_of_timezone_names_quoted_out. Maybe someday\nwe'll feel like adding that, but COMPLETE_WITH_ENUM_VALUE isn't a\ncompelling enough reason in current usage. So I just tweaked the comment\na bit.\n\nPushed that way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 20 Mar 2022 16:11:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for SET TimeZone"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> After thinking a bit harder, I realized that the SchemaQuery\n> infrastructure has no way to deal with the case of the input text not\n> being a prefix of what we want the output to be, so it can't do something\n> comparable to Query_for_list_of_timezone_names_quoted_out. Maybe someday\n> we'll feel like adding that, but COMPLETE_WITH_ENUM_VALUE isn't a\n> compelling enough reason in current usage. So I just tweaked the comment\n> a bit.\n\nFair enough.\n\n> Pushed that way.\n\nThanks!\n\n> \t\t\tregards, tom lane\n\n- ilmari\n\n\n",
"msg_date": "Mon, 21 Mar 2022 00:05:43 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Re: Tab completion for SET TimeZone"
}
] |
[
{
"msg_contents": "Hello,\n\nWhile testing on the current PG master, I noticed a problem between\nbackends communicating over a shared memory queue. I think `shm_mq_sendv()`\nfails to flush the queue, even if `force_flush` is set to true, if the\nreceiver is not yet attached to the queue. This simple fix solves\nthe problem for me.\n\nOn another note, `shm_mq.h` declares `shm_mq_flush()`, but I don't see it\nbeing implemented. Maybe just a leftover from the previous work? Though it\nseems useful to implement that API.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB: https://www.enterprisedb..com",
"msg_date": "Thu, 17 Mar 2022 12:42:58 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Shmem queue is not flushed if receiver is not yet attached"
},
{
"msg_contents": "On Thu, Mar 17, 2022 at 3:13 AM Pavan Deolasee <pavan.deolasee@gmail.com> wrote:\n> While testing on the current PG master, I noticed a problem between backends communicating over a shared memory queue. I think `shm_mq_sendv()` fails to flush the queue, even if `force_flush` is set to true, if the receiver is not yet attached to the queue. This simple fix solves the problem for me.\n>\n> On another note, `shm_mq.h` declares `shm_mq_flush()`, but I don't see it being implemented. Maybe just a leftover from the previous work? Though it seems useful to implement that API.\n\nI think that this patch is basically correct, except that it's not\ncorrect to set mqh_counterparty_attached when receiver is still NULL.\nHere's a v2 where I've attempted to correct that while preserving the\nessence of your proposed fix.\n\nI'm not sure that we need a shm_mq_flush(), but we definitely don't\nhave one currently, so I've also adjusted your patch to remove the\ndead prototype.\n\nPlease let me know your thoughts on the attached.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 24 May 2022 11:05:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Shmem queue is not flushed if receiver is not yet attached"
},
{
"msg_contents": "\nOn Tue, 24 May 2022 at 23:05, Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Mar 17, 2022 at 3:13 AM Pavan Deolasee <pavan.deolasee@gmail.com> wrote:\n>> While testing on the current PG master, I noticed a problem between backends communicating over a shared memory queue. I think `shm_mq_sendv()` fails to flush the queue, even if `force_flush` is set to true, if the receiver is not yet attached to the queue. This simple fix solves the problem for me.\n>>\n>> On another note, `shm_mq.h` declares `shm_mq_flush()`, but I don't see it being implemented. Maybe just a leftover from the previous work? Though it seems useful to implement that API.\n>\n> I think that this patch is basically correct, except that it's not\n> correct to set mqh_counterparty_attached when receiver is still NULL.\n> Here's a v2 where I've attempted to correct that while preserving the\n> essence of your proposed fix.\n>\n> I'm not sure that we need a shm_mq_flush(), but we definitely don't\n> have one currently, so I've also adjusted your patch to remove the\n> dead prototype.\n>\n> Please let me know your thoughts on the attached.\n>\n> Thanks,\n\nHi,\n\nI have a problem that is also related to shmem queue [1], however, I cannot\nreproduce it. How did you reproduce this problem?\n\n[1] https://www.postgresql.org/message-id/MEYP282MB1669C8D88F0997354C2313C1B6CA9%40MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Wed, 25 May 2022 09:30:55 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Shmem queue is not flushed if receiver is not yet attached"
},
{
"msg_contents": "Hi Robert,\n\nOn Tue, May 24, 2022 at 8:35 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n>\n>\n> I think that this patch is basically correct, except that it's not\n> correct to set mqh_counterparty_attached when receiver is still NULL.\n> Here's a v2 where I've attempted to correct that while preserving the\n> essence of your proposed fix.\n>\n\nThis looks good to me,\n\n\n>\n> I'm not sure that we need a shm_mq_flush(), but we definitely don't\n> have one currently, so I've also adjusted your patch to remove the\n> dead prototype.\n>\n>\nMakes sense to me.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB: https://www.enterprisedb..com\n\nHi Robert,On Tue, May 24, 2022 at 8:35 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\nI think that this patch is basically correct, except that it's not\ncorrect to set mqh_counterparty_attached when receiver is still NULL.\nHere's a v2 where I've attempted to correct that while preserving the\nessence of your proposed fix.This looks good to me, \n\nI'm not sure that we need a shm_mq_flush(), but we definitely don't\nhave one currently, so I've also adjusted your patch to remove the\ndead prototype.\nMakes sense to me.Thanks,Pavan -- Pavan DeolaseeEnterpriseDB: https://www.enterprisedb..com",
"msg_date": "Mon, 30 May 2022 12:36:14 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Shmem queue is not flushed if receiver is not yet attached"
},
{
"msg_contents": "On Wed, May 25, 2022 at 7:01 AM Japin Li <japinli@hotmail.com> wrote:\n\n>\n> I have a problem that is also related to shmem queue [1], however, I cannot\n> reproduce it. How did you reproduce this problem?\n>\n>\nI discovered this bug while working on an extension that makes use of the\nshared memory queue facility. Not sure how useful that is for your purpose.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB: https://www.enterprisedb..com\n\nOn Wed, May 25, 2022 at 7:01 AM Japin Li <japinli@hotmail.com> wrote:\nI have a problem that is also related to shmem queue [1], however, I cannot\nreproduce it. How did you reproduce this problem?\nI discovered this bug while working on an extension that makes use of the shared memory queue facility. Not sure how useful that is for your purpose.Thanks,Pavan-- Pavan DeolaseeEnterpriseDB: https://www.enterprisedb..com",
"msg_date": "Mon, 30 May 2022 12:37:51 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Shmem queue is not flushed if receiver is not yet attached"
},
{
"msg_contents": "On Mon, May 30, 2022 at 3:06 AM Pavan Deolasee <pavan.deolasee@gmail.com> wrote:\n>> I think that this patch is basically correct, except that it's not\n>> correct to set mqh_counterparty_attached when receiver is still NULL.\n>> Here's a v2 where I've attempted to correct that while preserving the\n>> essence of your proposed fix.\n>\n> This looks good to me,\n\nThanks for checking. Committed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 31 May 2022 09:02:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Shmem queue is not flushed if receiver is not yet attached"
}
] |
[
{
"msg_contents": "Add option to use ICU as global locale provider\n\nThis adds the option to use ICU as the default locale provider for\neither the whole cluster or a database. New options for initdb,\ncreatedb, and CREATE DATABASE are used to select this.\n\nSince some (legacy) code still uses the libc locale facilities\ndirectly, we still need to set the libc global locale settings even if\nICU is otherwise selected. So pg_database now has three\nlocale-related fields: the existing datcollate and datctype, which are\nalways set, and a new daticulocale, which is only set if ICU is\nselected. A similar change is made in pg_collation for consistency,\nbut in that case, only the libc-related fields or the ICU-related\nfield is set, never both.\n\nReviewed-by: Julien Rouhaud <rjuju123@gmail.com>\nDiscussion: https://www.postgresql.org/message-id/flat/5e756dd6-0e91-d778-96fd-b1bcb06c161a%402ndquadrant.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/f2553d43060edb210b36c63187d52a632448e1d2\n\nModified Files\n--------------\ndoc/src/sgml/catalogs.sgml | 9 ++\ndoc/src/sgml/charset.sgml | 102 ++++++++++++++++\ndoc/src/sgml/ref/create_database.sgml | 32 +++++\ndoc/src/sgml/ref/createdb.sgml | 19 +++\ndoc/src/sgml/ref/initdb.sgml | 72 +++++++++---\nsrc/backend/catalog/pg_collation.c | 18 ++-\nsrc/backend/commands/collationcmds.c | 97 +++++++++------\nsrc/backend/commands/dbcommands.c | 157 +++++++++++++++++++++----\nsrc/backend/utils/adt/pg_locale.c | 144 ++++++++++++++---------\nsrc/backend/utils/init/postinit.c | 21 +++-\nsrc/bin/initdb/Makefile | 4 +-\nsrc/bin/initdb/initdb.c | 97 +++++++++++++--\nsrc/bin/initdb/t/001_initdb.pl | 27 +++++\nsrc/bin/pg_dump/pg_dump.c | 30 ++++-\nsrc/bin/pg_upgrade/check.c | 13 ++\nsrc/bin/pg_upgrade/info.c | 18 ++-\nsrc/bin/pg_upgrade/pg_upgrade.h | 2 +\nsrc/bin/psql/describe.c | 23 +++-\nsrc/bin/psql/tab-complete.c | 3 +-\nsrc/bin/scripts/Makefile | 2 +\nsrc/bin/scripts/createdb.c | 20 
++++\nsrc/bin/scripts/t/020_createdb.pl | 28 +++++\nsrc/include/catalog/catversion.h | 2 +-\nsrc/include/catalog/pg_collation.dat | 3 +-\nsrc/include/catalog/pg_collation.h | 20 +++-\nsrc/include/catalog/pg_database.dat | 4 +-\nsrc/include/catalog/pg_database.h | 6 +\nsrc/include/utils/pg_locale.h | 5 +\nsrc/test/Makefile | 6 +-\nsrc/test/icu/.gitignore | 2 +\nsrc/test/icu/Makefile | 25 ++++\nsrc/test/icu/README | 27 +++++\nsrc/test/icu/t/010_database.pl | 58 +++++++++\nsrc/test/regress/expected/collate.icu.utf8.out | 10 +-\nsrc/test/regress/sql/collate.icu.utf8.sql | 8 +-\n35 files changed, 947 insertions(+), 167 deletions(-)",
"msg_date": "Thu, 17 Mar 2022 10:22:32 +0000",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Add option to use ICU as global locale provider"
},
{
"msg_contents": "Hi Peter,\n\nOn Thu, Mar 17, 2022 at 10:22:32AM +0000, Peter Eisentraut wrote:\n> Add option to use ICU as global locale provider\n> \n> This adds the option to use ICU as the default locale provider for\n> either the whole cluster or a database. New options for initdb,\n> createdb, and CREATE DATABASE are used to select this.\n> \n> Since some (legacy) code still uses the libc locale facilities\n> directly, we still need to set the libc global locale settings even if\n> ICU is otherwise selected. So pg_database now has three\n> locale-related fields: the existing datcollate and datctype, which are\n> always set, and a new daticulocale, which is only set if ICU is\n> selected. A similar change is made in pg_collation for consistency,\n> but in that case, only the libc-related fields or the ICU-related\n> field is set, never both.\n\nFYI, prion is complaining here:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2022-03-18%2001%3A43%3A13\n\nSome details:\n# Failed test 'fails for invalid ICU locale: matches'\n# at t/001_initdb.pl line 107.\n# '2022-03-18 01:54:58.563 UTC [504] FATAL: could\n# not open collator for locale \"@colNumeric=lower\":\n# U_ILLEGAL_ARGUMENT_ERROR\n--\nMichael",
"msg_date": "Fri, 18 Mar 2022 11:01:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add option to use ICU as global locale provider"
},
{
"msg_contents": "Hi,\n\nOn Fri, Mar 18, 2022 at 11:01:11AM +0900, Michael Paquier wrote:\n>\n> FYI, prion is complaining here:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2022-03-18%2001%3A43%3A13\n> \n> Some details:\n> # Failed test 'fails for invalid ICU locale: matches'\n> # at t/001_initdb.pl line 107.\n> # '2022-03-18 01:54:58.563 UTC [504] FATAL: could\n> # not open collator for locale \"@colNumeric=lower\":\n> # U_ILLEGAL_ARGUMENT_ERROR\n\nThat's very strange, apparently initdb doesn't detect any problem when checking\nucol_open() for initial checks, since it's expecting:\n\n# doesn't match '(?^:initdb: error: could not open collator for locale)'\n\nbut then postgres in single-backend mode does detect the problem, with the\nexact same check, so it's not like --icu-locale=@colNumeric=lower wasn't\ncorrectly interpreted. Unfortunately we don't have the full initdb output, so\nwe can't check what setup_locale_encoding reported. The only difference is\nthat in initdb's setlocale(), the result of ucol_open is discarded, maybe the\ncompiler is optimizing it away for some reason, even though it seems really\nunlikely.\n\nThat being said, we could save the result and explicitly close the collator.\nThat wouldn't make much difference in initdb (but may be a bit cleaner), but I\nsee that there's a similar coding in createdb(), which seems like it could leak\nsome memory according to ucol_close man page.\n\n\n",
"msg_date": "Fri, 18 Mar 2022 11:12:44 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add option to use ICU as global locale provider"
},
{
"msg_contents": "On Fri, Mar 18, 2022 at 4:12 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Fri, Mar 18, 2022 at 11:01:11AM +0900, Michael Paquier wrote:\n> > FYI, prion is complaining here:\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2022-03-18%2001%3A43%3A13\n> >\n> > Some details:\n> > # Failed test 'fails for invalid ICU locale: matches'\n> > # at t/001_initdb.pl line 107.\n> > # '2022-03-18 01:54:58.563 UTC [504] FATAL: could\n> > # not open collator for locale \"@colNumeric=lower\":\n> > # U_ILLEGAL_ARGUMENT_ERROR\n>\n> That's very strange, apparently initdb doesn't detect any problem when checking\n> ucol_open() for initial checks, since it's expecting:\n>\n> # doesn't match '(?^:initdb: error: could not open collator for locale)'\n>\n> but then postgres in single-backend mode does detect the problem, with the\n> exact same check, so it's not like --icu-locale=@colNumeric=lower wasn't\n> correctly interpreted. Unfortunately we don't have the full initdb output, so\n> we can't check what setup_locale_encoding reported. The only difference is\n> that in initdb's setlocale(), the result of ucol_open is discarded, maybe the\n> compiler is optimizing it away for some reason, even though it seems really\n> unlikely.\n>\n> That being said, we could save the result and explicitly close the collator.\n> That wouldn't make much difference in initdb (but may be a bit cleaner), but I\n> see that there's a similar coding in createdb(), which seems like it could leak\n> some memory according to ucol_close man page.\n\nNo idea what's happening here but one observation is that that animal\nis running an older distro that shipped with ICU 5.0. Commit b8f9a2a6\nmay hold a clue...\n\n\n",
"msg_date": "Fri, 18 Mar 2022 18:15:47 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add option to use ICU as global locale provider"
},
{
"msg_contents": "On Fri, Mar 18, 2022 at 06:15:47PM +1300, Thomas Munro wrote:\n>\n> No idea what's happening here but one observation is that that animal\n> is running an older distro that shipped with ICU 5.0. Commit b8f9a2a6\n> may hold a clue...\n\nRight. I'm setting up a similar podman environment, hopefully more info soon.\n\n\n",
"msg_date": "Fri, 18 Mar 2022 14:36:48 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add option to use ICU as global locale provider"
},
{
"msg_contents": "On Fri, Mar 18, 2022 at 02:36:48PM +0800, Julien Rouhaud wrote:\n> On Fri, Mar 18, 2022 at 06:15:47PM +1300, Thomas Munro wrote:\n> >\n> > No idea what's happening here but one observation is that that animal\n> > is running an older distro that shipped with ICU 5.0. Commit b8f9a2a6\n> > may hold a clue...\n>\n> Right. I'm setting up a similar podman environment, hopefully more info soon.\n\nAnd indeed b8f9a2a6 is the problem. We would need some form of\nicu_set_collation_attributes() on the frontend side if we want to detect such a\nproblem on older ICU version at the expected moment rather than when\nbootstrapping the info. A similar check is also needed in createdb().\n\nI was thinking that this could be the cause of problem reported by Andres on\ncentos 7 (which seems to ship ICU 50), but postinit.c calls\nmake_icu_collator(), which sets the attribute as expected. Maybe it's because\nold ICU version simply don't understand locale ID like \"en-u-kf-upper\" and\nsilently falls back to the root collator?\n\n\n",
"msg_date": "Fri, 18 Mar 2022 15:40:51 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add option to use ICU as global locale provider"
},
{
"msg_contents": "(moving to -hackers)\n\nOn Fri, Mar 18, 2022 at 03:40:51PM +0800, Julien Rouhaud wrote:\n> On Fri, Mar 18, 2022 at 02:36:48PM +0800, Julien Rouhaud wrote:\n> > On Fri, Mar 18, 2022 at 06:15:47PM +1300, Thomas Munro wrote:\n> > >\n> > > No idea what's happening here but one observation is that that animal\n> > > is running an older distro that shipped with ICU 5.0. Commit b8f9a2a6\n> > > may hold a clue...\n> >\n> > Right. I'm setting up a similar podman environment, hopefully more info soon.\n> \n> And indeed b8f9a2a6 is the problem. We would need some form of\n> icu_set_collation_attributes() on the frontend side if we want to detect such a\n> problem on older ICU version at the expected moment rather than when\n> bootstrapping the info. A similar check is also needed in createdb().\n\nI'm attaching a patch that fixes both issues for me with ICU 50. Note that\nthere's already a test that would have failed for CREATE DATABASE if initdb\ntests didn't fail first, so no new test needed.\n\nI ended up copy/pasting icu_set_collation_attributes() in initdb.c. There\nshouldn't be new attributes added in old ICU versions, and there are enough\ndifferences to make it work in the frontend that it didn't seems worth to have\na single function.",
"msg_date": "Fri, 18 Mar 2022 17:27:49 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add option to use ICU as global locale provider"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> That being said, we could save the result and explicitly close the collator.\n> That wouldn't make much difference in initdb (but may be a bit cleaner), but I\n> see that there's a similar coding in createdb(), which seems like it could leak\n> some memory according to ucol_close man page.\n\nFYI, I verified using valgrind that (as of HEAD) there is a leak\nwhen creating a database with ICU collation that doesn't appear\nwhen creating one with libc collation. It's not a lot, a few\nhundred bytes per iteration, but it's there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 18 Mar 2022 11:07:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add option to use ICU as global locale provider"
},
{
"msg_contents": "I found a different problem with src/test/icu/: it fails altogether\nif the prevailing locale is \"C\", because then the database encoding\ndefaults to SQL_ASCII which our ICU code won't cope with. I'm not\nsure if that explains any of the buildfarm failures, but it broke\nmy local build (yeah, I'm that guy). I un-broke it for the moment\nby forcing the test to use UTF8 encoding, but do we want to do\nanything smarter than that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 18 Mar 2022 13:29:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add option to use ICU as global locale provider"
},
{
"msg_contents": "On 18.03.22 10:27, Julien Rouhaud wrote:\n> I'm attaching a patch that fixes both issues for me with ICU 50. Note that\n> there's already a test that would have failed for CREATE DATABASE if initdb\n> tests didn't fail first, so no new test needed.\n> \n> I ended up copy/pasting icu_set_collation_attributes() in initdb.c. There\n> shouldn't be new attributes added in old ICU versions, and there are enough\n> differences to make it work in the frontend that it didn't seems worth to have\n> a single function.\n\nAnother option is that we just don't do the check in initdb. As the \ntests show, you will then get an error from the backend call, so it's \nreally just a question of when the error is reported.\n\nWhy does your patch introduce a function check_icu_locale() that is only \ncalled once? Did you have further plans for that?\n\n\n",
"msg_date": "Fri, 18 Mar 2022 20:28:58 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add option to use ICU as global locale provider"
},
{
"msg_contents": "On 18.03.22 18:29, Tom Lane wrote:\n> I found a different problem with src/test/icu/: it fails altogether\n> if the prevailing locale is \"C\", because then the database encoding\n> defaults to SQL_ASCII which our ICU code won't cope with. I'm not\n> sure if that explains any of the buildfarm failures, but it broke\n> my local build (yeah, I'm that guy). I un-broke it for the moment\n> by forcing the test to use UTF8 encoding, but do we want to do\n> anything smarter than that?\n\nThis is an appropriate solution, I think.\n\n\n",
"msg_date": "Fri, 18 Mar 2022 20:30:21 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add option to use ICU as global locale provider"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Another option is that we just don't do the check in initdb. As the \n> tests show, you will then get an error from the backend call, so it's \n> really just a question of when the error is reported.\n\n+1 ... seems better to not have two copies of the code.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 18 Mar 2022 16:04:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add option to use ICU as global locale provider"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-18 20:28:58 +0100, Peter Eisentraut wrote:\n> Why does your patch introduce a function check_icu_locale() that is only\n> called once? Did you have further plans for that?\n\nI like that it moves ICU code out of dbcommands.c - imo there should be few\ncalls to ICU functions outside of pg_locale.c. There might be an argument for\nmoving *more* into such a function though.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 18 Mar 2022 15:09:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add option to use ICU as global locale provider"
},
{
"msg_contents": "On Fri, Mar 18, 2022 at 04:04:10PM -0400, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > Another option is that we just don't do the check in initdb. As the\n> > tests show, you will then get an error from the backend call, so it's\n> > really just a question of when the error is reported.\n>\n> +1 ... seems better to not have two copies of the code.\n\nOk, I also prefer to not have two copies of the code but wasn't sure that\nhaving the error in the boostrapping phase was ok. I will change that.\n\n\n",
"msg_date": "Sat, 19 Mar 2022 08:59:14 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add option to use ICU as global locale provider"
},
{
"msg_contents": "On Fri, Mar 18, 2022 at 03:09:59PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2022-03-18 20:28:58 +0100, Peter Eisentraut wrote:\n> > Why does your patch introduce a function check_icu_locale() that is only\n> > called once? Did you have further plans for that?\n> \n> I like that it moves ICU code out of dbcommands.c\n\nYes, it seemed cleaner this way. But more importantly code outside pg_locale.c\nreally shouldn't have to deal with ICU specific version code.\n\nI'm attaching a v2, addressing Peter and Tom comments to not duplicate the old\nICU versions attribute function. I removed the ICU locale check entirely (for\nconsistency across ICU version) thus removing any need for ucol.h include in\ninitdb.\n\nFor the problem you reported at [1] with the meson branch, I changed createdb\ntests with s/en-u-kf-upper/en@colCaseFirst=upper/, as older ICU versions don't\nunderstand the former notation. check-world now pass for me, using either ICU\n< 54 or >= 54.\n\n> imo there should be few\n> calls to ICU functions outside of pg_locale.c. There might be an argument for\n> moving *more* into such a function though.\n\nI think it would be a good improvement. I can work on that next week if\nneeded.\n\n[1] https://www.postgresql.org/message-id/20220318000140.vzri3qw3p4aebn5p@alap3.anarazel.de",
"msg_date": "Sat, 19 Mar 2022 12:14:59 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add option to use ICU as global locale provider"
},
{
"msg_contents": "Re: Peter Eisentraut\n> Since some (legacy) code still uses the libc locale facilities\n> directly, we still need to set the libc global locale settings even if\n> ICU is otherwise selected. So pg_database now has three\n> locale-related fields: the existing datcollate and datctype, which are\n> always set, and a new daticulocale, which is only set if ICU is\n> selected. A similar change is made in pg_collation for consistency,\n> but in that case, only the libc-related fields or the ICU-related\n> field is set, never both.\n\nSince the intended usage seems to be that databases should either be\nusing libc, or the ICU locales, but probably not both at the same\ntime, does it make sense to clutter the already very wide `psql -l`\noutput with two new extra columns?\n\nThis hardly fits in normal-size terminals:\n\n=# \\l\n List of databases\n Name │ Owner │ Encoding │ Collate │ Ctype │ ICU Locale │ Locale Provider │ Access privileges\n───────────┼───────┼──────────┼────────────┼────────────┼────────────┼─────────────────┼───────────────────\n postgres │ myon │ UTF8 │ de_DE.utf8 │ de_DE.utf8 │ │ libc │\n template0 │ myon │ UTF8 │ de_DE.utf8 │ de_DE.utf8 │ │ libc │ =c/myon ↵\n │ │ │ │ │ │ │ myon=CTc/myon\n template1 │ myon │ UTF8 │ de_DE.utf8 │ de_DE.utf8 │ │ libc │ =c/myon ↵\n │ │ │ │ │ │ │ myon=CTc/myon\n(3 rows)\n\n(Even longer if the username is \"postgres\")\n\nIt also makes \\l+ even harder to read when the most often only\nrelevant new column, the database size, is even more to the far right.\n\nCouldn't that be a single \"Locale\" column, possibly extended by more\ninfo in parentheses if the values differ?\n\n Locale\n de_DE.utf8\n de-x-icu-whatever\n de_DE.utf8 (Ctype: C.UTF-8)\n SQL_ASCII (ICU Locale: en-x-something)\n\nChristoph\n\n\n",
"msg_date": "Sat, 19 Mar 2022 18:53:30 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add option to use ICU as global locale provider"
},
{
"msg_contents": "On 19.03.22 05:14, Julien Rouhaud wrote:\n> On Fri, Mar 18, 2022 at 03:09:59PM -0700, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2022-03-18 20:28:58 +0100, Peter Eisentraut wrote:\n>>> Why does your patch introduce a function check_icu_locale() that is only\n>>> called once? Did you have further plans for that?\n>>\n>> I like that it moves ICU code out of dbcommands.c\n> \n> Yes, it seemed cleaner this way. But more importantly code outside pg_locale.c\n> really shouldn't have to deal with ICU specific version code.\n> \n> I'm attaching a v2, addressing Peter and Tom comments to not duplicate the old\n> ICU versions attribute function. I removed the ICU locale check entirely (for\n> consistency across ICU version) thus removing any need for ucol.h include in\n> initdb.\n\ncommitted\n\n\n",
"msg_date": "Sun, 20 Mar 2022 11:03:38 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add option to use ICU as global locale provider"
},
{
"msg_contents": "On 19.03.22 18:53, Christoph Berg wrote:\n> Re: Peter Eisentraut\n>> Since some (legacy) code still uses the libc locale facilities\n>> directly, we still need to set the libc global locale settings even if\n>> ICU is otherwise selected. So pg_database now has three\n>> locale-related fields: the existing datcollate and datctype, which are\n>> always set, and a new daticulocale, which is only set if ICU is\n>> selected. A similar change is made in pg_collation for consistency,\n>> but in that case, only the libc-related fields or the ICU-related\n>> field is set, never both.\n> \n> Since the intended usage seems to be that databases should either be\n> using libc, or the ICU locales, but probably not both at the same\n> time, does it make sense to clutter the already very wide `psql -l`\n> output with two new extra columns?\n\nGood point, let me think about that.\n\n\n",
"msg_date": "Sun, 20 Mar 2022 11:04:17 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add option to use ICU as global locale provider"
},
{
"msg_contents": "On Sun, Mar 20, 2022 at 11:03:38AM +0100, Peter Eisentraut wrote:\n> On 19.03.22 05:14, Julien Rouhaud wrote:\n> > On Fri, Mar 18, 2022 at 03:09:59PM -0700, Andres Freund wrote:\n> > > Hi,\n> > > \n> > > On 2022-03-18 20:28:58 +0100, Peter Eisentraut wrote:\n> > > > Why does your patch introduce a function check_icu_locale() that is only\n> > > > called once? Did you have further plans for that?\n> > > \n> > > I like that it moves ICU code out of dbcommands.c\n> > \n> > Yes, it seemed cleaner this way. But more importantly code outside pg_locale.c\n> > really shouldn't have to deal with ICU specific version code.\n> > \n> > I'm attaching a v2, addressing Peter and Tom comments to not duplicate the old\n> > ICU versions attribute function. I removed the ICU locale check entirely (for\n> > consistency across ICU version) thus removing any need for ucol.h include in\n> > initdb.\n> \n> committed\n\nThanks!\n\n\n",
"msg_date": "Sun, 20 Mar 2022 19:46:26 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add option to use ICU as global locale provider"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-20 11:03:38 +0100, Peter Eisentraut wrote:\n> committed\n\nThanks. Rebasing over that fixed the meson Centos 7 build in my meson\ntree. https://cirrus-ci.com/build/5265480968568832\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 20 Mar 2022 09:31:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add option to use ICU as global locale provider"
},
{
"msg_contents": "Re: Peter Eisentraut\n> > Since the intended usage seems to be that databases should either be\n> > using libc, or the ICU locales, but probably not both at the same\n> > time, does it make sense to clutter the already very wide `psql -l`\n> > output with two new extra columns?\n> \n> Good point, let me think about that.\n\nA possible solution might be to rip out all the locale columns except\n\"Encoding\" from \\l, and leave them in place for \\l+.\n\nFor \\l+, I'd suggest moving the database size and the tablespace to\nthe front, after owner.\n\nChristoph\n\n\n",
"msg_date": "Mon, 21 Mar 2022 17:59:56 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add option to use ICU as global locale provider"
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> A possible solution might be to rip out all the locale columns except\n> \"Encoding\" from \\l, and leave them in place for \\l+.\n\nI'd rather see a single column summarizing the locale situation.\nPerhaps it could be COALESCE(daticulocale, datcollate), or\nsomething using a CASE on datlocprovider?\nThen \\l+ could replace that with all the underlying columns.\n\n> For \\l+, I'd suggest moving the database size and the tablespace to\n> the front, after owner.\n\nI think it's confusing if the + and non-+ versions of a command\npresent their columns in inconsistent orders. I'm not dead set\nagainst this, but -0.5 or so.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 21 Mar 2022 14:37:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add option to use ICU as global locale provider"
},
{
"msg_contents": "Re: To Peter Eisentraut\n> This hardly fits in normal-size terminals:\n> \n> =# \\l\n> List of databases\n> Name │ Owner │ Encoding │ Collate │ Ctype │ ICU Locale │ Locale Provider │ Access privileges\n> ───────────┼───────┼──────────┼────────────┼────────────┼────────────┼─────────────────┼───────────────────\n> postgres │ myon │ UTF8 │ de_DE.utf8 │ de_DE.utf8 │ │ libc │\n> template0 │ myon │ UTF8 │ de_DE.utf8 │ de_DE.utf8 │ │ libc │ =c/myon ↵\n> │ │ │ │ │ │ │ myon=CTc/myon\n> template1 │ myon │ UTF8 │ de_DE.utf8 │ de_DE.utf8 │ │ libc │ =c/myon ↵\n> │ │ │ │ │ │ │ myon=CTc/myon\n> (3 rows)\n\nAnother gripe here: The above is the output when run against a PG15\ncluster, created without an ICU locale set.\n\nWhen running psql 15 against PG 14, the output is this:\n\n$ psql -l\n List of databases\n Name │ Owner │ Encoding │ Collate │ Ctype │ ICU Locale │ Locale Provider │ Access privileges\n───────────┼──────────┼──────────┼────────────┼────────────┼────────────┼─────────────────┼───────────────────────\n postgres │ postgres │ UTF8 │ de_DE.utf8 │ de_DE.utf8 │ de_DE.utf8 │ libc │\n template0 │ postgres │ UTF8 │ de_DE.utf8 │ de_DE.utf8 │ de_DE.utf8 │ libc │ =c/postgres ↵\n │ │ │ │ │ │ │ postgres=CTc/postgres\n template1 │ postgres │ UTF8 │ de_DE.utf8 │ de_DE.utf8 │ de_DE.utf8 │ libc │ =c/postgres ↵\n │ │ │ │ │ │ │ postgres=CTc/postgres\n(3 rows)\n\nThe \"ICU Locale\" column is now populated, that seems wrong.\n\nThe problem is in the else branch in src/bin/psql/describe.c around\nline 900:\n\n+ if (pset.sversion >= 150000)\n+ appendPQExpBuffer(&buf,\n+ \" d.daticulocale as \\\"%s\\\",\\n\"\n+ \" CASE d.datlocprovider WHEN 'c' THEN 'libc' WHEN 'i' THEN 'icu' END AS \\\"%s\\\",\\\n+ gettext_noop(\"ICU Locale\"),\n+ gettext_noop(\"Locale Provider\"));\n+ else\n+ appendPQExpBuffer(&buf,\n+ \" d.datcollate as \\\"%s\\\",\\n\" <--- there\n+ \" 'libc' AS \\\"%s\\\",\\n\",\n+ gettext_noop(\"ICU Locale\"),\n+ gettext_noop(\"Locale Provider\"));\n\nI'd think this 
should rather be\n\n+ \" '' as \\\"%s\\\",\\n\"\n\nChristoph\n\n\n",
"msg_date": "Fri, 15 Apr 2022 16:58:28 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Inconsistent \"ICU Locale\" output on older server versions"
},
{
"msg_contents": "On Fri, Apr 15, 2022, at 11:58 AM, Christoph Berg wrote:\n> When running psql 15 against PG 14, the output is this:\n> \n> $ psql -l\n> List of databases\n> Name │ Owner │ Encoding │ Collate │ Ctype │ ICU Locale │ Locale Provider │ Access privileges\n> ───────────┼──────────┼──────────┼────────────┼────────────┼────────────┼─────────────────┼───────────────────────\n> postgres │ postgres │ UTF8 │ de_DE.utf8 │ de_DE.utf8 │ de_DE.utf8 │ libc │\n> template0 │ postgres │ UTF8 │ de_DE.utf8 │ de_DE.utf8 │ de_DE.utf8 │ libc │ =c/postgres ↵\n> │ │ │ │ │ │ │ postgres=CTc/postgres\n> template1 │ postgres │ UTF8 │ de_DE.utf8 │ de_DE.utf8 │ de_DE.utf8 │ libc │ =c/postgres ↵\n> │ │ │ │ │ │ │ postgres=CTc/postgres\n> (3 rows)\n> \n> The \"ICU Locale\" column is now populated, that seems wrong.\nGood catch!\n\n> The problem is in the else branch in src/bin/psql/describe.c around\n> line 900:\n> \n> + if (pset.sversion >= 150000)\n> + appendPQExpBuffer(&buf,\n> + \" d.daticulocale as \\\"%s\\\",\\n\"\n> + \" CASE d.datlocprovider WHEN 'c' THEN 'libc' WHEN 'i' THEN 'icu' END AS \\\"%s\\\",\\\n> + gettext_noop(\"ICU Locale\"),\n> + gettext_noop(\"Locale Provider\"));\n> + else\n> + appendPQExpBuffer(&buf,\n> + \" d.datcollate as \\\"%s\\\",\\n\" <--- there\n> + \" 'libc' AS \\\"%s\\\",\\n\",\n> + gettext_noop(\"ICU Locale\"),\n> + gettext_noop(\"Locale Provider\"));\n> \n> I'd think this should rather be\n> \n> + \" '' as \\\"%s\\\",\\n\"\nSince dataiculocale allows NULL, my suggestion is to use NULL instead of an\nempty string. 
It is consistent with a cluster whose locale provider is libc.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Fri, Apr 15, 2022, at 11:58 AM, Christoph Berg wrote:When running psql 15 against PG 14, the output is this:$ psql -l List of databases Name │ Owner │ Encoding │ Collate │ Ctype │ ICU Locale │ Locale Provider │ Access privileges───────────┼──────────┼──────────┼────────────┼────────────┼────────────┼─────────────────┼───────────────────────postgres │ postgres │ UTF8 │ de_DE.utf8 │ de_DE.utf8 │ de_DE.utf8 │ libc │template0 │ postgres │ UTF8 │ de_DE.utf8 │ de_DE.utf8 │ de_DE.utf8 │ libc │ =c/postgres ↵ │ │ │ │ │ │ │ postgres=CTc/postgrestemplate1 │ postgres │ UTF8 │ de_DE.utf8 │ de_DE.utf8 │ de_DE.utf8 │ libc │ =c/postgres ↵ │ │ │ │ │ │ │ postgres=CTc/postgres(3 rows)The \"ICU Locale\" column is now populated, that seems wrong.Good catch!The problem is in the else branch in src/bin/psql/describe.c aroundline 900:+ if (pset.sversion >= 150000)+ appendPQExpBuffer(&buf,+ \" d.daticulocale as \\\"%s\\\",\\n\"+ \" CASE d.datlocprovider WHEN 'c' THEN 'libc' WHEN 'i' THEN 'icu' END AS \\\"%s\\\",\\+ gettext_noop(\"ICU Locale\"),+ gettext_noop(\"Locale Provider\"));+ else+ appendPQExpBuffer(&buf,+ \" d.datcollate as \\\"%s\\\",\\n\" <--- there+ \" 'libc' AS \\\"%s\\\",\\n\",+ gettext_noop(\"ICU Locale\"),+ gettext_noop(\"Locale Provider\"));I'd think this should rather be+ \" '' as \\\"%s\\\",\\n\"Since dataiculocale allows NULL, my suggestion is to use NULL instead of anempty string. It is consistent with a cluster whose locale provider is libc.--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Fri, 15 Apr 2022 12:48:30 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent \"ICU Locale\" output on older server versions"
},
{
"msg_contents": "\"Euler Taveira\" <euler@eulerto.com> writes:\n> On Fri, Apr 15, 2022, at 11:58 AM, Christoph Berg wrote:\n>> When running psql 15 against PG 14, the output is this:\n>> The \"ICU Locale\" column is now populated, that seems wrong.\n\n> Good catch!\n\nIndeed.\n\n> Since dataiculocale allows NULL, my suggestion is to use NULL instead of an\n> empty string. It is consistent with a cluster whose locale provider is libc.\n\nYeah, I agree. We should make the pre-v15 output match what you'd see\nif looking at a non-ICU v15 database.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Apr 2022 12:47:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent \"ICU Locale\" output on older server versions"
},
{
"msg_contents": "Re: Tom Lane\n> Christoph Berg <myon@debian.org> writes:\n> > A possible solution might be to rip out all the locale columns except\n> > \"Encoding\" from \\l, and leave them in place for \\l+.\n> \n> I'd rather see a single column summarizing the locale situation.\n> Perhaps it could be COALESCE(daticulocale, datcollate), or\n> something using a CASE on datlocprovider?\n> Then \\l+ could replace that with all the underlying columns.\n\nFwiw I still think the default psql -l output should be more concise.\nAny chance to have that happen for PG15?\n\nI can try creating a patch if it has chances of getting through.\n\nChristoph\n\n\n",
"msg_date": "Tue, 6 Sep 2022 14:25:26 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "psql -l and locales (Re: pgsql: Add option to use ICU as global\n locale provider)"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nThis thread is a fork of [1], created per request by several people in\nthe discussion. It includes two patches from the patchset that we\nbelieve can be delivered in PG15. The rest of the patches are\ntargeting PG >= 16 and can be discussed further in [1].\n\nv19-0001 changes the format string for XIDs from %u to XID_FMT. This\nrefactoring allows us to switch to UINT64_FORMAT by changing one\n#define in the future patches.\n\nKyotaro suggested using `errmsg(\"blah blah %lld ..\", (long long)\nxid)` instead in order to simplify localization of the error messages.\nPersonally I don't have a strong opinion here. Either approach will\nwork and will affect the error messages eventually. Please let us know\nwhat you think.\n\nv19-0002 refactors SLRU and the dependent code so that `pageno`s\nbecome int64's. This is a requirement for the rest of the patches.\n\nThe patches were in pretty good shape last time I checked several days\nago, I even suggested changing their status to \"Ready for Committer\".\nLet's see what cfbot will tell us.\n\n[1]: https://postgr.es/m/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Thu, 17 Mar 2022 16:12:28 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi, Aleksander!\n\nOn Thu, Mar 17, 2022 at 4:12 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> This thread is a fork of [1], created per request by several people in\n> the discussion. It includes two patches from the patchset that we\n> believe can be delivered in PG15. The rest of the patches are\n> targeting PG >= 16 and can be discussed further in [1].\n\nThank you for putting this into a separate thread.\n\n> v19-0001 changes the format string for XIDs from %u to XID_FMT. This\n> refactoring allows us to switch to UINT64_FORMAT by changing one\n> #define in the future patches.\n>\n> Kyotaro suggested using `errmsg(\"blah blah %lld ..\", (long long)\n> xid)` instead in order to simplify localization of the error messages.\n> Personally I don't have a strong opinion here. Either approach will\n> work and will affect the error messages eventually. Please let us know\n> what you think.\n\nI'm not a localization expert. Could you clarify what localization\nmessages should look like if we switch to XID_FMT? And will we have\nto change them if change the definition of XID_FMT?\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Thu, 17 Mar 2022 16:20:39 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On 17.03.22 14:12, Aleksander Alekseev wrote:\n> v19-0001 changes the format string for XIDs from %u to XID_FMT. This\n> refactoring allows us to switch to UINT64_FORMAT by changing one\n> #define in the future patches.\n> \n> Kyotaro suggested using `errmsg(\"blah blah %lld ..\", (long long)\n> xid)` instead in order to simplify localization of the error messages.\n> Personally I don't have a strong opinion here. Either approach will\n> work and will affect the error messages eventually. Please let us know\n> what you think.\n\nThis is not a question of simplification. Translatable messages with \nembedded macros won't work. This patch isn't going to be acceptable.\n\n\n",
"msg_date": "Thu, 17 Mar 2022 14:23:36 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Thu, Mar 17, 2022 at 4:23 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> On 17.03.22 14:12, Aleksander Alekseev wrote:\n> > v19-0001 changes the format string for XIDs from %u to XID_FMT. This\n> > refactoring allows us to switch to UINT64_FORMAT by changing one\n> > #define in the future patches.\n> >\n> > Kyotaro suggested using `errmsg(\"blah blah %lld ..\", (long long)\n> > xid)` instead in order to simplify localization of the error messages.\n> > Personally I don't have a strong opinion here. Either approach will\n> > work and will affect the error messages eventually. Please let us know\n> > what you think.\n>\n> This is not a question of simplification. Translatable messages with\n> embedded macros won't work. This patch isn't going to be acceptable.\n\nI've suspected this, but wasn't sure. Thank you for clarification.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Thu, 17 Mar 2022 16:31:06 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "\nOn Thu, 17 Mar 2022 at 21:31, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Thu, Mar 17, 2022 at 4:23 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> On 17.03.22 14:12, Aleksander Alekseev wrote:\n>> > v19-0001 changes the format string for XIDs from %u to XID_FMT. This\n>> > refactoring allows us to switch to UINT64_FORMAT by changing one\n>> > #define in the future patches.\n>> >\n>> > Kyotaro suggested using `errmsg(\"blah blah %lld ..\", (long long)\n>> > xid)` instead in order to simplify localization of the error messages.\n>> > Personally I don't have a strong opinion here. Either approach will\n>> > work and will affect the error messages eventually. Please let us know\n>> > what you think.\n>>\n>> This is not a question of simplification. Translatable messages with\n>> embedded macros won't work. This patch isn't going to be acceptable.\n>\n> I've suspected this, but wasn't sure. Thank you for clarification.\n>\n\nMaybe, we should format it to string before for localization messages,\nlike the following code snippet comes from pg_backup_tar.c.\nHowever, I do not think it is a good way.\n\n snprintf(buf1, sizeof(buf1), INT64_FORMAT, (int64) len);\n snprintf(buf2, sizeof(buf2), INT64_FORMAT, (int64) th->fileLen);\n fatal(\"actual file length (%s) does not match expected (%s)\",\n buf1, buf2);\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Thu, 17 Mar 2022 22:07:32 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> Maybe, we should format it to string before for localization messages,\n> like the following code snippet comes from pg_backup_tar.c.\n> However, I do not think it is a good way.\n\n> snprintf(buf1, sizeof(buf1), INT64_FORMAT, (int64) len);\n> snprintf(buf2, sizeof(buf2), INT64_FORMAT, (int64) th->fileLen);\n> fatal(\"actual file length (%s) does not match expected (%s)\",\n> buf1, buf2);\n\nThat used to be our standard practice before we switched to always\nrelying on our own snprintf.c. Now, we know that \"%lld\" with an\nexplicit cast to long long will work, so that's the preferred method\nfor printing 64-bit values in localizable strings. Not all of the old\ncode has been updated, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 17 Mar 2022 10:41:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": ">\n> On Thu, 17 Mar 2022 at 21:31, Alexander Korotkov <aekorotkov@gmail.com>\n> wrote:\n> > On Thu, Mar 17, 2022 at 4:23 PM Peter Eisentraut\n> > <peter.eisentraut@enterprisedb.com> wrote:\n> >> On 17.03.22 14:12, Aleksander Alekseev wrote:\n> >> > v19-0001 changes the format string for XIDs from %u to XID_FMT. This\n> >> > refactoring allows us to switch to UINT64_FORMAT by changing one\n> >> > #define in the future patches.\n> >> >\n> >> > Kyotaro suggested using `errmsg(\"blah blah %lld ..\", (long long)\n> >> > xid)` instead in order to simplify localization of the error messages.\n> >> > Personally I don't have a strong opinion here. Either approach will\n> >> > work and will affect the error messages eventually. Please let us know\n> >> > what you think.\n> >>\n> >> This is not a question of simplification. Translatable messages with\n> >> embedded macros won't work. This patch isn't going to be acceptable.\n> >\n> > I've suspected this, but wasn't sure. Thank you for clarification.\n>\nHi, hackers!\n\nThe need to support localization is very much understood by us. We'll\ndeliver a patchset soon with localization based on %lld/%llu format and\nexplicit casts to unsigned/signed long long.\nAlexander Alexeev could you wait a little bit and give us time to deliver\nv20 patch which will address localization (I propose concurrent work should\nstop until that to avoid conflicts)\nAdvice and discussion help us a lot.\n\nThanks!\n\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nOn Thu, 17 Mar 2022 at 21:31, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Thu, Mar 17, 2022 at 4:23 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> On 17.03.22 14:12, Aleksander Alekseev wrote:\n>> > v19-0001 changes the format string for XIDs from %u to XID_FMT. 
This\n>> > refactoring allows us to switch to UINT64_FORMAT by changing one\n>> > #define in the future patches.\n>> >\n>> > Kyotaro suggested using `errmsg(\"blah blah %lld ..\", (long long)\n>> > xid)` instead in order to simplify localization of the error messages.\n>> > Personally I don't have a strong opinion here. Either approach will\n>> > work and will affect the error messages eventually. Please let us know\n>> > what you think.\n>>\n>> This is not a question of simplification. Translatable messages with\n>> embedded macros won't work. This patch isn't going to be acceptable.\n>\n> I've suspected this, but wasn't sure. Thank you for clarification.Hi, hackers!The need to support localization is very much understood by us. We'll deliver a patchset soon with localization based on %lld/%llu format and explicit casts to unsigned/signed long long.Alexander Alexeev could you wait a little bit and give us time to deliver v20 patch which will address localization (I propose concurrent work should stop until that to avoid conflicts) Advice and discussion help us a lot. Thanks!-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com",
"msg_date": "Thu, 17 Mar 2022 18:46:30 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi!\n\nHere is the v20 patch. 0001 and 0002 are proposed into PG15 as\ndiscussed above.\nThe whole set of patches is added into [1] to be committed into PG16.\n\nIn this version we've made a major revision related to printf/elog format\ncompatible with localization\nas was proposed above.\n\nWe also think of adding 0003 patch related to 64 bit GUC's into this\nthread. Suppose it also may be delivered\ninto PG15.\n\nAleksander Alekseev, we've done this major revision mentioned above and you\nare free to continue working on this patch set.\n\nReviews and proposals are very welcome!\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Thu, 17 Mar 2022 19:25:00 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "At Thu, 17 Mar 2022 19:25:00 +0300, Maxim Orlov <orlovmg@gmail.com> wrote in \n> Hi!\n> \n> Here is the v20 patch. 0001 and 0002 are proposed into PG15 as\n> discussed above.\n> The whole set of patches is added into [1] to be committed into PG16.\n> \n> In this version we've made a major revision related to printf/elog format\n> compatible with localization\n> as was proposed above.\n> \n> We also think of adding 0003 patch related to 64 bit GUC's into this\n> thread. Suppose it also may be delivered\n> into PG15.\n> \n> Aleksander Alekseev, we've done this major revision mentioned above and you\n> are free to continue working on this patch set.\n> \n> Reviews and proposals are very welcome!\n\n(I'm afraid that this thread is not for the discussion of the patch?:)\n\n> [1]\n> https://www.postgresql.org/message-id/flat/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com\n\n+/* printf/elog format compatible with 32 and 64 bit xid. */\n+typedef unsigned long long\t\tXID_TYPE;\n...\n+\t errmsg_internal(\"found multixact %llu from before relminmxid %llu\",\n+\t\t\t (XID_TYPE) multi, (XID_TYPE) relminmxid)));\n\n\"(XID_TYPE) x\" is actually equivalent to \"(long long) x\" here, but the\npoint here is \"%llu in format string accepts (long long)\" so we should\nuse literally (or bare) (long long) and the maybe-all precedents does\nthat.\n\nAnd.. You looks like applying the change to some inappropriate places?\n\n- elog(DEBUG2, \"deleted page from block %u has safexid %u\",\n- blkno, opaque->btpo_level);\n+ elog(DEBUG2, \"deleted page from block %u has safexid %llu\",\n+ blkno, (XID_TYPE) opaque->btpo_level);\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 18 Mar 2022 10:20:08 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings"
},
{
"msg_contents": "At Fri, 18 Mar 2022 10:20:08 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> \"(XID_TYPE) x\" is actually equivalent to \"(long long) x\" here, but the\n> point here is \"%llu in format string accepts (long long)\" so we should\n\nOf course it's the typo of\n \"%llu in format string accepts (*unsigned* long long)\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 18 Mar 2022 10:23:44 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Thu, 17 Mar 2022 19:25:00 +0300, Maxim Orlov <orlovmg@gmail.com> wrote in \n>> +/* printf/elog format compatible with 32 and 64 bit xid. */\n>> +typedef unsigned long long\t\tXID_TYPE;\n>> ...\n>> +\t errmsg_internal(\"found multixact %llu from before relminmxid %llu\",\n>> +\t\t\t (XID_TYPE) multi, (XID_TYPE) relminmxid)));\n\n> \"(XID_TYPE) x\" is actually equivalent to \"(long long) x\" here, but the\n> point here is \"%llu in format string accepts (long long)\" so we should\n> use literally (or bare) (long long) and the maybe-all precedents does\n> that.\n\nYes. Please do NOT do it like that. Write (long long), not something\nelse, to cast a value to match an \"ll\" format specifier. Otherwise\nyou're just making readers wonder whether your code is correct.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 17 Mar 2022 22:29:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings"
},
{
"msg_contents": "Hi hackers,\n\n> Aleksander Alekseev, we've done this major revision mentioned above and\nyou are free to continue working on this patch set.\n>\n> Reviews and proposals are very welcome!\n\nMany thanks!\n\nHere is an new version with the following changes compared to v20:\n\n- Commit messages and links to the discussions were updated;\n- XID_TYPE name seemed to be slightly misleading. I changed it to\nXID_FMT_TYPE. Not 100% sure if we really need this typedef though. If not,\nXID_FMT_TYPE is easy to replace in the .patch files. Same for\nXID32_SCANF_FMT definition;\n- I noticed that pg_resetwal.c continues to use %u to format XIDs. Fixed;\n- Since v20-0001 modifies gettext() arguments, I'm pretty sure the\ncorresponding .po files should be modified as well. I addressed this in a\nseparate patch in order to simplify the review;\n\nTo me personally v21 looks almost OK. The comments in c.h should be\nrewritten depending on whether we choose to keep XID_FMT_TYPE and/or\nXID32_SCANF_FMT. The patchset passes all the tests.\n\n(As a side note, it looks like cfbot was slightly confused by forking the\nthread and modifying the CF entry. It couldn't find v20. If somebody knows\nhow to fix this, please help.)\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Fri, 18 Mar 2022 16:50:01 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi hackers,\n\n> Here is an new version with the following changes compared to v20:\n> [...]\n> To me personally v21 looks almost OK.\n\nFor some reason I didn't receive the recent e-mails from Tom and Kyotaro.\nI've just discovered them accidentally by reading the thread through the\nweb interface. These comments were not addressed in v21.\n\n-- \nBest regards,\nAleksander Alekseev\n\nHi hackers,> Here is an new version with the following changes compared to v20:> [...]> To me personally v21 looks almost OK. For some reason I didn't receive the recent e-mails from Tom and Kyotaro. I've just discovered them accidentally by reading the thread through the web interface. These comments were not addressed in v21.-- Best regards,Aleksander Alekseev",
"msg_date": "Fri, 18 Mar 2022 17:10:39 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": ">\n> > To me personally v21 looks almost OK.\n>\n> For some reason I didn't receive the recent e-mails from Tom and Kyotaro.\n> I've just discovered them accidentally by reading the thread through the\n> web interface. These comments were not addressed in v21.\n>\nAleksander, as of now we're preparing a new version that addresses a thing\nmentioned by Tom&Kyotaro. We'll try to add what you've done in v21, but\nplease check. We're going to send a patch very soon, most probably today in\nseveral hours.\n\n--\nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\n> To me personally v21 looks almost OK. For some reason I didn't receive the recent e-mails from Tom and Kyotaro. I've just discovered them accidentally by reading the thread through the web interface. These comments were not addressed in v21.Aleksander, as of now we're preparing a new version that addresses a thing mentioned by Tom&Kyotaro. We'll try to add what you've done in v21, but please check. We're going to send a patch very soon, most probably today in several hours.--Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com",
"msg_date": "Fri, 18 Mar 2022 18:39:21 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi!\n\nHere is v22. It addresses things mentioned by Tom and Kyotaro. Also added\nAleksander's changes from v21.\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Fri, 18 Mar 2022 18:14:52 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-18 18:14:52 +0300, Maxim Orlov wrote:\n> Subject: [PATCH v22 3/6] Use 64-bit pages in SLRU\n> \n> This is one step toward 64-bit XIDs.\n\nI think this should explain in more detail why this move is done. Neither the\ncommit message nor the rest of the thread does so afaics. It's not too hard to\ninfer, but the central reason behind a patch shouldn't need to be inferred.\n\n\n> -static bool CLOGPagePrecedes(int page1, int page2);\n> +static bool CLOGPagePrecedes(int64 page1, int64 page2);\n\nI think all of these are actually unsigned integers. If all of this stuff gets\ntouched, perhaps worth moving to uint64 instead?\n\nhttps://www.postgresql.org/message-id/20220318231430.m5g56yk4ztlz2man%40alap3.anarazel.de\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 18 Mar 2022 16:20:49 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "> On 2022-03-18 18:14:52 +0300, Maxim Orlov wrote:\n> > Subject: [PATCH v22 3/6] Use 64-bit pages in SLRU\n> >\n> > This is one step toward 64-bit XIDs.\n>\n> I think this should explain in more detail why this move is done. Neither\n> the\n> commit message nor the rest of the thread does so afaics. It's not too\n> hard to\n> infer, but the central reason behind a patch shouldn't need to be inferred.\n>\n>\n> > -static bool CLOGPagePrecedes(int page1, int page2);\n> > +static bool CLOGPagePrecedes(int64 page1, int64 page2);\n>\n> I think all of these are actually unsigned integers. If all of this stuff\n> gets\n> touched, perhaps worth moving to uint64 instead?\n>\n>\n> https://www.postgresql.org/message-id/20220318231430.m5g56yk4ztlz2man%40alap3.anarazel.de\n\n\nWe'll try to add these and many similar changes in Slru code, thanks!\n\n\nOn 2022-03-18 18:14:52 +0300, Maxim Orlov wrote:\n> Subject: [PATCH v22 3/6] Use 64-bit pages in SLRU\n> \n> This is one step toward 64-bit XIDs.\n\nI think this should explain in more detail why this move is done. Neither the\ncommit message nor the rest of the thread does so afaics. It's not too hard to\ninfer, but the central reason behind a patch shouldn't need to be inferred.\n\n\n> -static bool CLOGPagePrecedes(int page1, int page2);\n> +static bool CLOGPagePrecedes(int64 page1, int64 page2);\n\nI think all of these are actually unsigned integers. If all of this stuff gets\ntouched, perhaps worth moving to uint64 instead?\n\nhttps://www.postgresql.org/message-id/20220318231430.m5g56yk4ztlz2man%40alap3.anarazel.deWe'll try to add these and many similar changes in Slru code, thanks!",
"msg_date": "Sat, 19 Mar 2022 13:52:35 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On 18.03.22 16:14, Maxim Orlov wrote:\n> Here is v22. It addresses things mentioned by Tom and Kyotaro. Also \n> added Aleksander's changes from v21.\n\nThe v22-0002-Update-XID-formatting-in-the-.po-files.patch is not \nnecessary. That is done by a separate procedure.\n\nI'm wondering about things like this:\n\n- psprintf(\"xmax %u equals or exceeds next valid transaction ID %u:%u\",\n- xmax,\n+ psprintf(\"xmax %llu equals or exceeds next valid transaction ID %u:%llu\",\n+ (unsigned long long) xmax,\n EpochFromFullTransactionId(ctx->next_fxid),\n- XidFromFullTransactionId(ctx->next_fxid)));\n+ (unsigned long long) XidFromFullTransactionId(ctx->next_fxid)));\n\nThis %u:%u business is basically an existing workaround for the lack of \n64-bit transaction identifiers. Presumably, when those are available, \nall of this will be replaced by a single number display, possibly a \nsingle %llu. So these sites you change here will have to be touched \nagain, and so changing this now doesn't make sense. At least that's my \nguess. Maybe there needs to be a fuller explanation of how this is \nmeant to be transitioned.\n\nAs a more general point, I don't like plastering these bulky casts all \nover the place. Casts hide problems. For example, if we currently have\n\n elog(LOG, \"xid is %u\", xid);\n\nand then xid is changed to a 64-bit type, this will give a compiler \nwarning until you change the format. If we add a (long long unsigned) \ncast here now and then somehow forget to change the type of xid, nothing \nwill warn us about that. (Note that there is also third-party code \naffected by this.) Besides, these casts are quite ugly anyway, and I \ndon't think the solution for all time should be that we have to add \nthese casts just to print xids.\n\nI think there needs to be a bit more soul searching here on how to \nhandle that in the future and how to transition it. I don't think \ntargeting this patch for PG15 is useful at this point.\n\n\n",
"msg_date": "Mon, 21 Mar 2022 19:40:22 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi Peter,\n\n> I think there needs to be a bit more soul searching here on how to\n> handle that in the future and how to transition it. I don't think\n> targeting this patch for PG15 is useful at this point.\n\nThe patches can be reordered so that we are still able to deliver SLRU\nrefactoring in PG15.\n\n> As a more general point, I don't like plastering these bulky casts all\n> over the place. Casts hide problems.\n\nRegarding the casts, I don't like them either. I agree that it could\nbe a good idea to invest a little more time into figuring out if this\ntransit can be handled in a better way and deliver this change in the\nnext CF. However, if no one will be able to suggest a better\nalternative, I think we should continue making progress here. The\n32-bit XIDs are a major inconvenience for many users.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 22 Mar 2022 10:35:05 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": ">\n> > I think there needs to be a bit more soul searching here on how to\n> > handle that in the future and how to transition it. I don't think\n> > targeting this patch for PG15 is useful at this point.\n>\n> The patches can be reordered so that we are still able to deliver SLRU\n> refactoring in PG15.\n>\nSure.\n\n> > As a more general point, I don't like plastering these bulky casts all\n> > over the place. Casts hide problems.\n>\n> Regarding the casts, I don't like them either. I agree that it could\n> be a good idea to invest a little more time into figuring out if this\n> transit can be handled in a better way and deliver this change in the\n> next CF. However, if no one will be able to suggest a better\n> alternative, I think we should continue making progress here. The\n> 32-bit XIDs are a major inconvenience for many users.\n>\n\nI'd like to add that the initial way of shifting to 64bit was based on\nXID_FMT in a print formatting template which could be changed from 32 to 64\nbit when shifting to 64-bit xids later. But this template is not\nlocalizable so hackers recommended using %lld/%llu with (long\nlong)/(unsigned long long cast) which is a current best practice elsewhere\nin the code (e.g. recent 1f8bc448680bf93a9). So I suppose we already have a\ngood enough way to stick to.\n\nThis approach in 0001 inherently processes both 32/64 bit xids and should\nnot change with later committing 64bit xids later (\nhttps://postgr.es/m/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com\n)\n\nThe thing that needs to change then is suppressing output of Epoch. It\nshould be done when 64-xids are committed and it is done by 0006 in the\nmentioned thread. 
Until that I've left Epoch in the output.\n\nBig thanks for your considerations!\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Tue, 22 Mar 2022 12:31:58 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi!\n\nHere is v23. As was suggested by Alexander above, I've changed the order of\nthe patches and improved the commit message. Now, SLRU patch is the first.\n\nSplitting 64 bit XIDs into a bunch of patches was done to simplify\nreviewing and making commits in small portions. We have little overhead\nhere like removing Epoch later and now changes are based on the fact that\nEpoch still exists.\n\nIn the SLRU patch we have changed SLRU page numbering from int to int64.\nThere were proposals to make use of SLRU pages numbers that are in fact\nunsigned and change from int to uint64. I fully support this, but I'm not\nsure this big SLRU refactoring should be done in this patchset. On the\nother hand it seems logical to change everything in SLRU at once. I think I\nneed a second opinion in support of this change.\n\nIn general, I consider this patchset is ready to commit. It would be great\nto deliver it in PG15.\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Tue, 22 Mar 2022 14:54:56 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi hackers,\n\n> Here is v23. As was suggested by Alexander above, I've changed the order of the patches and improved the commit message. Now, SLRU patch is the first.\n\nMany thanks!\n\n> There were proposals to make use of SLRU pages numbers that are in fact unsigned and change from int to uint64. I fully support this, but I'm not sure this big SLRU refactoring should be done in this patchset.\n\nIf it takes a lot of effort and doesn't bring us any closer to 64-bit\nXIDs, I suggest not doing this in v23-0001. I can invest some time\ninto this refactoring in April and create a separate CF entry, if\nsomeone will second the idea.\n\n> In general, I consider this patchset is ready to commit. It would be great to deliver it in PG15.\n\n+1.\n\nv23-0002 seems to have two extra sentences in the commit message that\nare outdated, but this is a minor issue. The commit message should be:\n\n\"\"\"\nReplace the %u formatting string for XIDs with %llu and cast to\nunsigned long long. While actually XIDs are still 32 bit, this patch\ncompletely supports both 32 and 64 bit.\n\"\"\"\n\nSince Peter expressed some concerns regarding v23-0002, maybe we\nshould discuss it a bit more. Although personally I doubt that we can\ndo much better than that, and as I recall this particular change was\nexplicitly requested by several people.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 22 Mar 2022 16:21:10 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi!\n\nHere is v24. Changes are:\n- correct commit messages for 0001 and 0002\n- use uint64 for SLRU page numbering instead of int64 in v23\n- fix code formatting and indent\n- and minor fixes in slru.c\n\nReviews are very welcome!\n\n --\nBest regards,\nMaxim Orlov.",
"msg_date": "Tue, 22 Mar 2022 20:22:59 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "\nOn Wed, 23 Mar 2022 at 01:22, Maxim Orlov <orlovmg@gmail.com> wrote:\n> Hi!\n>\n> Here is v24. Changes are:\n> - correct commit messages for 0001 and 0002\n> - use uint64 for SLRU page numbering instead of int64 in v23\n> - fix code formatting and indent\n> - and minor fixes in slru.c\n>\n> Reviews are very welcome!\n>\n\nThanks for updating the patchs. I have a little comment on 0002.\n\n+ errmsg_internal(\"found xmax %llu\" \" (infomask 0x%04x) not frozen, not multi, not normal\",\n+ (unsigned long long) xid, tuple->t_infomask)));\n\nIMO, we can remove the double quote inside the sentence.\n\n errmsg_internal(\"found xmax %llu (infomask 0x%04x) not frozen, not multi, not normal\",\n (unsigned long long) xid, tuple->t_infomask)));\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Wed, 23 Mar 2022 08:24:26 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": ">\n> Thanks for updating the patchs. I have a little comment on 0002.\n>\n> + errmsg_internal(\"found xmax %llu\" \"\n> (infomask 0x%04x) not frozen, not multi, not normal\",\n> + (unsigned\n> long long) xid, tuple->t_infomask)));\n>\n>\nThanks for your review! Fixed.\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Wed, 23 Mar 2022 12:51:33 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On 23.03.22 10:51, Maxim Orlov wrote:\n> Thanks for updating the patchs. I have a little comment on 0002.\n> \n> + errmsg_internal(\"found xmax %llu\" \"\n> (infomask 0x%04x) not frozen, not multi, not normal\",\n> + \n> (unsigned long long) xid, tuple->t_infomask)));\n> \n> \n> Thanks for your review! Fixed.\n\nAbout v25-0001-Use-unsigned-64-bit-numbering-of-SLRU-pages.patch:\n\n-static bool CLOGPagePrecedes(int page1, int page2);\n+static bool CLOGPagePrecedes(uint64 page1, uint64 page2);\n\nYou are changing the API from signed to unsigned. Is this intentional? \nIt is not explained anywhere. Are we sure that nothing uses, for \nexample, negative values as error markers?\n\n #define SlruFileName(ctl, path, seg) \\\n- snprintf(path, MAXPGPATH, \"%s/%04X\", (ctl)->Dir, seg)\n+ snprintf(path, MAXPGPATH, \"%s/%04X%08X\", (ctl)->Dir, \\\n+ (uint32) ((seg) >> 32), (uint32) ((seg) & \n(uint64)0xFFFFFFFF))\n\nWhat's going on here? Some explanation? Why not use something like \n%llX or whatever you need?\n\n+ uint64 segno = pageno / SLRU_PAGES_PER_SEGMENT;\n+ uint64 rpageno = pageno % SLRU_PAGES_PER_SEGMENT;\n[etc.]\n\nNot clear whether segno etc. need to be changed to 64 bits, or whether \nan increase of SLRU_PAGES_PER_SEGMENT should also be considered.\n\n- if ((len == 4 || len == 5 || len == 6) &&\n+ if ((len == 12 || len == 13 || len == 14) &&\n\nThis was horrible before, but maybe we can take this opportunity now to \nadd a comment?\n\n- SlruFileName(ctl, path, ftag->segno);\n+ SlruFileName(ctl, path, (uint64) ftag->segno);\n\nMaybe ftag->segno should be changed to 64 bits as well? Not clear.\n\nThere is a code comment at the definition of SLRU_PAGES_PER_SEGMENT that \nhas some detailed explanations of how the SLRU numbering, SLRU file \nnames, and transaction IDs tie together. 
This doesn't seem to apply \nanymore after this change.\n\nThe reference page of pg_resetwal contains various pieces of information \nof how to map SLRU files back to transaction IDs. Please check if that \nis still all up to date.\n\n\nAbout v25-0002-Use-64-bit-format-to-output-XIDs.patch:\n\nI don't see the point of applying this now. It doesn't make PG15 any \nbetter. It's just a patch part of which we might need later. \nEspecially the issue of how we are handwaving past the epoch-enabled \noutput sites disturbs me. At least those should be omitted from this \npatch, since this patch makes these more wrong, not more right for the \nfuture.\n\nAn alternative we might want to consider is that we use PRId64 as \nexplained here: \n<https://www.gnu.org/software/gettext/manual/html_node/Preparing-Strings.html>. \n We'd have to check whether we still support any non-GNU gettext \nimplementations and to what extent they support this. But I think it's \nsomething to think about if we are going to embark on a journey of much \nmore int64 printf output.\n\n\n",
"msg_date": "Wed, 23 Mar 2022 23:33:20 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi!\n\nHere is the v26 patchset. Main changes:\n- back to signed int in SLRU pages\n- fix printing epoch and xid as single value\n- SlruFileName is not changed, thus no need special procedure in pg_upgrade\n- remove cast from SlruFileName\n- refactoring macro SlruFileName into inline function\n\nReviews are very welcome!\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Thu, 24 Mar 2022 17:43:51 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi, Peter!\nThanks for your review!\n\nAbout v25-0001-Use-unsigned-64-bit-numbering-of-SLRU-pages.patch:\n>\n> -static bool CLOGPagePrecedes(int page1, int page2);\n> +static bool CLOGPagePrecedes(uint64 page1, uint64 page2);\n>\n> You are changing the API from signed to unsigned. Is this intentional?\n> It is not explained anywhere. Are we sure that nothing uses, for\n> example, negative values as error markers?\n>\nInitially, we've made SLRU pages to be int64, and reworked them into uint64\nas per Andres Freund's proposal. It is not necessary for a 64xid patchset\nthough as maximum page number is at least several (>2) times less than the\nmaximum 64bit xid value. So we've returned them to be signed int64. I don't\nsee the reason why make SRLU unsigned for introducing 64bit xids.\n\n\n> #define SlruFileName(ctl, path, seg) \\\n> - snprintf(path, MAXPGPATH, \"%s/%04X\", (ctl)->Dir, seg)\n> + snprintf(path, MAXPGPATH, \"%s/%04X%08X\", (ctl)->Dir, \\\n> + (uint32) ((seg) >> 32), (uint32) ((seg) &\n> (uint64)0xFFFFFFFF))\n\nWhat's going on here? Some explanation? Why not use something like\n> %llX or whatever you need?\n>\nOf course, this should be simplified as %012llX and we will do this in\nlater stage (in 0006 patch in 64xid thread) as this should be done together\nwith CLOG pg_upgrade. So we've returned this to the initial state in 0001.\nThanks for the notion!\n\n+ uint64 segno = pageno / SLRU_PAGES_PER_SEGMENT;\n> + uint64 rpageno = pageno % SLRU_PAGES_PER_SEGMENT;\n> [etc.]\n>\n> Not clear whether segno etc. 
need to be changed to 64 bits, or whether\n> an increase of SLRU_PAGES_PER_SEGMENT should also be considered.\n>\nYes, segno should be 64bits because even multiple of SLRU_PAGES_PER_SEGMENT\nand CLOG_XACTS_PER_PAGE (and similar for commit_ts and mxact) is far less\nthan 2^32 and the overall length of clog/commit_ts/mxact is 64bit.\n\n\n> - if ((len == 4 || len == 5 || len == 6) &&\n> + if ((len == 12 || len == 13 || len == 14) &&\n>\n> This was horrible before, but maybe we can take this opportunity now to\n> add a comment?\n>\nThis should also be introduced later together with SlruFileName changes. So\nwe've removed this change from 0001. Later in 0006 we'll add this with\ncomments.\n\n- SlruFileName(ctl, path, ftag->segno);\n> + SlruFileName(ctl, path, (uint64) ftag->segno);\n>\n> Maybe ftag->segno should be changed to 64 bits as well? Not clear.\n>\nSame as previous.\n\n\n> There is a code comment at the definition of SLRU_PAGES_PER_SEGMENT that\n> has some detailed explanations of how the SLRU numbering, SLRU file\n> names, and transaction IDs tie together. This doesn't seem to apply\n> anymore after this change.\n>\nSame as previous.\n\nThe reference page of pg_resetwal contains various pieces of information\n> of how to map SLRU files back to transaction IDs. Please check if that\n> is still all up to date.\n>\nSame as previous.\n\nAbout v25-0002-Use-64-bit-format-to-output-XIDs.patch:\n>\n> I don't see the point of applying this now. It doesn't make PG15 any\n> better. It's just a patch part of which we might need later.\n> Especially the issue of how we are handwaving past the epoch-enabled\n> output sites disturbs me. 
At least those should be omitted from this\n> patch, since this patch makes these more wrong, not more right for the\n> future.\n\n psprintf(\"xmax %u equals or exceeds next valid transaction ID %u:%u\",\n> - xmax,\n> + psprintf(\"xmax %llu equals or exceeds next valid transaction ID %u:%llu\",\n> + (unsigned long long) xmax,\n> EpochFromFullTransactionId(ctx->next_fxid),\n> - XidFromFullTransactionId(ctx->next_fxid)));\n> + (unsigned long long) XidFromFullTransactionId(ctx->\n> next_fxid)));\n\n\n> This %u:%u business is basically an existing workaround for the lack of\n> 64-bit transaction identifiers. Presumably, when those are available,\n> all of this will be replaced by a single number display, possibly a\n> single %llu. So these sites you change here will have to be touched\n> again, and so changing this now doesn't make sense. At least that's my\n> guess. Maybe there needs to be a fuller explanation of how this is\n> meant to be transitioned.\n\nFixed, thanks.\n\nAn alternative we might want to consider is that we use PRId64 as\n> explained here:\n> <\n> https://www.gnu.org/software/gettext/manual/html_node/Preparing-Strings.html>.\n>\n> We'd have to check whether we still support any non-GNU gettext\n> implementations and to what extent they support this. But I think it's\n> something to think about if we are going to embark on a journey of much\n> more int64 printf output.\n\n\nThere were several other ways that have met opposition above in the thread.\nI guess PRId64 will also be opposed as not portable enough. Personally, I\ndon't see much trouble when we cast the value to be printed to more wide\nvalue and consider this the best choice of all that was discussed. We just\nstick to a portable way of printing which was recommended by Tom and in\nagreement with 1f8bc448680bf93a974cb5f5\n\nIn [1] we initially proposed a 64xid patch to be committed at once. But it\nappeared that a patch of this complexity can not be reviewed at once. 
It\nwas proposed in [1] that we'd better cut it into separate threads and\ncommit by parts, some into v15, the other into v16. So we did. In view of\nthis, I can not accept that 0002 patch doesn't make v15 better. I consider\nit is separate enough to be committed as a base to further 64xid parts.\n\nAnyway, big thanks for the review, which is quite useful!\n\n[1]\nhttps://www.postgresql.org/message-id/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Thu, 24 Mar 2022 19:26:45 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Just forgot to mention that answers in a previous message are describing\nthe changes that are in v26.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nJust forgot to mention that answers in a previous message are describing the changes that are in v26.-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com",
"msg_date": "Thu, 24 Mar 2022 19:29:23 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Sorry, I forgot to change pg_amcheck tests to correspond to the removed\nepoch from output in 0002.\nFixed in v27.\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Thu, 24 Mar 2022 19:12:20 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi!\nIt seems that CFbot was still unhappy with pg_upgrade test due to epoch\nremoval from NextXid in controldata.\nI've reverted this change as support for \"epochless\" 64-bit control data\nwith xids that haven't yet switched to 64-bit would otherwise need extra\ntemporary code to support.\nI suppose this should be committed with the main 64xid (0006) patch later.\n\nPFA v28 patch.\nThanks, you all for your attention, interest, and help with this patch!\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Fri, 25 Mar 2022 00:02:55 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "At Fri, 25 Mar 2022 00:02:55 +0400, Pavel Borisov <pashkin.elfe@gmail.com> wrote in \n> Hi!\n> It seems that CFbot was still unhappy with pg_upgrade test due to epoch\n> removal from NextXid in controldata.\n> I've reverted this change as support for \"epochless\" 64-bit control data\n> with xids that haven't yet switched to 64-bit would otherwise need extra\n> temporary code to support.\n> I suppose this should be committed with the main 64xid (0006) patch later.\n> \n> PFA v28 patch.\n> Thanks, you all for your attention, interest, and help with this patch!\n\n+SlruScanDirCbFindEarliest(SlruCtl ctl, char *filename, int64 segpage, void *data)\n\nsegpage doesn't fit mxtruncinfo.earliestExistingPage. Doesn't it need\nto be int64?\n\n\n+\treturn snprintf(path, MAXPGPATH, \"%s/%04llX\", ctl->Dir, (long long) segno);\n\nWe have two way to go here. One way is expanding the file name\naccording to the widened segno, another is keep the old format string\nthen cast the segno to (int). Since the objective of this patch is\nwiden pageno, I think, as Pavel's comment upthread, we should widen\nthe file format to \"%s/%012llX\".\n\n\n\nAs Peter suggested upthread,\n\n+\tint64\t\tsegno = pageno / SLRU_PAGES_PER_SEGMENT;\n+\tint64\t\trpageno = pageno % SLRU_PAGES_PER_SEGMENT;\n+\tint64\t\toffset = rpageno * BLCKSZ;\n\nrpageno is apparently over-sized. So offset is also over-sized. 
segno\ncan be up to 48 bits (maybe) so int64 is appropriate.\n\n-SlruPhysicalWritePage(SlruCtl ctl, int pageno, int slotno, SlruWriteAll fdata)\n+SlruPhysicalWritePage(SlruCtl ctl, int64 pageno, int slotno, SlruWriteAll fdata)\n\nThis function does the followng.\n\n>\tFileTag\t\ttag;\n>\n>\tINIT_SLRUFILETAG(tag, ctl->sync_handler, segno);\n\ntag.segno is uin32, which is too narrow here.\n\n\n\nThis is not an issue of this patch, but..\n\n-\t\t errdetail(\"Could not read from file \\\"%s\\\" at offset %u: %m.\",\n-\t\t\t\t path, offset)));\n\nWhy do we print int by \"%u\" here, even though that doesn't harm at all?\n\n\n\n-SlruScanDirCbReportPresence(SlruCtl ctl, char *filename, int segpage, void *data)\n+SlruScanDirCbReportPresence(SlruCtl ctl, char *filename, int64 segpage,\n+\t\t\t\t\t\t\tvoid *data)\n {\n-\tint\t\t\tcutoffPage = *(int *) data;\n+\tint64\t\tcutoffPage = *(int64 *) data;\n\nSlruMayDeleteSegment, called from this function, still thinks page\nnumbers as int.\n\n\n \t\tif ((len == 4 || len == 5 || len == 6) &&\n \t\t\tstrspn(clde->d_name, \"0123456789ABCDEF\") == len)\n \t\t{\n-\t\t\tsegno = (int) strtol(clde->d_name, NULL, 16);\n+\t\t\tsegno = strtoi64(clde->d_name, NULL, 16);\n\n(I'm not sure about \"len == 5 || len == 6\", though), the name of the\nfile is (I think) now expanded to 12 bytes. Otherwise, strtoi64 is\nnot needed here.\n\n\n\n-/* Currently, no field of AsyncQueueEntry requires more than int alignment */\n-#define QUEUEALIGN(len)\t\tINTALIGN(len)\n+/* AsyncQueueEntry.xid requires 8-byte alignment */\n+#define QUEUEALIGN(len)\t\tTYPEALIGN(8, len)\n \nI think we haven't expanded xid yet? (And the first member of\nAsyncQueueEntry is int even after expanding xid.)\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 25 Mar 2022 12:07:18 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings"
},
{
"msg_contents": ">\n> +SlruScanDirCbFindEarliest(SlruCtl ctl, char *filename, int64 segpage,\n> void *data)\n> segpage doesn't fit mxtruncinfo.earliestExistingPage. Doesn't it need\n> to be int64?\n\n\nI think yes, fixed. Thanks!\n\n+ return snprintf(path, MAXPGPATH, \"%s/%04llX\", ctl->Dir, (long long)\n> segno);\n\nWe have two way to go here. One way is expanding the file name\n> according to the widened segno, another is keep the old format string\n> then cast the segno to (int). Since the objective of this patch is\n> widen pageno, I think, as Pavel's comment upthread, we should widen\n> the file format to \"%s/%012llX\".\n\n\nI did it the first way. I moved the actual change of segment file name in\nthe next patches that are to be committed in v16 or later.\n\nAs Peter suggested upthread,\n> + int64 segno = pageno / SLRU_PAGES_PER_SEGMENT;\n> + int64 rpageno = pageno % SLRU_PAGES_PER_SEGMENT;\n> + int64 offset = rpageno * BLCKSZ;\n> rpageno is apparently over-sized. So offset is also over-sized. segno\n> can be up to 48 bits (maybe) so int64 is appropriate.\n\n\nFixed. Thanks!\n\n-SlruPhysicalWritePage(SlruCtl ctl, int pageno, int slotno, SlruWriteAll\n> fdata)\n> +SlruPhysicalWritePage(SlruCtl ctl, int64 pageno, int slotno, SlruWriteAll\n> fdata)\n> This function does the followng.\n> > FileTag tag;\n> >\n> > INIT_SLRUFILETAG(tag, ctl->sync_handler, segno);\n> tag.segno is uin32, which is too narrow here.\n\n\n Fixed. Thanks!\n\nThis is not an issue of this patch, but..\n> - errdetail(\"Could not read from file \\\"%s\\\" at offset %u: %m.\",\n> - path, offset)));\n> Why do we print int by \"%u\" here, even though that doesn't harm at all?\n\n\nSince it is not related to making XIDs 64 bit it is addressed in the\nseparate thread [1].\n\n-SlruScanDirCbReportPresence(SlruCtl ctl, char *filename, int segpage, void\n> *data)\n> +SlruScanDirCbReportPresence(SlruCtl ctl, char *filename, int64 segpage,\n> + void *data)\n> {\n> - int cutoffPage = *(int *) data;\n> + int64 cutoffPage = *(int64 *) data;\n> SlruMayDeleteSegment, called from this function, still thinks page\n> numbers as int.\n\n\nFixed. Thanks!\n\nif ((len == 4 || len == 5 || len == 6) &&\n> strspn(clde->d_name, \"0123456789ABCDEF\") == len)\n> {\n> - segno = (int) strtol(clde->d_name, NULL, 16);\n> + segno = strtoi64(clde->d_name, NULL, 16);\n> (I'm not sure about \"len == 5 || len == 6\", though), the name of the\n> file is (I think) now expanded to 12 bytes. Otherwise, strtoi64 is\n> not needed here.\n\n\nSame as \"%s/%04llX\" issues mentioned above. Moved to the next patches.\n\n-/* Currently, no field of AsyncQueueEntry requires more than int alignment\n> */\n> -#define QUEUEALIGN(len) INTALIGN(len)\n> +/* AsyncQueueEntry.xid requires 8-byte alignment */\n> +#define QUEUEALIGN(len) TYPEALIGN(8, len)\n> I think we haven't expanded xid yet? (And the first member of\n> AsyncQueueEntry is int even after expanding xid.)\n\n\nSame as above.\n\nThanks for your review!\n\nHere is a new patchset v29.\nMajor changes:\n- fixes from review by Kyotaro mentioned above\n- 0002 is split into two patches: 0002 is change output XIDs format only,\n0003 is get rid of epoch in output\n- 0003 includes changes in controldata file format in order to support both\nformats: old format with epoch and new as FullTransactionId\n\nI'm not sure if it is worth it at this stage to change pg_resetwal handling\non epoch (for example, remove -e option and so on) or do it later?\n\nOpinions are welcome!\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CALT9ZEG1Oo9W_bME5yhsE96AYz19VOnEwHxFUNCosBJHmc0bhw%40mail.gmail.com#86813d80ca9827d36524a9a2adc77c4c\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Fri, 25 Mar 2022 17:06:39 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Sorry, I forgot to append a fix for FileTag in v29. Here is v30.\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Fri, 25 Mar 2022 17:42:36 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi hackers,\n\n> Here is v30.\n\nI took another look at the patchset. Personally I don't think it will get much\nbetter than it is now. I'm inclined to change the status of the CF entry to\n\"Ready for Committer\" unless anyone disagrees.\n\ncfbot reports a problem with t/013_partition.pl but the test seems to be flaky\non `master` [1]. I couldn't find anything useful in the logs except for\n\"[postmaster] LOG: received immediate shutdown request\". Then I re-checked the\npatchset on FreeBSD 13 locally. The patchset passed `make installcked-world`.\n\n> About v25-0002-Use-64-bit-format-to-output-XIDs.patch:\n> I don't see the point of applying this now. It doesn't make PG15 any\n> better. It's just a patch part of which we might need later.\n\n> It was proposed in [1] that we'd better cut it into separate threads and\n> commit by parts, some into v15, the other into v16. So we did. In view of\n> this, I can not accept that 0002 patch doesn't make v15 better. I consider\n> it is separate enough to be committed as a base to further 64xid parts.\n\nI understand how disappointing this may be.\n\nPersonally I don't have a strong opinion here. Merging the patch sooner will\nallow us to move toward 64-bit XIDs faster (e.g. to gather the feedback from\nthe early adopters, allow the translators to do their thing earlier, etc).\nMerging it later will make PG15 more stable (you can't break anything new if\nyou don't change anything). As always, engineering is all about compromises.\n\nIt's up to the committer to decide.\n\n[1]: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=frogfish&dt=2022-03-25%2018%3A37%3A10\n\n--\nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 28 Mar 2022 13:46:02 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On 28.03.22 12:46, Aleksander Alekseev wrote:\n> Personally I don't have a strong opinion here. Merging the patch sooner will\n> allow us to move toward 64-bit XIDs faster (e.g. to gather the feedback from\n> the early adopters, allow the translators to do their thing earlier, etc).\n> Merging it later will make PG15 more stable (you can't break anything new if\n> you don't change anything). As always, engineering is all about compromises.\n\nAt this point, I'm more concerned that code review is still finding \nbugs, and that we have no test coverage for any of this, so we are \nsupposed to gain confidence in this patch by staring at it very hard. ;-)\n\nAFAICT, this patch provides no actual functionality change, so holding \nit a bit for early in the PG16 cycle wouldn't lose anything.\n\n\n\n",
"msg_date": "Mon, 28 Mar 2022 15:45:48 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi hackers,\n\n> I took another look at the patchset. Personally I don't think it will get much\n> better than it is now. I'm inclined to change the status of the CF entry to\n> \"Ready for Committer\" unless anyone disagrees.\n\n> > About v25-0002-Use-64-bit-format-to-output-XIDs.patch:\n> > I don't see the point of applying this now. It doesn't make PG15 any\n> > better. It's just a patch part of which we might need later.\n>\n> AFAICT, this patch provides no actual functionality change, so holding\n> it a bit for early in the PG16 cycle wouldn't lose anything.\n\nOK. As I understand we still have a consensus that v30-0001 (SLRU refactoring,\nnot the XID formatting) is targeting PG15, so I changed the CF entry to\n\"Ready for Committer\" for this single patch. I rechecked it again on the\ncurrent `master` branch without the other patches and it is OK. cfbot is happy\nwith the patchset as well.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 29 Mar 2022 16:09:14 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On 29.03.22 15:09, Aleksander Alekseev wrote:\n> Hi hackers,\n> \n>> I took another look at the patchset. Personally I don't think it will get much\n>> better than it is now. I'm inclined to change the status of the CF entry to\n>> \"Ready for Committer\" unless anyone disagrees.\n> \n>>> About v25-0002-Use-64-bit-format-to-output-XIDs.patch:\n>>> I don't see the point of applying this now. It doesn't make PG15 any\n>>> better. It's just a patch part of which we might need later.\n>>\n>> AFAICT, this patch provides no actual functionality change, so holding\n>> it a bit for early in the PG16 cycle wouldn't lose anything.\n> \n> OK. As I understand we still have a consensus that v30-0001 (SLRU refactoring,\n> not the XID formatting) is targeting PG15, so I changed the CF entry to\n> \"Ready for Committer\" for this single patch. I rechecked it again on the\n> current `master` branch without the other patches and it is OK. cfbot is happy\n> with the patchset as well.\n\nThat isn't really what I meant. When I said\n\n > At this point, I'm more concerned that code review is still finding\n > bugs, and that we have no test coverage for any of this\n\nthis meant especially the SLRU refactoring patch.\n\n\n\n",
"msg_date": "Tue, 29 Mar 2022 19:25:49 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi Peter,\n\n> That isn't really what I meant. When I said\n>\n> > At this point, I'm more concerned that code review is still finding\n> > bugs, and that we have no test coverage for any of this\n>\n> this meant especially the SLRU refactoring patch.\n\nGot it, and sorry for the confusion. I decided to invest some time\ninto improving the SLRU test coverage. I created a new thread, please\njoin the discussion [1].\n\nMaxim, Pavel and I agreed (offlist) that I will rebase v30 patchset if\n[1] will be merged.\n\n[1]: https://postgr.es/m/CAJ7c6TOFoWcHOW4BVe3BG_uikCrO9B91ayx9d6rh5JZr_tPESg%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 31 Mar 2022 17:35:30 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi!\n\nHere is rebased version of a patch with minor improvements.\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Tue, 5 Apr 2022 14:01:06 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi!\n\nTo keep this thread up to date with [1], here is the rebased v32 version of\nthe patch.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Wed, 13 Apr 2022 13:54:02 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi hackers,\n\n> here is the rebased v32 version of the patch.\n\nThe patchset rotted a bit. Here is a rebased version.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 26 Apr 2022 15:55:16 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "> here is the rebased v32 version of the patch.\n>\n> The patchset rotted a bit. Here is a rebased version.\n>\n\nWe have posted an updated version v34 of the whole patchset in [1].\nChanges of patches 0001-0003 there are identical to v33. So, no update is\nneeded in this thread.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com\n\n-- \nBest regards,\nMaxim Orlov.\n\n\n> here is the rebased v32 version of the patch.\n\nThe patchset rotted a bit. Here is a rebased version.We have posted an updated version v34 of the whole patchset in [1].Changes of patches 0001-0003 there are identical to v33. So, no update is needed in this thread.[1] https://www.postgresql.org/message-id/flat/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com-- Best regards,Maxim Orlov.",
"msg_date": "Fri, 13 May 2022 16:21:29 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi hackers,\n\nHere is the rebased patchset.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Fri, 8 Jul 2022 12:05:08 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": ">\n> Here is the rebased patchset.\n>\n\n Hi!\n\nWhile working on v40 of 64-bit XID patch [1], we've noticed a couple of\nforgotten things in v34 in this thread.\nSo, update the patchset to v40 from the thread [1].\n\nIt seems convenient to use common numbering of versions in this thread and\nthe thread [1].\nSo, please, don't be surprised to see v40 here just after v34.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CACG=ezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe=pyyjVWA@mail.gmail.com\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Fri, 8 Jul 2022 17:40:00 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi Maxim,\n\n> It seems convenient to use common numbering of versions in this thread and the thread [1].\n\nAgree!\n\nHere is the rebased patchset v41.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Wed, 13 Jul 2022 17:38:05 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Fri, May 13, 2022 at 04:21:29PM +0300, Maxim Orlov wrote:\n> We have posted an updated version v34 of the whole patchset in [1].\n> Changes of patches 0001-0003 there are identical to v33. So, no update is\n> needed in this thread.\n\nIs there any reason to continue with two separate threads and CF entries ?\nThe original reason was to have a smaller patch for considerate late in v15.\n\nBut right now, it just seems to cause every update to imply two email messages\nrather than one.\n\nSince the patch is split into 0001, 0002, 0003, 0004+, both can continue in the\nmain thread. The early patches can still be applied independently from each\nlater patch (the same as with any other patch series).\n\nAlso, since this patch series is large, and expects a lot of conflicts, it\nseems better to update the cfapp with a \"git link\" [0] where you can maintain a\nrebased branch. It avoids the need to mail new patches to the list several\ntimes more often than it's being reviewed.\n\n[0] Note that cirrusci will run the same checks as cfbot if you push a branch\nto github.\n\n\n",
"msg_date": "Wed, 13 Jul 2022 09:54:39 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": ">\n> Is there any reason to continue with two separate threads and CF entries ?\n> The original reason was to have a smaller patch for considerate late in\n> v15.\n>\n> But right now, it just seems to cause every update to imply two email\n> messages\n> rather than one.\n>\n> Since the patch is split into 0001, 0002, 0003, 0004+, both can continue\n> in the\n> main thread. The early patches can still be applied independently from\n> each\n> later patch (the same as with any other patch series).\n>\n\nI see the main goal of this split is to make discussion of this (easier)\nthread separate to the discussion of a whole patchset which is expected to\nbe more thorough.\n\nAlso I see the chances of this thread to be committed into v16 to be much\nhigher than of a main patch, which will be for v17 then.\n\nThanks for the advice to add git thread instead of patch posting. Will try\nto do this.\n-- \nBest regards,\nPavel Borisov",
"msg_date": "Wed, 13 Jul 2022 19:48:23 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Pavel, Justin,\n\n>> Is there any reason to continue with two separate threads and CF entries ?\n>> The original reason was to have a smaller patch for considerate late in v15.\n\n> I see the main goal of this split is to make discussion of this (easier) thread separate to the discussion of a whole patchset which is expected to be more thorough.\n\n+1. This was done per explicit request in the first thread.\n\n>> But right now, it just seems to cause every update to imply two email messages\n>> rather than one.\n>>\n>> Also, since this patch series is large, and expects a lot of conflicts, it\n>> seems better to update the cfapp with a \"git link\" [0] where you can maintain a\n>> rebased branch. It avoids the need to mail new patches to the list several\n>> times more often than it's being reviewed.\n\nYep, this is a bit inconvenient. Personally I didn't expect that\nmerging patches in this thread would take that long. They are in\n\"Ready for Committer\" state for a long time now and there are no known\nissues with them other than unit tests for SLRU [1] should be merged\nfirst.\n\nI suggest we use \"git link\" for the larger patchset in the other\nthread since I'm not contributing to it right now and all in all that\nthread is waiting for this one. For this thread we continue using\npatches since several people contribute to them.\n\n[1]: https://commitfest.postgresql.org/38/3608/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Fri, 15 Jul 2022 14:29:21 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Fri, 15 Jul 2022 at 12:29, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> Personally I didn't expect that\n> merging patches in this thread would take that long. They are in\n> \"Ready for Committer\" state for a long time now and there are no known\n> issues with them other than unit tests for SLRU [1] should be merged\n> first.\n\nThese patches look ready to me, including the SLRU tests.\n\nEven though they do very little, these patches touch many aspects of\nthe code, so it would make sense to apply these as the last step in\nthe CF.\n\nTo encourage committers to take that next step, let's have a\ndemocratic vote on moving this forwards:\n+1 from me.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 26 Jul 2022 19:35:13 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Tue, 26 Jul 2022 at 21:35, Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n\n> On Fri, 15 Jul 2022 at 12:29, Aleksander Alekseev\n> <aleksander@timescale.com> wrote:\n> > Personally I didn't expect that\n> > merging patches in this thread would take that long. They are in\n> > \"Ready for Committer\" state for a long time now and there are no known\n> > issues with them other than unit tests for SLRU [1] should be merged\n> > first.\n>\n> These patches look ready to me, including the SLRU tests.\n>\n> Even though they do very little, these patches touch many aspects of\n> the code, so it would make sense to apply these as the last step in\n> the CF.\n>\n> To encourage committers to take that next step, let's have a\n> democratic vote on moving this forwards:\n> +1 from me.\n>\n\nThis set of patches no longer applies cleanly to the master branch. There\nare lots of\nhunks as well as failures. Please rebase the patches.\n\nThere are failures for multiple files including the one given below:\n\npatching file src/backend/replication/logical/worker.c\nHunk #1 succeeded at 1089 (offset 1 line).\nHunk #2 succeeded at 1481 (offset 1 line).\nHunk #3 succeeded at 3322 (offset 2 lines).\nHunk #4 succeeded at 3493 (offset 2 lines).\nHunk #5 FAILED at 4009.\n1 out of 5 hunks FAILED -- saving rejects to file\nsrc/backend/replication/logical/worker.c.rej",
"msg_date": "Tue, 27 Sep 2022 16:54:04 +0300",
"msg_from": "Hamid Akhtar <hamid.akhtar@percona.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi Hamid,\n\n> There are failures for multiple files including the one given below:\n\nThanks for letting us know. There was a little conflict in\nsrc/backend/replication/logical/worker.c since the message format has\nchanged. Nothing particularly awful.\n\nHere is the rebased patchset.\n\n> These patches look ready to me, including the SLRU tests.\n\n> To encourage committers to take that next step, let's have a\n> democratic vote on moving this forwards: +1 from me.\n\nThanks, Simon. Not 100% sure who exactly is invited to vote, but just\nin case here is +1 from me. These patches were \"Ready for Committer\"\nfor several months now and are unlikely to get any better. So I\nsuggest we merge them.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 27 Sep 2022 17:25:14 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi hackers,\n\n> Thanks, Simon. Not 100% sure who exactly is invited to vote, but just\n> in case here is +1 from me. These patches were \"Ready for Committer\"\n> for several months now and are unlikely to get any better. So I\n> suggest we merge them.\n\nHere is the rebased patchset.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Thu, 6 Oct 2022 13:05:39 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi!\n\nHere is a rebased version of the patch set.\nMajor changes are:\n1. Fix rare replica fault.\n Upon page pruning in heap_page_prune, page fragmentation repair is\ndetermined\nby\n a parameter repairFragmentation. At the same time, on a replica, upon\nhandling XLOG_HEAP2_PRUNE record type\n in heap_xlog_prune, we always call heap_page_prune_execute with\nrepairFragmentation\nparameter equal to true.\n This caused page inconsistency and lead to the crash of the replica. Fix\nthis by adding new flag in\n struct xl_heap_prune.\n2. Add support for meson build.\n3. Add assertion \"buffer is locked\" in HeapTupleCopyBaseFromPage.\n4. Add assertion \"buffer is locked exclusive\" in heap_page_shift_base.\n5. Prevent excessive growth of xmax in heap_prepare_freeze_tuple.\n\nAs always, reviews are very welcome!\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Thu, 6 Oct 2022 13:45:20 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Maxim,\n\n> Here is a rebased version of the patch set.\n\nThis is the wrong thread / CF entry. Please see\nhttp://cfbot.cputube.org/ and https://commitfest.postgresql.org/ and\nthe first email in the thread.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 6 Oct 2022 15:15:37 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": ">\n> This is the wrong thread / CF entry. Please see\n>\n\nYep, my fault. Sorry about that.\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Fri, 7 Oct 2022 14:00:50 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi hackers,\n\n> Sorry about that.\n\nNo problem.\n\n> Thanks, Simon. Not 100% sure who exactly is invited to vote, but just\n> in case here is +1 from me. These patches were \"Ready for Committer\"\n> for several months now and are unlikely to get any better. So I\n> suggest we merge them.\n\nHere is the rebased patchset number 44.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 10 Oct 2022 12:16:12 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "2022年10月10日(月) 18:16 Aleksander Alekseev <aleksander@timescale.com>:\n>\n> Hi hackers,\n>\n> > Sorry about that.\n>\n> No problem.\n>\n> > Thanks, Simon. Not 100% sure who exactly is invited to vote, but just\n> > in case here is +1 from me. These patches were \"Ready for Committer\"\n> > for several months now and are unlikely to get any better. So I\n> > suggest we merge them.\n>\n> Here is the rebased patchset number 44.\n\nHi\n\nThis entry was marked \"Ready for committer\" in the CommitFest app but cfbot\nreports the patch no longer applies.\n\nWe've marked it as \"Waiting on Author\". As CommitFest 2022-11 is\ncurrently underway, this would be an excellent time update the patch.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can move the patch entry forward by visiting\n\n https://commitfest.postgresql.org/40/3489/\n\nand changing the status to \"Ready for committer\".\n\n\nThanks\n\nIan Barwick\n\n\n",
"msg_date": "Thu, 3 Nov 2022 16:51:13 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi!\n\n\n> This entry was marked \"Ready for committer\" in the CommitFest app but cfbot\n> reports the patch no longer applies.\n>\nThanks for the reminder. I think, this should work.\n\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Thu, 3 Nov 2022 11:15:43 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Thu, Nov 3, 2022 at 1:46 PM Maxim Orlov <orlovmg@gmail.com> wrote:\n>\n> Hi!\n>\n>>\n>> This entry was marked \"Ready for committer\" in the CommitFest app but cfbot\n>> reports the patch no longer applies.\n>\n> Thanks for the reminder. I think, this should work.\n\nHave we measured the WAL overhead because of this patch set? maybe\nthese particular patches will not impact but IIUC this is ground work\nfor making xid 64 bit. So each XLOG record size will increase at\nleast by 4 bytes because the XLogRecord contains the xid.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 3 Nov 2022 13:50:11 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "2022年11月3日(木) 17:15 Maxim Orlov <orlovmg@gmail.com>:\n>\n> Hi!\n>\n>>\n>> This entry was marked \"Ready for committer\" in the CommitFest app but cfbot\n>> reports the patch no longer applies.\n>\n> Thanks for the reminder. I think, this should work.\n\nThanks for the quick update, cfbot reports no issues.\n\nI've set the CF entry back to \"Ready for Committer\",\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Thu, 3 Nov 2022 18:29:44 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi Ian,\n\n> I've set the CF entry back to \"Ready for Committer\",\n\nThanks. Here is the rebased patchset.\n\nDilip asked a good question above about the rest of the 64-bit XIDs\npatches. I'm going to do some testing and post the results to the main\n64-bit XIDs thread [1].\n\n[1]: https://www.postgresql.org/message-id/CAJ7c6TOkpJi78A9chR-j0OOMvP6G%3DuR%2BscpEKsM4jtw0dK9-3Q%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Wed, 16 Nov 2022 11:37:20 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi hackers,\n\n> > I've set the CF entry back to \"Ready for Committer\",\n>\n> Thanks. Here is the rebased patchset.\n\nAfter merging 006b69fd [1] the 0001 patch needed a rebase. PFA v46.\n\n[1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=006b69fd\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Thu, 17 Nov 2022 12:44:54 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi hackers,\n\n> > > I've set the CF entry back to \"Ready for Committer\",\n> >\n> > Thanks. Here is the rebased patchset.\n>\n> After merging 006b69fd the 0001 patch needed a rebase. PFA v46.\n\nAfter merging 1489b1ce [1] the patchset needed a rebase. PFA v47.\n\n[1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=1489b1ce\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 21 Nov 2022 12:21:09 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Mon, Nov 21, 2022 at 12:21:09PM +0300, Aleksander Alekseev wrote:\n> After merging 1489b1ce [1] the patchset needed a rebase. PFA v47.\n> \n> [1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=1489b1ce\n\nThe CF bot is showing some failures here. You may want to\ndouble-check.\n--\nMichael",
"msg_date": "Fri, 2 Dec 2022 14:15:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi Michael,\n\n> The CF bot is showing some failures here. You may want to\n> double-check.\n\nThanks! PFA v48.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Wed, 7 Dec 2022 11:40:08 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-07 11:40:08 +0300, Aleksander Alekseev wrote:\n> Hi Michael,\n>\n> > The CF bot is showing some failures here. You may want to\n> > double-check.\n>\n> Thanks! PFA v48.\n\nThis causes a lot of failures with ubsan:\n\nhttps://cirrus-ci.com/task/6035600772431872\n\nperforming post-bootstrap initialization ... ../src/backend/access/transam/slru.c:1520:9: runtime error: load of misaligned address 0x7fff6362db8c for type 'int64', which requires 8 byte alignment\n0x7fff6362db8c: note: pointer points here\n 01 00 00 00 00 00 00 00 d0 02 00 00 00 00 00 00 d0 02 00 00 00 00 00 00 01 00 00 00 00 00 00 00\n ^\n==18947==Using libbacktrace symbolizer.\n #0 0x564d7c45cc6b in SlruScanDirCbReportPresence ../src/backend/access/transam/slru.c:1520\n #1 0x564d7c45cd88 in SlruScanDirectory ../src/backend/access/transam/slru.c:1595\n #2 0x564d7c44872c in TruncateCLOG ../src/backend/access/transam/clog.c:889\n #3 0x564d7c62ecd7 in vac_truncate_clog ../src/backend/commands/vacuum.c:1779\n #4 0x564d7c6320a8 in vac_update_datfrozenxid ../src/backend/commands/vacuum.c:1642\n #5 0x564d7c632a78 in vacuum ../src/backend/commands/vacuum.c:537\n #6 0x564d7c63347d in ExecVacuum ../src/backend/commands/vacuum.c:273\n #7 0x564d7ca4afea in standard_ProcessUtility ../src/backend/tcop/utility.c:866\n #8 0x564d7ca4b723 in ProcessUtility ../src/backend/tcop/utility.c:530\n #9 0x564d7ca46e81 in PortalRunUtility ../src/backend/tcop/pquery.c:1158\n #10 0x564d7ca4755d in PortalRunMulti ../src/backend/tcop/pquery.c:1315\n #11 0x564d7ca47c02 in PortalRun ../src/backend/tcop/pquery.c:791\n #12 0x564d7ca40ecb in exec_simple_query ../src/backend/tcop/postgres.c:1238\n #13 0x564d7ca43c01 in PostgresMain ../src/backend/tcop/postgres.c:4551\n #14 0x564d7ca441a4 in PostgresSingleUserMain ../src/backend/tcop/postgres.c:4028\n #15 0x564d7c74d883 in main ../src/backend/main/main.c:197\n #16 0x7fde7793dd09 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x23d09)\n #17 
0x564d7c2d30c9 in _start (/tmp/cirrus-ci-build/build/tmp_install/usr/local/pgsql/bin/postgres+0x8530c9)\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Dec 2022 09:50:27 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Wed, Dec 7, 2022 at 9:50 AM Andres Freund <andres@anarazel.de> wrote:\n> performing post-bootstrap initialization ... ../src/backend/access/transam/slru.c:1520:9: runtime error: load of misaligned address 0x7fff6362db8c for type 'int64', which requires 8 byte alignment\n> 0x7fff6362db8c: note: pointer points here\n> 01 00 00 00 00 00 00 00 d0 02 00 00 00 00 00 00 d0 02 00 00 00 00 00 00 01 00 00 00 00 00 00 00\n\nI bet that this alignment issue can be fixed by using PGAlignedBlock\ninstead of a raw char buffer for a page. (I'm guessing, I haven't\ndirectly checked.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 7 Dec 2022 09:57:01 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi Andres,\n\n> This causes a lot of failures with ubsan\n\nThanks for reporting this!\n\nI managed to reproduce the issue locally and to fix it. UBSAN is happy\nnow. PFA v49.\n\n> I bet that this alignment issue can be fixed by using PGAlignedBlock\n> instead of a raw char buffer for a page. (I'm guessing, I haven't\n> directly checked.)\n\nNo, actually the problem was much simpler.\n\n0001 changes SLRU page numbering from 32-bit to 64-bit one. This also\nchanged the SlruScanDirCbReportPresence() callback:\n\n```\nbool\nSlruScanDirCbReportPresence(SlruCtl ctl, char *filename, int64 segpage,\n void *data)\n{\n int64 cutoffPage = *(int64 *) data;\n```\n\nHowever TruncateCLOG() and TruncateCommitTs() were not changed\naccordingly in v47, they were passing a pointer to int32 as *data.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Fri, 9 Dec 2022 12:50:46 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi hackers,\n\n> I managed to reproduce the issue locally and to fix it. UBSAN is happy\n> now. PFA v49.\n\nMaxim Orlov pointed out offlist that this cast is not needed:\n\n```\n- SimpleLruTruncate(CommitTsCtl, (int)cutoffPage);\n+ SimpleLruTruncate(CommitTsCtl, cutoffPage);\n```\n\nSimpleLruTruncate() accepts cutoffPage as uint64 in 0001.\n\nPFA the corrected patchset v50.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Fri, 9 Dec 2022 16:42:31 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi!\n\nAs a result of discussion in the thread [0], Robert Haas proposed to focus\non making SLRU 64 bit, as a first step towards 64 bit XIDs.\nHere is the patch set.\n\nIn overall, code of this patch set is based on the existing code from [0]\nand may be simplified, due to the fact, that SLRU_PAGES_PER_SEGMENT is not\nmeant to be changed now.\nBut I decided to leave it that way. At least for now.\n\nAs always, reviews and opinions are very welcome.\n\nShould we change status for this thread to \"need review\"?\n\n[0]\nhttps://www.postgresql.org/message-id/flat/CA%2BTgmoZFmTGjgkmjgkcm2-vQq3_TzcoMKmVimvQLx9oJLbye0Q%40mail.gmail.com#03a4ab82569bb7b112db4a2f352d96b9\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Mon, 19 Dec 2022 17:40:52 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 6:41 AM Maxim Orlov <orlovmg@gmail.com> wrote:\n>\n> As always, reviews and opinions are very welcome.\n\n\nHi! I think that 64-bit xids are a very important feature and I want\nto help advance it. That's why I want to try to understand a patch\nbetter.\nDo I get it right that the proposed v51 patchset only changes the SLRU\nfilenames and type of pageno representation? Is SLRU wraparound still\nexactly there after 0xFFFFFFFF byte?\n\nThe thing is we had some nasty bugs because SLRU wraparound is tricky.\nAnd I think it would be beneficial if we could get to continuous SLRU\nspace. But the patch seems to avoid addressing this problem.\n\nAlso, I do not understand what is the reason for splitting 1st and 2nd\nsteps. Certainly, there must be some logic behind it, but I just can't\ngrasp it...\nAnd the purpose of the 3rd step with pg_upgrade changes is a complete\nmystery for me. Please excuse my incompetence in the topic, but maybe\nsome commit message or comments would help. What kind of xact segments\nconversion we do? Why is it only necessary for xacts, but not other\nSLRUs?\n\nThank you for working on this important project!\n\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Thu, 5 Jan 2023 21:30:49 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "\nOn Mon, 19 Dec 2022 at 22:40, Maxim Orlov <orlovmg@gmail.com> wrote:\n> Hi!\n>\n> As a result of discussion in the thread [0], Robert Haas proposed to focus\n> on making SLRU 64 bit, as a first step towards 64 bit XIDs.\n> Here is the patch set.\n>\n> In overall, code of this patch set is based on the existing code from [0]\n> and may be simplified, due to the fact, that SLRU_PAGES_PER_SEGMENT is not\n> meant to be changed now.\n> But I decided to leave it that way. At least for now.\n>\n> As always, reviews and opinions are very welcome.\n>\n\nFor v51-0003. We can use GetClogDirName instead of GET_MAJOR_VERSION in\ncopy_subdir_files().\n\ndiff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c\nindex 1c49c63444..3934978b97 100644\n--- a/src/bin/pg_upgrade/pg_upgrade.c\n+++ b/src/bin/pg_upgrade/pg_upgrade.c\n@@ -857,10 +857,7 @@ copy_xact_xlog_xid(void)\n \t\tpfree(new_path);\n \t}\n \telse\n-\t\tcopy_subdir_files(GET_MAJOR_VERSION(old_cluster.major_version) <= 906 ?\n-\t\t\t\t\t\t \"pg_clog\" : \"pg_xact\",\n-\t\t\t\t\t\t GET_MAJOR_VERSION(new_cluster.major_version) <= 906 ?\n-\t\t\t\t\t\t \"pg_clog\" : \"pg_xact\");\n+\t\tcopy_subdir_files(GetClogDirName(old_cluster), GetClogDirName(new_cluster));\n \n \tprep_status(\"Setting oldest XID for new cluster\");\n \texec_prog(UTILITY_LOG_FILE, NULL, true, true,\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Fri, 06 Jan 2023 14:51:23 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": ">\n> Do I get it right that the proposed v51 patchset only changes the SLRU\n> filenames and type of pageno representation? Is SLRU wraparound still\n> exactly there after 0xFFFFFFFF byte?\n>\nAfter applying the whole patch set, SLRU will become 64–bit without a\nwraparound. Thus, no wraparound\nshould be there.\n\n0001 - should make SLRU internally 64–bit, no effects from \"outside\"\n0002 - should make SLRU callers 64–bit, SLRU segment files naming are\nchanged\n0003 - make upgrade from previous versions feasible\n\n\n> Also, I do not understand what is the reason for splitting 1st and 2nd\n> steps. Certainly, there must be some logic behind it, but I just can't\n> grasp it...\n>\nAs we did discuss somewhere in the beginning of the discussion, we try to\nmake every commit as independent as possible.\nThus, it is much easier to review and commit. I see no problem to meld\nthese commits into one, if consensus will be reached.\n\n\n> And the purpose of the 3rd step with pg_upgrade changes is a complete\n> mystery for me. Please excuse my incompetence in the topic, but maybe\n> some commit message or comments would help. What kind of xact segments\n> conversion we do? Why is it only necessary for xacts, but not other\n> SLRUs?\n>\nThe purpose of the third patch is to make upgrade feasible. Since we've\nchange pg_xact files naming,\nPostgres could not read status of \"old\" transactions from \"old\" pg_xact\nfiles. So, we have to convert those files.\nThe major problem here is that we must handle possible segment wraparound\n(in \"old\" cluster). The whole idea\nfor an upgrade is to read SLRU pages for pg_xact one by one and write it in\na \"new\" filename.\n\nMaybe, It's just a little bit complicated, since the algorithm is intended\nto deal with different SLRU pages per segment\nin \"new\" and \"old\" clusters. But, on the other hand, it is already created\nin original patch set of 64–bit XIDs and will be useful\nin the future. 
AFAICS, arguably, any variant of 64–bit XIDs should lead to\nincrease of an amount of SLRU pages per segment.\n\nAnd as for other SLRUs, they cannot survive pg_upgrade mostly by the fact,\nthat cluster must be stopped upon upgrade.\nThus, no conversion needed.\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Mon, 9 Jan 2023 16:29:01 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi Andrey,\n\n> Hi! I think that 64-bit xids are a very important feature and I want\n> to help advance it. That's why I want to try to understand a patch\n> better.\n\nThanks for your interest to the patchset!\n\n> Do I get it right that the proposed v51 patchset only changes the SLRU\n> filenames and type of pageno representation? Is SLRU wraparound still\n> exactly there after 0xFFFFFFFF byte?\n\nOK, let me give some background then. I suspect you already know this,\nbut this can be useful for other reviewers. Additionally we have two\nrather large threads on our hands and it's easy to lose track of\nthings.\n\nSLRU is basically a general-purpose LRU implementation with ReadPage()\n/ WritePage() interface with the only exception that instead of\nsomething like Page* object it operates slot numbers (array indexes).\nSLRU is used as an underlying container for several internal\nPostgreSQL structures, most importantly CLOG. Despite the name CLOG is\nnot a log (journal) but rather a large bit array. For every\ntransaction it stores two bits that reflect the status of the\ntransaction (more detail in clog.c / clog.h).\n\nCurrently SLRU operates 32-bit page numbers. What we currently agreed\non [1] and what we are trying to achieve in this thread is to make\nSLRU pages 64-bit. The rest of the 64-bit XIDs is discussed in another\nthread [2].\n\nAs Robert Haas put it:\n\n> Specifically, there are a couple of patches in here that\n> have to do with making SLRUs indexed by 64-bit integers rather than by\n> 32-bit integers. We've had repeated bugs in the area of handling SLRU\n> wraparound in the past, some of which have caused data loss. Just by\n> chance, I ran across a situation just yesterday where an SLRU wrapped\n> around on disk for reasons that I don't really understand yet and\n> chaos ensued. 
Switching to an indexing system for SLRUs that does not\n> ever wrap around would probably enable us to get rid of a whole bunch\n> of crufty code, and would also likely improve the general reliability\n> of the system in situations where wraparound is threatened. It seems\n> like a really, really good idea.\n\nSo our goal here is to eliminate wrap-around for SLRU. It means that\nif I save something to the page 0x0000000012345678 it will stay there\nforever. Other parts of the system however have to form proper 64-bit\npage numbers in order to make it work. If they don't the wrap-around\nis possible for these particular subsystems (but not SLRU per se).\n\n> Also, I do not understand what is the reason for splitting 1st and 2nd\n> steps. Certainly, there must be some logic behind it, but I just can't\n> grasp it...\n\n0001 patch changes the SLRU internals without affecting the callers.\nIt also preserves the short SLRU filenames which means nothing changes\nfor an outside observer. All it changes is PostgreSQL binary. It can\nbe merged any time and even backported to the previous versions if we\nwant to.\n\nThe 0002 patch makes changes to the callers and also enlarges SLRU\nfilenames. For sure we could do everything at once, but it would\ncomplicate testing and more importantly code review. Personally I\nbelieve Maxim did a great job here. Both patches were easy to read and\nunderstand (relatively, of course).\n\n> And the purpose of the 3rd step with pg_upgrade changes is a complete\n> mystery for me.\n\n0001 and 0002 will work fine for new PostgreSQL instances. But if you\nhave an instance that already has on-disk state we have to move the\nSLRU segments accordingly. This is what 0003 does.\n\nThat's the theory at least. Personally I still have to meditate a bit\nmore on the code in order to get a good understanding of it,\nespecially the parts that deal with transaction epochs because this is\nsomething I have limited experience with. 
Also I wouldn't exclude the\npossibility of bugs. Particularly this part of 0003:\n\n```\n+ oldseg.segno = pageno / SLRU_PAGES_PER_SEGMENT;\n+ oldseg.pageno = pageno % SLRU_PAGES_PER_SEGMENT;\n+\n+ newseg.segno = pageno / SLRU_PAGES_PER_SEGMENT;\n+ newseg.pageno = pageno % SLRU_PAGES_PER_SEGMENT;\n```\n\nlooks suspicious to me.\n\nI agree that adding a couple of additional comments could be\nappropriate, especially when it comes to epochs.\n\n[1]: https://www.postgresql.org/message-id/CA%2BTgmoZFmTGjgkmjgkcm2-vQq3_TzcoMKmVimvQLx9oJLbye0Q%40mail.gmail.com\n[2]: https://www.postgresql.org/message-id/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 9 Jan 2023 16:30:47 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Fri, 6 Jan 2023 at 09:51, Japin Li <japinli@hotmail.com> wrote:\n\n>\n> For v51-0003. We can use GetClogDirName instead of GET_MAJOR_VERSION in\n> copy_subdir_files().\n>\n\nOf course! Tanks! I'll address this in the next iteration, v52.\n\n-- \nBest regards,\nMaxim Orlov.\n\nOn Fri, 6 Jan 2023 at 09:51, Japin Li <japinli@hotmail.com> wrote:\nFor v51-0003. We can use GetClogDirName instead of GET_MAJOR_VERSION in\ncopy_subdir_files().Of course! Tanks! I'll address this in the next iteration, v52.-- Best regards,Maxim Orlov.",
"msg_date": "Mon, 9 Jan 2023 17:03:27 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi!\n\nHere is a new patch set.\nI've added comments and make use GetClogDirName call in copy_subdir_files.\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Mon, 9 Jan 2023 17:15:21 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi Maxim,\n\n> Here is a new patch set.\n> I've added comments and make use GetClogDirName call in copy_subdir_files.\n\nJacob Champion pointed out (offlist, cc:'ed) that we may be wrong on this one:\n\n> 0001 patch changes the SLRU internals without affecting the callers.\n\n> 0001 - should make SLRU internally 64–bit, no effects from \"outside\"\n\n... and apparently Jacob is right.\n\nBesides other things 0001 modifies CLOG_ZEROPAGE and CLOG_TRUNCATE WAL\nrecords - it makes changes to WriteZeroPageXlogRec() and\nWriteTruncateXlogRec() and corresponding changes to clog_desc() and\nclog_redo().\n\nFirstly, it means that the patch doesn't change what it claims to\nchange. I think these changes should be part of 0002.\n\nSecondly, shouldn't we introduce a new WAL record type in order to\nmake the code backward compatible with previous PG versions? I'm not\n100% sure how the upgrade procedure works in all the details. If it\nrequires the DBMS to be gracefully shut down before the upgrade then\nwe are probably fine here.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 11 Jan 2023 12:22:50 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi Maxim,\n\n> Secondly, shouldn't we introduce a new WAL record type in order to\n> make the code backward compatible with previous PG versions? I'm not\n> 100% sure how the upgrade procedure works in all the details. If it\n> requires the DBMS to be gracefully shut down before the upgrade then\n> we are probably fine here.\n\nAfter reading [1] carefully it looks like we shouldn't worry about\nthis. The upgrade procedure explicitly requires to run `pg_ctl stop`\nduring step 8 of the upgrade procedure, i.e. not in the immediate mode\n[2]. It also has explicit instructions regarding the replicas. From\nwhat I can tell there is no way they will see WAL records they\nwouldn't understand.\n\n[1]: https://www.postgresql.org/docs/current/pgupgrade.html\n[2]: https://www.postgresql.org/docs/current/app-pg-ctl.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 11 Jan 2023 12:47:57 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 1:48 AM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> After reading [1] carefully it looks like we shouldn't worry about\n> this. The upgrade procedure explicitly requires to run `pg_ctl stop`\n> during step 8 of the upgrade procedure, i.e. not in the immediate mode\n> [2].\n\nYeah, pg_upgrade will briefly start and stop the old server to make\nsure all the WAL is replayed, and won't transfer any of the files\nover. AFAIK, major-version WAL changes are fine; it was the previous\nclaim that we could do it in a minor version that I was unsure about.\n\nThanks!\n--Jacob\n\n\n",
"msg_date": "Wed, 11 Jan 2023 08:49:52 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi hackers,\n\n> Yeah, pg_upgrade will briefly start and stop the old server to make\n> sure all the WAL is replayed, and won't transfer any of the files\n> over. AFAIK, major-version WAL changes are fine; it was the previous\n> claim that we could do it in a minor version that I was unsure about.\n\nOK, here is the patchset v53 where I mostly modified the commit\nmessages. It is explicitly said that 0001 modifies the WAL records and\nwhy we decided to do it in this patch. Additionally any mention of\n64-bit XIDs is removed since it is not guaranteed that the rest of the\npatches are going to be accepted. 64-bit SLRU page numbering is a\nvaluable change per se.\n\nChanging the status of the CF entry to RfC apparently was a bit\npremature. It looks like the patchset can use a few more rounds of\nreview.\n\nIn 0002:\n\n```\n-#define TransactionIdToCTsPage(xid) \\\n- ((xid) / (TransactionId) COMMIT_TS_XACTS_PER_PAGE)\n+static inline int64\n+TransactionIdToCTsPageInternal(TransactionId xid, bool lock)\n+{\n+ FullTransactionId fxid,\n+ nextXid;\n+ uint32 epoch;\n+\n+ if (lock)\n+ LWLockAcquire(XidGenLock, LW_SHARED);\n+\n+ /* make a local copy */\n+ nextXid = ShmemVariableCache->nextXid;\n+\n+ if (lock)\n+ LWLockRelease(XidGenLock);\n+\n+ epoch = EpochFromFullTransactionId(nextXid);\n+ if (xid > XidFromFullTransactionId(nextXid))\n+ --epoch;\n+\n+ fxid = FullTransactionIdFromEpochAndXid(epoch, xid);\n+\n+ return fxid.value / (uint64) COMMIT_TS_XACTS_PER_PAGE;\n+}\n```\n\nI'm pretty confident that shared memory can't be accessed like this,\nwithout taking a lock. Although it may work on x64 generally we can\nget garbage, unless nextXid is accessed atomically and has a\ncorresponding atomic type. On top of that I'm pretty sure\nTransactionIds can't be compared with the regular comparison\noperators. 
All in all, so far I don't understand why this piece of\ncode should be so complicated.\n\nThe same applies to:\n\n```\n-#define TransactionIdToPage(xid) ((xid) / (TransactionId)\nSUBTRANS_XACTS_PER_PAGE)\n+static inline int64\n+TransactionIdToPageInternal(TransactionId xid, bool lock)\n```\n\n... in subtrans.c\n\nMaxim, perhaps you could share with us what your reasoning was here?\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 17 Jan 2023 16:32:56 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi,\n\n> OK, here is the patchset v53 where I mostly modified the commit\n> messages. It is explicitly said that 0001 modifies the WAL records and\n> why we decided to do it in this patch. Additionally any mention of\n> 64-bit XIDs is removed since it is not guaranteed that the rest of the\n> patches are going to be accepted. 64-bit SLRU page numbering is a\n> valuable change per se.\n>\n> Changing the status of the CF entry to RfC apparently was a bit\n> premature. It looks like the patchset can use a few more rounds of\n> review.\n>\n> In 0002:\n>\n> [...]\n>\n> Maxim, perhaps you could share with us what your reasoning was here?\n\nI played with the patch a bit and managed to figure out what you tried\nto accomplish. Unfortunately generally you can't derive a\nFullTransactionId from a TransactionId, and you can't access\nShmemVariableCache fields without taking a lock unless during the\nstartup when there are no concurrent processes.\n\nI don't think this patch should do anything but change the SLRU\nindexing from 32-bit to 64-bit one. Trying to address the wraparounds\nwould be nice but I'm afraid we are not quite there yet.\n\nAlso I found strage little changes that seemed to be unrelated to the\npatch. I believe they ended up here by accident (used to be a part of\n64-bit XIDs patchset) and removed them.\n\nPFA the cleaned up version of the patch. I noticed that splitting it\ninto parts doesn't help much with the review or testing, nor seems it\nlikely that the patches are going to be merged separately one by one.\nFor these reasons I merged everything into a single patch.\n\nThe convert_pg_xact_segments() function is still obviously\noverengineered. As I understand, all it has to do is simply renaming\npg_xact/XXXX to pg_xact/00000000XXXX. Unfortunately I used up all the\nmana for today and have to take a long rest in order to continue.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 20 Feb 2023 18:30:31 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi,\n\n> The convert_pg_xact_segments() function is still obviously\n> overengineered. As I understand, all it has to do is simply renaming\n> pg_xact/XXXX to pg_xact/00000000XXXX. Unfortunately I used up all the\n> mana for today and have to take a long rest in order to continue.\n\nPFA the corrected patch v55.\n\nOne thing that still bothers me is that during the upgrade we only\nmigrate the CLOG segments (i.e. pg_xact / pg_clog) and completely\nignore all the rest of SLRUs:\n\n* pg_commit_ts\n* pg_multixact/offsets\n* pg_multixact/members\n* pg_subtrans\n* pg_notify\n* pg_serial\n\nMy knowledge in this area is somewhat limited and I can't tell whether\nthis is OK. I will investigate but also I could use some feedback from\nthe reviewers.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 21 Feb 2023 16:58:54 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Tue, 21 Feb 2023 at 16:59, Aleksander Alekseev <aleksander@timescale.com>\nwrote:\n\n> Hi,\n>\n> One thing that still bothers me is that during the upgrade we only\n> migrate the CLOG segments (i.e. pg_xact / pg_clog) and completely\n> ignore all the rest of SLRUs:\n>\n> * pg_commit_ts\n> * pg_multixact/offsets\n> * pg_multixact/members\n> * pg_subtrans\n> * pg_notify\n> * pg_serial\n\n\nHi! We do ignore these values, since in order to pg_upgrade the server it\nmust be properly stopped and no transactions can outlast this moment.\n\n\n-- \nBest regards,\nMaxim Orlov.\n\nOn Tue, 21 Feb 2023 at 16:59, Aleksander Alekseev <aleksander@timescale.com> wrote:Hi,\n\nOne thing that still bothers me is that during the upgrade we only\nmigrate the CLOG segments (i.e. pg_xact / pg_clog) and completely\nignore all the rest of SLRUs:\n\n* pg_commit_ts\n* pg_multixact/offsets\n* pg_multixact/members\n* pg_subtrans\n* pg_notify\n* pg_serial Hi! We do ignore these values, since in order to pg_upgrade the server it must be properly stopped and no transactions can outlast this moment.-- Best regards,Maxim Orlov.",
"msg_date": "Wed, 22 Feb 2023 17:29:29 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "(CC list trimmed, gmail wouldn't let me send otherwise)\n\nOn 22/02/2023 16:29, Maxim Orlov wrote:\n> On Tue, 21 Feb 2023 at 16:59, Aleksander Alekseev \n> <aleksander@timescale.com <mailto:aleksander@timescale.com>> wrote:\n> One thing that still bothers me is that during the upgrade we only\n> migrate the CLOG segments (i.e. pg_xact / pg_clog) and completely\n> ignore all the rest of SLRUs:\n> \n> * pg_commit_ts\n> * pg_multixact/offsets\n> * pg_multixact/members\n> * pg_subtrans\n> * pg_notify\n> * pg_serial\n> \n> Hi! We do ignore these values, since in order to pg_upgrade the server \n> it must be properly stopped and no transactions can outlast this moment.\n\nThat sounds right for pg_serial, pg_notify, and pg_subtrans. But not for \npg_commit_ts and the pg_multixacts.\n\nThis needs tests for pg_upgrading those SLRUs, after 0, 1 and N wraparounds.\n\nI'm surprised that these patches extend the page numbering to 64 bits, \nbut never actually uses the high bits. The XID \"epoch\" is not used, and \npg_xact still wraps around and the segment names are still reused. I \nthought we could stop doing that. Certainly if we start supporting \n64-bit XIDs properly, that will need to change and we will pg_upgrade \nwill need to rename the segments again.\n\nThe previous versions of these patches did that, but I think you changed \ntact in response to Robert's suggestion at [1]:\n\n> Lest we miss the forest for the trees, there is an aspect of this\n> patch that I find to be an extremely good idea and think we should try\n> to get committed even if the rest of the patch set ends up in the\n> rubbish bin. Specifically, there are a couple of patches in here that\n> have to do with making SLRUs indexed by 64-bit integers rather than by\n> 32-bit integers. We've had repeated bugs in the area of handling SLRU\n> wraparound in the past, some of which have caused data loss. 
Just by\n> chance, I ran across a situation just yesterday where an SLRU wrapped\n> around on disk for reasons that I don't really understand yet and\n> chaos ensued. Switching to an indexing system for SLRUs that does not\n> ever wrap around would probably enable us to get rid of a whole bunch\n> of crufty code, and would also likely improve the general reliability\n> of the system in situations where wraparound is threatened. It seems\n> like a really, really good idea.\n\nThese new versions of this patch don't achieve the goal of avoiding \nwraparound. I think the previous versions that did that was the right \napproach.\n\n[1] \nhttps://www.postgresql.org/message-id/CA%2BTgmoZFmTGjgkmjgkcm2-vQq3_TzcoMKmVimvQLx9oJLbye0Q%40mail.gmail.com\n\n- Heikki\n\n\n\n",
"msg_date": "Mon, 27 Feb 2023 11:10:38 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Mon, 27 Feb 2023 at 12:10, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> (CC list trimmed, gmail wouldn't let me send otherwise)\n>\n> That sounds right for pg_serial, pg_notify, and pg_subtrans. But not for\n> pg_commit_ts and the pg_multixacts.\n>\n> This needs tests for pg_upgrading those SLRUs, after 0, 1 and N\n> wraparounds.\n>\n> Yep, that's my fault. I've forgotten about pg_multixacts. But for the\npg_commit_ts it's a bit complicated story.\nThe thing is, if we do upgrade, old files from pg_commit_ts not copied into\na new server.\n\nFor example, I've checked one more time on the current master branch:\n1). initdb\n2). add \"track_commit_timestamp = on\" into postgresql.conf\n3). pgbench\n4). $ ls pg_commit_ts/\n 0000 0005 000A 000F 0014 0019 001E 0023...\n ...009A 009F 00A4 00A9 00AE 00B3 00B8 00BD 00C2\n5). do pg_upgrade\n6). $ ls pg_commit_ts/\n 0000 00C2\n\nEither I do not understand something, or the files from pg_commit_ts\ndirectory are not copied.\n\n-- \nBest regards,\nMaxim Orlov.\n\nOn Mon, 27 Feb 2023 at 12:10, Heikki Linnakangas <hlinnaka@iki.fi> wrote:(CC list trimmed, gmail wouldn't let me send otherwise)\n\nThat sounds right for pg_serial, pg_notify, and pg_subtrans. But not for \npg_commit_ts and the pg_multixacts.\n\nThis needs tests for pg_upgrading those SLRUs, after 0, 1 and N wraparounds.\nYep, that's my fault. I've forgotten about pg_multixacts. But for the pg_commit_ts it's a bit complicated story. The thing is, if we do upgrade, old files from pg_commit_ts not copied into a new server. For example, I've checked one more time on the current master branch:1). initdb2). add \"track_commit_timestamp = on\" into postgresql.conf3). pgbench4). $ ls pg_commit_ts/ 0000 0005 000A 000F 0014 0019 001E 0023... ...009A 009F 00A4 00A9 00AE 00B3 00B8 00BD 00C25). do pg_upgrade6). $ ls pg_commit_ts/ 0000 00C2Either I do not understand something, or the files from pg_commit_ts directory are not copied. 
-- Best regards,Maxim Orlov.",
"msg_date": "Tue, 28 Feb 2023 17:17:10 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On 28/02/2023 16:17, Maxim Orlov wrote:\n> Either I do not understand something, or the files from pg_commit_ts \n> directory are not copied.\n\nHuh, yeah you're right, pg_upgrade doesn't copy pg_commit_ts.\n\n- Heikki\n\n\n\n",
"msg_date": "Tue, 28 Feb 2023 19:37:23 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi,\n\n> I'm surprised that these patches extend the page numbering to 64 bits,\n> but never actually uses the high bits. The XID \"epoch\" is not used, and\n> pg_xact still wraps around and the segment names are still reused. I\n> thought we could stop doing that.\n\nTo clarify, the idea is to let CLOG grow indefinitely and simply store\nFullTransactionId -> TransactionStatus (two bits). Correct?\n\nI didn't investigate this in much detail but it may affect quite some\namount of code since TransactionIdDidCommit() and\nTransactionIdDidCommit() currently both deal with TransactionId, not\nFullTransactionId. IMO, this would be a nice change however, assuming\nwe are ready for it.\n\nIn the previous version of the patch there was an attempt to derive\nFullTransactionId from TransactionId but it was wrong for the reasons\nnamed above in the thread. Thus is was removed and the patch\nsimplified.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 1 Mar 2023 13:21:21 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
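As a back-of-the-envelope illustration of what "CLOG indexed by FullTransactionId" means, the page arithmetic can be sketched like this (constants mirror the defaults in clog.c; this is an illustration, not the patch's code):

```python
BLCKSZ = 8192                  # default PostgreSQL block size
CLOG_BITS_PER_XACT = 2         # a TransactionStatus fits in two bits
CLOG_XACTS_PER_BYTE = 8 // CLOG_BITS_PER_XACT
CLOG_XACTS_PER_PAGE = BLCKSZ * CLOG_XACTS_PER_BYTE   # 32768

def clog_page_for(full_xid):
    # With a 64-bit page number the result simply keeps growing as the
    # FullTransactionId advances; there is no wraparound to handle.
    return full_xid // CLOG_XACTS_PER_PAGE
```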
{
"msg_contents": "On 01/03/2023 12:21, Aleksander Alekseev wrote:\n> Hi,\n> \n>> I'm surprised that these patches extend the page numbering to 64 bits,\n>> but never actually uses the high bits. The XID \"epoch\" is not used, and\n>> pg_xact still wraps around and the segment names are still reused. I\n>> thought we could stop doing that.\n> \n> To clarify, the idea is to let CLOG grow indefinitely and simply store\n> FullTransactionId -> TransactionStatus (two bits). Correct?\n\nCorrect.\n\n> I didn't investigate this in much detail but it may affect quite some\n> amount of code since TransactionIdDidCommit() and\n> TransactionIdDidCommit() currently both deal with TransactionId, not\n> FullTransactionId. IMO, this would be a nice change however, assuming\n> we are ready for it.\n\nYep, it's a lot of code churn..\n\n> In the previous version of the patch there was an attempt to derive\n> FullTransactionId from TransactionId but it was wrong for the reasons\n> named above in the thread. Thus is was removed and the patch\n> simplified.\n\nYeah, it's tricky to get it right. Clearly we need to do it at some \npoint though.\n\nAll in all, this is a big effort. I spent some more time reviewing this \nin the last few days, and thought a lot about what the path forward here \ncould be. And I haven't looked at the actual 64-bit XIDs patch set yet, \njust this patch to use 64-bit addressing in SLRUs.\n\nThis patch is the first step, but we have a bit of a chicken and egg \nproblem, because this patch on its own isn't very interesting, but on \nthe other hand, we need it to work on the follow up items. Here's how I \nsee the development path for this (and again, this is just for the \n64-bit SLRUs work, not the bigger 64-bit-XIDs-in-heapam effort):\n\n1. Use 64 bit page numbers in SLRUs (this patch)\n\nI would like to make one change here: I'd prefer to keep the old 4-digit \nsegment names, until we actually start to use the wider address space. 
\nLet's allow each SLRU to specify how many digits to use in the \nfilenames, so that we convert one SLRU at a time.\n\nIf we do that, and don't change any of the existing SLRUs to actually \nuse the wider space of page and segment numbers yet, this patch becomes \njust refactoring with no on-disk format changes. No pg_upgrade needed.\n\nThe next patches will start to make use of the wider address space, one \nSLRU at a time.\n\n2. Use the larger segment file names in async.c, to lift the current 8 \nGB limit on the max number of pending notifications.\n\nNo one actually minds the limit, it's quite generous as it is. But there \nis some code and complexity in async.c to avoid the wraparound that \ncould be made simpler if we used longer SLRU segment names and avoided \nthe wraparound altogether.\n\nI wonder if we should actually add an artificial limit, as a GUC. If \nthere are gigabytes of notifications queued up, something's probably \nwrong with the system, and you're not going to be happy if we just \nremove the limit so it can grow to terabytes until you run out of disk \nspace.\n\n3. Extend pg_multixact so that pg_multixact/members is addressed by \n64-bit offsets.\n\nCurrently, multi-XIDs can wrap around, requiring anti-wraparound \nfreezing, but independently of that, the pg_multixact/members SLRU can \nalso wrap around. We track both, and trigger anti-wraparound if either \nSLRU is about to wrap around. If we redefine MultiXactOffset as a 64-bit \ninteger, we can avoid the pg_multixact/members wraparound altogether. A \ndownside is that pg_multixact/offsets will take twice as much space, but \nI think that's a good tradeoff. Or perhaps we can play tricks like store \na single 64-bit offset on each pg_multixact/offsets page, and a 32-bit \noffset from that for each XID, to avoid making it so much larger.\n\nThis would reduce the need to do anti-wraparound VACUUMs on systems that \nuse multixacts heavily. Needs pg_upgrade support.\n\n4. 
Extend pg_subtrans to 64-bits.\n\nThis isn't all that interesting because the active region of pg_subtrans \ncannot be wider than 32 bits anyway, because you'll still reach the \ngeneral 32-bit XID wraparound. But it might be less confusing in some \nplaces.\n\nI actually started to write a patch to do this, to see how complicated \nit is. It quickly proliferates into expanding other XIDs to 64-bits, \nlike TransactionXmin, frozenXid calculation in vacuum.c, known-assigned \nXID tracking in procarray.c. etc. It's going to be necessary to convert \n32-bit XIDs to FullTransactionIds at some boundaries, and I'm not sure \nwhere exactly that should happen. It's easier to do the conversions \nclose to subtrans.c, but then I'm not sure how much it gets us in terms \nof reducing confusion. It's easy to get confused with the epochs during \nconversions, as you noted. On the other hand, if we change much more of \nthe backend to use FullTransactionIds, the patch becomes much more invasive.\n\nNice thing with pg_subtrans, though, is that it doesn't require \npg_upgrade support.\n\n5. Extend pg_xact to 64-bits.\n\nSimilar to pg_subtrans, really, but needs pg_upgrade support.\n\n6. (a bonus thing that I noticed while thinking of pg_xact.) Extend \npg_twophase.c, to use FullTransactionIds.\n\nCurrently, the twophase state files in pg_twophase are named according \nto the 32 bit Xid of the transaction. Let's switch to FullTransactionId \nthere.\n\n\n\nAs we start to refactor these things, I also think it would be good to \nhave more explicit tracking of the valid range of SLRU pages in each \nSLRU. Take pg_subtrans for example: it's not very clear what pages have \nbeen initialized, especially during different stages of startup. It \nwould be good to have clear start and end page numbers, and throw an \nerror if you try to look up anything outside those bounds. Same for all \nother SLRUs.\n\nI propose that we try to finish 1 and 2 for v16. And maybe 6. I think \nthat's doable. 
It doesn't have any great user-visible benefits yet, but \nwe need to start somewhere.\n\n- Heikki\n\n\n\n",
"msg_date": "Mon, 6 Mar 2023 21:48:02 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
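The "per-SLRU filename width" idea from item 1 of the plan above can be pictured with a small sketch (a hypothetical helper, not the actual slru.c API; 12 hex digits are assumed for the long form):

```python
def slru_segment_filename(segno, long_segment_names=False):
    # A legacy SLRU keeps the old 4-character names (e.g. 0FFF); an SLRU
    # that has been converted to the wider 64-bit segment space uses
    # 12 characters (e.g. 000000000FFF). Letting each SLRU pick its own
    # width means they can be migrated one at a time.
    width = 12 if long_segment_names else 4
    return f'{segno:0{width}X}'
```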
{
"msg_contents": "On Tue, 17 Jan 2023 at 16:33, Aleksander Alekseev <aleksander@timescale.com>\nwrote:\n\n> Hi hackers,\n>\n> Maxim, perhaps you could share with us what your reasoning was here?\n>\n> I'm really sorry for late response, but better late than never. Yes, we\ncan not access shared memory without lock.\nIn this particular case, we use XidGenLock. That is why we use lock\nargument to take it is it was not taken previously.\nActually, we may place assertion in this insist.\n\nAs for xid compare: we do not compare xids here, we are checking for\nwraparound, so, AFAICS, this code is correct.\n\n\n\nOn Mon, 6 Mar 2023 at 22:48, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n>\n> 1. Use 64 bit page numbers in SLRUs (this patch)\n>\n> 2. Use the larger segment file names in async.c, to lift the current 8\n> GB limit on the max number of pending notifications.\n>\n> 3. Extend pg_multixact so that pg_multixact/members is addressed by\n> 64-bit offsets.\n>\n> 4. Extend pg_subtrans to 64-bits.\n>\n> 5. Extend pg_xact to 64-bits.\n>\n>\n> 6. (a bonus thing that I noticed while thinking of pg_xact.) Extend\n> pg_twophase.c, to use FullTransactionIds.\n>\n> Currently, the twophase state files in pg_twophase are named according\n> to the 32 bit Xid of the transaction. Let's switch to FullTransactionId\n> there.\n>\n> ...\n>\n> I propose that we try to finish 1 and 2 for v16. And maybe 6. I think\n> that's doable. It doesn't have any great user-visible benefits yet, but\n> we need to start somewhere.\n>\n> - Heikki\n>\n> Yes, this is a great idea! My only concern here is that we're going in\ncircles here. You see, patch 1 is what was proposed\nin the beginning of this thread. Anyway, I will be happy if we are being\nable to push this topic forward.\n\nAs for making pg_multixact 64 bit, I spend the last couple of days to make\nproper pg_upgrade for pg_multixact's and for pg_xact's\nwith wraparound and I've understood, that it is not a simple task compare\nto pg_xact's. 
The problem is, we do not have an epoch for\nmultixacts, so we do not have the ability to \"overcome\" the wraparound. The\nsolution may be adding some kind of epoch for multixacts or\nmaking them 64 bit in the \"main\" 64-xid patch, but in the perspective of this\nthread, in my view, this should be last in line here.\n\nIn pg_xact we do not have such a problem, we do have an epoch for transactions,\nso the conversion should be pretty obvious:\n0000 -> 000000000000\n0001 -> 000000000001\n...\n0FFE -> 000000000FFE\n0FFF -> 000000000FFF\n0000 -> 000000001000\n0001 -> 000000001001\n\nSo, in my view, the plan should be:\n1. Use internal 64 bit page numbers in SLRUs without changing segments\nnaming.\n2. Use the larger segment file names in async.c, to lift the current 8 GB\nlimit on the max number of pending notifications.\n3. Extend pg_xact to 64-bits.\n4. Extend pg_subtrans to 64-bits.\n5. Extend pg_multixact so that pg_multixact/members is addressed by 64-bit\noffsets.\n6. Extend pg_twophase.c, to use FullTransactionIds. (a bonus thing)\n\nThoughts?\n\n-- \nBest regards,\nMaxim Orlov.\n\nOn Mon, 6 Mar 2023 at 22:48, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> On 01/03/2023 12:21, Aleksander Alekseev wrote:\n> > Hi,\n> >\n> >> I'm surprised that these patches extend the page numbering to 64 bits,\n> >> but never actually uses the high bits. The XID \"epoch\" is not used, and\n> >> pg_xact still wraps around and the segment names are still reused. I\n> >> thought we could stop doing that.\n> >\n> > To clarify, the idea is to let CLOG grow indefinitely and simply store\n> > FullTransactionId -> TransactionStatus (two bits). Correct?\n>\n> Correct.\n>\n> > I didn't investigate this in much detail but it may affect quite some\n> > amount of code since TransactionIdDidCommit() and\n> > TransactionIdDidCommit() currently both deal with TransactionId, not\n> > FullTransactionId.
IMO, this would be a nice change however, assuming\n> > we are ready for it.\n>\n> Yep, it's a lot of code churn..\n>\n> > In the previous version of the patch there was an attempt to derive\n> > FullTransactionId from TransactionId but it was wrong for the reasons\n> > named above in the thread. Thus is was removed and the patch\n> > simplified.\n>\n> Yeah, it's tricky to get it right. Clearly we need to do it at some\n> point though.\n>\n> All in all, this is a big effort. I spent some more time reviewing this\n> in the last few days, and thought a lot about what the path forward here\n> could be. And I haven't looked at the actual 64-bit XIDs patch set yet,\n> just this patch to use 64-bit addressing in SLRUs.\n>\n> This patch is the first step, but we have a bit of a chicken and egg\n> problem, because this patch on its own isn't very interesting, but on\n> the other hand, we need it to work on the follow up items. Here's how I\n> see the development path for this (and again, this is just for the\n> 64-bit SLRUs work, not the bigger 64-bit-XIDs-in-heapam effort):\n>\n> 1. Use 64 bit page numbers in SLRUs (this patch)\n>\n> I would like to make one change here: I'd prefer to keep the old 4-digit\n> segment names, until we actually start to use the wider address space.\n> Let's allow each SLRU to specify how many digits to use in the\n> filenames, so that we convert one SLRU at a time.\n>\n> If we do that, and don't change any of the existing SLRUs to actually\n> use the wider space of page and segment numbers yet, this patch becomes\n> just refactoring with no on-disk format changes. No pg_upgrade needed.\n>\n> The next patches will start to make use of the wider address space, one\n> SLRU at a time.\n>\n> 2. Use the larger segment file names in async.c, to lift the current 8\n> GB limit on the max number of pending notifications.\n>\n> No one actually minds the limit, it's quite generous as it is. But there\n> is some code and complexity in async.c to avoid the wraparound that\n> could be made simpler if we used longer SLRU segment names and avoided\n> the wraparound altogether.\n>\n> I wonder if we should actually add an artificial limit, as a GUC. If\n> there are gigabytes of notifications queued up, something's probably\n> wrong with the system, and you're not going to be happy if we just\n> remove the limit so it can grow to terabytes until you run out of disk\n> space.\n>\n> 3. Extend pg_multixact so that pg_multixact/members is addressed by\n> 64-bit offsets.\n>\n> Currently, multi-XIDs can wrap around, requiring anti-wraparound\n> freezing, but independently of that, the pg_multixact/members SLRU can\n> also wrap around. We track both, and trigger anti-wraparound if either\n> SLRU is about to wrap around. If we redefine MultiXactOffset as a 64-bit\n> integer, we can avoid the pg_multixact/members wraparound altogether. A\n> downside is that pg_multixact/offsets will take twice as much space, but\n> I think that's a good tradeoff. Or perhaps we can play tricks like store\n> a single 64-bit offset on each pg_multixact/offsets page, and a 32-bit\n> offset from that for each XID, to avoid making it so much larger.\n>\n> This would reduce the need to do anti-wraparound VACUUMs on systems that\n> use multixacts heavily. Needs pg_upgrade support.\n>\n> 4. Extend pg_subtrans to 64-bits.\n>\n> This isn't all that interesting because the active region of pg_subtrans\n> cannot be wider than 32 bits anyway, because you'll still reach the\n> general 32-bit XID wraparound. But it might be less confusing in some\n> places.\n>\n> I actually started to write a patch to do this, to see how complicated\n> it is. It quickly proliferates into expanding other XIDs to 64-bits,\n> like TransactionXmin, frozenXid calculation in vacuum.c, known-assigned\n> XID tracking in procarray.c. etc.
It's going to be necessary to convert\n> 32-bit XIDs to FullTransactionIds at some boundaries, and I'm not sure\n> where exactly that should happen. It's easier to do the conversions\n> close to subtrans.c, but then I'm not sure how much it gets us in terms\n> of reducing confusion. It's easy to get confused with the epochs during\n> conversions, as you noted. On the other hand, if we change much more of\n> the backend to use FullTransactionIds, the patch becomes much more\n> invasive.\n>\n> Nice thing with pg_subtrans, though, is that it doesn't require\n> pg_upgrade support.\n>\n> 5. Extend pg_xact to 64-bits.\n>\n> Similar to pg_subtrans, really, but needs pg_upgrade support.\n>\n> 6. (a bonus thing that I noticed while thinking of pg_xact.) Extend\n> pg_twophase.c, to use FullTransactionIds.\n>\n> Currently, the twophase state files in pg_twophase are named according\n> to the 32 bit Xid of the transaction. Let's switch to FullTransactionId\n> there.\n>\n>\n>\n> As we start to refactor these things, I also think it would be good to\n> have more explicit tracking of the valid range of SLRU pages in each\n> SLRU. Take pg_subtrans for example: it's not very clear what pages have\n> been initialized, especially during different stages of startup. It\n> would be good to have clear start and end page numbers, and throw an\n> error if you try to look up anything outside those bounds. Same for all\n> other SLRUs.\n>\n> I propose that we try to finish 1 and 2 for v16. And maybe 6. I think\n> that's doable. It doesn't have any great user-visible benefits yet, but\n> we need to start somewhere.\n>\n> - Heikki\n>\n>\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Tue, 7 Mar 2023 14:38:50 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
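The epoch-based pg_xact segment mapping discussed in the message above can be written down as simple arithmetic. Assuming the default layout (32768 xacts per 8 kB page, 32 pages per segment), one 32-bit xid epoch spans 0x1000 legacy segments, so the 12-digit name is just epoch * 0x1000 plus the old segment number (a sketch under those assumptions, not the patch's code):

```python
SEGMENTS_PER_EPOCH = 0x1000   # 2**32 xacts / (32768 per page * 32 pages)

def convert_segment_name(old_name, epoch):
    # e.g. ('0FFF', 0) maps to '000000000FFF'; after one wraparound the
    # reused name ('0000', 1) maps to '000000001000'.
    segno = epoch * SEGMENTS_PER_EPOCH + int(old_name, 16)
    return f'{segno:012X}'
```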
{
"msg_contents": "On 07/03/2023 13:38, Maxim Orlov wrote:\n> As for making pg_multixact 64 bit, I spend the last couple of days to \n> make proper pg_upgrade for pg_multixact's and for pg_xact's\n> with wraparound and I've understood, that it is not a simple task \n> compare to pg_xact's. The problem is, we do not have epoch for\n> multixacts, so we do not have ability to \"overcome\" wraparound. The \n> solution may be adding some kind of epoch for multixacts or\n> make them 64 bit in \"main\" 64-xid patch, but in perspective of this \n> thread, in my view, this should be last in line here.\n\nThat is true for pg_multixact/offsets. We will indeed need to add an \nepoch and introduce the concept of FullMultiXactIds for that. However, \nwe can change pg_multixact/members independently of that. We can extend \nMultiXactOffset from 32 to 64 bits, and eliminate pg_multixact/members \nwraparound, while keeping multi-xids 32 bits wide.\n\n- Heikki\n\n\n\n",
"msg_date": "Tue, 7 Mar 2023 14:38:14 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi,\n\n> Here's how I see the development path for this [...]\n\n> So, in my view, the plan should be [...]\n> Thoughts?\n\nThe plan looks great! I would also explicitly include this:\n\n> As we start to refactor these things, I also think it would be good to\n> have more explicit tracking of the valid range of SLRU pages in each\n> SLRU. Take pg_subtrans for example: it's not very clear what pages have\n> been initialized, especially during different stages of startup. It\n> would be good to have clear start and end page numbers, and throw an\n> error if you try to look up anything outside those bounds. Same for all\n> other SLRUs.\n\nSo the plan becomes:\n\n1. Use internal 64 bit page numbers in SLRUs without changing segments naming.\n2. Use the larger segment file names in async.c, to lift the current 8\nGB limit on the max number of pending notifications.\n3. Extend pg_xact to 64-bits.\n4. Extend pg_subtrans to 64-bits.\n5. Extend pg_multixact so that pg_multixact/members is addressed by\n64-bit offsets.\n6. Extend pg_twophase.c, to use FullTransactionIds. (a bonus thing)\n7. More explicit tracking of the valid range of SLRU pages in each SLRU\n\nWhere our main focus for PG16 is going to be 1 and 2, and we may try\nto deliver 6 and 7 too if time will permit.\n\nMaxim and I agreed (offlist) that I'm going to submit 1 and 2. The\npatch 1 is ready, please see the attachment. I'm currently working on\n2 and going to submit it in a bit. It seems to be relatively\nstraightforward but I don't want to rush things and want to make sure\nI didn't miss something.\n\n> I wonder if we should actually add an artificial limit, as a GUC.\n\nYes, I think we need some sort of limit. Using a GUC would be the most\nstraightforward approach. Alternatively we could derive the limit from\nthe existing GUCs, similarly to how we derive the default value of\nwal_buffers from shared_buffers [1]. 
However, off the top of my head\nwe only have max_wal_size and it doesn't strike me as a good candidate\nfor deriving something for NOTIFY / LISTEN.\n\nSo I'm going to add a max_notify_segments GUC with the default value of\n65536 as it is now. If the new GUC receives pushback from the\ncommunity, we can always use a hard-coded value instead, or no limit at\nall.\n\n[1]: https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-WAL-BUFFERS\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 7 Mar 2023 16:18:13 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Tue, 7 Mar 2023 at 15:38, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n>\n> That is true for pg_multixact/offsets. We will indeed need to add an\n> epoch and introduce the concept of FullMultiXactIds for that. However,\n> we can change pg_multixact/members independently of that. We can extend\n> MultiXactOffset from 32 to 64 bits, and eliminate pg_multixact/members\n> wraparound, while keeping multi-xids 32 bits wide.\n>\nYes, you're totally correct. If it will be committable that way, I'm all\nfor that.\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Tue, 7 Mar 2023 17:06:43 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi,\n\n> 1. Use internal 64 bit page numbers in SLRUs without changing segments naming.\n> 2. Use the larger segment file names in async.c, to lift the current 8\n> GB limit on the max number of pending notifications.\n> [...]\n>\n> Where our main focus for PG16 is going to be 1 and 2, and we may try\n> to deliver 6 and 7 too if time will permit.\n>\n> Maxim and I agreed (offlist) that I'm going to submit 1 and 2. The\n> patch 1 is ready, please see the attachment. I'm currently working on\n> 2 and going to submit it in a bit. It seems to be relatively\n> straightforward but I don't want to rush things and want to make sure\n> I didn't miss something.\n\nPFA the patch v57 which now includes both 1 and 2.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Thu, 9 Mar 2023 18:21:38 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi!\n\nAs suggested before by Heikki Linnakangas, I've added a patch for making\nthe 2PC transaction state 64-bit.\nAt first, my intention was to rebuild the whole twophase interface to use\nFullTransactionId. But doing this in a proper\nmanner would lead to switching from TransactionId to FullTransactionId in\nPGPROC, and the patch would become too\nbig to handle here.\n\nSo I decided to keep it simple for now: use the wrap-logic trick and calculate the\nFullTransactionId from the current epoch,\nsince the span of active xids cannot exceed one epoch at any given time.\n\n\nPatches 1 and 2 are the same as above.\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Mon, 20 Mar 2023 18:58:00 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi,\n\n> As suggested before by Heikki Linnakangas, I've added a patch for making 2PC transaction state 64-bit.\n> At first, my intention was to rebuild all twophase interface to use FullTransactionId. But doing this in a proper\n> manner would lead to switching from TransactionId to FullTransactionId in PGPROC and patch become too\n> big to handle here.\n>\n> So I decided to keep it simple for now and use wrap logic trick and calc FullTransactionId on current epoch,\n> since the span of active xids cannot exceed one epoch at any given time.\n>\n> Patches 1 and 2 are the same as above.\n\nThanks! 0003 LGTM. I'll mark the CF entry as RfC.\n\n> I propose that we try to finish 1 and 2 for v16. And maybe 6. I think\n> that's doable. It doesn't have any great user-visible benefits yet, but\n> we need to start somewhere.\n\n+1.\n\nHowever there are only a few days left if we are going to do this\nwithin March CF...\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 29 Mar 2023 12:32:39 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On 09/03/2023 17:21, Aleksander Alekseev wrote:\n> v57-0001-Index-SLRUs-by-64-bit-integers-rather-than-by-32.patch\n\nReviewing this now. I think it's almost ready to be committed.\n\nThere's another big effort going on to move SLRUs to the regular buffer \ncache (https://commitfest.postgresql.org/43/3514/). I wonder how moving \nto 64 bit page numbers will affect that. BlockNumber is still 32 bits, \nafter all.\n\n> +/*\n> + * An internal function used by SlruScanDirectory().\n> + *\n> + * Returns true if a file with a name of a given length may be a correct\n> + * SLRU segment.\n> + */\n> +static inline bool\n> +SlruCorrectSegmentFilenameLength(SlruCtl ctl, size_t len)\n> +{\n> + if (ctl->long_segment_names)\n> + return (len == 12);\n> + else\n> +\n> + /*\n> + * XXX Should we still consider 5 and 6 to be a correct length? It\n> + * looks like it was previously allowed but now SlruFileName() can't\n> + * return such a name.\n> + */\n> + return (len == 4 || len == 5 || len == 6);\n> +}\n\nHuh, I didn't realize that 5 and 6 character SLRU segment names are \npossible. But indeed they are. Commit 638cf09e76d allowed 5-character \nlengths:\n\n> commit 638cf09e76d70dd83d8123e7079be6c0532102d2\n> Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> Date: Thu Jan 2 18:17:29 2014 -0300\n> \n> Handle 5-char filenames in SlruScanDirectory\n> \n> Original users of slru.c were all producing 4-digit filenames, so that\n> was all that that code was prepared to handle. Changes to multixact.c\n> in the course of commit 0ac5ad5134f made pg_multixact/members create\n> 5-digit filenames once a certain threshold was reached, which\n> SlruScanDirectory wasn't prepared to deal with; in particular,\n> 5-digit-name files were not removed during truncation. Change that\n> routine to make it aware of those files, and have it process them just\n> like any others.\n> \n> Right now, some pg_multixact/members directories will contain a mixture\n> of 4-char and 5-char filenames. 
A future commit is expected fix things\n> so that each slru.c user declares the correct maximum width for the\n> files it produces, to avoid such unsightly mixtures.\n> \n> Noticed while investigating bug #8673 reported by Serge Negodyuck.\n> \n\nAnd later commit 73c986adde5, which introduced commit_ts, allowed 6 \ncharacters. With 8192 block size, pg_commit_ts segments are indeed 5 \nchars wide, and with 512 block size, 6 chars are needed.\n\nThis patch makes the filenames always 12 characters wide (for SLRUs that \nopt-in to the new naming). That's actually not enough for the full range \nthat a 64 bit page number can represent. Should we make it 16 characters \nnow, to avoid repeating the same mistake we made earlier? Or make it \nmore configurable, on a per-SLRU basis. One reason I don't want to just \nmake it 16 characters is that WAL segment filenames are also 16 hex \ncharacters, which could cause confusion.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Wed, 5 Jul 2023 15:26:04 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi,\n\n> Reviewing this now. I think it's almost ready to be committed.\n>\n> There's another big effort going on to move SLRUs to the regular buffer\n> cache (https://commitfest.postgresql.org/43/3514/). I wonder how moving\n> to 64 bit page numbers will affect that. BlockNumber is still 32 bits,\n> after all.\n\nSomehow I didn't pay too much attention to this effort, thanks. I will\nfamiliarize myself with the patch. Intuitively I don't think that the\npatchsets should block each other.\n\n> This patch makes the filenames always 12 characters wide (for SLRUs that\n> opt-in to the new naming). That's actually not enough for the full range\n> that a 64 bit page number can represent. Should we make it 16 characters\n> now, to avoid repeating the same mistake we made earlier? Or make it\n> more configurable, on a per-SLRU basis. One reason I don't want to just\n> make it 16 characters is that WAL segment filenames are also 16 hex\n> characters, which could cause confusion.\n\nGood catch. I propose the following solution:\n\n```\n SlruFileName(SlruCtl ctl, char *path, int64 segno)\n {\n if (ctl->long_segment_names)\n- return snprintf(path, MAXPGPATH, \"%s/%012llX\", ctl->Dir,\n+ /*\n+ * We could use 16 characters here but the disadvantage would be that\n+ * the SLRU segments will be hard to distinguish from WAL segments.\n+ *\n+ * For this reason we use 15 characters. It is enough but also means\n+ * that in the future we can't decrease SLRU_PAGES_PER_SEGMENT easily.\n+ */\n+ return snprintf(path, MAXPGPATH, \"%s/%015llX\", ctl->Dir,\n (long long) segno);\n else\n return snprintf(path, MAXPGPATH, \"%s/%04X\", (ctl)->Dir,\n```\n\nPFE the corrected patchset v58.\n\n--\nBest regards,\nAleksander Alekseev",
"msg_date": "Wed, 5 Jul 2023 16:45:49 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi,\n\n> > Reviewing this now. I think it's almost ready to be committed.\n> >\n> > There's another big effort going on to move SLRUs to the regular buffer\n> > cache (https://commitfest.postgresql.org/43/3514/). I wonder how moving\n> > to 64 bit page numbers will affect that. BlockNumber is still 32 bits,\n> > after all.\n>\n> Somehow I didn't pay too much attention to this effort, thanks. I will\n> familiarize myself with the patch. Intuitively I don't think that the\n> patchse should block each other.\n>\n> > This patch makes the filenames always 12 characters wide (for SLRUs that\n> > opt-in to the new naming). That's actually not enough for the full range\n> > that a 64 bit page number can represent. Should we make it 16 characters\n> > now, to avoid repeating the same mistake we made earlier? Or make it\n> > more configurable, on a per-SLRU basis. One reason I don't want to just\n> > make it 16 characters is that WAL segment filenames are also 16 hex\n> > characters, which could cause confusion.\n>\n> Good catch. I propose the following solution:\n>\n> ```\n> SlruFileName(SlruCtl ctl, char *path, int64 segno)\n> {\n> if (ctl->long_segment_names)\n> - return snprintf(path, MAXPGPATH, \"%s/%012llX\", ctl->Dir,\n> + /*\n> + * We could use 16 characters here but the disadvantage would be that\n> + * the SLRU segments will be hard to distinguish from WAL segments.\n> + *\n> + * For this reason we use 15 characters. It is enough but also means\n> + * that in the future we can't decrease SLRU_PAGES_PER_SEGMENT easily.\n> + */\n> + return snprintf(path, MAXPGPATH, \"%s/%015llX\", ctl->Dir,\n> (long long) segno);\n> else\n> return snprintf(path, MAXPGPATH, \"%s/%04X\", (ctl)->Dir,\n> ```\n>\n> PFE the corrected patchset v58.\n\nWhile triaging the patches for the September CF [1] a consensus was\nreached that the patchset needs another round of review. 
Also I\nremoved myself from the list of reviewers in order to make it clear\nthat a review from somebody else would be appreciated.\n\n[1]: https://postgr.es/m/0737f444-59bb-ac1d-2753-873c40da0840%40eisentraut.org\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 4 Sep 2023 17:41:43 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "> > This patch makes the filenames always 12 characters wide (for SLRUs that\r\n> > opt-in to the new naming). That's actually not enough for the full range\r\n> > that a 64 bit page number can represent. Should we make it 16 characters\r\n> > now, to avoid repeating the same mistake we made earlier? Or make it\r\n> > more configurable, on a per-SLRU basis. One reason I don't want to just\r\n> > make it 16 characters is that WAL segment filenames are also 16 hex\r\n> > characters, which could cause confusion.\r\n>\r\n> Good catch. I propose the following solution:\r\n>\r\n> ```\r\n> SlruFileName(SlruCtl ctl, char *path, int64 segno)\r\n> {\r\n> if (ctl->long_segment_names)\r\n> - return snprintf(path, MAXPGPATH, \"%s/%012llX\", ctl->Dir,\r\n> + /*\r\n> + * We could use 16 characters here but the disadvantage would be that\r\n> + * the SLRU segments will be hard to distinguish from WAL segments.\r\n> + *\r\n> + * For this reason we use 15 characters. It is enough but also means\r\n> + * that in the future we can't decrease SLRU_PAGES_PER_SEGMENT easily.\r\n> + */\r\n> + return snprintf(path, MAXPGPATH, \"%s/%015llX\", ctl->Dir,\r\n> (long long) segno);\r\n> else\r\n> return snprintf(path, MAXPGPATH, \"%s/%04X\", (ctl)->Dir,\r\n> ```\r\n>\r\n> PFE the corrected patchset v58.\r\nGood idea\r\n\r\n________________________________\r\nFrom: Aleksander Alekseev <aleksander@timescale.com>\r\nSent: September 4, 2023 22:41\r\nTo: Postgres hackers <pgsql-hackers@lists.postgresql.org>\r\nCc: Heikki Linnakangas <hlinnaka@iki.fi>; Maxim Orlov <orlovmg@gmail.com>; Jacob Champion <jchampion@timescale.com>; Japin Li <japinli@hotmail.com>; Andres Freund <andres@anarazel.de>; Michael Paquier <michael@paquier.xyz>; Pavel Borisov <pashkin.elfe@gmail.com>; Peter Eisentraut <peter.eisentraut@enterprisedb.com>; Alexander Korotkov <aekorotkov@gmail.com>\r\nSubject: Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into PostgreSQL 15)\r\n\r\nHi,\r\n\r\n> > Reviewing this 
now. I think it's almost ready to be committed.\r\n> >\r\n> > There's another big effort going on to move SLRUs to the regular buffer\r\n> > cache (https://commitfest.postgresql.org/43/3514/). I wonder how moving\r\n> > to 64 bit page numbers will affect that. BlockNumber is still 32 bits,\r\n> > after all.\r\n>\r\n> Somehow I didn't pay too much attention to this effort, thanks. I will\r\n> familiarize myself with the patch. Intuitively I don't think that the\r\n> patchse should block each other.\r\n>\r\n> > This patch makes the filenames always 12 characters wide (for SLRUs that\r\n> > opt-in to the new naming). That's actually not enough for the full range\r\n> > that a 64 bit page number can represent. Should we make it 16 characters\r\n> > now, to avoid repeating the same mistake we made earlier? Or make it\r\n> > more configurable, on a per-SLRU basis. One reason I don't want to just\r\n> > make it 16 characters is that WAL segment filenames are also 16 hex\r\n> > characters, which could cause confusion.\r\n>\r\n> Good catch. I propose the following solution:\r\n>\r\n> ```\r\n> SlruFileName(SlruCtl ctl, char *path, int64 segno)\r\n> {\r\n> if (ctl->long_segment_names)\r\n> - return snprintf(path, MAXPGPATH, \"%s/%012llX\", ctl->Dir,\r\n> + /*\r\n> + * We could use 16 characters here but the disadvantage would be that\r\n> + * the SLRU segments will be hard to distinguish from WAL segments.\r\n> + *\r\n> + * For this reason we use 15 characters. It is enough but also means\r\n> + * that in the future we can't decrease SLRU_PAGES_PER_SEGMENT easily.\r\n> + */\r\n> + return snprintf(path, MAXPGPATH, \"%s/%015llX\", ctl->Dir,\r\n> (long long) segno);\r\n> else\r\n> return snprintf(path, MAXPGPATH, \"%s/%04X\", (ctl)->Dir,\r\n> ```\r\n>\r\n> PFE the corrected patchset v58.\r\n\r\nWhile triaging the patches for the September CF [1] a consensus was\r\nreached that the patchset needs another round of review. 
Also I\r\nremoved myself from the list of reviewers in order to make it clear\r\nthat a review from somebody else would be appreciated.\r\n\r\n[1]: https://postgr.es/m/0737f444-59bb-ac1d-2753-873c40da0840%40eisentraut.org\r\n--\r\nBest regards,\r\nAleksander Alekseev",
"msg_date": "Wed, 4 Oct 2023 02:57:08 +0000",
"msg_from": "Thomas wen <Thomas_valentine_365@outlook.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi!\n\nOn Wed, Jul 5, 2023 at 4:46 PM Aleksander Alekseev <aleksander@timescale.com>\nwrote:\n> PFE the corrected patchset v58.\n\nI'd like to revive this thread.\n\nThis patchset is extracted from a larger patchset implementing 64-bit\nxids. It converts page numbers in SLRUs into 64 bits. Most SLRUs keep\nthe same file naming scheme, thus their on-disk representation remains the\nsame. However, patch 0002 converts pg_notify to actually use a new\nnaming scheme. Therefore pg_notify can benefit from simplification and\ngetting rid of wraparound.\n\n-#define TransactionIdToCTsPage(xid) \\\n- ((xid) / (TransactionId) COMMIT_TS_XACTS_PER_PAGE)\n+\n+/*\n+ * Although we return an int64 the actual value can't currently exceeed\n2**32.\n+ */\n+static inline int64\n+TransactionIdToCTsPage(TransactionId xid)\n+{\n+ return xid / (int64) COMMIT_TS_XACTS_PER_PAGE;\n+}\n\nIs there any reason we transform the macro into a function? If not, I propose\nto leave this as a macro. BTW, there is a typo in the word \"exceeed\".\n\n+static int inline\n+SlruFileName(SlruCtl ctl, char *path, int64 segno)\n+{\n+ if (ctl->long_segment_names)\n+ /*\n+ * We could use 16 characters here but the disadvantage would be\nthat\n+ * the SLRU segments will be hard to distinguish from WAL segments.\n+ *\n+ * For this reason we use 15 characters. It is enough but also means\n+ * that in the future we can't decrease SLRU_PAGES_PER_SEGMENT\neasily.\n+ */\n+ return snprintf(path, MAXPGPATH, \"%s/%015llX\", ctl->Dir,\n+ (long long) segno);\n+ else\n+ return snprintf(path, MAXPGPATH, \"%s/%04X\", (ctl)->Dir,\n+ (unsigned int) segno);\n+}\n\nI think it's worth adding asserts here to verify there is no overflow making\nus map different segments into the same files.\n\n+ return occupied == max_notify_queue_pages;\n\nI'm not sure if the current code could actually occupy more than\nmax_notify_queue_pages. Probably not even in extreme cases. 
But I still\nthink it would be safer and easier to read to write \"occupied >=\nmax_notify_queue_pages\" here.\n\ndiff --git a/src/test/modules/test_slru/test_slru.c\nb/src/test/modules/test_slru/test_slru.c\n\nThe actual 64-bitness of SLRU pages isn't much exercised in our automated\ntests. It would be too exhausting to make pg_notify actually use higher\nthan 2**32 page numbers. Thus, I think test/modules/test_slru is a good\nplace to give high page numbers a good test.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Mon, 6 Nov 2023 15:07:34 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Mon, 6 Nov 2023 at 17:07, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> Hi!\n>\n> On Wed, Jul 5, 2023 at 4:46 PM Aleksander Alekseev <aleksander@timescale.com> wrote:\n> > PFE the corrected patchset v58.\n>\n> I'd like to revive this thread.\n>\n> This patchset is extracted from a larger patchset implementing 64-bit xids. It converts page numbers in SLRUs into 64 bits. The most SLRUs save the same file naming scheme, thus their on-disk representation remains the same. However, the patch 0002 converts pg_notify to actually use a new naming scheme. Therefore pg_notify can benefit from simplification and getting rid of wraparound.\n>\n> -#define TransactionIdToCTsPage(xid) \\\n> - ((xid) / (TransactionId) COMMIT_TS_XACTS_PER_PAGE)\n> +\n> +/*\n> + * Although we return an int64 the actual value can't currently exceeed 2**32.\n> + */\n> +static inline int64\n> +TransactionIdToCTsPage(TransactionId xid)\n> +{\n> + return xid / (int64) COMMIT_TS_XACTS_PER_PAGE;\n> +}\n>\n> Is there any reason we transform macro into a function? If not, I propose to leave this as a macro. BTW, there is a typo in a word \"exceeed\".\nIf I remember right, the compiler will make equivalent code from\ninline functions and macros, and functions has an additional benefit:\nthe compiler will report type mismatch if any. That was the only\nreason.\n\nAlso, I looked at v58-0001 and don't quite agree with mentioning the\nauthors of the original 64-xid patch, from which the current patch is\nderived, as just \"privious input\" persons.\n\nRegards, Pavel Borisov\n\n\n",
"msg_date": "Mon, 6 Nov 2023 17:42:38 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Mon, Nov 6, 2023 at 3:42 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>\n> On Mon, 6 Nov 2023 at 17:07, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > On Wed, Jul 5, 2023 at 4:46 PM Aleksander Alekseev <aleksander@timescale.com> wrote:\n> > > PFE the corrected patchset v58.\n> >\n> > I'd like to revive this thread.\n> >\n> > This patchset is extracted from a larger patchset implementing 64-bit xids. It converts page numbers in SLRUs into 64 bits. The most SLRUs save the same file naming scheme, thus their on-disk representation remains the same. However, the patch 0002 converts pg_notify to actually use a new naming scheme. Therefore pg_notify can benefit from simplification and getting rid of wraparound.\n> >\n> > -#define TransactionIdToCTsPage(xid) \\\n> > - ((xid) / (TransactionId) COMMIT_TS_XACTS_PER_PAGE)\n> > +\n> > +/*\n> > + * Although we return an int64 the actual value can't currently exceeed 2**32.\n> > + */\n> > +static inline int64\n> > +TransactionIdToCTsPage(TransactionId xid)\n> > +{\n> > + return xid / (int64) COMMIT_TS_XACTS_PER_PAGE;\n> > +}\n> >\n> > Is there any reason we transform macro into a function? If not, I propose to leave this as a macro. BTW, there is a typo in a word \"exceeed\".\n> If I remember right, the compiler will make equivalent code from\n> inline functions and macros, and functions has an additional benefit:\n> the compiler will report type mismatch if any. That was the only\n> reason.\n\nThen it's OK to leave it as an inline function.\n\n> Also, I looked at v58-0001 and don't quite agree with mentioning the\n> authors of the original 64-xid patch, from which the current patch is\n> derived, as just \"privious input\" persons.\n\n+1, for converting all \"previous input\" persons as additional authors.\nThat would be a pretty long list of authors though.\n\nBTW, I'm a bit puzzled on who should be the first author now?\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 6 Nov 2023 16:00:57 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Mon, 6 Nov 2023 at 18:01, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> On Mon, Nov 6, 2023 at 3:42 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> >\n> > On Mon, 6 Nov 2023 at 17:07, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > > On Wed, Jul 5, 2023 at 4:46 PM Aleksander Alekseev <aleksander@timescale.com> wrote:\n> > > > PFE the corrected patchset v58.\n> > >\n> > > I'd like to revive this thread.\n> > >\n> > > This patchset is extracted from a larger patchset implementing 64-bit xids. It converts page numbers in SLRUs into 64 bits. The most SLRUs save the same file naming scheme, thus their on-disk representation remains the same. However, the patch 0002 converts pg_notify to actually use a new naming scheme. Therefore pg_notify can benefit from simplification and getting rid of wraparound.\n> > >\n> > > -#define TransactionIdToCTsPage(xid) \\\n> > > - ((xid) / (TransactionId) COMMIT_TS_XACTS_PER_PAGE)\n> > > +\n> > > +/*\n> > > + * Although we return an int64 the actual value can't currently exceeed 2**32.\n> > > + */\n> > > +static inline int64\n> > > +TransactionIdToCTsPage(TransactionId xid)\n> > > +{\n> > > + return xid / (int64) COMMIT_TS_XACTS_PER_PAGE;\n> > > +}\n> > >\n> > > Is there any reason we transform macro into a function? If not, I propose to leave this as a macro. BTW, there is a typo in a word \"exceeed\".\n> > If I remember right, the compiler will make equivalent code from\n> > inline functions and macros, and functions has an additional benefit:\n> > the compiler will report type mismatch if any. 
That was the only\n> > reason.\n>\n> Then it's OK to leave it as an inline function.\n>\n> > Also, I looked at v58-0001 and don't quite agree with mentioning the\n> > authors of the original 64-xid patch, from which the current patch is\n> > derived, as just \"privious input\" persons.\n>\n> +1, for converting all \"previous input\" persons as additional authors.\n> That would be a pretty long list of authors though.\n> BTW, I'm a bit puzzled on who should be the first author now?\nThanks! It's long, I agree, but the activity around this was huge and\ninvolved many people, the patch itself already has more than\nhalf-hundred iterations to date. I think at least people mentioned in\nthe commit message of v58 are fair to have author status.\n\nAs for the first, I'm not sure, it's hard for me to evaluate what is\nmore important, the initial prototype, or the final improvement\niterations. I don't think the existing order in a commit message has\nsome meaning. Maybe it's worth throwing a dice.\n\nRegards, Pavel\n\n\n",
"msg_date": "Mon, 6 Nov 2023 18:14:14 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi,\n\n> > > If I remember right, the compiler will make equivalent code from\n> > > inline functions and macros, and functions has an additional benefit:\n> > > the compiler will report type mismatch if any. That was the only\n> > > reason.\n> >\n> > Then it's OK to leave it as an inline function.\n\n+1\n\n> > BTW, I'm a bit puzzled on who should be the first author now?\n> Thanks! It's long, I agree, but the activity around this was huge and\n> involved many people, the patch itself already has more than\n> half-hundred iterations to date. I think at least people mentioned in\n> the commit message of v58 are fair to have author status.\n>\n> As for the first, I'm not sure, it's hard for me to evaluate what is\n> more important, the initial prototype, or the final improvement\n> iterations. I don't think the existing order in a commit message has\n> some meaning. Maybe it's worth throwing a dice.\n\nPersonally I was not aware that the order of the authors in a commit\nmessage carries any information. To my knowledge it doesn't not. I\nbelieve this patchset is a team effort, so I suggest keeping the\ncommit message as is or shuffling the authors randomly. I believe we\nshould do our best to reflect the input of people who previously\ncontributed to the effort, if anyone are aware of them, and add them\nto the commit message if they are not there yet. Pretty sure Git will\nforgive us if the list ends up being long. Hopefully so will people\nwho we mistakenly forget.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 6 Nov 2023 17:28:23 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Alexander,\n\n> > PFE the corrected patchset v58.\n>\n> I'd like to revive this thread.\n\nMany thanks for your comments and suggestions.\n\n> I think it worth adding asserts here to verify there is no overflow making us mapping different segments into the same files.\n\nSorry, I didn't understand this one. Maybe you could provide the exact code?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 6 Nov 2023 17:38:00 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Mon, Nov 6, 2023 at 4:38 PM Aleksander Alekseev <aleksander@timescale.com>\nwrote:\n> > > PFE the corrected patchset v58.\n> >\n> > I'd like to revive this thread.\n>\n> Many thanks for your comments and suggestions.\n>\n> > I think it worth adding asserts here to verify there is no overflow\nmaking us mapping different segments into the same files.\n>\n> Sorry, I didn't understand this one. Maybe you could provide the exact\ncode?\n\nI actually meant this.\n\nstatic int inline\nSlruFileName(SlruCtl ctl, char *path, int64 segno)\n{\n if (ctl->long_segment_names)\n {\n /*\n * We could use 16 characters here but the disadvantage would be\nthat\n * the SLRU segments will be hard to distinguish from WAL segments.\n *\n * For this reason we use 15 characters. It is enough but also means\n * that in the future we can't decrease SLRU_PAGES_PER_SEGMENT\neasily.\n */\n Assert(segno >= 0 && segno <= 0x1000000000000000);\n return snprintf(path, MAXPGPATH, \"%s/%015llX\", ctl->Dir,\n (long long) segno);\n }\n else\n {\n Assert(segno >= 0 && segno <= 0x10000);\n return snprintf(path, MAXPGPATH, \"%s/%04X\", (ctl)->Dir,\n (unsigned int) segno);\n }\n}\n\nAs I now get, snprintf() wouldn't just truncate the high signs, instead it\nwill use more characters. But I think assertions are useful anyway.\n\n------\nRegards,\nAlexander Korotkov\n\nOn Mon, Nov 6, 2023 at 4:38 PM Aleksander Alekseev <aleksander@timescale.com> wrote:> > > PFE the corrected patchset v58.> >> > I'd like to revive this thread.>> Many thanks for your comments and suggestions.>> > I think it worth adding asserts here to verify there is no overflow making us mapping different segments into the same files.>> Sorry, I didn't understand this one. 
Maybe you could provide the exact code?I actually meant this.static int inlineSlruFileName(SlruCtl ctl, char *path, int64 segno){ if (ctl->long_segment_names) { /* * We could use 16 characters here but the disadvantage would be that * the SLRU segments will be hard to distinguish from WAL segments. * * For this reason we use 15 characters. It is enough but also means * that in the future we can't decrease SLRU_PAGES_PER_SEGMENT easily. */ Assert(segno >= 0 && segno <= 0x1000000000000000); return snprintf(path, MAXPGPATH, \"%s/%015llX\", ctl->Dir, (long long) segno); } else { Assert(segno >= 0 && segno <= 0x10000); return snprintf(path, MAXPGPATH, \"%s/%04X\", (ctl)->Dir, (unsigned int) segno); }}As I now get, snprintf() wouldn't just truncate the high signs, instead it will use more characters. But I think assertions are useful anyway.------Regards,Alexander Korotkov",
"msg_date": "Mon, 6 Nov 2023 23:00:46 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi Alexander,\n\n> -#define TransactionIdToCTsPage(xid) \\\n> - ((xid) / (TransactionId) COMMIT_TS_XACTS_PER_PAGE)\n> +\n> +/*\n> + * Although we return an int64 the actual value can't currently exceeed 2**32.\n> + */\n> +static inline int64\n> +TransactionIdToCTsPage(TransactionId xid)\n> +{\n> + return xid / (int64) COMMIT_TS_XACTS_PER_PAGE;\n> +}\n>\n> Is there any reason we transform macro into a function? If not, I propose to leave this as a macro. BTW, there is a typo in a word \"exceeed\".\n\nI kept the inline function, as we agreed above.\n\nTypo fixed.\n\n> +static int inline\n> +SlruFileName(SlruCtl ctl, char *path, int64 segno)\n> +{\n> + if (ctl->long_segment_names)\n> + /*\n> + * We could use 16 characters here but the disadvantage would be that\n> + * the SLRU segments will be hard to distinguish from WAL segments.\n> + *\n> + * For this reason we use 15 characters. It is enough but also means\n> + * that in the future we can't decrease SLRU_PAGES_PER_SEGMENT easily.\n> + */\n> + return snprintf(path, MAXPGPATH, \"%s/%015llX\", ctl->Dir,\n> + (long long) segno);\n> + else\n> + return snprintf(path, MAXPGPATH, \"%s/%04X\", (ctl)->Dir,\n> + (unsigned int) segno);\n> +}\n>\n> I think it worth adding asserts here to verify there is no overflow making us mapping different segments into the same files.\n\nAdded. I noticed a off-by-one error in the code snippet proposed\nabove, so my code differs a bit.\n\n> + return occupied == max_notify_queue_pages;\n>\n> I'm not sure if the current code could actually allow to occupy more than max_notify_queue_pages. Probably not even in extreme cases. But I still think it will more safe and easier to read to write \"occupied >= max_notify_queue\"_pages here.\n\nFixed.\n\n> diff --git a/src/test/modules/test_slru/test_slru.c b/src/test/modules/test_slru/test_slru.c\n>\n> The actual 64-bitness of SLRU pages isn't much exercised in our automated tests. 
It would be too exhausting to make pg_notify actually use higher than 2**32 page numbers. Thus, I think test/modules/test_slru is a good place to give high page numbers a good test.\n\nFixed. I chose not to change any numbers in the test in order to\ncheck any corner cases, etc. The code paths for long_segment_names =\ntrue and long_segment_names = false are almost the same, thus it will\nnot improve code coverage. Using the current numbers will allow us to\neasily switch back to long_segment_names = false in the test if\nnecessary.\n\nPFA the corrected patchset v59.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 7 Nov 2023 14:57:12 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi again,\n\n> PFA the corrected patchset v59.\n\nOn second thought, I believe this Assert is incorrect:\n\n```\n+ else\n+ {\n+ Assert(segno >= 0 && segno <= 0xFFFF);\n+ return snprintf(path, MAXPGPATH, \"%s/%04X\", (ctl)->Dir,\n+ (unsigned int) segno);\n+ }\n```\n\nSee SlruCorrectSegmentFilenameLength():\n\n```\n if (ctl->long_segment_names)\n return (len == 15); /* see SlruFileName() */\n else\n /*\n * Commit 638cf09e76d allowed 5-character lengths. Later commit\n * 73c986adde5 allowed 6-character length.\n *\n * XXX should we still consider such names to be valid?\n */\n return (len == 4 || len == 5 || len == 6);\n```\n\nShould we just drop it or check that segno is <= 0xFFFFFF?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 7 Nov 2023 15:05:10 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Mon, 6 Nov 2023 at 16:07, Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> Hi!\n>\n> On Wed, Jul 5, 2023 at 4:46 PM Aleksander Alekseev <\n> aleksander@timescale.com> wrote:\n> > PFE the corrected patchset v58.\n>\n> I'd like to revive this thread.\n>\nHi! Great news!\n\n\n> BTW, there is a typo in a word \"exceeed\".\n>\nFixed.\n\n\n>\n> +static int inline\n> +SlruFileName(SlruCtl ctl, char *path, int64 segno)\n> +{\n> ...\n> +}\n>\n> I think it worth adding asserts here to verify there is no overflow making\n> us mapping different segments into the same files.\n>\nAgree, assertion added.\n\n\n> + return occupied == max_notify_queue_pages;\n>\n> I'm not sure if the current code could actually allow to occupy more than\n> max_notify_queue_pages. Probably not even in extreme cases. But I still\n> think it will more safe and easier to read to write \"occupied >=\n> max_notify_queue\"_pages here.\n>\nFixed.\n\n\n> diff --git a/src/test/modules/test_slru/test_slru.c\n> b/src/test/modules/test_slru/test_slru.c\n>\n> The actual 64-bitness of SLRU pages isn't much exercised in our automated\n> tests. It would be too exhausting to make pg_notify actually use higher\n> than 2**32 page numbers. Thus, I think test/modules/test_slru is a good\n> place to give high page numbers a good test.\n>\nPFA, I've add test for a 64-bit SLRU pages.\n\nBy the way, there is another one useful thing we may do here. For now\npg_commit_ts functionality is rather strange: if it was enabled, then\ndisabled and then enabled again all the data from before will be\ndiscarded. Meanwhile, users expected to have their commit timestamps for\nall transactions, which were \"logged\" when this feature was enabled. It's\nweird.\n\nAFICS, the only reason for this behaviour is becouse of transaction\nwraparound. It may occur while the feature is disabled end it is safe to\nsimply remove all the data from previous period. 
If we switch to\nFullTransactionId in commit_ts we can overcome this limitation. But I'm\nnot sure if it worth to try to fix this in current patchset, since it is\nalready non trivial.\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Tue, 7 Nov 2023 19:20:27 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Maxim,\n\nI see both of us accounted for Alexanders feedback and submitted v59.\nYour newer version seems to have issues on cfbot, so resubmitting the\nprevious patchset that passes the tests. Please feel free to add\nchanges.\n\n> See SlruCorrectSegmentFilenameLength():\n>\n> ```\n> if (ctl->long_segment_names)\n> return (len == 15); /* see SlruFileName() */\n> else\n> /*\n> * Commit 638cf09e76d allowed 5-character lengths. Later commit\n> * 73c986adde5 allowed 6-character length.\n> *\n> * XXX should we still consider such names to be valid?\n> */\n> return (len == 4 || len == 5 || len == 6);\n> ```\n>\n> Should we just drop it or check that segno is <= 0xFFFFFF?\n\nI also choose to change this Assert and to add a corresponding comment:\n\n else\n {\n- Assert(segno >= 0 && segno <= 0xFFFF);\n+ /*\n+ * Despite the fact that %04X format string is used up to 24 bit\n+ * integers are allowed. See SlruCorrectSegmentFilenameLength()\n+ */\n+ Assert(segno >= 0 && segno <= 0xFFFFFF);\n return snprintf(path, MAXPGPATH, \"%s/%04X\", (ctl)->Dir,\n (unsigned int) segno);\n }\n\n\n--\nBest regards,\nAleksander Alekseev",
"msg_date": "Wed, 8 Nov 2023 17:17:26 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Aleksander Alekseev,\n\n> Maxim,\n> I see both of us accounted for Alexanders feedback and submitted v59.\n> Your newer version seems to have issues on cfbot, so resubmitting the\n> previous patchset that passes the tests. Please feel free to add\n> changes.\n\nFor unknown reasons, I do not receive any of your emails from after\n2023-11-07 11:57:12 (Message-ID: CAJ7c6TN1hKqNPGrNcq96SUyD=\nZ61raKGXF8iqq36qr90oudxRg@mail.gmail.com).\nEven after resend.\n\nAnyway, PFA patch set of version 61. I've made some minor changes in the\n0001 and add 004 in order to test actual 64-bit SLRU pages.\n\nAs for CF bot had failed on my v59 patch set, this is because of the bug.\nIt's manifested because of added 64-bit pages tests.\nThe problem was in segno calculation, since we convert it from file name\nusing strtol call. But strtol return long,\nwhich is 4 byte long in x86.\n\n- segno = (int) strtol(clde->d_name, NULL, 16);\n+ segno = strtoi64(clde->d_name, NULL, 16);\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Thu, 9 Nov 2023 19:22:11 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Alexander, Maxim,\n\nThank you for revisions.\n\nOn Thu, Nov 9, 2023 at 6:22 PM Maxim Orlov <orlovmg@gmail.com> wrote:\n> Aleksander Alekseev,\n>\n> > Maxim,\n> > I see both of us accounted for Alexanders feedback and submitted v59.\n> > Your newer version seems to have issues on cfbot, so resubmitting the\n> > previous patchset that passes the tests. Please feel free to add\n> > changes.\n>\n> For unknown reasons, I do not receive any of your emails from after 2023-11-07 11:57:12 (Message-ID: CAJ7c6TN1hKqNPGrNcq96SUyD=Z61raKGXF8iqq36qr90oudxRg@mail.gmail.com).\n> Even after resend.\n>\n> Anyway, PFA patch set of version 61. I've made some minor changes in the 0001 and add 004 in order to test actual 64-bit SLRU pages.\n>\n> As for CF bot had failed on my v59 patch set, this is because of the bug. It's manifested because of added 64-bit pages tests.\n> The problem was in segno calculation, since we convert it from file name using strtol call. But strtol return long,\n> which is 4 byte long in x86.\n>\n> - segno = (int) strtol(clde->d_name, NULL, 16);\n> + segno = strtoi64(clde->d_name, NULL, 16);\n\nv61 looks good to me. I'm going to push it as long as there are no objections.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 27 Nov 2023 01:43:26 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hello Alexander,\n\n27.11.2023 02:43, Alexander Korotkov wrote:\n\n> v61 looks good to me. I'm going to push it as long as there are no objections.\n>\n\nI've looked at the patch set and found a typo:\noccured\n\nAnd a warning:\n$ CC=gcc-12 CFLAGS=\"-Wall -Wextra -Wno-unused-parameter -Wno-sign-compare -Wno-clobbered \n-Wno-missing-field-initializers\" ./configure -q && make -s\nslru.c:63:1: warning: ‘inline’ is not at beginning of declaration [-Wold-style-declaration]\n 63 | static int inline\n | ^~~~~~\n\nMaybe it's worth fixing before committing...\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Mon, 27 Nov 2023 10:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Андрей, привет!\r\n\r\nТекущее положение у меня такое.\r\n\r\n.\r\n*pg_stats and range statisticsTracking statements entry timestamp in\r\npg_stat_statements*\r\n\r\nУже закоммичены.\r\n\r\n\r\n*XID formatting and SLRU refactorings (was: Add 64-bit XIDs into PostgreSQL\r\n15)*\r\nЕсли всё будет и замечаний не возникнет ок, завтра утром закоммичу.\r\n\r\n\r\n*Add semi-join pushdown to postgres_fdw*\r\nПереработал патч, сделал обработку условий более аккурантно. Хочу попросить\r\nещё 2 часа на финальное ревью и коммит.\r\n\r\n\r\n*May be BUG. Periodic burst growth of the checkpoint_req counter on\r\nreplica.*\r\nСегодня вечером планирую доделать и выложить review.\r\n\r\n------\r\nRegards,\r\nAlexander Korotkov\r\n\r\n------\r\nRegards,\r\nAlexander Korotkov\r\n\r\n\r\nOn Mon, Nov 27, 2023 at 9:00 AM Alexander Lakhin <exclusion@gmail.com>\r\nwrote:\r\n\r\n> Hello Alexander,\r\n>\r\n> 27.11.2023 02:43, Alexander Korotkov wrote:\r\n>\r\n> > v61 looks good to me. I'm going to push it as long as there are no\r\n> objections.\r\n> >\r\n>\r\n> I've looked at the patch set and found a typo:\r\n> occured\r\n>\r\n> And a warning:\r\n> $ CC=gcc-12 CFLAGS=\"-Wall -Wextra -Wno-unused-parameter -Wno-sign-compare\r\n> -Wno-clobbered\r\n> -Wno-missing-field-initializers\" ./configure -q && make -s\r\n> slru.c:63:1: warning: ‘inline’ is not at beginning of declaration\r\n> [-Wold-style-declaration]\r\n> 63 | static int inline\r\n> | ^~~~~~\r\n>\r\n> Maybe it's worth fixing before committing...\r\n>\r\n> Best regards,\r\n> Alexander\r\n>\r\n\nАндрей, привет!Текущее положение у меня такое..pg_stats and range statisticsTracking statements entry timestamp in pg_stat_statementsУже закоммичены.XID formatting and SLRU refactorings (was: Add 64-bit XIDs into PostgreSQL 15)Если всё будет и замечаний не возникнет ок, завтра утром закоммичу.Add semi-join pushdown to postgres_fdwПереработал патч, сделал обработку условий более аккурантно. 
Хочу попросить ещё 2 часа на финальное ревью и коммит.May be BUG. Periodic burst growth of the checkpoint_req counter on replica.Сегодня вечером планирую доделать и выложить review.------Regards,Alexander Korotkov------Regards,Alexander KorotkovOn Mon, Nov 27, 2023 at 9:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:Hello Alexander,\n\r\n27.11.2023 02:43, Alexander Korotkov wrote:\n\r\n> v61 looks good to me. I'm going to push it as long as there are no objections.\r\n>\n\r\nI've looked at the patch set and found a typo:\r\noccured\n\r\nAnd a warning:\r\n$ CC=gcc-12 CFLAGS=\"-Wall -Wextra -Wno-unused-parameter -Wno-sign-compare -Wno-clobbered \r\n-Wno-missing-field-initializers\" ./configure -q && make -s\r\nslru.c:63:1: warning: ‘inline’ is not at beginning of declaration [-Wold-style-declaration]\r\n 63 | static int inline\r\n | ^~~~~~\n\r\nMaybe it's worth fixing before committing...\n\r\nBest regards,\r\nAlexander",
"msg_date": "Tue, 28 Nov 2023 10:35:13 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On 27/11/2023 01:43, Alexander Korotkov wrote:\n> v61 looks good to me. I'm going to push it as long as there are no objections.\nThis was discussed earlier, but is still present in v61:\n\n> +/*\n> + * An internal function used by SlruScanDirectory().\n> + *\n> + * Returns true if a file with a name of a given length may be a correct\n> + * SLRU segment.\n> + */\n> +static inline bool\n> +SlruCorrectSegmentFilenameLength(SlruCtl ctl, size_t len)\n> +{\n> +\tif (ctl->long_segment_names)\n> +\t\treturn (len == 15); /* see SlruFileName() */\n> +\telse\n> +\t\t/*\n> +\t\t * Commit 638cf09e76d allowed 5-character lengths. Later commit\n> +\t\t * 73c986adde5 allowed 6-character length.\n> +\t\t *\n> +\t\t * XXX should we still consider such names to be valid?\n> +\t\t */\n> +\t\treturn (len == 4 || len == 5 || len == 6);\n> +}\n> +\n\nI think it's pretty sloppy that the \"short\" filenames can be 4, 5 or 6 \nchars long. For pg_multixact/members, which introduced the 5-char case, \nI think we should always pad the filenames 5 characters, and for \ncommit_ts which introduced the 6 char case, always pad to 6 characters.\n\nInstead of a \"long_segment_names\" boolean, how about an integer field, \nto specify the length.\n\nThat means that we'll need pg_upgrade to copy pg_multixact/members files \nunder the new names. That should be pretty straightforward.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 28 Nov 2023 11:13:24 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings"
},
{
"msg_contents": "Hi, Heikki!\n\nOn Tue, 28 Nov 2023 at 13:13, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 27/11/2023 01:43, Alexander Korotkov wrote:\n> > v61 looks good to me. I'm going to push it as long as there are no objections.\n> This was discussed earlier, but is still present in v61:\n>\n> > +/*\n> > + * An internal function used by SlruScanDirectory().\n> > + *\n> > + * Returns true if a file with a name of a given length may be a correct\n> > + * SLRU segment.\n> > + */\n> > +static inline bool\n> > +SlruCorrectSegmentFilenameLength(SlruCtl ctl, size_t len)\n> > +{\n> > + if (ctl->long_segment_names)\n> > + return (len == 15); /* see SlruFileName() */\n> > + else\n> > + /*\n> > + * Commit 638cf09e76d allowed 5-character lengths. Later commit\n> > + * 73c986adde5 allowed 6-character length.\n> > + *\n> > + * XXX should we still consider such names to be valid?\n> > + */\n> > + return (len == 4 || len == 5 || len == 6);\n> > +}\n> > +\n>\n> I think it's pretty sloppy that the \"short\" filenames can be 4, 5 or 6\n> chars long. For pg_multixact/members, which introduced the 5-char case,\n> I think we should always pad the filenames 5 characters, and for\n> commit_ts which introduced the 6 char case, always pad to 6 characters.\n>\n> Instead of a \"long_segment_names\" boolean, how about an integer field,\n> to specify the length.\n>\n> That means that we'll need pg_upgrade to copy pg_multixact/members files\n> under the new names. 
That should be pretty straightforward.\n\nI think what's done in patch 0001 is just an extension of existing\nlogic and moving it into separate function.\n\n- if ((len == 4 || len == 5 || len == 6) &&\n+ if (SlruCorrectSegmentFilenameLength(ctl, len) &&\n strspn(clde->d_name, \"0123456789ABCDEF\") == len)\n {\n- segno = (int) strtol(clde->d_name, NULL, 16);\n+ segno = strtoi64(clde->d_name, NULL, 16);\n segpage = segno * SLRU_PAGES_PER_SEGMENT;\n\nI'd prefer to leave it as it is as a part of 64-bit extension patch.\n\nRegards,\nPavel.\n\n\n",
"msg_date": "Tue, 28 Nov 2023 14:14:51 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings"
},
{
"msg_contents": "On 28/11/2023 12:14, Pavel Borisov wrote:\n> On Tue, 28 Nov 2023 at 13:13, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>\n>> On 27/11/2023 01:43, Alexander Korotkov wrote:\n>>> v61 looks good to me. I'm going to push it as long as there are no objections.\n>> This was discussed earlier, but is still present in v61:\n>>\n>>> +/*\n>>> + * An internal function used by SlruScanDirectory().\n>>> + *\n>>> + * Returns true if a file with a name of a given length may be a correct\n>>> + * SLRU segment.\n>>> + */\n>>> +static inline bool\n>>> +SlruCorrectSegmentFilenameLength(SlruCtl ctl, size_t len)\n>>> +{\n>>> + if (ctl->long_segment_names)\n>>> + return (len == 15); /* see SlruFileName() */\n>>> + else\n>>> + /*\n>>> + * Commit 638cf09e76d allowed 5-character lengths. Later commit\n>>> + * 73c986adde5 allowed 6-character length.\n>>> + *\n>>> + * XXX should we still consider such names to be valid?\n>>> + */\n>>> + return (len == 4 || len == 5 || len == 6);\n>>> +}\n>>> +\n>>\n>> I think it's pretty sloppy that the \"short\" filenames can be 4, 5 or 6\n>> chars long. For pg_multixact/members, which introduced the 5-char case,\n>> I think we should always pad the filenames 5 characters, and for\n>> commit_ts which introduced the 6 char case, always pad to 6 characters.\n>>\n>> Instead of a \"long_segment_names\" boolean, how about an integer field,\n>> to specify the length.\n>>\n>> That means that we'll need pg_upgrade to copy pg_multixact/members files\n>> under the new names. That should be pretty straightforward.\n> \n> I think what's done in patch 0001 is just an extension of existing\n> logic and moving it into separate function.\n\nThat's right. I'm arguing that now is a good time to clean it up.\n\nI won't insist if Alexander prefers to commit it as it is, though. But \nlet's at least explain how this works in the comment, instead of the XXX.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 28 Nov 2023 12:37:08 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings"
},
{
"msg_contents": "On Tue, 28 Nov 2023 at 14:37, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> On 28/11/2023 12:14, Pavel Borisov wrote:\n> > On Tue, 28 Nov 2023 at 13:13, Heikki Linnakangas <hlinnaka@iki.fi>\n> wrote:\n> >>\n> >> On 27/11/2023 01:43, Alexander Korotkov wrote:\n> >>> v61 looks good to me. I'm going to push it as long as there are no\n> objections.\n> >> This was discussed earlier, but is still present in v61:\n> >>\n> >>> +/*\n> >>> + * An internal function used by SlruScanDirectory().\n> >>> + *\n> >>> + * Returns true if a file with a name of a given length may be a\n> correct\n> >>> + * SLRU segment.\n> >>> + */\n> >>> +static inline bool\n> >>> +SlruCorrectSegmentFilenameLength(SlruCtl ctl, size_t len)\n> >>> +{\n> >>> + if (ctl->long_segment_names)\n> >>> + return (len == 15); /* see SlruFileName() */\n> >>> + else\n> >>> + /*\n> >>> + * Commit 638cf09e76d allowed 5-character lengths. Later\n> commit\n> >>> + * 73c986adde5 allowed 6-character length.\n> >>> + *\n> >>> + * XXX should we still consider such names to be valid?\n> >>> + */\n> >>> + return (len == 4 || len == 5 || len == 6);\n> >>> +}\n> >>> +\n> >>\n> >> I think it's pretty sloppy that the \"short\" filenames can be 4, 5 or 6\n> >> chars long. For pg_multixact/members, which introduced the 5-char case,\n> >> I think we should always pad the filenames 5 characters, and for\n> >> commit_ts which introduced the 6 char case, always pad to 6 characters.\n> >>\n> >> Instead of a \"long_segment_names\" boolean, how about an integer field,\n> >> to specify the length.\n> >>\n> >> That means that we'll need pg_upgrade to copy pg_multixact/members files\n> >> under the new names. That should be pretty straightforward.\n> >\n> > I think what's done in patch 0001 is just an extension of existing\n> > logic and moving it into separate function.\n>\n> That's right. I'm arguing that now is a good time to clean it up.\n>\n> I won't insist if Alexander prefers to commit it as it is, though. 
But\n> let's at least explain how this works in the comment, instead of the XXX.\n>\nI agree with you that would be good to add a comment instead of XXX and\ncommit.\n\nPavel",
"msg_date": "Tue, 28 Nov 2023 14:38:54 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings"
},
{
"msg_contents": "Hi,\n\n>> > I think what's done in patch 0001 is just an extension of existing\n>> > logic and moving it into separate function.\n>>\n>> That's right. I'm arguing that now is a good time to clean it up.\n>>\n>> I won't insist if Alexander prefers to commit it as it is, though. But\n>> let's at least explain how this works in the comment, instead of the XXX.\n>\n> I agree with you that would be good to add a comment instead of XXX and commit.\n\n+1\n\nOne could argue that getting rid of short filenames entirely in the\nlong term (i.e. always long_segment_names == true) could be a better\nstrategy. Maybe it's not but I believe this should be discussed\nseparately from the patchset under question.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 28 Nov 2023 15:06:28 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings"
},
{
"msg_contents": "Hi!\n\nOn Tue, Nov 28, 2023 at 2:06 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> >> > I think what's done in patch 0001 is just an extension of existing\n> >> > logic and moving it into separate function.\n> >>\n> >> That's right. I'm arguing that now is a good time to clean it up.\n> >>\n> >> I won't insist if Alexander prefers to commit it as it is, though. But\n> >> let's at least explain how this works in the comment, instead of the XXX.\n> >\n> > I agree with you that would be good to add a comment instead of XXX and commit.\n>\n> +1\n>\n> One could argue that getting rid of short filenames entirely in the\n> long term (i.e. always long_segment_names == true) could be a better\n> strategy. Maybe it's not but I believe this should be discussed\n> separately from the patchset under question.\n\n\nHeikki, thank you for catching this.\n\nThis mess with file name formats has already lasted quite long. I don't\nthink we should hurry to unify this, given that we're going to\nchange it in the near future anyway.\n\nPlease find the revised patchset with the relevant comment.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Tue, 28 Nov 2023 20:03:46 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings"
},
{
"msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> And a warning:\n> $ CC=gcc-12 CFLAGS=\"-Wall -Wextra -Wno-unused-parameter -Wno-sign-compare -Wno-clobbered \n> -Wno-missing-field-initializers\" ./configure -q && make -s\n> slru.c:63:1: warning: ‘inline’ is not at beginning of declaration [-Wold-style-declaration]\n> 63 | static int inline\n> | ^~~~~~\n\n> Maybe it's worth fixing before committing...\n\nThis should have been fixed before commit, because there are now a\ndozen buildfarm animals complaining about it, as well as who-knows-\nhow-many developers' compilers.\n\n calliphoridae | 2023-11-30 02:48:59 | /home/bf/bf-build/calliphoridae/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1: warning: 'inline' is not at beginning of declaration [-Wold-style-declaration]\n canebrake | 2023-11-29 14:22:10 | /home/bf/bf-build/canebrake/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1: warning: 'inline' is not at beginning of declaration [-Wold-style-declaration]\n culicidae | 2023-11-30 02:49:06 | /home/bf/bf-build/culicidae/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1: warning: 'inline' is not at beginning of declaration [-Wold-style-declaration]\n desmoxytes | 2023-11-30 03:11:15 | /home/bf/bf-build/desmoxytes/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1: warning: 'inline' is not at beginning of declaration [-Wold-style-declaration]\n flaviventris | 2023-11-30 02:53:19 | /home/bf/bf-build/flaviventris/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1: warning: 'inline' is not at beginning of declaration [-Wold-style-declaration]\n francolin | 2023-11-30 02:26:08 | ../pgsql/src/backend/access/transam/slru.c:63:1: warning: 'inline' is not at beginning of declaration [-Wold-style-declaration]\n grassquit | 2023-11-30 02:58:36 | /home/bf/bf-build/grassquit/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1: warning: 'inline' is not at beginning of 
declaration [-Wold-style-declaration]\n komodoensis | 2023-11-30 03:07:52 | /home/bf/bf-build/komodoensis/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1: warning: 'inline' is not at beginning of declaration [-Wold-style-declaration]\n phycodurus | 2023-11-29 14:29:02 | /home/bf/bf-build/phycodurus/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1: warning: 'inline' is not at beginning of declaration [-Wold-style-declaration]\n piculet | 2023-11-30 02:32:57 | ../pgsql/src/backend/access/transam/slru.c:63:1: warning: 'inline' is not at beginning of declaration [-Wold-style-declaration]\n pogona | 2023-11-29 14:22:31 | /home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1: warning: 'inline' is not at beginning of declaration [-Wold-style-declaration]\n rorqual | 2023-11-30 02:32:41 | ../pgsql/src/backend/access/transam/slru.c:63:1: warning: 'inline' is not at beginning of declaration [-Wold-style-declaration]\n serinus | 2023-11-30 02:47:05 | /home/bf/bf-build/serinus/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1: warning: 'inline' is not at beginning of declaration [-Wold-style-declaration]\n skink | 2023-11-29 14:23:05 | /home/bf/bf-build/skink-master/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1: warning: 'inline' is not at beginning of declaration [-Wold-style-declaration]\n taipan | 2023-11-30 03:03:52 | /home/bf/bf-build/taipan/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1: warning: 'inline' is not at beginning of declaration [-Wold-style-declaration]\n tamandua | 2023-11-30 02:49:50 | /home/bf/bf-build/tamandua/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1: warning: 'inline' is not at beginning of declaration [-Wold-style-declaration]\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 Nov 2023 23:03:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Thu, 30 Nov 2023 at 08:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alexander Lakhin <exclusion@gmail.com> writes:\n> > And a warning:\n> > $ CC=gcc-12 CFLAGS=\"-Wall -Wextra -Wno-unused-parameter\n> -Wno-sign-compare -Wno-clobbered\n> > -Wno-missing-field-initializers\" ./configure -q && make -s\n> > slru.c:63:1: warning: ‘inline’ is not at beginning of declaration\n> [-Wold-style-declaration]\n> > 63 | static int inline\n> > | ^~~~~~\n>\n> > Maybe it's worth fixing before committing...\n>\n> This should have been fixed before commit, because there are now a\n> dozen buildfarm animals complaining about it, as well as who-knows-\n> how-many developers' compilers.\n>\n> calliphoridae | 2023-11-30 02:48:59 |\n> /home/bf/bf-build/calliphoridae/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1:\n> warning: 'inline' is not at beginning of declaration\n> [-Wold-style-declaration]\n> canebrake | 2023-11-29 14:22:10 |\n> /home/bf/bf-build/canebrake/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1:\n> warning: 'inline' is not at beginning of declaration\n> [-Wold-style-declaration]\n> culicidae | 2023-11-30 02:49:06 |\n> /home/bf/bf-build/culicidae/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1:\n> warning: 'inline' is not at beginning of declaration\n> [-Wold-style-declaration]\n> desmoxytes | 2023-11-30 03:11:15 |\n> /home/bf/bf-build/desmoxytes/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1:\n> warning: 'inline' is not at beginning of declaration\n> [-Wold-style-declaration]\n> flaviventris | 2023-11-30 02:53:19 |\n> /home/bf/bf-build/flaviventris/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1:\n> warning: 'inline' is not at beginning of declaration\n> [-Wold-style-declaration]\n> francolin | 2023-11-30 02:26:08 |\n> ../pgsql/src/backend/access/transam/slru.c:63:1: warning: 'inline' is not\n> at beginning of declaration [-Wold-style-declaration]\n> grassquit | 
2023-11-30 02:58:36 |\n> /home/bf/bf-build/grassquit/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1:\n> warning: 'inline' is not at beginning of declaration\n> [-Wold-style-declaration]\n> komodoensis | 2023-11-30 03:07:52 |\n> /home/bf/bf-build/komodoensis/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1:\n> warning: 'inline' is not at beginning of declaration\n> [-Wold-style-declaration]\n> phycodurus | 2023-11-29 14:29:02 |\n> /home/bf/bf-build/phycodurus/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1:\n> warning: 'inline' is not at beginning of declaration\n> [-Wold-style-declaration]\n> piculet | 2023-11-30 02:32:57 |\n> ../pgsql/src/backend/access/transam/slru.c:63:1: warning: 'inline' is not\n> at beginning of declaration [-Wold-style-declaration]\n> pogona | 2023-11-29 14:22:31 |\n> /home/bf/bf-build/pogona/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1:\n> warning: 'inline' is not at beginning of declaration\n> [-Wold-style-declaration]\n> rorqual | 2023-11-30 02:32:41 |\n> ../pgsql/src/backend/access/transam/slru.c:63:1: warning: 'inline' is not\n> at beginning of declaration [-Wold-style-declaration]\n> serinus | 2023-11-30 02:47:05 |\n> /home/bf/bf-build/serinus/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1:\n> warning: 'inline' is not at beginning of declaration\n> [-Wold-style-declaration]\n> skink | 2023-11-29 14:23:05 |\n> /home/bf/bf-build/skink-master/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1:\n> warning: 'inline' is not at beginning of declaration\n> [-Wold-style-declaration]\n> taipan | 2023-11-30 03:03:52 |\n> /home/bf/bf-build/taipan/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1:\n> warning: 'inline' is not at beginning of declaration\n> [-Wold-style-declaration]\n> tamandua | 2023-11-30 02:49:50 |\n> /home/bf/bf-build/tamandua/HEAD/pgsql.build/../pgsql/src/backend/access/transam/slru.c:63:1:\n> warning: 'inline' 
is not at beginning of declaration\n> [-Wold-style-declaration]\n>\n> regards, tom lane\n>\nAgree. The fix is attached.\n\nRegards,\nPavel",
"msg_date": "Thu, 30 Nov 2023 12:29:46 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Thu, Nov 30, 2023 at 10:29 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> Agree. The fix is attached.\n\nWhat an oversight.\nThank you, pushed!\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Thu, 30 Nov 2023 11:37:07 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Thu, Nov 30, 2023 at 4:37 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> On Thu, Nov 30, 2023 at 10:29 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > Agree. The fix is attached.\n>\n> What an oversight.\n> Thank you, pushed!\n\nWith that, is there any more work pending, or can we close the CF entry?\n\n\n",
"msg_date": "Mon, 4 Dec 2023 13:34:25 +0700",
"msg_from": "John Naylor <johncnaylorls@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Mon, 4 Dec 2023 at 10:34, John Naylor <johncnaylorls@gmail.com> wrote:\n\n> On Thu, Nov 30, 2023 at 4:37 PM Alexander Korotkov <aekorotkov@gmail.com>\n> wrote:\n> >\n> > On Thu, Nov 30, 2023 at 10:29 AM Pavel Borisov <pashkin.elfe@gmail.com>\n> wrote:\n> > > Agree. The fix is attached.\n> >\n> > What an oversight.\n> > Thank you, pushed!\n>\n> With that, is there any more work pending, or can we close the CF entry?\n>\nI think this is complete and could be closed.\n\nRegards,\nPavel",
"msg_date": "Mon, 4 Dec 2023 12:12:18 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Mon, Dec 4, 2023 at 3:12 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> I think this is complete and could be closed.\n\nDone.\n\n\n",
"msg_date": "Mon, 4 Dec 2023 17:22:03 +0700",
"msg_from": "John Naylor <johncnaylorls@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi orlovmg@gmail.com\r\n That's good news, I think we can continue the discussion for (https://commitfest.postgresql.org/43/3594/)\r\n\r\nBest regards\r\n________________________________\r\nFrom: John Naylor <johncnaylorls@gmail.com>\r\nSent: December 4, 2023 18:22\r\nTo: Pavel Borisov <pashkin.elfe@gmail.com>\r\nCc: Alexander Korotkov <aekorotkov@gmail.com>; Tom Lane <tgl@sss.pgh.pa.us>; Alexander Lakhin <exclusion@gmail.com>; Maxim Orlov <orlovmg@gmail.com>; Aleksander Alekseev <aleksander@timescale.com>; Postgres hackers <pgsql-hackers@lists.postgresql.org>; Heikki Linnakangas <hlinnaka@iki.fi>; Japin Li <japinli@hotmail.com>; Andres Freund <andres@anarazel.de>; Michael Paquier <michael@paquier.xyz>; Peter Eisentraut <peter.eisentraut@enterprisedb.com>\r\nSubject: Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into PostgreSQL 15)\r\n\r\nOn Mon, Dec 4, 2023 at 3:12 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\r\n> I think this is complete and could be closed.\r\n\r\nDone.",
"msg_date": "Mon, 4 Dec 2023 14:02:06 +0000",
"msg_from": "Thomas wen <Thomas_valentine_365@outlook.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into PostgreSQL 15)"
},
{
"msg_contents": "Hi,\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=loach&dt=2023-12-16%2005%3A25%3A18\n\nTRAP: failed Assert(\"epoch > 0\"), File: \"twophase.c\", Line: 969, PID: 71030\n0xa8edcd <ExceptionalCondition+0x6d> at\n/usr/home/pgbf/buildroot/HEAD/pgsql.build/tmp_install/home/pgbf/buildroot/HEAD/inst/bin/postgres\n0x613863 <ReadTwoPhaseFile+0x463> at\n/usr/home/pgbf/buildroot/HEAD/pgsql.build/tmp_install/home/pgbf/buildroot/HEAD/inst/bin/postgres\n\nThat's the new assertion from 5a1dfde8:\n\n+ * The wrap logic is safe here because the span of active xids cannot\nexceed one\n+ * epoch at any given time.\n...\n+ if (unlikely(xid > nextXid))\n+ {\n+ /* Wraparound occured, must be from a prev epoch. */\n+ Assert(epoch > 0);\n\n\n",
"msg_date": "Sun, 17 Dec 2023 12:48:13 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings"
},
{
"msg_contents": "On Sun, Dec 17, 2023 at 1:48 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=loach&dt=2023-12-16%2005%3A25%3A18\n>\n> TRAP: failed Assert(\"epoch > 0\"), File: \"twophase.c\", Line: 969, PID: 71030\n> 0xa8edcd <ExceptionalCondition+0x6d> at\n> /usr/home/pgbf/buildroot/HEAD/pgsql.build/tmp_install/home/pgbf/buildroot/HEAD/inst/bin/postgres\n> 0x613863 <ReadTwoPhaseFile+0x463> at\n> /usr/home/pgbf/buildroot/HEAD/pgsql.build/tmp_install/home/pgbf/buildroot/HEAD/inst/bin/postgres\n>\n> That's the new assertion from 5a1dfde8:\n>\n> + * The wrap logic is safe here because the span of active xids cannot\n> exceed one\n> + * epoch at any given time.\n> ...\n> + if (unlikely(xid > nextXid))\n> + {\n> + /* Wraparound occured, must be from a prev epoch. */\n> + Assert(epoch > 0);\n\nThank you for noticing this. I did some investigations.\nAdjustToFullTransactionId() uses TransamVariables->nextXid to convert\nTransactionId into FullTransactionId. However,\nProcArrayApplyRecoveryInfo() first checks two phase transactions then\nupdates TransamVariables->nextXid. Please, see the draft patch\nfixing this. I'll do further check if it has some side-effects.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Sun, 17 Dec 2023 17:22:23 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings"
},
{
"msg_contents": "On Tue, Nov 28, 2023 at 11:14 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> I think it's pretty sloppy that the \"short\" filenames can be 4, 5 or 6\n> chars long. For pg_multixact/members, which introduced the 5-char case,\n> I think we should always pad the filenames 5 characters, and for\n> commit_ts which introduced the 6 char case, always pad to 6 characters.\n>\n> Instead of a \"long_segment_names\" boolean, how about an integer field,\n> to specify the length.\n>\n> That means that we'll need pg_upgrade to copy pg_multixact/members files\n> under the new names. That should be pretty straightforward.\n\nDo you think it could be useful if the file names were not sequential\nnumbers ...0000, ...0001, ...0002 but instead used the 64 bit 'index'\nnumber for the contained data? In the cases where the index is an\nfxid, such as pg_xact, pg_serial etc that seems easy to grok, and for\nthe multixacts or notify it's a bit more arbitrary but that's not\nworse (and it is perhaps even meaningful, number of multixacts etc).\nFor example, pg_serial holds a uint64_t for every xid, so that's 32768\n= 0x8000 xids per 256kB file, and you might see the following files on\ndisk:\n\n0000000000000000\n0000000000008000\n0000000000010000\n\n... so that it's very clear what fxid ranges are being held. It might\nalso make the file unlinking logic more straightforward in the\nnon-modulo cases (not sure). 
Of course you can work it out with\nsimple arithmetic but I wonder if human administrators who don't have\na C-level understanding of PostgreSQL would find this scheme more\ncromulent when trying to understand, quickly, whether the system is\nretaining expected data.\n\n(Assuming we actually get the indexes to be 64 bit in the first place.\nI started thinking/hacking around how to do that for the specific case of\npg_serial because it's [by its own admission] a complete mess right now,\nand I was investigating its disk usage, see nearby thread, but then\nI found my way here and realised I'm probably duplicating work that's\nalready been/being done so I'm trying to catch up here... forgive me if\nthe above was already covered, so many messages...)\n\n\n",
"msg_date": "Mon, 18 Dec 2023 10:14:03 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings"
},
{
"msg_contents": "On Mon, Nov 27, 2023 at 01:43:26AM +0200, Alexander Korotkov wrote:\n> v61 looks good to me. I'm going to push it as long as there are no objections.\n\nThis yielded commit 4ed8f09 \"Index SLRUs by 64-bit integers rather than by\n32-bit integers\" and left some expressions coercing SLRU page numbers to int.\nTwo sources:\n\n grep -i 'int\\b.*page' $(git grep -l SimpleLruInit)\n make && touch $(git grep -l SimpleLruInit) && make PROFILE=-Wconversion 2>&1 | less -p '.int. from .int64. may alter its value'\n\n(Not every match needs to change.)\n\n> --- a/src/include/access/slru.h\n> +++ b/src/include/access/slru.h\n\n> @@ -127,7 +127,15 @@ typedef struct SlruCtlData\n> \t * the behavior of this callback has no functional implications.) Use\n> \t * SlruPagePrecedesUnitTests() in SLRUs meeting its criteria.\n> \t */\n> -\tbool\t\t(*PagePrecedes) (int, int);\n> +\tbool\t\t(*PagePrecedes) (int64, int64);\n> +\n> +\t/*\n> +\t * If true, use long segment filenames formed from lower 48 bits of the\n> +\t * segment number, e.g. pg_xact/000000001234. Otherwise, use short\n> +\t * filenames formed from lower 16 bits of the segment number e.g.\n> +\t * pg_xact/1234.\n> +\t */\n> +\tbool\t\tlong_segment_names;\n\nSlruFileName() makes 15-character (60-bit) file names. Where does the 48-bit\nlimit arise? How does the SlruFileName() comment about a 24-bit limit for\nshort names relate this comment's 16-bit limit?\n\nnm\n\n\n",
"msg_date": "Tue, 25 Jun 2024 17:27:47 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi Noah,\n\n> This yielded commit 4ed8f09 \"Index SLRUs by 64-bit integers rather than by\n> 32-bit integers\" and left some expressions coercing SLRU page numbers to int.\n> Two sources:\n>\n> grep -i 'int\\b.*page' $(git grep -l SimpleLruInit)\n> make && touch $(git grep -l SimpleLruInit) && make PROFILE=-Wconversion 2>&1 | less -p '.int. from .int64. may alter its value'\n>\n> (Not every match needs to change.)\n\nI examined the new warnings introduced by 4ed8f09. Most of them seem\nto be harmless, for instance:\n\n```\nslru.c:657:43: warning: conversion from ‘int64’ {aka ‘long int’} to\n‘int’ may change value [-Wconversion]\n 657 | int rpageno = pageno %\nSLRU_PAGES_PER_SEGMENT;\n```\n\n```\nslru.c: In function ‘SlruReportIOError’:\nslru.c:962:43: warning: conversion from ‘int64’ {aka ‘long int’} to\n‘int’ may change value [-Wconversion]\n 962 | int rpageno = pageno %\nSLRU_PAGES_PER_SEGMENT;\n```\n\nInterestingly the patch decreased the overall number of warnings.\n\nI prepared the patch for clog.c. The rest of the warnings don't strike\nme as something we should immediately act on unless we have a bug\nreport. Or perhaps there is a particular warning that worries you?\n\n> > @@ -127,7 +127,15 @@ typedef struct SlruCtlData\n> > * the behavior of this callback has no functional implications.) Use\n> > * SlruPagePrecedesUnitTests() in SLRUs meeting its criteria.\n> > */\n> > - bool (*PagePrecedes) (int, int);\n> > + bool (*PagePrecedes) (int64, int64);\n> > +\n> > + /*\n> > + * If true, use long segment filenames formed from lower 48 bits of the\n> > + * segment number, e.g. pg_xact/000000001234. Otherwise, use short\n> > + * filenames formed from lower 16 bits of the segment number e.g.\n> > + * pg_xact/1234.\n> > + */\n> > + bool long_segment_names;\n>\n> SlruFileName() makes 15-character (60-bit) file names. Where does the 48-bit\n> limit arise? 
How does the SlruFileName() comment about a 24-bit limit for\n> short names relate this comment's 16-bit limit?\n\nYes, this comment is wrong. Here is a fix.\n\n[1]: https://www.postgresql.org/message-id/CAJ7c6TNMuKWUuMfh5KWgJJBoJGqPHYdZeN4t%2BLB6WdRLbDfVTw%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Wed, 26 Jun 2024 14:09:58 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Wed, Jun 26, 2024 at 02:09:58PM +0300, Aleksander Alekseev wrote:\n> > This yielded commit 4ed8f09 \"Index SLRUs by 64-bit integers rather than by\n> > 32-bit integers\" and left some expressions coercing SLRU page numbers to int.\n> > Two sources:\n> >\n> > grep -i 'int\\b.*page' $(git grep -l SimpleLruInit)\n> > make && touch $(git grep -l SimpleLruInit) && make PROFILE=-Wconversion 2>&1 | less -p '.int. from .int64. may alter its value'\n> >\n> > (Not every match needs to change.)\n> \n> I examined the new warnings introduced by 4ed8f09. Most of them seem\n> to be harmless, for instance:\n[...]\n> I prepared the patch for clog.c. The rest of the warnings don't strike\n> me as something we should immediately act on unless we have a bug\n> report. Or perhaps there is a particular warning that worries you?\n\nIs \"int\" acceptable or unacceptable in the following grep match?\n\nsrc/backend/commands/async.c:1274:\tint\t\t\theadPage = QUEUE_POS_PAGE(QUEUE_HEAD);\n\n\n",
"msg_date": "Wed, 26 Jun 2024 11:58:44 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi,\n\n> > I prepared the patch for clog.c. The rest of the warnings don't strike\n> > me as something we should immediately act on unless we have a bug\n> > report. Or perhaps there is a particular warning that worries you?\n>\n> Is \"int\" acceptable or unacceptable in the following grep match?\n>\n> src/backend/commands/async.c:1274: int headPage = QUEUE_POS_PAGE(QUEUE_HEAD);\n\nGood catch. We better use int64s here.\n\nHere is the corrected patchset.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Thu, 27 Jun 2024 13:45:51 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi,\n\n> Here is the corrected patchset.\n\nTWIMC this is currently listed as an open item for PG17 [1].\nSorry if everyone interested is already aware.\n\n[1]: https://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 8 Jul 2024 12:30:09 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Mon, Jul 08, 2024 at 12:30:09PM +0300, Aleksander Alekseev wrote:\n> TWIMC this is currently listed as an open item for PG17 [1].\n> Sorry if everyone interested is already aware.\n> \n> [1]: https://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items\n\nThe proposed patch looks rather incomplete to me, based on the fact\nthat this stuff has a lot of inconsistencies with the types used when\nmanipulating 64b SLRU pages. Some of them are harder to catch as the\nvariables don't specifically refer to pages.\n\nSo, even after v2, there are two more of these in asyncQueueUsage()\nwith the two QUEUE_POS_PAGE() for the head and tail positions:\n int headPage = QUEUE_POS_PAGE(QUEUE_HEAD);\n int tailPage = QUEUE_POS_PAGE(QUEUE_TAIL);\n\nasyncQueueReadAllNotifications() also has one:\nint curpage = QUEUE_POS_PAGE(pos);\n\nasyncQueueAdvanceTail() declares the following:\n int oldtailpage;\n int newtailpage;\n int boundary;\n\nAsyncQueueControl.stopPage is an int.\n\nAnd that's only for async.c. Alexander K., as the owner of the open\nitem, are you planning to look at that?\n--\nMichael",
"msg_date": "Tue, 9 Jul 2024 15:07:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi,\n\n> The proposed patch looks rather incomplete to me, based on the fact\n> that this stuff has a lot of inconsistencies with the types used when\n> manipulating 64b SLRU pages. Some of them are harder to catch as the\n> variables don't specifically refer to pages.\n>\n> So, even after v2, there are two more of these in asyncQueueUsage()\n> with the two QUEUE_POS_PAGE() for the head and tail positions:\n> int headPage = QUEUE_POS_PAGE(QUEUE_HEAD);\n> int tailPage = QUEUE_POS_PAGE(QUEUE_TAIL);\n>\n> asyncQueueReadAllNotifications() also has one:\n> int curpage = QUEUE_POS_PAGE(pos);\n>\n> asyncQueueAdvanceTail() declares the following:\n> int oldtailpage;\n> int newtailpage;\n> int boundary;\n>\n> AsyncQueueControl.stopPage is an int.\n>\n> And that's only for async.c. Alexander K., as the owner of the open\n> item, are you planning to look at that?\n\nThanks, Michael. I prepared a corrected patchset.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Thu, 11 Jul 2024 13:11:05 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Thu, Jul 11, 2024 at 01:11:05PM +0300, Aleksander Alekseev wrote:\n> Thanks, Michael. I prepared a corrected patchset.\n\nA comment about v3-0001.\n\n-\t * If true, use long segment filenames formed from lower 48 bits of the\n-\t * segment number, e.g. pg_xact/000000001234. Otherwise, use short\n-\t * filenames formed from lower 16 bits of the segment number e.g.\n-\t * pg_xact/1234.\n+\t * If true, use long segment filenames. Use short filenames otherwise.\n+\t * See SlruFileName().\n\nWe're losing some details here even if SlruFileName() has some\nexplanations, because one would need to read through the snprintf's\n04X to know that short file names include 4 characters. I'm OK to\nmention SlruFileName() rather than duplicate the knowledge here, but\nSlruFileName() should also be updated to mention the same level of\ndetails with some examples of file names.\n--\nMichael",
"msg_date": "Fri, 12 Jul 2024 10:13:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi,\n\n> A comment about v3-0001.\n>\n> - * If true, use long segment filenames formed from lower 48 bits of the\n> - * segment number, e.g. pg_xact/000000001234. Otherwise, use short\n> - * filenames formed from lower 16 bits of the segment number e.g.\n> - * pg_xact/1234.\n> + * If true, use long segment filenames. Use short filenames otherwise.\n> + * See SlruFileName().\n>\n> We're losing some details here even if SlruFileName() has some\n> explanations, because one would need to read through the snprintf's\n> 04X to know that short file names include 4 characters. I'm OK to\n> mention SlruFileName() rather than duplicate the knowledge here, but\n> SlruFileName() should also be updated to mention the same level of\n> details with some examples of file names.\n\nFair enough. Here is the updated patchset.\n\n\n--\nBest regards,\nAleksander Alekseev",
"msg_date": "Fri, 12 Jul 2024 12:44:54 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Fri, Jul 12, 2024 at 12:44:54PM +0300, Aleksander Alekseev wrote:\n> Fair enough. Here is the updated patchset.\n\nHearing nothing but cicadas as now is their season, I have taken the\ninitiative here to address this open item.\n\n0001 felt a bit too complicated in slru.h, so I have simplified it and\nkept all the details in slru.c with SlruFileName().\n\nI have reviewed all the code that uses SLRUs, and spotted three more\nproblematic code paths in predicate.c that needed an update like the\nothers for some pagenos. I've added these, and applied 0002. We\nshould be good now.\n--\nMichael",
"msg_date": "Tue, 23 Jul 2024 18:01:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "Hi,\n\n> Hearing nothing but cicadas as now is their season, I have taken the\n> initiative here to address this open item.\n>\n> 0001 felt a bit too complicated in slru.h, so I have simplified it and\n> kept all the details in slru.c with SlruFileName().\n>\n> I have reviewed all the code that uses SLRUs, and spotted three more\n> problematic code paths in predicate.c that needed an update like the\n> others for some pagenos. I've added these, and applied 0002. We\n> should be good now.\n\nThank you!\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 23 Jul 2024 12:09:58 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Tue, Jul 23, 2024 at 06:01:44PM +0900, Michael Paquier wrote:\n> Hearing nothing but cicadas as now is their season, I have taken the\n> initiative here to address this open item.\n> \n> 0001 felt a bit too complicated in slru.h, so I have simplified it and\n> kept all the details in slru.c with SlruFileName().\n> \n> I have reviewed all the code that uses SLRUs, and spotted three more\n> problematic code paths in predicate.c that needed an update like the\n> others for some pagenos. I've added these, and applied 0002. We\n> should be good now.\n\nI'm still seeing need for s/int/int64/ at:\n\n- \"pagesegno\" variable\n- return value of MultiXactIdToOffsetSegment()\n- return value of MXOffsetToMemberSegment()\n- callers of previous two\n\nOnly the first should be a live bug, since multixact isn't electing the higher\npageno ceiling.\n\n\n",
"msg_date": "Wed, 24 Jul 2024 06:00:59 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Wed, Jul 24, 2024 at 06:00:59AM -0700, Noah Misch wrote:\n> I'm still seeing need for s/int/int64/ at:\n\nNice catches! I've missed these.\n\n> - \"pagesegno\" variable\n> - return value of MultiXactIdToOffsetSegment()\n\nOnly used in four places for two elog(DEBUG1) entries with %x.\n\n> - return value of MXOffsetToMemberSegment()\n\nAlso used in four places for two elog(DEBUG1) entries with %x, plus\nthree callers in PerformMembersTruncation(), nothing fancy.\n\n> Only the first should be a live bug, since multixact isn't electing the higher\n> pageno ceiling.\n\nYes, and it makes a switch to long segment names everywhere easier.\nThere is a patch in the air to do that, without the pg_upgrade\nadditions required to do the transfer, though.\n\nI am attaching a patch for all these you have spotted, switching the\nlogs to use %llx. Does that look fine for you?\n--\nMichael",
"msg_date": "Thu, 25 Jul 2024 10:52:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Thu, Jul 25, 2024 at 10:52:13AM +0900, Michael Paquier wrote:\n> On Wed, Jul 24, 2024 at 06:00:59AM -0700, Noah Misch wrote:\n> > I'm still seeing need for s/int/int64/ at:\n\n> I am attaching a patch for all these you have spotted, switching the\n> logs to use %llx. Does that look fine for you?\n\nYes. I think that completes the project.\n\n\n",
"msg_date": "Thu, 25 Jul 2024 17:42:41 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
    "msg_contents": "On Fri, Jul 26, 2024 at 3:42 AM Noah Misch <noah@leadboat.com> wrote:\n> On Thu, Jul 25, 2024 at 10:52:13AM +0900, Michael Paquier wrote:\n> > On Wed, Jul 24, 2024 at 06:00:59AM -0700, Noah Misch wrote:\n> > > I'm still seeing need for s/int/int64/ at:\n>\n> > I am attaching a patch for all these you have spotted, switching the\n> > logs to use %llx. Does that look fine for you?\n>\n> Yes. I think that completes the project.\n\nThanks to everybody for working on this. It's a pity I didn't notice\nthis v17 open item was on me. Sorry for this.\n\nI took a look at the commits and at other SLRU code for similar issues.\nEverything looks good to me.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Fri, 26 Jul 2024 23:50:48 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Fri, Jul 26, 2024 at 11:50:48PM +0300, Alexander Korotkov wrote:\n> Thanks to everybody for working on this. It's pity I didn't notice\n> this is v17 open item on me. Sorry for this.\n\nNo problem. I've just applied now the remaining pieces down to 17.\n--\nMichael",
"msg_date": "Sat, 27 Jul 2024 07:24:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Sat, Jul 27, 2024 at 07:24:33AM +0900, Michael Paquier wrote:\n> I've just applied now the remaining pieces down to 17.\n\nComparing commit c9e2457 to the patch in ZqGvzSbW5TGKqZcE@paquier.xyz, the\ncommit lacks the slru.c portion.\n\n\n",
"msg_date": "Sat, 10 Aug 2024 10:50:55 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
},
{
"msg_contents": "On Sat, Aug 10, 2024 at 10:50:55AM -0700, Noah Misch wrote:\n> On Sat, Jul 27, 2024 at 07:24:33AM +0900, Michael Paquier wrote:\n>> I've just applied now the remaining pieces down to 17.\n> \n> Comparing commit c9e2457 to the patch in ZqGvzSbW5TGKqZcE@paquier.xyz, the\n> commit lacks the slru.c portion.\n\nAnd a portion of multixact.c as well, thanks! I am pretty sure that I\nhave messed up with a `git add` while doing a rebase on this dev\nbranch. I'll take care of it.\n--\nMichael",
"msg_date": "Mon, 19 Aug 2024 11:09:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: XID formatting and SLRU refactorings (was: Add 64-bit XIDs into\n PostgreSQL 15)"
}
] |
[
{
"msg_contents": "Hi,\n\nA customer has found a limitation, more than a bug:\n\nWhen a partitioned table has a foreign key that points to itself,\nand this FK points only to lines in the same partition\n(the partition key is part of the FK),\nyou cannot detach the partition: PostgreSQL claims that\nthe constraint is violated ;\nalthough it's impossible that the remaining partitions\ncontain lines pointing to the partition to-be-detached (or vice versa).\n\nIn some way this is logical:\nthe FK on a detached partition is still there, and points to the partitioned table.\n\nFor a human, this sounds illogical:\nthe data in the detached partition is « self-contained » and points to the same table.\n\nIt's not possible to modify the inherited constraint before detaching the table,\nand not possible to detach the table because of the constraint.\n\nThe only realistic workaround that we found was to get rid of the global FK, \nand rebuild independent FKs on each partition: logical but tedious, and error-prone to maintain.\n\nA suggestion:\nwhen the FK is on the partition itself and contains the partition key,\nallow to rewrite the constraint to point to the detached partition,\nor at least allow to drop it before detaching (in the same transaction).\nI have no idea if some syntax changes would be necessary,\nand no idea how easy to implement it would be. Is it worth it?\n\nThe script below reproduces the case.\n\nThanks for any comment.\n\n\\timing off\n\nDROP TABLE IF EXISTS demo1, demo2, demo3, demo ;\n\n-- A table with 3 partitions and a FK to itself ;\n-- * the partition key is in the PK and FK *\nCREATE TABLE demo (\n i int,\n j int,\n rj int,\n z text )\nPARTITION BY LIST (i);\n\nALTER TABLE demo ADD CONSTRAINT demo_pk PRIMARY KEY (i,j);\nALTER TABLE demo ADD CONSTRAINT demo_fk FOREIGN KEY (i,rj) REFERENCES demo (i,j) DEFERRABLE;\n\nCREATE TABLE demo1 PARTITION OF demo FOR VALUES IN (1);\nCREATE TABLE demo2 PARTITION OF demo FOR VALUES IN (2); \nCREATE TABLE demo3 PARTITION OF demo FOR VALUES IN (3);\n\n-- few data in each partition\n\nINSERT INTO demo (i,j,rj) VALUES (1,10, null);\n-- data pointing to the same partition: detaching this partition will have problems:\nINSERT INTO demo (i,j,rj) VALUES (2, 21, null);\nINSERT INTO demo (i,j,rj) VALUES (2, 31, 21);\n-- no FK used:\nINSERT INTO demo (i,j,rj) VALUES (3, 31, null);\n\n\\d+ demo\n\\d+ demo2\n\nTABLE demo1 ;\nTABLE demo2 ;\nTABLE demo3 ;\n\n-- Detaching partitions\n\\set ECHO queries\n\n\\echo \"Detaching demo3: it works (FK unused)\"\nBEGIN ;\n ALTER TABLE demo DETACH PARTITION demo3 ;\n \\echo \"Note that the constraint still points to the partitioned table\"\n \\d+ demo3\nROLLBACK ;\n\nBEGIN ;\n \\echo \"Cannot DETACH!\"\n \\echo \"This is our problem\"\n ALTER TABLE demo DETACH PARTITION demo2 ;\n -- ERROR: removing partition \"demo2\" violates foreign key constraint \"demo_i_rj_fkey1\"\n -- DETAIL : Key (i, rj)=(2, 21) is still referenced from table \"demo\".\nROLLBACK ;\n\n\n\\echo \n\\echo \"Trying work arounds\"\n\\echo \n\nBEGIN ;\n \\echo \"Drop FK only on partition: FAIL, not possible\"\n ALTER TABLE demo2 DROP CONSTRAINT demo_fk ; \n -- ERROR: cannot drop inherited constraint \"demo_fk\" of relation \"demo2\"\n ALTER TABLE demo DETACH PARTITION demo2 ;\n -- fail\nROLLBACK ;\n\nBEGIN ;\n \\echo \"UPDATE FK : works but destroys data and costly\"\n UPDATE demo2 SET rj=null ;\n ALTER TABLE demo DETACH PARTITION demo2 ;\nROLLBACK ;\n\nBEGIN ;\n \\echo \"UPDATE FK (partition key): not allowed and would be costly\"\n UPDATE demo2 SET i=null ;\n --ERROR: new row for relation \"demo2\" violates partition constraint\n ALTER TABLE demo DETACH PARTITION demo2 ; --KO\nROLLBACK ;\n\nBEGIN ;\n \\echo \"DROP whole constraint, DETACH, recreate : works but costly\"\n ALTER TABLE demo DROP CONSTRAINT demo_fk ;\n ALTER TABLE demo DETACH PARTITION demo2 ;\n ALTER TABLE demo ADD CONSTRAINT demo_fk FOREIGN KEY (i,rj) REFERENCES demo (i,j);\nROLLBACK ;\n\n\\echo \"Re-declare keys on partitions only: works but painful\"\nBEGIN ;\n ALTER TABLE demo1 ADD CONSTRAINT demo_fk1 FOREIGN KEY (i,rj) REFERENCES demo1 ;\n ALTER TABLE demo2 ADD CONSTRAINT demo_fk2 FOREIGN KEY (i,rj) REFERENCES demo2 ;\n ALTER TABLE demo3 ADD CONSTRAINT demo_fk3 FOREIGN KEY (i,rj) REFERENCES demo3 ;\n ALTER TABLE demo DROP CONSTRAINT demo_fk ;\n ALTER TABLE demo DETACH PARTITION demo2 ;\nROLLBACK ;\n\n\n\n\n-- \nChristophe Courtois\nConsultant Dalibo\nhttps://dalibo.com/\n\n\n",
"msg_date": "Thu, 17 Mar 2022 18:31:37 +0100",
"msg_from": "Christophe Courtois <christophe.courtois@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Detaching a partition with a FK on itself is not possible"
},
{
    "msg_contents": "Hi,\n\n\nI don't think this a bug, but a feature request. I therefore think hackers would be more appropriate.\n\n\nI don't see how an additional syntax to modify the constraint should help.\n\n\nIf I'd want to fix this, I'd try to teach the detach partition code about self referencing foreign keys. It seems to me like that would be the cleanest solution, because the user doesn't need to care about this at all.\n\nI don't think, I'll spend time on this in the near future though.\n\n\nRegards\nArne",
"msg_date": "Thu, 17 Mar 2022 17:58:04 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": false,
"msg_subject": "Re: Detaching a partition with a FK on itself is not possible"
},
{
    "msg_contents": "Hi,\n\nOn Thu, 17 Mar 2022 17:58:04 +0000\nArne Roland <A.Roland@index.de> wrote:\n\n> I don't think this a bug, but a feature request. I therefore think hackers\n> would be more appropriate.\n\n+1\n\nI changed the list destination\n\n> I don't see how an additional syntax to modify the constraint should help.\n\nMe neither.\n\n> If I'd want to fix this, I'd try to teach the detach partition code about\n> self referencing foreign keys. It seems to me like that would be the cleanest\n> solution, because the user doesn't need to care about this at all.\n\nTeaching the detach partition about self referencing means either:\n\n* it's safe to remove the FK\n* we can rewrite the FK for self referencing\n\nBoth solutions are not ideal from the original schema and user perspective.\n\nAnother solution could be to teach the create partition to detect a self\nreferencing FK and actually create a self referencing FK, not pointing to the\npartitioned table, and of course issuing a NOTICE to the client.\n\n\n\n",
"msg_date": "Mon, 21 Mar 2022 11:36:34 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: Detaching a partition with a FK on itself is not possible"
},
{
    "msg_contents": "From: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>\n\nSent: Monday, March 21, 2022 11:36\nSubject: Re: Detaching a partition with a FK on itself is not possible\n > I changed the list destination\nThanks\n\n> Another solution could be to teach the create partition to detect a self\n> referencing FK and actually create a self referencing FK, not pointing to the\n> partitioned table, and of course issuing a NOTICE to the client.\nThat's what I meant. I didn't think about the NOTICE, but that's a good idea.\n\nRegards\nArne",
"msg_date": "Mon, 21 Mar 2022 10:50:46 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": false,
"msg_subject": "Re: Detaching a partition with a FK on itself is not possible"
}
] |
[
{
    "msg_contents": "So far this commitfest these 10 patches have been marked committed.\nThat leaves us with 175 \"Needs Review\" and 28 \"Ready for Comitter\" so\nquite a ways to go ...\n\n* FUNCAPI tuplestore helper function\n* parse/analyze API refactoring\n* Add wal_compression=zstd\n* Add id's to various elements in protocol.sgml\n* Fix alter data type of clustered/replica identity columns\n* Fix flaky tests when synchronous_commit = off\n* Support of time zone patterns - of, tzh and tzm\n* Optionally automatically disable subscription on error\n* fix race condition between DROP TABLESPACE and checkpointing\n* ICU for global collation\n\n\nIn case anyone's looking for inspiration... Here are the list of\npatches marked Ready for Committer:\n\n* document the need to analyze partitioned tables\n* ExecTypeSetColNames is fundamentally broken\n* pg_dump - read data for some options from external file\n* Add comment about startup process getting a free procState array slot always\n* Doc patch for retryable xacts\n* Function to log backtrace of postgres processes\n* Allow multiple recursive self-references\n* Expose get_query_def()\n* Add checkpoint and redo LSN to LogCheckpointEnd log message\n* Fast COPY FROM command for the foreign tables\n* Consider parallel for LATERAL subqueries having LIMIT/OFFSET\n* Faster pglz compression\n* Full support for index LP_DEAD hint bits on standby\n* KnownAssignedXidsGetAndSetXmin performance\n* Parameter for planner estimates of recursive queries\n* enhancing plpgsql API for debugging and tracing\n* Make message at end-of-recovery less scary\n* Identify missing publications from publisher while create/alter subscription.\n* Allow providing restore_command as a command line option to pg_rewind\n* Avoid erroring out when unable to remove or parse logical rewrite\nfiles to save checkpoint work\n* Allow batched insert during cross-partition updates\n* Add callback table access method to reset filenode when dropping relation\n* use has_privs_for_role for predefined roles\n* range_agg with multirange inputs\n* Add new reloption to views for enabling row level security\n* Fix pg_rewind race condition just after promotion\n* Mitigate pg_rewind race condition, if config file is enlarged concurrently.\n* remove exclusive backup mode\n\n-- \ngreg\n\n\n",
"msg_date": "Thu, 17 Mar 2022 14:07:16 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Commitfest Update"
},
{
"msg_contents": "On Fri, Mar 18, 2022 at 3:08 AM Greg Stark <stark@mit.edu> wrote:\n> In case anyone's looking for inspiration... Here are the list of\n> patches marked Ready for Committer:\n\n> * Fast COPY FROM command for the foreign tables\n\nI have (re-)started reviewing this patch.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 18 Mar 2022 20:12:34 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest Update"
},
{
"msg_contents": "On Thu, Mar 17, 2022 at 02:07:16PM -0400, Greg Stark wrote:\n> So far this commitfest these 10 patches have been marked committed.\n> That leaves us with 175 \"Needs Review\" and 28 \"Ready for Comitter\" so\n> quite a ways to go ...\n\nIf it were me, I'd move these out of the way somehow; WOA/RWF or move to June:\n\nhttps://commitfest.postgresql.org/37/3291/ Add PGDLLIMPORT to all direct or indirect GUC variables\nhttps://commitfest.postgresql.org/37/3568/ PQexecParams binary handling example for REAL data type\n\nIs anyone claiming these ?\n\nhttps://commitfest.postgresql.org/37/3142/ Logging plan of the currently running query\nhttps://commitfest.postgresql.org/37/3298/ Showing I/O timings spent reading/writing temp buffers in EXPLAIN\nhttps://commitfest.postgresql.org/37/3050/ Extended statistics in EXPLAIN\nhttps://commitfest.postgresql.org/37/3508/ Avoid smgrimmedsync() during index build and add unbuffered IO API\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 29 Mar 2022 15:47:31 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest Update"
},
{
"msg_contents": "On Tue, Mar 29, 2022 at 4:47 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> If it were me, I'd move these out of the way somehow; WOA/RWF or move to June:\n>\n> https://commitfest.postgresql.org/37/3291/ Add PGDLLIMPORT to all direct or indirect GUC variables\n\nI plan to do this yet, but it seemed best to leave it until the end to\navoid creating as many merge conflicts.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 29 Mar 2022 17:02:37 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest Update"
}
] |
[
{
    "msg_contents": "Hackers,\n\nOver in [1], Joshua proposed a new set of Object Access Type hooks based on strings rather than Oids.\n\nHis patch was written to be applied atop my patch for granting privileges on gucs.\n\nOn review of his patch, I became uncomfortable with the complete lack of regression test coverage. To be fair, he did paste a bit of testing logic to the thread, but it appears to be based on pgaudit, and it is unclear how to include such a test in the core project, where pgaudit is not assumed to be installed.\n\nFirst, I refactored his patch to work against HEAD and not depend on my GUCs patch. Find that as v1-0001. The refactoring exposed a bit of a problem. To call the new hook for SET and ALTER SYSTEM commands, I need to pass in the Oid of a catalog table. But since my GUC patch isn't applied yet, there isn't any such table (pg_setting_acl or whatnot) to pass. So I'm passing InvalidOid, but I don't know if that is right. In any event, if we want a new API like this, we should think a bit harder about whether it can be used to check operations where no table Oid is applicable.\n\nSecond, I added a new test directory, src/test/modules/test_oat_hooks, which includes a new loadable module with hook implementations and a regression test for testing the object access hooks. The main point of the test is to log which hooks get called in which order, and which hooks do or do not get called when other hooks allow or deny access. That information shows up in the expected output as NOTICE messages.\n\nThis second patch has gotten a little long, and I'd like another pair of eyes on this before spending a second day on the effort. Please note that this is a quick WIP patch in response to the patch Joshua posted earlier today. Sorry for sometimes missing function comments, etc. The goal, if this design seems acceptable, is to polish this, hopefully with Joshua's assistance, and get it committed *before* my GUCs patch, so that my patch can be rebased to use it. Otherwise, if this is rejected, I can continue on the GUC patch without this.\n\n(FYI, I got a test failure from src/test/recovery/t/013_crash_restart.pl when testing v1-0001. I'm not sure yet what that is about.)\n\n\n\n\n\n\n[1] https://www.postgresql.org/message-id/flat/664799.1647456444%40sss.pgh.pa.us#c9721c2da88d59684ac7ac5fc36f09c1\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 17 Mar 2022 20:21:56 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "New Object Access Type hooks"
},
{
    "msg_contents": "On Thu, Mar 17, 2022 at 11:21 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> Hackers,\n>\n> Over in [1], Joshua proposed a new set of Object Access Type hooks based on strings rather than Oids.\n>\n> His patch was written to be applied atop my patch for granting privileges on gucs.\n>\n> On review of his patch, I became uncomfortable with the complete lack of regression test coverage. To be fair, he did paste a bit of testing logic to the thread, but it appears to be based on pgaudit, and it is unclear how to include such a test in the core project, where pgaudit is not assumed to be installed.\n>\n> First, I refactored his patch to work against HEAD and not depend on my GUCs patch. Find that as v1-0001. The refactoring exposed a bit of a problem. To call the new hook for SET and ALTER SYSTEM commands, I need to pass in the Oid of a catalog table. But since my GUC patch isn't applied yet, there isn't any such table (pg_setting_acl or whatnot) to pass. So I'm passing InvalidOid, but I don't know if that is right. In any event, if we want a new API like this, we should think a bit harder about whether it can be used to check operations where no table Oid is applicable.\n>\n> Second, I added a new test directory, src/test/modules/test_oat_hooks, which includes a new loadable module with hook implementations and a regression test for testing the object access hooks. The main point of the test is to log which hooks get called in which order, and which hooks do or do not get called when other hooks allow or deny access. That information shows up in the expected output as NOTICE messages.\n>\n> This second patch has gotten a little long, and I'd like another pair of eyes on this before spending a second day on the effort. Please note that this is a quick WIP patch in response to the patch Joshua posted earlier today. Sorry for sometimes missing function comments, etc. The goal, if this design seems acceptable, is to polish this, hopefully with Joshua's assistance, and get it committed *before* my GUCs patch, so that my patch can be rebased to use it. Otherwise, if this is rejected, I can continue on the GUC patch without this.\n>\n\nThis is great, thank you for doing this. I didn't even realize the OAT\nhooks had no regression tests.\n\nIt looks good to me, I reviewed both and tested the module. I wonder\nif the slight abuse of subid is warranted with brand new hooks going\nin but not enough to object, I just hope this doesn't rise to the too\nlarge to merge this late level.\n\n> (FYI, I got a test failure from src/test/recovery/t/013_crash_restart.pl when testing v1-0001. I'm not sure yet what that is about.)\n>\n>\n>\n> [1] https://www.postgresql.org/message-id/flat/664799.1647456444%40sss.pgh.pa.us#c9721c2da88d59684ac7ac5fc36f09c1\n\n>\n\n\n",
"msg_date": "Fri, 18 Mar 2022 10:16:02 -0400",
"msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\n\n> On Mar 18, 2022, at 7:16 AM, Joshua Brindle <joshua.brindle@crunchydata.com> wrote:\n> \n> This is great, thank you for doing this. I didn't even realize the OAT\n> hooks had no regression tests.\n> \n> It looks good to me, I reviewed both and tested the module. I wonder\n> if the slight abuse of subid is warranted with brand new hooks going\n> in but not enough to object, I just hope this doesn't rise to the too\n> large to merge this late level.\n\nThe majority of the patch is regression testing code, stuff which doesn't get installed. It's even marked as NO_INSTALLCHECK, so it won't get installed even as part of \"make installcheck\". That seems safe enough to me.\n\nNot including tests of OAT seems worse, as it leaves us open to breaking the behavior without realizing we've done so. A refactoring of the core code might cause hooks to be called in a different order, something which isn't necessarily wrong, but should not be done unknowingly.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 18 Mar 2022 08:15:47 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New Object Access Type hooks"
},
{
    "msg_contents": "\nOn 3/18/22 11:15, Mark Dilger wrote:\n>\n>> On Mar 18, 2022, at 7:16 AM, Joshua Brindle <joshua.brindle@crunchydata.com> wrote:\n>>\n>> This is great, thank you for doing this. I didn't even realize the OAT\n>> hooks had no regression tests.\n>>\n>> It looks good to me, I reviewed both and tested the module. I wonder\n>> if the slight abuse of subid is warranted with brand new hooks going\n>> in but not enough to object, I just hope this doesn't rise to the too\n>> large to merge this late level.\n\n\nThe core code is extracted from a current CF patch, so I think in\nprinciple it's OK.\n\n\nI haven't looked at it in detail, but regarding the test code I'm not\nsure why there's a .control file, since this isn't a loadable extension,\nnor why there's a test_oat_hooks.h file.\n\n\n> The majority of the patch is regression testing code, stuff which doesn't get installed. It's even marked as NO_INSTALLCHECK, so it won't get installed even as part of \"make installcheck\". That seems safe enough to me.\n>\n> Not including tests of OAT seems worse, as it leaves us open to breaking the behavior without realizing we've done so. A refactoring of the core code might cause hooks to be called in a different order, something which isn't necessarily wrong, but should not be done unknowingly.\n>\n\nYes, and in any case we've added test code after feature freeze in the past.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 18 Mar 2022 18:04:11 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\nOn 3/17/22 23:21, Mark Dilger wrote:\n> Hackers,\n>\n> Over in [1], Joshua proposed a new set of Object Access Type hooks based on strings rather than Oids.\n>\n> His patch was written to be applied atop my patch for granting privileges on gucs.\n>\n> On review of his patch, I became uncomfortable with the complete lack of regression test coverage. To be fair, he did paste a bit of testing logic to the thread, but it appears to be based on pgaudit, and it is unclear how to include such a test in the core project, where pgaudit is not assumed to be installed.\n>\n> First, I refactored his patch to work against HEAD and not depend on my GUCs patch. Find that as v1-0001. The refactoring exposed a bit of a problem. To call the new hook for SET and ALTER SYSTEM commands, I need to pass in the Oid of a catalog table. But since my GUC patch isn't applied yet, there isn't any such table (pg_setting_acl or whatnot) to pass. So I'm passing InvalidOid, but I don't know if that is right. In any event, if we want a new API like this, we should think a bit harder about whether it can be used to check operations where no table Oid is applicable.\n\n\nMy first inclination is to say it's probably ok. The immediately obvious\nalternative would be to create yet another set of functions that don't\nhave classId parameters. That doesn't seem attractive.\n\nModulo that issue I think patch 1 is basically ok, but we should fix the\ncomments in objectaccess.c. Rather than \"It is [the] entrypoint ...\" we\nshould have something like \"Oid variant entrypoint ...\" and \"Name\nvariant entrypoint ...\", and also fix the function names in the comments.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 21 Mar 2022 11:41:04 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "> On Mar 18, 2022, at 3:04 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> I haven't looked at it in detail, but regarding the test code I'm not\n> sure why there's a .control file, since this isn't a loadable extension,\n> not why there's a test_oat_hooks.h file.\n\nThe .control file exists because the test defines a loadable module which defines the hooks. The test_oat_hooks.h file was extraneous, and has been removed in v2.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 21 Mar 2022 12:57:34 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\n\n> On Mar 21, 2022, at 8:41 AM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> My first inclination is to say it's probably ok. The immediately obvious\n> alternative would be to create yet another set of functions that don't\n> have classId parameters. That doesn't seem attractive.\n> \n> Modulo that issue I think patch 1 is basically ok, but we should fix the\n> comments in objectaccess.c. Rather than \"It is [the] entrypoint ...\" we\n> should have something like \"Oid variant entrypoint ...\" and \"Name\n> variant entrypoint ...\", and also fix the function names in the comments.\n\nJoshua,\n\nDo you care to create a new version of this, perhaps based on the v2-0001 patch I just posted?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 21 Mar 2022 12:58:32 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\nOn 3/21/22 15:57, Mark Dilger wrote:\n>> On Mar 18, 2022, at 3:04 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>> I haven't looked at it in detail, but regarding the test code I'm not\n>> sure why there's a .control file, since this isn't a loadable extension,\n>> not why there's a test_oat_hooks.h file.\n> The .control file exists because the test defines a loadable module which defines the hooks. \n\n\n\nTo the best of my knowledge .control files are only used by extensions,\nnot by other modules. They are only referenced in\nsrc/backend/commands/extension.c in the backend code. For example,\nauto_explain which is a loadable module but not en extension does not\nhave one, and I bet if you remove it you'll find this will work just fine.\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 21 Mar 2022 16:30:57 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "> On Mar 21, 2022, at 1:30 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> To the best of my knowledge .control files are only used by extensions,\n> not by other modules. They are only referenced in\n> src/backend/commands/extension.c in the backend code. For example,\n> auto_explain which is a loadable module but not en extension does not\n> have one, and I bet if you remove it you'll find this will work just fine.\n\nFixed, also with adjustments to Joshua's function comments.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 21 Mar 2022 16:08:48 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "On Fri, Mar 18, 2022 at 4:22 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> (FYI, I got a test failure from src/test/recovery/t/013_crash_restart.pl when testing v1-0001. I'm not sure yet what that is about.)\n\nDoesn't look like 0001 has anything to do with that... Are you on a\nMac? Did it look like this recent failure from CI?\n\nhttps://cirrus-ci.com/task/4686108033286144\nhttps://api.cirrus-ci.com/v1/artifact/task/4686108033286144/log/src/test/recovery/tmp_check/log/regress_log_013_crash_restart\nhttps://api.cirrus-ci.com/v1/artifact/task/4686108033286144/log/src/test/recovery/tmp_check/log/013_crash_restart_primary.log\n\nI have no idea what is going on there, but searching for discussion\nbrought me here...\n\n\n",
"msg_date": "Tue, 22 Mar 2022 18:03:31 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\n\n> On Mar 21, 2022, at 10:03 PM, Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> On Fri, Mar 18, 2022 at 4:22 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> (FYI, I got a test failure from src/test/recovery/t/013_crash_restart.pl when testing v1-0001. I'm not sure yet what that is about.)\n> \n> Doesn't look like 0001 has anything to do with that... Are you on a\n> Mac?\n\nYes, macOS Catalina, currently 10.15.7.\n \n> Did it look like this recent failure from CI?\n> \n> https://cirrus-ci.com/task/4686108033286144\n> https://api.cirrus-ci.com/v1/artifact/task/4686108033286144/log/src/test/recovery/tmp_check/log/regress_log_013_crash_restart\n> https://api.cirrus-ci.com/v1/artifact/task/4686108033286144/log/src/test/recovery/tmp_check/log/013_crash_restart_primary.log\n\nI no longer have the logs for comparison.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 22 Mar 2022 06:15:15 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\nOn 3/22/22 01:03, Thomas Munro wrote:\n> On Fri, Mar 18, 2022 at 4:22 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> (FYI, I got a test failure from src/test/recovery/t/013_crash_restart.pl when testing v1-0001. I'm not sure yet what that is about.)\n> Doesn't look like 0001 has anything to do with that... Are you on a\n> Mac? Did it look like this recent failure from CI?\n\n\nProbably not connected. It's working fine for me on Ubuntu/\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 22 Mar 2022 09:25:18 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\nOn 3/21/22 19:08, Mark Dilger wrote:\n>\n>> On Mar 21, 2022, at 1:30 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>> To the best of my knowledge .control files are only used by extensions,\n>> not by other modules. They are only referenced in\n>> src/backend/commands/extension.c in the backend code. For example,\n>> auto_explain which is a loadable module but not en extension does not\n>> have one, and I bet if you remove it you'll find this will work just fine.\n> Fixed, also with adjustments to Joshua's function comments.\n>\n\nPushed with slight adjustments - the LOAD was unnecessary as was the\nsetting of client_min_messages - the latter would have made buildfarm\nanimals unhappy.\n\n\nNow you need to re-submit your GUCs patch I think.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 22 Mar 2022 10:41:05 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "Hi,\n\nOn Tue, Mar 22, 2022 at 10:41:05AM -0400, Andrew Dunstan wrote:\n> \n> Pushed with slight adjustments - the LOAD was unnecessary as was the\n> setting of client_min_messages - the latter would have made buildfarm\n> animals unhappy.\n\nFor the record this just failed on my buildfarm animal:\nhttps://brekka.postgresql.org/cgi-bin/show_stage_log.pl?nm=lapwing&dt=2022-03-22%2014%3A40%3A10&stg=misc-check.\n\n\n",
"msg_date": "Tue, 22 Mar 2022 23:14:44 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\n\n> On Mar 22, 2022, at 8:14 AM, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> Hi,\n> \n> On Tue, Mar 22, 2022 at 10:41:05AM -0400, Andrew Dunstan wrote:\n>> \n>> Pushed with slight adjustments - the LOAD was unnecessary as was the\n>> setting of client_min_messages - the latter would have made buildfarm\n>> animals unhappy.\n> \n> For the record this just failed on my buildfarm animal:\n> https://brekka.postgresql.org/cgi-bin/show_stage_log.pl?nm=lapwing&dt=2022-03-22%2014%3A40%3A10&stg=misc-check.\n\nculicidae is complaining:\n\n==~_~===-=-===~_~== pgsql.build/src/test/modules/test_oat_hooks/log/postmaster.log ==~_~===-=-===~_~==\n2022-03-22 14:53:27.175 UTC [2166986][postmaster][:0] LOG: starting PostgreSQL 15devel on x86_64-pc-linux-gnu, compiled by gcc (Debian 11.2.0-18) 11.2.0, 64-bit\n2022-03-22 14:53:27.175 UTC [2166986][postmaster][:0] LOG: listening on Unix socket \"/tmp/pg_regress-RiE7x8/.s.PGSQL.6280\"\n2022-03-22 14:53:27.198 UTC [2167008][not initialized][:0] FATAL: test_oat_hooks must be loaded via shared_preload_libraries\n2022-03-22 14:53:27.202 UTC [2167006][not initialized][:0] FATAL: test_oat_hooks must be loaded via shared_preload_libraries\n2022-03-22 14:53:27.203 UTC [2167009][not initialized][:0] FATAL: test_oat_hooks must be loaded via shared_preload_libraries\n2022-03-22 14:53:27.204 UTC [2166986][postmaster][:0] LOG: checkpointer process (PID 2167006) exited with exit code 1\n2022-03-22 14:53:27.204 UTC [2166986][postmaster][:0] LOG: terminating any other active server processes\n2022-03-22 14:53:27.204 UTC [2166986][postmaster][:0] LOG: shutting down because restart_after_crash is off\n2022-03-22 14:53:27.206 UTC [2166986][postmaster][:0] LOG: database system is shut down\n==~_~===-=-===~_~== pgsql.build/src/test/modules/test_rls_hooks/log/initdb.log ==~_~===-=-===~_~==\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 22 Mar 2022 08:26:45 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\nOn 3/22/22 11:26, Mark Dilger wrote:\n>\n>> On Mar 22, 2022, at 8:14 AM, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>\n>> Hi,\n>>\n>> On Tue, Mar 22, 2022 at 10:41:05AM -0400, Andrew Dunstan wrote:\n>>> Pushed with slight adjustments - the LOAD was unnecessary as was the\n>>> setting of client_min_messages - the latter would have made buildfarm\n>>> animals unhappy.\n>> For the record this just failed on my buildfarm animal:\n>> https://brekka.postgresql.org/cgi-bin/show_stage_log.pl?nm=lapwing&dt=2022-03-22%2014%3A40%3A10&stg=misc-check.\n> culicidae is complaining:\n>\n> ==~_~===-=-===~_~== pgsql.build/src/test/modules/test_oat_hooks/log/postmaster.log ==~_~===-=-===~_~==\n> 2022-03-22 14:53:27.175 UTC [2166986][postmaster][:0] LOG: starting PostgreSQL 15devel on x86_64-pc-linux-gnu, compiled by gcc (Debian 11.2.0-18) 11.2.0, 64-bit\n> 2022-03-22 14:53:27.175 UTC [2166986][postmaster][:0] LOG: listening on Unix socket \"/tmp/pg_regress-RiE7x8/.s.PGSQL.6280\"\n> 2022-03-22 14:53:27.198 UTC [2167008][not initialized][:0] FATAL: test_oat_hooks must be loaded via shared_preload_libraries\n> 2022-03-22 14:53:27.202 UTC [2167006][not initialized][:0] FATAL: test_oat_hooks must be loaded via shared_preload_libraries\n> 2022-03-22 14:53:27.203 UTC [2167009][not initialized][:0] FATAL: test_oat_hooks must be loaded via shared_preload_libraries\n> 2022-03-22 14:53:27.204 UTC [2166986][postmaster][:0] LOG: checkpointer process (PID 2167006) exited with exit code 1\n> 2022-03-22 14:53:27.204 UTC [2166986][postmaster][:0] LOG: terminating any other active server processes\n> 2022-03-22 14:53:27.204 UTC [2166986][postmaster][:0] LOG: shutting down because restart_after_crash is off\n> 2022-03-22 14:53:27.206 UTC [2166986][postmaster][:0] LOG: database system is shut down\n> ==~_~===-=-===~_~== pgsql.build/src/test/modules/test_rls_hooks/log/initdb.log ==~_~===-=-===~_~==\n>\n>\n\n\nThat seems quite weird. I'm not sure how it's getting loaded at all if\nnot via shared_preload_libraries\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 22 Mar 2022 11:48:45 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> That seems quite weird. I'm not sure how it's getting loaded at all if\n> not via shared_preload_libraries\n\nSome other animals are showing this:\n\ndiff -U3 /home/postgres/pgsql/src/test/modules/test_oat_hooks/expected/test_oat_hooks.out /home/postgres/pgsql/src/test/modules/test_oat_hooks/results/test_oat_hooks.out\n--- /home/postgres/pgsql/src/test/modules/test_oat_hooks/expected/test_oat_hooks.out\t2022-03-22 11:57:40.224991011 -0400\n+++ /home/postgres/pgsql/src/test/modules/test_oat_hooks/results/test_oat_hooks.out\t2022-03-22 11:59:59.998983366 -0400\n@@ -48,6 +48,8 @@\n SELECT * FROM regress_test_table;\n NOTICE: in executor check perms: superuser attempting execute\n NOTICE: in executor check perms: superuser finished execute\n+NOTICE: in executor check perms: superuser attempting execute\n+NOTICE: in executor check perms: superuser finished execute\n t \n ---\n (0 rows)\n@@ -95,6 +97,8 @@\n ^\n NOTICE: in executor check perms: non-superuser attempting execute\n NOTICE: in executor check perms: non-superuser finished execute\n+NOTICE: in executor check perms: non-superuser attempting execute\n+NOTICE: in executor check perms: non-superuser finished execute\n t \n ---\n (0 rows)\n@@ -168,6 +172,8 @@\n ^\n NOTICE: in executor check perms: superuser attempting execute\n NOTICE: in executor check perms: superuser finished execute\n+NOTICE: in executor check perms: superuser attempting execute\n+NOTICE: in executor check perms: superuser finished execute\n t \n ---\n (0 rows)\n\n\nI can duplicate that by adding \"force_parallel_mode = regress\"\nto test_oat_hooks.conf, so a fair bet is that the duplication\ncomes from executing the same hook in both leader and worker.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 22 Mar 2022 12:02:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\nOn 3/22/22 12:02, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> That seems quite weird. I'm not sure how it's getting loaded at all if\n>> not via shared_preload_libraries\n> Some other animals are showing this:\n>\n> diff -U3 /home/postgres/pgsql/src/test/modules/test_oat_hooks/expected/test_oat_hooks.out /home/postgres/pgsql/src/test/modules/test_oat_hooks/results/test_oat_hooks.out\n> --- /home/postgres/pgsql/src/test/modules/test_oat_hooks/expected/test_oat_hooks.out\t2022-03-22 11:57:40.224991011 -0400\n> +++ /home/postgres/pgsql/src/test/modules/test_oat_hooks/results/test_oat_hooks.out\t2022-03-22 11:59:59.998983366 -0400\n> @@ -48,6 +48,8 @@\n> SELECT * FROM regress_test_table;\n> NOTICE: in executor check perms: superuser attempting execute\n> NOTICE: in executor check perms: superuser finished execute\n> +NOTICE: in executor check perms: superuser attempting execute\n> +NOTICE: in executor check perms: superuser finished execute\n> t \n> ---\n> (0 rows)\n> @@ -95,6 +97,8 @@\n> ^\n> NOTICE: in executor check perms: non-superuser attempting execute\n> NOTICE: in executor check perms: non-superuser finished execute\n> +NOTICE: in executor check perms: non-superuser attempting execute\n> +NOTICE: in executor check perms: non-superuser finished execute\n> t \n> ---\n> (0 rows)\n> @@ -168,6 +172,8 @@\n> ^\n> NOTICE: in executor check perms: superuser attempting execute\n> NOTICE: in executor check perms: superuser finished execute\n> +NOTICE: in executor check perms: superuser attempting execute\n> +NOTICE: in executor check perms: superuser finished execute\n> t \n> ---\n> (0 rows)\n>\n>\n> I can duplicate that by adding \"force_parallel_mode = regress\"\n> to test_oat_hooks.conf, so a fair bet is that the duplication\n> comes from executing the same hook in both leader and worker.\n>\n> \t\t\t\n\n\n\nOK, thanks. My test didn't include that one setting :-(\n\n\nIf I can't com up with a very quick fix I'll revert it.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 22 Mar 2022 12:09:20 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\n\n> On Mar 22, 2022, at 9:09 AM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> If I can't com up with a very quick fix I'll revert it.\n\nThe problem is coming from the REGRESS_exec_check_perms, which was included in the patch to demonstrate when the other hooks fired relative to the ExecutorCheckPerms_hook, but since it is causing problems, I can submit a patch with that removed. Give me a couple minutes....\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 22 Mar 2022 09:11:09 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "> On Mar 22, 2022, at 9:11 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> Give me a couple minutes....\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 22 Mar 2022 09:15:39 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> The problem is coming from the REGRESS_exec_check_perms, which was included in the patch to demonstrate when the other hooks fired relative to the ExecutorCheckPerms_hook, but since it is causing problems, I can submit a patch with that removed. Give me a couple minutes....\n\nMaybe better to suppress the audit messages if in a parallel worker?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 22 Mar 2022 12:26:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 3/22/22 11:26, Mark Dilger wrote:\n>> culicidae is complaining:\n>> 2022-03-22 14:53:27.198 UTC [2167008][not initialized][:0] FATAL: test_oat_hooks must be loaded via shared_preload_libraries\n\n> That seems quite weird. I'm not sure how it's getting loaded at all if\n> not via shared_preload_libraries\n\nAfter checking culicidae's config, I've duplicated this failure\nby building with EXEC_BACKEND defined. So I'd opine that there\nis something broken about the method test_oat_hooks uses to\ndecide if it was loaded via shared_preload_libraries or not.\n(Note that the failures appear to be coming out of auxiliary\nprocesses such as the checkpointer.)\n\nAs a quick-n-dirty fix to avoid reverting the entire test module,\nperhaps just delete this error check for now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 22 Mar 2022 12:33:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "> On Mar 22, 2022, at 9:33 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 3/22/22 11:26, Mark Dilger wrote:\n>>> culicidae is complaining:\n>>> 2022-03-22 14:53:27.198 UTC [2167008][not initialized][:0] FATAL: test_oat_hooks must be loaded via shared_preload_libraries\n> \n>> That seems quite weird. I'm not sure how it's getting loaded at all if\n>> not via shared_preload_libraries\n> \n> After checking culicidae's config, I've duplicated this failure\n> by building with EXEC_BACKEND defined. So I'd opine that there\n> is something broken about the method test_oat_hooks uses to\n> decide if it was loaded via shared_preload_libraries or not.\n> (Note that the failures appear to be coming out of auxiliary\n> processes such as the checkpointer.)\n> \n> As a quick-n-dirty fix to avoid reverting the entire test module,\n> perhaps just delete this error check for now.\n\nOk, done as you suggest:\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 22 Mar 2022 09:48:24 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> On Mar 22, 2022, at 9:33 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> As a quick-n-dirty fix to avoid reverting the entire test module,\n>> perhaps just delete this error check for now.\n\n> Ok, done as you suggest:\n\nI only suggested removing the error check in _PG_init, not\nchanging the way the test works.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 22 Mar 2022 12:58:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\nOn 3/22/22 12:58, Tom Lane wrote:\n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> On Mar 22, 2022, at 9:33 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> As a quick-n-dirty fix to avoid reverting the entire test module,\n>>> perhaps just delete this error check for now.\n>> Ok, done as you suggest:\n> I only suggested removing the error check in _PG_init, not\n> changing the way the test works.\n>\n> \t\t\t\n\n\n\nMark and I discussed this offline, and decided there was no requirement\nfor the module to be preloaded. Do you have a different opinion?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 22 Mar 2022 13:04:15 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\n\n> On Mar 22, 2022, at 9:58 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n>> Ok, done as you suggest:\n> \n> I only suggested removing the error check in _PG_init, not\n> changing the way the test works.\n\nI should have been more explicit and said, \"done as y'all suggest\". The \"To\" line of that email was to you and Andrew.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 22 Mar 2022 10:04:55 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 3/22/22 12:58, Tom Lane wrote:\n>> I only suggested removing the error check in _PG_init, not\n>> changing the way the test works.\n\n> Mark and I discussed this offline, and decided there was no requirement\n> for the module to be preloaded. Do you have a different opinion?\n\nNo, I was actually about to make the same point: it seems to me there\nare arguable use-cases for loading it shared, loading it per-session\n(perhaps via ALTER USER SET or ALTER DATABASE SET to target particular\nusers/DBs), or even manually LOADing it. So the module code should\nnot be prejudging how it's used.\n\nOn reflection, I withdraw my complaint about changing the way the\ntest script loads the module. Getting rid of the need for a custom\n.conf file simplifies the test module, and that seems good.\nSo I'm on board with Mark's patch now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 22 Mar 2022 13:08:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\nOn 3/22/22 13:08, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 3/22/22 12:58, Tom Lane wrote:\n>>> I only suggested removing the error check in _PG_init, not\n>>> changing the way the test works.\n>> Mark and I discussed this offline, and decided there was no requirement\n>> for the module to be preloaded. Do you have a different opinion?\n> No, I was actually about to make the same point: it seems to me there\n> are arguable use-cases for loading it shared, loading it per-session\n> (perhaps via ALTER USER SET or ALTER DATABASE SET to target particular\n> users/DBs), or even manually LOADing it. So the module code should\n> not be prejudging how it's used.\n>\n> On reflection, I withdraw my complaint about changing the way the\n> test script loads the module. Getting rid of the need for a custom\n> .conf file simplifies the test module, and that seems good.\n> So I'm on board with Mark's patch now.\n>\n> \t\t\t\n\n\n\nOK, I have pushed that.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 22 Mar 2022 13:47:27 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> OK, I have pushed that.\n\nIt seems like you could remove the NO_INSTALLCHECK restriction\ntoo. You already removed the comment defending it, and it\nseems to work fine as an installcheck now if I remove that\nlocally.\n\nOther nitpicks:\n\n* the IsParallelWorker test could use a comment\n\n* I notice a typo \"permisisons\" in test_oat_hooks.sql\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 22 Mar 2022 14:01:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\nOn 3/22/22 14:01, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> OK, I have pushed that.\n> It seems like you could remove the NO_INSTALLCHECK restriction\n> too. You already removed the comment defending it, and it\n> seems to work fine as an installcheck now if I remove that\n> locally.\n>\n> Other nitpicks:\n>\n> * the IsParallelWorker test could use a comment\n>\n> * I notice a typo \"permisisons\" in test_oat_hooks.sql\n>\n> \t\t\t\n\n\n\nFixed\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 22 Mar 2022 16:34:52 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Fixed\n\nNow that we got past the hard failures, we can see that the test\nfalls over with (some?) non-default encodings, as for instance\nhere:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2022-03-22%2020%3A23%3A13\n\nI can replicate that by running the test under LANG=en_US.iso885915.\nWhat I think is happening is:\n\n(1) Rather unwisely, the relevant InvokeNamespaceSearchHook calls\nappear in recomputeNamespacePath. That means that their timing\ndepends heavily on accidents of caching.\n\n(2) If we decide that we need an encoding conversion to talk to\nthe client, there'll be a lookup for the conversion function\nearly during session startup. That will cause the namespace\nsearch path to get computed then, before the test module has been\nloaded and certainly before the audit GUC has been turned on.\n\n(3) At the point where the test expects some audit notices\nto come out, nothing happens because the search path is\nalready validated.\n\nI'm inclined to think that (1) is a seriously bad idea,\nnot only because of this instability, but because\n\n(a) the namespace cache logic is unlikely to cause the search-path\ncache to get invalidated when something happens that might cause an\nOAT hook to wish to change its decision, and\n\n(b) this placement means that the hook is invoked during cache loading\noperations that are likely to be super-sensitive to any additional\ncatalog accesses a hook might wish to do. (I await the results of the\nCLOBBER_CACHE_ALWAYS animals with trepidation.)\n\nNow, if our attitude to the OAT hooks is that we are going to\nsprinkle some at random and whether they are useful is someone\nelse's problem, then maybe these are not interesting concerns.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 22 Mar 2022 18:18:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-22 13:47:27 -0400, Andrew Dunstan wrote:\n> OK, I have pushed that.\n\nSeems like it might actually be good to test that object access hooks work\nwell in a parallel worker. How about going the other way and explicitly setting\nforce_parallel_mode = disabled for parts of the test and to enabled for\nothers?\n\n- Andres\n\n\n",
"msg_date": "Tue, 22 Mar 2022 15:20:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\n\n> On Mar 22, 2022, at 3:20 PM, Andres Freund <andres@anarazel.de> wrote:\n> \n> Seems like it might actually be good to test that object access hooks work\n> well in a parallel worker. How about going the other way and explicitly setting\n> force_parallel_mode = disabled for parts of the test and to enabled for\n> others?\n\nWouldn't we get differing numbers of NOTICE messages depending on how many parallel workers there are? Or would you propose setting the number of workers to a small, fixed value?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 22 Mar 2022 15:37:37 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> On Mar 22, 2022, at 3:20 PM, Andres Freund <andres@anarazel.de> wrote:\n>> Seems like it might actually be good to test that object access hooks work\n>> well in a parallel worker. How about going the other way and explicitly setting\n>> force_parallel_mode = disabled for parts of the test and to enabled for\n>> others?\n\n> Wouldn't we get differing numbers of NOTICE messages depending on how\n> many parallel workers there are? Or would you propose setting the\n> number of workers to a small, fixed value?\n\nThe value would have to be \"1\", else you are going to have issues\nwith notices from different workers being interleaved differently\nfrom run to run. You might have that anyway, due to interleaving\nof leader and worker messages.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 22 Mar 2022 18:41:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-22 18:41:45 -0400, Tom Lane wrote:\n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> >> On Mar 22, 2022, at 3:20 PM, Andres Freund <andres@anarazel.de> wrote:\n> >> Seems like it might actually be good to test that object access hooks work\n> >> well in a parallel worker. How about going the other way and explicitly setting\n> >> force_parallel_mode = disabled for parts of the test and to enabled for\n> >> others?\n> \n> > Wouldn't we get differing numbers of NOTICE messages depending on how\n> > many parallel workers there are? Or would you propose setting the\n> > number of workers to a small, fixed value?\n\nYes.\n\n\n> The value would have to be \"1\", else you are going to have issues\n> with notices from different workers being interleaved differently\n> from run to run.\n\nYea. Possible one could work around those with some effort (using multiple\nnotification channels maybe), but there seems little to glean from multiple\nworkers that a single worker wouldn't show.\n\n\n> You might have that anyway, due to interleaving of leader and worker\n> messages.\n\nThat part could perhaps be addressed by setting parallel_leader_participation\n= 0.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 22 Mar 2022 16:08:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "If I'm not wrong, this is still causing issues at least on cfbot/windows, even\nsince f0206d99.\n\nhttps://cirrus-ci.com/task/5266352712712192\nhttps://cirrus-ci.com/task/5061218867085312\nhttps://cirrus-ci.com/task/5663822005403648\nhttps://cirrus-ci.com/task/5744257246953472\n\nhttps://cirrus-ci.com/task/5744257246953472\n[22:26:50.939] test test_oat_hooks ... FAILED 173 ms\n\nhttps://api.cirrus-ci.com/v1/artifact/task/5744257246953472/log/src/test/modules/test_oat_hooks/regression.diffs\ndiff -w -U3 c:/cirrus/src/test/modules/test_oat_hooks/expected/test_oat_hooks.out c:/cirrus/src/test/modules/test_oat_hooks/results/test_oat_hooks.out\n--- c:/cirrus/src/test/modules/test_oat_hooks/expected/test_oat_hooks.out\t2022-03-22 22:13:23.386895200 +0000\n+++ c:/cirrus/src/test/modules/test_oat_hooks/results/test_oat_hooks.out\t2022-03-22 22:26:51.104419600 +0000\n@@ -15,12 +15,6 @@\n NOTICE: in process utility: superuser finished CreateRoleStmt\n CREATE TABLE regress_test_table (t text);\n NOTICE: in process utility: superuser attempting CreateStmt\n-NOTICE: in object access: superuser attempting namespace search (subId=0) [no report on violation, allowed]\n-LINE 1: CREATE TABLE regress_test_table (t text);\n- ^\n-NOTICE: in object access: superuser finished namespace search (subId=0) [no report on violation, allowed]\n-LINE 1: CREATE TABLE regress_test_table (t text);\n- ^\n NOTICE: in object access: superuser attempting create (subId=0) [explicit]\n\n\n",
"msg_date": "Tue, 22 Mar 2022 18:13:11 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> If I'm not wrong, this is still causing issues at least on cfbot/windows, even\n> since f0206d99.\n\nThat's probably a variant of the encoding dependency I described\nupthread.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 22 Mar 2022 19:36:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\nOn 3/22/22 18:18, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> Fixed\n> Now that we got past the hard failures, we can see that the test\n> falls over with (some?) non-default encodings, as for instance\n> here:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2022-03-22%2020%3A23%3A13\n>\n> I can replicate that by running the test under LANG=en_US.iso885915.\n> What I think is happening is:\n>\n> (1) Rather unwisely, the relevant InvokeNamespaceSearchHook calls\n> appear in recomputeNamespacePath. That means that their timing\n> depends heavily on accidents of caching.\n>\n> (2) If we decide that we need an encoding conversion to talk to\n> the client, there'll be a lookup for the conversion function\n> early during session startup. That will cause the namespace\n> search path to get computed then, before the test module has been\n> loaded and certainly before the audit GUC has been turned on.\n>\n> (3) At the point where the test expects some audit notices\n> to come out, nothing happens because the search path is\n> already validated.\n>\n> I'm inclined to think that (1) is a seriously bad idea,\n> not only because of this instability, but because\n>\n> (a) the namespace cache logic is unlikely to cause the search-path\n> cache to get invalidated when something happens that might cause an\n> OAT hook to wish to change its decision, and\n>\n> (b) this placement means that the hook is invoked during cache loading\n> operations that are likely to be super-sensitive to any additional\n> catalog accesses a hook might wish to do. (I await the results of the\n> CLOBBER_CACHE_ALWAYS animals with trepidation.)\n>\n> Now, if our attitude to the OAT hooks is that we are going to\n> sprinkle some at random and whether they are useful is someone\n> else's problem, then maybe these are not interesting concerns.\n\n\nSo this was a pre-existing problem that the test has exposed? I don't\nthink we can just say \"you deal with it\", and if I understand you right\nyou don't think that either.\n\nI could make the buildfarm quiet again by resetting NO_INSTALLCHECK\ntemporarily.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 22 Mar 2022 19:51:20 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 3/22/22 18:18, Tom Lane wrote:\n>> Now, if our attitude to the OAT hooks is that we are going to\n>> sprinkle some at random and whether they are useful is someone\n>> else's problem, then maybe these are not interesting concerns.\n\n> So this was a pre-existing problem that the test has exposed? I don't\n> think we can just say \"you deal with it\", and if I understand you right\n> you don't think that either.\n\nYeah, my point exactly: the placement of those hooks needs to be rethought.\nI'm guessing what we ought to do is let the cached namespace OID list\nget built without interference, and then allow the OAT hook to filter\nit when it's read.\n\n> I could make the buildfarm quiet again by resetting NO_INSTALLCHECK\n> temporarily.\n\nI was able to reproduce it under \"make check\" as long as I had\nLANG set to one of the troublesome values, so I'm not real sure\nthat that'll be enough.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 22 Mar 2022 20:07:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\nOn 3/22/22 20:07, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 3/22/22 18:18, Tom Lane wrote:\n>>> Now, if our attitude to the OAT hooks is that we are going to\n>>> sprinkle some at random and whether they are useful is someone\n>>> else's problem, then maybe these are not interesting concerns.\n>> So this was a pre-existing problem that the test has exposed? I don't\n>> think we can just say \"you deal with it\", and if I understand you right\n>> you don't think that either.\n> Yeah, my point exactly: the placement of those hooks needs to be rethought.\n> I'm guessing what we ought to do is let the cached namespace OID list\n> get built without interference, and then allow the OAT hook to filter\n> it when it's read.\n>\n>> I could make the buildfarm quiet again by resetting NO_INSTALLCHECK\n>> temporarily.\n> I was able to reproduce it under \"make check\" as long as I had\n> LANG set to one of the troublesome values, so I'm not real sure\n> that that'll be enough.\n>\n> \t\t\t\n\n\nThe buildfarm only runs installcheck under different locales/encodings.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 22 Mar 2022 20:11:01 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\nOn 3/22/22 20:11, Andrew Dunstan wrote:\n> On 3/22/22 20:07, Tom Lane wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> On 3/22/22 18:18, Tom Lane wrote:\n>>>> Now, if our attitude to the OAT hooks is that we are going to\n>>>> sprinkle some at random and whether they are useful is someone\n>>>> else's problem, then maybe these are not interesting concerns.\n>>> So this was a pre-existing problem that the test has exposed? I don't\n>>> think we can just say \"you deal with it\", and if I understand you right\n>>> you don't think that either.\n>> Yeah, my point exactly: the placement of those hooks needs to be rethought.\n>> I'm guessing what we ought to do is let the cached namespace OID list\n>> get built without interference, and then allow the OAT hook to filter\n>> it when it's read.\n>>\n>>> I could make the buildfarm quiet again by resetting NO_INSTALLCHECK\n>>> temporarily.\n>> I was able to reproduce it under \"make check\" as long as I had\n>> LANG set to one of the troublesome values, so I'm not real sure\n>> that that'll be enough.\n>>\n>> \t\t\t\n>\n> The buildfarm only runs installcheck under different locales/encodings.\n\n\nBut you're right about the non-installcheck cases. fairywren had that\nissue. I have committed a (tested) fix for those too to force\nNO_LOCALE/UTF8.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 23 Mar 2022 11:19:26 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "Hi,\n\nI just rebased the meson tree (atop 75b1521dae1) and the test_oat_hooks test\nfail on windows with meson. They don't with our \"homegrown\" buildsystem, but\njust because it won't run them.\n\nhttps://cirrus-ci.com/build/6101947223638016\nhttps://cirrus-ci.com/task/5869668815601664?logs=check_world#L67\nhttps://api.cirrus-ci.com/v1/artifact/task/5869668815601664/log/build/testrun/test_oat_hooks/pg_regress/regression.diffs\n\ndiff -w -U3 C:/cirrus/src/test/modules/test_oat_hooks/expected/test_oat_hooks.out C:/cirrus/build/testrun/test_oat_hooks/pg_regress/results/test_oat_hooks.out\n--- C:/cirrus/src/test/modules/test_oat_hooks/expected/test_oat_hooks.out\t2022-03-24 18:56:39.592048000 +0000\n+++ C:/cirrus/build/testrun/test_oat_hooks/pg_regress/results/test_oat_hooks.out\t2022-03-24 19:03:33.910466700 +0000\n@@ -15,12 +15,6 @@\n NOTICE: in process utility: superuser finished CreateRoleStmt\n CREATE TABLE regress_test_table (t text);\n NOTICE: in process utility: superuser attempting CreateStmt\n-NOTICE: in object access: superuser attempting namespace search (subId=0) [no report on violation, allowed]\n-LINE 1: CREATE TABLE regress_test_table (t text);\n- ^\n-NOTICE: in object access: superuser finished namespace search (subId=0) [no report on violation, allowed]\n-LINE 1: CREATE TABLE regress_test_table (t text);\n- ^\n NOTICE: in object access: superuser attempting create (subId=0) [explicit]\n NOTICE: in object access: superuser finished create (subId=0) [explicit]\n NOTICE: in object access: superuser attempting create (subId=0) [explicit]\n\n\nI don't think this is meson specific...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 24 Mar 2022 13:59:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\nOn 3/24/22 16:59, Andres Freund wrote:\n> Hi,\n>\n> I just rebased the meson tree (atop 75b1521dae1) and the test_oat_hooks test\n> fail on windows with meson. They don't with our \"homegrown\" buildsystem, but\n> just because it won't run them.\n>\n> https://cirrus-ci.com/build/6101947223638016\n> https://cirrus-ci.com/task/5869668815601664?logs=check_world#L67\n> https://api.cirrus-ci.com/v1/artifact/task/5869668815601664/log/build/testrun/test_oat_hooks/pg_regress/regression.diffs\n>\n> diff -w -U3 C:/cirrus/src/test/modules/test_oat_hooks/expected/test_oat_hooks.out C:/cirrus/build/testrun/test_oat_hooks/pg_regress/results/test_oat_hooks.out\n> --- C:/cirrus/src/test/modules/test_oat_hooks/expected/test_oat_hooks.out\t2022-03-24 18:56:39.592048000 +0000\n> +++ C:/cirrus/build/testrun/test_oat_hooks/pg_regress/results/test_oat_hooks.out\t2022-03-24 19:03:33.910466700 +0000\n> @@ -15,12 +15,6 @@\n> NOTICE: in process utility: superuser finished CreateRoleStmt\n> CREATE TABLE regress_test_table (t text);\n> NOTICE: in process utility: superuser attempting CreateStmt\n> -NOTICE: in object access: superuser attempting namespace search (subId=0) [no report on violation, allowed]\n> -LINE 1: CREATE TABLE regress_test_table (t text);\n> - ^\n> -NOTICE: in object access: superuser finished namespace search (subId=0) [no report on violation, allowed]\n> -LINE 1: CREATE TABLE regress_test_table (t text);\n> - ^\n> NOTICE: in object access: superuser attempting create (subId=0) [explicit]\n> NOTICE: in object access: superuser finished create (subId=0) [explicit]\n> NOTICE: in object access: superuser attempting create (subId=0) [explicit]\n>\n>\n> I don't think this is meson specific...\n\n\nEven if you use NO_LOCALE=1/ENCODING=UTF8 as the Makefile now does?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 24 Mar 2022 17:31:59 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 3/24/22 16:59, Andres Freund wrote:\n>> I just rebased the meson tree (atop 75b1521dae1) and the test_oat_hooks test\n>> fail on windows with meson.\n\n> Even if you use NO_LOCALE=1/ENCODING=UTF8 as the Makefile now does?\n\nNote that that's basically a workaround for buggy placement of the\nOAT hooks, as per previous discussion. I hope that we fix that bug\npretty soon, so it shouldn't really be a factor for the meson conversion.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 24 Mar 2022 17:44:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-24 17:44:31 -0400, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > On 3/24/22 16:59, Andres Freund wrote:\n> >> I just rebased the meson tree (atop 75b1521dae1) and the test_oat_hooks test\n> >> fail on windows with meson.\n> \n> > Even if you use NO_LOCALE=1/ENCODING=UTF8 as the Makefile now does?\n\nI didn't do that, no. I guess I should have re-checked that in the makefile /\nreread the commit message in more detail - although it really doesn't provide\na lot of that. That's, uh, some shoveling problems under the carpet.\n\nI'll do that for now then.\n\n\n> Note that that's basically a workaround for buggy placement of the\n> OAT hooks, as per previous discussion. I hope that we fix that bug\n> pretty soon, so it shouldn't really be a factor for the meson conversion.\n\nYea. I found it's a lot easier to rebase the meson tree frequently rather than\ndo it in larger batches, so I just do so every few days...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 24 Mar 2022 14:58:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\n\n> On Mar 21, 2022, at 10:03 PM, Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> On Fri, Mar 18, 2022 at 4:22 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> (FYI, I got a test failure from src/test/recovery/t/013_crash_restart.pl when testing v1-0001. I'm not sure yet what that is about.)\n> \n> Doesn't look like 0001 has anything to do with that... Are you on a\n> Mac? Did it look like this recent failure from CI?\n> \n> https://cirrus-ci.com/task/4686108033286144\n> https://api.cirrus-ci.com/v1/artifact/task/4686108033286144/log/src/test/recovery/tmp_check/log/regress_log_013_crash_restart\n> https://api.cirrus-ci.com/v1/artifact/task/4686108033286144/log/src/test/recovery/tmp_check/log/013_crash_restart_primary.log\n> \n> I have no idea what is going on there, but searching for discussion\n> brought me here...\n\nI just got a crash in this test again. Are you still interested? I still have the logs. No core file appears to have been generated.\n\nThe test failure is\n\nnot ok 5 - psql query died successfully after SIGQUIT\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 4 Apr 2022 09:16:39 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> I just got a crash in this test again. Are you still interested? I still have the logs. No core file appears to have been generated.\n> The test failure is\n> not ok 5 - psql query died successfully after SIGQUIT\n\nHmm ... I can see one problem with that test:\n\nok( pump_until(\n\t\t$killme,\n\t\t$psql_timeout,\n\t\t\\$killme_stderr,\n\t\tqr/WARNING: terminating connection because of crash of another server process|server closed the connection unexpectedly|connection to server was lost/m\n\t),\n\t\"psql query died successfully after SIGQUIT\");\n\nIt's been a little while since that message looked like that.\nNowadays you get\n\nWARNING: terminating connection because of unexpected SIGQUIT signal\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\n\nUsually the test would succeed anyway because of matching the\nsecond or third regex alternative, but I wonder if there is\nsome other spelling of libpq's complaint that shows up\noccasionally. It'd be nice if we could see the contents of\n$killme_stderr upon failure.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Apr 2022 13:03:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "I wrote:\n> Usually the test would succeed anyway because of matching the\n> second or third regex alternative, but I wonder if there is\n> some other spelling of libpq's complaint that shows up\n> occasionally. It'd be nice if we could see the contents of\n> $killme_stderr upon failure.\n\nOK, now I'm confused, because pump_until is very clearly\n*trying* to report exactly that:\n\n if (not $proc->pumpable())\n {\n diag(\"pump_until: process terminated unexpectedly when searching for \\\"$until\\\" with stream: \\\"$$stream\\\"\");\n return 0;\n }\n\nand if I intentionally break the regex then I do see this\noutput when running the test by hand:\n\n# Running: pg_ctl kill QUIT 1922645\nok 4 - killed process with SIGQUIT\n# pump_until: process terminated unexpectedly when searching for \"(?^m:WARNING: terminating connection because of crash of another server process|server closed the connection foounexpectedly|connection to server was lost)\" with stream: \"psql:<stdin>:9: WARNING: terminating connection because of unexpected SIGQUIT signal\n# psql:<stdin>:9: server closed the connection unexpectedly\n# This probably means the server terminated abnormally\n# before or while processing the request.\n# \"\nnot ok 5 - psql query died successfully after SIGQUIT\n\nIs our CI setup failing to capture stderr from TAP tests??\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Apr 2022 13:41:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\n\n> On Apr 4, 2022, at 10:41 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Is our CI setup failing to capture stderr from TAP tests??\n\nI'm looking into the way our TAP test infrastructure assigns port numbers to nodes, and whether that is reasonable during parallel test runs with nodes stopping and starting again. On casual inspection, that doesn't seem ok, because the Cluster.pm logic to make sure nodes get unique ports doesn't seem to be thinking about other parallel tests running. It will notice if another node is already bound to the port, but if another node has been killed and not yet restarted, won't things go off the rails?\n\nI'm writing a parallel test just for this. Will get back to you.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 4 Apr 2022 10:44:25 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "I wrote:\n> Is our CI setup failing to capture stderr from TAP tests??\n\nOh, I'm barking up the wrong tree. This test must have been run\nagainst HEAD between 6da65a3f9 (23 Feb) and 2beb4acff (31 Mar), when\npump_until indeed didn't print this essential information :-(\n\nIf you just got this failure, could you look in the log to\nsee if there's a pump_until report?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Apr 2022 13:51:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "> On Apr 4, 2022, at 10:51 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Oh, I'm barking up the wrong tree. This test must have been run\n> against HEAD between 6da65a3f9 (23 Feb) and 2beb4acff (31 Mar), when\n> pump_until indeed didn't print this essential information :-(\n> \n> If you just got this failure, could you look in the log to\n> see if there's a pump_until report?\n\nI was running `make -j12 check-world` against my local patched version of master:\n\ncommit 80399fa5f208c4acd4ec194c47e534ba8dd3ae7c (HEAD -> 0001)\nAuthor: Mark Dilger <mark.dilger@enterprisedb.com>\nDate: Mon Mar 28 13:35:11 2022 -0700\n\n Allow grant and revoke of privileges on parameters\n \n Add new SET and ALTER SYSTEM privileges for configuration parameters\n (GUCs), and a new catalog, pg_parameter_acl, for tracking grants of\n these privileges.\n \n The privilege to SET a parameter marked USERSET is inherent in that\n parameter's marking and cannot be revoked. This saves cycles when\n performing SET operations, as looking up privileges in the catalog\n can be skipped. If we find that administrators need to revoke SET\n privilege on a particular variable from public, that variable can be\n redefined in future releases as SUSET with a default grant of SET to\n PUBLIC issued.\n\ncommit 4eb9798879680dcc0e3ebb301cf6f925dfa69422 (origin/master, origin/HEAD, master)\nAuthor: Andrew Dunstan <andrew@dunslane.net>\nDate: Mon Apr 4 10:12:30 2022 -0400\n\n Avoid freeing objects during json aggregate finalization\n \n Commit f4fb45d15c tried to free memory during aggregate finalization.\n This cause issues, particularly when used as a window function, so stop\n doing that.\n \n Per complaint by Jaime Casanova and diagnosis by Andres Freund\n \n Discussion: https://postgr.es/m/YkfeMNYRCGhySKyg@ahch-to\n\n\nThe test logs are attached.\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 4 Apr 2022 10:56:11 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> The test logs are attached.\n\nServer log looks as-expected:\n\n2022-04-04 09:11:51.087 PDT [2084] 013_crash_restart.pl LOG: statement: SELECT pg_sleep(3600);\n2022-04-04 09:11:51.094 PDT [2083] 013_crash_restart.pl WARNING: terminating connection because of unexpected SIGQUIT signal\n2022-04-04 09:11:51.095 PDT [2070] LOG: server process (PID 2083) exited with exit code 2\n2022-04-04 09:11:51.095 PDT [2070] DETAIL: Failed process was running: INSERT INTO alive VALUES($$in-progress-before-sigquit$$) RETURNING status;\n2022-04-04 09:11:51.095 PDT [2070] LOG: terminating any other active server processes\n\nI was hoping to see regress_log_013_crash_restart, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Apr 2022 14:07:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "> On Apr 4, 2022, at 11:07 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> I was hoping to see regress_log_013_crash_restart, though.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 4 Apr 2022 11:09:45 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\n\n> On Apr 4, 2022, at 10:44 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> I'm writing a parallel test just for this. Will get back to you.\n\nOk, that experiment didn't accomplish anything, beyond refreshing my memory regarding Cluster.pm preferring sockets over ports.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 4 Apr 2022 11:32:40 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n\n> # Running: pg_ctl kill QUIT 2083\n> ok 4 - killed process with SIGQUIT\n> # pump_until: process terminated unexpectedly when searching for \"(?^m:WARNING: terminating connection because of crash of another server process|server closed the connection unexpectedly|connection to server was lost)\" with stream: \"psql:<stdin>:9: WARNING: terminating connection because of unexpected SIGQUIT signal\n> # psql:<stdin>:9: could not send data to server: Socket is not connected\n> # \"\n> not ok 5 - psql query died successfully after SIGQUIT\n\nAnd there we have it: the test wasn't updated for the new backend message\nspelling, and we're seeing a different frontend behavior. Evidently the\nbackend is dying before we're able to send the \"SELECT 1;\" to it.\n\nI'm not quite sure whether it's a libpq bug that it doesn't produce the\n\"connection to server was lost\" message here, but in any case I suspect\nthat we shouldn't be checking for the second and third regex alternatives.\nThe \"terminating connection\" warning absolutely should get through, and\nif it doesn't we want to know about it. So my proposal for a fix is\nto change the regex to be just \"WARNING: terminating connection because\nof unexpected SIGQUIT signal\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Apr 2022 15:01:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "I wrote:\n> The \"terminating connection\" warning absolutely should get through,\n\n... oh, no, that's not guaranteed at all, since it's sent from quickdie().\nSo scratch that. Maybe we'd better add \"could not send data to server\"\nto the regex?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Apr 2022 15:05:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\n\n> On Apr 4, 2022, at 12:05 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> I wrote:\n>> The \"terminating connection\" warning absolutely should get through,\n> \n> ... oh, no, that's not guaranteed at all, since it's sent from quickdie().\n> So scratch that. Maybe we'd better add \"could not send data to server\"\n> to the regex?\n\nIf it fails in pqsecure_raw_write(), you get either \"server closed the connection unexpectedly\" or \"could not send data to server\". Do we need to support pgtls_write() or pg_GSS_write(), which have different error messages? Can anybody run the tests with TLS or GSS enabled? I assume the test framework prevents this, but I didn't check too closely....\n\nIs it possible that pgFlush will call pqSendSome which calls pqReadData before trying to write anything, and get back a \"could not receive data from server\" from pqsecure_raw_read()?\n\nIt's a bit hard to prove to myself which paths might be followed through this code. Thoughts?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 4 Apr 2022 13:28:17 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> On Apr 4, 2022, at 12:05 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> So scratch that. Maybe we'd better add \"could not send data to server\"\n>> to the regex?\n\n> If it fails in pqsecure_raw_write(), you get either \"server closed the connection unexpectedly\" or \"could not send data to server\". Do we need to support pgtls_write() or pg_GSS_write(), which have different error messages?\n\nDon't see why, since this test sets up a new cluster in which neither\nis enabled.\n\n> Is it possible that pgFlush will call pqSendSome which calls pqReadData before trying to write anything, and get back a \"could not receive data from server\" from pqsecure_raw_read()?\n\nYeah, it's plausible to get a failure on either the write or read side\ndepending on timing.\n\nPerhaps libpq should be trying harder to make those cases look alike, but\nthis test is about server behavior not libpq behavior, so I'm inclined\nto just make it lax.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Apr 2022 16:47:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "\n\n> On Apr 4, 2022, at 1:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Yeah, it's plausible to get a failure on either the write or read side\n> depending on timing.\n> \n> Perhaps libpq should be trying harder to make those cases look alike, but\n> this test is about server behavior not libpq behavior, so I'm inclined\n> to just make it lax.\n\n+1.\n\nI've gotten this test failure only a few times in perhaps the last six months, so if we narrow the opportunity for test failure without closing it entirely, we're just making the test failures that much harder to diagnose.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 4 Apr 2022 13:50:40 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> On Apr 4, 2022, at 1:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Perhaps libpq should be trying harder to make those cases look alike, but\n>> this test is about server behavior not libpq behavior, so I'm inclined\n>> to just make it lax.\n\n> +1.\n\n> I've gotten this test failure only a few times in perhaps the last six months, so if we narrow the opportunity for test failure without closing it entirely, we're just making the test failures that much harder to diagnose.\n\nDone that way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Apr 2022 22:10:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "On Thu, Mar 24, 2022 at 05:44:31PM -0400, Tom Lane wrote:\n> Note that that's basically a workaround for buggy placement of the\n> OAT hooks, as per previous discussion. I hope that we fix that bug\n> pretty soon, so it shouldn't really be a factor for the meson conversion.\n\nSo, this issue is still listed as an open item. What should we do?\nFrom what I get, the caching issues with the namespace lookup hook are\nnot new to v15, they just get exposed by the new test module\ntest_oat_hooks/. FWIW, I would vote against moving around hook calls\nin back branches as that could cause compatibility problems in\nexisting code relying on them, but it surely is unstable to keep these\nwhen recomputing the search_path.\n\nA removal from recomputeNamespacePath() implies an addition at the end\nof fetch_search_path() and fetch_search_path_array(). Perhaps an\nextra one in RangeVarGetCreationNamespace()? The question is how much\nof these we want, for example the search hook would be called now even\nwhen doing relation-specific checks like RelationIsVisible() and the\nkind.\n--\nMichael",
"msg_date": "Mon, 18 Apr 2022 15:50:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 03:50:11PM +0900, Michael Paquier wrote:\n> A removal from recomputeNamespacePath() implies an addition at the end\n> of fetch_search_path() and fetch_search_path_array(). Perhaps an\n> extra one in RangeVarGetCreationNamespace()? The question is how much\n> of these we want, for example the search hook would be called now even\n> when doing relation-specific checks like RelationIsVisible() and the\n> kind.\n\nI have been playing with this issue, and if we want to minimize the\nnumber of times the list of namespaces in activeSearchPath gets\nchecked through the search hook, it looks like this is going to\nrequire an additional cached list of namespace OIDs filtered through\nInvokeNamespaceSearchHook(). However, it is unclear to me how we can\nguarantee that any of the code paths forcing a recomputation of\nactiveSearchPath are not used for a caching phase, so it looks rather\neasy to mess up things and finish with a code path using an unfiltered\nactiveSearchPath. The set of *IsVisible() routines should be fine, at\nleast.\n\nAt the end, I am not sure that it is a wise time to redesign this\narea close to beta2, so I would vote for leaving this issue aside for\nnow. Another thing that we could do is to tweak the tests and silence\nthe part around OAT_NAMESPACE_SEARCH, which would increase the coverage\nwith installcheck, removing the need for ENCODING and NO_LOCALE.\n--\nMichael",
"msg_date": "Wed, 22 Jun 2022 16:48:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: New Object Access Type hooks"
}
] |
[
{
"msg_contents": "Hey, hackers,\n\nAdding and removing indexes is a regular part of database maintenance,\nbut in a large database, removing an index can be a very risky operation.\nRemoving the wrong index could have disastrous consequences for\nperformance, and it could take tens of minutes, or even hours, to rebuild\nthe entire index.\n\nI propose adding an ALTER INDEX command that can enable or disable an\nindex on a global level:\n\nALTER INDEX index_name ENABLE;\nALTER INDEX index_name DISABLE;\n\nA disabled index is still updated, and still enforces constraints, but it\nwill not be used for queries.\n\nWhether or not the index is disabled could also be specified at index\ncreation:\n\nCREATE INDEX index_name ON table_name (col1, col2) ENABLED; -- default\nCREATE INDEX index_name ON table_name (col1, col2) DISABLED;\n\nThis would be useful if a user anticipates index creation to take a long\ntime and they want to be able to carefully monitor the database once the\nindex starts getting used.\n\nIt would also be useful to be able to enable and disable indexes locally\nin the context of a single session to easily and safely verify that a\nquery can still be executed efficiently without an index:\n\nALTER INDEX index_name DISABLE SESSION;\n\nIt might also be reasonable to \"unset\" any local override to what's\nactually set on the index itself, but this would probably require slightly\ndifferent syntax: SET ENABLED = true / false / DEFAULT maybe?\n\nI am unsure of how a user would query this information; maybe a function\nlike pg_disabled_index_overrides() ? The permanent state of an index\nshould be reflected in the output of \\d <table> by appending 'DISABLED'\nto disabled indexes.\n\n\nThe pg_index catalog entry currently includes a column `indisvalid` which\nprevents queries from using the index, and this column can be set\nexplicitly, though not easily (it requires getting the oid of the index\nrelation from pg_class), and presumably not entirely safely. 
This column\ncontains significant semantic information about the state of the index, so\nI don't think it makes sense to burden it with additional meaning that is\nentirely user-dependent.\n\nSupporting global enabling/disabling of an index could be accomplished\nfairly simply by adding a `indisenabled` boolean flag to pg_index.\nUpdating this value would acquire an AccessExclusive lock on the index and\ncall an updated version of index_set_state_flags, which automatically\nhandles sending the cache invalidation message to other processes.\n(Is this sufficient to also invalidate all cached query plans?)\n\nThe actual \"disabling\" part can be handled by adding disable_cost inside\nthe cost_index function in costsize.c, similar to how enable_indexscan is\nhandled.\n\n\nSupporting session-local enabling/disabling of indexes is trickier. We can\nkeep track of the manual overrides in the backend process's local memory\nas a very light-weight option. (A simple linked list would suffice.) But\nwe have to take extra care to keep this up-to-date. When an index is\ndropped, any local overrides need to be dropped. It probably also makes\nsense to mimic the behavior of SET SESSION, which will rollback any\nchanges made during a transaction if the transaction rolls back. (And if\nwe handle this, maybe it makes sense to support ENABLE / DISABLE LOCAL as\nanalogues of SET LOCAL as well.)\n\nTo handle persisting/rolling back changes we can add a new AtEOXact\nfunction that gets called at the end of CommitTransaction and\nAbortTransaction.\n\nI'm less sure how to handle deleting entries when indexes are deleted by\nother transactions (or especially by the same transaction). Could we use\nCacheRegisterRelcacheCallback to be notified anytime the relcache is\nupdated and make sure all the indexes we have overrides for still exist?\nWhen would that callback be executed relative to our own process? 
If the\nbackend isn't in a transaction, it would have to check for deleted indexes\nright away, but if it is, we would have to wait for the end of the\ntransaction to update our list (possibly a job for\nAtEOXact_UpdateDisabledIndexes?) Are there other parts of Postgres that\nbehave similarly?\n\nA more heavy-weight option would be to actually store this info in a\ncatalog table, but that would add a lot of overhead to cost estimation\nduring query planning, so I don't think it's a great option.\n\n\nDoes this sound like a reasonable feature to add to Postgres? I feel like\nit would make it a lot easier to manage large databases and debug some\nquery performance problems. There are definitely some details to iron out,\nlike the exact syntax, and a lot of implementation details I'm unsure of,\nbut if people support it I'd be glad to try to implement it.\n\nThanks!\nPaul\n\n\n",
"msg_date": "Thu, 17 Mar 2022 23:16:24 -0700",
"msg_from": "Paul Martinez <hellopfm@gmail.com>",
"msg_from_op": true,
"msg_subject": "PROPOSAL: Support global and local disabling of indexes"
},
{
"msg_contents": "Hi,\n\nOn Thu, Mar 17, 2022 at 11:16:24PM -0700, Paul Martinez wrote:\n>\n> Adding and removing indexes is a regular part of database maintenance,\n> but in a large database, removing an index can be a very risky operation.\n> Removing the wrong index could have disastrous consequences for\n> performance, and it could take tens of minutes, or even hours, to rebuild\n> the entire index.\n>\n> I propose adding an ALTER INDEX command that can enable or disable an\n> index on a global level:\n>\n> ALTER INDEX index_name ENABLE;\n> ALTER INDEX index_name DISABLE;\n>\n> A disabled index is still updated, and still enforces constraints, but it\n> will not be used for queries.\n>\n> Whether or not the index is disabled could also be specified at index\n> creation:\n>\n> CREATE INDEX index_name ON table_name (col1, col2) ENABLED; -- default\n> CREATE INDEX index_name ON table_name (col1, col2) DISABLED;\n>\n> This would be useful if a user anticipates index creation to take a long\n> time and they want to be able to carefully monitor the database once the\n> index starts getting used.\n>\n> It would also be useful to be able to enable and disable indexes locally\n> in the context of a single session to easily and safely verify that a\n> query can still be executed efficiently without an index:\n>\n> ALTER INDEX index_name DISABLE SESSION;\n\nFor the record, all of that is already doable using plantuner extension:\nhttps://github.com/postgrespro/plantuner.\n\n\n",
"msg_date": "Fri, 18 Mar 2022 14:33:05 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PROPOSAL: Support global and local disabling of indexes"
},
{
"msg_contents": "On Thu, Mar 17, 2022 at 11:16:24PM -0700, Paul Martinez wrote:\n> I propose adding an ALTER INDEX command that can enable or disable an\n> index on a global level:\n\nSee also this thread:\nhttps://commitfest.postgresql.org/34/2937/\nhttps://www.postgresql.org/message-id/flat/CANbhV-H35fJsKnLoZJuhkYqg2MO4XLjR57Qwf=3-xOvG2+2UEg@mail.gmail.com\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 18 Mar 2022 04:06:30 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PROPOSAL: Support global and local disabling of indexes"
},
{
"msg_contents": "Just wanted to mention that this would be a useful feature for me. Had\npreviously been bitten by this:\nhttps://www.postgresql.org/message-id/flat/CAMjNa7c4pKLZe%2BZ0V49isKycnXQ6Y%3D3BO-4Gsj3QAwsd2r7Wrw%40mail.gmail.com\n\nEnded up \"solving\" by putting a where clause on all my exclusion\nconstraints I didn't want used for most queries (WHERE 1=1). That allowed\nme \"disable\" that index for all queries unless they explicitly have a 1=1\nconstant in the where clause.",
"msg_date": "Fri, 18 Mar 2022 09:01:59 -0400",
"msg_from": "Adam Brusselback <adambrusselback@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PROPOSAL: Support global and local disabling of indexes"
}
] |
[
{
"msg_contents": "Hello,\n\nI found a problem that planning takes too much time when the tables\nhave many child partitions. According to my observation, the planning\ntime increases in the order of O(n^2). Here, n is the number of child\npartitions. I attached the patch to solve this problem. Please be\nnoted that this patch is a PoC.\n\n1. Problem Statement\n\nThe problem arises in the next simple query. This query is modeled\nafter a university's grade system, joining tables about students,\nscores, and their GPAs to output academic records for each student.\n\n=====\nSELECT students.name, gpas.gpa AS gpa, sum(scores.score) AS total_score\nFROM students, scores, gpas\nWHERE students.id = scores.student_id AND students.id = gpas.student_id\nGROUP BY students.id, gpas.student_id;\n=====\n\nHere, since there are so many students enrolled in the university, we\nwill partition each table. If so, the planning time of the above query\nincreases very rapidly as the number of partitions increases.\n\nI conducted an experiment by varying the number of partitions of three\ntables (students, scores, and gpas) from 2 to 1024. The attached\nfigure illustrates the result. The blue line annotated with \"master\"\nstands for the result on the master branch. Obviously, its\ncomputational complexity is large.\n\nI attached SQL files to this e-mail as \"sample-queries.zip\". You can\nreproduce my experiment by the next steps:\n=====\n$ unzip sample-queries.zip\n$ cd sample-queries\n# Create tables and insert sample data ('n' denotes the number of partitions)\n$ psql -f create-table-n.sql\n# Measure planning time\n$ psql -f query-n.sql\n=====\n\n2. Where is Slow?\n\nIn order to identify bottlenecks, I ran a performance profiler(perf).\nThe \"perf-master.png\" is a call graph of planning of query-1024.sql.\n\n From this figure, it can be seen that \"bms_equal\" and \"bms_is_subset\"\ntake up most of the running time. 
Most of these functions are called\nwhen enumerating EquivalenceMembers in EquivalenceClass. The\nenumerations exist in src/backend/optimizer/path/equivclass.c and have\nthe following form.\n\n=====\nEquivalenceClass *ec = /* given */;\n\nEquivalenceMember *em;\nListCell *lc;\nforeach(lc, ec->ec_members)\n{\n em = (EquivalenceMember *) lfirst(lc);\n\n /* predicate is bms_equal or bms_is_subset, etc */\n if (!predicate(em))\n continue;\n\n /* The predicate satisfies */\n do something...;\n}\n=====\n\nThis foreach loop is a linear search, whose cost will become very high\nwhen there are many EquivalenceMembers in ec_members. This is the case\nwhen the number of partitions is large. Eliminating this heavy linear\nsearch is a key to improving planning performance.\n\n3. How to Solve?\n\nIn my patch, I made three different optimizations depending on the\npredicate pattern.\n\n3.1 When the predicate is \"!em->em_is_child\"\n\nIn equivclass.c, there are several processes performed when\nem_is_child is false. If a table has many partitions, the number of\nEquivalenceMembers which are not children is limited. Therefore, it is\nuseful to keep only the non-child members as a list in advance.\n\nMy patch adds the \"ec_not_child_members\" field to EquivalenceClass.\nThis field is a List containing non-child members. Taking advantage of\nthis, the previous loop can be rewritten as follows:\n\n=====\nforeach(lc, ec->ec_not_child_members)\n{\n em = (EquivalenceMember *) lfirst(lc);\n Assert(!em->em_is_child);\n do something...;\n}\n=====\n\n3.2 When the predicate is \"bms_equal(em->em_relids, relids)\"\n\n\"bms_equal\" is another example of the predicate. In this case,\nprocesses will be done when the \"em_relids\" matches certain Relids.\n\nThis type of loop can be quickly handled by utilizing a hash table.\nFirst, group EquivalenceMembers with the same Relids into a list.\nThen, create an associative array whose key is Relids and whose value\nis the list. 
In my patch, I added the \"ec_members_htab\" field to\nEquivalenceClass, which plays a role of an associative array.\n\nBased on this idea, the previous loop is transformed as follows. Here,\nthe FindEcMembersMatchingRelids function looks up the hash table and\nreturns the corresponding value, which is a list.\n=====\nforeach(lc, FindEcMembersMatchingRelids(ec, relids))\n{\n em = (EquivalenceMember *) lfirst(lc);\n Assert(bms_equal(em->em_relids, relids));\n do something...;\n}\n=====\n\n3.3 When the predicate is \"bms_is_subset(em->em_relids, relids)\"\n\nThere are several processings performed on EquivalenceMembers whose\nem_relids is a subset of the given \"relids\". In this case, the\npredicate is \"bms_is_subset\". Optimizing this search is not as easy as\nwith bms_equal, but the technique above by hash tables can be applied.\n\nThere are 2^m subsets if the number of elements of the \"relids\" is m.\nThe key here is that m is not so large in most cases. For example, m\nis up to 3 in the sample query, meaning that the number of subsets is\nat most 2^3=8. Therefore, we can enumerate all subsets within a\nrealistic time. Looking up the hash table with each subset as a key\nwill drastically reduce unnecessary searches. My patch's optimization\nis based on this notion.\n\nThis technique can be illustrated as the next pseudo-code. The code\niterates over all subsets and looks up the corresponding\nEquivalenceMembers from the hash table. The actual code is more\ncomplicated for performance reasons.\n\n===\nEquivalenceClass *ec = /* given */;\nRelids relids = /* given */;\n\nint num_members_in_relids = bms_num_members(relids);\nfor (int bit = 0; bit < (1 << num_members_in_relids); bit++)\n{\n EquivalenceMember *em;\n ListCell *lc;\n Relids subset = construct subset from 'bit';\n\n foreach(lc, FindEcMembersMatchingRelids(ec, subset))\n {\n em = (EquivalenceMember *) lfirst(lc);\n Assert(bms_is_subset(em->em_relids, relids));\n do something...;\n }\n}\n===\n\n4. 
Experimental Result\n\nThe red line in the attached figure is the planning time with my\npatch. The chart indicates that planning performance has been greatly\nimproved. The exact values are shown below.\n\nPlanning time of \"query-n.sql\" (n = number of partitions):\n n | Master (s) | Patched (s) | Speed up\n------------------------------------------\n 2 | 0.003 | 0.003 | 0.9%\n 4 | 0.004 | 0.004 | 1.0%\n 8 | 0.006 | 0.006 | 4.6%\n 16 | 0.011 | 0.010 | 5.3%\n 32 | 0.017 | 0.016 | 4.7%\n 64 | 0.032 | 0.030 | 8.0%\n 128 | 0.073 | 0.060 | 17.7%\n 256 | 0.216 | 0.142 | 34.2%\n 384 | 0.504 | 0.272 | 46.1%\n 512 | 0.933 | 0.462 | 50.4%\n 640 | 1.529 | 0.678 | 55.7%\n 768 | 2.316 | 1.006 | 56.6%\n 896 | 3.280 | 1.363 | 58.5%\n1024 | 4.599 | 1.770 | 61.5%\n\nWith 1024 partitions, the planning time was reduced by 61.5%. Besides,\nwith 128 partitions, which is a realistic use case, the performance\nincreased by 17.7%.\n\n5. Things to Be Discussed\n\n5.1 Regressions\n\nWhile my approach is effective for tables with a large number of\npartitions, it may cause performance degradation otherwise. For small\ncases, it is necessary to switch to a conventional algorithm. However,\nits threshold is not self-evident.\n\n5.2 Enumeration order\n\nMy patch may change the order in which members are enumerated. This\naffects generated plans.\n\n5.3 Code Quality\n\nSource code quality should be improved.\n\n=====\n\nAgain, I posted this patch as a PoC. I would appreciate it if you\nwould discuss the effectiveness of these optimizations with me.\n\nBest regards,\nYuya Watari",
"msg_date": "Fri, 18 Mar 2022 19:24:56 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On Fri, 18 Mar 2022 at 23:32, Yuya Watari <watari.yuya@gmail.com> wrote:\n> I found a problem that planning takes too much time when the tables\n> have many child partitions. According to my observation, the planning\n> time increases in the order of O(n^2). Here, n is the number of child\n> partitions. I attached the patch to solve this problem. Please be\n> noted that this patch is a PoC.\n\n> 3. How to Solve?\n\nI think a better way to solve this would be just to have a single hash\ntable over all EquivalenceClasses that allows fast lookups of\nEquivalenceMember->em_expr. I think there's no reason that a given\nExpr should appear in more than one non-merged EquivalenceClass. The\nEquivalenceClass a given Expr belongs to would need to be updated\nduring the merge process.\n\nFor functions such as get_eclass_for_sort_expr() and\nprocess_equivalence(), that would become a fairly fast hashtable\nlookup instead of having nested loops to find if an EquivalenceMember\nalready exists for the given Expr. We might not want to build the hash\ntable for all queries. Maybe we could just do it if we get to\nsomething like ~16 EquivalenceMember in total.\n\nAs of now, we don't have any means to hash Exprs, so all that\ninfrastructure would need to be built first. Peter Eisentraut is\nworking on a patch [1] which is a step towards having this.\n\nHere's a simple setup to show the pain of this problem:\n\ncreate table lp (a int, b int) partition by list(a);\nselect 'create table lp'||x::text|| ' partition of lp for values\nin('||x::text||');' from generate_Series(0,4095)x;\n\\gexec\nexplain analyze select * from lp where a=b order by a;\n\n Planning Time: 510.248 ms\n Execution Time: 264.659 ms\n\nDavid\n\n[1] https://commitfest.postgresql.org/37/3182/\n\n\n",
"msg_date": "Thu, 24 Mar 2022 15:03:38 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Dear David,\n\nThank you for your comments on my patch. I really apologize for my\nlate response.\n\nOn Thu, Mar 24, 2022 at 11:03 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> I think a better way to solve this would be just to have a single hash\n> table over all EquivalenceClasses that allows fast lookups of\n> EquivalenceMember->em_expr. I think there's no reason that a given\n> Expr should appear in more than one non-merged EquivalenceClass. The\n> EquivalenceClass a given Expr belongs to would need to be updated\n> during the merge process.\n\nThank you for your idea. However, I think building a hash table whose\nkey is EquivalenceMember->em_expr does not work for this case.\n\nWhat I am trying to optimize in this patch is the following code.\n\n=====\nEquivalenceClass *ec = /* given */;\n\nEquivalenceMember *em;\nListCell *lc;\nforeach(lc, ec->ec_members)\n{\n em = (EquivalenceMember *) lfirst(lc);\n\n /* predicate is bms_equal or bms_is_subset, etc */\n if (!predicate(em))\n continue;\n\n /* The predicate satisfies */\n do something...;\n}\n=====\n\n From my observation, the predicates above will be false in most cases\nand the subsequent processes are not executed. My optimization is\nbased on this notion and utilizes hash tables to eliminate calls of\npredicates.\n\nIf the predicate were \"em->em_expr == something\", the hash table whose\nkey is em_expr would be effective. However, the actual predicates are\nnot of this type but the following.\n\n// Find EquivalenceMembers whose relids is equal to the given relids\n(1) bms_equal(em->em_relids, relids)\n\n// Find EquivalenceMembers whose relids is a subset of the given relids\n(2) bms_is_subset(em->em_relids, relids)\n\nSince these predicates perform a match search for not em_expr but\nem_relids, we need to build a hash table with em_relids as key. 
If so,\nwe can drastically reduce the planning time for the pattern (1).\nBesides, by enumerating all subsets of relids, pattern (2) can be\noptimized. The detailed algorithm is described in the first email.\n\nI show an example of the pattern (1). The next code is in\nsrc/backend/optimizer/path/equivclass.c. As can be seen from this\ncode, the foreach loop tries to find an EquivalenceMember whose\ncur_em->em_relids is equal to rel->relids. If found, subsequent\nprocessing will be performed.\n\n== Before patched ==\nList *\ngenerate_implied_equalities_for_column(PlannerInfo *root,\n RelOptInfo *rel,\n ec_matches_callback_type callback,\n void *callback_arg,\n Relids prohibited_rels)\n{\n ...\n\n EquivalenceClass *cur_ec = (EquivalenceClass *)\nlist_nth(root->eq_classes, i);\n EquivalenceMember *cur_em;\n ListCell *lc2;\n\n cur_em = NULL;\n foreach(lc2, cur_ec->ec_members)\n {\n cur_em = (EquivalenceMember *) lfirst(lc2);\n if (bms_equal(cur_em->em_relids, rel->relids) &&\n callback(root, rel, cur_ec, cur_em, callback_arg))\n break;\n cur_em = NULL;\n }\n\n if (!cur_em)\n continue;\n\n ...\n}\n===\n\nMy patch modifies this code as follows. The em_foreach_relids_equals\nis a newly defined macro that finds EquivalenceMember satisfying the\nbms_equal. 
The macro looks up a hash table using rel->relids as a key.\nThis type of optimization cannot be achieved without using hash tables\nwhose key is em->em_relids.\n\n== After patched ==\nList *\ngenerate_implied_equalities_for_column(PlannerInfo *root,\n RelOptInfo *rel,\n ec_matches_callback_type callback,\n void *callback_arg,\n Relids prohibited_rels)\n{\n ...\n\n EquivalenceClass *cur_ec = (EquivalenceClass *)\nlist_nth(root->eq_classes, i);\n EquivalenceMember *cur_em;\n EquivalenceMember *other_em;\n\n cur_em = NULL;\n em_foreach_relids_equals(cur_em, cur_ec, rel->relids)\n {\n Assert(bms_equal(cur_em->em_relids, rel->relids));\n if (callback(root, rel, cur_ec, cur_em, callback_arg))\n break;\n cur_em = NULL;\n }\n\n if (!cur_em)\n continue;\n\n ...\n}\n===\n\n> We might not want to build the hash table for all queries.\n\nI agree with you. Building a lot of hash tables will consume much\nmemory. My idea for this problem is to let the hash table's key be a\npair of EquivalenceClass and Relids. However, this approach may lead\nto increasing looking up time of the hash table.\n\n==========\n\nI noticed that the previous patch does not work with the current HEAD.\nI attached the modified one to this email.\n\nAdditionally, I added my patch to the current commit fest [1].\n[1] https://commitfest.postgresql.org/38/3701/\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Wed, 22 Jun 2022 18:05:43 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Yuya Watari <watari.yuya@gmail.com> writes:\n> On Thu, Mar 24, 2022 at 11:03 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>> I think a better way to solve this would be just to have a single hash\n>> table over all EquivalenceClasses that allows fast lookups of\n>> EquivalenceMember->em_expr.\n\n> If the predicate were \"em->em_expr == something\", the hash table whose\n> key is em_expr would be effective. However, the actual predicates are\n> not of this type but the following.\n\n> // Find EquivalenceMembers whose relids is equal to the given relids\n> (1) bms_equal(em->em_relids, relids)\n\n> // Find EquivalenceMembers whose relids is a subset of the given relids\n> (2) bms_is_subset(em->em_relids, relids)\n\nYeah, that's a really interesting observation, and I agree that\nDavid's suggestion doesn't address it. Maybe after we fix this\nproblem, matching of em_expr would be the next thing to look at,\nbut your results say it isn't the first thing.\n\nI'm not real thrilled with trying to throw hashtables at the problem,\nthough. As David noted, they'd be counterproductive for simple\nqueries. Sure, we could address that with duplicate code paths,\nbut that's a messy and hard-to-tune approach. 
Also, I find the\nidea of hashing on all subsets of relids to be outright scary.\n\"m is not so large in most cases\" does not help when m *is* large.\n\nFor the bms_equal class of lookups, I wonder if we could get anywhere\nby adding an additional List field to every RelOptInfo that chains\nall EquivalenceMembers that match that RelOptInfo's relids.\nThe trick here would be to figure out when to build those lists.\nThe simple answer would be to do it lazily on-demand, but that\nwould mean a separate scan of all the EquivalenceMembers for each\nRelOptInfo; I wonder if there's a way to do better?\n\nPerhaps the bms_is_subset class could be handled in a similar\nway, ie do a one-time pass to make a List of all EquivalenceMembers\nthat use a RelOptInfo.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 03 Jul 2022 17:28:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Dear Tom,\n\nThank you for replying to my email.\n\nOn Mon, Jul 4, 2022 at 6:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm not real thrilled with trying to throw hashtables at the problem,\n> though. As David noted, they'd be counterproductive for simple\n> queries.\n\nAs you said, my approach that utilizes hash tables has some overheads,\nleading to degradation in query planning time.\n\nI tested the degradation by a brief experiment. In this experiment, I\nused a simple query shown below.\n\n===\nSELECT students.name, gpas.gpa AS gpa, sum(scores.score) AS total_score\nFROM students, scores, gpas\nWHERE students.id = scores.student_id AND students.id = gpas.student_id\nGROUP BY students.id, gpas.student_id;\n===\n\nHere, students, scores, and gpas tables have no partitions, i.e., they\nare regular tables. Therefore, my techniques do not work for this\nquery and instead may lead to some regression. I repeatedly issued\nthis query 1 million times and measured their planning times.\n\nThe attached figure describes the distribution of the planning times.\nThe figure indicates that my patch has no severe negative impacts on\nthe planning performance. However, there seems to be a slight\ndegradation.\n\nI show the mean and median of planning times below. With my patch, the\nplanning time became 0.002-0.004 milliseconds slower. We have to deal\nwith this problem, but reducing time complexity while keeping\ndegradation to zero is significantly challenging.\n\nPlanning time (ms)\n | Mean | Median\n------------------------------\n Master | 0.682 | 0.674\n Patched | 0.686 | 0.676\n------------------------------\n Degradation | 0.004 | 0.002\n\nOf course, the attached result is just an example. 
Significant\nregression might occur in other types of queries.\n\n> For the bms_equal class of lookups, I wonder if we could get anywhere\n> by adding an additional List field to every RelOptInfo that chains\n> all EquivalenceMembers that match that RelOptInfo's relids.\n> The trick here would be to figure out when to build those lists.\n> The simple answer would be to do it lazily on-demand, but that\n> would mean a separate scan of all the EquivalenceMembers for each\n> RelOptInfo; I wonder if there's a way to do better?\n>\n> Perhaps the bms_is_subset class could be handled in a similar\n> way, ie do a one-time pass to make a List of all EquivalenceMembers\n> that use a RelOptInfo.\n\nThank you for giving your idea. I will try to polish up my algorithm\nbased on your suggestion.\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Tue, 5 Jul 2022 17:57:14 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On 7/5/22 13:57, Yuya Watari wrote:\n> On Mon, Jul 4, 2022 at 6:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Perhaps the bms_is_subset class could be handled in a similar\n>> way, ie do a one-time pass to make a List of all EquivalenceMembers\n>> that use a RelOptInfo.\n> \n> Thank you for giving your idea. I will try to polish up my algorithm\n> based on your suggestion.\nThis work has significant interest for highly partitioned \nconfigurations. Are you still working on this patch? According to the \ncurrent state of the thread, changing the status to 'Waiting on author' \nmay be better until the next version.\nFeel free to reverse the status if you need more feedback.\n\n-- \nRegards\nAndrey Lepikhov\nPostgres Professional\n\n\n",
"msg_date": "Thu, 21 Jul 2022 16:35:49 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Dear Andrey Lepikhov,\n\nThank you for replying and being a reviewer for this patch. I really\nappreciate it.\n\n> Are you still working on this patch?\n\nYes, I’m working on improving this patch. It is not easy to address\nthe problems that this patch has, but I’m hoping to send a new version\nof it in a few weeks.\n\n-- \nBest regards,\nYuya Watari\n\n\n",
"msg_date": "Fri, 22 Jul 2022 18:16:51 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On Mon, 4 Jul 2022 at 09:28, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> For the bms_equal class of lookups, I wonder if we could get anywhere\n> by adding an additional List field to every RelOptInfo that chains\n> all EquivalenceMembers that match that RelOptInfo's relids.\n> The trick here would be to figure out when to build those lists.\n> The simple answer would be to do it lazily on-demand, but that\n> would mean a separate scan of all the EquivalenceMembers for each\n> RelOptInfo; I wonder if there's a way to do better?\n\nHow about, instead of EquivalenceClass having a List field named\nec_members, it has a Bitmapset field named ec_member_indexes and we\njust keep a List of all EquivalenceMembers in PlannerInfo and mark\nwhich ones are in the class by setting the bit in the class's\nec_member_indexes field.\n\nThat would be teamed up with a new eclass_member_indexes field in\nRelOptInfo to store the index into PlannerInfo's List of\nEquivalenceMembers that belong to the given RelOptInfo.\n\nFor searching:\nIf you want to get all EquivalenceMembers in an EquivalenceClass, you\nbms_next_member loop over the EC's ec_member_indexes field.\nIf you want to get all EquivalenceMembers for a given RelOptInfo, you\nbms_next_member loop over the RelOptInfo's eclass_member_indexes\nfield.\nIf you want to get all EquivalenceMembers for a given EquivalenceClass\nand RelOptInfo you need to do some bms_intersect() calls for the rel's\neclass_member_indexes and EC's ec_member_indexes.\n\nI'm unsure if we'd want to bms_union the RelOptInfo's\nec_member_indexes field for join rels. Looking at\nget_eclass_indexes_for_relids() we didn't do it that way for\neclass_indexes. 
Maybe that's because we're receiving RelIds in a few\nplaces without a RelOptInfo.\n\nCertainly, the CPU cache locality is not going to be as good as if we\nhad a List with all elements together, but for simple queries, there's\nnot going to be many EquivalenceClasses anyway, and for complex\nqueries, this should be a win.\n\nDavid\n\n\n",
"msg_date": "Wed, 27 Jul 2022 15:31:08 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Dear David,\n\nThank you for sharing your new idea.\n\nI agree that introducing a Bitmapset field may solve this problem. I\nwill try this approach in addition to previous ones.\n\nThank you again for helping me.\n\n-- \nBest regards,\nYuya Watari\n\n\n",
"msg_date": "Wed, 27 Jul 2022 15:07:03 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On Wed, 27 Jul 2022 at 18:07, Yuya Watari <watari.yuya@gmail.com> wrote:\n> I agree that introducing a Bitmapset field may solve this problem. I\n> will try this approach in addition to previous ones.\n\nI've attached a very half-done patch that might help you get started\non this. There are still 2 failing regression tests which seem to be\ndue to plan changes. I didn't expend any effort looking into why these\nplans changed.\n\nThe attached does not contain any actual optimizations to find the\nminimal set of EMs to loop through by masking the Bitmapsets that I\nmentioned in my post last night. I just quickly put it together to\nsee if there's some hole in the idea. I don't think there is.\n\nI've not really considered all of the places that we'll want to do the\nbit twiddling to get the minimal set of EquivalenceMember. I did see\nthere's a couple more functions in postgres_fdw.c that could be\noptimized.\n\nOne thing I've only partially thought about is what if you want to\nalso find EquivalenceMembers with a constant value. If there's a\nConst, then you'll lose the bit for that when you mask the ec's\nec_member_indexes with the RelOptInfos. If there are some places\nwhere we need to keep those then I think we'll need to add another\nfield to EquivalenceClass to mark the index into PlannerInfo's\neq_members for the EquivalenceMember with the Const. That bit would\nhave to be bms_add_member()ed back into the Bitmapset of matching\nEquivalenceMembers after masking out RelOptInfo's ec_member_indexes.\n\nWhen adding the optimizations to find the minimal set of EM bits to\nsearch through, you should likely add some functions similar to the\nget_eclass_indexes_for_relids() and get_common_eclass_indexes()\nfunctions to help you find the minimal set of bits. You can also\nprobably get some other inspiration from [1], in general.\n\nDavid\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3373c715535",
"msg_date": "Thu, 28 Jul 2022 09:35:10 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello,\n\nOn Thu, Jul 28, 2022 at 6:35 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> I've attached a very half-done patch that might help you get started\n> on this.\n\nThank you so much for creating the patch. I have implemented your\napproach and attached a new version of the patch to this email.\n\nIf you have already applied David's patch, please start the 'git am'\ncommand from 0002-Fix-bugs.patch. All regression tests passed with\nthis patch on my environment.\n\n1. Optimizations\n\nThe new optimization techniques utilizing Bitmapsets are implemented\nas the following functions in src/include/optimizer/paths.h.\n\n* get_eclass_members_indexes_for_relids()\n* get_eclass_members_indexes_for_not_children()\n* get_eclass_members_indexes_for_relids_or_not_children()\n* get_eclass_members_indexes_for_subsets_of_relids()\n* get_eclass_members_indexes_for_subsets_of_relids_or_not_children()\n// I think the names of these functions need to be reconsidered.\n\nThese functions intersect ec->ec_member_indexes and some Bitmapset and\nreturn indexes of EquivalenceMembers that we want to get.\n\nThe implementation of the first three functions listed above is\nsimple. However, the rest functions regarding the bms_is_subset()\ncondition are a bit more complicated. I have optimized this case based\non Tom's idea. The detailed steps are as follows.\n\nI. Intersect ec->ec_member_indexes and the Bitmapset in RelOptInfo.\nThis intersection set is a candidate for the EquivalenceMembers to be\nretrieved.\nII. Remove from the candidate set the members that do not satisfy the\nbms_is_subset().\n\nOptimization for EquivalenceMembers with a constant value is one of\nthe future works.\n\n2. Experimental Results\n\nI conducted an experiment by using the original query, which is\nattached to this email. 
You can reproduce this experiment by the\nfollowing commands.\n\n=====\npsql -f create-tables.sql\npsql -f query.sql\n=====\n\nThe following table and the attached figure describe the experimental result.\n\nPlanning time of \"query.sql\" (n = the number of partitions)\n----------------------------------------------------------------\n n | Master (ms) | Patched (ms) | Speedup (%) | Speedup (ms)\n----------------------------------------------------------------\n 1 | 0.809 | 0.760 | 6.09% | 0.049\n 2 | 0.799 | 0.811 | -1.53% | -0.012\n 4 | 1.022 | 0.989 | 3.20% | 0.033\n 8 | 1.357 | 1.325 | 2.32% | 0.032\n 16 | 2.149 | 2.026 | 5.69% | 0.122\n 32 | 4.357 | 3.925 | 9.91% | 0.432\n 64 | 9.543 | 7.543 | 20.96% | 2.000\n 128 | 27.195 | 15.823 | 41.82% | 11.372\n 256 | 130.207 | 52.664 | 59.55% | 77.542\n 384 | 330.642 | 112.324 | 66.03% | 218.318\n 512 | 632.009 | 197.957 | 68.68% | 434.052\n 640 | 1057.193 | 306.861 | 70.97% | 750.333\n 768 | 1709.914 | 463.628 | 72.89% | 1246.287\n 896 | 2531.685 | 738.827 | 70.82% | 1792.858\n 1024 | 3516.592 | 858.211 | 75.60% | 2658.381\n----------------------------------------------------------------\n\n-------------------------------------------------------\n n | Stddev of Master (ms) | Stddev of Patched (ms)\n-------------------------------------------------------\n 1 | 0.085 | 0.091\n 2 | 0.061 | 0.091\n 4 | 0.153 | 0.118\n 8 | 0.203 | 0.107\n 16 | 0.150 | 0.153\n 32 | 0.313 | 0.242\n 64 | 0.411 | 0.531\n 128 | 1.263 | 1.109\n 256 | 5.592 | 4.714\n 384 | 17.423 | 6.625\n 512 | 20.172 | 7.188\n 640 | 40.964 | 26.246\n 768 | 61.924 | 31.741\n 896 | 66.481 | 27.819\n 1024 | 80.950 | 49.162\n-------------------------------------------------------\n\nThe speed up with the new patch was up to 75.6% and 2.7 seconds. The\npatch achieved a 21.0% improvement even with 64 partitions, which is a\nrealistic size. 
We can conclude that this optimization is very\neffective in workloads with highly partitioned tables.\n\nPerformance degradation occurred only when the number of partitions\nwas 2, and its degree was 1.53% or 12 microseconds. This degradation\nis the difference between the average planning times of 10000 runs.\nTheir standard deviations far exceed the difference in averages, so it is\nunclear whether this degradation is real or merely measurement noise.\n\n=====\n\nI'm looking forward to your comments.\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Mon, 8 Aug 2022 20:27:46 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On Mon, 8 Aug 2022 at 23:28, Yuya Watari <watari.yuya@gmail.com> wrote:\n> If you have already applied David's patch, please start the 'git am'\n> command from 0002-Fix-bugs.patch. All regression tests passed with\n> this patch on my environment.\n\nThanks for fixing those scope bugs.\n\nIn regards to the 0002 patch, you have;\n\n+ * TODO: \"bms_add_members(ec1->ec_member_indexes, ec2->ec_member_indexes)\"\n+ * did not work to combine two EquivalenceClasses. This is probably because\n+ * the order of the EquivalenceMembers is different from the previous\n+ * implementation, which added the ec2's EquivalenceMembers to the end of\n+ * the list.\n\nas far as I can see, the reason the code I that wrote caused the\nfollowing regression test failure;\n\n- Index Cond: ((ff = '42'::bigint) AND (ff = '42'::bigint))\n+ Index Cond: (ff = '42'::bigint)\n\nwas down to how generate_base_implied_equalities_const() marks the EC\nas ec_broken = true without any regard to cleaning up the work it's\npartially already complete.\n\nBecause the loop inside generate_base_implied_equalities_const() just\nbreaks as soon as we're unable to find a valid equality operator for\nthe two given types, with my version, since the EquivalenceMember's\norder has effectively changed, we just discover the EC is broken\nbefore we call process_implied_equality() ->\ndistribute_restrictinfo_to_rels(). In the code you've added, the\nEquivalenceMembers are effectively still in the original order and the\nprocess_implied_equality() -> distribute_restrictinfo_to_rels() gets\ndone before we discover the broken EC. The same qual is just added\nagain during generate_base_implied_equalities_broken(), which is why\nthe plan has a duplicate ff=42.\n\nThis is all just down to the order that the ECs are merged. 
If you'd\njust swapped the order of the items in the query's WHERE clause to\nbecome:\n\n where ec1.ff = 42::int8 and ss1.x = ec1.f1 and ec1.ff = ec1.f1;\n\nthen my version would keep the duplicate qual. For what you've changed\nthe code to, the planner would not have produced the duplicate ff=42\nqual if you'd written the WHERE clause as follows:\n\n where ss1.x = ec1.f1 and ec1.ff = ec1.f1 and ec1.ff = 42::int8;\n\nIn short, I think the code I had for that was fine and it's just the\nexpected plan that you should be editing. If we wanted this\nbehaviour to be consistent then the fix should be to make\ngenerate_base_implied_equalities_const() better at only distributing\nthe quals down to the relations after it has discovered that the EC is\nnot broken, or at least cleaning up the partial work that it's done if\nit discovers a broken EC. The former seems better to me, but I doubt\nthat it matters too much as broken ECs should be pretty rare and it\ndoes not seem worth spending too much effort making this work better.\n\nI've not had a chance to look at the 0003 patch yet.\n\nDavid\n\n\n",
"msg_date": "Tue, 9 Aug 2022 19:10:00 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "esOn Tue, 9 Aug 2022 at 19:10, David Rowley <dgrowleyml@gmail.com> wrote:\n> I've not had a chance to look at the 0003 patch yet.\n\nI've looked at the 0003 patch now.\n\nThe performance numbers look quite impressive, however, there were a\nfew things about the patch that I struggled to figure what they were\ndone the way you did them:\n\n+ root->eq_not_children_indexes = bms_add_member(root->eq_not_children_indexes,\n\nWhy is that in PlannerInfo rather than in the EquivalenceClass?\n\n if (bms_equal(rel->relids, em->em_relids))\n {\n rel->eclass_member_indexes =\nbms_add_member(rel->eclass_member_indexes, em_index);\n }\n\nWhy are you only adding the eclass_member_index to the RelOptInfo when\nthe em_relids contain a singleton relation?\n\nI ended up going and fixing the patch to be more how I imagined it.\n\nI've ended up with 3 Bitmapset fields in EquivalenceClass;\nec_member_indexes, ec_nonchild_indexes, ec_norel_indexes. I also\ntrimmed the number of helper functions down for obtaining the minimal\nset of matching EquivalenceMember indexes to just:\n\nBitmapset *\nget_ecmember_indexes(PlannerInfo *root, EquivalenceClass *ec, Relids relids,\nbool with_children, bool with_norel_members)\n\nBitmapset *\nget_ecmember_indexes_strict(PlannerInfo *root, EquivalenceClass *ec,\nRelids relids, bool with_children,\nbool with_norel_members)\n\nI'm not so much a fan of the bool parameters, but it seemed better\nthan having 8 different functions with each combination of the bool\nparamters instead of 2.\n\nThe \"strict\" version of the function takes the intersection of\neclass_member_indexes for each rel mentioned in relids, whereas the\nnon-strict version does a union of those. Each then intersect that\nwith all members in the 'ec', or just the non-child members when\n'with_children' is false. They both then optionally bms_add_members()\nthe ec_norel_members if with_norel_members is true. 
I found it\ndifficult to figure out the best order to do the intersection. That\nreally depends on if the particular query has many EquivalenceClasses\nwith few EquivalenceMembers or few EquivalenceClasses with many\nEquivalenceMembers. bms_int_members() always recycles the left input.\nIdeally, that would always be the smallest Bitmapset. Maybe it's worth\ninventing a new version of bms_int_members() that recycles the input\nwith the least nwords. That would give the subsequent\nbms_next_member() calls an easier time. Right now they'll need to loop\nover a bunch of 0 words at the end for many queries.\n\nA few problems I ran into along the way:\n\n1. generate_append_tlist() generates Vars with varno=0. That causes\nproblems when we add Exprs from those in add_eq_member() as there is\nno element at root->simple_rel_array[0] to add eclass_member_indexes\nto.\n2. The existing comment for EquivalenceMember.em_relids claims \"all\nrelids appearing in em_expr\", but that's just not true when it comes\nto em_is_child members.\n\nSo far, I fixed #1 by adding a hack to setup_simple_rel_arrays() to do\n\"root->simple_rel_array[0] = makeNode(RelOptInfo);\" I'm not suggesting\nthat's the correct fix. It might be possible to set the varnos to the\nvarnos from the first Append child instead.\n\nThe fact that #2 is not true adds quite a bit of complexity to the\npatch and I think the patch might even misbehave as a result. It seems\nthere are cases where a child em_relids can contain additional relids\nthat are not present in the em_expr. For example, when a UNION ALL\nchild has a Const in the targetlist, as explained in a comment in\nadd_child_rel_equivalences(). However, there also seem to be cases\nwhere the opposite is true. 
I had to add the following code in\nadd_eq_member() to stop a regression test failing:\n\nif (is_child)\n expr_relids = bms_add_members(expr_relids, relids);\n\nThat's to make sure we add eclass_member_indexes to each RelOptInfo\nmentioned in the em_expr.\n\nAfter doing all that, I noticed that your benchmark was showing that\ncreate_join_clause() was the new bottleneck. This was due to having to\nloop so many times over the ec_sources to find an already built\nRestrictInfo. I went off and added some new code to optimize the\nlookup of those in a similar way by adding a new Bitmapset field in\nRelOptInfo to index which ec_sources it mentioned, which meant having\nto move ec_sources into PlannerInfo. I don't think this part of the\npatch is quite right yet as the code I have relies on em_relids being\nthe same as the ones mentioned in the RestrictInfo. That seems not\ntrue for em_is_child EMs, so I think we probably need to add a new\nfield to EquivalenceMember that truly is just pull_varnos from\nem_expr, or else look into some way to make em_relids mean that (like\nthe comment claims).\n\nHere are my results from running your benchmark on master (@f6c750d31)\nwith and without the attached patch.\n\nnpart master (ms) patched (ms) speedup\n2 0.28 0.29 95.92%\n4 0.37 0.38 96.75%\n8 0.53 0.56 94.43%\n16 0.92 0.91 100.36%\n32 1.82 1.70 107.57%\n64 4.05 3.26 124.32%\n128 10.83 6.69 161.89%\n256 42.63 19.46 219.12%\n512 194.31 42.60 456.14%\n1024 1104.02 98.37 1122.33%\n\nThis resulted in some good additional gains in planner performance.\nThe 1024 partition case is now about 11x faster on my machine instead\nof 4x. The 2 partition does regress slightly. There might be a few\nthings we can do about that, for example, move ec_collation up 1 to\nshrink EquivalenceClass back down closer to the size it was before.\n[1] might be enough to make up for the remainder.\n\nI've attached a draft patch with my revisions.\n\nDavid\n\n[1] https://commitfest.postgresql.org/39/3810/",
"msg_date": "Tue, 16 Aug 2022 20:26:41 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Dear David,\n\nI really appreciate your reply and your modifying the patch. The\nperformance improvements are quite impressive. I believe these\nimprovements will help PostgreSQL users. Thank you again.\n\n> The 2 partition does regress slightly. There might be a few\n> things we can do about that\n\nI tried to solve this regression problem. From here, I will refer to\nthe patch you sent on August 16th as the v3 patch. I will also call my\npatch attached to this email the v4 patch. I will discuss the v4 patch\nlater.\n\nAdditionally, I give names to queries.\n* Query A: The query we have been using in previous emails, which\njoins students, scores, and gpas tables.\n* Query B: The query which is attached to this email.\n\nQuery B is as follows:\n\n===\nSELECT *\nFROM testtable_1, testtable_2, testtable_3, testtable_4, testtable_5,\ntesttable_6, testtable_7, testtable_8\nWHERE testtable_1.x = testtable_2.x AND testtable_1.x = testtable_3.x\nAND testtable_1.x = testtable_4.x AND testtable_1.x = testtable_5.x\nAND testtable_1.x = testtable_6.x AND testtable_1.x = testtable_7.x\nAND testtable_1.x = testtable_8.x;\n===\n\nQuery A joins three tables, whereas Query B joins eight tables. Since\nEquivalenceClass is used when handling chained join conditions, I\nthought queries joining many tables, such as Query B, would have\ngreater performance impacts.\n\nI have investigated the v3 patch with these queries. As a result, I\ndid not observe any regressions in Query A in my environment. However,\nthe v3 patch showed significant degradation in Query B.\n\nThe following table and Figures 1 and 2 describe the result. The v3\npatch resulted in a regression of 8.7% for one partition and 4.8% for\ntwo partitions. 
Figure 2 shows the distribution of planning times for\nthe 1-partition case, indicating that the 8.7% regression is not an\nerror.\n\nTable 1: Planning time of Query B\n (n: number of partitions)\n (milliseconds)\n----------------------------------------------------------------\n n | Master | v3 | v4 | Master / v3 | Master / v4\n----------------------------------------------------------------\n 1 | 54.926 | 60.178 | 55.275 | 91.3% | 99.4%\n 2 | 53.853 | 56.554 | 53.519 | 95.2% | 100.6%\n 4 | 57.115 | 57.829 | 55.648 | 98.8% | 102.6%\n 8 | 64.208 | 60.945 | 58.025 | 105.4% | 110.7%\n 16 | 79.818 | 65.526 | 63.365 | 121.8% | 126.0%\n 32 | 136.981 | 77.813 | 76.526 | 176.0% | 179.0%\n 64 | 371.991 | 108.058 | 110.202 | 344.2% | 337.6%\n 128 | 1449.063 | 173.326 | 181.302 | 836.0% | 799.3%\n 256 | 6245.577 | 333.480 | 354.961 | 1872.8% | 1759.5%\n----------------------------------------------------------------\n\nThis performance degradation is due to the heavy processing of the\nget_ec***_indexes***() functions. These functions are the core part of\nthe optimization we are working on in this thread, but they are\nrelatively heavy when the number of partitions is small.\n\nI noticed that these functions were called repeatedly with the same\narguments. During planning Query B with one partition, the\nget_ec_source_indexes_strict() function was called 2087 times with\nexactly the same parameters. Such repeated calls occurred many times\nin a single query.\n\nTo address this problem, I introduced a caching mechanism in the v4\npatch. This patch caches the Bitmapset once it has been computed.\nAfter that, we only have to read the cached value instead of\nperforming the same process. Of course, we cannot devote much time to\nthe caching itself. Hash tables are a simple solution to accomplish\nthis but are not available under the current case where microsecond\nperformance degradation is a problem. Therefore, my patch adopts\nanother approach. 
I will use the following function as an example to\nexplain it.\n\n===\nBitmapset *get_ecmember_indexes(PlannerInfo *root,\nEquivalenceClass *ec, Relids relids, bool with_children, bool\nwith_norel_members);\n===\n\nMy idea is \"caching the returned Bitmapset into Relids.\" If the Relids\nhas the result Bitmapset, we can access it quickly via the pointer. Of\ncourse, I understand this description is not accurate. Relids is just\nan alias of Bitmapset, so we cannot change the layout.\n\nI will describe the precise mechanism. In the v4 patch, I changed the\nsignature of the get_ecmember_indexes() function as follows.\n\n===\nBitmapset *get_ecmember_indexes(PlannerInfo *root,\nEquivalenceClass *ec, Relids relids, bool with_children, bool\nwith_norel_members, ECIndexCache *cache);\n===\n\nECIndexCache is storage for caching returned values. ECIndexCache has\na one-to-one relationship with Relids. This relationship is achieved\nby placing the ECIndexCache just alongside the Relids. For example,\nECIndexCache corresponding to some RelOptInfo's relids exists in the\nsame RelOptInfo. When calling the get_ecmember_indexes() function with\na RelOptInfo, we pass RelOptInfo->ECIndexCache together. On the other\nhand, since Relids appear in various places, it is sometimes difficult\nto prepare a corresponding ECIndexCache. In such cases, we give up\ncaching and pass NULL.\n\nBesides, one ECIndexCache can only map to one EquivalenceClass.\nECIndexCache only caches for the first EquivalenceClass it encounters\nand does not cache for another EC.\n\nMy method abandons full caching to prevent overhead. However, it\novercame the regression problem for Query B. As can be seen from\nFigure 2, the regression with the v4 patch is either non-existent or\nnegligible. Furthermore, the v4 patch is faster than the v3 patch when\nthe number of partitions is 32 or less.\n\nIn addition to Query B, the results with Query A are shown in Figure\n3. I cannot recognize any regression from Figure 3. 
Please note\nthat these results were obtained on my machine and may differ in other\nenvironments.\n\nHowever, when the number of partitions was relatively large, my patch\nwas slightly slower than the v3 patch. This may be due to too frequent\nmemory allocation. ECIndexCache is a large struct containing 13\npointers. In the current implementation, ECIndexCache exists within\ncommonly used structs such as RelOptInfo. Therefore, ECIndexCache is\nallocated even if no one uses it. When there were 256 partitions of\nQuery B, 88509 ECIndexCache instances were allocated, but only 2295\nwere actually used. This means that 95.4% were wasted. I think\non-demand allocation would solve this problem. Similar problems could\nalso occur with other workloads, including OLTP. I'm going to try this\napproach soon.\n\nI really apologize for not commenting on the rest of your reply. I\nwill continue to consider it.\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Fri, 26 Aug 2022 09:39:32 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On Fri, 26 Aug 2022 at 12:40, Yuya Watari <watari.yuya@gmail.com> wrote:\n> This performance degradation is due to the heavy processing of the\n> get_ec***_indexes***() functions. These functions are the core part of\n> the optimization we are working on in this thread, but they are\n> relatively heavy when the number of partitions is small.\n>\n> I noticed that these functions were called repeatedly with the same\n> arguments. During planning Query B with one partition, the\n> get_ec_source_indexes_strict() function was called 2087 times with\n> exactly the same parameters. Such repeated calls occurred many times\n> in a single query.\n\nHow about instead of doing this caching like this, why don't we code\nup some iterators that we can loop over to fetch the required EMs.\n\nI'll attempt to type out my thoughts here without actually trying to\nsee if this works:\n\ntypedef struct EquivalenceMemberIterator\n{\n EquivalenceClass *ec;\n Relids relids;\n Bitmapset *em_matches;\n int position; /* last found index of em_matches or -1 */\n bool use_index;\n bool with_children;\n bool with_norel_members;\n} EquivalenceMemberIterator;\n\nWe'd then have functions like:\n\nstatic void\nget_ecmember_indexes_iterator(EquivalenceMemberIterator *it,\nPlannerInfo *root, EquivalenceClass *ec, Relids relids, bool\nwith_children, bool with_norel_members)\n{\n it->ec = ec;\n it->relids = relids;\n it->position = -1;\n\n it->use_index = (root->simple_rel_array_size > 32); /* or whatever\nthreshold is best */\n it->with_children = with_children;\n it->with_norel_members = with_norel_members;\n\n if (it->use_index)\n it->em_matches = get_ecmember_indexes(root, ec, relids,\nwith_children, with_norel_members);\n else\n it->em_matches = NULL;\n}\n\nstatic EquivalenceMember *\nget_next_matching_member(PlannerInfo *root, EquivalenceMemberIterator *it)\n{\n if (it->use_index)\n {\n it->position = bms_next_member(it->ec_matches, it->position);\n if (it->position >= 0)\n return 
list_nth(root->eq_members, it->position);\n return NULL;\n }\n else\n {\n int i = it->position;\n while ((i = bms_next_member(it->ec->ec_member_indexes, i)) >= 0)\n {\n /* filter out the EMs we don't want here, \"break\" when\nwe find a match */\n }\n it->position = i;\n if (i >= 0)\n return list_nth(root->eq_members, i);\n return NULL;\n }\n}\n\nThen the consuming code will do something like:\n\nEquivalenceMemberIterator iterator;\nget_ecmember_indexes_iterator(&iterator, root, ec, relids, true, false);\n\nwhile ((cur_em = get_next_matching_member(root, &iterator)) != NULL)\n{\n // do stuff\n}\n\nDavid\n\n\n",
"msg_date": "Fri, 26 Aug 2022 15:18:30 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Dear David,\n\nOn Fri, Aug 26, 2022 at 12:18 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> How about instead of doing this caching like this, why don't we code\n> up some iterators that we can loop over to fetch the required EMs.\n\nThank you very much for your quick reply and for sharing your idea\nwith code. I also think introducing EquivalenceMemberIterator is one\ngood alternative solution. I will try to implement and test it.\n\nThank you again for helping me.\n\n-- \nBest regards,\nYuya Watari\n\n\n",
"msg_date": "Fri, 26 Aug 2022 17:53:34 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello,\n\nOn Fri, Aug 26, 2022 at 5:53 PM Yuya Watari <watari.yuya@gmail.com> wrote:\n> Thank you very much for your quick reply and for sharing your idea\n> with code. I also think introducing EquivalenceMemberIterator is one\n> good alternative solution. I will try to implement and test it.\n\nI apologize for my late response. I have implemented several\napproaches and tested them.\n\n1. Changes\n\nI will describe how I modified our codes. I tested five versions:\n\n* v1: The first draft patch by David with bug fixes by me. This patch\ndoes not perform any optimizations based on Bitmapset operations.\n* v3: The past patch\n* v5 (v3 with revert): The v3 with revert of one of our optimizations\n* v6 (Iterator): An approach using iterators to enumerate over\nEquivalenceMembers. This approach is David's suggestion in the\nprevious email.\n* v7 (Cache): My approach to caching the result of get_ec***indexes***()\n\nPlease be noted that there is no direct parent-child relationship\nbetween v6 and v7; they are v5's children, i.e., siblings. I'm sorry\nfor the confusing versioning.\n\n1.1. Revert one of our optimizations (v5)\n\nAs I mentioned in the comment in\nv[5|6|7]-0002-Revert-one-of-the-optimizations.patch, I reverted one of\nour optimizations. This code tries to find EquivalenceMembers that do\nnot satisfy the bms_overlap condition. We encounter such members early\nin the loop, so the linear search is enough, and our optimization is\ntoo excessive here. As a result of experiments, I found this\noptimization was a bottleneck, so I reverted it.\n\nv6 (Iterator) and v7 (Cache) include this revert.\n\n1.2. Iterator (v6)\n\nI have implemented the iterator approach. The code is based on what\nDavid advised, but I customized it a bit. I added the \"bool\ncaller_needs_recheck\" argument to get_ecmember_indexes_iterator() and\nother similar functions. 
If this argument is true, the iterator\nenumerates all EquivalenceMembers without checking conditions such as\nbms_is_subset or bms_overlap.\n\nThis change is because callers of these iterators sometimes recheck\ndesired conditions after calling it. For example, if some caller wants\nEquivalenceMembers whose Relids is equal to some value, it calls\nget_ecmember_indexes(). However, since the result may have false\npositives, the caller has to recheck the result by the bms_equal()\ncondition. In this case, if the threshold is below and we don't\nperform our optimization, checking bms_overlap() in the iterator does\nnot make sense. We can solve this problem by passing true to the\n\"caller_needs_recheck\" argument to skip redundant checking.\n\n1.3. Cache (v7)\n\nI have improved my caching approach. First, I introduced the on-demand\nallocation approach I mentioned in the previous email. ECIndexCache is\nallocated not together with RelOptInfo but when using it.\n\nIn addition to this, a new version of the patch can handle multiple\nEquivalenceClasses. In the previous version, caching was only possible\nfor one EquivalenceClass. This limitation is to prevent overhead but\nreduces caching opportunities. So, I have improved it so that it can\nhandle all EquivalenceClasses. I made this change on the advice of\nFujita-san. Thank you, Fujita-san.\n\n2. Experimental Results\n\nI conducted experiments to test these methods.\n\n2.1. Query A\n\nFigure 1 illustrates the planning times of Query A. Please see the\nprevious email for what Query A refers to. The performance of all\nmethods except master and v1 are almost the same. I cannot observe any\ndegradation from this figure.\n\n2.2. Query B\n\nQuery B joins eight tables. In the previous email, I mentioned that\nthe v3 patch has significant degradation for this query.\n\nFigure 2 and Table 1 show the results. The three approaches of v5, v6\n(Iterator), and v7 (Cache) showed good overall performance. 
In\nparticular, v7 (Cache) performed best for the smaller number of\npartitions.\n\nTable 1: Planning Time of Query B (ms)\n-------------------------------------\n n | Master | v1 | v3\n-------------------------------------\n 1 | 55.459 | 57.376 | 58.849\n 2 | 54.162 | 56.454 | 57.615\n 4 | 56.491 | 59.742 | 57.108\n 8 | 62.694 | 67.920 | 59.591\n 16 | 79.547 | 90.589 | 64.954\n 32 | 134.623 | 160.452 | 76.626\n 64 | 368.716 | 439.894 | 107.278\n 128 | 1374.000 | 1598.748 | 170.909\n 256 | 5955.762 | 6921.668 | 324.113\n-------------------------------------\n--------------------------------------------------------\n n | v5 (v3 with revert) | v6 (Iterator) | v7 (Cache)\n--------------------------------------------------------\n 1 | 56.268 | 57.520 | 56.703\n 2 | 55.511 | 55.212 | 54.395\n 4 | 55.643 | 55.025 | 54.996\n 8 | 57.770 | 57.519 | 57.114\n 16 | 63.075 | 63.117 | 63.161\n 32 | 74.788 | 74.369 | 75.801\n 64 | 104.027 | 104.787 | 105.450\n 128 | 169.473 | 169.019 | 174.919\n 256 | 321.450 | 322.739 | 342.601\n--------------------------------------------------------\n\n2.3. Join Order Benchmark\n\nIt is essential to test real workloads, so I used the Join Order\nBenchmark [1]. This benchmark contains many complicated queries\njoining a lot of tables. I partitioned fact tables by 'id' columns and\nmeasured query planning times.\n\nFigure 3 and Table 2 describe the results. The results showed that all\nmethods produced some degradations when there were not so many\npartitions. However, the degradation of v7 (cache) was relatively\nsmall. 
It was 0.8% with two partitions, while the other methods'\ndegradation was at least 1.6%.\n\nTable 2: Speedup of Join Order Benchmark (higher is better)\n-----------------------------------------------------------------\n n | v3 | v5 (v3 with revert) | v6 (Iterator) | v7 (Cache)\n-----------------------------------------------------------------\n 2 | 95.8% | 97.3% | 97.3% | 97.7%\n 4 | 96.9% | 98.4% | 98.0% | 99.2%\n 8 | 102.2% | 102.9% | 98.1% | 103.0%\n 16 | 107.6% | 109.5% | 110.1% | 109.4%\n 32 | 123.5% | 125.4% | 125.5% | 125.0%\n 64 | 165.2% | 165.9% | 164.6% | 165.9%\n 128 | 308.2% | 309.2% | 312.1% | 311.4%\n 256 | 770.1% | 772.3% | 776.6% | 773.2%\n-----------------------------------------------------------------\n\n2.4. pgbench\n\nOur optimizations must not cause negative impacts on OLTP workloads. I\nconducted pgbench, and Figure 4 and Table 3 show its result.\n\nTable 3: The result of pgbench (tps)\n------------------------------------------------------------------------\n n | Master | v3 | v5 (v3 with revert) | v6 (Iterator) | v7 (Cache)\n------------------------------------------------------------------------\n 1 | 7617 | 7510 | 7484 | 7599 | 7561\n 2 | 7613 | 7487 | 7503 | 7609 | 7560\n 4 | 7559 | 7497 | 7453 | 7560 | 7553\n 8 | 7506 | 7429 | 7405 | 7523 | 7503\n 16 | 7584 | 7481 | 7466 | 7558 | 7508\n 32 | 7556 | 7456 | 7448 | 7558 | 7521\n 64 | 7555 | 7452 | 7435 | 7541 | 7504\n 128 | 7542 | 7430 | 7442 | 7558 | 7517\n------------------------------------------------------------------------\n Avg | 7566 | 7468 | 7455 | 7563 | 7528\n------------------------------------------------------------------------\n\nThis result indicates that v3 and v5 (v3 with revert) had a\nsignificant negative impact on the pgbench workload. Their tps\ndecreased by 1.3% or more. On the other hand, the degradation of v6\n(Iterator) and v7 (Cache) is non-existent or negligible.\n\n3. Causes of Degradation\n\nWe could not avoid degradation with the Join Order Benchmark. 
The\nleading cause of this problem is that Bitmapset operations, especially\nbms_next_member(), are relatively slower than simple enumeration over a\nList.\n\nIt is easy to imagine that bms_next_member(), which has complex bit\noperations, is a little heavier than List enumeration, which simply\nadvances a pointer. The fact that even v1, where we don't perform\nany optimizations, slowed down supports this notion.\n\nI think preventing this regression is very hard. To do so, we must\nhave both List and Bitmapset representations of EquivalenceMembers.\nHowever, I don't prefer this solution because it is redundant and\nreduces code maintainability.\n\nReducing Bitmapset->nwords is another possible solution. I will try\nit, but it will likely not solve the significant degradation in\npgbench for v3 and v5. This is because such degradation did not occur\nwith v6 and v7, which also use Bitmapsets.\n\n4. Which Method is The Best?\n\nFirst of all, it is hard to adopt v3 and v5 (v3 with revert) because\nthey degrade performance on OLTP workloads. Therefore, v6 (Iterator)\nand v7 (Cache) are possible candidates. Of these methods, I prefer v7\n(Cache).\n\nActually, I don't think an approach that introduces thresholds is a\ngood idea because the best threshold is unclear. If we become\nconservative to avoid degradation, we must increase the threshold, but\nthat takes away the opportunity for optimization. The opposite is also\ntrue.\n\nIn contrast, v7 (Cache) is an essential solution in terms of reducing\nthe cost of repeated function calls and does not require the\nintroduction of a threshold. Besides, it performs better on almost all\nworkloads, including the Join Order Benchmark. It also has no negative\nimpact on OLTP.\n\nIn conclusion, I think v7 (Cache) is the most desirable. Of course,\nthe method may have some problems, but it is worth considering.\n\n[1] https://github.com/winkyao/join-order-benchmark\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Wed, 21 Sep 2022 18:43:51 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello,\n\nOn Wed, Sep 21, 2022 at 6:43 PM Yuya Watari <watari.yuya@gmail.com> wrote:\n> 1.1. Revert one of our optimizations (v5)\n>\n> As I mentioned in the comment in\n> v[5|6|7]-0002-Revert-one-of-the-optimizations.patch, I reverted one of\n> our optimizations. This code tries to find EquivalenceMembers that do\n> not satisfy the bms_overlap condition. We encounter such members early\n> in the loop, so the linear search is enough, and our optimization is\n> too excessive here. As a result of experiments, I found this\n> optimization was a bottleneck, so I reverted it.\n\nIn the previous mail, I proposed a revert of one excessive\noptimization. In addition, I found a new bottleneck and attached a new\nversion of the patch solving it to this email.\n\nThe new bottleneck exists in the select_outer_pathkeys_for_merge()\nfunction. At the end of this function, we count EquivalenceMembers\nthat satisfy a specific condition. To count them, we have used\nBitmapset operations. Through experiments, I concluded that this\noptimization is effective for larger cases but leads to some\ndegradation for smaller numbers of partitions. The new patch\nswitches between two algorithms depending on the problem size.\n\n1. Experimental result\n\n1.1. Join Order Benchmark\n\nAs in the previous email, I used the Join Order Benchmark to evaluate\nthe patches' performance. The correspondence between each version and\npatches is as follows.\n\nv3: v8-0001-*.patch\nv5 (v3 with revert): v8-0001-*.patch + v8-0002-*.patch\nv8 (v5 with revert): v8-0001-*.patch + v8-0002-*.patch + v8-0003-*.patch\n\nI show the speed-up of each method compared with the master branch in\nTable 1. When the number of partitions is 1, performance degradation\nis kept to 1.1% in v8, while it is 4.2% and 1.8% in v3 and v5, respectively. 
This\nresult indicates that the newly introduced revert is effective.\n\nTable 1: Speedup of Join Order Benchmark (higher is better)\n(n = the number of partitions)\n----------------------------------------------------------\n n | v3 | v5 (v3 with revert) | v8 (v5 with revert)\n----------------------------------------------------------\n 2 | 95.8% | 98.2% | 98.9%\n 4 | 97.2% | 99.7% | 99.3%\n 8 | 101.4% | 102.5% | 103.4%\n 16 | 108.7% | 111.4% | 110.2%\n 32 | 127.1% | 127.6% | 128.8%\n 64 | 169.5% | 172.1% | 172.4%\n 128 | 330.1% | 335.2% | 332.3%\n 256 | 815.1% | 826.4% | 821.8%\n----------------------------------------------------------\n\n1.2. pgbench\n\nThe following table describes the result of pgbench. v5 and v8\nperformed clearly better than the v3 patch. The difference between v5\nand v8 is not so significant, but v8's performance is close to the\nmaster branch.\n\nTable 2: The result of pgbench (tps)\n-----------------------------------------------------------------\n n | Master | v3 | v5 (v3 with revert) | v8 (v5 with revert)\n-----------------------------------------------------------------\n 1 | 7550 | 7422 | 7474 | 7521\n 2 | 7594 | 7381 | 7536 | 7529\n 4 | 7518 | 7362 | 7461 | 7524\n 8 | 7459 | 7340 | 7424 | 7460\n-----------------------------------------------------------------\n Avg | 7531 | 7377 | 7474 | 7509\n-----------------------------------------------------------------\n\n2. Conclusion and future work\n\nThe revert in the v8-0003-*.patch is effective in preventing\nperformance degradation for smaller numbers of partitions. However,\nI don't think what I have done in the patch is the best or ideal\nsolution. As I mentioned in the comments in the patch, switching\nbetween two algorithms may be ugly because it introduces code\nduplication. We need a wiser solution to this problem.\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Mon, 24 Oct 2022 13:12:51 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello,\n\nI noticed that the previous patch does not apply to the current HEAD.\nI attached the rebased version to this email.\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Wed, 2 Nov 2022 18:27:52 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On 2/11/2022 15:27, Yuya Watari wrote:\n> Hello,\n> \n> I noticed that the previous patch does not apply to the current HEAD.\n> I attached the rebased version to this email.\n> \nI'm still in review of your patch now. At most it seems ok, but are you \nreally need both eq_sources and eq_derives lists now? As I see, \neverywhere access to these lists guides by eclass_source_indexes and \neclass_derive_indexes correspondingly. Maybe to merge them?\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Mon, 7 Nov 2022 12:21:14 +0600",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n> I'm still in review of your patch now. At most it seems ok, but are you \n> really need both eq_sources and eq_derives lists now?\n\nDidn't we just have this conversation? eq_sources needs to be kept\nseparate to support the \"broken EC\" logic. We don't want to be\nregurgitating derived clauses as well as originals in that path.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 07 Nov 2022 01:25:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "HI,\n\nRegards,\nZhang Mingli\nOn Nov 7, 2022, 14:26 +0800, Tom Lane <tgl@sss.pgh.pa.us>, wrote:\n> Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n> > I'm still in review of your patch now. At most it seems ok, but are you\n> > really need both eq_sources and eq_derives lists now?\n>\n> Didn't we just have this conversation? eq_sources needs to be kept\n> separate to support the \"broken EC\" logic. We don't want to be\n> regurgitating derived clauses as well as originals in that path.\n>\nAha, we have that conversation in another thread(Reducing duplicativeness of EquivalenceClass-derived clauses\n) : https://www.postgresql.org/message-id/644164.1666877342%40sss.pgh.pa.us",
"msg_date": "Mon, 7 Nov 2022 14:32:59 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many\n partitions"
},
{
"msg_contents": "On 2/11/2022 15:27, Yuya Watari wrote:\n> I noticed that the previous patch does not apply to the current HEAD.\n> I attached the rebased version to this email.\nLooking into find_em_for_rel() changes I see that you replaced\nif (bms_is_subset(em->em_relids, rel->relids)\nwith assertion statement.\nAccording of get_ecmember_indexes(), the em_relids field of returned \nequivalence members can contain relids, not mentioned in the relation.\nI don't understand, why it works now? For example, we can sort by t1.x, \nbut have an expression t1.x=t1.y*t2.z. Or I've missed something? If it \nis not a mistake, maybe to add a comment why assertion here isn't failed?\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Tue, 8 Nov 2022 17:31:04 +0600",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On Mon, 7 Nov 2022 at 06:33, Zhang Mingli <zmlpostgres@gmail.com> wrote:\n>\n> HI,\n>\n> Regards,\n> Zhang Mingli\n> On Nov 7, 2022, 14:26 +0800, Tom Lane <tgl@sss.pgh.pa.us>, wrote:\n>\n> Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n>\n> I'm still in review of your patch now. At most it seems ok, but are you\n> really need both eq_sources and eq_derives lists now?\n>\n>\n> Didn't we just have this conversation? eq_sources needs to be kept\n> separate to support the \"broken EC\" logic. We don't want to be\n> regurgitating derived clauses as well as originals in that path.\n>\n> Aha, we have that conversation in another thread(Reducing duplicativeness of EquivalenceClass-derived clauses\n> ) : https://www.postgresql.org/message-id/644164.1666877342%40sss.pgh.pa.us\n\nOnce the issue Tom identified has been resolved, I'd like to test\ndrive newer patches.\n\nThom\n\n\n",
"msg_date": "Wed, 16 Nov 2022 16:44:58 +0000",
"msg_from": "Thom Brown <thom@linux.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On 2022-Nov-16, Thom Brown wrote:\n\n> Once the issue Tom identified has been resolved, I'd like to test\n> drive newer patches.\n\nWhat issue? If you mean the one from the thread \"Reducing\nduplicativeness of EquivalenceClass-derived clauses\", that patch is\nalready applied (commit a5fc46414deb), and Yuya Watari's v8 series\napplies fine to current master.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Having your biases confirmed independently is how scientific progress is\nmade, and hence made our great society what it is today\" (Mary Gardiner)\n\n\n",
"msg_date": "Thu, 17 Nov 2022 10:31:46 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On Thu, 17 Nov 2022 at 09:31, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Nov-16, Thom Brown wrote:\n>\n> > Once the issue Tom identified has been resolved, I'd like to test\n> > drive newer patches.\n>\n> What issue? If you mean the one from the thread \"Reducing\n> duplicativeness of EquivalenceClass-derived clauses\", that patch is\n> already applied (commit a5fc46414deb), and Yuya Watari's v8 series\n> applies fine to current master.\n\nAh, I see.. I'll test the v8 patches.\n\nThanks\n\nThom\n\n\n",
"msg_date": "Thu, 17 Nov 2022 11:20:48 +0000",
"msg_from": "Thom Brown <thom@linux.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On Thu, 17 Nov 2022 at 11:20, Thom Brown <thom@linux.com> wrote:\n>\n> On Thu, 17 Nov 2022 at 09:31, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2022-Nov-16, Thom Brown wrote:\n> >\n> > > Once the issue Tom identified has been resolved, I'd like to test\n> > > drive newer patches.\n> >\n> > What issue? If you mean the one from the thread \"Reducing\n> > duplicativeness of EquivalenceClass-derived clauses\", that patch is\n> > already applied (commit a5fc46414deb), and Yuya Watari's v8 series\n> > applies fine to current master.\n>\n> Ah, I see.. I'll test the v8 patches.\n\nNo issues with applying. Created 1024 partitions, each of which is\npartitioned into 64 partitions.\n\nI'm getting a generic planning time of 1415ms. Is that considered\nreasonable in this situation? Bear in mind that the planning time\nprior to this patch was 282311ms, so pretty much a 200x speedup.\n\nThom\n\n\n",
"msg_date": "Thu, 17 Nov 2022 12:04:46 +0000",
"msg_from": "Thom Brown <thom@linux.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Dear Andrey and Thom,\n\nThank you for reviewing and testing the patch. I really apologize for\nmy late response.\n\nOn Tue, Nov 8, 2022 at 8:31 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> Looking into find_em_for_rel() changes I see that you replaced\n> if (bms_is_subset(em->em_relids, rel->relids)\n> with assertion statement.\n> According of get_ecmember_indexes(), the em_relids field of returned\n> equivalence members can contain relids, not mentioned in the relation.\n> I don't understand, why it works now? For example, we can sort by t1.x,\n> but have an expression t1.x=t1.y*t2.z. Or I've missed something? If it\n> is not a mistake, maybe to add a comment why assertion here isn't failed?\n\nAs you pointed out, changing the bms_is_subset() condition to an\nassertion is logically incorrect here. Thank you for telling me about\nit. I fixed it and attached the modified patch to this email.\n\nOn Thu, Nov 17, 2022 at 9:05 PM Thom Brown <thom@linux.com> wrote:\n> No issues with applying. Created 1024 partitions, each of which is\n> partitioned into 64 partitions.\n>\n> I'm getting a generic planning time of 1415ms. Is that considered\n> reasonable in this situation? Bear in mind that the planning time\n> prior to this patch was 282311ms, so pretty much a 200x speedup.\n\nThank you for testing the patch with an actual query. This speedup is\nvery impressive. When I used an original query with 1024 partitions,\nits planning time was about 200ms. Given that each partition is also\npartitioned in your workload, I think the result of 1415ms is\nreasonable.\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Tue, 29 Nov 2022 17:58:25 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On Tue, 29 Nov 2022 at 21:59, Yuya Watari <watari.yuya@gmail.com> wrote:\n> Thank you for testing the patch with an actual query. This speedup is\n> very impressive. When I used an original query with 1024 partitions,\n> its planning time was about 200ms. Given that each partition is also\n> partitioned in your workload, I think the result of 1415ms is\n> reasonable.\n\nI was looking again at the v9-0001 patch and I think we can do a\nlittle better when building the Bitmapset of matching EMs. For\nexample, in the v9 patch, the code for get_ecmember_indexes_strict()\nis doing:\n\n+ if (!with_children)\n+ matching_ems = bms_copy(ec->ec_nonchild_indexes);\n+ else\n+ matching_ems = bms_copy(ec->ec_member_indexes);\n+\n+ i = -1;\n+ while ((i = bms_next_member(relids, i)) >= 0)\n+ {\n+ RelOptInfo *rel = root->simple_rel_array[i];\n+\n+ matching_ems = bms_int_members(matching_ems, rel->eclass_member_indexes);\n+ }\n\nIt seems reasonable that if there are a large number of partitions\nthen ec_member_indexes will have a large number of Bitmapwords. When\nwe do bms_int_members() on that, we're going to probably end up with a\nbunch of trailing zero words in the set. In the v10 patch, I've\nchanged this to become:\n\n+ int i = bms_next_member(relids, -1);\n+\n+ if (i >= 0)\n+ {\n+ RelOptInfo *rel = root->simple_rel_array[i];\n+\n+ /*\n+ * bms_intersect to the first relation to try to keep the resulting\n+ * Bitmapset as small as possible. This saves having to make a\n+ * complete bms_copy() of one of them. 
One may contain significantly\n+ * more words than the other.\n+ */\n+ if (!with_children)\n+ matching_ems = bms_intersect(rel->eclass_member_indexes,\n+ ec->ec_nonchild_indexes);\n+ else\n+ matching_ems = bms_intersect(rel->eclass_member_indexes,\n+ ec->ec_member_indexes);\n+\n+ while ((i = bms_next_member(relids, i)) >= 0)\n+ {\n+ rel = root->simple_rel_array[i];\n+ matching_ems = bms_int_members(matching_ems,\n+ rel->eclass_member_indexes);\n+ }\n+ }\n\nso, effectively we first bms_intersect to the first member of relids\nbefore masking out the bits for the remaining ones. This should mean\nwe'll have a Bitmapset with fewer words in many complex planning\nproblems. There's no longer the dilemma of having to decide if we\nshould start with RelOptInfo's eclass_member_indexes or the\nEquivalenceClass's member indexes. When using bms_int_member, we\nreally want to start with the smallest of those so we get the smallest\nresulting set. With bms_intersect(), it will always make a copy of\nthe smallest set. v10 does that instead of bms_copy()ing the\nEquivalenceClass's member's Bitmapset.\n\nI also wondered how much we're losing to the fact that\nbms_int_members() zeros the trailing words and does not trim the\nBitmapset down.\n\nThe problem there is 2-fold;\n1) we have to zero the trailing words on the left input. That'll\npollute the CPU cache a bit as it may have to fetch a bunch of extra\ncache lines, and;\n2) subsequent bms_int_members() done afterwards may have to mask out\nadditional words. If we can make the shortest input really short, then\nsubsequent bms_int_members() are going to be very fast.\n\nYou might argue there that setting nwords to the shortest length may\ncause us to have to repalloc the Bitmapset if we need to later add\nmore members again, but if you look at the repalloc() code, it's\neffectively a no-op when the allocated size >= the requested size, so\nrepalloc() should be very fast in this case. 
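Purely as an illustration of the trim-then-regrow idea being discussed here, the behaviour can be sketched with a toy fixed-capacity set. To be clear, BMS, NWORDS, bms_int_members_trim() and bms_add_member_toy() below are invented names for this sketch, not PostgreSQL's actual bitmapset.c API:

```c
#include <stdint.h>

#define NWORDS 8                    /* toy fixed capacity */

typedef struct
{
    int      nwords;                /* number of tracked words */
    uint64_t words[NWORDS];
} BMS;

/*
 * AND 'b' into 'a', then shrink a->nwords down to the last nonzero
 * word instead of leaving zeroed trailing words behind.  Later
 * intersections then have fewer words to visit.
 */
static void
bms_int_members_trim(BMS *a, const BMS *b)
{
    int shortlen = (a->nwords < b->nwords) ? a->nwords : b->nwords;
    int last = -1;

    for (int i = 0; i < shortlen; i++)
    {
        a->words[i] &= b->words[i];
        if (a->words[i] != 0)
            last = i;
    }
    a->nwords = last + 1;           /* trim trailing zero words */
}

/* Re-adding a member past nwords just grows the set again. */
static void
bms_add_member_toy(BMS *a, int bit)
{
    int wordnum = bit / 64;

    while (a->nwords <= wordnum)
        a->words[a->nwords++] = 0;  /* re-zero regrown words */
    a->words[wordnum] |= UINT64_C(1) << (bit % 64);
}
```

The only point of the sketch is that trimming nwords makes later intersections cheaper, while re-adding a high member simply grows and re-zeroes the trailing words again.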
So, worst case, there's\nan additional \"no-op\" repalloc() (which should be very fast) followed\nby maybe a bms_add_members() which has to zero the words instead of\nbms_int_members(). I changed this in the v10-0002 patch. I'm not sure\nif we should do this or not.\n\nI also changed v10-0001 so that we still store the EquivalenceClass's\nmembers list. There were a few places where the code just wanted to\nget the first member and having to look at the Bitmapset index and\nfetch the first match from PlannerInfo seemed convoluted. If the\nquery is simple, it seems like it's not going to be very expensive to\nadd a few EquivalenceMembers to this list. When planning more complex\nproblems, there's probably enough other extra overhead that we're\nunlikely to notice the extra lappend()s. This also allows v10-0003 to\nwork, see below.\n\nIn v10-0003, I experimented with the iterator concept that I mentioned\nearlier. Since v10-0001 is now storing the EquivalenceMember list in\nEquivalenceClass again, it's now quite simple to have the iterator\ndecide if it should be scanning the index or doing a loop over all\nmembers to find the ones matching the search. We can make this\ndecision based on list_length(ec->ec_members). This should be a more\nreliable check than checking root->simple_rel_array_size as we could\nstill have classes with just a few members even when there's a large\nnumber of rels in simple_rel_array. I was hoping that v10-0003 would\nallow us to maintain the same planner performance for simple queries.\nIt just does not seem to change the performance much. Perhaps it's not\nworth the complexity if there are no performance benefits. It probably\nneeds more performance testing than what I've done to know if it helps\nor hinders, however.\n\nOverall, I'm not quite sure if this is any faster than your v9 patch.\nI think more performance testing needs to be done. 
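That decide-once idea can be sketched as follows. Again, this is purely illustrative: Member, MemberIter and INDEX_THRESHOLD are invented for this example, while the real patch iterates over EquivalenceMember lists and Bitmapset-based indexes:

```c
#include <stddef.h>

#define INDEX_THRESHOLD 4           /* made-up cut-over point */

typedef struct
{
    int relid;
    int value;
} Member;

typedef struct
{
    const Member *members;          /* all members of the class */
    int           nmembers;
    const int    *index;            /* precomputed matching positions */
    int           nindex;
    int           target_relid;
    int           pos;
    int           use_index;
} MemberIter;

/*
 * Decide the scan strategy once, up front: small classes are scanned
 * linearly; large ones use the precomputed index if one exists.
 */
static void
iter_init(MemberIter *it, const Member *members, int nmembers,
          const int *index, int nindex, int target_relid)
{
    it->members = members;
    it->nmembers = nmembers;
    it->index = index;
    it->nindex = nindex;
    it->target_relid = target_relid;
    it->pos = 0;
    it->use_index = (nmembers >= INDEX_THRESHOLD && index != NULL);
}

/* Return the next member with the target relid, or NULL when done. */
static const Member *
iter_next(MemberIter *it)
{
    if (it->use_index)
        return (it->pos < it->nindex) ?
            &it->members[it->index[it->pos++]] : NULL;

    while (it->pos < it->nmembers)
    {
        const Member *m = &it->members[it->pos++];

        if (m->relid == it->target_relid)
            return m;
    }
    return NULL;
}
```

Either way, callers just loop over iter_next(), so the strategy choice stays hidden inside the iterator rather than being littered around the tree.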
I think the\nv10-0001 + v10-0002 is faster than v9-0001, but perhaps the changes\nyou've made in v9-0002 and v9-0003 are worth redoing. I didn't test. I\nwas hoping to keep the logic about which method to use to find the\nmembers in the iterator code and not litter it around the tree.\n\nI did run the test you mentioned in [1] and I got:\n\n$ echo Master @ 29452de73 && ./partbench.sh | grep -E \"^(Testing|latency)\"\nMaster @ 29452de73\nTesting with 2 partitions...\nlatency average = 0.231 ms\nTesting with 4 partitions...\nlatency average = 0.303 ms\nTesting with 8 partitions...\nlatency average = 0.454 ms\nTesting with 16 partitions...\nlatency average = 0.777 ms\nTesting with 32 partitions...\nlatency average = 1.576 ms\nTesting with 64 partitions...\nlatency average = 3.574 ms\nTesting with 128 partitions...\nlatency average = 9.504 ms\nTesting with 256 partitions...\nlatency average = 37.321 ms\nTesting with 512 partitions...\nlatency average = 171.660 ms\nTesting with 1024 partitions...\nlatency average = 1021.990 ms\n\n$ echo Master + v10-0001 && ./partbench.sh | grep -E \"^(Testing|latency)\"\nMaster + v10-0001\nTesting with 2 partitions...\nlatency average = 0.239 ms\nTesting with 4 partitions...\nlatency average = 0.315 ms\nTesting with 8 partitions...\nlatency average = 0.463 ms\nTesting with 16 partitions...\nlatency average = 0.757 ms\nTesting with 32 partitions...\nlatency average = 1.481 ms\nTesting with 64 partitions...\nlatency average = 2.563 ms\nTesting with 128 partitions...\nlatency average = 5.618 ms\nTesting with 256 partitions...\nlatency average = 16.229 ms\nTesting with 512 partitions...\nlatency average = 38.855 ms\nTesting with 1024 partitions...\nlatency average = 85.705 ms\n\n$ echo Master + v10-0001 + v10-0002 && ./partbench.sh | grep -E\n\"^(Testing|latency)\"\nMaster + v10-0001 + v10-0002\nTesting with 2 partitions...\nlatency average = 0.241 ms\nTesting with 4 partitions...\nlatency average = 0.312 ms\nTesting with 8 
partitions...\nlatency average = 0.459 ms\nTesting with 16 partitions...\nlatency average = 0.755 ms\nTesting with 32 partitions...\nlatency average = 1.464 ms\nTesting with 64 partitions...\nlatency average = 2.580 ms\nTesting with 128 partitions...\nlatency average = 5.652 ms\nTesting with 256 partitions...\nlatency average = 16.464 ms\nTesting with 512 partitions...\nlatency average = 37.674 ms\nTesting with 1024 partitions...\nlatency average = 84.094 ms\n\n$ echo Master + v10-0001 + v10-0002 + v10-0003 && ./partbench.sh |\ngrep -E \"^(Testing|latency)\"\nMaster + v10-0001 + v10-0002 + v10-0003\nTesting with 2 partitions...\nlatency average = 0.240 ms\nTesting with 4 partitions...\nlatency average = 0.318 ms\nTesting with 8 partitions...\nlatency average = 0.465 ms\nTesting with 16 partitions...\nlatency average = 0.763 ms\nTesting with 32 partitions...\nlatency average = 1.486 ms\nTesting with 64 partitions...\nlatency average = 2.858 ms\nTesting with 128 partitions...\nlatency average = 5.764 ms\nTesting with 256 partitions...\nlatency average = 16.995 ms\nTesting with 512 partitions...\nlatency average = 38.012 ms\nTesting with 1024 partitions...\nlatency average = 88.098 ms\n\n$ echo Master + v9-* && ./partbench.sh | grep -E \"^(Testing|latency)\"\nMaster + v9-*\nTesting with 2 partitions...\nlatency average = 0.237 ms\nTesting with 4 partitions...\nlatency average = 0.313 ms\nTesting with 8 partitions...\nlatency average = 0.460 ms\nTesting with 16 partitions...\nlatency average = 0.780 ms\nTesting with 32 partitions...\nlatency average = 1.468 ms\nTesting with 64 partitions...\nlatency average = 2.701 ms\nTesting with 128 partitions...\nlatency average = 5.275 ms\nTesting with 256 partitions...\nlatency average = 17.208 ms\nTesting with 512 partitions...\nlatency average = 37.183 ms\nTesting with 1024 partitions...\nlatency average = 90.595 ms\n\nDavid\n\n[1] https://postgr.es/m/CAJ2pMkZNCgoUKSE%2B_5LthD%2BKbXKvq6h2hQN8Esxpxd%2Bcxmgomg%40mail.gmail.com",
"msg_date": "Sun, 4 Dec 2022 13:34:44 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On Sun, 4 Dec 2022 at 00:35, David Rowley <dgrowleyml@gmail.com> wrote:\n> [...]\n\nTesting your patches with the same 1024 partitions, each with 64\nsub-partitions, I get a planning time of 205.020 ms, which is now a\n1,377x speedup. This has essentially reduced the planning time from a\ncatastrophe to a complete non-issue. Huge win!\n\n-- \nThom\n\n\n",
"msg_date": "Mon, 5 Dec 2022 15:44:59 +0000",
"msg_from": "Thom Brown <thom@linux.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On Tue, 6 Dec 2022 at 04:45, Thom Brown <thom@linux.com> wrote:\n> Testing your patches with the same 1024 partitions, each with 64\n> sub-partitions, I get a planning time of 205.020 ms, which is now a\n> 1,377x speedup. This has essentially reduced the planning time from a\n> catastrophe to a complete non-issue. Huge win!\n\nThanks for testing the v10 patches.\n\nI wouldn't have expected such additional gains from v10. I was mostly\nfocused on trying to minimise any performance regression for simple\nqueries that wouldn't benefit from indexing the EquivalenceMembers.\nYour query sounds like it does not fit into that category. Perhaps it\nis down to the fact that v9-0002 or v9-0003 reverts a couple of the\noptimisations that is causing v9 to be slower than v10 for your query.\nIt's hard to tell without more details of what you're running.\n\nIs this a schema and query you're able to share? Or perhaps mock up a\nscript of something similar enough to allow us to see why v9 and v10\nare so different?\n\nAdditionally, it would be interesting to see if patching with v10-0002\nalone helps the performance of your query at all. I didn't imagine\nthat change would give us anything easily measurable, but partition\npruning makes extensive use of Bitmapsets, so perhaps you've found\nsomething. If you have then it might be worth considering v10-0002\nindependently of the EquivalenceMember indexing work.\n\nDavid\n\n\n",
"msg_date": "Tue, 6 Dec 2022 10:27:55 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On Mon, 5 Dec 2022 at 21:28, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 6 Dec 2022 at 04:45, Thom Brown <thom@linux.com> wrote:\n> > Testing your patches with the same 1024 partitions, each with 64\n> > sub-partitions, I get a planning time of 205.020 ms, which is now a\n> > 1,377x speedup. This has essentially reduced the planning time from a\n> > catastrophe to a complete non-issue. Huge win!\n>\n> Thanks for testing the v10 patches.\n>\n> I wouldn't have expected such additional gains from v10. I was mostly\n> focused on trying to minimise any performance regression for simple\n> queries that wouldn't benefit from indexing the EquivalenceMembers.\n> Your query sounds like it does not fit into that category. Perhaps it\n> is down to the fact that v9-0002 or v9-0003 reverts a couple of the\n> optimisations that is causing v9 to be slower than v10 for your query.\n> It's hard to tell without more details of what you're running.\n\nI celebrated prematurely as I neglected to wait for the 6th execution\nof the prepared statement, which shows the real result. With the v10\npatches, it takes 5632.040 ms, a speedup of 50x.\n\nTesting the v9 patches, the same query takes 3388.173 ms, a speedup of\n83x. And re-testing v8, I'm getting roughly the same times. These\nare all with a cold cache.\n\nSo the result isn't as dramatic as I had initially interpreted it to\nhave unfortunately.\n\n-- \nThom\n\n\n",
"msg_date": "Tue, 6 Dec 2022 11:16:01 +0000",
"msg_from": "Thom Brown <thom@linux.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello,\n\nThank you for creating the v10 patches.\n\nOn Sun, Dec 4, 2022 at 9:34 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> Overall, I'm not quite sure if this is any faster than your v9 patch.\n> I think more performance testing needs to be done. I think the\n> v10-0001 + v10-0002 is faster than v9-0001, but perhaps the changes\n> you've made in v9-0002 and v9-0003 are worth redoing. I didn't test. I\n> was hoping to keep the logic about which method to use to find the\n> members in the iterator code and not litter it around the tree.\n\nI tested the performance of v9, v10, and v10 + v9-0002 + v9-0003. The\nlast one is v10 with v9-0002 and v9-0003 applied.\n\n1. Join Order Benchmark\n\nI ran the Join Order Benchmark [1] and measured its planning times.\nThe result is shown in Table 1.\n\nTable 1: Speedup of Join Order Benchmark (higher is better)\n(n = the number of partitions)\n-------------------------------------------------\n n | v9 | v10 | v10 + v9-0002 + v9-0003\n-------------------------------------------------\n 2 | 97.2% | 95.7% | 97.5%\n 4 | 98.0% | 96.7% | 97.3%\n 8 | 101.2% | 99.6% | 100.3%\n 16 | 107.0% | 106.7% | 107.5%\n 32 | 123.1% | 122.0% | 123.7%\n 64 | 161.9% | 162.0% | 162.6%\n 128 | 307.0% | 311.7% | 313.4%\n 256 | 780.1% | 805.5% | 816.4%\n-------------------------------------------------\n\nThis result indicates that v10 degraded slightly more for the smaller\nnumber of partitions. The performances of v9 and v10 + v9-0002 +\nv9-0003 were almost the same, but the latter was faster when the\nnumber of partitions was large.\n\n2. Query A (The query mentioned in [2])\n\nI also ran Query A, which I shared in [2] and you used in\n./partbench.sh. The attached figure illustrates the planning times of\nQuery A. Our patches might have had some degradations, but they were\nnot so significant.\n\n3. Query B (The query mentioned in [3])\n\nThe following tables show the results of Query B. 
The results are\nclose to the one of the Join Order Benchmark; v9 and v10 + v9-0002 +\nv9-0003 had fewer degradations than v10.\n\nTable 2: Planning Time of Query B (ms)\n--------------------------------------------------------------\n n | Master | v9 | v10 | v10 + v9-0002 + v9-0003\n--------------------------------------------------------------\n 1 | 36.056 | 37.730 | 38.546 | 37.782\n 2 | 35.035 | 37.190 | 37.472 | 36.393\n 4 | 36.860 | 37.478 | 38.312 | 37.388\n 8 | 41.099 | 40.152 | 40.705 | 40.268\n 16 | 52.852 | 44.926 | 45.956 | 45.211\n 32 | 87.042 | 54.919 | 55.287 | 55.125\n 64 | 224.750 | 82.125 | 81.323 | 80.567\n 128 | 901.226 | 136.631 | 136.632 | 132.840\n 256 | 4166.045 | 263.913 | 260.295 | 258.453\n--------------------------------------------------------------\n\nTable 3: Speedup of Query B (higher is better)\n---------------------------------------------------\n n | v9 | v10 | v10 + v9-0002 + v9-0003\n---------------------------------------------------\n 1 | 95.6% | 93.5% | 95.4%\n 2 | 94.2% | 93.5% | 96.3%\n 4 | 98.4% | 96.2% | 98.6%\n 8 | 102.4% | 101.0% | 102.1%\n 16 | 117.6% | 115.0% | 116.9%\n 32 | 158.5% | 157.4% | 157.9%\n 64 | 273.7% | 276.4% | 279.0%\n 128 | 659.6% | 659.6% | 678.4%\n 256 | 1578.6% | 1600.5% | 1611.9%\n---------------------------------------------------\n\n======\n\nThe above results show that the reverts I have made in v9-0002 and\nv9-0003 are very important in avoiding degradation. I think we should\napply these changes again. It is unclear whether v9 or v10 + v9-0002 +\nv9-0003 is better, but the latter performed better in my experiments.\n\n[1] https://github.com/winkyao/join-order-benchmark\n[2] https://postgr.es/m/CAJ2pMkZNCgoUKSE%2B_5LthD%2BKbXKvq6h2hQN8Esxpxd%2Bcxmgomg%40mail.gmail.com\n[3] https://postgr.es/m/CAJ2pMka2PBXNNzUfe0-ksFsxVN%2BgmfKq7aGQ5v35TcpjFG3Ggg%40mail.gmail.com\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Wed, 7 Dec 2022 20:30:24 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Thank you for running all the benchmarks on v10.\n\nOn Thu, 8 Dec 2022 at 00:31, Yuya Watari <watari.yuya@gmail.com> wrote:\n> The above results show that the reverts I have made in v9-0002 and\n> v9-0003 are very important in avoiding degradation. I think we should\n> apply these changes again. It is unclear whether v9 or v10 + v9-0002 +\n> v9-0003 is better, but the latter performed better in my experiments.\n\nI was hoping to keep the logic which decides to loop over ec_members\nor use the bitmap indexes all in equivclass.c, ideally in the iterator\ncode.\n\nI've looked at the v9-0002 patch and I'm thinking maybe it's ok since\nit always loops over ec_nonchild_indexes. We process the base\nrelations first, so all the EquivalenceMember in PlannerInfo for these\nwill be at the start of the eq_members list and the Bitmapset won't\nhave many bitmapwords to loop over. Additionally, it's only looping\nover the nonchild ones, so a large number of partitions existing has\nno effect on the number of loops performed.\n\nFor v9-0003, I was really hoping to find some kind of workaround so we\ndidn't need the \"if (root->simple_rel_array_size < 32)\". The problem\nI have with that is; 1) why is 32 a good choice?, and 2)\nsimple_rel_array_size is just not a great thing to base the decision\noff of. For #1, we only need to look at the EquivalenceMembers\nbelonging to base relations here and simple_rel_array_size includes\nall relations, including partitions, so even if there's just a few\nmembers belonging to base rels, we may still opt to use the Bitmapset\nmethod. 
Additionally, it does look like this patch should be looping\nover ec_nonchild_indexes rather than ec_member_indexes and filtering\nout the !em->em_is_const && !em->em_is_child EquivalenceMembers.\n\nSince both the changes made in v9-0002 and v9-0003 can just be made to\nloop over ec_nonchild_indexes, which isn't going to get big with large\nnumbers of partitions, then I wonder if we're ok just to do the loop\nin all cases rather than conditionally try to do something more\nfanciful with counting bits like I had done in\nselect_outer_pathkeys_for_merge(). I've made v11 work like what\nv9-0003 did and I've used v9-0002. I also found a stray remaining\n\"bms_membership(eclass->ec_member_indexes) != BMS_MULTIPLE\" in\neclass_useful_for_merging() that should have been put back to\n\"list_length(eclass->ec_members) <= 1\".\n\nI've still got a couple of things in mind that I'd like to see done to\nthis patch.\n\na) I think the iterator code should have some additional sanity checks\nthat the results of both methods match when building with\nUSE_ASSERT_CHECKING. I've got some concerns that we might break\nsomething. The logic about what the em_relids is set to for child\nmembers is a little confusing. See add_eq_member().\nb) We still need to think about if adding a RelOptInfo to\nPlannerInfo->simple_rel_array[0] is a good idea for solving the append\nrelation issue. Ideally, we'd have a proper varno for these Vars\ninstead of setting varno=0 per what's being done in\ngenerate_append_tlist().\n\nI've attached the v11 set of patches.\n\nDavid",
"msg_date": "Mon, 12 Dec 2022 17:50:09 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Dear David,\n\nOn Mon, Dec 12, 2022 at 1:50 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I've attached the v11 set of patches.\n\nThanks for creating the v11 version. I think your patches look good to\nme. I really apologize for my late reply.\n\n> a) I think the iterator code should have some additional sanity checks\n> that the results of both methods match when building with\n> USE_ASSERT_CHECKING. I've got some concerns that we might break\n> something. The logic about what the em_relids is set to for child\n> members is a little confusing. See add_eq_member().\n\nI added sanity checking code to check that two iteration results are\nthe same. I have attached a new version of the patch, v12, to this\nemail.\n\nThe implementation of my sanity checking code (v12-0004) is not ideal\nand a little ugly. I understand that and will try to improve it.\n\nHowever, there is more bad news. Unfortunately, some regression tests\nare failing in my environment. I'm not sure why, but it could be that\na) my sanity checking code (v12-0004) is wrong, or b) our patches have\nsome bugs.\n\nI will investigate this issue further, and share the results when found.\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Fri, 27 Jan 2023 12:48:30 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello,\n\nOn Fri, Jan 27, 2023 at 12:48 PM Yuya Watari <watari.yuya@gmail.com> wrote:\n> However, there is more bad news. Unfortunately, some regression tests\n> are failing in my environment. I'm not sure why, but it could be that\n> a) my sanity checking code (v12-0004) is wrong, or b) our patches have\n> some bugs.\n>\n> I will investigate this issue further, and share the results when found.\n\nI have investigated this issue and concluded that b) our patches have\nsome bugs. I have attached the modified patches to this email. This\nversion passed regression tests in my environment.\n\n1. v13-0005\n\nThe first bug is in eclass_member_iterator_strict_next(). As I\nmentioned in the commit message, the original code incorrectly missed\nEquivalenceMembers with empty em_relids when 'with_norel_members' is\ntrue.\n\nI show my changes as follows:\n\n===\n- if (!iter->with_children && em->em_is_child)\n- continue;\n\n- if (!iter->with_norel_members && bms_is_empty(em->em_relids))\n- continue;\n\n- if (!bms_is_subset(iter->with_relids, em->em_relids))\n- continue;\n\n- iter->current_index = foreach_current_index(lc);\n+ if ((iter->with_norel_members && bms_is_empty(em->em_relids))\n+ || (bms_is_subset(iter->with_relids, em->em_relids)\n+ && (iter->with_children || !em->em_is_child)))\n+ {\n+ iter->current_index = foreach_current_index(lc);\n===\n\nEquivalenceMembers with empty em_relids will pass the second 'if'\ncondition when 'with_norel_members' is true. These members should be\nreturned. However, since the empty em_relids can never be superset of\nany non-empty relids, the EMs may fail the last condition. Therefore,\nthe original code missed some members.\n\n2. v13-0006\n\nThe second bug exists in get_ecmember_indexes_strict(). As I described\nin the comment, if the empty relids is given, this function must\nreturn all members because their em_relids are always superset. 
I am\nconcerned that this change may adversely affect performance.\nCurrently, I have not seen any degradation.\n\n3. v13-0007\n\nThe last one is in add_eq_member(). I am not sure why this change is\nworking, but it is probably related to the concerns David mentioned in\nthe previous mail. The v13-0007 may be wrong, so it should be\nreconsidered.\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Mon, 30 Jan 2023 19:02:37 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On Mon, 30 Jan 2023 at 23:03, Yuya Watari <watari.yuya@gmail.com> wrote:\n> 1. v13-0005\n>\n> The first bug is in eclass_member_iterator_strict_next(). As I\n> mentioned in the commit message, the original code incorrectly missed\n> EquivalenceMembers with empty em_relids when 'with_norel_members' is\n> true.\n\nYeah, I was also looking at this today and found the same issues after\nadding the verification code that checks we get the same members from\nthe index and via the looping method. I ended up making some changes\nslightly different from what you had but wasn't quite ready to post\nthem yet.\n\nI'm still a little unhappy with master's comments for the\nEquivalenceMember.em_relids field. It claims to be the relids for the\nem_expr, but that's not the case for em_is_child members. I've ended\nup adding an additional field named em_norel_expr that gets set to\ntrue when em_expr truly contains no Vars. I then adjusted the\nconditions in the iterator's loops to properly include members with no\nVars when we ask for those.\n\n> 2. v13-0006\n>\n> The second bug exists in get_ecmember_indexes_strict(). As I described\n> in the comment, if the empty relids is given, this function must\n> return all members because their em_relids are always superset. I am\n> concerned that this change may adversely affect performance.\n> Currently, I have not seen any degradation.\n\nI fixed this by adding a new field to the iterator struct named\nrelids_empty. It's just set to bms_is_empty(iter->with_relids). The\nloop condition then just becomes:\n\nif (iter->relids_empty ||\n !bms_is_subset(iter->with_relids, em->em_relids))\n continue;\n\n> 3. v13-0007\n>\n> The last one is in add_eq_member(). I am not sure why this change is\n> working, but it is probably related to the concerns David mentioned in\n> the previous mail. The v13-0007 may be wrong, so it should be\n> reconsidered.\n\nUnfortunately, we can't fix it that way. 
At a glance, what you have\nwould only find var-less child members if you requested that the\niterator also gave you with_norel_members==true. I've not looked,\nperhaps all current code locations request with_norel_members, so your\nchange likely just works by accident.\n\nI've attached what I worked on today. I still want to do more\ncross-checking to make sure all code locations which use these new\niterators get the same members as they used to get.\n\nIn the attached I also changed the code that added a RelOptInfo to\nroot->simple_rel_array[0] to allow the varno=0 Vars made in\ngenerate_append_tlist() to be indexed. That's now done via a new\nfunction (setup_append_rel_entry()) which is only called during\nplan_set_operations(). This means we're no longer wastefully creating\nthat entry during the planning of normal queries. We could maybe\nconsider giving this a more valid varno and expand simple_rel_array to\nmake more room, but I'm not sure whether it's worth it or not. I'm happier\nthat this simple_rel_array[0] entry now only exists when planning set\noperations, but I'd probably feel better if there was some other way\nthat felt less like we're faking up a RelOptInfo to store\nEquivalenceMembers in.\n\nI've also included a slightly edited version of your code which checks\nthat the members match when using and not using the new indexes. All\nthe cross-checking seems to pass.\n\nDavid",
"msg_date": "Tue, 31 Jan 2023 01:14:28 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Dear David,\n\nOn Mon, Jan 30, 2023 at 9:14 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I've attached what I worked on today.\n\nI really appreciate your quick response and the v15 patches. The bug\nfixes in the v15 look good to me.\n\nAfter receiving your email, I realized that this version does not\napply to the current master. This conflict is caused by commits of\n2489d76c49 [1] and related. I have attached the rebased version, v16,\nto this email. Resolving many conflicts was a bit of hard work, so I\nmay have made some mistakes.\n\nUnfortunately, the rebased version did not pass regression tests. This\nfailure is due to segmentation faults regarding a null reference to\nRelOptInfo. I show the code snippet that leads to the segfault as\nfollows.\n\n=====\n@@ -572,9 +662,31 @@ add_eq_member(EquivalenceClass *ec, Expr *expr,\nRelids relids,\n+ i = -1;\n+ while ((i = bms_next_member(expr_relids, i)) >= 0)\n+ {\n+ RelOptInfo *rel = root->simple_rel_array[i];\n+\n+ rel->eclass_member_indexes =\nbms_add_member(rel->eclass_member_indexes, em_index);\n+ }\n=====\n\nThe segfault occurred because root->simple_rel_array[i] is sometimes\nNULL. This issue is similar to the one regarding\nroot->simple_rel_array[0]. Before the commit of 2489d76c49, we only\nhad to consider the nullability of root->simple_rel_array[0]. We\novercame this problem by creating the RelOptInfo in the\nsetup_append_rel_entry() function. However, after the commit,\nroot->simple_rel_array[i] with non-zero 'i' can also be NULL. I'm not\nconfident with its cause, but is this because non-base relations\nappear in the expr_relids? 
Seeing the commit, I found the following\nchange in pull_varnos_walker():\n\n=====\n@@ -153,7 +161,11 @@ pull_varnos_walker(Node *node,\npull_varnos_context *context)\n Var *var = (Var *) node;\n\n if (var->varlevelsup == context->sublevels_up)\n+ {\n context->varnos = bms_add_member(context->varnos, var->varno);\n+ context->varnos = bms_add_members(context->varnos,\n+ var->varnullingrels);\n+ }\n return false;\n }\n if (IsA(node, CurrentOfExpr))\n=====\n\nWe get the expr_relids by pull_varnos(). This commit adds\nvar->varnullingrels to its result. From my observations, indices 'i'\nsuch that root->simple_rel_array[i] is null come from\nvar->varnullingrels. This change is probably related to the segfault.\nI don't understand the commit well, so please let me know if I'm\nwrong.\n\nTo address this problem, in v16-0003, I moved EquivalenceMember\nindexes in RelOptInfo to PlannerInfo. This change allows us to store\nindexes whose corresponding RelOptInfo is NULL.\n\n> I'm happier\n> that this simple_rel_array[0] entry now only exists when planning set\n> operations, but I'd probably feel better if there was some other way\n> that felt less like we're faking up a RelOptInfo to store\n> EquivalenceMembers in.\n\nOf course, I'm not sure if my approach in v16-0003 is ideal, but it\nmay help solve your concern above. Since simple_rel_array[0] is no\nlonger necessary with my patch, I removed the setup_append_rel_entry()\nfunction in v16-0004. However, to work the patch, I needed to change\nsome assertions in v16-0005. For more details, please see the commit\nmessage of v16-0005. After these works, the attached patches passed\nall regression tests in my environment.\n\nInstead of my approach, imitating the following change to\nget_eclass_indexes_for_relids() is also a possible solution. 
Ignoring\nNULL RelOptInfos enables us to avoid the segfault, but we have to\nadjust EquivalenceMemberIterator to match the result, and I'm not sure\nif this idea is correct.\n\n=====\n@@ -3204,6 +3268,12 @@ get_eclass_indexes_for_relids(PlannerInfo\n*root, Relids relids)\n {\n RelOptInfo *rel = root->simple_rel_array[i];\n\n+ if (rel == NULL) /* must be an outer join */\n+ {\n+ Assert(bms_is_member(i, root->outer_join_rels));\n+ continue;\n+ }\n+\n ec_indexes = bms_add_members(ec_indexes, rel->eclass_indexes);\n }\n return ec_indexes;\n=====\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=2489d76c4906f4461a364ca8ad7e0751ead8aa0d\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Mon, 6 Feb 2023 10:47:33 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On 2/6/23 06:47, Yuya Watari wrote:\n> Of course, I'm not sure if my approach in v16-0003 is ideal, but it\n> may help solve your concern above. Since simple_rel_array[0] is no\n> longer necessary with my patch, I removed the setup_append_rel_entry()\n> function in v16-0004. However, to work the patch, I needed to change\n> some assertions in v16-0005. For more details, please see the commit\n> message of v16-0005. After these works, the attached patches passed\n> all regression tests in my environment.\n> \n> Instead of my approach, imitating the following change to\n> get_eclass_indexes_for_relids() is also a possible solution. Ignoring\n> NULL RelOptInfos enables us to avoid the segfault, but we have to\n> adjust EquivalenceMemberIterator to match the result, and I'm not sure\n> if this idea is correct.\nAs I see it, you moved the indexes from RelOptInfo to PlannerInfo. Maybe it\nwould be better to move them into RangeTblEntry instead?\n\n-- \nRegards\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Tue, 14 Feb 2023 15:01:41 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Dear Andrey,\n\nOn Tue, Feb 14, 2023 at 7:01 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> As I see it, you moved the indexes from RelOptInfo to PlannerInfo. Maybe it\n> would be better to move them into RangeTblEntry instead?\n\nI really appreciate your kind advice. I think your idea is very good.\nI have implemented it as the v17 patches, which are attached to this\nemail. The v17 has passed all regression tests in my environment.\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Fri, 17 Feb 2023 17:31:45 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello Watari-san, this patch does not currently apply. Could you please\nrebase?\n\nDavid, do you intend to continue to be involved in reviewing this one?\n\nThanks to both,\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"All rings of power are equal,\nBut some rings of power are more equal than others.\"\n (George Orwell's The Lord of the Rings)\n\n\n",
"msg_date": "Wed, 8 Mar 2023 13:34:23 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On Thu, 9 Mar 2023 at 01:34, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> David, do you intend to continue to be involved in reviewing this one?\n\nYes. I'm currently trying to make a few Bitmapset improvements which\ninclude the change made in this thread's 0001 patch over on [1].\n\nFor the main patch, I've been starting to wonder if it should work\ncompletely differently. Instead of adding members for partitioned and\ninheritance children, we could just translate the Vars from child to\ntop-level parent and find the member that way. I wondered if this\nmethod might be even faster as it would forego\nadd_child_rel_equivalences(). I think we'd still need em_is_child for\nUNION ALL children. So far, I've not looked into this in detail. I\nwas hoping to find an idea that would allow some means to have the\nplanner realise that a LIST partition which allows a single Datum\ncould skip pushing base quals which are constantly true. i.e:\n\ncreate table lp (a int) partition by list(a);\ncreate table lp1 partition of lp for values in(1);\nexplain select * from lp where a = 1;\n\n Seq Scan on lp1 lp (cost=0.00..41.88 rows=13 width=4)\n Filter: (a = 1)\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvq9eq0W_aFUGrb6ba28ieuQN4zM5Uwqxy7+LMZjJc+VGg@mail.gmail.com\n\n\n",
"msg_date": "Thu, 9 Mar 2023 10:23:28 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello,\n\nOn Wed, Mar 8, 2023 at 9:34 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Hello Watari-san, this patch does not currently apply. Could you please\n> rebase?\n\nThank you for pointing it out. I have attached the rebased version to\nthis email. This version includes an additional change, v18-0005. The\nchange relates to the Bitmapset operations that David mentioned:\n\nOn Thu, Mar 9, 2023 at 6:23 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> Yes. I'm currently trying to make a few Bitmapset improvements which\n> include the change made in this thread's 0001 patch over on [1].\n\nAs of v18-0005, the redundant loop to check if the result of\nbms_intersect() is empty has been removed. This change is almost the\nsame as David's following idea in the [1] thread, but slightly\ndifferent.\n\nOn Fri, Mar 3, 2023 at 10:52 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> The patch also optimizes sub-optimal newly added code which calls\n> bms_is_empty_internal() when we have other more optimal means to\n> determine if the set is empty or not.\n\nI conducted an experiment measuring the planning time of Query B [2].\nIn the experiment, I tested the next four versions:\n\n* Master\n* (A): v18-0001 + v18-0002 + v18-0003 + v18-0004 (= v17)\n* (B): v18-0001 + v18-0002 + v18-0003 + v18-0004 + v18-0005\n* (C): v18-0002 + v18-0003 + v18-0004 + David's patches in [1]\n --> Since [1] includes v18-0001, (C) does not contain v18-0001.\n\nThe following tables show the results. These show that when the number\nof partitions is large, (B) is faster than (A). This result indicates\nthat the change in v18-0005 is effective on this workload. In\naddition, the patches in [1] slowed down the performance compared to\n(A) and (B). I am not sure of the cause of this degradation. I will\ninvestigate this issue further. 
I hope these results will help the\ndiscussion of [1].\n\nTable 1: Planning time of Query B (ms)\n----------------------------------------------\n n | Master | (A) | (B) | (C)\n----------------------------------------------\n 1 | 37.780 | 38.836 | 38.354 | 38.187\n 2 | 36.222 | 37.067 | 37.416 | 37.068\n 4 | 38.001 | 38.410 | 37.980 | 38.005\n 8 | 42.384 | 41.159 | 41.601 | 42.218\n 16 | 53.906 | 47.277 | 47.080 | 59.466\n 32 | 88.271 | 58.842 | 58.762 | 69.474\n 64 | 229.445 | 91.675 | 91.194 | 115.348\n 128 | 896.418 | 166.251 | 161.182 | 335.121\n 256 | 4220.514 | 371.369 | 350.723 | 923.272\n----------------------------------------------\n\nTable 2: Planning time speedup of Query B (higher is better)\n--------------------------------------------------------------------------\n n | Master / (A) | Master / (B) | Master / (C) | (A) / (B) | (A) / (C)\n--------------------------------------------------------------------------\n 1 | 97.3% | 98.5% | 98.9% | 101.3% | 101.7%\n 2 | 97.7% | 96.8% | 97.7% | 99.1% | 100.0%\n 4 | 98.9% | 100.1% | 100.0% | 101.1% | 101.1%\n 8 | 103.0% | 101.9% | 100.4% | 98.9% | 97.5%\n 16 | 114.0% | 114.5% | 90.7% | 100.4% | 79.5%\n 32 | 150.0% | 150.2% | 127.1% | 100.1% | 84.7%\n 64 | 250.3% | 251.6% | 198.9% | 100.5% | 79.5%\n 128 | 539.2% | 556.2% | 267.5% | 103.1% | 49.6%\n 256 | 1136.5% | 1203.4% | 457.1% | 105.9% | 40.2%\n--------------------------------------------------------------------------\n\nOn Thu, Mar 9, 2023 at 6:23 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> For the main patch, I've been starting to wonder if it should work\n> completely differently. Instead of adding members for partitioned and\n> inheritance children, we could just translate the Vars from child to\n> top-level parent and find the member that way. I wondered if this\n> method might be even faster as it would forego\n> add_child_rel_equivalences(). I think we'd still need em_is_child for\n> UNION ALL children. 
So far, I've not looked into this in detail. I\n> was hoping to find an idea that would allow some means to have the\n> planner realise that a LIST partition which allows a single Datum\n> could skip pushing base quals which are constantly true. i.e:\n>\n> create table lp (a int) partition by list(a);\n> create table lp1 partition of lp for values in(1);\n> explain select * from lp where a = 1;\n>\n> Seq Scan on lp1 lp (cost=0.00..41.88 rows=13 width=4)\n> Filter: (a = 1)\n\nThank you for considering this issue. I will look into this as well.\n\n[1] https://postgr.es/m/CAApHDvq9eq0W_aFUGrb6ba28ieuQN4zM5Uwqxy7+LMZjJc+VGg@mail.gmail.com\n[2] https://postgr.es/m/CAJ2pMka2PBXNNzUfe0-ksFsxVN%2BgmfKq7aGQ5v35TcpjFG3Ggg%40mail.gmail.com\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Fri, 10 Mar 2023 17:38:46 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello,\n\nOn Fri, Mar 10, 2023 at 5:38 PM Yuya Watari <watari.yuya@gmail.com> wrote:\n> Thank you for pointing it out. I have attached the rebased version to\n> this email.\n\nRecent commits, such as a8c09daa8b [1], have caused conflicts and\ncompilation errors in these patches. I have attached the fixed version\nto this email.\n\nThe v19-0004 adds an 'em_index' field representing the index within\nroot->eq_members of the EquivalenceMember. This field is needed to\ndelete EquivalenceMembers when iterating them using the ec_members\nlist instead of the ec_member_indexes.\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=a8c09daa8bb1d741bb8b3d31a12752448eb6fb7c\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Wed, 5 Jul 2023 18:57:56 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On 5/7/2023 16:57, Yuya Watari wrote:\n> Hello,\n> \n> On Fri, Mar 10, 2023 at 5:38 PM Yuya Watari <watari.yuya@gmail.com> wrote:\n>> Thank you for pointing it out. I have attached the rebased version to\n>> this email.\n> \n> Recent commits, such as a8c09daa8b [1], have caused conflicts and\n> compilation errors in these patches. I have attached the fixed version\n> to this email.\n> \n> The v19-0004 adds an 'em_index' field representing the index within\n> root->eq_members of the EquivalenceMember. This field is needed to\n> delete EquivalenceMembers when iterating them using the ec_members\n> list instead of the ec_member_indexes.\n> \n> [1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=a8c09daa8bb1d741bb8b3d31a12752448eb6fb7c\n> \nDiscovering quality of partition pruning at the stage of execution \ninitialization and using your set of patches I have found some dubious \nresults with performance degradation. Look into the test case in attachment.\nHere is three queries. Execution times:\n1 - 8s; 2 - 30s; 3 - 131s (with your patch set).\n1 - 5s; 2 - 10s; 3 - 33s (current master).\n\nMaybe it is a false alarm, but on my laptop I see this degradation at \nevery launch.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Thu, 27 Jul 2023 14:58:16 +0700",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On 27/7/2023 14:58, Andrey Lepikhov wrote:\n> On 5/7/2023 16:57, Yuya Watari wrote:\n>> Hello,\n>>\n>> On Fri, Mar 10, 2023 at 5:38 PM Yuya Watari <watari.yuya@gmail.com> \n>> wrote:\n>>> Thank you for pointing it out. I have attached the rebased version to\n>>> this email.\n>>\n>> Recent commits, such as a8c09daa8b [1], have caused conflicts and\n>> compilation errors in these patches. I have attached the fixed version\n>> to this email.\n>>\n>> The v19-0004 adds an 'em_index' field representing the index within\n>> root->eq_members of the EquivalenceMember. This field is needed to\n>> delete EquivalenceMembers when iterating them using the ec_members\n>> list instead of the ec_member_indexes.\n>>\n>> [1] \n>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=a8c09daa8bb1d741bb8b3d31a12752448eb6fb7c\n>>\n> Discovering quality of partition pruning at the stage of execution \n> initialization and using your set of patches I have found some dubious \n> results with performance degradation. Look into the test case in \n> attachment.\n> Here is three queries. Execution times:\n> 1 - 8s; 2 - 30s; 3 - 131s (with your patch set).\n> 1 - 5s; 2 - 10s; 3 - 33s (current master).\n> \n> Maybe it is a false alarm, but on my laptop I see this degradation at \n> every launch.\nSorry for this. It was definitely a false alarm. In this patch, \nassertion checking adds much overhead. After switching it off, I found \nout that this feature solves my problem with a quick pass through the \nmembers of an equivalence class. Planning time results for the queries \nfrom the previous letter:\n1 - 0.4s, 2 - 1.3s, 3 - 1.3s; (with the patches applied)\n1 - 5s; 2 - 8.7s; 3 - 22s; (current master).\n\nI have attached flamegraph that shows query 2 planning process after \napplying this set of patches. As you can see, overhead at the \nequivalence class routines has gone.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Fri, 28 Jul 2023 11:27:40 +0700",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello,\n\nOn Fri, Jul 28, 2023 at 1:27 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> Sorry for this. It was definitely a false alarm. In this patch,\n> assertion checking adds much overhead. After switching it off, I found\n> out that this feature solves my problem with a quick pass through the\n> members of an equivalence class. Planning time results for the queries\n> from the previous letter:\n> 1 - 0.4s, 2 - 1.3s, 3 - 1.3s; (with the patches applied)\n> 1 - 5s; 2 - 8.7s; 3 - 22s; (current master).\n>\n> I have attached flamegraph that shows query 2 planning process after\n> applying this set of patches. As you can see, overhead at the\n> equivalence class routines has gone.\n\nI really appreciate testing the patches and sharing your results. The\nresults are interesting because they show that our optimization\neffectively reduces planning time for your workload containing\ndifferent queries than I have used in my benchmarks.\n\nThank you again for reviewing this.\n\n-- \nBest regards,\nYuya Watari\n\n\n",
"msg_date": "Fri, 28 Jul 2023 17:49:04 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hi Yuya, Andrey,\n\nOn Fri, Jul 28, 2023 at 9:58 AM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n\n> >>\n> > Discovering quality of partition pruning at the stage of execution\n> > initialization and using your set of patches I have found some dubious\n> > results with performance degradation. Look into the test case in\n> > attachment.\n> > Here is three queries. Execution times:\n> > 1 - 8s; 2 - 30s; 3 - 131s (with your patch set).\n> > 1 - 5s; 2 - 10s; 3 - 33s (current master).\n> >\n> > Maybe it is a false alarm, but on my laptop I see this degradation at\n> > every launch.\n> Sorry for this. It was definitely a false alarm. In this patch,\n> assertion checking adds much overhead. After switching it off, I found\n> out that this feature solves my problem with a quick pass through the\n> members of an equivalence class. Planning time results for the queries\n> from the previous letter:\n> 1 - 0.4s, 2 - 1.3s, 3 - 1.3s; (with the patches applied)\n> 1 - 5s; 2 - 8.7s; 3 - 22s; (current master).\n\nI measured planning time using my scripts setup.sql and queries.sql\nattached to [1] with and without assert build using your patch. The\ntimings are recorded in the attached spreadsheet. I have following\nobservations\n\n1. The patchset improves the planning time of queries involving\npartitioned tables by an integral factor. Both in case of\npartitionwise join and without it. The speedup is 5x to 21x in my\nexperiment. That's huge.\n2. There's slight degradation in planning time of queries involving\nunpartitioned tables. But I have seen that much variance usually.\n3. assert and debug enabled build shows degradation in planning time\nin all the cases.\n4. There is substantial memory increase in all the cases. 
It's\npercentage wise predominant when the partitionwise join is not used.\n\nGiven that most of the developers run assert enabled builds it would\nbe good to bring down the degradation there while keeping the\nexcellent speedup in non-assert builds.\nQueries on partitioned tables eat a lot of memory anyways, increasing\nthat further should be avoided.\n\nI have not studied the patches. But I think the memory increase has to\ndo with our Bitmapset structure. It's space inefficient when there are\nthousands of partitions involved. See my comment at [2]\n\n[1] https://www.postgresql.org/message-id/CAExHW5stmOUobE55pMt83r8UxvfCph+Pvo5dNpdrVCsBgXEzDQ@mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAExHW5s4EqY43oB%3Dne6B2%3D-xLgrs9ZGeTr1NXwkGFt2j-OmaQQ%40mail.gmail.com\n\n--\nBest Wishes,\nAshutosh Bapat",
"msg_date": "Fri, 28 Jul 2023 15:20:57 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello,\n\nI really appreciate sharing very useful scripts and benchmarking results.\n\nOn Fri, Jul 28, 2023 at 6:51 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> Given that most of the developers run assert enabled builds it would\n> be good to bring down the degradation there while keeping the\n> excellent speedup in non-assert builds.\n\n From my observation, this degradation in assert enabled build is\ncaused by verifying the iteration results of EquivalenceMembers. My\npatch uses Bitmapset-based indexes to speed up the iteration. When\nassertions are enabled, we verify that the result of the iteration is\nthe same with and without the indexes. This verification results in\nexecuting a similar loop three times, which causes the degradation. I\nmeasured planning time by using your script without this verification.\nThe results are as follows:\n\nMaster: 144.55 ms\nPatched (v19): 529.85 ms\nPatched (v19) without verification: 78.84 ms\n(*) All runs are with assertions.\n\nAs seen from the above, verifying iteration results was the cause of\nthe performance degradation. I agree that we should avoid such\ndegradation because it negatively affects the development of\nPostgreSQL. Removing the verification when committing this patch is\none possible option.\n\n> Queries on partitioned tables eat a lot of memory anyways, increasing\n> that further should be avoided.\n>\n> I have not studied the patches. But I think the memory increase has to\n> do with our Bitmapset structure. It's space inefficient when there are\n> thousands of partitions involved. See my comment at [2]\n\nThank you for pointing this out. I have never considered the memory\nusage impact of this patch. As you say, the Bitmapset structure caused\nthis increase. I will try to look into this further.\n\n-- \nBest regards,\nYuya Watari\n\n\n",
"msg_date": "Wed, 2 Aug 2023 15:40:39 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On 2/8/2023 13:40, Yuya Watari wrote:\n> As seen from the above, verifying iteration results was the cause of\n> the performance degradation. I agree that we should avoid such\n> degradation because it negatively affects the development of\n> PostgreSQL. Removing the verification when committing this patch is\n> one possible option.\nYou introduced list_ptr_cmp as an extern function of a List, but use it \nthe only under USE_ASSERT_CHECKING ifdef.\nMaybe you hide it under USE_ASSERT_CHECKING or remove all the stuff?\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Wed, 2 Aug 2023 16:43:19 +0700",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello,\n\nOn Wed, Aug 2, 2023 at 6:43 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> You introduced list_ptr_cmp as an extern function of a List, but use it\n> the only under USE_ASSERT_CHECKING ifdef.\n> Maybe you hide it under USE_ASSERT_CHECKING or remove all the stuff?\n\nThank you for your quick reply and for pointing that out. If we remove\nthe verification code when committing this patch, we should also\nremove the list_ptr_cmp() function because nobody will use it. If we\ndon't remove the verification, whether to hide it by\nUSE_ASSERT_CHECKING is a difficult question. The list_ptr_cmp() can be\nused for generic use and is helpful even without assertions, so not\nhiding it is one option. However, I understand that it is not pretty\nto have the function compiled even though it is not referenced from\nanywhere when assertions are disabled. As you say, I think hiding it\nby USE_ASSERT_CHECKING is also a possible solution.\n\n-- \nBest regards,\nYuya Watari\n\n\n",
"msg_date": "Thu, 3 Aug 2023 15:08:32 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On Wed, Aug 2, 2023 at 12:11 PM Yuya Watari <watari.yuya@gmail.com> wrote:\n\n> Hello,\n>\n> I really appreciate sharing very useful scripts and benchmarking results.\n>\n> On Fri, Jul 28, 2023 at 6:51 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> > Given that most of the developers run assert enabled builds it would\n> > be good to bring down the degradation there while keeping the\n> > excellent speedup in non-assert builds.\n>\n> From my observation, this degradation in assert enabled build is\n> caused by verifying the iteration results of EquivalenceMembers. My\n> patch uses Bitmapset-based indexes to speed up the iteration. When\n> assertions are enabled, we verify that the result of the iteration is\n> the same with and without the indexes. This verification results in\n> executing a similar loop three times, which causes the degradation. I\n> measured planning time by using your script without this verification.\n> The results are as follows:\n>\n> Master: 144.55 ms\n> Patched (v19): 529.85 ms\n> Patched (v19) without verification: 78.84 ms\n> (*) All runs are with assertions.\n>\n> As seen from the above, verifying iteration results was the cause of\n> the performance degradation. I agree that we should avoid such\n> degradation because it negatively affects the development of\n> PostgreSQL. Removing the verification when committing this patch is\n> one possible option.\n>\n\nIf you think that the verification is important to catch bugs, you may want\nto encapsulate it with an #ifdef .. #endif such that the block within is\nnot compiled by default. See OPTIMIZER_DEBUG for example.\n\n\n>\n> > Queries on partitioned tables eat a lot of memory anyways, increasing\n> > that further should be avoided.\n> >\n> > I have not studied the patches. But I think the memory increase has to\n> > do with our Bitmapset structure. It's space inefficient when there are\n> > thousands of partitions involved. 
See my comment at [2]\n>\n> Thank you for pointing this out. I have never considered the memory\n> usage impact of this patch. As you say, the Bitmapset structure caused\n> this increase. I will try to look into this further.\n>\n>\nDo you think that the memory measurement patch I have shared in those\nthreads is useful in itself? If so, I will start another proposal to\naddress it.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Thu, 3 Aug 2023 18:59:07 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello,\n\nThank you for your reply.\n\nOn Thu, Aug 3, 2023 at 10:29 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> If you think that the verification is important to catch bugs, you may want to encapsulate it with an #ifdef .. #endif such that the block within is not compiled by default. See OPTIMIZER_DEBUG for example.\n\nIn my opinion, verifying the iteration results is only necessary to\navoid introducing bugs while developing this patch. The verification\nis too excessive for regular development of PostgreSQL. I agree that\nwe should avoid a significant degradation in assert enabled builds, so\nI will consider removing it.\n\n> Do you think that the memory measurement patch I have shared in those threads is useful in itself? If so, I will start another proposal to address it.\n\nFor me, who is developing the planner in this thread, the memory\nmeasurement patch is useful. However, most users do not care about\nmemory usage, so there is room for consideration. For example, making\nthe metrics optional in EXPLAIN ANALYZE outputs might be better.\n\n-- \nBest regards,\nYuya Watari\n\n\n",
"msg_date": "Mon, 7 Aug 2023 17:19:06 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On 7/8/2023 15:19, Yuya Watari wrote:\n> Hello,\n> \n> Thank you for your reply.\n> \n> On Thu, Aug 3, 2023 at 10:29 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n>> If you think that the verification is important to catch bugs, you may want to encapsulate it with an #ifdef .. #endif such that the block within is not compiled by default. See OPTIMIZER_DEBUG for example.\n> \n> In my opinion, verifying the iteration results is only necessary to\n> avoid introducing bugs while developing this patch. The verification\n> is too excessive for regular development of PostgreSQL. I agree that\n> we should avoid a significant degradation in assert enabled builds, so\n> I will consider removing it.\nI should admit, these checks has helped me during backpatching this \nfeature to pg v.13 (users crave speed up of query planning a lot). Maybe \nit is a sign of a lack of tests, but in-fact, it already has helped.\n\nOne more thing: I think, you should add comments to \nadd_child_rel_equivalences() and add_child_join_rel_equivalences()\non replacing of:\n\nif (bms_is_subset(cur_em->em_relids, top_parent_relids) &&\n\t\t\t\t!bms_is_empty(cur_em->em_relids))\nand\nif (bms_overlap(cur_em->em_relids, top_parent_relids))\n\nwith different logic. What was changed? It will be better to help future \ndevelopers realize this part of the code more easily by adding some \ncomments.\n> \n>> Do you think that the memory measurement patch I have shared in those threads is useful in itself? If so, I will start another proposal to address it.\n> \n> For me, who is developing the planner in this thread, the memory\n> measurement patch is useful. However, most users do not care about\n> memory usage, so there is room for consideration. For example, making\n> the metrics optional in EXPLAIN ANALYZE outputs might be better.\n> \n+1. 
Any memory-related info in the output of EXPLAIN ANALYZE makes tests \nmore complex because of architecture dependency.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Mon, 7 Aug 2023 15:51:52 +0700",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On Mon, Aug 7, 2023 at 2:21 PM Andrey Lepikhov <a.lepikhov@postgrespro.ru>\nwrote:\n\n> >> Do you think that the memory measurement patch I have shared in those\n> threads is useful in itself? If so, I will start another proposal to\n> address it.\n> >\n> > For me, who is developing the planner in this thread, the memory\n> > measurement patch is useful. However, most users do not care about\n> > memory usage, so there is room for consideration. For example, making\n> > the metrics optional in EXPLAIN ANALYZE outputs might be better.\n> >\n> +1. Any memory-related info in the output of EXPLAIN ANALYZE makes tests\n> more complex because of architecture dependency.\n>\n>\nAs far as the tests go, the same is the case with planning time and\nexecution time. They change even without changing the architecture. But we\nhave tests which mask the actual values. Something similar will be done to\nthe planning memory.\n\nI will propose it as a separate patch in the next commitfest and will seek\nopinions from other hackers.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Mon, Aug 7, 2023 at 2:21 PM Andrey Lepikhov <a.lepikhov@postgrespro.ru> wrote:\n>> Do you think that the memory measurement patch I have shared in those threads is useful in itself? If so, I will start another proposal to address it.\n> \n> For me, who is developing the planner in this thread, the memory\n> measurement patch is useful. However, most users do not care about\n> memory usage, so there is room for consideration. For example, making\n> the metrics optional in EXPLAIN ANALYZE outputs might be better.\n> \n+1. Any memory-related info in the output of EXPLAIN ANALYZE makes tests \nmore complex because of architecture dependency.\n\nAs far as the tests go, the same is the case with planning time and execution time. They change even without changing the architecture. But we have tests which mask the actual values. 
Something similar will be done to the planning memory.I will propose it as a separate patch in the next commitfest and will seek opinions from other hackers.-- Best Wishes,Ashutosh Bapat",
"msg_date": "Mon, 7 Aug 2023 17:45:22 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On 7/8/2023 19:15, Ashutosh Bapat wrote:\n> \n> \n> On Mon, Aug 7, 2023 at 2:21 PM Andrey Lepikhov \n> <a.lepikhov@postgrespro.ru <mailto:a.lepikhov@postgrespro.ru>> wrote:\n> \n> >> Do you think that the memory measurement patch I have shared in\n> those threads is useful in itself? If so, I will start another\n> proposal to address it.\n> >\n> > For me, who is developing the planner in this thread, the memory\n> > measurement patch is useful. However, most users do not care about\n> > memory usage, so there is room for consideration. For example, making\n> > the metrics optional in EXPLAIN ANALYZE outputs might be better.\n> >\n> +1. Any memory-related info in the output of EXPLAIN ANALYZE makes\n> tests\n> more complex because of architecture dependency.\n> \n> \n> As far as the tests go, the same is the case with planning time and \n> execution time. They change even without changing the architecture. But \n> we have tests which mask the actual values. Something similar will be \n> done to the planning memory.\nIt is a positive thing to access some planner internals from the \nconsole, of course. My point is dedicated to the structuration of an \nEXPLAIN output and is caused by two reasons:\n1. I use the EXPLAIN command daily to identify performance issues and \nthe optimiser's weak points. According to the experience, when you have \nan 'explain analyze' containing more than 100 strings, you try removing \nunnecessary information to improve observability. It would be better to \nhave the possibility to see an EXPLAIN with different levels of the \noutput details. Flexibility here reduces a lot of manual work, sometimes.\n2. Writing extensions and having an explain analyze in the regression \ntest, we must create masking functions just to make the test more \nstable. 
That additional work can be avoided with another option, like \nMEMUSAGE ON/OFF.\n\nSo, in my opinion, it would be better to introduce this new output data \nguarded by additional option.\n\n> \n> I will propose it as a separate patch in the next commitfest and will \n> seek opinions from other hackers.\nCool, good news.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Tue, 8 Aug 2023 10:22:49 +0700",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hi Andrey,\n\nOn Tue, Aug 8, 2023 at 8:52 AM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> It is a positive thing to access some planner internals from the\n> console, of course. My point is dedicated to the structuration of an\n> EXPLAIN output and is caused by two reasons:\n> 1. I use the EXPLAIN command daily to identify performance issues and\n> the optimiser's weak points. According to the experience, when you have\n> an 'explain analyze' containing more than 100 strings, you try removing\n> unnecessary information to improve observability. It would be better to\n> have the possibility to see an EXPLAIN with different levels of the\n> output details. Flexibility here reduces a lot of manual work, sometimes.\n\nI use the json output format to extract the interesting parts of\nEXPLAIN output. See my SQL scripts attached upthread. That way I can\nignore new additions like this.\n\n> 2. Writing extensions and having an explain analyze in the regression\n> test, we must create masking functions just to make the test more\n> stable. That additional work can be avoided with another option, like\n> MEMUSAGE ON/OFF.\n\nWe already have a masking function in-place. See changes to\nexplain.out in my proposed patch at [1]\n\n> > I will propose it as a separate patch in the next commitfest and will\n> > seek opinions from other hackers.\n> Cool, good news.\n\nDone. Commitfest entry https://commitfest.postgresql.org/44/4492/\n\n[1] https://www.postgresql.org/message-id/CAExHW5sZA=5LJ_ZPpRO-w09ck8z9p7eaYAqq3Ks9GDfhrxeWBw@mail.gmail.com\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 8 Aug 2023 11:49:40 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello Andrey, Ashutosh, and David,\n\nThank you for your reply and for reviewing the patch.\n\nOn Mon, Aug 7, 2023 at 5:51 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> One more thing: I think, you should add comments to\n> add_child_rel_equivalences() and add_child_join_rel_equivalences()\n> on replacing of:\n>\n> if (bms_is_subset(cur_em->em_relids, top_parent_relids) &&\n> !bms_is_empty(cur_em->em_relids))\n> and\n> if (bms_overlap(cur_em->em_relids, top_parent_relids))\n>\n> with different logic. What was changed? It will be better to help future\n> developers realize this part of the code more easily by adding some\n> comments.\n\nThe following change in add_child_join_rel_equivalences():\n\n- /* Does this member reference child's topmost parent rel? */\n- if (bms_overlap(cur_em->em_relids, top_parent_relids))\n\nis correct because EquivalenceMemberIterator guarantees that these two\nRelids always overlap for the iterated results. The following code\ndoes this iteration. As seen from the below code, the iteration\neliminates not overlapping Relids, so we do not need to check\nbms_overlap() for the iterated results.\n\n=====\n/*\n * eclass_member_iterator_next\n * Fetch the next EquivalenceMember from an EquivalenceMemberIterator\n * which was set up by setup_eclass_member_iterator(). Returns NULL when\n * there are no more matching EquivalenceMembers.\n */\nEquivalenceMember *\neclass_member_iterator_next(EquivalenceMemberIterator *iter)\n{\n ...\n ListCell *lc;\n\n for_each_from(lc, iter->eclass->ec_members, iter->current_index + 1)\n {\n EquivalenceMember *em = lfirst_node(EquivalenceMember, lc);\n ...\n /*\n * Don't return members which have no common rels with with_relids\n */\n if (!bms_overlap(em->em_relids, iter->with_relids))\n continue;\n\n return em;\n }\n return NULL;\n ...\n}\n=====\n\nI agree with your opinion that my patch lacks some explanations, so I\nwill consider adding more comments. 
However, I received the following\nmessage from David in March.\n\nOn Thu, Mar 9, 2023 at 6:23 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> For the main patch, I've been starting to wonder if it should work\n> completely differently. Instead of adding members for partitioned and\n> inheritance children, we could just translate the Vars from child to\n> top-level parent and find the member that way. I wondered if this\n> method might be even faster as it would forego\n> add_child_rel_equivalences(). I think we'd still need em_is_child for\n> UNION ALL children. So far, I've not looked into this in detail. I\n> was hoping to find an idea that would allow some means to have the\n> planner realise that a LIST partition which allows a single Datum\n> could skip pushing base quals which are constantly true. i.e:\n>\n> create table lp (a int) partition by list(a);\n> create table lp1 partition of lp for values in(1);\n> explain select * from lp where a = 1;\n>\n> Seq Scan on lp1 lp (cost=0.00..41.88 rows=13 width=4)\n> Filter: (a = 1)\n\nI am concerned that fixing the current patch will conflict with\nDavid's idea. Of course, I am now trying to experiment with the above\nidea, but I should avoid the conflict if he is working on this. David,\nwhat do you think about this? Is it OK to post a new patch to address\nthe review comments? I am looking forward to your reply.\n\n-- \nBest regards,\nYuya Watari\n\n\n",
"msg_date": "Wed, 9 Aug 2023 17:14:56 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On Wed, 5 Jul 2023 at 21:58, Yuya Watari <watari.yuya@gmail.com> wrote:\n>\n> Hello,\n>\n> On Fri, Mar 10, 2023 at 5:38 PM Yuya Watari <watari.yuya@gmail.com> wrote:\n> > Thank you for pointing it out. I have attached the rebased version to\n> > this email.\n>\n> Recent commits, such as a8c09daa8b [1], have caused conflicts and\n> compilation errors in these patches. I have attached the fixed version\n> to this email.\n>\n> The v19-0004 adds an 'em_index' field representing the index within\n> root->eq_members of the EquivalenceMember. This field is needed to\n> delete EquivalenceMembers when iterating them using the ec_members\n> list instead of the ec_member_indexes.\n\nIf 0004 is adding an em_index to mark the index into\nPlannerInfo->eq_members, can't you use that in\nsetup_eclass_member[_strict]_iterator to loop to verify that the two\nmethods yield the same result?\n\ni.e:\n\n+ Bitmapset *matching_ems = NULL;\n+ memcpy(&idx_iter, iter, sizeof(EquivalenceMemberIterator));\n+ memcpy(&noidx_iter, iter, sizeof(EquivalenceMemberIterator));\n+\n+ idx_iter.use_index = true;\n+ noidx_iter.use_index = false;\n+\n+ while ((em = eclass_member_iterator_strict_next(&noidx_iter)) != NULL)\n+ matching_ems = bms_add_member(matching_ems, em->em_index);\n+\n+ Assert(bms_equal(matching_ems, iter->matching_ems));\n\nThat should void the complaint that the Assert checking is too slow.\nYou can also delete the list_ptr_cmp function too (also noticed a\ncomplaint about that).\n\nFor the 0003 patch. Can you explain why you think these fields should\nbe in RangeTblEntry rather than RelOptInfo? I can only guess you might\nhave done this for memory usage so that we don't have to carry those\nfields for join rels? I think RelOptInfo is the correct place to\nstore fields that are only used in the planner. If you put them in\nRangeTblEntry they'll end up in pg_rewrite and be stored for all\nviews. 
Seems very space inefficient and scary as it limits the scope\nfor fixing bugs in back branches due to RangeTblEntries being\nserialized into the catalogues in various places.\n\nDavid\n\n\n",
"msg_date": "Wed, 9 Aug 2023 22:28:07 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On Wed, 9 Aug 2023 at 22:28, David Rowley <dgrowleyml@gmail.com> wrote:\n> i.e:\n>\n> + Bitmapset *matching_ems = NULL;\n> + memcpy(&idx_iter, iter, sizeof(EquivalenceMemberIterator));\n> + memcpy(&noidx_iter, iter, sizeof(EquivalenceMemberIterator));\n> +\n> + idx_iter.use_index = true;\n> + noidx_iter.use_index = false;\n> +\n> + while ((em = eclass_member_iterator_strict_next(&noidx_iter)) != NULL)\n> + matching_ems = bms_add_member(matching_ems, em->em_index);\n> +\n> + Assert(bms_equal(matching_ems, iter->matching_ems));\n\nSlight correction, you could just get rid of idx_iter completely. I\nonly added that copy since the Assert code needed to iterate and I\ndidn't want to change the position of the iterator that's actually\nbeing used. Since the updated code wouldn't be interesting over\n\"iter\", you could just use \"iter\" directly like I have in the\nAssert(bms_equals... code above.\n\nDavid\n\n\n",
"msg_date": "Wed, 9 Aug 2023 22:37:33 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On Wed, 9 Aug 2023 at 20:15, Yuya Watari <watari.yuya@gmail.com> wrote:\n> I agree with your opinion that my patch lacks some explanations, so I\n> will consider adding more comments. However, I received the following\n> message from David in March.\n>\n> On Thu, Mar 9, 2023 at 6:23 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > For the main patch, I've been starting to wonder if it should work\n> > completely differently. Instead of adding members for partitioned and\n> > inheritance children, we could just translate the Vars from child to\n> > top-level parent and find the member that way. I wondered if this\n> > method might be even faster as it would forego\n> > add_child_rel_equivalences(). I think we'd still need em_is_child for\n> > UNION ALL children. So far, I've not looked into this in detail. I\n> > was hoping to find an idea that would allow some means to have the\n> > planner realise that a LIST partition which allows a single Datum\n> > could skip pushing base quals which are constantly true. i.e:\n> >\n> > create table lp (a int) partition by list(a);\n> > create table lp1 partition of lp for values in(1);\n> > explain select * from lp where a = 1;\n> >\n> > Seq Scan on lp1 lp (cost=0.00..41.88 rows=13 width=4)\n> > Filter: (a = 1)\n>\n> I am concerned that fixing the current patch will conflict with\n> David's idea. Of course, I am now trying to experiment with the above\n> idea, but I should avoid the conflict if he is working on this. David,\n> what do you think about this? Is it OK to post a new patch to address\n> the review comments? I am looking forward to your reply.\n\nSo, I have three concerns with this patch.\n\n1) I really dislike the way eclass_member_iterator_next() has to check\nbms_overlap() to filter out unwanted EMs. This is required because of\nhow add_child_rel_equivalences() does not pass the \"relids\" parameter\nin add_eq_member() as equivalent to pull_varnos(expr). 
See this code\nin master:\n\n/*\n* Transform em_relids to match. Note we do *not* do\n* pull_varnos(child_expr) here, as for example the\n* transformation might have substituted a constant, but we\n* don't want the child member to be marked as constant.\n*/\nnew_relids = bms_difference(cur_em->em_relids,\ntop_parent_relids);\nnew_relids = bms_add_members(new_relids, child_relids);\n\n\nI understand this is done to support Consts in UNION ALL parents, e.g\nthe following query prunes the n=2 UNION ALL branch\n\npostgres=# explain select * from (select 1 AS n,* from pg_Class c1\nunion all select 2 AS n,* from pg_Class c2) where n=1;\n QUERY PLAN\n----------------------------------------------------------------\n Seq Scan on pg_class c1 (cost=0.00..18.13 rows=413 width=277)\n(1 row)\n\n... but the following (existing) comment is just a lie:\n\nRelids em_relids; /* all relids appearing in em_expr */\n\nThis means that there's some weirdness on which RelOptInfos we set\neclass_member_indexes. Do we just set the EM in the RelOptInfos\nmentioned in the em_expr, or should it be the ones in em_relids?\n\nYou can see the following code I wrote in the 0001 patch which tries\nto work around this problem:\n\n+ /*\n+ * We must determine the exact set of relids in the expr for child\n+ * EquivalenceMembers as what is given to us in 'relids' may differ from\n+ * the relids mentioned in the expression. See add_child_rel_equivalences\n+ */\n+ if (parent != NULL)\n+ expr_relids = pull_varnos(root, (Node *) expr);\n+ else\n+ {\n+ expr_relids = relids;\n+ /* We expect the relids to match for non-child members */\n+ Assert(bms_equal(pull_varnos(root, (Node *) expr), relids));\n+ }\n\nSo, you can see we go with the relids from the em_expr rather than\nwhat's mentioned in em_relids. 
I believe this means we need the\nfollowing line:\n\n+ /*\n+ * Don't return members which have no common rels with with_relids\n+ */\n+ if (!bms_overlap(em->em_relids, iter->with_relids))\n+ continue;\n\nI don't quite recall if the em_expr can mention relids that are not in\nem_relids or not or if em_expr's relids always is a subset of\nem_relids.\n\nI'm just concerned this adds complexity and the risk of mixing up the\nmeaning (even more than it is already in master). I'm not sure I'm\nconfident that all this is correct, and I wrote the 0001 patch.\n\nMaybe this can be fixed by changing master so that em_relids always\nmatches pull_varnos(em_expr)? I'm unsure if there are any other\ncomplexities other than having to ensure we don't set em_is_const for\nchild members.\n\n2) The 2nd reason is what I hinted at that you quoted in the email I\nsent you in March. I think if it wasn't for UNION ALL and perhaps\ntable inheritance and we only needed child EMs for partitions of\npartitioned tables, then I think we might be able to get away with\njust translating Exprs child -> parent before looking up the EM and\nlikewise when asked to get join quals for child rels, we'd translate\nthe child relids to their top level parents, find the quals then\ntranslate those back to child form again. EquivalenceClasses would\nthen only contain a few members and there likely wouldn't be a great\nneed to do any indexing like we are in the 0001 patch. I'm sure\nsomeone somewhere probably has a query that would go faster with them,\nbut it's likely going to be rare therefore probably not worth it.\n\nUnfortunately, I'm not smart enough to just tell you this will or will\nnot work just off hand. The UNION ALL branch pruning adds complexity\nthat I don't recall the details of. To know, someone would either\nneed to tell me, or I'd need to go try to make it work myself and then\ndiscover the reason it can't be made to work. 
I'm happy for you to try\nthis, but if you don't I'm not sure when I can do it. I think it\nwould need to be at least explored before I'd ever consider thinking\nabout committing this patch.\n\n3) I just don't like the way the patch switches between methods of\nlooking up EMs as it means we could return EMs in a different order\ndepending on something like how many partitions were pruned or after\nthe DBA does ATTACH PARTITION. That could start causing weird\nproblems like plan changes due to a change in which columns were\nselected in generate_implied_equalities_for_column(). I don't have\nany examples of actual problems, but it's pretty difficult to prove\nthere aren't any.\n\nOf course, I do recall the complaint about the regression for more\nsimple queries and that's why I wrote the iterator code to have it use\nthe linear search when the number of EMs is small, so we can't exactly\njust delete the linear search method as we'd end up with that\nperformance regression again.\n\nI think the best way to move this forward is to explore not putting\npartitioned table partitions in EMs and instead see if we can\ntranslate to top-level parent before lookups. This might just be too\ncomplex to translate the Exprs all the time and it may add overhead\nunless we can quickly determine somehow that we don't need to attempt\nto translate the Expr when the given Expr is already from the\ntop-level parent. If that can't be made to work, then maybe that shows\nthe current patch has merit.\n\nDavid\n\n\n",
"msg_date": "Wed, 9 Aug 2023 23:54:26 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello David,\n\nI really appreciate your quick reply.\n\nOn Wed, Aug 9, 2023 at 7:28 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> If 0004 is adding an em_index to mark the index into\n> PlannerInfo->eq_members, can't you use that in\n> setup_eclass_member[_strict]_iterator to loop to verify that the two\n> methods yield the same result?\n>\n> i.e:\n>\n> + Bitmapset *matching_ems = NULL;\n> + memcpy(&idx_iter, iter, sizeof(EquivalenceMemberIterator));\n> + memcpy(&noidx_iter, iter, sizeof(EquivalenceMemberIterator));\n> +\n> + idx_iter.use_index = true;\n> + noidx_iter.use_index = false;\n> +\n> + while ((em = eclass_member_iterator_strict_next(&noidx_iter)) != NULL)\n> + matching_ems = bms_add_member(matching_ems, em->em_index);\n> +\n> + Assert(bms_equal(matching_ems, iter->matching_ems));\n>\n> That should void the complaint that the Assert checking is too slow.\n> You can also delete the list_ptr_cmp function too (also noticed a\n> complaint about that).\n\nThanks for sharing your idea regarding this verification. It looks\ngood to solve the degradation problem in assert-enabled builds. I will\ntry it.\n\n> For the 0003 patch. Can you explain why you think these fields should\n> be in RangeTblEntry rather than RelOptInfo? I can only guess you might\n> have done this for memory usage so that we don't have to carry those\n> fields for join rels? I think RelOptInfo is the correct place to\n> store fields that are only used in the planner. If you put them in\n> RangeTblEntry they'll end up in pg_rewrite and be stored for all\n> views. Seems very space inefficient and scary as it limits the scope\n> for fixing bugs in back branches due to RangeTblEntries being\n> serialized into the catalogues in various places.\n\nThis change was not made for performance reasons but to avoid null\nreference exceptions. The details are explained in my email [1]. 
In\nbrief, the earlier patch did not work because simple_rel_array[i]\ncould be NULL for some 'i', and we referenced such a RelOptInfo. For\nexample, the following code snippet in add_eq_member() does not work.\nI inserted \"Assert(rel != NULL)\" into this code, and then the\nassertion failed. So, I moved the indexes to RangeTblEntry to address\nthis issue, but I don't know if this solution is good. We may have to\nsolve this in a different way.\n\n=====\n@@ -572,9 +662,31 @@ add_eq_member(EquivalenceClass *ec, Expr *expr,\nRelids relids,\n+ i = -1;\n+ while ((i = bms_next_member(expr_relids, i)) >= 0)\n+ {\n+ RelOptInfo *rel = root->simple_rel_array[i];\n+\n+ rel->eclass_member_indexes =\nbms_add_member(rel->eclass_member_indexes, em_index);\n+ }\n=====\n\nOn Wed, Aug 9, 2023 at 8:54 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> So, I have three concerns with this patch.\n\n> I think the best way to move this forward is to explore not putting\n> partitioned table partitions in EMs and instead see if we can\n> translate to top-level parent before lookups. This might just be too\n> complex to translate the Exprs all the time and it may add overhead\n> unless we can quickly determine somehow that we don't need to attempt\n> to translate the Expr when the given Expr is already from the\n> top-level parent. If that can't be made to work, then maybe that shows\n> the current patch has merit.\n\nI really appreciate your detailed advice. I am sorry that I will not\nbe able to respond for a week or two due to my vacation, but I will\nexplore and work on these issues. Thanks again for your kind reply.\n\n[1] https://www.postgresql.org/message-id/CAJ2pMkYR_X-%3Dpq%2B39-W5kc0OG7q9u5YUwDBCHnkPur17DXnxuQ%40mail.gmail.com\n\n-- \nBest regards,\nYuya Watari\n\n\n",
"msg_date": "Thu, 10 Aug 2023 19:03:43 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello,\n\nOn Wed, Aug 9, 2023 at 8:54 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I think the best way to move this forward is to explore not putting\n> partitioned table partitions in EMs and instead see if we can\n> translate to top-level parent before lookups. This might just be too\n> complex to translate the Exprs all the time and it may add overhead\n> unless we can quickly determine somehow that we don't need to attempt\n> to translate the Expr when the given Expr is already from the\n> top-level parent. If that can't be made to work, then maybe that shows\n> the current patch has merit.\n\nBased on your suggestion, I have experimented with not putting child\nEquivalenceMembers in an EquivalenceClass. I have attached a new\npatch, v20, to this email. The following is a summary of v20.\n\n* v20 has been written from scratch.\n* In v20, EquivalenceClass->ec_members no longer has any child\nmembers. All of ec_members are now non-child. Instead, the child\nEquivalenceMembers are in the RelOptInfos.\n* When child EquivalenceMembers are required, 1) we translate the\ngiven Relids to their top-level parents, and 2) if some parent\nEquivalenceMembers' Relids match the translated top-level ones, we get\nthe child members from the RelOptInfo.\n* With the above change, ec_members has a few members, which leads to\na significant performance improvement. This is the core part of the\nv20 optimization.\n* My experimental results show that v20 performs better for both small\nand large sizes. For small sizes, v20 is clearly superior to v19. For\nlarge sizes, v20 performs as well as v19.\n* At this point, I don't know if we should switch to the v20 method.\nv20 is just a new proof of concept with much room for improvement. It\nis important to compare two different methods of v19 and v20 and\ndiscuss the best strategy.\n\n1. Key idea of v20\n\nI have attached a patch series consisting of two patches. 
v20-0001 and\nv20-0002 are for optimizations regarding EquivalenceClasses and\nRestrictInfos, respectively. v20-0002 is picked up from v19. Most of\nmy new optimizations are in v20-0001.\n\nAs I wrote above, the main change in v20-0001 is that we don't add\nchild EquivalenceMembers to ec_members. I will describe how v20 works.\nFirst of all, take a look at the code of get_eclass_for_sort_expr().\nIts comments are helpful for understanding my idea. Traditionally, we\nhave searched EquivalenceMembers matching the request as follows. This\nwas a very slow linear search when there were many members in the\nlist.\n\n===== Master =====\n foreach(lc2, cur_ec->ec_members)\n {\n EquivalenceMember *cur_em = (EquivalenceMember *) lfirst(lc2);\n\n /*\n * Ignore child members unless they match the request.\n */\n if (cur_em->em_is_child &&\n !bms_equal(cur_em->em_relids, rel))\n continue;\n\n /*\n * Match constants only within the same JoinDomain (see\n * optimizer/README).\n */\n if (cur_em->em_is_const && cur_em->em_jdomain != jdomain)\n continue;\n\n if (opcintype == cur_em->em_datatype &&\n equal(expr, cur_em->em_expr))\n return cur_ec; /* Match! */\n }\n==================\n\nv20 addressed this problem by not adding child members to ec_members.\nSince there are few members in the list, we can speed up the search.\nOf course, we still need child members. Previously, child members have\nbeen made and added to ec_members in\nadd_child_[join_]rel_equivalences(). Now, in v20, we add them to\nchild_[join]rel instead of ec_members. 
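As a rough toy model of this scheme — only parent members in the class, child members fetched from per-relation storage after translating a child rel to its top-level parent — consider the following self-contained sketch (every name is invented for illustration; the real planner structures differ):

```c
#include <assert.h>
#include <stddef.h>

#define TOY_MAX_RELS 8

typedef struct ToyMember
{
    int relid;                  /* relation the member belongs to */
    int value;                  /* stands in for em_expr */
} ToyMember;

typedef struct ToyEC
{
    ToyMember parents[TOY_MAX_RELS];        /* only top-level members */
    int       nparents;
    ToyMember child_members[TOY_MAX_RELS];  /* child member per child relid */
    int       parent_of[TOY_MAX_RELS];      /* child relid -> parent relid,
                                             * -1 if not a child */
} ToyEC;

/*
 * Translate a child relid to its top-level parent, match it against the
 * short parent-only member list, and only then hand back the child's own
 * member from per-relation storage.
 */
static const ToyMember *
toy_lookup(const ToyEC *ec, int relid)
{
    int parent = (ec->parent_of[relid] >= 0) ? ec->parent_of[relid] : relid;

    for (int i = 0; i < ec->nparents; i++)
    {
        if (ec->parents[i].relid != parent)
            continue;
        if (parent != relid)
            return &ec->child_members[relid];   /* translated lookup */
        return &ec->parents[i];
    }
    return NULL;                /* no parent member matches */
}
```

The list searched linearly stays short regardless of how many partitions exist, which is the source of the speedup.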
The following is the v20's\nchange.\n\n===== v20 =====\n@@ -2718,9 +2856,20 @@ add_child_rel_equivalences(PlannerInfo *root,\n top_parent_relids);\n new_relids = bms_add_members(new_relids, child_relids);\n\n- (void) add_eq_member(cur_ec, child_expr, new_relids,\n- cur_em->em_jdomain,\n- cur_em, cur_em->em_datatype);\n+ child_em = make_eq_member(cur_ec, child_expr, new_relids,\n+ cur_em->em_jdomain,\n+ cur_em, cur_em->em_datatype);\n+ child_rel->eclass_child_members = lappend(child_rel->eclass_child_members,\n+ child_em);\n+\n+ /*\n+ * We save the knowledge that 'child_em' can be translated from\n+ * 'child_rel'. This knowledge is useful for\n+ * add_transformed_child_version() to find child members from the\n+ * given Relids.\n+ */\n+ cur_em->em_child_relids = bms_add_member(cur_em->em_child_relids,\n+ child_rel->relid);\n\n /* Record this EC index for the child rel */\n child_rel->eclass_indexes = bms_add_member(child_rel->eclass_indexes, i);\n===============\n\nIn many places, we need child EquivalenceMembers that match the given\nRelids. To get them, we first find the top-level parents of the given\nRelids by calling find_relids_top_parents(). find_relids_top_parents()\nreplaces all of the Relids as their top-level parents. During looping\nover ec_members, we check if the children of an EquivalenceMember can\nmatch the request (top-level parents are needed in this checking). If\nthe children can match, we get child members from RelOptInfos. These\ntechniques are the core of the v20 solution. The next change does what\nI mentioned now.\n\n===== v20 =====\n@@ -599,6 +648,17 @@ get_eclass_for_sort_expr(PlannerInfo *root,\n EquivalenceMember *newem;\n ListCell *lc1;\n MemoryContext oldcontext;\n+ Relids top_parent_rel;\n+\n+ /*\n+ * First, we translate the given Relids to their top-level parents. This is\n+ * required because an EquivalenceClass contains only parent\n+ * EquivalenceMembers, and we have to translate top-level ones to get child\n+ * members. 
We can skip such translations if we now see top-level ones,\n+ * i.e., when top_parent_rel is NULL. See the find_relids_top_parents()'s\n+ * definition for more details.\n+ */\n+ top_parent_rel = find_relids_top_parents(root, rel);\n\n /*\n * Ensure the expression exposes the correct type and collation.\n@@ -632,16 +694,35 @@ get_eclass_for_sort_expr(PlannerInfo *root,\n if (!equal(opfamilies, cur_ec->ec_opfamilies))\n continue;\n\n- foreach(lc2, cur_ec->ec_members)\n+ /*\n+ * When we have to see child EquivalenceMembers, we get and add them to\n+ * 'members'. We cannot use foreach() because the 'members' may be\n+ * modified during iteration.\n+ */\n+ members = cur_ec->ec_members;\n+ modified = false;\n+ for (i = 0; i < list_length(members); i++)\n {\n- EquivalenceMember *cur_em = (EquivalenceMember *) lfirst(lc2);\n+ EquivalenceMember *cur_em =\nlist_nth_node(EquivalenceMember, members, i);\n+\n+ /*\n+ * If child EquivalenceMembers may match the request, we add and\n+ * iterate over them.\n+ */\n+ if (unlikely(top_parent_rel != NULL) && !cur_em->em_is_child &&\n+ bms_equal(cur_em->em_relids, top_parent_rel))\n+ add_child_rel_equivalences_to_list(root, cur_ec, cur_em, rel,\n+ &members, &modified);\n\n /*\n * Ignore child members unless they match the request.\n */\n- if (cur_em->em_is_child &&\n- !bms_equal(cur_em->em_relids, rel))\n- continue;\n+ /*\n+ * If this EquivalenceMember is a child, i.e., translated above,\n+ * it should match the request. We cannot assert this if a request\n+ * is bms_is_subset().\n+ */\n+ Assert(!cur_em->em_is_child || bms_equal(cur_em->em_relids, rel));\n\n /*\n * Match constants only within the same JoinDomain (see\n===============\n\nThe main concern was the overhead of getting top-level parents. If the\ngiven Relids are already top-level, such an operation can be a major\nbottleneck. I addressed this issue with a simple null check. v20 saves\ntop-level parent Relids to PlannerInfo's array. 
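In isolation, the shape of that null-check fast path looks roughly like this toy (self-contained; all names are invented and the real code in pathnode.h differs — a uint64_t bitmask stands in for Relids):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define TOY_MAX_RELS 8

typedef uint64_t ToyRelids;     /* bitmask standing in for Relids */

typedef struct ToyPlannerCtx
{
    /* NULL whenever the query involves no child relations at all */
    const int *top_parent_array;    /* relid -> its top-level parent relid */
} ToyPlannerCtx;

static ToyRelids
toy_translate_slow(const ToyPlannerCtx *ctx, ToyRelids relids)
{
    ToyRelids result = 0;

    for (int i = 0; i < TOY_MAX_RELS; i++)
        if (relids & ((ToyRelids) 1 << i))
            result |= (ToyRelids) 1 << ctx->top_parent_array[i];
    return result;
}

/*
 * One pointer test handles the common partition-free case: when the array
 * is NULL we return 0 ("already top-level, nothing to translate") without
 * ever touching the slow path.
 */
#define toy_find_top_parents(ctx, relids) \
    (((ctx)->top_parent_array == NULL) ? 0 : toy_translate_slow(ctx, relids))
```

Queries without partitions therefore pay only a single comparison, which keeps the regression on non-partitioned workloads small.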
If there are no\nchildren, v20 sets this array to null, and find_relids_top_parents()\ncan quickly conclude that the given Relids are top-level. For more\ndetails, see the find_relids_top_parents() in pathnode.h (partially\nquoted below).\n\n===== v20 =====\n@@ -323,6 +323,24 @@ extern Relids min_join_parameterization(PlannerInfo *root,\n+#define find_relids_top_parents(root, relids) \\\n+ (likely((root)->top_parent_relid_array == NULL) \\\n+ ? NULL : find_relids_top_parents_slow(root, relids))\n+extern Relids find_relids_top_parents_slow(PlannerInfo *root, Relids relids);\n===============\n\n2. Experimental results\n\nI conducted experiments to test the performance of v20.\n\n2.1. Small size cases (make installcheck)\n\nWhen I worked with you on optimizing Bitmapset operations, we used\n'make installcheck' to check degradation in planning [1]. I did the\nsame for v19 and v20. Figure 1 and Tables 1 and 2 are the results.\nThey show that v20 is clearly superior to v19. The degradation of v20\nwas only 0.5%, while that of v19 was 2.1%. Figure 1 shows that the\n0.5% slowdown is much smaller than its variance.\n\nTable 1: Total Planning Time for installcheck (seconds)\n-----------------------------------------\n | Mean | Median | Stddev\n-----------------------------------------\n Master | 2.505161 | 2.503110 | 0.019775\n v19 | 2.558466 | 2.558560 | 0.017320\n v20 | 2.517806 | 2.516081 | 0.016932\n-----------------------------------------\n\nTable 2: Speedup for installcheck (higher is better)\n----------------------\n | Mean | Median\n----------------------\n v19 | 97.9% | 97.8%\n v20 | 99.5% | 99.5%\n----------------------\n\n2.2. Large size cases (queries A and B)\n\nI evaluated v20 with the same queries I have used in this thread. The\nqueries, Queries A and B, are attached in [2]. Both queries join\npartitioned tables. Figures 2 and 3 and the following tables show the\nresults. v20 performed as well as v19 for large sizes. v20 achieved a\nspeedup of about x10. 
There seems to be some regression for small\nsizes.\n\nTable 3: Planning time of Query A\n(n: the number of partitions of each table)\n(lower is better)\n------------------------------------------\n n | Master (ms) | v19 (ms) | v20 (ms)\n------------------------------------------\n 1 | 0.713 | 0.730 | 0.737\n 2 | 0.792 | 0.814 | 0.815\n 4 | 0.955 | 0.982 | 0.987\n 8 | 1.291 | 1.299 | 1.335\n 16 | 1.984 | 1.951 | 1.992\n 32 | 3.991 | 3.720 | 3.778\n 64 | 7.701 | 6.003 | 6.891\n 128 | 21.118 | 13.988 | 12.861\n 256 | 77.405 | 37.091 | 37.294\n 384 | 166.122 | 56.748 | 57.130\n 512 | 316.650 | 79.942 | 78.692\n 640 | 520.007 | 94.030 | 93.772\n 768 | 778.314 | 123.494 | 123.207\n 896 | 1182.477 | 185.422 | 179.266\n 1024 | 1547.897 | 161.104 | 155.761\n------------------------------------------\n\nTable 4: Speedup of Query A (higher is better)\n------------------------\n n | v19 | v20\n------------------------\n 1 | 97.7% | 96.7%\n 2 | 97.3% | 97.2%\n 4 | 97.3% | 96.8%\n 8 | 99.4% | 96.7%\n 16 | 101.7% | 99.6%\n 32 | 107.3% | 105.6%\n 64 | 128.3% | 111.8%\n 128 | 151.0% | 164.2%\n 256 | 208.7% | 207.6%\n 384 | 292.7% | 290.8%\n 512 | 396.1% | 402.4%\n 640 | 553.0% | 554.5%\n 768 | 630.2% | 631.7%\n 896 | 637.7% | 659.6%\n 1024 | 960.8% | 993.8%\n------------------------\n\nTable 5: Planning time of Query B\n-----------------------------------------\n n | Master (ms) | v19 (ms) | v20 (ms)\n-----------------------------------------\n 1 | 37.044 | 38.062 | 37.614\n 2 | 35.839 | 36.804 | 36.555\n 4 | 38.202 | 37.864 | 37.977\n 8 | 42.292 | 41.023 | 41.210\n 16 | 51.867 | 46.481 | 46.477\n 32 | 80.003 | 57.329 | 57.363\n 64 | 185.212 | 87.124 | 88.528\n 128 | 656.116 | 157.236 | 160.884\n 256 | 2883.258 | 343.035 | 340.285\n-----------------------------------------\n\nTable 6: Speedup of Query B (higher is better)\n-----------------------\n n | v19 | v20\n-----------------------\n 1 | 97.3% | 98.5%\n 2 | 97.4% | 98.0%\n 4 | 100.9% | 100.6%\n 8 | 103.1% | 102.6%\n 16 | 
111.6% | 111.6%\n 32 | 139.6% | 139.5%\n 64 | 212.6% | 209.2%\n 128 | 417.3% | 407.8%\n 256 | 840.5% | 847.3%\n-----------------------\n\n3. Future works\n\n3.1. Redundant memory allocation of Lists\n\nWhen we need child EquivalenceMembers in a loop over ec_members, v20\nadds them to the list. However, since we cannot modify the ec_members,\nv20 always copies it. In most cases, there are only one or two child\nmembers, so this behavior is a waste of memory and time and not a good\nidea. I didn't address this problem in v20 because doing so could add\nmuch complexity to the code, but it is one of the major future works.\n\nI suspect that the degradation of Queries A and B is due to this\nproblem. The difference between 'make installcheck' and Queries A and\nB is whether there are partitioned tables. Most of the tests in 'make\ninstallcheck' do not have partitions, so find_relids_top_parents()\ncould immediately determine the given Relids are already top-level and\nkeep degradation very small. However, since Queries A and B have\npartitions, too frequent allocations of Lists may have caused the\nregression. I hope we can reduce the degradation by avoiding these\nmemory allocations. I will continue to investigate and fix this\nproblem.\n\n3.2. em_relids and pull_varnos\n\nI'm sorry that v20 did not address your 1st concern regarding\nem_relids and pull_varnos. I will try to look into this.\n\n3.3. Indexes for RestrictInfos\n\nIndexes for RestrictInfos are still in RangeTblEntry in v20-0002. I\nwill also investigate this issue.\n\n3.4. Correctness\n\nv20 has passed all regression tests in my environment, but I'm not so\nsure if v20 is correct.\n\n4. Conclusion\n\nI wrote v20 based on a new idea. It may have a lot of problems, but it\nhas advantages. At least it solves your 3rd concern. Since we iterate\nLists instead of Bitmapsets, we don't have to introduce an iterator\nmechanism. My experiment showed that the 'make installcheck'\ndegradation was very small. 
For the 2nd concern, v20 no longer adds\nchild EquivalenceMembers to ec_members. I'm sorry if this is not what\nyou intended, but it effectively worked. Again, v20 is a new proof of\nconcept. I hope the v20-based approach will be a good alternative\nsolution if we can overcome several problems, including what I\nmentioned above.\n\n[1] https://www.postgresql.org/message-id/CAApHDvo68m_0JuTHnEHFNsdSJEb2uPphK6BWXStj93u_QEi2rg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAJ2pMkYcKHFBD_OMUSVyhYSQU0-j9T6NZ0pL6pwbZsUCohWc7Q%40mail.gmail.com\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Fri, 25 Aug 2023 16:39:16 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hi Yuya,\n\nOn Fri, Aug 25, 2023 at 1:09 PM Yuya Watari <watari.yuya@gmail.com> wrote:\n>\n> 3. Future works\n>\n> 3.1. Redundant memory allocation of Lists\n>\n> When we need child EquivalenceMembers in a loop over ec_members, v20\n> adds them to the list. However, since we cannot modify the ec_members,\n> v20 always copies it. In most cases, there are only one or two child\n> members, so this behavior is a waste of memory and time and not a good\n> idea. I didn't address this problem in v20 because doing so could add\n> much complexity to the code, but it is one of the major future works.\n>\n> I suspect that the degradation of Queries A and B is due to this\n> problem. The difference between 'make installcheck' and Queries A and\n> B is whether there are partitioned tables. Most of the tests in 'make\n> installcheck' do not have partitions, so find_relids_top_parents()\n> could immediately determine the given Relids are already top-level and\n> keep degradation very small. However, since Queries A and B have\n> partitions, too frequent allocations of Lists may have caused the\n> regression. I hope we can reduce the degradation by avoiding these\n> memory allocations. I will continue to investigate and fix this\n> problem.\n>\n> 3.2. em_relids and pull_varnos\n>\n> I'm sorry that v20 did not address your 1st concern regarding\n> em_relids and pull_varnos. I will try to look into this.\n>\n> 3.3. Indexes for RestrictInfos\n>\n> Indexes for RestrictInfos are still in RangeTblEntry in v20-0002. I\n> will also investigate this issue.\n>\n> 3.4. Correctness\n>\n> v20 has passed all regression tests in my environment, but I'm not so\n> sure if v20 is correct.\n>\n> 4. Conclusion\n>\n> I wrote v20 based on a new idea. It may have a lot of problems, but it\n> has advantages. At least it solves your 3rd concern. Since we iterate\n> Lists instead of Bitmapsets, we don't have to introduce an iterator\n> mechanism. 
My experiment showed that the 'make installcheck'\n> degradation was very small. For the 2nd concern, v20 no longer adds\n> child EquivalenceMembers to ec_members. I'm sorry if this is not what\n> you intended, but it effectively worked. Again, v20 is a new proof of\n> concept. I hope the v20-based approach will be a good alternative\n> solution if we can overcome several problems, including what I\n> mentioned above.\n\nIt seems that you are still investigating and fixing issues. But the\nCF entry is marked as \"needs review\". I think a better status is\n\"WoA\". Do you agree with that?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 7 Sep 2023 12:13:15 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On 25/8/2023 14:39, Yuya Watari wrote:\n> Hello,\n> \n> On Wed, Aug 9, 2023 at 8:54 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>> I think the best way to move this forward is to explore not putting\n>> partitioned table partitions in EMs and instead see if we can\n>> translate to top-level parent before lookups. This might just be too\n>> complex to translate the Exprs all the time and it may add overhead\n>> unless we can quickly determine somehow that we don't need to attempt\n>> to translate the Expr when the given Expr is already from the\n>> top-level parent. If that can't be made to work, then maybe that shows\n>> the current patch has merit.\n> \n> Based on your suggestion, I have experimented with not putting child\n> EquivalenceMembers in an EquivalenceClass. I have attached a new\n> patch, v20, to this email. The following is a summary of v20.\nWorking on self-join removal in the thread [1] nearby, I stuck into the \nproblem, which made an additional argument to work in this new direction \nthan a couple of previous ones.\nWith indexing positions in the list of equivalence members, we make some \noptimizations like join elimination more complicated - it may need to \nremove some clauses and equivalence class members.\nFor changing lists of derives or ec_members, we should go through all \nthe index lists and fix them, which is a non-trivial operation.\n\n[1] \nhttps://www.postgresql.org/message-id/flat/64486b0b-0404-e39e-322d-0801154901f3%40postgrespro.ru\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Tue, 19 Sep 2023 15:21:15 +0700",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello Ashutosh and Andrey,\n\nThank you for your email, and I really apologize for my late response.\n\nOn Thu, Sep 7, 2023 at 3:43 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> It seems that you are still investigating and fixing issues. But the\n> CF entry is marked as \"needs review\". I think a better status is\n> \"WoA\". Do you agree with that?\n\nYes, I am now investigating and fixing issues. I agree with you and\nchanged the entry's status to \"Waiting on Author\". Thank you for your\nadvice.\n\nOn Tue, Sep 19, 2023 at 5:21 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> Working on self-join removal in the thread [1] nearby, I stuck into the\n> problem, which made an additional argument to work in this new direction\n> than a couple of previous ones.\n> With indexing positions in the list of equivalence members, we make some\n> optimizations like join elimination more complicated - it may need to\n> remove some clauses and equivalence class members.\n> For changing lists of derives or ec_members, we should go through all\n> the index lists and fix them, which is a non-trivial operation.\n\nThank you for looking into this and pointing that out. I understand\nthat this problem will occur somewhere like your patch [1] quoted\nbelow because we need to modify RelOptInfo->eclass_child_members in\naddition to ec_members. Is my understanding correct? (Of course, I\nknow ec_[no]rel_members, but I doubt we need them.)\n\n=====\n+static void\n+update_eclass(EquivalenceClass *ec, int from, int to)\n+{\n+ List *new_members = NIL;\n+ ListCell *lc;\n+\n+ foreach(lc, ec->ec_members)\n+ {\n+ EquivalenceMember *em = lfirst_node(EquivalenceMember, lc);\n+ bool is_redundant = false;\n+\n ...\n+\n+ if (!is_redundant)\n+ new_members = lappend(new_members, em);\n+ }\n+\n+ list_free(ec->ec_members);\n+ ec->ec_members = new_members;\n=====\n\nI think we may be able to remove the eclass_child_members field by\nmaking child members on demand. 
v20 makes child members at\nadd_[child_]join_rel_equivalences() and adds them into\nRelOptInfo->eclass_child_members. Instead of doing that, if we\ntranslate on demand when child members are requested,\nRelOptInfo->eclass_child_members is no longer necessary. After that,\nthere is only ec_members, which consists of parent members, so\nremoving clauses will still be simple. Do you think this idea will\nsolve your problem? If so, I will experiment with this and share a new\npatch version. The main concern with this idea is that the same child\nmember will be created many times, wasting time and memory. Some\ntechniques like caching might solve this.\n\n[1] https://www.postgresql.org/message-id/flat/64486b0b-0404-e39e-322d-0801154901f3%40postgrespro.ru\n\n-- \nBest regards,\nYuya Watari\n\n\n",
"msg_date": "Wed, 20 Sep 2023 19:04:46 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On Wed, Sep 20, 2023 at 3:35 PM Yuya Watari <watari.yuya@gmail.com> wrote:\n\n> I think we may be able to remove the eclass_child_members field by\n> making child members on demand. v20 makes child members at\n> add_[child_]join_rel_equivalences() and adds them into\n> RelOptInfo->eclass_child_members. Instead of doing that, if we\n> translate on demand when child members are requested,\n> RelOptInfo->eclass_child_members is no longer necessary. After that,\n> there is only ec_members, which consists of parent members, so\n> removing clauses will still be simple. Do you think this idea will\n> solve your problem? If so, I will experiment with this and share a new\n> patch version. The main concern with this idea is that the same child\n> member will be created many times, wasting time and memory. Some\n> techniques like caching might solve this.\n>\n\nWhile working on RestrictInfo translations patch I was thinking on\nthese lines. [1] uses hash table for storing translated RestrictInfo.\nAn EC can have a hash table to store ec_member translations. The same\npatchset also has some changes in the code which generates\nRestrictInfo clauses from ECs. I think that code will be simplified by\nyour approach.\n\n[1] https://www.postgresql.org/message-id/CAExHW5u0Yyyr2mwvLrvVy_QnLd65kpc9u-bO0Ox7bgLkgbac8A@mail.gmail.com\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 20 Sep 2023 16:33:09 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On Wed, Sep 20, 2023, at 5:04 PM, Yuya Watari wrote:\n> On Tue, Sep 19, 2023 at 5:21 PM Andrey Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>> Working on self-join removal in the thread [1] nearby, I stuck into the\n>> problem, which made an additional argument to work in this new direction\n>> than a couple of previous ones.\n>> With indexing positions in the list of equivalence members, we make some\n>> optimizations like join elimination more complicated - it may need to\n>> remove some clauses and equivalence class members.\n>> For changing lists of derives or ec_members, we should go through all\n>> the index lists and fix them, which is a non-trivial operation.\n>\n> Thank you for looking into this and pointing that out. I understand\n> that this problem will occur somewhere like your patch [1] quoted\n> below because we need to modify RelOptInfo->eclass_child_members in\n> addition to ec_members. Is my understanding correct? (Of course, I\n> know ec_[no]rel_members, but I doubt we need them.)\n\nIt is okay if we talk about the self-join-removal feature specifically because joins are removed before an inheritance expansion.\nBut ec_source_indexes and ec_derive_indexes point to specific places in eq_sources and eq_derives lists. If I removed an EquivalenceClass or a restriction during an optimisation, I would arrange all indexes, too.\nRight now, I use a workaround here and remove the index link without removing the element from the list. But I'm not sure how good this approach can be in perspective.\nSo, having eq_sources and eq_derives localised in EC could make such optimisations a bit more simple.\n\n-- \nRegards,\nAndrei Lepikhov\n\n\n",
"msg_date": "Fri, 22 Sep 2023 10:48:38 +0700",
"msg_from": "\"Lepikhov Andrei\" <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello Ashutosh and Andrey,\n\nOn Wed, Sep 20, 2023 at 8:03 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> While working on RestrictInfo translations patch I was thinking on\n> these lines. [1] uses hash table for storing translated RestrictInfo.\n> An EC can have a hash table to store ec_member translations. The same\n> patchset also has some changes in the code which generates\n> RestrictInfo clauses from ECs. I think that code will be simplified by\n> your approach.\n\nThank you for sharing this. I agree that we have to avoid adding\ncomplexity to existing or future codes through my patch. As you say,\nthis approach mentioned in the last email is helpful to simplify the\ncode, so I will try it.\n\nOn Fri, Sep 22, 2023 at 12:49 PM Lepikhov Andrei\n<a.lepikhov@postgrespro.ru> wrote:\n> It is okay if we talk about the self-join-removal feature specifically because joins are removed before an inheritance expansion.\n> But ec_source_indexes and ec_derive_indexes point to specific places in eq_sources and eq_derives lists. If I removed an EquivalenceClass or a restriction during an optimisation, I would arrange all indexes, too.\n> Right now, I use a workaround here and remove the index link without removing the element from the list. But I'm not sure how good this approach can be in perspective.\n> So, having eq_sources and eq_derives localised in EC could make such optimisations a bit more simple.\n\nThank you for pointing it out. The ec_source_indexes and\nec_derive_indexes are just picked up from the previous patch, and I\nhave not changed their design. I think a similar approach to\nEquivalenceMembers may be applied to RestrictInfos. I will experiment\nwith them and share a new patch.\n\n-- \nBest regards,\nYuya Watari\n\n\n",
"msg_date": "Wed, 27 Sep 2023 16:28:46 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hi, all!\n\nWhile I was reviewing the patches, I noticed that they needed some \nrebasing, and in one of the patches \n(Introduce-indexes-for-RestrictInfo.patch) there was a conflict with the \nrecently added self-join-removal feature [1]. So, I rebased patches and \nresolved the conflicts. While I was doing this, I found a problem that I \nalso fixed:\n\n1. Due to the lack of ec_source_indexes, ec_derive_indexes, we could \ncatch an error during the execution of atomic functions such as:\n\nERROR: unrecognized token: \")\"\nContext: внедрённая в код SQL-функция \"shobj_description\"\n\nI fixed it.\n\nWe save the current reading context before reading the field name, then \ncheck whether the field has been read and, if not, restore the context \nto allow the next macro reads the field name correctly.\n\nI added the solution to the bug_related_atomic_function.diff file.\n\n2. I added the solution to the conflict to the \nsolved_conflict_with_self_join_removal.diff file.\n\nAll diff files have already been added to \nv21-0002-Introduce-indexes-for-RestrictInfo patch.\n\n\n1. \nhttps://www.postgresql.org/message-id/CAPpHfduLxYm4biJrTbjBxTAW6vkxBswuQ2B%3DgXU%2Bc37QJd6%2BOw%40mail.gmail.com\n\n-- \nRegards,\nAlena Rybakina",
"msg_date": "Sat, 18 Nov 2023 00:04:12 +0300",
"msg_from": "Alena Rybakina <lena.ribackina@yandex.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On Sat, Nov 18, 2023 at 4:04 AM Alena Rybakina <lena.ribackina@yandex.ru> wrote:\n>\n> All diff files have already been added to v21-0002-Introduce-indexes-for-RestrictInfo patch.\n\nUnfortunately, the patch tester is too smart for its own good, and\nwill try to apply .diff files as well. Since\nbug_related_to_atomic_function.diff is first in the alphabet, it comes\nfirst, which is the reason for the current CI failure.\n\n\n",
"msg_date": "Sat, 18 Nov 2023 09:45:51 +0700",
"msg_from": "John Naylor <johncnaylorls@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "John Naylor <johncnaylorls@gmail.com> writes:\n> On Sat, Nov 18, 2023 at 4:04 AM Alena Rybakina <lena.ribackina@yandex.ru> wrote:\n>> All diff files have already been added to v21-0002-Introduce-indexes-for-RestrictInfo patch.\n\n> Unfortunately, the patch tester is too smart for its own good, and\n> will try to apply .diff files as well.\n\nYeah --- see documentation here:\n\nhttps://wiki.postgresql.org/wiki/Cfbot\n\nThat suggests using a .txt extension for anything you don't want to\nbe taken as part of the patch set.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Nov 2023 22:13:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On 18.11.2023 05:45, John Naylor wrote:\n> On Sat, Nov 18, 2023 at 4:04 AM Alena Rybakina <lena.ribackina@yandex.ru> wrote:\n>> All diff files have already been added to v21-0002-Introduce-indexes-for-RestrictInfo patch.\n> Unfortunately, the patch tester is too smart for its own good, and\n> will try to apply .diff files as well. Since\n> bug_related_to_atomic_function.diff is first in the alphabet, it comes\n> first, which is the reason for the current CI failure.\n\nOn 18.11.2023 06:13, Tom Lane wrote:\n> John Naylor <johncnaylorls@gmail.com> writes:\n>> On Sat, Nov 18, 2023 at 4:04 AM Alena Rybakina <lena.ribackina@yandex.ru> wrote:\n>>> All diff files have already been added to v21-0002-Introduce-indexes-for-RestrictInfo patch.\n>> Unfortunately, the patch tester is too smart for its own good, and\n>> will try to apply .diff files as well.\n> Yeah --- see documentation here:\n>\n> https://wiki.postgresql.org/wiki/Cfbot\n>\n> That suggests using a .txt extension for anything you don't want to\n> be taken as part of the patch set.\n>\n> \t\t\tregards, tom lane\n\nThank you for explanation. I fixed it.\n\nI have attached the previous diff files as txt so that they will not \napplied (they are already applied in the second patch \n\"v21-0002-PATCH-PATCH-1-2-Introduce-indexes-for-RestrictInfo-T.patch\"). \nAlso, the previous time I missed the fact that the files conflict with \neach other - I fixed it too and everything seems to work fine now.",
"msg_date": "Sun, 19 Nov 2023 02:57:34 +0300",
"msg_from": "Alena Rybakina <lena.ribackina@yandex.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On 27/9/2023 14:28, Yuya Watari wrote:\n> Thank you for pointing it out. The ec_source_indexes and\n> ec_derive_indexes are just picked up from the previous patch, and I\n> have not changed their design. I think a similar approach to\n> EquivalenceMembers may be applied to RestrictInfos. I will experiment\n> with them and share a new patch.\n\nDuring the work on committing the SJE feature [1], Alexander Korotkov \npointed out the silver lining in this work [2]: he proposed that we \nshouldn't remove RelOptInfo from simple_rel_array at all but replace it \nwith an 'Alias', which will refer the kept relation. It can simplify \nfurther optimizations on removing redundant parts of the query.\n\n[1] \nhttps://www.postgresql.org/message-id/flat/64486b0b-0404-e39e-322d-0801154901f3%40postgrespro.ru\n[2] \nhttps://www.postgresql.org/message-id/CAPpHfdsnAbg8CaK+NJ8AkiG_+_Tt07eCStkb1LOa50f0UsT5RQ@mail.gmail.com\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Mon, 20 Nov 2023 11:45:42 +0700",
"msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello Alena, Andrei, and all,\n\nThank you for reviewing this patch. I really apologize for not\nupdating this thread for a while.\n\nOn Sat, Nov 18, 2023 at 6:04 AM Alena Rybakina <lena.ribackina@yandex.ru> wrote:\n> Hi, all!\n>\n> While I was reviewing the patches, I noticed that they needed some rebasing, and in one of the patches (Introduce-indexes-for-RestrictInfo.patch) there was a conflict with the recently added self-join-removal feature [1]. So, I rebased patches and resolved the conflicts. While I was doing this, I found a problem that I also fixed:\n\nThank you very much for rebasing these patches and fixing the issue.\nThe bug seemed to be caused because these indexes were in\nRangeTblEntry, and the handling of their serialization was not\ncorrect. Thank you for fixing it.\n\nOn Mon, Nov 20, 2023 at 1:45 PM Andrei Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> During the work on committing the SJE feature [1], Alexander Korotkov\n> pointed out the silver lining in this work [2]: he proposed that we\n> shouldn't remove RelOptInfo from simple_rel_array at all but replace it\n> with an 'Alias', which will refer the kept relation. It can simplify\n> further optimizations on removing redundant parts of the query.\n\nThank you for sharing this information. I think the idea suggested by\nAlexander Korotkov is also helpful for our patch. As mentioned above,\nthe indexes are in RangeTblEntry in the current implementation.\nHowever, I think RangeTblEntry is not the best place to store them. An\n'alias' relids may help solve this and simplify fixing the above bug.\nI will try this approach soon.\n\nUnfortunately, I've been busy due to work, so I won't be able to\nrespond for several weeks. I'm really sorry for not being able to see\nthe patches. As soon as I'm not busy, I will look at them, consider\nthe above approach, and reply to this thread. 
If there is no\nobjection, I will move this CF entry forward to next CF.\n\nAgain, thank you very much for looking at this thread, and I'm sorry\nfor my late work.\n\n-- \nBest regards,\nYuya Watari\n\n\n",
"msg_date": "Wed, 22 Nov 2023 14:32:04 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello,\n\nOn Wed, Nov 22, 2023 at 2:32 PM Yuya Watari <watari.yuya@gmail.com> wrote:\n> Unfortunately, I've been busy due to work, so I won't be able to\n> respond for several weeks. I'm really sorry for not being able to see\n> the patches. As soon as I'm not busy, I will look at them, consider\n> the above approach, and reply to this thread. If there is no\n> objection, I will move this CF entry forward to next CF.\n\nSince the end of this month is approaching, I moved this CF entry to\nthe next CF (January CF). I will reply to this thread in a few weeks.\nAgain, I appreciate your kind reviews and patches.\n\n-- \nBest regards,\nYuya Watari\n\n\n",
"msg_date": "Thu, 30 Nov 2023 13:18:57 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello Alena, Andrei, and all,\n\nI am sorry for my late response. I found that the current patches do\nnot apply to the master, so I have rebased those patches. I have\nattached v22. For this later discussion, I separated the rebasing and\nbug fixing that Alena did in v21 into separate commits, v22-0003 and\nv22-0004. I will merge these commits after the discussion.\n\n1. v22-0003 (solved_conflict_with_self_join_removal.txt)\n\nThank you for your rebase. Looking at your rebasing patch, I thought\nwe could do this more simply. Your patch deletes (more precisely, sets\nto null) non-redundant members from the root->eq_sources list and\nre-adds them to the same list. However, this approach seems a little\nwaste of memory. Instead, we can update\nEquivalenceClass->ec_source_indexes directly. Then, we can reuse the\nmembers in root->eq_sources and don't need to extend root->eq_sources.\nI did this in v22-0003. What do you think of this approach?\n\nThe main concern with this idea is that it does not fix\nRangeTblEntry->eclass_source_indexes. The current code works fine even\nif we don't fix the index because get_ec_source_indexes() always does\nbms_intersect() for eclass_source_indexes and ec_source_indexes. If we\nguaranteed this behavior of doing bms_intersect, then simply modifying\nec_source_indexes would be fine. Fortunately, such a guarantee is not\nso difficult.\n\nAnd your patch removes the following assertion code from the previous\npatch. May I ask why you removed this code? I think this assertion is\nhelpful for sanity checks. 
Of course, I know that this kind of\nassertion will slow down regression tests or assert-enabled builds.\nSo, we may have to discuss which assertions to keep and which to\ndiscard.\n\n=====\n-#ifdef USE_ASSERT_CHECKING\n- /* verify the results look sane */\n- i = -1;\n- while ((i = bms_next_member(rel_esis, i)) >= 0)\n- {\n- RestrictInfo *rinfo = list_nth_node(RestrictInfo, root->eq_sources,\n- i);\n-\n- Assert(bms_overlap(relids, rinfo->clause_relids));\n- }\n-#endif\n=====\n\nFinally, your patch changes the name of the following function. I\nunderstand the need for this change, but it has nothing to do with our\npatches, so we should not include it and discuss it in another thread.\n\n=====\n-update_eclasses(EquivalenceClass *ec, int from, int to)\n+update_eclass(PlannerInfo *root, EquivalenceClass *ec, int from, int to)\n=====\n\n2. v22-0004 (bug_related_to_atomic_function.txt)\n\nThank you for fixing the bug. As I wrote in the previous mail:\n\nOn Wed, Nov 22, 2023 at 2:32 PM Yuya Watari <watari.yuya@gmail.com> wrote:\n> On Mon, Nov 20, 2023 at 1:45 PM Andrei Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n> > During the work on committing the SJE feature [1], Alexander Korotkov\n> > pointed out the silver lining in this work [2]: he proposed that we\n> > shouldn't remove RelOptInfo from simple_rel_array at all but replace it\n> > with an 'Alias', which will refer the kept relation. It can simplify\n> > further optimizations on removing redundant parts of the query.\n>\n> Thank you for sharing this information. I think the idea suggested by\n> Alexander Korotkov is also helpful for our patch. As mentioned above,\n> the indexes are in RangeTblEntry in the current implementation.\n> However, I think RangeTblEntry is not the best place to store them. 
An\n> 'alias' relids may help solve this and simplify fixing the above bug.\n> I will try this approach soon.\n\nI think that the best way to solve this issue is to move these indexes\nfrom RangeTblEntry to RelOptInfo. Since they are related to planning\ntime, they should be in RelOptInfo. The reason why I put these indexes\nin RangeTblEntry is because some RelOptInfos can be null and we cannot\nstore the indexes. This problem is similar to an issue regarding\n'varno 0' Vars. I hope an alias RelOptInfo would help solve this\nissue. I have attached the current proof of concept I am considering\nas poc-alias-reloptinfo.txt. To test this patch, please follow the\nprocedure below.\n\n1. Apply all *.patch files,\n2. Apply Alexander Korotkov's alias_relids.patch [1], and\n3. Apply poc-alias-reloptinfo.txt, which is attached to this email.\n\nMy patch creates a dummy (or an alias) RelOptInfo to store indexes if\nthe corresponding RelOptInfo is null. The following is the core change\nin my patch.\n\n=====\n@@ -627,9 +627,19 @@ add_eq_source(PlannerInfo *root, EquivalenceClass\n*ec, RestrictInfo *rinfo)\n i = -1;\n while ((i = bms_next_member(rinfo->clause_relids, i)) >= 0)\n {\n- RangeTblEntry *rte = root->simple_rte_array[i];\n+ RelOptInfo *rel = root->simple_rel_array[i];\n\n- rte->eclass_source_indexes = bms_add_member(rte->eclass_source_indexes,\n+ /*\n+ * If the corresponding RelOptInfo does not exist, we create a 'dummy'\n+ * RelOptInfo for storing EquivalenceClass indexes.\n+ */\n+ if (rel == NULL)\n+ {\n+ rel = root->simple_rel_array[i] = makeNode(RelOptInfo);\n+ rel->eclass_source_indexes = NULL;\n+ rel->eclass_derive_indexes = NULL;\n+ }\n+ rel->eclass_source_indexes = bms_add_member(rel->eclass_source_indexes,\n source_idx);\n }\n=====\n\nAt this point, I'm not sure if this approach is correct. It seems to\npass the regression tests, but we should doubt its correctness. 
I will\ncontinue to experiment with this idea.\n\n[1] https://www.postgresql.org/message-id/CAPpHfdseB13zJJPZuBORevRnZ0vcFyUaaJeSGfAysX7S5er%2BEQ%40mail.gmail.com\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Wed, 13 Dec 2023 15:21:57 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hi!\nOn 13.12.2023 09:21, Yuya Watari wrote:\n> Hello Alena, Andrei, and all,\n>\n> I am sorry for my late response. I found that the current patches do\n> not apply to the master, so I have rebased those patches. I have\n> attached v22. For this later discussion, I separated the rebasing and\n> bug fixing that Alena did in v21 into separate commits, v22-0003 and\n> v22-0004. I will merge these commits after the discussion.\n>\n> 1. v22-0003 (solved_conflict_with_self_join_removal.txt)\nThank you!\n> Thank you for your rebase. Looking at your rebasing patch, I thought\n> we could do this more simply. Your patch deletes (more precisely, sets\n> to null) non-redundant members from the root->eq_sources list and\n> re-adds them to the same list. However, this approach seems a little\n> waste of memory. Instead, we can update\n> EquivalenceClass->ec_source_indexes directly. Then, we can reuse the\n> members in root->eq_sources and don't need to extend root->eq_sources.\n> I did this in v22-0003. What do you think of this approach?\nI thought about this earlier and was worried that the index links of the \nequivalence classes might not be referenced correctly for outer joins,\nso I decided to just overwrite them and reset the previous ones.\n> The main concern with this idea is that it does not fix\n> RangeTblEntry->eclass_source_indexes. The current code works fine even\n> if we don't fix the index because get_ec_source_indexes() always does\n> bms_intersect() for eclass_source_indexes and ec_source_indexes. If we\n> guaranteed this behavior of doing bms_intersect, then simply modifying\n> ec_source_indexes would be fine. Fortunately, such a guarantee is not\n> so difficult.\n>\n> And your patch removes the following assertion code from the previous\n> patch. May I ask why you removed this code? I think this assertion is\n> helpful for sanity checks. 
Of course, I know that this kind of\n> assertion will slow down regression tests or assert-enabled builds.\n> So, we may have to discuss which assertions to keep and which to\n> discard.\n>\n> =====\n> -#ifdef USE_ASSERT_CHECKING\n> - /* verify the results look sane */\n> - i = -1;\n> - while ((i = bms_next_member(rel_esis, i)) >= 0)\n> - {\n> - RestrictInfo *rinfo = list_nth_node(RestrictInfo, root->eq_sources,\n> - i);\n> -\n> - Assert(bms_overlap(relids, rinfo->clause_relids));\n> - }\n> -#endif\n> =====\nThis is due to the fact I explained before: since we zero out the values \nthe indexes point to, this check is no longer correct - an index may now \npoint to a zeroed value, and that is expected.\nThat's why I removed this check.\n> Finally, your patch changes the name of the following function. I\n> understand the need for this change, but it has nothing to do with our\n> patches, so we should not include it and discuss it in another thread.\n>\n> =====\n> -update_eclasses(EquivalenceClass *ec, int from, int to)\n> +update_eclass(PlannerInfo *root, EquivalenceClass *ec, int from, int to)\n> =====\nI agree.\n> 2. v22-0004 (bug_related_to_atomic_function.txt)\n>\n> Thank you for fixing the bug. As I wrote in the previous mail:\n>\n> On Wed, Nov 22, 2023 at 2:32 PM Yuya Watari<watari.yuya@gmail.com> wrote:\n>> On Mon, Nov 20, 2023 at 1:45 PM Andrei Lepikhov\n>> <a.lepikhov@postgrespro.ru> wrote:\n>>> During the work on committing the SJE feature [1], Alexander Korotkov\n>>> pointed out the silver lining in this work [2]: he proposed that we\n>>> shouldn't remove RelOptInfo from simple_rel_array at all but replace it\n>>> with an 'Alias', which will refer the kept relation. It can simplify\n>>> further optimizations on removing redundant parts of the query.\n>> Thank you for sharing this information. I think the idea suggested by\n>> Alexander Korotkov is also helpful for our patch. 
As mentioned above,\n>> the indexes are in RangeTblEntry in the current implementation.\n>> However, I think RangeTblEntry is not the best place to store them. An\n>> 'alias' relids may help solve this and simplify fixing the above bug.\n>> I will try this approach soon.\n> I think that the best way to solve this issue is to move these indexes\n> from RangeTblEntry to RelOptInfo. Since they are related to planning\n> time, they should be in RelOptInfo. The reason why I put these indexes\n> in RangeTblEntry is because some RelOptInfos can be null and we cannot\n> store the indexes. This problem is similar to an issue regarding\n> 'varno 0' Vars. I hope an alias RelOptInfo would help solve this\n> issue. I have attached the current proof of concept I am considering\n> as poc-alias-reloptinfo.txt. To test this patch, please follow the\n> procedure below.\n>\n> 1. Apply all *.patch files,\n> 2. Apply Alexander Korotkov's alias_relids.patch [1], and\n> 3. Apply poc-alias-reloptinfo.txt, which is attached to this email.\n>\n> My patch creates a dummy (or an alias) RelOptInfo to store indexes if\n> the corresponding RelOptInfo is null. 
The following is the core change\n> in my patch.\n>\n> =====\n> @@ -627,9 +627,19 @@ add_eq_source(PlannerInfo *root, EquivalenceClass\n> *ec, RestrictInfo *rinfo)\n> i = -1;\n> while ((i = bms_next_member(rinfo->clause_relids, i)) >= 0)\n> {\n> - RangeTblEntry *rte = root->simple_rte_array[i];\n> + RelOptInfo *rel = root->simple_rel_array[i];\n>\n> - rte->eclass_source_indexes = bms_add_member(rte->eclass_source_indexes,\n> + /*\n> + * If the corresponding RelOptInfo does not exist, we create a 'dummy'\n> + * RelOptInfo for storing EquivalenceClass indexes.\n> + */\n> + if (rel == NULL)\n> + {\n> + rel = root->simple_rel_array[i] = makeNode(RelOptInfo);\n> + rel->eclass_source_indexes = NULL;\n> + rel->eclass_derive_indexes = NULL;\n> + }\n> + rel->eclass_source_indexes = bms_add_member(rel->eclass_source_indexes,\n> source_idx);\n> }\n> =====\n>\n> At this point, I'm not sure if this approach is correct. It seems to\n> pass the regression tests, but we should doubt its correctness. I will\n> continue to experiment with this idea.\n>\n> [1]https://www.postgresql.org/message-id/CAPpHfdseB13zJJPZuBORevRnZ0vcFyUaaJeSGfAysX7S5er%2BEQ%40mail.gmail.com\n>\nYes, I also thought in this direction before and I agree that this is \nthe best way to develop the patch.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company",
It seems to\npass the regression tests, but we should doubt its correctness. I will\ncontinue to experiment with this idea.\n\n[1] https://www.postgresql.org/message-id/CAPpHfdseB13zJJPZuBORevRnZ0vcFyUaaJeSGfAysX7S5er%2BEQ%40mail.gmail.com\n\n\n\n Yes, I also thought in this direction before and I agree that this\n is the best way to develop the patch.\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sat, 16 Dec 2023 18:41:27 +0300",
"msg_from": "Alena Rybakina <lena.ribackina@yandex.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello Alena,\n\nThank you for your quick response, and I'm sorry for my delayed reply.\n\nOn Sun, Dec 17, 2023 at 12:41 AM Alena Rybakina\n<lena.ribackina@yandex.ru> wrote:\n> I thought about this earlier and was worried that the index links of the equivalence classes might not be referenced correctly for outer joins,\n> so I decided to just overwrite them and reset the previous ones.\n\nThank you for pointing this out. I have investigated this problem and\nfound a potential bug place. The code quoted below modifies\nRestrictInfo's clause_relids. Here, our indexes, namely\neclass_source_indexes and eclass_derive_indexes, are based on\nclause_relids, so they should be adjusted after the modification.\nHowever, my patch didn't do that, so it may have missed some\nreferences. The same problem occurs in places other than the quoted\none.\n\n=====\n/*\n * Walker function for replace_varno()\n */\nstatic bool\nreplace_varno_walker(Node *node, ReplaceVarnoContext *ctx)\n{\n ...\n else if (IsA(node, RestrictInfo))\n {\n RestrictInfo *rinfo = (RestrictInfo *) node;\n ...\n\n if (bms_is_member(ctx->from, rinfo->clause_relids))\n {\n replace_varno((Node *) rinfo->clause, ctx->from, ctx->to);\n replace_varno((Node *) rinfo->orclause, ctx->from, ctx->to);\n rinfo->clause_relids = replace_relid(rinfo->clause_relids,\nctx->from, ctx->to);\n ...\n }\n ...\n }\n ...\n}\n=====\n\nI have attached a new version of the patch, v23, to fix this problem.\nv23-0006 adds a helper function called update_clause_relids(). This\nfunction modifies RestrictInfo->clause_relids while adjusting its\nrelated indexes. I have also attached a sanity check patch\n(sanity-check.txt) to this email. This sanity check patch verifies\nthat there are no missing references between RestrictInfos and our\nindexes. I don't intend to commit this patch, but it helps find\npotential bugs. v23 passes this sanity check, but the v21 you\nsubmitted before does not. 
This means that the adjustment by\nupdate_clause_relids() is needed to prevent missing references after\nmodifying clause_relids. I'd appreciate your letting me know if v23\ndoesn't solve your concern.\n\nOne of the things I don't think is good about my approach is that it\nadds some complexity to the code. In my approach, all modifications to\nclause_relids must be done through the update_clause_relids()\nfunction, but enforcing this rule is not so easy. In this sense, my\npatch may need to be simplified more.\n\n> this is due to the fact that I explained before: we zeroed the values indicated by the indexes,\n> then this check is not correct either - since the zeroed value indicated by the index is correct.\n> That's why I removed this check.\n\nThank you for letting me know. I fixed this in v23-0005 to adjust the\nindexes in update_eclasses(). With this change, the assertion check\nwill be correct.\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Wed, 17 Jan 2024 18:33:42 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hi! Sorry my delayed reply too.\n\nOn 17.01.2024 12:33, Yuya Watari wrote:\n> Hello Alena,\n>\n> Thank you for your quick response, and I'm sorry for my delayed reply.\n>\n> On Sun, Dec 17, 2023 at 12:41 AM Alena Rybakina\n> <lena.ribackina@yandex.ru> wrote:\n>> I thought about this earlier and was worried that the index links of the equivalence classes might not be referenced correctly for outer joins,\n>> so I decided to just overwrite them and reset the previous ones.\n> Thank you for pointing this out. I have investigated this problem and\n> found a potential bug place. The code quoted below modifies\n> RestrictInfo's clause_relids. Here, our indexes, namely\n> eclass_source_indexes and eclass_derive_indexes, are based on\n> clause_relids, so they should be adjusted after the modification.\n> However, my patch didn't do that, so it may have missed some\n> references. The same problem occurs in places other than the quoted\n> one.\n>\n> =====\n> /*\n> * Walker function for replace_varno()\n> */\n> static bool\n> replace_varno_walker(Node *node, ReplaceVarnoContext *ctx)\n> {\n> ...\n> else if (IsA(node, RestrictInfo))\n> {\n> RestrictInfo *rinfo = (RestrictInfo *) node;\n> ...\n>\n> if (bms_is_member(ctx->from, rinfo->clause_relids))\n> {\n> replace_varno((Node *) rinfo->clause, ctx->from, ctx->to);\n> replace_varno((Node *) rinfo->orclause, ctx->from, ctx->to);\n> rinfo->clause_relids = replace_relid(rinfo->clause_relids,\n> ctx->from, ctx->to);\n> ...\n> }\n> ...\n> }\n> ...\n> }\n> =====\n>\n> I have attached a new version of the patch, v23, to fix this problem.\n> v23-0006 adds a helper function called update_clause_relids(). This\n> function modifies RestrictInfo->clause_relids while adjusting its\n> related indexes. I have also attached a sanity check patch\n> (sanity-check.txt) to this email. This sanity check patch verifies\n> that there are no missing references between RestrictInfos and our\n> indexes. 
I don't intend to commit this patch, but it helps find\n> potential bugs. v23 passes this sanity check, but the v21 you\n> submitted before does not. This means that the adjustment by\n> update_clause_relids() is needed to prevent missing references after\n> modifying clause_relids. I'd appreciate your letting me know if v23\n> doesn't solve your concern.\n>\n> One of the things I don't think is good about my approach is that it\n> adds some complexity to the code. In my approach, all modifications to\n> clause_relids must be done through the update_clause_relids()\n> function, but enforcing this rule is not so easy. In this sense, my\n> patch may need to be simplified more.\n>\n>> this is due to the fact that I explained before: we zeroed the values indicated by the indexes,\n>> then this check is not correct either - since the zeroed value indicated by the index is correct.\n>> That's why I removed this check.\n> Thank you for letting me know. I fixed this in v23-0005 to adjust the\n> indexes in update_eclasses(). With this change, the assertion check\n> will be correct.\n>\nYes, it is working correctly now with the assertion check. I suppose \nit's better to add this code with an additional comment and a \nrecommendation for other developers\nto use it for checking in case of manipulations with the list of \nequivalences.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Tue, 13 Feb 2024 00:19:21 +0300",
"msg_from": "Alena Rybakina <lena.ribackina@yandex.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello,\n\nOn Tue, Feb 13, 2024 at 6:19 AM Alena Rybakina <lena.ribackina@yandex.ru> wrote:\n>\n> Yes, it is working correctly now with the assertion check. I suppose\n> it's better to add this code with an additional comment and a\n> recommendation for other developers\n> to use it for checking in case of manipulations with the list of\n> equivalences.\n\nThank you for your reply and advice. I have added this assertion so\nthat other developers can use it in the future.\n\nI also merged recent changes and attached a new version, v24. Since\nthis thread is getting long, I will summarize the patches.\n\n1. v24-0001\n\nThis patch is one of the main parts of my optimization. Traditionally,\nEquivalenceClass has both parent and child members. However, this\nleads to high iteration costs when there are many child partitions. In\nv24-0001, EquivalenceClasses no longer have child members. If we need\nto iterate over child EquivalenceMembers, we use the\nEquivalenceChildMemberIterator and access the children through the\niterator. For more details, see [1] (please note that there are a few\ndesign changes from [1]).\n\n2. v24-0002\n\nThis patch was made in the previous work with David. Like\nEquivalenceClass, there are many RestrictInfos in highly partitioned\ncases. This patch introduces an indexing mechanism to speed up\nsearches for RestrictInfos.\n\n3. v24-0003\n\nv24-0002 adds its indexes to RangeTblEntry, but this is not a good\nidea. RelOptInfo is the best place. This problem is a workaround\nbecause some RelOptInfos can be NULL, so we cannot store indexes to\nsuch RelOptInfos. v24-0003 moves the indexes from RangeTblEntry to\nPlannerInfo. This is still a workaround, and I think it should be\nreconsidered.\n\n[1] https://www.postgresql.org/message-id/CAJ2pMkZk-Nr%3DyCKrGfGLu35gK-D179QPyxaqtJMUkO86y1NmSA%40mail.gmail.com\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Wed, 28 Feb 2024 20:18:18 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hi Yuya\n\nOn Wed, Feb 28, 2024 at 4:48 PM Yuya Watari <watari.yuya@gmail.com> wrote:\n\n> Hello,\n>\n> On Tue, Feb 13, 2024 at 6:19 AM Alena Rybakina <lena.ribackina@yandex.ru>\n> wrote:\n> >\n> > Yes, it is working correctly now with the assertion check. I suppose\n> > it's better to add this code with an additional comment and a\n> > recommendation for other developers\n> > to use it for checking in case of manipulations with the list of\n> > equivalences.\n>\n> Thank you for your reply and advice. I have added this assertion so\n> that other developers can use it in the future.\n>\n> I also merged recent changes and attached a new version, v24. Since\n> this thread is getting long, I will summarize the patches.\n>\n>\n>\nI repeated my experiments in [1]. I ran 2, 3, 4, 5-way self-joins on a\npartitioned table with 1000 partitions.\n\nPlanning time measurement\n---------------------------------------\nWithout patch with an assert enabled build and enable_partitionwise_join =\nfalse, those joins took 435.31 ms, 1629.16 ms, 4701.59 ms and 11976.69 ms\nrespectively.\nKeeping other things the same, with the patch, they took 247.33 ms, 1318.57\nms, 6960.31 ms and 28463.24 ms respectively.\nThose with enable_partitionwise_join = true are 488.73 ms, 2102.12 ms,\n6906.02 ms and 21300.77 ms respectively without the patch.\nAnd with the patch, 277.22 ms, 1542.48 ms, 7879.35 ms, and 31826.39 ms.\n\nWithout patch without assert enabled build and enable_partitionwise_join =\nfalse, the joins take 298.43 ms, 1179.15 ms, 3518.84 ms and 9149.76 ms\nrespectively.\nKeeping other things the same, with the patch, the joins take 65.70 ms,\n131.29 ms, 247.67 ms and 477.74 ms respectively.\nThose with enable_partitionwise_join = true are 348.48 ms, 1576.11 ms,\n5417.98 and 17433.65 ms respectively without the patch.\nAnd with the patch 95.15 ms, 333.99 ms, 1084.06 ms, and 3609.42 ms.\n\nMemory usage measurement\n---------------------------------------\nWithout patch, 
with an assert enabled build and enable_partitionwise_join =\nfalse, memory used is 19 MB, 45 MB, 83 MB and 149 MB respectively.\nKeeping other things the same, with the patch, memory used is 23 MB, 66 MB,\n159 MB and 353 MB respectively.\nThat with enable_partitionwise_join = true is 40 MB, 151 MB, 464 MB and\n1663 MB respectively.\nAnd with the patch it is 44 MB, 172 MB, 540 MB and 1868 MB respectively.\n\nWithout patch without assert enabled build and enable_partitionwise_join =\nfalse, memory used is 17 MB, 41 MB, 77 MB, and 140 MB resp.\nKeeping other things the same with the patch memory used is 21 MB, 62 MB,\n152 MB and 341 MB resp.\nThat with enable_partitionwise_join = true is 37 MB, 138 MB, 428 MB and\n1495 MB resp.\nAnd with the patch it is 42 MB, 160 MB, 496 MB and 1705 MB resp.\n\nhere's summary of observations\n1. The patch improves planning time significantly (3X to 20X) and the\nimprovement increases with the number of tables being joined.\n2. In the assert enabled build the patch slows down (in comparison to HEAD)\nplanning with higher number of tables in the join. You may want to\ninvestigate this. But this is still better than my earlier measurements.\n3. The patch increased memory consumption by planner. But the numbers have\nimproved since my last measurement. Still it will be good to investigate\nwhat causes this extra memory consumption.\n4. Generally with the assert enabled build planner consumes more memory\nwith or without patch. From my previous experience this might be due to\nBitmapset objects created within Assert() calls.\n\nDoes v24-0002 have any relation/overlap with my patches to reduce memory\nconsumed by RestrictInfos? Those patches have code to avoid creating\nduplicate RestrictInfos (including commuted RestrictInfos) from ECs. 
[2]\n\n[1]\nhttps://www.postgresql.org/message-id/CAExHW5uVZ3E5RT9cXHaxQ_DEK7tasaMN=D6rPHcao5gcXanY5w@mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/CAExHW5tEvzM%3D%2BLpN%3DyhU%2BP33D%2B%3D7x6fhzwDwNRM971UJunRTkQ%40mail.gmail.com\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Wed, 6 Mar 2024 19:46:38 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello Ashutosh,\n\nThank you for your email and for reviewing the patch. I sincerely\napologize for the delay in responding.\n\nOn Wed, Mar 6, 2024 at 11:16 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> here's summary of observations\n> 1. The patch improves planning time significantly (3X to 20X) and the improvement increases with the number of tables being joined.\n> 2. In the assert enabled build the patch slows down (in comparison to HEAD) planning with higher number of tables in the join. You may want to investigate this. But this is still better than my earlier measurements.\n> 3. The patch increased memory consumption by planner. But the numbers have improved since my last measurement. Still it will be good to investigate what causes this extra memory consumption.\n> 4. Generally with the assert enabled build planner consumes more memory with or without patch. From my previous experience this might be due to Bitmapset objects created within Assert() calls.\n\nThank you for testing the patch and sharing the results. For comment\n#1, these results show the effectiveness of the patch.\n\nFor comment #2, I agree that we should not slow down assert-enabled\nbuilds. The patch adds a lot of assertions to avoid adding bugs, but\nthey might be too excessive. I will reconsider these assertions and\nremove unnecessary ones.\n\nFor comments #3 and #4, while the patch improves time complexity, it\nhas some negative impacts on space complexity. The patch uses a\nBitmapset-based index to speed up searching for EquivalenceMembers and\nRestrictInfos. Reducing this memory consumption is a little hard, but\nthis is a very important problem in committing this patch, so I will\ninvestigate this further.\n\n> Does v24-0002 have any relation/overlap with my patches to reduce memory consumed by RestrictInfos? Those patches have code to avoid creating duplicate RestrictInfos (including commuted RestrictInfos) from ECs. 
[2]\n\nThank you for sharing these patches. My patch may be related to your\npatches. My patch speeds up slow linear searches over\nEquivalenceMembers and RestrictInfos. It uses several approaches, one\nof which is the Bitmapset-based index. Bitmapsets are space\ninefficient, so if there are many EquivalenceMembers and\nRestrictInfos, this index becomes large. This is true for highly\npartitioned cases, where there are a lot of similar (or duplicate)\nelements. Eliminating such duplicate elements may help my patch reduce\nmemory consumption. I will investigate this further.\n\nUnfortunately, I've been busy due to work, so I may not be able to\nrespond soon. I really apologize for this. However, I will look into\nthe patches, including yours, and share further information if found.\n\nAgain, I apologize for my late response and appreciate your kind review.\n\n-- \nBest regards,\nYuya Watari\n\n\n",
"msg_date": "Thu, 2 May 2024 16:56:41 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "On Thu, May 2, 2024 at 3:57 PM Yuya Watari <watari.yuya@gmail.com> wrote:\n>\n\nhi. sorry to bother you, maybe a dumb question.\n\ntrying to understand something under the hood.\ncurrently I only applied\nv24-0001-Speed-up-searches-for-child-EquivalenceMembers.patch.\n\non v24-0001:\n+/*\n+ * add_eq_member - build a new EquivalenceMember and add it to an EC\n+ */\n+static EquivalenceMember *\n+add_eq_member(EquivalenceClass *ec, Expr *expr, Relids relids,\n+ JoinDomain *jdomain, Oid datatype)\n+{\n+ EquivalenceMember *em = make_eq_member(ec, expr, relids, jdomain,\n+ NULL, datatype);\n+\n+ ec->ec_members = lappend(ec->ec_members, em);\n+ return em;\n+}\n+\nthis part seems so weird to me.\nadd_eq_member function was added very very long ago,\nwhy do we create a function with the same function name?\n\nalso I didn't see deletion of original add_eq_member function\n(https://git.postgresql.org/cgit/postgresql.git/tree/src/backend/optimizer/path/equivclass.c#n516)\nin v24-0001.\n\nObviously, now I cannot compile it correctly.\nWhat am I missing?\n\n\n",
"msg_date": "Thu, 2 May 2024 22:35:25 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello,\n\nThank you for reviewing these patches.\n\nOn Thu, May 2, 2024 at 11:35 PM jian he <jian.universality@gmail.com> wrote:\n> on v24-0001:\n> +/*\n> + * add_eq_member - build a new EquivalenceMember and add it to an EC\n> + */\n> +static EquivalenceMember *\n> +add_eq_member(EquivalenceClass *ec, Expr *expr, Relids relids,\n> + JoinDomain *jdomain, Oid datatype)\n> +{\n> + EquivalenceMember *em = make_eq_member(ec, expr, relids, jdomain,\n> + NULL, datatype);\n> +\n> + ec->ec_members = lappend(ec->ec_members, em);\n> + return em;\n> +}\n> +\n> this part seems so weird to me.\n> add_eq_member function was added very very long ago,\n> why do we create a function with the same function name?\n>\n> also I didn't see deletion of original add_eq_member function\n> (https://git.postgresql.org/cgit/postgresql.git/tree/src/backend/optimizer/path/equivclass.c#n516)\n> in v24-0001.\n\nActually, this patch does not recreate the add_eq_member() function\nbut splits it into two functions: add_eq_member() and\nmake_eq_member().\n\nThe reason why planning takes so long time in the current\nimplementation is that EquivalenceClasses have a large number of child\nEquivalenceMembers, and the linear search for them is time-consuming.\nTo solve this problem, the patch makes EquivalenceClasses have only\nparent members. There are few parent members, so we can speed up the\nsearch. In the patch, the child members are introduced when needed.\n\nThe add_eq_member() function originally created EquivalenceMembers and\nadded them to ec_members. In the patch, this function is split into\nthe following two functions.\n\n1. make_eq_member\nCreates a new (parent or child) EquivalenceMember and returns it\nwithout adding it to ec_members.\n2. add_eq_member\nCreates a new parent (not child) EquivalenceMember and adds it to\nec_members. Internally calls make_eq_member.\n\nWhen we create parent members, we simply call add_eq_member(). 
This is\nthe same as the current implementation. When we create child members,\nwe have to do something different. Look at the code below. The\nadd_child_rel_equivalences() function creates child members. The patch\ncreates child EquivalenceMembers by the make_eq_member() function and\nstores them in RelOptInfo (child_rel->eclass_child_members) instead of\ntheir parent EquivalenceClass->ec_members. When we need child\nEquivalenceMembers, we get them via RelOptInfos.\n\n=====\nvoid\nadd_child_rel_equivalences(PlannerInfo *root,\n AppendRelInfo *appinfo,\n RelOptInfo *parent_rel,\n RelOptInfo *child_rel)\n{\n ...\n i = -1;\n while ((i = bms_next_member(parent_rel->eclass_indexes, i)) >= 0)\n {\n ...\n foreach(lc, cur_ec->ec_members)\n {\n ...\n if (bms_is_subset(cur_em->em_relids, top_parent_relids) &&\n !bms_is_empty(cur_em->em_relids))\n {\n /* OK, generate transformed child version */\n ...\n child_em = make_eq_member(cur_ec, child_expr, new_relids,\n cur_em->em_jdomain,\n cur_em, cur_em->em_datatype);\n child_rel->eclass_child_members =\nlappend(child_rel->eclass_child_members,\n child_em);\n ...\n }\n }\n }\n}\n=====\n\nI didn't change the name of add_eq_member, but it might be better to\nchange it to something like add_parent_eq_member(). Alternatively,\ncreating a new function named add_child_eq_member() that adds child\nmembers to RelOptInfo can be a solution. I will consider these changes\nin the next version.\n\n> Obviously, now I cannot compile it correctly.\n> What am I missing?\n\nThank you for pointing this out. This is due to a conflict with a\nrecent commit [1]. This commit introduces a new function named\nadd_setop_child_rel_equivalences(), which is quoted below. This\nfunction creates a new child EquivalenceMember by calling\nadd_eq_member(). We have to adjust this function to make my patch\nwork, but it is not so easy. 
I'm sorry it will take some time to solve\nthis conflict, but I will post a new version when it is fixed.\n\n=====\n/*\n * add_setop_child_rel_equivalences\n * Add equivalence members for each non-resjunk target in 'child_tlist'\n * to the EquivalenceClass in the corresponding setop_pathkey's pk_eclass.\n *\n * 'root' is the PlannerInfo belonging to the top-level set operation.\n * 'child_rel' is the RelOptInfo of the child relation we're adding\n * EquivalenceMembers for.\n * 'child_tlist' is the target list for the setop child relation. The target\n * list expressions are what we add as EquivalenceMembers.\n * 'setop_pathkeys' is a list of PathKeys which must contain an entry for each\n * non-resjunk target in 'child_tlist'.\n */\nvoid\nadd_setop_child_rel_equivalences(PlannerInfo *root, RelOptInfo *child_rel,\n List *child_tlist, List *setop_pathkeys)\n{\n ListCell *lc;\n ListCell *lc2 = list_head(setop_pathkeys);\n\n foreach(lc, child_tlist)\n {\n TargetEntry *tle = lfirst_node(TargetEntry, lc);\n EquivalenceMember *parent_em;\n PathKey *pk;\n\n if (tle->resjunk)\n continue;\n\n if (lc2 == NULL)\n elog(ERROR, \"too few pathkeys for set operation\");\n\n pk = lfirst_node(PathKey, lc2);\n parent_em = linitial(pk->pk_eclass->ec_members);\n\n /*\n * We can safely pass the parent member as the first member in the\n * ec_members list as this is added first in generate_union_paths,\n * likewise, the JoinDomain can be that of the initial member of the\n * Pathkey's EquivalenceClass.\n */\n add_eq_member(pk->pk_eclass,\n tle->expr,\n child_rel->relids,\n parent_em->em_jdomain,\n parent_em,\n exprType((Node *) tle->expr));\n\n lc2 = lnext(setop_pathkeys, lc2);\n }\n\n /*\n * transformSetOperationStmt() ensures that the targetlist never contains\n * any resjunk columns, so all eclasses that exist in 'root' must have\n * received a new member in the loop above. 
Add them to the child_rel's\n * eclass_indexes.\n */\n child_rel->eclass_indexes = bms_add_range(child_rel->eclass_indexes, 0,\n\nlist_length(root->eq_classes) - 1);\n}\n=====\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=66c0185a3d14bbbf51d0fc9d267093ffec735231\n\n-- \nBest regards,\nYuya Watari\n\n\n",
"msg_date": "Thu, 16 May 2024 11:44:45 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
},
{
"msg_contents": "Hello,\n\nOn Thu, May 16, 2024 at 11:44 AM Yuya Watari <watari.yuya@gmail.com> wrote:\n> I'm sorry it will take some time to solve\n> this conflict, but I will post a new version when it is fixed.\n\nThe previous patches no longer apply to the master, so I rebased them.\nI'm sorry for the delay.\n\nI will summarize the patches again.\n\n1. v25-0001\nThis patch is one of the main parts of my optimization. Traditionally,\nEquivalenceClass has both parent and child members. However, this\nleads to high iteration costs when there are many child partitions. In\nv25-0001, EquivalenceClasses no longer have child members. If we need\nto iterate over child EquivalenceMembers, we use the\nEquivalenceChildMemberIterator and access the children through the\niterator. For more details, see [1] (note that there are some design\nchanges from [1]).\n\n2. v25-0002\nThis patch was made in the previous work with David. Like\nEquivalenceClass, there are many RestrictInfos in highly partitioned\ncases. This patch introduces an indexing mechanism to speed up\nsearches for RestrictInfos.\n\n3. v25-0003\nv25-0002 adds its indexes to RangeTblEntry, but this is not a good\nidea. RelOptInfo is the best place. This problem is a workaround\nbecause some RelOptInfos can be NULL, so we cannot store indexes to\nsuch RelOptInfos. v25-0003 moves the indexes from RangeTblEntry to\nPlannerInfo. This is still a workaround, and I think it should be\nreconsidered.\n\n4. v25-0004\nAfter our changes, add_eq_member() no longer creates and adds child\nEquivalenceMembers. This commit renames it to add_parent_eq_member()\nto clarify that it only creates parent members, and that we need to\nuse make_eq_member() to handle child EquivalenceMembers.\n\n5. v25-0005\nThis commit resolves a conflict with commit 66c0185 [2], which adds\nadd_setop_child_rel_equivalences(). As I mentioned in the previous\nemail [3], this function creates child EquivalenceMembers by calling\nadd_eq_member(). 
This commit adjusts our optimization so that it can\nhandle such child members.\n\n[1] https://www.postgresql.org/message-id/CAJ2pMkZk-Nr%3DyCKrGfGLu35gK-D179QPyxaqtJMUkO86y1NmSA%40mail.gmail.com\n[2] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=66c0185a3\n[3] https://www.postgresql.org/message-id/CAJ2pMkZt6r94NUYm-F77FYahjgnMrY4CLHGAD7HxYZxGVwCaow%40mail.gmail.com\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Thu, 29 Aug 2024 14:34:46 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Reducing planning time when tables have many partitions"
}
] |
[
{
"msg_contents": "Hi, hackers!\nI've noticed that CF bot hasn't been running active branches from yesterday:\nhttps://github.com/postgresql-cfbot/postgresql/branches/active\n\nAlso, there is no new results on the current CF page on cputube.\nI don't know if it is a problem or kind of scheduled maintenance though.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nHi, hackers!I've noticed that CF bot hasn't been running active branches from yesterday:https://github.com/postgresql-cfbot/postgresql/branches/activeAlso, there is no new results on the current CF page on cputube.I don't know if it is a problem or kind of scheduled maintenance though.-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com",
"msg_date": "Fri, 18 Mar 2022 19:43:47 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Probable CF bot degradation"
},
{
"msg_contents": "Hi,\n\nOn Fri, Mar 18, 2022 at 07:43:47PM +0400, Pavel Borisov wrote:\n> Hi, hackers!\n> I've noticed that CF bot hasn't been running active branches from yesterday:\n> https://github.com/postgresql-cfbot/postgresql/branches/active\n> \n> Also, there is no new results on the current CF page on cputube.\n> I don't know if it is a problem or kind of scheduled maintenance though.\n\nThere was a github incident yesterday, that was resolved a few hours ago ([1]),\nmaybe the cfbot didn't like that.\n\n[1] https://www.githubstatus.com/incidents/dcnvr6zym66r\n\n\n",
"msg_date": "Sat, 19 Mar 2022 00:06:44 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Probable CF bot degradation"
},
{
"msg_contents": "On Sat, Mar 19, 2022 at 5:07 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Fri, Mar 18, 2022 at 07:43:47PM +0400, Pavel Borisov wrote:\n> > Hi, hackers!\n> > I've noticed that CF bot hasn't been running active branches from yesterday:\n> > https://github.com/postgresql-cfbot/postgresql/branches/active\n> >\n> > Also, there is no new results on the current CF page on cputube.\n> > I don't know if it is a problem or kind of scheduled maintenance though.\n>\n> There was a github incident yesterday, that was resolved a few hours ago ([1]),\n> maybe the cfbot didn't like that.\n\nYeah, for a while it was seeing:\n\nremote: Internal Server Error\nTo github.com:postgresql-cfbot/postgresql.git\n ! [remote rejected] commitfest/37/3489 -> commitfest/37/3489\n(Internal Server Error)\nerror: failed to push some refs to 'github.com:postgresql-cfbot/postgresql.git'\n\nUnfortunately cfbot didn't handle that failure very well and it was\nwaiting for a long timeout before scheduling more jobs. It's going\nagain now, and I'll try to make it more resilient against that type of\nfailure...\n\n\n",
"msg_date": "Sat, 19 Mar 2022 07:51:43 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Probable CF bot degradation"
},
{
"msg_contents": ">\n> remote: Internal Server Error\n> To github.com:postgresql-cfbot/postgresql.git\n> ! [remote rejected] commitfest/37/3489 -> commitfest/37/3489\n> (Internal Server Error)\n> error: failed to push some refs to 'github.com:\n> postgresql-cfbot/postgresql.git'\n>\nI am seeing commitfest/37/3489 in \"triggered\" state for a long time. No\nprogress is seen on this branch, though I started to see successful runs on\nthe other branches now.\nCould you see this particular branch and maybe restart it manually?\n\nUnfortunately cfbot didn't handle that failure very well and it was\n> waiting for a long timeout before scheduling more jobs. It's going\n> again now, and I'll try to make it more resilient against that type of\n> failure...\n>\nThanks a lot!\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nremote: Internal Server Error\nTo github.com:postgresql-cfbot/postgresql.git\n ! [remote rejected] commitfest/37/3489 -> commitfest/37/3489\n(Internal Server Error)\nerror: failed to push some refs to 'github.com:postgresql-cfbot/postgresql.git'I am seeing commitfest/37/3489 in \"triggered\" state for a long time. No progress is seen on this branch, though I started to see successful runs on the other branches now.Could you see this particular branch and maybe restart it manually?\nUnfortunately cfbot didn't handle that failure very well and it was\nwaiting for a long timeout before scheduling more jobs. It's going\nagain now, and I'll try to make it more resilient against that type of\nfailure...Thanks a lot!-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com",
"msg_date": "Sat, 19 Mar 2022 00:41:00 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Probable CF bot degradation"
},
{
"msg_contents": "On Sat, Mar 19, 2022 at 9:41 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>>\n>> remote: Internal Server Error\n>> To github.com:postgresql-cfbot/postgresql.git\n>> ! [remote rejected] commitfest/37/3489 -> commitfest/37/3489\n>> (Internal Server Error)\n>> error: failed to push some refs to 'github.com:postgresql-cfbot/postgresql.git'\n>\n> I am seeing commitfest/37/3489 in \"triggered\" state for a long time. No progress is seen on this branch, though I started to see successful runs on the other branches now.\n> Could you see this particular branch and maybe restart it manually?\n\nI don't seem to have a way to delete that... it looks like when\ngithub told us \"Internal Server Error\", it had partially succeeded and\nthe new branch (partially?) existed, but something was b0rked and it\nconfused Cirrus. 🤷 There is already another build for 3489 that is\nalmost finished now so I don't think that stale TRIGGERED one is\nstopping anything from working and I guess it will eventually go away\nby itself somehow...\n\n\n",
"msg_date": "Sat, 19 Mar 2022 10:05:42 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Probable CF bot degradation"
},
{
"msg_contents": ">\n> confused Cirrus. 🤷 There is already another build for 3489 that is\n> almost finished now so I don't think that stale TRIGGERED one is\n> stopping anything from working and I guess it will eventually go away\n> by itself somehow...\n>\nIndeed, I saw this now. No problem anymore.\nThanks!\n\n--\nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nconfused Cirrus. 🤷 There is already another build for 3489 that is\nalmost finished now so I don't think that stale TRIGGERED one is\nstopping anything from working and I guess it will eventually go away\nby itself somehow...Indeed, I saw this now. No problem anymore. Thanks!--Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com",
"msg_date": "Sat, 19 Mar 2022 01:45:23 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Probable CF bot degradation"
},
{
"msg_contents": "On Fri, 18 Mar 2022 at 19:52, Thomas Munro <thomas.munro@gmail.com> wrote:\n> Unfortunately cfbot didn't handle that failure very well and it was\n> waiting for a long timeout before scheduling more jobs. It's going\n> again now, and I'll try to make it more resilient against that type of\n> failure...\n\nI noticed that two of my patches (37/3543 and 37/3542) both failed due\nto a bad commit on master (076f4d9). The issue was fixed an hour later\nwith b61e6214; but the pipeline for these patches hasn't run since.\nBecause doing a no-op update would only clutter people's inboxes, I\nwas waiting for CFBot to do its regular bitrot check; but that hasn't\nhappened yet after 4 days.\nI understand that this is probably due to the high rate of new patch\nrevisions that get priority in the queue; but that doesn't quite\nfulfill my want for information in this case.\n\nWould you know how long the expected bitrot re-check period for CF\nentries that haven't been updated is, or could the bitrot-checking\nqueue be displayed somewhere to indicate the position of a patch in\nthis queue?\nAdditionally, are there plans to validate commits of the main branch\nbefore using them as a base for CF entries, so that \"bad\" commits on\nmaster won't impact CFbot results as easy?\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Sun, 20 Mar 2022 13:58:01 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Probable CF bot degradation"
},
{
"msg_contents": "On Sun, Mar 20, 2022 at 01:58:01PM +0100, Matthias van de Meent wrote:\n>\n> I noticed that two of my patches (37/3543 and 37/3542) both failed due\n> to a bad commit on master (076f4d9). The issue was fixed an hour later\n> with b61e6214; but the pipeline for these patches hasn't run since.\n> Because doing a no-op update would only clutter people's inboxes, I\n> was waiting for CFBot to do its regular bitrot check; but that hasn't\n> happened yet after 4 days.\n> I understand that this is probably due to the high rate of new patch\n> revisions that get priority in the queue; but that doesn't quite\n> fulfill my want for information in this case.\n\nJust in case, if you only want to know whether the cfbot would be happy with\nyour patches you can run the exact same checks using a personal github repo, as\ndocumented at src/tools/ci/README.\n\nYou could also send the URL of a successful run on the related threads, or as\nan annotation on the cf entries to let possible reviewers know that the patch\nis still in a good shape even if the cfbot is currently still broken.\n\n\n",
"msg_date": "Sun, 20 Mar 2022 21:28:38 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Probable CF bot degradation"
},
{
"msg_contents": "On Mon, Mar 21, 2022 at 1:58 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> Would you know how long the expected bitrot re-check period for CF\n> entries that haven't been updated is, or could the bitrot-checking\n> queue be displayed somewhere to indicate the position of a patch in\n> this queue?\n\nI see that your patches were eventually retested.\n\nIt was set to try to recheck every ~48 hours, though it couldn't quite\nalways achieve that when the total number of eligible submissions is\ntoo large. In this case it had stalled for too long after the github\noutage, which I'm going to try to improve. The reason for the 48+\nhour cycle is the Windows tests now take ~25 minutes (since we started\nactually running all the tests on that platform), and we could only\nhave two Windows tasts running at a time in practice, because the\nlimit for Windows was 8 CPUs, and we use 4 for each task, which means\nwe could only test ~115 branches per day, or actually a shade fewer\nbecause it's pretty dumb and only wakes up once a minute to decide\nwhat to do, and we currently have 242 submissions (though some don't\napply, those are free, so the number varies over time...). There are\nlimits on the Unixes too but they are more generous, and the Unix\ntests only take 4-10 minutes, so we can ignore that for now, it's all\ndown to Windows.\n\nI had been meaning to stump up the USD$10/month it costs to double the\nCPU limits from the basic free Cirrus account, and I've just now done\nthat and told cfbot it's allowed to test 4 branches at once and to try\nto test every branch every 24 hours. Let's see how that goes.\n\nHere's hoping we can cut down the time it takes to run the tests on\nWindows... there's some really dumb stuff happening there. 
Top items\nI'm aware of: (1) general lack of test concurrency, (2) exec'ing new\nbackends is glacially slow on that OS but we do it for every SQL\nstatement in the TAP tests and every regression test script (I have\nsome patches for this to share after the code freeze).\n\n> Additionally, are there plans to validate commits of the main branch\n> before using them as a base for CF entries, so that \"bad\" commits on\n> master won't impact CFbot results as easy?\n\nHow do you see this working?\n\nI have wondered about some kind of way to click a button to say \"do\nthis one again now\", but I guess that sort of user interaction should\nideally happen after merging this thing into the Commitfest app,\nbecause it already has auth, and interactive Python/Django web stuff.\n\n\n",
"msg_date": "Mon, 21 Mar 2022 12:23:02 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Probable CF bot degradation"
},
{
"msg_contents": "On Mon, Mar 21, 2022 at 12:23 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Mon, Mar 21, 2022 at 1:58 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > Would you know how long the expected bitrot re-check period for CF\n> > entries that haven't been updated is, or could the bitrot-checking\n> > queue be displayed somewhere to indicate the position of a patch in\n> > this queue?\n\nAlso, as for the show-me-the-queue page, yeah that's a good idea and\nquite feasible. I'll look into that in a bit.\n\n> > Additionally, are there plans to validate commits of the main branch\n> > before using them as a base for CF entries, so that \"bad\" commits on\n> > master won't impact CFbot results as easy?\n>\n> How do you see this working?\n\n[Now with more coffee on board] Oh, right, I see, you're probably\nthinking that we could look at\nhttps://github.com/postgres/postgres/commits/master and take the most\nrecent passing commit as a base. Hmm, interesting idea.\n\n\n",
"msg_date": "Mon, 21 Mar 2022 12:46:32 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Probable CF bot degradation"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-21 12:23:02 +1300, Thomas Munro wrote:\n> It was set to try to recheck every ~48 hours, though it couldn't quite\n> always achieve that when the total number of eligible submissions is\n> too large. In this case it had stalled for too long after the github\n> outage, which I'm going to try to improve. The reason for the 48+\n> hour cycle is the Windows tests now take ~25 minutes (since we started\n> actually running all the tests on that platform)\n\nI see 26-28 minutes regularly :(. And that doesn't even include the \"boot\ntime\" of the test of around 3-4min, which is quite a bit higher for windows\nthan for the other OSs.\n\n\n> and we could only\n> have two Windows tasts running at a time in practice, because the\n> limit for Windows was 8 CPUs, and we use 4 for each task, which means\n> we could only test ~115 branches per day, or actually a shade fewer\n> because it's pretty dumb and only wakes up once a minute to decide\n> what to do, and we currently have 242 submissions (though some don't\n> apply, those are free, so the number varies over time...). There are\n> limits on the Unixes too but they are more generous, and the Unix\n> tests only take 4-10 minutes, so we can ignore that for now, it's all\n> down to Windows.\n\nI wonder if it's worth using the number of concurrently running windows tasks\nas the limit, rather than the number of commits being tested\nconcurrently. It's not rare for windows to fail more quickly than other\nOSs. But probably the 4 concurrent tests are good enough for now...\n\nI'd love to merge the patch adding mingw CI testing, which'd increase the\npressure substantially :/\n\n\n> I had been meaning to stump up the USD$10/month it costs to double the\n> CPU limits from the basic free Cirrus account, and I've just now done\n> that and told cfbot it's allowed to test 4 branches at once and to try\n> to test every branch every 24 hours. 
Let's see how that goes.\n\nYay.\n\n\n> Here's hoping we can cut down the time it takes to run the tests on\n> Windows... there's some really dumb stuff happening there. Top items\n> I'm aware of: (1) general lack of test concurrency, (2) exec'ing new\n> backends is glacially slow on that OS but we do it for every SQL\n> statement in the TAP tests and every regression test script (I have\n> some patches for this to share after the code freeze).\n\n3) build is quite slow and has no caching\n\n\nWith meson the difference of 1, 3 is quite visible. Look at\nhttps://cirrus-ci.com/build/5265480968568832\n\ncurrent buildsystem: 28:07 min\nmeson w/ msbuild: 22:21 min\nmeson w/ ninja: 19:24\n\nmeson runs quite a few tests that the \"current buildsystem\" doesn't, so the\nwin is actually bigger than the time difference indicates...\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 20 Mar 2022 17:17:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Probable CF bot degradation"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-21 12:23:02 +1300, Thomas Munro wrote:\n> or actually a shade fewer because it's pretty dumb and only wakes up once a\n> minute to decide what to do\n\nMight be worth using https://cirrus-ci.org/api/#webhooks to trigger a run of\nthe scheduler. Probably still want to have the timeout based \"scheduling\niterations\", but perhaps at a lower frequency?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 20 Mar 2022 17:36:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Probable CF bot degradation"
},
{
"msg_contents": "On Sun, Mar 20, 2022 at 4:23 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Mon, Mar 21, 2022 at 1:58 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > Would you know how long the expected bitrot re-check period for CF\n> > entries that haven't been updated is, or could the bitrot-checking\n> > queue be displayed somewhere to indicate the position of a patch in\n> > this queue?\n>\n> I see that your patches were eventually retested.\n\nWhat about just seeing if the patch still applies cleanly against HEAD\nmuch more frequently? Obviously that would be way cheaper than running\nall of the tests again.\n\nPerhaps Cirrus provides a way of taking advantage of that? (Or maybe\nthat happens already, in which case please enlighten me.)\n\nBTW, I think that the usability of the CFBot website would be improved\nif there was a better visual indicator of what each \"green tick inside\na circle\" link actually indicates -- what are we testing for each\ngreen tick/red X shown?\n\nI already see tooltips which show a descriptive string (for example a\ntooltip that says \"FreeBSD - 13: COMPLETED\" which comes from\n<title></title> tags), which is something. But seeing these tooltips\nrequires several seconds of mouseover on my browser (Chrome). I'd be\nquite happy if I could see similar tooltips immediately on mouseover\n(which isn't actually possible with standard generic tooltips IIUC),\nor something equivalent. Any kind of visual feedback on the nature of\nthe thing tested by a particular CI run that the user can drill down\nto (you know, a Debian logo next to the tick, that kind of thing).\n\n> I had been meaning to stump up the USD$10/month it costs to double the\n> CPU limits from the basic free Cirrus account, and I've just now done\n> that and told cfbot it's allowed to test 4 branches at once and to try\n> to test every branch every 24 hours. Let's see how that goes.\n\nExtravagance!\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 20 Mar 2022 17:41:02 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Probable CF bot degradation"
},
{
"msg_contents": "On Mon, Mar 21, 2022 at 1:41 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> BTW, I think that the usability of the CFBot website would be improved\n> if there was a better visual indicator of what each \"green tick inside\n> a circle\" link actually indicates -- what are we testing for each\n> green tick/red X shown?\n>\n> I already see tooltips which show a descriptive string (for example a\n> tooltip that says \"FreeBSD - 13: COMPLETED\" which comes from\n> <title></title> tags), which is something. But seeing these tooltips\n> requires several seconds of mouseover on my browser (Chrome). I'd be\n> quite happy if I could see similar tooltips immediately on mouseover\n> (which isn't actually possible with standard generic tooltips IIUC),\n> or something equivalent. Any kind of visual feedback on the nature of\n> the thing tested by a particular CI run that the user can drill down\n> to (you know, a Debian logo next to the tick, that kind of thing).\n\nNice idea, if someone with graphics skills is interested in looking into it...\n\nThose tooltips come from the \"name\" elements of the .cirrus.yml file\nwhere tasks are defined, with Cirrus's task status appended. If we\nhad a set of monochrome green and red icons with a Linux penguin,\nFreeBSD daemon, Windows logo and Apple logo of matching dimensions, a\nconfig file could map task names to icons, and fall back to\nticks/crosses for anything unknown/new, including the\n\"CompilerWarnings\" one that doesn't have an obvious icon. Another\nthing to think about is the 'solid' and 'hollow' variants, the former\nindicating a recent change. So we'd need 4 variants of each logo.\nAlso I believe there is a proposal to add NetBSD and OpenBSD in the\nworks.\n\n\n",
"msg_date": "Mon, 21 Mar 2022 14:44:55 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Probable CF bot degradation"
},
{
"msg_contents": "On Sun, Mar 20, 2022 at 6:45 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Nice idea, if someone with graphics skills is interested in looking into it...\n\nThe logo thing wasn't really the point for me. I'd just like to have\nthe information be more visible, sooner.\n\nI was hoping that there might be a very simple method of making the\nsame information more visible, that you could implement in only a few\nminutes. Perhaps that was optimistic.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 20 Mar 2022 18:48:34 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Probable CF bot degradation"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-21 14:44:55 +1300, Thomas Munro wrote:\n> Those tooltips come from the \"name\" elements of the .cirrus.yml file\n> where tasks are defined, with Cirrus's task status appended. If we\n> had a set of monochrome green and red icons with a Linux penguin,\n> FreeBSD daemon, Windows logo and Apple logo of matching dimensions, a\n> config file could map task names to icons, and fall back to\n> ticks/crosses for anything unknown/new, including the\n> \"CompilerWarnings\" one that doesn't have an obvious icon. Another\n> thing to think about is the 'solid' and 'hollow' variants, the former\n> indicating a recent change. So we'd need 4 variants of each logo.\n> Also I believe there is a proposal to add NetBSD and OpenBSD in the\n> works.\n\nMight even be sufficient to add just the first letter of the task inside the\ncircle, instead of the \"check\" and x. Right now the letters are unique.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 20 Mar 2022 19:11:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Probable CF bot degradation"
},
{
"msg_contents": "On Mon, Mar 21, 2022 at 3:11 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-03-21 14:44:55 +1300, Thomas Munro wrote:\n> > Those tooltips come from the \"name\" elements of the .cirrus.yml file\n> > where tasks are defined, with Cirrus's task status appended. If we\n> > had a set of monochrome green and red icons with a Linux penguin,\n> > FreeBSD daemon, Windows logo and Apple logo of matching dimensions, a\n> > config file could map task names to icons, and fall back to\n> > ticks/crosses for anything unknown/new, including the\n> > \"CompilerWarnings\" one that doesn't have an obvious icon. Another\n> > thing to think about is the 'solid' and 'hollow' variants, the former\n> > indicating a recent change. So we'd need 4 variants of each logo.\n> > Also I believe there is a proposal to add NetBSD and OpenBSD in the\n> > works.\n>\n> Might even be sufficient to add just the first letter of the task inside the\n> circle, instead of the \"check\" and x. Right now the letters are unique.\n\nNice idea, because it retains the information density. If someone\nwith web skills would like to pull down the cfbot page and hack up one\nof the rows to show an example of a pass, fail, recent-pass,\nrecent-fail as a circle with a letter in it, and also an \"in progress\"\nsymbol that occupies the same amoutn of space, I'd be keen to try\nthat. (The current \"in progress\" blue circle was originally supposed\nto be a pie filling up slowly according to a prediction of finished\ntime based on past performance, but I never got to that... it's stuck\nat 1/4 :-))\n\n\n",
"msg_date": "Mon, 21 Mar 2022 15:41:23 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Probable CF bot degradation"
},
{
"msg_contents": "On Mon, Mar 21, 2022 at 12:46 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Mon, Mar 21, 2022 at 12:23 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Mon, Mar 21, 2022 at 1:58 AM Matthias van de Meent\n> > <boekewurm+postgres@gmail.com> wrote:\n> > > Additionally, are there plans to validate commits of the main branch\n> > > before using them as a base for CF entries, so that \"bad\" commits on\n> > > master won't impact CFbot results as easy?\n> >\n> > How do you see this working?\n>\n> [Now with more coffee on board] Oh, right, I see, you're probably\n> thinking that we could look at\n> https://github.com/postgres/postgres/commits/master and take the most\n> recent passing commit as a base. Hmm, interesting idea.\n\nA nice case in point today: everything is breaking on Windows due to a\ncommit in master, which could easily be avoided by looking back a\ncertain distance for a passing commit from postgres/postgres to use as\na base. Let's me see if this is easy to fix...\n\nhttps://www.postgresql.org/message-id/20220322231311.GK28503%40telsasoft.com\n\n\n",
"msg_date": "Wed, 23 Mar 2022 12:44:09 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Probable CF bot degradation"
},
{
"msg_contents": "On Wed, Mar 23, 2022 at 12:44:09PM +1300, Thomas Munro wrote:\n> On Mon, Mar 21, 2022 at 12:46 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Mon, Mar 21, 2022 at 12:23 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > On Mon, Mar 21, 2022 at 1:58 AM Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:\n> > > > Additionally, are there plans to validate commits of the main branch\n> > > > before using them as a base for CF entries, so that \"bad\" commits on\n> > > > master won't impact CFbot results as easy?\n> > >\n> > > How do you see this working?\n> >\n> > [Now with more coffee on board] Oh, right, I see, you're probably\n> > thinking that we could look at\n> > https://github.com/postgres/postgres/commits/master and take the most\n> > recent passing commit as a base. Hmm, interesting idea.\n> \n> A nice case in point today: everything is breaking on Windows due to a\n> commit in master, which could easily be avoided by looking back a\n> certain distance for a passing commit from postgres/postgres to use as\n> a base. Let's me see if this is easy to fix...\n> \n> https://www.postgresql.org/message-id/20220322231311.GK28503%40telsasoft.com\n\nI suggest not to make it too sophisticated. If something is broken, the CI\nshould show that rather than presenting a misleading conclusion.\n\nMaybe you could keep track of how many consecutive, *new* failures there've\nbeen (which were passing on the previous run for that task, for that patch) and\ndelay if it's more than (say) 5. For bonus points, queue a rerun of all the\nfailed tasks once something passes.\n\nIf you create a page to show the queue, maybe it should show the history of\nresults, too. And maybe there should be a history of results for each patch.\n\nIf you implement interactive buttons, maybe it could allow re-queueing some\nrecent failures (add to end of queue).\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 22 Mar 2022 19:43:39 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Probable CF bot degradation"
}
] |
[
{
"msg_contents": "Hi,\n\nI found the following lines in pg_backup_tar.c.\n\n if (len != th->fileLen)\n {\n char buf1[32],\n buf2[32];\n\n snprintf(buf1, sizeof(buf1), INT64_FORMAT, (int64) len);\n snprintf(buf2, sizeof(buf2), INT64_FORMAT, (int64) th->fileLen);\n fatal(\"actual file length (%s) does not match expected (%s)\",\n buf1, buf2);\n }\n\nwe can rely on %lld/%llu and we decided to use them in translatable strings.\nSee 6a1cd8b.\n\nHowever, I am not sure how to update the *.po files under the pg_dump/po\ndirectory. Any suggestions?\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Fri, 18 Mar 2022 23:49:14 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Remove INT64_FORMAT in translatable strings"
},
{
"msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> we can rely on %lld/%llu and we decided to use them in translatable strings.\n\nSeems like good cleanup, so pushed. I think though that project style\nis to use \"long long\" or \"unsigned long long\", without the unnecessary\n\"int\" --- it certainly makes little sense to do it both ways in the\nsame patch.\n\n> However, I am not sure how to update the *.po files under the pg_dump/po\n> directory. Any suggestions?\n\nThe translation team manages those files. We don't normally touch them\nduring code development.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 18 Mar 2022 13:12:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove INT64_FORMAT in translatable strings"
},
{
"msg_contents": "On Fri, Mar 18, 2022 at 01:12:40PM -0400, Tom Lane wrote:\n> Japin Li <japinli@hotmail.com> writes:\n> > we can rely on %lld/%llu and we decided to use them in translatable strings.\n> \n> Seems like good cleanup, so pushed. I think though that project style\n> is to use \"long long\" or \"unsigned long long\", without the unnecessary\n> \"int\" --- it certainly makes little sense to do it both ways in the\n> same patch.\n\nThis seemed familiar - it's about the same thing I sent here, while fixing\nftello().\n\nhttps://www.postgresql.org/message-id/flat/20210104025321.GA9712@telsasoft.com\n0002-Fix-broken-error-message-on-unseekable-input.patch\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 18 Mar 2022 12:32:25 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove INT64_FORMAT in translatable strings"
},
{
"msg_contents": "\nOn Sat, 19 Mar 2022 at 01:12, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Japin Li <japinli@hotmail.com> writes:\n>> we can rely on %lld/%llu and we decided to use them in translatable strings.\n>\n> Seems like good cleanup, so pushed. I think though that project style\n> is to use \"long long\" or \"unsigned long long\", without the unnecessary\n> \"int\" --- it certainly makes little sense to do it both ways in the\n> same patch.\n>\n>> However, I am not sure how to update the *.po files under the pg_dump/po\n>> directory. Any suggestions?\n>\n> The translation team manages those files. We don't normally touch them\n> during code development.\n>\n\nThank you for pushing the patch.\n\n\n--\nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Sat, 19 Mar 2022 07:40:43 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove INT64_FORMAT in translatable strings"
}
] |
[
{
"msg_contents": "Hi, hackers!\n\nI've noticed that in SLRU error reporting both signed and unsigned values\nare output as %u. I think it is worth correcting this with the very simple\npatch attached.\n\nThanks!\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Fri, 18 Mar 2022 22:52:02 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix unsigned output for signed values in SLRU error reporting"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-18 22:52:02 +0400, Pavel Borisov wrote:\n> I've noticed that in SRLU error reporting both signed and unsigned values\n> are output as %u. I think it is worth correcting this with the very simple\n> patch attached.\n\nAfaics offset etc can't be negative, so I don't think this really improves\nmatters. I think there's quite a few other places where we use %u to print\nintegers that we know aren't negative.\n\nIf anything I think we should change the signed integers to unsigned ones. It\nmight be worth doing that as part of\nhttps://www.postgresql.org/message-id/CAJ7c6TPDOYBYrnCAeyndkBktO0WG2xSdYduTF0nxq%2BvfkmTF5Q%40mail.gmail.com\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 18 Mar 2022 16:14:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Fix unsigned output for signed values in SLRU error reporting"
},
{
"msg_contents": ">\n> Afaics offset etc can't be negative, so I don't think this really improves\n> matters. I think there's quite a few other places where we use %u to print\n> integers that we know aren't negative.\n>\n> If anything I think we should change the signed integers to unsigned ones.\n> It\n> might be worth doing that as part of\n>\n> https://www.postgresql.org/message-id/CAJ7c6TPDOYBYrnCAeyndkBktO0WG2xSdYduTF0nxq%2BvfkmTF5Q%40mail.gmail.com\n\n\nThat was one of my intentions in the mentioned patch, but I couldn't\nconfirm that the page number (and offset) in SLRU was used signed not by\npurpose. Thank you for confirming this. I will try to replace int to\nunsigned where it is relevant in SLRU as part of the mentioned thread.\nThough it could be a big change worth a separate patch maybe.\n\nAgain thanks!\nPavel",
"msg_date": "Mon, 21 Mar 2022 16:11:31 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix unsigned output for signed values in SLRU error reporting"
},
{
"msg_contents": "Mon, 21 Mar 2022 at 16:11, Pavel Borisov <pashkin.elfe@gmail.com>:\n\n> Afaics offset etc can't be negative, so I don't think this really improves\n>> matters. I think there's quite a few other places where we use %u to print\n>> integers that we know aren't negative.\n>>\n>> If anything I think we should change the signed integers to unsigned\n>> ones. It\n>> might be worth doing that as part of\n>>\n>> https://www.postgresql.org/message-id/CAJ7c6TPDOYBYrnCAeyndkBktO0WG2xSdYduTF0nxq%2BvfkmTF5Q%40mail.gmail.com\n>\n>\n> That was one of my intentions in the mentioned patch, but I couldn't\n> confirm that the page number (and offset) in SLRU was used signed not by\n> purpose. Thank you for confirming this. I will try to replace int to\n> unsigned where it is relevant in SLRU as part of the mentioned thread.\n> Though it could be a big change worth a separate patch maybe.\n>\n> In the patchset where we're working on making SLRU 64bit [1] we have come\nto agreement that:\n- signed to unsigned change in SLRU *page* numbering is not needed as\nmaximum SLRU page number is guaranteed to be much more than 2 times less\nthan maximum 64-bit XID.\n- change of *offset* from int format to the wider one is not needed at all\nas multiple of SLRU_PAGES_PER_SEGMENT\nand CLOG_XACTS_PER_PAGE (and similar for commit_ts and mxact) is far less\nthan 2^32 [2]\n\nSo the change to printing offset as signed, from this thread, is not going\nto be included into SLRU 64-bit thread [1].\nIt's true that offset can not be negative, but printing int value as %u\nisn't nice even if it is not supposed to be negative. So I'd propose the\nsmall patch in this thread be applied separately if none has anything\nagainst it.\n\n[1]\nhttps://www.postgresql.org/message-id/CALT9ZEEf1uywYN%2BVaRuSwNMGE5%3DeFOy7ZTwtP2g%2BW9oJDszqQw%40mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/20220325.120718.758699124869814269.horikyota.ntt%40gmail.com\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Fri, 25 Mar 2022 14:49:08 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix unsigned output for signed values in SLRU error reporting"
},
{
"msg_contents": "On 25.03.22 11:49, Pavel Borisov wrote:\n> It's true that offset can not be negative, but printing int value as %u \n> isn't nice even if it is not supposed to be negative. So I'd propose the \n> small patch in this thread be applied separately if none has anything \n> against it.\n\ncommitted\n\n\n",
"msg_date": "Wed, 6 Apr 2022 09:31:30 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix unsigned output for signed values in SLRU error reporting"
}
] |
[
{
"msg_contents": "Hi Pgsql-Hackers\n\nWhich hook should I use when overriding the COPY command in an extension?\n\nI am working on adding new functionalities to COPY (compression, index\nmanagement, various other transports in addition to stdin and file, other\ndata formats, etc...) and while the aim is to contribute this to v15 I\nwould also like to have much of it in earlier versions.\n\nAs the current policy is to back-port only bugfixes and not \"features\",\nthe only way I can see to get it in earlier versions is to provide an\nextension which intercepts the COPY command and replaces it with my own\nimplementation.\n\nSo my question is, which of the hooks would be easiest to use for this?\n\nAt the syntax level it would still look the same COPY ... FROM/TO ... WITH\n( options) and the extensibility will be in file names (using a URI scheme\nmapped to transports) and in the options part.\nSo I hope to fully reuse the parsing part and get in before the existence\nchecks.\n\nDoes anyone have experience in this and can point to samples?\n\nCheers\nHannu",
"msg_date": "Sat, 19 Mar 2022 12:28:46 +0100",
"msg_from": "Hannu Krosing <hannuk@google.com>",
"msg_from_op": true,
"msg_subject": "Which hook to use when overriding utility commands (COPY ...)"
},
{
"msg_contents": "On Sat, Mar 19, 2022 at 12:28:46PM +0100, Hannu Krosing wrote:\n> Which hook should I use when overriding the COPY command in an extension?\n\nCopyStmt goes through ProcessUtility(), so you can use the hook called\nProcessUtility_hook to override what you want.\n--\nMichael",
"msg_date": "Sat, 19 Mar 2022 21:44:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Which hook to use when overriding utility commands (COPY ...)"
}
] |
[
{
"msg_contents": "On Wed, Mar 9, 2022 at 4:46 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-03-03 19:31:32 -0800, Peter Geoghegan wrote:\n> > Attached is a new revision of my fix. This is more or less a\n> > combination of my v4 fix from November 12 [1] and Andres'\n> > already-committed fix (commit 18b87b20), rebased on top of HEAD. This\n> > patch was originally a bugfix, but now it's technically just\n> > refactoring/hardening of the logic in pruneheap.c. It hasn't changed\n> > all that much, though.\n>\n> Perhaps worth discussing outside of -bugs?\n\nOkay. Replying on a new -hackers thread dedicated to the hardening patch.\n\nAttached is v6, which goes a bit further than v5 in using local state\nthat we build up-front to describe the state of the page being pruned\n(rather than rereading the page itself). I'll address your v5 review\ncomments now.\n\n> > We now do \"3 passes\" over the page. The first is the aforementioned\n> > \"precalculate HTSV\" pass, the second is for determining the extent of\n> > HOT chains, and the third is for any remaining disconnected/orphaned\n> > HOT chains. I suppose that that's okay, since the amount of work has\n> > hardly increased in proportion to this \"extra pass over the page\". Not\n> > 100% sure about everything myself right now, though. I guess that \"3\n> > passes\" might be considered excessive.\n>\n> We should be able to optimize away the third pass in the majority of cases, by\n> keeping track of the number of tuples visited or such. That seems like it\n> might be worth doing?\n\nIt independently occured to me to go further with using the state from\nthe first pass to save work in the second and third pass. We still\nhave what looks like a third pass over the page in v6, but it doesn't\nreally work that way. That is, we only need to rely on local state\nthat describes the page, which is conveniently available already. 
We\ndon't need to look at the physical page in the so-called third pass.\n\n> > + /*\n> > + * Start from the root item. Mark it as valid up front, since root items\n> > + * are always processed here (not as disconnected tuples in third pass\n> > + * over page).\n> > + */\n> > + prstate->visited[rootoffnum] = true;\n> > offnum = rootoffnum;\n> > + nchain = 0;\n>\n> I wonder if it'd be clearer if we dealt with redirects outside of the\n> loop.\n\nI found it more natural to deal with everything outside of the entire\nfunction (the heap_prune_chain function, which I've renamed to\nheap_prune_from_root in v6). This is kind of what you suggested\nanyway, since the function itself is stripped down to just the loop in\nv6.\n\n> Would make it easier to assert that the target of a redirect may not be\n> unused / !heap-only?\n\nWith the approach is v6 we always eliminate LP_DEAD and LP_UNUSED\nitems up-front (by considering them \"visited\" in the first pass over\nthe page). We're also able to eliminate not-heap-only tuples quickly,\nsince whether or not each item is a heap-only tuple is also recorded\nin the first pass (the same pass that gets the HTSV status of each\nitem). It seemed natural to give more responsibility to the caller,\nheap_page_prune, which is what this really is. This continues the\ntrend you started in bugfix commit 18b87b20, which added the initial\nloop for the HTSV calls.\n\nIn v6 it becomes heap_page_prune's responsibility to only call\nheap_prune_from_root with an item that is either an LP_REDIRECT item,\nor a plain heap tuple (not a heap-only tuple). The new bookkeeping\nstate gathered in our first pass over the page makes this easy.\n\nI'm not entirely sure that it makes sense to go this far. We do need\nan extra array of booleans to make this work (the new \"heaponly[]\"\narray in PruneState), which will need to be stored on the stack. 
What\ndo you think of that aspect?\n\n> > + /*\n> > + * Remember the offnum of the last DEAD tuple in this HOT\n> > + * chain. To keep things simple, don't treat heap-only tuples\n> > + * from a HOT chain as DEAD unless they're only preceded by\n> > + * other DEAD tuples (in addition to actually being DEAD).\n>\n> s/actually/themselves/?\n\nFixed.\n\n> > + * Remaining tuples that appear DEAD (but don't get treated as\n> > + * such by us) are from concurrently aborting updaters.\n>\n> I don't understand this bit. A DEAD tuple following a RECENTLY_DEAD one won't\n> be removed now, and doesn't need to involve a concurrent abort? Are you\n> thinking of \"remaining\" as the tuples not referenced in the previous sentences?\n\nThis was just brainfade on my part. I think that I messed it up during rebasing.\n\n> > + * VACUUM will ask us to prune the heap page again when it\n> > + * sees that there is a DEAD tuple left behind, but that would\n> > + * be necessary regardless of our approach here.\n> > + */\n>\n> Only as long as we do another set of HTSV calls...\n\nRemoved.\n\n> > case HEAPTUPLE_LIVE:\n> > case HEAPTUPLE_INSERT_IN_PROGRESS:\n> > + pastlatestdead = true; /* no further DEAD tuples in CHAIN */\n>\n> If we don't do anything to the following tuples, why don't we just abort here?\n> I assume it is because we'd then treat them as disconnected? That should\n> probably be called out...\n\nGood point. Fixed.\n\n> > - }\n> > - else if (nchain < 2 && ItemIdIsRedirected(rootlp))\n> > - {\n> > - /*\n> > - * We found a redirect item that doesn't point to a valid follow-on\n> > - * item. This can happen if the loop in heap_page_prune caused us to\n> > - * visit the dead successor of a redirect item before visiting the\n> > - * redirect item. We can clean up by setting the redirect item to\n> > - * DEAD state.\n> > - */\n> > - heap_prune_record_dead(prstate, rootoffnum);\n> > +\n> > + return ndeleted;\n> > }\n>\n> Could there be such tuples from before a pg_upgrade? 
Do we need to deal with\n> them somehow?\n\nBTW, this code (which I now propose to more or less remove) predates\ncommit 0a469c87, which removed old style VACUUM full back in 2010.\nPrior to that commit there was an extra block that followed this one\n-- \"else if (redirect_move && ItemIdIsRedirected(rootlp) { ... }\".\nOnce you consider that old style VACUUM FULL once had to rewrite\nLP_REDIRECT items then this code structure makes a bit more sense.\n\nAnother possibly-relevant piece of historic context about this hunk of\ncode: it was originally part of a critical section -- structuring that\nadded heap_page_prune_execute() came later, in commit 6f10eb2111.\n\nAnyway, to answer your question: I don't believe that there will be\nany such pre-pg_upgrade tuples. But let's assume that I'm wrong about\nthat; what can be done about it now?\n\nWe're talking about a root LP_REDIRECT item that points to some item\nthat just doesn't appear sane for it to point to (e.g. maybe it points\npast the end of the line pointer array, or to a not-heap-only tuple).\nIf we are ever able to even *detect* such corruption using tools like\namcheck, then that's just because we got lucky with the details. The\nitem is irredeemably corrupt, no matter what we do.\n\nTo be clear, I'm not saying that you're wrong -- maybe I do need more\nhandling for this case. In any case I haven't quite figured out where\nto draw the line with visibly corrupt HOT chains, even in v6.\n\nAll I'm saying is that the old code doesn't seem to tell us anything\nabout what I ought to replace it with now. It tells us nothing about\nwhat should be possible with LP_REDIRECT items in general, with\npg_upgrade. The structure doesn't even suggest that somebody once\nbelieved that it was acceptable for an existing LP_REDIRECT item to\nredirect to something other than a heap-only tuple. And even if it did\nsuggest that, it would still be a wildly unreasonable thing for\nanybody to have ever believed IMV. 
As I said, the item has to be\nconsidered corrupt, no matter what might have been possible in the\npast.\n\n--\nPeter Geoghegan",
"msg_date": "Sat, 19 Mar 2022 20:48:15 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Hardening heap pruning code (was: BUG #17255: Server crashes in\n index_delete_sort_cmp() due to race condition with vacuum)"
},
{
"msg_contents": "On Sun, 20 Mar 2022 at 04:48, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> Attached is v6, which goes a bit further than v5 in using local state\n> that we build up-front to describe the state of the page being pruned\n> (rather than rereading the page itself).\n\nI didn't test the code; so these comments are my general feel of this\npatch based on visual analysis only.\n\n> > > We now do \"3 passes\" over the page. The first is the aforementioned\n> > > \"precalculate HTSV\" pass, the second is for determining the extent of\n> > > HOT chains, and the third is for any remaining disconnected/orphaned\n> > > HOT chains. I suppose that that's okay, since the amount of work has\n> > > hardly increased in proportion to this \"extra pass over the page\". Not\n> > > 100% sure about everything myself right now, though. I guess that \"3\n> > > passes\" might be considered excessive.\n\nThe passes don't all have a very clear explanation what they do; or\nwhat they leave for the next one to process. It can be distilled from\nthe code comments at the respective phases and various functions, but\nonly after careful review of all code comments; there doesn't seem to\nbe a clear overview.\n\n\n> @@ -295,30 +305,25 @@ heap_page_prune(Relation relation, Buffer buffer,\n> [...]\n> + * prstate for later passes. Scan the page backwards (in reverse item\n> + * offset number order).\n> - [...]\n> + * This approach is good for performance. 
Most commonly tuples within a\n> + * page are stored at decreasing offsets (while the items are stored at\n> + * increasing offsets).\n\nA reference to why this is commonly the case (i.e.\nPageRepairFragmentation, compactify_tuples, natural insertion order)\nwould probably help make the case; as this order being common is not\nspecifically obvious at first sight.\n\n> @@ -350,28 +370,41 @@ heap_page_prune(Relation relation, Buffer buffer,\n> + /* Now scan the page a second time to process each root item */\n\nThis second pass also processes the HOT chain of each root item; but\nthat is not clear from the comment on the loop. I'd suggest a comment\nmore along these lines:\n\n/*\n * Now scan the page a second time to process each valid HOT chain;\n * i.e. each non-HOT tuple or redirect line pointer and the HOT tuples in\n * the trailing chain, if any.\n * heap_prune_from_root marks all items in the HOT chain as visited;\n * so that phase 3 knows to skip those items.\n */\n\nApart from changes in comments for extra clarity; I think this\nmaterially improves pruning, so thanks for working on this.\n\n-Matthias\n\n\n",
"msg_date": "Sun, 20 Mar 2022 21:12:17 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Hardening heap pruning code (was: BUG #17255: Server crashes in\n index_delete_sort_cmp() due to race condition with vacuum)"
}
] |
[
{
"msg_contents": "Hello, hackers.\n\nWe have a production cluster with 10 hot standby servers. Each server\nhas 48 cores and a 762 Mbit/s network.\n\nWe have experienced multiple temporary downtimes caused by long\ntransactions and hint bits.\n\nFor example - we are creating a new big index. It could take even a\nday sometimes. Also, there are some tables with frequently updated\nindexes (HOT is not used for such tables). Of course, after some time\nwe have experienced higher CPU usage because of tons of “dead” tuples\nin index and heap. But everything is still working.\n\nBut real issues come once a long-lived transaction is finally\nfinished. The next index and heap scans start to mark millions of records\nwith the LP_DEAD flag. And it causes a ton of FPW records in WAL. It\nis impossible to quickly transfer such an amount through the network\n(or even write it to disk) - and the primary server becomes\nunavailable, along with the whole system.\n\nYou can see the graph of primary resources for a real downtime\nincident in the attachment.\n\nSo, I was thinking about a way to avoid such downtimes. What about\na patch to add parameters to limit the number of FPW caused by LP_DEAD\nbits per second? It is always possible to skip setting LP_DEAD\nnow and leave it for a future time. Such a parameter will make it possible to spread all\nadditional WAL traffic over time by some Mbit/s.\n\nDoes it look worth implementing?\n\nThanks,\nMichail.",
"msg_date": "Sun, 20 Mar 2022 22:43:49 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Patch proposal - parameter to limit amount of FPW because of hint\n bits per second"
},
{
"msg_contents": "On Sun, Mar 20, 2022 at 12:44 PM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n> So, I was thinking about a way to avoid such downtimes. What is about\n> a patch to add parameters to limit the number of FPW caused by LP_DEAD\n> bits per second? It is always possible to skip the setting of LP_DEAD\n> for future time. Such a parameter will make it possible to spread all\n> additional WAL traffic over time by some Mbit/s.\n>\n> Does it look worth its implementation?\n\nThe following approach seems like it might fix the problem in the way\nthat you hope for:\n\n* Add code to _bt_killitems() that detects if it has generated an FPI,\njust to set some LP_DEAD bits.\n\n* Instead of avoiding the FPI when this happens, proactively call\n_bt_simpledel_pass() just before _bt_killitems() returns. Accept the\nimmediate cost of setting an LP_DEAD bit, just like today, but avoid\nrepeated FPIs.\n\nThe idea here is to take advantage of the enhancements to LP_DEAD\nindex tuple deletion (or \"simple deletion\") in Postgres 14.\n_bt_simpledel_pass() will now do a good job of deleting \"extra\" heap\nTIDs in practice, with many workloads. So in your scenario it's likely\nthat the proactive index tuple deletions will be able to delete many\n\"extra\" nearby index tuples whose TIDs point to the same heap page.\n\nThis will be useful to you because it cuts down on repeated FPIs for\nthe same leaf page. You still get the FPIs, but in practice you may\nget far fewer of them by triggering these proactive deletions, that\ncan easily delete many TIDs in batch. 
I think that it's better to\npursue an approach like this because it's more general.\n\nIt would perhaps also make sense to not set LP_DEAD bits in\n_bt_killitems() when we see that doing so right now generates an FPI,\n*and* we also see that existing LP_DEAD markings are enough to make\n_bt_simpledel_pass() delete the index tuple that we want to mark\nLP_DEAD now, anyway (because it'll definitely visit the same heap\nblock later on). That does mean that we pay a small cost, but at least\nwe won't miss out on deleting any index tuples as a result of avoiding\nan FPI. This second idea is also much more general than simply\navoiding FPIs in general.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 20 Mar 2022 13:35:56 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal - parameter to limit amount of FPW because of hint\n bits per second"
},
{
"msg_contents": "Hello, Peter.\n\n> * Instead of avoiding the FPI when this happens, proactively call\n> _bt_simpledel_pass() just before _bt_killitems() returns. Accept the\n> immediate cost of setting an LP_DEAD bit, just like today, but avoid\n> repeated FPIs.\n\nHm, not sure here.\nAFAIK the current implementation does not produce repeated FPIs. The page is\nmarked as dirty on the first bit. So, other LP_DEAD bits (if not set by a\nsingle scan) do not generate an FPI until the checkpoint is ready.\nAlso, the issue affects GiST and HASH indexes and HEAP pages.\n\nBest regards,\nMichail.",
"msg_date": "Mon, 21 Mar 2022 10:58:23 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Patch proposal - parameter to limit amount of FPW because of hint\n bits per second"
},
{
"msg_contents": "On Mon, Mar 21, 2022 at 12:58 AM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n> Hm, not sure here\n> AFAIK current implementation does not produce repeated FPIs. Page is\n> marked as dirty on the first bit. So, others LP_DEAD (if not set by\n> single scan) do not generate FPI until checkpoint is ready.\n\nThere is one FPI per checkpoint for any leaf page that is modified\nduring that checkpoint. The difference between having that happen once\nor twice per leaf page and having that happen many more times per leaf\npage could be very large.\n\nOf course it's true that that might not make that much difference. Who\nknows? But if you're not willing to measure it then we'll never know.\nWhat version are you using here? How frequently were checkpoints\noccurring in the period in question, and how does that compare to\nnormal? You didn't even include this basic information.\n\nMany things have changed in this area already, and it's rather unclear\nhow much just upgrading to Postgres 14 would help. I think that it's\npossible that it would help you here a great deal. I also think it's\npossible that it wouldn't help at all. I don't know which it is, and I\nwouldn't expect to know without careful testing -- it's too\ncomplicated, and likely would be even if all of the information about\nthe application is available.\n\nThe main reason that this can be so complex is that FPIs are caused by\nmore frequent checkpoints, but *also* cause more frequent checkpoints\nin turn. So you could have a \"death spiral\" with FPIs -- the effect is\nnonlinear, which has the potential to lead to pathological, chaotic\nbehavior. The impact on response time is *also* nonlinear and chaotic,\nin turn.\n\nSometimes it's possible to address things like this quite well with\nrelatively simple solutions, that at least work well in most cases --\njust avoiding getting into a \"death spiral\" might be all it takes. 
As\nI said, maybe that won't be possible here, but it should be carefully\nconsidered first. Not setting LP_DEAD bits because there are currently\n\"too many FPIs\" requires defining what that actually means, which\nseems very difficult because of these nonlinear dynamics. What do you\ndo when there were too many FPIs for a long time, but also too much\navoiding them earlier on? It's very complicated.\n\nThat's why I'm emphasizing solutions that focus on limiting the\ndownside of not setting LP_DEAD bits, which is local information (not\nsystem wide information) that is much easier to understand and target\nin the implementation.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 21 Mar 2022 13:36:33 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal - parameter to limit amount of FPW because of hint\n bits per second"
},
{
"msg_contents": "Hello, Peter.\n\nThanks for your comments.\n\n> There is one FPI per checkpoint for any leaf page that is modified\n> during that checkpoint. The difference between having that happen once\n> or twice per leaf page and having that happen many more times per leaf\n> page could be very large.\n\nYes, I am almost sure proactively calling of_bt_simpledel_pass() will\npositively impact the system on many workloads. But also I am almost\nsure it will not change the behavior of the incident I mention -\nbecause it is not related to multiple checkpoints.\n\n> Of course it's true that that might not make that much difference. Who\n> knows? But if you're not willing to measure it then we'll never know.\n> What version are you using here? How frequently were checkpoints\n> occurring in the period in question, and how does that compare to\n> normal? You didn't even include this basic information.\n\nYes, I probably should have provided more details. Downtime is pretty short\n(you can see the network peak in the telemetry image from the first letter)\n- so, just 1-3 minutes. Checkpoints happen about every 30 min.\nIt is just an issue with super-high WAL traffic caused by tons of FPI\ntraffic after a long transaction commit. The issue resolved quickly on\nits own, but downtime still happens.\n\n> Many things have changed in this area already, and it's rather unclear\n> how much just upgrading to Postgres 14 would help.\n\nThe version is 11. Yes, many things have changed, but AFAIK nothing has\nchanged related to FPI mechanics (LP_DEAD and other hint bits,\nincluding HEAP).\n\nI could probably try to reproduce the issue, but I'm not sure how to\ndo it in a fast and reliable way (it is hard to wait for a day for\neach test). It may be possible with some temporary crutch in the\npostgres source (to emulate an old transaction commit somehow).\n\n> The main reason that this can be so complex is that FPIs are caused by\n> more frequent checkpoints, but *also* cause more frequent checkpoints\n> in turn. So you could have a \"death spiral\" with FPIs -- the effect is\n> nonlinear, which has the potential to lead to pathological, chaotic\n> behavior. The impact on response time is *also* nonlinear and chaotic,\n> in turn.\n\nCould you please explain the \"death spiral\" mechanics related to FPIs?\n\n> What do you do when there were too many FPIs for a long time, but also too much\n> avoiding them earlier on? It's very complicated.\n\nYes, avoiding FPIs too aggressively could cause at least performance\ndegradation. I am 100% sure such settings should be\ndisabled by default. It is more about the physical limits of servers.\nPersonally, I would like to set it to about 75% of resources.\n\nAlso, checkpoints and vacuum have some things in common -\nthey are processes which need to be done regularly (but not\nright now), and they are limited in resources. Setting LP_DEAD (and\nother hint bits, especially in HEAP) is also something that needs to be\ndone regularly (but not right now). But it is not limited by\nresources.\n\nBTW, new index creation probably has the same nature.\n\nBest regards,\nMichail.\n\n\n",
"msg_date": "Tue, 22 Mar 2022 15:07:54 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Patch proposal - parameter to limit amount of FPW because of hint\n bits per second"
},
{
"msg_contents": "Hello, Peter.\n\n>> * Add code to _bt_killitems() that detects if it has generated an FPI,\n>> just to set some LP_DEAD bits.\n>> * Instead of avoiding the FPI when this happens, proactively call\n>> _bt_simpledel_pass() just before _bt_killitems() returns. Accept the\n>> immediate cost of setting an LP_DEAD bit, just like today, but avoid\n>> repeated FPIs.\n\n> Yes, I am almost sure proactively calling of_bt_simpledel_pass() will\n> positively impact the system on many workloads. But also I am almost\n> sure it will not change the behavior of the incident I mention -\n> because it is not related to multiple checkpoints.\n\nI just realized that this seems to be a dangerous approach because of\nthe locking mechanism.\nCurrently _bt_killitems requires only a read lock, but _bt_simpledel_pass\nrequires a write lock (it ends with _bt_delitems_delete).\nIt would be required to increase the locking mode in order to call _bt_simpledel_pass.\n\nSuch a change may negatively affect many workloads because of the write\nlock during scanning - and it is really hard to prove the absence of\nregression (I have no idea how).\n\nThanks,\nMichail.\n\n\n",
"msg_date": "Thu, 24 Mar 2022 11:03:59 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Patch proposal - parameter to limit amount of FPW because of hint\n bits per second"
}
] |
[
{
"msg_contents": "In the current EXPLAIN ANALYZE implementation, the Sort Node stats from each worker are not summarized: https://github.com/postgres/postgres/blob/d4ba8b51c76300f06cc23f4d8a41d9f7210c4866/src/backend/commands/explain.c#L2762\n\nWhen the worker number is large, it will print out a huge amount of node details in the plan. I have created this patch to summarize the tuplesort stats by AverageSpaceUsed / PeakSpaceUsed, making it behave just like in `show_incremental_sort_group_info()`: https://github.com/postgres/postgres/blob/d4ba8b51c76300f06cc23f4d8a41d9f7210c4866/src/backend/commands/explain.c#L2890",
"msg_date": "Mon, 21 Mar 2022 03:36:17 +0000",
"msg_from": "Jian Guo <gjian@vmware.com>",
"msg_from_op": true,
"msg_subject": "Summary Sort workers Stats in EXPLAIN ANALYZE"
},
{
"msg_contents": "There is some problem with the last patch, I have removed the `ExplainOpenWorker` call to fix.\r\n\r\nAnd also, I have added a test case in explain.sql according to the code change.\r\n________________________________\r\nFrom: Jian Guo <gjian@vmware.com>\r\nSent: Monday, March 21, 2022 11:36\r\nTo: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\r\nCc: Zhenghua Lyu <zlyu@vmware.com>\r\nSubject: Summary Sort workers Stats in EXPLAIN ANALYZE\r\n\r\n\r\nIn current EXPLAIN ANALYZE implementation, the Sort Node stats from each workers are not summarized: https://github.com/postgres/postgres/blob/d4ba8b51c76300f06cc23f4d8a41d9f7210c4866/src/backend/commands/explain.c#L2762\r\n\r\nWhen the worker number is large, it will print out huge amount of node details in the plan. I have created this patch to summarize the tuplesort stats by AverageSpaceUsed / PeakSpaceUsed, make it behave just like in `show_incremental_sort_group_info()`: https://github.com/postgres/postgres/blob/d4ba8b51c76300f06cc23f4d8a41d9f7210c4866/src/backend/commands/explain.c#L2890",
"msg_date": "Mon, 21 Mar 2022 07:50:38 +0000",
"msg_from": "Jian Guo <gjian@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: Summary Sort workers Stats in EXPLAIN ANALYZE"
},
{
"msg_contents": "For a simple demo, with this explain statement:\r\n\r\n-- Test sort stats summary\r\nset force_parallel_mode=on;\r\nselect explain_filter('explain (analyze, summary off, timing off, costs off, format json) select * from tenk1 order by unique1');\r\n\r\nBefore this patch, we got plan like this:\r\n\r\n\r\n \"Node Type\": \"Sort\", +\r\n \"Parent Relationship\": \"Outer\", +\r\n \"Parallel Aware\": false, +\r\n \"Async Capable\": false, +\r\n \"Actual Rows\": 10000, +\r\n \"Actual Loops\": 1, +\r\n \"Sort Key\": [\"unique1\"], +\r\n \"Workers\": [ +\r\n { +\r\n \"Worker Number\": 0, +\r\n \"Sort Method\": \"external merge\",+\r\n \"Sort Space Used\": 2496, +\r\n \"Sort Space Type\": \"Disk\" +\r\n } +\r\n ], +\r\n\r\n\r\n\r\nAfter this patch, the effected plan is this:\r\n\r\n \"Node Type\": \"Sort\", +\r\n \"Parent Relationship\": \"Outer\", +\r\n \"Parallel Aware\": false, +\r\n \"Async Capable\": false, +\r\n \"Actual Rows\": N, +\r\n \"Actual Loops\": N, +\r\n \"Sort Key\": [\"unique1\"], +\r\n \"Workers planned\": N, +\r\n \"Sort Method\": \"external merge\", +\r\n \"Average Sort Space Used\": N, +\r\n \"Peak Sort Space Used\": N, +\r\n \"Sort Space Type\": \"Disk\", +\r\n________________________________\r\nFrom: Jian Guo <gjian@vmware.com>\r\nSent: Monday, March 21, 2022 15:50\r\nTo: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\r\nCc: Zhenghua Lyu <zlyu@vmware.com>\r\nSubject: Re: Summary Sort workers Stats in EXPLAIN ANALYZE\r\n\r\nThere is some problem with the last patch, I have removed the `ExplainOpenWorker` call to fix.\r\n\r\nAnd also, I have added a test case in explain.sql according to the code change.\r\n________________________________\r\nFrom: Jian Guo <gjian@vmware.com>\r\nSent: Monday, March 21, 2022 11:36\r\nTo: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\r\nCc: Zhenghua Lyu <zlyu@vmware.com>\r\nSubject: Summary Sort workers Stats in EXPLAIN ANALYZE\r\n\r\n\r\nIn current 
EXPLAIN ANALYZE implementation, the Sort Node stats from each workers are not summarized: https://github.com/postgres/postgres/blob/d4ba8b51c76300f06cc23f4d8a41d9f7210c4866/src/backend/commands/explain.c#L2762\r\n\r\nWhen the worker number is large, it will print out huge amount of node details in the plan. I have created this patch to summarize the tuplesort stats by AverageSpaceUsed / PeakSpaceUsed, make it behave just like in `show_incremental_sort_group_info()`: https://github.com/postgres/postgres/blob/d4ba8b51c76300f06cc23f4d8a41d9f7210c4866/src/backend/commands/explain.c#L2890",
"msg_date": "Thu, 24 Mar 2022 07:50:11 +0000",
"msg_from": "Jian Guo <gjian@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: Summary Sort workers Stats in EXPLAIN ANALYZE"
},
{
"msg_contents": "Hi,\n\nOn Thu, Mar 24, 2022 at 07:50:11AM +0000, Jian Guo wrote:\n> For a simple demo, with this explain statement:\n>\n> -- Test sort stats summary\n> set force_parallel_mode=on;\n> select explain_filter('explain (analyze, summary off, timing off, costs off, format json) select * from tenk1 order by unique1');\n>\n> Before this patch, we got plan like this:\n>\n>\n> \"Node Type\": \"Sort\", +\n> \"Parent Relationship\": \"Outer\", +\n> \"Parallel Aware\": false, +\n> \"Async Capable\": false, +\n> \"Actual Rows\": 10000, +\n> \"Actual Loops\": 1, +\n> \"Sort Key\": [\"unique1\"], +\n> \"Workers\": [ +\n> { +\n> \"Worker Number\": 0, +\n> \"Sort Method\": \"external merge\",+\n> \"Sort Space Used\": 2496, +\n> \"Sort Space Type\": \"Disk\" +\n> } +\n> ], +\n\n> After this patch, the effected plan is this:\n>\n> \"Node Type\": \"Sort\", +\n> \"Parent Relationship\": \"Outer\", +\n> \"Parallel Aware\": false, +\n> \"Async Capable\": false, +\n> \"Actual Rows\": N, +\n> \"Actual Loops\": N, +\n> \"Sort Key\": [\"unique1\"], +\n> \"Workers planned\": N, +\n> \"Sort Method\": \"external merge\", +\n> \"Average Sort Space Used\": N, +\n> \"Peak Sort Space Used\": N, +\n> \"Sort Space Type\": \"Disk\", +\n\nI think the idea is interesting, however there are a few problems in the patch.\n\nFirst, I think that it should only be done in the VERBOSE OFF mode. If you ask\nfor a VERBOSE output you don't need both the details and the summarized\nversion.\n\nOther minor problems:\n\n- why (only) emitting the number of workers planned and not the number of\n workers launched?\n- the textual format is missing details about what the numbers are, which is\n particularly obvious since avgSpaceUsed and peakSpaceUsed don't have any unit\n or even space between them:\n\n+\t\t\t \"Sort Method: %s %s: \" INT64_FORMAT INT64_FORMAT \"kB\\n\",\n+\t\t\t sortMethod, spaceType, avgSpaceUsed, peakSpaceUsed);\n\n\n\n",
"msg_date": "Fri, 25 Mar 2022 17:04:53 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary Sort workers Stats in EXPLAIN ANALYZE"
},
{
"msg_contents": "On Fri, Mar 25, 2022 at 05:04:53PM +0800, Julien Rouhaud wrote:\n> I think the idea is interesting, however there are a few problems in the patch.\n>\n> First, I think that it should only be done in the VERBOSE OFF mode. If you ask\n> for a VERBOSE output you don't need both the details and the summarized\n> version.\n>\n> Other minor problems:\n>\n> - why (only) emitting the number of workers planned and not the number of\n> workers launched?\n> - the textual format is missing details about what the numbers are, which is\n> particularly obvious since avgSpaceUsed and peakSpaceUsed don't have any unit\n> or even space between them:\n>\n> +\t\t\t \"Sort Method: %s %s: \" INT64_FORMAT INT64_FORMAT \"kB\\n\",\n> +\t\t\t sortMethod, spaceType, avgSpaceUsed, peakSpaceUsed);\n\nAlso I didn't find your patch in the next commitfest [1]. Please register it\nto make sure that it's not forgotten. Note that we're already at the end of the\nlast pg15 commitfest, so this should be material for pg16.\n\n[1] https://commitfest.postgresql.org/38/\n\n\n",
"msg_date": "Fri, 25 Mar 2022 17:30:30 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary Sort workers Stats in EXPLAIN ANALYZE"
},
{
"msg_contents": "I have updated the patch addressing the review comments, but I didn't moved this code block into VERBOSE mode, to keep consistency with `show_incremental_sort_info`:\r\n\r\nhttps://github.com/postgres/postgres/blob/d4ba8b51c76300f06cc23f4d8a41d9f7210c4866/src/backend/commands/explain.c#L2890\r\n\r\nPlease review, thanks.\r\n\r\n________________________________\r\nFrom: Julien Rouhaud <rjuju123@gmail.com>\r\nSent: Friday, March 25, 2022 17:04\r\nTo: Jian Guo <gjian@vmware.com>\r\nCc: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>; Zhenghua Lyu <zlyu@vmware.com>\r\nSubject: Re: Summary Sort workers Stats in EXPLAIN ANALYZE\r\n\r\n⚠ External Email\r\n\r\nHi,\r\n\r\nOn Thu, Mar 24, 2022 at 07:50:11AM +0000, Jian Guo wrote:\r\n> For a simple demo, with this explain statement:\r\n>\r\n> -- Test sort stats summary\r\n> set force_parallel_mode=on;\r\n> select explain_filter('explain (analyze, summary off, timing off, costs off, format json) select * from tenk1 order by unique1');\r\n>\r\n> Before this patch, we got plan like this:\r\n>\r\n>\r\n> \"Node Type\": \"Sort\", +\r\n> \"Parent Relationship\": \"Outer\", +\r\n> \"Parallel Aware\": false, +\r\n> \"Async Capable\": false, +\r\n> \"Actual Rows\": 10000, +\r\n> \"Actual Loops\": 1, +\r\n> \"Sort Key\": [\"unique1\"], +\r\n> \"Workers\": [ +\r\n> { +\r\n> \"Worker Number\": 0, +\r\n> \"Sort Method\": \"external merge\",+\r\n> \"Sort Space Used\": 2496, +\r\n> \"Sort Space Type\": \"Disk\" +\r\n> } +\r\n> ], +\r\n\r\n> After this patch, the effected plan is this:\r\n>\r\n> \"Node Type\": \"Sort\", +\r\n> \"Parent Relationship\": \"Outer\", +\r\n> \"Parallel Aware\": false, +\r\n> \"Async Capable\": false, +\r\n> \"Actual Rows\": N, +\r\n> \"Actual Loops\": N, +\r\n> \"Sort Key\": [\"unique1\"], +\r\n> \"Workers planned\": N, +\r\n> \"Sort Method\": \"external merge\", +\r\n> \"Average Sort Space Used\": N, +\r\n> \"Peak Sort Space Used\": N, +\r\n> \"Sort Space Type\": \"Disk\", 
+\r\n\r\nI think the idea is interesting, however there are a few problems in the patch.\r\n\r\nFirst, I think that it should only be done in the VERBOSE OFF mode. If you ask\r\nfor a VERBOSE output you don't need both the details and the summarized\r\nversion.\r\n\r\nOther minor problems:\r\n\r\n- why (only) emitting the number of workers planned and not the number of\r\n workers launched?\r\n- the textual format is missing details about what the numbers are, which is\r\n particularly obvious since avgSpaceUsed and peakSpaceUsed don't have any unit\r\n or even space between them:\r\n\r\n+ \"Sort Method: %s %s: \" INT64_FORMAT INT64_FORMAT \"kB\\n\",\r\n+ sortMethod, spaceType, avgSpaceUsed, peakSpaceUsed);\r\n\r\n\r\n________________________________\r\n\r\n⚠ External Email: This email originated from outside of the organization. Do not click links or open attachments unless you recognize the sender.",
"msg_date": "Mon, 28 Mar 2022 09:55:39 +0000",
"msg_from": "Jian Guo <gjian@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: Summary Sort workers Stats in EXPLAIN ANALYZE"
},
{
"msg_contents": "On Mon, Mar 28, 2022 at 2:55 PM Jian Guo <gjian@vmware.com> wrote:\n\n>\n> I have updated the patch addressing the review comments, but I didn't\n> moved this code block into VERBOSE mode, to keep consistency with `\n> show_incremental_sort_info`:\n>\n>\n> https://github.com/postgres/postgres/blob/d4ba8b51c76300f06cc23f4d8a41d9f7210c4866/src/backend/commands/explain.c#L2890\n>\n> Please review, thanks.\n>\n> ------------------------------\n> *From:* Julien Rouhaud <rjuju123@gmail.com>\n> *Sent:* Friday, March 25, 2022 17:04\n> *To:* Jian Guo <gjian@vmware.com>\n> *Cc:* pgsql-hackers@lists.postgresql.org <\n> pgsql-hackers@lists.postgresql.org>; Zhenghua Lyu <zlyu@vmware.com>\n> *Subject:* Re: Summary Sort workers Stats in EXPLAIN ANALYZE\n>\n> ⚠ External Email\n>\n> Hi,\n>\n> On Thu, Mar 24, 2022 at 07:50:11AM +0000, Jian Guo wrote:\n> > For a simple demo, with this explain statement:\n> >\n> > -- Test sort stats summary\n> > set force_parallel_mode=on;\n> > select explain_filter('explain (analyze, summary off, timing off, costs\n> off, format json) select * from tenk1 order by unique1');\n> >\n> > Before this patch, we got plan like this:\n> >\n> >\n> > \"Node Type\": \"Sort\", +\n> > \"Parent Relationship\": \"Outer\", +\n> > \"Parallel Aware\": false, +\n> > \"Async Capable\": false, +\n> > \"Actual Rows\": 10000, +\n> > \"Actual Loops\": 1, +\n> > \"Sort Key\": [\"unique1\"], +\n> > \"Workers\": [ +\n> > { +\n> > \"Worker Number\": 0, +\n> > \"Sort Method\": \"external merge\",+\n> > \"Sort Space Used\": 2496, +\n> > \"Sort Space Type\": \"Disk\" +\n> > } +\n> > ], +\n>\n> > After this patch, the effected plan is this:\n> >\n> > \"Node Type\": \"Sort\", +\n> > \"Parent Relationship\": \"Outer\", +\n> > \"Parallel Aware\": false, +\n> > \"Async Capable\": false, +\n> > \"Actual Rows\": N, +\n> > \"Actual Loops\": N, +\n> > \"Sort Key\": [\"unique1\"], +\n> > \"Workers planned\": N, +\n> > \"Sort Method\": \"external merge\", +\n> > \"Average 
Sort Space Used\": N, +\n> > \"Peak Sort Space Used\": N, +\n> > \"Sort Space Type\": \"Disk\", +\n>\n> I think the idea is interesting, however there are a few problems in the\n> patch.\n>\n> First, I think that it should only be done in the VERBOSE OFF mode. If\n> you ask\n> for a VERBOSE output you don't need both the details and the summarized\n> version.\n>\n> Other minor problems:\n>\n> - why (only) emitting the number of workers planned and not the number of\n> workers launched?\n> - the textual format is missing details about what the numbers are, which\n> is\n> particularly obvious since avgSpaceUsed and peakSpaceUsed don't have any\n> unit\n> or even space between them:\n>\n> + \"Sort Method: %s %s: \" INT64_FORMAT INT64_FORMAT\n> \"kB\\n\",\n> + sortMethod, spaceType, avgSpaceUsed,\n> peakSpaceUsed);\n>\n>\n> ________________________________\n>\n> ⚠ External Email: This email originated from outside of the organization.\n> Do not click links or open attachments unless you recognize the sender.\n>\n\nThe patch failed different regression tests on all platforms. 
Please\ncorrect that and send an updated patch.\n\n[06:40:02.370] Test Summary Report\n[06:40:02.370] -------------------\n[06:40:02.370] t/002_pg_upgrade.pl (Wstat: 256 Tests: 13 Failed: 1)\n[06:40:02.370] Failed test: 4\n[06:40:02.370] Non-zero exit status: 1\n[06:40:02.370] Files=2, Tests=21, 45 wallclock secs ( 0.02 usr 0.00 sys +\n3.52 cusr 2.06 csys = 5.60 CPU)\n-- \nIbrar Ahmed",
"msg_date": "Tue, 6 Sep 2022 11:37:32 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary Sort workers Stats in EXPLAIN ANALYZE"
},
{
"msg_contents": "On Tue, Sep 06, 2022 at 11:37:32AM +0500, Ibrar Ahmed wrote:\n> The patch failed different regression tests on all platforms. Please\n> correct that and send an updated patch.\n> \n> [06:40:02.370] Test Summary Report\n> [06:40:02.370] -------------------\n> [06:40:02.370] t/002_pg_upgrade.pl (Wstat: 256 Tests: 13 Failed: 1)\n> [06:40:02.370] Failed test: 4\n> [06:40:02.370] Non-zero exit status: 1\n> [06:40:02.370] Files=2, Tests=21, 45 wallclock secs ( 0.02 usr 0.00 sys +\n> 3.52 cusr 2.06 csys = 5.60 CPU)\n\nThis has been marked as RwF based on the lack of an update.\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 17:42:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Summary Sort workers Stats in EXPLAIN ANALYZE"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI learned from Tom [1] that we can simplify the code like:\n\n```\nchar buff[32];\nsnprintf(buf, sizeof(buf), INT64_FORMAT, ...)\nereport(WARNING, (errmsg(\"%s ...\", buf)));\n```\n\n... and rely on %lld/%llu now as long as we explicitly cast the\nargument to long long int / unsigned long long. This was previously\naddressed in 6a1cd8b9 and d914eb34, but I see more places where we\nstill use an old approach.\n\nSuggested patch fixes this. Tested locally - no warnings; passes all the tests.\n\n[1] https://www.postgresql.org/message-id/771048.1647528068%40sss.pgh.pa.us\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 21 Mar 2022 11:52:14 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Remove workarounds to format [u]int64's"
},
{
"msg_contents": "Mon, 21 Mar 2022 at 12:52, Aleksander Alekseev <aleksander@timescale.com>:\n\n> Hi hackers,\n>\n> I learned from Tom [1] that we can simplify the code like:\n>\n> ```\n> char buff[32];\n> snprintf(buf, sizeof(buf), INT64_FORMAT, ...)\n> ereport(WARNING, (errmsg(\"%s ...\", buf)));\n> ```\n>\n> ... and rely on %lld/%llu now as long as we explicitly cast the\n> argument to long long int / unsigned long long. This was previously\n> addressed in 6a1cd8b9 and d914eb34, but I see more places where we\n> still use an old approach.\n>\n> Suggested patch fixes this. Tested locally - no warnings; passes all the\n> tests.\n>\n> [1]\n> https://www.postgresql.org/message-id/771048.1647528068%40sss.pgh.pa.us\n>\n> Hi, Alexander!\nProbably you can do (long long) instead of (long long int). It is shorter\nand this is used elsewhere in the code.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Mon, 21 Mar 2022 13:12:33 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Remove workarounds to format [u]int64's"
},
{
"msg_contents": "Hi Pavel,\n\n> Probably you can do (long long) instead of (long long int). It is shorter and this is used elsewhere in the code.\n\nThanks! Here is the updated patch. I also added Reviewed-by: and\nDiscussion: to the commit message.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 21 Mar 2022 12:23:33 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Remove workarounds to format [u]int64's"
},
{
"msg_contents": ">\n> > Probably you can do (long long) instead of (long long int). It is\n> shorter and this is used elsewhere in the code.\n>\n> Thanks! Here is the updated patch. I also added Reviewed-by: and\n> Discussion: to the commit message.\n>\nThanks, Alexander!\nI suggest the patch is in a good shape to be committed.\n(\nMaybe some strings that don't fit screen cloud be reflowed:\n (long long int)seqdataform->last_value, (long long int)seqform->seqmax)));\n)\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\n> Probably you can do (long long) instead of (long long int). It is shorter and this is used elsewhere in the code.\n\nThanks! Here is the updated patch. I also added Reviewed-by: and\nDiscussion: to the commit message.Thanks, Alexander!I suggest the patch is in a good shape to be committed.(Maybe some strings that don't fit screen cloud be reflowed: (long long int)seqdataform->last_value, (long long int)seqform->seqmax)));)-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com",
"msg_date": "Mon, 21 Mar 2022 13:30:59 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Remove workarounds to format [u]int64's"
},
{
"msg_contents": "On Mon, 21 Mar 2022 at 17:23, Aleksander Alekseev <aleksander@timescale.com> wrote:\n> Hi Pavel,\n>\n>> Probably you can do (long long) instead of (long long int). It is shorter and this is used elsewhere in the code.\n>\n> Thanks! Here is the updated patch. I also added Reviewed-by: and\n> Discussion: to the commit message.\n\nHi,\n\nAfter apply the patch, I found pg_checksums.c also has the similar code.\n\nIn progress_report(), I'm not sure we can do this replace for this code.\n\n snprintf(total_size_str, sizeof(total_size_str), INT64_FORMAT,\n total_size / (1024 * 1024));\n snprintf(current_size_str, sizeof(current_size_str), INT64_FORMAT,\n current_size / (1024 * 1024));\n\n fprintf(stderr, _(\"%*s/%s MB (%d%%) computed\"),\n (int) strlen(current_size_str), current_size_str, total_size_str,\n percent);\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Mon, 21 Mar 2022 18:13:46 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Remove workarounds to format [u]int64's"
},
{
"msg_contents": "Hi Japin,\n\n> After apply the patch, I found pg_checksums.c also has the similar code.\n\nThanks for noticing it.\n\n> In progress_report(), I'm not sure we can do this replace for this code.\n\nI added the corresponding change as a separate commit so it can be\neasily reverted if necessary.\n\nHere is a complete patchset with some additional changes by me.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 21 Mar 2022 13:55:44 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Remove workarounds to format [u]int64's"
},
{
"msg_contents": "Hi Japin,\n\n> As Tom said in [1], we don't need to touch the *.po files, since those\nfiles\n> are managed by the translation team.\n>\n> [1]\nhttps://www.postgresql.org/message-id/1110708.1647623560%40sss.pgh.pa.us\n\nTrue, but I figured that simplifying the work of the translation team would\nnot harm either. In any case, the committer can easily exclude these\nchanges from the patch, if necessary.\n\n-- \nBest regards,\nAleksander Alekseev\n\nHi Japin,> As Tom said in [1], we don't need to touch the *.po files, since those files> are managed by the translation team.>> [1] https://www.postgresql.org/message-id/1110708.1647623560%40sss.pgh.pa.usTrue, but I figured that simplifying the work of the translation team would not harm either. In any case, the committer can easily exclude these changes from the patch, if necessary.-- Best regards,Aleksander Alekseev",
"msg_date": "Mon, 21 Mar 2022 14:25:33 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Remove workarounds to format [u]int64's"
},
{
"msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n>> As Tom said in [1], we don't need to touch the *.po files, since those\n>> files are managed by the translation team.\n\n> True, but I figured that simplifying the work of the translation team would\n> not harm either.\n\nIt would not simplify things for them at all, just mess it up.\nThe master copies of the .po files are kept in a different repo.\nAlso, I believe that extraction of new message strings is automated\nalready.\n\nhttps://www.postgresql.org/docs/devel/nls.html\n\nhttps://wiki.postgresql.org/wiki/NLS\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 21 Mar 2022 09:41:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Remove workarounds to format [u]int64's"
},
{
"msg_contents": "Hi Tom,\n\n> It would not simplify things for them at all, just mess it up.\n> The master copies of the .po files are kept in a different repo.\n> Also, I believe that extraction of new message strings is automated\n> already.\n\nGot it, thanks. Here is the corrected patch. It includes all the\nchanges by me and Japin, and doesn't touch PO files.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 21 Mar 2022 17:37:44 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Remove workarounds to format [u]int64's"
},
{
"msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> Got it, thanks. Here is the corrected patch. It includes all the\n> changes by me and Japin, and doesn't touch PO files.\n\nPushed. I removed now-unnecessary braces, reflowed some lines\nas suggested by Pavel, and pgindent'ed (which insisted on adding\nspaces after the casts, as is project style).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 21 Mar 2022 11:13:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Remove workarounds to format [u]int64's"
},
{
"msg_contents": "On 21.03.22 15:37, Aleksander Alekseev wrote:\n>> It would not simplify things for them at all, just mess it up.\n>> The master copies of the .po files are kept in a different repo.\n>> Also, I believe that extraction of new message strings is automated\n>> already.\n> \n> Got it, thanks. Here is the corrected patch. It includes all the\n> changes by me and Japin, and doesn't touch PO files.\n\nI think in some cases we can make this even simpler (and cast-free) by \nchanging the underlying variable to be long long instead of int64. \nEspecially in cases where the whole point of the variable is to be some \ncounter that ends up being printed, there isn't a need to use int64 in \nthe first place. See attached patch for examples.",
"msg_date": "Wed, 23 Mar 2022 21:48:48 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Remove workarounds to format [u]int64's"
},
{
"msg_contents": ">\n> On 21.03.22 15:37, Aleksander Alekseev wrote:\n> >> It would not simplify things for them at all, just mess it up.\n> >> The master copies of the .po files are kept in a different repo.\n> >> Also, I believe that extraction of new message strings is automated\n> >> already.\n> >\n> > Got it, thanks. Here is the corrected patch. It includes all the\n> > changes by me and Japin, and doesn't touch PO files.\n>\n> I think in some cases we can make this even simpler (and cast-free) by\n> changing the underlying variable to be long long instead of int64.\n> Especially in cases where the whole point of the variable is to be some\n> counter that ends up being printed, there isn't a need to use int64 in\n> the first place. See attached patch for examples.\n\n\nYes, this will work, when we can define a variable itself as *long long*.\nBut for some applications: [1], [2], I suppose we'll need exactly uint64 to\nrepresent TransactionId. uint64 is warrantied to fit into *unsigned long\nlong*, but on most of archs it is just *unsigned long*. 
Defining what we\nneed to be uint64 as *unsigned long long* on these archs will mean it\nbecome uint128, which we may not like.\n\nIn my opinion, in many places, it's better to have casts when it's for\nprinting fixed-width int/uint variables than the alternative.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CACG%3DezYV3FM5i9ws2QLyF%2Brz5WHTqheL59VRsHGsgAwfx8gh4g%40mail.gmail.com#d7068b9d25a2f8a1064d2ea4815df23d\n[2]\nhttps://www.postgresql.org/message-id/flat/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com\n\n--\nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nOn 21.03.22 15:37, Aleksander Alekseev wrote:\n>> It would not simplify things for them at all, just mess it up.\n>> The master copies of the .po files are kept in a different repo.\n>> Also, I believe that extraction of new message strings is automated\n>> already.\n> \n> Got it, thanks. Here is the corrected patch. It includes all the\n> changes by me and Japin, and doesn't touch PO files.\n\nI think in some cases we can make this even simpler (and cast-free) by \nchanging the underlying variable to be long long instead of int64. \nEspecially in cases where the whole point of the variable is to be some \ncounter that ends up being printed, there isn't a need to use int64 in \nthe first place. See attached patch for examples.Yes, this will work, when we can define a variable itself as long long. But for some applications: [1], [2], I suppose we'll need exactly uint64 to represent TransactionId. uint64 is warrantied to fit into unsigned long long, but on most of archs it is just unsigned long. 
Defining what we need to be uint64 as unsigned long long on these archs will mean it become uint128, which we may not like.In my opinion, in many places, it's better to have casts when it's for printing fixed-width int/uint variables than the alternative.[1] https://www.postgresql.org/message-id/flat/CACG%3DezYV3FM5i9ws2QLyF%2Brz5WHTqheL59VRsHGsgAwfx8gh4g%40mail.gmail.com#d7068b9d25a2f8a1064d2ea4815df23d[2] https://www.postgresql.org/message-id/flat/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com--Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com",
"msg_date": "Thu, 24 Mar 2022 01:12:27 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Remove workarounds to format [u]int64's"
}
] |
[
{
"msg_contents": ">\n> Hi,\n> I was looking at calls to bms_free() in PG code.\n>\n> e.g. src/backend/commands/publicationcmds.c line 362\n>\n> bms_free(bms);\n>\n> The above is just an example, there're other calls to bms_free().\n> Since the bms is allocated from some execution context, I wonder why this\n> call is needed.\n>\n> When the underlying execution context wraps up, isn't the bms freed ?\n>\n> Cheers\n>\n>\n>\n\nHi,I was looking at calls to bms_free() in PG code.e.g. src/backend/commands/publicationcmds.c line 362 bms_free(bms);The above is just an example, there're other calls to bms_free().Since the bms is allocated from some execution context, I wonder why this call is needed.When the underlying execution context wraps up, isn't the bms freed ?Cheers",
"msg_date": "Mon, 21 Mar 2022 14:30:52 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: freeing bms explicitly"
},
{
"msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n>> I was looking at calls to bms_free() in PG code.\n>> e.g. src/backend/commands/publicationcmds.c line 362\n>> \tbms_free(bms);\n>> The above is just an example, there're other calls to bms_free().\n>> Since the bms is allocated from some execution context, I wonder why this\n>> call is needed.\n>> \n>> When the underlying execution context wraps up, isn't the bms freed ?\n\nYeah, that's kind of pointless --- and the pfree(rfnode) after it is even\nmore pointless, since it'll free only the top node of that expression\ntree. Not to mention the string returned by TextDatumGetCString, and\nwhatever might be leaked during the underlying catalog accesses.\n\nIf we were actually worried about transient space consumption of this\nfunction, it'd be necessary to do a lot more than this. It doesn't\nlook to me like it's worth worrying about though -- it doesn't seem\nlike it could be hit more than once per query in normal cases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 21 Mar 2022 18:05:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: freeing bms explicitly"
},
{
"msg_contents": "On Mon, Mar 21, 2022 at 3:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Zhihong Yu <zyu@yugabyte.com> writes:\n> >> I was looking at calls to bms_free() in PG code.\n> >> e.g. src/backend/commands/publicationcmds.c line 362\n> >> bms_free(bms);\n> >> The above is just an example, there're other calls to bms_free().\n> >> Since the bms is allocated from some execution context, I wonder why\n> this\n> >> call is needed.\n> >>\n> >> When the underlying execution context wraps up, isn't the bms freed ?\n>\n> Yeah, that's kind of pointless --- and the pfree(rfnode) after it is even\n> more pointless, since it'll free only the top node of that expression\n> tree. Not to mention the string returned by TextDatumGetCString, and\n> whatever might be leaked during the underlying catalog accesses.\n>\n> If we were actually worried about transient space consumption of this\n> function, it'd be necessary to do a lot more than this. It doesn't\n> look to me like it's worth worrying about though -- it doesn't seem\n> like it could be hit more than once per query in normal cases.\n>\n> regards, tom lane\n>\n\nThanks Tom for replying.\n\nWhat do you think of the following patch ?\n\nCheers",
"msg_date": "Mon, 21 Mar 2022 15:13:18 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: freeing bms explicitly"
},
{
"msg_contents": "On Tue, Mar 22, 2022 at 3:39 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> On Mon, Mar 21, 2022 at 3:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Zhihong Yu <zyu@yugabyte.com> writes:\n>> >> I was looking at calls to bms_free() in PG code.\n>> >> e.g. src/backend/commands/publicationcmds.c line 362\n>> >> bms_free(bms);\n>> >> The above is just an example, there're other calls to bms_free().\n>> >> Since the bms is allocated from some execution context, I wonder why this\n>> >> call is needed.\n>> >>\n>> >> When the underlying execution context wraps up, isn't the bms freed ?\n>>\n>> Yeah, that's kind of pointless --- and the pfree(rfnode) after it is even\n>> more pointless, since it'll free only the top node of that expression\n>> tree. Not to mention the string returned by TextDatumGetCString, and\n>> whatever might be leaked during the underlying catalog accesses.\n>>\n>> If we were actually worried about transient space consumption of this\n>> function, it'd be necessary to do a lot more than this. It doesn't\n>> look to me like it's worth worrying about though -- it doesn't seem\n>> like it could be hit more than once per query in normal cases.\n>>\n>> regards, tom lane\n>\n>\n> Thanks Tom for replying.\n>\n> What do you think of the following patch ?\n>\n\nYour patch looks good to me. I have found one more similar instance in\nthe same file and changed that as well accordingly. Let me know what\nyou think of the attached?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 23 Mar 2022 09:15:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: freeing bms explicitly"
},
{
"msg_contents": "On Tue, Mar 22, 2022 at 8:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Tue, Mar 22, 2022 at 3:39 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> > On Mon, Mar 21, 2022 at 3:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>\n> >> Zhihong Yu <zyu@yugabyte.com> writes:\n> >> >> I was looking at calls to bms_free() in PG code.\n> >> >> e.g. src/backend/commands/publicationcmds.c line 362\n> >> >> bms_free(bms);\n> >> >> The above is just an example, there're other calls to bms_free().\n> >> >> Since the bms is allocated from some execution context, I wonder why\n> this\n> >> >> call is needed.\n> >> >>\n> >> >> When the underlying execution context wraps up, isn't the bms freed ?\n> >>\n> >> Yeah, that's kind of pointless --- and the pfree(rfnode) after it is\n> even\n> >> more pointless, since it'll free only the top node of that expression\n> >> tree. Not to mention the string returned by TextDatumGetCString, and\n> >> whatever might be leaked during the underlying catalog accesses.\n> >>\n> >> If we were actually worried about transient space consumption of this\n> >> function, it'd be necessary to do a lot more than this. It doesn't\n> >> look to me like it's worth worrying about though -- it doesn't seem\n> >> like it could be hit more than once per query in normal cases.\n> >>\n> >> regards, tom lane\n> >\n> >\n> > Thanks Tom for replying.\n> >\n> > What do you think of the following patch ?\n> >\n>\n> Your patch looks good to me. I have found one more similar instance in\n> the same file and changed that as well accordingly. 
Let me know what\n> you think of the attached?\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n\nHi, Amit:\nThe patch looks good to me.\n\nCheers\n\nOn Tue, Mar 22, 2022 at 8:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:On Tue, Mar 22, 2022 at 3:39 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> On Mon, Mar 21, 2022 at 3:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Zhihong Yu <zyu@yugabyte.com> writes:\n>> >> I was looking at calls to bms_free() in PG code.\n>> >> e.g. src/backend/commands/publicationcmds.c line 362\n>> >> bms_free(bms);\n>> >> The above is just an example, there're other calls to bms_free().\n>> >> Since the bms is allocated from some execution context, I wonder why this\n>> >> call is needed.\n>> >>\n>> >> When the underlying execution context wraps up, isn't the bms freed ?\n>>\n>> Yeah, that's kind of pointless --- and the pfree(rfnode) after it is even\n>> more pointless, since it'll free only the top node of that expression\n>> tree. Not to mention the string returned by TextDatumGetCString, and\n>> whatever might be leaked during the underlying catalog accesses.\n>>\n>> If we were actually worried about transient space consumption of this\n>> function, it'd be necessary to do a lot more than this. It doesn't\n>> look to me like it's worth worrying about though -- it doesn't seem\n>> like it could be hit more than once per query in normal cases.\n>>\n>> regards, tom lane\n>\n>\n> Thanks Tom for replying.\n>\n> What do you think of the following patch ?\n>\n\nYour patch looks good to me. I have found one more similar instance in\nthe same file and changed that as well accordingly. Let me know what\nyou think of the attached?\n\n-- \nWith Regards,\nAmit Kapila. Hi, Amit:The patch looks good to me.Cheers",
"msg_date": "Tue, 22 Mar 2022 21:04:03 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: freeing bms explicitly"
},
{
"msg_contents": "On Tue, Mar 22, 2022 at 9:04 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Tue, Mar 22, 2022 at 8:45 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n>\n>> On Tue, Mar 22, 2022 at 3:39 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>> >\n>> > On Mon, Mar 21, 2022 at 3:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> >>\n>> >> Zhihong Yu <zyu@yugabyte.com> writes:\n>> >> >> I was looking at calls to bms_free() in PG code.\n>> >> >> e.g. src/backend/commands/publicationcmds.c line 362\n>> >> >> bms_free(bms);\n>> >> >> The above is just an example, there're other calls to bms_free().\n>> >> >> Since the bms is allocated from some execution context, I wonder\n>> why this\n>> >> >> call is needed.\n>> >> >>\n>> >> >> When the underlying execution context wraps up, isn't the bms freed\n>> ?\n>> >>\n>> >> Yeah, that's kind of pointless --- and the pfree(rfnode) after it is\n>> even\n>> >> more pointless, since it'll free only the top node of that expression\n>> >> tree. Not to mention the string returned by TextDatumGetCString, and\n>> >> whatever might be leaked during the underlying catalog accesses.\n>> >>\n>> >> If we were actually worried about transient space consumption of this\n>> >> function, it'd be necessary to do a lot more than this. It doesn't\n>> >> look to me like it's worth worrying about though -- it doesn't seem\n>> >> like it could be hit more than once per query in normal cases.\n>> >>\n>> >> regards, tom lane\n>> >\n>> >\n>> > Thanks Tom for replying.\n>> >\n>> > What do you think of the following patch ?\n>> >\n>>\n>> Your patch looks good to me. I have found one more similar instance in\n>> the same file and changed that as well accordingly. 
Let me know what\n>> you think of the attached?\n>>\n>> --\n>> With Regards,\n>> Amit Kapila.\n>>\n>\n> Hi, Amit:\n> The patch looks good to me.\n>\n> Cheers\n>\n\nTom:\n Do you mind taking a look at the latest patch ?\n\nThanks\n\nOn Tue, Mar 22, 2022 at 9:04 PM Zhihong Yu <zyu@yugabyte.com> wrote:On Tue, Mar 22, 2022 at 8:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:On Tue, Mar 22, 2022 at 3:39 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> On Mon, Mar 21, 2022 at 3:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Zhihong Yu <zyu@yugabyte.com> writes:\n>> >> I was looking at calls to bms_free() in PG code.\n>> >> e.g. src/backend/commands/publicationcmds.c line 362\n>> >> bms_free(bms);\n>> >> The above is just an example, there're other calls to bms_free().\n>> >> Since the bms is allocated from some execution context, I wonder why this\n>> >> call is needed.\n>> >>\n>> >> When the underlying execution context wraps up, isn't the bms freed ?\n>>\n>> Yeah, that's kind of pointless --- and the pfree(rfnode) after it is even\n>> more pointless, since it'll free only the top node of that expression\n>> tree. Not to mention the string returned by TextDatumGetCString, and\n>> whatever might be leaked during the underlying catalog accesses.\n>>\n>> If we were actually worried about transient space consumption of this\n>> function, it'd be necessary to do a lot more than this. It doesn't\n>> look to me like it's worth worrying about though -- it doesn't seem\n>> like it could be hit more than once per query in normal cases.\n>>\n>> regards, tom lane\n>\n>\n> Thanks Tom for replying.\n>\n> What do you think of the following patch ?\n>\n\nYour patch looks good to me. I have found one more similar instance in\nthe same file and changed that as well accordingly. Let me know what\nyou think of the attached?\n\n-- \nWith Regards,\nAmit Kapila. Hi, Amit:The patch looks good to me.Cheers Tom: Do you mind taking a look at the latest patch ?Thanks",
"msg_date": "Wed, 23 Mar 2022 09:42:21 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: freeing bms explicitly"
},
{
"msg_contents": "On Wed, Mar 23, 2022 at 9:30 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> On Tue, Mar 22, 2022 at 8:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Tue, Mar 22, 2022 at 3:39 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>>\n>> Your patch looks good to me. I have found one more similar instance in\n>> the same file and changed that as well accordingly. Let me know what\n>> you think of the attached?\n>>\n>\n> Hi, Amit:\n> The patch looks good to me.\n>\n\nThanks. I'll push this tomorrow unless Tom or someone else wants to\nlook at it or would like to commit.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 24 Mar 2022 07:43:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: freeing bms explicitly"
},
{
"msg_contents": "On Thu, Mar 24, 2022 at 7:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 23, 2022 at 9:30 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> > On Tue, Mar 22, 2022 at 8:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >> On Tue, Mar 22, 2022 at 3:39 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >>\n> >> Your patch looks good to me. I have found one more similar instance in\n> >> the same file and changed that as well accordingly. Let me know what\n> >> you think of the attached?\n> >>\n> >\n> > Hi, Amit:\n> > The patch looks good to me.\n> >\n>\n> Thanks. I'll push this tomorrow unless Tom or someone else wants to\n> look at it or would like to commit.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 25 Mar 2022 12:05:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: freeing bms explicitly"
}
] |
[
{
"msg_contents": "Add ALTER SUBSCRIPTION ... SKIP.\n\nThis feature allows skipping the transaction on subscriber nodes.\n\nIf incoming change violates any constraint, logical replication stops\nuntil it's resolved. Currently, users need to either manually resolve the\nconflict by updating a subscriber-side database or by using function\npg_replication_origin_advance() to skip the conflicting transaction. This\ncommit introduces a simpler way to skip the conflicting transactions.\n\nThe user can specify LSN by ALTER SUBSCRIPTION ... SKIP (lsn = XXX),\nwhich allows the apply worker to skip the transaction finished at\nspecified LSN. The apply worker skips all data modification changes within\nthe transaction.\n\nAuthor: Masahiko Sawada\nReviewed-by: Takamichi Osumi, Hou Zhijie, Peter Eisentraut, Amit Kapila, Shi Yu, Vignesh C, Greg Nancarrow, Haiying Tang, Euler Taveira\nDiscussion: https://postgr.es/m/CAD21AoDeScrsHhLyEPYqN3sydg6PxAPVBboK=30xJfUVihNZDA@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/208c5d65bbd60e33e272964578cb74182ac726a8\n\nModified Files\n--------------\ndoc/src/sgml/catalogs.sgml | 10 +\ndoc/src/sgml/logical-replication.sgml | 27 +--\ndoc/src/sgml/ref/alter_subscription.sgml | 42 +++++\nsrc/backend/catalog/pg_subscription.c | 1 +\nsrc/backend/catalog/system_views.sql | 2 +-\nsrc/backend/commands/subscriptioncmds.c | 73 ++++++++\nsrc/backend/parser/gram.y | 9 +\nsrc/backend/replication/logical/worker.c | 233 +++++++++++++++++++++++-\nsrc/bin/pg_dump/pg_dump.c | 4 +\nsrc/bin/psql/describe.c | 8 +-\nsrc/bin/psql/tab-complete.c | 5 +-\nsrc/include/catalog/catversion.h | 2 +-\nsrc/include/catalog/pg_subscription.h | 5 +\nsrc/include/nodes/parsenodes.h | 3 +-\nsrc/test/regress/expected/subscription.out | 126 +++++++------\nsrc/test/regress/sql/subscription.sql | 11 ++\nsrc/test/subscription/t/029_disable_on_error.pl | 94 ----------\nsrc/test/subscription/t/029_on_error.pl | 183 
+++++++++++++++++++\n18 files changed, 665 insertions(+), 173 deletions(-)",
"msg_date": "Tue, 22 Mar 2022 01:56:03 +0000",
"msg_from": "Amit Kapila <akapila@postgresql.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Add ALTER SUBSCRIPTION ... SKIP."
},
{
"msg_contents": "Amit Kapila <akapila@postgresql.org> writes:\n> The user can specify LSN by ALTER SUBSCRIPTION ... SKIP (lsn = XXX),\n> which allows the apply worker to skip the transaction finished at\n> specified LSN. The apply worker skips all data modification changes within\n> the transaction.\n\nHmm ... this seems like a really poor choice of syntax.\nI would expect ALTER to be used for changes of persistent\nobject properties, which surely this is not?\n\nAn alternative perhaps could be to invoke the operation\nvia a function.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 21 Mar 2022 22:06:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add ALTER SUBSCRIPTION ... SKIP."
},
{
"msg_contents": "On Tue, Mar 22, 2022 at 7:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <akapila@postgresql.org> writes:\n> > The user can specify LSN by ALTER SUBSCRIPTION ... SKIP (lsn = XXX),\n> > which allows the apply worker to skip the transaction finished at\n> > specified LSN. The apply worker skips all data modification changes within\n> > the transaction.\n>\n> Hmm ... this seems like a really poor choice of syntax.\n> I would expect ALTER to be used for changes of persistent\n> object properties, which surely this is not?\n>\n\nWe have discussed this syntax and discussed the point that this is\ndifferent from other properties of subscription like slot_name, binary\netc. and that is why we used SKIP for it rather than the usual way by\nusing SET [1][2]. There could also be other such options in future\nlike XID or other attributes, so we thought it would be easier to\nextend it.\n\n> An alternative perhaps could be to invoke the operation\n> via a function.\n>\n\nI agree that is another alternative but could be inconvenient if there\nare multiple such functions. We already have one\npg_replication_origin_advance().\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1KD_C_0LSxaYB0UbG59VOgjf4mXBeSYbVWCLXAnnuqnPw%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/f716f584-65d0-fe83-2e84-53426631739a%40enterprisedb.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 22 Mar 2022 08:23:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add ALTER SUBSCRIPTION ... SKIP."
},
{
"msg_contents": "Hi,\n\nOn 2022-03-22 01:56:03 +0000, Amit Kapila wrote:\n> Add ALTER SUBSCRIPTION ... SKIP.\n> \n> This feature allows skipping the transaction on subscriber nodes.\n> \n> If incoming change violates any constraint, logical replication stops\n> until it's resolved. Currently, users need to either manually resolve the\n> conflict by updating a subscriber-side database or by using function\n> pg_replication_origin_advance() to skip the conflicting transaction. This\n> commit introduces a simpler way to skip the conflicting transactions.\n> \n> The user can specify LSN by ALTER SUBSCRIPTION ... SKIP (lsn = XXX),\n> which allows the apply worker to skip the transaction finished at\n> specified LSN. The apply worker skips all data modification changes within\n> the transaction.\n\nThis was missing an include of xlogdefs.h in pg_subscription.h, thus failing\nin headerscheck. See e.g.\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2022-03-22%2022%3A22%3A05\n\nI've pushed the trivial fix for that. I'll propose adding headerscheck to CI /\ncfbot.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 22 Mar 2022 16:59:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add ALTER SUBSCRIPTION ... SKIP."
},
{
"msg_contents": "On Wed, Mar 23, 2022 at 5:29 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-03-22 01:56:03 +0000, Amit Kapila wrote:\n> > Add ALTER SUBSCRIPTION ... SKIP.\n> >\n> > This feature allows skipping the transaction on subscriber nodes.\n> >\n> > If incoming change violates any constraint, logical replication stops\n> > until it's resolved. Currently, users need to either manually resolve the\n> > conflict by updating a subscriber-side database or by using function\n> > pg_replication_origin_advance() to skip the conflicting transaction. This\n> > commit introduces a simpler way to skip the conflicting transactions.\n> >\n> > The user can specify LSN by ALTER SUBSCRIPTION ... SKIP (lsn = XXX),\n> > which allows the apply worker to skip the transaction finished at\n> > specified LSN. The apply worker skips all data modification changes within\n> > the transaction.\n>\n> This was missing an include of xlogdefs.h in pg_subscription.h, thus failing\n> in headerscheck. See e.g.\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2022-03-22%2022%3A22%3A05\n>\n> I've pushed the trivial fix for that. I'll propose adding headerscheck to CI /\n> cfbot.\n>\n\nThanks.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 23 Mar 2022 07:26:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add ALTER SUBSCRIPTION ... SKIP."
},
{
"msg_contents": "On 2022-Mar-22, Amit Kapila wrote:\n\n> Add ALTER SUBSCRIPTION ... SKIP.\n\nThere are two messages here that seem oddly worded.\n\nmsgid \"start skipping logical replication transaction finished at %X/%X\"\nmsgid \"done skipping logical replication transaction finished at %X/%X\"\n\nTwo complaints here. First, the phrases \"start / finished\" and \"done /\nfinished\" look very strange. It took me a while to realize that\n\"finished\" refers to the LSN, not to the skipping operation. Do we ever\ntalk about a transaction \"finished at XYZ\" as opposed to a transaction\nwhose LSN is XYZ? (This became particularly strange when I realized\nthat the LSN might come from a PREPARE.)\n\nSecond, \"logical replication transaction\". Is it not a regular\ntransaction that we happen to be processing via logical replication?\n\nI think they should say something like\n\n\"logical replication starts skipping transaction with LSN %X/%X\"\n\"logical replication completed skipping transaction with LSN %X/%X\"\n\nOther ideas?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nOfficer Krupke, what are we to do?\nGee, officer Krupke, Krup you! (West Side Story, \"Gee, Officer Krupke\")\n\n\n",
"msg_date": "Sun, 4 Sep 2022 10:18:21 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add ALTER SUBSCRIPTION ... SKIP."
},
{
"msg_contents": "On Sun, Sep 4, 2022 at 1:48 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Mar-22, Amit Kapila wrote:\n>\n> > Add ALTER SUBSCRIPTION ... SKIP.\n>\n> There are two messages here that seem oddly worded.\n>\n> msgid \"start skipping logical replication transaction finished at %X/%X\"\n> msgid \"done skipping logical replication transaction finished at %X/%X\"\n>\n> Two complaints here. First, the phrases \"start / finished\" and \"done /\n> finished\" look very strange. It took me a while to realize that\n> \"finished\" refers to the LSN, not to the skipping operation. Do we ever\n> talk about a transaction \"finished at XYZ\" as opposed to a transaction\n> whose LSN is XYZ? (This became particularly strange when I realized\n> that the LSN might come from a PREPARE.)\n>\n\nThe reason to add \"finished at ...\" was to be explicit about whether\nit is a starting LSN or an end LSN of a transaction. We do have such\ndifferentiation in ReorderBufferTXN (first_lsn ... end_lsn).\n\n> Second, \"logical replication transaction\". Is it not a regular\n> transaction that we happen to be processing via logical replication?\n>\n> I think they should say something like\n>\n> \"logical replication starts skipping transaction with LSN %X/%X\"\n> \"logical replication completed skipping transaction with LSN %X/%X\"\n>\n\nThis looks better to me. If you find the above argument to\ndifferentiate between the start and end LSN convincing then we can\nthink of replacing \"with\" in the above messages with \"finished at\". I\nsee your point related to using \"finished at\" for PREPARE may not be a\ngood idea but I don't have better ideas for the same.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sun, 4 Sep 2022 17:08:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add ALTER SUBSCRIPTION ... SKIP."
},
{
"msg_contents": "On Sun, Sep 4, 2022 at 8:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Sep 4, 2022 at 1:48 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2022-Mar-22, Amit Kapila wrote:\n> >\n> > > Add ALTER SUBSCRIPTION ... SKIP.\n> >\n> > There are two messages here that seem oddly worded.\n> >\n> > msgid \"start skipping logical replication transaction finished at %X/%X\"\n> > msgid \"done skipping logical replication transaction finished at %X/%X\"\n> >\n> > Two complaints here. First, the phrases \"start / finished\" and \"done /\n> > finished\" look very strange. It took me a while to realize that\n> > \"finished\" refers to the LSN, not to the skipping operation. Do we ever\n> > talk about a transaction \"finished at XYZ\" as opposed to a transaction\n> > whose LSN is XYZ? (This became particularly strange when I realized\n> > that the LSN might come from a PREPARE.)\n> >\n>\n> The reason to add \"finished at ...\" was to be explicit about whether\n> it is a starting LSN or an end LSN of a transaction. We do have such\n> differentiation in ReorderBufferTXN (first_lsn ... end_lsn).\n>\n> > Second, \"logical replication transaction\". Is it not a regular\n> > transaction that we happen to be processing via logical replication?\n> >\n> > I think they should say something like\n> >\n> > \"logical replication starts skipping transaction with LSN %X/%X\"\n> > \"logical replication completed skipping transaction with LSN %X/%X\"\n> >\n>\n> This looks better to me.\n\n+1\n\n> If you find the above argument to\n> differentiate between the start and end LSN convincing then we can\n> think of replacing \"with\" in the above messages with \"finished at\". 
I\n> see your point related to using \"finished at\" for PREPARE may not be a\n> good idea but I don't have better ideas for the same.\n\nGiven that the user normally doesn't need to be aware of the\ndifference between start LSN and end LSN in the context of using this\nfeature, I think we can use \"with LSN %X/%X\".\n\nRegards,\n\n-- \nMasahiko Sawada\n\n\n",
"msg_date": "Tue, 6 Sep 2022 23:10:04 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add ALTER SUBSCRIPTION ... SKIP."
},
{
"msg_contents": "On Tue, Sep 6, 2022 at 7:40 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> > > I think they should say something like\n> > >\n> > > \"logical replication starts skipping transaction with LSN %X/%X\"\n> > > \"logical replication completed skipping transaction with LSN %X/%X\"\n> > >\n> >\n> > This looks better to me.\n>\n> +1\n>\n> > If you find the above argument to\n> > differentiate between the start and end LSN convincing then we can\n> > think of replacing \"with\" in the above messages with \"finished at\". I\n> > see your point related to using \"finished at\" for PREPARE may not be a\n> > good idea but I don't have better ideas for the same.\n>\n> Given that the user normally doesn't need to be aware of the\n> difference between start LSN and end LSN in the context of using this\n> feature, I think we can use \"with LSN %X/%X\".\n>\n\nFair enough.\n\nAlvaro, would you like to push your proposed change? Otherwise, I am\nhappy to take care of this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 7 Sep 2022 09:16:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add ALTER SUBSCRIPTION ... SKIP."
}
] |
[
{
"msg_contents": "Hi,\n\nHere's a strange one-off failure seen on CI[1], in the\nCompilerWarnings task where we check that mingw cross-compile works:\n\n[10:48:29.045] time make -s -j${BUILD_JOBS} world-bin\n[10:48:38.705] x86_64-w64-mingw32-gcc: error: win32ver.o: No such file\nor directory\n[10:48:38.705] make[3]: *** [Makefile:44: pg_dumpall] Error 1\n[10:48:38.705] make[3]: *** Waiting for unfinished jobs....\n[10:48:38.709] make[2]: *** [Makefile:43: all-pg_dump-recurse] Error 2\n[10:48:38.709] make[2]: *** Waiting for unfinished jobs....\n[10:48:38.918] make[1]: *** [Makefile:42: all-bin-recurse] Error 2\n[10:48:38.918] make: *** [GNUmakefile:21: world-bin-src-recurse] Error 2\n\nI guess this implies a dependency problem somewhere around\nsrc/makefiles/Makefile.win32 but I'm not familiar with how that .rc\nstuff is supposed to work and I figured I'd mention it here in case\nit's obvious to someone else...\n\n[1] https://cirrus-ci.com/task/5546921619095552\n\n\n",
"msg_date": "Tue, 22 Mar 2022 15:47:13 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Broken make dependency somewhere near win32ver.o?"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-22 15:47:13 +1300, Thomas Munro wrote:\n> Here's a strange one-off failure seen on CI[1], in the\n> CompilerWarnings task where we check that mingw cross-compile works:\n> \n> [10:48:29.045] time make -s -j${BUILD_JOBS} world-bin\n> [10:48:38.705] x86_64-w64-mingw32-gcc: error: win32ver.o: No such file\n> or directory\n> [10:48:38.705] make[3]: *** [Makefile:44: pg_dumpall] Error 1\n> [10:48:38.705] make[3]: *** Waiting for unfinished jobs....\n> [10:48:38.709] make[2]: *** [Makefile:43: all-pg_dump-recurse] Error 2\n> [10:48:38.709] make[2]: *** Waiting for unfinished jobs....\n> [10:48:38.918] make[1]: *** [Makefile:42: all-bin-recurse] Error 2\n> [10:48:38.918] make: *** [GNUmakefile:21: world-bin-src-recurse] Error 2\n> \n> I guess this implies a dependency problem somewhere around\n> src/makefiles/Makefile.win32 but I'm not familiar with how that .rc\n> stuff is supposed to work and I figured I'd mention it here in case\n> it's obvious to someone else...\n\nOh. I think I figured out how to reproduce it reliably:\n\nmake -s clean\nmake -j pg_dumpall -C src/bin/pg_dump/\n...\nx86_64-w64-mingw32-gcc: error: win32ver.o: No such file or directory\n\n\nThe problem looks to be that pg_dumpall doesn't have a dependency on OBJs,\nwhich in turn is what contains the dependency on WIN32RES, which in turn\ncontains win32ver.o. So the build succeeds if pg_dump/restores's dependencies\nare built first, but not if pg_dumpall starts to be built before that...\n\nSeems we just need to add $(WIN32RES) to pg_dumpall: ?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 21 Mar 2022 20:14:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Broken make dependency somewhere near win32ver.o?"
},
{
"msg_contents": "On Tue, Mar 22, 2022 at 4:14 PM Andres Freund <andres@anarazel.de> wrote:\n> The problem looks to be that pg_dumpall doesn't have a dependency on OBJs,\n> which in turn is what contains the dependency on WIN32RES, which in turn\n> contains win32ver.o. So the build succeeds if pg_dump/restores's dependencies\n> are built first, but not if pg_dumpall starts to be built before that...\n>\n> Seems we just need to add $(WIN32RES) to pg_dumpall: ?\n\nAh, yeah, that looks right. I don't currently have a mingw setup to\ntest, but clearly $(WIN32RES) is passed to $(CC) despite not being\nlisted as a dependency.\n\n\n",
"msg_date": "Tue, 22 Mar 2022 18:09:08 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Broken make dependency somewhere near win32ver.o?"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-22 18:09:08 +1300, Thomas Munro wrote:\n> On Tue, Mar 22, 2022 at 4:14 PM Andres Freund <andres@anarazel.de> wrote:\n> > The problem looks to be that pg_dumpall doesn't have a dependency on OBJs,\n> > which in turn is what contains the dependency on WIN32RES, which in turn\n> > contains win32ver.o. So the build succeeds if pg_dump/restores's dependencies\n> > are built first, but not if pg_dumpall starts to be built before that...\n> >\n> > Seems we just need to add $(WIN32RES) to pg_dumpall: ?\n> \n> Ah, yeah, that looks right. I don't currently have a mingw setup to\n> test, but clearly $(WIN32RES) is passed to $(CC) despite not being\n> listed as a dependency.\n\nPushed a fix for that. Ended up doing it for all branches, although I was\ndebating with myself about doing so.\n\nI did a quick search and didn't find other cases of this problem.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 22 Mar 2022 08:30:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Broken make dependency somewhere near win32ver.o?"
}
] |
[
{
"msg_contents": "Hi,\n\nThis feature adds an option to skip changes of all tables in specified\nschema while creating publication.\nThis feature is helpful for use cases where the user wants to\nsubscribe to all the changes except for the changes present in a few\nschemas.\nEx:\nCREATE PUBLICATION pub1 FOR ALL TABLES SKIP ALL TABLES IN SCHEMA s1,s2;\nOR\nALTER PUBLICATION pub1 ADD SKIP ALL TABLES IN SCHEMA s1,s2;\n\nA new column pnskip is added to table \"pg_publication_namespace\", to\nmaintain the schemas that the user wants to skip publishing through\nthe publication. Modified the output plugin (pgoutput) to skip\npublishing the changes if the relation is part of skip schema\npublication.\nAs a continuation to this, I will work on implementing skipping tables\nfrom all tables in schema and skipping tables from all tables\npublication.\n\nAttached patch has the implementation for this.\nThis feature is for the pg16 version.\nThoughts?\n\nRegards,\nVignesh",
"msg_date": "Tue, 22 Mar 2022 12:38:43 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Skipping schema changes in publication"
},
{
"msg_contents": "On Tue, Mar 22, 2022 at 12:38 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Hi,\n>\n> This feature adds an option to skip changes of all tables in specified\n> schema while creating publication.\n> This feature is helpful for use cases where the user wants to\n> subscribe to all the changes except for the changes present in a few\n> schemas.\n> Ex:\n> CREATE PUBLICATION pub1 FOR ALL TABLES SKIP ALL TABLES IN SCHEMA s1,s2;\n> OR\n> ALTER PUBLICATION pub1 ADD SKIP ALL TABLES IN SCHEMA s1,s2;\n>\n> A new column pnskip is added to table \"pg_publication_namespace\", to\n> maintain the schemas that the user wants to skip publishing through\n> the publication. Modified the output plugin (pgoutput) to skip\n> publishing the changes if the relation is part of skip schema\n> publication.\n> As a continuation to this, I will work on implementing skipping tables\n> from all tables in schema and skipping tables from all tables\n> publication.\n>\n> Attached patch has the implementation for this.\n\nThe patch was not applying on top of HEAD because of the recent\ncommits, attached patch is rebased on top of HEAD.\n\nRegards,\nVignesh",
"msg_date": "Sat, 26 Mar 2022 19:37:26 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Sat, Mar 26, 2022 at 7:37 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Mar 22, 2022 at 12:38 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > This feature adds an option to skip changes of all tables in specified\n> > schema while creating publication.\n> > This feature is helpful for use cases where the user wants to\n> > subscribe to all the changes except for the changes present in a few\n> > schemas.\n> > Ex:\n> > CREATE PUBLICATION pub1 FOR ALL TABLES SKIP ALL TABLES IN SCHEMA s1,s2;\n> > OR\n> > ALTER PUBLICATION pub1 ADD SKIP ALL TABLES IN SCHEMA s1,s2;\n> >\n> > A new column pnskip is added to table \"pg_publication_namespace\", to\n> > maintain the schemas that the user wants to skip publishing through\n> > the publication. Modified the output plugin (pgoutput) to skip\n> > publishing the changes if the relation is part of skip schema\n> > publication.\n> > As a continuation to this, I will work on implementing skipping tables\n> > from all tables in schema and skipping tables from all tables\n> > publication.\n> >\n> > Attached patch has the implementation for this.\n>\n> The patch was not applying on top of HEAD because of the recent\n> commits, attached patch is rebased on top of HEAD.\n\nThe patch does not apply on top of HEAD because of the recent commit,\nattached patch is rebased on top of HEAD.\n\nI have also included the implementation for skipping a few tables from\nall tables publication, the 0002 patch has the implementation for the\nsame.\nThis feature is helpful for use cases where the user wants to\nsubscribe to all the changes except for the changes present in a few\ntables.\nEx:\nCREATE PUBLICATION pub1 FOR ALL TABLES SKIP TABLE t1,t2;\nOR\nALTER PUBLICATION pub1 ADD SKIP TABLE t1,t2;\n\nRegards,\nVignesh",
"msg_date": "Tue, 12 Apr 2022 11:53:29 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Tue, Apr 12, 2022 at 11:53 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sat, Mar 26, 2022 at 7:37 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Tue, Mar 22, 2022 at 12:38 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > This feature adds an option to skip changes of all tables in specified\n> > > schema while creating publication.\n> > > This feature is helpful for use cases where the user wants to\n> > > subscribe to all the changes except for the changes present in a few\n> > > schemas.\n> > > Ex:\n> > > CREATE PUBLICATION pub1 FOR ALL TABLES SKIP ALL TABLES IN SCHEMA s1,s2;\n> > > OR\n> > > ALTER PUBLICATION pub1 ADD SKIP ALL TABLES IN SCHEMA s1,s2;\n> > >\n> > > A new column pnskip is added to table \"pg_publication_namespace\", to\n> > > maintain the schemas that the user wants to skip publishing through\n> > > the publication. Modified the output plugin (pgoutput) to skip\n> > > publishing the changes if the relation is part of skip schema\n> > > publication.\n> > > As a continuation to this, I will work on implementing skipping tables\n> > > from all tables in schema and skipping tables from all tables\n> > > publication.\n> > >\n> > > Attached patch has the implementation for this.\n> >\n> > The patch was not applying on top of HEAD because of the recent\n> > commits, attached patch is rebased on top of HEAD.\n>\n> The patch does not apply on top of HEAD because of the recent commit,\n> attached patch is rebased on top of HEAD.\n>\n> I have also included the implementation for skipping a few tables from\n> all tables publication, the 0002 patch has the implementation for the\n> same.\n> This feature is helpful for use cases where the user wants to\n> subscribe to all the changes except for the changes present in a few\n> tables.\n> Ex:\n> CREATE PUBLICATION pub1 FOR ALL TABLES SKIP TABLE t1,t2;\n> OR\n> ALTER PUBLICATION pub1 ADD SKIP TABLE t1,t2;\n>\n\nFor the second syntax (Alter Publication 
...), isn't it better to\navoid using ADD? It looks odd to me because we are not adding anything\nin publication with this sytax.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 12 Apr 2022 12:19:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Tue, Apr 12, 2022 at 12:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 12, 2022 at 11:53 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Sat, Mar 26, 2022 at 7:37 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Tue, Mar 22, 2022 at 12:38 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > Hi,\n> > > >\n> > > > This feature adds an option to skip changes of all tables in specified\n> > > > schema while creating publication.\n> > > > This feature is helpful for use cases where the user wants to\n> > > > subscribe to all the changes except for the changes present in a few\n> > > > schemas.\n> > > > Ex:\n> > > > CREATE PUBLICATION pub1 FOR ALL TABLES SKIP ALL TABLES IN SCHEMA s1,s2;\n> > > > OR\n> > > > ALTER PUBLICATION pub1 ADD SKIP ALL TABLES IN SCHEMA s1,s2;\n> > > >\n> > > > A new column pnskip is added to table \"pg_publication_namespace\", to\n> > > > maintain the schemas that the user wants to skip publishing through\n> > > > the publication. 
Modified the output plugin (pgoutput) to skip\n> > > > publishing the changes if the relation is part of skip schema\n> > > > publication.\n> > > > As a continuation to this, I will work on implementing skipping tables\n> > > > from all tables in schema and skipping tables from all tables\n> > > > publication.\n> > > >\n> > > > Attached patch has the implementation for this.\n> > >\n> > > The patch was not applying on top of HEAD because of the recent\n> > > commits, attached patch is rebased on top of HEAD.\n> >\n> > The patch does not apply on top of HEAD because of the recent commit,\n> > attached patch is rebased on top of HEAD.\n> >\n> > I have also included the implementation for skipping a few tables from\n> > all tables publication, the 0002 patch has the implementation for the\n> > same.\n> > This feature is helpful for use cases where the user wants to\n> > subscribe to all the changes except for the changes present in a few\n> > tables.\n> > Ex:\n> > CREATE PUBLICATION pub1 FOR ALL TABLES SKIP TABLE t1,t2;\n> > OR\n> > ALTER PUBLICATION pub1 ADD SKIP TABLE t1,t2;\n> >\n>\n> For the second syntax (Alter Publication ...), isn't it better to\n> avoid using ADD? It looks odd to me because we are not adding anything\n> in publication with this sytax.\n\nI was thinking of the scenario where user initially creates the\npublication for all tables:\nCREATE PUBLICATION pub1 FOR ALL TABLES;\n\nAfter that user decides to skip few tables ex: t1, t2\n ALTER PUBLICATION pub1 ADD SKIP TABLE t1,t2;\n\nI thought of supporting this syntax if incase user decides to add the\nskipping of a few tables later.\nThoughts?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 12 Apr 2022 16:16:53 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Tue, Apr 12, 2022 at 4:17 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Apr 12, 2022 at 12:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > For the second syntax (Alter Publication ...), isn't it better to\n> > avoid using ADD? It looks odd to me because we are not adding anything\n> > in publication with this sytax.\n>\n> I was thinking of the scenario where user initially creates the\n> publication for all tables:\n> CREATE PUBLICATION pub1 FOR ALL TABLES;\n>\n> After that user decides to skip few tables ex: t1, t2\n> ALTER PUBLICATION pub1 ADD SKIP TABLE t1,t2;\n>\n> I thought of supporting this syntax if incase user decides to add the\n> skipping of a few tables later.\n>\n\nI understand that part but what I pointed out was that it might be\nbetter to avoid using ADD keyword in this syntax like: ALTER\nPUBLICATION pub1 SKIP TABLE t1,t2;\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 12 Apr 2022 16:46:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Tue, Apr 12, 2022 at 4:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 12, 2022 at 4:17 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Tue, Apr 12, 2022 at 12:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > For the second syntax (Alter Publication ...), isn't it better to\n> > > avoid using ADD? It looks odd to me because we are not adding anything\n> > > in publication with this sytax.\n> >\n> > I was thinking of the scenario where user initially creates the\n> > publication for all tables:\n> > CREATE PUBLICATION pub1 FOR ALL TABLES;\n> >\n> > After that user decides to skip few tables ex: t1, t2\n> > ALTER PUBLICATION pub1 ADD SKIP TABLE t1,t2;\n> >\n> > I thought of supporting this syntax if incase user decides to add the\n> > skipping of a few tables later.\n> >\n>\n> I understand that part but what I pointed out was that it might be\n> better to avoid using ADD keyword in this syntax like: ALTER\n> PUBLICATION pub1 SKIP TABLE t1,t2;\n\nCurrently we are supporting Alter publication using the following syntax:\nALTER PUBLICATION pub1 ADD TABLE t1;\nALTER PUBLICATION pub1 SET TABLE t1;\nALTER PUBLICATION pub1 DROP TABLE T1;\nALTER PUBLICATION pub1 ADD ALL TABLES IN SCHEMA sch1;\nALTER PUBLICATION pub1 SET ALL TABLES IN SCHEMA sch1;\nALTER PUBLICATION pub1 DROP ALL TABLES IN SCHEMA sch1;\n\nI have extended the new syntax in similar lines:\nALTER PUBLICATION pub1 ADD SKIP TABLE t1;\nALTER PUBLICATION pub1 SET SKIP TABLE t1;\nALTER PUBLICATION pub1 DROP SKIP TABLE T1;\n\nI did it like this to maintain consistency.\nBut I'm fine doing it either way to keep it simple for the user.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 13 Apr 2022 08:45:10 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Wed, Apr 13, 2022 at 8:45 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Apr 12, 2022 at 4:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I understand that part but what I pointed out was that it might be\n> > better to avoid using ADD keyword in this syntax like: ALTER\n> > PUBLICATION pub1 SKIP TABLE t1,t2;\n>\n> Currently we are supporting Alter publication using the following syntax:\n> ALTER PUBLICATION pub1 ADD TABLE t1;\n> ALTER PUBLICATION pub1 SET TABLE t1;\n> ALTER PUBLICATION pub1 DROP TABLE T1;\n> ALTER PUBLICATION pub1 ADD ALL TABLES IN SCHEMA sch1;\n> ALTER PUBLICATION pub1 SET ALL TABLES IN SCHEMA sch1;\n> ALTER PUBLICATION pub1 DROP ALL TABLES IN SCHEMA sch1;\n>\n> I have extended the new syntax in similar lines:\n> ALTER PUBLICATION pub1 ADD SKIP TABLE t1;\n> ALTER PUBLICATION pub1 SET SKIP TABLE t1;\n> ALTER PUBLICATION pub1 DROP SKIP TABLE T1;\n>\n> I did it like this to maintain consistency.\n>\n\nWhat is the difference between ADD and SET variants? I understand we\nneed some way to remove the SKIP table setting but not sure if DROP is\nthe best alternative.\n\nThe other ideas could be:\nTo set skip tables: ALTER PUBLICATION pub1 SKIP TABLE t1, t2...;\nTo reset skip tables: ALTER PUBLICATION pub1 SKIP TABLE; /* basically\nan empty list*/\nYet another way to reset skip tables: ALTER PUBLICATION pub1 RESET\nSKIP TABLE; /* Here we need to introduce RESET. */\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 13 Apr 2022 10:09:51 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Wed, Apr 13, 2022 at 2:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 13, 2022 at 8:45 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Tue, Apr 12, 2022 at 4:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > I understand that part but what I pointed out was that it might be\n> > > better to avoid using ADD keyword in this syntax like: ALTER\n> > > PUBLICATION pub1 SKIP TABLE t1,t2;\n> >\n> > Currently we are supporting Alter publication using the following syntax:\n> > ALTER PUBLICATION pub1 ADD TABLE t1;\n> > ALTER PUBLICATION pub1 SET TABLE t1;\n> > ALTER PUBLICATION pub1 DROP TABLE T1;\n> > ALTER PUBLICATION pub1 ADD ALL TABLES IN SCHEMA sch1;\n> > ALTER PUBLICATION pub1 SET ALL TABLES IN SCHEMA sch1;\n> > ALTER PUBLICATION pub1 DROP ALL TABLES IN SCHEMA sch1;\n> >\n> > I have extended the new syntax in similar lines:\n> > ALTER PUBLICATION pub1 ADD SKIP TABLE t1;\n> > ALTER PUBLICATION pub1 SET SKIP TABLE t1;\n> > ALTER PUBLICATION pub1 DROP SKIP TABLE T1;\n> >\n> > I did it like this to maintain consistency.\n> >\n>\n> What is the difference between ADD and SET variants? I understand we\n> need some way to remove the SKIP table setting but not sure if DROP is\n> the best alternative.\n>\n> The other ideas could be:\n> To set skip tables: ALTER PUBLICATION pub1 SKIP TABLE t1, t2...;\n> To reset skip tables: ALTER PUBLICATION pub1 SKIP TABLE; /* basically\n> an empty list*/\n> Yet another way to reset skip tables: ALTER PUBLICATION pub1 RESET\n> SKIP TABLE; /* Here we need to introduce RESET. */\n>\n\nWhen you were talking about SKIP TABLE then I liked the idea of:\n\nALTER ... SET SKIP TABLE; /* empty list to reset the table skips */\nALTER ... SET SKIP TABLE t1,t2; /* non-empty list to replace the table skips */\n\nBut when you apply that rule to SKIP ALL TABLES IN SCHEMA, then the\nreset syntax looks too awkward.\n\nALTER ... 
SET SKIP ALL TABLES IN SCHEMA; /* empty list to reset the\nschema skips */\nALTER ... SET SKIP ALL TABLES IN SCHEMA s1,s2; /* non-empty list to\nreplace the schema skips */\n\n~~~\n\nIMO it might be simpler to do it like:\n\nALTER ... DROP SKIP; /* reset/remove the skip */\nALTER ... SET SKIP TABLE t1,t2; /* non-empty list to replace table skips */\nALTER ... SET SKIP ALL TABLES IS SCHEMA s1,s2; /* non-empty list to\nreplace schema skips */\n\nI don't really think that the ALTER ... SET SKIP empty list should be\nsupported (because reason above)\nI don't really think that the ALTER ... ADD SKIP should be supported.\n\n===\n\nMore questions - What happens if the skip table or skip schema no\nlonger exists exist? Does that mean error? Maybe there is a\ndependency on it but OTOH it might be annoying - e.g. to disallow a\nDROP TABLE when the only dependency was that the user wanted to SKIP\nit...\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 13 Apr 2022 17:35:39 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Tue, Apr 12, 2022 2:23 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> \r\n> The patch does not apply on top of HEAD because of the recent commit,\r\n> attached patch is rebased on top of HEAD.\r\n> \r\n\r\nThanks for your patch. Here are some comments for 0001 patch.\r\n\r\n1. doc/src/sgml/catalogs.sgml\r\n@@ -6438,6 +6438,15 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l\r\n A null value indicates that all columns are published.\r\n </para></entry>\r\n </row>\r\n+\r\n+ <row>\r\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n+ <structfield>pnskip</structfield> <type>bool</type>\r\n+ </para>\r\n+ <para>\r\n+ True if the schema is skip schema\r\n+ </para></entry>\r\n+ </row>\r\n </tbody>\r\n </tgroup>\r\n </table>\r\n\r\nThis change is added to pg_publication_rel, I think it should be added to\r\npg_publication_namespace, right?\r\n\r\n2.\r\npostgres=# alter publication p1 add skip all tables in schema s1,s2;\r\nERROR: schema \"s1\" is already member of publication \"p1\"\r\n\r\nThis error message seems odd to me, can we improve it? 
Something like:\r\nschema \"s1\" is already skipped in publication \"p1\"\r\n\r\n3.\r\ncreate table tbl (a int primary key);\r\ncreate schema s1;\r\ncreate schema s2;\r\ncreate table s1.tbl (a int);\r\ncreate publication p1 for all tables skip all tables in schema s1,s2;\r\n\r\npostgres=# \\dRp+\r\n Publication p1\r\n Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root\r\n----------+------------+---------+---------+---------+-----------+----------\r\n postgres | t | t | t | t | t | f\r\nSkip tables from schemas:\r\n \"s1\"\r\n \"s2\"\r\n\r\npostgres=# select * from pg_publication_tables;\r\n pubname | schemaname | tablename\r\n---------+------------+-----------\r\n p1 | public | tbl\r\n p1 | s1 | tbl\r\n(2 rows)\r\n\r\nThere shouldn't be a record of s1.tbl, since all tables in schema s1 are skipped.\r\n\r\nI found that it is caused by the following code:\r\n\r\nsrc/backend/catalog/pg_publication.c\r\n+\tforeach(cell, pubschemalist)\r\n+\t{\r\n+\t\tPublicationSchInfo *pubsch = (PublicationSchInfo *) lfirst(cell);\r\n+\r\n+\t\tskipschemaidlist = lappend_oid(result, pubsch->oid);\r\n+\t}\r\n\r\nThe first argument to append_oid() seems wrong, should it be:\r\n\r\nskipschemaidlist = lappend_oid(skipschemaidlist, pubsch->oid);\r\n\r\n\r\n4. src/backend/commands/publicationcmds.c\r\n\r\n/*\r\n * Convert the PublicationObjSpecType list into schema oid list and\r\n * PublicationTable list.\r\n */\r\nstatic void\r\nObjectsInPublicationToOids(List *pubobjspec_list, ParseState *pstate,\r\n\t\t\t\t\t\t List **rels, List **schemas)\r\n\r\nShould we modify the comment of ObjectsInPublicationToOids()?\r\n\"schema oid list\" should be \"PublicationSchInfo list\".\r\n\r\nRegards,\r\nShi yu\r\n\r\n",
"msg_date": "Thu, 14 Apr 2022 08:55:01 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping schema changes in publication"
},
{
"msg_contents": "On Tue, Apr 12, 2022 at 2:23 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> The patch does not apply on top of HEAD because of the recent commit,\r\n> attached patch is rebased on top of HEAD.\r\nThanks for your patches.\r\n\r\nHere are some comments for v1-0001:\r\n1.\r\nI found the patch add the following two new functions in gram.y:\r\npreprocess_alltables_pubobj_list, check_skip_in_pubobj_list.\r\nThese two functions look similar. So could we just add one new function?\r\nBesides, do we need the API `location` in new function\r\npreprocess_alltables_pubobj_list? It seems that \"location\" is not used in this\r\nnew function.\r\nIn addition, the location of error cursor in the messages seems has a little\r\nproblem. For example:\r\npostgres=# create publication pub for all TABLES skip all tables in schema public, table test;\r\nERROR: only SKIP ALL TABLES IN SCHEMA or SKIP TABLE can be specified with ALL TABLES option\r\nLINE 1: create publication pub for all TABLES skip all tables in sch...\r\n ^\r\n(The location of error cursor is under 'create')\r\n\r\n2. I think maybe there is a minor missing in function\r\npreprocess_alltables_pubobj_list and check_skip_in_pubobj_list:\r\nWe seem to be missing the CURRENT_SCHEMA case.\r\nFor example(In function preprocess_alltables_pubobj_list) :\r\n+\t\t/* Only SKIP ALL TABLES IN SCHEMA option supported with ALL TABLES */\r\n+\t\tif (pubobj->pubobjtype != PUBLICATIONOBJ_TABLES_IN_SCHEMA ||\r\n+\t\t\t!pubobj->skip)\r\nmaybe need to be changed like this:\r\n+\t\t/* Only SKIP ALL TABLES IN SCHEMA option supported with ALL TABLES */\r\n+\t\tif ((pubobj->pubobjtype != PUBLICATIONOBJ_TABLES_IN_SCHEMA &&\r\n+\t\t pubobj->pubobjtype != PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA) &&\r\n+\t\t\tpubobj->skip)\r\n\r\n3. 
I think maybe there are some minor missing in create_publication.sgml.\r\n+ [ FOR ALL TABLES [SKIP ALL TABLES IN SCHEMA { <replaceable class=\"parameter\">schema_name</replaceable> | CURRENT_SCHEMA }]\r\nmaybe need to be changed to this:\r\n+ [ FOR ALL TABLES [SKIP ALL TABLES IN SCHEMA { <replaceable class=\"parameter\">schema_name</replaceable> | CURRENT_SCHEMA } [, ... ]]\r\n\r\n4. The error message of function CreatePublication.\r\nDoes the message below need to be modified like the comment?\r\nIn addition, I think maybe \"FOR/SKIP\" is better.\r\n@@ -835,18 +843,21 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)\r\n-\t\t/* FOR ALL TABLES IN SCHEMA requires superuser */\r\n+\t\t/* FOR [SKIP] ALL TABLES IN SCHEMA requires superuser */\r\n \t\tif (list_length(schemaidlist) > 0 && !superuser())\r\n \t\t\tereport(ERROR,\r\n \t\t\t\t\terrcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\r\n \t\t\t\t\terrmsg(\"must be superuser to create FOR ALL TABLES IN SCHEMA publication\"));\r\n\r\n5.\r\nI think there are some minor missing in tab-complete.c.\r\n+\t\t\t Matches(\"CREATE\", \"PUBLICATION\", MatchAny, \"FOR\", \"SKIP\", \"ALL\", \"TABLES\", \"IN\", \"SCHEMA\"))\r\nmaybe need to be changed to this:\r\n+\t\t\t Matches(\"CREATE\", \"PUBLICATION\", MatchAny, \"FOR\", \"ALL\", \"TABLES\", \"SKIP\", \"ALL\", \"TABLES\", \"IN\", \"SCHEMA\"))\r\n\r\n+\t\t\t Matches(\"CREATE\", \"PUBLICATION\", MatchAny, \"SKIP\", \"FOR\", \"ALL\", \"TABLES\", \"IN\", \"SCHEMA\", MatchAny)) &&\r\nmaybe need to be changed to this:\r\n+\t\t\t Matches(\"CREATE\", \"PUBLICATION\", MatchAny, \"FOR\", \"ALL\", \"TABLES\", \"SKIP\", \"ALL\", \"TABLES\", \"IN\", \"SCHEMA\", MatchAny)) &&\r\n\r\n6.\r\nIn function get_rel_sync_entry, do we need `if (!publish)` in below code?\r\nI think `publish` is always false here, as we delete the check for\r\n\"pub->alltables\".\r\n```\r\n-\t\t\t/*\r\n-\t\t\t * If this is a FOR ALL TABLES publication, pick the partition root\r\n-\t\t\t * and set the 
ancestor level accordingly.\r\n-\t\t\t */\r\n-\t\t\tif (pub->alltables)\r\n-\t\t\t{\r\n-\t\t\t\t......\r\n-\t\t\t}\r\n-\r\n \t\t\tif (!publish)\r\n```\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Thu, 14 Apr 2022 09:33:15 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping schema changes in publication"
},
{
"msg_contents": "On 12.04.22 08:23, vignesh C wrote:\n> I have also included the implementation for skipping a few tables from\n> all tables publication, the 0002 patch has the implementation for the\n> same.\n> This feature is helpful for use cases where the user wants to\n> subscribe to all the changes except for the changes present in a few\n> tables.\n> Ex:\n> CREATE PUBLICATION pub1 FOR ALL TABLES SKIP TABLE t1,t2;\n> OR\n> ALTER PUBLICATION pub1 ADD SKIP TABLE t1,t2;\n\nWe have already allocated the \"skip\" terminology for skipping \ntransactions, which is a dynamic run-time action. We are also using the \nterm \"skip\" elsewhere to skip locked rows, which is similarly a run-time \naction. I think it would be confusing to use the term SKIP for DDL \nconstruction.\n\nLet's find another term like \"omit\", \"except\", etc.\n\nI would also think about this in broader terms. For example, sometimes \npeople want features like \"all columns except these\" in certain places. \nThe syntax for those things should be similar.\n\nThat said, I'm not sure this feature is worth the trouble. If this is \nuseful, what about \"whole database except these schemas\"? What about \n\"create this database from this template except these schemas\". This \ncould get out of hand. I think we should encourage users to group their \nobject the way they want and not offer these complicated negative \nselection mechanisms.\n\n\n",
"msg_date": "Thu, 14 Apr 2022 15:47:58 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Thu, Apr 14, 2022, at 10:47 AM, Peter Eisentraut wrote:\n> On 12.04.22 08:23, vignesh C wrote:\n> > I have also included the implementation for skipping a few tables from\n> > all tables publication, the 0002 patch has the implementation for the\n> > same.\n> > This feature is helpful for use cases where the user wants to\n> > subscribe to all the changes except for the changes present in a few\n> > tables.\n> > Ex:\n> > CREATE PUBLICATION pub1 FOR ALL TABLES SKIP TABLE t1,t2;\n> > OR\n> > ALTER PUBLICATION pub1 ADD SKIP TABLE t1,t2;\n> \n> We have already allocated the \"skip\" terminology for skipping \n> transactions, which is a dynamic run-time action. We are also using the \n> term \"skip\" elsewhere to skip locked rows, which is similarly a run-time \n> action. I think it would be confusing to use the term SKIP for DDL \n> construction.\nI didn't like the SKIP choice too. We already have EXCEPT for IMPORT FOREIGN\nSCHEMA and if I were to suggest a keyword, it would be EXCEPT.\n\n> I would also think about this in broader terms. For example, sometimes \n> people want features like \"all columns except these\" in certain places. \n> The syntax for those things should be similar.\nThe questions are:\nWhat kind of issues does it solve?\nDo we have a workaround for it?\n\n> That said, I'm not sure this feature is worth the trouble. If this is \n> useful, what about \"whole database except these schemas\"? What about \n> \"create this database from this template except these schemas\". This \n> could get out of hand. I think we should encourage users to group their \n> object the way they want and not offer these complicated negative \n> selection mechanisms.\nI have the same impression too. We already provide a way to:\n\n* include individual tables;\n* include all tables;\n* include all tables in a certain schema.\n\nDoesn't it cover the majority of the use cases? We don't need to cover all\npossible cases in one DDL command. 
IMO the current grammar for CREATE\nPUBLICATION is already complicated after the ALL TABLES IN SCHEMA. You are\nproposing to add \"ALL TABLES SKIP ALL TABLES\" that sounds repetitive but it is\nnot; doesn't seem well-thought-out. I'm also concerned about possible gotchas\nfor this proposal. The first command above suggests that it skips all tables in a\ncertain schema. What happen if I decide to include a particular table of the\nskipped schema (second command)?\n\nALTER PUBLICATION pub1 ADD SKIP ALL TABLES IN SCHEMA s1,s2;\nALTER PUBLICATION pub1 ADD TABLE s1.foo;\n\nHaving said that I'm not wedded to this proposal. Unless someone provides\ncompelling use cases for this additional syntax, I think we should leave the\npublication syntax as is.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 14 Apr 2022 16:55:24 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Fri, Apr 15, 2022 at 1:26 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Thu, Apr 14, 2022, at 10:47 AM, Peter Eisentraut wrote:\n>\n> On 12.04.22 08:23, vignesh C wrote:\n> > I have also included the implementation for skipping a few tables from\n> > all tables publication, the 0002 patch has the implementation for the\n> > same.\n> > This feature is helpful for use cases where the user wants to\n> > subscribe to all the changes except for the changes present in a few\n> > tables.\n> > Ex:\n> > CREATE PUBLICATION pub1 FOR ALL TABLES SKIP TABLE t1,t2;\n> > OR\n> > ALTER PUBLICATION pub1 ADD SKIP TABLE t1,t2;\n>\n> We have already allocated the \"skip\" terminology for skipping\n> transactions, which is a dynamic run-time action. We are also using the\n> term \"skip\" elsewhere to skip locked rows, which is similarly a run-time\n> action. I think it would be confusing to use the term SKIP for DDL\n> construction.\n>\n> I didn't like the SKIP choice too. We already have EXCEPT for IMPORT FOREIGN\n> SCHEMA and if I were to suggest a keyword, it would be EXCEPT.\n>\n\n+1 for EXCEPT.\n\n> I would also think about this in broader terms. For example, sometimes\n> people want features like \"all columns except these\" in certain places.\n> The syntax for those things should be similar.\n>\n> The questions are:\n> What kind of issues does it solve?\n\nAs far as I understand, it is for usability, otherwise, users need to\nlist all required columns' names even if they don't want to hide most\nof the columns in the table. Consider user doesn't want to publish the\n'salary' or other sensitive information of executives/employees but\nwould like to publish all other columns. I feel in such cases it will\nbe a lot of work for the user especially when the table has many\ncolumns. I see that Oracle has a similar feature [1]. 
I think without\nthis it will be difficult for users to use this feature in some cases.\n\n> Do we have a workaround for it?\n>\n\nI can't think of any except the user needs to manually input all\nrequired columns. Can you think of any other workaround?\n\n> That said, I'm not sure this feature is worth the trouble. If this is\n> useful, what about \"whole database except these schemas\"? What about\n> \"create this database from this template except these schemas\". This\n> could get out of hand. I think we should encourage users to group their\n> object the way they want and not offer these complicated negative\n> selection mechanisms.\n>\n> I have the same impression too. We already provide a way to:\n>\n> * include individual tables;\n> * include all tables;\n> * include all tables in a certain schema.\n>\n> Doesn't it cover the majority of the use cases?\n>\n\nSimilar to columns, the same applies to tables. Users need to manually\nadd all tables for a database even when she wants to avoid only a\nhandful of tables from the database say because they contain sensitive\ninformation or are not required. I think we don't need to cover all\npossible exceptions but a few where users can avoid some tables would\nbe useful. If not, what kind of alternative do users have except for\nlisting all columns or all tables that are required.\n\n\n[1] - https://docs.oracle.com/en/cloud/paas/goldengate-cloud/gwuad/selecting-columns.html#GUID-9A851C8B-48F7-43DF-8D98-D086BE069E20\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 18 Apr 2022 12:31:59 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 7:18 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 12.04.22 08:23, vignesh C wrote:\n> > I have also included the implementation for skipping a few tables from\n> > all tables publication, the 0002 patch has the implementation for the\n> > same.\n> > This feature is helpful for use cases where the user wants to\n> > subscribe to all the changes except for the changes present in a few\n> > tables.\n> > Ex:\n> > CREATE PUBLICATION pub1 FOR ALL TABLES SKIP TABLE t1,t2;\n> > OR\n> > ALTER PUBLICATION pub1 ADD SKIP TABLE t1,t2;\n>\n> We have already allocated the \"skip\" terminology for skipping\n> transactions, which is a dynamic run-time action. We are also using the\n> term \"skip\" elsewhere to skip locked rows, which is similarly a run-time\n> action. I think it would be confusing to use the term SKIP for DDL\n> construction.\n>\n> Let's find another term like \"omit\", \"except\", etc.\n\n+1 for Except\n\n> I would also think about this in broader terms. For example, sometimes\n> people want features like \"all columns except these\" in certain places.\n> The syntax for those things should be similar.\n>\n> That said, I'm not sure this feature is worth the trouble. If this is\n> useful, what about \"whole database except these schemas\"? What about\n> \"create this database from this template except these schemas\". This\n> could get out of hand. I think we should encourage users to group their\n> object the way they want and not offer these complicated negative\n> selection mechanisms.\n\nI thought this feature would help when there are many many tables in\nthe database and the user wants only certain confidential tables like\ncredit card information. 
In this case instead of specifying the whole\ntable list it will be better to specify \"ALL TABLES EXCEPT\ncred_info_tbl\".\nI had seen that mysql also has a similar option replicate-ignore-table\nto ignore the changes on specific tables as mentioned in [1].\nSimilar use case exists in pg_dump too. pg_dump has an option\nexclude-table that will be used for not dumping any tables that are\nmatching the table specified as in [2].\n\n[1] - https://dev.mysql.com/doc/refman/5.7/en/change-replication-filter.html\n[2] - https://www.postgresql.org/docs/devel/app-pgdump.html\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 18 Apr 2022 15:10:46 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 12:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Apr 15, 2022 at 1:26 AM Euler Taveira <euler@eulerto.com> wrote:\n> >\n> > On Thu, Apr 14, 2022, at 10:47 AM, Peter Eisentraut wrote:\n> >\n> > On 12.04.22 08:23, vignesh C wrote:\n> > > I have also included the implementation for skipping a few tables from\n> > > all tables publication, the 0002 patch has the implementation for the\n> > > same.\n> > > This feature is helpful for use cases where the user wants to\n> > > subscribe to all the changes except for the changes present in a few\n> > > tables.\n> > > Ex:\n> > > CREATE PUBLICATION pub1 FOR ALL TABLES SKIP TABLE t1,t2;\n> > > OR\n> > > ALTER PUBLICATION pub1 ADD SKIP TABLE t1,t2;\n> >\n> > We have already allocated the \"skip\" terminology for skipping\n> > transactions, which is a dynamic run-time action. We are also using the\n> > term \"skip\" elsewhere to skip locked rows, which is similarly a run-time\n> > action. I think it would be confusing to use the term SKIP for DDL\n> > construction.\n> >\n> > I didn't like the SKIP choice too. We already have EXCEPT for IMPORT FOREIGN\n> > SCHEMA and if I were to suggest a keyword, it would be EXCEPT.\n> >\n>\n> +1 for EXCEPT.\n\nUpdated patch by changing the syntax to use EXCEPT instead of SKIP.\n\nRegards,\nVignesh",
"msg_date": "Thu, 21 Apr 2022 08:45:07 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Tue, Mar 22, 2022 at 12:39 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Hi,\n>\n> This feature adds an option to skip changes of all tables in specified\n> schema while creating publication.\n> This feature is helpful for use cases where the user wants to\n> subscribe to all the changes except for the changes present in a few\n> schemas.\n> Ex:\n> CREATE PUBLICATION pub1 FOR ALL TABLES SKIP ALL TABLES IN SCHEMA s1,s2;\n> OR\n> ALTER PUBLICATION pub1 ADD SKIP ALL TABLES IN SCHEMA s1,s2;\n>\n> A new column pnskip is added to table \"pg_publication_namespace\", to\n> maintain the schemas that the user wants to skip publishing through\n> the publication. Modified the output plugin (pgoutput) to skip\n> publishing the changes if the relation is part of skip schema\n> publication.\n> As a continuation to this, I will work on implementing skipping tables\n> from all tables in schema and skipping tables from all tables\n> publication.\n>\n> Attached patch has the implementation for this.\n> This feature is for the pg16 version.\n> Thoughts?\n\nThe feature seems to be useful especially when there are lots of\nschemas in a database. However, I don't quite like the syntax. Do we\nhave 'SKIP' identifier in any of the SQL statements in SQL standard?\nCan we think of adding skip_schema_list as an option, something like\nbelow?\n\nCREATE PUBLICATION foo FOR ALL TABLES (skip_schema_list = 's1, s2');\nALTER PUBLICATION foo SET (skip_schema_list = 's1, s2'); - to set\nALTER PUBLICATION foo SET (skip_schema_list = ''); - to reset\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 22 Apr 2022 21:39:24 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Sat, Apr 23, 2022 at 2:09 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Mar 22, 2022 at 12:39 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > This feature adds an option to skip changes of all tables in specified\n> > schema while creating publication.\n> > This feature is helpful for use cases where the user wants to\n> > subscribe to all the changes except for the changes present in a few\n> > schemas.\n> > Ex:\n> > CREATE PUBLICATION pub1 FOR ALL TABLES SKIP ALL TABLES IN SCHEMA s1,s2;\n> > OR\n> > ALTER PUBLICATION pub1 ADD SKIP ALL TABLES IN SCHEMA s1,s2;\n> >\n> > A new column pnskip is added to table \"pg_publication_namespace\", to\n> > maintain the schemas that the user wants to skip publishing through\n> > the publication. Modified the output plugin (pgoutput) to skip\n> > publishing the changes if the relation is part of skip schema\n> > publication.\n> > As a continuation to this, I will work on implementing skipping tables\n> > from all tables in schema and skipping tables from all tables\n> > publication.\n> >\n> > Attached patch has the implementation for this.\n> > This feature is for the pg16 version.\n> > Thoughts?\n>\n> The feature seems to be useful especially when there are lots of\n> schemas in a database. However, I don't quite like the syntax. Do we\n> have 'SKIP' identifier in any of the SQL statements in SQL standard?\n> Can we think of adding skip_schema_list as an option, something like\n> below?\n>\n> CREATE PUBLICATION foo FOR ALL TABLES (skip_schema_list = 's1, s2');\n> ALTER PUBLICATION foo SET (skip_schema_list = 's1, s2'); - to set\n> ALTER PUBLICATION foo SET (skip_schema_list = ''); - to reset\n>\n\nI had been wondering for some time if there was any way to introduce a\nmore flexible pattern matching into PUBLICATION but without bloating\nthe syntax. 
Maybe your idea to use an option for the \"skip\" gives a\nway to do it...\n\nFor example, if we could use regex (for <schemaname>.<tablename>\npatterns) for the option value then....\n\n~~\n\ne.g.1. Exclude certain tables:\n\n// do NOT publish any tables of schemas s1,s2\nCREATE PUBLICATION foo FOR ALL TABLES (exclude_match = '(s1\\..*)|(s2\\..*)');\n\n// do NOT publish my secret tables (those called \"mysecretXXX\")\nCREATE PUBLICATION foo FOR ALL TABLES (exclude_match = '(.*\\.mysecret.*)');\n\n~~\n\ne.g.2. Only allow certain tables.\n\n// ONLY publish my tables (those called \"mytableXXX\")\nCREATE PUBLICATION foo FOR ALL TABLES (subset_match = '(.*\\.mytable.*)');\n\n// So following is equivalent to FOR ALL TABLES IN SCHEMA s1\nCREATE PUBLICATION foo FOR ALL TABLES (subset_match = '(s1\\..*)');\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 26 Apr 2022 11:55:21 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Thursday, April 21, 2022 12:15 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> Updated patch by changing the syntax to use EXCEPT instead of SKIP.\r\nHi\r\n\r\n\r\nThis is my review comments on the v2 patch.\r\n\r\n(1) gram.y\r\n\r\nI think we can make a unified function that merges\r\npreprocess_alltables_pubobj_list with check_except_in_pubobj_list.\r\n\r\nWith regard to preprocess_alltables_pubobj_list,\r\nwe don't use the 2nd argument \"location\" in this function.\r\n\r\n(2) create_publication.sgml\r\n\r\n+ <para>\r\n+ Create a publication that publishes all changes in all the tables except for\r\n+ the changes of <structname>users</structname> and\r\n+ <structname>departments</structname> table;\r\n\r\nThis sentence should end \":\" not \";\".\r\n\r\n(3) publication.out & publication.sql\r\n\r\n+-- fail - can't set except table to schema  publication\r\n+ALTER PUBLICATION testpub_forschema SET EXCEPT TABLE testpub_tbl1;\r\n\r\nThere is one unnecessary space in the comment.\r\nKindly change from \"schema  publication\" to \"schema publication\".\r\n\r\n(4) pg_dump.c & describe.c\r\n\r\nIn your first email of this thread, you explained this feature\r\nis for PG16. Don't we need additional branch for PG16 ?\r\n\r\n@@ -6322,6 +6328,21 @@ describePublications(const char *pattern)\r\n }\r\n }\r\n\r\n+ if (pset.sversion >= 150000)\r\n+ {\r\n\r\n\r\n@@ -4162,7 +4164,7 @@ getPublicationTables(Archive *fout, TableInfo tblinfo[], int numTables)\r\n /* Collect all publication membership info. 
*/\r\n if (fout->remoteVersion >= 150000)\r\n appendPQExpBufferStr(query,\r\n- \"SELECT tableoid, oid, prpubid, prrelid, \"\r\n+ \"SELECT tableoid, oid, prpubid, prrelid, prexcept,\"\r\n\r\n\r\n(5) psql-ref.sgml\r\n\r\n+ If <literal>+</literal> is appended to the command name, the tables,\r\n+ except tables and schemas associated with each publication are shown as\r\n+ well.\r\n\r\nI'm not sure if \"except tables\" is a good description.\r\nI suggest \"excluded tables\". This applies to the entire patch,\r\nin case if this is reasonable suggestion.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 26 Apr 2022 06:02:46 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping schema changes in publication"
},
{
"msg_contents": "On Tue, Apr 26, 2022 at 11:32 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Thursday, April 21, 2022 12:15 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Updated patch by changing the syntax to use EXCEPT instead of SKIP.\n> Hi\n>\n>\n> This is my review comments on the v2 patch.\n>\n> (1) gram.y\n>\n> I think we can make a unified function that merges\n> preprocess_alltables_pubobj_list with check_except_in_pubobj_list.\n>\n> With regard to preprocess_alltables_pubobj_list,\n> we don't use the 2nd argument \"location\" in this function.\n\nRemoved location and made a unified function.\n\n> (2) create_publication.sgml\n>\n> + <para>\n> + Create a publication that publishes all changes in all the tables except for\n> + the changes of <structname>users</structname> and\n> + <structname>departments</structname> table;\n>\n> This sentence should end \":\" not \";\".\n\nModified\n\n> (3) publication.out & publication.sql\n>\n> +-- fail - can't set except table to schema publication\n> +ALTER PUBLICATION testpub_forschema SET EXCEPT TABLE testpub_tbl1;\n>\n> There is one unnecessary space in the comment.\n> Kindly change from \"schema publication\" to \"schema publication\".\n\nModified\n\n> (4) pg_dump.c & describe.c\n>\n> In your first email of this thread, you explained this feature\n> is for PG16. Don't we need additional branch for PG16 ?\n>\n> @@ -6322,6 +6328,21 @@ describePublications(const char *pattern)\n> }\n> }\n>\n> + if (pset.sversion >= 150000)\n> + {\n>\n>\n> @@ -4162,7 +4164,7 @@ getPublicationTables(Archive *fout, TableInfo tblinfo[], int numTables)\n> /* Collect all publication membership info. 
*/\n> if (fout->remoteVersion >= 150000)\n> appendPQExpBufferStr(query,\n> - \"SELECT tableoid, oid, prpubid, prrelid, \"\n> + \"SELECT tableoid, oid, prpubid, prrelid, prexcept,\"\n>\n\nModified by adding a comment saying \"FIXME: 150000 should be changed\nto 160000 later for PG16.\"\n\n> (5) psql-ref.sgml\n>\n> + If <literal>+</literal> is appended to the command name, the tables,\n> + except tables and schemas associated with each publication are shown as\n> + well.\n>\n> I'm not sure if \"except tables\" is a good description.\n> I suggest \"excluded tables\". This applies to the entire patch,\n> in case if this is reasonable suggestion.\n\nModified it in most of the places where it was applicable. I felt the\nusage was ok in a few places.\n\nThanks for the comments, the attached v3 patch has the changes for the same.\n\nRegards.\nVignesh",
"msg_date": "Wed, 27 Apr 2022 18:20:11 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Wednesday, April 27, 2022 9:50 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> Thanks for the comments, the attached v3 patch has the changes for the same.\r\nHi\r\n\r\nThank you for updating the patch. Several minor comments on v3.\r\n\r\n(1) commit message\r\n\r\nThe new syntax allows specifying schemas. For example:\r\nCREATE PUBLICATION pub1 FOR ALL TABLES EXCEPT TABLE t1,t2;\r\nOR\r\nALTER PUBLICATION pub1 ADD EXCEPT TABLE t1,t2;\r\n\r\nWe have above sentence, but it looks better\r\nto make the description a bit more accurate.\r\n\r\nKindly change\r\nFrom :\r\n\"The new syntax allows specifying schemas\"\r\nTo :\r\n\"The new syntax allows specifying excluded relations\"\r\n\r\nAlso, kindly change \"OR\" to \"or\",\r\nbecause this description is not syntax.\r\n\r\n(2) publication_add_relation\r\n\r\n@@ -396,6 +400,9 @@ publication_add_relation(Oid pubid, PublicationRelInfo *pri,\r\n ObjectIdGetDatum(pubid);\r\n values[Anum_pg_publication_rel_prrelid - 1] =\r\n ObjectIdGetDatum(relid);\r\n+ values[Anum_pg_publication_rel_prexcept - 1] =\r\n+ BoolGetDatum(pri->except);\r\n+\r\n\r\n /* Add qualifications, if available */\r\n\r\nIt would be better to remove the blank line,\r\nbecause with this change, we'll have two blank\r\nlines in a row.\r\n\r\n(3) pg_dump.h & pg_dump_sort.c\r\n\r\n@@ -80,6 +80,7 @@ typedef enum\r\n DO_REFRESH_MATVIEW,\r\n DO_POLICY,\r\n DO_PUBLICATION,\r\n+ DO_PUBLICATION_EXCEPT_REL,\r\n DO_PUBLICATION_REL,\r\n DO_PUBLICATION_TABLE_IN_SCHEMA,\r\n DO_SUBSCRIPTION\r\n\r\n@@ -90,6 +90,7 @@ enum dbObjectTypePriorities\r\n PRIO_FK_CONSTRAINT,\r\n PRIO_POLICY,\r\n PRIO_PUBLICATION,\r\n+ PRIO_PUBLICATION_EXCEPT_REL,\r\n PRIO_PUBLICATION_REL,\r\n PRIO_PUBLICATION_TABLE_IN_SCHEMA,\r\n PRIO_SUBSCRIPTION,\r\n@@ -144,6 +145,7 @@ static const int dbObjectTypePriority[] =\r\n PRIO_REFRESH_MATVIEW, /* DO_REFRESH_MATVIEW */\r\n PRIO_POLICY, /* DO_POLICY */\r\n PRIO_PUBLICATION, /* DO_PUBLICATION */\r\n+ PRIO_PUBLICATION_EXCEPT_REL, /* 
DO_PUBLICATION_EXCEPT_REL */\r\n PRIO_PUBLICATION_REL, /* DO_PUBLICATION_REL */\r\n PRIO_PUBLICATION_TABLE_IN_SCHEMA, /* DO_PUBLICATION_TABLE_IN_SCHEMA */\r\n PRIO_SUBSCRIPTION /* DO_SUBSCRIPTION */\r\n\r\nHow about having similar order between\r\npg_dump.h and pg_dump_sort.c, like\r\nwe'll add DO_PUBLICATION_EXCEPT_REL\r\nafter DO_PUBLICATION_REL in pg_dump.h ?\r\n\r\n\r\n(4) GetAllTablesPublicationRelations\r\n\r\n+ /*\r\n+ * pg_publication_rel and pg_publication_namespace will only have except\r\n+ * tables in case of all tables publication, no need to pass except flag\r\n+ * to get the relations.\r\n+ */\r\n+ List *exceptpubtablelist = GetPublicationRelations(pubid, PUBLICATION_PART_ALL);\r\n+\r\n\r\nThere is one unnecessary space in a comment\r\n\"...pg_publication_namespace will only have...\". Kindly remove it.\r\n\r\nThen, how about dividing the variable declaration and\r\nthe insertion of the return value of GetPublicationRelations ?\r\nThat might be aligned with other places in this file.\r\n\r\n(5) GetTopMostAncestorInPublication\r\n\r\n\r\n@@ -302,8 +303,9 @@ GetTopMostAncestorInPublication(Oid puboid, List *ancestors, int *ancestor_level\r\n foreach(lc, ancestors)\r\n {\r\n Oid ancestor = lfirst_oid(lc);\r\n- List *apubids = GetRelationPublications(ancestor);\r\n+ List *apubids = GetRelationPublications(ancestor, false);\r\n List *aschemaPubids = NIL;\r\n+ List *aexceptpubids;\r\n\r\n level++;\r\n\r\n@@ -317,7 +319,9 @@ GetTopMostAncestorInPublication(Oid puboid, List *ancestors, int *ancestor_level\r\n else\r\n {\r\n aschemaPubids = GetSchemaPublications(get_rel_namespace(ancestor));\r\n- if (list_member_oid(aschemaPubids, puboid))\r\n+ aexceptpubids = GetRelationPublications(ancestor, true);\r\n+ if (list_member_oid(aschemaPubids, puboid) ||\r\n+ (puballtables && !list_member_oid(aexceptpubids, puboid)))\r\n {\r\n topmost_relid = ancestor;\r\n\r\nIt seems we forgot to call list_free for \"aexceptpubids\".\r\n\r\n\r\nBest Regards,\r\n\tTakamichi 
Osumi\r\n\r\n",
"msg_date": "Thu, 28 Apr 2022 11:20:52 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping schema changes in publication"
},
{
"msg_contents": "On Fri, Apr 22, 2022 at 9:39 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Mar 22, 2022 at 12:39 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > This feature adds an option to skip changes of all tables in specified\n> > schema while creating publication.\n> > This feature is helpful for use cases where the user wants to\n> > subscribe to all the changes except for the changes present in a few\n> > schemas.\n> > Ex:\n> > CREATE PUBLICATION pub1 FOR ALL TABLES SKIP ALL TABLES IN SCHEMA s1,s2;\n> > OR\n> > ALTER PUBLICATION pub1 ADD SKIP ALL TABLES IN SCHEMA s1,s2;\n> >\n>\n> The feature seems to be useful especially when there are lots of\n> schemas in a database. However, I don't quite like the syntax. Do we\n> have 'SKIP' identifier in any of the SQL statements in SQL standard?\n>\n\nAfter discussion, it seems EXCEPT is a preferred choice and the same\nis used in the other existing syntax as well.\n\n> Can we think of adding skip_schema_list as an option, something like\n> below?\n>\n> CREATE PUBLICATION foo FOR ALL TABLES (skip_schema_list = 's1, s2');\n> ALTER PUBLICATION foo SET (skip_schema_list = 's1, s2'); - to set\n> ALTER PUBLICATION foo SET (skip_schema_list = ''); - to reset\n>\n\nYeah, that is also an option but it seems it will be difficult to\nextend if want to support \"all columns except (c1, ..)\" for the column\nlist feature.\n\nThe other thing to decide is for which all objects we want to support\nEXCEPT clause as it may not be useful for everything as indicated by\nPeter E. and Euler. We have seen that Oracle supports \"all columns\nexcept (c1, ..)\" [1] and MySQL seems to support for tables [2]. I\nguess we should restrict ourselves to those two cases for now and then\nwe can extend it later for schemas if required or people agree. Also,\nwe should see the syntax we choose here should be extendable.\n\nAnother idea that occurred to me today for tables this is as follows:\n1. 
Allow to mention except during create publication ... For All Tables.\nCREATE PUBLICATION pub1 FOR ALL TABLES EXCEPT TABLE t1,t2;\n2. Allow to Reset it. This new syntax will reset all objects in the\npublications.\nAlter Publication ... RESET;\n3. Allow to add it to an existing publication\nAlter Publication ... Add ALL TABLES [EXCEPT TABLE t1,t2];\n\nI think it can be extended in a similar way for schema syntax as well.\n\n[1] - https://docs.oracle.com/en/cloud/paas/goldengate-cloud/gwuad/selecting-columns.html#GUID-9A851C8B-48F7-43DF-8D98-D086BE069E20\n[2] - https://dev.mysql.com/doc/refman/5.7/en/change-replication-filter.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 28 Apr 2022 17:01:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Thu, Apr 28, 2022 at 4:50 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Wednesday, April 27, 2022 9:50 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Thanks for the comments, the attached v3 patch has the changes for the same.\n> Hi\n>\n> Thank you for updating the patch. Several minor comments on v3.\n>\n> (1) commit message\n>\n> The new syntax allows specifying schemas. For example:\n> CREATE PUBLICATION pub1 FOR ALL TABLES EXCEPT TABLE t1,t2;\n> OR\n> ALTER PUBLICATION pub1 ADD EXCEPT TABLE t1,t2;\n>\n> We have above sentence, but it looks better\n> to make the description a bit more accurate.\n>\n> Kindly change\n> From :\n> \"The new syntax allows specifying schemas\"\n> To :\n> \"The new syntax allows specifying excluded relations\"\n>\n> Also, kindly change \"OR\" to \"or\",\n> because this description is not syntax.\n\nSlightly reworded and modified\n\n> (2) publication_add_relation\n>\n> @@ -396,6 +400,9 @@ publication_add_relation(Oid pubid, PublicationRelInfo *pri,\n> ObjectIdGetDatum(pubid);\n> values[Anum_pg_publication_rel_prrelid - 1] =\n> ObjectIdGetDatum(relid);\n> + values[Anum_pg_publication_rel_prexcept - 1] =\n> + BoolGetDatum(pri->except);\n> +\n>\n> /* Add qualifications, if available */\n>\n> It would be better to remove the blank line,\n> because with this change, we'll have two blank\n> lines in a row.\n\nModified\n\n> (3) pg_dump.h & pg_dump_sort.c\n>\n> @@ -80,6 +80,7 @@ typedef enum\n> DO_REFRESH_MATVIEW,\n> DO_POLICY,\n> DO_PUBLICATION,\n> + DO_PUBLICATION_EXCEPT_REL,\n> DO_PUBLICATION_REL,\n> DO_PUBLICATION_TABLE_IN_SCHEMA,\n> DO_SUBSCRIPTION\n>\n> @@ -90,6 +90,7 @@ enum dbObjectTypePriorities\n> PRIO_FK_CONSTRAINT,\n> PRIO_POLICY,\n> PRIO_PUBLICATION,\n> + PRIO_PUBLICATION_EXCEPT_REL,\n> PRIO_PUBLICATION_REL,\n> PRIO_PUBLICATION_TABLE_IN_SCHEMA,\n> PRIO_SUBSCRIPTION,\n> @@ -144,6 +145,7 @@ static const int dbObjectTypePriority[] =\n> PRIO_REFRESH_MATVIEW, /* DO_REFRESH_MATVIEW 
*/\n> PRIO_POLICY, /* DO_POLICY */\n> PRIO_PUBLICATION, /* DO_PUBLICATION */\n> + PRIO_PUBLICATION_EXCEPT_REL, /* DO_PUBLICATION_EXCEPT_REL */\n> PRIO_PUBLICATION_REL, /* DO_PUBLICATION_REL */\n> PRIO_PUBLICATION_TABLE_IN_SCHEMA, /* DO_PUBLICATION_TABLE_IN_SCHEMA */\n> PRIO_SUBSCRIPTION /* DO_SUBSCRIPTION */\n>\n> How about having similar order between\n> pg_dump.h and pg_dump_sort.c, like\n> we'll add DO_PUBLICATION_EXCEPT_REL\n> after DO_PUBLICATION_REL in pg_dump.h ?\n>\n\nModified\n\n> (4) GetAllTablesPublicationRelations\n>\n> + /*\n> + * pg_publication_rel and pg_publication_namespace will only have except\n> + * tables in case of all tables publication, no need to pass except flag\n> + * to get the relations.\n> + */\n> + List *exceptpubtablelist = GetPublicationRelations(pubid, PUBLICATION_PART_ALL);\n> +\n>\n> There is one unnecessary space in a comment\n> \"...pg_publication_namespace will only have...\". Kindly remove it.\n>\n> Then, how about diving the variable declaration and\n> the insertion of the return value of GetPublicationRelations ?\n> That might be aligned with other places in this file.\n\nModified\n\n> (5) GetTopMostAncestorInPublication\n>\n>\n> @@ -302,8 +303,9 @@ GetTopMostAncestorInPublication(Oid puboid, List *ancestors, int *ancestor_level\n> foreach(lc, ancestors)\n> {\n> Oid ancestor = lfirst_oid(lc);\n> - List *apubids = GetRelationPublications(ancestor);\n> + List *apubids = GetRelationPublications(ancestor, false);\n> List *aschemaPubids = NIL;\n> + List *aexceptpubids;\n>\n> level++;\n>\n> @@ -317,7 +319,9 @@ GetTopMostAncestorInPublication(Oid puboid, List *ancestors, int *ancestor_level\n> else\n> {\n> aschemaPubids = GetSchemaPublications(get_rel_namespace(ancestor));\n> - if (list_member_oid(aschemaPubids, puboid))\n> + aexceptpubids = GetRelationPublications(ancestor, true);\n> + if (list_member_oid(aschemaPubids, puboid) ||\n> + (puballtables && !list_member_oid(aexceptpubids, puboid)))\n> {\n> topmost_relid = 
ancestor;\n>\n> It seems we forgot to call list_free for \"aexceptpubids\".\n\nModified\n\nThe attached v4 patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Fri, 29 Apr 2022 17:12:59 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Thu, Apr 28, 2022 at 9:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n...\n> Another idea that occurred to me today for tables this is as follows:\n> 1. Allow to mention except during create publication ... For All Tables.\n> CREATE PUBLICATION pub1 FOR ALL TABLES EXCEPT TABLE t1,t2;\n> 2. Allow to Reset it. This new syntax will reset all objects in the\n> publications.\n> Alter Publication ... RESET;\n> 3. Allow to add it to an existing publication\n> Alter Publication ... Add ALL TABLES [EXCEPT TABLE t1,t2];\n>\n> I think it can be extended in a similar way for schema syntax as well.\n>\n\nConsider if the user does\nCREATE PUBLICATION pub1 FOR ALL TABLES EXCEPT TABLE t1,t2;\nALTER PUBLICATION pub1 ADD ALL TABLES EXCEPT t3,t4;\n\nWhat does it mean?\ne.g. Is there only one exception list that is modified? Or did the ADD\nALL TABLES override all meaning of the original list?\ne.g. Are we now skipping t1,t2,t3,t4, or are we now only skipping t3,t4?\n\n~~~\n\nHere is a similar example, where the ADD TABLE seems confusing to me\nwhen it intersects with a prior EXCEPT\ne.g.\nCREATE PUBLICATION pub1 FOR ALL TABLES EXCEPT t1,t2; // ok\nALTER PUBLICATION pub1 ADD TABLE t1; ???\n\nWhat does it mean?\ne.g. Does the explicit ADD TABLE override the original exception list?\ne.g. Is t1 published now or should that ALTER have caused an error?\n\n~~\n\nIt feels like there are too many tricky rules when using EXCEPT with\nALTER PUBLICATION. I guess complexities can be described in the\ndocumentation but IMO it would be better if the ALTER syntax could be\nunambiguous in the first place. So perhaps the rules should be more\nrestrictive (e.g. just disallow ALTER ... ADD any table that overlaps\nthe existing EXCEPT list ??)\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Tue, 3 May 2022 18:54:33 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Tue, May 3, 2022 at 2:24 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Thu, Apr 28, 2022 at 9:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> ...\n> > Another idea that occurred to me today for tables this is as follows:\n> > 1. Allow to mention except during create publication ... For All Tables.\n> > CREATE PUBLICATION pub1 FOR ALL TABLES EXCEPT TABLE t1,t2;\n> > 2. Allow to Reset it. This new syntax will reset all objects in the\n> > publications.\n> > Alter Publication ... RESET;\n> > 3. Allow to add it to an existing publication\n> > Alter Publication ... Add ALL TABLES [EXCEPT TABLE t1,t2];\n> >\n> > I think it can be extended in a similar way for schema syntax as well.\n> >\n>\n> Consider if the user does\n> CREATE PUBLICATION pub1 FOR ALL TABLES EXCEPT TABLE t1,t2;\n> ALTER PUBLICATION pub1 ADD ALL TABLES EXCEPT t3,t4;\n>\n> What does it mean?\n> e.g. Is there only one exception list that is modified? Or did the ADD\n> ALL TABLES override all meaning of the original list?\n> e.g. Are we now skipping t1,t2,t3,t4, or are we now only skipping t3,t4?\n>\n\nThis won't be allowed. We won't allow changing ALL TABLES publication\nunless the user first performs RESET. This is the purpose of providing\nthe RESET variant.\n\n> ~~~\n>\n> Here is a similar example, where the ADD TABLE seems confusing to me\n> when it intersects with a prior EXCEPT\n> e.g.\n> CREATE PUBLICATION pub1 FOR ALL TABLES EXCEPT t1,t2; // ok\n> ALTER PUBLICATION pub1 ADD TABLE t1; ???\n>\n> What does it mean?\n> e.g. Does the explicit ADD TABLE override the original exception list?\n> e.g. Is t1 published now or should that ALTER have caused an error?\n>\n\nThis won't be allowed either. We don't allow to Add/Drop from All\nTables publication unless the user performs a RESET. This is true even\ntoday except that we don't have a RESET syntax.\n\n> ~~\n>\n> It feels like there are too many tricky rules when using EXCEPT with\n> ALTER PUBLICATION. 
I guess complexities can be described in the\n> documentation but IMO it would be better if the ALTER syntax could be\n> unambiguous in the first place.\n>\n\nAgreed.\n\n> So perhaps the rules should be more\n> restrictive (e.g. just disallow ALTER ... ADD any table that overlaps\n> the existing EXCEPT list ??)\n>\n\nI think the current proposal seems to be restrictive enough to avoid\nany tricky issues. Do you see any other problem?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 4 May 2022 09:44:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On 14.04.22 15:47, Peter Eisentraut wrote:\n> That said, I'm not sure this feature is worth the trouble. If this is \n> useful, what about \"whole database except these schemas\"? What about \n> \"create this database from this template except these schemas\". This \n> could get out of hand. I think we should encourage users to group their \n> object the way they want and not offer these complicated negative \n> selection mechanisms.\n\nAnother problem in general with this \"all except these\" way of \nspecifying things is that you need to track negative dependencies.\n\nFor example, assume you can't add a table to a publication unless it has \na replica identity. Now, if you have a publication p1 that says \nincludes \"all tables except t1\", you now have to check p1 whenever a new \ntable is created, even though the new table has no direct dependency \nlink with p1. So in more general cases, you would have to check all \nexisting objects to see whether their specification is in conflict with \nthe new object being created.\n\nNow publications don't actually work that way, so it's not a real \nproblem right now, but similar things could work like that. So I think \nit's worth thinking this through a bit.\n\n\n\n",
"msg_date": "Wed, 4 May 2022 15:34:54 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Wed, May 4, 2022 at 7:05 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 14.04.22 15:47, Peter Eisentraut wrote:\n> > That said, I'm not sure this feature is worth the trouble. If this is\n> > useful, what about \"whole database except these schemas\"? What about\n> > \"create this database from this template except these schemas\". This\n> > could get out of hand. I think we should encourage users to group their\n> > object the way they want and not offer these complicated negative\n> > selection mechanisms.\n>\n> Another problem in general with this \"all except these\" way of\n> specifying things is that you need to track negative dependencies.\n>\n> For example, assume you can't add a table to a publication unless it has\n> a replica identity. Now, if you have a publication p1 that says\n> includes \"all tables except t1\", you now have to check p1 whenever a new\n> table is created, even though the new table has no direct dependency\n> link with p1. So in more general cases, you would have to check all\n> existing objects to see whether their specification is in conflict with\n> the new object being created.\n>\n\nYes, I think we should avoid adding such negative dependencies. We\nhave carefully avoided such dependencies during row filter, column\nlist work where we don't try to perform DDL time verification.\nHowever, it is not clear to me how this proposal is related to this\nexample or in general about tracking negative dependencies? AFAIR, we\ncurrently have such a check while changing persistence of logged table\n(logged to unlogged, see ATPrepChangePersistence) where we cannot\nallow changing persistence if that relation is part of some\npublication. But as per my understanding, this feature shouldn't add\nany such new dependencies. 
I agree that we have to ensure that\nexisting checks shouldn't break due to this feature.\n\n> Now publications don't actually work that way, so it's not a real\n> problem right now, but similar things could work like that. So I think\n> it's worth thinking this through a bit.\n>\n\nThis is a good point and I agree that we should be careful to not add\nsome new negative dependencies unless it is really required but I\ncan't see how this proposal will make it more prone to such checks.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 5 May 2022 09:20:36 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Thu, May 5, 2022 at 9:20 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, May 4, 2022 at 7:05 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> > On 14.04.22 15:47, Peter Eisentraut wrote:\n> > > That said, I'm not sure this feature is worth the trouble. If this is\n> > > useful, what about \"whole database except these schemas\"? What about\n> > > \"create this database from this template except these schemas\". This\n> > > could get out of hand. I think we should encourage users to group their\n> > > object the way they want and not offer these complicated negative\n> > > selection mechanisms.\n> >\n> > Another problem in general with this \"all except these\" way of\n> > specifying things is that you need to track negative dependencies.\n> >\n> > For example, assume you can't add a table to a publication unless it has\n> > a replica identity. Now, if you have a publication p1 that says\n> > includes \"all tables except t1\", you now have to check p1 whenever a new\n> > table is created, even though the new table has no direct dependency\n> > link with p1. So in more general cases, you would have to check all\n> > existing objects to see whether their specification is in conflict with\n> > the new object being created.\n> >\n>\n> Yes, I think we should avoid adding such negative dependencies. We\n> have carefully avoided such dependencies during row filter, column\n> list work where we don't try to perform DDL time verification.\n> However, it is not clear to me how this proposal is related to this\n> example or in general about tracking negative dependencies?\n>\n\nI mean to say that even if we have such a restriction, it would apply\nto \"for all tables\" or other publications as well. 
In your example,\nconsider one wants to Alter a table and remove its replica identity,\nwe have to check whether the table is part of any publication similar\nto what we are doing for relation persistence in\nATPrepChangePersistence.\n\n> AFAIR, we\n> currently have such a check while changing persistence of logged table\n> (logged to unlogged, see ATPrepChangePersistence) where we cannot\n> allow changing persistence if that relation is part of some\n> publication. But as per my understanding, this feature shouldn't add\n> any such new dependencies. I agree that we have to ensure that\n> existing checks shouldn't break due to this feature.\n>\n> > Now publications don't actually work that way, so it's not a real\n> > problem right now, but similar things could work like that. So I think\n> > it's worth thinking this through a bit.\n> >\n>\n> This is a good point and I agree that we should be careful to not add\n> some new negative dependencies unless it is really required but I\n> can't see how this proposal will make it more prone to such checks.\n>\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 5 May 2022 09:42:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Thu, Apr 28, 2022 at 9:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n...\n>\n> Another idea that occurred to me today for tables this is as follows:\n> 1. Allow to mention except during create publication ... For All Tables.\n> CREATE PUBLICATION pub1 FOR ALL TABLES EXCEPT TABLE t1,t2;\n> 2. Allow to Reset it. This new syntax will reset all objects in the\n> publications.\n> Alter Publication ... RESET;\n> 3. Allow to add it to an existing publication\n> Alter Publication ... Add ALL TABLES [EXCEPT TABLE t1,t2];\n>\n> I think it can be extended in a similar way for schema syntax as well.\n>\n\nIf the proposed syntax ALTER PUBLICATION ... RESET will reset all the\nobjects in the publication then there still seems no simple way to remove\nonly the EXCEPT list but leave everything else intact. IIUC to clear\njust the EXCEPT list would require a 2 step process - 1) ALTER ...\nRESET then 2) ALTER ... ADD ALL TABLES again.\n\nI was wondering if it might be useful to have a variation that *only*\nresets the EXCEPT list, but still leaves everything else as-is?\n\nSo, instead of:\nALTER PUBLICATION pubname RESET\n\nuse a syntax something like:\nALTER PUBLICATION pubname RESET {ALL | EXCEPT}\nor\nALTER PUBLICATION pubname RESET [EXCEPT]\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 6 May 2022 12:35:16 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Fri, May 6, 2022 at 8:05 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Thu, Apr 28, 2022 at 9:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> ...\n> >\n> > Another idea that occurred to me today for tables this is as follows:\n> > 1. Allow to mention except during create publication ... For All Tables.\n> > CREATE PUBLICATION pub1 FOR ALL TABLES EXCEPT TABLE t1,t2;\n> > 2. Allow to Reset it. This new syntax will reset all objects in the\n> > publications.\n> > Alter Publication ... RESET;\n> > 3. Allow to add it to an existing publication\n> > Alter Publication ... Add ALL TABLES [EXCEPT TABLE t1,t2];\n> >\n> > I think it can be extended in a similar way for schema syntax as well.\n> >\n>\n> If the proposed syntax ALTER PUBLICATION ... RESET will reset all the\n> objects in the publication then there still seems simple way to remove\n> only the EXCEPT list but leave everything else intact. IIUC to clear\n> just the EXCEPT list would require a 2 step process - 1) ALTER ...\n> RESET then 2) ALTER ... 
ADD ALL TABLES again.\n>\n> I was wondering if it might be useful to have a variation that *only*\n> resets the EXCEPT list, but still leaves everything else as-is?\n>\n> So, instead of:\n> ALTER PUBLICATION pubname RESET\n\n+1 for this syntax as this syntax can be extendable to include options\nlike (except/all/etc) later.\nCurrently we can support this syntax and can be extended later based\non the requirements.\n\nThe new feature will handle the various use cases based on the\nbehavior given below:\n-- CREATE Publication with EXCEPT TABLE syntax\nCREATE PUBLICATION pub1 FOR ALL TABLES EXCEPT TABLE t1,t2; -- ok\nAlter Publication pub1 RESET;\n-- All Tables and options are reset similar to creating publication\nwithout any publication object and publication option (create\npublication pub1)\n\\dRp+ pub1\nPublication pub1\nOwner | All tables | Inserts | Updates | Deletes | Truncates | Via root\n---------+------------+---------+---------+---------+-----------+----------\nvignesh | f | t | t | t | t | f\n(1 row)\n\n-- Can add except table after reset of publication\nALTER PUBLICATION pub1 Add ALL TABLES EXCEPT TABLE t1,t2; -- ok\n\n-- Cannot add except table without reset of publication\nALTER PUBLICATION pub1 Add EXCEPT TABLE t3,t4; -- not ok, need to be reset\n\nAlter Publication pub1 RESET;\n-- Cannot add table to ALL TABLES Publication\nALTER PUBLICATION pub1 Add ALL TABLES EXCEPT TABLE t1,t2, t3, t4,\nTABLE t5; -- not ok, ALL TABLES Publications does not support\nincluding of TABLES\n\nAlter Publication pub1 RESET;\n-- Cannot add table to ALL TABLES Publication\nALTER PUBLICATION pub1 Add ALL TABLES TABLE t1,t2; -- not ok, ALL\nTABLES Publications does not support including of TABLES\n\n-- Cannot add ALL TABLES IN SCHEMA to ALL TABLES Publication\nALTER PUBLICATION pub1 Add ALL TABLES ALL TABLES IN SCHEMA sch1, sch2;\n-- not ok, ALL TABLES Publications does not support including of ALL\nTABLES IN SCHEMA\n\n-- Existing syntax should work as it is\nCREATE 
PUBLICATION pub1 FOR TABLE t1;\nALTER PUBLICATION pub1 ADD TABLE t1; -- ok, existing ALTER should work\nas it is (ok without reset)\nALTER PUBLICATION pub1 ADD ALL TABLES IN SCHEMA sch1; -- ok, existing\nALTER should work as it is (ok without reset)\nALTER PUBLICATION pub1 DROP TABLE t1; -- ok, existing ALTER should\nwork as it is (ok without reset)\nALTER PUBLICATION pub1 DROP ALL TABLES IN SCHEMA sch1; -- ok, existing\nALTER should work as it is (ok without reset)\nALTER PUBLICATION pub1 SET TABLE t1; -- ok, existing ALTER should work\nas it is (ok without reset)\nALTER PUBLICATION pub1 SET ALL TABLES IN SCHEMA sch1; -- ok, existing\nALTER should work as it is (ok without reset)\n\nI will modify the patch to handle this.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 10 May 2022 09:08:48 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Tue, May 10, 2022 at 9:08 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, May 6, 2022 at 8:05 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Thu, Apr 28, 2022 at 9:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > ...\n> > >\n> > > Another idea that occurred to me today for tables this is as follows:\n> > > 1. Allow to mention except during create publication ... For All Tables.\n> > > CREATE PUBLICATION pub1 FOR ALL TABLES EXCEPT TABLE t1,t2;\n> > > 2. Allow to Reset it. This new syntax will reset all objects in the\n> > > publications.\n> > > Alter Publication ... RESET;\n> > > 3. Allow to add it to an existing publication\n> > > Alter Publication ... Add ALL TABLES [EXCEPT TABLE t1,t2];\n> > >\n> > > I think it can be extended in a similar way for schema syntax as well.\n> > >\n> >\n> > If the proposed syntax ALTER PUBLICATION ... RESET will reset all the\n> > objects in the publication then there still seems simple way to remove\n> > only the EXCEPT list but leave everything else intact. IIUC to clear\n> > just the EXCEPT list would require a 2 step process - 1) ALTER ...\n> > RESET then 2) ALTER ... ADD ALL TABLES again.\n> >\n> > I was wondering if it might be useful to have a variation that *only*\n> > resets the EXCEPT list, but still leaves everything else as-is?\n> >\n> > So, instead of:\n> > ALTER PUBLICATION pubname RESET\n>\n> +1 for this syntax as this syntax can be extendable to include options\n> like (except/all/etc) later.\n> Currently we can support this syntax and can be extended later based\n> on the requirements.\n\nThe attached patch has the implementation for \"ALTER PUBLICATION\npubname RESET\". This command will reset the publication to default\nstate which includes resetting the publication options, setting ALL\nTABLES option to false and dropping the relations and schemas that are\nassociated with the publication.\n\nRegards,\nVignesh",
"msg_date": "Thu, 12 May 2022 09:54:39 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Thu, May 12, 2022 at 2:24 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n...\n> The attached patch has the implementation for \"ALTER PUBLICATION\n> pubname RESET\". This command will reset the publication to default\n> state which includes resetting the publication options, setting ALL\n> TABLES option to false and dropping the relations and schemas that are\n> associated with the publication.\n>\n\nPlease see below my review comments for the v1-0001 (RESET) patch\n\n======\n\n1. Commit message\n\nThis patch adds a new RESET option to ALTER PUBLICATION which\n\nWording: \"RESET option\" -> \"RESET clause\"\n\n~~~\n\n2. doc/src/sgml/ref/alter_publication.sgml\n\n+ <para>\n+ The <literal>RESET</literal> clause will reset the publication to default\n+ state which includes resetting the publication options, setting\n+ <literal>ALL TABLES</literal> option to <literal>false</literal>\nand drop the\n+ relations and schemas that are associated with the publication.\n </para>\n\n2a. Wording: \"to default state\" -> \"to the default state\"\n\n2b. Wording: \"and drop the relations...\" -> \"and dropping all relations...\"\n\n~~~\n\n3. doc/src/sgml/ref/alter_publication.sgml\n\n+ invoking user to be a superuser. <literal>RESET</literal> of publication\n+ requires invoking user to be a superuser. To alter the owner, you must also\n\nWording: \"requires invoking user\" -> \"requires the invoking user\"\n\n~~~\n\n4. 
doc/src/sgml/ref/alter_publication.sgml - Example\n\n@@ -207,6 +220,12 @@ ALTER PUBLICATION sales_publication ADD ALL\nTABLES IN SCHEMA marketing, sales;\n <structname>production_publication</structname>:\n <programlisting>\n ALTER PUBLICATION production_publication ADD TABLE users,\ndepartments, ALL TABLES IN SCHEMA production;\n+</programlisting></para>\n+\n+ <para>\n+ Resetting the publication <structname>production_publication</structname>:\n+<programlisting>\n+ALTER PUBLICATION production_publication RESET;\n\nWording: \"Resetting the publication\" -> \"Reset the publication\"\n\n~~~\n\n5. src/backend/commands/publicationcmds.c\n\n+ /* Check and reset the options */\n\nIMO the code can just reset all these options unconditionally. I did\nnot see the point to check for existing option values first. I feel\nthe simpler code outweighs any negligible performance difference in\nthis case.\n\n~~~\n\n6. src/backend/commands/publicationcmds.c\n\n+ /* Check and reset the options */\n\nSomehow it seemed a pity having to hardcode all these default values\ntrue/false in multiple places; e.g. the same is already hardcoded in\nthe parse_publication_options function.\n\nTo avoid multiple hard coded bools you could just call the\nparse_publication_options with an empty options list. That would set\nthe defaults which you can then use:\nvalues[Anum_pg_publication_pubinsert - 1] = BoolGetDatum(pubactiondefs->insert);\n\nAlternatively, maybe there should be #defines to use instead of having\nthe scattered hardcoded bool defaults:\n#define PUBACTION_DEFAULT_INSERT true\n#define PUBACTION_DEFAULT_UPDATE true\netc\n\n~~~\n\n7. 
src/include/nodes/parsenodes.h\n\n@@ -4033,7 +4033,8 @@ typedef enum AlterPublicationAction\n {\n AP_AddObjects, /* add objects to publication */\n AP_DropObjects, /* remove objects from publication */\n- AP_SetObjects /* set list of objects */\n+ AP_SetObjects, /* set list of objects */\n+ AP_ReSetPublication /* reset the publication */\n } AlterPublicationAction;\n\nUnusual case: \"AP_ReSetPublication\" -> \"AP_ResetPublication\"\n\n~~~\n\n8. src/test/regress/sql/publication.sql\n\n8a.\n+-- Test for RESET PUBLICATION\nSUGGESTED\n+-- Tests for ALTER PUBLICATION ... RESET\n\n8b.\n+-- Verify that 'ALL TABLES' option is reset\nSUGGESTED:\n+-- Verify that 'ALL TABLES' flag is reset\n\n8c.\n+-- Verify that publish option and publish via root option is reset\nSUGGESTED:\n+-- Verify that publish options and publish_via_partition_root option are reset\n\n8d.\n+-- Verify that only superuser can execute RESET publication\nSUGGESTED\n+-- Verify that only superuser can reset a publication\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 13 May 2022 14:07:17 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Fri, May 13, 2022 at 9:37 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Thu, May 12, 2022 at 2:24 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> ...\n> > The attached patch has the implementation for \"ALTER PUBLICATION\n> > pubname RESET\". This command will reset the publication to default\n> > state which includes resetting the publication options, setting ALL\n> > TABLES option to false and dropping the relations and schemas that are\n> > associated with the publication.\n> >\n>\n> Please see below my review comments for the v1-0001 (RESET) patch\n>\n> ======\n>\n> 1. Commit message\n>\n> This patch adds a new RESET option to ALTER PUBLICATION which\n>\n> Wording: \"RESET option\" -> \"RESET clause\"\n\nModified\n\n> ~~~\n>\n> 2. doc/src/sgml/ref/alter_publication.sgml\n>\n> + <para>\n> + The <literal>RESET</literal> clause will reset the publication to default\n> + state which includes resetting the publication options, setting\n> + <literal>ALL TABLES</literal> option to <literal>false</literal>\n> and drop the\n> + relations and schemas that are associated with the publication.\n> </para>\n>\n> 2a. Wording: \"to default state\" -> \"to the default state\"\n\nModified\n\n> 2b. Wording: \"and drop the relations...\" -> \"and dropping all relations...\"\n\nModified\n\n> ~~~\n>\n> 3. doc/src/sgml/ref/alter_publication.sgml\n>\n> + invoking user to be a superuser. <literal>RESET</literal> of publication\n> + requires invoking user to be a superuser. To alter the owner, you must also\n>\n> Wording: \"requires invoking user\" -> \"requires the invoking user\"\n\nModified\n\n> ~~~\n>\n> 4. 
doc/src/sgml/ref/alter_publication.sgml - Example\n>\n> @@ -207,6 +220,12 @@ ALTER PUBLICATION sales_publication ADD ALL\n> TABLES IN SCHEMA marketing, sales;\n> <structname>production_publication</structname>:\n> <programlisting>\n> ALTER PUBLICATION production_publication ADD TABLE users,\n> departments, ALL TABLES IN SCHEMA production;\n> +</programlisting></para>\n> +\n> + <para>\n> + Resetting the publication <structname>production_publication</structname>:\n> +<programlisting>\n> +ALTER PUBLICATION production_publication RESET;\n>\n> Wording: \"Resetting the publication\" -> \"Reset the publication\"\n\nModified\n\n> ~~~\n>\n> 5. src/backend/commands/publicationcmds.c\n>\n> + /* Check and reset the options */\n>\n> IMO the code can just reset all these options unconditionally. I did\n> not see the point to check for existing option values first. I feel\n> the simpler code outweighs any negligible performance difference in\n> this case.\n\nModified\n\n> ~~~\n>\n> 6. src/backend/commands/publicationcmds.c\n>\n> + /* Check and reset the options */\n>\n> Somehow it seemed a pity having to hardcode all these default values\n> true/false in multiple places; e.g. the same is already hardcoded in\n> the parse_publication_options function.\n>\n> To avoid multiple hard coded bools you could just call the\n> parse_publication_options with an empty options list. That would set\n> the defaults which you can then use:\n> values[Anum_pg_publication_pubinsert - 1] = BoolGetDatum(pubactiondefs->insert);\n>\n> Alternatively, maybe there should be #defines to use instead of having\n> the scattered hardcoded bool defaults:\n> #define PUBACTION_DEFAULT_INSERT true\n> #define PUBACTION_DEFAULT_UPDATE true\n> etc\n\nI have used #define for default value and used it in both the functions.\n\n> ~~~\n>\n> 7. 
src/include/nodes/parsenodes.h\n>\n> @@ -4033,7 +4033,8 @@ typedef enum AlterPublicationAction\n> {\n> AP_AddObjects, /* add objects to publication */\n> AP_DropObjects, /* remove objects from publication */\n> - AP_SetObjects /* set list of objects */\n> + AP_SetObjects, /* set list of objects */\n> + AP_ReSetPublication /* reset the publication */\n> } AlterPublicationAction;\n>\n> Unusual case: \"AP_ReSetPublication\" -> \"AP_ResetPublication\"\n\nModified\n\n> ~~~\n>\n> 8. src/test/regress/sql/publication.sql\n>\n> 8a.\n> +-- Test for RESET PUBLICATION\n> SUGGESTED\n> +-- Tests for ALTER PUBLICATION ... RESET\n\nModified\n\n> 8b.\n> +-- Verify that 'ALL TABLES' option is reset\n> SUGGESTED:\n> +-- Verify that 'ALL TABLES' flag is reset\n\nModified\n\n> 8c.\n> +-- Verify that publish option and publish via root option is reset\n> SUGGESTED:\n> +-- Verify that publish options and publish_via_partition_root option are reset\n\nModified\n\n> 8d.\n> +-- Verify that only superuser can execute RESET publication\n> SUGGESTED\n> +-- Verify that only superuser can reset a publication\n\nModified\n\nThanks for the comments, the attached v5 patch has the changes for the\nsame. Also I have made the changes for SKIP Table based on the new\nsyntax, the changes for the same are available in\nv5-0002-Skip-publishing-the-tables-specified-in-EXCEPT-TA.patch.\n\nRegards,\nVignesh",
"msg_date": "Sat, 14 May 2022 19:02:54 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Saturday, May 14, 2022 10:33 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> Thanks for the comments, the attached v5 patch has the changes for the same.\r\n> Also I have made the changes for SKIP Table based on the new syntax, the\r\n> changes for the same are available in\r\n> v5-0002-Skip-publishing-the-tables-specified-in-EXCEPT-TA.patch.\r\nHi,\r\n\r\n\r\nThank you for updating the patch.\r\nI'll share few minor review comments on v5-0001.\r\n\r\n\r\n(1) doc/src/sgml/ref/alter_publication.sgml\r\n\r\n@@ -73,12 +85,13 @@ ALTER PUBLICATION <replaceable class=\"parameter\">name</replaceable> RENAME TO <r\r\n Adding a table to a publication additionally requires owning that table.\r\n The <literal>ADD ALL TABLES IN SCHEMA</literal> and\r\n <literal>SET ALL TABLES IN SCHEMA</literal> to a publication requires the\r\n- invoking user to be a superuser. To alter the owner, you must also be a\r\n- direct or indirect member of the new owning role. The new owner must have\r\n- <literal>CREATE</literal> privilege on the database. Also, the new owner\r\n- of a <literal>FOR ALL TABLES</literal> or <literal>FOR ALL TABLES IN\r\n- SCHEMA</literal> publication must be a superuser. However, a superuser can\r\n- change the ownership of a publication regardless of these restrictions.\r\n+ invoking user to be a superuser. <literal>RESET</literal> of publication\r\n+ requires the invoking user to be a superuser. To alter the owner, you must\r\n...\r\n\r\n\r\nI suggest to combine the first part of your change with one existing sentence\r\nbefore your change, to make our description concise.\r\n\r\nFROM:\r\n\"The <literal>ADD ALL TABLES IN SCHEMA</literal> and\r\n<literal>SET ALL TABLES IN SCHEMA</literal> to a publication requires the\r\ninvoking user to be a superuser. 
<literal>RESET</literal> of publication\r\nrequires the invoking user to be a superuser.\"\r\n\r\nTO:\r\n\"The <literal>ADD ALL TABLES IN SCHEMA</literal>,\r\n<literal>SET ALL TABLES IN SCHEMA</literal> to a publication and\r\n<literal>RESET</literal> of publication requires the invoking user to be a superuser.\"\r\n\r\n\r\n(2) typo\r\n\r\n+++ b/src/backend/commands/publicationcmds.c\r\n@@ -53,6 +53,13 @@\r\n #include \"utils/syscache.h\"\r\n #include \"utils/varlena.h\"\r\n\r\n+#define PUB_ATION_INSERT_DEFAULT true\r\n+#define PUB_ACTION_UPDATE_DEFAULT true\r\n\r\n\r\nKindly change\r\nFROM:\r\n\"PUB_ATION_INSERT_DEFAULT\"\r\nTO:\r\n\"PUB_ACTION_INSERT_DEFAULT\"\r\n\r\n\r\n(3) src/test/regress/expected/publication.out\r\n\r\n+-- Verify that only superuser can reset a publication\r\n+ALTER PUBLICATION testpub_reset OWNER TO regress_publication_user2;\r\n+SET ROLE regress_publication_user2;\r\n+ALTER PUBLICATION testpub_reset RESET; -- fail\r\n\r\n\r\nWe have \"-- fail\" for one case in this patch.\r\nOn the other hand, isn't better to add \"-- ok\" (or \"-- success\") for\r\nother successful statements,\r\nwhen we consider the entire tests description consistency ?\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Mon, 16 May 2022 03:02:26 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping schema changes in publication"
},
{
"msg_contents": "On Saturday, May 14, 2022 10:33 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> Thanks for the comments, the attached v5 patch has the changes for the same.\r\n> Also I have made the changes for SKIP Table based on the new syntax, the\r\n> changes for the same are available in\r\n> v5-0002-Skip-publishing-the-tables-specified-in-EXCEPT-TA.patch.\r\nHi,\r\n\r\n\r\n\r\nSeveral comments on v5-0002.\r\n\r\n(1) One unnecessary space before \"except_pub_obj_list\" syntax definition\r\n\r\n+ except_pub_obj_list: ExceptPublicationObjSpec\r\n+ { $$ = list_make1($1); }\r\n+ | except_pub_obj_list ',' ExceptPublicationObjSpec\r\n+ { $$ = lappend($1, $3); }\r\n+ | /*EMPTY*/ { $$ = NULL; }\r\n+ ;\r\n+\r\n\r\nFrom above part, kindly change\r\nFROM:\r\n\" except_pub_obj_list: ExceptPublicationObjSpec\"\r\nTO:\r\n\"except_pub_obj_list: ExceptPublicationObjSpec\"\r\n\r\n\r\n(2) doc/src/sgml/ref/create_publication.sgml\r\n\r\n(2-1)\r\n\r\n@@ -22,7 +22,7 @@ PostgreSQL documentation\r\n <refsynopsisdiv>\r\n <synopsis>\r\n CREATE PUBLICATION <replaceable class=\"parameter\">name</replaceable>\r\n- [ FOR ALL TABLES\r\n+ [ FOR ALL TABLES [EXCEPT TABLE [ ONLY ] <replaceable class=\"parameter\">table_name</replaceable> [ * ] [, ... ]]\r\n | FOR <replaceable class=\"parameter\">publication_object</replaceable> [, ... ] ]\r\n [ WITH ( <replaceable class=\"parameter\">publication_parameter</replaceable> [= <replaceable class=\"parameter\">value</replaceable>] [, ... ] ) ]\r\n\r\n\r\nHere I think we need to add two more whitespaces around square brackets.\r\nPlease change\r\nFROM:\r\n\"[ FOR ALL TABLES [EXCEPT TABLE [ ONLY ] <replaceable class=\"parameter\">table_name</replaceable> [ * ] [, ... ]]\"\r\nTO:\r\n\"[ FOR ALL TABLES [ EXCEPT TABLE [ ONLY ] <replaceable class=\"parameter\">table_name</replaceable> [ * ] [, ... 
] ]\"\r\n\r\nWhen I check other documentation, I see whitespace before/after square brackets.\r\n\r\n(2-2)\r\nThis whitespace alignment applies to alter_publication.sgml as well.\r\n\r\n(3)\r\n\r\n\r\n@@ -156,6 +156,24 @@ CREATE PUBLICATION <replaceable class=\"parameter\">name</replaceable>\r\n </listitem>\r\n </varlistentry>\r\n\r\n+\r\n+ <varlistentry>\r\n+ <term><literal>EXCEPT TABLE</literal></term>\r\n+ <listitem>\r\n+ <para>\r\n+ Marks the publication as one that excludes replicating changes for the\r\n+ specified tables.\r\n+ </para>\r\n+\r\n+ <para>\r\n+ <literal>EXCEPT TABLE</literal> can be specified only for\r\n+ <literal>FOR ALL TABLES</literal> publication. It is not supported for\r\n+ <literal>FOR ALL TABLES IN SCHEMA </literal> publication and\r\n+ <literal>FOR TABLE</literal> publication.\r\n+ </para>\r\n+ </listitem>\r\n+ </varlistentry>\r\n+\r\n\r\nThis EXCEPT TABLE clause is only for FOR ALL TABLES.\r\nSo, how about extracting the main message from the above part and\r\nmoving it to an existing paragraph below, instead of having one independent paragraph?\r\n\r\n <varlistentry>\r\n <term><literal>FOR ALL TABLES</literal></term>\r\n <listitem>\r\n <para>\r\n Marks the publication as one that replicates changes for all tables in\r\n the database, including tables created in the future.\r\n </para>\r\n </listitem>\r\n </varlistentry>\r\n\r\nSomething like\r\n\"Marks the publication as one that replicates changes for all tables in\r\nthe database, including tables created in the future. 
EXCEPT TABLE indicates\r\nexcluded tables for the defined publication.\r\n\"\r\n\r\n\r\n(4) One minor confirmation about the syntax\r\n\r\nCurrently, we allow one way of writing to indicate excluded tables like below.\r\n\r\n(example) CREATE PUBLICATION mypub FOR ALL TABLES EXCEPT TABLE tab3, tab4, EXCEPT TABLE tab5;\r\n\r\nThis is because we define ExceptPublicationObjSpec with EXCEPT TABLE.\r\nIs it OK to have room to write duplicate \"EXCEPT TABLE\" clauses?\r\nI think there is no harm in having this,\r\nbut I'd like to confirm whether this syntax should be adjusted or not.\r\n\r\n\r\n(5) CheckAlterPublication\r\n\r\n+\r\n+ if (excepttable && !stmt->for_all_tables)\r\n+ ereport(ERROR,\r\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n+ errmsg(\"publication \\\"%s\\\" is not defined as FOR ALL TABLES\",\r\n+ NameStr(pubform->pubname)),\r\n+ errdetail(\"except table cannot be added to, dropped from, or set on NON ALL TABLES publications.\")));\r\n\r\nCould you please add a test for this?\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Mon, 16 May 2022 08:29:58 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping schema changes in publication"
},
{
"msg_contents": "Below are my review comments for v5-0001.\n\nThere is some overlap with comments recently posted by Osumi-san [1].\n\n(I also have review comments for v5-0002; will post them tomorrow)\n\n======\n\n1. Commit message\n\nThis patch adds a new RESET clause to ALTER PUBLICATION which will reset\nthe publication to default state which includes resetting the publication\noptions, setting ALL TABLES option to false and dropping the relations and\nschemas that are associated with the publication.\n\nSUGGEST\n\"to default state\" -> \"to the default state\"\n\"ALL TABLES option\" -> \"ALL TABLES flag\"\n\n~~~\n\n2. doc/src/sgml/ref/alter_publication.sgml\n\n+ <para>\n+ The <literal>RESET</literal> clause will reset the publication to the\n+ default state which includes resetting the publication options, setting\n+ <literal>ALL TABLES</literal> option to <literal>false</literal> and\n+ dropping all relations and schemas that are associated with the publication.\n </para>\n\n\"ALL TABLES option\" -> \"ALL TABLES flag\"\n\n~~~\n\n3. doc/src/sgml/ref/alter_publication.sgml\n\n+ invoking user to be a superuser. <literal>RESET</literal> of publication\n+ requires the invoking user to be a superuser. To alter the owner, you must\n\nSUGGESTION\nTo <literal>RESET</literal> a publication requires the invoking user\nto be a superuser.\n\n~~~\n\n4. 
src/backend/commands/publicationcmds.c\n\n@@ -53,6 +53,13 @@\n #include \"utils/syscache.h\"\n #include \"utils/varlena.h\"\n\n+#define PUB_ATION_INSERT_DEFAULT true\n+#define PUB_ACTION_UPDATE_DEFAULT true\n+#define PUB_ACTION_DELETE_DEFAULT true\n+#define PUB_ACTION_TRUNCATE_DEFAULT true\n+#define PUB_VIA_ROOT_DEFAULT false\n+#define PUB_ALL_TABLES_DEFAULT false\n\n4a.\nTypo: \"ATION\" -> \"ACTION\"\n\n4b.\nI think these #defines deserve a 1 line comment.\ne.g.\n/* CREATE PUBLICATION default values for flags and options */\n\n4c.\nSince the \"_DEFAULT\" is a common part of all the names, maybe it is\ntidier if it comes first.\ne.g.\n#define PUB_DEFAULT_ACTION_INSERT true\n#define PUB_DEFAULT_ACTION_UPDATE true\n#define PUB_DEFAULT_ACTION_DELETE true\n#define PUB_DEFAULT_ACTION_TRUNCATE true\n#define PUB_DEFAULT_VIA_ROOT false\n#define PUB_DEFAULT_ALL_TABLES false\n\n------\n[1] https://www.postgresql.org/message-id/TYCPR01MB8373C3120C2B3112001ED6F1EDCF9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 16 May 2022 19:23:14 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "Below are my review comments for v5-0002.\n\nThere may be an overlap with comments recently posted by Osumi-san [1].\n\n(I also have review comments for v5-0002; will post them tomorrow)\n\n======\n\n1. General\n\nIs it really necessary to have to say \"EXCEPT TABLE\" instead of just\n\"EXCEPT\". It seems unnecessarily verbose and redundant when you write\n\"FOR ALL TABLES EXCEPT TABLE...\".\n\nIf you want to keep this TABLE keyword (maybe you have plans for other\nkinds of except?) then IMO perhaps at least it can be the optional\ndefault except type. e.g. EXCEPT [TABLE].\n\n~~~\n\n2. General\n\n(I was unsure whether to even mention this one).\n\nI understand the \"EXCEPT\" is chosen as the user-facing syntax, but it\nstill seems strange when reading the patch to see attribute members\nand column names called 'except'. I think the problem is that \"except\"\nis not a verb, so saying except=t/f just does not make much sense.\nSometimes I feel that for all the internal usage\n(code/comments/catalog) using \"skip\" and \"skip-list\" etc would be a\nmuch better choice of names. OTOH I can see that having consistency\nwith the outside syntax might also be good. Anyway, please consider -\nmaybe other people feel the same?\n\n~~~\n\n3. General\n\nThe ONLY keyword seems supported by the syntax for tables of the\nexcept-list (more on this in later comments) but:\na) I am not sure if the patch code is accounting for that, and\nb) There are no test cases using ONLY.\n\n~~~\n\n4. Commit message\n\nA new option \"EXCEPT TABLE\" in Create/Alter Publication allows\none or more tables to be excluded, publisher will exclude sending the data\nof the excluded tables to the subscriber.\n\nSUGGESTION\nA new \"EXCEPT TABLE\" clause for CREATE/ALTER PUBLICATION allows one or\nmore tables to be excluded. The publisher will not send the data of\nexcluded tables to the subscriber.\n\n~~\n\n5. 
Commit message\n\nThe new syntax allows specifying exclude relations while creating a publication\nor exclude relations in alter publication. For example:\n\nSUGGESTION\nThe new syntax allows specifying excluded relations when creating or\naltering a publication. For example:\n\n~~~\n\n6. Commit message\n\nA new column prexcept is added to table \"pg_publication_rel\", to maintain\nthe relations that the user wants to exclude publishing through the publication.\n\nSUGGESTION\nA new column \"prexcept\" is added to table \"pg_publication_rel\", to\nmaintain the relations that the user wants to exclude from the\npublications.\n\n~~~\n\n7. Commit message\n\nModified the output plugin (pgoutput) to exclude publishing the changes of the\nexcluded tables.\n\nI did not feel it was necessary to say this. It is already said above\nthat the data is not sent, so that seems enough.\n\n~~~\n\n8. Commit message\n\nUpdates pg_dump to identify and dump the excluded tables of the publications.\nUpdates the \\d family of commands to display excluded tables of the\npublications and \\dRp+ variant will now display associated except tables if any.\n\nSUGGESTION\npg_dump is updated to identify and dump the excluded tables of the publications.\n\nThe psql \\d family of commands to display excluded tables. e.g. psql\n\\dRp+ variant will now display associated \"except tables\" if any.\n\n~~~\n\n9. doc/src/sgml/catalogs.sgml\n\n@@ -6426,6 +6426,15 @@ SCRAM-SHA-256$<replaceable><iteration\ncount></replaceable>:<replaceable>&l\n if there is no publication qualifying condition.</para></entry>\n </row>\n\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>prexcept</structfield> <type>bool</type>\n+ </para>\n+ <para>\n+ True if the table must be excluded\n+ </para></entry>\n+ </row>\n\nOther descriptions on this page refer to \"relation\" instead of\n\"table\". Probably this should do the same to be consistent.\n\n~~~\n\n10. 
doc/src/sgml/logical-replication.sgml\n\n@@ -1167,8 +1167,9 @@ CONTEXT: processing remote data for replication\norigin \"pg_16395\" during \"INSER\n <para>\n To add tables to a publication, the user must have ownership rights on the\n table. To add all tables in schema to a publication, the user must be a\n- superuser. To create a publication that publishes all tables or\nall tables in\n- schema automatically, the user must be a superuser.\n+ superuser. To add all tables to a publication, the user must be a superuser.\n+ To create a publication that publishes all tables or all tables in schema\n+ automatically, the user must be a superuser.\n </para>\n\nIt seems like a valid change but how is this related to this EXCEPT\npatch. Maybe this fix should be patched separately?\n\n~~~\n\n11. doc/src/sgml/ref/alter_publication.sgml\n\n@@ -22,6 +22,7 @@ PostgreSQL documentation\n <refsynopsisdiv>\n <synopsis>\n ALTER PUBLICATION <replaceable class=\"parameter\">name</replaceable>\nADD <replaceable class=\"parameter\">publication_object</replaceable> [,\n...]\n+ALTER PUBLICATION <replaceable class=\"parameter\">name</replaceable>\nADD ALL TABLES [EXCEPT TABLE [ ONLY ] <replaceable\nclass=\"parameter\">table_name</replaceable> [ * ] [, ... ]]\n\nThe [ONLY] looks misplaced when the syntax is described like this. For\nexample, in practice it is possible to write \"EXCEPT TABLE ONLY t1,\nONLY t2, t3, ONLY t4\" but it doesn't seem that way by looking at these\nPG DOCS.\n\nIMO would be better described like this:\n\n[ FOR ALL TABLES [ EXCEPT TABLE exception_object [,...] ]]\n\nwhere exception_object is:\n\n [ ONLY ] table_name [ * ]\n\n~~~\n\n12. 
doc/src/sgml/ref/alter_publication.sgml\n\n@@ -82,8 +83,8 @@ ALTER PUBLICATION <replaceable\nclass=\"parameter\">name</replaceable> RESET\n\n <para>\n You must own the publication to use <command>ALTER PUBLICATION</command>.\n- Adding a table to a publication additionally requires owning that table.\n- The <literal>ADD ALL TABLES IN SCHEMA</literal> and\n+ Adding a table or excluding a table to a publication additionally requires\n+ owning that table. The <literal>ADD ALL TABLES IN SCHEMA</literal> and\n\nSUGGESTION\nAdding a table to or excluding a table from a publication additionally\nrequires owning that table.\n\n~~~\n\n13. doc/src/sgml/ref/alter_publication.sgml\n\n@@ -213,6 +214,14 @@ ALTER PUBLICATION sales_publication ADD ALL\nTABLES IN SCHEMA marketing, sales;\n </programlisting>\n </para>\n\n+ <para>\n+ Alter publication <structname>production_publication</structname> that\n+ publishes all tables except <structname>users</structname> and\n+ <structname>departments</structname> tables:\n+<programlisting>\n\n\"that publishes\" -> \"to publish\"\n\n~~~\n\n14. doc/src/sgml/ref/create_publication.sgml\n\n(Same comment about the ONLY syntax as #11)\n\n~~~\n\n15. doc/src/sgml/ref/create_publication.sgml\n\n+ <varlistentry>\n+ <term><literal>EXCEPT TABLE</literal></term>\n+ <listitem>\n+ <para>\n+ Marks the publication as one that excludes replicating changes for the\n+ specified tables.\n+ </para>\n+\n+ <para>\n+ <literal>EXCEPT TABLE</literal> can be specified only for\n+ <literal>FOR ALL TABLES</literal> publication. It is not supported for\n+ <literal>FOR ALL TABLES IN SCHEMA </literal> publication and\n+ <literal>FOR TABLE</literal> publication.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n\nIMO you can remove all that \"It is not supported for...\" sentence. You\ndon't need to spell that out again when it is already clear from the\nsyntax.\n\n~~~\n\n16. 
doc/src/sgml/ref/psql-ref.sgml\n\n@@ -1868,8 +1868,9 @@ testdb=>\n If <replaceable class=\"parameter\">pattern</replaceable> is\n specified, only those publications whose names match the pattern are\n listed.\n- If <literal>+</literal> is appended to the command name, the tables and\n- schemas associated with each publication are shown as well.\n+ If <literal>+</literal> is appended to the command name, the tables,\n+ excluded tables and schemas associated with each publication\nare shown as\n+ well.\n </para>\n\nPerhaps this is OK just as-is, but OTOH I felt that the change was\nalmost unnecessary because saying it displays \"the tables\" kind of\nimplies it would also have to account for the \"excluded tables\" too.\n\n~~~\n\n17. src/backend/catalog/pg_publication.c - GetTopMostAncestorInPublication\n\n@@ -302,8 +303,9 @@ GetTopMostAncestorInPublication(Oid puboid, List\n*ancestors, int *ancestor_level\n foreach(lc, ancestors)\n {\n Oid ancestor = lfirst_oid(lc);\n- List *apubids = GetRelationPublications(ancestor);\n+ List *apubids = GetRelationPublications(ancestor, false);\n List *aschemaPubids = NIL;\n+ List *aexceptpubids = NIL;\n\n17a.\nI think the var \"aschemaPubids\" and \"aexceptpubids\" are only used in\nthe 'else' block so it seems better they can be declared and freed in\nthat block too instead of always.\n\n17b.\nAlso, the camel-case of those variables is inconsistent so may fix\nthat at the same time.\n\n~~~\n\n18. src/backend/catalog/pg_publication.c - GetRelationPublications\n\n@@ -666,7 +673,7 @@ publication_add_schema(Oid pubid, Oid schemaid,\nbool if_not_exists)\n\n /* Gets list of publication oids for a relation */\n List *\n-GetRelationPublications(Oid relid)\n+GetRelationPublications(Oid relid, bool bexcept)\n\n18a.\nI felt that \"except_flag\" is a better name than \"bexcept\" for this param.\n\n18b.\nThe function comment should be updated to say only relations matching\nthis except_flag are returned in the list.\n\n~~~\n\n19. 
src/backend/catalog/pg_publication.c - GetAllTablesPublicationRelations\n\n@@ -787,6 +795,15 @@ GetAllTablesPublicationRelations(bool pubviaroot)\n HeapTuple tuple;\n List *result = NIL;\n\n+ /*\n+ * pg_publication_rel and pg_publication_namespace will only have excluded\n+ * tables in case of all tables publication, no need to pass except flag\n+ * to get the relations.\n+ */\n+ List *exceptpubtablelist;\n+\n+ exceptpubtablelist = GetPublicationRelations(pubid, PUBLICATION_PART_ALL);\n+\n\n19a.\nI wasn't very sure of the meaning/intent of the comment, but IIUC it\nseems to be explaining why it is not necessary to use an \"except_flag\"\nparameter in this code. Is it necessary/helpful to explain parameters\nthat do NOT exist?\n\n19b.\nThe var name \"exceptpubtablelist\" seems a bit overkill. (e.g.\n\"excepttablelist\" or \"exceptlist\" etc... are shorter but seem equally\ninformative).\n\n~~~\n\n20. src/backend/commands/publicationcmds.c - CreatePublication\n\n@@ -843,54 +849,52 @@ CreatePublication(ParseState *pstate,\nCreatePublicationStmt *stmt)\n /* Make the changes visible. */\n CommandCounterIncrement();\n\n- /* Associate objects with the publication. */\n- if (stmt->for_all_tables)\n- {\n- /* Invalidate relcache so that publication info is rebuilt. 
*/\n- CacheInvalidateRelcacheAll();\n- }\n- else\n- {\n- ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,\n- &schemaidlist);\n+ ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,\n+ &schemaidlist);\n\n- /* FOR ALL TABLES IN SCHEMA requires superuser */\n- if (list_length(schemaidlist) > 0 && !superuser())\n- ereport(ERROR,\n- errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n- errmsg(\"must be superuser to create FOR ALL TABLES IN SCHEMA publication\"));\n+ /* FOR ALL TABLES IN SCHEMA requires superuser */\n+ if (list_length(schemaidlist) > 0 && !superuser())\n+ ereport(ERROR,\n+ errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n+ errmsg(\"must be superuser to create FOR ALL TABLES IN SCHEMA publication\"));\n\n- if (list_length(relations) > 0)\n- {\n- List *rels;\n+ if (list_length(relations) > 0)\n+ {\n+ List *rels;\n\n- rels = OpenTableList(relations);\n- CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,\n- PUBLICATIONOBJ_TABLE);\n+ rels = OpenTableList(relations);\n+ CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,\n+ PUBLICATIONOBJ_TABLE);\n\n- TransformPubWhereClauses(rels, pstate->p_sourcetext,\n- publish_via_partition_root);\n+ TransformPubWhereClauses(rels, pstate->p_sourcetext,\n+ publish_via_partition_root);\n\n- CheckPubRelationColumnList(rels, pstate->p_sourcetext,\n- publish_via_partition_root);\n+ CheckPubRelationColumnList(rels, pstate->p_sourcetext,\n+ publish_via_partition_root);\n\n- PublicationAddTables(puboid, rels, true, NULL);\n- CloseTableList(rels);\n- }\n+ PublicationAddTables(puboid, rels, true, NULL);\n+ CloseTableList(rels);\n+ }\n\n- if (list_length(schemaidlist) > 0)\n- {\n- /*\n- * Schema lock is held until the publication is created to prevent\n- * concurrent schema deletion.\n- */\n- LockSchemaList(schemaidlist);\n- PublicationAddSchemas(puboid, schemaidlist, true, NULL);\n- }\n+ if (list_length(schemaidlist) > 0)\n+ {\n+ /*\n+ * Schema lock is held until the publication is created to prevent\n+ * 
concurrent schema deletion.\n+ */\n+ LockSchemaList(schemaidlist);\n+ PublicationAddSchemas(puboid, schemaidlist, true, NULL);\n }\n\n table_close(rel, RowExclusiveLock);\n\n+ /* Associate objects with the publication. */\n+ if (stmt->for_all_tables)\n+ {\n+ /* Invalidate relcache so that publication info is rebuilt. */\n+ CacheInvalidateRelcacheAll();\n+ }\n+\n\nThis function is refactored a lot to not use \"if/else\" as it did\nbefore. But AFAIK (maybe I misunderstood) this refactor doesn't seem\nto actually have anything to do with the EXCEPT patch. If it really is\nunrelated maybe it should not be part of this patch.\n\n~~~\n\n21. src/backend/commands/publicationcmds.c - CheckPublicationDefValues\n\n+ if (pubform->puballtables)\n+ return false;\n+\n+ if (!pubform->pubinsert || !pubform->pubupdate || !pubform->pubdelete ||\n+ !pubform->pubtruncate || pubform->pubviaroot)\n+ return false;\n\nNow you have all the #define for the PUB_DEFAULT_XXX values, perhaps\nthis function should be using them instead of the hardcoded\nassumptions what the default values are.\n\ne.g.\n\nif (pubform->puballtables != PUB_DEFAULT_ALL_TABLES) return false;\nif (pubform->pubinsert != PUB_DEFAULT_ACTION_INSERT) return false;\n...\netc.\n\n~~~\n\n22. src/backend/commands/publicationcmds.c - CheckAlterPublication\n\n\n@@ -1442,6 +1516,19 @@ CheckAlterPublication(AlterPublicationStmt\n*stmt, HeapTuple tup,\n List *tables, List *schemaidlist)\n {\n Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);\n+ ListCell *lc;\n+ bool nonexcepttable = false;\n+ bool excepttable = false;\n+\n+ foreach(lc, tables)\n+ {\n+ PublicationTable *pub_table = lfirst_node(PublicationTable, lc);\n+\n+ if (!pub_table->except)\n+ nonexcepttable = true;\n+ else\n+ excepttable = true;\n+ }\n\n22a.\nThe names are very confusing. e.g. 
\"nonexcepttable\" is like a double-negative.\n\nSUGGEST:\nbool has_tables = false;\nbool has_except_tables = false;\n\n22b.\nReverse the \"if\" condition to be positive instead of negative (remove !)\ne.g.\nif (pub_table->except)\nhas_except_table = true;\nelse\nhas_table = true;\n\n~~~\n\n23. src/backend/commands/publicationcmds.c - CheckAlterPublication\n\n@@ -1461,12 +1548,19 @@ CheckAlterPublication(AlterPublicationStmt\n*stmt, HeapTuple tup,\n errdetail(\"Tables from schema cannot be added to, dropped from, or\nset on FOR ALL TABLES publications.\")));\n\n /* Check that user is allowed to manipulate the publication tables. */\n- if (tables && pubform->puballtables)\n+ if (nonexcepttable && tables && pubform->puballtables)\n ereport(ERROR,\n\nSeems no reason for \"tables\" to be in the condition since\n\"nonexcepttable\" can't be true if \"tables\" is NIL.\n\n~~~\n\n24. src/backend/commands/publicationcmds.c - CheckAlterPublication\n\n+\n+ if (excepttable && !stmt->for_all_tables)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"publication \\\"%s\\\" is not defined as FOR ALL TABLES\",\n+ NameStr(pubform->pubname)),\n+ errdetail(\"except table cannot be added to, dropped from, or set on\nNON ALL TABLES publications.\")));\n\nThe errdetail message seems over-complex.\n\nSUGGESTION\n\"EXCEPT TABLE clause is only allowed for FOR ALL TABLES publications.\"\n\n~~~\n\n25. 
src/backend/commands/publicationcmds.c - AlterPublication\n\n@@ -1500,6 +1594,20 @@ AlterPublication(ParseState *pstate,\nAlterPublicationStmt *stmt)\n aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_PUBLICATION,\n stmt->pubname);\n\n+ if (stmt->for_all_tables)\n+ {\n+ bool isdefault = CheckPublicationDefValues(tup);\n+\n+ if (!isdefault)\n+ ereport(ERROR,\n+ errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n+ errmsg(\"Setting ALL TABLES requires publication \\\"%s\\\" to have\ndefault values\",\n+ stmt->pubname),\n+ errhint(\"Either the publication has tables/schemas associated or\ndoes not have default publication options or ALL TABLES option is\nset.\"));\n\nThe errhint message seems over-complex.\n\nSUGGESTION\n\"Use ALTER PUBLICATION ... RESET\"\n\n~~~\n\n26. src/bin/pg_dump/pg_dump.c - dumpPublication\n\n@@ -3980,8 +3982,34 @@ dumpPublication(Archive *fout, const\nPublicationInfo *pubinfo)\n qpubname);\n\n if (pubinfo->puballtables)\n+ {\n+ SimplePtrListCell *cell;\n+ bool first = true;\n appendPQExpBufferStr(query, \" FOR ALL TABLES\");\n\n+ /* Include exception tables if the publication has except tables */\n+ for (cell = exceptinfo.head; cell; cell = cell->next)\n+ {\n+ PublicationRelInfo *pubrinfo = (PublicationRelInfo *) cell->ptr;\n+ PublicationInfo *relpubinfo = pubrinfo->publication;\n+ TableInfo *tbinfo;\n+\n+ if (pubinfo == relpubinfo)\n+ {\n+ tbinfo = pubrinfo->pubtable;\n+\n+ if (first)\n+ {\n+ appendPQExpBufferStr(query, \" EXCEPT TABLE ONLY\");\n+ first = false;\n+ }\n+ else\n+ appendPQExpBufferStr(query, \", \");\n+ appendPQExpBuffer(query, \" %s\", fmtQualifiedDumpable(tbinfo));\n+ }\n+ }\n+ }\n+\n\nIIUC this usage of ONLY looks incorrect.\n\n26a.\nFirstly, if you want to hardwire ONLY then shouldn't it apply to every\nof the except-list table, not just the first one? e.g. \"EXCEPT TABLE\nONLY t1, ONLY t2, ONLY t3...\"\n\n26b.\nSecondly, is it even correct to unconditionally hardwire the ONLY? 
How\ndo you know that is how the user wanted it?\n\n~~~\n\n27. src/bin/pg_dump/pg_dump.c\n\n@@ -127,6 +127,8 @@ static SimpleOidList foreign_servers_include_oids\n= {NULL, NULL};\n static SimpleStringList extension_include_patterns = {NULL, NULL};\n static SimpleOidList extension_include_oids = {NULL, NULL};\n\n+static SimplePtrList exceptinfo = {NULL, NULL};\n+\n\nProbably I just did not understand how this logic works, but how does\nthis static work properly if there are multiple publications and 2\ndifferent EXCEPT lists? E.g. where is it clearing the \"exceptinfo\" so\nthat multiple EXCEPT TABLE lists don't become muddled?\n\n~~~\n\n28. src/bin/pg_dump/pg_dump.c - dumpPublicationTable\n\n@@ -4330,8 +4378,11 @@ dumpPublicationTable(Archive *fout, const\nPublicationRelInfo *pubrinfo)\n\n query = createPQExpBuffer();\n\n- appendPQExpBuffer(query, \"ALTER PUBLICATION %s ADD TABLE ONLY\",\n+ appendPQExpBuffer(query, \"ALTER PUBLICATION %s ADD \",\n fmtId(pubinfo->dobj.name));\n+\n+ appendPQExpBufferStr(query, \"TABLE ONLY\");\n+\n\nThat code refactor does not seem necessary for this patch.\n\n~~~\n\n29. src/bin/pg_dump/pg_dump_sort.c\n\n@@ -90,6 +90,7 @@ enum dbObjectTypePriorities\n PRIO_FK_CONSTRAINT,\n PRIO_POLICY,\n PRIO_PUBLICATION,\n+ PRIO_PUBLICATION_EXCEPT_REL,\n PRIO_PUBLICATION_REL,\n PRIO_PUBLICATION_TABLE_IN_SCHEMA,\n PRIO_SUBSCRIPTION,\n\nI'm not sure how this enum is used (so perhaps this makes no\ndifference) but judging by the enum comment why did you put the sort\npriority order PRIO_PUBLICATION_EXCEPT_REL before\nPRIO_PUBLICATION_REL. Wouldn’t it make more sense the other way\naround?\n\n~~~\n\n30. 
src/bin/psql/describe.c\n\n@@ -2950,17 +2950,34 @@ describeOneTableDetails(const char *schemaname,\n \" WHERE attrelid = pr.prrelid AND attnum = prattrs[s])\\n\"\n \" ELSE NULL END) \"\n \"FROM pg_catalog.pg_publication p\\n\"\n- \" JOIN pg_catalog.pg_publication_rel pr ON p.oid = pr.prpubid\\n\"\n- \" JOIN pg_catalog.pg_class c ON c.oid = pr.prrelid\\n\"\n- \"WHERE pr.prrelid = '%s'\\n\"\n- \"UNION\\n\"\n+ \" JOIN pg_catalog.pg_publication_rel pr ON p.oid = pr.prpubid\\n\"\n+ \" JOIN pg_catalog.pg_class c ON c.oid = pr.prrelid\\n\"\n+ \"WHERE pr.prrelid = '%s'\",\n+ oid, oid, oid);\n\nI feel that trailing \"\\n\" (\"WHERE pr.prrelid = '%s'\\n\") should not\nhave been removed.\n\n~~~\n\n31. src/bin/psql/describe.c\n\n+ /* FIXME: 150000 should be changed to 160000 later for PG16. */\n+ if (pset.sversion >= 150000)\n+ appendPQExpBufferStr(&buf, \" AND pr.prexcept = 'f'\\n\");\n+\n+ appendPQExpBuffer(&buf, \"UNION\\n\"\n\nThe \"UNION\\n\" param might be better wrapped onto the next line like it\nused to be.\n\n~~~\n\n32. src/bin/psql/describe.c\n\n+ /* FIXME: 150000 should be changed to 160000 later for PG16. */\n+ if (pset.sversion >= 150000)\n+ appendPQExpBuffer(&buf,\n+ \" AND NOT EXISTS (SELECT 1\\n\"\n+ \" FROM pg_catalog.pg_publication_rel pr\\n\"\n+ \" JOIN pg_catalog.pg_class pc\\n\"\n+ \" ON pr.prrelid = pc.oid\\n\"\n+ \" WHERE pr.prrelid = '%s' AND pr.prpubid = p.oid)\\n\",\n+ oid);\n\nThe whitespace indents in the SQL seem excessive here.\n\n~~~\n\n33. src/bin/psql/describe.c - describePublications\n\n@@ -6322,6 +6344,22 @@ describePublications(const char *pattern)\n }\n }\n\n+ /* FIXME: 150000 should be changed to 160000 later for PG16. 
*/\n+ if (pset.sversion >= 150000)\n+ {\n+ /* Get the excluded tables for the specified publication */\n+ printfPQExpBuffer(&buf,\n+ \"SELECT concat(c.relnamespace::regnamespace, '.', c.relname)\\n\"\n+ \"FROM pg_catalog.pg_class c\\n\"\n+ \" JOIN pg_catalog.pg_publication_rel pr ON c.oid = pr.prrelid\\n\"\n+ \"WHERE pr.prpubid = '%s'\\n\"\n+ \" AND pr.prexcept = 't'\\n\"\n+ \"ORDER BY 1\", pubid);\n+ if (!addFooterToPublicationDesc(&buf, \"Except tables:\",\n+ true, &cont))\n+ goto error_return;\n+ }\n+\n\nI think this code is misplaced. Shouldn't it be if/else and be above\nthe other 150000 check, otherwise when you change this to PG16 it may\nnot work as expected?\n\n~~~\n\n34. src/bin/psql/describe.c - describePublications\n\n+ if (!addFooterToPublicationDesc(&buf, \"Except tables:\",\n+ true, &cont))\n+ goto error_return;\n+ }\n\nShould this be using the _T() macros same as the other prompts for translation?\n\n~~~\n\n35. src/include/catalog/pg_publication.h\n\nI thought the param \"bexpect\" should be \"except_flag\".\n\n(same comment as #18a)\n\n~~~\n\n36. src/include/catalog/pg_publication_rel.h\n\n@@ -31,6 +31,7 @@ CATALOG(pg_publication_rel,6106,PublicationRelRelationId)\n Oid oid; /* oid */\n Oid prpubid BKI_LOOKUP(pg_publication); /* Oid of the publication */\n Oid prrelid BKI_LOOKUP(pg_class); /* Oid of the relation */\n+ bool prexcept BKI_DEFAULT(f); /* except the relation */\n\nSUGGEST (comment)\n/* skip the relation */\n\n~~~\n\n37. 
src/include/commands/publicationcmds.h\n\n@@ -32,8 +32,8 @@ extern ObjectAddress AlterPublicationOwner(const\nchar *name, Oid newOwnerId);\n extern void AlterPublicationOwner_oid(Oid pubid, Oid newOwnerId);\n extern void InvalidatePublicationRels(List *relids);\n extern bool pub_rf_contains_invalid_column(Oid pubid, Relation relation,\n- List *ancestors, bool pubviaroot);\n+ List *ancestors, bool pubviaroot, bool alltables);\n extern bool pub_collist_contains_invalid_column(Oid pubid, Relation relation,\n- List *ancestors, bool pubviaroot);\n+ List *ancestors, bool pubviaroot, bool alltables);\n\nElsewhere in this patch, a similarly added param is called\n\"puballtables\" (not \"alltables\"). Please check all places and use a\nconsistent param name for all of them.\n\n~~~\n\n38. src/test/regress/sql/publication.sql\n\nThere don't seem to be any tests for more than one EXCEPT TABLE (e.g.\nno list tests?)\n\n~~~\n\n38. src/test/regress/sql/publication.sql\n\nMaybe adjust all the below comments (a-e) to say \"EXCEPT TABLES\"\ninstead of \"except tables\"\n\n38a.\n+-- can't add except table to 'FOR ALL TABLES' publication\n\n38b.\n+-- can't add except table to 'FOR TABLE' publication\n\n38c.\n+-- can't add except table to 'FOR ALL TABLES IN SCHEMA' publication\n\n38d.\n+-- can't add except table when publish_via_partition_root option does not\n+-- have default value\n\n38e.\n+-- can't add except table when the publication options does not have default\n+-- values\n\nSUGGESTION\ncan't add EXCEPT TABLE when the publication options are not the default values\n\n~~~\n\n39. 
.../t/032_rep_changes_except_table.pl\n\n39a.\n+# Check the table data does not sync for excluded table\n+my $result = $node_subscriber->safe_psql('postgres',\n+ \"SELECT count(*), min(a), max(a) FROM sch1.tab1\");\n+is($result, qq(0||), 'check tablesync is excluded for excluded tables');\n\nMaybe the \"is\" message should say \"check there is no initial data\ncopied for the excluded table\"\n\n~~~\n\n\n40 .../t/032_rep_changes_except_table.pl\n\n+# Insert some data into few tables and verify that inserted data is not\n+# replicated\n+$node_publisher->safe_psql('postgres',\n+ \"INSERT INTO sch1.tab1 VALUES(generate_series(11,20))\");\n\nThe comment is not quite correct. You are inserting into only one\ntable here - not \"few tables\".\n\n~~~\n\n41. .../t/032_rep_changes_except_table.pl\n\n+# Alter publication to exclude data changes in public.tab1 and verify that\n+# subscriber does not get the new table data.\n\n\"new table data\" -> \"changed data for this table\"\n\n------\n[1] https://www.postgresql.org/message-id/TYCPR01MB83737C28187A6E0BADAE98F0EDCF9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 17 May 2022 12:04:57 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Tue, May 17, 2022 at 7:35 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Below are my review comments for v5-0002.\n>\n> There may be an overlap with comments recently posted by Osumi-san [1].\n>\n> (I also have review comments for v5-0002; will post them tomorrow)\n>\n> ======\n>\n> 1. General\n>\n> Is it really necessary to have to say \"EXCEPT TABLE\" instead of just\n> \"EXCEPT\". It seems unnecessarily verbose and redundant when you write\n> \"FOR ALL TABLES EXCEPT TABLE...\".\n>\n> If you want to keep this TABLE keyword (maybe you have plans for other\n> kinds of except?)\n>\n\nI don't think there is an immediate plan but one can imagine using\nEXCEPT SCHEMA. Then for column lists, one may want to use the syntax\nCreate Publication pub1 For Table t1 Except Cols (c1, ..);\n\n> then IMO perhaps at least it can be the optional\n> default except type. e.g. EXCEPT [TABLE].\n>\n\nYeah, that might be okay, so, even if we plan to extend this in the\nfuture, by default we will consider the list of tables after EXCEPT\nbut if the user mentions EXCEPT SCHEMA or something else then we can\nuse a different object. Is that sound okay?\n\n>\n> 3. General\n>\n> The ONLY keyword seems supported by the syntax for tables of the\n> except-list (more on this in later comments) but:\n> a) I am not sure if the patch code is accounting for that, and\n> b) There are no test cases using ONLY.\n>\n> ~~~\n>\n\nIsn't it better to map ONLY with the way it can already be specified\nin CREATE PUBLICATION? I am not sure what exactly is proposed and what\nis your suggestion? Can you please explain if it is different from the\nway we use it for CREATE PUBLICATION?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 17 May 2022 09:26:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Tue, May 17, 2022 at 1:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, May 17, 2022 at 7:35 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Below are my review comments for v5-0002.\n> >\n> > There may be an overlap with comments recently posted by Osumi-san [1].\n> >\n> > (I also have review comments for v5-0002; will post them tomorrow)\n> >\n> > ======\n> >\n> > 1. General\n> >\n> > Is it really necessary to have to say \"EXCEPT TABLE\" instead of just\n> > \"EXCEPT\". It seems unnecessarily verbose and redundant when you write\n> > \"FOR ALL TABLES EXCEPT TABLE...\".\n> >\n> > If you want to keep this TABLE keyword (maybe you have plans for other\n> > kinds of except?)\n> >\n>\n> I don't think there is an immediate plan but one can imagine using\n> EXCEPT SCHEMA. Then for column lists, one may want to use the syntax\n> Create Publication pub1 For Table t1 Except Cols (c1, ..);\n>\n> > then IMO perhaps at least it can be the optional\n> > default except type. e.g. EXCEPT [TABLE].\n> >\n>\n> Yeah, that might be okay, so, even if we plan to extend this in the\n> future, by default we will consider the list of tables after EXCEPT\n> but if the user mentions EXCEPT SCHEMA or something else then we can\n> use a different object. Is that sound okay?\n\nYes. That is what I meant.\n\n>\n> >\n> > 3. General\n> >\n> > The ONLY keyword seems supported by the syntax for tables of the\n> > except-list (more on this in later comments) but:\n> > a) I am not sure if the patch code is accounting for that, and\n> > b) There are no test cases using ONLY.\n> >\n> > ~~~\n> >\n>\n> Isn't it better to map ONLY with the way it can already be specified\n> in CREATE PUBLICATION? I am not sure what exactly is proposed and what\n> is your suggestion? Can you please explain if it is different from the\n> way we use it for CREATE PUBLICATION?\n>\n\nYes, I am not proposing anything different to how ONLY already works\nfor published tables. 
I was only questioning whether the patch behaves\ncorrectly when ONLY is specified for the tables of the EXCEPT list. I\nhad some doubt about it because there are a few other review comments\nI wrote (e.g. in pg_dump.c), and also I did not find any ONLY tests.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 17 May 2022 14:29:33 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Sat, May 14, 2022 9:33 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> \r\n> Thanks for the comments, the attached v5 patch has the changes for the\r\n> same. Also I have made the changes for SKIP Table based on the new\r\n> syntax, the changes for the same are available in\r\n> v5-0002-Skip-publishing-the-tables-specified-in-EXCEPT-TA.patch.\r\n>\r\n\r\nThanks for your patch. Here are some comments on v5-0001 patch.\r\n\r\n+\t\tOid\t\t\trelid = lfirst_oid(lc);\r\n+\r\n+\t\tprid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,\r\n+\t\t\t\t\t\t\t ObjectIdGetDatum(relid),\r\n+\t\t\t\t\t\t\t ObjectIdGetDatum(pubid));\r\n+\t\tif (!OidIsValid(prid))\r\n+\t\t\tereport(ERROR,\r\n+\t\t\t\t\t(errcode(ERRCODE_UNDEFINED_OBJECT),\r\n+\t\t\t\t\t errmsg(\"relation \\\"%s\\\" is not part of the publication\",\r\n+\t\t\t\t\t\t\tRelationGetRelationName(rel))));\r\n\r\nI think the relation in the error message should be the one whose oid is\r\n\"relid\", instead of relation \"rel\".\r\n\r\nBesides, I think it might be better not to report an error in this case. If\r\n\"prid\" is invalid, just ignore this relation. Because in RESET cases, we want to\r\ndrop all tables in the publications, and there is no specific table.\r\n(If you agree with that, similarly missing_ok should be set to true when calling\r\nPublicationDropSchemas().)\r\n\r\nRegards,\r\nShi yu\r\n",
"msg_date": "Wed, 18 May 2022 03:00:38 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping schema changes in publication"
},
{
"msg_contents": "On Mon, May 16, 2022 at 8:32 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Saturday, May 14, 2022 10:33 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Thanks for the comments, the attached v5 patch has the changes for the same.\n> > Also I have made the changes for SKIP Table based on the new syntax, the\n> > changes for the same are available in\n> > v5-0002-Skip-publishing-the-tables-specified-in-EXCEPT-TA.patch.\n> Hi,\n>\n>\n> Thank you for updating the patch.\n> I'll share few minor review comments on v5-0001.\n>\n>\n> (1) doc/src/sgml/ref/alter_publication.sgml\n>\n> @@ -73,12 +85,13 @@ ALTER PUBLICATION <replaceable class=\"parameter\">name</replaceable> RENAME TO <r\n> Adding a table to a publication additionally requires owning that table.\n> The <literal>ADD ALL TABLES IN SCHEMA</literal> and\n> <literal>SET ALL TABLES IN SCHEMA</literal> to a publication requires the\n> - invoking user to be a superuser. To alter the owner, you must also be a\n> - direct or indirect member of the new owning role. The new owner must have\n> - <literal>CREATE</literal> privilege on the database. Also, the new owner\n> - of a <literal>FOR ALL TABLES</literal> or <literal>FOR ALL TABLES IN\n> - SCHEMA</literal> publication must be a superuser. However, a superuser can\n> - change the ownership of a publication regardless of these restrictions.\n> + invoking user to be a superuser. <literal>RESET</literal> of publication\n> + requires the invoking user to be a superuser. To alter the owner, you must\n> ...\n>\n>\n> I suggest to combine the first part of your change with one existing sentence\n> before your change, to make our description concise.\n>\n> FROM:\n> \"The <literal>ADD ALL TABLES IN SCHEMA</literal> and\n> <literal>SET ALL TABLES IN SCHEMA</literal> to a publication requires the\n> invoking user to be a superuser. 
<literal>RESET</literal> of publication\n> requires the invoking user to be a superuser.\"\n>\n> TO:\n> \"The <literal>ADD ALL TABLES IN SCHEMA</literal>,\n> <literal>SET ALL TABLES IN SCHEMA</literal> to a publication and\n> <literal>RESET</literal> of publication requires the invoking user to be a superuser.\"\n\nModified\n\n>\n> (2) typo\n>\n> +++ b/src/backend/commands/publicationcmds.c\n> @@ -53,6 +53,13 @@\n> #include \"utils/syscache.h\"\n> #include \"utils/varlena.h\"\n>\n> +#define PUB_ATION_INSERT_DEFAULT true\n> +#define PUB_ACTION_UPDATE_DEFAULT true\n>\n>\n> Kindly change\n> FROM:\n> \"PUB_ATION_INSERT_DEFAULT\"\n> TO:\n> \"PUB_ACTION_INSERT_DEFAULT\"\n\nModified\n\n>\n> (3) src/test/regress/expected/publication.out\n>\n> +-- Verify that only superuser can reset a publication\n> +ALTER PUBLICATION testpub_reset OWNER TO regress_publication_user2;\n> +SET ROLE regress_publication_user2;\n> +ALTER PUBLICATION testpub_reset RESET; -- fail\n>\n>\n> We have \"-- fail\" for one case in this patch.\n> On the other hand, isn't better to add \"-- ok\" (or \"-- success\") for\n> other successful statements,\n> when we consider the entire tests description consistency ?\n\nWe generally do not mention success comments for all the success cases\nas that might be an overkill. I felt it is better to keep it as it is.\nThoughts?\n\nThe attached v6 patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Wed, 18 May 2022 23:15:08 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Mon, May 16, 2022 at 2:53 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Below are my review comments for v5-0001.\n>\n> There is some overlap with comments recently posted by Osumi-san [1].\n>\n> (I also have review comments for v5-0002; will post them tomorrow)\n>\n> ======\n>\n> 1. Commit message\n>\n> This patch adds a new RESET clause to ALTER PUBLICATION which will reset\n> the publication to default state which includes resetting the publication\n> options, setting ALL TABLES option to false and dropping the relations and\n> schemas that are associated with the publication.\n>\n> SUGGEST\n> \"to default state\" -> \"to the default state\"\n> \"ALL TABLES option\" -> \"ALL TABLES flag\"\n\nModified\n\n> ~~~\n>\n> 2. doc/src/sgml/ref/alter_publication.sgml\n>\n> + <para>\n> + The <literal>RESET</literal> clause will reset the publication to the\n> + default state which includes resetting the publication options, setting\n> + <literal>ALL TABLES</literal> option to <literal>false</literal> and\n> + dropping all relations and schemas that are associated with the publication.\n> </para>\n>\n> \"ALL TABLES option\" -> \"ALL TABLES flag\"\n\nModified\n\n> ~~~\n>\n> 3. doc/src/sgml/ref/alter_publication.sgml\n>\n> + invoking user to be a superuser. <literal>RESET</literal> of publication\n> + requires the invoking user to be a superuser. To alter the owner, you must\n>\n> SUGGESTION\n> To <literal>RESET</literal> a publication requires the invoking user\n> to be a superuser.\n\n I have combined it with the earlier sentence.\n\n> ~~~\n>\n> 4. 
src/backend/commands/publicationcmds.c\n>\n> @@ -53,6 +53,13 @@\n> #include \"utils/syscache.h\"\n> #include \"utils/varlena.h\"\n>\n> +#define PUB_ATION_INSERT_DEFAULT true\n> +#define PUB_ACTION_UPDATE_DEFAULT true\n> +#define PUB_ACTION_DELETE_DEFAULT true\n> +#define PUB_ACTION_TRUNCATE_DEFAULT true\n> +#define PUB_VIA_ROOT_DEFAULT false\n> +#define PUB_ALL_TABLES_DEFAULT false\n>\n> 4a.\n> Typo: \"ATION\" -> \"ACTION\"\n\nModified\n\n> 4b.\n> I think these #defines deserve a 1 line comment.\n> e.g.\n> /* CREATE PUBLICATION default values for flags and options */\n\nAdded comment\n\n> 4c.\n> Since the \"_DEFAULT\" is a common part of all the names, maybe it is\n> tidier if it comes first.\n> e.g.\n> #define PUB_DEFAULT_ACTION_INSERT true\n> #define PUB_DEFAULT_ACTION_UPDATE true\n> #define PUB_DEFAULT_ACTION_DELETE true\n> #define PUB_DEFAULT_ACTION_TRUNCATE true\n> #define PUB_DEFAULT_VIA_ROOT false\n> #define PUB_DEFAULT_ALL_TABLES false\n\nModified\n\nThe v6 patch attached at [1] has the changes for the same.\n[1] - https://www.postgresql.org/message-id/CALDaNm0iZZDB300Dez_97S8G6_RW5QpQ8ef6X3wq8tyK-8wnXQ%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 18 May 2022 23:17:27 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Wed, May 18, 2022 at 8:30 AM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Sat, May 14, 2022 9:33 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Thanks for the comments, the attached v5 patch has the changes for the\n> > same. Also I have made the changes for SKIP Table based on the new\n> > syntax, the changes for the same are available in\n> > v5-0002-Skip-publishing-the-tables-specified-in-EXCEPT-TA.patch.\n> >\n>\n> Thanks for your patch. Here are some comments on v5-0001 patch.\n>\n> + Oid relid = lfirst_oid(lc);\n> +\n> + prid = GetSysCacheOid2(PUBLICATIONRELMAP, Anum_pg_publication_rel_oid,\n> + ObjectIdGetDatum(relid),\n> + ObjectIdGetDatum(pubid));\n> + if (!OidIsValid(prid))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_UNDEFINED_OBJECT),\n> + errmsg(\"relation \\\"%s\\\" is not part of the publication\",\n> + RelationGetRelationName(rel))));\n>\n> I think the relation in the error message should be the one whose oid is\n> \"relid\", instead of relation \"rel\".\n\nModified it\n\n> Besides, I think it might be better not to report an error in this case. If\n> \"prid\" is invalid, just ignore this relation. Because in RESET cases, we want to\n> drop all tables in the publications, and there is no specific table.\n> (If you agree with that, similarly missing_ok should be set to true when calling\n> PublicationDropSchemas().)\n\nIdeally this scenario should not happen, but if it happens I felt we\nshould throw an error in this case.\n\nThe v6 patch attached at [1] has the changes for the same.\n[1] - https://www.postgresql.org/message-id/CALDaNm0iZZDB300Dez_97S8G6_RW5QpQ8ef6X3wq8tyK-8wnXQ%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 18 May 2022 23:19:01 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Mon, May 16, 2022 at 2:00 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Saturday, May 14, 2022 10:33 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Thanks for the comments, the attached v5 patch has the changes for the same.\n> > Also I have made the changes for SKIP Table based on the new syntax, the\n> > changes for the same are available in\n> > v5-0002-Skip-publishing-the-tables-specified-in-EXCEPT-TA.patch.\n> Hi,\n>\n>\n>\n> Several comments on v5-0002.\n>\n> (1) One unnecessary space before \"except_pub_obj_list\" syntax definition\n>\n> + except_pub_obj_list: ExceptPublicationObjSpec\n> + { $$ = list_make1($1); }\n> + | except_pub_obj_list ',' ExceptPublicationObjSpec\n> + { $$ = lappend($1, $3); }\n> + | /*EMPTY*/ { $$ = NULL; }\n> + ;\n> +\n>\n> From above part, kindly change\n> FROM:\n> \" except_pub_obj_list: ExceptPublicationObjSpec\"\n> TO:\n> \"except_pub_obj_list: ExceptPublicationObjSpec\"\n>\n\nModified\n\n> (2) doc/src/sgml/ref/create_publication.sgml\n>\n> (2-1)\n>\n> @@ -22,7 +22,7 @@ PostgreSQL documentation\n> <refsynopsisdiv>\n> <synopsis>\n> CREATE PUBLICATION <replaceable class=\"parameter\">name</replaceable>\n> - [ FOR ALL TABLES\n> + [ FOR ALL TABLES [EXCEPT TABLE [ ONLY ] <replaceable class=\"parameter\">table_name</replaceable> [ * ] [, ... ]]\n> | FOR <replaceable class=\"parameter\">publication_object</replaceable> [, ... ] ]\n> [ WITH ( <replaceable class=\"parameter\">publication_parameter</replaceable> [= <replaceable class=\"parameter\">value</replaceable>] [, ... ] ) ]\n>\n>\n> Here I think we need to add two more whitespaces around square brackets.\n> Please change\n> FROM:\n> \"[ FOR ALL TABLES [EXCEPT TABLE [ ONLY ] <replaceable class=\"parameter\">table_name</replaceable> [ * ] [, ... ]]\"\n> TO:\n> \"[ FOR ALL TABLES [ EXCEPT TABLE [ ONLY ] <replaceable class=\"parameter\">table_name</replaceable> [ * ] [, ... 
] ]\"\n>\n> When I check other documentations, I see whitespaces before/after square brackets.\n>\n> (2-2)\n> This whitespace alignment applies to alter_publication.sgml as well.\n\nModified\n\n> (3)\n>\n>\n> @@ -156,6 +156,24 @@ CREATE PUBLICATION <replaceable class=\"parameter\">name</replaceable>\n> </listitem>\n> </varlistentry>\n>\n> +\n> + <varlistentry>\n> + <term><literal>EXCEPT TABLE</literal></term>\n> + <listitem>\n> + <para>\n> + Marks the publication as one that excludes replicating changes for the\n> + specified tables.\n> + </para>\n> +\n> + <para>\n> + <literal>EXCEPT TABLE</literal> can be specified only for\n> + <literal>FOR ALL TABLES</literal> publication. It is not supported for\n> + <literal>FOR ALL TABLES IN SCHEMA </literal> publication and\n> + <literal>FOR TABLE</literal> publication.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n>\n> This EXCEPT TABLE clause is only for FOR ALL TABLES.\n> So, how about extracting the main message from above part and\n> moving it to an exising paragraph below, instead of having one independent paragraph ?\n>\n> <varlistentry>\n> <term><literal>FOR ALL TABLES</literal></term>\n> <listitem>\n> <para>\n> Marks the publication as one that replicates changes for all tables in\n> the database, including tables created in the future.\n> </para>\n> </listitem>\n> </varlistentry>\n>\n> Something like\n> \"Marks the publication as one that replicates changes for all tables in\n> the database, including tables created in the future. 
EXCEPT TABLE indicates\n> excluded tables for the defined publication.\n> \"\n>\n\nModified\n\n> (4) One minor confirmation about the syntax\n>\n> Currently, we allow one way of writing to indicate excluded tables like below.\n>\n> (example) CREATE PUBLICATION mypub FOR ALL TABLES EXCEPT TABLE tab3, tab4, EXCEPT TABLE tab5;\n>\n> This is because we define ExceptPublicationObjSpec with EXCEPT TABLE.\n> Is it OK to have a room to write duplicate \"EXCEPT TABLE\" clauses ?\n> I think there is no harm in having this,\n> but I'd like to confirm whether this syntax might be better to be adjusted or not.\n\nChanged it to allow except table only once\n\n>\n> (5) CheckAlterPublication\n>\n> +\n> + if (excepttable && !stmt->for_all_tables)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> + errmsg(\"publication \\\"%s\\\" is not defined as FOR ALL TABLES\",\n> + NameStr(pubform->pubname)),\n> + errdetail(\"except table cannot be added to, dropped from, or set on NON ALL TABLES publications.\")));\n>\n> Could you please add a test for this ?\n\nThis code can be removed because of grammar optimization, it will not\nallow tables without \"ALL TABLES\". Removed this code\n\nThe v6 patch attached at [1] has the changes for the same.\n[1] - https://www.postgresql.org/message-id/CALDaNm0iZZDB300Dez_97S8G6_RW5QpQ8ef6X3wq8tyK-8wnXQ%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 18 May 2022 23:24:02 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Thursday, May 19, 2022 2:45 AM vignesh C <vignesh21@gmail.com> wrote:\r\n> On Mon, May 16, 2022 at 8:32 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> > (3) src/test/regress/expected/publication.out\r\n> >\r\n> > +-- Verify that only superuser can reset a publication ALTER\r\n> > +PUBLICATION testpub_reset OWNER TO regress_publication_user2; SET\r\n> > +ROLE regress_publication_user2; ALTER PUBLICATION testpub_reset\r\n> > +RESET; -- fail\r\n> >\r\n> >\r\n> > We have \"-- fail\" for one case in this patch.\r\n> > On the other hand, isn't better to add \"-- ok\" (or \"-- success\") for\r\n> > other successful statements, when we consider the entire tests\r\n> > description consistency ?\r\n> \r\n> We generally do not mention success comments for all the success cases as\r\n> that might be an overkill. I felt it is better to keep it as it is.\r\n> Thoughts?\r\nThank you for updating the patches !\r\n\r\nIn terms of this point,\r\nI meant to say we add \"-- ok\" for each successful\r\n\"ALTER PUBLICATION testpub_reset RESET;\" statement.\r\nThat means, we'll have just three places to add \"--ok\"\r\nand I thought this was not an overkill.\r\n\r\n*But*, I'm also OK with your idea.\r\nPlease don't change the comments\r\nand keep them as it is like v6.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Thu, 19 May 2022 01:58:27 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping schema changes in publication"
},
{
"msg_contents": "On Tue, May 17, 2022 at 7:35 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Below are my review comments for v5-0002.\n>\n> There may be an overlap with comments recently posted by Osumi-san [1].\n>\n> (I also have review comments for v5-0002; will post them tomorrow)\n>\n> ======\n>\n> 1. General\n>\n> Is it really necessary to have to say \"EXCEPT TABLE\" instead of just\n> \"EXCEPT\". It seems unnecessarily verbose and redundant when you write\n> \"FOR ALL TABLES EXCEPT TABLE...\".\n>\n> If you want to keep this TABLE keyword (maybe you have plans for other\n> kinds of except?) then IMO perhaps at least it can be the optional\n> default except type. e.g. EXCEPT [TABLE].\n\nI have made TABLE optional.\n\n> ~~~\n>\n> 2. General\n>\n> (I was unsure whether to even mention this one).\n>\n> I understand the \"EXCEPT\" is chosen as the user-facing syntax, but it\n> still seems strange when reading the patch to see attribute members\n> and column names called 'except'. I think the problem is that \"except\"\n> is not a verb, so saying except=t/f just does not make much sense.\n> Sometimes I feel that for all the internal usage\n> (code/comments/catalog) using \"skip\" and \"skip-list\" etc would be a\n> much better choice of names. OTOH I can see that having consistency\n> with the outside syntax might also be good. Anyway, please consider -\n> maybe other people feel the same?\n\nEarlier we had discussed whether to use SKIP, but felt SKIP was not\nappropriate and planned to use except as in [1]. Let's use except\nunless we find a better alternative.\n\n> ~~~\n>\n> 3. General\n>\n> The ONLY keyword seems supported by the syntax for tables of the\n> except-list (more on this in later comments) but:\n> a) I am not sure if the patch code is accounting for that, and\n\nI have kept the behavior similar to FOR TABLE\n\n> b) There are no test cases using ONLY.\n\nAdded tests for the same\n\n> ~~~\n>\n> 4. 
Commit message\n>\n> A new option \"EXCEPT TABLE\" in Create/Alter Publication allows\n> one or more tables to be excluded, publisher will exclude sending the data\n> of the excluded tables to the subscriber.\n>\n> SUGGESTION\n> A new \"EXCEPT TABLE\" clause for CREATE/ALTER PUBLICATION allows one or\n> more tables to be excluded. The publisher will not send the data of\n> excluded tables to the subscriber.\n\nModified\n\n> ~~\n>\n> 5. Commit message\n>\n> The new syntax allows specifying exclude relations while creating a publication\n> or exclude relations in alter publication. For example:\n>\n> SUGGESTION\n> The new syntax allows specifying excluded relations when creating or\n> altering a publication. For example:\n\nModified\n\n> ~~~\n>\n> 6. Commit message\n>\n> A new column prexcept is added to table \"pg_publication_rel\", to maintain\n> the relations that the user wants to exclude publishing through the publication.\n>\n> SUGGESTION\n> A new column \"prexcept\" is added to table \"pg_publication_rel\", to\n> maintain the relations that the user wants to exclude from the\n> publications.\n\nModified\n\n> ~~~\n>\n> 7. Commit message\n>\n> Modified the output plugin (pgoutput) to exclude publishing the changes of the\n> excluded tables.\n>\n> I did not feel it was necessary to say this. It is already said above\n> that the data is not sent, so that seems enough.\n\nModified\n\n> ~~~\n>\n> 8. Commit message\n>\n> Updates pg_dump to identify and dump the excluded tables of the publications.\n> Updates the \\d family of commands to display excluded tables of the\n> publications and \\dRp+ variant will now display associated except tables if any.\n>\n> SUGGESTION\n> pg_dump is updated to identify and dump the excluded tables of the publications.\n>\n> The psql \\d family of commands to display excluded tables. e.g. psql\n> \\dRp+ variant will now display associated \"except tables\" if any.\n\nModified\n\n> ~~~\n>\n> 9. 
doc/src/sgml/catalogs.sgml\n>\n> @@ -6426,6 +6426,15 @@ SCRAM-SHA-256$<replaceable><iteration\n> count></replaceable>:<replaceable>&l\n> if there is no publication qualifying condition.</para></entry>\n> </row>\n>\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>prexcept</structfield> <type>bool</type>\n> + </para>\n> + <para>\n> + True if the table must be excluded\n> + </para></entry>\n> + </row>\n>\n> Other descriptions on this page refer to \"relation\" instead of\n> \"table\". Probably this should do the same to be consistent.\n\nModified\n\n> ~~~\n>\n> 10. doc/src/sgml/logical-replication.sgml\n>\n> @@ -1167,8 +1167,9 @@ CONTEXT: processing remote data for replication\n> origin \"pg_16395\" during \"INSER\n> <para>\n> To add tables to a publication, the user must have ownership rights on the\n> table. To add all tables in schema to a publication, the user must be a\n> - superuser. To create a publication that publishes all tables or\n> all tables in\n> - schema automatically, the user must be a superuser.\n> + superuser. To add all tables to a publication, the user must be a superuser.\n> + To create a publication that publishes all tables or all tables in schema\n> + automatically, the user must be a superuser.\n> </para>\n>\n> It seems like a valid change but how is this related to this EXCEPT\n> patch. Maybe this fix should be patched separately?\n\nEarlier we were not allowed to add ALL TABLES, while altering\npublication. This is mentioned in this patch as we support:\nALTER PUBLICATION pubname ADD ALL TABLES syntax.\n\n> ~~~\n>\n> 11. 
doc/src/sgml/ref/alter_publication.sgml\n>\n> @@ -22,6 +22,7 @@ PostgreSQL documentation\n> <refsynopsisdiv>\n> <synopsis>\n> ALTER PUBLICATION <replaceable class=\"parameter\">name</replaceable>\n> ADD <replaceable class=\"parameter\">publication_object</replaceable> [,\n> ...]\n> +ALTER PUBLICATION <replaceable class=\"parameter\">name</replaceable>\n> ADD ALL TABLES [EXCEPT TABLE [ ONLY ] <replaceable\n> class=\"parameter\">table_name</replaceable> [ * ] [, ... ]]\n>\n> The [ONLY] looks misplaced when the syntax is described like this. For\n> example, in practice it is possible to write \"EXCEPT TABLE ONLY t1,\n> ONLY t2, t3, ONLY t4\" but it doesn't seem that way by looking at these\n> PG DOCS.\n>\n> IMO would be better described like this:\n>\n> [ FOR ALL TABLES [ EXCEPT TABLE exception_object [,...] ]]\n>\n> where exception_object is:\n>\n> [ ONLY ] table_name [ * ]\n\nModified\n\n> ~~~\n>\n> 12. doc/src/sgml/ref/alter_publication.sgml\n>\n> @@ -82,8 +83,8 @@ ALTER PUBLICATION <replaceable\n> class=\"parameter\">name</replaceable> RESET\n>\n> <para>\n> You must own the publication to use <command>ALTER PUBLICATION</command>.\n> - Adding a table to a publication additionally requires owning that table.\n> - The <literal>ADD ALL TABLES IN SCHEMA</literal> and\n> + Adding a table or excluding a table to a publication additionally requires\n> + owning that table. The <literal>ADD ALL TABLES IN SCHEMA</literal> and\n>\n> SUGGESTION\n> Adding a table to or excluding a table from a publication additionally\n> requires owning that table.\n\nModified\n\n> ~~~\n>\n> 13. 
doc/src/sgml/ref/alter_publication.sgml\n>\n> @@ -213,6 +214,14 @@ ALTER PUBLICATION sales_publication ADD ALL\n> TABLES IN SCHEMA marketing, sales;\n> </programlisting>\n> </para>\n>\n> + <para>\n> + Alter publication <structname>production_publication</structname> that\n> + publishes all tables except <structname>users</structname> and\n> + <structname>departments</structname> tables:\n> +<programlisting>\n>\n> \"that publishes\" -> \"to publish\"\n\nModified\n\n> ~~~\n>\n> 14. doc/src/sgml/ref/create_publication.sgml\n>\n> (Same comment about the ONLY syntax as #11)\n\nModified\n\n> ~~~\n>\n> 15. doc/src/sgml/ref/create_publication.sgml\n>\n> + <varlistentry>\n> + <term><literal>EXCEPT TABLE</literal></term>\n> + <listitem>\n> + <para>\n> + Marks the publication as one that excludes replicating changes for the\n> + specified tables.\n> + </para>\n> +\n> + <para>\n> + <literal>EXCEPT TABLE</literal> can be specified only for\n> + <literal>FOR ALL TABLES</literal> publication. It is not supported for\n> + <literal>FOR ALL TABLES IN SCHEMA </literal> publication and\n> + <literal>FOR TABLE</literal> publication.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n>\n> IMO you can remove all that \"It is not supported for...\" sentence. You\n> don't need to spell that out again when it is already clear from the\n> syntax.\n\nModified\n\n> ~~~\n>\n> 16. 
doc/src/sgml/ref/psql-ref.sgml\n>\n> @@ -1868,8 +1868,9 @@ testdb=>\n> If <replaceable class=\"parameter\">pattern</replaceable> is\n> specified, only those publications whose names match the pattern are\n> listed.\n> - If <literal>+</literal> is appended to the command name, the tables and\n> - schemas associated with each publication are shown as well.\n> + If <literal>+</literal> is appended to the command name, the tables,\n> + excluded tables and schemas associated with each publication\n> are shown as\n> + well.\n> </para>\n>\n> Perhaps this is OK just as-is, but OTOH I felt that the change was\n> almost unnecessary because saying it displays \"the tables\" kind of\n> implies it would also have to account for the \"excluded tables\" too.\n\nI mentioned it that way so that it is clearer and to avoid confusions\nto be pointed out by other members later. I felt let's keep it this\nway.\n\n> ~~~\n>\n> 17. src/backend/catalog/pg_publication.c - GetTopMostAncestorInPublication\n>\n> @@ -302,8 +303,9 @@ GetTopMostAncestorInPublication(Oid puboid, List\n> *ancestors, int *ancestor_level\n> foreach(lc, ancestors)\n> {\n> Oid ancestor = lfirst_oid(lc);\n> - List *apubids = GetRelationPublications(ancestor);\n> + List *apubids = GetRelationPublications(ancestor, false);\n> List *aschemaPubids = NIL;\n> + List *aexceptpubids = NIL;\n>\n> 17a.\n> I think the var \"aschemaPubids\" and \"aexceptpubids\" are only used in\n> the 'else' block so it seems better they can be declared and freed in\n> that block too instead of always.\n\nModified\n\n> 17b.\n> Also, the camel-case of those variables is inconsistent so may fix\n> that at the same time.\n\nModified\n\n> ~~~\n>\n> 18. 
src/backend/catalog/pg_publication.c - GetRelationPublications\n>\n> @@ -666,7 +673,7 @@ publication_add_schema(Oid pubid, Oid schemaid,\n> bool if_not_exists)\n>\n> /* Gets list of publication oids for a relation */\n> List *\n> -GetRelationPublications(Oid relid)\n> +GetRelationPublications(Oid relid, bool bexcept)\n>\n> 18a.\n> I felt that \"except_flag\" is a better name than \"bexcept\" for this param.\n\nModified\n\n> 18b.\n> The function comment should be updated to say only relations matching\n> this except_flag are returned in the list.\n\nModified\n\n> ~~~\n>\n> 19. src/backend/catalog/pg_publication.c - GetAllTablesPublicationRelations\n>\n> @@ -787,6 +795,15 @@ GetAllTablesPublicationRelations(bool pubviaroot)\n> HeapTuple tuple;\n> List *result = NIL;\n>\n> + /*\n> + * pg_publication_rel and pg_publication_namespace will only have excluded\n> + * tables in case of all tables publication, no need to pass except flag\n> + * to get the relations.\n> + */\n> + List *exceptpubtablelist;\n> +\n> + exceptpubtablelist = GetPublicationRelations(pubid, PUBLICATION_PART_ALL);\n> +\n>\n> 19a.\n> I wasn't very sure of the meaning/intent of the comment, but IIUC it\n> seems to be explaining why it is not necessary to use an \"except_flag\"\n> parameter in this code. Is it necessary/helpful to explain parameters\n> that do NOT exist?\n\nI have removed it\n\n> 19b.\n> The var name \"exceptpubtablelist\" seems a bit overkill. (e.g.\n> \"excepttablelist\" or \"exceptlist\" etc... are shorter but seem equally\n> informative).\n\nModified\n\n> ~~~\n>\n> 20. src/backend/commands/publicationcmds.c - CreatePublication\n>\n> @@ -843,54 +849,52 @@ CreatePublication(ParseState *pstate,\n> CreatePublicationStmt *stmt)\n> /* Make the changes visible. */\n> CommandCounterIncrement();\n>\n> - /* Associate objects with the publication. */\n> - if (stmt->for_all_tables)\n> - {\n> - /* Invalidate relcache so that publication info is rebuilt. 
*/\n> - CacheInvalidateRelcacheAll();\n> - }\n> - else\n> - {\n> - ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,\n> - &schemaidlist);\n> + ObjectsInPublicationToOids(stmt->pubobjects, pstate, &relations,\n> + &schemaidlist);\n>\n> - /* FOR ALL TABLES IN SCHEMA requires superuser */\n> - if (list_length(schemaidlist) > 0 && !superuser())\n> - ereport(ERROR,\n> - errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> - errmsg(\"must be superuser to create FOR ALL TABLES IN SCHEMA publication\"));\n> + /* FOR ALL TABLES IN SCHEMA requires superuser */\n> + if (list_length(schemaidlist) > 0 && !superuser())\n> + ereport(ERROR,\n> + errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> + errmsg(\"must be superuser to create FOR ALL TABLES IN SCHEMA publication\"));\n>\n> - if (list_length(relations) > 0)\n> - {\n> - List *rels;\n> + if (list_length(relations) > 0)\n> + {\n> + List *rels;\n>\n> - rels = OpenTableList(relations);\n> - CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,\n> - PUBLICATIONOBJ_TABLE);\n> + rels = OpenTableList(relations);\n> + CheckObjSchemaNotAlreadyInPublication(rels, schemaidlist,\n> + PUBLICATIONOBJ_TABLE);\n>\n> - TransformPubWhereClauses(rels, pstate->p_sourcetext,\n> - publish_via_partition_root);\n> + TransformPubWhereClauses(rels, pstate->p_sourcetext,\n> + publish_via_partition_root);\n>\n> - CheckPubRelationColumnList(rels, pstate->p_sourcetext,\n> - publish_via_partition_root);\n> + CheckPubRelationColumnList(rels, pstate->p_sourcetext,\n> + publish_via_partition_root);\n>\n> - PublicationAddTables(puboid, rels, true, NULL);\n> - CloseTableList(rels);\n> - }\n> + PublicationAddTables(puboid, rels, true, NULL);\n> + CloseTableList(rels);\n> + }\n>\n> - if (list_length(schemaidlist) > 0)\n> - {\n> - /*\n> - * Schema lock is held until the publication is created to prevent\n> - * concurrent schema deletion.\n> - */\n> - LockSchemaList(schemaidlist);\n> - PublicationAddSchemas(puboid, schemaidlist, true, NULL);\n> - }\n> + if 
(list_length(schemaidlist) > 0)\n> + {\n> + /*\n> + * Schema lock is held until the publication is created to prevent\n> + * concurrent schema deletion.\n> + */\n> + LockSchemaList(schemaidlist);\n> + PublicationAddSchemas(puboid, schemaidlist, true, NULL);\n> + }\n> }\n>\n> table_close(rel, RowExclusiveLock);\n>\n> + /* Associate objects with the publication. */\n> + if (stmt->for_all_tables)\n> + {\n> + /* Invalidate relcache so that publication info is rebuilt. */\n> + CacheInvalidateRelcacheAll();\n> + }\n> +\n>\n> This function is refactored a lot to not use \"if/else\" as it did\n> before. But AFAIK (maybe I misunderstood) this refactor doesn't seem\n> to actually have anything to do with the EXCEPT patch. If it really is\n> unrelated maybe it should not be part of this patch.\n\nEarlier, tables could not be specified together with ALL TABLES; now\nexcept tables can be specified with ALL TABLES. The except tables have\nto be added to pg_publication_rel, and these code changes are required\nto handle that.\n\n> ~~~\n>\n> 21. src/backend/commands/publicationcmds.c - CheckPublicationDefValues\n>\n> + if (pubform->puballtables)\n> + return false;\n> +\n> + if (!pubform->pubinsert || !pubform->pubupdate || !pubform->pubdelete ||\n> + !pubform->pubtruncate || pubform->pubviaroot)\n> + return false;\n>\n> Now you have all the #define for the PUB_DEFAULT_XXX values, perhaps\n> this function should be using them instead of the hardcoded\n> assumptions what the default values are.\n>\n> e.g.\n>\n> if (pubform->puballtables != PUB_DEFAULT_ALL_TABLES) return false;\n> if (pubform->pubinsert != PUB_DEFAULT_ACTION_INSERT) return false;\n> ...\n> etc.\n\nModified\n\n> ~~~\n>\n> 22. 
src/backend/commands/publicationcmds.c - CheckAlterPublication\n>\n>\n> @@ -1442,6 +1516,19 @@ CheckAlterPublication(AlterPublicationStmt\n> *stmt, HeapTuple tup,\n> List *tables, List *schemaidlist)\n> {\n> Form_pg_publication pubform = (Form_pg_publication) GETSTRUCT(tup);\n> + ListCell *lc;\n> + bool nonexcepttable = false;\n> + bool excepttable = false;\n> +\n> + foreach(lc, tables)\n> + {\n> + PublicationTable *pub_table = lfirst_node(PublicationTable, lc);\n> +\n> + if (!pub_table->except)\n> + nonexcepttable = true;\n> + else\n> + excepttable = true;\n> + }\n>\n> 22a.\n> The names are very confusing. e.g. \"nonexcepttable\" is like a double-negative.\n>\n> SUGGEST:\n> bool has_tables = false;\n> bool has_except_tables = false;\n>\n> 22b.\n> Reverse the \"if\" condition to be positive instead of negative (remove !)\n> e.g.\n> if (pub_table->except)\n> has_except_table = true;\n> else\n> has_table = true;\n\nThis code can be removed because of grammar optimization, it will not\nallow except table without \"ALL TABLES\". Removed these changes.\n\n> ~~~\n>\n> 23. src/backend/commands/publicationcmds.c - CheckAlterPublication\n>\n> @@ -1461,12 +1548,19 @@ CheckAlterPublication(AlterPublicationStmt\n> *stmt, HeapTuple tup,\n> errdetail(\"Tables from schema cannot be added to, dropped from, or\n> set on FOR ALL TABLES publications.\")));\n>\n> /* Check that user is allowed to manipulate the publication tables. */\n> - if (tables && pubform->puballtables)\n> + if (nonexcepttable && tables && pubform->puballtables)\n> ereport(ERROR,\n>\n> Seems no reason for \"tables\" to be in the condition since\n> \"nonexcepttable\" can't be true if \"tables\" is NIL.\n\nThis code can be removed because of grammar optimization, it will not\nallow except table without \"ALL TABLES\". Removed these changes.\n\n> ~~~\n>\n> 24. 
src/backend/commands/publicationcmds.c - CheckAlterPublication\n>\n> +\n> + if (excepttable && !stmt->for_all_tables)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> + errmsg(\"publication \\\"%s\\\" is not defined as FOR ALL TABLES\",\n> + NameStr(pubform->pubname)),\n> + errdetail(\"except table cannot be added to, dropped from, or set on\n> NON ALL TABLES publications.\")));\n>\n> The errdetail message seems over-complex.\n>\n> SUGGESTION\n> \"EXCEPT TABLE clause is only allowed for FOR ALL TABLES publications.\"\n\nThis code can be removed because of grammar optimization, it will not\nallow except table without \"ALL TABLES\". Removed this code\n\n> ~~~\n>\n> 25. src/backend/commands/publicationcmds.c - AlterPublication\n>\n> @@ -1500,6 +1594,20 @@ AlterPublication(ParseState *pstate,\n> AlterPublicationStmt *stmt)\n> aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_PUBLICATION,\n> stmt->pubname);\n>\n> + if (stmt->for_all_tables)\n> + {\n> + bool isdefault = CheckPublicationDefValues(tup);\n> +\n> + if (!isdefault)\n> + ereport(ERROR,\n> + errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n> + errmsg(\"Setting ALL TABLES requires publication \\\"%s\\\" to have\n> default values\",\n> + stmt->pubname),\n> + errhint(\"Either the publication has tables/schemas associated or\n> does not have default publication options or ALL TABLES option is\n> set.\"));\n>\n> The errhint message seems over-complex.\n>\n> SUGGESTION\n> \"Use ALTER PUBLICATION ... RESET\"\n\nModified\n\n> ~~~\n>\n> 26. 
src/bin/pg_dump/pg_dump.c - dumpPublication\n>\n> @@ -3980,8 +3982,34 @@ dumpPublication(Archive *fout, const\n> PublicationInfo *pubinfo)\n> qpubname);\n>\n> if (pubinfo->puballtables)\n> + {\n> + SimplePtrListCell *cell;\n> + bool first = true;\n> appendPQExpBufferStr(query, \" FOR ALL TABLES\");\n>\n> + /* Include exception tables if the publication has except tables */\n> + for (cell = exceptinfo.head; cell; cell = cell->next)\n> + {\n> + PublicationRelInfo *pubrinfo = (PublicationRelInfo *) cell->ptr;\n> + PublicationInfo *relpubinfo = pubrinfo->publication;\n> + TableInfo *tbinfo;\n> +\n> + if (pubinfo == relpubinfo)\n> + {\n> + tbinfo = pubrinfo->pubtable;\n> +\n> + if (first)\n> + {\n> + appendPQExpBufferStr(query, \" EXCEPT TABLE ONLY\");\n> + first = false;\n> + }\n> + else\n> + appendPQExpBufferStr(query, \", \");\n> + appendPQExpBuffer(query, \" %s\", fmtQualifiedDumpable(tbinfo));\n> + }\n> + }\n> + }\n> +\n>\n> IIUC this usage of ONLY looks incorrect.\n>\n> 26a.\n> Firstly, if you want to hardwire ONLY then shouldn't it apply to every\n> of the except-list table, not just the first one? e.g. \"EXCEPT TABLE\n> ONLY t1, ONLY t2, ONLY t3...\"\n\nModified, included ONLY for all the tables\n\n> 26b.\n> Secondly, is it even correct to unconditionally hardwire the ONLY? How\n> do you know that is how the user wanted it?\n\nThe table ONLY selection is handled appropriately while creating the\npublication and is stored in pg_publication_rel. When we dump, all the\nparent and child tables will be included; specifying ONLY will handle\nboth scenarios, with and without ONLY. This is the same behavior as in\nFOR TABLE publications.\n\n> ~~~\n>\n> 27. 
src/bin/pg_dump/pg_dump.c\n>\n> @@ -127,6 +127,8 @@ static SimpleOidList foreign_servers_include_oids\n> = {NULL, NULL};\n> static SimpleStringList extension_include_patterns = {NULL, NULL};\n> static SimpleOidList extension_include_oids = {NULL, NULL};\n>\n> +static SimplePtrList exceptinfo = {NULL, NULL};\n> +\n>\n> Probably I just did not understand how this logic works, but how does\n> this static work properly if there are multiple publications and 2\n> different EXCEPT lists? E.g. where is it clearing the \"exceptinfo\" so\n> that multiple EXCEPT TABLE lists don't become muddled?\n\nCurrently exceptinfo holds all the exception tables and the\ncorresponding publications. When we dump the publication it will\nselect the appropriate exception tables that correspond to the\npublication and dump the exception tables associated for this\npublication. Since this is a special syntax \"CREATE PUBLICATION FOR\nALL TABLES EXCEPT TABLE tb1 ..\" all the except tables should be\nspecified in a single statement unlike the other publication objects.\n\n> ~~~\n>\n> 28. src/bin/pg_dump/pg_dump.c - dumpPublicationTable\n>\n> @@ -4330,8 +4378,11 @@ dumpPublicationTable(Archive *fout, const\n> PublicationRelInfo *pubrinfo)\n>\n> query = createPQExpBuffer();\n>\n> - appendPQExpBuffer(query, \"ALTER PUBLICATION %s ADD TABLE ONLY\",\n> + appendPQExpBuffer(query, \"ALTER PUBLICATION %s ADD \",\n> fmtId(pubinfo->dobj.name));\n> +\n> + appendPQExpBufferStr(query, \"TABLE ONLY\");\n> +\n>\n> That code refactor does not seem necessary for this patch.\n\nModified\n\n> ~~~\n>\n> 29. 
src/bin/pg_dump/pg_dump_sort.c\n>\n> @@ -90,6 +90,7 @@ enum dbObjectTypePriorities\n> PRIO_FK_CONSTRAINT,\n> PRIO_POLICY,\n> PRIO_PUBLICATION,\n> + PRIO_PUBLICATION_EXCEPT_REL,\n> PRIO_PUBLICATION_REL,\n> PRIO_PUBLICATION_TABLE_IN_SCHEMA,\n> PRIO_SUBSCRIPTION,\n>\n> I'm not sure how this enum is used (so perhaps this makes no\n> difference) but judging by the enum comment why did you put the sort\n> priority order PRIO_PUBLICATION_EXCEPT_REL before\n> PRIO_PUBLICATION_REL. Wouldn’t it make more sense the other way\n> around?\n\nThis order does not matter, since the new syntax is like \"CREATE\nPUBLICATION.. FOR ALL TABLES EXCEPT TABLE ....\", all the except tables\nneed to be accumulated and handled during dump publication. This code\nchanges take care of accumulating the exception table which will be\nused later by dump publication\n\n> ~~~\n>\n> 30. src/bin/psql/describe.c\n>\n> @@ -2950,17 +2950,34 @@ describeOneTableDetails(const char *schemaname,\n> \" WHERE attrelid = pr.prrelid AND attnum = prattrs[s])\\n\"\n> \" ELSE NULL END) \"\n> \"FROM pg_catalog.pg_publication p\\n\"\n> - \" JOIN pg_catalog.pg_publication_rel pr ON p.oid = pr.prpubid\\n\"\n> - \" JOIN pg_catalog.pg_class c ON c.oid = pr.prrelid\\n\"\n> - \"WHERE pr.prrelid = '%s'\\n\"\n> - \"UNION\\n\"\n> + \" JOIN pg_catalog.pg_publication_rel pr ON p.oid = pr.prpubid\\n\"\n> + \" JOIN pg_catalog.pg_class c ON c.oid = pr.prrelid\\n\"\n> + \"WHERE pr.prrelid = '%s'\",\n> + oid, oid, oid);\n>\n> I feel that trailing \"\\n\" (\"WHERE pr.prrelid = '%s'\\n\") should not\n> have been removed.\n\nModified\n\n> ~~~\n>\n> 31. src/bin/psql/describe.c\n>\n> + /* FIXME: 150000 should be changed to 160000 later for PG16. */\n> + if (pset.sversion >= 150000)\n> + appendPQExpBufferStr(&buf, \" AND pr.prexcept = 'f'\\n\");\n> +\n> + appendPQExpBuffer(&buf, \"UNION\\n\"\n>\n> The \"UNION\\n\" param might be better wrapped onto the next line like it\n> used to be.\n\nModified\n\n> ~~~\n>\n> 32. 
src/bin/psql/describe.c\n>\n> + /* FIXME: 150000 should be changed to 160000 later for PG16. */\n> + if (pset.sversion >= 150000)\n> + appendPQExpBuffer(&buf,\n> + \" AND NOT EXISTS (SELECT 1\\n\"\n> + \" FROM pg_catalog.pg_publication_rel pr\\n\"\n> + \" JOIN pg_catalog.pg_class pc\\n\"\n> + \" ON pr.prrelid = pc.oid\\n\"\n> + \" WHERE pr.prrelid = '%s' AND pr.prpubid = p.oid)\\n\",\n> + oid);\n>\n> The whitespace indents in the SQL seem excessive here.\n\nModified\n\n> ~~~\n>\n> 33. src/bin/psql/describe.c - describePublications\n>\n> @@ -6322,6 +6344,22 @@ describePublications(const char *pattern)\n> }\n> }\n>\n> + /* FIXME: 150000 should be changed to 160000 later for PG16. */\n> + if (pset.sversion >= 150000)\n> + {\n> + /* Get the excluded tables for the specified publication */\n> + printfPQExpBuffer(&buf,\n> + \"SELECT concat(c.relnamespace::regnamespace, '.', c.relname)\\n\"\n> + \"FROM pg_catalog.pg_class c\\n\"\n> + \" JOIN pg_catalog.pg_publication_rel pr ON c.oid = pr.prrelid\\n\"\n> + \"WHERE pr.prpubid = '%s'\\n\"\n> + \" AND pr.prexcept = 't'\\n\"\n> + \"ORDER BY 1\", pubid);\n> + if (!addFooterToPublicationDesc(&buf, \"Except tables:\",\n> + true, &cont))\n> + goto error_return;\n> + }\n> +\n>\n> I think this code is misplaced. Shouldn't it be if/else and be above\n> the other 150000 check, otherwise when you change this to PG16 it may\n> not work as expected?\n\nI moved this to else. I felt this is applicable only for all tables\npublication. Just keeping in else is fine.\n\n> ~~~\n>\n> 34. src/bin/psql/describe.c - describePublications\n>\n> + if (!addFooterToPublicationDesc(&buf, \"Except tables:\",\n> + true, &cont))\n> + goto error_return;\n> + }\n>\n> Should this be using the _T() macros same as the other prompts for translation?\n\nModified\n\n> ~~~\n>\n> 35. src/include/catalog/pg_publication.h\n>\n> I thought the param \"bexpect\" should be \"except_flag\".\n>\n> (same comment as #18a)\n\nModified\n\n> ~~~\n>\n> 36. 
src/include/catalog/pg_publication_rel.h\n>\n> @@ -31,6 +31,7 @@ CATALOG(pg_publication_rel,6106,PublicationRelRelationId)\n> Oid oid; /* oid */\n> Oid prpubid BKI_LOOKUP(pg_publication); /* Oid of the publication */\n> Oid prrelid BKI_LOOKUP(pg_class); /* Oid of the relation */\n> + bool prexcept BKI_DEFAULT(f); /* except the relation */\n>\n> SUGGEST (comment)\n> /* skip the relation */\n\nChanged it to exclude the relation\n\n> ~~~\n>\n> 37. src/include/commands/publicationcmds.h\n>\n> @@ -32,8 +32,8 @@ extern ObjectAddress AlterPublicationOwner(const\n> char *name, Oid newOwnerId);\n> extern void AlterPublicationOwner_oid(Oid pubid, Oid newOwnerId);\n> extern void InvalidatePublicationRels(List *relids);\n> extern bool pub_rf_contains_invalid_column(Oid pubid, Relation relation,\n> - List *ancestors, bool pubviaroot);\n> + List *ancestors, bool pubviaroot, bool alltables);\n> extern bool pub_collist_contains_invalid_column(Oid pubid, Relation relation,\n> - List *ancestors, bool pubviaroot);\n> + List *ancestors, bool pubviaroot, bool alltables);\n>\n> Elsewhere in this patch, a similarly added param is called\n> \"puballtables\" (not \"alltables\"). Please check all places and use a\n> consistent param name for all of them.\n\nModified\n\n> ~~~\n>\n> 38. src/test/regress/sql/publication.sql\n>\n> There don't seem to be any tests for more than one EXCEPT TABLE (e.g.\n> no list tests?)\n\nModified\n\n> ~~~\n>\n> 38. 
src/test/regress/sql/publication.sql\n>\n> Maybe adjust all the below comments (a-d) to say \"EXCEPT TABLES\"\n> instead of \"except tables\"\n>\n> 38a.\n> +-- can't add except table to 'FOR ALL TABLES' publication\n>\n> 38b.\n> +-- can't add except table to 'FOR TABLE' publication\n>\n> 38c.\n> +-- can't add except table to 'FOR ALL TABLES IN SCHEMA' publication\n>\n> 38d.\n> +-- can't add except table when publish_via_partition_root option does not\n> +-- have default value\n>\n> 38e.\n> +-- can't add except table when the publication options does not have default\n> +-- values\n>\n> SUGGESTION\n> can't add EXCEPT TABLE when the publication options are not the default values\n\nModified\n\n> ~~~\n>\n> 39. .../t/032_rep_changes_except_table.pl\n>\n> 39a.\n> +# Check the table data does not sync for excluded table\n> +my $result = $node_subscriber->safe_psql('postgres',\n> + \"SELECT count(*), min(a), max(a) FROM sch1.tab1\");\n> +is($result, qq(0||), 'check tablesync is excluded for excluded tables');\n>\n> Maybe the \"is\" message should say \"check there is no initial data\n> copied for the excluded table\"\n\nModified\n\n> ~~~\n>\n>\n> 40. .../t/032_rep_changes_except_table.pl\n>\n> +# Insert some data into few tables and verify that inserted data is not\n> +# replicated\n> +$node_publisher->safe_psql('postgres',\n> + \"INSERT INTO sch1.tab1 VALUES(generate_series(11,20))\");\n>\n> The comment is not quite correct. You are inserting into only one\n> table here - not \"few tables\".\n\nModified\n\n> ~~~\n>\n> 41. 
.../t/032_rep_changes_except_table.pl\n>\n> +# Alter publication to exclude data changes in public.tab1 and verify that\n> +# subscriber does not get the new table data.\n>\n> \"new table data\" -> \"changed data for this table\"\n\nModified\n\nThanks for the comments. The v6 patch attached at [2] has the changes\nfor the same.\n[1] - https://www.postgresql.org/message-id/a2004f08-eb2f-b124-115c-f8f18667e585%40enterprisedb.com\n[2] - https://www.postgresql.org/message-id/CALDaNm0iZZDB300Dez_97S8G6_RW5QpQ8ef6X3wq8tyK-8wnXQ%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 19 May 2022 08:27:02 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "Below are my review comments for v6-0001.\n\n======\n\n1. General.\n\nThe patch failed 'publication' tests in the make check phase.\n\nPlease add this work to the commit-fest so that the 'cfbot' can report\nsuch errors sooner.\n\n~~~\n\n2. src/backend/commands/publicationcmds.c - AlterPublicationReset\n\n+/*\n+ * Reset the publication.\n+ *\n+ * Reset the publication options, publication relations and\npublication schemas.\n+ */\n+static void\n+AlterPublicationReset(ParseState *pstate, AlterPublicationStmt *stmt,\n+ Relation rel, HeapTuple tup)\n\nSUGGESTION (Make the comment similar to the sgml text instead of\nrepeating \"publication\" 4x !)\n/*\n * Reset the publication options, set the ALL TABLES flag to false, and\n * drop all relations and schemas that are associated with the publication.\n */\n\n~~~\n\n3. src/test/regress/expected/publication.out\n\nmake check failed. The diff is below:\n\n@@ -1716,7 +1716,7 @@\n -- Verify that only superuser can reset a publication\n ALTER PUBLICATION testpub_reset OWNER TO regress_publication_user2;\n SET ROLE regress_publication_user2;\n-ALTER PUBLICATION testpub_reset RESET; -- fail\n+ALTER PUBLICATION testpub_reset RESET; -- fail - must be superuser\n ERROR: must be superuser to RESET publication\n SET ROLE regress_publication_user;\n DROP PUBLICATION testpub_reset;\n\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 19 May 2022 18:19:19 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "FYI, although the v6-0002 patch applied cleanly, I found that the SGML\nwas malformed and so the pg docs build fails.\n\n~~~\ne.g.\n\n[postgres@CentOS7-x64 sgml]$ make STYLE=website html\n{ \\\n echo \"<!ENTITY version \\\"15beta1\\\">\"; \\\n echo \"<!ENTITY majorversion \\\"15\\\">\"; \\\n} > version.sgml\n'/usr/bin/perl' ./mk_feature_tables.pl YES\n../../../src/backend/catalog/sql_feature_packages.txt\n../../../src/backend/catalog/sql_features.txt >\nfeatures-supported.sgml\n'/usr/bin/perl' ./mk_feature_tables.pl NO\n../../../src/backend/catalog/sql_feature_packages.txt\n../../../src/backend/catalog/sql_features.txt >\nfeatures-unsupported.sgml\n'/usr/bin/perl' ./generate-errcodes-table.pl\n../../../src/backend/utils/errcodes.txt > errcodes-table.sgml\n'/usr/bin/perl' ./generate-keywords-table.pl . > keywords-table.sgml\n/usr/bin/xmllint --path . --noout --valid postgres.sgml\nref/create_publication.sgml:171: parser error : Opening and ending tag\nmismatch: varlistentry line 166 and listitem\n </listitem>\n ^\nref/create_publication.sgml:172: parser error : Opening and ending tag\nmismatch: variablelist line 60 and varlistentry\n </varlistentry>\n ^\nref/create_publication.sgml:226: parser error : Opening and ending tag\nmismatch: refsect1 line 57 and variablelist\n </variablelist>\n ^\n...\n\nI will work around it locally, but for future patches please check the\nSGML builds ok before posting.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 20 May 2022 10:19:13 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Fri, May 20, 2022 at 10:19 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> FYI, although the v6-0002 patch applied cleanly, I found that the SGML\n> was malformed and so the pg docs build fails.\n>\n> ~~~\n> e.g.\n>\n> [postgres@CentOS7-x64 sgml]$ make STYLE=website html\n> { \\\n> echo \"<!ENTITY version \\\"15beta1\\\">\"; \\\n> echo \"<!ENTITY majorversion \\\"15\\\">\"; \\\n> } > version.sgml\n> '/usr/bin/perl' ./mk_feature_tables.pl YES\n> ../../../src/backend/catalog/sql_feature_packages.txt\n> ../../../src/backend/catalog/sql_features.txt >\n> features-supported.sgml\n> '/usr/bin/perl' ./mk_feature_tables.pl NO\n> ../../../src/backend/catalog/sql_feature_packages.txt\n> ../../../src/backend/catalog/sql_features.txt >\n> features-unsupported.sgml\n> '/usr/bin/perl' ./generate-errcodes-table.pl\n> ../../../src/backend/utils/errcodes.txt > errcodes-table.sgml\n> '/usr/bin/perl' ./generate-keywords-table.pl . > keywords-table.sgml\n> /usr/bin/xmllint --path . --noout --valid postgres.sgml\n> ref/create_publication.sgml:171: parser error : Opening and ending tag\n> mismatch: varlistentry line 166 and listitem\n> </listitem>\n> ^\n> ref/create_publication.sgml:172: parser error : Opening and ending tag\n> mismatch: variablelist line 60 and varlistentry\n> </varlistentry>\n> ^\n> ref/create_publication.sgml:226: parser error : Opening and ending tag\n> mismatch: refsect1 line 57 and variablelist\n> </variablelist>\n> ^\n> ...\n>\n> I will work around it locally, but for future patches please check the\n> SGML builds ok before posting.\n\nFYI, I rewrote the bad SGML fragment like this:\n\n <varlistentry>\n <term><literal>EXCEPT TABLE</literal></term>\n <listitem>\n <para>\n This clause specifies a list of tables to exclude from the publication. It\n can only be used with <literal>FOR ALL TABLES</literal>.\n </para>\n </listitem>\n </varlistentry>\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 20 May 2022 11:20:47 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "Below are my review comments for v6-0002.\n\n======\n\n1. Commit message.\nThe psql \\d family of commands to display excluded tables.\n\nSUGGESTION\nThe psql \\d family of commands can now display excluded tables.\n\n~~~\n\n2. doc/src/sgml/ref/alter_publication.sgml\n\n@@ -22,6 +22,7 @@ PostgreSQL documentation\n <refsynopsisdiv>\n <synopsis>\n ALTER PUBLICATION <replaceable class=\"parameter\">name</replaceable>\nADD <replaceable class=\"parameter\">publication_object</replaceable> [,\n...]\n+ALTER PUBLICATION <replaceable class=\"parameter\">name</replaceable>\nADD ALL TABLES [ EXCEPT [ TABLE ] exception_object [, ... ] ]\n\nThe \"exception_object\" font is wrong. Should look the same as\n\"publication_object\"\n\n~~~\n\n3. doc/src/sgml/ref/alter_publication.sgml - Examples\n\n@@ -214,6 +220,14 @@ ALTER PUBLICATION sales_publication ADD ALL\nTABLES IN SCHEMA marketing, sales;\n </programlisting>\n </para>\n\n+ <para>\n+ Alter publication <structname>production_publication</structname> to publish\n+ all tables except <structname>users</structname> and\n+ <structname>departments</structname> tables:\n+<programlisting>\n+ALTER PUBLICATION production_publication ADD ALL TABLES EXCEPT TABLE\nusers, departments;\n+</programlisting></para>\n\nConsider using \"EXCEPT\" instead of \"EXCEPT TABLE\" because that will\nshow TABLE keyword is optional.\n\n~~~\n\n4. doc/src/sgml/ref/create_publication.sgml\n\nAn SGML tag error caused building the docs to fail. My fix was\npreviously reported [1].\n\n~~~\n\n5. doc/src/sgml/ref/create_publication.sgml\n\n@@ -22,7 +22,7 @@ PostgreSQL documentation\n <refsynopsisdiv>\n <synopsis>\n CREATE PUBLICATION <replaceable class=\"parameter\">name</replaceable>\n- [ FOR ALL TABLES\n+ [ FOR ALL TABLES [ EXCEPT [ TABLE ] exception_object [, ... ] ]\n\nThe \"exception_object\" font is wrong. Should look the same as\n\"publication_object\"\n\n~~~\n\n6. 
doc/src/sgml/ref/create_publication.sgml - Examples\n\n@@ -351,6 +366,15 @@ CREATE PUBLICATION production_publication FOR\nTABLE users, departments, ALL TABL\n CREATE PUBLICATION sales_publication FOR ALL TABLES IN SCHEMA marketing, sales;\n </programlisting></para>\n\n+ <para>\n+ Create a publication that publishes all changes in all the tables except for\n+ the changes of <structname>users</structname> and\n+ <structname>departments</structname> table:\n+<programlisting>\n+CREATE PUBLICATION mypublication FOR ALL TABLE EXCEPT TABLE users, departments;\n+</programlisting>\n+ </para>\n+\n\n6a.\nTypo: \"FOR ALL TABLE\" -> \"FOR ALL TABLES\"\n\n6b.\nConsider using \"EXCEPT\" instead of \"EXCEPT TABLE\" because that will\nshow TABLE keyword is optional.\n\n~~~\n\n7. src/backend/catalog/pg_publication.c - GetTopMostAncestorInPublication\n\n@@ -316,18 +316,25 @@ GetTopMostAncestorInPublication(Oid puboid, List\n*ancestors, int *ancestor_level\n }\n else\n {\n- aschemaPubids = GetSchemaPublications(get_rel_namespace(ancestor));\n- if (list_member_oid(aschemaPubids, puboid))\n+ List *aschemapubids = NIL;\n+ List *aexceptpubids = NIL;\n+\n+ aschemapubids = GetSchemaPublications(get_rel_namespace(ancestor));\n+ aexceptpubids = GetRelationPublications(ancestor, true);\n+ if (list_member_oid(aschemapubids, puboid) ||\n+ (puballtables && !list_member_oid(aexceptpubids, puboid)))\n {\n\nYou could re-write this as multiple conditions instead of one. That\ncould avoid always assigning the 'aexceptpubids', so it might be a\nmore efficient way to write this logic.\n\n~~~\n\n8. 
src/backend/catalog/pg_publication.c - CheckPublicationDefValues\n\n+/*\n+ * Check if the publication has default values\n+ *\n+ * Check the following:\n+ * Publication is having default options\n+ * Publication is not associated with relations\n+ * Publication is not associated with schemas\n+ * Publication is not set with \"FOR ALL TABLES\"\n+ */\n+static bool\n+CheckPublicationDefValues(HeapTuple tup)\n\n8a.\nRemove the tab. Replace with spaces.\n\n8b.\nIt might be better if this comment order is the same as the logic order.\ne.g.\n\n* Check the following:\n* Publication is not set with \"FOR ALL TABLES\"\n* Publication is having default options\n* Publication is not associated with schemas\n* Publication is not associated with relations\n\n~~~\n\n9. src/backend/catalog/pg_publication.c - AlterPublicationSetAllTables\n\n+/*\n+ * Reset the publication.\n+ *\n+ * Reset the publication options, publication relations and\npublication schemas.\n+ */\n+static void\n+AlterPublicationSetAllTables(Relation rel, HeapTuple tup)\n\nThe function comment and the function name do not seem to match here;\nsomething looks like a cut/paste error ??\n\n~~~\n\n10. src/backend/catalog/pg_publication.c - AlterPublicationSetAllTables\n\n+ /* set all tables option */\n+ values[Anum_pg_publication_puballtables - 1] = BoolGetDatum(true);\n+ replaces[Anum_pg_publication_puballtables - 1] = true;\n\nSUGGEST (comment)\n/* set all ALL TABLES flag */\n\n~~~\n\n11. 
src/backend/catalog/pg_publication.c - AlterPublication\n\n@@ -1501,6 +1579,20 @@ AlterPublication(ParseState *pstate,\nAlterPublicationStmt *stmt)\n aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_PUBLICATION,\n stmt->pubname);\n\n+ if (stmt->for_all_tables)\n+ {\n+ bool isdefault = CheckPublicationDefValues(tup);\n+\n+ if (!isdefault)\n+ ereport(ERROR,\n+ errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n+ errmsg(\"Setting ALL TABLES requires publication \\\"%s\\\" to have\ndefault values\",\n+ stmt->pubname),\n+ errhint(\"Use ALTER PUBLICATION ... RESET to reset the publication\"));\n\nThe errmsg should start with a lowercase letter.\n\n~~~\n\n12. src/backend/catalog/pg_publication.c - AlterPublication\n\n@@ -1501,6 +1579,20 @@ AlterPublication(ParseState *pstate,\nAlterPublicationStmt *stmt)\n aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_PUBLICATION,\n stmt->pubname);\n\n+ if (stmt->for_all_tables)\n+ {\n+ bool isdefault = CheckPublicationDefValues(tup);\n+\n+ if (!isdefault)\n+ ereport(ERROR,\n+ errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n+ errmsg(\"Setting ALL TABLES requires publication \\\"%s\\\" to have\ndefault values\",\n+ stmt->pubname),\n+ errhint(\"Use ALTER PUBLICATION ... RESET to reset the publication\"));\n\nExample test:\n\npostgres=# create table t1(a int);\nCREATE TABLE\npostgres=# create publication p1 for table t1;\nCREATE PUBLICATION\npostgres=# alter publication p1 add all tables except t1;\n2022-05-20 14:34:49.301 AEST [21802] ERROR: Setting ALL TABLES\nrequires publication \"p1\" to have default values\n2022-05-20 14:34:49.301 AEST [21802] HINT: Use ALTER PUBLICATION ...\nRESET to reset the publication\n2022-05-20 14:34:49.301 AEST [21802] STATEMENT: alter publication p1\nadd all tables except t1;\nERROR: Setting ALL TABLES requires publication \"p1\" to have default values\nHINT: Use ALTER PUBLICATION ... 
RESET to reset the publication\npostgres=# alter publication p1 set all tables except t1;\n\nThat error message does not quite match what the user was doing.\nFirstly, they were adding the ALL TABLES, not setting it. Secondly,\nall the values of the publication were already defaults (only there\nwas an existing table t1 in the publication). Maybe some minor changes\nto the message wording can be a better reflect what the user is doing\nhere.\n\n~~~\n\n13. src/backend/parser/gram.y\n\n@@ -10410,7 +10411,7 @@ AlterOwnerStmt: ALTER AGGREGATE\naggregate_with_argtypes OWNER TO RoleSpec\n *\n * CREATE PUBLICATION name [WITH options]\n *\n- * CREATE PUBLICATION FOR ALL TABLES [WITH options]\n+ * CREATE PUBLICATION FOR ALL TABLES [EXCEPT TABLE table [, ...]]\n[WITH options]\n\nComment should show the \"TABLE\" keyword is optional\n\n~~~\n\n14. src/bin/pg_dump/pg_dump.c - dumpPublicationTable\n\n@@ -4332,6 +4380,7 @@ dumpPublicationTable(Archive *fout, const\nPublicationRelInfo *pubrinfo)\n\n appendPQExpBuffer(query, \"ALTER PUBLICATION %s ADD TABLE ONLY\",\n fmtId(pubinfo->dobj.name));\n+\n appendPQExpBuffer(query, \" %s\",\n fmtQualifiedDumpable(tbinfo));\n\nThis additional whitespace seems unrelated to this patch\n\n~~~\n\n15. 
src/include/nodes/parsenodes.h\n\n15a.\n@@ -3999,6 +3999,7 @@ typedef struct PublicationTable\n RangeVar *relation; /* relation to be published */\n Node *whereClause; /* qualifications */\n List *columns; /* List of columns in a publication table */\n+ bool except; /* except relation */\n } PublicationTable;\n\nMaybe the comment should be more like similar ones:\n/* exclude the relation */\n\n15b.\n@@ -4007,6 +4008,7 @@ typedef struct PublicationTable\n typedef enum PublicationObjSpecType\n {\n PUBLICATIONOBJ_TABLE, /* A table */\n+ PUBLICATIONOBJ_EXCEPT_TABLE, /* An Except table */\n PUBLICATIONOBJ_TABLES_IN_SCHEMA, /* All tables in schema */\n PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA, /* All tables in first element of\n\nMaybe the comment should be more like:\n/* A table to be excluded */\n\n~~~\n\n16. src/test/regress/sql/publication.sql\n\nI did not see any test cases using EXCEPT when the optional TABLE\nkeyword is omitted.\n\n\n------\n[1] https://www.postgresql.org/message-id/CAHut%2BPtZDfBJ1d%3D3kSexgM5m%2BP_ok8sdsJXKimsXycaMyqXsNA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 20 May 2022 15:53:12 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Thu, May 19, 2022 at 1:49 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Below are my review comments for v6-0001.\n>\n> ======\n>\n> 1. General.\n>\n> The patch failed 'publication' tests in the make check phase.\n>\n> Please add this work to the commit-fest so that the 'cfbot' can report\n> such errors sooner.\n\nAdded commitfest entry\n\n> ~~~\n>\n> 2. src/backend/commands/publicationcmds.c - AlterPublicationReset\n>\n> +/*\n> + * Reset the publication.\n> + *\n> + * Reset the publication options, publication relations and\n> publication schemas.\n> + */\n> +static void\n> +AlterPublicationReset(ParseState *pstate, AlterPublicationStmt *stmt,\n> + Relation rel, HeapTuple tup)\n>\n> SUGGESTION (Make the comment similar to the sgml text instead of\n> repeating \"publication\" 4x !)\n> /*\n> * Reset the publication options, set the ALL TABLES flag to false, and\n> * drop all relations and schemas that are associated with the publication.\n> */\n\nModified\n\n> ~~~\n>\n> 3. src/test/regress/expected/publication.out\n>\n> make check failed. The diff is below:\n>\n> @@ -1716,7 +1716,7 @@\n> -- Verify that only superuser can reset a publication\n> ALTER PUBLICATION testpub_reset OWNER TO regress_publication_user2;\n> SET ROLE regress_publication_user2;\n> -ALTER PUBLICATION testpub_reset RESET; -- fail\n> +ALTER PUBLICATION testpub_reset RESET; -- fail - must be superuser\n> ERROR: must be superuser to RESET publication\n> SET ROLE regress_publication_user;\n> DROP PUBLICATION testpub_reset;\n\nIt passed for me locally because the change was present in the 002\npatch. I have moved the change to 001.\n\nThe attached v7 patch has the changes for the same.\n[1] - https://commitfest.postgresql.org/38/3646/\n\nRegards,\nVignesh",
"msg_date": "Sat, 21 May 2022 11:00:52 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Fri, May 20, 2022 at 5:49 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> FYI, although the v6-0002 patch applied cleanly, I found that the SGML\n> was malformed and so the pg docs build fails.\n>\n> ~~~\n> e.g.\n>\n> [postgres@CentOS7-x64 sgml]$ make STYLE=website html\n> { \\\n> echo \"<!ENTITY version \\\"15beta1\\\">\"; \\\n> echo \"<!ENTITY majorversion \\\"15\\\">\"; \\\n> } > version.sgml\n> '/usr/bin/perl' ./mk_feature_tables.pl YES\n> ../../../src/backend/catalog/sql_feature_packages.txt\n> ../../../src/backend/catalog/sql_features.txt >\n> features-supported.sgml\n> '/usr/bin/perl' ./mk_feature_tables.pl NO\n> ../../../src/backend/catalog/sql_feature_packages.txt\n> ../../../src/backend/catalog/sql_features.txt >\n> features-unsupported.sgml\n> '/usr/bin/perl' ./generate-errcodes-table.pl\n> ../../../src/backend/utils/errcodes.txt > errcodes-table.sgml\n> '/usr/bin/perl' ./generate-keywords-table.pl . > keywords-table.sgml\n> /usr/bin/xmllint --path . --noout --valid postgres.sgml\n> ref/create_publication.sgml:171: parser error : Opening and ending tag\n> mismatch: varlistentry line 166 and listitem\n> </listitem>\n> ^\n> ref/create_publication.sgml:172: parser error : Opening and ending tag\n> mismatch: variablelist line 60 and varlistentry\n> </varlistentry>\n> ^\n> ref/create_publication.sgml:226: parser error : Opening and ending tag\n> mismatch: refsect1 line 57 and variablelist\n> </variablelist>\n> ^\n> ...\n>\n> I will work around it locally, but for future patches please check the\n> SGML builds ok before posting.\n\nThanks for reporting this, I have made the changes for this.\nThe v7 patch attached at [1] has the changes for the same.\n\n[1] - https://www.postgresql.org/message-id/CALDaNm3EpX3%2BRu%3DSNaYi%3DUW5ZLE6nNhGRHZ7a8-fXPZ_-gLdxQ%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 21 May 2022 11:02:45 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Fri, May 20, 2022 at 11:23 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Below are my review comments for v6-0002.\n>\n> ======\n>\n> 1. Commit message.\n> The psql \\d family of commands to display excluded tables.\n>\n> SUGGESTION\n> The psql \\d family of commands can now display excluded tables.\n\nModified\n\n> ~~~\n>\n> 2. doc/src/sgml/ref/alter_publication.sgml\n>\n> @@ -22,6 +22,7 @@ PostgreSQL documentation\n> <refsynopsisdiv>\n> <synopsis>\n> ALTER PUBLICATION <replaceable class=\"parameter\">name</replaceable>\n> ADD <replaceable class=\"parameter\">publication_object</replaceable> [,\n> ...]\n> +ALTER PUBLICATION <replaceable class=\"parameter\">name</replaceable>\n> ADD ALL TABLES [ EXCEPT [ TABLE ] exception_object [, ... ] ]\n>\n> The \"exception_object\" font is wrong. Should look the same as\n> \"publication_object\"\n\nModified\n\n> ~~~\n>\n> 3. doc/src/sgml/ref/alter_publication.sgml - Examples\n>\n> @@ -214,6 +220,14 @@ ALTER PUBLICATION sales_publication ADD ALL\n> TABLES IN SCHEMA marketing, sales;\n> </programlisting>\n> </para>\n>\n> + <para>\n> + Alter publication <structname>production_publication</structname> to publish\n> + all tables except <structname>users</structname> and\n> + <structname>departments</structname> tables:\n> +<programlisting>\n> +ALTER PUBLICATION production_publication ADD ALL TABLES EXCEPT TABLE\n> users, departments;\n> +</programlisting></para>\n>\n> Consider using \"EXCEPT\" instead of \"EXCEPT TABLE\" because that will\n> show TABLE keyword is optional.\n\nModified\n\n> ~~~\n>\n> 4. doc/src/sgml/ref/create_publication.sgml\n>\n> An SGML tag error caused building the docs to fail. My fix was\n> previously reported [1].\n\nModified\n\n> ~~~\n>\n> 5. 
doc/src/sgml/ref/create_publication.sgml\n>\n> @@ -22,7 +22,7 @@ PostgreSQL documentation\n> <refsynopsisdiv>\n> <synopsis>\n> CREATE PUBLICATION <replaceable class=\"parameter\">name</replaceable>\n> - [ FOR ALL TABLES\n> + [ FOR ALL TABLES [ EXCEPT [ TABLE ] exception_object [, ... ] ]\n>\n> The \"exception_object\" font is wrong. Should look the same as\n> \"publication_object\"\n\nModified\n\n> ~~~\n>\n> 6. doc/src/sgml/ref/create_publication.sgml - Examples\n>\n> @@ -351,6 +366,15 @@ CREATE PUBLICATION production_publication FOR\n> TABLE users, departments, ALL TABL\n> CREATE PUBLICATION sales_publication FOR ALL TABLES IN SCHEMA marketing, sales;\n> </programlisting></para>\n>\n> + <para>\n> + Create a publication that publishes all changes in all the tables except for\n> + the changes of <structname>users</structname> and\n> + <structname>departments</structname> table:\n> +<programlisting>\n> +CREATE PUBLICATION mypublication FOR ALL TABLE EXCEPT TABLE users, departments;\n> +</programlisting>\n> + </para>\n> +\n>\n> 6a.\n> Typo: \"FOR ALL TABLE\" -> \"FOR ALL TABLES\"\n\nModified\n\n> 6b.\n> Consider using \"EXCEPT\" instead of \"EXCEPT TABLE\" because that will\n> show TABLE keyword is optional.\n\nModified\n\n> ~~~\n>\n> 7. src/backend/catalog/pg_publication.c - GetTopMostAncestorInPublication\n>\n> @@ -316,18 +316,25 @@ GetTopMostAncestorInPublication(Oid puboid, List\n> *ancestors, int *ancestor_level\n> }\n> else\n> {\n> - aschemaPubids = GetSchemaPublications(get_rel_namespace(ancestor));\n> - if (list_member_oid(aschemaPubids, puboid))\n> + List *aschemapubids = NIL;\n> + List *aexceptpubids = NIL;\n> +\n> + aschemapubids = GetSchemaPublications(get_rel_namespace(ancestor));\n> + aexceptpubids = GetRelationPublications(ancestor, true);\n> + if (list_member_oid(aschemapubids, puboid) ||\n> + (puballtables && !list_member_oid(aexceptpubids, puboid)))\n> {\n>\n> You could re-write this as multiple conditions instead of one. 
That\n> could avoid always assigning the 'aexceptpubids', so it might be a\n> more efficient way to write this logic.\n\nModified\n\n> ~~~\n>\n> 8. src/backend/catalog/pg_publication.c - CheckPublicationDefValues\n>\n> +/*\n> + * Check if the publication has default values\n> + *\n> + * Check the following:\n> + * Publication is having default options\n> + * Publication is not associated with relations\n> + * Publication is not associated with schemas\n> + * Publication is not set with \"FOR ALL TABLES\"\n> + */\n> +static bool\n> +CheckPublicationDefValues(HeapTuple tup)\n>\n> 8a.\n> Remove the tab. Replace with spaces.\n\nModified\n\n> 8b.\n> It might be better if this comment order is the same as the logic order.\n> e.g.\n>\n> * Check the following:\n> * Publication is not set with \"FOR ALL TABLES\"\n> * Publication is having default options\n> * Publication is not associated with schemas\n> * Publication is not associated with relations\n\nModified\n\n> ~~~\n>\n> 9. src/backend/catalog/pg_publication.c - AlterPublicationSetAllTables\n>\n> +/*\n> + * Reset the publication.\n> + *\n> + * Reset the publication options, publication relations and\n> publication schemas.\n> + */\n> +static void\n> +AlterPublicationSetAllTables(Relation rel, HeapTuple tup)\n>\n> The function comment and the function name do not seem to match here;\n> something looks like a cut/paste error ??\n\nModified\n\n> ~~~\n>\n> 10. src/backend/catalog/pg_publication.c - AlterPublicationSetAllTables\n>\n> + /* set all tables option */\n> + values[Anum_pg_publication_puballtables - 1] = BoolGetDatum(true);\n> + replaces[Anum_pg_publication_puballtables - 1] = true;\n>\n> SUGGEST (comment)\n> /* set all ALL TABLES flag */\n\nModified\n\n> ~~~\n>\n> 11. 
src/backend/catalog/pg_publication.c - AlterPublication\n>\n> @@ -1501,6 +1579,20 @@ AlterPublication(ParseState *pstate,\n> AlterPublicationStmt *stmt)\n> aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_PUBLICATION,\n> stmt->pubname);\n>\n> + if (stmt->for_all_tables)\n> + {\n> + bool isdefault = CheckPublicationDefValues(tup);\n> +\n> + if (!isdefault)\n> + ereport(ERROR,\n> + errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n> + errmsg(\"Setting ALL TABLES requires publication \\\"%s\\\" to have\n> default values\",\n> + stmt->pubname),\n> + errhint(\"Use ALTER PUBLICATION ... RESET to reset the publication\"));\n>\n> The errmsg should start with a lowercase letter.\n\nModified\n\n> ~~~\n>\n> 12. src/backend/catalog/pg_publication.c - AlterPublication\n>\n> @@ -1501,6 +1579,20 @@ AlterPublication(ParseState *pstate,\n> AlterPublicationStmt *stmt)\n> aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_PUBLICATION,\n> stmt->pubname);\n>\n> + if (stmt->for_all_tables)\n> + {\n> + bool isdefault = CheckPublicationDefValues(tup);\n> +\n> + if (!isdefault)\n> + ereport(ERROR,\n> + errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n> + errmsg(\"Setting ALL TABLES requires publication \\\"%s\\\" to have\n> default values\",\n> + stmt->pubname),\n> + errhint(\"Use ALTER PUBLICATION ... RESET to reset the publication\"));\n>\n> Example test:\n>\n> postgres=# create table t1(a int);\n> CREATE TABLE\n> postgres=# create publication p1 for table t1;\n> CREATE PUBLICATION\n> postgres=# alter publication p1 add all tables except t1;\n> 2022-05-20 14:34:49.301 AEST [21802] ERROR: Setting ALL TABLES\n> requires publication \"p1\" to have default values\n> 2022-05-20 14:34:49.301 AEST [21802] HINT: Use ALTER PUBLICATION ...\n> RESET to reset the publication\n> 2022-05-20 14:34:49.301 AEST [21802] STATEMENT: alter publication p1\n> add all tables except t1;\n> ERROR: Setting ALL TABLES requires publication \"p1\" to have default values\n> HINT: Use ALTER PUBLICATION ... 
RESET to reset the publication\n> postgres=# alter publication p1 set all tables except t1;\n>\n> That error message does not quite match what the user was doing.\n> Firstly, they were adding the ALL TABLES, not setting it. Secondly,\n> all the values of the publication were already defaults (only there\n> was an existing table t1 in the publication). Maybe some minor changes\n> to the message wording can be a better reflect what the user is doing\n> here.\n\nModified\n\n> ~~~\n>\n> 13. src/backend/parser/gram.y\n>\n> @@ -10410,7 +10411,7 @@ AlterOwnerStmt: ALTER AGGREGATE\n> aggregate_with_argtypes OWNER TO RoleSpec\n> *\n> * CREATE PUBLICATION name [WITH options]\n> *\n> - * CREATE PUBLICATION FOR ALL TABLES [WITH options]\n> + * CREATE PUBLICATION FOR ALL TABLES [EXCEPT TABLE table [, ...]]\n> [WITH options]\n>\n> Comment should show the \"TABLE\" keyword is optional\n\nModified\n\n> ~~~\n>\n> 14. src/bin/pg_dump/pg_dump.c - dumpPublicationTable\n>\n> @@ -4332,6 +4380,7 @@ dumpPublicationTable(Archive *fout, const\n> PublicationRelInfo *pubrinfo)\n>\n> appendPQExpBuffer(query, \"ALTER PUBLICATION %s ADD TABLE ONLY\",\n> fmtId(pubinfo->dobj.name));\n> +\n> appendPQExpBuffer(query, \" %s\",\n> fmtQualifiedDumpable(tbinfo));\n>\n> This additional whitespace seems unrelated to this patch\n\nModified\n\n> ~~~\n>\n> 15. 
src/include/nodes/parsenodes.h\n>\n> 15a.\n> @@ -3999,6 +3999,7 @@ typedef struct PublicationTable\n> RangeVar *relation; /* relation to be published */\n> Node *whereClause; /* qualifications */\n> List *columns; /* List of columns in a publication table */\n> + bool except; /* except relation */\n> } PublicationTable;\n>\n> Maybe the comment should be more like similar ones:\n> /* exclude the relation */\n\nModified\n\n> 15b.\n> @@ -4007,6 +4008,7 @@ typedef struct PublicationTable\n> typedef enum PublicationObjSpecType\n> {\n> PUBLICATIONOBJ_TABLE, /* A table */\n> + PUBLICATIONOBJ_EXCEPT_TABLE, /* An Except table */\n> PUBLICATIONOBJ_TABLES_IN_SCHEMA, /* All tables in schema */\n> PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA, /* All tables in first element of\n>\n> Maybe the comment should be more like:\n> /* A table to be excluded */\n\nModified\n\n> ~~~\n>\n> 16. src/test/regress/sql/publication.sql\n>\n> I did not see any test cases using EXCEPT when the optional TABLE\n> keyword is omitted.\n\nAdded a test\n\nThanks for the comments, the v7 patch attached at [1] has the changes\nfor the same.\n[1] - https://www.postgresql.org/message-id/CALDaNm3EpX3%2BRu%3DSNaYi%3DUW5ZLE6nNhGRHZ7a8-fXPZ_-gLdxQ%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 21 May 2022 11:06:16 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Sat, May 21, 2022 at 11:06 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, May 20, 2022 at 11:23 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Below are my review comments for v6-0002.\n> >\n> > ======\n> >\n> > 1. Commit message.\n> > The psql \\d family of commands to display excluded tables.\n> >\n> > SUGGESTION\n> > The psql \\d family of commands can now display excluded tables.\n>\n> Modified\n>\n> > ~~~\n> >\n> > 2. doc/src/sgml/ref/alter_publication.sgml\n> >\n> > @@ -22,6 +22,7 @@ PostgreSQL documentation\n> > <refsynopsisdiv>\n> > <synopsis>\n> > ALTER PUBLICATION <replaceable class=\"parameter\">name</replaceable>\n> > ADD <replaceable class=\"parameter\">publication_object</replaceable> [,\n> > ...]\n> > +ALTER PUBLICATION <replaceable class=\"parameter\">name</replaceable>\n> > ADD ALL TABLES [ EXCEPT [ TABLE ] exception_object [, ... ] ]\n> >\n> > The \"exception_object\" font is wrong. Should look the same as\n> > \"publication_object\"\n>\n> Modified\n>\n> > ~~~\n> >\n> > 3. doc/src/sgml/ref/alter_publication.sgml - Examples\n> >\n> > @@ -214,6 +220,14 @@ ALTER PUBLICATION sales_publication ADD ALL\n> > TABLES IN SCHEMA marketing, sales;\n> > </programlisting>\n> > </para>\n> >\n> > + <para>\n> > + Alter publication <structname>production_publication</structname> to publish\n> > + all tables except <structname>users</structname> and\n> > + <structname>departments</structname> tables:\n> > +<programlisting>\n> > +ALTER PUBLICATION production_publication ADD ALL TABLES EXCEPT TABLE\n> > users, departments;\n> > +</programlisting></para>\n> >\n> > Consider using \"EXCEPT\" instead of \"EXCEPT TABLE\" because that will\n> > show TABLE keyword is optional.\n>\n> Modified\n>\n> > ~~~\n> >\n> > 4. doc/src/sgml/ref/create_publication.sgml\n> >\n> > An SGML tag error caused building the docs to fail. My fix was\n> > previously reported [1].\n>\n> Modified\n>\n> > ~~~\n> >\n> > 5. 
doc/src/sgml/ref/create_publication.sgml\n> >\n> > @@ -22,7 +22,7 @@ PostgreSQL documentation\n> > <refsynopsisdiv>\n> > <synopsis>\n> > CREATE PUBLICATION <replaceable class=\"parameter\">name</replaceable>\n> > - [ FOR ALL TABLES\n> > + [ FOR ALL TABLES [ EXCEPT [ TABLE ] exception_object [, ... ] ]\n> >\n> > The \"exception_object\" font is wrong. Should look the same as\n> > \"publication_object\"\n>\n> Modified\n>\n> > ~~~\n> >\n> > 6. doc/src/sgml/ref/create_publication.sgml - Examples\n> >\n> > @@ -351,6 +366,15 @@ CREATE PUBLICATION production_publication FOR\n> > TABLE users, departments, ALL TABL\n> > CREATE PUBLICATION sales_publication FOR ALL TABLES IN SCHEMA marketing, sales;\n> > </programlisting></para>\n> >\n> > + <para>\n> > + Create a publication that publishes all changes in all the tables except for\n> > + the changes of <structname>users</structname> and\n> > + <structname>departments</structname> table:\n> > +<programlisting>\n> > +CREATE PUBLICATION mypublication FOR ALL TABLE EXCEPT TABLE users, departments;\n> > +</programlisting>\n> > + </para>\n> > +\n> >\n> > 6a.\n> > Typo: \"FOR ALL TABLE\" -> \"FOR ALL TABLES\"\n>\n> Modified\n>\n> > 6b.\n> > Consider using \"EXCEPT\" instead of \"EXCEPT TABLE\" because that will\n> > show TABLE keyword is optional.\n>\n> Modified\n>\n> > ~~~\n> >\n> > 7. 
src/backend/catalog/pg_publication.c - GetTopMostAncestorInPublication\n> >\n> > @@ -316,18 +316,25 @@ GetTopMostAncestorInPublication(Oid puboid, List\n> > *ancestors, int *ancestor_level\n> > }\n> > else\n> > {\n> > - aschemaPubids = GetSchemaPublications(get_rel_namespace(ancestor));\n> > - if (list_member_oid(aschemaPubids, puboid))\n> > + List *aschemapubids = NIL;\n> > + List *aexceptpubids = NIL;\n> > +\n> > + aschemapubids = GetSchemaPublications(get_rel_namespace(ancestor));\n> > + aexceptpubids = GetRelationPublications(ancestor, true);\n> > + if (list_member_oid(aschemapubids, puboid) ||\n> > + (puballtables && !list_member_oid(aexceptpubids, puboid)))\n> > {\n> >\n> > You could re-write this as multiple conditions instead of one. That\n> > could avoid always assigning the 'aexceptpubids', so it might be a\n> > more efficient way to write this logic.\n>\n> Modified\n>\n> > ~~~\n> >\n> > 8. src/backend/catalog/pg_publication.c - CheckPublicationDefValues\n> >\n> > +/*\n> > + * Check if the publication has default values\n> > + *\n> > + * Check the following:\n> > + * Publication is having default options\n> > + * Publication is not associated with relations\n> > + * Publication is not associated with schemas\n> > + * Publication is not set with \"FOR ALL TABLES\"\n> > + */\n> > +static bool\n> > +CheckPublicationDefValues(HeapTuple tup)\n> >\n> > 8a.\n> > Remove the tab. Replace with spaces.\n>\n> Modified\n>\n> > 8b.\n> > It might be better if this comment order is the same as the logic order.\n> > e.g.\n> >\n> > * Check the following:\n> > * Publication is not set with \"FOR ALL TABLES\"\n> > * Publication is having default options\n> > * Publication is not associated with schemas\n> > * Publication is not associated with relations\n>\n> Modified\n>\n> > ~~~\n> >\n> > 9. 
src/backend/catalog/pg_publication.c - AlterPublicationSetAllTables\n> >\n> > +/*\n> > + * Reset the publication.\n> > + *\n> > + * Reset the publication options, publication relations and\n> > publication schemas.\n> > + */\n> > +static void\n> > +AlterPublicationSetAllTables(Relation rel, HeapTuple tup)\n> >\n> > The function comment and the function name do not seem to match here;\n> > something looks like a cut/paste error ??\n>\n> Modified\n>\n> > ~~~\n> >\n> > 10. src/backend/catalog/pg_publication.c - AlterPublicationSetAllTables\n> >\n> > + /* set all tables option */\n> > + values[Anum_pg_publication_puballtables - 1] = BoolGetDatum(true);\n> > + replaces[Anum_pg_publication_puballtables - 1] = true;\n> >\n> > SUGGEST (comment)\n> > /* set all ALL TABLES flag */\n>\n> Modified\n>\n> > ~~~\n> >\n> > 11. src/backend/catalog/pg_publication.c - AlterPublication\n> >\n> > @@ -1501,6 +1579,20 @@ AlterPublication(ParseState *pstate,\n> > AlterPublicationStmt *stmt)\n> > aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_PUBLICATION,\n> > stmt->pubname);\n> >\n> > + if (stmt->for_all_tables)\n> > + {\n> > + bool isdefault = CheckPublicationDefValues(tup);\n> > +\n> > + if (!isdefault)\n> > + ereport(ERROR,\n> > + errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n> > + errmsg(\"Setting ALL TABLES requires publication \\\"%s\\\" to have\n> > default values\",\n> > + stmt->pubname),\n> > + errhint(\"Use ALTER PUBLICATION ... RESET to reset the publication\"));\n> >\n> > The errmsg should start with a lowercase letter.\n>\n> Modified\n>\n> > ~~~\n> >\n> > 12. 
src/backend/catalog/pg_publication.c - AlterPublication\n> >\n> > @@ -1501,6 +1579,20 @@ AlterPublication(ParseState *pstate,\n> > AlterPublicationStmt *stmt)\n> > aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_PUBLICATION,\n> > stmt->pubname);\n> >\n> > + if (stmt->for_all_tables)\n> > + {\n> > + bool isdefault = CheckPublicationDefValues(tup);\n> > +\n> > + if (!isdefault)\n> > + ereport(ERROR,\n> > + errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n> > + errmsg(\"Setting ALL TABLES requires publication \\\"%s\\\" to have\n> > default values\",\n> > + stmt->pubname),\n> > + errhint(\"Use ALTER PUBLICATION ... RESET to reset the publication\"));\n> >\n> > Example test:\n> >\n> > postgres=# create table t1(a int);\n> > CREATE TABLE\n> > postgres=# create publication p1 for table t1;\n> > CREATE PUBLICATION\n> > postgres=# alter publication p1 add all tables except t1;\n> > 2022-05-20 14:34:49.301 AEST [21802] ERROR: Setting ALL TABLES\n> > requires publication \"p1\" to have default values\n> > 2022-05-20 14:34:49.301 AEST [21802] HINT: Use ALTER PUBLICATION ...\n> > RESET to reset the publication\n> > 2022-05-20 14:34:49.301 AEST [21802] STATEMENT: alter publication p1\n> > add all tables except t1;\n> > ERROR: Setting ALL TABLES requires publication \"p1\" to have default values\n> > HINT: Use ALTER PUBLICATION ... RESET to reset the publication\n> > postgres=# alter publication p1 set all tables except t1;\n> >\n> > That error message does not quite match what the user was doing.\n> > Firstly, they were adding the ALL TABLES, not setting it. Secondly,\n> > all the values of the publication were already defaults (only there\n> > was an existing table t1 in the publication). Maybe some minor changes\n> > to the message wording can be a better reflect what the user is doing\n> > here.\n>\n> Modified\n>\n> > ~~~\n> >\n> > 13. 
src/backend/parser/gram.y\n> >\n> > @@ -10410,7 +10411,7 @@ AlterOwnerStmt: ALTER AGGREGATE\n> > aggregate_with_argtypes OWNER TO RoleSpec\n> > *\n> > * CREATE PUBLICATION name [WITH options]\n> > *\n> > - * CREATE PUBLICATION FOR ALL TABLES [WITH options]\n> > + * CREATE PUBLICATION FOR ALL TABLES [EXCEPT TABLE table [, ...]]\n> > [WITH options]\n> >\n> > Comment should show the \"TABLE\" keyword is optional\n>\n> Modified\n>\n> > ~~~\n> >\n> > 14. src/bin/pg_dump/pg_dump.c - dumpPublicationTable\n> >\n> > @@ -4332,6 +4380,7 @@ dumpPublicationTable(Archive *fout, const\n> > PublicationRelInfo *pubrinfo)\n> >\n> > appendPQExpBuffer(query, \"ALTER PUBLICATION %s ADD TABLE ONLY\",\n> > fmtId(pubinfo->dobj.name));\n> > +\n> > appendPQExpBuffer(query, \" %s\",\n> > fmtQualifiedDumpable(tbinfo));\n> >\n> > This additional whitespace seems unrelated to this patch\n>\n> Modified\n>\n> > ~~~\n> >\n> > 15. src/include/nodes/parsenodes.h\n> >\n> > 15a.\n> > @@ -3999,6 +3999,7 @@ typedef struct PublicationTable\n> > RangeVar *relation; /* relation to be published */\n> > Node *whereClause; /* qualifications */\n> > List *columns; /* List of columns in a publication table */\n> > + bool except; /* except relation */\n> > } PublicationTable;\n> >\n> > Maybe the comment should be more like similar ones:\n> > /* exclude the relation */\n>\n> Modified\n>\n> > 15b.\n> > @@ -4007,6 +4008,7 @@ typedef struct PublicationTable\n> > typedef enum PublicationObjSpecType\n> > {\n> > PUBLICATIONOBJ_TABLE, /* A table */\n> > + PUBLICATIONOBJ_EXCEPT_TABLE, /* An Except table */\n> > PUBLICATIONOBJ_TABLES_IN_SCHEMA, /* All tables in schema */\n> > PUBLICATIONOBJ_TABLES_IN_CUR_SCHEMA, /* All tables in first element of\n> >\n> > Maybe the comment should be more like:\n> > /* A table to be excluded */\n>\n> Modified\n>\n> > ~~~\n> >\n> > 16. 
src/test/regress/sql/publication.sql\n> >\n> > I did not see any test cases using EXCEPT when the optional TABLE\n> > keyword is omitted.\n>\n> Added a test\n>\n> Thanks for the comments, the v7 patch attached at [1] has the changes\n> for the same.\n> [1] - https://www.postgresql.org/message-id/CALDaNm3EpX3%2BRu%3DSNaYi%3DUW5ZLE6nNhGRHZ7a8-fXPZ_-gLdxQ%40mail.gmail.com\n\nAttached v7 patch which fixes the buildfarm warning for an unused\nwarning in release mode as in [1].\n[1] - https://cirrus-ci.com/task/6220288017825792\n\nRegards,\nVignesh",
"msg_date": "Mon, 23 May 2022 10:43:03 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Monday, May 23, 2022 2:13 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> Attached v7 patch which fixes the buildfarm warning for an unused warning in\r\n> release mode as in [1].\r\nHi, thank you for the patches.\r\n\r\n\r\nI'll share several review comments.\r\n\r\nFor v7-0001.\r\n\r\n(1) I'll suggest some minor rewording.\r\n\r\n+ <para>\r\n+ The <literal>RESET</literal> clause will reset the publication to the\r\n+ default state which includes resetting the publication options, setting\r\n+ <literal>ALL TABLES</literal> flag to <literal>false</literal> and\r\n+ dropping all relations and schemas that are associated with the publication.\r\n\r\nMy suggestion is\r\n\"The RESET clause will reset the publication to the\r\ndefault state. It resets the publication operations,\r\nsets ALL TABLES flag to false and drops all relations\r\nand schemas associated with the publication.\"\r\n\r\n(2) typo and rewording\r\n\r\n+/*\r\n+ * Reset the publication.\r\n+ *\r\n+ * Reset the publication options, setting ALL TABLES flag to false and drop\r\n+ * all relations and schemas that are associated with the publication.\r\n+ */\r\n\r\nThe \"setting\" in this sentence should be \"set\".\r\n\r\nHow about changing like below ?\r\nFROM:\r\n\"Reset the publication options, setting ALL TABLES flag to false and drop\r\nall relations and schemas that are associated with the publication.\"\r\nTO:\r\n\"Reset the publication operations, set ALL TABLES flag to false and drop\r\nall relations and schemas associated with the publication.\"\r\n\r\n(3) AlterPublicationReset\r\n\r\nDo we need to call CacheInvalidateRelcacheAll() or\r\nInvalidatePublicationRels() at the end of\r\nAlterPublicationReset() like AlterPublicationOptions() ?\r\n\r\n\r\nFor v7-0002.\r\n\r\n(4)\r\n\r\n+ if (stmt->for_all_tables)\r\n+ {\r\n+ bool isdefault = CheckPublicationDefValues(tup);\r\n+\r\n+ if (!isdefault)\r\n+ ereport(ERROR,\r\n+ errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\r\n+ 
errmsg(\"adding ALL TABLES requires the publication to have default publication options, no tables/....\r\n+ errhint(\"Use ALTER PUBLICATION ... RESET to reset the publication\"));\r\n\r\n\r\nThe errmsg string has three messages for user and is a bit long\r\n(we have two sentences there connected by 'and').\r\nCan't we make it concise and split it into a couple of lines for code readability ?\r\n\r\nI'll suggest a change below.\r\nFROM:\r\n\"adding ALL TABLES requires the publication to have default publication options, no tables/schemas associated and ALL TABLES flag should not be set\"\r\nTO:\r\n\"adding ALL TABLES requires the publication defined not for ALL TABLES\"\r\n\"to have default publish actions without any associated tables/schemas\"\r\n\r\n(5) typo\r\n\r\n <varlistentry>\r\n+ <term><literal>EXCEPT TABLE</literal></term>\r\n+ <listitem>\r\n+ <para>\r\n+ This clause specifies a list of tables to exclude from the publication.\r\n+ It can only be used with <literal>FOR ALL TABLES</literal>.\r\n+ </para>\r\n+ </listitem>\r\n+ </varlistentry>\r\n+\r\n\r\nKindly change\r\nFROM:\r\nThis clause specifies a list of tables to exclude from the publication.\r\nTO:\r\nThis clause specifies a list of tables to be excluded from the publication.\r\nor\r\nThis clause specifies a list of tables excluded from the publication.\r\n\r\n(6) Minor suggestion for an expression change\r\n\r\n Marks the publication as one that replicates changes for all tables in\r\n- the database, including tables created in the future.\r\n+ the database, including tables created in the future. 
If\r\n+ <literal>EXCEPT TABLE</literal> is specified, then exclude replicating\r\n+ the changes for the specified tables.\r\n\r\n\r\nI'll suggest a minor rewording.\r\nFROM:\r\n...exclude replicating the changes for the specified tables\r\nTO:\r\n...exclude replication changes for the specified tables\r\n\r\n(7)\r\n(7-1)\r\n\r\n+/*\r\n+ * Check if the publication has default values\r\n+ *\r\n+ * Check the following:\r\n+ * a) Publication is not set with \"FOR ALL TABLES\"\r\n+ * b) Publication is having default options\r\n+ * c) Publication is not associated with schemas\r\n+ * d) Publication is not associated with relations\r\n+ */\r\n+static bool\r\n+CheckPublicationDefValues(HeapTuple tup)\r\n\r\n\r\nI think this header comment can be improved.\r\nFROM:\r\nCheck the following:\r\nTO:\r\nReturns true if the publication satisfies all the following conditions:\r\n\r\n(7-2)\r\n\r\nb) should be changed as well\r\nFROM:\r\nPublication is having default options\r\nTO:\r\nPublication has the default publish operations\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Thu, 26 May 2022 13:34:33 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping schema changes in publication"
},
{
"msg_contents": "Here are some minor review comments for v7-0001.\n\n======\n\n1. General\n\nProbably the commit message and all the PG docs and code comments\nshould be changed to refer to \"publication parameters\" instead of\n(currently) \"publication options\". This is because these things are\nreally called \"publication_parameters\" in the PG docs [1].\n\nAll the following review comments are just examples of this suggestion.\n\n~~~\n\n2. Commit message\n\n\"includes resetting the publication options,\" -> \"includes resetting\nthe publication parameters,\"\n\n~~~\n\n3. doc/src/sgml/ref/alter_publication.sgml\n\n+ <para>\n+ The <literal>RESET</literal> clause will reset the publication to the\n+ default state which includes resetting the publication options, setting\n+ <literal>ALL TABLES</literal> flag to <literal>false</literal> and\n+ dropping all relations and schemas that are associated with the publication.\n </para>\n\n\n\"resetting the publication options,\" -> \"resetting the publication parameters,\"\n\n~~~\n\n4. src/backend/commands/publicationcmds.c\n\n@@ -53,6 +53,14 @@\n #include \"utils/syscache.h\"\n #include \"utils/varlena.h\"\n\n+/* CREATE PUBLICATION default values for flags and options */\n+#define PUB_DEFAULT_ACTION_INSERT true\n+#define PUB_DEFAULT_ACTION_UPDATE true\n+#define PUB_DEFAULT_ACTION_DELETE true\n+#define PUB_DEFAULT_ACTION_TRUNCATE true\n+#define PUB_DEFAULT_VIA_ROOT false\n+#define PUB_DEFAULT_ALL_TABLES false\n\n\"flags and options\" -> \"flags and publication parameters\"\n\n~~~\n\n5. 
src/backend/commands/publicationcmds.c\n\n+/*\n+ * Reset the publication.\n+ *\n+ * Reset the publication options, setting ALL TABLES flag to false and drop\n+ * all relations and schemas that are associated with the publication.\n+ */\n+static void\n+AlterPublicationReset(ParseState *pstate, AlterPublicationStmt *stmt,\n+ Relation rel, HeapTuple tup)\n\n\"Reset the publication options,\" -> \"Reset the publication parameters,\"\n\n~~~\n\n6. src/test/regress/sql/publication.sql\n\n+-- Verify that publish options and publish_via_partition_root option are reset\n+\\dRp+ testpub_reset\n+ALTER PUBLICATION testpub_reset RESET;\n+\\dRp+ testpub_reset\n\nSUGGESTION\n-- Verify that 'publish' and 'publish_via_partition_root' publication\nparameters are reset\n\n------\n[1] https://www.postgresql.org/docs/current/sql-createpublication.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 30 May 2022 18:21:36 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "Here are my review comments for patch v7-0002.\n\n======\n\n1. doc/src/sgml/logical-replication.sgml\n\n@@ -1167,8 +1167,9 @@ CONTEXT: processing remote data for replication\norigin \"pg_16395\" during \"INSER\n <para>\n To add tables to a publication, the user must have ownership rights on the\n table. To add all tables in schema to a publication, the user must be a\n- superuser. To create a publication that publishes all tables or\nall tables in\n- schema automatically, the user must be a superuser.\n+ superuser. To add all tables to a publication, the user must be a superuser.\n+ To create a publication that publishes all tables or all tables in schema\n+ automatically, the user must be a superuser.\n </para>\n\nI felt that maybe this whole paragraph should be rearranged. Put the\n\"create publication\" parts before the \"alter publication\" parts;\nRe-word the sentences more similarly. I also felt the ALL TABLES and\nALL TABLES IN SCHEMA etc should be written uppercase/literal since\nthat is what was meant.\n\nSUGGESTION\nTo create a publication using FOR ALL TABLES or FOR ALL TABLES IN\nSCHEMA, the user must be a superuser. To add ALL TABLES or ALL TABLES\nIN SCHEMA to a publication, the user must be a superuser. To add\ntables to a publication, the user must have ownership rights on the\ntable.\n\n~~~\n\n2. doc/src/sgml/ref/alter_publication.sgml\n\n@@ -82,8 +88,8 @@ ALTER PUBLICATION <replaceable\nclass=\"parameter\">name</replaceable> RESET\n\n <para>\n You must own the publication to use <command>ALTER PUBLICATION</command>.\n- Adding a table to a publication additionally requires owning that table.\n- The <literal>ADD ALL TABLES IN SCHEMA</literal>,\n+ Adding a table to or excluding a table from a publication additionally\n+ requires owning that table. 
The <literal>ADD ALL TABLES IN SCHEMA</literal>,\n <literal>SET ALL TABLES IN SCHEMA</literal> to a publication and\n\nIsn't this missing some information that says ADD ALL TABLES requires\nthe invoking user to be a superuser?\n\n~~~\n\n3. doc/src/sgml/ref/alter_publication.sgml - examples\n\n+ <para>\n+ Alter publication <structname>production_publication</structname> to publish\n+ all tables except <structname>users</structname> and\n+ <structname>departments</structname> tables:\n+<programlisting>\n+ALTER PUBLICATION production_publication ADD ALL TABLES EXCEPT users,\ndepartments;\n+</programlisting></para>\n+\n\nI didn't think it needs to say \"tables\" 2x (e.g. remove the last \"tables\")\n\n~~~\n\n4. doc/src/sgml/ref/create_publication.sgml - examples\n\n+ <para>\n+ Create a publication that publishes all changes in all the tables except for\n+ the changes of <structname>users</structname> and\n+ <structname>departments</structname> tables:\n+<programlisting>\n+CREATE PUBLICATION mypublication FOR ALL TABLES EXCEPT users, departments;\n+</programlisting>\n+ </para>\n\nI didn't think it needs to say \"tables\" 2x (e.g. remove the last \"tables\")\n\n~~~\n\n5. 
src/backend/catalog/pg_publication.c\n\n foreach(lc, ancestors)\n {\n Oid ancestor = lfirst_oid(lc);\n- List *apubids = GetRelationPublications(ancestor);\n- List *aschemaPubids = NIL;\n+ List *apubids = GetRelationPublications(ancestor, false);\n+ List *aschemapubids = NIL;\n+ List *aexceptpubids = NIL;\n\n level++;\n\n- if (list_member_oid(apubids, puboid))\n+ /* check if member of table publications */\n+ if (!list_member_oid(apubids, puboid))\n {\n- topmost_relid = ancestor;\n-\n- if (ancestor_level)\n- *ancestor_level = level;\n- }\n- else\n- {\n- aschemaPubids = GetSchemaPublications(get_rel_namespace(ancestor));\n- if (list_member_oid(aschemaPubids, puboid))\n+ /* check if member of schema publications */\n+ aschemapubids = GetSchemaPublications(get_rel_namespace(ancestor));\n+ if (!list_member_oid(aschemapubids, puboid))\n {\n- topmost_relid = ancestor;\n-\n- if (ancestor_level)\n- *ancestor_level = level;\n+ /*\n+ * If the publication is all tables publication and the table\n+ * is not part of exception tables.\n+ */\n+ if (puballtables)\n+ {\n+ aexceptpubids = GetRelationPublications(ancestor, true);\n+ if (list_member_oid(aexceptpubids, puboid))\n+ goto next;\n+ }\n+ else\n+ goto next;\n }\n }\n\n+ topmost_relid = ancestor;\n+\n+ if (ancestor_level)\n+ *ancestor_level = level;\n+\n+next:\n list_free(apubids);\n- list_free(aschemaPubids);\n+ list_free(aschemapubids);\n+ list_free(aexceptpubids);\n }\n\n\nI felt those negative (!) conditions and those goto are making this\nlogic hard to understand. Can’t it be simplified more than this? Even\njust having another bool flag might help make it easier.\n\ne.g. 
Perhaps something a bit like this (but add some comments)\n\nforeach(lc, ancestors)\n{\nOid ancestor = lfirst_oid(lc);\nList *apubids = GetRelationPublications(ancestor);\nList *aschemaPubids = NIL;\nList *aexceptpubids = NIL;\nbool set_top = false;\nlevel++;\n\nset_top = list_member_oid(apubids, puboid);\nif (!set_top)\n{\naschemaPubids = GetSchemaPublications(get_rel_namespace(ancestor));\nset_top = list_member_oid(aschemaPubids, puboid);\n\nif (!set_top && puballtables)\n{\naexceptpubids = GetRelationPublications(ancestor, true);\nset_top = !list_member_oid(aexceptpubids, puboid);\n}\n}\nif (set_top)\n{\ntopmost_relid = ancestor;\n\nif (ancestor_level)\n*ancestor_level = level;\n}\n\nlist_free(apubids);\nlist_free(aschemapubids);\nlist_free(aexceptpubids);\n}\n\n------\n\n6. src/backend/commands/publicationcmds.c - CheckPublicationDefValues\n\n+/*\n+ * Check if the publication has default values\n+ *\n+ * Check the following:\n+ * a) Publication is not set with \"FOR ALL TABLES\"\n+ * b) Publication is having default options\n+ * c) Publication is not associated with schemas\n+ * d) Publication is not associated with relations\n+ */\n+static bool\n+CheckPublicationDefValues(HeapTuple tup)\n\nI think Osumi-san already gave a review [1] about this same comment.\n\nSo I only wanted to add that it should not say \"options\" here:\n\"default options\" -> \"default publication parameter values\"\n\n~~~\n\n7. src/backend/commands/publicationcmds.c - AlterPublicationSetAllTables\n\n+#ifdef USE_ASSERT_CHECKING\n+ Assert(!pubform->puballtables);\n+#endif\n\nWhy is this #ifdef needed? Isn't that logic built into the Assert macro already?\n\n~~~\n\n8. src/backend/commands/publicationcmds.c - AlterPublicationSetAllTables\n\n+ /* set ALL TABLES flag */\n\nUse uppercase 'S' to match other comments.\n\n~~~\n\n9. 
src/backend/commands/publicationcmds.c - AlterPublication\n\n+ if (!isdefault)\n+ ereport(ERROR,\n+ errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n+ errmsg(\"adding ALL TABLES requires the publication to have default\npublication options, no tables/schemas associated and ALL TABLES flag\nshould not be set\"),\n+ errhint(\"Use ALTER PUBLICATION ... RESET to reset the publication\"));\n\nIMO this errmsg text is not very good but I think Osumi-san [1] has\nalso given a review comment about the same errmsg.\n\nSo I only wanted to add that should not say \"options\" here:\n\"default publication options\" -> \"default publication parameter values\"\n\n~~~\n\n10. src/backend/parser/gram.y\n\n/*****************************************************************************\n *\n * ALTER PUBLICATION name SET ( options )\n *\n * ALTER PUBLICATION name ADD pub_obj [, ...]\n *\n * ALTER PUBLICATION name DROP pub_obj [, ...]\n *\n * ALTER PUBLICATION name SET pub_obj [, ...]\n *\n * ALTER PUBLICATION name RESET\n *\n * pub_obj is one of:\n *\n * TABLE table_name [, ...]\n * ALL TABLES IN SCHEMA schema_name [, ...]\n *\n *****************************************************************************/\n\n-\n\n Should the above comment be updated to mention also ADD ALL TABLES\n... EXCEPT [TABLE] ...\n\n~~~\n\n11. src/bin/pg_dump/pg_dump.c - dumpPublication\n\n+ /* Include exception tables if the publication has except tables */\n+ for (cell = exceptinfo.head; cell; cell = cell->next)\n+ {\n+ PublicationRelInfo *pubrinfo = (PublicationRelInfo *) cell->ptr;\n+ PublicationInfo *relpubinfo = pubrinfo->publication;\n+ TableInfo *tbinfo;\n+\n+ if (pubinfo == relpubinfo)\n\nI am unsure if that variable 'relpubinfo' is of much use; it is only\nused one time.\n\n~~~\n\n12. src/bin/pg_dump/t/002_pg_dump.pl\n\nI think there should be more test cases here:\n\nE.g.1. EXCEPT TABLE should also test a list of tables\n\nE.g.2. EXCEPT with optional TABLE keyword ommitted\n\n~~~\n\n13. 
src/bin/psql/describe.c - question about the SQL\n\nSince the new 'except' is a boolean column, wouldn't it be more\nnatural if all the SQL was treating it as one?\n\ne.g. should the SQL be saying \"IS preexpect\", \"IS NOT prexcept\";\ninstead of comparing preexpect to 't' and 'f' character.\n\n~~~\n\n14. .../t/032_rep_changes_except_table.pl\n\n+# Test replication with publications created using FOR ALL TABLES EXCEPT TABLE\n+# option.\n+# Create schemas and tables on publisher\n\n\"option\" -> \"clause\"\n\n------\n[1] https://www.postgresql.org/message-id/TYCPR01MB83730A2F1D6A5303E9C1416AEDD99%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 31 May 2022 16:20:48 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Thu, May 26, 2022 at 7:04 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Monday, May 23, 2022 2:13 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Attached v7 patch which fixes the buildfarm warning for an unused warning in\n> > release mode as in [1].\n> Hi, thank you for the patches.\n>\n>\n> I'll share several review comments.\n>\n> For v7-0001.\n>\n> (1) I'll suggest some minor rewording.\n>\n> + <para>\n> + The <literal>RESET</literal> clause will reset the publication to the\n> + default state which includes resetting the publication options, setting\n> + <literal>ALL TABLES</literal> flag to <literal>false</literal> and\n> + dropping all relations and schemas that are associated with the publication.\n>\n> My suggestion is\n> \"The RESET clause will reset the publication to the\n> default state. It resets the publication operations,\n> sets ALL TABLES flag to false and drops all relations\n> and schemas associated with the publication.\"\n\nI felt the existing looks better. I would prefer to keep it that way.\n\n> (2) typo and rewording\n>\n> +/*\n> + * Reset the publication.\n> + *\n> + * Reset the publication options, setting ALL TABLES flag to false and drop\n> + * all relations and schemas that are associated with the publication.\n> + */\n>\n> The \"setting\" in this sentence should be \"set\".\n>\n> How about changing like below ?\n> FROM:\n> \"Reset the publication options, setting ALL TABLES flag to false and drop\n> all relations and schemas that are associated with the publication.\"\n> TO:\n> \"Reset the publication operations, set ALL TABLES flag to false and drop\n> all relations and schemas associated with the publication.\"\n\n I felt the existing looks better. 
I would prefer to keep it that way.\n\n> (3) AlterPublicationReset\n>\n> Do we need to call CacheInvalidateRelcacheAll() or\n> InvalidatePublicationRels() at the end of\n> AlterPublicationReset() like AlterPublicationOptions() ?\n\nCacheInvalidateRelcacheAll should be called if we change all tables\nfrom true to false, else the cache will not be invalidated. Modified\n\n>\n> For v7-0002.\n>\n> (4)\n>\n> + if (stmt->for_all_tables)\n> + {\n> + bool isdefault = CheckPublicationDefValues(tup);\n> +\n> + if (!isdefault)\n> + ereport(ERROR,\n> + errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n> + errmsg(\"adding ALL TABLES requires the publication to have default publication options, no tables/....\n> + errhint(\"Use ALTER PUBLICATION ... RESET to reset the publication\"));\n>\n>\n> The errmsg string has three messages for user and is a bit long\n> (we have two sentences there connected by 'and').\n> Can't we make it concise and split it into a couple of lines for code readability ?\n>\n> I'll suggest a change below.\n> FROM:\n> \"adding ALL TABLES requires the publication to have default publication options, no tables/schemas associated and ALL TABLES flag should not be set\"\n> TO:\n> \"adding ALL TABLES requires the publication defined not for ALL TABLES\"\n> \"to have default publish actions without any associated tables/schemas\"\n\nAdded errdetail and split it\n\n> (5) typo\n>\n> <varlistentry>\n> + <term><literal>EXCEPT TABLE</literal></term>\n> + <listitem>\n> + <para>\n> + This clause specifies a list of tables to exclude from the publication.\n> + It can only be used with <literal>FOR ALL TABLES</literal>.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n>\n> Kindly change\n> FROM:\n> This clause specifies a list of tables to exclude from the publication.\n> TO:\n> This clause specifies a list of tables to be excluded from the publication.\n> or\n> This clause specifies a list of tables excluded from the publication.\n\nModified\n\n> (6) Minor suggestion 
for an expression change\n>\n> Marks the publication as one that replicates changes for all tables in\n> - the database, including tables created in the future.\n> + the database, including tables created in the future. If\n> + <literal>EXCEPT TABLE</literal> is specified, then exclude replicating\n> + the changes for the specified tables.\n>\n>\n> I'll suggest a minor rewording.\n> FROM:\n> ...exclude replicating the changes for the specified tables\n> TO:\n> ...exclude replication changes for the specified tables\n\nI felt the existing is better.\n\n> (7)\n> (7-1)\n>\n> +/*\n> + * Check if the publication has default values\n> + *\n> + * Check the following:\n> + * a) Publication is not set with \"FOR ALL TABLES\"\n> + * b) Publication is having default options\n> + * c) Publication is not associated with schemas\n> + * d) Publication is not associated with relations\n> + */\n> +static bool\n> +CheckPublicationDefValues(HeapTuple tup)\n>\n>\n> I think this header comment can be improved.\n> FROM:\n> Check the following:\n> TO:\n> Returns true if the publication satisfies all the following conditions:\n\nModified\n\n> (7-2)\n>\n> b) should be changed as well\n> FROM:\n> Publication is having default options\n> TO:\n> Publication has the default publish operations\n\nChanged it to \"Publication is having default publication parameter values\"\n\nThanks for the comments, the attached v8 patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Fri, 3 Jun 2022 15:36:51 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "'On Mon, May 30, 2022 at 1:51 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some minor review comments for v7-0001.\n>\n> ======\n>\n> 1. General\n>\n> Probably the commit message and all the PG docs and code comments\n> should be changed to refer to \"publication parameters\" instead of\n> (currently) \"publication options\". This is because these things are\n> really called \"publication_parameters\" in the PG docs [1].\n>\n> All the following review comments are just examples of this suggestion.\n\nModified\n\n> ~~~\n>\n> 2. Commit message\n>\n> \"includes resetting the publication options,\" -> \"includes resetting\n> the publication parameters,\"\n\nModified\n\n> ~~~\n>\n> 3. doc/src/sgml/ref/alter_publication.sgml\n>\n> + <para>\n> + The <literal>RESET</literal> clause will reset the publication to the\n> + default state which includes resetting the publication options, setting\n> + <literal>ALL TABLES</literal> flag to <literal>false</literal> and\n> + dropping all relations and schemas that are associated with the publication.\n> </para>\n>\n>\n> \"resetting the publication options,\" -> \"resetting the publication parameters,\"\n\nModified\n\n> ~~~\n>\n> 4. src/backend/commands/publicationcmds.c\n>\n> @@ -53,6 +53,14 @@\n> #include \"utils/syscache.h\"\n> #include \"utils/varlena.h\"\n>\n> +/* CREATE PUBLICATION default values for flags and options */\n> +#define PUB_DEFAULT_ACTION_INSERT true\n> +#define PUB_DEFAULT_ACTION_UPDATE true\n> +#define PUB_DEFAULT_ACTION_DELETE true\n> +#define PUB_DEFAULT_ACTION_TRUNCATE true\n> +#define PUB_DEFAULT_VIA_ROOT false\n> +#define PUB_DEFAULT_ALL_TABLES false\n>\n> \"flags and options\" -> \"flags and publication parameters\"\n\nModified\n\n> ~~~\n>\n> 5. 
src/backend/commands/publicationcmds.c\n>\n> +/*\n> + * Reset the publication.\n> + *\n> + * Reset the publication options, setting ALL TABLES flag to false and drop\n> + * all relations and schemas that are associated with the publication.\n> + */\n> +static void\n> +AlterPublicationReset(ParseState *pstate, AlterPublicationStmt *stmt,\n> + Relation rel, HeapTuple tup)\n>\n> \"Reset the publication options,\" -> \"Reset the publication parameters,\"\n\nModified\n\n> ~~~\n>\n> 6. src/test/regress/sql/publication.sql\n>\n> +-- Verify that publish options and publish_via_partition_root option are reset\n> +\\dRp+ testpub_reset\n> +ALTER PUBLICATION testpub_reset RESET;\n> +\\dRp+ testpub_reset\n>\n> SUGGESTION\n> -- Verify that 'publish' and 'publish_via_partition_root' publication\n> parameters are reset\n\nModified, I have split this into two tests as it will help the 0002\npatch to add few tests with the existing steps for 'publish' and\n'publish_via_partition_root' publication parameter.\n\nThanks for the comments. the v8 patch attached at [1] has the fixes\nfor the same.\n[1] - https://www.postgresql.org/message-id/CALDaNm0sAU4s1KTLOEWv%3DrYo5dQK6uFTJn_0FKj3XG1Nv4D-qw%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 3 Jun 2022 15:40:55 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Tue, May 31, 2022 at 11:51 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are my review comments for patch v7-0002.\n>\n> ======\n>\n> 1. doc/src/sgml/logical-replication.sgml\n>\n> @@ -1167,8 +1167,9 @@ CONTEXT: processing remote data for replication\n> origin \"pg_16395\" during \"INSER\n> <para>\n> To add tables to a publication, the user must have ownership rights on the\n> table. To add all tables in schema to a publication, the user must be a\n> - superuser. To create a publication that publishes all tables or\n> all tables in\n> - schema automatically, the user must be a superuser.\n> + superuser. To add all tables to a publication, the user must be a superuser.\n> + To create a publication that publishes all tables or all tables in schema\n> + automatically, the user must be a superuser.\n> </para>\n>\n> I felt that maybe this whole paragraph should be rearranged. Put the\n> \"create publication\" parts before the \"alter publication\" parts;\n> Re-word the sentences more similarly. I also felt the ALL TABLES and\n> ALL TABLES IN SCHEMA etc should be written uppercase/literal since\n> that is what was meant.\n>\n> SUGGESTION\n> To create a publication using FOR ALL TABLES or FOR ALL TABLES IN\n> SCHEMA, the user must be a superuser. To add ALL TABLES or ALL TABLES\n> IN SCHEMA to a publication, the user must be a superuser. To add\n> tables to a publication, the user must have ownership rights on the\n> table.\n\nModified\n\n> ~~~\n>\n> 2. doc/src/sgml/ref/alter_publication.sgml\n>\n> @@ -82,8 +88,8 @@ ALTER PUBLICATION <replaceable\n> class=\"parameter\">name</replaceable> RESET\n>\n> <para>\n> You must own the publication to use <command>ALTER PUBLICATION</command>.\n> - Adding a table to a publication additionally requires owning that table.\n> - The <literal>ADD ALL TABLES IN SCHEMA</literal>,\n> + Adding a table to or excluding a table from a publication additionally\n> + requires owning that table. 
The <literal>ADD ALL TABLES IN SCHEMA</literal>,\n> <literal>SET ALL TABLES IN SCHEMA</literal> to a publication and\n>\n> Isn't this missing some information that says ADD ALL TABLES requires\n> the invoking user to be a superuser?\n\nModified\n\n> ~~~\n>\n> 3. doc/src/sgml/ref/alter_publication.sgml - examples\n>\n> + <para>\n> + Alter publication <structname>production_publication</structname> to publish\n> + all tables except <structname>users</structname> and\n> + <structname>departments</structname> tables:\n> +<programlisting>\n> +ALTER PUBLICATION production_publication ADD ALL TABLES EXCEPT users,\n> departments;\n> +</programlisting></para>\n> +\n>\n> I didn't think it needs to say \"tables\" 2x (e.g. remove the last \"tables\")\n\nModified\n\n> ~~~\n>\n> 4. doc/src/sgml/ref/create_publication.sgml - examples\n>\n> + <para>\n> + Create a publication that publishes all changes in all the tables except for\n> + the changes of <structname>users</structname> and\n> + <structname>departments</structname> tables:\n> +<programlisting>\n> +CREATE PUBLICATION mypublication FOR ALL TABLES EXCEPT users, departments;\n> +</programlisting>\n> + </para>\n>\n> I didn't think it needs to say \"tables\" 2x (e.g. remove the last \"tables\")\n\nModified\n\n> ~~~\n>\n> 5. 
src/backend/catalog/pg_publication.c\n>\n> foreach(lc, ancestors)\n> {\n> Oid ancestor = lfirst_oid(lc);\n> - List *apubids = GetRelationPublications(ancestor);\n> - List *aschemaPubids = NIL;\n> + List *apubids = GetRelationPublications(ancestor, false);\n> + List *aschemapubids = NIL;\n> + List *aexceptpubids = NIL;\n>\n> level++;\n>\n> - if (list_member_oid(apubids, puboid))\n> + /* check if member of table publications */\n> + if (!list_member_oid(apubids, puboid))\n> {\n> - topmost_relid = ancestor;\n> -\n> - if (ancestor_level)\n> - *ancestor_level = level;\n> - }\n> - else\n> - {\n> - aschemaPubids = GetSchemaPublications(get_rel_namespace(ancestor));\n> - if (list_member_oid(aschemaPubids, puboid))\n> + /* check if member of schema publications */\n> + aschemapubids = GetSchemaPublications(get_rel_namespace(ancestor));\n> + if (!list_member_oid(aschemapubids, puboid))\n> {\n> - topmost_relid = ancestor;\n> -\n> - if (ancestor_level)\n> - *ancestor_level = level;\n> + /*\n> + * If the publication is all tables publication and the table\n> + * is not part of exception tables.\n> + */\n> + if (puballtables)\n> + {\n> + aexceptpubids = GetRelationPublications(ancestor, true);\n> + if (list_member_oid(aexceptpubids, puboid))\n> + goto next;\n> + }\n> + else\n> + goto next;\n> }\n> }\n>\n> + topmost_relid = ancestor;\n> +\n> + if (ancestor_level)\n> + *ancestor_level = level;\n> +\n> +next:\n> list_free(apubids);\n> - list_free(aschemaPubids);\n> + list_free(aschemapubids);\n> + list_free(aexceptpubids);\n> }\n>\n>\n> I felt those negative (!) conditions and those goto are making this\n> logic hard to understand. Can’t it be simplified more than this? Even\n> just having another bool flag might help make it easier.\n>\n> e.g. 
Perhaps something a bit like this (but add some comments)\n>\n> foreach(lc, ancestors)\n> {\n> Oid ancestor = lfirst_oid(lc);\n> List *apubids = GetRelationPublications(ancestor);\n> List *aschemaPubids = NIL;\n> List *aexceptpubids = NIL;\n> bool set_top = false;\n> level++;\n>\n> set_top = list_member_oid(apubids, puboid);\n> if (!set_top)\n> {\n> aschemaPubids = GetSchemaPublications(get_rel_namespace(ancestor));\n> set_top = list_member_oid(aschemaPubids, puboid);\n>\n> if (!set_top && puballtables)\n> {\n> aexceptpubids = GetRelationPublications(ancestor, true);\n> set_top = !list_member_oid(aexceptpubids, puboid);\n> }\n> }\n> if (set_top)\n> {\n> topmost_relid = ancestor;\n>\n> if (ancestor_level)\n> *ancestor_level = level;\n> }\n>\n> list_free(apubids);\n> list_free(aschemapubids);\n> list_free(aexceptpubids);\n> }\n\nModified\n\n> ------\n>\n> 6. src/backend/commands/publicationcmds.c - CheckPublicationDefValues\n>\n> +/*\n> + * Check if the publication has default values\n> + *\n> + * Check the following:\n> + * a) Publication is not set with \"FOR ALL TABLES\"\n> + * b) Publication is having default options\n> + * c) Publication is not associated with schemas\n> + * d) Publication is not associated with relations\n> + */\n> +static bool\n> +CheckPublicationDefValues(HeapTuple tup)\n>\n> I think Osumi-san already gave a review [1] about this same comment.\n>\n> So I only wanted to add that it should not say \"options\" here:\n> \"default options\" -> \"default publication parameter values\"\n\nModified\n\n> ~~~\n>\n> 7. src/backend/commands/publicationcmds.c - AlterPublicationSetAllTables\n>\n> +#ifdef USE_ASSERT_CHECKING\n> + Assert(!pubform->puballtables);\n> +#endif\n>\n> Why is this #ifdef needed? Isn't that logic built into the Assert macro already?\n\npubform is used only for assert case. 
If we don't use it within #ifdef\nor PG_USED_FOR_ASSERTS_ONLY, it will throw a unused variable error\nwithout --enable-cassert like:\n\npublicationcmds.c: In function ‘AlterPublicationSetAllTables’:\npublicationcmds.c:1250:29: error: unused variable ‘pubform’\n[-Werror=unused-variable]\n 1250 | Form_pg_publication pubform = (Form_pg_publication)\nGETSTRUCT(tup);\n | ^~~~~~~\ncc1: all warnings being treated as errors\n\n> ~~~\n>\n> 8. src/backend/commands/publicationcmds.c - AlterPublicationSetAllTables\n>\n> + /* set ALL TABLES flag */\n>\n> Use uppercase 'S' to match other comments.\n\nModified\n\n> ~~~\n>\n> 9. src/backend/commands/publicationcmds.c - AlterPublication\n>\n> + if (!isdefault)\n> + ereport(ERROR,\n> + errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n> + errmsg(\"adding ALL TABLES requires the publication to have default\n> publication options, no tables/schemas associated and ALL TABLES flag\n> should not be set\"),\n> + errhint(\"Use ALTER PUBLICATION ... RESET to reset the publication\"));\n>\n> IMO this errmsg text is not very good but I think Osumi-san [1] has\n> also given a review comment about the same errmsg.\n>\n> So I only wanted to add that should not say \"options\" here:\n> \"default publication options\" -> \"default publication parameter values\"\n\nModified\n\n> ~~~\n>\n> 10. src/backend/parser/gram.y\n>\n> /*****************************************************************************\n> *\n> * ALTER PUBLICATION name SET ( options )\n> *\n> * ALTER PUBLICATION name ADD pub_obj [, ...]\n> *\n> * ALTER PUBLICATION name DROP pub_obj [, ...]\n> *\n> * ALTER PUBLICATION name SET pub_obj [, ...]\n> *\n> * ALTER PUBLICATION name RESET\n> *\n> * pub_obj is one of:\n> *\n> * TABLE table_name [, ...]\n> * ALL TABLES IN SCHEMA schema_name [, ...]\n> *\n> *****************************************************************************/\n>\n> -\n>\n> Should the above comment be updated to mention also ADD ALL TABLES\n> ... 
EXCEPT [TABLE] ...\n\nModified\n\n> ~~~\n>\n> 11. src/bin/pg_dump/pg_dump.c - dumpPublication\n>\n> + /* Include exception tables if the publication has except tables */\n> + for (cell = exceptinfo.head; cell; cell = cell->next)\n> + {\n> + PublicationRelInfo *pubrinfo = (PublicationRelInfo *) cell->ptr;\n> + PublicationInfo *relpubinfo = pubrinfo->publication;\n> + TableInfo *tbinfo;\n> +\n> + if (pubinfo == relpubinfo)\n>\n> I am unsure if that variable 'relpubinfo' is of much use; it is only\n> used one time.\n\nRemoved relpubinfo\n\n> ~~~\n>\n> 12. src/bin/pg_dump/t/002_pg_dump.pl\n>\n> I think there should be more test cases here:\n>\n> E.g.1. EXCEPT TABLE should also test a list of tables\n>\n> E.g.2. EXCEPT with optional TABLE keyword ommitted\n\nAdded a test for list of tables and modified one of the test to remove TABLE.\n\n> ~~~\n>\n> 13. src/bin/psql/describe.c - question about the SQL\n>\n> Since the new 'except' is a boolean column, wouldn't it be more\n> natural if all the SQL was treating it as one?\n>\n> e.g. should the SQL be saying \"IS preexpect\", \"IS NOT prexcept\";\n> instead of comparing preexpect to 't' and 'f' character.\n\nmodified\n\n> ~~~\n>\n> 14. .../t/032_rep_changes_except_table.pl\n>\n> +# Test replication with publications created using FOR ALL TABLES EXCEPT TABLE\n> +# option.\n> +# Create schemas and tables on publisher\n>\n> \"option\" -> \"clause\"\n\nModified.\n\nThanks for the comments. The v8 patch attached at [1] has the fixes\nfor the same.\n[1] - https://www.postgresql.org/message-id/CALDaNm0sAU4s1KTLOEWv%3DrYo5dQK6uFTJn_0FKj3XG1Nv4D-qw%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 3 Jun 2022 15:50:08 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Fri, Jun 3, 2022 at 3:37 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Thanks for the comments, the attached v8 patch has the changes for the same.\n>\n\nAFAICS, the summary of this proposal is that we want to support\nexclude of certain objects from publication with two kinds of\nvariants. The first variant is to add support to exclude specific\ntables from ALL TABLES PUBLICATION. Without this feature, users need\nto manually add all tables for a database even when she wants to avoid\nonly a handful of tables from the database say because they contain\nsensitive information or are not required. We have seen that other\ndatabase like MySQL also provides similar feature [1] (See\nREPLICATE_WILD_IGNORE_TABLE). The proposed syntax for this is as\nfollows:\n\nCREATE PUBLICATION pub1 FOR ALL TABLES EXCEPT TABLE t1,t2;\nor\nALTER PUBLICATION pub1 ADD ALL TABLES EXCEPT TABLE t1,t2;\n\nThis will allow us to publish all the tables in the current database\nexcept t1 and t2. Now, I see that pg_dump has a similar option\nprovided by switch --exclude-table but that allows tables matching\npatterns which is not the case here. I am not sure if we need a\nsimilar variant here.\n\nThen users will be allowed to reset the publication by:\nALTER PUBLICATION pub1 RESET;\n\nThis will reset the publication to the default state which includes\nresetting the publication parameters, setting the ALL TABLES flag to\nfalse, and dropping the relations and schemas that are associated with\nthe publication. I don't know if we want to go further with allowing\nto RESET specific parameters and if so which parameters and what would\nits syntax be?\n\nThe second variant is to add support to exclude certain columns of a\ntable while publishing a particular table. Currently, users need to\nlist all required columns' names even if they don't want to hide most\nof the columns in the table (for example Create Publication pub For\nTable t1 (c1, c2)). 
Consider user doesn't want to publish the 'salary'\nor other sensitive information of executives/employees but would like\nto publish all other columns. I feel in such cases it will be a lot of\nwork for the user especially when the table has many columns. I see\nthat Oracle has a similar feature [2]. I think without this it will be\ndifficult for users to use this feature in some cases. The patch for\nthis is not proposed but I would imagine syntax for it to be something\nlike \"Create Publication pub For Table t1 Except (c3)\" and similar\nvariants for Alter Publication.\n\nHave I missed anything?\n\nThoughts on the proposal/syntax would be appreciated?\n\n[1] - https://dev.mysql.com/doc/refman/5.7/en/change-replication-filter.html\n[2] - https://docs.oracle.com/en/cloud/paas/goldengate-cloud/gwuad/selecting-columns.html#GUID-9A851C8B-48F7-43DF-8D98-D086BE069E20\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 8 Jun 2022 16:34:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Wednesday, June 8, 2022 7:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Fri, Jun 3, 2022 at 3:37 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> >\r\n> > Thanks for the comments, the attached v8 patch has the changes for the\r\n> same.\r\n> >\r\n> \r\n> AFAICS, the summary of this proposal is that we want to support\r\n> exclude of certain objects from publication with two kinds of\r\n> variants. The first variant is to add support to exclude specific\r\n> tables from ALL TABLES PUBLICATION. Without this feature, users need\r\n> to manually add all tables for a database even when she wants to avoid\r\n> only a handful of tables from the database say because they contain\r\n> sensitive information or are not required. We have seen that other\r\n> database like MySQL also provides similar feature [1] (See\r\n> REPLICATE_WILD_IGNORE_TABLE). The proposed syntax for this is as\r\n> follows:\r\n> \r\n> CREATE PUBLICATION pub1 FOR ALL TABLES EXCEPT TABLE t1,t2;\r\n> or\r\n> ALTER PUBLICATION pub1 ADD ALL TABLES EXCEPT TABLE t1,t2;\r\n> \r\n> This will allow us to publish all the tables in the current database\r\n> except t1 and t2. Now, I see that pg_dump has a similar option\r\n> provided by switch --exclude-table but that allows tables matching\r\n> patterns which is not the case here. I am not sure if we need a\r\n> similar variant here.\r\n> \r\n> Then users will be allowed to reset the publication by:\r\n> ALTER PUBLICATION pub1 RESET;\r\n> \r\n> This will reset the publication to the default state which includes\r\n> resetting the publication parameters, setting the ALL TABLES flag to\r\n> false, and dropping the relations and schemas that are associated with\r\n> the publication. 
I don't know if we want to go further with allowing\r\n> to RESET specific parameters and if so which parameters and what would\r\n> its syntax be?\r\n> \r\n> The second variant is to add support to exclude certain columns of a\r\n> table while publishing a particular table. Currently, users need to\r\n> list all required columns' names even if they don't want to hide most\r\n> of the columns in the table (for example Create Publication pub For\r\n> Table t1 (c1, c2)). Consider user doesn't want to publish the 'salary'\r\n> or other sensitive information of executives/employees but would like\r\n> to publish all other columns. I feel in such cases it will be a lot of\r\n> work for the user especially when the table has many columns. I see\r\n> that Oracle has a similar feature [2]. I think without this it will be\r\n> difficult for users to use this feature in some cases. The patch for\r\n> this is not proposed but I would imagine syntax for it to be something\r\n> like \"Create Publication pub For Table t1 Except (c3)\" and similar\r\n> variants for Alter Publication.\r\n\r\nI think the feature to exclude certain columns of a table would be useful.\r\n\r\nIn some production scenarios, we usually do not want to replicate\r\nsensitive fields(column) in the table. Although we already can achieve\r\nthis by specify all replicated columns in the list[1], but that seems a\r\nhard work when the table has hundreds of columns.\r\n\r\n[1]\r\nCREATE TABLE test(a int, b int, c int,..., sensitive text);\r\nCREATE PUBLICATION pub FOR TABLE test(a,b,c,...);\r\n\r\nIn addition, it's not easy to maintain the column list like above. Because\r\nwe sometimes need to add new fields or delete fields due to business\r\nneeds. 
Every time we add a column(or delete a column in column list), we\r\nneed to update the column list.\r\n\r\nIf we support Except:\r\nCREATE PUBLICATION pub FOR TABLE test EXCEPT (sensitive);\r\n\r\nWe don't need to update the column list in most cases.\r\n\r\nThanks to \"hametan\" for providing the use case off-list.\r\n\r\nBest regards,\r\nHou zj\r\n\r\n\r\n\r\n",
"msg_date": "Tue, 14 Jun 2022 03:40:42 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping schema changes in publication"
},
{
"msg_contents": "On Tue, Jun 14, 2022 at 9:10 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, June 8, 2022 7:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jun 3, 2022 at 3:37 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > Thanks for the comments, the attached v8 patch has the changes for the\n> > same.\n> > >\n> >\n> > AFAICS, the summary of this proposal is that we want to support\n> > exclude of certain objects from publication with two kinds of\n> > variants. The first variant is to add support to exclude specific\n> > tables from ALL TABLES PUBLICATION. Without this feature, users need\n> > to manually add all tables for a database even when she wants to avoid\n> > only a handful of tables from the database say because they contain\n> > sensitive information or are not required. We have seen that other\n> > database like MySQL also provides similar feature [1] (See\n> > REPLICATE_WILD_IGNORE_TABLE). The proposed syntax for this is as\n> > follows:\n> >\n> > CREATE PUBLICATION pub1 FOR ALL TABLES EXCEPT TABLE t1,t2;\n> > or\n> > ALTER PUBLICATION pub1 ADD ALL TABLES EXCEPT TABLE t1,t2;\n> >\n> > This will allow us to publish all the tables in the current database\n> > except t1 and t2. Now, I see that pg_dump has a similar option\n> > provided by switch --exclude-table but that allows tables matching\n> > patterns which is not the case here. I am not sure if we need a\n> > similar variant here.\n> >\n> > Then users will be allowed to reset the publication by:\n> > ALTER PUBLICATION pub1 RESET;\n> >\n> > This will reset the publication to the default state which includes\n> > resetting the publication parameters, setting the ALL TABLES flag to\n> > false, and dropping the relations and schemas that are associated with\n> > the publication. 
I don't know if we want to go further with allowing\n> > to RESET specific parameters and if so which parameters and what would\n> > its syntax be?\n> >\n> > The second variant is to add support to exclude certain columns of a\n> > table while publishing a particular table. Currently, users need to\n> > list all required columns' names even if they don't want to hide most\n> > of the columns in the table (for example Create Publication pub For\n> > Table t1 (c1, c2)). Consider user doesn't want to publish the 'salary'\n> > or other sensitive information of executives/employees but would like\n> > to publish all other columns. I feel in such cases it will be a lot of\n> > work for the user especially when the table has many columns. I see\n> > that Oracle has a similar feature [2]. I think without this it will be\n> > difficult for users to use this feature in some cases. The patch for\n> > this is not proposed but I would imagine syntax for it to be something\n> > like \"Create Publication pub For Table t1 Except (c3)\" and similar\n> > variants for Alter Publication.\n>\n> I think the feature to exclude certain columns of a table would be useful.\n>\n> In some production scenarios, we usually do not want to replicate\n> sensitive fields(column) in the table. Although we already can achieve\n> this by specify all replicated columns in the list[1], but that seems a\n> hard work when the table has hundreds of columns.\n>\n> [1]\n> CREATE TABLE test(a int, b int, c int,..., sensitive text);\n> CRAETE PUBLICATION pub FOR TABLE test(a,b,c,...);\n>\n> In addition, it's not easy to maintain the column list like above. Because\n> we sometimes need to add new fields or delete fields due to business\n> needs. 
Every time we add a column(or delete a column in column list), we\n> need to update the column list.\n>\n> If we support Except:\n> CREATE PUBLICATION pub FOR TABLE test EXCEPT (sensitive);\n>\n> We don't need to update the column list in most cases.\n>\n\nRight, this is a valid point and I think it makes sense for me to\nsupport such a feature for column list and also to exclude a\nparticular table(s) from the ALL TABLES publication.\n\nPeter E., Euler, and others, do you have any objections to supporting\nthe above-mentioned two cases?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 16 Jun 2022 09:34:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Fri, Jun 3, 2022 at 3:36 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, May 26, 2022 at 7:04 PM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > On Monday, May 23, 2022 2:13 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > Attached v7 patch which fixes the buildfarm warning for an unused warning in\n> > > release mode as in [1].\n> > Hi, thank you for the patches.\n> >\n> >\n> > I'll share several review comments.\n> >\n> > For v7-0001.\n> >\n> > (1) I'll suggest some minor rewording.\n> >\n> > + <para>\n> > + The <literal>RESET</literal> clause will reset the publication to the\n> > + default state which includes resetting the publication options, setting\n> > + <literal>ALL TABLES</literal> flag to <literal>false</literal> and\n> > + dropping all relations and schemas that are associated with the publication.\n> >\n> > My suggestion is\n> > \"The RESET clause will reset the publication to the\n> > default state. It resets the publication operations,\n> > sets ALL TABLES flag to false and drops all relations\n> > and schemas associated with the publication.\"\n>\n> I felt the existing looks better. I would prefer to keep it that way.\n>\n> > (2) typo and rewording\n> >\n> > +/*\n> > + * Reset the publication.\n> > + *\n> > + * Reset the publication options, setting ALL TABLES flag to false and drop\n> > + * all relations and schemas that are associated with the publication.\n> > + */\n> >\n> > The \"setting\" in this sentence should be \"set\".\n> >\n> > How about changing like below ?\n> > FROM:\n> > \"Reset the publication options, setting ALL TABLES flag to false and drop\n> > all relations and schemas that are associated with the publication.\"\n> > TO:\n> > \"Reset the publication operations, set ALL TABLES flag to false and drop\n> > all relations and schemas associated with the publication.\"\n>\n> I felt the existing looks better. 
I would prefer to keep it that way.\n>\n> > (3) AlterPublicationReset\n> >\n> > Do we need to call CacheInvalidateRelcacheAll() or\n> > InvalidatePublicationRels() at the end of\n> > AlterPublicationReset() like AlterPublicationOptions() ?\n>\n> CacheInvalidateRelcacheAll should be called if we change all tables\n> from true to false, else the cache will not be invalidated. Modified\n>\n> >\n> > For v7-0002.\n> >\n> > (4)\n> >\n> > + if (stmt->for_all_tables)\n> > + {\n> > + bool isdefault = CheckPublicationDefValues(tup);\n> > +\n> > + if (!isdefault)\n> > + ereport(ERROR,\n> > + errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n> > + errmsg(\"adding ALL TABLES requires the publication to have default publication options, no tables/....\n> > + errhint(\"Use ALTER PUBLICATION ... RESET to reset the publication\"));\n> >\n> >\n> > The errmsg string has three messages for user and is a bit long\n> > (we have two sentences there connected by 'and').\n> > Can't we make it concise and split it into a couple of lines for code readability ?\n> >\n> > I'll suggest a change below.\n> > FROM:\n> > \"adding ALL TABLES requires the publication to have default publication options, no tables/schemas associated and ALL TABLES flag should not be set\"\n> > TO:\n> > \"adding ALL TABLES requires the publication defined not for ALL TABLES\"\n> > \"to have default publish actions without any associated tables/schemas\"\n>\n> Added errdetail and split it\n>\n> > (5) typo\n> >\n> > <varlistentry>\n> > + <term><literal>EXCEPT TABLE</literal></term>\n> > + <listitem>\n> > + <para>\n> > + This clause specifies a list of tables to exclude from the publication.\n> > + It can only be used with <literal>FOR ALL TABLES</literal>.\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n> > +\n> >\n> > Kindly change\n> > FROM:\n> > This clause specifies a list of tables to exclude from the publication.\n> > TO:\n> > This clause specifies a list of tables to be excluded from the publication.\n> > 
or\n> > This clause specifies a list of tables excluded from the publication.\n>\n> Modified\n>\n> > (6) Minor suggestion for an expression change\n> >\n> > Marks the publication as one that replicates changes for all tables in\n> > - the database, including tables created in the future.\n> > + the database, including tables created in the future. If\n> > + <literal>EXCEPT TABLE</literal> is specified, then exclude replicating\n> > + the changes for the specified tables.\n> >\n> >\n> > I'll suggest a minor rewording.\n> > FROM:\n> > ...exclude replicating the changes for the specified tables\n> > TO:\n> > ...exclude replication changes for the specified tables\n>\n> I felt the existing is better.\n>\n> > (7)\n> > (7-1)\n> >\n> > +/*\n> > + * Check if the publication has default values\n> > + *\n> > + * Check the following:\n> > + * a) Publication is not set with \"FOR ALL TABLES\"\n> > + * b) Publication is having default options\n> > + * c) Publication is not associated with schemas\n> > + * d) Publication is not associated with relations\n> > + */\n> > +static bool\n> > +CheckPublicationDefValues(HeapTuple tup)\n> >\n> >\n> > I think this header comment can be improved.\n> > FROM:\n> > Check the following:\n> > TO:\n> > Returns true if the publication satisfies all the following conditions:\n>\n> Modified\n>\n> > (7-2)\n> >\n> > b) should be changed as well\n> > FROM:\n> > Publication is having default options\n> > TO:\n> > Publication has the default publish operations\n>\n> Changed it to \"Publication is having default publication parameter values\"\n>\n> Thanks for the comments, the attached v8 patch has the changes for the same.\n\nThe patch needed to be rebased on top of HEAD because of commit\n\"0c20dd33db1607d6a85ffce24238c1e55e384b49\", attached a rebased v8\nversion for the changes of the same.\n\nRegards,\nVignesh",
"msg_date": "Mon, 8 Aug 2022 12:46:39 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Mon, Aug 8, 2022 at 12:46 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, Jun 3, 2022 at 3:36 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Thu, May 26, 2022 at 7:04 PM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > On Monday, May 23, 2022 2:13 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > Attached v7 patch which fixes the buildfarm warning for an unused warning in\n> > > > release mode as in [1].\n> > > Hi, thank you for the patches.\n> > >\n> > >\n> > > I'll share several review comments.\n> > >\n> > > For v7-0001.\n> > >\n> > > (1) I'll suggest some minor rewording.\n> > >\n> > > + <para>\n> > > + The <literal>RESET</literal> clause will reset the publication to the\n> > > + default state which includes resetting the publication options, setting\n> > > + <literal>ALL TABLES</literal> flag to <literal>false</literal> and\n> > > + dropping all relations and schemas that are associated with the publication.\n> > >\n> > > My suggestion is\n> > > \"The RESET clause will reset the publication to the\n> > > default state. It resets the publication operations,\n> > > sets ALL TABLES flag to false and drops all relations\n> > > and schemas associated with the publication.\"\n> >\n> > I felt the existing looks better. 
I would prefer to keep it that way.\n> >\n> > > (2) typo and rewording\n> > >\n> > > +/*\n> > > + * Reset the publication.\n> > > + *\n> > > + * Reset the publication options, setting ALL TABLES flag to false and drop\n> > > + * all relations and schemas that are associated with the publication.\n> > > + */\n> > >\n> > > The \"setting\" in this sentence should be \"set\".\n> > >\n> > > How about changing like below ?\n> > > FROM:\n> > > \"Reset the publication options, setting ALL TABLES flag to false and drop\n> > > all relations and schemas that are associated with the publication.\"\n> > > TO:\n> > > \"Reset the publication operations, set ALL TABLES flag to false and drop\n> > > all relations and schemas associated with the publication.\"\n> >\n> > I felt the existing looks better. I would prefer to keep it that way.\n> >\n> > > (3) AlterPublicationReset\n> > >\n> > > Do we need to call CacheInvalidateRelcacheAll() or\n> > > InvalidatePublicationRels() at the end of\n> > > AlterPublicationReset() like AlterPublicationOptions() ?\n> >\n> > CacheInvalidateRelcacheAll should be called if we change all tables\n> > from true to false, else the cache will not be invalidated. Modified\n> >\n> > >\n> > > For v7-0002.\n> > >\n> > > (4)\n> > >\n> > > + if (stmt->for_all_tables)\n> > > + {\n> > > + bool isdefault = CheckPublicationDefValues(tup);\n> > > +\n> > > + if (!isdefault)\n> > > + ereport(ERROR,\n> > > + errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n> > > + errmsg(\"adding ALL TABLES requires the publication to have default publication options, no tables/....\n> > > + errhint(\"Use ALTER PUBLICATION ... 
RESET to reset the publication\"));\n> > >\n> > >\n> > > The errmsg string has three messages for user and is a bit long\n> > > (we have two sentences there connected by 'and').\n> > > Can't we make it concise and split it into a couple of lines for code readability ?\n> > >\n> > > I'll suggest a change below.\n> > > FROM:\n> > > \"adding ALL TABLES requires the publication to have default publication options, no tables/schemas associated and ALL TABLES flag should not be set\"\n> > > TO:\n> > > \"adding ALL TABLES requires the publication defined not for ALL TABLES\"\n> > > \"to have default publish actions without any associated tables/schemas\"\n> >\n> > Added errdetail and split it\n> >\n> > > (5) typo\n> > >\n> > > <varlistentry>\n> > > + <term><literal>EXCEPT TABLE</literal></term>\n> > > + <listitem>\n> > > + <para>\n> > > + This clause specifies a list of tables to exclude from the publication.\n> > > + It can only be used with <literal>FOR ALL TABLES</literal>.\n> > > + </para>\n> > > + </listitem>\n> > > + </varlistentry>\n> > > +\n> > >\n> > > Kindly change\n> > > FROM:\n> > > This clause specifies a list of tables to exclude from the publication.\n> > > TO:\n> > > This clause specifies a list of tables to be excluded from the publication.\n> > > or\n> > > This clause specifies a list of tables excluded from the publication.\n> >\n> > Modified\n> >\n> > > (6) Minor suggestion for an expression change\n> > >\n> > > Marks the publication as one that replicates changes for all tables in\n> > > - the database, including tables created in the future.\n> > > + the database, including tables created in the future. 
If\n> > > + <literal>EXCEPT TABLE</literal> is specified, then exclude replicating\n> > > + the changes for the specified tables.\n> > >\n> > >\n> > > I'll suggest a minor rewording.\n> > > FROM:\n> > > ...exclude replicating the changes for the specified tables\n> > > TO:\n> > > ...exclude replication changes for the specified tables\n> >\n> > I felt the existing is better.\n> >\n> > > (7)\n> > > (7-1)\n> > >\n> > > +/*\n> > > + * Check if the publication has default values\n> > > + *\n> > > + * Check the following:\n> > > + * a) Publication is not set with \"FOR ALL TABLES\"\n> > > + * b) Publication is having default options\n> > > + * c) Publication is not associated with schemas\n> > > + * d) Publication is not associated with relations\n> > > + */\n> > > +static bool\n> > > +CheckPublicationDefValues(HeapTuple tup)\n> > >\n> > >\n> > > I think this header comment can be improved.\n> > > FROM:\n> > > Check the following:\n> > > TO:\n> > > Returns true if the publication satisfies all the following conditions:\n> >\n> > Modified\n> >\n> > > (7-2)\n> > >\n> > > b) should be changed as well\n> > > FROM:\n> > > Publication is having default options\n> > > TO:\n> > > Publication has the default publish operations\n> >\n> > Changed it to \"Publication is having default publication parameter values\"\n> >\n> > Thanks for the comments, the attached v8 patch has the changes for the same.\n>\n> The patch needed to be rebased on top of HEAD because of commit\n> \"0c20dd33db1607d6a85ffce24238c1e55e384b49\", attached a rebased v8\n> version for the changes of the same.\n\nI had missed attaching one of the changes that was present locally.\nThe updated patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Mon, 8 Aug 2022 14:53:28 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "I spent some time on understanding the proposal and the patch. Here\nare a few comments wrt the test cases.\n\n> +ALTER PUBLICATION testpub_reset ADD TABLE pub_sch1.tbl1;\n> +\n> +-- Verify that tables associated with the publication are dropped after RESET\n> +\\dRp+ testpub_reset\n> +ALTER PUBLICATION testpub_reset RESET;\n> +\\dRp+ testpub_reset\n>\n> +ALTER PUBLICATION testpub_reset ADD ALL TABLES IN SCHEMA public;\n> +\n> +-- Verify that schemas associated with the publication are dropped after RESET\n> +\\dRp+ testpub_reset\n> +ALTER PUBLICATION testpub_reset RESET;\n> +\\dRp+ testpub_reset\n\nThe results for the above two cases are the same before and after the\nreset. Is there any way to verify that?\n---\n\n> +-- Can't add EXCEPT TABLE to 'FOR ALL TABLES' publication\n> +ALTER PUBLICATION testpub_reset ADD ALL TABLES EXCEPT TABLE pub_sch1.tbl1;\n> +\n>\n> +-- Can't add EXCEPT TABLE to 'FOR TABLE' publication\n> +ALTER PUBLICATION testpub_reset ADD ALL TABLES EXCEPT TABLE pub_sch1.tbl1;\n> +\n>\n> +-- Can't add EXCEPT TABLE to 'FOR ALL TABLES IN SCHEMA' publication\n> +ALTER PUBLICATION testpub_reset ADD ALL TABLES EXCEPT TABLE pub_sch1.tbl1;\n> +\n\nI did not understand the objective of these tests. 
I think we need to\nimprove the comments.\n\nThanks & Regards,\n\n\n\nOn Mon, Aug 8, 2022 at 2:53 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, Aug 8, 2022 at 12:46 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Fri, Jun 3, 2022 at 3:36 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Thu, May 26, 2022 at 7:04 PM osumi.takamichi@fujitsu.com\n> > > <osumi.takamichi@fujitsu.com> wrote:\n> > > >\n> > > > On Monday, May 23, 2022 2:13 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > > Attached v7 patch which fixes the buildfarm warning for an unused warning in\n> > > > > release mode as in [1].\n> > > > Hi, thank you for the patches.\n> > > >\n> > > >\n> > > > I'll share several review comments.\n> > > >\n> > > > For v7-0001.\n> > > >\n> > > > (1) I'll suggest some minor rewording.\n> > > >\n> > > > + <para>\n> > > > + The <literal>RESET</literal> clause will reset the publication to the\n> > > > + default state which includes resetting the publication options, setting\n> > > > + <literal>ALL TABLES</literal> flag to <literal>false</literal> and\n> > > > + dropping all relations and schemas that are associated with the publication.\n> > > >\n> > > > My suggestion is\n> > > > \"The RESET clause will reset the publication to the\n> > > > default state. It resets the publication operations,\n> > > > sets ALL TABLES flag to false and drops all relations\n> > > > and schemas associated with the publication.\"\n> > >\n> > > I felt the existing looks better. 
I would prefer to keep it that way.\n> > >\n> > > > (2) typo and rewording\n> > > >\n> > > > +/*\n> > > > + * Reset the publication.\n> > > > + *\n> > > > + * Reset the publication options, setting ALL TABLES flag to false and drop\n> > > > + * all relations and schemas that are associated with the publication.\n> > > > + */\n> > > >\n> > > > The \"setting\" in this sentence should be \"set\".\n> > > >\n> > > > How about changing like below ?\n> > > > FROM:\n> > > > \"Reset the publication options, setting ALL TABLES flag to false and drop\n> > > > all relations and schemas that are associated with the publication.\"\n> > > > TO:\n> > > > \"Reset the publication operations, set ALL TABLES flag to false and drop\n> > > > all relations and schemas associated with the publication.\"\n> > >\n> > > I felt the existing looks better. I would prefer to keep it that way.\n> > >\n> > > > (3) AlterPublicationReset\n> > > >\n> > > > Do we need to call CacheInvalidateRelcacheAll() or\n> > > > InvalidatePublicationRels() at the end of\n> > > > AlterPublicationReset() like AlterPublicationOptions() ?\n> > >\n> > > CacheInvalidateRelcacheAll should be called if we change all tables\n> > > from true to false, else the cache will not be invalidated. Modified\n> > >\n> > > >\n> > > > For v7-0002.\n> > > >\n> > > > (4)\n> > > >\n> > > > + if (stmt->for_all_tables)\n> > > > + {\n> > > > + bool isdefault = CheckPublicationDefValues(tup);\n> > > > +\n> > > > + if (!isdefault)\n> > > > + ereport(ERROR,\n> > > > + errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n> > > > + errmsg(\"adding ALL TABLES requires the publication to have default publication options, no tables/....\n> > > > + errhint(\"Use ALTER PUBLICATION ... 
RESET to reset the publication\"));\n> > > >\n> > > >\n> > > > The errmsg string has three messages for user and is a bit long\n> > > > (we have two sentences there connected by 'and').\n> > > > Can't we make it concise and split it into a couple of lines for code readability ?\n> > > >\n> > > > I'll suggest a change below.\n> > > > FROM:\n> > > > \"adding ALL TABLES requires the publication to have default publication options, no tables/schemas associated and ALL TABLES flag should not be set\"\n> > > > TO:\n> > > > \"adding ALL TABLES requires the publication defined not for ALL TABLES\"\n> > > > \"to have default publish actions without any associated tables/schemas\"\n> > >\n> > > Added errdetail and split it\n> > >\n> > > > (5) typo\n> > > >\n> > > > <varlistentry>\n> > > > + <term><literal>EXCEPT TABLE</literal></term>\n> > > > + <listitem>\n> > > > + <para>\n> > > > + This clause specifies a list of tables to exclude from the publication.\n> > > > + It can only be used with <literal>FOR ALL TABLES</literal>.\n> > > > + </para>\n> > > > + </listitem>\n> > > > + </varlistentry>\n> > > > +\n> > > >\n> > > > Kindly change\n> > > > FROM:\n> > > > This clause specifies a list of tables to exclude from the publication.\n> > > > TO:\n> > > > This clause specifies a list of tables to be excluded from the publication.\n> > > > or\n> > > > This clause specifies a list of tables excluded from the publication.\n> > >\n> > > Modified\n> > >\n> > > > (6) Minor suggestion for an expression change\n> > > >\n> > > > Marks the publication as one that replicates changes for all tables in\n> > > > - the database, including tables created in the future.\n> > > > + the database, including tables created in the future. 
If\n> > > > + <literal>EXCEPT TABLE</literal> is specified, then exclude replicating\n> > > > + the changes for the specified tables.\n> > > >\n> > > >\n> > > > I'll suggest a minor rewording.\n> > > > FROM:\n> > > > ...exclude replicating the changes for the specified tables\n> > > > TO:\n> > > > ...exclude replication changes for the specified tables\n> > >\n> > > I felt the existing is better.\n> > >\n> > > > (7)\n> > > > (7-1)\n> > > >\n> > > > +/*\n> > > > + * Check if the publication has default values\n> > > > + *\n> > > > + * Check the following:\n> > > > + * a) Publication is not set with \"FOR ALL TABLES\"\n> > > > + * b) Publication is having default options\n> > > > + * c) Publication is not associated with schemas\n> > > > + * d) Publication is not associated with relations\n> > > > + */\n> > > > +static bool\n> > > > +CheckPublicationDefValues(HeapTuple tup)\n> > > >\n> > > >\n> > > > I think this header comment can be improved.\n> > > > FROM:\n> > > > Check the following:\n> > > > TO:\n> > > > Returns true if the publication satisfies all the following conditions:\n> > >\n> > > Modified\n> > >\n> > > > (7-2)\n> > > >\n> > > > b) should be changed as well\n> > > > FROM:\n> > > > Publication is having default options\n> > > > TO:\n> > > > Publication has the default publish operations\n> > >\n> > > Changed it to \"Publication is having default publication parameter values\"\n> > >\n> > > Thanks for the comments, the attached v8 patch has the changes for the same.\n> >\n> > The patch needed to be rebased on top of HEAD because of commit\n> > \"0c20dd33db1607d6a85ffce24238c1e55e384b49\", attached a rebased v8\n> > version for the changes of the same.\n>\n> I had missed attaching one of the changes that was present locally.\n> The updated patch has the changes for the same.\n>\n> Regards,\n> Vignesh\n\n\n",
"msg_date": "Thu, 18 Aug 2022 12:32:31 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Thu, Aug 18, 2022 at 12:33 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> I spent some time on understanding the proposal and the patch. Here\n> are a few comments wrt the test cases.\n>\n> > +ALTER PUBLICATION testpub_reset ADD TABLE pub_sch1.tbl1;\n> > +\n> > +-- Verify that tables associated with the publication are dropped after RESET\n> > +\\dRp+ testpub_reset\n> > +ALTER PUBLICATION testpub_reset RESET;\n> > +\\dRp+ testpub_reset\n> >\n> > +ALTER PUBLICATION testpub_reset ADD ALL TABLES IN SCHEMA public;\n> > +\n> > +-- Verify that schemas associated with the publication are dropped after RESET\n> > +\\dRp+ testpub_reset\n> > +ALTER PUBLICATION testpub_reset RESET;\n> > +\\dRp+ testpub_reset\n>\n> The results for the above two cases are the same before and after the\n> reset. Is there any way to verify that?\n\nIf you see the expected, first \\dRp+ command includes:\n+Tables:\n+ \"pub_sch1.tbl1\"\nThe second \\dRp+ does not include the Tables.\nWe are trying to verify that after reset, the tables will be removed\nfrom the publication.\nThe second test is similar to the first, the only difference here is\nthat we test schema instead of tables. i.e we verify that the schemas\nwill be removed from the publication.\n\n> ---\n>\n> > +-- Can't add EXCEPT TABLE to 'FOR ALL TABLES' publication\n> > +ALTER PUBLICATION testpub_reset ADD ALL TABLES EXCEPT TABLE pub_sch1.tbl1;\n> > +\n> >\n> > +-- Can't add EXCEPT TABLE to 'FOR TABLE' publication\n> > +ALTER PUBLICATION testpub_reset ADD ALL TABLES EXCEPT TABLE pub_sch1.tbl1;\n> > +\n> >\n> > +-- Can't add EXCEPT TABLE to 'FOR ALL TABLES IN SCHEMA' publication\n> > +ALTER PUBLICATION testpub_reset ADD ALL TABLES EXCEPT TABLE pub_sch1.tbl1;\n> > +\n>\n> I did not understand the objective of these tests. I think we need to\n> improve the comments.\n\nThere are different publications like \"ALL TABLES\", \"TABLE\", \"ALL\nTABLES IN SCHEMA\" publications. 
Here we are trying to verify that\nexcept tables cannot be added to \"ALL TABLES\", \"TABLE\", \"ALL TABLES IN\nSCHEMA\" publications.\nIf you see the expected file, you will see the following error:\n+-- Can't add EXCEPT TABLE to 'FOR ALL TABLES' publication\n+ALTER PUBLICATION testpub_reset ADD ALL TABLES EXCEPT TABLE pub_sch1.tbl1;\n+ERROR: adding ALL TABLES requires the publication to have default\npublication parameter values\n+DETAIL: ALL TABLES flag should not be set and no tables/schemas\nshould be associated.\n+HINT: Use ALTER PUBLICATION ... RESET to reset the publication\n\nI felt the existing comment is ok. Let me know if you still feel any\nchange is required.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 18 Aug 2022 19:56:54 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Mon, Aug 8, 2022 at 2:53 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, Aug 8, 2022 at 12:46 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Fri, Jun 3, 2022 at 3:36 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Thu, May 26, 2022 at 7:04 PM osumi.takamichi@fujitsu.com\n> > > <osumi.takamichi@fujitsu.com> wrote:\n> > > >\n> > > > On Monday, May 23, 2022 2:13 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > > Attached v7 patch which fixes the buildfarm warning for an unused warning in\n> > > > > release mode as in [1].\n> > > > Hi, thank you for the patches.\n> > > >\n> > > >\n> > > > I'll share several review comments.\n> > > >\n> > > > For v7-0001.\n> > > >\n> > > > (1) I'll suggest some minor rewording.\n> > > >\n> > > > + <para>\n> > > > + The <literal>RESET</literal> clause will reset the publication to the\n> > > > + default state which includes resetting the publication options, setting\n> > > > + <literal>ALL TABLES</literal> flag to <literal>false</literal> and\n> > > > + dropping all relations and schemas that are associated with the publication.\n> > > >\n> > > > My suggestion is\n> > > > \"The RESET clause will reset the publication to the\n> > > > default state. It resets the publication operations,\n> > > > sets ALL TABLES flag to false and drops all relations\n> > > > and schemas associated with the publication.\"\n> > >\n> > > I felt the existing looks better. 
I would prefer to keep it that way.\n> > >\n> > > > (2) typo and rewording\n> > > >\n> > > > +/*\n> > > > + * Reset the publication.\n> > > > + *\n> > > > + * Reset the publication options, setting ALL TABLES flag to false and drop\n> > > > + * all relations and schemas that are associated with the publication.\n> > > > + */\n> > > >\n> > > > The \"setting\" in this sentence should be \"set\".\n> > > >\n> > > > How about changing like below ?\n> > > > FROM:\n> > > > \"Reset the publication options, setting ALL TABLES flag to false and drop\n> > > > all relations and schemas that are associated with the publication.\"\n> > > > TO:\n> > > > \"Reset the publication operations, set ALL TABLES flag to false and drop\n> > > > all relations and schemas associated with the publication.\"\n> > >\n> > > I felt the existing looks better. I would prefer to keep it that way.\n> > >\n> > > > (3) AlterPublicationReset\n> > > >\n> > > > Do we need to call CacheInvalidateRelcacheAll() or\n> > > > InvalidatePublicationRels() at the end of\n> > > > AlterPublicationReset() like AlterPublicationOptions() ?\n> > >\n> > > CacheInvalidateRelcacheAll should be called if we change all tables\n> > > from true to false, else the cache will not be invalidated. Modified\n> > >\n> > > >\n> > > > For v7-0002.\n> > > >\n> > > > (4)\n> > > >\n> > > > + if (stmt->for_all_tables)\n> > > > + {\n> > > > + bool isdefault = CheckPublicationDefValues(tup);\n> > > > +\n> > > > + if (!isdefault)\n> > > > + ereport(ERROR,\n> > > > + errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n> > > > + errmsg(\"adding ALL TABLES requires the publication to have default publication options, no tables/....\n> > > > + errhint(\"Use ALTER PUBLICATION ... 
RESET to reset the publication\"));\n> > > >\n> > > >\n> > > > The errmsg string has three messages for user and is a bit long\n> > > > (we have two sentences there connected by 'and').\n> > > > Can't we make it concise and split it into a couple of lines for code readability ?\n> > > >\n> > > > I'll suggest a change below.\n> > > > FROM:\n> > > > \"adding ALL TABLES requires the publication to have default publication options, no tables/schemas associated and ALL TABLES flag should not be set\"\n> > > > TO:\n> > > > \"adding ALL TABLES requires the publication defined not for ALL TABLES\"\n> > > > \"to have default publish actions without any associated tables/schemas\"\n> > >\n> > > Added errdetail and split it\n> > >\n> > > > (5) typo\n> > > >\n> > > > <varlistentry>\n> > > > + <term><literal>EXCEPT TABLE</literal></term>\n> > > > + <listitem>\n> > > > + <para>\n> > > > + This clause specifies a list of tables to exclude from the publication.\n> > > > + It can only be used with <literal>FOR ALL TABLES</literal>.\n> > > > + </para>\n> > > > + </listitem>\n> > > > + </varlistentry>\n> > > > +\n> > > >\n> > > > Kindly change\n> > > > FROM:\n> > > > This clause specifies a list of tables to exclude from the publication.\n> > > > TO:\n> > > > This clause specifies a list of tables to be excluded from the publication.\n> > > > or\n> > > > This clause specifies a list of tables excluded from the publication.\n> > >\n> > > Modified\n> > >\n> > > > (6) Minor suggestion for an expression change\n> > > >\n> > > > Marks the publication as one that replicates changes for all tables in\n> > > > - the database, including tables created in the future.\n> > > > + the database, including tables created in the future. 
If\n> > > > + <literal>EXCEPT TABLE</literal> is specified, then exclude replicating\n> > > > + the changes for the specified tables.\n> > > >\n> > > >\n> > > > I'll suggest a minor rewording.\n> > > > FROM:\n> > > > ...exclude replicating the changes for the specified tables\n> > > > TO:\n> > > > ...exclude replication changes for the specified tables\n> > >\n> > > I felt the existing is better.\n> > >\n> > > > (7)\n> > > > (7-1)\n> > > >\n> > > > +/*\n> > > > + * Check if the publication has default values\n> > > > + *\n> > > > + * Check the following:\n> > > > + * a) Publication is not set with \"FOR ALL TABLES\"\n> > > > + * b) Publication is having default options\n> > > > + * c) Publication is not associated with schemas\n> > > > + * d) Publication is not associated with relations\n> > > > + */\n> > > > +static bool\n> > > > +CheckPublicationDefValues(HeapTuple tup)\n> > > >\n> > > >\n> > > > I think this header comment can be improved.\n> > > > FROM:\n> > > > Check the following:\n> > > > TO:\n> > > > Returns true if the publication satisfies all the following conditions:\n> > >\n> > > Modified\n> > >\n> > > > (7-2)\n> > > >\n> > > > b) should be changed as well\n> > > > FROM:\n> > > > Publication is having default options\n> > > > TO:\n> > > > Publication has the default publish operations\n> > >\n> > > Changed it to \"Publication is having default publication parameter values\"\n> > >\n> > > Thanks for the comments, the attached v8 patch has the changes for the same.\n> >\n> > The patch needed to be rebased on top of HEAD because of commit\n> > \"0c20dd33db1607d6a85ffce24238c1e55e384b49\", attached a rebased v8\n> > version for the changes of the same.\n>\n> I had missed attaching one of the changes that was present locally.\n> The updated patch has the changes for the same.\n\nThe patch needed to be rebased on top of HEAD because of a recent\ncommit. The updated v8 patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Thu, 18 Aug 2022 23:11:30 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "2022年8月19日(金) 2:41 vignesh C <vignesh21@gmail.com>:\n>\n> On Mon, Aug 8, 2022 at 2:53 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Mon, Aug 8, 2022 at 12:46 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Fri, Jun 3, 2022 at 3:36 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > On Thu, May 26, 2022 at 7:04 PM osumi.takamichi@fujitsu.com\n> > > > <osumi.takamichi@fujitsu.com> wrote:\n> > > > >\n> > > > > On Monday, May 23, 2022 2:13 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > > > Attached v7 patch which fixes the buildfarm warning for an unused warning in\n> > > > > > release mode as in [1].\n> > > > > Hi, thank you for the patches.\n> > > > >\n> > > > >\n> > > > > I'll share several review comments.\n> > > > >\n> > > > > For v7-0001.\n> > > > >\n> > > > > (1) I'll suggest some minor rewording.\n> > > > >\n> > > > > + <para>\n> > > > > + The <literal>RESET</literal> clause will reset the publication to the\n> > > > > + default state which includes resetting the publication options, setting\n> > > > > + <literal>ALL TABLES</literal> flag to <literal>false</literal> and\n> > > > > + dropping all relations and schemas that are associated with the publication.\n> > > > >\n> > > > > My suggestion is\n> > > > > \"The RESET clause will reset the publication to the\n> > > > > default state. It resets the publication operations,\n> > > > > sets ALL TABLES flag to false and drops all relations\n> > > > > and schemas associated with the publication.\"\n> > > >\n> > > > I felt the existing looks better. 
I would prefer to keep it that way.\n> > > >\n> > > > > (2) typo and rewording\n> > > > >\n> > > > > +/*\n> > > > > + * Reset the publication.\n> > > > > + *\n> > > > > + * Reset the publication options, setting ALL TABLES flag to false and drop\n> > > > > + * all relations and schemas that are associated with the publication.\n> > > > > + */\n> > > > >\n> > > > > The \"setting\" in this sentence should be \"set\".\n> > > > >\n> > > > > How about changing like below ?\n> > > > > FROM:\n> > > > > \"Reset the publication options, setting ALL TABLES flag to false and drop\n> > > > > all relations and schemas that are associated with the publication.\"\n> > > > > TO:\n> > > > > \"Reset the publication operations, set ALL TABLES flag to false and drop\n> > > > > all relations and schemas associated with the publication.\"\n> > > >\n> > > > I felt the existing looks better. I would prefer to keep it that way.\n> > > >\n> > > > > (3) AlterPublicationReset\n> > > > >\n> > > > > Do we need to call CacheInvalidateRelcacheAll() or\n> > > > > InvalidatePublicationRels() at the end of\n> > > > > AlterPublicationReset() like AlterPublicationOptions() ?\n> > > >\n> > > > CacheInvalidateRelcacheAll should be called if we change all tables\n> > > > from true to false, else the cache will not be invalidated. Modified\n> > > >\n> > > > >\n> > > > > For v7-0002.\n> > > > >\n> > > > > (4)\n> > > > >\n> > > > > + if (stmt->for_all_tables)\n> > > > > + {\n> > > > > + bool isdefault = CheckPublicationDefValues(tup);\n> > > > > +\n> > > > > + if (!isdefault)\n> > > > > + ereport(ERROR,\n> > > > > + errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n> > > > > + errmsg(\"adding ALL TABLES requires the publication to have default publication options, no tables/....\n> > > > > + errhint(\"Use ALTER PUBLICATION ... 
RESET to reset the publication\"));\n> > > > >\n> > > > >\n> > > > > The errmsg string has three messages for user and is a bit long\n> > > > > (we have two sentences there connected by 'and').\n> > > > > Can't we make it concise and split it into a couple of lines for code readability ?\n> > > > >\n> > > > > I'll suggest a change below.\n> > > > > FROM:\n> > > > > \"adding ALL TABLES requires the publication to have default publication options, no tables/schemas associated and ALL TABLES flag should not be set\"\n> > > > > TO:\n> > > > > \"adding ALL TABLES requires the publication defined not for ALL TABLES\"\n> > > > > \"to have default publish actions without any associated tables/schemas\"\n> > > >\n> > > > Added errdetail and split it\n> > > >\n> > > > > (5) typo\n> > > > >\n> > > > > <varlistentry>\n> > > > > + <term><literal>EXCEPT TABLE</literal></term>\n> > > > > + <listitem>\n> > > > > + <para>\n> > > > > + This clause specifies a list of tables to exclude from the publication.\n> > > > > + It can only be used with <literal>FOR ALL TABLES</literal>.\n> > > > > + </para>\n> > > > > + </listitem>\n> > > > > + </varlistentry>\n> > > > > +\n> > > > >\n> > > > > Kindly change\n> > > > > FROM:\n> > > > > This clause specifies a list of tables to exclude from the publication.\n> > > > > TO:\n> > > > > This clause specifies a list of tables to be excluded from the publication.\n> > > > > or\n> > > > > This clause specifies a list of tables excluded from the publication.\n> > > >\n> > > > Modified\n> > > >\n> > > > > (6) Minor suggestion for an expression change\n> > > > >\n> > > > > Marks the publication as one that replicates changes for all tables in\n> > > > > - the database, including tables created in the future.\n> > > > > + the database, including tables created in the future. 
If\n> > > > > + <literal>EXCEPT TABLE</literal> is specified, then exclude replicating\n> > > > > + the changes for the specified tables.\n> > > > >\n> > > > >\n> > > > > I'll suggest a minor rewording.\n> > > > > FROM:\n> > > > > ...exclude replicating the changes for the specified tables\n> > > > > TO:\n> > > > > ...exclude replication changes for the specified tables\n> > > >\n> > > > I felt the existing is better.\n> > > >\n> > > > > (7)\n> > > > > (7-1)\n> > > > >\n> > > > > +/*\n> > > > > + * Check if the publication has default values\n> > > > > + *\n> > > > > + * Check the following:\n> > > > > + * a) Publication is not set with \"FOR ALL TABLES\"\n> > > > > + * b) Publication is having default options\n> > > > > + * c) Publication is not associated with schemas\n> > > > > + * d) Publication is not associated with relations\n> > > > > + */\n> > > > > +static bool\n> > > > > +CheckPublicationDefValues(HeapTuple tup)\n> > > > >\n> > > > >\n> > > > > I think this header comment can be improved.\n> > > > > FROM:\n> > > > > Check the following:\n> > > > > TO:\n> > > > > Returns true if the publication satisfies all the following conditions:\n> > > >\n> > > > Modified\n> > > >\n> > > > > (7-2)\n> > > > >\n> > > > > b) should be changed as well\n> > > > > FROM:\n> > > > > Publication is having default options\n> > > > > TO:\n> > > > > Publication has the default publish operations\n> > > >\n> > > > Changed it to \"Publication is having default publication parameter values\"\n> > > >\n> > > > Thanks for the comments, the attached v8 patch has the changes for the same.\n> > >\n> > > The patch needed to be rebased on top of HEAD because of commit\n> > > \"0c20dd33db1607d6a85ffce24238c1e55e384b49\", attached a rebased v8\n> > > version for the changes of the same.\n> >\n> > I had missed attaching one of the changes that was present locally.\n> > The updated patch has the changes for the same.\n>\n> The patch needed to be rebased on top of HEAD because of a recent\n> 
commit. The updated v8 patch has the changes for the same.\n\nHi\n\ncfbot reports the patch no longer applies [1]. As CommitFest 2022-11 is\ncurrently underway, this would be an excellent time to update the patch.\n\n[1] http://cfbot.cputube.org/patch_40_3646.log\n\nThanks\n\nIan Barwick\n\n\n",
"msg_date": "Fri, 4 Nov 2022 11:49:46 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Fri, 4 Nov 2022 at 08:19, Ian Lawrence Barwick <barwick@gmail.com> wrote:\n>\n> Hi\n>\n> cfbot reports the patch no longer applies [1]. As CommitFest 2022-11 is\n> currently underway, this would be an excellent time to update the patch.\n>\n> [1] http://cfbot.cputube.org/patch_40_3646.log\n\nHere is an updated patch which is rebased on top of HEAD.\n\nRegards,\nVignesh",
"msg_date": "Mon, 7 Nov 2022 19:09:41 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "2022年11月7日(月) 22:39 vignesh C <vignesh21@gmail.com>:\n>\n> On Fri, 4 Nov 2022 at 08:19, Ian Lawrence Barwick <barwick@gmail.com> wrote:\n> >\n> > Hi\n> >\n> > cfbot reports the patch no longer applies [1]. As CommitFest 2022-11 is\n> > currently underway, this would be an excellent time to update the patch.\n> >\n> > [1] http://cfbot.cputube.org/patch_40_3646.log\n>\n> Here is an updated patch which is rebased on top of HEAD.\n\nThanks for the updated patch.\n\nWhile reviewing the patch backlog, we have determined that this patch adds\none or more TAP tests but has not added the test to the \"meson.build\" file.\n\nTo do this, locate the relevant \"meson.build\" file for each test and add it\nin the 'tests' dictionary, which will look something like this:\n\n 'tap': {\n 'tests': [\n 't/001_basic.pl',\n ],\n },\n\nFor some additional details please see this Wiki article:\n\n https://wiki.postgresql.org/wiki/Meson_for_patch_authors\n\nFor more information on the meson build system for PostgreSQL see:\n\n https://wiki.postgresql.org/wiki/Meson\n\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Wed, 16 Nov 2022 13:04:18 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Wed, 16 Nov 2022 at 09:34, Ian Lawrence Barwick <barwick@gmail.com> wrote:\n>\n> 2022年11月7日(月) 22:39 vignesh C <vignesh21@gmail.com>:\n> >\n> > On Fri, 4 Nov 2022 at 08:19, Ian Lawrence Barwick <barwick@gmail.com> wrote:\n> > >\n> > > Hi\n> > >\n> > > cfbot reports the patch no longer applies [1]. As CommitFest 2022-11 is\n> > > currently underway, this would be an excellent time to update the patch.\n> > >\n> > > [1] http://cfbot.cputube.org/patch_40_3646.log\n> >\n> > Here is an updated patch which is rebased on top of HEAD.\n>\n> Thanks for the updated patch.\n>\n> While reviewing the patch backlog, we have determined that this patch adds\n> one or more TAP tests but has not added the test to the \"meson.build\" file.\n\nThanks, I have updated the meson.build to include the TAP test. The\nattached patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Wed, 16 Nov 2022 15:35:31 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Wed, 16 Nov 2022 at 15:35, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, 16 Nov 2022 at 09:34, Ian Lawrence Barwick <barwick@gmail.com> wrote:\n> >\n> > 2022年11月7日(月) 22:39 vignesh C <vignesh21@gmail.com>:\n> > >\n> > > On Fri, 4 Nov 2022 at 08:19, Ian Lawrence Barwick <barwick@gmail.com> wrote:\n> > > >\n> > > > Hi\n> > > >\n> > > > cfbot reports the patch no longer applies [1]. As CommitFest 2022-11 is\n> > > > currently underway, this would be an excellent time to update the patch.\n> > > >\n> > > > [1] http://cfbot.cputube.org/patch_40_3646.log\n> > >\n> > > Here is an updated patch which is rebased on top of HEAD.\n> >\n> > Thanks for the updated patch.\n> >\n> > While reviewing the patch backlog, we have determined that this patch adds\n> > one or more TAP tests but has not added the test to the \"meson.build\" file.\n>\n> Thanks, I have updated the meson.build to include the TAP test. The\n> attached patch has the changes for the same.\n\nThe patch was not applying on top of HEAD, attached a rebased version.\n\nRegards,\nVignesh",
"msg_date": "Fri, 20 Jan 2023 15:30:54 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
},
{
"msg_contents": "On Fri, 20 Jan 2023 at 15:30, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, 16 Nov 2022 at 15:35, vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Wed, 16 Nov 2022 at 09:34, Ian Lawrence Barwick <barwick@gmail.com> wrote:\n> > >\n> > > 2022年11月7日(月) 22:39 vignesh C <vignesh21@gmail.com>:\n> > > >\n> > > > On Fri, 4 Nov 2022 at 08:19, Ian Lawrence Barwick <barwick@gmail.com> wrote:\n> > > > >\n> > > > > Hi\n> > > > >\n> > > > > cfbot reports the patch no longer applies [1]. As CommitFest 2022-11 is\n> > > > > currently underway, this would be an excellent time to update the patch.\n> > > > >\n> > > > > [1] http://cfbot.cputube.org/patch_40_3646.log\n> > > >\n> > > > Here is an updated patch which is rebased on top of HEAD.\n> > >\n> > > Thanks for the updated patch.\n> > >\n> > > While reviewing the patch backlog, we have determined that this patch adds\n> > > one or more TAP tests but has not added the test to the \"meson.build\" file.\n> >\n> > Thanks, I have updated the meson.build to include the TAP test. The\n> > attached patch has the changes for the same.\n>\n> The patch was not applying on top of HEAD, attached a rebased version.\n\nAs I did not see much interest from others, I'm withdrawing this patch\nfor now. But if there is any interest others in future, I would be\nmore than happy to work on this feature.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 9 Jan 2024 12:02:04 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping schema changes in publication"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen LockAcquireExtended(dontWait=false) acquires a lock where we already hold\nstronger lock and somebody else is also waiting for that lock, it goes through\na fairly circuitous path to acquire the lock:\n\nA conflicting lock is detected: if (lockMethodTable->conflictTab[lockmode] & lock->waitMask)\nLockAcquireExtended() -> WaitOnLock() -> ProcSleep()\nProcSleep() follows this special path:\n\t * Special case: if I find I should go in front of some waiter, check to\n\t * see if I conflict with already-held locks or the requests before that\n\t * waiter. If not, then just grant myself the requested lock immediately.\nand grants the lock.\n\n\nHowever, in dontWait mode, there's no such path. Which means that\nLockAcquireExtended() returns LOCKACQUIRE_NOT_AVAIL despite the fact that dontWait=false\nwould succeed in granting us the lock.\n\nThis seems decidedly suboptimal.\n\nFor one, the code flow to acquire a lock we already hold in some form is\nunnecessarily hard to understand and expensive. There's no comment in\nLockAcquireExtended() explaining that WaitOnLock() might immediately grant us\nthe lock in that case, we emit bogus TRACE_POSTGRESQL_LOCK_WAIT_START() etc.\n\nFor another, LockAcquireExtended(dontWait=true) returning spuriously is quite\nconfusing behaviour, and quite plausibly could cause bugs in fairly random\nplaces.\n\n\nNot planning to do anything about this for now, but I did want something I can\nfind if I hit such a problem in the future...\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 22 Mar 2022 10:43:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "LockAcquireExtended() dontWait vs weaker lock levels than already\n held"
},
{
"msg_contents": "On Tue, Mar 22, 2022 at 1:43 PM Andres Freund <andres@anarazel.de> wrote:\n> When LockAcquireExtended(dontWait=false) acquires a lock where we already hold\n> stronger lock and somebody else is also waiting for that lock, it goes through\n> a fairly circuitous path to acquire the lock:\n>\n> A conflicting lock is detected: if (lockMethodTable->conflictTab[lockmode] & lock->waitMask)\n> LockAcquireExtended() -> WaitOnLock() -> ProcSleep()\n> ProcSleep() follows this special path:\n> * Special case: if I find I should go in front of some waiter, check to\n> * see if I conflict with already-held locks or the requests before that\n> * waiter. If not, then just grant myself the requested lock immediately.\n> and grants the lock.\n\nI think this happens because lock.c is trying to imagine a world in\nwhich we don't know anything a priori about which locks are stronger\nor weaker than others and everything is deduced from the conflict\nmatrix. I think at some point in time someone believed that we might\nuse different conflict matrixes for different lock types. With an\narbitrary conflict matrix, \"stronger\" and \"weaker\" aren't even\nnecessarily well-defined ideas: A could conflict with B, B with C, and\nC with A, or something crazy like that. It seems rather unlikely to me\nthat we'd ever do such a thing at this point. In fact, there are a lot\nof things in lock.c that we'd probably do differently if we were doing\nthat work over.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 22 Mar 2022 14:20:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LockAcquireExtended() dontWait vs weaker lock levels than already\n held"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-22 14:20:55 -0400, Robert Haas wrote:\n> On Tue, Mar 22, 2022 at 1:43 PM Andres Freund <andres@anarazel.de> wrote:\n> > When LockAcquireExtended(dontWait=false) acquires a lock where we already hold\n> > stronger lock and somebody else is also waiting for that lock, it goes through\n> > a fairly circuitous path to acquire the lock:\n> >\n> > A conflicting lock is detected: if (lockMethodTable->conflictTab[lockmode] & lock->waitMask)\n> > LockAcquireExtended() -> WaitOnLock() -> ProcSleep()\n> > ProcSleep() follows this special path:\n> > * Special case: if I find I should go in front of some waiter, check to\n> > * see if I conflict with already-held locks or the requests before that\n> > * waiter. If not, then just grant myself the requested lock immediately.\n> > and grants the lock.\n>\n> I think this happens because lock.c is trying to imagine a world in\n> which we don't know anything a priori about which locks are stronger\n> or weaker than others and everything is deduced from the conflict\n> matrix. I think at some point in time someone believed that we might\n> use different conflict matrixes for different lock types. With an\n> arbitrary conflict matrix, \"stronger\" and \"weaker\" aren't even\n> necessarily well-defined ideas: A could conflict with B, B with C, and\n> C with A, or something crazy like that. It seems rather unlikely to me\n> that we'd ever do such a thing at this point. In fact, there are a lot\n> of things in lock.c that we'd probably do differently if we were doing\n> that work over.\n\nWe clearly already know how to compute whether a lock is \"included\" in\nsomething we already hold - after all ProcSleep() successfully does so.\n\nIsn't it a pretty trivial test? 
Seems like it'd boil down to something like\n\nacquireMask = lockMethodTable->conflictTab[lockmode];\nif ((MyProc->heldLocks & acquireMask) == acquireMask)\n /* already hold lock conflicting with it, grant the new lock to myself) */\nelse\n /* current behaviour */\n\nLockCheckConflicts() mostly knows how to deal with this. It's just that we don't\neven use LockCheckConflicts() if a lock acquisition conflicts with waitMask:\n\n /*\n * If lock requested conflicts with locks requested by waiters, must join\n * wait queue. Otherwise, check for conflict with already-held locks.\n * (That's last because most complex check.)\n */\n if (lockMethodTable->conflictTab[lockmode] & lock->waitMask)\n found_conflict = true;\n else\n found_conflict = LockCheckConflicts(lockMethodTable, lockmode,\n lock, proclock);\n\nYes, there's more deadlocks that can be solved by queue reordering, but the\nsimple cases that ProcSleep() handles don't seem problematic to solve in\nlock.c directly either...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 22 Mar 2022 12:01:36 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: LockAcquireExtended() dontWait vs weaker lock levels than\n already held"
},
{
"msg_contents": "On Tue, Mar 22, 2022 at 3:01 PM Andres Freund <andres@anarazel.de> wrote:\n> We clearly already know how to compute whether a lock is \"included\" in\n> something we already hold - after all ProcSleep() successfully does so.\n>\n> Isn't it a pretty trivial test? Seems like it'd boil down to something like\n\nI don't mind you fixing the behavior. I just couldn't pass up an\nopportunity to complain about the structure of lock.c.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 22 Mar 2022 15:04:37 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LockAcquireExtended() dontWait vs weaker lock levels than already\n held"
}
] |
[
{
"msg_contents": "Hi,\n\nI was about to propose adding headerscheck / cpluspluscheck to the CI file so\nthat cfbot can catch future issues. Unfortunately running cpluspluscheck with\nICU enabled is, um, not fun: There's 30k lines of error output.\n\n/home/andres/src/postgresql/src/tools/pginclude/cpluspluscheck /home/andres/src/postgresql /home/andres/build/postgres/dev-assert/vpath\nIn file included from /usr/include/c++/12/bits/stl_algobase.h:60,\n from /usr/include/c++/12/memory:63,\n from /usr/include/unicode/localpointer.h:45,\n from /usr/include/unicode/unorm2.h:34,\n from /usr/include/unicode/unorm.h:25,\n from /usr/include/unicode/ucol.h:17,\n from /home/andres/src/postgresql/src/include/utils/pg_locale.h:19,\n from /home/andres/src/postgresql/src/include/tsearch/ts_locale.h:20,\n from /tmp/cpluspluscheck.H59Y6V/test.cpp:3:\n/usr/include/c++/12/bits/functexcept.h:101:3: error: conflicting declaration of C function ‘void std::__throw_ios_failure(const char*, int)’\n 101 | __throw_ios_failure(const char*, int) __attribute__((__noreturn__));\n | ^~~~~~~~~~~~~~~~~~~\n/usr/include/c++/12/bits/functexcept.h:98:3: note: previous declaration ‘void std::__throw_ios_failure(const char*)’\n 98 | __throw_ios_failure(const char*) __attribute__((__noreturn__));\n | ^~~~~~~~~~~~~~~~~~~\nIn file included from /usr/include/c++/12/bits/stl_algobase.h:63:\n/usr/include/c++/12/ext/numeric_traits.h:50:3: error: template with C linkage\n 50 | template<typename _Tp>\n | ^~~~~~~~\n/tmp/cpluspluscheck.H59Y6V/test.cpp:1:1: note: ‘extern \"C\"’ linkage started here\n 1 | extern \"C\" {\n | ^~~~~~~~~~\n...\n\nwith one warning for each declaration in numeric_traits.h, I think.\n\nSo, there's two questions:\n1) How can we prevent this problem when ICU support is enabled?\n2) Can we prevent such absurdly long error output?\n\nFor 2), perhaps we should just specify EXTRAFLAGS=-fmax-errors=10 in the\ncpluspluscheck invocation, or add it in cpluspluscheck itself?\n\nFor 1), I don't 
immediately see a minimal solution other than ignoring it in\ncpluspluscheck, similar to pg_trace.h/probes.h.\n\nA different / complementary approach could be to add -Wc++-compat to the\nheaderscheck invocation. Both gcc and clang understand that.\n\nBut neither of these really gets to the heart of the problem. There's still no\nway for C++ code to include pg_locale.h correctly. And in contrast to\npg_trace.h/probes.h pg_locale.h is somewhat important.\n\n\nThis isn't a new problem, afaics.\n\n\nPerhaps we should strive to remove the use of ICU headers from within our\nheaders? The members of pg_locale are just pointers and could thus be void *,\nand HAVE_UCOL_STRCOLLUTF8 could be computed at configure time or such.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 22 Mar 2022 17:20:24 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "cpluspluscheck vs ICU"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-22 17:20:24 -0700, Andres Freund wrote:\n> I was about to propose adding headerscheck / cpluspluscheck to the CI file so\n> that cfbot can catch future issues.\n\nThe attached patch does so, with ICU disabled to avoid the problems discussed\nin the thread. Example run:\nhttps://cirrus-ci.com/task/6326161696358400?logs=headers_headerscheck#L0\n\nUnless somebody sees a reason not to, I'm planning to commit this soon.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 22 Mar 2022 19:23:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: cpluspluscheck vs ICU"
},
{
"msg_contents": "On Wed, Mar 23, 2022 at 3:23 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-03-22 17:20:24 -0700, Andres Freund wrote:\n> > I was about to propose adding headerscheck / cpluspluscheck to the CI file so\n> > that cfbot can catch future issues.\n>\n> The attached patch does so, with ICU disabled to avoid the problems discussed\n> in the thread. Example run:\n> https://cirrus-ci.com/task/6326161696358400?logs=headers_headerscheck#L0\n>\n> Unless somebody sees a reason not to, I'm planning to commit this soon.\n\nLGTM.\n\n\n",
"msg_date": "Wed, 23 Mar 2022 15:52:34 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cpluspluscheck vs ICU"
},
{
"msg_contents": "\nOn 3/22/22 22:23, Andres Freund wrote:\n> Hi,\n>\n> On 2022-03-22 17:20:24 -0700, Andres Freund wrote:\n>> I was about to propose adding headerscheck / cpluspluscheck to the CI file so\n>> that cfbot can catch future issues.\n> The attached patch does so, with ICU disabled to avoid the problems discussed\n> in the thread. Example run:\n> https://cirrus-ci.com/task/6326161696358400?logs=headers_headerscheck#L0\n>\n> Unless somebody sees a reason not to, I'm planning to commit this soon.\n>\n\nThat only helps when running the CI/cfbot setup. Fixing it for other\n(manual or buildfarm) users would be nice. Luckily crake isn't building\nwith ICU.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 23 Mar 2022 08:19:38 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: cpluspluscheck vs ICU"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-23 08:19:38 -0400, Andrew Dunstan wrote:\n> On 3/22/22 22:23, Andres Freund wrote:\n> > On 2022-03-22 17:20:24 -0700, Andres Freund wrote:\n> >> I was about to propose adding headerscheck / cpluspluscheck to the CI file so\n> >> that cfbot can catch future issues.\n> > The attached patch does so, with ICU disabled to avoid the problems discussed\n> > in the thread. Example run:\n> > https://cirrus-ci.com/task/6326161696358400?logs=headers_headerscheck#L0\n> >\n> > Unless somebody sees a reason not to, I'm planning to commit this soon.\n> >\n> \n> That only helps when running the CI/cfbot setup. Fixing it for other\n> (manual or buildfarm) users would be nice. Luckily crake isn't building\n> with ICU.\n\nOh, I agree we need to fix it properly. I just don't yet know how to - see the\nlist of alternatives upthread. Seems no reason to hold up preventing further\nproblems via CI / cfbot though.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 23 Mar 2022 08:56:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: cpluspluscheck vs ICU"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-23 08:56:17 -0700, Andres Freund wrote:\n> On 2022-03-23 08:19:38 -0400, Andrew Dunstan wrote:\n> > On 3/22/22 22:23, Andres Freund wrote:\n> > That only helps when running the CI/cfbot setup. Fixing it for other\n> > (manual or buildfarm) users would be nice. Luckily crake isn't building\n> > with ICU.\n>\n> Oh, I agree we need to fix it properly. I just don't yet know how to - see the\n> list of alternatives upthread. Seems no reason to hold up preventing further\n> problems via CI / cfbot though.\n\nI just hit this once more - and I figured out a fairly easy fix:\n\nWe just need a\n #ifndef U_DEFAULT_SHOW_DRAFT\n #define U_DEFAULT_SHOW_DRAFT 0\n #endif\nbefore including unicode/ucol.h.\n\nAt first I was looking at\n #define U_SHOW_CPLUSPLUS_API 0\nand\n #define U_HIDE_INTERNAL_API 1\nwhich both work, but they are documented to be internal.\n\n\nThe reason for the #ifndef is that pg_locale.h might be included by .c files\nthat already included ICU headers, which then otherwise would cause macro\nredefinition warnings. E.g. in formatting.c.\n\nAlternatively we could emit U_DEFAULT_SHOW_DRAFT 0 into pg_config.h to avoid\nthat issue.\n\n\nThe only other thing I see is to do something like:\n\n#ifdef USE_ICU\n#ifdef __cplusplus\n/* close extern \"C\", otherwise we'll get errors from within ICU */\n}\n#endif /* __cplusplus */\n\n#include <unicode/ucol.h>\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif /* __cplusplus */\n\n#endif /* USE_ICU */\n\nwhich seems mighty ugly.\n\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Fri, 10 Mar 2023 19:37:27 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: cpluspluscheck vs ICU"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-10 19:37:27 -0800, Andres Freund wrote:\n> On 2022-03-23 08:56:17 -0700, Andres Freund wrote:\n> > On 2022-03-23 08:19:38 -0400, Andrew Dunstan wrote:\n> > > On 3/22/22 22:23, Andres Freund wrote:\n> > > That only helps when running the CI/cfbot setup. Fixing it for other\n> > > (manual or buildfarm) users would be nice. Luckily crake isn't building\n> > > with ICU.\n> >\n> > Oh, I agree we need to fix it properly. I just don't yet know how to - see the\n> > list of alternatives upthread. Seems no reason to hold up preventing further\n> > problems via CI / cfbot though.\n> \n> I just hit this once more - and I figured out a fairly easy fix:\n> \n> We just need a\n> #ifndef U_DEFAULT_SHOW_DRAFT\n> #define U_DEFAULT_SHOW_DRAFT 0\n> #endif\n> before including unicode/ucol.h.\n> \n> At first I was looking at\n> #define U_SHOW_CPLUSPLUS_API 0\n> and\n> #define U_HIDE_INTERNAL_API 1\n> which both work, but they are documented to be internal.\n\nErr. Unfortunately only the U_SHOW_CPLUSPLUS_API approach actually works. The\nothers don't, not quite sure what I was doing earlier.\n\nSo it's either relying on a define marked as internal, or the below:\n\n> Alternatively we could emit U_DEFAULT_SHOW_DRAFT 0 into pg_config.h to avoid\n> that issue.\n> \n> \n> The only other thing I see is to do something like:\n> \n> #ifdef USE_ICU\n> #ifdef __cplusplus\n> /* close extern \"C\", otherwise we'll get errors from within ICU */\n> }\n> #endif /* __cplusplus */\n> \n> #include <unicode/ucol.h>\n> \n> #ifdef __cplusplus\n> extern \"C\" {\n> #endif /* __cplusplus */\n> \n> #endif /* USE_ICU */\n> \n> which seems mighty ugly.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 Mar 2023 20:10:30 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: cpluspluscheck vs ICU"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-10 20:10:30 -0800, Andres Freund wrote:\n> On 2023-03-10 19:37:27 -0800, Andres Freund wrote:\n> > I just hit this once more - and I figured out a fairly easy fix:\n> > \n> > We just need a\n> > #ifndef U_DEFAULT_SHOW_DRAFT\n> > #define U_DEFAULT_SHOW_DRAFT 0\n> > #endif\n> > before including unicode/ucol.h.\n> > \n> > At first I was looking at\n> > #define U_SHOW_CPLUSPLUS_API 0\n> > and\n> > #define U_HIDE_INTERNAL_API 1\n> > which both work, but they are documented to be internal.\n> \n> Err. Unfortunately only the U_SHOW_CPLUSPLUS_API approach actually works. The\n> others don't, not quite sure what I was doing earlier.\n> \n> So it's either relying on a define marked as internal, or the below:\n> \n> > Alternatively we could emit U_DEFAULT_SHOW_DRAFT 0 into pg_config.h to avoid\n> > that issue.\n> > \n> > \n> > The only other thing I see is to do something like:\n> > [ugly]\n> > which seems mighty ugly.\n\nThe ICU docs talk about it like it's not really internal:\nhttps://github.com/unicode-org/icu/blob/720e5741ccaa112c4faafffdedeb7459b66c5673/docs/processes/release/tasks/healthy-code.md#test-icu4c-headers\n\nSo I'm inclined to go with that solution.\n\nAny comments? Arguments against?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 7 Aug 2023 16:35:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: cpluspluscheck vs ICU"
},
{
"msg_contents": "On 08.08.23 01:35, Andres Freund wrote:\n> Hi,\n> \n> On 2023-03-10 20:10:30 -0800, Andres Freund wrote:\n>> On 2023-03-10 19:37:27 -0800, Andres Freund wrote:\n>>> I just hit this once more - and I figured out a fairly easy fix:\n>>>\n>>> We just need a\n>>> #ifndef U_DEFAULT_SHOW_DRAFT\n>>> #define U_DEFAULT_SHOW_DRAFT 0\n>>> #endif\n>>> before including unicode/ucol.h.\n>>>\n>>> At first I was looking at\n>>> #define U_SHOW_CPLUSPLUS_API 0\n>>> and\n>>> #define U_HIDE_INTERNAL_API 1\n>>> which both work, but they are documented to be internal.\n>>\n>> Err. Unfortunately only the U_SHOW_CPLUSPLUS_API approach actually works. The\n>> others don't, not quite sure what I was doing earlier.\n>>\n>> So it's either relying on a define marked as internal, or the below:\n>>\n>>> Alternatively we could emit U_DEFAULT_SHOW_DRAFT 0 into pg_config.h to avoid\n>>> that issue.\n>>>\n>>>\n>>> The only other thing I see is to do something like:\n>>> [ugly]\n>>> which seems mighty ugly.\n> \n> The ICU docs talk about it like it's not really internal:\n> https://github.com/unicode-org/icu/blob/720e5741ccaa112c4faafffdedeb7459b66c5673/docs/processes/release/tasks/healthy-code.md#test-icu4c-headers\n> \n> So I'm inclined to go with that solution.\n\nThis looks sensible to me.\n\nPerhaps undef U_SHOW_CPLUSPLUS_API after including the headers, so that \nif extensions want to use the ICU C++ APIs, they are not tripped up by this?\n\n\n\n",
"msg_date": "Tue, 8 Aug 2023 09:20:03 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: cpluspluscheck vs ICU"
}
] |
[
{
"msg_contents": "Hi,\n\nIt's possible to have a good number of standbys (in the context of async\nstreaming replication) as part of the client architecture. Rather than\nasking the client to look into the intricacies of comparing the LSN of each\nstandby with that of primary and performing the pg_rewind, isn't it a good\nidea to integrate the pg_rewind into the startup logic and perform\npg_rewind on need basis?\n\nConsidering the scenarios where primary is ahead of sync standbys, upon\npromotion of a standby, pg_rewind is needed on the old primary if it has to\nbe up as a standby. Similarly in the scenarios where async standbys(in\nphysical replication context) go ahead of sync standbys, and upon promotion\nof a standby, there is need for pg_rewind to be performed on the async\nstandbys which are ahead of sync standby being promoted.\n\nWith these scenarios under consideration, integrating pg_rewind into\npostgres core might be a better option IMO. We could optionally choose to\nhave pg_rewind dry run performed during the standby startup and based on\nthe need, perform the rewind and have the standby in sync with the primary.\n\nWould like to invite more thoughts from the hackers.\n\nRegards,\nRKN",
"msg_date": "Wed, 23 Mar 2022 17:13:47 +0530",
"msg_from": "RKN Sai Krishna <rknsaiforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "[Proposal] pg_rewind integration into core"
},
{
"msg_contents": "On Wed, Mar 23, 2022 at 05:13:47PM +0530, RKN Sai Krishna wrote:\n> Considering the scenarios where primary is ahead of sync standbys, upon\n> promotion of a standby, pg_rewind is needed on the old primary if it has to\n> be up as a standby. Similarly in the scenarios where async standbys(in\n> physical replication context) go ahead of sync standbys, and upon promotion\n> of a standby, there is need for pg_rewind to be performed on the async\n> standbys which are ahead of sync standby being promoted.\n\n> With these scenarios under consideration, integrating pg_rewind into\n> postgres core might be a better option IMO. We could optionally choose to\n> have pg_rewind dry run performed during the standby startup and based on\n> the need, perform the rewind and have the standby in sync with the primary.\n\npg_rewind is already part of the core code as a binary tool, but what\nyou mean is to integrate a portion of it in the backend code, as of a\nstartup sequence (with the node to rewind using primary_conninfo for\nthe source?). Once thing that we would need to be careful about\nis that no assumptions a rewind relies on are messed up in any way\nat the step where the rewind begins. One such thing is that the\nstandby has achieved crash recovery correctly, so you would need \nto somewhat complicate more the startup sequence, which is already a\ncomplicated and sensitive piece of logic, with more internal\ndependencies between each piece. I am not really convinced that we\nneed to add more technical debt in this area, particularly now that\npg_rewind is able to enforce recovery on the target node once so as it\nhas a clean state when the rewind can begin, so the assumptions around\ncrash recovery and rewind have a clear frontier cut.\n--\nMichael",
"msg_date": "Thu, 16 Jun 2022 12:31:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [Proposal] pg_rewind integration into core"
}
] |
[
{
"msg_contents": "Hi All,\nWe have (StringInfo::len == 0) checks at many places. I thought it\nwould be better to wrap that into a function isEmptyStringInfo() to\nmake those checks more readable and also abstract the logic to check\nemptiness of a StringInfo. I think this will be useful to extensions\noutside core which also have these checks. They won't need to worry\nabout that logic/code being changed in future; rare but not impossible\ncase.\n\nProbably we should have similar support for PQExpBuffer as well, which\nwill be more useful to hide the internals of PQExpBuffer from client\ncode. But I haven't included those changes in this patch. I can do\nthat if hackers like the idea.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Wed, 23 Mar 2022 18:02:44 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": true,
"msg_subject": "Support isEmptyStringInfo"
},
{
"msg_contents": "On Wed, Mar 23, 2022 at 8:33 AM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> We have (StringInfo::len == 0) checks at many places. I thought it\n> would be better to wrap that into a function isEmptyStringInfo() to\n> make those checks more readable and also abstract the logic to check\n> emptiness of a StringInfo. I think this will be useful to extensions\n> outside core which also have these checks. They won't need to worry\n> about that logic/code being changed in future; rare but not impossible\n> case.\n\nI think that the code is perfectly readable as it is and that this\nchange makes it less so.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 23 Mar 2022 08:35:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support isEmptyStringInfo"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I think that the code is perfectly readable as it is and that this\n> change makes it less so.\n\nYeah, after a quick look through this patch I'm unimpressed too.\nThe new code is strictly longer, and it requires the introduction\nof distracting \"!\" and \"&\" operators in many places.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 23 Mar 2022 10:13:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Support isEmptyStringInfo"
},
{
"msg_contents": "At Wed, 23 Mar 2022 10:13:43 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I think that the code is perfectly readable as it is and that this\n> > change makes it less so.\n> \n> Yeah, after a quick look through this patch I'm unimpressed too.\n> The new code is strictly longer, and it requires the introduction\n> of distracting \"!\" and \"&\" operators in many places.\n\nThe struct members are not private at all. In that sense StringInfo\nis not a kind of class of C/Java but like a struct of C/C++ at least\nto me. I think encapsulating only \".len == 0\" doesn't help. Already\nin many places we pull out buf.data to use it separately from buf, we\nhave a dozen of instances of \"buf.len (<|>|<=|>=) <some length>\" and\neven \"buf.data[buf.len - 1] == '\\n'\"\n\nAbout read-easiness, isEmptyStringInfo(str) slightly spins my eyes\nthan str->len == 0.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 24 Mar 2022 10:37:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support isEmptyStringInfo"
}
] |
[
{
"msg_contents": "Over at [1] Mark Dilger got led astray because he used\nsrc/test/modules/test_rls_hooks as a template for his new test module.\nIt looks like that module was created with some over eager copying of\nanother test module, but it has a couple of things wrong with it.\n\n. it isn't an extension, so the Makefile shouldn't have an EXTENSION\nentry, and there shouldn't be a .control file\n\n. it doesn't need to be preloaded so there is no requirement for a\nspecial config, nor for corresponding REGRESS_OPTS and NO_INSTALLCHECK\nlines in the Makefile.\n\nHere's a patch to fix those things.\n\n\ncheers\n\n\nandrew\n\n\n[1]\nhttps://postgr.es/m/47F87A0E-C0E5-43A6-89F6-D403F2B45175@enterprisedb.com\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 23 Mar 2022 09:31:04 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "clean up test_rls_hooks module"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Here's a patch to fix those things.\n\nSeems reasonable to me, except that I think I'd leave the formatting\nof the OBJS macro alone. IIRC, Andres or someone went around and\nmade all of those follow this one-file-per-line style some time ago.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 23 Mar 2022 10:07:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: clean up test_rls_hooks module"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-23 10:07:06 -0400, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > Here's a patch to fix those things.\n> \n> Seems reasonable to me, except that I think I'd leave the formatting\n> of the OBJS macro alone. IIRC, Andres or someone went around and\n> made all of those follow this one-file-per-line style some time ago.\n\nYea. merge conflicts are considerably rarer that way, and a heck of a lot\neasier to deal with when they occur.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 23 Mar 2022 08:45:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: clean up test_rls_hooks module"
}
] |
[
{
"msg_contents": "Unbreak the build.\n\nCommit ffd53659c46a54a6978bcb8c4424c1e157a2c0f1 broke the build for\nanyone not compiling with LZ4 and ZSTD enabled. Woops.\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/607e75e8f0f84544feb879b747da1d40fed71499\n\nModified Files\n--------------\nsrc/backend/replication/basebackup_lz4.c | 3 +--\nsrc/backend/replication/basebackup_zstd.c | 3 +--\n2 files changed, 2 insertions(+), 4 deletions(-)",
"msg_date": "Wed, 23 Mar 2022 14:24:13 +0000",
"msg_from": "Robert Haas <rhaas@postgresql.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Unbreak the build."
},
{
"msg_contents": "Hi, \n\nOn March 23, 2022 7:24:13 AM PDT, Robert Haas <rhaas@postgresql.org> wrote:\n>Unbreak the build.\n>\n>Commit ffd53659c46a54a6978bcb8c4424c1e157a2c0f1 broke the build for\n>anyone not compiling with LZ4 and ZSTD enabled. Woops.\n\nThere's new warnings that sound reasonable introduced in the prior commit that didn't get removed in this one:\n\nhttps://cirrus-ci.com/task/5259487073271808?logs=mingw_cross_warning#L392\n\n- Andres\n \n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Wed, 23 Mar 2022 07:46:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Unbreak the build."
},
{
"msg_contents": "Hi, \n\nOn March 23, 2022 7:46:10 AM PDT, Andres Freund <andres@anarazel.de> wrote:\n>Hi, \n>\n>On March 23, 2022 7:24:13 AM PDT, Robert Haas <rhaas@postgresql.org> wrote:\n>>Unbreak the build.\n>>\n>>Commit ffd53659c46a54a6978bcb8c4424c1e157a2c0f1 broke the build for\n>>anyone not compiling with LZ4 and ZSTD enabled. Woops.\n>\n>There's new warnings that sound reasonable introduced in the prior commit that didn't get removed in this one:\n>\n>https://cirrus-ci.com/task/5259487073271808?logs=mingw_cross_warning#L392\n\nAnd windows still fails tests after this commit: https://cirrus-ci.com/task/6424123323711488?logs=test_bin#L22\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Wed, 23 Mar 2022 07:58:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Unbreak the build."
},
{
"msg_contents": "On Wed, Mar 23, 2022 at 10:46 AM Andres Freund <andres@anarazel.de> wrote:\n> There's new warnings that sound reasonable introduced in the prior commit that didn't get removed in this one:\n>\n> https://cirrus-ci.com/task/5259487073271808?logs=mingw_cross_warning#L392\n\nThat link takes me to a screen that shows no warnings. Scrolling\naround the only thing I see that doesn't seem to be addressed by this\ncommit is a complaint about get_bc_algorithm_name falling off the end.\nI've added a dummy return statement to hopefully address that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 23 Mar 2022 11:38:39 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Unbreak the build."
},
{
"msg_contents": "Hi,\n\nOn 2022-03-23 11:38:39 -0400, Robert Haas wrote:\n> On Wed, Mar 23, 2022 at 10:46 AM Andres Freund <andres@anarazel.de> wrote:\n> > There's new warnings that sound reasonable introduced in the prior commit that didn't get removed in this one:\n> >\n> > https://cirrus-ci.com/task/5259487073271808?logs=mingw_cross_warning#L392\n> \n> That link takes me to a screen that shows no warnings.\n\nHm, apparently copied the link with slightly off line numbers. Odd.\n\n\n> Scrolling around the only thing I see that doesn't seem to be addressed by\n> this commit is a complaint about get_bc_algorithm_name falling off the end.\n\nWell, there's also the test failure on windows...\n\n\n> I've added a dummy return statement to hopefully address that.\n\nAssert(false); won't help a compiler to see the path is unreachable when\nbuilding without assertions. Might be nicer to use pg_unreachable().\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 23 Mar 2022 08:49:48 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Unbreak the build."
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> And windows still fails tests after this commit: https://cirrus-ci.com/task/6424123323711488?logs=test_bin#L22\n\nYeah. drongo is reporting\n\n# Running: pg_basebackup --no-sync -cfast -D C:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\src\\\\bin\\\\pg_basebackup\\\\tmp_check\\\\tmp_test_vv4i/tarbackup -Ft\nAssertion failed: 0, file c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\src\\\\bin\\\\pg_basebackup\\\\walmethods.c, line 953\nnot ok 82 - tar format\n\n# Failed test 'tar format'\n# at t/010_pg_basebackup.pl line 261.\n\nwhich is pointing at\n\n\t\t/* not reachable */\n\t\tAssert(false);\n\nso it's not so unreachable after all.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 23 Mar 2022 11:55:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Unbreak the build."
},
{
"msg_contents": "On Wed, Mar 23, 2022 at 11:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > And windows still fails tests after this commit: https://cirrus-ci.com/task/6424123323711488?logs=test_bin#L22\n>\n> Yeah. drongo is reporting\n>\n> # Running: pg_basebackup --no-sync -cfast -D C:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\src\\\\bin\\\\pg_basebackup\\\\tmp_check\\\\tmp_test_vv4i/tarbackup -Ft\n> Assertion failed: 0, file c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\src\\\\bin\\\\pg_basebackup\\\\walmethods.c, line 953\n> not ok 82 - tar format\n>\n> # Failed test 'tar format'\n> # at t/010_pg_basebackup.pl line 261.\n>\n> which is pointing at\n>\n> /* not reachable */\n> Assert(false);\n>\n> so it's not so unreachable after all.\n\nI'm looking into this now, but that's not the same Assert(false). The\none Andres is talking about is at the end of get_bc_algorithm_name().\nThis one is in tar_open_for_write(). AFAIK, the first of these is\nactually unreachable or at least I see no evidence that we are\nreaching it. The second is clearly reachable because we're failing the\nassertion. I thought that might be because I didn't test\n--without-zlib locally, and indeed in testing that just now, I found\nanother unused variable warning which I need to fix. But, that doesn't\naccount for this failure, because when I correct the problem with the\nunused variable, all the tests pass.\n\nI think what likely happened here is that in reorganizing some of the\nlogic in basebackup.c, I caused COMPRESSION_GZIP to get passed to\ntar_open_for_write() even when HAVE_LIBZ is not defined. But I don't\nyet understand why it only happens on Windows. I am suspicious that\nthe problem is in basebackup.c's main() function, but I haven't\npinpointed it yet.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 23 Mar 2022 12:03:35 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Unbreak the build."
},
{
"msg_contents": "On Wed, Mar 23, 2022 at 12:03 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I'm looking into this now, but that's not the same Assert(false). The\n> one Andres is talking about is at the end of get_bc_algorithm_name().\n> This one is in tar_open_for_write(). AFAIK, the first of these is\n> actually unreachable or at least I see no evidence that we are\n> reaching it. The second is clearly reachable because we're failing the\n> assertion. I thought that might be because I didn't test\n> --without-zlib locally, and indeed in testing that just now, I found\n> another unused variable warning which I need to fix. But, that doesn't\n> account for this failure, because when I correct the problem with the\n> unused variable, all the tests pass.\n>\n> I think what likely happened here is that in reorganizing some of the\n> logic in basebackup.c, I caused COMPRESSION_GZIP to get passed to\n> tar_open_for_write() even when HAVE_LIBZ is not defined. But I don't\n> yet understand why it only happens on Windows. I am suspicious that\n> the problem is in basebackup.c's main() function, but I haven't\n> pinpointed it yet.\n\nOh, it's not that: it's that Windows is using threads, and therefore\nLogStreamerMain() is getting called with the wrong arguments. I guess\nI just need to move the additional parameters that I added to\nLogStreamerMain() into members in the logstreamer_param struct.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 23 Mar 2022 13:05:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Unbreak the build."
}
] |
[
{
"msg_contents": "Hi,\n\nIt looks like the following errmsg_plural in dependency.c is\nunnecessary as numReportedClient > 1 always and numNotReportedClient\ncan never be < 0. Therefore plural version of the error message is\nsufficient. Attached a patch to fix it.\n\n@@ -1200,10 +1200,8 @@ reportDependentObjects(const ObjectAddresses *targetObjects,\n {\n ereport(msglevel,\n /* translator: %d always has a value larger than 1 */\n- (errmsg_plural(\"drop cascades to %d other object\",\n- \"drop cascades to %d other objects\",\n- numReportedClient + numNotReportedClient,\n- numReportedClient + numNotReportedClient),\n+ (errmsg(\"drop cascades to %d other objects\",\n+ numReportedClient + numNotReportedClient),\n errdetail(\"%s\", clientdetail.data),\n errdetail_log(\"%s\", logdetail.data)));\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Wed, 23 Mar 2022 22:03:55 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Remove an unnecessary errmsg_plural in dependency.c"
},
{
"msg_contents": "On 23.03.22 17:33, Bharath Rupireddy wrote:\n> It looks like the following errmsg_plural in dependency.c is\n> unnecessary as numReportedClient > 1 always and numNotReportedClient\n> can never be < 0. Therefore plural version of the error message is\n> sufficient. Attached a patch to fix it.\n\nSome languages have more than two forms, so we still need to keep this \nto handle those.\n\n\n",
"msg_date": "Wed, 23 Mar 2022 17:39:52 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove an unnecessary errmsg_plural in dependency.c"
},
{
"msg_contents": "At Wed, 23 Mar 2022 17:39:52 +0100, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote in \n> On 23.03.22 17:33, Bharath Rupireddy wrote:\n> > It looks like the following errmsg_plural in dependency.c is\n> > unnecessary as numReportedClient > 1 always and numNotReportedClient\n> > can never be < 0. Therefore plural version of the error message is\n> > sufficient. Attached a patch to fix it.\n> \n> Some languages have more than two forms, so we still need to keep this\n> to handle those.\n\nThe point seems to be that numReportedClient + numNotReportedClient >=\n2 (not 1) there. So the singular form is never used. It doesn't harm\nas-is but translation burden decreases a bit by fixing it.\n\nBy the way it has a translator-note as follows.\n\n>\telse if (numReportedClient > 1)\n>\t{\n>\t\tereport(msglevel,\n>\t\t/* translator: %d always has a value larger than 1 */\n>\t\t\t\t(errmsg_plural(\"drop cascades to %d other object\",\n>\t\t\t\t\t\t\t \"drop cascades to %d other objects\",\n>\t\t\t\t\t\t\t numReportedClient + numNotReportedClient,\n>\t\t\t\t\t\t\t numReportedClient + numNotReportedClient),\n\nThe comment and errmsg_plural don't seem to be consistent. When the\ncode was added by c4f2a0458d, it had only singular form and already\nhad the comment. After that 8032d76b5 turned it to errmsg_plural\nignoring the comment. It seems like a thinko of 8032d76b5.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 24 Mar 2022 14:17:03 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove an unnecessary errmsg_plural in dependency.c"
},
{
"msg_contents": "> On 24 Mar 2022, at 06:17, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> The comment and errmsg_plural don't seem to be consistent. When the\n> code was added by c4f2a0458d, it had only singular form and already\n> had the comment. After that 8032d76b5 turned it to errmsg_plural\n> ignoring the comment. It seems like a thinko of 8032d76b5.\n\nFollowing the bouncing ball, that seems like a reasonable conclusion, and\nremoving the plural form should be fine to reduce translator work.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 24 Mar 2022 10:04:28 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Remove an unnecessary errmsg_plural in dependency.c"
},
{
"msg_contents": "On Thu, Mar 24, 2022 at 2:34 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 24 Mar 2022, at 06:17, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> > The comment and errmsg_plural don't seem to be consistent. When the\n> > code was added by c4f2a0458d, it had only singular form and already\n> > had the comment. After that 8032d76b5 turned it to errmsg_plural\n> > ignoring the comment. It seems like a thinko of 8032d76b5.\n>\n> Following the bouncing ball, that seems like a reasonable conclusion, and\n> removing the plural form should be fine to reduce translator work.\n\nYes, the singular version of the message isn't required at all as\nnumReportedClient > 1. Hence I proposed to remove errmsg_plural and\nsingular version.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Thu, 24 Mar 2022 18:18:38 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove an unnecessary errmsg_plural in dependency.c"
},
{
"msg_contents": "\nOn 24.03.22 13:48, Bharath Rupireddy wrote:\n> Yes, the singular version of the message isn't required at all as\n> numReportedClient > 1. Hence I proposed to remove errmsg_plural and\n> singular version.\n\nThe issue is that n == 1 and n != 1 are not the only cases that \nerrmsg_plural() handles. Some languages have different forms for n == \n1, n == 2, and n >= 5, for example. So while it is true that in\n\n errmsg_plural(\"drop cascades to %d other object\",\n \"drop cascades to %d other objects\",\n\nthe English singular string will never be used, you have to keep the \nerrmsg_plural() call so that it can handle variants like the above for \nother languages.\n\nYou could write\n\n errmsg_plural(\"DUMMY NOT USED %d\",\n \"drop cascades to %d other objects\",\n\nbut I don't think that is better.\n\n\n",
"msg_date": "Thu, 24 Mar 2022 14:05:27 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove an unnecessary errmsg_plural in dependency.c"
},
{
"msg_contents": "On 24.03.22 06:17, Kyotaro Horiguchi wrote:\n> The comment and errmsg_plural don't seem to be consistent. When the\n> code was added by c4f2a0458d, it had only singular form and already\n> had the comment. After that 8032d76b5 turned it to errmsg_plural\n> ignoring the comment. It seems like a thinko of 8032d76b5.\n\nI have removed the comment.\n\n\n",
"msg_date": "Thu, 24 Mar 2022 14:07:35 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove an unnecessary errmsg_plural in dependency.c"
},
{
"msg_contents": "> On 24 Mar 2022, at 14:07, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 24.03.22 06:17, Kyotaro Horiguchi wrote:\n>> The comment and errmsg_plural don't seem to be consistent. When the\n>> code was added by c4f2a0458d, it had only singular form and already\n>> had the comment. After that 8032d76b5 turned it to errmsg_plural\n>> ignoring the comment. It seems like a thinko of 8032d76b5.\n> \n> I have removed the comment.\n\nI was just typing a reply to your upthread answer that we should just remove\nthe comment then, so a retroactive +1 on this =)\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 24 Mar 2022 14:11:01 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Remove an unnecessary errmsg_plural in dependency.c"
},
{
"msg_contents": "On Thu, Mar 24, 2022 at 6:35 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 24.03.22 13:48, Bharath Rupireddy wrote:\n> > Yes, the singular version of the message isn't required at all as\n> > numReportedClient > 1. Hence I proposed to remove errmsg_plural and\n> > singular version.\n>\n> The issue is that n == 1 and n != 1 are not the only cases that\n> errmsg_plural() handles. Some languages have different forms for n ==\n> 1, n == 2, and n >= 5, for example. So while it is true that in\n>\n> errmsg_plural(\"drop cascades to %d other object\",\n> \"drop cascades to %d other objects\",\n\nThanks. I think I get the point - is it dngettext doing things\ndifferently for different languages?\n\n#define EVALUATE_MESSAGE_PLURAL(domain, targetfield, appendval) \\\n { \\\n const char *fmt; \\\n StringInfoData buf; \\\n /* Internationalize the error format string */ \\\n if (!in_error_recursion_trouble()) \\\n fmt = dngettext((domain), fmt_singular, fmt_plural, n); \\\n else \\\n fmt = (n == 1 ? fmt_singular : fmt_plural); \\\n initStringInfo(&buf); \\\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Thu, 24 Mar 2022 19:04:20 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove an unnecessary errmsg_plural in dependency.c"
},
{
"msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> Thanks. I think I get the point - is it dngettext doing things\n> differently for different languages?\n\nYeah. To be concrete, have a look in ru.po:\n\n#: catalog/dependency.c:1208\n#, c-format\nmsgid \"drop cascades to %d other object\"\nmsgid_plural \"drop cascades to %d other objects\"\nmsgstr[0] \"удаление распространяется на ещё %d объект\"\nmsgstr[1] \"удаление распространяется на ещё %d объекта\"\nmsgstr[2] \"удаление распространяется на ещё %d объектов\"\n\nI don't know Russian, so I don't know exactly what's going\non there, but there's evidently three different forms in\nthat language. Probably one of them is not reachable given\nthat n > 1, but I doubt we're saving the translator any time\nwith that info. Besides, gettext might require all three\nforms to be provided anyway in order to work correctly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 24 Mar 2022 10:19:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove an unnecessary errmsg_plural in dependency.c"
},
{
"msg_contents": "On 2022-Mar-24, Bharath Rupireddy wrote:\n\n> Thanks. I think I get the point - is it dngettext doing things\n> differently for different languages?\n\nYes. The dngettext() rules are embedded in each translation's catalog\nfile:\n\n$ git grep 'Plural-Forms' src/backend/po/*.po\nde.po:\"Plural-Forms: nplurals=2; plural=(n != 1);\\n\"\nes.po:\"Plural-Forms: nplurals=2; plural=n != 1;\\n\"\nfr.po:\"Plural-Forms: nplurals=2; plural=(n > 1);\\n\"\nid.po:\"Plural-Forms: nplurals=2; plural=(n > 1);\\n\"\nit.po:\"Plural-Forms: nplurals=2; plural=n != 1;\\n\"\nja.po:\"Plural-Forms: nplurals=2; plural=n != 1;\\n\"\nko.po:\"Plural-Forms: nplurals=1; plural=0;\\n\"\npl.po:\"Plural-Forms: nplurals=3; plural=(n==1 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 \"\npt_BR.po:\"Plural-Forms: nplurals=2; plural=(n>1);\\n\"\nru.po:\"Plural-Forms: nplurals=3; plural=(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n\"\nsv.po:\"Plural-Forms: nplurals=2; plural=(n != 1);\\n\"\ntr.po:\"Plural-Forms: nplurals=2; plural=(n != 1);\\n\"\nuk.po:\"Plural-Forms: nplurals=4; plural=((n%10==1 && n%100!=11) ? 0 : ((n%10 >= 2 && n%10 <=4 && (n%100 < 12 || n%100 > 14)) ? 1 : ((n%10 == 0 || (n%10 >= 5 && n%10 <=9)) || (n%100 >= 11 && n%100 <= 14)) ? 2 : 3));\\n\"\nzh_CN.po:\"Plural-Forms: nplurals=1; plural=0;\\n\"\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nThou shalt study thy libraries and strive not to reinvent them without\ncause, that thy code may be short and readable and thy days pleasant\nand productive. (7th Commandment for C Programmers)\n\n\n",
"msg_date": "Thu, 24 Mar 2022 15:28:57 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Remove an unnecessary errmsg_plural in dependency.c"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> $ git grep 'Plural-Forms' src/backend/po/*.po\n> ru.po:\"Plural-Forms: nplurals=3; plural=(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n\"\n\nOh, interesting: if I'm reading that right, all three Russian\nforms are reachable, even with the knowledge that n > 1.\n(But isn't the last \"&& n\" test redundant?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 24 Mar 2022 10:49:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove an unnecessary errmsg_plural in dependency.c"
},
{
"msg_contents": "On 2022-Mar-24, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > $ git grep 'Plural-Forms' src/backend/po/*.po\n> > ru.po:\"Plural-Forms: nplurals=3; plural=(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n\"\n> \n> Oh, interesting: if I'm reading that right, all three Russian\n> forms are reachable, even with the knowledge that n > 1.\n> (But isn't the last \"&& n\" test redundant?)\n\nI wondered about that trailing 'n' and it turns out that the grep was\ntoo simplistic, so it's incomplete. The full rule is:\n\n\"Plural-Forms: nplurals=3; plural=(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n\"\n\"%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2);\\n\"\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 24 Mar 2022 16:00:58 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Remove an unnecessary errmsg_plural in dependency.c"
},
{
"msg_contents": "At Thu, 24 Mar 2022 10:19:18 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > Thanks. I think I get the point - is it dngettext doing things\n> > differently for different languages?\n> \n> Yeah. To be concrete, have a look in ru.po:\n\nI wondered why it takes two forms of format string but I now\nunderstand it is the fall-back texts used when translation is not\nfound.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 25 Mar 2022 10:45:54 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove an unnecessary errmsg_plural in dependency.c"
},
{
"msg_contents": "At Thu, 24 Mar 2022 16:00:58 +0100, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> On 2022-Mar-24, Tom Lane wrote:\n> \n> > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > > $ git grep 'Plural-Forms' src/backend/po/*.po\n> > > ru.po:\"Plural-Forms: nplurals=3; plural=(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n\"\n> > \n> > Oh, interesting: if I'm reading that right, all three Russian\n> > forms are reachable, even with the knowledge that n > 1.\n> > (But isn't the last \"&& n\" test redundant?)\n> \n> I wondered about that trailing 'n' and it turns out that the grep was\n> too simplistic, so it's incomplete. The full rule is:\n> \n> \"Plural-Forms: nplurals=3; plural=(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n\"\n> \"%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2);\\n\"\n\nFWIW just for fun, I saw the first form.\n\npostgres=# drop table t cascade;\nЗАМЕЧАНИЕ: удаление распространяется на ещё 21 объект\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 25 Mar 2022 10:52:46 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove an unnecessary errmsg_plural in dependency.c"
}
] |
[
{
"msg_contents": "Hi,\n\nI tried to run postgres with ubsan to debug something. I ran into two main\nissues:\n\n\n1) Despite Tom's recent efforts in [1], I see two ubsan failures in\n HEAD.\n\n These are easy enough to fix as per the attached, although the fix for the\n GetConfigOptionByNum() isn't great - we should probably pass a nulls array\n to GetConfigOptionByNum() as well, but that'd have a bunch of followup\n changes. So I'm inclined to go with the minimal for now.\n\n\n2) When debugging issues I got very confused by the fact that *sometimes*\n UBSAN_OPTIONS takes effect and sometimes not. I was trying to have ubsan\n generate backtraces as well as a coredump with\n UBSAN_OPTIONS=\"print_stacktrace=1:disable_coredump=0:abort_on_error=1:verbosity=2\"\n\n After a lot of debugging I figured out that the options took effect in\n postmaster, but not in any children. Which in turn is because\n set_ps_display() breaks /proc/$pid/environ - it's empty in all postmaster\n children for me.\n\n The sanitizer library use /proc/$pid/environ (from [2] to [3]), because\n they don't want to use getenv() because it wants to work without libc\n (whether that's the right way, i have my doubts). When just using\n undefined and alignment sanitizers, the sanitizer library is only\n initialized when the first error occurs, by which time we've often already\n called set_ps_display().\n\n Note that ps_status.c breaks /proc/$pid/environ even if the\n update_process_title GUC is set to false, because init_ps_display() ignores\n that. Yes, that confused me for a while last night.\n\n\n The reason that /proc/$pid/environ is borken is fairly obvious: We\n overwrite it in set_ps_display() in the PS_USE_CLOBBER_ARGV case. 
The\n kernel just looks at the range set up originally, which we'll overwrite\n with zeroes.\n\n We could try telling the kernel about the new location of environ using\n prctl() PR_SET_MM_ENV_START/END but the restrictions around that sound\n problematic.\n\n\n I've included a hacky workaround: Define __ubsan_default_options, a weak\n symbol libsanitizer uses to get defaults from the application, and return\n getenv(\"UBSAN_OPTIONS\"). But only if main already was reached, so that we\n don't end up relying on a not-yet-working getenv().\n\n\n This also lead me to finally figure out why /proc/$pid/cmdline of postgres\n children includes a lot of NULL bytes. I'd noticed this because tools like\n pidstat -l started to print long long status strings at some point, before\n it got fixed on the pidstat side.\n The way we overwrite doesn't trigger this special case in the kernel:\n https://github.com/torvalds/linux/blob/master/fs/proc/base.c#L296\n therefore the original size of the commandline arguments are printed, with\n a lot of boring zeroes.\n\n\nA test run of this in ci, with the guc.c intentionally reintroduced, shows the\nfailure both via core dump\nhttps://api.cirrus-ci.com/v1/task/6543164315009024/logs/cores.log\nand\nin postmaster's log: https://api.cirrus-ci.com/v1/artifact/task/6543164315009024/log/src/test/regress/log/postmaster.log\n\nalthough that part is a bit annoying to read, because the error is\ninterspersed with other log messages:\n\nguc.c:9801:15: runtime error: null pointer passed as argument 2, which is declared to never be null\n==19253==Using libbacktrace symbolizer.\n2022-03-23 17:20:35.412 UTC [19258][client backend] [pg_regress/alter_operator][14/429:0] ERROR: must be owner of operator ===\n...\n2022-03-23 17:20:35.601 UTC [19254][client backend] [pg_regress/alter_generic][10/1569:0] STATEMENT: ALTER STATISTICS alt_stat2 OWNER TO regress_alter_generic_user2;\n #0 0x5626562b2ab4 in GetConfigOptionByNum 
/tmp/cirrus-ci-build/src/backend/utils/misc/guc.c:9801\n #1 0x5626562b3fd5 in show_all_settings /tmp/cirrus-ci-build/src/backend/utils/misc/guc.c:10137\n2022-03-23 17:20:35.604 UTC [19254][client backend] [pg_regress/alter_generic][10/1576:0] ERROR: must be owner of statistics object alt_stat3\n...\n2022-03-23 17:20:35.601 UTC [19254][client backend] [pg_regress/alter_generic][10/1569:0] STATEMENT: ALTER STATISTICS alt_stat2 OWNER TO regress_alter_generic_user2;\n #0 0x5626562b2ab4 in GetConfigOptionByNum /tmp/cirrus-ci-build/src/backend/utils/misc/guc.c:9801\n #1 0x5626562b3fd5 in show_all_settings /tmp/cirrus-ci-build/src/backend/utils/misc/guc.c:10137\n2022-03-23 17:20:35.604 UTC [19254][client backend] [pg_regress/alter_generic][10/1576:0] ERROR: must be owner of statistics object alt_stat3\n2022-03-23 17:20:35.604 UTC [19254][client backend] [pg_regress/alter_generic][10/1576:0] STATEMENT: ALTER STATISTICS alt_stat3 RENAME TO alt_stat4;\n #2 0x562655c0ea86 in ExecMakeTableFunctionResult /tmp/cirrus-ci-build/src/backend/executor/execSRF.c:234\n #3 0x562655c3f8be in FunctionNext /tmp/cirrus-ci-build/src/backend/executor/nodeFunctionscan.c:95\n2022-03-23 17:20:35.605 UTC [19254][client backend] [pg_regress/alter_generic][10/1578:0] ERROR: must be owner of statistics object alt_stat3\n2022-03-23 17:20:35.605 UTC [19254][client backend] [pg_regress/alter_generic][10/1578:0] STATEMENT: ALTER STATISTICS alt_stat3 OWNER TO regress_alter_generic_user2;\n2022-03-23 17:20:35.606 UTC [19254][client backend] [pg_regress/alter_generic][10/1579:0] ERROR: must be member of role \"regress_alter_generic_user3\"\n2022-03-23 17:20:35.606 UTC [19254][client backend] [pg_regress/alter_generic][10/1579:0] STATEMENT: ALTER STATISTICS alt_stat2 OWNER TO regress_alter_generic_user3;\n #4 0x562655c10175 in ExecScanFetch /tmp/cirrus-ci-build/src/backend/executor/execScan.c:133\n #5 0x562655c10653 in ExecScan /tmp/cirrus-ci-build/src/backend/executor/execScan.c:199\n #6 0x562655c3f643 in 
ExecFunctionScan /tmp/cirrus-ci-build/src/backend/executor/nodeFunctionscan.c:270\n2022-03-23 17:20:35.606 UTC [19254][client backend] [pg_regress/alter_generic][10/1580:0] ERROR: must be owner of statistics object alt_stat3\n2022-03-23 17:20:35.606 UTC [19254][client backend] [pg_regress/alter_generic][10/1580:0] STATEMENT: ALTER STATISTICS alt_stat3 SET SCHEMA alt_nsp2;\n2022-03-23 17:20:35.606 UTC [19254][client backend] [pg_regress/alter_generic][10/1581:0] ERROR: statistics object \"alt_stat2\" already exists in schema \"alt_nsp2\"\n2022-03-23 17:20:35.606 UTC [19254][client backend] [pg_regress/alter_generic][10/1581:0] STATEMENT: ALTER STATISTICS alt_stat2 SET SCHEMA alt_nsp2;\n #7 0x562655c09bc9 in ExecProcNodeFirst /tmp/cirrus-ci-build/src/backend/executor/execProcnode.c:463\n #8 0x562655bf7580 in ExecProcNode ../../../src/include/executor/executor.h:259\n #9 0x562655bf7580 in ExecutePlan /tmp/cirrus-ci-build/src/backend/executor/execMain.c:1633\n #10 0x562655bf78b9 in standard_ExecutorRun /tmp/cirrus-ci-build/src/backend/executor/execMain.c:362\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/CALNJ-vT9r0DSsAOw9OXVJFxLENoVS_68kJ5x0p44atoYH+H4dg@mail.gmail.com\n[2] https://gcc.gnu.org/git/?p=gcc.git;a=blob;f=libsanitizer/ubsan/ubsan_flags.cpp;h=9a66bd37518b3a0606049b761ffdd7ddf3c3c714;hb=refs/heads/master#l68\n[2] https://gcc.gnu.org/git/?p=gcc.git;a=blob;f=libsanitizer/sanitizer_common/sanitizer_linux.cpp;h=aa59d9718ca89cc554bdf677df3e64ddd233ca59;hb=refs/heads/master#l559",
"msg_date": "Wed, 23 Mar 2022 10:35:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "ubsan"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I tried to run postgres with ubsan to debug something.\n\nFor 0001, could we just replace configure's dlopen check with the\ndlsym check? Or are you afraid of reverse-case failures?\n\n0002: ugh, but my only real complaint is that __ubsan_default_options\nneeds more than zero comment. Also, it's not \"our\" getenv is it?\n\n0003: OK. Interesting though that we haven't seen these before.\n\n0004: no opinion\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 23 Mar 2022 13:54:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ubsan"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-23 13:54:50 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I tried to run postgres with ubsan to debug something.\n> \n> For 0001, could we just replace configure's dlopen check with the\n> dlsym check? Or are you afraid of reverse-case failures?\n\nYea, I was worried about that. But now that I think more about it, it's hard\nto believe something could provide / intercept dlsym but not dlopen. I guess\nwe can try and see?\n\n\n> 0002: ugh, but my only real complaint is that __ubsan_default_options\n> needs more than zero comment.\n\nYea, definitely. I am still hoping that somebody could see a better approach\nthan that ugly hack.\n\nHaven't yet checked, but probably should also verify asan either doesn't have\nthe same problem or provide the same hack for ASAN_OPTIONS.\n\n\n> Also, it's not \"our\" getenv is it?\n\nNot really. \"libc's getenv()\"?\n\n\n> 0003: OK. Interesting though that we haven't seen these before.\n\nI assume it's a question of library version and configure flags.\n\nLooks like the fwrite nonnull case isn't actually due to the nonnull\nattribute, but just fwrite() getting intercepted by the sanitizer\nlibrary. Looks like that was added starting in gcc 9 [1]\n\nAnd the guc.c case presumably requires --enable-nls and a version of gettext\nusing the nonnull attribute?\n\n\nWonder if there's a few functions we should add nonnull to ourselves. 
Probably\nwould help \"everyday compiler warnings\", static analyzers, and ubsan.\n\nGreetings,\n\nAndres Freund\n\n[1]\n5d3805fca3e9 (Jakub Jelinek 2017-10-19 13:23:59 +0200 1151) #if SANITIZER_INTERCEPT_FWRITE\n5d3805fca3e9 (Jakub Jelinek 2017-10-19 13:23:59 +0200 1152) INTERCEPTOR(SIZE_T, fwrite, const void *p, uptr size, uptr nmemb, void *file) {\n5d3805fca3e9 (Jakub Jelinek 2017-10-19 13:23:59 +0200 1153) // libc file streams can call user-supplied functions, see fopencookie.\n5d3805fca3e9 (Jakub Jelinek 2017-10-19 13:23:59 +0200 1154) void *ctx;\n5d3805fca3e9 (Jakub Jelinek 2017-10-19 13:23:59 +0200 1155) COMMON_INTERCEPTOR_ENTER(ctx, fwrite, p, size, nmemb, file);\n5d3805fca3e9 (Jakub Jelinek 2017-10-19 13:23:59 +0200 1156) SIZE_T res = REAL(fwrite)(p, size, nmemb, file);\n5d3805fca3e9 (Jakub Jelinek 2017-10-19 13:23:59 +0200 1157) if (res > 0) COMMON_INTERCEPTOR_READ_RANGE(ctx, p, res * size);\n5d3805fca3e9 (Jakub Jelinek 2017-10-19 13:23:59 +0200 1158) return res;\n5d3805fca3e9 (Jakub Jelinek 2017-10-19 13:23:59 +0200 1159) }\n\n$ git describe --tags 5d3805fca3e9\nbasepoints/gcc-8-3961-g5d3805fca3e\n\n\n",
"msg_date": "Wed, 23 Mar 2022 11:21:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: ubsan"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-23 11:21:37 -0700, Andres Freund wrote:\n> On 2022-03-23 13:54:50 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > I tried to run postgres with ubsan to debug something.\n> > \n> > For 0001, could we just replace configure's dlopen check with the\n> > dlsym check? Or are you afraid of reverse-case failures?\n> \n> Yea, I was worried about that. But now that I think more about it, it's hard\n> to believe something could provide / intercept dlsym but not dlopen. I guess\n> we can try and see?\n>\n> > 0003: OK. Interesting though that we haven't seen these before.\n\nI think we should backpatch both, based on the reasoning in\n46ab07ffda9d6c8e63360ded2d4568aa160a7700 ?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 23 Mar 2022 12:22:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: ubsan"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I think we should backpatch both, based on the reasoning in\n> 46ab07ffda9d6c8e63360ded2d4568aa160a7700 ?\n\nYeah, I suppose. Is anyone going to step up and run a buildfarm\nmember with ubsan enabled? (I'm already checking -fsanitize=alignment\non longfin, but it seems advisable to keep that separate from\n-fsanitize=undefined.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 23 Mar 2022 15:58:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ubsan"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-23 15:58:09 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I think we should backpatch both, based on the reasoning in\n> > 46ab07ffda9d6c8e63360ded2d4568aa160a7700 ?\n> \n> Yeah, I suppose. Is anyone going to step up and run a buildfarm\n> member with ubsan enabled? (I'm already checking -fsanitize=alignment\n> on longfin, but it seems advisable to keep that separate from\n> -fsanitize=undefined.)\n\nI'm planning to enable it on two of mine. Looks like gcc and clang find\nslightly different things, so I was intending to enable it on one of each.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 23 Mar 2022 13:12:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: ubsan"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-23 13:12:34 -0700, Andres Freund wrote:\n> I'm planning to enable it on two of mine. Looks like gcc and clang find\n> slightly different things, so I was intending to enable it on one of each.\n\nOriginally I'd planned to mix them into existing members, but I think it'd be\nbetter to have dedicated ones. Applied for a few new buildfarm names for:\n{gcc,clang}-{-fsanitize=undefined,-fsanitize=address}.\n\n\nRunning with asan found an existing use-after-free bug in pg_waldump (*), a bug in\ndshash_seq_next() next that probably can't be hit in HEAD and a bug in my\nshared memory stats patch. I count that as a success.\n\nIt's particularly impressive that the cost of running with ASAN is *so* much\nlower than valgrind. On my workstation a check-world with\n-fsanitize=alignment,undefined,address takes 3min17s, vs 1min10s or so without\n-fsanitize. Not something to always use, but certainly better than valgrind.\n\nGreetings,\n\nAndres Freund\n\n(*) search_directory() uses fname = xlde->d_name after closedir(). Found in\npg_verifybackup.c's tests. Probably worth adding a few simple tests to\npg_waldump itself.\n\n\n",
"msg_date": "Wed, 23 Mar 2022 15:55:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: ubsan"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Running with asan found an existing use-after-free bug in pg_waldump (*), a bug in\n> dshash_seq_next() next that probably can't be hit in HEAD and a bug in my\n> shared memory stats patch. I count that as a success.\n\nNice!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 23 Mar 2022 19:02:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ubsan"
},
{
"msg_contents": "On Wed, Mar 23, 2022 at 03:58:09PM -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I think we should backpatch both, based on the reasoning in\n> > 46ab07ffda9d6c8e63360ded2d4568aa160a7700 ?\n> \n> Yeah, I suppose. Is anyone going to step up and run a buildfarm\n> member with ubsan enabled?\n\nthorntail has been running with UBSan since 2019. I've removed flag\n-fno-sanitize=nonnull-attribute, which your changes rendered superfluous.\n\n\n",
"msg_date": "Wed, 23 Mar 2022 23:23:23 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: ubsan"
},
{
"msg_contents": "On 3/23/22 16:55, Andres Freund wrote:\n> \n> It's particularly impressive that the cost of running with ASAN is *so* much\n> lower than valgrind. On my workstation a check-world with\n> -fsanitize=alignment,undefined,address takes 3min17s, vs 1min10s or so without\n> -fsanitize. Not something to always use, but certainly better than valgrind.\n\nIt also catches things that valgrind does not so that's a bonus.\n\nOne thing to note, though. I have noticed that when enabling \n-fsanitize=undefined and/or -fsanitize=address in combination with \n-fprofile-arcs -ftest-coverage there is a loss in reported coverage, at \nleast on gcc 9.3. This may not be very obvious unless coverage is \nnormally at 100%.\n\nRegards,\n-David\n\n\n",
"msg_date": "Fri, 25 Mar 2022 09:55:45 -0600",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: ubsan"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-23 15:55:28 -0700, Andres Freund wrote:\n> Originally I'd planned to mix them into existing members, but I think it'd be\n> better to have dedicated ones. Applied for a few new buildfarm names for:\n> {gcc,clang}-{-fsanitize=undefined,-fsanitize=address}.\n\nThey're now enabled...\n\ntamandua: gcc, -fsanitize=undefined,alignment\nkestrel: clang, -fsanitize=undefined,alignment\ngrassquit: gcc, -fsanitize=address\nolingo: clang, -fsanitize=address\n\nThe first three have started reporting in, the last is starting its first run\nnow.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 25 Mar 2022 18:33:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: ubsan"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-23 13:54:50 -0400, Tom Lane wrote:\n> 0002: ugh, but my only real complaint is that __ubsan_default_options\n> needs more than zero comment. Also, it's not \"our\" getenv is it?\n> \n> 0004: no opinion\n\nAttached is a rebased version of this patch. Hopefully with a reasonable\namount of comments? I kind of wanted to add a comment to reached_main, but it\njust seems to end up restating the variable name...\n\nGreetings,\n\nAndres Freund",
"msg_date": "Thu, 29 Sep 2022 18:17:55 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: ubsan"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-29 18:17:55 -0700, Andres Freund wrote:\n> Attached is a rebased version of this patch. Hopefully with a reasonable\n> amount of comments? I kind of wanted to add a comment to reached_main, but it\n> just seems to end up restating the variable name...\n\nI've now pushed a version of this with a few cleanups, mostly in\n.cirrus.yml. It'll be interesting to see how many additional problems\nit finds via cfbot.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 21 Nov 2022 15:15:03 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: ubsan"
},
{
"msg_contents": "On Mon, Nov 21, 2022 at 03:15:03PM -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2022-09-29 18:17:55 -0700, Andres Freund wrote:\n> > Attached is a rebased version of this patch. Hopefully with a reasonable\n> > amount of comments? I kind of wanted to add a comment to reached_main, but it\n> > just seems to end up restating the variable name...\n> \n> I've now pushed a version of this with a few cleanups, mostly in\n\nThanks. I'd meant to ask if there's a reason why you didn't use\nmeson -D sanitize ?\n\nI recall seeing a bug which affected linking .. maybe that's why ...\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 21 Nov 2022 17:42:38 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: ubsan"
},
{
"msg_contents": "Hi, \n\nOn November 21, 2022 3:42:38 PM PST, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>On Mon, Nov 21, 2022 at 03:15:03PM -0800, Andres Freund wrote:\n>> Hi,\n>> \n>> On 2022-09-29 18:17:55 -0700, Andres Freund wrote:\n>> > Attached is a rebased version of this patch. Hopefully with a reasonable\n>> > amount of comments? I kind of wanted to add a comment to reached_main, but it\n>> > just seems to end up restating the variable name...\n>> \n>> I've now pushed a version of this with a few cleanups, mostly in\n>\n>Thanks. I'd meant to ask if there's a reason why you didn't use\n>meson -D sanitize ?\n>\n>I recall seeing a bug which affected linking .. maybe that's why ...\n\nDoesn't allow enabling multiple sanitizers (the PR for that might recently have been merged, but that doesn't help us yet). We also need to add the no-recover flag anyway. So there doesn't seem to be much point.\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Mon, 21 Nov 2022 15:45:42 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: ubsan"
},
{
"msg_contents": "Hello Andres,\n\n22.11.2022 02:15, Andres Freund wrote:\n> Hi,\n>\n> On 2022-09-29 18:17:55 -0700, Andres Freund wrote:\n>> Attached is a rebased version of this patch. Hopefully with a reasonable\n>> amount of comments? I kind of wanted to add a comment to reached_main, but it\n>> just seems to end up restating the variable name...\n> I've now pushed a version of this with a few cleanups, mostly in\n> .cirrus.yml. It'll be interesting to see how many additional problems\n> it finds via cfbot.\n\nI've just discovered that that function __ubsan_default_options() is\nincompatible with -fsanitize=hwaddress:\n$ tmp_install/usr/local/pgsql/bin/postgres\nSegmentation fault\n\nProgram received signal SIGSEGV, Segmentation fault.\n0x000000555639e3ec in __hwasan_check_x0_0 ()\n(gdb) bt\n#0 0x000000555639e3ec in __hwasan_check_x0_0 ()\n#1 0x000000555697b5a8 in __ubsan_default_options () at main.c:446\n#2 0x0000005556367e48 in InitializeFlags ()\n at /home/builder/.termux-build/libllvm/src/compiler-rt/lib/hwasan/hwasan.cpp:133\n#3 __hwasan_init () at /home/builder/.termux-build/libllvm/src/compiler-rt/lib/hwasan/hwasan.cpp:351\n#4 0x0000007ff7f4929c in __dl__ZL13call_functionPKcPFviPPcS2_ES0_ () from /system/bin/linker64\n#5 0x0000007ff7f4900c in __dl__ZL10call_arrayIPFviPPcS1_EEvPKcPT_mbS5_ () from /system/bin/linker64\n#6 0x0000007ff7f45670 in __dl__ZL29__linker_init_post_relocationR19KernelArgumentBlockR6soinfo ()\n from /system/bin/linker64\n#7 0x0000007ff7f449c8 in __dl___linker_init () from /system/bin/linker64\n#8 0x0000007ff7f4b208 in __dl__start () from /system/bin/linker64\n\nI use clang version 16.0.6, Target: aarch64-unknown-linux-android24.\n\nWith just 'return \"\"' in __ubsan_default_options(), I've managed to run\n`make check` (there is also an issue with check_stack_depth(),\nbut that's another story)...\n\nBest regards,\nAlexander\n\n\n\n",
"msg_date": "Wed, 27 Sep 2023 14:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ubsan"
}
]
[
{
"msg_contents": "Hi,\n\nStarting with the below commit, pg_stat_reset_single_function_counters,\npg_stat_reset_single_table_counters don't just reset the stats for the\nindividual function, but also set pg_stat_database.stats_reset.\n\ncommit 4c468b37a281941afd3bf61c782b20def8c17047\nAuthor: Magnus Hagander <magnus@hagander.net>\nDate: 2011-02-10 15:09:35 +0100\n\n Track last time for statistics reset on databases and bgwriter\n\n Tracks one counter for each database, which is reset whenever\n the statistics for any individual object inside the database is\n reset, and one counter for the background writer.\n\n Tomas Vondra, reviewed by Greg Smith\n /*\n\n\n@@ -4107,6 +4118,8 @@ pgstat_recv_resetsinglecounter(PgStat_MsgResetsinglecounter *msg, int len)\n if (!dbentry)\n return;\n \n+ /* Set the reset timestamp for the whole database */\n+ dbentry->stat_reset_timestamp = GetCurrentTimestamp();\n \n /* Remove object if it exists, ignore it if not */\n if (msg->m_resettype == RESET_TABLE)\n\n\nThe relevant thread is [1], with the most-on-point message at [2].\npg_stat_reset_single_*_counters were introduced in [3]\n\n\nThis behaviour can be trivially (and is) implemented for the shared memory\nstats patch. But every time I read over that part of the code it feels just\nprofoundly wrong to me. Way worse than *not* resetting\npg_stat_database.stats_reset.\n\nAnybody that uses the time since the stats reset as part of a calculation of\ntransactions / sec, reads / sec or such will get completely bogus results\nafter a call to pg_stat_reset_single_table_counters().\n\n\nMaybe I just don't understand what these reset functions are intended for?\nTheir introduction [3] didn't explain much either. 
To me the behaviour of\nresetting pg_stat_database.stats_reset but nothing else in pg_stat_database\nmakes them kind of dangerous.\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/7177d0cd40b82409024e7c495e9d6992.squirrel%40sq.gransy.com\n[2] https://www.postgresql.org/message-id/4D0E5A54.3060302%40fuzzy.cz\n[3] https://www.postgresql.org/message-id/9837222c1001240837r5c103519lc6a74c37be5f1831%40mail.gmail.com\n\n\n",
"msg_date": "Wed, 23 Mar 2022 17:55:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "pg_stat_reset_single_*_counters vs pg_stat_database.stats_reset"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-23 17:55:16 -0700, Andres Freund wrote:\n> Maybe I just don't understand what these reset functions are intended for?\n> Their introduction [3] didn't explain much either. To me the behaviour of\n> resetting pg_stat_database.stats_reset but nothing else in pg_stat_database\n> makes them kind of dangerous.\n\nForgot to add: At the very least we should document that weird behaviour,\nbecause it's certainly not obvious. But imo we should either remove the\nbehaviour or drop the functions.\n\n <row>\n <entry role=\"func_table_entry\"><para role=\"func_signature\">\n <indexterm>\n <primary>pg_stat_reset_single_table_counters</primary>\n </indexterm>\n <function>pg_stat_reset_single_table_counters</function> ( <type>oid</type> )\n <returnvalue>void</returnvalue>\n </para>\n <para>\n Resets statistics for a single table or index in the current database\n or shared across all databases in the cluster to zero.\n </para>\n <para>\n This function is restricted to superusers by default, but other users\n can be granted EXECUTE to run the function.\n </para></entry>\n </row>\n\n <row>\n <entry role=\"func_table_entry\"><para role=\"func_signature\">\n <indexterm>\n <primary>pg_stat_reset_single_function_counters</primary>\n </indexterm>\n <function>pg_stat_reset_single_function_counters</function> ( <type>oid</type> )\n <returnvalue>void</returnvalue>\n </para>\n <para>\n Resets statistics for a single function in the current database to\n zero.\n </para>\n <para>\n This function is restricted to superusers by default, but other users\n can be granted EXECUTE to run the function.\n </para></entry>\n </row>\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 23 Mar 2022 17:59:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_reset_single_*_counters vs pg_stat_database.stats_reset"
},
{
"msg_contents": "On Wed, Mar 23, 2022 at 5:55 PM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> Starting with the below commit, pg_stat_reset_single_function_counters,\n> pg_stat_reset_single_table_counters don't just reset the stats for the\n> individual function, but also set pg_stat_database.stats_reset.\n>\n> commit 4c468b37a281941afd3bf61c782b20def8c17047\n> Author: Magnus Hagander <magnus@hagander.net>\n> Date: 2011-02-10 15:09:35 +0100\n>\n> Track last time for statistics reset on databases and bgwriter\n>\n> Tracks one counter for each database, which is reset whenever\n> the statistics for any individual object inside the database is\n> reset, and one counter for the background writer.\n>\n> Tomas Vondra, reviewed by Greg Smith\n> /*\n> [...]\n> Maybe I just don't understand what these reset functions are intended for?\n> Their introduction [3] didn't explain much either. To me the behaviour of\n> resetting pg_stat_database.stats_reset but nothing else in pg_stat_database\n> makes them kind of dangerous.\n>\n\n*tl/dr;*\nThere seems to be three scopes here:\n\nCluster Stats - (add last_reset fields for consistency)\nDatabase and Shared Object Stats (add last_reset to handle recording the\nresetting of just the records on this table, leaving stats_reset to be a\nflag meaning self-or-children)\nObject Stats (add last_reset, don't need stats_reset since there are no\nchildren)\n\nIf we are OK with just changing pg_stat_database.stats_reset meanings to be\nless informative than what it is today (I strongly dislike such a silent\nbehavioral change) we could simply standardize on the intended meaning of\nstats_reset on all three scopes, adding stats_reset as needed to track\nobject-level resetting.\n\n*Additional Exposition*\n\nThe description for the column declares that the field is reset when the\nstatistics on the pg_stat_database are reset. 
That is also the expected\nbehavior, knowing when any statistics in the whole database are reset is\nindeed not useful.\n\n\"Time at which these statistics were last reset\"\n\nThe \"these\" clearly refers to the statistics columns in pg_stat_database.\n\nIn fact, pg_stat_archiver.stats_reset already exists (as does\npg_stat_bgwriter.stats_reset) with (I presume) this interpretation. This is a\nproblem because pg_stat_database.stats_reset does not have the same\nmeaning. So we have to either live with inconsistency or break something.\n\nIn the vein of living with inconsistency I would suggest changing the\ndocumentation of \"pg_stat_database.stats_reset\" to match the present\nbehavior. Then add a new column (last_reset ?) to represent the existing\ndescription of \"stats_reset\".\n\nI suppose we could document \"stats_reset\" as the itself-or-any-children\nreset timestamp, it's just that the archiver and bgwriter don't have\nchildren in this sense while databases do. When the\npg_stat_database.last_reset field changes the pg_stat_database.stats_reset\nwould have to match anyway.\n\nI don't have any issue with an indicator field saying \"something regarding\nstats has changed\" at the database level. It is much easier to monitor\nthat and then inspect what may have changed rather than monitoring a\nlast_reset column on every single catalog that has statistics that can be\nreset.\n\nIt also seems that each tracked object type needs to have its own\nlast_reset field (we could choose to name it stats_reset too, leaving\npg_stat_database.last_reset as the only anomaly) added as an implied\nbehavior needed for such individualized resetting. 
I would go with\n*.last_reset though and leave the absence of pg_stat_archiver.last_reset as\nthe anomaly (or just add it redundantly for consistency).\n\nI don't see removing existing functionality as a good course to getting a\nconsistent implementation; we should just push forward with figuring out\nwhat is missing and fill in those gaps. At worst if that isn't something\nwe want to fix right now our new setup should at least leave the status quo\nbehaviors in place.\n\nI haven't looked into what kind of explicit resetting options are available\nbut the above seems to cover tracking resetting regardless of how it is\nimplemented. I've only spot checked some of the tables to identify the\npattern.\n\nDavid J.",
"msg_date": "Wed, 23 Mar 2022 18:47:58 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_reset_single_*_counters vs pg_stat_database.stats_reset"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-23 18:47:58 -0700, David G. Johnston wrote:\n> It also seems that each tracked object type needs to have its own\n> last_reset field (we could choose to name it stats_reset too, leaving\n> pg_stat_database.last_reset as the only anomaly) added as an implied\n> behavior needed for such individualized resetting. I would go with\n> *.last_reset though and leave the absence of pg_stat_archiver.last_reset as\n> the anomaly (or just add it redundantly for consistency).\n\nIt's not free to track more information. We always have the stats for the\nwhole system in memory at least once (stats collector currently, shared hash\ntable with shared memory stats patch), often more than that (stats accessing\nbackends).\n\n\n> I don't see removing existing functionality as a good course to getting a\n> consistent implementation; we should just push forward with figuring out\n> what is missing and fill in those gaps. At worst if that isn't something\n> we want to fix right now our new setup should at least leave the status quo\n> behaviors in place.\n\nWell, it depends on whether there's an actual use case for those super fine\ngrained reset functions. Neither docs nor the thread introducing them\npresented that. We don't have SQL level stats\nvalues either.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 23 Mar 2022 20:25:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_reset_single_*_counters vs pg_stat_database.stats_reset"
},
{
"msg_contents": "On Wed, 2022-03-23 at 17:55 -0700, Andres Freund wrote:\n> Starting with the below commit, pg_stat_reset_single_function_counters,\n> pg_stat_reset_single_table_counters don't just reset the stats for the\n> individual function, but also set pg_stat_database.stats_reset.\n\nI see the point in the fine-grained reset, but I am -1 on having that\nreset \"pg_stat_database.stats_reset\". That would make the timestamp\nmostly useless.\n\nOne could argue that resetting a single counter and *not* resetting\n\"pg_stat_database.stats_reset\" would also be a lie, but at least it is\na smaller lie.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Thu, 24 Mar 2022 06:27:48 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_reset_single_*_counters vs pg_stat_database.stats_reset"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-24 06:27:48 +0100, Laurenz Albe wrote:\n> On Wed, 2022-03-23 at 17:55 -0700, Andres Freund wrote:\n> > Starting with the below commit, pg_stat_reset_single_function_counters,\n> > pg_stat_reset_single_table_counters don't just reset the stats for the\n> > individual function, but also set pg_stat_database.stats_reset.\n> \n> I see the point in the fine-grained reset, but I am -1 on having that\n> reset \"pg_stat_database.stats_reset\". That would make the timestamp\n> mostly useless.\n\nJust to be clear - that's the current and long-time behaviour.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 23 Mar 2022 22:33:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_reset_single_*_counters vs pg_stat_database.stats_reset"
},
{
"msg_contents": "On 3/24/22 01:59, Andres Freund wrote:\n> Hi,\n> \n> On 2022-03-23 17:55:16 -0700, Andres Freund wrote:\n>> Maybe I just don't understand what these reset functions are intended for?\n>> Their introduction [3] didn't explain much either. To me the behaviour of\n>> resetting pg_stat_database.stats_reset but nothing else in pg_stat_database\n>> makes them kind of dangerous.\n> \n> Forgot to add: At the very least we should document that weird behaviour,\n> because it's certainly not obvious. But imo we should either remove the\n> behaviour or drop the functions.\n> \n\nI agree it should have been documented, but I still stand behind the\ncurrent behavior. I'm not willing to die on this hill, but I think the\nreasoning was/is sound.\n\nFirstly, calculating transactions/second, reads/second just by\nlooking at pg_stat_database data (counter and stat_reset) is nonsense.\nIt might work for short time periods, but for anything longer it's bound\nto give you bogus results - you don't even know if the system was\nrunning at all, and so on.\n\nSecondly, to do anything really meaningful you need to calculate deltas,\nand be able to detect if some of the stats were reset for the particular\ninterval. And the stat_reset timestamp was designed to be a simple way\nto detect that (instead of having to inspect all individual timestamps).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 24 Mar 2022 13:12:24 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_reset_single_*_counters vs pg_stat_database.stats_reset"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-24 13:12:24 +0100, Tomas Vondra wrote:\n> I agree it should have been documented, but I still stand behind the\n> current behavior. I'm not willing to die on this hill, but I think the\n> reasoning was/is sound.\n> \n> Firstly, calculating transactions/second, reads/second just from by\n> looking at pg_stat_database data (counter and stat_reset) is nonsense.\n> It might work for short time periods, but for anything longer it's bound\n> to give you bogus results - you don't even know if the system was\n> running at all, and so on.\n\nIt's not that you'd use it as the sole means of determining the time\ndelta. But using it to see if stats were reset between two samples of\npg_stat_database imo makes plenty sense.\n\n\n> Secondly, to do anything really meaningful you need to calculate deltas,\n> and be able to detect if some of the stats were reset for the particular\n> interval. And the stat_reset timestamp was designed to be a simple way\n> to detect that (instead of having to inspect all individual timestamps).\n\nI wonder if we should just split that per-database timestamp into two. One\nabout the pg_stat_database contents, one about per-database stats? That\ndoesn't have the same memory-usage-increase concerns as adding\nper-table/function reset stats.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 29 Mar 2022 13:37:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_reset_single_*_counters vs pg_stat_database.stats_reset"
},
{
"msg_contents": "On Tue, Mar 29, 2022 at 1:37 PM Andres Freund <andres@anarazel.de> wrote:\n\n> > Secondly, to do anything really meaningful you need to calculate deltas,\n> > and be able to detect if some of the stats were reset for the particular\n> > interval. And the stat_reset timestamp was designed to be a simple way\n> > to detect that (instead of having to inspect all individual timestamps).\n>\n> I wonder if we should just split that per-database timestamp into two. One\n> about the pg_stat_database contents, one about per-database stats? That\n> doesn't have the same memory-usage-increase concerns as adding\n> per-table/function reset stats.\n>\n>\nThat seems like only half a solution. The reasoning for doing such a split\nfor pg_stat_database is identical to the reason that new fields should be\nadded to pg_stat_all_tables and pg_stat_user_functions (and possibly\nothers).\n\npg_stat_all_tables already has 16 bigint fields, 4 timestamptz fields, 2\nnames and an oid. Seems like one more timestamptz field is a marginal\nincrease whose presence lets us keep the already implemented per-table\nreset mechanism. We should at least measure the impact that adding the\nfield has before deciding its presence is too costly.\n\nBut then, I'm going on design theory here, I don't presently have a horse\nin this race. And the fact no one has called us on this deficiency (not\nthat I've really been looking) does suggest the status quo is at least\nrealistic to maintain. But on that basis I would just leave\npg_stat_database alone with its single field. And then explain that the\nsingle field covers everything, including the database statistics. And so\nwhile it is possible to reset a subset of the statistics the field really\nloses its usefulness when that is done because the reset timestamp only\napplies to a subset. 
It only regains its meaning if/when one performs a\nfull stats reset.\n\nDavid J.",
"msg_date": "Tue, 29 Mar 2022 14:14:05 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_reset_single_*_counters vs pg_stat_database.stats_reset"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-29 14:14:05 -0700, David G. Johnston wrote:\n> On Tue, Mar 29, 2022 at 1:37 PM Andres Freund <andres@anarazel.de> wrote:\n> \n> > > Secondly, to do anything really meaningful you need to calculate deltas,\n> > > and be able to detect if some of the stats were reset for the particular\n> > > interval. And the stat_reset timestamp was designed to be a simple way\n> > > to detect that (instead of having to inspect all individual timestamps).\n> >\n> > I wonder if we should just split that per-database timestamp into two. One\n> > about the pg_stat_database contents, one about per-database stats? That\n> > doesn't have the same memory-usage-increase concerns as adding\n> > per-table/function reset stats.\n> >\n> >\n> That seems like only half a solution. The reasoning for doing such a split\n> for pg_stat_database is identical to the reason that new fields should be\n> added to pg_stat_all_tables and pg_stat_user_functions (and possibly\n> others).\n\nNot really IMO. There's obviously the space usage aspect - there's always\nfewer pg_stat_database stats than relation stats. But more importantly, a\nper-relation/function reset field wouldn't address Tomas's concern: He wants a\nsingle thing to check to see if any stats have been reset - and that's imo a\nquite reasonable desire.\n\n\n> pg_stat_all_tables already has 16 bigint fields, 4 timestamptz fields, 2\n> names and an oid. Seems like one more timestamptz field is a marginal\n> increase whose presence lets us keep the already implemented per-table\n> reset mechanism. We should at least measure the impact that adding the\n> field has before deciding its presence is too costly.\n\nBecause of the desire for a single place to check whether there has been a\nreset within a database, that's imo an orthogonal debate.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 29 Mar 2022 16:43:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_reset_single_*_counters vs pg_stat_database.stats_reset"
},
{
"msg_contents": "On Tue, Mar 29, 2022 at 4:43 PM Andres Freund <andres@anarazel.de> wrote:\n\n\n> But more importantly, a\n> per-relation/function reset field wouldn't address Tomas's concern: He\n> wants a\n> single thing to check to see if any stats have been reset - and that's imo\n> a\n> quite reasonable desire.\n>\n\nPer the original email:\n\n\"Starting with the below commit, pg_stat_reset_single_function_counters,\npg_stat_reset_single_table_counters don't just reset the stats for the\nindividual function, but also set pg_stat_database.stats_reset.\"\n\nThus we already have the desired behavior, it is just poorly documented.\n\nNow, maybe other functions aren't doing this? If so, given these functions\ndo, we probably should just change any outliers to match.\n\nI'm reading Tomas's comments as basically a defense of the status quo, at\nleast so far as the field goes. He didn't comment on the idea of \"drop the\nreset_[relation|function]_counters(...)\" functions. Combined, I take that\nas supporting the entire status quo: leaving the function and fields\nas-is. I'm inclined to do the same. I don't see any real benefit to\nchange here as there is no user demand for it and the field change proposal\nis to change only one of the at least three locations that should be\nchanged if we want to have a consistent design. 
And we aren't getting user\nreports saying the presence of the functions is a problem (confusion or\notherwise) either, so unless there is a technical reason writing these\nfunctions in the new system is undesirable we have no justification that I\ncan see for removing the long-standing feature.\n\nDavid J.",
"msg_date": "Tue, 29 Mar 2022 17:06:24 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_reset_single_*_counters vs pg_stat_database.stats_reset"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-29 17:06:24 -0700, David G. Johnston wrote:\n> On Tue, Mar 29, 2022 at 4:43 PM Andres Freund <andres@anarazel.de> wrote:\n> > But more importantly, a\n> > per-relation/function reset field wouldn't address Tomas's concern: He\n> > wants a\n> > single thing to check to see if any stats have been reset - and that's imo\n> > a\n> > quite reasonable desire.\n> >\n> \n> Per the original email:\n> \n> \"Starting with the below commit, pg_stat_reset_single_function_counters,\n> pg_stat_reset_single_table_counters don't just reset the stats for the\n> individual function, but also set pg_stat_database.stats_reset.\"\n> \n> Thus we already have the desired behavior, it is just poorly documented.\n\nThe problem is that it also makes stats_reset useless for other purposes -\nwhich I do consider a problem. Hence this thread. My concern would be\nmollified if there were a separate reset timestamp counting the last\n\"database wide\" reset time. Your comment about that was something about\nrelation/function level timestamps, which doesn't seem relevant.\n\n\n> Now, maybe other functions aren't doing this? If so, given these functions\n> do, we probably should just change any outliers to match.\n\nDon't think there are other functions.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 29 Mar 2022 17:50:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_reset_single_*_counters vs pg_stat_database.stats_reset"
},
{
"msg_contents": "On Tue, Mar 29, 2022 at 5:50 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-03-29 17:06:24 -0700, David G. Johnston wrote:\n> > On Tue, Mar 29, 2022 at 4:43 PM Andres Freund <andres@anarazel.de>\n> wrote:\n> > > But more importantly, a\n> > > per-relation/function reset field wouldn't address Tomas's concern: He\n> > > wants a\n> > > single thing to check to see if any stats have been reset - and that's\n> imo\n> > > a\n> > > quite reasonable desire.\n> > >\n> >\n> > Per the original email:\n> >\n> > \"Starting with the below commit, pg_stat_reset_single_function_counters,\n> > pg_stat_reset_single_table_counters don't just reset the stats for the\n> > individual function, but also set pg_stat_database.stats_reset.\"\n> >\n> > Thus we already have the desired behavior, it is just poorly documented.\n>\n> The problem is that it also make stats_reset useless for other purposes -\n> which I do consider a problem. Hence this thread. My concern would be\n> mollified if I there were a separate reset timestamp counting the last\n> \"database wide\" reset time. Your comment about that was something about\n> relation/function level timestamps, which doesn't seem relevant.\n>\n\nI can't figure out whether you agree that as of today stats_reset is the\n\"database wide\" reset time. The first sentence makes it sound like you do,\nthe first one makes it sound like you don't.\n\nDavid J.\n\nOn Tue, Mar 29, 2022 at 5:50 PM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2022-03-29 17:06:24 -0700, David G. 
Johnston wrote:\n> On Tue, Mar 29, 2022 at 4:43 PM Andres Freund <andres@anarazel.de> wrote:\n> > But more importantly, a\n> > per-relation/function reset field wouldn't address Tomas's concern: He\n> > wants a\n> > single thing to check to see if any stats have been reset - and that's imo\n> > a\n> > quite reasonable desire.\n> >\n> \n> Per the original email:\n> \n> \"Starting with the below commit, pg_stat_reset_single_function_counters,\n> pg_stat_reset_single_table_counters don't just reset the stats for the\n> individual function, but also set pg_stat_database.stats_reset.\"\n> \n> Thus we already have the desired behavior, it is just poorly documented.\n\nThe problem is that it also make stats_reset useless for other purposes -\nwhich I do consider a problem. Hence this thread. My concern would be\nmollified if I there were a separate reset timestamp counting the last\n\"database wide\" reset time. Your comment about that was something about\nrelation/function level timestamps, which doesn't seem relevant.I can't figure out whether you agree that as of today stats_reset is the \"database wide\" reset time. The first sentence makes it sound like you do, the first one makes it sound like you don't.David J.",
"msg_date": "Tue, 29 Mar 2022 17:56:26 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_reset_single_*_counters vs pg_stat_database.stats_reset"
},
{
"msg_contents": "On Tue, Mar 29, 2022 at 5:56 PM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> On Tue, Mar 29, 2022 at 5:50 PM Andres Freund <andres@anarazel.de> wrote:\n>\n>> Hi,\n>>\n>> On 2022-03-29 17:06:24 -0700, David G. Johnston wrote:\n>> > On Tue, Mar 29, 2022 at 4:43 PM Andres Freund <andres@anarazel.de>\n>> wrote:\n>> > > But more importantly, a\n>> > > per-relation/function reset field wouldn't address Tomas's concern: He\n>> > > wants a\n>> > > single thing to check to see if any stats have been reset - and\n>> that's imo\n>> > > a\n>> > > quite reasonable desire.\n>> > >\n>> >\n>> > Per the original email:\n>> >\n>> > \"Starting with the below commit, pg_stat_reset_single_function_counters,\n>> > pg_stat_reset_single_table_counters don't just reset the stats for the\n>> > individual function, but also set pg_stat_database.stats_reset.\"\n>> >\n>> > Thus we already have the desired behavior, it is just poorly documented.\n>>\n>> The problem is that it also make stats_reset useless for other purposes -\n>> which I do consider a problem. Hence this thread. My concern would be\n>> mollified if I there were a separate reset timestamp counting the last\n>> \"database wide\" reset time. Your comment about that was something about\n>> relation/function level timestamps, which doesn't seem relevant.\n>>\n>\n> I can't figure out whether you agree that as of today stats_reset is the\n> \"database wide\" reset time. The first sentence makes it sound like you do,\n> the first one makes it sound like you don't.\n>\n\nOK, I meant the third one seems contrary, but re-reading this all again I\nthink I see what you are saying.\n\nYou want to add a field that only changes when \"reset all stats\" is\nexecuted for a given database. Leaving stats_reset to mean \"the last time\nany individual stat record changed\". I can get behind that.\n\nDavid J.\n\nOn Tue, Mar 29, 2022 at 5:56 PM David G. 
Johnston <david.g.johnston@gmail.com> wrote:On Tue, Mar 29, 2022 at 5:50 PM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2022-03-29 17:06:24 -0700, David G. Johnston wrote:\n> On Tue, Mar 29, 2022 at 4:43 PM Andres Freund <andres@anarazel.de> wrote:\n> > But more importantly, a\n> > per-relation/function reset field wouldn't address Tomas's concern: He\n> > wants a\n> > single thing to check to see if any stats have been reset - and that's imo\n> > a\n> > quite reasonable desire.\n> >\n> \n> Per the original email:\n> \n> \"Starting with the below commit, pg_stat_reset_single_function_counters,\n> pg_stat_reset_single_table_counters don't just reset the stats for the\n> individual function, but also set pg_stat_database.stats_reset.\"\n> \n> Thus we already have the desired behavior, it is just poorly documented.\n\nThe problem is that it also make stats_reset useless for other purposes -\nwhich I do consider a problem. Hence this thread. My concern would be\nmollified if I there were a separate reset timestamp counting the last\n\"database wide\" reset time. Your comment about that was something about\nrelation/function level timestamps, which doesn't seem relevant.I can't figure out whether you agree that as of today stats_reset is the \"database wide\" reset time. The first sentence makes it sound like you do, the first one makes it sound like you don't.OK, I meant the third one seems contrary, but re-reading this all again I think I see what you are saying.You want to add a field that only changes when \"reset all stats\" is executed for a given database. Leaving stats_reset to mean \"the last time any individual stat record changed\". I can get behind that.David J.",
"msg_date": "Tue, 29 Mar 2022 18:04:43 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_reset_single_*_counters vs pg_stat_database.stats_reset"
},
{
"msg_contents": "On Wed, Mar 23, 2022 at 8:55 PM Andres Freund <andres@anarazel.de> wrote:\n> This behaviour can be trivially (and is) implemented for the shared memory\n> stats patch. But every time I read over that part of the code it feels just\n> profoundly wrong to me. Way worse than *not* resetting\n> pg_stat_database.stats_reset.\n>\n> Anybody that uses the time since the stats reset as part of a calculation of\n> transactions / sec, reads / sec or such will get completely bogus results\n> after a call to pg_stat_reset_single_table_counters().\n\nSure, but that's unavoidable anyway. If some stats have been reset and\nother stats have not, you can't calculate a meaningful average over\ntime unless you have a specific reset time for each statistic.\n\nTo me, the current behavior feels more correct than what you propose.\nImagine for example that you are looking for tables/indexes where the\ncounters are 0 as a way of finding unused objects. If you know that no\ncounters have been zeroed in a long time, you know that this is\nreliable. But under your proposal, there's no way to know this. All\nyou know is that the entire system wasn't reset, and therefore some of\nthe 0s that you are seeing might be for individual objects that were\nreset.\n\nI think of this mechanism like as answering the question \"When's the\nlast time anybody tinkered with this thing by hand?\". If it's recent,\nthe tinkering has a good chance of being related to whatever problem\nI'm trying to solve. If it's not, it's probably unrelated.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 30 Mar 2022 14:57:25 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_reset_single_*_counters vs pg_stat_database.stats_reset"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-30 14:57:25 -0400, Robert Haas wrote:\n> On Wed, Mar 23, 2022 at 8:55 PM Andres Freund <andres@anarazel.de> wrote:\n> > This behaviour can be trivially (and is) implemented for the shared memory\n> > stats patch. But every time I read over that part of the code it feels just\n> > profoundly wrong to me. Way worse than *not* resetting\n> > pg_stat_database.stats_reset.\n> >\n> > Anybody that uses the time since the stats reset as part of a calculation of\n> > transactions / sec, reads / sec or such will get completely bogus results\n> > after a call to pg_stat_reset_single_table_counters().\n> \n> Sure, but that's unavoidable anyway. If some stats have been reset and\n> other stats have not, you can't calculate a meaningful average over\n> time unless you have a specific reset time for each statistic.\n\nIndividual pg_stat_database columns can't be reset independently. Other views\nsummarizing large parts of the system, like pg_stat_bgwriter, pg_stat_wal etc\nhave a stats_reset column that is only reset if their counters is also\nreset. So the only reason we can't do that for pg_stat_database is that we\ndon't know since when pg_stat_database counters are counting.\n\n\n> To me, the current behavior feels more correct than what you propose.\n> Imagine for example that you are looking for tables/indexes where the\n> counters are 0 as a way of finding unused objects. If you know that no\n> counters have been zeroed in a long time, you know that this is\n> reliable. But under your proposal, there's no way to know this. All\n> you know is that the entire system wasn't reset, and therefore some of\n> the 0s that you are seeing might be for individual objects that were\n> reset.\n\nMy current proposal is to just have two reset times. One for the contents of\npg_stat_database (i.e. 
not affected by pg_stat_reset_single_*_counters()), and\none for stats within the entire database.\n\n\n> I think of this mechanism like as answering the question \"When's the\n> last time anybody tinkered with this thing by hand?\". If it's recent,\n> the tinkering has a good chance of being related to whatever problem\n> I'm trying to solve. If it's not, it's probably unrelated.\n\nWhen I look at a database with a problem, I'll often look at pg_stat_database\nto get a first impression of the type of workload running. The fact that\nstats_reset doesn't reflect the age of other pg_stat_database columns makes\nthat much harder.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 30 Mar 2022 12:23:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_reset_single_*_counters vs pg_stat_database.stats_reset"
},
{
"msg_contents": "On Wednesday, March 30, 2022, Andres Freund <andres@anarazel.de> wrote:\n\n>\n> My current proposal is to just have two reset times. One for the contents\n> of\n> pg_stat_database (i.e. not affected by pg_stat_reset_single_*_counters()),\n> and\n> one for stats within the entire database.\n>\n>\nWhat IS it affected by? And does whatever affects it affect anything else?\n\nDavid J.\n\nOn Wednesday, March 30, 2022, Andres Freund <andres@anarazel.de> wrote:\nMy current proposal is to just have two reset times. One for the contents of\npg_stat_database (i.e. not affected by pg_stat_reset_single_*_counters()), and\none for stats within the entire database.\nWhat IS it affected by? And does whatever affects it affect anything else?David J.",
"msg_date": "Wed, 30 Mar 2022 12:29:51 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_reset_single_*_counters vs pg_stat_database.stats_reset"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-30 12:29:51 -0700, David G. Johnston wrote:\n> On Wednesday, March 30, 2022, Andres Freund <andres@anarazel.de> wrote:\n> > My current proposal is to just have two reset times. One for the contents\n> > of\n> > pg_stat_database (i.e. not affected by pg_stat_reset_single_*_counters()),\n> > and\n> > one for stats within the entire database.\n\n> What IS it affected by? And does whatever affects it affect anything else?\n\npg_stat_reset() resets the current database's stats. That includes the\ndatabase's row in pg_stat_database and all table and function stats.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 30 Mar 2022 13:39:27 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_reset_single_*_counters vs pg_stat_database.stats_reset"
},
{
"msg_contents": "On Wed, Mar 30, 2022 at 1:39 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-03-30 12:29:51 -0700, David G. Johnston wrote:\n> > On Wednesday, March 30, 2022, Andres Freund <andres@anarazel.de> wrote:\n> > > My current proposal is to just have two reset times. One for the\n> contents\n> > > of\n> > > pg_stat_database (i.e. not affected by\n> pg_stat_reset_single_*_counters()),\n> > > and\n> > > one for stats within the entire database.\n>\n> > What IS it affected by? And does whatever affects it affect anything\n> else?\n>\n> pg_stat_reset() resets the current database's stats. That includes the\n> database's row in pg_stat_database and all table and function stats.\n>\n>\nRight, so basically it updates both of the fields you are talking about.\n\nThe existing stats_reset field is also updated upon\ncalling pg_stat_reset_single_*_counters()\n\nSo when the two fields are different we know that at least one relation or\nfunction statistic row is out-of-sync with the rest of the database, we\njust don't know which one(s). This is an improvement over the status quo\nwhere the single timestamp cannot be trusted to mean anything useful. The\nDBA can execute pg_stat_reset() to get the statistics back into a common\nstate.\n\nAs an added bonus we will always have a reference timestamp for when the\npg_stat_database database record was last reset (as well as any other\nstatistic record that can only be reset by using pg_stat_reset).\n\nDavid J.\n\nOn Wed, Mar 30, 2022 at 1:39 PM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2022-03-30 12:29:51 -0700, David G. Johnston wrote:\n> On Wednesday, March 30, 2022, Andres Freund <andres@anarazel.de> wrote:\n> > My current proposal is to just have two reset times. One for the contents\n> > of\n> > pg_stat_database (i.e. not affected by pg_stat_reset_single_*_counters()),\n> > and\n> > one for stats within the entire database.\n\n> What IS it affected by? 
And does whatever affects it affect anything else?\n\npg_stat_reset() resets the current database's stats. That includes the\ndatabase's row in pg_stat_database and all table and function stats.Right, so basically it updates both of the fields you are talking about.The existing stats_reset field is also updated upon calling pg_stat_reset_single_*_counters()So when the two fields are different we know that at least one relation or function statistic row is out-of-sync with the rest of the database, we just don't know which one(s). This is an improvement over the status quo where the single timestamp cannot be trusted to mean anything useful. The DBA can execute pg_stat_reset() to get the statistics back into a common state.As an added bonus we will always have a reference timestamp for when the pg_stat_database database record was last reset (as well as any other statistic record that can only be reset by using pg_stat_reset).David J.",
"msg_date": "Wed, 30 Mar 2022 14:05:01 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_reset_single_*_counters vs pg_stat_database.stats_reset"
}
] |
[
{
"msg_contents": "Hi,\n\nThe comment for pgwin32_is_junction() says \"Assumes the file exists,\nso will return false if it doesn't (since a nonexistent file is not a\njunction)\". In fact that's the behaviour for any kind of error, and\nalthough we set errno in that case, no caller ever checks it.\n\nI think it'd be better to add missing_ok and elevel parameters,\nfollowing existing patterns. Unfortunately, it can't use the generic\nfrontend logging to implement elevel in frontend code from its current\nlocation, because pgport can't call pgcommon. For now I came up with\na kludge to work around that problem, but I don't like it, and would\nneed to come up with something better...\n\nSketch code attached.",
"msg_date": "Thu, 24 Mar 2022 16:30:26 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Checking pgwin32_is_junction() errors"
},
{
"msg_contents": "On Thu, Mar 24, 2022 at 04:30:26PM +1300, Thomas Munro wrote:\n> I think it'd be better to add missing_ok and elevel parameters,\n> following existing patterns. Unfortunately, it can't use the generic\n> frontend logging to implement elevel in frontend code from its current\n> location, because pgport can't call pgcommon. For now I came up with\n> a kludge to work around that problem, but I don't like it, and would\n> need to come up with something better...\n\nThe only barrier reason why elevel if needed is because of pg_wal in\nSyncDataDirectory() that cannot fail hard. I don't have a great idea\nhere, except using a bits32 with some bitwise flags to control the\nbehavior of the routine, aka something close to a MISSING_OK and a\nFAIL_HARD_ON_ERROR. This pattern exists already in some of the\n*Extended() routines.\n--\nMichael",
"msg_date": "Thu, 21 Apr 2022 16:56:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Checking pgwin32_is_junction() errors"
},
{
"msg_contents": "Here's a better idea, now that I'm emboldened by having working CI for\nWindows frankenbuilds, and since I broke some stuff in this area on\nMSYS[1], which caused me to look more closely at this area.\n\nWhy don't we just nuke pgwin32_is_junction() from orbit, and teach\nWindows how to lstat()? We're already defining our own replacement\nstat() used in both MSVC and MSYS builds along with our own junction\npoint-based symlink() and readlink() functions, and lstat() was\nalready suggested in a comment in win32stat.c.\n\nThere's one curious change in the draft patch attached: you can't\nunlink() a junction point, you have to rmdir() it. Previously, things\nthat traverse directories without ever calling pgwin32_is_junction()\nwould see junction points as S_ISDIR() and call rmdir(), which was OK,\nbut now they see S_ISLNK() and call unlink(). So I taught unlink() to\ntry both things. Which is kinda weird, and not beautiful, especially\nwhen combined with the existing looping weirdness.\n\n0001 is a copy of v2 of Melih Mutlu's CI patch[2] to show cfbot how to\ntest this on MSYS (alongside the normal MSVC result), but that's not\npart of this submission.\n\n[1] https://www.postgresql.org/message-id/flat/b9ddf605-6b36-f90d-7c30-7b3e95c46276%40dunslane.net\n[2] https://www.postgresql.org/message-id/flat/CAGPVpCSKS9E0An4%3De7ZDnme%2By%3DWOcQFJYJegKO8kE9%3Dgh8NJKQ%40mail.gmail.com",
"msg_date": "Thu, 28 Jul 2022 21:31:44 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Checking pgwin32_is_junction() errors"
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 9:31 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> There's one curious change in the draft patch attached: you can't\n> unlink() a junction point, you have to rmdir() it. Previously, things\n> that traverse directories without ever calling pgwin32_is_junction()\n> would see junction points as S_ISDIR() and call rmdir(), which was OK,\n> but now they see S_ISLNK() and call unlink(). So I taught unlink() to\n> try both things. Which is kinda weird, and not beautiful, especially\n> when combined with the existing looping weirdness.\n\nHere's a new attempt at unlink(), this time in its own patch. This\nversion is a little more careful about calling rmdir() only after\nchecking that it is a junction point, so that unlink(\"a directory\")\nfails just like on Unix (well, POSIX says that that should fail with\nEPERM, not EACCES, and implementations are allowed to make it work\nanyway, but it doesn't seem helpful to allow it to work there when\nevery OS I know of fails with EPERM or EISDIR). That check is racy,\nbut should be good enough for our purposes, no (see comment for a note\non that)?\n\nLonger term, I wonder if we should get rid of our use of symlinks, and\ninstead just put paths in a file and do our own path translation. But\nfor now, this patch set completes the set of junction point-based\nemulations, and, IMHO, cleans up a confusing aspect of our code.\n\nAs before, 0001 is just for cfbot to add an MSYS checkmark.",
"msg_date": "Mon, 1 Aug 2022 17:09:34 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Checking pgwin32_is_junction() errors"
},
{
"msg_contents": "\nOn 2022-08-01 Mo 01:09, Thomas Munro wrote:\n> On Thu, Jul 28, 2022 at 9:31 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> There's one curious change in the draft patch attached: you can't\n>> unlink() a junction point, you have to rmdir() it. Previously, things\n>> that traverse directories without ever calling pgwin32_is_junction()\n>> would see junction points as S_ISDIR() and call rmdir(), which was OK,\n>> but now they see S_ISLNK() and call unlink(). So I taught unlink() to\n>> try both things. Which is kinda weird, and not beautiful, especially\n>> when combined with the existing looping weirdness.\n> Here's a new attempt at unlink(), this time in its own patch. This\n> version is a little more careful about calling rmdir() only after\n> checking that it is a junction point, so that unlink(\"a directory\")\n> fails just like on Unix (well, POSIX says that that should fail with\n> EPERM, not EACCES, and implementations are allowed to make it work\n> anyway, but it doesn't seem helpful to allow it to work there when\n> every OS I know of fails with EPERM or EISDIR). That check is racy,\n> but should be good enough for our purposes, no (see comment for a note\n> on that)?\n>\n> Longer term, I wonder if we should get rid of our use of symlinks, and\n> instead just put paths in a file and do our own path translation. But\n> for now, this patch set completes the set of junction point-based\n> emulations, and, IMHO, cleans up a confusing aspect of our code.\n>\n> As before, 0001 is just for cfbot to add an MSYS checkmark.\n\n\n\nI'll try it out on fairywren/drongo.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 1 Aug 2022 16:06:41 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Checking pgwin32_is_junction() errors"
},
{
"msg_contents": "\nOn 2022-08-01 Mo 16:06, Andrew Dunstan wrote:\n> On 2022-08-01 Mo 01:09, Thomas Munro wrote:\n>> On Thu, Jul 28, 2022 at 9:31 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>>> There's one curious change in the draft patch attached: you can't\n>>> unlink() a junction point, you have to rmdir() it. Previously, things\n>>> that traverse directories without ever calling pgwin32_is_junction()\n>>> would see junction points as S_ISDIR() and call rmdir(), which was OK,\n>>> but now they see S_ISLNK() and call unlink(). So I taught unlink() to\n>>> try both things. Which is kinda weird, and not beautiful, especially\n>>> when combined with the existing looping weirdness.\n>> Here's a new attempt at unlink(), this time in its own patch. This\n>> version is a little more careful about calling rmdir() only after\n>> checking that it is a junction point, so that unlink(\"a directory\")\n>> fails just like on Unix (well, POSIX says that that should fail with\n>> EPERM, not EACCES, and implementations are allowed to make it work\n>> anyway, but it doesn't seem helpful to allow it to work there when\n>> every OS I know of fails with EPERM or EISDIR). That check is racy,\n>> but should be good enough for our purposes, no (see comment for a note\n>> on that)?\n>>\n>> Longer term, I wonder if we should get rid of our use of symlinks, and\n>> instead just put paths in a file and do our own path translation. But\n>> for now, this patch set completes the set of junction point-based\n>> emulations, and, IMHO, cleans up a confusing aspect of our code.\n>>\n>> As before, 0001 is just for cfbot to add an MSYS checkmark.\n>\n>\n> I'll try it out on fairywren/drongo.\n>\n>\n\nThey are happy with patches 2, 3, and 4.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 3 Aug 2022 17:28:02 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Checking pgwin32_is_junction() errors"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 9:28 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> On 2022-08-01 Mo 16:06, Andrew Dunstan wrote:\n> > I'll try it out on fairywren/drongo.\n\n> They are happy with patches 2, 3, and 4.\n\nThanks for testing!\n\nIf there are no objections, I'll go ahead and commit these later today.\n\n\n",
"msg_date": "Thu, 4 Aug 2022 09:42:34 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Checking pgwin32_is_junction() errors"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 9:42 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Aug 4, 2022 at 9:28 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> > On 2022-08-01 Mo 16:06, Andrew Dunstan wrote:\n> > > I'll try it out on fairywren/drongo.\n>\n> > They are happy with patches 2, 3, and 4.\n>\n> Thanks for testing!\n>\n> If there are no objections, I'll go ahead and commit these later today.\n\nHmm, POSIX says st_link should contain the length of a symlink's\ntarget path, so I suppose we should probably set that even though we\nnever consult it. Here's a version that does that. I also removed\nthe rest of the now redundant #ifdef S_ISLNK conditions.",
"msg_date": "Fri, 5 Aug 2022 21:17:45 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Checking pgwin32_is_junction() errors"
},
{
"msg_contents": "On Fri, Aug 5, 2022 at 9:17 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Hmm, POSIX says st_link should contain the length of a symlink's\n> target path, so I suppose we should probably set that even though we\n> never consult it. Here's a version that does that. I also removed\n> the rest of the now redundant #ifdef S_ISLNK conditions.\n\nPushed.\n\nHmm, this stuff could *really* use a little test framework that's run\nby check-world, that exercises these various replacement operations.\nBut I also suspect that problems in this area are likely to be due to\nconcurrency. It's hard to make a simple test that simulates the case\nwhere a file is unlinked between system calls within stat() and hits\nthe STATUS_DELETE_PENDING case. That check is code I cargo-culted in\nthis patch. So much of the stuff we've had in the tree relating to\nthat area has been wrong in the past...\n\n\n",
"msg_date": "Sat, 6 Aug 2022 13:02:41 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Checking pgwin32_is_junction() errors"
},
{
"msg_contents": "On 2022-08-06 08:02, Thomas Munro wrote:\n> \n> Pushed.\n> \n> Hmm, this stuff could *really* use a little test framework that's run\n> by check-world, that exercises these various replacement operations.\n> But I also suspect that problems in this area are likely to be due to\n> concurrency. It's hard to make a simple test that simulates the case\n> where a file is unlinked between system calls within stat() and hits\n> the STATUS_DELETE_PENDING case. That check is code I cargo-culted in\n> this patch. So much of the stuff we've had in the tree relating to\n> that area has been wrong in the past...\n\nHello, hackers!\n\ninitdb on my windows 10 system stopped working after the commit\nc5cb8f3b: \"Provide lstat() for Windows.\"\nThe error message is: creating directory C:/HOME/data ... initdb:\nerror: could not create directory \"C:/HOME\": File exists\n\n\"C:/HOME\" is the junction point to the second volume on my hard drive -\n\"\\??\\Volume{GUID}\\\" which name pgreadlink() erroneously strips here:\nhttps://github.com/postgres/postgres/blob/7e29a79a46d30dc236d097825ab849158929d977/src/port/dirmod.c#L357.\nSo initdb could not stat the file with name \"Volume{GUID}\", tried to\ncreate it and failed.\nWith the attached patch initdb works fine again.\n\n-- \nregards,\n\nRoman",
"msg_date": "Mon, 08 Aug 2022 15:23:30 +0700",
"msg_from": "r.zharkov@postgrespro.ru",
"msg_from_op": false,
"msg_subject": "Re: Checking pgwin32_is_junction() errors"
},
{
"msg_contents": "On Mon, Aug 8, 2022 at 8:23 PM <r.zharkov@postgrespro.ru> wrote:\n> initdb on my windows 10 system stopped working after the commit\n> c5cb8f3b: \"Provide lstat() for Windows.\"\n> The error message is: creating directory C:/HOME/data ... initdb:\n> error: could not create directory \"C:/HOME\": File exists\n>\n> \"C:/HOME\" is the junction point to the second volume on my hard drive -\n> \"\\??\\Volume{GUID}\\\" which name pgreadlink() erroneously strips here:\n> https://github.com/postgres/postgres/blob/7e29a79a46d30dc236d097825ab849158929d977/src/port/dirmod.c#L357.\n> So initdb could not stat the file with name \"Volume{GUID}\", tried to\n> create it and failed.\n> With the attached patch initdb works fine again.\n\n- if (r > 4 && strncmp(buf, \"\\\\??\\\\\", 4) == 0)\n+ if (r > 4 && strncmp(buf, \"\\\\??\\\\\", 4) == 0 &&\n+ strncmp(buf, \"\\\\??\\\\Volume\", 10) != 0)\n {\n memmove(buf, buf + 4, strlen(buf + 4) + 1);\n r -= 4;\n\nHmm. I suppose the problem must be in pg_mkdir_p(). Our symlink()\nemulation usually adds this \"\\??\\\" prefix (making it an \"NT object\npath\"?), because junction points only work if they are in that format.\nThen our readlink() emulation removes it again, but in the case of\nyour \\??\\Volume{GUID} path, created by you, not our symlink()\nemulation, removing \"\\??\\\" apparently makes it unopenable with\nCreateFile() (I guess that's what fails? What's the error?). So your\npatch just says: don't strip \"\\??\\\" if it's followed by \"Volume\".\n\nI don't understand all the kinds of DOS, Windows and NT paths (let me\ntake a moment to say how much I love Unix), but here's a guess: could\nit be that NT \"\\??\\C:\\foo\" = DOS \"C:\\foo\", but NT \"\\??\\Volume...\" =\nDOS \"\\Volume...\"? In other words, if it hasn't got a drive letter,\nmaybe it still needs an initial \"\\\" (or if not that, then *something*\nspecial, because otherwise it looks like a relative path). 
Would it\nbe better to say: if it doesn't begin with \"\\??\\X:\", where X could be\nany letter, then don't modify it?\n\nMaybe [1] has some clues. It seems to give the info in a higher\ndensity form than the Windows docs (at least to the uninitiated like\nme wanting a quick overview with examples). Hmm, I wonder if we could\nget away from doing our own path mangling and use some of the proper\nlibrary calls mentioned on that page...\n\n[1] https://googleprojectzero.blogspot.com/2016/02/the-definitive-guide-on-win32-to-nt.html\n\n\n",
"msg_date": "Tue, 9 Aug 2022 08:30:11 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Checking pgwin32_is_junction() errors"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 8:30 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Mon, Aug 8, 2022 at 8:23 PM <r.zharkov@postgrespro.ru> wrote:\n> > \"C:/HOME\" is the junction point to the second volume on my hard drive -\n> > \"\\??\\Volume{GUID}\\\" which name pgreadlink() erroneously strips here:\n> > https://github.com/postgres/postgres/blob/7e29a79a46d30dc236d097825ab849158929d977/src/port/dirmod.c#L357.\n\n> ... Would it\n> be better to say: if it doesn't begin with \"\\??\\X:\", where X could be\n> any letter, then don't modify it?\n\nConcretely, I wonder if this is a good fix at least in the short term.\nDoes this work for you, and do the logic and explanation make sense?",
"msg_date": "Tue, 9 Aug 2022 10:44:55 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Checking pgwin32_is_junction() errors"
},
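The rule Thomas proposes, stripping the "\??\" prefix only when it is followed by a drive-absolute path such as "C:\", can be modelled as a small platform-neutral helper. This is an illustrative sketch with hypothetical function names, not the committed pgreadlink() code, but the string logic is testable anywhere:

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/*
 * Return 1 if buf is an NT object-manager path of the form "\??\X:\...",
 * i.e. the "\??\" prefix is followed by a drive-absolute Win32 path.
 * Only such targets stay meaningful after the prefix is stripped; other
 * targets, e.g. "\??\Volume{GUID}\", must be left untouched.
 */
int
has_strippable_nt_prefix(const char *buf)
{
	return strncmp(buf, "\\??\\", 4) == 0 &&
		isalpha((unsigned char) buf[4]) &&
		buf[5] == ':' &&
		buf[6] == '\\';
}

/* Strip the prefix in place when that is safe to do. */
void
maybe_strip_nt_prefix(char *buf)
{
	if (has_strippable_nt_prefix(buf))
		memmove(buf, buf + 4, strlen(buf + 4) + 1);
}
```

Under this rule "\??\C:\HOME\data" becomes "C:\HOME\data", while "\??\Volume{GUID}\" passes through unchanged, which is the behaviour the report asks for.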
{
"msg_contents": "On 2022-08-09 03:30, Thomas Munro wrote:\n\n> Then our readlink() emulation removes it again, but in the case of\n> your \\??\\Volume{GUID} path, created by you, not our symlink()\n> emulation, removing \"\\??\\\" apparently makes it unopenable with\n> CreateFile() (I guess that's what fails? What's the error?). So your\n> patch just says: don't strip \"\\??\\\" if it's followed by \"Volume\".\n\nSorry, I thought wrong that everyone sees the backtrace on my screen.\nFailes the CreateFile() function with fileName = \"Volume{GUID}\\\" at [1].\nAnd the GetLastError() returnes 2 (ERROR_FILE_NOT_FOUND).\n\nCall Stack:\ninitdb.exe!pgwin32_open_handle(const char * fileName, ...) Line 111\tC\ninitdb.exe!_pglstat64(const char * name, stat * buf) Line 128\tC\ninitdb.exe!_pgstat64(const char * name, stat * buf) Line 221\tC\ninitdb.exe!pg_mkdir_p(char * path, int omode) Line 123\tC\ninitdb.exe!create_data_directory() Line 2537\tC\ninitdb.exe!initialize_data_directory() Line 2696\tC\ninitdb.exe!main(int argc, char * * argv) Line 3102\tC\n\n> I don't understand all the kinds of DOS, Windows and NT paths (let me\n> take a moment to say how much I love Unix), but here's a guess: could\n> it be that NT \"\\??\\C:\\foo\" = DOS \"C:\\foo\", but NT \"\\??\\Volume...\" =\n> DOS \"\\Volume...\"? 
In other words, if it hasn't got a drive letter,\n> maybe it still needs an initial \"\\\" (or if not that, then *something*\n> special, because otherwise it looks like a relative path).\n\nIt seems to me, when we call CreateFile() Windows Object Manager \nsearches\nDOS devices (drive letters in our case) in DOS Device namespaces.\nBut it doesn't search the \"Volume{GUID}\" devices which must be named as\n\"\\\\?\\Volume{GUID}\\\" [2].\n\n> Would it be better to say: if it doesn't begin with \"\\??\\X:\", where X\n> could be any letter, then don't modify it?\n> \n\nI think it will be better.\n\n[1] \nhttps://github.com/postgres/postgres/blob/7e29a79a46d30dc236d097825ab849158929d977/src/port/open.c#L86\n[2] \nhttps://docs.microsoft.com/en-us/windows/win32/fileio/naming-a-volume\n\n\n",
"msg_date": "Tue, 09 Aug 2022 17:29:00 +0700",
"msg_from": "r.zharkov@postgrespro.ru",
"msg_from_op": false,
"msg_subject": "Re: Checking pgwin32_is_junction() errors"
},
{
"msg_contents": "On 2022-08-09 05:44, Thomas Munro wrote:\n> On Tue, Aug 9, 2022 at 8:30 AM Thomas Munro <thomas.munro@gmail.com> \n> wrote:\n>> On Mon, Aug 8, 2022 at 8:23 PM <r.zharkov@postgrespro.ru> wrote:\n>> > \"C:/HOME\" is the junction point to the second volume on my hard drive -\n>> > \"\\??\\Volume{GUID}\\\" which name pgreadlink() erroneously strips here:\n>> > https://github.com/postgres/postgres/blob/7e29a79a46d30dc236d097825ab849158929d977/src/port/dirmod.c#L357.\n> \n>> ... Would it\n>> be better to say: if it doesn't begin with \"\\??\\X:\", where X could be\n>> any letter, then don't modify it?\n> \n> Concretely, I wonder if this is a good fix at least in the short term.\n> Does this work for you, and do the logic and explanation make sense?\n\nYes, this patch works well with my junction points.\nI checked a few variants:\n\n21.07.2022 15:11 <JUNCTION> HOME [\\??\\Volume{GUID}\\]\n09.08.2022 15:06 <JUNCTION> Test1 [\\\\?\\Volume{GUID}\\]\n09.08.2022 15:06 <JUNCTION> Test2 [\\\\.\\Volume{GUID}\\]\n09.08.2022 15:17 <JUNCTION> Test3 [\\??\\Volume{GUID}\\]\n09.08.2022 15:27 <JUNCTION> Test4 [C:\\temp\\1]\n09.08.2022 15:28 <JUNCTION> Test5 [C:\\HOME\\Temp\\1]\n\nAfter hours of reading the documentation and debugging, it seems to me\nwe can use REPARSE_GUID_DATA_BUFFER structure instead of our\nREPARSE_JUNCTION_DATA_BUFFER [1]. DataBuffer doesn't contain any \nprefixes,\nso we don't need to strip them. But we still need to construct a correct\nvolume name if a junction point is a volume mount point. Is it worth to\ncheck this idea?\n\n[1] \nhttps://docs.microsoft.com/en-us/windows/win32/api/winnt/ns-winnt-reparse_guid_data_buffer\n\n\n",
"msg_date": "Tue, 09 Aug 2022 17:59:33 +0700",
"msg_from": "r.zharkov@postgrespro.ru",
"msg_from_op": false,
"msg_subject": "Re: Checking pgwin32_is_junction() errors"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 10:59 PM <r.zharkov@postgrespro.ru> wrote:\n> On 2022-08-09 05:44, Thomas Munro wrote:\n> > On Tue, Aug 9, 2022 at 8:30 AM Thomas Munro <thomas.munro@gmail.com>\n> > wrote:\n> >> On Mon, Aug 8, 2022 at 8:23 PM <r.zharkov@postgrespro.ru> wrote:\n> >> > \"C:/HOME\" is the junction point to the second volume on my hard drive -\n> >> > \"\\??\\Volume{GUID}\\\" which name pgreadlink() erroneously strips here:\n> >> > https://github.com/postgres/postgres/blob/7e29a79a46d30dc236d097825ab849158929d977/src/port/dirmod.c#L357.\n> >\n> >> ... Would it\n> >> be better to say: if it doesn't begin with \"\\??\\X:\", where X could be\n> >> any letter, then don't modify it?\n> >\n> > Concretely, I wonder if this is a good fix at least in the short term.\n> > Does this work for you, and do the logic and explanation make sense?\n>\n> Yes, this patch works well with my junction points.\n\nThanks for testing! I did a bit more reading on this stuff, so that I\ncould update the comments with the correct terminology from Windows\nAPIs. I also realised that the pattern we could accept to symlink()\nand expect to work is not just \"C:...\" (could be\nRtlPathTypeDriveRelative, which wouldn't actually work in a junction\npoint) but \"C:\\...\" (RtlPathTypeDriveAbsolute). I tweaked it a bit to\ntest for that.\n\n> I checked a few variants:\n>\n> 21.07.2022 15:11 <JUNCTION> HOME [\\??\\Volume{GUID}\\]\n> 09.08.2022 15:06 <JUNCTION> Test1 [\\\\?\\Volume{GUID}\\]\n> 09.08.2022 15:06 <JUNCTION> Test2 [\\\\.\\Volume{GUID}\\]\n> 09.08.2022 15:17 <JUNCTION> Test3 [\\??\\Volume{GUID}\\]\n> 09.08.2022 15:27 <JUNCTION> Test4 [C:\\temp\\1]\n> 09.08.2022 15:28 <JUNCTION> Test5 [C:\\HOME\\Temp\\1]\n\nOne more thing I wondered about, now that we're following junctions\noutside PGDATA: can a junction point to another junction? 
If so, I\ndidn't allow for that: stat() gives up after one hop, because I\nfigured that was enough for the stuff we expect inside PGDATA and I\ncouldn't find any evidence in the man pages that referred to chains.\nBut if you *are* allowed to create a junction \"c:\\huey\" that points to\njunction \"c:\\dewey\" that points to \"c:\\louie\", and then you do initdb\n-D c:\\huey\\pgdata, then I guess it would fail. Would you mind\nchecking if that is a real possibility, and if so, testing this\nchain-following patch to see if it fixes it?\n\n> After hours of reading the documentation and debugging, it seems to me\n> we can use REPARSE_GUID_DATA_BUFFER structure instead of our\n> REPARSE_JUNCTION_DATA_BUFFER [1]. DataBuffer doesn't contain any\n> prefixes,\n> so we don't need to strip them. But we still need to construct a correct\n> volume name if a junction point is a volume mount point. Is it worth to\n> check this idea?\n\nI don't know. I think I prefer our current approach, because it can\nhandle anything (raw/full NT paths) and doesn't try to be very clever,\nand I don't want to change to a different scheme for no real\nbenefit...",
"msg_date": "Thu, 11 Aug 2022 12:55:36 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Checking pgwin32_is_junction() errors"
},
{
"msg_contents": "On 2022-08-11 07:55, Thomas Munro wrote:\n>> I checked a few variants:\n>> \n>> 21.07.2022 15:11 <JUNCTION> HOME [\\??\\Volume{GUID}\\]\n>> 09.08.2022 15:06 <JUNCTION> Test1 [\\\\?\\Volume{GUID}\\]\n>> 09.08.2022 15:06 <JUNCTION> Test2 [\\\\.\\Volume{GUID}\\]\n>> 09.08.2022 15:17 <JUNCTION> Test3 [\\??\\Volume{GUID}\\]\n>> 09.08.2022 15:27 <JUNCTION> Test4 [C:\\temp\\1]\n>> 09.08.2022 15:28 <JUNCTION> Test5 [C:\\HOME\\Temp\\1]\n> \n> One more thing I wondered about, now that we're following junctions\n> outside PGDATA: can a junction point to another junction? If so, I\n> didn't allow for that: stat() gives up after one hop, because I\n> figured that was enough for the stuff we expect inside PGDATA and I\n> couldn't find any evidence in the man pages that referred to chains.\n> But if you *are* allowed to create a junction \"c:\\huey\" that points to\n> junction \"c:\\dewey\" that points to \"c:\\louie\", and then you do initdb\n> -D c:\\huey\\pgdata, then I guess it would fail. Would you mind\n> checking if that is a real possibility, and if so, testing this\n> chain-following patch to see if it fixes it?\n\nI made some junctions and rechecked both patches.\n\n11.08.2022 16:11 <JUNCTION> donald [C:\\huey]\n11.08.2022 13:23 <JUNCTION> huey [C:\\dewey]\n11.08.2022 13:23 <JUNCTION> dewey [C:\\louie]\n11.08.2022 16:57 <DIR> louie\n\nWith the small attached patch initdb succeeded in any of these\n\"directories\". 
If the junction chain is too long, initdb fails with\n\"could not create directory\" as expected.\n\ninitdb -D huey/pgdata\n...\nSuccess.\n\ninitdb -N -D donald\n...\nSuccess.\n\n11.08.2022 17:32 <DIR> 1\n11.08.2022 17:32 <JUNCTION> 2 [C:\\1]\n11.08.2022 17:32 <JUNCTION> 3 [C:\\2]\n11.08.2022 17:32 <JUNCTION> 4 [C:\\3]\n11.08.2022 17:32 <JUNCTION> 5 [C:\\4]\n11.08.2022 17:32 <JUNCTION> 6 [C:\\5]\n11.08.2022 17:32 <JUNCTION> 7 [C:\\6]\n11.08.2022 17:32 <JUNCTION> 8 [C:\\7]\n11.08.2022 17:32 <JUNCTION> 9 [C:\\8]\n11.08.2022 17:32 <JUNCTION> 10 [C:\\9]\n\ninitdb -D 10/pgdata\n...\ncreating directory 10/pgdata ... initdb: error: could not create \ndirectory \"10\": File exists\n\ninitdb -D 9/pgdata\n...\nSuccess.",
"msg_date": "Thu, 11 Aug 2022 17:40:52 +0700",
"msg_from": "r.zharkov@postgrespro.ru",
"msg_from_op": false,
"msg_subject": "Re: Checking pgwin32_is_junction() errors"
},
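The hop-limited chain following exercised above can be modelled platform-neutrally. The lookup table below is only a stand-in for readlink() on junction points, and the hop cap mirrors the observed behaviour (a chain through 8 junctions resolves, a longer one fails); this is an illustrative model of the design, not the patch itself:

```c
#include <assert.h>
#include <string.h>

/* A toy link table standing in for junction points on disk. */
struct toylink
{
	const char *from;
	const char *to;
};

/*
 * Follow a chain of links until a non-link name is reached.  Each loop
 * iteration performs one hop, like one readlink() call inside the
 * Windows stat() emulation; if the chain needs more than max_hops we
 * give up and return NULL, mirroring the "could not create directory"
 * failure initdb reports for overlong junction chains.
 */
const char *
follow_chain(const struct toylink *links, int nlinks,
			 const char *name, int max_hops)
{
	for (int hop = 0; hop <= max_hops; hop++)
	{
		int			i;

		for (i = 0; i < nlinks; i++)
			if (strcmp(links[i].from, name) == 0)
				break;
		if (i == nlinks)
			return name;		/* not a link: fully resolved */
		name = links[i].to;		/* one hop */
	}
	return NULL;				/* chain too long */
}
```

With the donald/huey/dewey/louie table above, "donald" resolves to "louie" in three hops, but fails if the cap is set below the chain length.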
{
"msg_contents": "On Thu, Aug 11, 2022 at 10:40 PM <r.zharkov@postgrespro.ru> wrote:\n> On 2022-08-11 07:55, Thomas Munro wrote:\n> >> I checked a few variants:\n> >>\n> >> 21.07.2022 15:11 <JUNCTION> HOME [\\??\\Volume{GUID}\\]\n> >> 09.08.2022 15:06 <JUNCTION> Test1 [\\\\?\\Volume{GUID}\\]\n> >> 09.08.2022 15:06 <JUNCTION> Test2 [\\\\.\\Volume{GUID}\\]\n> >> 09.08.2022 15:17 <JUNCTION> Test3 [\\??\\Volume{GUID}\\]\n> >> 09.08.2022 15:27 <JUNCTION> Test4 [C:\\temp\\1]\n> >> 09.08.2022 15:28 <JUNCTION> Test5 [C:\\HOME\\Temp\\1]\n> >\n> > One more thing I wondered about, now that we're following junctions\n> > outside PGDATA: can a junction point to another junction? If so, I\n> > didn't allow for that: stat() gives up after one hop, because I\n> > figured that was enough for the stuff we expect inside PGDATA and I\n> > couldn't find any evidence in the man pages that referred to chains.\n> > But if you *are* allowed to create a junction \"c:\\huey\" that points to\n> > junction \"c:\\dewey\" that points to \"c:\\louie\", and then you do initdb\n> > -D c:\\huey\\pgdata, then I guess it would fail. Would you mind\n> > checking if that is a real possibility, and if so, testing this\n> > chain-following patch to see if it fixes it?\n>\n> I made some junctions and rechecked both patches.\n>\n> 11.08.2022 16:11 <JUNCTION> donald [C:\\huey]\n> 11.08.2022 13:23 <JUNCTION> huey [C:\\dewey]\n> 11.08.2022 13:23 <JUNCTION> dewey [C:\\louie]\n> 11.08.2022 16:57 <DIR> louie\n>\n> With the small attached patch initdb succeeded in any of these\n> \"directories\". If the junction chain is too long, initdb fails with\n> \"could not create directory\" as expected.\n\nThanks for testing and for that fix! I do intend to push this, and a\nnearby fix for unlink(), but first I want to have test coverage for\nall this stuff so we can demonstrate comprehensively that it works via\nautomated testing, otherwise it's just impossible to maintain (at\nleast for me, Unix guy). 
I have a prototype test suite based on\nwriting TAP tests in C and I've already found more subtle ancient bugs\naround the Windows porting layer... more soon.\n\n\n",
"msg_date": "Wed, 12 Oct 2022 17:05:18 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Checking pgwin32_is_junction() errors"
}
] |
[
{
"msg_contents": "This patch should silence some recent Coverity (false positive)\ncomplaints about assertions contained in these macros.\n\nPortability testing at:\nhttps://cirrus-ci.com/github/alvherre/postgres/macros-to-inlinefuncs\n\nIntend to push later today, unless something ugly happens.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/",
"msg_date": "Thu, 24 Mar 2022 11:21:07 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "turn fastgetattr and heap_getattr to inline functions"
},
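Beyond silencing Coverity, converting a complex macro into a static inline function also removes the classic multiple-evaluation hazard: a macro expands its argument textually, so any side effects run once per mention. A minimal illustration with toy names (the old fastgetattr() macro guarded against this with local variables, at the cost of being opaque to static analysis):

```c
#include <assert.h>

/* A macro expands its argument textually, so it is evaluated per use. */
#define SQUARE_MACRO(x) ((x) * (x))

/* An inline function evaluates its argument exactly once. */
static inline int
square_inline(int x)
{
	return x * x;
}

/* Counter so we can observe how often the argument was evaluated. */
int			ncalls = 0;

int
next_value(void)
{
	ncalls++;
	return 3;
}
```

Passing next_value() to the macro bumps the counter twice; passing it to the inline function bumps it once, with the same result and no loss of performance at any optimization level.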
{
"msg_contents": "On Thu, Mar 24, 2022 at 11:21:07AM +0100, Alvaro Herrera wrote:\n> This patch should silence some recent Coverity (false positive)\n> complaints about assertions contained in these macros.\n\nThe logic looks fine. Good idea to get rid of DISABLE_COMPLEX_MACRO.\n\n> Portability testing at:\n> https://cirrus-ci.com/github/alvherre/postgres/macros-to-inlinefuncs\n> \n> Intend to push later today, unless something ugly happens.\n\nHmm. I think that you'd better add a return at the end of each\nfunction? Some compilers are dumb in detecting that all the code \npaths return (aka recent d0083c1) and could generate warnings, even if\nthings are coded to return all the time, like in your patch.\n--\nMichael",
"msg_date": "Thu, 24 Mar 2022 21:09:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: turn fastgetattr and heap_getattr to inline functions"
},
{
"msg_contents": "On 2022-Mar-24, Michael Paquier wrote:\n\n> Hmm. I think that you'd better add a return at the end of each\n> function? Some compilers are dumb in detecting that all the code \n> paths return (aka recent d0083c1) and could generate warnings, even if\n> things are coded to return all the time, like in your patch.\n\nHmm, OK to do something about that. I added pg_unreachable(): looking\nat LWLockAttemptLock(), it looks that that should be sufficient.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/",
"msg_date": "Thu, 24 Mar 2022 14:26:10 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: turn fastgetattr and heap_getattr to inline functions"
},
{
"msg_contents": "\nOn Thu, 24 Mar 2022 at 21:26, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Mar-24, Michael Paquier wrote:\n>\n>> Hmm. I think that you'd better add a return at the end of each\n>> function? Some compilers are dumb in detecting that all the code\n>> paths return (aka recent d0083c1) and could generate warnings, even if\n>> things are coded to return all the time, like in your patch.\n>\n> Hmm, OK to do something about that. I added pg_unreachable(): looking\n> at LWLockAttemptLock(), it looks that that should be sufficient.\n\nHi,\n\nI want to know why we do not use the following style?\n\n+static inline Datum\n+heap_getattr(HeapTuple tup, int attnum, TupleDesc tupleDesc, bool *isnull)\n+{\n+\tif (attnum > 0)\n+\t{\n+\t\tif (attnum > (int) HeapTupleHeaderGetNatts(tup->t_data))\n+\t\t\treturn getmissingattr(tupleDesc, attnum, isnull);\n+\t\telse\n+\t\t\treturn fastgetattr(tup, attnum, tupleDesc, isnull);\n+\t}\n+\n+\treturn heap_getsysattr(tup, attnum, tupleDesc, isnull);\n+}\n\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Thu, 24 Mar 2022 21:36:51 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: turn fastgetattr and heap_getattr to inline functions"
},
{
"msg_contents": "On 2022-Mar-24, Japin Li wrote:\n\n> I want to know why we do not use the following style?\n> \n> +static inline Datum\n> +heap_getattr(HeapTuple tup, int attnum, TupleDesc tupleDesc, bool *isnull)\n> +{\n> +\tif (attnum > 0)\n> +\t{\n> +\t\tif (attnum > (int) HeapTupleHeaderGetNatts(tup->t_data))\n> +\t\t\treturn getmissingattr(tupleDesc, attnum, isnull);\n> +\t\telse\n> +\t\t\treturn fastgetattr(tup, attnum, tupleDesc, isnull);\n> +\t}\n> +\n> +\treturn heap_getsysattr(tup, attnum, tupleDesc, isnull);\n> +}\n\nThat was the first thing I wrote, but I can't get myself to like it.\nFor this one function the code flow is obvious enough; but if you apply\nthe same idea to fastgetattr(), the result is not nice at all.\n\nIf there are enough votes for doing it this way, I can do that.\n\nI guess we could do something like this instead, which seems somewhat\nless bad:\n\nif (attnum <= 0)\n\treturn heap_getsysattr(...)\nif (likely(attnum <= HeapTupleHeaderGetNattrs(...)))\n\treturn fastgetattr(...)\n\nreturn getmissingattr(...)\n\nbut I still prefer the one in the v2 patch I posted.\n\nIt's annoying that case 0 (InvalidAttrNumber) is not well handled\nanywhere.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 24 Mar 2022 15:32:49 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: turn fastgetattr and heap_getattr to inline functions"
},
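Alvaro's alternative control flow, handling system attributes first and then marking the in-tuple fetch as the expected path, can be written out with a GCC/Clang-style likely() macro. The attribute-fetch functions here are trivial stand-ins for heap_getsysattr(), fastgetattr() and getmissingattr(), chosen only so the dispatch can be tested:

```c
#include <assert.h>

#if defined(__GNUC__) || defined(__clang__)
#define likely(x)	__builtin_expect((x) != 0, 1)
#else
#define likely(x)	(x)
#endif

/* Trivial stand-ins for the three attribute-fetch paths. */
int getsysattr(int attnum)	{ return -attnum; }
int fastget(int attnum)		{ return attnum * 10; }
int getmissing(int attnum)	{ return 0; }

/*
 * Early-return layout: system attributes first, then the common case
 * under a branch-prediction hint, and finally the missing-attribute
 * path for columns added after the tuple was formed.
 */
int
toy_getattr(int attnum, int natts)
{
	if (attnum <= 0)
		return getsysattr(attnum);
	if (likely(attnum <= natts))
		return fastget(attnum);
	return getmissing(attnum);
}
```

The hint only affects code layout, not behaviour, so the function returns the same results with or without __builtin_expect support.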
{
"msg_contents": "\nOn Thu, 24 Mar 2022 at 22:32, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Mar-24, Japin Li wrote:\n>\n>> I want to know why we do not use the following style?\n>>\n>> +static inline Datum\n>> +heap_getattr(HeapTuple tup, int attnum, TupleDesc tupleDesc, bool *isnull)\n>> +{\n>> +\tif (attnum > 0)\n>> +\t{\n>> +\t\tif (attnum > (int) HeapTupleHeaderGetNatts(tup->t_data))\n>> +\t\t\treturn getmissingattr(tupleDesc, attnum, isnull);\n>> +\t\telse\n>> +\t\t\treturn fastgetattr(tup, attnum, tupleDesc, isnull);\n>> +\t}\n>> +\n>> +\treturn heap_getsysattr(tup, attnum, tupleDesc, isnull);\n>> +}\n>\n> That was the first thing I wrote, but I can't get myself to like it.\n> For this one function the code flow is obvious enough; but if you apply\n> the same idea to fastgetattr(), the result is not nice at all.\n>\n> If there are enough votes for doing it this way, I can do that.\n>\n> I guess we could do something like this instead, which seems somewhat\n> less bad:\n>\n> if (attnum <= 0)\n> \treturn heap_getsysattr(...)\n> if (likely(attnum <= HeapTupleHeaderGetNattrs(...)))\n> \treturn fastgetattr(...)\n>\n> return getmissingattr(...)\n>\n> but I still prefer the one in the v2 patch I posted.\n>\n> It's annoying that case 0 (InvalidAttrNumber) is not well handled\n> anywhere.\n\nThanks for your detail explaination. I find bottomup_sort_and_shrink_cmp()\nhas smilar code\n\nstatic int\nbottomup_sort_and_shrink_cmp(const void *arg1, const void *arg2)\n{\n const IndexDeleteCounts *group1 = (const IndexDeleteCounts *) arg1;\n const IndexDeleteCounts *group2 = (const IndexDeleteCounts *) arg2;\n\n [...]\n\n pg_unreachable();\n\n return 0;\n}\n\nIIUC, the last statement is used to keep the compiler quiet. However,\nit doesn't exist in LWLockAttemptLock(). 
Why?\n\nThe difference between bottomup_sort_and_shrink_cmp() and LWLockAttemptlock()\nis that LWLockAttemptlock() always returned before pg_unreachable(), however,\nbottomup_sort_and_shrink_cmp() might be returned after pg_unreachable(), which\nisn't expected.\n\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Thu, 24 Mar 2022 23:41:43 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: turn fastgetattr and heap_getattr to inline functions"
},
{
"msg_contents": "On 24.03.22 13:09, Michael Paquier wrote:\n> Hmm. I think that you'd better add a return at the end of each\n> function? Some compilers are dumb in detecting that all the code\n> paths return (aka recent d0083c1) and could generate warnings, even if\n> things are coded to return all the time, like in your patch.\n\nThat is a different case. We know that not all compilers understand \nwhen elog/ereport return. But no compiler is stupid enough not to \nunderstand that\n\nfoo()\n{\n if (something)\n return this;\n else\n return that;\n}\n\nalways reaches a return.\n\n\n",
"msg_date": "Thu, 24 Mar 2022 17:40:46 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: turn fastgetattr and heap_getattr to inline functions"
},
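The two situations contrasted in this subthread can be put side by side: an if/else in which every branch returns needs no help from the programmer, whereas a function whose returns sit inside a loop can textually fall off the end, so an unreachable marker or a dummy return (as in bottomup_sort_and_shrink_cmp()) is what keeps -Wreturn-type quiet. A small sketch, assuming GCC-style built-ins:

```c
#include <assert.h>

/* Every branch returns: no compiler warns about a missing return here. */
int
sign_branches(int v)
{
	if (v >= 0)
		return 1;
	else
		return -1;
}

/*
 * Here control can textually reach the end of the function even though
 * the caller guarantees a nonzero element exists, so something after
 * the loop is needed to silence the warning: an unreachable marker, or
 * a dummy return statement.
 */
int
first_nonzero(const int *a, int n)
{
	for (int i = 0; i < n; i++)
		if (a[i] != 0)
			return a[i];
#if defined(__GNUC__) || defined(__clang__)
	__builtin_unreachable();	/* caller guarantees a nonzero element */
#else
	return 0;					/* dummy return, never taken */
#endif
}
```

Calling first_nonzero() on an all-zero array would be undefined behaviour under the unreachable marker, which is exactly the contract such annotations encode.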
{
"msg_contents": "On 24.03.22 15:32, Alvaro Herrera wrote:\n>> +static inline Datum\n>> +heap_getattr(HeapTuple tup, int attnum, TupleDesc tupleDesc, bool *isnull)\n>> +{\n>> +\tif (attnum > 0)\n>> +\t{\n>> +\t\tif (attnum > (int) HeapTupleHeaderGetNatts(tup->t_data))\n>> +\t\t\treturn getmissingattr(tupleDesc, attnum, isnull);\n>> +\t\telse\n>> +\t\t\treturn fastgetattr(tup, attnum, tupleDesc, isnull);\n>> +\t}\n>> +\n>> +\treturn heap_getsysattr(tup, attnum, tupleDesc, isnull);\n>> +}\n> That was the first thing I wrote, but I can't get myself to like it.\n> For this one function the code flow is obvious enough; but if you apply\n> the same idea to fastgetattr(), the result is not nice at all.\n\nI like your first patch. That is more of a functional style, whereas \nthe above is more of a procedural style.\n\n\n\n",
"msg_date": "Thu, 24 Mar 2022 17:44:38 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: turn fastgetattr and heap_getattr to inline functions"
},
{
"msg_contents": "On 2022-Mar-24, Peter Eisentraut wrote:\n\n> But no compiler is stupid enough not to understand that\n> \n> foo()\n> {\n> if (something)\n> return this;\n> else\n> return that;\n> }\n> \n> always reaches a return.\n\nWe have a number of examples of this pattern, so I guess it must be\ntrue. Pushed without the pg_unreachables, then.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Las navajas y los monos deben estar siempre distantes\" (Germán Poo)\n\n\n",
"msg_date": "Thu, 24 Mar 2022 18:05:12 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: turn fastgetattr and heap_getattr to inline functions"
},
{
"msg_contents": "On 2022-Mar-24, Japin Li wrote:\n\n> Thanks for your detail explaination. I find bottomup_sort_and_shrink_cmp()\n> has smilar code\n\n... except that bottomup_sort_and_shrink_cmp never handles the case of\nthe two structs being exactly identical, so I don't think this is a\ngreat counter-example.\n\n> IIUC, the last statement is used to keep the compiler quiet. However,\n> it doesn't exist in LWLockAttemptLock(). Why?\n\nWhat I do care about is the fact that LWLockAttemptLock does compile\nsilently everywhere without a final \"return dummy_value\" statement. I\ndon't have to build a theory for why the other function has a statement\nthat may or may not be actually doing anything.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Los dioses no protegen a los insensatos. Éstos reciben protección de\notros insensatos mejor dotados\" (Luis Wu, Mundo Anillo)\n\n\n",
"msg_date": "Fri, 25 Mar 2022 10:42:14 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: turn fastgetattr and heap_getattr to inline functions"
},
{
"msg_contents": "\nOn Fri, 25 Mar 2022 at 17:42, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Mar-24, Japin Li wrote:\n>\n>> Thanks for your detail explaination. I find bottomup_sort_and_shrink_cmp()\n>> has smilar code\n>\n> ... except that bottomup_sort_and_shrink_cmp never handles the case of\n> the two structs being exactly identical, so I don't think this is a\n> great counter-example.\n>\n>> IIUC, the last statement is used to keep the compiler quiet. However,\n>> it doesn't exist in LWLockAttemptLock(). Why?\n>\n> What I do care about is the fact that LWLockAttemptLock does compile\n> silently everywhere without a final \"return dummy_value\" statement.\n\nI'm just a bit confused about this.\n\n> I\n> don't have to build a theory for why the other function has a statement\n> that may or may not be actually doing anything.\n\nAnyway, thanks for your explaination!\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Fri, 25 Mar 2022 19:18:14 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: turn fastgetattr and heap_getattr to inline functions"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nI just spotted an unnecessarily gendered example involving a 'salesmen'\ntable in the UPDATE docs. Here's a patch that changes that to\n'salespeople'.\n\n- ilmari",
"msg_date": "Thu, 24 Mar 2022 18:34:55 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Doc patch: replace 'salesmen' with 'salespeople'"
},
{
"msg_contents": "> On 24 Mar 2022, at 19:34, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n\n> I just spotted an unnecessarily gendered example involving a 'salesmen'\n> table in the UPDATE docs. Here's a patch that changes that to\n> 'salespeople'.\n\nNo objections to changing that, it's AFAICT the sole such usage in the docs.\n\n> Update contact names in an accounts table to match the currently assigned\n> - salesmen:\n> + salespeople:\n> <programlisting>\n> UPDATE accounts SET (contact_first_name, contact_last_name) =\n> - (SELECT first_name, last_name FROM salesmen\n> - WHERE salesmen.id = accounts.sales_id);\n> + (SELECT first_name, last_name FROM salespeople\n> + WHERE salespeople.id = accounts.sales_id);\n\nThis example is a bit confusing to me, it's joining on accounts.sales_id to get\nthe assigned salesperson, but in the example just above we are finding the\nsalesperson by joining on accounts.sales_person. Shouldn't this be using the\nemployees table to keep it consistent? (which also avoids the gendered issue\nraised here) The same goes for the second example. Or am I missing something?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 24 Mar 2022 21:40:11 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Doc patch: replace 'salesmen' with 'salespeople'"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n\n>> On 24 Mar 2022, at 19:34, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n>\n>> I just spotted an unnecessarily gendered example involving a 'salesmen'\n>> table in the UPDATE docs. Here's a patch that changes that to\n>> 'salespeople'.\n>\n> No objections to changing that, it's AFAICT the sole such usage in the docs.\n\nThere's a mention of the travelling salesman problem in the GEQO docs\n(and one in the code comments), but that's the established name for that\nproblem (although I do note the Wikipedia page says it's \"also called\nthe travelling salesperson problem\").\n\n>> Update contact names in an accounts table to match the currently assigned\n>> - salesmen:\n>> + salespeople:\n>> <programlisting>\n>> UPDATE accounts SET (contact_first_name, contact_last_name) =\n>> - (SELECT first_name, last_name FROM salesmen\n>> - WHERE salesmen.id = accounts.sales_id);\n>> + (SELECT first_name, last_name FROM salespeople\n>> + WHERE salespeople.id = accounts.sales_id);\n>\n> This example is a bit confusing to me, it's joining on accounts.sales_id to get\n> the assigned salesperson, but in the example just above we are finding the\n> salesperson by joining on accounts.sales_person. Shouldn't this be using the\n> employees table to keep it consistent? (which also avoids the gendered issue\n> raised here) The same goes for the second example. Or am I missing something?\n\nYeah, you're right. The second section (added by Tom in commit\n8f889b1083f) is inconsistent with the first half in both table and\ncolumn names. Here's a patch that makes it all consistent, eliminating\nthe salesmen references completely, rather than renaming them.\n\n- ilmari",
"msg_date": "Fri, 25 Mar 2022 12:59:35 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Re: Doc patch: replace 'salesmen' with 'salespeople'"
},
{
"msg_contents": "> On 25 Mar 2022, at 13:59, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n> \n>>> On 24 Mar 2022, at 19:34, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n>> \n>>> I just spotted an unnecessarily gendered example involving a 'salesmen'\n>>> table in the UPDATE docs. Here's a patch that changes that to\n>>> 'salespeople'.\n>> \n>> No objections to changing that, it's AFAICT the sole such usage in the docs.\n> \n> There's a mention of the travelling salesman problem in the GEQO docs\n> (and one in the code comments), but that's the established name for that\n> problem (although I do note the Wikipedia page says it's \"also called\n> the travelling salesperson problem\").\n\nI would be slightly worried about \"git grep'ability\" when changing such an\nestablished name (even though the risk might be miniscule here). Unless it's\ndeemed controversial I would err on the side of caution and leave this alone.\n\n>>> Update contact names in an accounts table to match the currently assigned\n>>> - salesmen:\n>>> + salespeople:\n>>> <programlisting>\n>>> UPDATE accounts SET (contact_first_name, contact_last_name) =\n>>> - (SELECT first_name, last_name FROM salesmen\n>>> - WHERE salesmen.id = accounts.sales_id);\n>>> + (SELECT first_name, last_name FROM salespeople\n>>> + WHERE salespeople.id = accounts.sales_id);\n>> \n>> This example is a bit confusing to me, it's joining on accounts.sales_id to get\n>> the assigned salesperson, but in the example just above we are finding the\n>> salesperson by joining on accounts.sales_person. Shouldn't this be using the\n>> employees table to keep it consistent? (which also avoids the gendered issue\n>> raised here) The same goes for the second example. Or am I missing something?\n> \n> Yeah, you're right. The second section (added by Tom in commit\n> 8f889b1083f) is inconsistent with the first half in both table and\n> column names. 
Here's a patch that makes it all consistent, eliminating\n> the salesmen references completely, rather than renaming them.\n\nI think this is an improvement, both in language and content. The example does\nshow off a strange choice of schema but it's after all an example of syntax and\nnot data modelling. Barring objections I plan to go ahead with this.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Sat, 26 Mar 2022 21:08:38 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Doc patch: replace 'salesmen' with 'salespeople'"
}
] |
[
{
"msg_contents": "Fix possible recovery trouble if TRUNCATE overlaps a checkpoint.\n\nIf TRUNCATE causes some buffers to be invalidated and thus the\ncheckpoint does not flush them, TRUNCATE must also ensure that the\ncorresponding files are truncated on disk. Otherwise, a replay\nfrom the checkpoint might find that the buffers exist but have\nthe wrong contents, which may cause replay to fail.\n\nReport by Teja Mupparti. Patch by Kyotaro Horiguchi, per a design\nsuggestion from Heikki Linnakangas, with some changes to the\ncomments by me. Review of this and a prior patch that approached\nthe issue differently by Heikki Linnakangas, Andres Freund, Álvaro\nHerrera, Masahiko Sawada, and Tom Lane.\n\nDiscussion: http://postgr.es/m/BYAPR06MB6373BF50B469CA393C614257ABF00@BYAPR06MB6373.namprd06.prod.outlook.com\n\nBranch\n------\nREL_14_STABLE\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/bbace5697df12398e87ffd9879171c39d27f5b33\n\nModified Files\n--------------\nsrc/backend/access/transam/multixact.c | 6 +++---\nsrc/backend/access/transam/twophase.c | 12 ++++++-----\nsrc/backend/access/transam/xact.c | 5 +++--\nsrc/backend/access/transam/xlog.c | 16 ++++++++++++--\nsrc/backend/access/transam/xloginsert.c | 2 +-\nsrc/backend/catalog/storage.c | 29 +++++++++++++++++++++++++-\nsrc/backend/storage/buffer/bufmgr.c | 6 ++++--\nsrc/backend/storage/ipc/procarray.c | 26 ++++++++++++++++-------\nsrc/backend/storage/lmgr/proc.c | 4 ++--\nsrc/include/storage/proc.h | 37 ++++++++++++++++++++++++++++++++-\nsrc/include/storage/procarray.h | 5 +++--\n11 files changed, 120 insertions(+), 28 deletions(-)",
"msg_date": "Thu, 24 Mar 2022 19:32:54 +0000",
"msg_from": "Robert Haas <rhaas@postgresql.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Fix possible recovery trouble if TRUNCATE overlaps a\n checkpoint."
},
{
"msg_contents": "\nOn 24.03.22 20:32, Robert Haas wrote:\n> Fix possible recovery trouble if TRUNCATE overlaps a checkpoint.\n\nThis patch changed the delayChkpt field of struct PGPROC from bool to \nint. Back-porting this change could be considered an API breaking \nchange for extensions using this field.\n\nI'm not certain about padding behavior of compilers in general (or \nstandards requirements around that), but at least on my machine, it \nseems sizeof(PGPROC) did not change, so padding led to subsequent fields \nstill having the same offset.\n\nNonetheless, the meaning of the field itself changed. And the \nadditional assert now also triggers for the following pseudo-code of the \nextension I'm concerned about:\n\n /*\n * Prevent checkpoints being emitted in between additional\n * information in the logical message and the following\n * prepare record.\n */\n MyProc->delayChkpt = true;\n\n LogLogicalMessage(...);\n\n /* Note that this will also reset the delayChkpt flag. */\n PrepareTransaction(...);\n\n\nNow, I'm well aware this is not an official API, it just happens to be \naccessible for extensions. So I guess the underlying question is: What \ncan extension developers expect? Which parts are okay to change even in \nstable branches and which can be relied upon to remain stable?\n\nAnd for this specific case: Is it worth reverting this change and \napplying a fully backwards compatible fix, instead?\n\nRegards\n\nMarkus Wanner\n\n\n",
"msg_date": "Tue, 5 Apr 2022 15:02:02 +0200",
"msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "API stability [was: pgsql: Fix possible recovery trouble if TRUNCATE\n overlaps a checkpoint.]"
},
{
"msg_contents": "On Tue, Apr 5, 2022 at 9:02 AM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n> And for this specific case: Is it worth reverting this change and\n> applying a fully backwards compatible fix, instead?\n\nI think it's normally our policy to avoid changing definitions of\naccessible structs in back branches, except that we allow ourselves\nthe indulgence of adding new members at the end or in padding space.\nSo what would probably be best is if, in the back-branches, we changed\n\"delayChkpt\" back to a boolean, renamed it to delayChkptStart, and\nadded a separate Boolean called delayChkptEnd. Maybe that could be\nadded just after statusFlags, where I think it would fall into padding\nspace.\n\nI think as the person who committed that patch I'm on the hook to fix\nthis if nobody else would like to do it, but let me ask whether\nKyotaro Horiguchi would like to propose a patch, since the original\npatch did, and/or whether you would like to propose a patch, as the\nperson reporting the issue.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Apr 2022 10:01:56 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability [was: pgsql: Fix possible recovery trouble if\n TRUNCATE overlaps a checkpoint.]"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Apr 5, 2022 at 9:02 AM Markus Wanner\n> <markus.wanner@enterprisedb.com> wrote:\n>> And for this specific case: Is it worth reverting this change and\n>> applying a fully backwards compatible fix, instead?\n\n> I think it's normally our policy to avoid changing definitions of\n> accessible structs in back branches, except that we allow ourselves\n> the indulgence of adding new members at the end or in padding space.\n> So what would probably be best is if, in the back-branches, we changed\n> \"delayChkpt\" back to a boolean, renamed it to delayChkptStart, and\n> added a separate Boolean called delayChkptEnd. Maybe that could be\n> added just after statusFlags, where I think it would fall into padding\n> space.\n\nRenaming it would constitute an API break, which is if anything worse\nthan an ABI break.\n\nWhile we're complaining at you, let me point out that changing a field's\ncontent and semantics while not changing its name is a time bomb waiting\nto break any third-party code that looks at (or modifies...) the field.\n\nWhat I think you need to do is:\n\n1. In the back branches, revert delayChkpt to its previous type and\nsemantics. Squeeze a separate delayChkptEnd bool in somewhere\n(you can't change the struct size either ...).\n\n2. In HEAD, rename the field to something like delayChkptFlags,\nto ensure that any code touching it has to be inspected and updated.\n\nIn other words, this is already an API break in HEAD, and that's\nfine, but it didn't break it hard enough to draw anyone's attention,\nwhich is not fine.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Apr 2022 10:17:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: API stability [was: pgsql: Fix possible recovery trouble if\n TRUNCATE overlaps a checkpoint.]"
},
{
"msg_contents": "On Tue, Apr 5, 2022 at 10:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Renaming it would constitute an API break, which is if anything worse\n> than an ABI break.\n\nI don't think so, because an API break will cause a compilation\nfailure, which an extension author can easily fix.\n\n> While we're complaining at you, let me point out that changing a field's\n> content and semantics while not changing its name is a time bomb waiting\n> to break any third-party code that looks at (or modifies...) the field.\n>\n> What I think you need to do is:\n>\n> 1. In the back branches, revert delayChkpt to its previous type and\n> semantics. Squeeze a separate delayChkptEnd bool in somewhere\n> (you can't change the struct size either ...).\n>\n> 2. In HEAD, rename the field to something like delayChkptFlags,\n> to ensure that any code touching it has to be inspected and updated.\n\nWell, we can do it that way, I suppose.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Apr 2022 10:29:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability [was: pgsql: Fix possible recovery trouble if\n TRUNCATE overlaps a checkpoint.]"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Apr 5, 2022 at 10:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Renaming it would constitute an API break, which is if anything worse\n>> than an ABI break.\n\n> I don't think so, because an API break will cause a compilation\n> failure, which an extension author can easily fix.\n\nMy point is that we want that to happen in HEAD, but it's not okay\nfor it to happen in a minor release of a stable branch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Apr 2022 10:32:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: API stability [was: pgsql: Fix possible recovery trouble if\n TRUNCATE overlaps a checkpoint.]"
},
{
"msg_contents": "On Tue, Apr 5, 2022 at 10:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Tue, Apr 5, 2022 at 10:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Renaming it would constitute an API break, which is if anything worse\n> >> than an ABI break.\n>\n> > I don't think so, because an API break will cause a compilation\n> > failure, which an extension author can easily fix.\n>\n> My point is that we want that to happen in HEAD, but it's not okay\n> for it to happen in a minor release of a stable branch.\n\nI understand, but I am not sure that I agree. I think that if an\nextension stops compiling against a back-branch, someone will notice\nthe next time they try to compile it and will fix it. Maybe that's not\namazing, but I don't think it's a huge deal either. On the other hand,\nif existing builds that someone has already shipped stop working with\na new server release, that's a much larger issue. The extension\npackager can't go back and retroactively add a dependency on the\nserver version to the already-shipped package. A new package can be\nshipped and specify a minimum minor version with which it will work,\nbut an old package is what it is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Apr 2022 10:51:01 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability [was: pgsql: Fix possible recovery trouble if\n TRUNCATE overlaps a checkpoint.]"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Apr 5, 2022 at 10:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> My point is that we want that to happen in HEAD, but it's not okay\n>> for it to happen in a minor release of a stable branch.\n\n> I understand, but I am not sure that I agree. I think that if an\n> extension stops compiling against a back-branch, someone will notice\n> the next time they try to compile it and will fix it. Maybe that's not\n> amazing, but I don't think it's a huge deal either.\n\nWell, perhaps it's not the end of the world, but it's still a large\nPITA for the maintainer of such an extension. They can't \"just fix it\"\nbecause some percentage of their userbase will still need to compile\nagainst older minor releases. Nor have you provided any way to handle\nthat requirement via conditional compilation.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Apr 2022 15:16:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: API stability [was: pgsql: Fix possible recovery trouble if\n TRUNCATE overlaps a checkpoint.]"
},
{
"msg_contents": "At Tue, 5 Apr 2022 10:01:56 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> I think as the person who committed that patch I'm on the hook to fix\n> this if nobody else would like to do it, but let me ask whether\n> Kyotaro Horiguchi would like to propose a patch, since the original\n> patch did, and/or whether you would like to propose a patch, as the\n> person reporting the issue.\n\nI'd like to do that. Let me see.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 06 Apr 2022 10:36:30 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability"
},
{
"msg_contents": "me> I'd like to do that. Let me see.\n\nAt Tue, 5 Apr 2022 10:29:03 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Tue, Apr 5, 2022 at 10:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > What I think you need to do is:\n> >\n> > 1. In the back branches, revert delayChkpt to its previous type and\n> > semantics. Squeeze a separate delayChkptEnd bool in somewhere\n> > (you can't change the struct size either ...).\n> >\n> > 2. In HEAD, rename the field to something like delayChkptFlags,\n> > to ensure that any code touching it has to be inspected and updated.\n> \n> Well, we can do it that way, I suppose.\n\nThe change is easy on head, but is it better use uint8 instead of int\nfor delayChkptFlags?\n\nIn the back branches, we have, on gcc/Linux/x86-64,\n14's PGPROC is 880 bytes and has gaps:\n\n- 6 bytes after statusFlag\n- 4 bytes after syncRepState\n- 2 bytes after subxidStatus\n- 3 bytes after procArrayGroupMember\n- 3 bytes after clogGroupMember\n- 3 bytes after fpVXIDLock\n\nIt seems that we can place the new variable in the first place above,\nsince the two are not bonded together, or at least in less tightly\nbonded than other candidates.\n\n13's PGPROC is 856 bytes and has a 7 bytes gap after delayChkpt.\n\nVersions Ealier than 13 have delayChkpt in PGXACT (12 bytes). It is\ntightly packed and dones't have a room for a new member. 
Can we add\nthe new flag to PGPROC instead of PGXACT?\n \n12 and 11's PGPROC is 848 bytes and has gaps:\n - 4 bytes after syncRepState\n - 3 bytes after procArrayGroupMember\n - 3 bytes after clogGroupMember\n - 4 bytes after clogGroupMemberPage\n - 3 bytes after fpVXIDLock\n\n\n10's PGPROC is 816 bytes and has gaps:\n - 4 bytes after cvWaitLink\n - 4 bytes after syncRepState\n - 3 bytes after procArrayGroupMember\n - 3 bytes after fpVXIDLock\n\nSo if we don't want to move any member in PGPROC, we do:\n\n14: after statusFlags.\n13: after delayChkpt.\n12-10: after syncRepState (and before syncRepLinks).\n\nIf we allow to shift some members, the new flag can be placed more\nsaner place.\n\n14: after delayChkpt ((uint8)statusFlags moves forward by 1 byte)\n13: after delayChkpt (no member moves)\n12-10: after subxids ((bool)procArrayGroupMember moves forward by 1 byte)\n\nI continue working on the last direction above.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 06 Apr 2022 14:30:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability"
},
{
"msg_contents": "At Wed, 06 Apr 2022 14:30:37 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> So if we don't want to move any member in PGPROC, we do:\n> \n> 14: after statusFlags.\n> 13: after delayChkpt.\n> 12-10: after syncRepState (and before syncRepLinks).\n> \n> If we allow to shift some members, the new flag can be placed more\n> saner place.\n> \n> 14: after delayChkpt ((uint8)statusFlags moves forward by 1 byte)\n> 13: after delayChkpt (no member moves)\n> 12-10: after subxids ((bool)procArrayGroupMember moves forward by 1 byte)\n> \n> I continue working on the last direction above.\n\nHmm. That is ABI break. I go with the first way.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 06 Apr 2022 15:31:53 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability"
},
{
"msg_contents": "At Wed, 06 Apr 2022 15:31:53 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Wed, 06 Apr 2022 14:30:37 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > So if we don't want to move any member in PGPROC, we do:\n> > \n> > 14: after statusFlags.\n> > 13: after delayChkpt.\n> > 12-10: after syncRepState (and before syncRepLinks).\n> > \n> > If we allow to shift some members, the new flag can be placed more\n> > saner place.\n> > \n> > 14: after delayChkpt ((uint8)statusFlags moves forward by 1 byte)\n> > 13: after delayChkpt (no member moves)\n> > 12-10: after subxids ((bool)procArrayGroupMember moves forward by 1 byte)\n> > \n> > I continue working on the last direction above.\n> \n> Hmm. That is ABI break. I go with the first way.\n\nBy the way, the patch for -14 changed the sigunature of two public\nfunctions.\n\n-GetVirtualXIDsDelayingChkpt(int *nvxids)\n+GetVirtualXIDsDelayingChkpt(int *nvxids, int type)\n\n-HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids)\n+HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids, int type)\n\n\nDo I need to restore the signature?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 06 Apr 2022 15:53:32 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability"
},
{
"msg_contents": "At Wed, 06 Apr 2022 15:53:32 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Wed, 06 Apr 2022 15:31:53 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > At Wed, 06 Apr 2022 14:30:37 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > > So if we don't want to move any member in PGPROC, we do:\n> > > \n> > > 14: after statusFlags.\n> > > 13: after delayChkpt.\n> > > 12-10: after syncRepState (and before syncRepLinks).\n> > > \n> > > If we allow to shift some members, the new flag can be placed more\n> > > saner place.\n> > > \n> > > 14: after delayChkpt ((uint8)statusFlags moves forward by 1 byte)\n> > > 13: after delayChkpt (no member moves)\n> > > 12-10: after subxids ((bool)procArrayGroupMember moves forward by 1 byte)\n> > > \n> > > I continue working on the last direction above.\n> > \n> > Hmm. That is ABI break. I go with the first way.\n> \n> By the way, the patch for -14 changed the sigunature of two public\n> functions.\n> \n> -GetVirtualXIDsDelayingChkpt(int *nvxids)\n> +GetVirtualXIDsDelayingChkpt(int *nvxids, int type)\n> \n> -HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids)\n> +HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids, int type)\n> \n> Do I need to restore the signature?\n\nFor master, renamed delayChkpt to delayChkptFlags and changed it to uint8.\n\nFor 14, restored delayChkpt to bool and added delayChkptEnd into a gap in PGPROC, then restored the signature of the two functions above to before the patch. Then added a new functions ..DelayingChkptEnd().\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 06 Apr 2022 16:45:21 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability"
},
{
"msg_contents": "On 2022-Apr-06, Kyotaro Horiguchi wrote:\n\n> For master, renamed delayChkpt to delayChkptFlags and changed it to\n> uint8.\n\nFor code documentation purposes, I think it is slightly better to use\nbits8 than uint8 for variables where you're storing independent bit flags.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Nunca se desea ardientemente lo que solo se desea por razón\" (F. Alexandre)\n\n\n",
"msg_date": "Wed, 6 Apr 2022 10:30:32 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: API stability"
},
{
"msg_contents": "At Wed, 6 Apr 2022 10:30:32 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> On 2022-Apr-06, Kyotaro Horiguchi wrote:\n> \n> > For master, renamed delayChkpt to delayChkptFlags and changed it to\n> > uint8.\n> \n> For code documentation purposes, I think it is slightly better to use\n> bits8 than uint8 for variables where you're storing independent bit flags.\n\nOh, agreed. Will fix in the next version along with other fixes.\n\nThanks!\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 06 Apr 2022 18:13:17 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability"
},
{
"msg_contents": "At Wed, 06 Apr 2022 18:13:17 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Wed, 6 Apr 2022 10:30:32 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> > For code documentation purposes, I think it is slightly better to use\n> > bits8 than uint8 for variables where you're storing independent bit flags.\n> \n> Oh, agreed. Will fix in the next version along with other fixes.\n\nThe immediately folloing member statusFlags is in uint8. So using\nbits8 here results in the following look.\n\n>\tbits8\t\tdelayChkptFlags;/* for DELAY_CHKPT_* flags */\n>\n>\tuint8\t\tstatusFlags;\t/* this backend's status flags, see PROC_*\n>\t\t\t\t\t\t\t\t * above. mirrored in\n\nPGPROC has another member that fits bits*.\n\n>\tuint64\t\tfpLockBits;\t\t/* lock modes held for each fast-path slot */\n\nDo I change this in this patch? Or leave them for another chance?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 06 Apr 2022 18:21:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability"
},
{
"msg_contents": "On Tue, Apr 05, 2022 at 03:16:20PM -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> On Tue, Apr 5, 2022 at 10:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> My point is that we want that to happen in HEAD, but it's not okay\n>>> for it to happen in a minor release of a stable branch.\n> \n>> I understand, but I am not sure that I agree. I think that if an\n>> extension stops compiling against a back-branch, someone will notice\n>> the next time they try to compile it and will fix it. Maybe that's not\n>> amazing, but I don't think it's a huge deal either.\n\nI agree with Tom's argument. The internals of this structure should\nnot have changed in a stable branch.\n\n> Well, perhaps it's not the end of the world, but it's still a large\n> PITA for the maintainer of such an extension. They can't \"just fix it\"\n> because some percentage of their userbase will still need to compile\n> against older minor releases. Nor have you provided any way to handle\n> that requirement via conditional compilation.\n\nFor example, I recall that some external extensions make use of\nsizeof(PGPROC) for their own business. Isn't 412ad7a going to be a\nproblem to change this structure's internals for already-compiled code\non stable branches?\n--\nMichael",
"msg_date": "Thu, 7 Apr 2022 15:28:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: API stability [was: pgsql: Fix possible recovery trouble if\n TRUNCATE overlaps a checkpoint.]"
},
{
"msg_contents": "On Thu, Apr 7, 2022 at 2:28 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > Well, perhaps it's not the end of the world, but it's still a large\n> > PITA for the maintainer of such an extension. They can't \"just fix it\"\n> > because some percentage of their userbase will still need to compile\n> > against older minor releases. Nor have you provided any way to handle\n> > that requirement via conditional compilation.\n>\n> For example, I recall that some external extensions make use of\n> sizeof(PGPROC) for their own business. Isn't 412ad7a going to be a\n> problem to change this structure's internals for already-compiled code\n> on stable branches?\n\nI don't think that commit changed sizeof(PGPROC), but it did affect\nthe position of the delayChkpt and statusFlags members within the\nstruct, which is what we now need to fix. Since I don't hear anyone\nelse volunteering to take care of that, I'll go work on it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 Apr 2022 10:04:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability [was: pgsql: Fix possible recovery trouble if\n TRUNCATE overlaps a checkpoint.]"
},
{
"msg_contents": "On Tue, Apr 5, 2022 at 10:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> What I think you need to do is:\n>\n> 1. In the back branches, revert delayChkpt to its previous type and\n> semantics. Squeeze a separate delayChkptEnd bool in somewhere\n> (you can't change the struct size either ...).\n>\n> 2. In HEAD, rename the field to something like delayChkptFlags,\n> to ensure that any code touching it has to be inspected and updated.\n\nHere are patches for master and v14 to do things this way. Comments?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 7 Apr 2022 11:19:15 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability [was: pgsql: Fix possible recovery trouble if\n TRUNCATE overlaps a checkpoint.]"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Here are patches for master and v14 to do things this way. Comments?\n\nWFM.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 07 Apr 2022 11:51:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: API stability [was: pgsql: Fix possible recovery trouble if\n TRUNCATE overlaps a checkpoint.]"
},
{
"msg_contents": "\n> On 7. 4. 2022, at 17:19, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Tue, Apr 5, 2022 at 10:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What I think you need to do is:\n>> \n>> 1. In the back branches, revert delayChkpt to its previous type and\n>> semantics. Squeeze a separate delayChkptEnd bool in somewhere\n>> (you can't change the struct size either ...).\n>> \n>> 2. In HEAD, rename the field to something like delayChkptFlags,\n>> to ensure that any code touching it has to be inspected and updated.\n> \n> Here are patches for master and v14 to do things this way. Comments?\n\n\nYeah I think this should do it (compilers should warn on master even without the rename, but who notices that right? :) )\n\nThanks,\nPetr\n\n\n\n",
"msg_date": "Thu, 7 Apr 2022 20:07:17 +0200",
"msg_from": "Petr Jelinek <petr.jelinek@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability [was: pgsql: Fix possible recovery trouble if\n TRUNCATE overlaps a checkpoint.]"
},
{
"msg_contents": "On Thu, Apr 07, 2022 at 11:19:15AM -0400, Robert Haas wrote:\n> Here are patches for master and v14 to do things this way. Comments?\n\nThanks for the patches. They look correct. For ~14, I'd rather avoid\nthe code duplication done by GetVirtualXIDsDelayingChkptEnd() and\nHaveVirtualXIDsDelayingChkpt() that could be avoided with an extra\nbool argument to the existing routine. The same kind of duplication\nhappens with GetVirtualXIDsDelayingChkpt() and\nGetVirtualXIDsDelayingChkptEnd().\n--\nMichael",
"msg_date": "Fri, 8 Apr 2022 08:47:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: API stability [was: pgsql: Fix possible recovery trouble if\n TRUNCATE overlaps a checkpoint.]"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, Apr 07, 2022 at 11:19:15AM -0400, Robert Haas wrote:\n>> Here are patches for master and v14 to do things this way. Comments?\n\n> Thanks for the patches. They look correct. For ~14, I'd rather avoid\n> the code duplication done by GetVirtualXIDsDelayingChkptEnd() and\n> HaveVirtualXIDsDelayingChkpt() that could be avoided with an extra\n> bool argument to the existing routine.\n\nIsn't adding another argument an API break? (If there's any outside\ncode calling GetVirtualXIDsDelayingChkpt, which it seems like there\nmight be.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 07 Apr 2022 19:52:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: API stability [was: pgsql: Fix possible recovery trouble if\n TRUNCATE overlaps a checkpoint.]"
},
{
"msg_contents": "At Fri, 8 Apr 2022 08:47:42 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, Apr 07, 2022 at 11:19:15AM -0400, Robert Haas wrote:\n> > Here are patches for master and v14 to do things this way. Comments?\n> \n> Thanks for the patches. They look correct. For ~14, I'd rather avoid\n> the code duplication done by GetVirtualXIDsDelayingChkptEnd() and\n> HaveVirtualXIDsDelayingChkpt() that could be avoided with an extra\n> bool argument to the existing routine. The same kind of duplication\n> happens with GetVirtualXIDsDelayingChkpt() and\n> GetVirtualXIDsDelayingChkptEnd().\n\nFWIW, it is done in [1].\n\n[1] https://www.postgresql.org/message-id/20220406.164521.17171257901083417.horikyota.ntt%40gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 08 Apr 2022 10:32:55 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability"
},
{
"msg_contents": "(Mmm. My mailer automatically teared off the [was: ..] part from the\nsubject..)\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 08 Apr 2022 10:34:55 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability"
},
{
"msg_contents": "On Thu, Apr 7, 2022 at 7:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Thu, Apr 07, 2022 at 11:19:15AM -0400, Robert Haas wrote:\n> >> Here are patches for master and v14 to do things this way. Comments?\n>\n> > Thanks for the patches. They look correct. For ~14, I'd rather avoid\n> > the code duplication done by GetVirtualXIDsDelayingChkptEnd() and\n> > HaveVirtualXIDsDelayingChkpt() that could be avoided with an extra\n> > bool argument to the existing routine.\n>\n> Isn't adding another argument an API break? (If there's any outside\n> code calling GetVirtualXIDsDelayingChkpt, which it seems like there\n> might be.)\n\nYeah, that's exactly why I didn't do what Michael proposes. If we're\ngoing to go to this trouble to avoid changing the layout of a PGPROC,\nwe must be doing that on the theory that extension code cares about\ndelayChkpt. And if that is so, it seems reasonable to suppose that it\nmight also want to call the associated functions.\n\nHonestly, I wouldn't have thought that this mattered, because I\nwouldn't have guessed that any non-core code cared about delayChkpt.\nBut I would have been wrong.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 Apr 2022 22:19:35 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability [was: pgsql: Fix possible recovery trouble if\n TRUNCATE overlaps a checkpoint.]"
},
{
"msg_contents": "On Thu, Apr 07, 2022 at 10:19:35PM -0400, Robert Haas wrote:\n> Yeah, that's exactly why I didn't do what Michael proposes. If we're\n> going to go to this trouble to avoid changing the layout of a PGPROC,\n> we must be doing that on the theory that extension code cares about\n> delayChkpt. And if that is so, it seems reasonable to suppose that it\n> might also want to call the associated functions.\n\nCompatibility does not strike me as a problem with two static inline \nfunctions used as wrappers of their common logic.\n\n> Honestly, I wouldn't have thought that this mattered, because I\n> wouldn't have guessed that any non-core code cared about delayChkpt.\n> But I would have been wrong.\n\nThat's a minor point. If you wish to keep this code as you are\nproposing, that's fine as well by me.\n--\nMichael",
"msg_date": "Fri, 8 Apr 2022 16:13:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: API stability [was: pgsql: Fix possible recovery trouble if\n TRUNCATE overlaps a checkpoint.]"
},
{
"msg_contents": "On Fri, 2022-04-08 at 08:47 +0900, Michael Paquier wrote:\n> On Thu, Apr 07, 2022 at 11:19:15AM -0400, Robert Haas wrote:\n> > Here are patches for master and v14 to do things this way.\n> > Comments?\n> \n> Thanks for the patches. They look correct.\n\n+1, looks good to me and addresses my specific original concern.\n\n> For ~14, I'd rather avoid\n> the code duplication done by GetVirtualXIDsDelayingChkptEnd() and\n> HaveVirtualXIDsDelayingChkpt() that could be avoided with an extra\n> bool argument to the existing routine. The same kind of duplication\n> happens with GetVirtualXIDsDelayingChkpt() and\n> GetVirtualXIDsDelayingChkptEnd().\n\nI agree with Michael, it would be nice to not duplicate the code, but\nuse a common underlying method. A modified patch is attached.\n\nBest Regards\n\nMarkus",
"msg_date": "Fri, 08 Apr 2022 10:47:26 +0200",
"msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability [was: pgsql: Fix possible recovery trouble if\n TRUNCATE overlaps a checkpoint.]"
},
{
"msg_contents": "On Fri, Apr 8, 2022 at 4:47 AM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n> I agree with Michael, it would be nice to not duplicate the code, but\n> use a common underlying method. A modified patch is attached.\n\nI don't think this is better, but I don't think it's worth arguing\nabout, either, so I'll do it this way if nobody objects.\n\nMeanwhile, I've committed the patch for master to master.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 8 Apr 2022 11:50:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability [was: pgsql: Fix possible recovery trouble if\n TRUNCATE overlaps a checkpoint.]"
},
{
"msg_contents": "(a bit off-topic)\n\nI'm not sure where I am..\n\nAt Wed, 06 Apr 2022 10:36:30 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \nme> > this if nobody else would like to do it, but let me ask whether\nme> > Kyotaro Horiguchi would like to propose a patch, since the original\nme> > patch did, and/or whether you would like to propose a patch, as the\nme> > person reporting the issue.\nme> \nme> I'd like to do that. Let me see.\n\nAt Thu, 7 Apr 2022 10:04:20 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> struct, which is what we now need to fix. Since I don't hear anyone\n> else volunteering to take care of that, I'll go work on it.\n\nJust confirmation. Is my message above didn't look like declaring that\nI'd like to volunteering? If so, please teach me the correct way to\nsay that, since I don't want to repeat the same mistake. Or are there\nsome other reasons? (Sorry if this looks like a blame, but I asking\nplainly (really:).)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 11 Apr 2022 13:29:55 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability"
},
{
"msg_contents": "On Mon, 11 Apr 2022 at 06:30, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> (a bit off-topic)\n>\n> I'm not sure where I am..\n>\n> At Wed, 06 Apr 2022 10:36:30 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> me> > this if nobody else would like to do it, but let me ask whether\n> me> > Kyotaro Horiguchi would like to propose a patch, since the original\n> me> > patch did, and/or whether you would like to propose a patch, as the\n> me> > person reporting the issue.\n> me>\n> me> I'd like to do that. Let me see.\n>\n> At Thu, 7 Apr 2022 10:04:20 -0400, Robert Haas <robertmhaas@gmail.com> wrote in\n> > struct, which is what we now need to fix. Since I don't hear anyone\n> > else volunteering to take care of that, I'll go work on it.\n>\n> Just confirmation. Is my message above didn't look like declaring that\n> I'd like to volunteering? If so, please teach me the correct way to\n> say that, since I don't want to repeat the same mistake. Or are there\n> some other reasons? (Sorry if this looks like a blame, but I asking\n> plainly (really:).)\n\nI won't speak for Robert H., but this might be because of gmail not\nputting this mail in the right thread: Your mail client dropped the\n\"[was: pgsql: ...]\" tag, which Gmail subsequently displays as a\ndifferent thread (that is, in my Gmail UI there's three \"Re: API\nstability\" threads, one of which has the [was: pgsql: ...]-tag, and\ntwo of which seem to be started by you as a reply on the original\nthread, but with the [was: pgsql: ...]-tag dropped and thus considered\na new thread).\n\nSo, this might be the reason Robert overlooked your declaration to\nvolunteer: he was looking for volunteers in the thread \"Re: API\nStability [was: pgsql: ...]\" in the Gmail UI, which didn't show your\nmessages there because of the different subject line.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Mon, 11 Apr 2022 12:48:25 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability"
},
{
"msg_contents": "On Mon, Apr 11, 2022 at 6:48 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> So, this might be the reason Robert overlooked your declaration to\n> volunteer: he was looking for volunteers in the thread \"Re: API\n> Stability [was: pgsql: ...]\" in the Gmail UI, which didn't show your\n> messages there because of the different subject line.\n\nYes, that's what happened.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 11 Apr 2022 14:00:34 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability"
},
{
"msg_contents": "On Fri, Apr 8, 2022 at 11:50 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Apr 8, 2022 at 4:47 AM Markus Wanner\n> <markus.wanner@enterprisedb.com> wrote:\n> > I agree with Michael, it would be nice to not duplicate the code, but\n> > use a common underlying method. A modified patch is attached.\n>\n> I don't think this is better, but I don't think it's worth arguing\n> about, either, so I'll do it this way if nobody objects.\n>\n> Meanwhile, I've committed the patch for master to master.\n\nWell, I've just realized that Kyotaro Horiguchi volunteered to fix\nthis on an email thread I did not see because of the way Gmail breaks\nthe thread if you change the subject line. And he developed a very\nsimilar patch to what we have here. I'm going to use this one as the\nbasis for going forward because I've already studied it in detail and\nit's less work for me to stick with what I know than to go study\nsomething else. But, he also noticed something which we didn't notice\nhere, which is that before v13, the commit in question actually\nchanged the size of PGXACT, which is really quite bad -- it needs to\nbe 12 bytes for performance reasons. And there's no spare bytes\navailable, so I think we should follow one of the suggestions that he\nhad over in that email thread, and put delayChkptEnd in PGPROC even\nthough delayChkpt is in PGXACT.\n\nComments?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 11 Apr 2022 15:21:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability [was: pgsql: Fix possible recovery trouble if\n TRUNCATE overlaps a checkpoint.]"
},
{
"msg_contents": "On Mon, 2022-04-11 at 15:21 -0400, Robert Haas wrote:\n> ... before v13, the commit in question actually\n> changed the size of PGXACT, which is really quite bad -- it needs to\n> be 12 bytes for performance reasons. And there's no spare bytes\n> available, so I think we should follow one of the suggestions that he\n> had over in that email thread, and put delayChkptEnd in PGPROC even\n> though delayChkpt is in PGXACT.\n\nThis makes sense to me. Kudos to Kyotaro for considering this.\n\nAt first read, this sounded like a trade-off between compatibility and\nperformance for PG 12 and older. But I realize leaving delayChkpt in\nPGXACT and adding just delayChkptEnd to PGPROC is compatible and leaves\nPGXACT at a size of 12 bytes. So this sounds like a good approach to\nme.\n\nBest Regards\n\nMarkus\n\n\n\n",
"msg_date": "Mon, 11 Apr 2022 21:45:59 +0200",
"msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability [was: pgsql: Fix possible recovery trouble if\n TRUNCATE overlaps a checkpoint.]"
},
{
"msg_contents": "At Mon, 11 Apr 2022 12:48:25 +0200, Matthias van de Meent <boekewurm+postgres@gmail.com> wrote in \n> On Mon, 11 Apr 2022 at 06:30, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> I won't speak for Robert H., but this might be because of gmail not\n> putting this mail in the right thread: Your mail client dropped the\n> \"[was: pgsql: ...]\" tag, which Gmail subsequently displays as a\n> different thread (that is, in my Gmail UI there's three \"Re: API\n> stability\" threads, one of which has the [was: pgsql: ...]-tag, and\n> two of which seem to be started by you as a reply on the original\n> thread, but with the [was: pgsql: ...]-tag dropped and thus considered\n> a new thread).\n\nMmm. d*** gmail.. My main mailer does that defaltly but I think I can\n*fix* that behavior.\n\nThanks!\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 12 Apr 2022 11:57:38 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability"
},
{
"msg_contents": "(My mailer has been fixed.)\n\nAt Mon, 11 Apr 2022 21:45:59 +0200, Markus Wanner <markus.wanner@enterprisedb.com> wrote in \n> On Mon, 2022-04-11 at 15:21 -0400, Robert Haas wrote:\n> > ... before v13, the commit in question actually\n> > changed the size of PGXACT, which is really quite bad -- it needs to\n> > be 12 bytes for performance reasons. And there's no spare bytes\n> > available, so I think we should follow one of the suggestions that he\n> > had over in that email thread, and put delayChkptEnd in PGPROC even\n> > though delayChkpt is in PGXACT.\n> \n> This makes sense to me. Kudos to Kyotaro for considering this.\n> \n> At first read, this sounded like a trade-off between compatibility and\n> performance for PG 12 and older. But I realize leaving delayChkpt in\n> PGXACT and adding just delayChkptEnd to PGPROC is compatible and leaves\n> PGXACT at a size of 12 bytes. So this sounds like a good approach to\n> me.\n\nThanks!\n\nSo, I created the patches for back-patching from 10 to 14. (With\nfixed a silly bug of the v1-pg14 that HaveVirtualXIDsDelayingChkpt and\nHaveVirtualXIDsDelayingChkptEnd are inverted..)\n\nThey revert delayChkpt-related changes made by the patch and add\ndelayChkptEnd stuff. I compared among every pair of the patches for\nneighbouring versions, to make sure not making the same change in\ndifferent way and they have the same set of hunks.\n\nThis version takes the way sharing the common static function\n(*ChkptGuts) between the functions *Chkpt() and *ChkptEnd().\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 12 Apr 2022 19:54:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability [was: pgsql: Fix possible recovery trouble if\n TRUNCATE overlaps a checkpoint.]"
},
{
"msg_contents": "On Tue, Apr 12, 2022 at 6:55 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> (My mailer has been fixed.)\n\nCool.\n\n> So, I created the patches for back-patching from 10 to 14. (With\n> fixed a silly bug of the v1-pg14 that HaveVirtualXIDsDelayingChkpt and\n> HaveVirtualXIDsDelayingChkptEnd are inverted..)\n\nI am very sorry not to use these, but as I said in my previous post on\nthis thread, I prefer to commit what I wrote and Markus revised rather\nthan these versions. I have done that now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Apr 2022 11:13:01 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability [was: pgsql: Fix possible recovery trouble if\n TRUNCATE overlaps a checkpoint.]"
},
{
"msg_contents": "At Thu, 14 Apr 2022 11:13:01 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Tue, Apr 12, 2022 at 6:55 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > (My mailer has been fixed.)\n> \n> Cool.\n> \n> > So, I created the patches for back-patching from 10 to 14. (With\n> > fixed a silly bug of the v1-pg14 that HaveVirtualXIDsDelayingChkpt and\n> > HaveVirtualXIDsDelayingChkptEnd are inverted..)\n> \n> I am very sorry not to use these, but as I said in my previous post on\n> this thread, I prefer to commit what I wrote and Markus revised rather\n> than these versions. I have done that now.\n\nNot at all. It's just an unfortunate crossing.\nThanks for fixing this.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 15 Apr 2022 10:20:46 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: API stability [was: pgsql: Fix possible recovery trouble if\n TRUNCATE overlaps a checkpoint.]"
}
] |
[
{
"msg_contents": "As I was tracking down some of these errors in the sql/json patches I\nnoticed that we have a whole lot of them in the code, so working out\nwhich one has triggered an error is not as easy as it might be. ISTM we\ncould usefully prefix each such message with the name of the function in\nwhich it occurs, along the lines of this fragment for nodeFuncs.c. Thoughts?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 24 Mar 2022 15:53:42 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "identifying unrecognized node type errors"
},
{
"msg_contents": "On Fri, 25 Mar 2022 at 08:53, Andrew Dunstan <andrew@dunslane.net> wrote:\n> As I was tracking down some of these errors in the sql/json patches I\n> noticed that we have a whole lot of them in the code, so working out\n> which one has triggered an error is not as easy as it might be. ISTM we\n> could usefully prefix each such message with the name of the function in\n> which it occurs, along the lines of this fragment for nodeFuncs.c. Thoughts?\n\nCan you not use \\set VERBOSITY verbose ?\n\npostgres=# \\set VERBOSITY verbose\npostgres=# select 1/0;\nERROR: 22012: division by zero\nLOCATION: int4div, int.c:846\n\nDavid\n\n\n",
"msg_date": "Fri, 25 Mar 2022 09:01:07 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: identifying unrecognized node type errors"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> As I was tracking down some of these errors in the sql/json patches I\n> noticed that we have a whole lot of them in the code, so working out\n> which one has triggered an error is not as easy as it might be. ISTM we\n> could usefully prefix each such message with the name of the function in\n> which it occurs, along the lines of this fragment for nodeFuncs.c. Thoughts?\n\n-1. You're reinventing the error location support that already exists\ninside elog. Just turn up log_error_verbosity instead.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 24 Mar 2022 16:10:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: identifying unrecognized node type errors"
},
{
"msg_contents": "\nOn 3/24/22 16:10, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> As I was tracking down some of these errors in the sql/json patches I\n>> noticed that we have a whole lot of them in the code, so working out\n>> which one has triggered an error is not as easy as it might be. ISTM we\n>> could usefully prefix each such message with the name of the function in\n>> which it occurs, along the lines of this fragment for nodeFuncs.c. Thoughts?\n> -1. You're reinventing the error location support that already exists\n> inside elog. Just turn up log_error_verbosity instead.\n>\n> \t\t\t\n\n\n\nYeah, must have had some brain fade. Sorry for the noise.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 24 Mar 2022 16:34:35 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: identifying unrecognized node type errors"
},
{
"msg_contents": "All these functions are too low level to be helpful to know. Knowing\nthe caller might actually give a hint as to where the unknown node\noriginated from. We may get that from the stack trace if that's\navailable. But if we could annotate the error with error_context that\nwill be super helpful. For example, if we could annotate the error\nmessage like \"while searching for columns to hash\" for\nexpression_tree_walker() called from\nfind_hash_columns->find_cols->find_cols_walker->expression_tree_walker().\nThat helps to focus on group by colum expression for example. We could\ndo that by setting up an error context in find_hash_columns(). But\nthat's a lot of work.\n\nOn Fri, Mar 25, 2022 at 2:04 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 3/24/22 16:10, Tom Lane wrote:\n> > Andrew Dunstan <andrew@dunslane.net> writes:\n> >> As I was tracking down some of these errors in the sql/json patches I\n> >> noticed that we have a whole lot of them in the code, so working out\n> >> which one has triggered an error is not as easy as it might be. ISTM we\n> >> could usefully prefix each such message with the name of the function in\n> >> which it occurs, along the lines of this fragment for nodeFuncs.c. Thoughts?\n> > -1. You're reinventing the error location support that already exists\n> > inside elog. Just turn up log_error_verbosity instead.\n> >\n> >\n>\n>\n>\n> Yeah, must have had some brain fade. Sorry for the noise.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n>\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 25 Mar 2022 15:25:42 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: identifying unrecognized node type errors"
},
{
"msg_contents": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> writes:\n> All these functions are too low level to be helpful to know. Knowing\n> the caller might actually give a hint as to where the unknown node\n> originated from. We may get that from the stack trace if that's\n> available. But if we could annotate the error with error_context that\n> will be super helpful.\n\nIs it really that interesting? If function X lacks coverage for\nnode type Y, then X is what needs to be fixed. The exact call\nchain for any particular failure seems of only marginal interest,\ncertainly not enough to be building vast new infrastructure for.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 25 Mar 2022 09:53:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: identifying unrecognized node type errors"
},
{
"msg_contents": "On Fri, Mar 25, 2022 at 7:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> writes:\n> > All these functions are too low level to be helpful to know. Knowing\n> > the caller might actually give a hint as to where the unknown node\n> > originated from. We may get that from the stack trace if that's\n> > available. But if we could annotate the error with error_context that\n> > will be super helpful.\n>\n> Is it really that interesting? If function X lacks coverage for\n> node type Y, then X is what needs to be fixed. The exact call\n> chain for any particular failure seems of only marginal interest,\n> certainly not enough to be building vast new infrastructure for.\n>\n\nI don't think we have tests covering all possible combinations of\nexpression trees. Code coverage reports may not reveal this. I have\nencountered flaky \"unknown expression type\" errors. Always ended up\nchanging code to get the stack trace. Having error context helps.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 28 Mar 2022 19:47:42 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: identifying unrecognized node type errors"
}
] |
[
{
"msg_contents": "Hi,\n\nI noticed a possible typo in standby.c:\n\n---\n * The definitions of RunningTransactionsData and xl_xact_running_xacts are\n * similar. We keep them separate because xl_xact_running_xacts is a\n---\n\nIt seems \"xl_xact_running_xacts\" should be \"xl_running_xacts\".\n\nBest regards,\nHou zhijie",
"msg_date": "Fri, 25 Mar 2022 02:34:22 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Fix typo in standby.c"
},
{
"msg_contents": "On Fri, Mar 25, 2022 at 02:34:22AM +0000, houzj.fnst@fujitsu.com wrote:\n> I noticed a possible typo in standby.c:\n> \n> ---\n> * The definitions of RunningTransactionsData and xl_xact_running_xacts are\n> * similar. We keep them separate because xl_xact_running_xacts is a\n> ---\n> \n> It seems \"xl_xact_running_xacts\" should be \"xl_running_xacts\".\n\nYou are right, will fix. Thanks!\n--\nMichael",
"msg_date": "Fri, 25 Mar 2022 11:46:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo in standby.c"
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently postgres runs end-of-recovery(EOR) checkpoint in wait mode\nmeaning the server can take longer before it opens up for connections.\nThe EOR checkpoint, at times, can take a while if there was a lot of\nwork the server has done during crash recovery, say it replayed many\nWAL records or created many snapshot or mapping files or dirtied so\nmany buffers and so on.\n\nSince the server spins up checkpointer process [1] while the startup\nprocess performs recovery, isn't it a good idea to make\nend-of-recovery completely optional for the users or at least run it\nin non-wait mode so that the server will be available faster. The next\ncheckpointer cycle will take care of performing the EOR checkpoint\nwork, if user chooses to skip the EOR or the checkpointer will run EOR\ncheckpoint in background, if user chooses to run it in the non-wait\nmode (without CHECKPOINT_WAIT flag). Of course by choosing this\noption, users must be aware of the fact that the extra amount of\nrecovery work that needs to be done if a crash happens from the point\nEOR gets skipped or runs in non-wait mode until the next checkpoint.\nBut the advantage that users get is the faster server availability.\n\nThanks a lot Thomas for the internal discussion.\n\nThoughts?\n\n[1]\ncommit 7ff23c6d277d1d90478a51f0dd81414d343f3850\nAuthor: Thomas Munro <tmunro@postgresql.org>\nDate: Mon Aug 2 17:32:20 2021 +1200\n\n Run checkpointer and bgwriter in crash recovery.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 25 Mar 2022 13:10:27 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Run end-of-recovery checkpoint in non-wait mode or skip it entirely\n for faster server availability?"
},
{
"msg_contents": "On Fri, Mar 25, 2022 at 3:40 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Since the server spins up checkpointer process [1] while the startup\n> process performs recovery, isn't it a good idea to make\n> end-of-recovery completely optional for the users or at least run it\n> in non-wait mode so that the server will be available faster. The next\n> checkpointer cycle will take care of performing the EOR checkpoint\n> work, if user chooses to skip the EOR or the checkpointer will run EOR\n> checkpoint in background, if user chooses to run it in the non-wait\n> mode (without CHECKPOINT_WAIT flag). Of course by choosing this\n> option, users must be aware of the fact that the extra amount of\n> recovery work that needs to be done if a crash happens from the point\n> EOR gets skipped or runs in non-wait mode until the next checkpoint.\n> But the advantage that users get is the faster server availability.\n\nI think that we should remove end-of-recovery checkpoints completely\nand instead use the end-of-recovery WAL record (cf.\nCreateEndOfRecoveryRecord). However, when I tried to do that, I ran\ninto some problems:\n\nhttp://postgr.es/m/CA+TgmobrM2jvkiccCS9NgFcdjNSgAvk1qcAPx5S6F+oJT3D2mQ@mail.gmail.com\n\nThe second problem described in that email has subsequently been\nfixed, I believe, but the first one remains.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 25 Mar 2022 12:56:38 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Run end-of-recovery checkpoint in non-wait mode or skip it\n entirely for faster server availability?"
},
{
"msg_contents": "Hi, \n\nOn March 25, 2022 9:56:38 AM PDT, Robert Haas <robertmhaas@gmail.com> wrote:\n>On Fri, Mar 25, 2022 at 3:40 AM Bharath Rupireddy\n><bharath.rupireddyforpostgres@gmail.com> wrote:\n>> Since the server spins up checkpointer process [1] while the startup\n>> process performs recovery, isn't it a good idea to make\n>> end-of-recovery completely optional for the users or at least run it\n>> in non-wait mode so that the server will be available faster. The next\n>> checkpointer cycle will take care of performing the EOR checkpoint\n>> work, if user chooses to skip the EOR or the checkpointer will run EOR\n>> checkpoint in background, if user chooses to run it in the non-wait\n>> mode (without CHECKPOINT_WAIT flag). Of course by choosing this\n>> option, users must be aware of the fact that the extra amount of\n>> recovery work that needs to be done if a crash happens from the point\n>> EOR gets skipped or runs in non-wait mode until the next checkpoint.\n>> But the advantage that users get is the faster server availability.\n>\n>I think that we should remove end-of-recovery checkpoints completely\n>and instead use the end-of-recovery WAL record (cf.\n>CreateEndOfRecoveryRecord). However, when I tried to do that, I ran\n>into some problems:\n>\n>http://postgr.es/m/CA+TgmobrM2jvkiccCS9NgFcdjNSgAvk1qcAPx5S6F+oJT3D2mQ@mail.gmail.com\n>\n>The second problem described in that email has subsequently been\n>fixed, I believe, but the first one remains.\n\nSeems we could deal with that by making latestCompleted a 64bit xid? Then there never are cases where we have to retreat back into such early xids?\n\nA random note from a conversation with Thomas a few days ago: We still perform timeline increases with checkpoints in some cases. Might be worth fixing as a step towards just using EOR.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Fri, 25 Mar 2022 10:29:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "=?US-ASCII?Q?Re=3A_Run_end-of-recovery_checkpoint_in_non-wait_mode?=\n =?US-ASCII?Q?_or_skip_it_entirely_for_faster_server_availability=3F?="
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 17448\nLogged by: Okano Naoki\nEmail address: okano.naoki@jp.fujitsu.com\nPostgreSQL version: 14.2\nOperating system: Windows\nDescription: \n\nWith huge_pages = on, the postgres process does not appear to use large\npages.\r\nI checked with VMMap if the large pages are used in the following\nenvironment.\r\nEnvironment\r\n PostgreSQL version: 14.2\r\n Operating system : Windows 10 20H2\r\n\r\nOn this page (*) says that in Windows 10, version 1703 and later OS\nversions, \r\nyou must specify the FILE_MAP_LARGE_PAGES flag with the MapViewOfFile\nfunction \r\nto map large pages.\r\n\r\nI think it seems to be the cause that MapViewOfFile() in\nsrc/backend/port/win32_shmem.c \r\ndoes not specify FILE_MAP_LARGE_PAGES flag.\r\n\r\n(*) MapViewOfFileEx function\r\nhttps://docs.microsoft.com/en-us/windows/win32/api/memoryapi/nf-memoryapi-mapviewoffileex\r\n\r\nFILE_MAP_LARGE_PAGES\r\n Starting with Windows 10, version 1703, this flag specifies \r\n that the view should be mapped using large page support. \r\n The size of the view must be a multiple of the size of a large page \r\n reported by the GetLargePageMinimum function, \r\n and the file-mapping object must have been created using the\nSEC_LARGE_PAGES option. \r\n If you provide a non-null value for lpBaseAddress, \r\n then the value must be a multiple of GetLargePageMinimum.",
"msg_date": "Fri, 25 Mar 2022 07:52:57 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #17448: In Windows 10, version 1703 and later,\n huge_pages doesn't work."
},
{
"msg_contents": "On Fri, Mar 25, 2022 at 07:52:57AM +0000, PG Bug reporting form wrote:\n> On this page (*) says that in Windows 10, version 1703 and later OS\n> versions, \n> you must specify the FILE_MAP_LARGE_PAGES flag with the MapViewOfFile\n> function \n> to map large pages.\n> \n> I think it seems to be the cause that MapViewOfFile() in\n> src/backend/port/win32_shmem.c \n> does not specify FILE_MAP_LARGE_PAGES flag.\n\nHmm. Okay. A patch would be straight-forward, as we could just\nassign the optional flag in a separate variable at the beginning of\nPGSharedMemoryCreate(), similarly to flProtect when we find out that\nlarge pages can be used, then pass it down to MapViewOfFileEx(). I\ndon't have a Windows 10 machine as recent as that at hand, though..\n\nPerhaps the CI uses Windows machines that would allow to test and\ncheck that, with some logs magically added to debug things.\n--\nMichael",
"msg_date": "Sat, 26 Mar 2022 14:46:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "On Sat, Mar 26, 2022 at 02:46:45PM +0900, Michael Paquier wrote:\n> On Fri, Mar 25, 2022 at 07:52:57AM +0000, PG Bug reporting form wrote:\n> > On this page (*) says that in Windows 10, version 1703 and later OS\n> > versions,\n> > you must specify the FILE_MAP_LARGE_PAGES flag with the MapViewOfFile\n> > function\n> > to map large pages.\n> >\n> > I think it seems to be the cause that MapViewOfFile() in\n> > src/backend/port/win32_shmem.c\n> > does not specify FILE_MAP_LARGE_PAGES flag.\n>\n> I don't have a Windows 10 machine as recent as that at hand, though..\n\nI have a Windows 10 apparently version 20H2 (the versioning doesn't make any\nsense) with all needed to compile postgres at hand. I can have a look next\nweek.\n\n\n",
"msg_date": "Sat, 26 Mar 2022 16:24:08 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "On Sat, Mar 26, 2022 at 04:24:08PM +0800, Julien Rouhaud wrote:\n> I have a Windows 10 apparently version 20H2 (the versioning doesn't make any\n> sense) with all needed to compile postgres at hand. I can have a look next\n> week.\n\nThanks!\n--\nMichael",
"msg_date": "Sat, 26 Mar 2022 18:03:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "On Sat, Mar 26, 2022 at 9:24 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Sat, Mar 26, 2022 at 02:46:45PM +0900, Michael Paquier wrote:\n> > On Fri, Mar 25, 2022 at 07:52:57AM +0000, PG Bug reporting form wrote:\n> > > On this page (*) says that in Windows 10, version 1703 and later OS\n> > > versions,\n> > > you must specify the FILE_MAP_LARGE_PAGES flag with the MapViewOfFile\n> > > function\n> > > to map large pages.\n> > >\n> > > I think it seems to be the cause that MapViewOfFile() in\n> > > src/backend/port/win32_shmem.c\n> > > does not specify FILE_MAP_LARGE_PAGES flag.\n> >\n> > I don't have a Windows 10 machine as recent as that at hand, though..\n>\n> I have a Windows 10 apparently version 20H2 (the versioning doesn't make any\n> sense) with all needed to compile postgres at hand. I can have a look next\n> week.\n\nThere are traces of method to the madness: It's basically YYMM, but\nthen after 2004 they switched to H1 and H2 (first/second half of the\nyear) instead of MM, perhaps to avoid confusion with YYYY format year.\nNote also that Windows 10 has a 21H2 and Windows 11 has a 21H2.\n\nHmm, so all versions of Windows that our current coding worked on were\nEOL'd 6 months after PostgreSQL 11 came out with huge_pages support\nfor Windows:\n\nhttps://en.wikipedia.org/wiki/Windows_10_version_history\n\nSome question I have: is FILE_MAP_LARGE PAGES a macro? We claim to\nsupport all those ancient zombie OSes like Windows 7, or maybe it's\neven XP for 11, and this has to be back-patched to 11, so we might\nneed to make it conditional. But conditional on what? For example,\ndoes something like the attached work (untested)? What happens if a <\n1703 kernel sees this flag, does it reject it or ignore it?",
"msg_date": "Sun, 27 Mar 2022 00:07:57 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "On Sun, Mar 27, 2022 at 12:07:57AM +1300, Thomas Munro wrote:\n> Some question I have: is FILE_MAP_LARGE PAGES a macro? We claim to\n> support all those ancient zombie OSes like Windows 7, or maybe it's\n> even XP for 11, and this has to be back-patched to 11, so we might\n> need to make it conditional. But conditional on what? For example,\n> does something like the attached work (untested)? What happens if a <\n> 1703 kernel sees this flag, does it reject it or ignore it?\n\nGood question. I would choose a soft approach here and not insist if\nthe flag was not known at compilation time, but we could also take a\nmore aggressive approach and hardcode a value. Anyway, it seems to me\nthat the correct solution here would be to compile the code with a\nPG_FILE_MAP_LARGE_PAGES that checks if the flag exists at compile\ntime, and we would set it at run time if we know that we are on a\nversion of Windows that supports it. IsWindowsVersionOrGreater()\nshould be able to do the job.\n--\nMichael",
"msg_date": "Sat, 26 Mar 2022 20:33:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "On Sun, Mar 27, 2022 at 12:07:57AM +1300, Thomas Munro wrote:\n> There are traces of method to the madness: It's basically YYMM, but\n> then after 2004 they switched to H1 and H2 (first/second half of the\n> year) instead of MM, perhaps to avoid confusion with YYYY format year.\n> Note also that Windows 10 has a 21H2 and Windows 11 has a 21H2.\n>\n> Some question I have: is FILE_MAP_LARGE PAGES a macro? We claim to\n> support all those ancient zombie OSes like Windows 7, or maybe it's\n> even XP for 11, and this has to be back-patched to 11, so we might\n> need to make it conditional. But conditional on what? For example,\n> does something like the attached work (untested)? What happens if a <\n> 1703 kernel sees this flag, does it reject it or ignore it?\n\nI don't have an answer about how much Windows gets angry if we pass\ndown to MapViewOfFileEx() the flag FILE_MAP_LARGE_PAGES when running\nthe code on a version of Windows that does not support it.\n\nAnyway, I think that we could just play it safe. See for example the\nattached, where I use PG_FILE_MAP_LARGE_PAGES at compile time to find\nif the value is set. Then, at run-time, I am just relying on \nIsWindowsVersionOrGreater() to do the job, something useful when\nhuge_pages=on as I guess we should fail hard if we did not know about\nFILE_MAP_LARGE_PAGES at compile-time, but try to use huge pages at run\ntime with version >= 10.0.1703.\n\nPerhaps there is a better thing to do?\n--\nMichael",
"msg_date": "Wed, 30 Mar 2022 16:54:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "Hi,\n\nOn Wed, Mar 30, 2022 at 04:54:50PM +0900, Michael Paquier wrote:\n> On Sun, Mar 27, 2022 at 12:07:57AM +1300, Thomas Munro wrote:\n> > There are traces of method to the madness: It's basically YYMM, but\n> > then after 2004 they switched to H1 and H2 (first/second half of the\n> > year) instead of MM, perhaps to avoid confusion with YYYY format year.\n> > Note also that Windows 10 has a 21H2 and Windows 11 has a 21H2.\n> >\n> > Some question I have: is FILE_MAP_LARGE PAGES a macro? We claim to\n> > support all those ancient zombie OSes like Windows 7, or maybe it's\n> > even XP for 11, and this has to be back-patched to 11, so we might\n> > need to make it conditional. But conditional on what? For example,\n> > does something like the attached work (untested)? What happens if a <\n> > 1703 kernel sees this flag, does it reject it or ignore it?\n>\n> I don't have an answer about how much Windows gets angry if we pass\n> down to MapViewOfFileEx() the flag FILE_MAP_LARGE_PAGES when running\n> the code on a version of Windows that does not support it.\n\nNo idea either, and I don't have old enough Windows machine available to try.\n\n> Anyway, I think that we could just play it safe. See for example the\n> attached, where I use PG_FILE_MAP_LARGE_PAGES at compile time to find\n> if the value is set. Then, at run-time, I am just relying on\n> IsWindowsVersionOrGreater() to do the job, something useful when\n> huge_pages=on as I guess we should fail hard if we did not know about\n> FILE_MAP_LARGE_PAGES at compile-time, but try to use huge pages at run\n> time with version >= 10.0.1703.\n\nThat approach seems sensible. For reference the versionhelpers.h seems to be\navailable starting with VS 2013 / v120, which is ok since that the oldest\nversion we support AFAICS.\n\nAfter *a lot of time* I could finally test this patch. For the record I could\nnever find a way to allow 'Lock pages in memory' on the Windows 10 home I have,\nso I tried on my Windows 11 evaluation I also had around (version 21H2, so it\nshould be recent enough). For the record on this one there was gpedit\navailable, but then I got a 1450 error, and didn't find any information on how\nto reserve huge pages or something like that on Windows. So I just configured\nshared_buffers to 10MB, which should still be big enough to need multiple huge\npages, and it seems to work:\n\npostgres=# select version();\n version\n---------------------------------------------------------------\n PostgreSQL 15devel, compiled by Visual C++ build 1929, 64-bit\n(1 row)\n\npostgres=# show huge_pages;\n huge_pages\n------------\n on\n(1 row)\n\nNow, I also have the exact same result without the patch applied so it's hard\nto know whether it had any impact at all. Unfortunately, I didn't find any\ninformation on how to check if \"large pages\" are used and/or by which program.\n\n\n",
"msg_date": "Thu, 31 Mar 2022 12:59:08 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "On Thu, Mar 31, 2022 at 12:59:08PM +0800, Julien Rouhaud wrote:\n> That approach seems sensible. For reference the versionhelpers.h seems to be\n> available starting with VS 2013 / v120, which is ok since that the oldest\n> version we support AFAICS.\n\nHmm. This points out to a different problem. The oldest version of\nMSVC supported on v10 and v11 is VS2005, so a backpatch means that\nthose checks would need to be tweaked if we'd want to be perfectly\ncompatible. But I'd rather not enter in this game category, limiting\nthis patch to v12~ :)\n\n> Now, I also have the exact same result without the patch applied so it's hard\n> to know whether it had any impact at all. Unfortunately, I didn't find any\n> information on how to check if \"large pages\" are used and/or by which program.\n\nOkano-san has mentioned VMMap upthread:\nhttps://docs.microsoft.com/en-us/sysinternals/downloads/vmmap\n--\nMichael",
"msg_date": "Thu, 31 Mar 2022 16:42:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "On Thu, Mar 31, 2022 at 04:42:37PM +0900, Michael Paquier wrote:\n> On Thu, Mar 31, 2022 at 12:59:08PM +0800, Julien Rouhaud wrote:\n> > That approach seems sensible. For reference the versionhelpers.h seems to be\n> > available starting with VS 2013 / v120, which is ok since that the oldest\n> > version we support AFAICS.\n> \n> Hmm. This points out to a different problem. The oldest version of\n> MSVC supported on v10 and v11 is VS2005, so a backpatch means that\n> those checks would need to be tweaked if we'd want to be perfectly\n> compatible. But I'd rather not enter in this game category, limiting\n> this patch to v12~ :)\n\nAh, I indeed haven't check in back branches. v12 isn't that bad.\n\n> > Now, I also have the exact same result without the patch applied so it's hard\n> > to know whether it had any impact at all. Unfortunately, I didn't find any\n> > information on how to check if \"large pages\" are used and/or by which program.\n> \n> Okano-san has mentioned VMMap upthread:\n> https://docs.microsoft.com/en-us/sysinternals/downloads/vmmap\n\nYes, I totally missed that. Thomas Munro also mentioned it off-list, and\nalso found some reference [1] indicating that large pages should show up as\n\"Locked WS\". I tested with and without the patch and in both case I don't see\nany \"Locked WS\" usage. I also get the same Page Table size, which seems\nconsistent with large pages not being used. Now, this is a vm running with\nvirtualbox and we're not entirely sure that huge pages can be allocated with\nit. I wish I could test on my windows 10 machine as it's not virtualized, but\nI can't give the required privileges.\n\n[1] https://aloiskraus.wordpress.com/2016/10/03/windows-10-memory-compression-and-more/\n\n\n",
"msg_date": "Thu, 31 Mar 2022 16:00:55 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "On Thu, Mar 31, 2022 at 04:00:55PM +0800, Julien Rouhaud wrote:\n> > Okano-san has mentioned VMMap upthread:\n> > https://docs.microsoft.com/en-us/sysinternals/downloads/vmmap\n> \n> Yes, I totally missed that. Thomas Munro also mentioned it off-list, and\n> also found some reference [1] indicating that large pages should show up as\n> \"Locked WS\". I tested with and without the patch and in both case I don't see\n> any \"Locked WS\" usage. I also get the same Page Table size, which seems\n> consistent with large pages not being used. Now, this is a vm running with\n> virtualbox and we're not entirely sure that huge pages can be allocated with\n> it. I wish I could test on my windows 10 machine as it's not virtualized, but\n> I can't give the required privileges.\n> \n> [1] https://aloiskraus.wordpress.com/2016/10/03/windows-10-memory-compression-and-more/\n\nSo, after more digging it turns out that the patch is supposed to work. If I\nforce using the PG_FILE_MAP_LARGE_PAGES, postgres starts and I do see \"Locked\nWS\" usage with VMMap, with a size in the order of magnitude of my\nshared_buffers.\n\nWhat is apparently not working on my VM is IsWindowsVersionOrGreater(10, 0,\n1703). I added some debug around to check what GetVersionEx() [2] is saying,\nand I get:\n\ndwMajorVersion == 6\ndwMinorVersion == 2\ndwBuildNumber == 9200\n\nWhile winver.exe on the same vm says windows 11, version 21H2, build 22000.493.\n\nI'm therefore extremely confused. The documentation of\nIsWindowsVersionOrGreater() at [3] is also highly confusing:\n\n> TRUE if the specified version matches, or is greater than, the version of the\n> current Windows OS; otherwise, FALSE.\n\nIsn't that supposed to be the opposite?\n\n[2] https://docs.microsoft.com/en-us/windows/win32/api/sysinfoapi/nf-sysinfoapi-getversionexa\nhttps://docs.microsoft.com/en-us/windows/win32/api/winnt/ns-winnt-osversioninfoexa\n\n[3] https://docs.microsoft.com/en-us/windows/win32/api/versionhelpers/nf-versionhelpers-iswindowsversionorgreater\n\n\n",
"msg_date": "Thu, 31 Mar 2022 18:46:59 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "On Thu, Mar 31, 2022 at 06:46:59PM +0800, Julien Rouhaud wrote:\n> On Thu, Mar 31, 2022 at 04:00:55PM +0800, Julien Rouhaud wrote:\n> > > Okano-san has mentioned VMMap upthread:\n> > > https://docs.microsoft.com/en-us/sysinternals/downloads/vmmap\n> > \n> > Yes, I totally missed that. Thomas Munro also mentioned it off-list, and\n> > also found some reference [1] indicating that large pages should show up as\n> > \"Locked WS\". I tested with and without the patch and in both case I don't see\n> > any \"Locked WS\" usage. I also get the same Page Table size, which seems\n> > consistent with large pages not being used. Now, this is a vm running with\n> > virtualbox and we're not entirely sure that huge pages can be allocated with\n> > it. I wish I could test on my windows 10 machine as it's not virtualized, but\n> > I can't give the required privileges.\n> > \n> > [1] https://aloiskraus.wordpress.com/2016/10/03/windows-10-memory-compression-and-more/\n> \n> So, after more digging it turns out that the patch is supposed to work. If I\n> force using the PG_FILE_MAP_LARGE_PAGES, postgres starts and I do see \"Locked\n> WS\" usage with VMMap, with a size in the order of magnitude of my\n> shared_buffers.\n> \n> What is apparently not working on my VM is IsWindowsVersionOrGreater(10, 0,\n> 1703). I added some debug around to check what GetVersionEx() [2] is saying,\n> and I get:\n> \n> dwMajorVersion == 6\n> dwMinorVersion == 2\n> dwBuildNumber == 9200\n> \n> While winver.exe on the same vm says windows 11, version 21H2, build 22000.493.\n\nSo, what GetVersionEx returns is actually \"it depends\", and this is documented:\n\n> With the release of Windows 8.1, the behavior of the GetVersionEx API has\n> changed in the value it will return for the operating system version. The\n> value returned by the GetVersionEx function now depends on how the\n> application is manifested.\n>\n> Applications not manifested for Windows 8.1 or Windows 10 will return the\n> Windows 8 OS version value (6.2). Once an application is manifested for a\n> given operating system version, GetVersionEx will always return the version\n> that the application is manifested for in future releases. To manifest your\n> applications for Windows 8.1 or Windows 10, refer to Targeting your\n> application for Windows.\n\nThere's no such indication on IsWindowsVersionOrGreater(), but after seeing\nvarious comments on forums from angry people, it may be a hint that it behaves\nsimilarly. I'm not sure what to do at this point, maybe just always use the\nflag (the PG_ version which may be 0), hoping that hopefully windows won't\ndefine it if it can't handle it?\n\n\n",
"msg_date": "Thu, 31 Mar 2022 19:03:09 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "On Thu, Mar 31, 2022 at 06:46:59PM +0800, Julien Rouhaud wrote:\n> So, after more digging it turns out that the patch is supposed to work. If I\n> force using the PG_FILE_MAP_LARGE_PAGES, postgres starts and I do see \"Locked\n> WS\" usage with VMMap, with a size in the order of magnitude of my\n> shared_buffers.\n> \n> What is apparently not working on my VM is IsWindowsVersionOrGreater(10, 0,\n> 1703). I added some debug around to check what GetVersionEx() [2] is saying,\n> and I get:\n> \n> dwMajorVersion == 6\n> dwMinorVersion == 2\n> dwBuildNumber == 9200\n\nOkay. Well, I'd like to think that the patch written as-is is\ncorrect. Now your tests are saying the contrary, so I don't really\nknow what to think about it :)\n\n>> TRUE if the specified version matches, or is greater than, the version of the\n>> current Windows OS; otherwise, FALSE.\n> \n> Isn't that supposed to be the opposite?\n\nI get from the upstream docs that if the runtime version of Windows is\nhigher than 10.0.1703, IsWindowsVersionOrGreater() should return\ntrue. Perhaps the issue is in the patch and its argument values, but\nit does not look straight-forward to know what those values should\nbe, and there are no examples in the docs to show that, either :/\n--\nMichael",
"msg_date": "Fri, 1 Apr 2022 14:54:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "On Thu, Mar 31, 2022 at 07:03:09PM +0800, Julien Rouhaud wrote:\n> There's no such indication on IsWindowsVersionOrGreater(), but after seeing\n> various comments on forums from angry people, it may be a hint that it behaves\n> similarly. I'm not sure what to do at this point, maybe just always use the\n> flag (the PG_ version which may be 0), hoping that hopefully windows won't\n> define it if it can't handle it?\n\nLooking at the internals of versionhelpers.h, would it work to use as\narguments for IsWindowsVersionOrGreater() the following, in this\norder? As of:\n- HIBYTE(_WIN32_WINNT_WINTHRESHOLD)\n- LOBYTE(_WIN32_WINNT_WINTHRESHOLD)\n- 1703\n\nJust to drop an idea.\n--\nMichael",
"msg_date": "Fri, 1 Apr 2022 14:59:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "On Fri, Apr 01, 2022 at 02:59:22PM +0900, Michael Paquier wrote:\n> On Thu, Mar 31, 2022 at 07:03:09PM +0800, Julien Rouhaud wrote:\n> > There's no such indication on IsWindowsVersionOrGreater(), but after seeing\n> > various comments on forums from angry people, it may be a hint that it behaves\n> > similarly. I'm not sure what to do at this point, maybe just always use the\n> > flag (the PG_ version which may be 0), hoping that hopefully windows won't\n> > define it if it can't handle it?\n>\n> Looking at the internals of versionhelpers.h, would it work to use as\n> arguments for IsWindowsVersionOrGreater() the following, in this\n> order? As of:\n> - HIBYTE(_WIN32_WINNT_WINTHRESHOLD)\n> - LOBYTE(_WIN32_WINNT_WINTHRESHOLD)\n> - 1703\n>\n> Just to drop an idea.\n\nI will test that in a bit. I still think that at least some API exists to give\nthe real answer since winver and similar report correct info, I just don't know\nwhat those are.\n\nNote that if you want to test yourself you could use this script [1] using the\nevaluation virtual machine [2] to automatically setup a windows 11 environment\nwith everything needed to compile postgres with a extra dependencies. The\nwhole process is a bit long though, so I can also give you access to my vm if\nyou prefer, probably the latency shouldn't be too bad :)\n\n[1] https://github.com/rjuju/pg_msvc_generator/blob/master/bootstrap.ps1\n[2] https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/\n\n\n",
"msg_date": "Fri, 1 Apr 2022 14:19:46 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "Hi,\n\nPlease keep the list in copy, especially if that's about Windows specific as\nI'm definitely not very knowledgeable about it.\n\nOn Fri, Apr 01, 2022 at 09:18:03AM +0000, Wilm Hoyer wrote:\n> \n> If you don't wanna go the manifest way, maybe the RtlGetVersion function is the one you need:\n> https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/wdm/nf-wdm-rtlgetversion?redirectedfrom=MSDN\n\nThanks for the info! I tried to use the function but trying to include either\nwdm.h or Ntddk.h errors out. Unfortunately I don't know how to look for a file\nin Windows so I don't even know if those files are present.\n\nI searched a bit and apparently some people are using this function directly\nopening some dll, which seems wrong.\n\n> Another Idea on windows machines would be to use the commandline to execute\n> ver in a separate Process and store the result in a file.\n\nThat also seems hackish, I don't think that we want to rely on something like\nthat.\n\n> >> While winver.exe on the same vm says windows 11, version 21H2, build 22000.493.\n> \n> > So, what GetVersionEx returns is actually \"it depends\", and this is documented:\n> \n> >> With the release of Windows 8.1, the behavior of the GetVersionEx API \n> >> has changed in the value it will return for the operating system \n> >> version. The value returned by the GetVersionEx function now depends \n> >> on how the application is manifested.\n> >>\n> >> Applications not manifested for Windows 8.1 or Windows 10 will return \n> >> the Windows 8 OS version value (6.2). Once an application is \n> >> manifested for a given operating system version, GetVersionEx will \n> >> always return the version that the application is manifested for in \n> >> future releases. To manifest your applications for Windows 8.1 or \n> >> Windows 10, refer to Targeting your application for Windows.\n> \n> The documentation is a bit unclear - with the correct functions you should get the:\n> Minimum( actualOS-Version, Maximum(Manifested OS Versions))\n> The Idea behind, as I understand it, is to better support virtualization and\n> backward compatibility - you manifest only Windows 8.1 -> than you always get\n> a System that behaves like Windows 8.1 in every aspect. (Every Aspect not\n> true in some corner cases due to security patches)\n\nWell, it clearly does *NOT* behave as a Windows 8.1, even if for some reason\nlarge pages relies on security patches.\n\nTheir API is entirely useless, so I'm still on the opinion that we should\nunconditionally use the FILE_MAP_LARGE_PAGES flag if it's defined and call it a\nday.\n\n\n",
"msg_date": "Tue, 26 Apr 2022 12:54:35 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "On Tue, Apr 26, 2022 at 12:54:35PM +0800, Julien Rouhaud wrote:\n> I searched a bit and apparently some people are using this function directly\n> opening some dll, which seems wrong.\n\nI was wondering about this whole business, and the manifest approach\nis a *horrible* design for an API where the goal is to know if your\nrun-time environment is greater than a given threshold.\n\n>> Another Idea on windows machines would be to use the commandline to execute\n>> ver in a separate Process and store the result in a file.\n> \n> That also seems hackish, I don't think that we want to rely on something like\n> that.\n\nHmm. That depends on the dependency set, I guess. We do that on\nLinux at some extent to for large pages in sysv_shmem.c. Perhaps this\ncould work for Win10 if this avoids the extra loopholes with the\nmanifests.\n\n> Their API is entirely useless,\n\nThis I agree.\n\n> so I'm still on the opinion that we should\n> unconditionally use the FILE_MAP_LARGE_PAGES flag if it's defined and call it a\n> day.\n\nAre we sure that this is not going to cause failures in environments\nwhere the flag is not supported?\n--\nMichael",
"msg_date": "Wed, 27 Apr 2022 17:13:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "On Tue, Apr 26, 2022, 05:55 Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Hi,\n>\n> Please keep the list in copy, especially if that's about Windows specific\n> as\n> I'm definitely not very knowledgeable about it.\n>\n> On Fri, Apr 01, 2022 at 09:18:03AM +0000, Wilm Hoyer wrote:\n> >\n> > If you don't wanna go the manifest way, maybe the RtlGetVersion function\n> is the one you need:\n> >\n> https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/wdm/nf-wdm-rtlgetversion?redirectedfrom=MSDN\n>\n> Thanks for the info! I tried to use the function but trying to include\n> either\n> wdm.h or Ntddk.h errors out. Unfortunately I don't know how to look for a\n> file\n> in Windows so I don't even know if those files are present.\n>\n\nThat's a kernel api for use in drivers. Not in applications. You need the\ndevice driver developer kit to get to the headers.\n\nIt's not supposed to be used from a user land application.\n\nBut note the documentation comment that says: “*RtlGetVersion* is the\nkernel-mode equivalent of the user-mode *GetVersionEx* function in the\nWindows SDK. \".\n\nTldr, user mode applications are supposed to use GetVersionEx().\n\n/Magnus",
"msg_date": "Wed, 27 Apr 2022 05:01:03 -0500",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "\nOn Tue, Apr 26, 2022 at 12:54:35PM +0800, Julien Rouhaud wrote:\n>> I searched a bit and apparently some people are using this function \n>> directly opening some dll, which seems wrong.\n\n> I was wondering about this whole business, and the manifest approach is a *horrible* design for an API where the goal is to know if your run-time environment is greater than a given threshold.\n\nAgreed for the use case at hand, where you want to use one core API Function or another depending on the OS Version.\n\nOne Blog from Microsoft, I remember, told that one reason for the change were the increase of false installation error messages \"Install Error - Your system does not meet the minimum supported operating system and service pack level.\"\nwhere the software in question was written for Windows XP and the user tried to install it on, say, Windows 8.\nThat is just a Developer-Pilot error, where the Developers forgot to anticipate future OS Versions and instead of checking for Version at least, where checking for Version equality of all at design time known Windows Version.\nSince you can develop only against OS APIs known at design time, and Microsoft claims to be pretty good at maintaining backward compatible facades for their APIs, there is some reason in that decision.\n(To only see the Versions and APIs you told the OS with the manifest, you knew about at compile time).\nThe core Problem at hand is, that ms broke the promise of backward compatibility, since the function in question is working differently, depending on windows version, although with the above reasoning we should get the exact same behavior on windows 10 as on windows 8.1 (as PostgreSql, per default, only claims to know about Windows 8.1 features).\n\nThat said, I can understand the design decision. Personally, I still don't like it a bit, since developers should be allowed to make some stupid mistakes.\n\n>>> Another Idea on windows machines would be to use the commandline to \n>>> execute ver in a separate Process and store the result in a file.\n>> \n>> That also seems hackish, I don't think that we want to rely on \n>> something like that.\n\n>Hmm. That depends on the dependency set, I guess. We do that on Linux at some extent to for large pages in sysv_shmem.c. Perhaps this could work for Win10 if this avoids the extra loopholes with the >manifests.\n\nI used the following hack to get the \"real\" Major and Minor Version of Windows - it's in C# (.Net) and needs to be adjusted (you can compile as x64 and use a long-long as return value ) to return the Service Number too and translated it into C.\nI share it anyways, as it might help - please be kind, as it really is a little hack.\n\nSituation: \nMain Application can't or is not willing to add a manifest file into its resources.\n\nSolution:\nStart a small executable (which has a manifest file compiled into its resources), let it read the OS Version and code the Version into its return Code.\n\nCInt32 is basically an integer redefinition, where one can access the lower and higher Int16 separately.\n\nThe Main Program eventually calls this (locate the executable, adjust the Process startup to be minimal, call the executable as separate process and interpret the return value as Version):\nprivate static Version ReadModernOsVersionInternal()\n{\n String codeBase = Assembly.GetExecutingAssembly().CodeBase;\n Uri uri = new Uri(codeBase);\n\n String localPath = uri.LocalPath;\n String pathDirectory = Path.GetDirectoryName(localPath);\n\n if (pathDirectory != null)\n {\n String fullCombinePath = Path.Combine(pathDirectory, \"Cf.Utilities.ReadModernOSVersion\");\n\n ProcessStartInfo processInfo = new ProcessStartInfo\n {\n FileName = fullCombinePath,\n CreateNoWindow = true,\n UseShellExecute = false\n };\n\n Process process = new Process\n {\n StartInfo = processInfo\n };\n\n process.Start();\n\n if (process.WaitForExit(TimeSpan.FromSeconds(1).Milliseconds))\n {\n CInt32 versionInteger = process.ExitCode;\n return new Version(versionInteger.HighValue, versionInteger.LowValue);\n }\n }\n\n return new Version();\n}\n\n\nThe small Version Check executable:\n\nstatic Int32 Main(String[] args)\n{\n return OsVersionErmittler.ErmittleOsVersion();\n}\n\nand\n\nstatic class OsVersionErmittler\n{\n /// <summary>\n /// Determines the OS version and returns it as high and low word.\n /// </summary>\n /// <returns></returns>\n public static CInt32 ErmittleOsVersion()\n {\n OperatingSystem version = Environment.OSVersion;\n if (version.Platform == PlatformID.Win32NT && version.Version >= new Version(6, 3))\n {\n String versionString = version.VersionString;\n return new CInt32((Int16) version.Version.Major, (Int16) version.Version.Minor);\n }\n return 0;\n }\n}\n\nThe shortened manifest of the small executable:\n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<assembly manifestVersion=\"1.0\" xmlns=\"urn:schemas-microsoft-com:asm.v1\">\n <compatibility xmlns=\"urn:schemas-microsoft-com:compatibility.v1\">\n <application>\n <!-- A list of the Windows versions this application was tested on\n and developed for. If you uncomment the corresponding elements, \n Windows automatically selects the most compatible environment. -->\n\n <!-- Windows Vista -->\n <!--<supportedOS Id=\"{e2011457-1546-43c5-a5fe-008deee3d3f0}\" />-->\n\n <!-- Windows 7 -->\n <!--<supportedOS Id=\"{35138b9a-5d96-4fbd-8e2d-a2440225f93a}\" />-->\n\n <!-- Windows 8 -->\n <!--<supportedOS Id=\"{4a2f28e3-53b9-4441-ba9c-d69d4a4a6e38}\" />-->\n\n <!-- Windows 8.1 -->\n <supportedOS Id=\"{1f676c76-80e1-4239-95bb-83d0f6d0da78}\" />\n\n <!-- Windows 10 -->\n <supportedOS Id=\"{8e0f7a12-bfb3-4fe8-b9a5-48fd50a15a9a}\" />\n\n </application>\n </compatibility>\n\n </assembly>\n\n\nI hope I'm not intrusive, otherwise, feel free to ignore this mail,\nWilm.\n\n\n",
"msg_date": "Wed, 27 Apr 2022 15:04:23 +0000",
"msg_from": "Wilm Hoyer <W.Hoyer@dental-vision.de>",
"msg_from_op": false,
"msg_subject": "AW: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "On Wed, Apr 27, 2022 at 05:13:12PM +0900, Michael Paquier wrote:\n> On Tue, Apr 26, 2022 at 12:54:35PM +0800, Julien Rouhaud wrote:\n> > so I'm still on the opinion that we should\n> > unconditionally use the FILE_MAP_LARGE_PAGES flag if it's defined and call it a\n> > day.\n> \n> Are we sure that this is not going to cause failures in environments\n> where the flag is not supported?\n\nI don't know for sure as I have no way to test, but it would be very lame for\nan OS to provide a #define explicitly intended for one use case if that use\ncase can't handle that flag yet.\n\n\n",
"msg_date": "Thu, 28 Apr 2022 00:48:41 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "On Wed, Apr 27, 2022 at 03:04:23PM +0000, Wilm Hoyer wrote:\n>\n> I used the following hack to get the \"real\" Major and Minor Version of\n> Windows - it's in C# (.Net) and needs to be adjusted (you can compile as x64\n> and use a long-long as return value ) to return the Service Number too and\n> translated it into C.\n> I share it anyways, as it might help - please be kind, as it really is a\n> little hack.\n>\n> Situation:\n> Main Application can't or is not willing to add a manifest file into its\n> resources.\n>\n> Solution:\n> Start a small executable (which has a manifest file compiled into its\n> resources), let it read the OS Version and code the Version into its return\n> Code.\n\nThanks for sharing.\n\nHaving to compile another tool just for that seems like a very high price to\npay, especially since we don't have any C# code in the tree. I'm not even sure\nthat compiling this wouldn't need additional requirements and/or if it would\nwork on our oldest supported Windows versions.\n\n\n",
"msg_date": "Thu, 28 Apr 2022 00:52:28 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "\nOn Wed, Apr 27, 2022 at 03:04:23PM +0000, Wilm Hoyer wrote:\n>>\n>> I used the following hack to get the \"real\" Major and Minor Version of \n>> Windows - it's in C# (.Net) and needs to be adjusted (you can compile \n>> as x64 and use a long-long as return value ) to return the Service \n>> Number too and translated it into C.\n>> I share it anyways, as it might help - please be kind, as it really is \n>> a little hack.\n>>\n>> Situation:\n>> Main Application can't or is not willing to add a manifest file into \n>> its resources.\n>>\n>> Solution:\n>> Start a small executable (which has a manifest file compiled into its \n>> resources), let it read the OS Version and code the Version into its \n>> return Code.\n\n> Thanks for sharing.\n\nYou are welcome.\n\n> Having to compile another tool just for that seems like a very high price to pay, especially since we don't have any C# code in the tree. I'm not even sure that compiling this wouldn't need additional requirements and/or if it would work on our oldest supported Windows versions.\n\nWith \"translate it into C\" I meant \"tread it as pseudo code, for a solution in plain C\" (e.g. substitude Environment.OSVersion with IsWindowsVersionOrGreater or GetVersion )\n\nOn Wed, Apr 27, 2022 at 05:13:12PM +0900, Michael Paquier wrote:\n> On Tue, Apr 26, 2022 at 12:54:35PM +0800, Julien Rouhaud wrote:\n> > so I'm still on the opinion that we should unconditionally use the \n> > FILE_MAP_LARGE_PAGES flag if it's defined and call it a day.\n> \n> Are we sure that this is not going to cause failures in environments \n> where the flag is not supported?\n\nI'm not that familiar with the Microsoft OS or C (that's why I haven't migrated the c# code to C in the first place) to have a clear answer to that question.\n\nIf there is any risk and you want to avoid it, I can share a search result when I faced the same issue for our application. 
I declined this solution in favor of the previously shared one.\nIt's from NUnit (and needs migration to C as well - but since it just involves the Registry this should be pretty forward).\nJust in case the Framework is not known: NUnit is the most popular .Net port of the Unit Testing Framework JUnit. There exits a C port too (CUnit) Maybe in there you can find an OS Version check too.\n\n// Copyright (c) Charlie Poole, Rob Prouse and Contributors. MIT License - see LICENSE.txt\n[...]\nnamespace NUnit.Framework.Internal\n{\n [SecuritySafeCritical]\n public class OSPlatform\n {\n[...]\n /// <summary>\n /// Gets the actual OS Version, not the incorrect value that might be\n /// returned for Win 8.1 and Win 10\n /// </summary>\n /// <remarks>\n /// If an application is not manifested as Windows 8.1 or Windows 10,\n /// the version returned from Environment.OSVersion will not be 6.3 and 10.0\n /// respectively, but will be 6.2 and 6.3. The correct value can be found in\n /// the registry.\n /// </remarks>\n /// <param name=\"version\">The original version</param>\n /// <returns>The correct OS version</returns>\n private static Version GetWindows81PlusVersion(Version version)\n {\n try\n {\n using (var key = Registry.LocalMachine.OpenSubKey(@\"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\"))\n {\n if (key != null)\n {\n var buildStr = key.GetValue(\"CurrentBuildNumber\") as string;\n int.TryParse(buildStr, out var build);\n\n // These two keys are in Windows 10 only and are DWORDS\n var major = key.GetValue(\"CurrentMajorVersionNumber\") as int?;\n var minor = key.GetValue(\"CurrentMinorVersionNumber\") as int?;\n if (major.HasValue && minor.HasValue)\n {\n return new Version(major.Value, minor.Value, build);\n }\n\n // If we get here, we are not Windows 10, so we are Windows 8\n // or 8.1. 8.1 might report itself as 6.2, but will have 6.3\n // in the registry. 
We can't do this earlier because for backwards\n // compatibility, Windows 10 also has 6.3 for this key.\n var currentVersion = key.GetValue(\"CurrentVersion\") as string;\n if(currentVersion == \"6.3\")\n {\n return new Version(6, 3, build);\n }\n }\n }\n }\n catch (Exception)\n {\n }\n return version;\n }\n[...]\n }\n}\n\nFinally, my reasoning to use the executable solution in favor of the NUnit one: \nI found no guarantee from Microsoft regarding the keys and values in the registry - hence a change with an update or in a newer Windows is not likely, but still possible. That's no problem for a heavily used and supported framework like NUnit - they are likely to adopt within days of a new Windows release. I on the other hand wanted a solution with small to no support. That's why I decided to implement a solution that's as in line as possible with the official Microsoft advice for targeting newer OS Versions.\n\nBest regards\nWilm.\n\n\n",
"msg_date": "Thu, 28 Apr 2022 09:31:17 +0000",
"msg_from": "Wilm Hoyer <W.Hoyer@dental-vision.de>",
"msg_from_op": false,
"msg_subject": "AW: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "On Tue, Apr 26, 2022 at 12:54:35PM +0800, Julien Rouhaud wrote:\n> Their API is entirely useless, so I'm still on the opinion that we should\n> unconditionally use the FILE_MAP_LARGE_PAGES flag if it's defined and call it a\n> day.\n\nNow that the minimal runtime version is Windows 10 in v16~ thanks to\n495ed0e, we could be much more aggressive and do the attached, which\nis roughly what Thomas has proposed upthread at the exception of\nassuming that FILE_MAP_LARGE_PAGES always exists, because updates are\nforced by MS in this environment. We could make it conditional, of\ncourse, with an extra #ifdef painting.\n--\nMichael",
"msg_date": "Fri, 8 Jul 2022 07:38:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "On Sun, Mar 27, 2022 at 12:07:57AM +1300, Thomas Munro wrote:\n> Some question I have: is FILE_MAP_LARGE PAGES a macro? We claim to\n> support all those ancient zombie OSes like Windows 7, or maybe it's\n> even XP for 11, and this has to be back-patched to 11, so we might\n> need to make it conditional. But conditional on what? For example,\n> does something like the attached work (untested)? What happens if a <\n> 1703 kernel sees this flag, does it reject it or ignore it?\n\nI have been looking at this thread, and found an answer in an example\nof application creating a map object with large pages in [1]:\n\"This flag is ignored on OS versions before Windows 10, version 1703.\"\n\nSo based on that I think that we could just apply and backpatch what\nyou have here. This issue is much easier to reason about on HEAD\nwhere we just care about Win >= 10, and we've be rather careful with\nchanges like that when it came to Windows. Any objections about doing \na backpatch? I'd like to do so after an extra lookup, if there are no\nobjections. Or would folks prefer a HEAD-only fix for now?\n\n[1]: https://docs.microsoft.com/en-us/windows/win32/memory/creating-a-file-mapping-using-large-pages?source=recommendations\n--\nMichael",
"msg_date": "Fri, 16 Sep 2022 21:51:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> So based on that I think that we could just apply and backpatch what\n> you have here. This issue is much easier to reason about on HEAD\n> where we just care about Win >= 10, and we've be rather careful with\n> changes like that when it came to Windows. Any objections about doing \n> a backpatch? I'd like to do so after an extra lookup, if there are no\n> objections. Or would folks prefer a HEAD-only fix for now?\n\nLet's just fix it in HEAD. I think the risk/reward ratio isn't very\ngood here.\n\n(I'd be particularly against changing this in v10, because 10.23 will\nbe the last one; there will be no second chance if we ship it broken.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 16 Sep 2022 10:29:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later,\n huge_pages doesn't work."
},
{
"msg_contents": "On Fri, Sep 16, 2022 at 10:29:38AM -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > So based on that I think that we could just apply and backpatch what\n> > you have here. This issue is much easier to reason about on HEAD\n> > where we just care about Win >= 10, and we've be rather careful with\n> > changes like that when it came to Windows. Any objections about doing\n> > a backpatch? I'd like to do so after an extra lookup, if there are no\n> > objections. Or would folks prefer a HEAD-only fix for now?\n>\n> Let's just fix it in HEAD. I think the risk/reward ratio isn't very\n> good here.\n>\n> (I'd be particularly against changing this in v10, because 10.23 will\n> be the last one; there will be no second chance if we ship it broken.)\n\n+1\n\n\n",
"msg_date": "Fri, 16 Sep 2022 22:36:12 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
},
{
"msg_contents": "On Fri, Sep 16, 2022 at 10:36:12PM +0800, Julien Rouhaud wrote:\n> On Fri, Sep 16, 2022 at 10:29:38AM -0400, Tom Lane wrote:\n>> Let's just fix it in HEAD. I think the risk/reward ratio isn't very\n>> good here.\n>>\n>> (I'd be particularly against changing this in v10, because 10.23 will\n>> be the last one; there will be no second chance if we ship it broken.)\n> \n> +1\n\nOkay, fine by me. I have applied that only on HEAD, then.\n--\nMichael",
"msg_date": "Sat, 17 Sep 2022 15:41:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17448: In Windows 10, version 1703 and later, huge_pages\n doesn't work."
}
] |
[
{
"msg_contents": "\nHi, hackers\n\nWhen I try to get total size of partition tables though partitioned table\nname using pg_relation_size(), it always returns zero. I can use the\nfollowing SQL to get total size of partition tables, however, it is a bit\ncomplex.\n\n SELECT\n pg_size_pretty(sum(pg_relation_size(i.inhrelid)))\n FROM\n pg_class c JOIN pg_inherits i ON c.oid = i.inhparent\n WHERE\n relname = 'parent';\n\nCould we provide a function to get the total size of the partition table\nthough the partitioned table name? Maybe we can extend\nthe pg_relation_size() to get the total size of partition tables through\nthe partitioned table name.\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Fri, 25 Mar 2022 20:52:40 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "pg_relation_size on partitioned table"
},
{
"msg_contents": "On 2022-Mar-25, Japin Li wrote:\n\n> Could we provide a function to get the total size of the partition table\n> though the partitioned table name? Maybe we can extend\n> the pg_relation_size() to get the total size of partition tables through\n> the partitioned table name.\n\nDoes \\dP+ do what you need?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La espina, desde que nace, ya pincha\" (Proverbio africano)\n\n\n",
"msg_date": "Fri, 25 Mar 2022 13:59:13 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_relation_size on partitioned table"
},
{
"msg_contents": "On Fri, Mar 25, 2022 at 6:23 PM Japin Li <japinli@hotmail.com> wrote:\n>\n> Hi, hackers\n>\n> When I try to get total size of partition tables though partitioned table\n> name using pg_relation_size(), it always returns zero. I can use the\n> following SQL to get total size of partition tables, however, it is a bit\n> complex.\n>\n> SELECT\n> pg_size_pretty(sum(pg_relation_size(i.inhrelid)))\n> FROM\n> pg_class c JOIN pg_inherits i ON c.oid = i.inhparent\n> WHERE\n> relname = 'parent';\n>\n> Could we provide a function to get the total size of the partition table\n> though the partitioned table name? Maybe we can extend\n> the pg_relation_size() to get the total size of partition tables through\n> the partitioned table name.\n\nIf we want to have it in the core, why can't it just be a function (in\nsystem_functions.sql) something like below? Not everyone, would know\nhow to get partition relation size, especially whey they are not using\npsql, they can't use the short forms that it provides.\n\nCREATE OR REPLACE FUNCTION pg_partition_relation_size(regclass)\n RETURNS bigint\n LANGUAGE sql\n PARALLEL SAFE STRICT COST 1\nBEGIN ATOMIC\n SELECT\n pg_size_pretty(sum(pg_relation_size(i.inhrelid)))\n FROM\n pg_class c JOIN pg_inherits i ON c.oid = i.inhparent\n WHERE\n relname = '$1';\nEND;\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 25 Mar 2022 18:51:08 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_relation_size on partitioned table"
},
{
"msg_contents": "\nOn Fri, 25 Mar 2022 at 20:59, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Mar-25, Japin Li wrote:\n>\n>> Could we provide a function to get the total size of the partition table\n>> though the partitioned table name? Maybe we can extend\n>> the pg_relation_size() to get the total size of partition tables through\n>> the partitioned table name.\n>\n> Does \\dP+ do what you need?\n\nThanks for your quick response!\n\nI find the \\dP+ use the following SQL:\n\n SELECT n.nspname as \"Schema\",\n c.relname as \"Name\",\n pg_catalog.pg_get_userbyid(c.relowner) as \"Owner\",\n CASE c.relkind WHEN 'p' THEN 'partitioned table' WHEN 'I' THEN 'partitioned index' END as \"Type\",\n inh.inhparent::pg_catalog.regclass as \"Parent name\",\n c2.oid::pg_catalog.regclass as \"Table\",\n s.tps as \"Total size\",\n pg_catalog.obj_description(c.oid, 'pg_class') as \"Description\"\n FROM pg_catalog.pg_class c\n LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace\n LEFT JOIN pg_catalog.pg_index i ON i.indexrelid = c.oid\n LEFT JOIN pg_catalog.pg_class c2 ON i.indrelid = c2.oid\n LEFT JOIN pg_catalog.pg_inherits inh ON c.oid = inh.inhrelid,\n LATERAL (SELECT pg_catalog.pg_size_pretty(sum(\n CASE WHEN ppt.isleaf AND ppt.level = 1\n THEN pg_catalog.pg_table_size(ppt.relid) ELSE 0 END)) AS dps,\n pg_catalog.pg_size_pretty(sum(pg_catalog.pg_table_size(ppt.relid))) AS tps\n FROM pg_catalog.pg_partition_tree(c.oid) ppt) s\n WHERE c.relkind IN ('p','I','')\n AND c.relname OPERATOR(pg_catalog.~) '^(parent)$' COLLATE pg_catalog.default\n AND pg_catalog.pg_table_is_visible(c.oid)\n ORDER BY \"Schema\", \"Type\" DESC, \"Parent name\" NULLS FIRST, \"Name\";\n\n\npg_table_size() includes \"main\", \"vm\", \"fsm\", \"init\" and \"toast\", however,\nI only care about the \"main\" fork.\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Fri, 25 Mar 2022 21:28:42 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_relation_size on partitioned table"
},
{
"msg_contents": "\nOn Fri, 25 Mar 2022 at 21:21, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Fri, Mar 25, 2022 at 6:23 PM Japin Li <japinli@hotmail.com> wrote:\n>>\n>> Hi, hackers\n>>\n>> When I try to get total size of partition tables though partitioned table\n>> name using pg_relation_size(), it always returns zero. I can use the\n>> following SQL to get total size of partition tables, however, it is a bit\n>> complex.\n>>\n>> SELECT\n>> pg_size_pretty(sum(pg_relation_size(i.inhrelid)))\n>> FROM\n>> pg_class c JOIN pg_inherits i ON c.oid = i.inhparent\n>> WHERE\n>> relname = 'parent';\n>>\n>> Could we provide a function to get the total size of the partition table\n>> though the partitioned table name? Maybe we can extend\n>> the pg_relation_size() to get the total size of partition tables through\n>> the partitioned table name.\n>\n> If we want to have it in the core, why can't it just be a function (in\n> system_functions.sql) something like below? Not everyone, would know\n> how to get partition relation size, especially whey they are not using\n> psql, they can't use the short forms that it provides.\n>\n> CREATE OR REPLACE FUNCTION pg_partition_relation_size(regclass)\n> RETURNS bigint\n> LANGUAGE sql\n> PARALLEL SAFE STRICT COST 1\n> BEGIN ATOMIC\n> SELECT\n> pg_size_pretty(sum(pg_relation_size(i.inhrelid)))\n> FROM\n> pg_class c JOIN pg_inherits i ON c.oid = i.inhparent\n> WHERE\n> relname = '$1';\n> END;\n>\n\nYeah, it's a good idea! How about add a fork parameter?\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Fri, 25 Mar 2022 21:35:42 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_relation_size on partitioned table"
},
{
"msg_contents": "On Fri, 25 Mar 2022 at 21:21, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Fri, Mar 25, 2022 at 6:23 PM Japin Li <japinli@hotmail.com> wrote:\n>>\n>> Hi, hackers\n>>\n>> When I try to get total size of partition tables though partitioned table\n>> name using pg_relation_size(), it always returns zero. I can use the\n>> following SQL to get total size of partition tables, however, it is a bit\n>> complex.\n>>\n>> SELECT\n>> pg_size_pretty(sum(pg_relation_size(i.inhrelid)))\n>> FROM\n>> pg_class c JOIN pg_inherits i ON c.oid = i.inhparent\n>> WHERE\n>> relname = 'parent';\n>>\n>> Could we provide a function to get the total size of the partition table\n>> though the partitioned table name? Maybe we can extend\n>> the pg_relation_size() to get the total size of partition tables through\n>> the partitioned table name.\n>\n> If we want to have it in the core, why can't it just be a function (in\n> system_functions.sql) something like below? Not everyone, would know\n> how to get partition relation size, especially whey they are not using\n> psql, they can't use the short forms that it provides.\n>\n> CREATE OR REPLACE FUNCTION pg_partition_relation_size(regclass)\n> RETURNS bigint\n> LANGUAGE sql\n> PARALLEL SAFE STRICT COST 1\n> BEGIN ATOMIC\n> SELECT\n> pg_size_pretty(sum(pg_relation_size(i.inhrelid)))\n> FROM\n> pg_class c JOIN pg_inherits i ON c.oid = i.inhparent\n> WHERE\n> relname = '$1';\n> END;\n>\n\nI add two functions (as suggested by Bharath Rupireddy)\npg_partition_relation_size and pg_partition_table_size to get partition tables\nsize through partitioned table name. It may reduce the complexity to get the\nsize of partition tables.\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Fri, 25 Mar 2022 22:46:58 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_relation_size on partitioned table"
},
{
"msg_contents": "On Fri, Mar 25, 2022 at 08:52:40PM +0800, Japin Li wrote:\n> When I try to get total size of partition tables though partitioned table\n> name using pg_relation_size(), it always returns zero. I can use the\n> following SQL to get total size of partition tables, however, it is a bit\n> complex.\n\nThis doesn't handle multiple levels of partitioning, as \\dP+ already does.\n\nAny new function should probably be usable by \\dP+ (although it would also need\nto support older server versions for another ~10 years).\n\n> SELECT pg_size_pretty(sum(pg_relation_size(i.inhrelid)))\n> FROM pg_class c JOIN pg_inherits i ON c.oid = i.inhparent\n> WHERE relname = 'parent';\n\n> Could we provide a function to get the total size of the partition table\n> though the partitioned table name? Maybe we can extend\n> the pg_relation_size() to get the total size of partition tables through\n> the partitioned table name.\n\nSometimes people would want the size of the table itself and not the size of\nits partitions, so it's not good to change pg_relation_size().\n\nOTOH, pg_total_relation_size() shows a table size including toast and indexes.\nToast are an implementation detail, which is intended to be hidden from\napplication developers. And that's a goal for partitioning, too. So maybe it\nwould make sense if it showed the size of the table, toast, indexes, *and*\npartitions (but not legacy inheritance children).\n\nI know I'm not the only one who can't keep track of what all the existing\npg_*_size functions include, so adding more functions will also add some\nadditional confusion, unless, perhaps, it took arguments indicating what to\ninclude, like pg_total_relation_size(partitions=>false, toast=>true,\nindexes=>true, fork=>main).\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 25 Mar 2022 10:27:42 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_relation_size on partitioned table"
},
{
"msg_contents": "On Fri, Mar 25, 2022 at 08:52:40PM +0800, Japin Li wrote:\n> Could we provide a function to get the total size of the partition table\n> though the partitioned table name? Maybe we can extend\n> the pg_relation_size() to get the total size of partition tables through\n> the partitioned table name.\n\nThere are already many replies on this thread, but nobody has\nmentioned pg_partition_tree() yet, so here you go. You could use that\nin combination with pg_relation_size() to get the whole size of a tree\ndepending on your needs.\n--\nMichael",
"msg_date": "Sat, 26 Mar 2022 15:05:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_relation_size on partitioned table"
},
{
"msg_contents": "On Sat, Mar 26, 2022 at 11:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Mar 25, 2022 at 08:52:40PM +0800, Japin Li wrote:\n> > Could we provide a function to get the total size of the partition table\n> > though the partitioned table name? Maybe we can extend\n> > the pg_relation_size() to get the total size of partition tables through\n> > the partitioned table name.\n>\n> There are already many replies on this thread, but nobody has\n> mentioned pg_partition_tree() yet, so here you go. You could use that\n> in combination with pg_relation_size() to get the whole size of a tree\n> depending on your needs.\n\nYeah. The docs have a note on using it for finding partitioned table size:\n\n <para>\n For example, to check the total size of the data contained in a\n partitioned table <structname>measurement</structname>, one could use the\n following query:\n<programlisting>\nSELECT pg_size_pretty(sum(pg_relation_size(relid))) AS total_size\n FROM pg_partition_tree('measurement');\n</programlisting>\n </para>\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Sat, 26 Mar 2022 19:46:29 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_relation_size on partitioned table"
},
{
"msg_contents": "\nOn Sat, 26 Mar 2022 at 22:16, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Sat, Mar 26, 2022 at 11:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On Fri, Mar 25, 2022 at 08:52:40PM +0800, Japin Li wrote:\n>> > Could we provide a function to get the total size of the partition table\n>> > though the partitioned table name? Maybe we can extend\n>> > the pg_relation_size() to get the total size of partition tables through\n>> > the partitioned table name.\n>>\n>> There are already many replies on this thread, but nobody has\n>> mentioned pg_partition_tree() yet, so here you go. You could use that\n>> in combination with pg_relation_size() to get the whole size of a tree\n>> depending on your needs.\n>\n> Yeah. The docs have a note on using it for finding partitioned table size:\n>\n> <para>\n> For example, to check the total size of the data contained in a\n> partitioned table <structname>measurement</structname>, one could use the\n> following query:\n> <programlisting>\n> SELECT pg_size_pretty(sum(pg_relation_size(relid))) AS total_size\n> FROM pg_partition_tree('measurement');\n> </programlisting>\n> </para>\n>\n\nThanks for all of you! The above code does what I want.\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Sun, 27 Mar 2022 11:05:58 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_relation_size on partitioned table"
}
] |
[
{
"msg_contents": "Hi,\n\nThe function GetWalRcvWriteRecPtr isn't being used anywhere, however\npg_atomic_read_u64(&walrcv->writtenUpto); (reading writtenUpto without\nspinlock) is being used directly in pg_stat_get_wal_receiver\nwalreceiver.c. We either make use of the function instead of\npg_atomic_read_u64(&walrcv->writtenUpto); or remove it. Since there's\nonly one function using walrcv->writtenUpto right now, I prefer to\nremove the function to save some LOC (13).\n\nAttaching patch. Thoughts?\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Sat, 26 Mar 2022 10:51:15 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Remove an unused function GetWalRcvWriteRecPtr"
},
{
"msg_contents": "On Sat, Mar 26, 2022 at 10:51:15AM +0530, Bharath Rupireddy wrote:\n> The function GetWalRcvWriteRecPtr isn't being used anywhere, however\n> pg_atomic_read_u64(&walrcv->writtenUpto); (reading writtenUpto without\n> spinlock) is being used directly in pg_stat_get_wal_receiver\n> walreceiver.c. We either make use of the function instead of\n> pg_atomic_read_u64(&walrcv->writtenUpto); or remove it. Since there's\n> only one function using walrcv->writtenUpto right now, I prefer to\n> remove the function to save some LOC (13).\n> \n> Attaching patch. Thoughts?\n\nThis could be used by some external module, no?\n--\nMichael",
"msg_date": "Sat, 26 Mar 2022 14:52:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Remove an unused function GetWalRcvWriteRecPtr"
},
{
"msg_contents": "On Sat, Mar 26, 2022 at 02:52:29PM +0900, Michael Paquier wrote:\n> On Sat, Mar 26, 2022 at 10:51:15AM +0530, Bharath Rupireddy wrote:\n> > The function GetWalRcvWriteRecPtr isn't being used anywhere, however\n> > pg_atomic_read_u64(&walrcv->writtenUpto); (reading writtenUpto without\n> > spinlock) is being used directly in pg_stat_get_wal_receiver\n> > walreceiver.c. We either make use of the function instead of\n> > pg_atomic_read_u64(&walrcv->writtenUpto); or remove it. Since there's\n> > only one function using walrcv->writtenUpto right now, I prefer to\n> > remove the function to save some LOC (13).\n> > \n> > Attaching patch. Thoughts?\n> \n> This could be used by some external module, no?\n\nMaybe, but WalRcv is exposed so if an external module needs it it could still\nmaintain its own version of GetWalRcvWriteRecPtr.\n\n\n",
"msg_date": "Sat, 26 Mar 2022 15:25:25 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove an unused function GetWalRcvWriteRecPtr"
},
{
"msg_contents": "On Sat, Mar 26, 2022 at 12:55 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Sat, Mar 26, 2022 at 02:52:29PM +0900, Michael Paquier wrote:\n> > On Sat, Mar 26, 2022 at 10:51:15AM +0530, Bharath Rupireddy wrote:\n> > > The function GetWalRcvWriteRecPtr isn't being used anywhere, however\n> > > pg_atomic_read_u64(&walrcv->writtenUpto); (reading writtenUpto without\n> > > spinlock) is being used directly in pg_stat_get_wal_receiver\n> > > walreceiver.c. We either make use of the function instead of\n> > > pg_atomic_read_u64(&walrcv->writtenUpto); or remove it. Since there's\n> > > only one function using walrcv->writtenUpto right now, I prefer to\n> > > remove the function to save some LOC (13).\n> > >\n> > > Attaching patch. Thoughts?\n> >\n> > This could be used by some external module, no?\n>\n> Maybe, but WalRcv is exposed so if an external module needs it it could still\n> maintain its own version of GetWalRcvWriteRecPtr.\n\nYes. And the core extensions aren't using GetWalRcvWriteRecPtr. IMO,\nlet's not maintain that function.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Sat, 26 Mar 2022 19:40:41 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove an unused function GetWalRcvWriteRecPtr"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Sat, Mar 26, 2022 at 02:52:29PM +0900, Michael Paquier wrote:\n>> This could be used by some external module, no?\n\n> Maybe, but WalRcv is exposed so if an external module needs it it could still\n> maintain its own version of GetWalRcvWriteRecPtr.\n\nWe'd need to mark WalRcv as PGDLLIMPORT if we want to take that\nseriously.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 26 Mar 2022 10:56:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove an unused function GetWalRcvWriteRecPtr"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-26 10:51:15 +0530, Bharath Rupireddy wrote:\n> The function GetWalRcvWriteRecPtr isn't being used anywhere, however\n> pg_atomic_read_u64(&walrcv->writtenUpto); (reading writtenUpto without\n> spinlock) is being used directly in pg_stat_get_wal_receiver\n> walreceiver.c. We either make use of the function instead of\n> pg_atomic_read_u64(&walrcv->writtenUpto); or remove it. Since there's\n> only one function using walrcv->writtenUpto right now, I prefer to\n> remove the function to save some LOC (13).\n\n-1. I think it's a perfectly reasonable function to have, it doesn't cause\narchitectural / maintenance issues to have it and there's several plausible\nfuture uses for it (moving fsyncing of received WAL to different process,\noptionally allowing logical decoding up to the written LSN, reporting function\nfor monitoring on the standby itself).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 26 Mar 2022 10:27:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Remove an unused function GetWalRcvWriteRecPtr"
},
{
"msg_contents": "On Sat, Mar 26, 2022 at 10:57 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-03-26 10:51:15 +0530, Bharath Rupireddy wrote:\n> > The function GetWalRcvWriteRecPtr isn't being used anywhere, however\n> > pg_atomic_read_u64(&walrcv->writtenUpto); (reading writtenUpto without\n> > spinlock) is being used directly in pg_stat_get_wal_receiver\n> > walreceiver.c. We either make use of the function instead of\n> > pg_atomic_read_u64(&walrcv->writtenUpto); or remove it. Since there's\n> > only one function using walrcv->writtenUpto right now, I prefer to\n> > remove the function to save some LOC (13).\n>\n> -1. I think it's a perfectly reasonable function to have, it doesn't cause\n> architectural / maintenance issues to have it and there's several plausible\n> future uses for it (moving fsyncing of received WAL to different process,\n> optionally allowing logical decoding up to the written LSN, reporting function\n> for monitoring on the standby itself).\n\nGiven the use-cases that it may have in future, I can use that\nfunction right now in pg_stat_get_wal_receiver instead of\npg_atomic_read_u64(&WalRcv->writtenUpto);\n\nI was also thinking of a function to expose it but backed off because\nit can't be used reliably for data integrity checks or do we want to\nspecify about this in the functions docs as well leave it to the user?\nThoughts?\n\n /*\n * Like flushedUpto, but advanced after writing and before flushing,\n * without the need to acquire the spin lock. Data can be read by another\n * process up to this point, but shouldn't be used for data integrity\n * purposes.\n */\n pg_atomic_uint64 writtenUpto;\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Sat, 26 Mar 2022 23:09:17 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove an unused function GetWalRcvWriteRecPtr"
},
{
"msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> On Sat, Mar 26, 2022 at 10:57 PM Andres Freund <andres@anarazel.de> wrote:\n>> -1. I think it's a perfectly reasonable function to have, it doesn't cause\n>> architectural / maintenance issues to have it and there's several plausible\n>> future uses for it (moving fsyncing of received WAL to different process,\n>> optionally allowing logical decoding up to the written LSN, reporting function\n>> for monitoring on the standby itself).\n\n> Given the use-cases that it may have in future, I can use that\n> function right now in pg_stat_get_wal_receiver instead of\n> pg_atomic_read_u64(&WalRcv->writtenUpto);\n\nI do not really see a reason to change anything at all here.\nWe have far better things to spend our (finite) time on.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 26 Mar 2022 13:52:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove an unused function GetWalRcvWriteRecPtr"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nWe are pleased to announce the Release Management Team (RMT) (cc'd) for \r\nthe PostgreSQL 15 release:\r\n\r\n - John Naylor\r\n - Jonathan Katz\r\n - Michael Paquier\r\n\r\nYou can find information about the responsibilities of the RMT here:\r\n\r\n https://wiki.postgresql.org/wiki/Release_Management_Team\r\n\r\nAdditionally, the RMT has set the feature freeze date to be April 7, \r\n2022. This is the last day to commit features for PostgreSQL 15. In \r\nother words, no new PostgreSQL 15 feature can be committed after April 8 \r\n0:00, 2022 AoE[1].\r\n\r\nYou can track open items for the PostgreSQL 15 release here:\r\n\r\n https://wiki.postgresql.org/wiki/PostgreSQL_15_Open_Items\r\n\r\nPlease let us know if you have any questions.\r\n\r\nOn behalf of the PG15 RMT,\r\n\r\nJonathan\r\n\r\n[1] https://en.wikipedia.org/wiki/Anywhere_on_Earth",
"msg_date": "Sat, 26 Mar 2022 11:10:55 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 15 Release Management Team (RMT) + Feature Freeze"
},
{
"msg_contents": "On 3/26/22 11:10 AM, Jonathan S. Katz wrote:\r\n\r\n> Additionally, the RMT has set the feature freeze date to be April 7, \r\n> 2022. This is the last day to commit features for PostgreSQL 15. In \r\n> other words, no new PostgreSQL 15 feature can be committed after April 8 \r\n> 0:00, 2022 AoE[1].\r\n> \r\n> [1] https://en.wikipedia.org/wiki/Anywhere_on_Earth\r\n\r\nA reminder that feature freeze takes effect in two days.\r\n\r\nBest wishes on your outstanding patches!\r\n\r\nJonathan",
"msg_date": "Tue, 5 Apr 2022 12:47:42 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 15 Release Management Team (RMT) + Feature Freeze"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile reading code around I just noticed that I failed to adapt a comment a\ncouple of lines above a removed line in 0f61727b75b9. Patch attached.",
"msg_date": "Sun, 27 Mar 2022 00:01:17 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Invalid comment in ParallelQueryMain"
},
{
"msg_contents": "On Sat, 26 Mar 2022 at 17:01, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> While reading code around I just noticed that I failed to adapt a comment a\n> couple of lines above a removed line in 0f61727b75b9. Patch attached.\n\n+1, seems OK to me.\n\n\n",
"msg_date": "Sat, 26 Mar 2022 17:12:05 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Invalid comment in ParallelQueryMain"
},
{
"msg_contents": "On Sat, Mar 26, 2022 at 05:12:05PM +0100, Matthias van de Meent wrote:\n> On Sat, 26 Mar 2022 at 17:01, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>> While reading code around I just noticed that I failed to adapt a comment a\n>> couple of lines above a removed line in 0f61727b75b9. Patch attached.\n> \n> +1, seems OK to me.\n\nYep. Will fix.\n--\nMichael",
"msg_date": "Sun, 27 Mar 2022 16:13:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Invalid comment in ParallelQueryMain"
},
{
"msg_contents": "Hi,\n\nOn Sun, Mar 27, 2022 at 04:13:00PM +0900, Michael Paquier wrote:\n> On Sat, Mar 26, 2022 at 05:12:05PM +0100, Matthias van de Meent wrote:\n> > On Sat, 26 Mar 2022 at 17:01, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >> While reading code around I just noticed that I failed to adapt a comment a\n> >> couple of lines above a removed line in 0f61727b75b9. Patch attached.\n> > \n> > +1, seems OK to me.\n> \n> Yep. Will fix.\n\nFor the archive's sake, this has been pushed as-of\n411b91360f2711e36782b68cd0c9bc6de44d3384.\n\nThanks!\n\n\n",
"msg_date": "Mon, 28 Mar 2022 11:31:02 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Invalid comment in ParallelQueryMain"
}
] |
[
{
"msg_contents": "Several of Andres' buildfarm animals have recently started to whine\nthat \"performing pointer subtraction with a null pointer has undefined\nbehavior\" for assorted places in freepage.c.\n\n From a mathematical standpoint, this astonishes me: \"x - 0 = x\" is a\ntautology. So I'm a bit inclined to say \"you're full of it\" and disable\n-Wnull-pointer-subtraction. On the other hand, all of the occurrences\nare in calls of relptr_store with a constant-NULL third argument.\nSo we could silence them without too much pain by adjusting that macro\nto special-case NULL. Or maybe we should change these call sites to do\nsomething different, because this is surely abusing the intent of\nrelptr_store.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 26 Mar 2022 12:04:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Pointer subtraction with a null pointer"
},
{
"msg_contents": "I wrote:\n> Several of Andres' buildfarm animals have recently started to whine\n> that \"performing pointer subtraction with a null pointer has undefined\n> behavior\" for assorted places in freepage.c.\n\nAh, sorry, I meant to include a link:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=mylodon&dt=2022-03-26%2000%3A02%3A10&stg=make\n\nThis code is old, but mylodon wasn't doing that a week ago, so\nAndres must've updated the compiler and/or changed its options.\nkestrel and olingo are reporting it too, but they're new.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 26 Mar 2022 12:13:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Pointer subtraction with a null pointer"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-26 12:04:54 -0400, Tom Lane wrote:\n> Several of Andres' buildfarm animals have recently started to whine\n> that \"performing pointer subtraction with a null pointer has undefined\n> behavior\" for assorted places in freepage.c.\n>\n> From a mathematical standpoint, this astonishes me: \"x - 0 = x\" is a\n> tautology.\n\nI don't think that's quite what the warning is warning about. The C standard\ndoesn't allow pointer arithmetic between arbitrary pointers, they have to be\nto the same \"object\" (plus a trailing array element).\n\nhttp://www.open-std.org/jtc1/sc22/wg14/www/docs/n1548.pdf 6.5.6 Additive\noperators, 8/9\n\n When two pointers are subtracted, both shall point to elements of the same array object,\n or one past the last element of the array object; the result is the difference of the\n subscripts of the two array elements.\n\nNULL can never be part of the same \"array object\" or one past past the last\nelement as the pointer it is subtracted from. Hence the undefined beaviour.\n\n\n> Or maybe we should change these call sites to do something different,\n> because this is surely abusing the intent of relptr_store.\n\nI think a relptr_zero(), relptr_setnull() or such would make sense. That'd get\nrid of the need for the cast as well.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 26 Mar 2022 09:24:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Pointer subtraction with a null pointer"
},
{
"msg_contents": "On Sat, 26 Mar 2022 at 12:24, Andres Freund <andres@anarazel.de> wrote:\n\n\n> NULL can never be part of the same \"array object\" or one past past the last\n> element as the pointer it is subtracted from. Hence the undefined beaviour.\n>\n\nEven more fundamentally, NULL is not 0 in any ordinary mathematical sense,\neven though it can be written 0 in source code and is often (but not\nalways) represented in memory as an all-0s bit pattern. I'm not at all\nsurprised to learn that arithmetic involving NULL is undefined.\n\nOn Sat, 26 Mar 2022 at 12:24, Andres Freund <andres@anarazel.de> wrote: \nNULL can never be part of the same \"array object\" or one past past the last\nelement as the pointer it is subtracted from. Hence the undefined beaviour.Even more fundamentally, NULL is not 0 in any ordinary mathematical sense, even though it can be written 0 in source code and is often (but not always) represented in memory as an all-0s bit pattern. I'm not at all surprised to learn that arithmetic involving NULL is undefined.",
"msg_date": "Sat, 26 Mar 2022 12:34:10 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pointer subtraction with a null pointer"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-26 12:13:12 -0400, Tom Lane wrote:\n> This code is old, but mylodon wasn't doing that a week ago, so\n> Andres must've updated the compiler and/or changed its options.\n\nYep, updated it to clang 13. It's a warning present in 13, but not in 12.\n\nI'll update it to 14 soon, now that that's released. It still has that\nwarning, so it's not going to help us avoid the warning.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 26 Mar 2022 09:34:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Pointer subtraction with a null pointer"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-03-26 12:13:12 -0400, Tom Lane wrote:\n>> This code is old, but mylodon wasn't doing that a week ago, so\n>> Andres must've updated the compiler and/or changed its options.\n\n> Yep, updated it to clang 13. It's a warning present in 13, but not in 12.\n\nOK, that answers that.\n\nAfter more thought I agree that replacing these relptr_store calls\nwith something else would be the better solution. I'll prepare a\npatch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 26 Mar 2022 12:37:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Pointer subtraction with a null pointer"
},
{
"msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2022-03-26 12:13:12 -0400, Tom Lane wrote:\n>>> This code is old, but mylodon wasn't doing that a week ago, so\n>>> Andres must've updated the compiler and/or changed its options.\n\n>> Yep, updated it to clang 13. It's a warning present in 13, but not in 12.\n\n> OK, that answers that.\n\n... Actually, after looking closer, I misread what our code is doing.\nThese call sites are trying to set the relptr value to \"null\" (zero),\nand AFAICS it should be allowed:\n\nfreepage.c:188:2: warning: performing pointer subtraction with a null pointer has undefined behavior [-Wnull-pointer-subtraction]\n relptr_store(base, fpm->btree_root, (FreePageBtree *) NULL);\n ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n../../../../src/include/utils/relptr.h:63:59: note: expanded from macro 'relptr_store'\n (rp).relptr_off = ((val) == NULL ? 0 : ((char *) (val)) - (base)))\n ~~~~~~~~~~~~~~~~ ^\n\nclang is complaining about the subtraction despite it being inside\na conditional arm that cannot be reached when val is null. It's hard\nto see how that isn't a flat-out compiler bug.\n\nHowever, granting that it isn't going to get fixed right away,\nwe could replace these call sites with \"relptr_store_null()\",\nand maybe get rid of the conditional in relptr_store().\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 26 Mar 2022 13:23:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Pointer subtraction with a null pointer"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-26 13:23:34 -0400, Tom Lane wrote:\n> I wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> >> On 2022-03-26 12:13:12 -0400, Tom Lane wrote:\n> >>> This code is old, but mylodon wasn't doing that a week ago, so\n> >>> Andres must've updated the compiler and/or changed its options.\n>\n> >> Yep, updated it to clang 13. It's a warning present in 13, but not in 12.\n>\n> > OK, that answers that.\n>\n> ... Actually, after looking closer, I misread what our code is doing.\n> These call sites are trying to set the relptr value to \"null\" (zero),\n> and AFAICS it should be allowed:\n>\n> freepage.c:188:2: warning: performing pointer subtraction with a null pointer has undefined behavior [-Wnull-pointer-subtraction]\n> relptr_store(base, fpm->btree_root, (FreePageBtree *) NULL);\n> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> ../../../../src/include/utils/relptr.h:63:59: note: expanded from macro 'relptr_store'\n> (rp).relptr_off = ((val) == NULL ? 0 : ((char *) (val)) - (base)))\n> ~~~~~~~~~~~~~~~~ ^\n>\n> clang is complaining about the subtraction despite it being inside\n> a conditional arm that cannot be reached when val is null.\n\nHuh, yea. I somehow read the conditional branch as guarding against a an\nuninitialized base pointer or such.\n\n\n> It's hard to see how that isn't a flat-out compiler bug.\n\nIt only happens if the NULL is directly passed as an argument to the macro,\nnot if there's an intermediary variable. Argh.\n\n\n#include <stddef.h>\n\n#define relptr_store(base, rp, val) \\\n\t ((rp).relptr_off = ((val) == NULL ? 
0 : ((char *) (val)) - (base)))\n\ntypedef union { struct foo *relptr_type; size_t relptr_off; } relptr;\n\nvoid\nproblem_not_present(relptr *rp, char *base)\n{\n struct foo *val = NULL;\n\n relptr_store(base, *rp, val);\n}\n\nvoid\nproblem_present(relptr *rp, char *base)\n{\n relptr_store(base, *rp, NULL);\n}\n\n\nLooks like that warning is uttered whenever there's a subtraction from a\npointer with NULL, even if the code isn't reachable. Which I guess makes\n*some* sense, outside of macros it's not something that'd ever be reasonable.\n\n\nWonder if we should try to get rid of the problem by also fixing the double\nevaluation of val? I think something like\n\nstatic inline void\nrelptr_store_internal(size_t *off, char *base, char *val)\n{\n if (val == NULL)\n *off = 0;\n else\n *off = val - base;\n}\n\n#ifdef HAVE__BUILTIN_TYPES_COMPATIBLE_P\n#define relptr_store(base, rp, val) \\\n\t(AssertVariableIsOfTypeMacro(base, char *), \\\n\t AssertVariableIsOfTypeMacro(val, __typeof__((rp).relptr_type)), \\\n relptr_store_internal(&(rp).relptr_off, base, (char *) val))\n#else\n...\n\nshould do the trick?\n\nMight also be worth adding an assertion that base < val.\n\n\n> However, granting that it isn't going to get fixed right away,\n> we could replace these call sites with \"relptr_store_null()\",\n> and maybe get rid of the conditional in relptr_store().\n\nAlso would be good with that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 26 Mar 2022 10:49:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Pointer subtraction with a null pointer"
},
{
"msg_contents": "I wrote:\n> clang is complaining about the subtraction despite it being inside\n> a conditional arm that cannot be reached when val is null. It's hard\n> to see how that isn't a flat-out compiler bug.\n> However, granting that it isn't going to get fixed right away,\n> we could replace these call sites with \"relptr_store_null()\",\n> and maybe get rid of the conditional in relptr_store().\n\nI've confirmed that the attached silences the warning with clang\n13.0.0 (on Fedora 35). The store_null notation is not awful, perhaps;\nit makes those lines shorter and more readable.\n\nI'm a bit less enthused about removing the conditional in relptr_store,\nas that forces re-introducing it at a couple of call sites. Perhaps\nwe should leave relptr_store alone ... but then the reason for\nrelptr_store_null is hard to explain except as a workaround for a\nbroken compiler.\n\nI changed the comment suggesting that you could use relptrs with the\n\"process address space\" as a base, because that would presumably mean\nbase == NULL which is going to draw the same warning.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 26 Mar 2022 13:51:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Pointer subtraction with a null pointer"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-26 10:49:53 -0700, Andres Freund wrote:\n> > It's hard to see how that isn't a flat-out compiler bug.\n> \n> It only happens if the NULL is directly passed as an argument to the macro,\n> not if there's an intermediary variable. Argh.\n> \n> \n> #include <stddef.h>\n> \n> #define relptr_store(base, rp, val) \\\n> \t ((rp).relptr_off = ((val) == NULL ? 0 : ((char *) (val)) - (base)))\n> \n> typedef union { struct foo *relptr_type; size_t relptr_off; } relptr;\n> \n> void\n> problem_not_present(relptr *rp, char *base)\n> {\n> struct foo *val = NULL;\n> \n> relptr_store(base, *rp, val);\n> }\n> \n> void\n> problem_present(relptr *rp, char *base)\n> {\n> relptr_store(base, *rp, NULL);\n> }\n> \n> \n> Looks like that warning is uttered whenever there's a subtraction from a\n> pointer with NULL, even if the code isn't reachable. Which I guess makes\n> *some* sense, outside of macros it's not something that'd ever be reasonable.\n\nReported as https://github.com/llvm/llvm-project/issues/54570\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 26 Mar 2022 11:08:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Pointer subtraction with a null pointer"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Wonder if we should try to get rid of the problem by also fixing the double\n> evaluation of val? I think something like\n\nGood idea. The attached also silences the warning, and getting rid\nof the double-eval hazard seems like a net win.\n\n> Might also be worth adding an assertion that base < val.\n\nDid that too. On the whole I like this better.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 26 Mar 2022 14:13:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Pointer subtraction with a null pointer"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-26 14:13:56 -0400, Tom Lane wrote:\n> The attached also silences the warning, and getting rid of the double-eval\n> hazard seems like a net win.\n\nLooks good wrt relptr_store. Maybe we should fix the double-eval hazard in\nrelptr_access too, think that'd be only one left over...\n\n\n> > Might also be worth adding an assertion that base < val.\n> \n> Did that too. On the whole I like this better.\n\nBetter than the relptr_store_null() approach I assume? Agreed, if so.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 26 Mar 2022 11:41:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Pointer subtraction with a null pointer"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Looks good wrt relptr_store. Maybe we should fix the double-eval hazard in\n> relptr_access too, think that'd be only one left over...\n\nHm. Probably not worth the trouble, because it's hard to envision\na situation where rp is not a plain lvalue.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 26 Mar 2022 14:49:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Pointer subtraction with a null pointer"
},
{
"msg_contents": "On 2022-03-26 11:08:59 -0700, Andres Freund wrote:\n> On 2022-03-26 10:49:53 -0700, Andres Freund wrote:\n> > > It's hard to see how that isn't a flat-out compiler bug.\n> >\n> > It only happens if the NULL is directly passed as an argument to the macro,\n> > not if there's an intermediary variable. Argh.\n> >\n> >\n> > #include <stddef.h>\n> >\n> > #define relptr_store(base, rp, val) \\\n> > \t ((rp).relptr_off = ((val) == NULL ? 0 : ((char *) (val)) - (base)))\n> >\n> > typedef union { struct foo *relptr_type; size_t relptr_off; } relptr;\n> >\n> > void\n> > problem_not_present(relptr *rp, char *base)\n> > {\n> > struct foo *val = NULL;\n> >\n> > relptr_store(base, *rp, val);\n> > }\n> >\n> > void\n> > problem_present(relptr *rp, char *base)\n> > {\n> > relptr_store(base, *rp, NULL);\n> > }\n> >\n> >\n> > Looks like that warning is uttered whenever there's a subtraction from a\n> > pointer with NULL, even if the code isn't reachable. Which I guess makes\n> > *some* sense, outside of macros it's not something that'd ever be reasonable.\n>\n> Reported as https://github.com/llvm/llvm-project/issues/54570\n\nAnd it now got fixed. Will obviously be a bit till it reaches a compiler near\nyou...\n\n\n",
"msg_date": "Fri, 3 Jun 2022 09:01:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Pointer subtraction with a null pointer"
}
] |
[
{
"msg_contents": "I chanced to notice that buildfarm member lorikeet has been\nfailing an awful lot lately in the v14 branch, but hardly\nat all in other branches. Here's a log extract from its\nlatest run [1]:\n\n2022-03-26 06:31:47.245 EDT [623eeb93.d202:131] pg_regress/inherit LOG: statement: create table mlparted_tab (a int, b char, c text) partition by list (a);\n2022-03-26 06:31:47.247 EDT [623eeb93.d202:132] pg_regress/inherit LOG: statement: create table mlparted_tab_part1 partition of mlparted_tab for values in (1);\n2022-03-26 06:31:47.254 EDT [623eeb93.d203:60] pg_regress/vacuum LOG: statement: VACUUM FULL pg_class;\n2022-03-26 06:31:47.258 EDT [623eeb92.d201:90] pg_regress/typed_table LOG: statement: SELECT a.attname,\n\t pg_catalog.format_type(a.atttypid, a.atttypmod),\n\t (SELECT pg_catalog.pg_get_expr(d.adbin, d.adrelid, true)\n\t FROM pg_catalog.pg_attrdef d\n\t WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND a.atthasdef),\n\t a.attnotnull,\n\t (SELECT c.collname FROM pg_catalog.pg_collation c, pg_catalog.pg_type t\n\t WHERE c.oid = a.attcollation AND t.oid = a.atttypid AND a.attcollation <> t.typcollation) AS attcollation,\n\t a.attidentity,\n\t a.attgenerated\n\tFROM pg_catalog.pg_attribute a\n\tWHERE a.attrelid = '21770' AND a.attnum > 0 AND NOT a.attisdropped\n\tORDER BY a.attnum;\n*** starting debugger for pid 53762, tid 10536\n2022-03-26 06:32:02.158 EDT [623eeb6c.d0c2:4] LOG: server process (PID 53762) exited with exit code 127\n2022-03-26 06:32:02.158 EDT [623eeb6c.d0c2:5] DETAIL: Failed process was running: create table mlparted_tab_part1 partition of mlparted_tab for values in (1);\n2022-03-26 06:32:02.158 EDT [623eeb6c.d0c2:6] LOG: terminating any other active server processes\n\nThe failures are not all exactly like this one, but they're mostly in\nCREATE TABLE operations nearby to this one. I speculate what is happening\nis that the \"VACUUM FULL pg_class\" is triggering some misbehavior in\nconcurrent partitioned-table creation. 
The lack of failures in other\nbranches could be due to changes in the relative timing of the \"vacuum\"\nand \"inherit\" test scripts.\n\nAny chance we could get a stack trace from one of these crashes?\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lorikeet&dt=2022-03-26%2010%3A17%3A22\n\n\n",
"msg_date": "Sat, 26 Mar 2022 14:47:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Why is lorikeet so unstable in v14 branch only?"
},
{
"msg_contents": "On 2022-03-26 14:47:07 -0400, Tom Lane wrote:\n> I chanced to notice that buildfarm member lorikeet has been\n> failing an awful lot lately in the v14 branch, but hardly\n> at all in other branches. Here's a log extract from its\n> latest run [1]:\n\nOne interesting bit in the config is:\n\n 'extra_config' => {\n ...\n 'HEAD' => [\n 'update_process_title = off'\n ],\n 'REL_13_STABLE' => [\n 'update_process_title = off'\n ]\n\n\n> *** starting debugger for pid 53762, tid 10536\n> 2022-03-26 06:32:02.158 EDT [623eeb6c.d0c2:4] LOG: server process (PID 53762) exited with exit code 127\n> 2022-03-26 06:32:02.158 EDT [623eeb6c.d0c2:5] DETAIL: Failed process was running: create table mlparted_tab_part1 partition of mlparted_tab for values in (1);\n> 2022-03-26 06:32:02.158 EDT [623eeb6c.d0c2:6] LOG: terminating any other active server processes\n\nI wonder what where the output of \"starting debugger for pid 53762\" ends up? I\nassume it's triggered by\n 'CYGWIN' => 'server error_start=c:\\\\ncygwin64\\\\bin\\\\dumper.exe -d %1 %2',\n\nhttps://cygwin.org/cygwin-ug-net/using-cygwinenv.html\nsays \"The filename of the executing program and it's Windows process id are appended to the command as arguments. \"\n\nbut nothing about %1 and %2 :(. I those are just \"executing program\" and\n\"Windows process id\" respectively?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 26 Mar 2022 12:49:48 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Why is lorikeet so unstable in v14 branch only?"
},
{
"msg_contents": "\nOn 3/26/22 15:49, Andres Freund wrote:\n> On 2022-03-26 14:47:07 -0400, Tom Lane wrote:\n>> I chanced to notice that buildfarm member lorikeet has been\n>> failing an awful lot lately in the v14 branch, but hardly\n>> at all in other branches. Here's a log extract from its\n>> latest run [1]:\n> One interesting bit in the config is:\n>\n> 'extra_config' => {\n> ...\n> 'HEAD' => [\n> 'update_process_title = off'\n> ],\n> 'REL_13_STABLE' => [\n> 'update_process_title = off'\n> ]\n>\n\nI'd forgotten about that. Let me do that for REL_14_STABLE and see where\nwe get to.\n\n\n>> *** starting debugger for pid 53762, tid 10536\n>> 2022-03-26 06:32:02.158 EDT [623eeb6c.d0c2:4] LOG: server process (PID 53762) exited with exit code 127\n>> 2022-03-26 06:32:02.158 EDT [623eeb6c.d0c2:5] DETAIL: Failed process was running: create table mlparted_tab_part1 partition of mlparted_tab for values in (1);\n>> 2022-03-26 06:32:02.158 EDT [623eeb6c.d0c2:6] LOG: terminating any other active server processes\n> I wonder what where the output of \"starting debugger for pid 53762\" ends up? I\n> assume it's triggered by\n> 'CYGWIN' => 'server error_start=c:\\\\ncygwin64\\\\bin\\\\dumper.exe -d %1 %2',\n>\n> https://cygwin.org/cygwin-ug-net/using-cygwinenv.html\n> says \"The filename of the executing program and it's Windows process id are appended to the command as arguments. \"\n>\n> but nothing about %1 and %2 :(. I those are just \"executing program\" and\n> \"Windows process id\" respectively?\n\n\n\nI don't remember where I got this invocation from. But see for example\n<https://stackoverflow.com/questions/320001/using-a-stackdump-from-cygwin-executable>\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 26 Mar 2022 16:57:39 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Why is lorikeet so unstable in v14 branch only?"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 3/26/22 15:49, Andres Freund wrote:\n>> One interesting bit in the config is:\n>> [ lack of ]\n>> 'update_process_title = off'\n\n> I'd forgotten about that. Let me do that for REL_14_STABLE and see where\n> we get to.\n\nHm. But if that does mitigate it, it still seems like a bug no?\nWhy would that be preferentially crashing partitioned-table creation?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 26 Mar 2022 17:19:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Why is lorikeet so unstable in v14 branch only?"
},
{
"msg_contents": "\nOn 3/26/22 17:19, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 3/26/22 15:49, Andres Freund wrote:\n>>> One interesting bit in the config is:\n>>> [ lack of ]\n>>> 'update_process_title = off'\n>> I'd forgotten about that. Let me do that for REL_14_STABLE and see where\n>> we get to.\n> Hm. But if that does mitigate it, it still seems like a bug no?\n> Why would that be preferentially crashing partitioned-table creation?\n\n\nYes it seems like a bug, but hard to diagnose. It seemed like a bug back\nin May: see\n<https://postgr.es/m/4baee39d-0ebe-8327-7878-5bc11c95effa@dunslane.net>\n\nI vaguely theorize about a buffer overrun somewhere that scribbles on\nthe stack.\n\nThe answer to Andres's question about where the stackdumps go is that\nthey go in the data directory, AFAIK. You can see the buildfarm logic\nfor collecting them at\n<https://github.com/PGBuildFarm/client-code/blob/main/PGBuild/Utils.pm>\nstarting at line 149. There are various appropriate invocations of\nget_stack_trace() in run_build.pl.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 26 Mar 2022 17:48:39 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Why is lorikeet so unstable in v14 branch only?"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Yes it seems like a bug, but hard to diagnose. It seemed like a bug back\n> in May: see\n> <https://postgr.es/m/4baee39d-0ebe-8327-7878-5bc11c95effa@dunslane.net>\n\nAh, right, but that link is busted. Here's the correct link:\n\nhttps://www.postgresql.org/message-id/flat/e6f1fb3e-1e08-0188-9c71-2b5b894571de%40dunslane.net\n\n> I vaguely theorize about a buffer overrun somewhere that scribbles on\n> the stack.\n\nI said in the earlier thread\n\n> A platform-specific problem in get_ps_display() seems plausible\n> enough. The apparent connection to a concurrent VACUUM FULL seems\n> pretty hard to explain that way ... but maybe that's a mirage.\n\nbut your one stack trace showed a crash while trying to lock pg_class for\nScanPgRelation, which'd potentially have blocked because of the VACUUM ---\nand that'd result in a process title change, if not disabled. So now\nI feel like \"something rotten in ps_status.c\" is a theory that can fit\nthe available facts.\n\n> If I understand correctly that you're only seeing this in v13 and\n> HEAD, then it seems like bf68b79e5 (Refactor ps_status.c API)\n> deserves a hard look.\n\nI still stand by this opinion. Can you verify which of the ps_status.c\ncode paths gets used on this build?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 26 Mar 2022 18:10:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Why is lorikeet so unstable in v14 branch only?"
},
{
"msg_contents": "\nOn 3/26/22 18:10, Tom Lane wrote:\n>\n>> If I understand correctly that you're only seeing this in v13 and\n>> HEAD, then it seems like bf68b79e5 (Refactor ps_status.c API)\n>> deserves a hard look.\n> I still stand by this opinion. Can you verify which of the ps_status.c\n> code paths gets used on this build?\n>\n> \t\t\t\n\n\n\nIt appears that it is using PS_USE_NONE, as it doesn't have any of the\ndefines required for the other paths. I note that the branch for that in\nget_ps_display() doesn't set *displen, which looks a tad suspicious. It\ncould be left with any old junk. And maybe there's a good case for also\nsurrounding some of the code in WaitOnLock() with \"if (len) ...\"\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 27 Mar 2022 09:42:05 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Why is lorikeet so unstable in v14 branch only?"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> It appears that it is using PS_USE_NONE, as it doesn't have any of the\n> defines required for the other paths. I note that the branch for that in\n> get_ps_display() doesn't set *displen, which looks a tad suspicious.\n\nIndeed. I forced it to use PS_USE_NONE on my Linux machine, and got\na core dump on the first try of the regression tests:\n\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 __memmove_avx_unaligned_erms ()\n at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:516\n516 VMOVNT %VEC(0), (%r9)\n(gdb) bt\n#0 __memmove_avx_unaligned_erms ()\n at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:516\n#1 0x00000000008299b3 in WaitOnLock (locallock=locallock@entry=0x2a5e700, \n owner=owner@entry=0x2aba8f0) at lock.c:1831\n#2 0x000000000082adc6 in LockAcquireExtended (\n locktag=locktag@entry=0x7ffc864fad90, lockmode=lockmode@entry=1, \n sessionLock=sessionLock@entry=false, dontWait=dontWait@entry=false, \n reportMemoryError=reportMemoryError@entry=true, \n locallockp=locallockp@entry=0x7ffc864fad88) at lock.c:1101\n#3 0x000000000082861f in LockRelationOid (relid=1259, lockmode=1)\n at lmgr.c:117\n#4 0x000000000051c5ed in relation_open (relationId=1259, \n lockmode=lockmode@entry=1) at relation.c:56\n...\n\n(gdb) f 1\n#1 0x00000000008299b3 in WaitOnLock (locallock=locallock@entry=0x2a5e700, \n owner=owner@entry=0x2aba8f0) at lock.c:1831\n1831 memcpy(new_status, old_status, len);\n(gdb) p len\n$1 = -1\n\nProblem explained, good detective work!\n\n> And maybe there's a good case for also\n> surrounding some of the code in WaitOnLock() with \"if (len) ...\"\n\n+1. I'll make it so, and check the other callers too.\n\nOnce I push this, you should remove the update_process_title hack\nfrom lorikeet's config, since that was just a workaround until\nwe tracked down the problem, which I think we just did.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 27 Mar 2022 12:31:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Why is lorikeet so unstable in v14 branch only?"
},
{
"msg_contents": "I wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> And maybe there's a good case for also\n>> surrounding some of the code in WaitOnLock() with \"if (len) ...\"\n\n> +1. I'll make it so, and check the other callers too.\n\nI had second thoughts about that part after realizing that callers\ncannot tell the difference between \"ps_display is disabled\" and\n\"the activity part of the display is currently empty\". In the latter\ncase I think we'd rather have WaitOnLock still append \" waiting\";\nand it's not like PS_USE_NONE is so common as to be worth optimizing\nfor. (Else we'd have identified this problem sooner.)\n\n> Once I push this, you should remove the update_process_title hack\n> from lorikeet's config, since that was just a workaround until\n> we tracked down the problem, which I think we just did.\n\nMinimal fix pushed, so please adjust that animal's config.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 27 Mar 2022 13:01:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Why is lorikeet so unstable in v14 branch only?"
},
{
"msg_contents": "\nOn 3/27/22 13:01, Tom Lane wrote:\n>\n>> Once I push this, you should remove the update_process_title hack\n>> from lorikeet's config, since that was just a workaround until\n>> we tracked down the problem, which I think we just did.\n> Minimal fix pushed, so please adjust that animal's config.\n>\n\nDone.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 27 Mar 2022 14:58:17 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Why is lorikeet so unstable in v14 branch only?"
}
] |