[ { "msg_contents": "In brief, I'm proposing to raise xidWrapLimit-xidStopLimit to 3M and\nxidWrapLimit-xidWarnLimit to 40M. Likewise for mxact counterparts.\n\n\nPostgreSQL has three \"stop limit\" values beyond which only single-user mode\nwill assign new values of a certain counter:\n\n- xidStopLimit protects pg_xact, pg_commit_ts, pg_subtrans, and pg_serial.\n SetTransactionIdLimit() withholds a million XIDs, and warnings start ten\n million before that.\n- multiStopLimit protects pg_multixact/offsets. SetMultiXactIdLimit()\n withholds 100 mxacts, and warnings start at ten million.\n- offsetStopLimit protects pg_multixact/members. SetOffsetVacuumLimit()\n withholds [1,2) SLRU segments thereof (50k-100k member XIDs). No warning\n phase for this one.\n\nReasons to like a larger stop limit:\n\n1. VACUUM, to advance a limit, may assign IDs subject to one of the limits.\n VACUUM formerly consumed XIDs, not mxacts. It now consumes mxacts, not\n XIDs. I think a DBA can suppress VACUUM's mxact consumption by stopping\n all transactions older than vacuum_freeze_min_age, including prepared\n transactions.\n\n2. We currently have edge-case bugs when assigning values in the last few\n dozen pages before the wrap limit\n (https://postgr.es/m/20190214072623.GA1139206@rfd.leadboat.com and\n https://postgr.es/m/20200525070033.GA1591335@rfd.leadboat.com). A higher\n stop limit could make this class of bug unreachable outside of single-user\n mode. That's valuable against undiscovered bugs of this class.\n\n3. Any bug in stop limit enforcement is less likely to have user impact. 
For\n a live example, see the XXX comment that\n https://postgr.es/m/attachment/111084/slru-truncate-modulo-v3.patch adds to\n CheckPointPredicate().\n\nRaising a stop limit prompts an examination of warn limits, which represent\nthe time separating the initial torrent of warnings from the service outage.\nThe current limits appeared in 2005; transaction rates have grown, while human\nreaction times have not. I like warnings starting when an SLRU is 98%\nconsumed (40M XIDs or mxacts remaining). That doesn't change things enough to\nmake folks reconfigure VACUUM, and it buys back some of the grace period DBAs\nhad in 2005. I considered 95-97%, but the max_val of\nautovacuum_freeze_max_age would then start the warnings before the autovacuum.\nWhile that wouldn't rule out a value lower than 98%, 98% felt fine anyhow.\n\nFor the new stop limits, I propose allowing 99.85% SLRU fullness (stop with 3M\nXIDs or mxacts remaining). If stopping this early will bother users, an\nalternative is 3M for XIDs and 0.2M for others. Either way leaves at least\ntwo completely-idle segments for each SLRU, which I expect to mitigate present\nand future edge-case bugs.\n\nChanging this does threaten clusters that experience pg_upgrade when close to\na limit. pg_upgrade can fail or, worse, yield a cluster that spews warnings\nshortly after the upgrade. I could implement countermeasures, but they would\ntake effect only when one upgrades a cluster having a 98%-full SLRU. I\npropose not to change pg_upgrade; some sites may find cause to do a\nwhole-cluster VACUUM before pg_upgrade. Do you agree or disagree with that\nchoice? 
I am attaching a patch (not for commit) that demonstrates the\npg_upgrade behavior that nearly-full-SLRU clusters would see.\n\nThanks,\nnm", "msg_date": "Sun, 21 Jun 2020 01:35:13 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Raising stop and warn limits" }, { "msg_contents": "On Sun, Jun 21, 2020 at 01:35:13AM -0700, Noah Misch wrote:\n> In brief, I'm proposing to raise xidWrapLimit-xidStopLimit to 3M and\n> xidWrapLimit-xidWarnLimit to 40M. Likewise for mxact counterparts.\n\nHere's the patch for it.\n\n> 1. VACUUM, to advance a limit, may assign IDs subject to one of the limits.\n> VACUUM formerly consumed XIDs, not mxacts. It now consumes mxacts, not\n> XIDs.\n\nCorrection: a lazy_truncate_heap() at wal_level!=minimal does assign an XID,\nso XID consumption is impossible with \"VACUUM (TRUNCATE false)\" but possible\notherwise. \"VACUUM (ANALYZE)\", which a DBA might do by reflex, also assigns\nXIDs. (These corrections do not affect $SUBJECT.)", "msg_date": "Sat, 27 Jun 2020 23:20:04 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: Raising stop and warn limits" } ]
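A quick arithmetic check of the percentages quoted in the proposal above (this sketch is not part of the thread; the 2^31 XID horizon between wraparound points and the 3M/40M distances are restated assumptions, not values read from PostgreSQL source):

```python
# Sanity-check the proposed stop/warn distances against the XID space.
# XID_SPACE is the usable 2**31 horizon between wraparound points; the
# 3M and 40M figures restate the proposal, not PostgreSQL source values.
XID_SPACE = 2**31

stop_distance = 3_000_000    # proposed xidWrapLimit - xidStopLimit
warn_distance = 40_000_000   # proposed xidWrapLimit - xidWarnLimit

stop_fullness = 1 - stop_distance / XID_SPACE
warn_fullness = 1 - warn_distance / XID_SPACE

print(f"stop at {stop_fullness:.2%} SLRU fullness")  # ~99.86%, the "99.85%" above
print(f"warn at {warn_fullness:.2%} SLRU fullness")  # ~98.14%, the "98%" above
```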
[ { "msg_contents": "Hi Hackers,\n\nWhile I was looking at linkifying SQL commands in the system catalog\ndocs, I noticed that catalog.sgml uses the <command> tag to refer to\ninitdb, while I'd expected it to use <application>.\n\nLooking for patterns, I grepped for the two tags with contents\nconsisting only of lower-case letters, numbers, hyphens and underscores,\nto restrict it to things that look like shell command names, not SQL\ncommands or full command lines. When referring to binaries shipped with\npostgres itself, <application> is by far the most common, except for\ninitdb, postgres, ecpg, pg_ctl, pg_resetwal, and pg_config. For\nexternal programs it's more of a mix, but overall there are more than\ntwice as many instances of <application> of this form as of <command>.\n\nI'm not proposing going through all 1500+ instances in one fell swoop,\nbut some consistency (and a policy going forward) would be nice.\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen\n\n\n", "msg_date": "Sun, 21 Jun 2020 15:45:29 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": true, "msg_subject": "<application> vs <command> for command line tools in the docs" }, { "msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) writes:\n> While I was looking at linkifying SQL commands in the system catalog\n> docs, I noticed that catalog.sgml uses the <command> tag to refer to\n> initdb, while I'd expected it to use <application>.\n\nI agree that the latter is what we generally use.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 21 Jun 2020 11:16:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: <application> vs <command> for command line tools in the docs" }, { "msg_contents": "On 06/21/20 11:16, Tom Lane wrote:\n> ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) 
writes:\n>> docs, I noticed that catalog.sgml uses the <command> tag to refer to\n>> initdb, while I'd expected it to use <application>.\n> \n> I agree that the latter is what we generally use.\n\n'The latter' is <command> in the Subject:, <application> in the body....\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sun, 21 Jun 2020 11:46:28 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: <application> vs <command> for command line tools in the docs" } ]
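The grep described in the first message of this thread can be sketched in a few lines; the sample SGML below is invented for illustration and is not taken from the real documentation:

```python
import re
from collections import Counter

# Count <application> vs <command> tags whose contents look like a shell
# command name (lower-case letters, digits, hyphens, underscores), in the
# spirit of the grep described above.  The sample text is made up.
sample = """
<para><application>pg_dump</application> and <command>initdb</command>
are shipped with <productname>PostgreSQL</productname>; use
<command>vacuumdb</command> or <application>psql</application>.</para>
"""

pattern = re.compile(r"<(application|command)>([a-z0-9_-]+)</\1>")
counts = Counter(tag for tag, _ in pattern.findall(sample))
print(counts)  # Counter({'application': 2, 'command': 2})
```

Restricting the contents to `[a-z0-9_-]+` skips SQL commands and full command lines, as the message describes, and tags like `<productname>` are never counted.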
[ { "msg_contents": "Hi Hackers,\n\nWhile looking at making more <command>SQL</command> into links, I\nnoticed that <xref> loses the monospace formatting of <command>, and\ncan't itself be wrapped in <command>. This becomes particularly\napparent when you have one link that can be an <xref/> next to another\nthat's <link><command>...</command></link> because it's actually\nreferring to a specific variant of the command.\n\nBy some trial and error I found that putting <command> inside the\n<refentrytitle> tag propagates the formatting to the <xref> contents.\nWe already do this with <application> for (most of) the client and\nserver applications. Is this something we want to do consistently for\nboth?\n\n- ilmari\n-- \n\"The surreality of the universe tends towards a maximum\" -- Skud's Law\n\"Never formulate a law or axiom that you're not prepared to live with\n the consequences of.\" -- Skud's Meta-Law\n\n\n", "msg_date": "Sun, 21 Jun 2020 16:22:04 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": true, "msg_subject": "<xref> vs <command> formatting in the docs" }, { "msg_contents": "On 2020-Jun-21, Dagfinn Ilmari Mannsåker wrote:\n\n> While looking at making more <command>SQL</command> into links, I\n> noticed that <xref> loses the monospace formatting of <command>, and\n> can't itself be wrapped in <command>.\n\nOuch.\n\n> By some trial and error I found that putting <command> inside the\n> <refentrytitle> tag propagates the formatting to the <xref> contents.\n> We already do this with <application> for (most of) the client and\n> server applications. 
Is this something we want to do consistently for\n> both?\n\nLooking at the ones that use <application>, it looks like manpages are\nnot damaged, so +1.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 21 Jun 2020 11:30:38 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: <xref> vs <command> formatting in the docs" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n\n> On 2020-Jun-21, Dagfinn Ilmari Mannsåker wrote:\n>\n>> While looking at making more <command>SQL</command> into links, I\n>> noticed that <xref> loses the monospace formatting of <command>, and\n>> can't itself be wrapped in <command>.\n>\n> Ouch.\n>\n>> By some trial and error I found that putting <command> inside the\n>> <refentrytitle> tag propagates the formatting to the <xref> contents.\n>> We already do this with <application> for (most of) the client and\n>> server applications. Is this something we want to do consistently for\n>> both?\n>\n> Looking at the ones that use <application>, it looks like manpages are\n> not damaged, so +1.\n\nAttached are two patches: the first adds the missing <application> tags,\nthe second adds <command> to all the SQL commands (specifically anything\nwith <manvolnum>7</manvolnum>).\n\nI'll add it to the commitfest.\n\n- ilmari\n-- \n- Twitter seems more influential [than blogs] in the 'gets reported in\n the mainstream press' sense at least. - Matt McLeod\n- That'd be because the content of a tweet is easier to condense down\n to a mainstream media article. 
- Calle Dybedahl", "msg_date": "Sun, 21 Jun 2020 17:57:45 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": true, "msg_subject": "Re: <xref> vs <command> formatting in the docs" }, { "msg_contents": "On 2020-06-21 18:57, Dagfinn Ilmari Mannsåker wrote:\n> Attached are two patches: the first adds the missing <application> tags,\n> the second adds <command> to all the SQL commands (specifically anything\n> with <manvolnum>7</manvolnum>).\n\nI have committed the first one.\n\nI have some concerns about the second one. If you look at the diff of \nthe source of the man pages before and after, it creates a bit of a \nmess, even though the man pages look okay when rendered. I'll need to \nthink about this a bit more.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 10 Jul 2020 17:05:04 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: <xref> vs <command> formatting in the docs" }, { "msg_contents": "On 2020-07-10 17:05, Peter Eisentraut wrote:\n> On 2020-06-21 18:57, Dagfinn Ilmari Mannsåker wrote:\n>> Attached are two patches: the first adds the missing <application> tags,\n>> the second adds <command> to all the SQL commands (specifically anything\n>> with <manvolnum>7</manvolnum>).\n> \n> I have committed the first one.\n> \n> I have some concerns about the second one. If you look at the diff of\n> the source of the man pages before and after, it creates a bit of a\n> mess, even though the man pages look okay when rendered. I'll need to\n> think about this a bit more.\n\nI asked about this on a DocBook discussion list. 
While the general \nanswer is that you can do anything you want, it was clear that putting \nmarkup into title elements requires more careful additional \ncustomization and testing, and it's preferable to handle appearance \nissues on the link source side. (See also us dialing back the number of \nxreflabels recently.) This is also the direction that DocBook 5 appears \nto be taking, where you can put linkend attributes into most inline \ntags, so you could maybe do <command linkend=\"sql-select\"/>. This \ndoesn't work for us yet, but the equivalent of that would be \n<command><xref linkend=\"...\"/></command>.\n\nSo, based on that, I think the patch proposed here is not the right one, \nand we should instead be marking up the link sources appropriately.\n\n(This also implies that the already committed 0001 patch should be \nreverted.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 25 Sep 2020 07:38:56 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: <xref> vs <command> formatting in the docs" }, { "msg_contents": "On 2020-09-25 07:38, Peter Eisentraut wrote:\n> So, based on that, I think the patch proposed here is not the right one,\n> and we should instead be marking up the link sources appropriately.\n\nI have committed a fix for this:\n\n Improve <xref> vs. <command> formatting in the documentation\n\n SQL commands are generally marked up as <command>, except when a link\n to a reference page is used using <xref>. 
But the latter doesn't\n create monospace markup, so this looks strange especially when a\n paragraph contains a mix of links and non-links.\n\n We considered putting <command> in the <refentrytitle> on the target\n side, but that creates some formatting side effects elsewhere.\n Generally, it seems safer to solve this on the link source side.\n\n We can't put the <xref> inside the <command>; the DTD doesn't allow\n this. DocBook 5 would allow the <command> to have the linkend\n attribute itself, but we are not there yet.\n\n So to solve this for now, convert the <xref>s to <link> plus\n <command>. This gives the correct look and also gives some more\n flexibility in what we can put into the link text (e.g., subcommands or\n other clauses). In the future, these could then be converted to\n DocBook 5 style.\n\n I haven't converted absolutely all xrefs to SQL command reference\n pages, only those where we care about the appearance of the link text\n or where it was otherwise appropriate to make the appearance match a\n bit better. Also in some cases, the links were repetitive, so in\n those cases the links were just removed and replaced by a plain\n <command>. 
In cases where we just want the link and don't\n specifically care about the generated link text (typically phrased\n \"for further information see <xref ...>\") the xref is kept.\n\nLet me know if I missed something or further changes are needed.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 3 Oct 2020 16:57:34 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: <xref> vs <command> formatting in the docs" }, { "msg_contents": "On Sat, Oct 3, 2020 at 04:57:34PM +0200, Peter Eisentraut wrote:\n> On 2020-09-25 07:38, Peter Eisentraut wrote:\n> > So, based on that, I think the patch proposed here is not the right one,\n> > and we should instead be marking up the link sources appropriately.\n> \n> I have committed a fix for this:\n> \n> Improve <xref> vs. <command> formatting in the documentation\n\nThanks, this is a big step forward.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Sat, 3 Oct 2020 12:33:31 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: <xref> vs <command> formatting in the docs" } ]
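The rewrite the commit message describes (turning an <xref> into <link> plus <command> on the link source side) can be sketched as below. The linkend-to-title mapping is a made-up stand-in for illustration; the real conversion was done by hand with knowledge of each reference page's title:

```python
import re

# Rewrite <xref linkend="sql-..."/> as <link ...><command>...</command></link>,
# in the spirit of the commit described above.  COMMAND_TITLES is a
# hypothetical mapping; the real docs derive titles from the refentries.
COMMAND_TITLES = {"sql-select": "SELECT", "sql-vacuum": "VACUUM"}

def xref_to_link(sgml: str) -> str:
    def repl(m: re.Match) -> str:
        linkend = m.group(1)
        title = COMMAND_TITLES.get(linkend)
        if title is None:
            return m.group(0)  # leave non-SQL-command xrefs untouched
        return f'<link linkend="{linkend}"><command>{title}</command></link>'
    return re.sub(r'<xref linkend="([a-z0-9-]+)"/>', repl, sgml)

print(xref_to_link('see <xref linkend="sql-select"/> for details'))
# see <link linkend="sql-select"><command>SELECT</command></link> for details
```

Unknown linkends are left as plain <xref>, matching the commit's choice to keep the xref where the generated link text doesn't matter.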
[ { "msg_contents": "Back in bd3daddaf232d95b0c9ba6f99b0170a0147dd8af, which introduced\nAlternativeSubPlans, I wrote:\n \n There is a lot more that could be done based on this infrastructure: in\n particular it's interesting to consider switching to the hash plan if we start\n out using the non-hashed plan but find a lot more upper rows going by than we\n expected. I have therefore left some minor inefficiencies in place, such as\n initializing both subplans even though we will currently only use one.\n\nThat commit will be twelve years old come August, and nobody has either\nbuilt anything else atop it or shown any interest in making the plan choice\nswitchable mid-run. So it seems like kind of a failed experiment.\n\nTherefore, I'm considering the idea of ripping out all executor support\nfor AlternativeSubPlan and instead having the planner replace an\nAlternativeSubPlan with the desired specific SubPlan somewhere late in\nplanning, possibly setrefs.c.\n\nAdmittedly, the relevant executor support only amounts to a couple hundred\nlines, but that's not nothing. A perhaps-more-useful effect is to get rid\nof the confusing and poorly documented EXPLAIN output that you get for an\nAlternativeSubPlan.\n\nI also noted that the existing subplan-selection logic in\nExecInitAlternativeSubPlan is really pretty darn bogus, in that it uses a\none-size-fits-all execution count estimate of parent->plan->plan_rows, no\nmatter which subexpression the subplan is in. This is only appropriate\nfor subplans in the plan node's targetlist, and can be either too high\nor too low elsewhere. It'd be relatively easy for setrefs.c to do\nbetter, I think, since it knows which subexpression it's working on\nat any point.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 21 Jun 2020 20:20:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Get rid of runtime handling of AlternativeSubPlan?" 
}, { "msg_contents": "On Mon, 22 Jun 2020 at 12:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Back in bd3daddaf232d95b0c9ba6f99b0170a0147dd8af, which introduced\n> AlternativeSubPlans, I wrote:\n>\n> There is a lot more that could be done based on this infrastructure: in\n> particular it's interesting to consider switching to the hash plan if we start\n> out using the non-hashed plan but find a lot more upper rows going by than we\n> expected. I have therefore left some minor inefficiencies in place, such as\n> initializing both subplans even though we will currently only use one.\n>\n> That commit will be twelve years old come August, and nobody has either\n> built anything else atop it or shown any interest in making the plan choice\n> switchable mid-run. So it seems like kind of a failed experiment.\n>\n> Therefore, I'm considering the idea of ripping out all executor support\n> for AlternativeSubPlan and instead having the planner replace an\n> AlternativeSubPlan with the desired specific SubPlan somewhere late in\n> planning, possibly setrefs.c.\n>\n> Admittedly, the relevant executor support only amounts to a couple hundred\n> lines, but that's not nothing. A perhaps-more-useful effect is to get rid\n> of the confusing and poorly documented EXPLAIN output that you get for an\n> AlternativeSubPlan.\n>\n> I also noted that the existing subplan-selection logic in\n> ExecInitAlternativeSubPlan is really pretty darn bogus, in that it uses a\n> one-size-fits-all execution count estimate of parent->plan->plan_rows, no\n> matter which subexpression the subplan is in. This is only appropriate\n> for subplans in the plan node's targetlist, and can be either too high\n> or too low elsewhere. 
It'd be relatively easy for setrefs.c to do\n> better, I think, since it knows which subexpression it's working on\n> at any point.\n\nWhen I was working on [1] a few weeks ago, I did wonder if I'd have to\nuse an AlternativeSubPlan when doing result caching for subqueries.\nThe problem is likely the same as why they were invented in the first\nplace; we basically don't know how many rows the parent will produce\nwhen planning the subplan.\n\nFor my case, I have an interest in both the number of rows in the\nouter plan, and the ndistinct estimate on the subplan parameters. If\nthe parameters for the subquery are all distinct, then there's not\nmuch sense in trying to cache results to use later. We're never going\nto need them.\n\nRight now, if I wanted to use AlternativeSubPlan to delay the choice\nof this until run-time, then I'd be missing information about the\nndistinct estimation since we don't have that information available in\nthe final plan. Perhaps that's an argument for doing this in setrefs.c\ninstead. I could look up the ndistinct estimate there.\n\nFor switching plans on the fly during execution, I can see the sense\nin that as an idea. For the hashed subplan case, we'd likely want to\nswitch to hashing mode if we discovered that there were many more rows\nin the outer query than we had thought there would be. However, I'm\nuncertain if Result Cache would ever need anything similar, as\ntechnically we could just switch off the caching if we discovered our\ncache hit ratio was either terrible or 0. We would have an\nadditional node to pull tuples through, however. 
Switching would\nalso require that the tupleslot type was the same between the\nalternatives.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvrPcQyQdWERGYWx8J+2DLUNgXu+fOSbQ1UscxrunyXyrQ@mail.gmail.com\n\n\n", "msg_date": "Mon, 22 Jun 2020 13:29:31 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Get rid of runtime handling of AlternativeSubPlan?" }, { "msg_contents": "I wrote:\n> Back in bd3daddaf232d95b0c9ba6f99b0170a0147dd8af, which introduced\n> AlternativeSubPlans, I wrote:\n> There is a lot more that could be done based on this infrastructure: in\n> particular it's interesting to consider switching to the hash plan if we start\n> out using the non-hashed plan but find a lot more upper rows going by than we\n> expected. I have therefore left some minor inefficiencies in place, such as\n> initializing both subplans even though we will currently only use one.\n>\n> That commit will be twelve years old come August, and nobody has either\n> built anything else atop it or shown any interest in making the plan choice\n> switchable mid-run. So it seems like kind of a failed experiment.\n>\n> Therefore, I'm considering the idea of ripping out all executor support\n> for AlternativeSubPlan and instead having the planner replace an\n> AlternativeSubPlan with the desired specific SubPlan somewhere late in\n> planning, possibly setrefs.c.\n\nHere's a proposed patchset for that. This runs with the idea I'd had\nthat setrefs.c could be smarter than the executor about which plan node\nsubexpressions will be executed how many times. I did not take it very\nfar, for fear of adding an undue number of planning cycles, but it's still\nbetter than what we have now.\n\nFor ease of review, 0001 adds the new planner logic, while 0002 removes\nthe now-dead executor support.\n\nThere's one bit of dead code that I left in place for the moment, which is\nruleutils.c's support for printing AlternativeSubPlans. 
I'm not sure if\nthat's worth keeping or not --- it's dead code for normal use, but if\nsomeone tried to use ruleutils.c to print partially-planned expression\ntrees, maybe there'd be a use for it?\n\n(It's also arguable that readfuncs.c's support is now dead code, but\nI have little interest in stripping that out.)\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 29 Aug 2020 19:26:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Get rid of runtime handling of AlternativeSubPlan?" }, { "msg_contents": "On Sun, Aug 30, 2020 at 7:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> > Back in bd3daddaf232d95b0c9ba6f99b0170a0147dd8af, which introduced\n> > AlternativeSubPlans, I wrote:\n> > There is a lot more that could be done based on this infrastructure: in\n> > particular it's interesting to consider switching to the hash plan if\n> we start\n> > out using the non-hashed plan but find a lot more upper rows going by\n> than we\n> > expected. I have therefore left some minor inefficiencies in place,\n> such as\n> > initializing both subplans even though we will currently only use one.\n> >\n> > That commit will be twelve years old come August, and nobody has either\n> > built anything else atop it or shown any interest in making the plan\n> choice\n> > switchable mid-run. So it seems like kind of a failed experiment.\n> >\n> > Therefore, I'm considering the idea of ripping out all executor support\n> > for AlternativeSubPlan and instead having the planner replace an\n> > AlternativeSubPlan with the desired specific SubPlan somewhere late in\n> > planning, possibly setrefs.c.\n>\n> Here's a proposed patchset for that. This runs with the idea I'd had\n> that setrefs.c could be smarter than the executor about which plan node\n> subexpressions will be executed how many times. 
I did not take it very\n> far, for fear of adding an undue number of planning cycles, but it's still\n> better than what we have now.\n>\n> For ease of review, 0001 adds the new planner logic, while 0002 removes\n> the now-dead executor support.\n>\n> There's one bit of dead code that I left in place for the moment, which is\n> ruleutils.c's support for printing AlternativeSubPlans. I'm not sure if\n> that's worth keeping or not --- it's dead code for normal use, but if\n> someone tried to use ruleutils.c to print partially-planned expression\n> trees, maybe there'd be a use for it?\n>\n> (It's also arguable that readfuncs.c's support is now dead code, but\n> I have little interest in stripping that out.)\n>\n> regards, tom lane\n>\n>\nThank you for this code! I still have some confusion about when a SubPlan\nshould be executed when a join is involved. I care about this because this\nhas an impact on when we can get the num_exec for a subplan.\n\nThe subplan in a target list, it is executed after the join in my case.\nThe subplan\ncan be execute after the scan of T1(see below example) and it can also be\nexecuted\nafter the join. Which one is better depends on which methods make the\nnum_exec\nsmaller. Is it something we already considered? I drill-down to\npopulate_joinrel_with_paths and not find this logic.\n\n# explain (costs off) select (select a from t2 where t2.b = t1.b) from t1,\nt3;\n QUERY PLAN\n------------------------------\n Nested Loop\n -> Seq Scan on t1\n -> Materialize\n -> Seq Scan on t3\n SubPlan 1\n -> Seq Scan on t2\n Filter: (b = t1.b)\n(7 rows)\n\n\nWhen the subplan is in a Qual, it is supposed to be executed as soon as\npossible,\nThe current implementation matches the below cases. 
So can we say we\nknow the num_execs of SubPlan just after we plan the dependent rels?\n(In Q1 below the dependent rel is t1 vs t3, in Q2 it is t1 only) If we can\nchoose\na subplan and recost the related path during (not after) creating the best\npath, will\nwe get better results for some cases (due to the current cost method for\nAlternativeSubPlan[1])?\n\n-- the subplan depends on the result of t1 join t3\n# explain (costs off) select t1.* from t1, t3 where\n t1.a > (select max(a) from t2 where t2.b = t1.b and t2.c = t3.c);\n QUERY PLAN\n-----------------------------------------------------\n Nested Loop\n Join Filter: (t1.a > (SubPlan 1))\n -> Seq Scan on t1\n -> Materialize\n -> Seq Scan on t3\n SubPlan 1\n -> Aggregate\n -> Seq Scan on t2\n Filter: ((b = t1.b) AND (c = t3.c))\n(9 rows)\n\n-- the subplan only depends on t1.\n# explain (costs off) select t1.* from t1, t3 where\nt1.a > (select max(a) from t2 where t2.b = t1.b);\n QUERY PLAN\n------------------------------------------------\n Nested Loop\n -> Seq Scan on t3\n -> Materialize\n -> Seq Scan on t1\n Filter: (a > (SubPlan 1))\n SubPlan 1\n -> Aggregate\n -> Seq Scan on t2\n Filter: (b = t1.b)\n(9 rows)\n\n\nLastly, I want to use the commonly used table\nin src/test/regress/sql/create_table.sql\nwhen providing an example, but I always have issues running the\ncreate_table.sql which\nmakes me uncomfortable to use that. 
Am I missing something?\n\nCREATE TABLE fail_part PARTITION OF range_parted3 FOR VALUES FROM (1,\nminvalue) TO (1, maxvalue);\npsql:src/test/regress/sql/create_table.sql:611: ERROR: partition\n\"fail_part\" would overlap partition \"part10\"\n\nCREATE TABLE fail_part PARTITION OF hash_parted2 FOR VALUES WITH (MODULUS\n2, REMAINDER 1);\npsql:src/test/regress/sql/create_table.sql:622: ERROR: partition\n\"fail_part\" would overlap partition \"h2part_4\"\n\n[1]\nhttps://www.postgresql.org/message-id/07b3fa88-aa4e-2e13-423d-8389eb1712cf%40imap.cc\n\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Mon, 31 Aug 2020 08:23:53 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Get rid of runtime handling of AlternativeSubPlan?" }, { "msg_contents": "On Sun, 30 Aug 2020 at 11:26, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > Therefore, I'm considering the idea of ripping out all executor support\n> > for AlternativeSubPlan and instead having the planner replace an\n> > AlternativeSubPlan with the desired specific SubPlan somewhere late in\n> > planning, possibly setrefs.c.\n>\n> Here's a proposed patchset for that.\n\nDo you feel that the choice to create_plan() on the subplan before\nplanning the outer query is still a good one? ISTM that that was\nrequired when the AlternativeSubplan decision was made during\nexecution, since we, of course, need a plan to execute. 
If the\ndecision is now being made in the planner then is it not better to\ndelay the create_plan() until later in planning?\n\n From looking at the code it seems that Paths won't really do here as\nwe're dealing with two separate PlannerInfos rather than two paths\nbelonging to the same PlannerInfo, but maybe it's better to invent\nsomething else that's similar to a list of paths and just do\ncreate_plan() for the subquery once.\n\nDavid\n\n\n", "msg_date": "Mon, 31 Aug 2020 14:35:32 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Get rid of runtime handling of AlternativeSubPlan?" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Do you feel that the choice to create_plan() on the subplan before\n> planning the outer query is still a good one? ISTM that that was\n> required when the AlternativeSubplan decision was made during\n> execution, since we, of course, need a plan to execute. If the\n> decision is now being made in the planner then is it not better to\n> delay the create_plan() until later in planning?\n\nHm. That's well outside the scope I had in mind for this patch.\nIn principle, you're right that we could postpone final planning\nof the subquery till later; but I fear it'd require quite a lot\nof refactoring to make it work that way. There's a lot of rather\nsubtle timing dependencies in the processing done by createplan.c\nand setrefs.c, so I think this might be a lot more painful than\nit seems at first glance. And we'd only gain anything in cases that\nuse AlternativeSubPlan, which is a minority of subplans, so on the\nwhole I rather doubt it's worth the trouble.\n\nOne inefficiency I see that we could probably get rid of is\nwhere make_subplan() is doing\n\n /* Now we can check if it'll fit in hash_mem */\n /* XXX can we check this at the Path stage? 
*/\n if (subplan_is_hashable(plan))\n {\n\nThe only inputs subplan_is_hashable needs are the predicted rowcount\nand output width, which surely we could get from the Path. So we\ncould save doing create_plan() when we decide the subquery output is\ntoo big to hash. OTOH, that's probably a pretty small minority of\nuse-cases, so it might not be worth troubling over.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 31 Aug 2020 11:41:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Get rid of runtime handling of AlternativeSubPlan?" }, { "msg_contents": "I wrote:\n> One inefficiency I see that we could probably get rid of is\n> where make_subplan() is doing\n> /* Now we can check if it'll fit in hash_mem */\n> /* XXX can we check this at the Path stage? */\n\nI went ahead and fixed that, and I also realized there's another small\nimprovement to be made: we can remove the unused SubPlan from the\nsubplans list of the finished PlannedStmt, by setting that list cell\nto NULL. (This is already specifically allowed by the comments for\nPlannedStmt.subplans.) Initially I supposed that this'd only save the\ncosts of copying that subtree when we copy the whole plan. On looking\ncloser, though, InitPlan actually runs ExecInitNode on every list\nentry, even unused ones, so this will make some difference in executor\nstartup time.\n\nHence, an updated 0001 patch. 0002 hasn't changed.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 31 Aug 2020 13:22:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Get rid of runtime handling of AlternativeSubPlan?" }, { "msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n> Thank you for this code! I still have some confusion about when a SubPlan\n> should be executed when a join is involved. 
I care about this because this\n> has an impact on when we can get the num_exec for a subplan.\n\n> The subplan in a target list, it is executed after the join in my case.\n> The subplan\n> can be execute after the scan of T1(see below example) and it can also be\n> executed\n> after the join. Which one is better depends on which methods make the\n> num_exec\n> smaller. Is it something we already considered?\n\nUh, I'm not following your concern. SubPlans appearing in the join\ntargetlist *must* be executed \"after the join\", ie only for valid\njoin rows. Otherwise we could have cases where, say, they throw\nerrors that should not occur. On the other hand, SubPlans appearing\nin the join's qual conditions have to be executed \"before the join\",\nalthough exactly what that means is fuzzy because we don't make any\npromises about the relative ordering of different qual conditions.\n\n> When the subplan is in a Qual, it is supposed to be executed as soon as\n> possible,\n> The current implementation matches the below cases. So can we say we\n> knows the num_execs of SubPlan just after we plan the dependent rels?\n\nI wouldn't say so. If the SubPlan's qual actually only depends on one\nof the input rels, distribute_qual_to_rels would have pushed it down\nfurther than the join. Among the quals that do have to be evaluated\nat the join, a qual involving a SubPlan is best executed last on cost\ngrounds, or so I'd guess anyway. So the number of executions is probably\nless than the product of the input rel sizes. That's what motivates\nthe choice of NUM_EXEC_QUAL in my patch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 31 Aug 2020 13:42:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Get rid of runtime handling of AlternativeSubPlan?" }, { "msg_contents": "On Tue, Sep 1, 2020 at 1:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andy Fan <zhihui.fan1213@gmail.com> writes:\n> > Thank you for this code! 
I still have some confusion about when a\n> SubPlan\n> > should be executed when a join is involved. I care about this because\n> this\n> > has an impact on when we can get the num_exec for a subplan.\n>\n> > The subplan in a target list, it is executed after the join in my case.\n> > The subplan\n> > can be execute after the scan of T1(see below example) and it can also be\n> > executed\n> > after the join. Which one is better depends on which methods make the\n> > num_exec\n> > smaller. Is it something we already considered?\n>\n> Uh, I'm not following your concern. SubPlans appearing in the join\n> targetlist *must* be executed \"after the join\", ie only for valid\n> join rows. Otherwise we could have cases where, say, they throw\n> errors that should not occur.\n\n\nI am feeling I'm wrong somewhere however I can't figure it out until now.\n\nQ1: select (select t3.a from t3 where t3.c = t1.c) from t1, t2 where t1.b\n= t2.b;\n\nshould equals Q2:\n\n1. select (select t3.a from t3 where t3.c = t1.c) as a, b from t1 ==>\nt13.\n2. select t13.a from t13, t2 where t13.b = t2.b;\n\nWith the following data, Q1 will execute the subplan twice (since we get 2\nrows\nafter join t1, t2). while Q2 executes the subplan once (since t1 has only\n1 row).\nFinally the result is the same.\n\npostgres=# select * from t1;\n a | b | c\n---+---+---\n 1 | 1 | 1\n(1 row)\n\npostgres=# select * from t2;\n a | b | c\n---+---+---\n 1 | 1 | 1\n 1 | 1 | 2\n(2 rows)\n\npostgres=# select * from t3;\n a | b | c\n---+---+---\n 1 | 1 | 1\n(1 row)\n\nOn the other hand, SubPlans appearing\n> in the join's qual conditions have to be executed \"before the join\",\n> although exactly what that means is fuzzy because we don't make any\n> promises about the relative ordering of different qual conditions.\n>\n> > When the subplan is in a Qual, it is supposed to be executed as soon as\n> > possible,\n> > The current implementation matches the below cases. 
So can we say we\n> > knows the num_execs of SubPlan just after we plan the dependent rels?\n>\n> I wouldn't say so. If the SubPlan's qual actually only depends on one\n> of the input rels, distribute_qual_to_rels would have pushed it down\n> further than the join. Among the quals that do have to be evaluated at the\n> join, a qual involving a SubPlan is best executed last on cost\n> grounds, or so I'd guess anyway. So the number of executions is probably\n> less than the product of the input rel sizes. That's what motivates\n> the choice of NUM_EXEC_QUAL in my patch.\n\n\nUnderstand now. Thank you!\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Tue, 1 Sep 2020 08:10:35 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Get rid of runtime handling of AlternativeSubPlan?" }, { "msg_contents": "On Tue, 1 Sep 2020 at 05:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > One inefficiency I see that we could probably get rid of is\n> > where make_subplan() is doing\n> > /* Now we can check if it'll fit in hash_mem */\n> > /* XXX can we check this at the Path stage? 
*/\n>\n> I went ahead and fixed that, and I also realized there's another small\n> improvement to be made: we can remove the unused SubPlan from the\n> subplans list of the finished PlannedStmt, by setting that list cell\n> to NULL. (This is already specifically allowed by the comments for\n> PlannedStmt.subplans.) Initially I supposed that this'd only save the\n> costs of copying that subtree when we copy the whole plan. On looking\n> closer, though, InitPlan actually runs ExecInitNode on every list\n> entry, even unused ones, so this will make some difference in executor\n> startup time.\n>\n> Hence, an updated 0001 patch. 0002 hasn't changed.\n\nI had a look over these two. A handful of very small things:\n\n0001:\n\n1. I think we should be moving away from using linitial() and lsecond()\nwhen we know there are two items in the list. Using list_nth() has\nless overhead.\n\nsubplan1 = (SubPlan *) linitial(asplan->subplans);\nsubplan2 = (SubPlan *) lsecond(asplan->subplans);\n\n2. I did have slight concerns that fix_alternative_subplan() always\nassumes the list of subplans will be 2, though on looking at\nthe definition of AlternativeSubPlan, I see always having two in the\nlist is mentioned. It feels like fix_alternative_subplan() wouldn't\nbecome much more complex to allow any non-zero number of subplans, but\nmaybe making that happen should wait until there is some need for more\nthan two. It just feels a bit icky to have to document the special\ncase when not having the special case is not that hard to implement.\n\n3. Wouldn't it be better to say NULLify rather than delete?\n\n+ * node or higher-level nodes. However, we do delete the rejected subplan\n+ * from root->glob->subplans, to minimize cycles expended on it later.\n\n0002:\n\nI don't have much to say about this. 
Leaving the code in\nget_rule_expr() for the reasons you mentioned in the new comment does\nmake sense.\n\n\nOn a side note, I was playing around with the following case:\n\ncreate table t (a int, b int, c int);\ninsert into t select x,1,2 from generate_Series(1,10000)x;\ncreate index on t (b);\nvacuum freeze analyze t;\n\nand ran:\n\nselect * from t where exists (select 1 from t t2 where t.a=t2.b) or a < 0;\n\nEXPLAIN ANALYZE shows:\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Seq Scan on t (cost=0.00..360.00 rows=5000 width=12) (actual\ntime=0.020..7468.452 rows=1 loops=1)\n Filter: ((SubPlan 1) OR (a < 0))\n Rows Removed by Filter: 9999\n SubPlan 1\n -> Seq Scan on t t2 (cost=0.00..180.00 rows=10000 width=0)\n(actual time=0.746..0.746 rows=0 loops=10000)\n Filter: (t.a = b)\n Rows Removed by Filter: 9999\n Planning Time: 0.552 ms\n Execution Time: 7468.481 ms\n(9 rows)\n\n\nNotice that the SubPlan's estimated rows are 10000. This is due to the\nndistinct for \"b\" being 1 and since t.a is a parameter, the\nselectivity is estimated to be 1.0 by var_eq_non_const().\nUnfortunately, for this reason, the index on t(b) is not used either.\nThe planner thinks all rows are being selected, in which case, an\nindex is not much help.\n\nboth master and patched seem to not choose to use the hashed subplan\nwhich results in a pretty slow execution time. 
This seems to be down\nto cost_subplan() doing:\n\n/* we only need to fetch 1 tuple; clamp to avoid zero divide */\nsp_cost.per_tuple += plan_run_cost / clamp_row_est(plan->plan_rows);\n\nI imagine / 2 might be more realistic to account for the early abort,\nwhich is pretty much what the ALL_SUBLINK and ANY_SUBLINK do just\nbelow:\n\nChanging that makes the run-time of that query go from 7.4 seconds for\nme down to 3.7 ms, about 2000 times faster.\n\nI understand there will be other cases where that's not so ideal, but\nthis slowness is not ideal either. Of course, not the fault of this\npatch.\n\nDavid\n\n\n", "msg_date": "Sun, 27 Sep 2020 04:56:35 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Get rid of runtime handling of AlternativeSubPlan?" }, { "msg_contents": "Thanks for reviewing!\n\nDavid Rowley <dgrowleyml@gmail.com> writes:\n> 1. I think we should be moving away from using linitial() and second()\n> when we know there are two items in the list. Using list_nth() has\n> less overhead.\n\nUh, really? And if it's true, why would we change all the call sites\nrather than improving the pg_list.h macros?\n\n> 2. I did have sight concerns that fix_alternative_subplan() always\n> assumes the list of subplans will always be 2, though on looking at\n> the definition of AlternativeSubPlan, I see always having two in the\n> list is mentioned. It feels like fix_alternative_subplan() wouldn't\n> become much more complex to allow any non-zero number of subplans, but\n> maybe making that happen should wait until there is some need for more\n> than two. It just feels a bit icky to have to document the special\n> case when not having the special case is not that hard to implement.\n\nIt seemed to me that dealing with the general case would make\nfix_alternative_subplan() noticeably more complex and less obviously\ncorrect. I might be wrong though; what specific coding did you have in\nmind?\n\n> 3. 
Wouldn't it be better to say NULLify rather than delete?\n\n> + * node or higher-level nodes. However, we do delete the rejected subplan\n> + * from root->glob->subplans, to minimize cycles expended on it later.\n\nFair enough, that comment could be improved.\n\n> On a side note, I was playing around with the following case:\n> ...\n> both master and patched seem to not choose to use the hashed subplan\n> which results in a pretty slow execution time. This seems to be down\n> to cost_subplan() doing:\n> \t/* we only need to fetch 1 tuple; clamp to avoid zero divide */\n> \tsp_cost.per_tuple += plan_run_cost / clamp_row_est(plan->plan_rows);\n> I imagine / 2 might be more realistic to account for the early abort,\n> which is pretty much what the ALL_SUBLINK and ANY_SUBLINK do just\n> below:\n\nHm, actually isn't it the other way around? *If* there are any matching\nrows, then what's being done here is an accurate estimate. But if there\nare not, we're going to have to scan the entire subquery output to verify\nthat. I wonder if we should just be taking the subquery cost at face\nvalue, ie be pessimistic not optimistic. If the user is bothering to\ntest EXISTS, we should expect that the no-match case does happen.\n\nHowever, I think that's a distinct concern from this patch; this patch\nis only meant to improve the processing of alternative subplans, not\nto change the costing rules around them. If we fool with it I'd rather\ndo so as a separate patch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 26 Sep 2020 17:03:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Get rid of runtime handling of AlternativeSubPlan?" }, { "msg_contents": "On Sun, 27 Sep 2020 at 10:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Thanks for reviewing!\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > 1. I think we should be moving away from using linitial() and second()\n> > when we know there are two items in the list. 
Using list_nth() has\n> > less overhead.\n>\n> Uh, really?\n\nYeah. Using linitial() and lsecond() will check if the list is\nnot-NIL. lsecond() does an additional check to ensure the list has at\nleast two elements. None of which are required since we already know\nthe list has two elements.\n\n> And if it's true, why would we change all the call sites\n> rather than improving the pg_list.h macros?\n\nMaybe we should. Despite the non-NIL check and length check in\nlist_head(), list_second_cell(), list_third_cell() functions, the\ncorresponding macro will crash anyway if those functions were to\nreturn NULL. We might as well just use list_nth_cell() to get the\nListCell without any checks to see if the cell exists. I can go off\nand fix those separately. I attached a 0004 patch to help explain what\nI'm talking about.\n\n> > 2. I did have sight concerns that fix_alternative_subplan() always\n> > assumes the list of subplans will always be 2, though on looking at\n> > the definition of AlternativeSubPlan, I see always having two in the\n> > list is mentioned. It feels like fix_alternative_subplan() wouldn't\n> > become much more complex to allow any non-zero number of subplans, but\n> > maybe making that happen should wait until there is some need for more\n> > than two. It just feels a bit icky to have to document the special\n> > case when not having the special case is not that hard to implement.\n>\n> It seemed to me that dealing with the general case would make\n> fix_alternative_subplan() noticeably more complex and less obviously\n> correct. I might be wrong though; what specific coding did you have in\n> mind?\n\nI had thought something like 0003 (attached). 
It's a net reduction of\n3 entire lines, including the removal of the comment that explained\nthat there's always two in the list.\n\n> > On a side note, I was playing around with the following case:\n> > ...\n> > both master and patched seem to not choose to use the hashed subplan\n> > which results in a pretty slow execution time. This seems to be down\n> > to cost_subplan() doing:\n> > /* we only need to fetch 1 tuple; clamp to avoid zero divide */\n> > sp_cost.per_tuple += plan_run_cost / clamp_row_est(plan->plan_rows);\n> > I imagine / 2 might be more realistic to account for the early abort,\n> > which is pretty much what the ALL_SUBLINK and ANY_SUBLINK do just\n> > below:\n>\n> Hm, actually isn't it the other way around? *If* there are any matching\n> rows, then what's being done here is an accurate estimate. But if there\n> are not, we're going to have to scan the entire subquery output to verify\n> that. I wonder if we should just be taking the subquery cost at face\n> value, ie be pessimistic not optimistic. If the user is bothering to\n> test EXISTS, we should expect that the no-match case does happen.\n>\n> However, I think that's a distinct concern from this patch; this patch\n> is only meant to improve the processing of alternative subplans, not\n> to change the costing rules around them. If we fool with it I'd rather\n> do so as a separate patch.\n\nYeah, agreed. I'll open another thread.\n\nDavid", "msg_date": "Sun, 27 Sep 2020 21:48:20 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Get rid of runtime handling of AlternativeSubPlan?" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Sun, 27 Sep 2020 at 10:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> And if it's true, why would we change all the call sites\n>> rather than improving the pg_list.h macros?\n\n> Maybe we should. 
Despite the non-NIL check and length check in\n> list_head(), list_second_cell(), list_third_cell() functions, the\n> corresponding macro will crash anyway if those functions were to\n> return NULL.\n\nHm, good point.\n\n> We might as well just use list_nth_cell() to get the\n> ListCell without any checks to see if the cell exists. I can go off\n> and fix those separately. I attached a 0004 patch to help explain what\n> I'm talking about.\n\nYeah, that should be dealt with separately.\n\n>> It seemed to me that dealing with the general case would make\n>> fix_alternative_subplan() noticeably more complex and less obviously\n>> correct. I might be wrong though; what specific coding did you have in\n>> mind?\n\n> I had thought something like 0003 (attached). It's a net reduction of\n> 3 entire lines, including the removal of the comment that explained\n> that there's always two in the list.\n\nMeh. This seems to prove my point, as it's in fact wrong; you are only\nnulling out the discarded subplans-list entry in one of the two cases.\nOnce you fix that it's not really shorter anymore, nor clearer. Still,\nI suppose there's some value in removing the assumption about exactly\ntwo subplans.\n\nI'll fix up fix_alternative_subplan and push this. I think the other\ntopics should be raised in separate threads.\n\nThanks for reviewing!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 27 Sep 2020 11:59:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Get rid of runtime handling of AlternativeSubPlan?" } ]
[ { "msg_contents": "Hi\n\nThere is one user request for unescape function in core.\n\nhttps://stackoverflow.com/questions/20124393/convert-escaped-unicode-character-back-to-actual-character-in-postgresql/20125412?noredirect=1#comment110502526_20125412\n\nThis request is about possibility that we do with string literal via\nfunctional interface instead string literals only\n\nI wrote plpgsql function, but built in function can be simpler:\n\nCREATE OR REPLACE FUNCTION public.unescape(text, text)\n RETURNS text\n LANGUAGE plpgsql\n AS $function$\n DECLARE result text;\n BEGIN\n EXECUTE format('SELECT U&%s UESCAPE %s',\n quote_literal(replace($1, '\\u','^')),\n quote_literal($2)) INTO result;\n RETURN result;\n END;\n $function$\n\npostgres=# select unescape('Odpov\\u011Bdn\\u00E1 osoba','^');\n unescape -----------------\n Odpovědná osoba(1 row)\n\nWhat do you think about this?\n\nRegards\n\nPavel\n\nHiThere is one user request for unescape function in core.https://stackoverflow.com/questions/20124393/convert-escaped-unicode-character-back-to-actual-character-in-postgresql/20125412?noredirect=1#comment110502526_20125412This request is about possibility that we do with string literal via functional interface instead string literals onlyI wrote plpgsql function, but built in function can be simpler:CREATE OR REPLACE FUNCTION public.unescape(text, text) \n RETURNS text\n LANGUAGE plpgsql\n AS $function$\n DECLARE result text;\n BEGIN\n EXECUTE format('SELECT U&%s UESCAPE %s', \n quote_literal(replace($1, '\\u','^')),\n quote_literal($2)) INTO result;\n RETURN result;\n END;\n $function$postgres=# select unescape('Odpov\\u011Bdn\\u00E1 osoba','^');\n unescape \n-----------------\n Odpovědná osoba\n(1 row)What do you think about this?RegardsPavel", "msg_date": "Mon, 22 Jun 2020 05:48:48 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "proposal: unescape_text function" }, { "msg_contents": "po 22. 6. 
2020 v 5:48 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> There is one user request for unescape function in core.\n>\n>\n> https://stackoverflow.com/questions/20124393/convert-escaped-unicode-character-back-to-actual-character-in-postgresql/20125412?noredirect=1#comment110502526_20125412\n>\n> This request is about possibility that we do with string literal via\n> functional interface instead string literals only\n>\n> I wrote plpgsql function, but built in function can be simpler:\n>\n> CREATE OR REPLACE FUNCTION public.unescape(text, text)\n> RETURNS text\n> LANGUAGE plpgsql\n> AS $function$\n> DECLARE result text;\n> BEGIN\n> EXECUTE format('SELECT U&%s UESCAPE %s',\n> quote_literal(replace($1, '\\u','^')),\n> quote_literal($2)) INTO result;\n> RETURN result;\n> END;\n> $function$\n>\n> postgres=# select unescape('Odpov\\u011Bdn\\u00E1 osoba','^');\n> unescape -----------------\n> Odpovědná osoba(1 row)\n>\n> What do you think about this?\n>\n\nI changed the name to more accurately \"unicode_unescape\". Patch is assigned\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>", "msg_date": "Tue, 23 Jun 2020 11:51:40 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "> On 23 Jun 2020, at 11:51, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> I changed the name to more accurately \"unicode_unescape\". Patch is assigned\n\nYou've made this function return Oid, where it used to be void. Was that a\ncopy-paste mistake? 
Else the code needs fixing as it doesn't return an Oid.\n\n+Oid\n+check_unicode_value(pg_wchar c)\n+{\n+ if (!is_valid_unicode_codepoint(c))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"invalid Unicode escape value\")));\n+}\n\ncheers ./daniel\n\n\n", "msg_date": "Thu, 2 Jul 2020 17:27:52 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "čt 2. 7. 2020 v 17:27 odesílatel Daniel Gustafsson <daniel@yesql.se> napsal:\n\n> > On 23 Jun 2020, at 11:51, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> > I changed the name to more accurately \"unicode_unescape\". Patch is\n> assigned\n>\n> You've made this function return Oid, where it used to be void. Was that a\n> copy-paste mistake? Else the code needs fixing as it doesn't return an Oid.\n>\n> +Oid\n> +check_unicode_value(pg_wchar c)\n> +{\n> + if (!is_valid_unicode_codepoint(c))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_SYNTAX_ERROR),\n> + errmsg(\"invalid Unicode escape value\")));\n> +}\n>\n>\nyes, it is my error\n\nI am sending fixed patch\n\nThank you for check\n\nPavel\n\ncheers ./daniel\n>", "msg_date": "Thu, 2 Jul 2020 19:09:57 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "On Thu, Jul 2, 2020 at 10:10 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> čt 2. 7. 2020 v 17:27 odesílatel Daniel Gustafsson <daniel@yesql.se>\n> napsal:\n>\n>> > On 23 Jun 2020, at 11:51, Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>>\n>> > I changed the name to more accurately \"unicode_unescape\". Patch is\n>> assigned\n>>\n>> You've made this function return Oid, where it used to be void. Was that\n>> a\n>> copy-paste mistake? 
Else the code needs fixing as it doesn't return an\n>> Oid.\n>>\n>> +Oid\n>> +check_unicode_value(pg_wchar c)\n>> +{\n>> + if (!is_valid_unicode_codepoint(c))\n>> + ereport(ERROR,\n>> + (errcode(ERRCODE_SYNTAX_ERROR),\n>> + errmsg(\"invalid Unicode escape value\")));\n>> +}\n>>\n>>\n> yes, it is my error\n>\n> I am sending fixed patch\n>\n> Thank you for check\n>\n> Pavel\n>\n> cheers ./daniel\n>>\n>\n\nHi Pavel,\n\nSince the idea originated from unescaping unicode string literals i.e.\n select unescape('Odpov\\u011Bdn\\u00E1 osoba');\n\nShouldn't the built-in function support the above syntax as well?\n\n--\nAsif Rehman\nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\n", "msg_date": "Tue, 28 Jul 2020 17:02:28 +0500", "msg_from": "Asif Rehman <asifr.rehman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "Hi\n\n\n>\n> Hi Pavel,\n>\n> Since the idea originated from unescaping unicode string literals i.e.\n> select unescape('Odpov\\u011Bdn\\u00E1 osoba');\n>\n> Shouldn't the built-in function support the above syntax as well?\n>\n\ngood idea. The prefixes u (4 digits) and U (8 digits) are supported\n\nRegards\n\nPavel\n\n\n> --\n> Asif Rehman\n> Highgo Software (Canada/China/Pakistan)\n> URL : www.highgo.ca\n>\n>", "msg_date": "Wed, 29 Jul 2020 08:18:18 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nHi,\n\nThe patch looks good to me.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Wed, 29 Jul 2020 18:04:37 +0000", "msg_from": "Asif Rehman <asifr.rehman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "st 29. 7. 2020 v 8:18 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n>\n>>\n>> Hi Pavel,\n>>\n>> Since the idea originated from unescaping unicode string literals i.e.\n>> select unescape('Odpov\\u011Bdn\\u00E1 osoba');\n>>\n>> Shouldn't the built-in function support the above syntax as well?\n>>\n>\n> good idea. 
The prefixes u (4 digits) and U (8 digits) are supported\n>\n> Regards\n>\n\nrebase\n\nRegards\n\nPavel\n\n\n> Pavel\n>\n>\n>> --\n>> Asif Rehman\n>> Highgo Software (Canada/China/Pakistan)\n>> URL : www.highgo.ca\n>>\n>>", "msg_date": "Wed, 7 Oct 2020 11:00:40 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "On 2020-10-07 11:00, Pavel Stehule wrote:\n> Since the idea originated from unescaping unicode string\n> literals i.e.\n>        select unescape('Odpov\\u011Bdn\\u00E1 osoba');\n> \n> Shouldn't the built-in function support the above syntax as well?\n> \n> \n> good idea. The prefixes u (4 digits) and U (8 digits) are supported\n\nI don't really get the point of this function. There is AFAICT no \nfunction to produce this escaped format, and it's not a recognized \ninterchange format. So under what circumstances would one need to use this?\n\n\n", "msg_date": "Fri, 27 Nov 2020 15:37:05 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "pá 27. 11. 2020 v 15:37 odesílatel Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> napsal:\n\n> On 2020-10-07 11:00, Pavel Stehule wrote:\n> > Since the idea originated from unescaping unicode string\n> > literals i.e.\n> > select unescape('Odpov\\u011Bdn\\u00E1 osoba');\n> >\n> > Shouldn't the built-in function support the above syntax as well?\n> >\n> >\n> > good idea. The prefixes u (4 digits) and U (8 digits) are supported\n>\n> I don't really get the point of this function. There is AFAICT no\n> function to produce this escaped format, and it's not a recognized\n> interchange format. 
So under what circumstances would one need to use\n> this?\n>\n\nSome corporate data can be in CSV format with escaped unicode characters.\nWithout this function it is not possible to decode these files without\nexternal application.\n\nPostgres has support for this conversion, but only for string literals.\n\nCREATE OR REPLACE FUNCTION public.unescape(text, text)\n RETURNS text\n LANGUAGE plpgsql\n AS $function$\n DECLARE result text;\n BEGIN\n EXECUTE format('SELECT U&%s UESCAPE %s',\n quote_literal(replace($1, '\\u','^')),\n quote_literal($2)) INTO result;\n RETURN result;\n END;\n $function$\n\n\nBecause unicode is major encoding, I think this conversion should be\nsupported. There is another question about implementation like in this\npatch implemented unicode_unescape function, or with some new conversion.\nUsing conversion\nhttps://www.postgresql.org/docs/current/sql-createconversion.html is\nprobably better, but I am not sure how intuitive it is, and it is hard to\nuse it (without not nice workarounds) in plpgsql.\n\nI don't expect so Postgres should produce data in unicode escaped format,\nbut can be useful, if Postgres can do some work with data in special format\nof major encoding.\n\npostgres=# select convert_from(E'Odpov\\u011Bdn\\u00E1 osoba', 'UTF8');\n┌─────────────────┐\n│ convert_from │\n╞═════════════════╡\n│ Odpovědná osoba │\n└─────────────────┘\n(1 row)\n\nI can do this with bytea, but it is hard to use it with text fields.\n\nI didn't find any way how to do it without ugly steps.\n\nRegards\n\nPavel\n", "msg_date": "Sun, 29 Nov 2020 18:36:01 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "On 2020-11-29 18:36, Pavel Stehule wrote:\n> \n> I don't really get the point of this function.  There is AFAICT no\n> function to produce this escaped format, and it's not a recognized\n> interchange format.  So under what circumstances would one need to\n> use this?\n> \n> \n> Some corporate data can be in CSV format with escaped unicode \n> characters. Without this function it is not possible to decode these \n> files without external application.\n\nI would like some supporting documentation on this. So far we only have \none stackoverflow question, and then this implementation, and they are \nnot even the same format. My worry is that if there is not precise \nspecification, then people are going to want to add things in the \nfuture, and there will be no way to analyze such requests in a \nprincipled way.\n\n\n", "msg_date": "Mon, 30 Nov 2020 14:14:30 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "po 30. 11. 2020 v 14:14 odesílatel Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> napsal:\n\n> On 2020-11-29 18:36, Pavel Stehule wrote:\n> >\n> > I don't really get the point of this function. There is AFAICT no\n> > function to produce this escaped format, and it's not a recognized\n> > interchange format. 
So under what circumstances would one need to\n> > use this?\n> >\n> >\n> > Some corporate data can be in CSV format with escaped unicode\n> > characters. Without this function it is not possible to decode these\n> > files without external application.\n>\n> I would like some supporting documentation on this. So far we only have\n> one stackoverflow question, and then this implementation, and they are\n> not even the same format. My worry is that if there is not precise\n> specification, then people are going to want to add things in the\n> future, and there will be no way to analyze such requests in a\n> principled way.\n>\n>\nI checked this and it is \"prefix backslash-u hex\" used by Java, JavaScript\nor RTF - https://billposer.org/Software/ListOfRepresentations.html\n\nIn some languages (Python), there is decoder \"unicode-escape\". Java has a\nmethod escapeJava, for conversion from unicode to ascii. I can imagine so\nthese data are from Java systems exported to 8bit strings - so this\nimplementation can be accepted as referential. This format is used by\nhttps://docs.oracle.com/javase/8/docs/technotes/tools/unix/native2ascii.html\ntool too.\n\nPostgres can decode this format too, and the patch is based on Postgres\nimplementation. I just implemented a different interface.\n\nCurrently decode function does only text->bytea transformation. Maybe a\nmore generic function \"decode_text\" and \"encode_text\" for similar cases can\nbe better (here we need text->text transformation). But it looks like\noverengineering now.\n\nMaybe we introduce new encoding \"ascii\" and we can implement new\nconversions \"ascii_to_utf8\" and \"utf8_to_ascii\". It looks like the most\nclean solution. What do you think about it?\n\nRegards\n\nPavel\n", "msg_date": "Mon, 30 Nov 2020 22:15:32 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "po 30. 11. 2020 v 22:15 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> po 30. 11. 2020 v 14:14 odesílatel Peter Eisentraut <\n> peter.eisentraut@enterprisedb.com> napsal:\n>\n>> On 2020-11-29 18:36, Pavel Stehule wrote:\n>> >\n>> > I don't really get the point of this function. There is AFAICT no\n>> > function to produce this escaped format, and it's not a recognized\n>> > interchange format. So under what circumstances would one need to\n>> > use this?\n>> >\n>> >\n>> > Some corporate data can be in CSV format with escaped unicode\n>> > characters. Without this function it is not possible to decode these\n>> > files without external application.\n>>\n>> I would like some supporting documentation on this. So far we only have\n>> one stackoverflow question, and then this implementation, and they are\n>> not even the same format. My worry is that if there is not precise\n>> specification, then people are going to want to add things in the\n>> future, and there will be no way to analyze such requests in a\n>> principled way.\n>>\n>>\n> I checked this and it is \"prefix backslash-u hex\" used by Java,\n> JavaScript or RTF -\n> https://billposer.org/Software/ListOfRepresentations.html\n>\n> In some languages (Python), there is decoder \"unicode-escape\". Java has\n> a method escapeJava, for conversion from unicode to ascii. I can imagine so\n> these data are from Java systems exported to 8bit strings - so this\n> implementation can be accepted as referential. This format is used by\n> https://docs.oracle.com/javase/8/docs/technotes/tools/unix/native2ascii.html\n> tool too.\n>\n> Postgres can decode this format too, and the patch is based on Postgres\n> implementation. 
I just implemented a different interface.\n>\n> Currently decode function does only text->bytea transformation. Maybe a\n> more generic function \"decode_text\" and \"encode_text\" for similar cases can\n> be better (here we need text->text transformation). But it looks like\n> overengineering now.\n>\n> Maybe we introduce new encoding \"ascii\" and we can implement new\n> conversions \"ascii_to_utf8\" and \"utf8_to_ascii\". It looks like the most\n> clean solution. What do you think about it?\n>\n\na better name of new encoding can be \"unicode-escape\" than \"ascii\". We use\n\"to_ascii\" function for different use case.\n\nset client_encoding to unicode-escape;\ncopy tab from xxx;\n...\n\nbut it doesn't help when only a few columns from the table are in\nunicode-escape format.\n\n", "msg_date": "Mon, 30 Nov 2020 22:56:34 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "po 30. 11. 2020 v 22:56 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> po 30. 11. 
2020 v 14:14 odesílatel Peter Eisentraut <\n>> peter.eisentraut@enterprisedb.com> napsal:\n>>\n>>> On 2020-11-29 18:36, Pavel Stehule wrote:\n>>> >\n>>> > I don't really get the point of this function. There is AFAICT no\n>>> > function to produce this escaped format, and it's not a recognized\n>>> > interchange format. So under what circumstances would one need to\n>>> > use this?\n>>> >\n>>> >\n>>> > Some corporate data can be in CSV format with escaped unicode\n>>> > characters. Without this function it is not possible to decode these\n>>> > files without external application.\n>>>\n>>> I would like some supporting documentation on this. So far we only have\n>>> one stackoverflow question, and then this implementation, and they are\n>>> not even the same format. My worry is that if there is not precise\n>>> specification, then people are going to want to add things in the\n>>> future, and there will be no way to analyze such requests in a\n>>> principled way.\n>>>\n>>>\n>> I checked this and it is \"prefix backslash-u hex\" used by Java,\n>> JavaScript or RTF -\n>> https://billposer.org/Software/ListOfRepresentations.html\n>>\n>> In some languages (Python), there is decoder \"unicode-escape\". Java has\n>> a method escapeJava, for conversion from unicode to ascii. I can imagine so\n>> these data are from Java systems exported to 8bit strings - so this\n>> implementation can be accepted as referential. This format is used by\n>> https://docs.oracle.com/javase/8/docs/technotes/tools/unix/native2ascii.html\n>> tool too.\n>>\n>> Postgres can decode this format too, and the patch is based on Postgres\n>> implementation. I just implemented a different interface.\n>>\n>> Currently decode function does only text->bytea transformation. Maybe a\n>> more generic function \"decode_text\" and \"encode_text\" for similar cases can\n>> be better (here we need text->text transformation). 
But it looks like\n>> overengineering now.\n>>\n>> Maybe we introduce new encoding \"ascii\" and we can implement new\n>> conversions \"ascii_to_utf8\" and \"utf8_to_ascii\". It looks like the most\n>> clean solution. What do you think about it?\n>>\n>\n> a better name of new encoding can be \"unicode-escape\" than \"ascii\". We use\n> \"to_ascii\" function for different use case.\n>\n> set client_encoding to unicode-escape;\n> copy tab from xxx;\n> ...\n>\n> but it doesn't help when only a few columns from the table are in\n> unicode-escape format.\n>\n>\nprobably the most complete solution can be from two steps:\n\n1. introducing new encoding - \"ascii_unicode_escape\" with related\nconversions\n2. introducing two new functions - text_escape and text_unescape with two\nparameters - source text and conversion name\n\nselect text_convert_to('Тимати', 'ascii_unicode_escape')\n\\u0422\\u0438\\u043c\\u0430\\u0442\\u0438 .. result is text\n\nselect text_convert_from('\\u0422\\u0438\\u043c\\u0430\\u0442\\u0438',\n'ascii_unicode_escape')\n┌──────────┐\n│ ?column? │\n╞══════════╡\n│ Тимати │\n└──────────┘\n(1 row)\n\n\n>\n>\n>> Regards\n>>\n>> Pavel\n>>\n>>", "msg_date": "Tue, 1 Dec 2020 06:43:27 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": ">> po 30. 11. 2020 v 22:15 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n>> napsal:\n>>> I checked this and it is \"prefix backslash-u hex\" used by Java,\n>>> JavaScript or RTF -\n>>> https://billposer.org/Software/ListOfRepresentations.html\n\nIf I look on that page, it appears that RTF is using a similar-looking\nescape but in decimal rather than hex.\n\nIt would be important to define what is done with non-BMP characters?\nWill there be another escape for a six- or eight-hexdigit format for\nthe codepoint, or will it be represented by two four-hexdigit escapes\nfor consecutive UTF-16 surrogates?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 1 Dec 2020 14:20:40 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "út 1. 12. 2020 v 20:20 odesílatel Chapman Flack <chap@anastigmatix.net>\nnapsal:\n\n> >> po 30. 11. 
2020 v 22:15 odesílatel Pavel Stehule <\n> pavel.stehule@gmail.com>\n> >> napsal:\n> >>> I checked this and it is \"prefix backslash-u hex\" used by Java,\n> >>> JavaScript or RTF -\n> >>> https://billposer.org/Software/ListOfRepresentations.html\n>\n> If I look on that page, it appears that RTF is using a similar-looking\n> escape but in decimal rather than hex.\n>\n> It would be important to define what is done with non-BMP characters?\n> Will there be another escape for a six- or eight-hexdigit format for\n> the codepoint, or will it be represented by two four-hexdigit escapes\n> for consecutive UTF-16 surrogates?\n>\n\nthe detection of decimal or hexadecimal codes can be a hard problem -\nstring \"12\" is valid in both systems, but the numbers are different. So\nthere should be external specification as an argument.\n\nRegards\n\nPavel\n\n\n\n> Regards,\n> -Chap\n>", "msg_date": "Tue, 1 Dec 2020 21:16:05 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "\nOn 11/30/20 8:14 AM, Peter Eisentraut wrote:\n> On 2020-11-29 18:36, Pavel Stehule wrote:\n>>\n>>     I don't really get the point of this function.  There is AFAICT no\n>>     function to produce this escaped format, and it's not a recognized\n>>     interchange format.  So under what circumstances would one need to\n>>     use this?\n>>\n>>\n>> Some corporate data can be in CSV format with escaped unicode\n>> characters. Without this function it is not possible to decode these\n>> files without external application.\n>\n> I would like some supporting documentation on this.  So far we only\n> have one stackoverflow question, and then this implementation, and\n> they are not even the same format.  My worry is that if there is not\n> precise specification, then people are going to want to add things in\n> the future, and there will be no way to analyze such requests in a\n> principled way.\n>\n>\n>\n\n\nAlso, should this be an extension? I'm dubious about including such\nmarginal uses in the core code unless there's a really good case for it.\n\n\ncheers\n\n\nandrew\n\n\n\n", "msg_date": "Tue, 1 Dec 2020 18:05:21 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "st 2. 12. 2020 v 0:05 odesílatel Andrew Dunstan <andrew@dunslane.net>\nnapsal:\n\n>\n> On 11/30/20 8:14 AM, Peter Eisentraut wrote:\n> > On 2020-11-29 18:36, Pavel Stehule wrote:\n> >>\n> >> I don't really get the point of this function. There is AFAICT no\n> >> function to produce this escaped format, and it's not a recognized\n> >> interchange format. 
So under what circumstances would one need to\n> >> use this?\n> >>\n> >>\n> >> Some corporate data can be in CSV format with escaped unicode\n> >> characters. Without this function it is not possible to decode these\n> >> files without external application.\n> >\n> > I would like some supporting documentation on this. So far we only\n> > have one stackoverflow question, and then this implementation, and\n> > they are not even the same format. My worry is that if there is not\n> > precise specification, then people are going to want to add things in\n> > the future, and there will be no way to analyze such requests in a\n> > principled way.\n> >\n> >\n> >\n>\n>\n> Also, should this be an extension? I'm dubious about including such\n> marginal uses in the core code unless there's a really good case for it.\n>\n\nI am not sure, and I am inclined so it should be core functionality.\n\n1. Although this use case is marginal, this is related to most used\nencodings - ascii and unicode. 8 bit encodings enhanced about escaped\nmultibyte chars will be used for a very long time. Unfortunately - this\nwill be worse, because Postgres will be used more in the corporate\nenvironment, where there is a bigger press to conserve very legacy\ntechnologies without correct multibyte support. The core problem so this\nissue is out of concept bytea -> text or text -> bytea transformations\nsupported by Postgres. This is text -> text transformation (for almost all\nencoding based on ascii), that is not supported by Postgres now.\n\n2. Postgres already has this functionality - but unfortunately there is a\nlimit just only literal constants.\n\ncreate or replace function uunescape(text)\nreturns text as $$\ndeclare r text;\nbegin\n -- don't use this code!!!\n execute 'select e''' || $1 || '''' into r;\n return r;\nend;\n$$ language plpgsql immutable;\n\nBut one way how anybody can use it is SQL injection vulnerable and slow. 
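That vulnerability can be made concrete with a small sketch (a hypothetical illustration, not part of any patch): mimicking the string the plpgsql workaround builds shows how a crafted input escapes the constructed literal:

```python
# Hypothetical illustration of point 2 above: the dynamic-SQL workaround
# builds   'select e''' || $1 || ''''   so the user input is pasted into a
# brand-new SQL statement, and a crafted value can end the literal early.
def build_query(user_input: str) -> str:
    return "select e'" + user_input + "'"

print(build_query(r"Odpov\u011Bdn\u00E1 osoba"))  # the intended use
print(build_query("'; drop table t; --"))         # injected SQL, now part of the statement
```

The second call yields `select e''; drop table t; --'`: the input has closed the literal and appended its own statement, which is exactly why a built-in, non-dynamic decoder would be safer.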
So\nsome simple buildin solution can be protection against some future security\nissues. Personally I am happy with just this limited function that will be\nsafe (although the design based on introducing new encoding and conversions\ncan be more complete and accurate). I agree so this case is marginal, but\nit is a fully valid use case, and supporting unicode escaped codes just by\nparser is a needless limit.\n\n3. there are new disadvantages of extensions in current DBaaS times. Until\nthe extension is not directly accepted by a cloud provider, then the\nextension is not available for users. The acceptance of extensions is not\ntoo agile - so moving this code to extension doesn't solve this problem.\nWithout DBaaS the implementation of this feature as the extensions can be\ngood enough.\n\nRegards\n\nPavel\n\n\n\n\n>\n> cheers\n>\n>\n> andrew\n>\n>\n\nst 2. 12. 2020 v 0:05 odesílatel Andrew Dunstan <andrew@dunslane.net> napsal:\nOn 11/30/20 8:14 AM, Peter Eisentraut wrote:\n> On 2020-11-29 18:36, Pavel Stehule wrote:\n>>\n>>     I don't really get the point of this function.  There is AFAICT no\n>>     function to produce this escaped format, and it's not a recognized\n>>     interchange format.  So under what circumstances would one need to\n>>     use this?\n>>\n>>\n>> Some corporate data can be in CSV format with escaped unicode\n>> characters. Without this function it is not possible to decode these\n>> files without external application.\n>\n> I would like some supporting documentation on this.  So far we only\n> have one stackoverflow question, and then this implementation, and\n> they are not even the same format.  My worry is that if there is not\n> precise specification, then people are going to want to add things in\n> the future, and there will be no way to analyze such requests in a\n> principled way.\n>\n>\n>\n\n\nAlso, should this be an extension? 
I'm dubious about including such\nmarginal uses in the core code unless there's a really good case for it.I am not sure, and  I am inclined so it should be core functionality.1. Although this use case is marginal, this is related to most used encodings - ascii and unicode. 8 bit encodings enhanced about escaped multibyte chars will be used for a very long time. Unfortunately - this will be worse, because Postgres will be used more in the corporate environment, where there is a bigger press to conserve very legacy technologies without correct multibyte support. The core problem so this issue is out of concept bytea -> text or text -> bytea transformations supported by Postgres. This is text -> text transformation (for almost all encoding based on ascii), that is not supported by Postgres now.2. Postgres already has this functionality - but unfortunately there is a limit just only literal constants. create or replace function uunescape(text)returns text as $$declare r text;begin  -- don't use this code!!!  execute 'select e''' || $1 || '''' into r;  return r;end;$$ language plpgsql immutable;But one way how anybody can use it is SQL injection vulnerable and slow. So some simple buildin solution can be protection against some future security issues. Personally I am happy with just this limited function that will be safe (although the design based on introducing new encoding and conversions can be more complete and accurate). I agree so this case is marginal, but it is a fully valid use case, and supporting unicode escaped codes just by parser is a needless limit. 3. there are new disadvantages of extensions in current DBaaS times. Until the extension is not directly accepted by a cloud provider, then the extension is not available for users. The acceptance of extensions is not too agile - so moving this code to extension doesn't solve this problem. Without DBaaS the implementation of this feature as the extensions can be good enough. 
RegardsPavel\n\n\ncheers\n\n\nandrew", "msg_date": "Wed, 2 Dec 2020 06:48:02 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "On 2020-11-30 22:15, Pavel Stehule wrote:\n> I would like some supporting documentation on this.  So far we only\n> have\n> one stackoverflow question, and then this implementation, and they are\n> not even the same format.  My worry is that if there is not precise\n> specification, then people are going to want to add things in the\n> future, and there will be no way to analyze such requests in a\n> principled way.\n> \n> \n> I checked this and it is \"prefix backslash-u hex\" used by Java, \n> JavaScript  or RTF - \n> https://billposer.org/Software/ListOfRepresentations.html\n\nHeh. The fact that there is a table of two dozen possible \nrepresentations kind of proves my point that we should be deliberate in \npicking one.\n\nI do see Oracle unistr() on that list, which appears to be very similar \nto what you are trying to do here. Maybe look into aligning with that.\n\n\n", "msg_date": "Wed, 2 Dec 2020 09:23:05 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "st 2. 12. 2020 v 9:23 odesílatel Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> napsal:\n\n> On 2020-11-30 22:15, Pavel Stehule wrote:\n> > I would like some supporting documentation on this. So far we only\n> > have\n> > one stackoverflow question, and then this implementation, and they\n> are\n> > not even the same format. 
My worry is that if there is not precise\n> > specification, then people are going to want to add things in the\n> > future, and there will be no way to analyze such requests in a\n> > principled way.\n> >\n> >\n> > I checked this and it is \"prefix backslash-u hex\" used by Java,\n> > JavaScript or RTF -\n> > https://billposer.org/Software/ListOfRepresentations.html\n>\n> Heh. The fact that there is a table of two dozen possible\n> representations kind of proves my point that we should be deliberate in\n> picking one.\n>\n> I do see Oracle unistr() on that list, which appears to be very similar\n> to what you are trying to do here. Maybe look into aligning with that.\n>\n\nunistr is a primitive form of proposed function. But it can be used as a\nbase. The format is compatible with our \"4.1.2.3. String Constants with\nUnicode Escapes\".\n\nWhat do you think about the following proposal?\n\n1. unistr(text) .. compatible with Postgres unicode escapes - it is\nenhanced against Oracle, because Oracle's unistr doesn't support 6 digits\nunicodes.\n\n2. there can be optional parameter \"prefix\" with default \"\\\". But with \"\\u\"\nit can be compatible with Java or Python.\n\nWhat do you think about it?\n\nPavel\n\nst 2. 12. 2020 v 9:23 odesílatel Peter Eisentraut <peter.eisentraut@enterprisedb.com> napsal:On 2020-11-30 22:15, Pavel Stehule wrote:\n>     I would like some supporting documentation on this.  So far we only\n>     have\n>     one stackoverflow question, and then this implementation, and they are\n>     not even the same format.  My worry is that if there is not precise\n>     specification, then people are going to want to add things in the\n>     future, and there will be no way to analyze such requests in a\n>     principled way.\n> \n> \n> I checked this and it is \"prefix backslash-u hex\" used by Java, \n> JavaScript  or RTF - \n> https://billposer.org/Software/ListOfRepresentations.html\n\nHeh.  
The fact that there is a table of two dozen possible \nrepresentations kind of proves my point that we should be deliberate in \npicking one.\n\nI do see Oracle unistr() on that list, which appears to be very similar \nto what you are trying to do here.  Maybe look into aligning with that.unistr is a primitive form of proposed function.  But it can be used as a base. The format is compatible with our  \"4.1.2.3. String Constants with Unicode Escapes\".What do you think about the following proposal? 1. unistr(text) .. compatible with Postgres unicode escapes - it is enhanced against Oracle, because Oracle's unistr doesn't support 6 digits unicodes.2. there can be optional parameter \"prefix\" with default \"\\\". But with \"\\u\" it can be compatible with Java or Python.What do you think about it?Pavel", "msg_date": "Wed, 2 Dec 2020 11:37:01 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "\nOn 12/2/20 12:48 AM, Pavel Stehule wrote:\n>\n>\n> st 2. 12. 2020 v 0:05 odesílatel Andrew Dunstan <andrew@dunslane.net\n> <mailto:andrew@dunslane.net>> napsal:\n>\n>\n> On 11/30/20 8:14 AM, Peter Eisentraut wrote:\n> > On 2020-11-29 18:36, Pavel Stehule wrote:\n> >>\n> >>     I don't really get the point of this function.  There is\n> AFAICT no\n> >>     function to produce this escaped format, and it's not a\n> recognized\n> >>     interchange format.  So under what circumstances would one\n> need to\n> >>     use this?\n> >>\n> >>\n> >> Some corporate data can be in CSV format with escaped unicode\n> >> characters. Without this function it is not possible to decode\n> these\n> >> files without external application.\n> >\n> > I would like some supporting documentation on this.  So far we only\n> > have one stackoverflow question, and then this implementation, and\n> > they are not even the same format.  
My worry is that if there is not\n> > precise specification, then people are going to want to add\n> things in\n> > the future, and there will be no way to analyze such requests in a\n> > principled way.\n> >\n> >\n> >\n>\n>\n> Also, should this be an extension? I'm dubious about including such\n> marginal uses in the core code unless there's a really good case\n> for it.\n>\n>\n>\n[...]\n> 3. there are new disadvantages of extensions in current DBaaS times.\n> Until the extension is not directly accepted by a cloud provider, then\n> the extension is not available for users. The acceptance of extensions\n> is not too agile - so moving this code to extension doesn't solve this\n> problem. Without DBaaS the implementation of this feature as the\n> extensions can be good enough.\n>\n>\n\nThat argument can apply to any extension someone wants to use. If your\nDBaaS provider doesn't support some extension you need to lobby them or\nfind another that does support it, rather than try to put it in core\ncode. Some extensions, such as untrusted PLs,  will naturally almost\nnever be supported by DBaaS providers because they are inherently\nunsafe. That's not the case here.\n\n\ncheers\n\n\nandrew\n\n\n\n\n", "msg_date": "Wed, 2 Dec 2020 07:39:42 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "On 12/02/20 05:37, Pavel Stehule wrote:\n> 2. there can be optional parameter \"prefix\" with default \"\\\". 
But with \"\\u\"\n> it can be compatible with Java or Python.\n\nJava's unicode escape form is one of those early ones that lack\na six-digit form, and where any character outside of the basic multilingual\nplane has to be represented by two four-digit escapes in a row, encoding\nthe two surrogates that would make up the character's representation\nin UTF-16.\n\nObviously that's an existing form that's out there, so it's not a bad\nthing to have some kind of support for it, but it's not a great\nrepresentation to encourage people to use.\n\nPython, by contrast, has both \\uxxxx and \\Uxxxxxxxx where you would use\nthe latter to represent a non-BMP character directly. So the Java and\nPython schemes should be considered distinct.\n\nIn Perl, there is a useful extension to regexp substitution where\nyou specify the replacement not as a string or even a string with &\nand \\1 \\2 ... magic, but as essentially a lambda that is passed the\nmatch and returns a computed replacement. That makes conversions of\nthe sort discussed here generally trivial to implement. Would it be\nworth considering to add something of general utility like that, and\nthen there could be a small library of pure SQL functions (or a wiki\npage or GitHub gist) covering a bunch of the two dozen representations\non that page linked above?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 2 Dec 2020 09:55:49 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "On 12/02/20 09:55, Chapman Flack wrote:\n> In Perl, there is a useful extension to regexp substitution where\n> you specify the replacement not as a string or even a string with &\n> and \\1 \\2 ... magic, but as essentially a lambda that is passed the\n> match and returns a computed replacement. That makes conversions of\n> the sort discussed here generally trivial to implement.\n\nPython, I should have added, allows that also. 
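For illustration, a rough Python sketch of that idiom applied to the Java-style escapes (the helper name and pattern are invented for the example; a high/low surrogate pair is recombined into one non-BMP character, and ordinary \uXXXX escapes decode directly):

```python
import re

# Match a surrogate pair first, then a lone 4-digit escape.
_PAIR = r'\\u([Dd][89ABab][0-9A-Fa-f]{2})\\u([Dd][C-Fc-f][0-9A-Fa-f]{2})'
_ONE = r'\\u([0-9A-Fa-f]{4})'

def unescape_java(s: str) -> str:
    def repl(m):
        if m.group(3) is not None:       # ordinary BMP escape
            return chr(int(m.group(3), 16))
        hi = int(m.group(1), 16)         # high surrogate, U+D800..U+DBFF
        lo = int(m.group(2), 16)         # low surrogate,  U+DC00..U+DFFF
        return chr(0x10000 + ((hi - 0xD800) << 10) + (lo - 0xDC00))
    return re.sub(_PAIR + '|' + _ONE, repl, s)
```

With the replacement expressed as a callback, the surrogate-pair recombination stays a local detail of the conversion instead of needing a dedicated decoder for each format.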
Java too, since release 9.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 2 Dec 2020 11:32:04 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "st 2. 12. 2020 v 11:37 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> st 2. 12. 2020 v 9:23 odesílatel Peter Eisentraut <\n> peter.eisentraut@enterprisedb.com> napsal:\n>\n>> On 2020-11-30 22:15, Pavel Stehule wrote:\n>> > I would like some supporting documentation on this. So far we only\n>> > have\n>> > one stackoverflow question, and then this implementation, and they\n>> are\n>> > not even the same format. My worry is that if there is not precise\n>> > specification, then people are going to want to add things in the\n>> > future, and there will be no way to analyze such requests in a\n>> > principled way.\n>> >\n>> >\n>> > I checked this and it is \"prefix backslash-u hex\" used by Java,\n>> > JavaScript or RTF -\n>> > https://billposer.org/Software/ListOfRepresentations.html\n>>\n>> Heh. The fact that there is a table of two dozen possible\n>> representations kind of proves my point that we should be deliberate in\n>> picking one.\n>>\n>> I do see Oracle unistr() on that list, which appears to be very similar\n>> to what you are trying to do here. Maybe look into aligning with that.\n>>\n>\n> unistr is a primitive form of proposed function. But it can be used as a\n> base. The format is compatible with our \"4.1.2.3. String Constants with\n> Unicode Escapes\".\n>\n> What do you think about the following proposal?\n>\n> 1. unistr(text) .. compatible with Postgres unicode escapes - it is\n> enhanced against Oracle, because Oracle's unistr doesn't support 6 digits\n> unicodes.\n>\n> 2. there can be optional parameter \"prefix\" with default \"\\\". 
But with\n> \"\\u\" it can be compatible with Java or Python.\n>\n> What do you think about it?\n>\n\nI thought about it a little bit more, and the prefix specification has not\ntoo much sense (more if we implement this functionality as function\n\"unistr\"). I removed the optional argument and renamed the function to\n\"unistr\". The functionality is the same. Now it supports Oracle convention,\nJava and Python (for Python UXXXXXXXX) and \\+XXXXXX. These formats was\nalready supported. The compatibility witth Oracle is nice.\n\npostgres=# select\n 'Arabic : ' || unistr( '\\0627\\0644\\0639\\0631\\0628\\064A\\0629' ) ||\n'\n Chinese : ' || unistr( '\\4E2D\\6587' ) ||\n'\n English : ' || unistr( 'English' ) ||\n'\n French : ' || unistr( 'Fran\\00E7ais' ) ||\n'\n German : ' || unistr( 'Deutsch' ) ||\n'\n Greek : ' || unistr( '\\0395\\03BB\\03BB\\03B7\\03BD\\03B9\\03BA\\03AC' ) ||\n'\n Hebrew : ' || unistr( '\\05E2\\05D1\\05E8\\05D9\\05EA' ) ||\n'\n Japanese : ' || unistr( '\\65E5\\672C\\8A9E' ) ||\n'\n Korean : ' || unistr( '\\D55C\\AD6D\\C5B4' ) ||\n'\n Portuguese : ' || unistr( 'Portugu\\00EAs' ) ||\n'\n Russian : ' || unistr( '\\0420\\0443\\0441\\0441\\043A\\0438\\0439' ) ||\n'\n Spanish : ' || unistr( 'Espa\\00F1ol' ) ||\n'\n Thai : ' || unistr( '\\0E44\\0E17\\0E22' )\n as unicode_test_string;\n┌──────────────────────────┐\n│ unicode_test_string │\n╞══════════════════════════╡\n│ Arabic : العربية ↵│\n│ Chinese : 中文 ↵│\n│ English : English ↵│\n│ French : Français ↵│\n│ German : Deutsch ↵│\n│ Greek : Ελληνικά ↵│\n│ Hebrew : עברית ↵│\n│ Japanese : 日本語 ↵│\n│ Korean : 한국어 ↵│\n│ Portuguese : Português↵│\n│ Russian : Русский ↵│\n│ Spanish : Español ↵│\n│ Thai : ไทย │\n└──────────────────────────┘\n(1 row)\n\n\npostgres=# SELECT UNISTR('Odpov\\u011Bdn\\u00E1 osoba');\n┌─────────────────┐\n│ unistr │\n╞═════════════════╡\n│ Odpovědná osoba │\n└─────────────────┘\n(1 row)\n\nNew patch attached\n\nRegards\n\nPavel\n\n\n\n\n\n\n> Pavel\n>", "msg_date": "Wed, 2 Dec 2020 
19:30:39 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "On Wed, Dec 2, 2020 at 07:30:39PM +0100, Pavel Stehule wrote:\n> postgres=# select\n>  'Arabic     : ' || unistr( '\\0627\\0644\\0639\\0631\\0628\\064A\\0629' )      || '\n>   Chinese    : ' || unistr( '\\4E2D\\6587' )                               || '\n>   English    : ' || unistr( 'English' )                                  || '\n>   French     : ' || unistr( 'Fran\\00E7ais' )                             || '\n>   German     : ' || unistr( 'Deutsch' )                                  || '\n>   Greek      : ' || unistr( '\\0395\\03BB\\03BB\\03B7\\03BD\\03B9\\03BA\\03AC' ) || '\n>   Hebrew     : ' || unistr( '\\05E2\\05D1\\05E8\\05D9\\05EA' )                || '\n>   Japanese   : ' || unistr( '\\65E5\\672C\\8A9E' )                          || '\n>   Korean     : ' || unistr( '\\D55C\\AD6D\\C5B4' )                          || '\n>   Portuguese : ' || unistr( 'Portugu\\00EAs' )                            || '\n>   Russian    : ' || unistr( '\\0420\\0443\\0441\\0441\\043A\\0438\\0439' )      || '\n>   Spanish    : ' || unistr( 'Espa\\00F1ol' )                              || '\n>   Thai       : ' || unistr( '\\0E44\\0E17\\0E22' )\n>   as unicode_test_string;\n> ┌──────────────────────────┐\n> │   unicode_test_string    │\n> ╞══════════════════════════╡\n> │ Arabic     : العربية    ↵│\n> │   Chinese    : 中文     ↵│\n> │   English    : English  ↵│\n> │   French     : Français ↵│\n> │   German     : Deutsch  ↵│\n> │   Greek      : Ελληνικά ↵│\n> │   Hebrew     : עברית    ↵│\n> │   Japanese   : 日本語   ↵│\n> │   Korean     : 한국어   ↵│\n> │   Portuguese : Português↵│\n> │   Russian    : Русский  ↵│\n> │   Spanish    : Español  ↵│\n> │   Thai       : ไทย       │\n> └──────────────────────────┘\n\nOfflist, this table output is super-cool!\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n 
EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 2 Dec 2020 14:25:07 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "On 12/2/20 1:30 PM, Pavel Stehule wrote:\n> st 2. 12. 2020 v 11:37 odesílatel Pavel Stehule <pavel.stehule@gmail.com \n> st 2. 12. 2020 v 9:23 odesílatel Peter Eisentraut\n> \n> Heh.  The fact that there is a table of two dozen possible\n> representations kind of proves my point that we should be\n> deliberate in\n> picking one.\n> \n> I do see Oracle unistr() on that list, which appears to be very\n> similar\n> to what you are trying to do here.  Maybe look into aligning\n> with that.\n> \n> unistr is a primitive form of proposed function.  But it can be used\n> as a base. The format is compatible with our  \"4.1.2.3. String\n> Constants with Unicode Escapes\".\n> \n> What do you think about the following proposal?\n> \n> 1. unistr(text) .. compatible with Postgres unicode escapes - it is\n> enhanced against Oracle, because Oracle's unistr doesn't support 6\n> digits unicodes.\n> \n> 2. there can be optional parameter \"prefix\" with default \"\\\". But\n> with \"\\u\" it can be compatible with Java or Python.\n> \n> What do you think about it?\n> \n> I thought about it a little bit more, and  the prefix specification has \n> not too much sense (more if we implement this functionality as function \n> \"unistr\"). I removed the optional argument and renamed the function to \n> \"unistr\". The functionality is the same. Now it supports Oracle \n> convention, Java and Python (for Python UXXXXXXXX) and \\+XXXXXX. These \n> formats was already supported.The compatibility witth Oracle is nice.\n\nPeter, it looks like Pavel has aligned this function with unistr() as \nyou suggested. 
Thoughts?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 10 Mar 2021 08:52:50 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "\nOn 10.03.21 14:52, David Steele wrote:\n>> I thought about it a little bit more, and  the prefix specification \n>> has not too much sense (more if we implement this functionality as \n>> function \"unistr\"). I removed the optional argument and renamed the \n>> function to \"unistr\". The functionality is the same. Now it supports \n>> Oracle convention, Java and Python (for Python UXXXXXXXX) and \n>> \\+XXXXXX. These formats was already supported.The compatibility witth \n>> Oracle is nice.\n> \n> Peter, it looks like Pavel has aligned this function with unistr() as \n> you suggested. Thoughts?\n\nI haven't read through the patch in detail yet, but I support the \nproposed details of the functionality.\n\n\n", "msg_date": "Thu, 25 Mar 2021 10:44:49 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "On 25.03.21 10:44, Peter Eisentraut wrote:\n> \n> On 10.03.21 14:52, David Steele wrote:\n>>> I thought about it a little bit more, and  the prefix specification \n>>> has not too much sense (more if we implement this functionality as \n>>> function \"unistr\"). I removed the optional argument and renamed the \n>>> function to \"unistr\". The functionality is the same. Now it supports \n>>> Oracle convention, Java and Python (for Python UXXXXXXXX) and \n>>> \\+XXXXXX. These formats was already supported.The compatibility witth \n>>> Oracle is nice.\n>>\n>> Peter, it looks like Pavel has aligned this function with unistr() as \n>> you suggested. 
Thoughts?\n> \n> I haven't read through the patch in detail yet, but I support the \n> proposed details of the functionality.\n\nCommitted.\n\nI made two major changes: I moved the tests from unicode.sql to \nstrings.sql. The first file is for tests that only work in UTF8 \nencoding, which is not the case here. Also, I wasn't comfortable with \nexposing little utility functions from the parser in an ad hoc way. So \nI made local copies, which also allows us to make more \nlocally-appropriate error messages. I think there is some potential for \nrefactoring here (see also src/common/hex.c), but that's perhaps better \ndone separately and more comprehensively.\n\n\n", "msg_date": "Mon, 29 Mar 2021 12:19:07 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: proposal: unescape_text function" }, { "msg_contents": "po 29. 3. 2021 v 12:19 odesílatel Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> napsal:\n\n> On 25.03.21 10:44, Peter Eisentraut wrote:\n> >\n> > On 10.03.21 14:52, David Steele wrote:\n> >>> I thought about it a little bit more, and the prefix specification\n> >>> has not too much sense (more if we implement this functionality as\n> >>> function \"unistr\"). I removed the optional argument and renamed the\n> >>> function to \"unistr\". The functionality is the same. Now it supports\n> >>> Oracle convention, Java and Python (for Python UXXXXXXXX) and\n> >>> \\+XXXXXX. These formats was already supported.The compatibility witth\n> >>> Oracle is nice.\n> >>\n> >> Peter, it looks like Pavel has aligned this function with unistr() as\n> >> you suggested. Thoughts?\n> >\n> > I haven't read through the patch in detail yet, but I support the\n> > proposed details of the functionality.\n>\n> Committed.\n>\n> I made two major changes: I moved the tests from unicode.sql to\n> strings.sql. The first file is for tests that only work in UTF8\n> encoding, which is not the case here. 
Also, I wasn't comfortable with\n> exposing little utility functions from the parser in an ad hoc way. So\n> I made local copies, which also allows us to make more\n> locally-appropriate error messages. I think there is some potential for\n> refactoring here (see also src/common/hex.c), but that's perhaps better\n> done separately and more comprehensively.\n>\n\nThank you very much\n\nPavel\n\npo 29. 3. 2021 v 12:19 odesílatel Peter Eisentraut <peter.eisentraut@enterprisedb.com> napsal:On 25.03.21 10:44, Peter Eisentraut wrote:\n> \n> On 10.03.21 14:52, David Steele wrote:\n>>> I thought about it a little bit more, and  the prefix specification \n>>> has not too much sense (more if we implement this functionality as \n>>> function \"unistr\"). I removed the optional argument and renamed the \n>>> function to \"unistr\". The functionality is the same. Now it supports \n>>> Oracle convention, Java and Python (for Python UXXXXXXXX) and \n>>> \\+XXXXXX. These formats was already supported.The compatibility witth \n>>> Oracle is nice.\n>>\n>> Peter, it looks like Pavel has aligned this function with unistr() as \n>> you suggested. Thoughts?\n> \n> I haven't read through the patch in detail yet, but I support the \n> proposed details of the functionality.\n\nCommitted.\n\nI made two major changes:  I moved the tests from unicode.sql to \nstrings.sql.  The first file is for tests that only work in UTF8 \nencoding, which is not the case here.  Also, I wasn't comfortable with \nexposing little utility functions from the parser in an ad hoc way.  So \nI made local copies, which also allows us to make more \nlocally-appropriate error messages.  
I think there is some potential for \nrefactoring here (see also src/common/hex.c), but that's perhaps better \ndone separately and more comprehensively.Thank you very muchPavel", "msg_date": "Mon, 29 Mar 2021 12:22:28 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: unescape_text function" } ]
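For reference, the escape syntax discussed in the thread above — \XXXX and \+XXXXXX from the existing string-literal rules, plus \uXXXX and \UXXXXXXXX, with \\ for a literal backslash — can be modeled loosely in Python. This is an illustration only; the committed implementation is server-side C and, unlike this sketch, validates the escapes rather than leaving malformed ones alone:

```python
import re

# Approximate model of the escape forms described upthread.
_ESC = re.compile(
    r'\\(\\)'                     # \\        -> literal backslash
    r'|\\u([0-9A-Fa-f]{4})'       # \uXXXX
    r'|\\U([0-9A-Fa-f]{8})'       # \UXXXXXXXX
    r'|\\\+([0-9A-Fa-f]{6})'      # \+XXXXXX
    r'|\\([0-9A-Fa-f]{4})'        # \XXXX
)

def unistr(s: str) -> str:
    def repl(m):
        if m.group(1):
            return '\\'
        code = next(g for g in m.groups()[1:] if g is not None)
        return chr(int(code, 16))
    return _ESC.sub(repl, s)
```

With this sketch, unistr('Odpov\u011Bdn\u00E1 osoba') yields 'Odpovědná osoba', matching the example upthread.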
[ { "msg_contents": "I propose to backpatch b61d161c14 [1] (Introduce vacuum errcontext to\ndisplay additional information.). In the recent past, we have seen an\nerror report similar to \"ERROR: found xmin 2157740646 from before\nrelfrozenxid 1197\" from multiple EDB customers. A similar report is\nseen on pgsql-bugs as well [2] which I think has triggered the\nimplementation of this feature for v13. Such reports mostly indicate\ndatabase corruption rather than any postgres bug which is also\nindicated by the error-code (from before relfrozenxid) for this\nmessage. I think there is a good reason to back-patch this as multiple\nusers are facing similar issues. This patch won't fix this issue but\nit will help us in detecting the problematic part of the heap/index\nand then if users wish they can delete the portion of data that\nappeared to be corrupted and resume the operations on that relation.\n\nI have tried to back-patch this for v12 and attached is the result.\nThe attached patch passes make check-world but I have yet to test it\nmanually and also prepare the patch for other branches once we agree\non this proposal.\n\nThoughts?\n\n[1] -\ncommit b61d161c146328ae6ba9ed937862d66e5c8b035a\nAuthor: Amit Kapila <akapila@postgresql.org>\nDate: Mon Mar 30 07:33:38 2020 +0530\n\n Introduce vacuum errcontext to display additional information.\n\n The additional information displayed will be block number for error\n occurring while processing heap and index name for error occurring\n while processing the index.\n\n[2] - https://www.postgresql.org/message-id/20190807235154.erbmr4o4bo6vgnjv%40alap3.anarazel.de\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 22 Jun 2020 10:35:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Backpatch b61d161c14" }, { "msg_contents": "On Mon, Jun 22, 2020 at 10:35 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I propose to backpatch 
b61d161c14 [1] (Introduce vacuum errcontext to\n> display additional information.). In the recent past, we have seen an\n> error report similar to \"ERROR: found xmin 2157740646 from before\n> relfrozenxid 1197\" from multiple EDB customers. A similar report is\n> seen on pgsql-bugs as well [2] which I think has triggered the\n> implementation of this feature for v13. Such reports mostly indicate\n> database corruption rather than any postgres bug which is also\n> indicated by the error-code (from before relfrozenxid) for this\n> message.\n>\n\nSorry, the error-code I want to refer to in above sentence was\nERRCODE_DATA_CORRUPTED.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Jun 2020 17:35:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Backpatch b61d161c14" }, { "msg_contents": "On 2020-Jun-22, Amit Kapila wrote:\n\n> I propose to backpatch b61d161c14 [1] (Introduce vacuum errcontext to\n> display additional information.). In the recent past, we have seen an\n> error report similar to \"ERROR: found xmin 2157740646 from before\n> relfrozenxid 1197\" from multiple EDB customers. A similar report is\n> seen on pgsql-bugs as well [2] which I think has triggered the\n> implementation of this feature for v13.\n\n+1 to backpatching this change. I did not review your actual patch,\nthough.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 22 Jun 2020 12:32:13 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Backpatch b61d161c14" }, { "msg_contents": "Hi,\n\nOn 2020-06-22 10:35:47 +0530, Amit Kapila wrote:\n> I propose to backpatch b61d161c14 [1] (Introduce vacuum errcontext to\n> display additional information.). 
In the recent past, we have seen an\n> error report similar to \"ERROR: found xmin 2157740646 from before\n> relfrozenxid 1197\" from multiple EDB customers. A similar report is\n> seen on pgsql-bugs as well [2] which I think has triggered the\n> implementation of this feature for v13. Such reports mostly indicate\n> database corruption rather than any postgres bug which is also\n> indicated by the error-code (from before relfrozenxid) for this\n> message. I think there is a good reason to back-patch this as multiple\n> users are facing similar issues. This patch won't fix this issue but\n> it will help us in detecting the problematic part of the heap/index\n> and then if users wish they can delete the portion of data that\n> appeared to be corrupted and resume the operations on that relation.\n> \n> I have tried to back-patch this for v12 and attached is the result.\n> The attached patch passes make check-world but I have yet to test it\n> manually and also prepare the patch for other branches once we agree\n> on this proposal.\n\nI think having the additional information in the back branches would be\ngood. 
But on the other hand I think this is a somewhat large change\nto backpatch, and it hasn't yet much real world exposure.\n\nI'm also uncomfortable with the approach of just copying all of\nLVRelStats in several places:\n\n> /*\n> @@ -1580,9 +1648,15 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,\n> \tint\t\t\tuncnt = 0;\n> \tTransactionId visibility_cutoff_xid;\n> \tbool\t\tall_frozen;\n> +\tLVRelStats\tolderrinfo;\n> \n> \tpgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_VACUUMED, blkno);\n> \n> +\t/* Update error traceback information */\n> +\tolderrinfo = *vacrelstats;\n> +\tupdate_vacuum_error_info(vacrelstats, VACUUM_ERRCB_PHASE_VACUUM_HEAP,\n> +\t\t\t\t\t\t\t blkno, NULL);\n> +\n> \tSTART_CRIT_SECTION();\n> \n> \tfor (; tupindex < vacrelstats->num_dead_tuples; tupindex++)\n> @@ -1659,6 +1733,11 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,\n> \t\t\t\t\t\t\t *vmbuffer, visibility_cutoff_xid, flags);\n> \t}\n> \n> +\t/* Revert to the previous phase information for error traceback */\n> +\tupdate_vacuum_error_info(vacrelstats,\n> +\t\t\t\t\t\t\t olderrinfo.phase,\n> +\t\t\t\t\t\t\t olderrinfo.blkno,\n> +\t\t\t\t\t\t\t olderrinfo.indname);\n> \treturn tupindex;\n> }\n\nTo me that's a very weird approach. It's fragile because we need to be\nsure that there's no updates to the wrong LVRelStats for important\nthings, and it has a good bit of potential to be inefficient because\nLVRelStats isn't exactly small. 
This pretty much relies on the compiler\ndoing good enough escape analysis to realize that most parts of\nolderrinfo aren't touched.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 Jun 2020 13:09:39 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Backpatch b61d161c14" }, { "msg_contents": "On Mon, Jun 22, 2020 at 01:09:39PM -0700, Andres Freund wrote:\n> On 2020-06-22 10:35:47 +0530, Amit Kapila wrote:\n> > I propose to backpatch b61d161c14 [1] (Introduce vacuum errcontext to\n> > display additional information.).\n...\n> I think having the additional information in the back branches would be\n> good. But on the other hand I think this is a somewhat large change\n> to backpatch, and it hasn't yet much real world exposure.\n\nI see that's nontrivial to cherry-pick due to parallel vacuum changes, and due\nto re-arranging calls to pgstat_progress.\n\nSince the next minor releases are in August, and PG13 expected to be released\n~October, we could defer backpatching until November (or later).\n\n> I'm also uncomfortable with the approach of just copying all of\n> LVRelStats in several places:\n> \n> > /*\n> > @@ -1580,9 +1648,15 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,\n> > \tint\t\t\tuncnt = 0;\n> > \tTransactionId visibility_cutoff_xid;\n> > \tbool\t\tall_frozen;\n> > +\tLVRelStats\tolderrinfo;\n\nI guess the alternative is to write like\n\nLVRelStats\tolderrinfo = {\n\t.phase = vacrelstats.phase,\n\t.blkno = vacrelstats.blkno,\n\t.indname = vacrelstats.indname,\n};\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 22 Jun 2020 15:43:11 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Backpatch b61d161c14 (Introduce vacuum errcontext ...)" }, { "msg_contents": "Hi,\n\nOn 2020-06-22 15:43:11 -0500, Justin Pryzby wrote:\n> On Mon, Jun 22, 2020 at 01:09:39PM -0700, Andres Freund wrote:\n> > I'm also uncomfortable with the approach of 
just copying all of\n> > LVRelStats in several places:\n> > \n> > > /*\n> > > @@ -1580,9 +1648,15 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,\n> > > \tint\t\t\tuncnt = 0;\n> > > \tTransactionId visibility_cutoff_xid;\n> > > \tbool\t\tall_frozen;\n> > > +\tLVRelStats\tolderrinfo;\n> \n> I guess the alternative is to write like\n> \n> LVRelStats\tolderrinfo = {\n> \t.phase = vacrelstats.phase,\n> \t.blkno = vacrelstats.blkno,\n> \t.indname = vacrelstats.indname,\n> };\n\nNo, I don't think that's a solution. I think it's wrong to have\nsomething like olderrinfo in the first place. Using a struct with ~25\nmembers to store the current state of three variables just doesn't make\nsense. Why isn't this just a LVSavedPosition struct or something like\nthat?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 Jun 2020 13:57:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Backpatch b61d161c14 (Introduce vacuum errcontext ...)" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> No, I don't think that's a solution. I think it's wrong to have\n> something like olderrinfo in the first place. Using a struct with ~25\n> members to store the current state of three variables just doesn't make\n> sense. Why isn't this just a LVSavedPosition struct or something like\n> that?\n\nThat seems like rather pointless micro-optimization really; the struct's\nnot *that* large. But I have a different complaint now that I look at\nthis code: is it safe at all? I see that the indname field is a pointer\nto who-knows-where. 
If it's possible in the first place for that to\nchange while this code runs, then what guarantees that we won't be\nrestoring a dangling pointer to freed memory?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Jun 2020 18:15:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Backpatch b61d161c14 (Introduce vacuum errcontext ...)" }, { "msg_contents": "On Mon, Jun 22, 2020 at 06:15:27PM -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > No, I don't think that's a solution. I think it's wrong to have\n> > something like olderrinfo in the first place. Using a struct with ~25\n> > members to store the current state of three variables just doesn't make\n> > sense. Why isn't this just a LVSavedPosition struct or something like\n> > that?\n> \n> That seems like rather pointless micro-optimization really; the struct's\n> not *that* large. But I have a different complaint now that I look at\n> this code: is it safe at all? I see that the indname field is a pointer\n> to who-knows-where. If it's possible in the first place for that to\n> change while this code runs, then what guarantees that we won't be\n> restoring a dangling pointer to freed memory?\n\nI'm not sure it addresses your concern, but we talked a bunch about safety\nstarting here:\nhttps://www.postgresql.org/message-id/20200326150457.GB17431%40telsasoft.com\n..and concluding with an explanation about CHECK_FOR_INTERRUPTS.\n\n20200326150457.GB17431@telsasoft.com\n|And I think you're right: we only save state when the calling function has a\n|indname=NULL, so we never \"put back\" a non-NULL indname. We go from having a\n|indname=NULL at lazy_scan_heap to not not-NULL at lazy_vacuum_index, and never\n|the other way around. 
So once we've \"reverted back\", 1) the pointer is null;\n|and, 2) the callback function doesn't access it for the previous/reverted phase\n|anyway.\n\nWhen this function is called by lazy_vacuum_{heap,page,index}, it's also called\na 2nd time to restore the previous phase information. When it's called the\nfirst time by lazy_vacuum_index(), it does errinfo->indname = pstrdup(indname),\nand on the 2nd call then does pfree(errinfo->indame), followed by\nerrinfo->indname = NULL.\n\n|static void\n|update_vacuum_error_info(LVSavedPosition *errinfo, int phase, BlockNumber blkno,\n| char *indname)\n|{\n| errinfo->blkno = blkno;\n| errinfo->phase = phase;\n|\n| /* Free index name from any previous phase */\n| if (errinfo->indname)\n| pfree(errinfo->indname);\n|\n| /* For index phases, save the name of the current index for the callback */\n| errinfo->indname = indname ? pstrdup(indname) : NULL;\n|}\n\nIf it's inadequately clear, maybe we should do:\n\n if (errinfo->indname)\n+ {\n pfree(errinfo->indname);\n+ Assert(indname == NULL);\n+ }\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 22 Jun 2020 17:53:01 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Backpatch b61d161c14 (Introduce vacuum errcontext ...)" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Mon, Jun 22, 2020 at 06:15:27PM -0400, Tom Lane wrote:\n>> That seems like rather pointless micro-optimization really; the struct's\n>> not *that* large. But I have a different complaint now that I look at\n>> this code: is it safe at all? I see that the indname field is a pointer\n>> to who-knows-where. 
If it's possible in the first place for that to\n>> change while this code runs, then what guarantees that we won't be\n>> restoring a dangling pointer to freed memory?\n\n> I'm not sure it addresses your concern, but we talked a bunch about safety\n> starting here:\n> https://www.postgresql.org/message-id/20200326150457.GB17431%40telsasoft.com\n> ..and concluding with an explanation about CHECK_FOR_INTERRUPTS.\n\n> 20200326150457.GB17431@telsasoft.com\n> |And I think you're right: we only save state when the calling function has a\n> |indname=NULL, so we never \"put back\" a non-NULL indname. We go from having a\n> |indname=NULL at lazy_scan_heap to not not-NULL at lazy_vacuum_index, and never\n> |the other way around. So once we've \"reverted back\", 1) the pointer is null;\n> |and, 2) the callback function doesn't access it for the previous/reverted phase\n> |anyway.\n\nIf we're relying on that, I'd replace the \"save\" action by an Assert that\nindname is NULL, and the \"restore\" action by just assigning NULL again.\nThat eliminates all concern about whether the restored value is valid.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Jun 2020 19:03:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Backpatch b61d161c14 (Introduce vacuum errcontext ...)" }, { "msg_contents": "On Mon, Jun 22, 2020 at 01:57:12PM -0700, Andres Freund wrote:\n> On 2020-06-22 15:43:11 -0500, Justin Pryzby wrote:\n> > On Mon, Jun 22, 2020 at 01:09:39PM -0700, Andres Freund wrote:\n> > > I'm also uncomfortable with the approach of just copying all of\n> > > LVRelStats in several places:\n> > > \n> > > > /*\n> > > > @@ -1580,9 +1648,15 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,\n> > > > \tint\t\t\tuncnt = 0;\n> > > > \tTransactionId visibility_cutoff_xid;\n> > > > \tbool\t\tall_frozen;\n> > > > +\tLVRelStats\tolderrinfo;\n> > \n> > I guess the alternative is to write like\n> > \n> > LVRelStats\tolderrinfo 
= {\n> > \t.phase = vacrelstats.phase,\n> > \t.blkno = vacrelstats.blkno,\n> > \t.indname = vacrelstats.indname,\n> > };\n> \n> No, I don't think that's a solution. I think it's wrong to have\n> something like olderrinfo in the first place. Using a struct with ~25\n> members to store the current state of three variables just doesn't make\n> sense. Why isn't this just a LVSavedPosition struct or something like\n> that?\n\nI'd used LVRelStats on your suggestion:\nhttps://www.postgresql.org/message-id/20191211165425.4ewww2s5k5cafi4l%40alap3.anarazel.de\nhttps://www.postgresql.org/message-id/20200120191305.sxi44cedhtxwr3ag%40alap3.anarazel.de\n\nI understood the goal to be avoiding the need to add a new struct, when most\nfunctions are already passed LVRelStats *vacrelstats.\n\nBut maybe I misunderstood. (Also, back in January, the callback was only used\nfor scan-heap phase, so it's increased in scope several times).\n\nAnyway, I put together some patches for discussion purposes.\n\n-- \nJustin", "msg_date": "Mon, 22 Jun 2020 20:43:47 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Backpatch b61d161c14 (Introduce vacuum errcontext ...)" }, { "msg_contents": "On Tue, Jun 23, 2020 at 7:13 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Jun 22, 2020 at 01:57:12PM -0700, Andres Freund wrote:\n> >\n> > No, I don't think that's a solution. I think it's wrong to have\n> > something like olderrinfo in the first place. Using a struct with ~25\n> > members to store the current state of three variables just doesn't make\n> > sense. 
Why isn't this just a LVSavedPosition struct or something like\n> > that?\n>\n> I'd used LVRelStats on your suggestion:\n> https://www.postgresql.org/message-id/20191211165425.4ewww2s5k5cafi4l%40alap3.anarazel.de\n> https://www.postgresql.org/message-id/20200120191305.sxi44cedhtxwr3ag%40alap3.anarazel.de\n>\n> I understood the goal to be avoiding the need to add a new struct, when most\n> functions are already passed LVRelStats *vacrelstats.\n>\n\nYeah, I think this is a good point against adding a separate struct.\nI also don't think that we can buy much by doing this optimization.\nTo me, the current code looks good in this regard.\n\n> But maybe I misunderstood. (Also, back in January, the callback was only used\n> for scan-heap phase, so it's increased in scope several times).\n>\n> Anyway, I put together some patches for discussion purposes.\n>\n\nFew comments for 0002-Add-assert-and-document-why-indname-is-safe\n-----------------------------------------------------------------------------------------------------\n- /* Free index name from any previous phase */\n if (errinfo->indname)\n+ {\n+ /*\n+ * indname is only ever saved during lazy_vacuum_index(), which\n+ * during which the phase information is not not further\n+ * manipulated, until it's restored before returning from\n+ * lazy_vacuum_index().\n+ */\n+ Assert(indname == NULL);\n+\n pfree(errinfo->indname);\n+ errinfo->indname = NULL;\n+ }\n\nIt is not very clear that this is the place where we are saving the\nstate. I think it would be better to do in the caller (ex. in before\nstatement olderrinfo = *vacrelstats; in lazy_vacuum_index()) where it\nis clear that we are saving the state for later use.\n\nI guess for the restore case we are already assigning NULL via\n\"errinfo->indname = indname ? pstrdup(indname) : NULL;\" in\nupdate_vacuum_error_info. 
I think some more comments in the function\nupdate_vacuum_error_info would explain it better.\n\n0001-Rename-from-errcbarg, looks fine to me but we can see if others\nhave any opinion on the naming (especially changing VACUUM_ERRCB* to\nVACUUM_ERRINFO*).\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 23 Jun 2020 09:27:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Backpatch b61d161c14 (Introduce vacuum errcontext ...)" }, { "msg_contents": "Hi,\n\nOn 2020-06-22 20:43:47 -0500, Justin Pryzby wrote:\n> On Mon, Jun 22, 2020 at 01:57:12PM -0700, Andres Freund wrote:\n> > On 2020-06-22 15:43:11 -0500, Justin Pryzby wrote:\n> > > On Mon, Jun 22, 2020 at 01:09:39PM -0700, Andres Freund wrote:\n> > > > I'm also uncomfortable with the approach of just copying all of\n> > > > LVRelStats in several places:\n> > > > \n> > > > > /*\n> > > > > @@ -1580,9 +1648,15 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,\n> > > > > \tint\t\t\tuncnt = 0;\n> > > > > \tTransactionId visibility_cutoff_xid;\n> > > > > \tbool\t\tall_frozen;\n> > > > > +\tLVRelStats\tolderrinfo;\n> > > \n> > > I guess the alternative is to write like\n> > > \n> > > LVRelStats\tolderrinfo = {\n> > > \t.phase = vacrelstats.phase,\n> > > \t.blkno = vacrelstats.blkno,\n> > > \t.indname = vacrelstats.indname,\n> > > };\n> > \n> > No, I don't think that's a solution. I think it's wrong to have\n> > something like olderrinfo in the first place. Using a struct with ~25\n> > members to store the current state of three variables just doesn't make\n> > sense. 
Why isn't this just a LVSavedPosition struct or something like\n> > that?\n> \n> I'd used LVRelStats on your suggestion:\n> https://www.postgresql.org/message-id/20191211165425.4ewww2s5k5cafi4l%40alap3.anarazel.de\n> https://www.postgresql.org/message-id/20200120191305.sxi44cedhtxwr3ag%40alap3.anarazel.de\n> \n> I understood the goal to be avoiding the need to add a new struct, when most\n> functions are already passed LVRelStats *vacrelstats.\n\n> But maybe I misunderstood. (Also, back in January, the callback was only used\n> for scan-heap phase, so it's increased in scope several times).\n\nI am only suggesting that where you save the old location, as currently\ndone with LVRelStats olderrinfo, you instead use a more specific\ntype. Not that you should pass that anywhere (except for\nupdate_vacuum_error_info).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 Jun 2020 21:05:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Backpatch b61d161c14 (Introduce vacuum errcontext ...)" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I am only suggesting that where you save the old location, as currently\n> done with LVRelStats olderrinfo, you instead use a more specific\n> type. Not that you should pass that anywhere (except for\n> update_vacuum_error_info).\n\nAs things currently stand, I don't think we need another struct\ntype at all. ISTM we should hard-wire the handling of indname\nas I suggested above. Then there are only two fields to be dealt\nwith, and we could just as well save them in simple local variables.\n\nIf there's a clear future path to needing to save/restore more\nfields, then maybe another struct type would be useful ... 
but\nright now the struct type declaration itself would take more\nlines of code than it would save.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Jun 2020 00:14:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Backpatch b61d161c14 (Introduce vacuum errcontext ...)" }, { "msg_contents": "On Tue, Jun 23, 2020 at 12:14:40AM -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I am only suggesting that where you save the old location, as currently\n> > done with LVRelStats olderrinfo, you instead use a more specific\n> > type. Not that you should pass that anywhere (except for\n> > update_vacuum_error_info).\n> \n> As things currently stand, I don't think we need another struct\n> type at all. ISTM we should hard-wire the handling of indname\n> as I suggested above. Then there are only two fields to be dealt\n> with, and we could just as well save them in simple local variables.\n> \n> If there's a clear future path to needing to save/restore more\n> fields, then maybe another struct type would be useful ... but\n> right now the struct type declaration itself would take more\n> lines of code than it would save.\n\nUpdated patches for consideration. I left the \"struct\" patch there to show\nwhat it'd look like.\n\n-- \nJustin", "msg_date": "Tue, 23 Jun 2020 08:53:54 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Backpatch b61d161c14 (Introduce vacuum errcontext ...)" }, { "msg_contents": "Hi,\n\nOn 2020-06-23 00:14:40 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I am only suggesting that where you save the old location, as currently\n> > done with LVRelStats olderrinfo, you instead use a more specific\n> > type. Not that you should pass that anywhere (except for\n> > update_vacuum_error_info).\n> \n> As things currently stand, I don't think we need another struct\n> type at all. 
ISTM we should hard-wire the handling of indname\n> as I suggested above. Then there are only two fields to be dealt\n> with, and we could just as well save them in simple local variables.\n\nThat's fine with me too.\n\n\n> If there's a clear future path to needing to save/restore more\n> fields, then maybe another struct type would be useful ... but\n> right now the struct type declaration itself would take more\n> lines of code than it would save.\n\nFWIW, I started to be annoyed by this code when I was addding\nprefetching support to vacuum, and wanted to change what's tracked\nwhere. The number of places that needed to be touched was\ndisproportional.\n\n\nHere's a *draft* for how I thought this roughly could look like. I think\nit's nicer to not specify the exact saved state in multiple places, and\nI think it's much clearer to use a separate function to restore the\nstate than to set a \"fresh\" state.\n\nI've only applied a hacky fix for the way the indname is tracked, I\nthought that'd best be discussed separately.\n\nGreetings,\n\nAndres Freund", "msg_date": "Tue, 23 Jun 2020 11:19:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Backpatch b61d161c14 (Introduce vacuum errcontext ...)" }, { "msg_contents": "On Tue, Jun 23, 2020 at 11:49 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-06-23 00:14:40 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > I am only suggesting that where you save the old location, as currently\n> > > done with LVRelStats olderrinfo, you instead use a more specific\n> > > type. Not that you should pass that anywhere (except for\n> > > update_vacuum_error_info).\n> >\n> > As things currently stand, I don't think we need another struct\n> > type at all. ISTM we should hard-wire the handling of indname\n> > as I suggested above. 
Then there are only two fields to be dealt\n> > with, and we could just as well save them in simple local variables.\n>\n> That's fine with me too.\n>\n\nI have looked at both the patches (using separate variables (by\nJustin) and using a struct (by Andres)) and found the second one bit\nbetter.\n\n>\n> > If there's a clear future path to needing to save/restore more\n> > fields, then maybe another struct type would be useful ... but\n> > right now the struct type declaration itself would take more\n> > lines of code than it would save.\n>\n> FWIW, I started to be annoyed by this code when I was addding\n> prefetching support to vacuum, and wanted to change what's tracked\n> where. The number of places that needed to be touched was\n> disproportional.\n>\n>\n> Here's a *draft* for how I thought this roughly could look like. I think\n> it's nicer to not specify the exact saved state in multiple places, and\n> I think it's much clearer to use a separate function to restore the\n> state than to set a \"fresh\" state.\n>\n\nI think this is a good idea and makes code look better. I think it is\nbetter to name new struct as LVSavedErrInfo instead of LVSavedPos.\n\n> I've only applied a hacky fix for the way the indname is tracked, I\n> thought that'd best be discussed separately.\n>\n\nI think it is better to use Tom's idea here to save and restore index\ninformation in-place. 
I have used Justin's patch with some\nimprovements like adding Asserts and initializing with NULL for\nindname while restoring to make things unambiguous.\n\nI have improved some comments in the code and for now, kept as two\npatches (a) one for improving the error info for index (mostly\nJustin's patch based on Tom's idea) and (b) the other to generally\nimprove the code in this area (mostly Andres's patch).\n\nI have done some testing with both the patches and would like to do\nmore unless there are objections with these.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 25 Jun 2020 14:31:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Backpatch b61d161c14 (Introduce vacuum errcontext ...)" }, { "msg_contents": "On Thu, Jun 25, 2020 at 02:31:51PM +0530, Amit Kapila wrote:\n> I have looked at both the patches (using separate variables (by\n> Justin) and using a struct (by Andres)) and found the second one bit\n> better.\n\nThanks for looking.\n\n> I have improved some comments in the code and for now, kept as two\n> patches (a) one for improving the error info for index (mostly\n> Justin's patch based on Tom's idea) and (b) the other to generally\n> improve the code in this area (mostly Andres's patch).\n\nAnd thanks for separate patchen :)\n\n> I have done some testing with both the patches and would like to do\n> more unless there are objections with these.\n\nComments:\n\n> * The index name is saved only during this phase and restored immediately\n\n=> I wouldn't say \"only\" since it's saved during lazy_vacuum: index AND cleanup.\n\n>update_vacuum_error_info(LVRelStats *errinfo, LVSavedErrInfo *oldpos, int phase,\n\n=> You called your struct \"LVSavedErrInfo\" but the variables are still called\n\"pos\". 
I would call it olderrinfo or just old.\n\nAlso, this doesn't (re)rename the \"cbarg\" stuff that Alvaro didn't like, which\nwas my 0001 patch.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 25 Jun 2020 20:55:17 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Backpatch b61d161c14 (Introduce vacuum errcontext ...)" }, { "msg_contents": "On Fri, Jun 26, 2020 at 7:25 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n>\n> > I have done some testing with both the patches and would like to do\n> > more unless there are objections with these.\n>\n> Comments:\n>\n> > * The index name is saved only during this phase and restored immediately\n>\n> => I wouldn't say \"only\" since it's saved during lazy_vacuum: index AND cleanup.\n>\n> >update_vacuum_error_info(LVRelStats *errinfo, LVSavedErrInfo *oldpos, int phase,\n>\n> => You called your struct \"LVSavedErrInfo\" but the variables are still called\n> \"pos\". I would call it olderrinfo or just old.\n>\n\nFixed both of the above comments. 
I used the variable name as saved_err_info.\n\n> Also, this doesn't (re)rename the \"cbarg\" stuff that Alvaro didn't like, which\n> was my 0001 patch.\n>\n\nIf I am not missing anything then that change was in\nlazy_cleanup_index and after this patch, it won't be required because\nwe are using a different variable name.\n\nI have combined both the patches now.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 26 Jun 2020 09:19:38 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Backpatch b61d161c14 (Introduce vacuum errcontext ...)" }, { "msg_contents": "On Fri, Jun 26, 2020 at 9:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jun 26, 2020 at 7:25 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> >\n> > Comments:\n> >\n> > > * The index name is saved only during this phase and restored immediately\n> >\n> > => I wouldn't say \"only\" since it's saved during lazy_vacuum: index AND cleanup.\n> >\n> > >update_vacuum_error_info(LVRelStats *errinfo, LVSavedErrInfo *oldpos, int phase,\n> >\n> > => You called your struct \"LVSavedErrInfo\" but the variables are still called\n> > \"pos\". I would call it olderrinfo or just old.\n> >\n>\n> Fixed both of the above comments. 
I used the variable name as saved_err_info.\n>\n> > Also, this doesn't (re)rename the \"cbarg\" stuff that Alvaro didn't like, which\n> > was my 0001 patch.\n> >\n>\n> If I am not missing anything then that change was in\n> lazy_cleanup_index and after this patch, it won't be required because\n> we are using a different variable name.\n>\n> I have combined both the patches now.\n>\n\nI am planning to push this tomorrow if there are no further\nsuggestions/comments.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 30 Jun 2020 09:30:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Backpatch b61d161c14 (Introduce vacuum errcontext ...)" }, { "msg_contents": "On Tue, Jun 30, 2020 at 9:30 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> >\n> > If I am not missing anything then that change was in\n> > lazy_cleanup_index and after this patch, it won't be required because\n> > we are using a different variable name.\n> >\n> > I have combined both the patches now.\n> >\n>\n> I am planning to push this tomorrow if there are no further\n> suggestions/comments.\n>\n\nPushed. Now, coming back to the question of the back patch. I see a\npoint in deferring this for 3-6 months or maybe more after PG13 is\nreleased. OTOH, this implementation is mainly triggered by issues\nreported in this area and this doesn't seem to be a very invasive\npatch which can cause some de-stabilization in back-branches. 
I am not\nin a hurry to get this backpatched but still, it would be good if this\ncan be backpatched earlier as quite a few people (onlist and EDB\ncustomers) have reported issues that could have been narrowed down if\nthis patch is present in back-branches.\n\nIt seems Alvaro and I are in favor of backpatch whereas Andres and\nJustin seem to think it should be deferred until this change has seen\nsome real-world exposure.\n\nAnyone else wants to weigh in?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Jul 2020 09:07:10 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Backpatch b61d161c14 (Introduce vacuum errcontext ...)" }, { "msg_contents": "On Thu, Jul 2, 2020 at 9:07 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jun 30, 2020 at 9:30 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > >\n> > > If I am not missing anything then that change was in\n> > > lazy_cleanup_index and after this patch, it won't be required because\n> > > we are using a different variable name.\n> > >\n> > > I have combined both the patches now.\n> > >\n> >\n> > I am planning to push this tomorrow if there are no further\n> > suggestions/comments.\n> >\n>\n> Pushed. Now, coming back to the question of the back patch. I see a\n> point in deferring this for 3-6 months or maybe more after PG13 is\n> released. OTOH, this implementation is mainly triggered by issues\n> reported in this area and this doesn't seem to be a very invasive\n> patch which can cause some de-stabilization in back-branches. 
I am not\n> in a hurry to get this backpatched but still, it would be good if this\n> can be backpatched earlier as quite a few people (onlist and EDB\n> customers) have reported issues that could have been narrowed down if\n> this patch is present in back-branches.\n>\n> It seems Alvaro and I are in favor of backpatch whereas Andres and\n> Justin seem to think it should be deferred until this change has seen\n> some real-world exposure.\n>\n> Anyone else wants to weigh in?\n>\n\nSeeing no more responses, it seems better to defer this backpatch till\nPG13 is out and we get some confidence in this functionality.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Jul 2020 08:57:38 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Backpatch b61d161c14 (Introduce vacuum errcontext ...)" } ]
[ { "msg_contents": "Hi,\n\nWhen a query on foreign table is executed from a local session using\npostgres_fdw, as expected the local postgres backend opens a\nconnection which causes a remote session/backend to be opened on the\nremote postgres server for query execution.\n\nOne observation is that, even after the query is finished, the remote\nsession/backend still persists on the remote postgres server. Upon\nresearching, I found that there is a concept of Connection Caching for\nthe remote connections made using postgres_fdw. Local backend/session\ncan cache up to 8 different connections per backend. This caching is\nuseful as it avoids the cost of reestablishing new connections per\nforeign query.\n\nHowever, at times, there may be situations where the long lasting\nlocal sessions may execute very few foreign queries and remaining all\nare local queries, in this scenario, the remote sessions opened by the\nlocal sessions/backends may not be useful as they remain idle and eat\nup the remote server connections capacity. This problem gets even\nworse(though this use case is a bit imaginary) if all of\nmax_connections(default 100 and each backend caching 8 remote\nconnections) local sessions open remote sessions and they are cached\nin the local backend.\n\nI propose to have a new session level GUC called\n\"enable_connectioncache\"(name can be changed if it doesn't correctly\nmean the purpose) with the default value being true which means that\nall the remote connections are cached. 
If set to false, the\nconnections are not cached and so are remote sessions closed by the local\nbackend/session at\nthe end of each remote transaction.\n\nAttached the initial patch(based on commit\n9550ea3027aa4f290c998afd8836a927df40b09d), test setup.\n\nAnother approach to solve this problem could be that (based on Robert's\nidea[1]) automatic clean up of cache entries, but letting users decide\non caching also seems to be good.\n\nPlease note that documentation is still pending.\n\nThoughts?\n\nTest Case:\nwithout patch:\n1. Run the query on foreign table\n2. Look for the backend/session opened on the remote postgres server, it\nexists till the local session remains active.\n\nwith patch:\n1. SET enable_connectioncache TO false;\n2. Run the query on the foreign table\n3. Look for the backend/session opened on the remote postgres server, it\nshould not exist.\n\n[1] -\nhttps://www.postgresql.org/message-id/CA%2BTgmob_ksTOgmbXhno%2Bk5XXPOK%2B-JYYLoU3MpXuutP4bH7gzA%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 22 Jun 2020 11:25:54 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Mon, Jun 22, 2020 at 11:26 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> When a query on foreign table is executed from a local session using\n> postgres_fdw, as expected the local postgres backend opens a\n> connection which causes a remote session/backend to be opened on the\n> remote postgres server for query execution.\n>\n> One observation is that, even after the query is finished, the remote\n> session/backend still persists on the remote postgres server. 
Upon\n> researching, I found that there is a concept of Connection Caching for\n> the remote connections made using postgres_fdw. Local backend/session\n> can cache up to 8 different connections per backend. This caching is\n> useful as it avoids the cost of reestablishing new connections per\n> foreign query.\n>\n> However, at times, there may be situations where the long lasting\n> local sessions may execute very few foreign queries and remaining all\n> are local queries, in this scenario, the remote sessions opened by the\n> local sessions/backends may not be useful as they remain idle and eat\n> up the remote server connections capacity. This problem gets even\n> worse(though this use case is a bit imaginary) if all of\n> max_connections(default 100 and each backend caching 8 remote\n> connections) local sessions open remote sessions and they are cached\n> in the local backend.\n>\n> I propose to have a new session level GUC called\n> \"enable_connectioncache\"(name can be changed if it doesn't correctly\n> mean the purpose) with the default value being true which means that\n> all the remote connections are cached.
If set to false, the\n> connections are not cached and so are remote sessions closed by the local backend/session at\n> the end of each remote transaction.\n>\n> Attached the initial patch(based on commit\n> 9550ea3027aa4f290c998afd8836a927df40b09d), test setup.\n\nFew comments:\n\n #backend_flush_after = 0 # measured in pages, 0 disables\n-\n+#enable_connectioncache = on\nThis guc could be placed in CONNECTIONS AND AUTHENTICATION section.\n\n+\n+ /* see if the cache was for postgres_fdw connections and\n+ user chose to disable connection caching*/\n+ if ((strcmp(hashp->tabname,\"postgres_fdw connections\") == 0) &&\n+ !enable_connectioncache)\n\nShould be changed to postgres style commenting like:\n/*\n * See if the cache was for postgres_fdw connections and\n * user chose to disable connection caching.\n */\n\n+ /* if true, fdw connections in a session are cached, else\n+ discarded at the end of every remote transaction.\n+ */\n+ bool enableconncache;\nShould be changed to postgres style commenting.\n\n+/* parameter for enabling fdw connection hashing */\n+bool enable_connectioncache = true;\n+\n\nShould this be connection caching?\n\nRegards,\nVignesh\nEnterpriseDB:
http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 27 Jun 2020 07:31:21 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Mon, Jun 22, 2020 at 11:26 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Attached the initial patch(based on commit\n> 9550ea3027aa4f290c998afd8836a927df40b09d), test setup.\n>\n\nmake check is failing\nsysviews.out 2020-06-27 07:22:32.162146320 +0530\n@@ -73,6 +73,7 @@\n name | setting\n --------------------------------+---------\n enable_bitmapscan | on\n+ enable_connectioncache | on\n\none of the test expect files needs to be updated.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 27 Jun 2020 07:37:49 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Sun, Jun 21, 2020 at 10:56 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> When a query on foreign table is executed from a local session using\n> postgres_fdw, as expected the local postgres backend opens a\n> connection which causes a remote session/backend to be opened on the\n> remote postgres server for query execution.\n>\n> [...]\n\n\n> I propose to have a new session level GUC called\n> \"enable_connectioncache\"(name can be changed if it doesn't correctly\n> mean the purpose) with the default value being true which means that\n> all the remote connections are cached. If set to false, the\n> connections are not cached and so are remote sessions closed by the local\n> backend/session at\n> the end of each remote transaction.\n>\n> [...]\n\n> Thoughts?\n>\n> Test Case:\n> without patch:\n> 1. Run the query on foreign table\n> 2. Look for the backend/session opened on the remote postgres server, it\n> exists till the local session remains active.\n>\n> with patch:\n> 1. SET enable_connectioncache TO false;\n> 2. Run the query on the foreign table\n> 3. Look for the backend/session opened on the remote postgres server, it\n> should not exist.\n>\n\nIf this is just going to apply to postgres_fdw why not just have that\nmodule provide a function \"disconnect_open_sessions()\" or the like that\ndoes this upon user command? I suppose there would be some potential value\nto having this be set per-user but that wouldn't preclude the usefulness of\na function. And by having a function the usefulness of the GUC seems\nreduced.
On a related note is there any entanglement here with the\nsupplied dblink and/or dblink_fdw [1] modules as they do provide connect\nand disconnect functions and also leverages postgres_fdw (or dblink_fdw if\nspecified, which brings us back to the question of whether this option\nshould be respected by that FDW).\n\nOtherwise, I would imagine that having multiple queries execute before\nwanting to drop the connection would be desirable so at minimum a test case\nthat does something like:\n\nSELECT count(*) FROM remote.tbl1;\n-- connection still open\nSET enable_connectioncache TO false;\nSELECT count(*) FROM remote.tbl2;\n-- now it was closed\n\nOr maybe even better, have the close action happen on a transaction\nboundary.\n\nAnd if it doesn't just apply to postgres_fdw (or at least doesn't have to)\nthen the description text should be less specific.\n\nDavid J.\n\n[1] The only place I see \"dblink_fdw\" in the documentation is in the dblink\nmodule's dblink_connect page. I would probably modify that page to say:\n\"It is recommended to use the foreign-data wrapper dblink_fdw (installed by\nthis module) when defining the foreign server.\" (adding the parenthetical).", "msg_date": "Fri, 26 Jun 2020 19:33:36 -0700", "msg_from": "\"David G.
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "Thanks for the responses.\n\n>\n> If this is just going to apply to postgres_fdw why not just have that module provide a function \"disconnect_open_sessions()\" or the like that does this upon user command? I suppose there would be some potential value to having this be set per-user but that wouldn't preclude the usefulness of a function. And by having a function the usefulness of the GUC seems reduced.\n>\n\nThe idea of having module-specific functions to remove cached entries\nseems like a good idea. Users have to frequently call these functions\nto clean up the cached entries in a long lasting single session. This\nmay not\nbe always possible if these sessions are from an application not from\na psql-like client which is a more frequent scenario in the customer\nuse cases. In this case users might have to change their application\ncode that is\nissuing queries to postgres server to include these functions.\n\nAssuming the fact that the server/session configuration happens much\nbefore the user application starts to submit actual database queries,\nhaving a GUC just helps to avoid making such function calls in between\nthe session, by having to set the GUC either to true if required to\ncache connections or to off if not to cache connections.\n\n>\n> On a related note is there any entanglement here with the supplied dblink and/or dblink_fdw [1] modules as they do provide connect and disconnect functions and also leverages postgres_fdw (or dblink_fdw if specified, which brings us back to the question of whether this option should be respected by that FDW).\n>\n\nI found that dblink also has the connection caching concept and it\ndoes provide a user a function to disconnect/remove cached connections\nusing a function, dblink_disconnect() using connection name
as it's\ninput.\nIMO, this solution seems a bit problematic as explained in my first\nresponse in this mail.\nThe postgres_fdw connection caching and dblink connection caching has\nno link at all. Please point me if I'm missing anything here.\nBut probably, this GUC can be extended from a bool to an enum of type\nconfig_enum_entry and use it for dblink as well. This is extensible as\nwell. Please let me know if this is okay, so that I can code for it.\n\n>\n> Otherwise, I would imagine that having multiple queries execute before wanting to drop the connection would be desirable so at minimum a test case that does something like:\n>\n> SELECT count(*) FROM remote.tbl1;\n> -- connection still open\n> SET enable_connectioncache TO false;\n> SELECT count(*) FROM remote.tbl2;\n> -- now it was closed\n>\n> Or maybe even better, have the close action happen on a transaction boundary.\n>\n\nThis is a valid scenario, as the same connection can be used in the\nsame transaction multiple times. With my attached initial patch above\nthe point is already covered. The decision to cache or not cache the\nconnection happens at the main transaction end i.e.
in\npgfdw_xact_callback().\n\n>\n> And if it doesn't just apply to postgres_fdw (or at least doesn't have to) then the description text should be less specific.\n>\n\nIf we are agreed on a generic GUC for postgres_fdw, dblink and so on.\nI will change the description and documentation accordingly.\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Jun 2020 14:07:03 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Mon, 22 Jun 2020 at 14:56, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> When a query on foreign table is executed from a local session using\n> postgres_fdw, as expected the local postgres backend opens a\n> connection which causes a remote session/backend to be opened on the\n> remote postgres server for query execution.\n>\n> One observation is that, even after the query is finished, the remote\n> session/backend still persists on the remote postgres server. Upon\n> researching, I found that there is a concept of Connection Caching for\n> the remote connections made using postgres_fdw. Local backend/session\n> can cache up to 8 different connections per backend. This caching is\n> useful as it avoids the cost of reestablishing new connections per\n> foreign query.\n>\n> However, at times, there may be situations where the long lasting\n> local sessions may execute very few foreign queries and remaining all\n> are local queries, in this scenario, the remote sessions opened by the\n> local sessions/backends may not be useful as they remain idle and eat\n> up the remote server connections capacity.
This problem gets even\n> worse(though this use case is a bit imaginary) if all of\n> max_connections(default 100 and each backend caching 8 remote\n> connections) local sessions open remote sessions and they are cached\n> in the local backend.\n>\n> I propose to have a new session level GUC called\n> \"enable_connectioncache\"(name can be changed if it doesn't correctly\n> mean the purpose) with the default value being true which means that\n> all the remote connections are cached. If set to false, the\n> connections are not cached and so are remote sessions closed by the local backend/session at\n> the end of each remote transaction.\n\nI've not looked at your patch deeply but if this problem is talking\nonly about postgres_fdw I think we should improve postgres_fdw, not\nadding a GUC to the core. It’s not that all FDW plugins use connection\ncache and postgres_fdw’s connection cache is implemented within\npostgres_fdw, I think we should focus on improving postgres_fdw. I\nalso think it’s not a good design that the core manages connections to\nremote servers connected via FDW.
I wonder if we can add a\npostgres_fdw option for this purpose, say keep_connection [on|off].\nThat way, we can set it per server so that remote connections to the\nparticular server don’t remain idle.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 30 Jun 2020 12:23:28 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Tue, Jun 30, 2020 at 12:23:28PM +0900, Masahiko Sawada wrote:\n> > I propose to have a new session level GUC called\n> > \"enable_connectioncache\"(name can be changed if it doesn't correctly\n> > mean the purpose) with the default value being true which means that\n> > all the remote connections are cached. If set to false, the\n> > connections are not cached and so are remote sessions closed by the local backend/session at\n> > the end of each remote transaction.\n> \n> I've not looked at your patch deeply but if this problem is talking\n> only about postgres_fdw I think we should improve postgres_fdw, not\n> adding a GUC to the core. It’s not that all FDW plugins use connection\n> cache and postgres_fdw’s connection cache is implemented within\n> postgres_fdw, I think we should focus on improving postgres_fdw. I\n> also think it’s not a good design that the core manages connections to\n> remote servers connected via FDW. I wonder if we can add a\n> postgres_fdw option for this purpose, say keep_connection [on|off].\n> That way, we can set it per server so that remote connections to the\n> particular server don’t remain idle.\n\nI thought we would add a core capability, idle_session_timeout, which\nwould disconnect idle sessions, and the postgres_fdw would use that.
We\nhave already had requests for idle_session_timeout, but avoided it\nbecause it seemed better to tell people to monitor pg_stat_activity and\nterminate sessions that way, but now that postgres_fdw needs it too,\nthere might be enough of a requirement to add it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 30 Jun 2020 11:00:31 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Tue, Jun 30, 2020 at 8:54 AM Masahiko Sawada <\nmasahiko.sawada@2ndquadrant.com> wrote:\n\n> On Mon, 22 Jun 2020 at 14:56, Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > When a query on foreign table is executed from a local session using\n> > postgres_fdw, as expected the local postgres backend opens a\n> > connection which causes a remote session/backend to be opened on the\n> > remote postgres server for query execution.\n> >\n> > One observation is that, even after the query is finished, the remote\n> > session/backend still persists on the remote postgres server. Upon\n> > researching, I found that there is a concept of Connection Caching for\n> > the remote connections made using postgres_fdw. Local backend/session\n> > can cache up to 8 different connections per backend. This caching is\n> > useful as it avoids the cost of reestablishing new connections per\n> > foreign query.\n> >\n> > However, at times, there may be situations where the long lasting\n> > local sessions may execute very few foreign queries and remaining all\n> > are local queries, in this scenario, the remote sessions opened by the\n> > local sessions/backends may not be useful as they remain idle and eat\n> > up the remote server connections capacity.
This problem gets even\n> > worse(though this use case is a bit imaginary) if all of\n> > max_connections(default 100 and each backend caching 8 remote\n> > connections) local sessions open remote sessions and they are cached\n> > in the local backend.\n> >\n> > I propose to have a new session level GUC called\n> > \"enable_connectioncache\"(name can be changed if it doesn't correctly\n> > mean the purpose) with the default value being true which means that\n> > all the remote connections are cached. If set to false, the\n> > connections are not cached and so are remote sessions closed by the\n> local backend/session at\n> > the end of each remote transaction.\n>\n> I've not looked at your patch deeply but if this problem is talking\n> only about postgres_fdw I think we should improve postgres_fdw, not\n> adding a GUC to the core. It’s not that all FDW plugins use connection\n> cache and postgres_fdw’s connection cache is implemented within\n> postgres_fdw, I think we should focus on improving postgres_fdw. I\n> also think it’s not a good design that the core manages connections to\n> remote servers connected via FDW. I wonder if we can add a\n> postgres_fdw option for this purpose, say keep_connection [on|off].\n> That way, we can set it per server so that remote connections to the\n> particular server don’t remain idle.\n>\n>\n+1\n\nI have not looked at the implementation, but I agree that here problem\nis with postgres_fdw so we should try to solve that by keeping it limited\nto postgres_fdw.
I liked the idea of passing it as an option to the FDW\nconnection.\n\nRegards,\n>\n> --\n> Masahiko Sawada http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n>\n\n-- \nRushabh Lathia", "msg_date": "Wed, 1 Jul 2020 14:33:22 +0530", "msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": ">\n> I've not looked at your patch deeply but if this problem is talking\n> only about postgres_fdw I think we should improve postgres_fdw, not\n> adding a GUC to the core. It’s not that all FDW plugins use connection\n> cache and postgres_fdw’s connection cache is implemented within\n> postgres_fdw, I think we should focus on improving postgres_fdw. I\n> also think it’s not a good design that the core manages connections to\n> remote servers connected via FDW.
I wonder if we can add a\n> postgres_fdw option for this purpose, say keep_connection [on|off].\n> That way, we can set it per server so that remote connections to the\n> particular server don’t remain idle.\n>\n\nIf I understand it correctly, your suggestion is to add\nkeep_connection option and use that while defining the server object.\nIMO having keep_connection option at the server object level may not\nserve the purpose being discussed here.\nFor instance, let's say I create a foreign server in session 1 with\nkeep_connection on, and I want to use that\nserver object in session 2 with keep_connection off and session 3 with\nkeep_connection on and so on.\nOne way we can change the server's keep_connection option is to alter\nthe server object, but that's not a good choice,\nas we have to alter it at the system level.\n\nOverall, though we define the server object in a single session, it\nwill be used in multiple sessions, having an\noption at the per-server level would not be a good idea.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Jul 2020 14:44:46 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Wed, Jul 1, 2020 at 2:45 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> >\n> > I've not looked at your patch deeply but if this problem is talking\n> > only about postgres_fdw I think we should improve postgres_fdw, not\n> > adding a GUC to the core. It’s not that all FDW plugins use connection\n> > cache and postgres_fdw’s connection cache is implemented within\n> > postgres_fdw, I think we should focus on improving postgres_fdw. I\n> > also think it’s not a good design that the core manages connections to\n> > remote servers connected via FDW.
I wonder if we can add a\n> > postgres_fdw option for this purpose, say keep_connection [on|off].\n> > That way, we can set it per server so that remote connections to the\n> > particular server don’t remain idle.\n> >\n>\n> If I understand it correctly, your suggestion is to add\n> keep_connection option and use that while defining the server object.\n> IMO having keep_connection option at the server object level may not\n> serve the purpose being discussed here.\n> For instance, let's say I create a foreign server in session 1 with\n> keep_connection on, and I want to use that\n> server object in session 2 with keep_connection off and session 3 with\n> keep_connection on and so on.\n>\n\nIn my opinion, in such cases, one needs to create two server object one with\nkeep-connection ON and one with keep-connection off. And need to decide\nto use appropriate for the particular session.\n\n\n> One way we can change the server's keep_connection option is to alter\n> the server object, but that's not a good choice,\n> as we have to alter it at the system level.\n>\n> Overall, though we define the server object in a single session, it\n> will be used in multiple sessions, having an\n> option at the per-server level would not be a good idea.\n>\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n>\n>\n>\n\n-- \nRushabh Lathia", "msg_date": "Wed, 1 Jul 2020 15:32:46 +0530", "msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": ">\n> I thought we would add a core capability, idle_session_timeout, which\n> would disconnect idle sessions, and the postgres_fdw would use that.
We\n> have already had requests for idle_session_timeout, but avoided it\n> because it seemed better to tell people to monitor pg_stat_activity and\n> terminate sessions that way, but now that postgres_fdw needs it too,\n> there might be enough of a requirement to add it.\n>\n\nIf we were to use idle_session_timeout (from patch [1]) for the remote\nsession to go off without\nhaving to delete the corresponding entry from local connection cache and\nafter that if we submit foreign query from local session, then below\nerror would occur,\nwhich may not be an expected behaviour. (I took the patch from [1] and\nintentionally set the\nidle_session_timeout to a low value on remote server, issued a\nforeign_tbl query which\ncaused remote session to open and after idle_session_timeout , the\nremote session\ncloses and now issue the foreign_tbl query from local session)\n\npostgres=# SELECT * FROM foreign_tbl;\nERROR: server closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\nCONTEXT: remote SQL command: START TRANSACTION ISOLATION LEVEL REPEATABLE READ\npostgres=#\n\nAnother way is that if we are thinking to use idle_session_timeout\ninfra on the local postgres server to remove cached entries\nfrom the local connection cache, then the question arises:\n\ndo we intend to use the same configuration parameter value set for\nidle_session_timeout for connection cache as well?\nProbably not, as we might use different values for different purposes\nof the same idle_session_timeout parameter,\nlet's say 2000sec for idle_session_timeout and 1000sec for connection\ncache cleanup.\n\n[1] - https://www.postgresql.org/message-id/763A0689-F189-459E-946F-F0EC4458980B%40hotmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Wed, Jul 1, 2020 at 3:33 PM Rushabh Lathia <rushabh.lathia@gmail.com> wrote:\n>\n>\n>\n> On Wed, Jul 1, 2020 at 2:45 PM Bharath Rupireddy
<bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> >\n>> > I've not looked at your patch deeply but if this problem is talking\n>> > only about postgres_fdw I think we should improve postgres_fdw, not\n>> > adding a GUC to the core. It’s not that all FDW plugins use connection\n>> > cache and postgres_fdw’s connection cache is implemented within\n>> > postgres_fdw, I think we should focus on improving postgres_fdw. I\n>> > also think it’s not a good design that the core manages connections to\n>> > remote servers connected via FDW. I wonder if we can add a\n>> > postgres_fdw option for this purpose, say keep_connection [on|off].\n>> > That way, we can set it per server so that remote connections to the\n>> > particular server don’t remain idle.\n>> >\n>>\n>> If I understand it correctly, your suggestion is to add\n>> keep_connection option and use that while defining the server object.\n>> IMO having keep_connection option at the server object level may not\n>> serve the purpose being discussed here.\n>> For instance, let's say I create a foreign server in session 1 with\n>> keep_connection on, and I want to use that\n>> server object in session 2 with keep_connection off and session 3 with\n>> keep_connection on and so on.\n>\n>\n> In my opinion, in such cases, one needs to create two server object one with\n> keep-connection ON and one with keep-connection off. 
And need to decide\n> to use appropriate for the particular session.\n>\n>>\n>> One way we can change the server's keep_connection option is to alter\n>> the server object, but that's not a good choice,\n>> as we have to alter it at the system level.\n>>\n>> Overall, though we define the server object in a single session, it\n>> will be used in multiple sessions, having an\n>> option at the per-server level would not be a good idea.\n>>\n>> With Regards,\n>> Bharath Rupireddy.\n>> EnterpriseDB: http://www.enterprisedb.com\n>>\n>>\n>\n>\n> --\n> Rushabh Lathia\n\n\n", "msg_date": "Wed, 1 Jul 2020 15:53:56 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Wed, Jul 1, 2020 at 3:54 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n\n> If we were to use idle_session_timeout (from patch [1]) for the remote\n> session to go off without\n> having to delete the corresponding entry from local connection cache and\n> after that if we submit foreign query from local session, then below\n> error would occur,\n> which may not be an expected behaviour. (I took the patch from [1] and\n> intentionally set the\n> idle_session_timeout to a low value on remote server, issued a\n> foreign_tbl query which\n> caused remote session to open and after idle_session_timeout , the\n> remote session\n> closes and now issue the foreign_tbl query from local session)\n>\n> postgres=# SELECT * FROM foreign_tbl;\n> ERROR: server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> CONTEXT: remote SQL command: START TRANSACTION ISOLATION LEVEL REPEATABLE READ\n> postgres=#\n\nThis is actually strange. 
AFAIR the code, without looking at the\ncurrent code, when a query picks a foreign connection it checks its\nstate. It's possible that the connection has not been marked bad by\nthe time you fire new query. If the problem exists probably we should\nfix it anyway since the backend at the other end of the connection has\nhigher chances of being killed while the connection was sitting idle\nin the cache.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 1 Jul 2020 18:54:17 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Wed, 1 Jul 2020 at 18:14, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> >\n> > I've not looked at your patch deeply but if this problem is talking\n> > only about postgres_fdw I think we should improve postgres_fdw, not\n> > adding a GUC to the core. It’s not that all FDW plugins use connection\n> > cache and postgres_fdw’s connection cache is implemented within\n> > postgres_fdw, I think we should focus on improving postgres_fdw. I\n> > also think it’s not a good design that the core manages connections to\n> > remote servers connected via FDW. 
I wonder if we can add a\n> > postgres_fdw option for this purpose, say keep_connection [on|off].\n> > That way, we can set it per server so that remote connections to the\n> > particular server don’t remain idle.\n> >\n>\n> If I understand it correctly, your suggestion is to add\n> keep_connection option and use that while defining the server object.\n> IMO having keep_connection option at the server object level may not\n> serve the purpose being discussed here.\n> For instance, let's say I create a foreign server in session 1 with\n> keep_connection on, and I want to use that\n> server object in session 2 with keep_connection off and session 3 with\n> keep_connection on and so on.\n> One way we can change the server's keep_connection option is to alter\n> the server object, but that's not a good choice,\n> as we have to alter it at the system level.\n\nIs there use-case in practice where different backends need to have\ndifferent connection cache setting even if all of them connect the\nsame server? 
I thought since the problem that this feature is trying\nto resolve is not to eat up the remote server connections capacity by\ndisabling connection cache, we’d like to disable connection cache to\nthe particular server, for example, which sets low max_connections.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 1 Jul 2020 22:42:57 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "> > If we were to use idle_session_timeout (from patch [1]) for the remote\n> > session to go off without\n> > having to delete the corresponding entry from local connection cache and\n> > after that if we submit foreign query from local session, then below\n> > error would occur,\n> > which may not be an expected behaviour. (I took the patch from [1] and\n> > intentionally set the\n> > idle_session_timeout to a low value on remote server, issued a\n> > foreign_tbl query which\n> > caused remote session to open and after idle_session_timeout , the\n> > remote session\n> > closes and now issue the foreign_tbl query from local session)\n> >\n> > postgres=# SELECT * FROM foreign_tbl;\n> > ERROR: server closed the connection unexpectedly\n> > This probably means the server terminated abnormally\n> > before or while processing the request.\n> > CONTEXT: remote SQL command: START TRANSACTION ISOLATION LEVEL\nREPEATABLE READ\n> > postgres=#\n>\n> This is actually strange. AFAIR the code, without looking at the\n> current code, when a query picks a foreign connection it checks its\n> state. It's possible that the connection has not been marked bad by\n> the time you fire new query. 
If the problem exists probably we should\n> fix it anyway since the backend at the other end of the connection has\n> higher chances of being killed while the connection was sitting idle\n> in the cache.\n>\n\nThanks Ashutosh for the suggestion. One way we could solve the above\nproblem is that, upon firing the new foreign query from the local backend using a\ncached\nconnection (assuming the remote backend/session that was cached in the\nlocal backend got\nkilled by some means), instead of failing the query in the local\nbackend/session, upon\ndetecting an error from the remote backend, we could just delete the cached old\nentry and try getting another\nconnection to the remote backend/session, cache it and proceed to submit the\nquery. This has to happen only at\nthe beginning of the remote xact.\n\nThis way, instead of failing (as mentioned above, \"server closed the\nconnection unexpectedly\"),\nthe query succeeds if the local session is able to get a new remote backend\nconnection.\n\nI worked on a POC patch to prove the above point. Attaching the patch.\nPlease note that the patch doesn't contain comments and has some issues,\nlike having some new\nvariable in the PGconn structure and things like that.\n\nIf the approach makes some sense, then I can rework the patch properly\nand probably\ncan open another thread for the review and other stuff.\n\nThe way I tested the patch:\n\n1. select * from foreign_tbl;\n/*from local session - this results in a\nremote connection being cached in\nthe connection cache and\na remote backend/session is opened.\n*/\n2. kill the remote backend/session\n3. 
select * from foreign_tbl;\n/*from local session - without patch\nthis throws error \"ERROR: server closed the connection unexpectedly\"\nwith path - try to use\nthe cached connection at the beginning of remote xact, upon receiving\nerror from remote postgres\nserver, instead of aborting the query, delete the cached entry, try to\nget a new connection, if it\ngets, cache it and use that for executing the query, query succeeds.\n*/\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\nOn Wed, Jul 1, 2020 at 7:13 PM Masahiko Sawada <\nmasahiko.sawada@2ndquadrant.com> wrote:\n\n> On Wed, 1 Jul 2020 at 18:14, Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > >\n> > > I've not looked at your patch deeply but if this problem is talking\n> > > only about postgres_fdw I think we should improve postgres_fdw, not\n> > > adding a GUC to the core. It’s not that all FDW plugins use connection\n> > > cache and postgres_fdw’s connection cache is implemented within\n> > > postgres_fdw, I think we should focus on improving postgres_fdw. I\n> > > also think it’s not a good design that the core manages connections to\n> > > remote servers connected via FDW. 
I wonder if we can add a\n> > > postgres_fdw option for this purpose, say keep_connection [on|off].\n> > > That way, we can set it per server so that remote connections to the\n> > > particular server don’t remain idle.\n> > >\n> >\n> > If I understand it correctly, your suggestion is to add\n> > keep_connection option and use that while defining the server object.\n> > IMO having keep_connection option at the server object level may not\n> > serve the purpose being discussed here.\n> > For instance, let's say I create a foreign server in session 1 with\n> > keep_connection on, and I want to use that\n> > server object in session 2 with keep_connection off and session 3 with\n> > keep_connection on and so on.\n> > One way we can change the server's keep_connection option is to alter\n> > the server object, but that's not a good choice,\n> > as we have to alter it at the system level.\n>\n> Is there use-case in practice where different backends need to have\n> different connection cache setting even if all of them connect the\n> same server? 
I thought since the problem that this feature is trying\n> to resolve is not to eat up the remote server connections capacity by\n> disabling connection cache, we’d like to disable connection cache to\n> the particular server, for example, which sets low max_connections.\n>\n> Regards,\n>\n> --\n> Masahiko Sawada http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>", "msg_date": "Thu, 2 Jul 2020 16:29:33 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": ">>\n>> If I understand it correctly, your suggestion is to add\n>> keep_connection option and use that while defining the server object.\n>> IMO having keep_connection option at the server object level may not\n>> serve the purpose being discussed here.\n>> For instance, let's say I create a foreign server in session 1 with\n>> keep_connection on, and I want to use that\n>> server object in session 2 with keep_connection off and session 3 with\n>> keep_connection on and so on.\n>\n> In my opinion, in such cases, one needs to create two server object one with\n> keep-connection ON and one with keep-connection off. 
And need to decide\n> to use appropriate for the particular session.\n>\n\nYes, having two variants of foreign servers: one with keep-connections\non (this can be default behavior,\neven if user doesn't mention this option, internally it can be treated\nas keep-connections on) ,\nand if users need no connection hashing, another foreign server with\nall other options same but keep-connections\noff.\n\nThis looks okay to me, if we want to avoid a core session level GUC.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Jul 2020 17:19:43 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "> >\n> > If I understand it correctly, your suggestion is to add\n> > keep_connection option and use that while defining the server object.\n> > IMO having keep_connection option at the server object level may not\n> > serve the purpose being discussed here.\n> > For instance, let's say I create a foreign server in session 1 with\n> > keep_connection on, and I want to use that\n> > server object in session 2 with keep_connection off and session 3 with\n> > keep_connection on and so on.\n> > One way we can change the server's keep_connection option is to alter\n> > the server object, but that's not a good choice,\n> > as we have to alter it at the system level.\n>\n> Is there use-case in practice where different backends need to have\n> different connection cache setting even if all of them connect the\n> same server?\n\nCurrently, connection cache exists at each backend/session level and\ngets destroyed\non backend/session exit. 
I think the same cached connection can be\nused until it gets invalidated\ndue to user mapping or server definition changes.\n\nOne way is to have a shared memory based connection cache instead of\nbackend level cache,\nbut it has its own problems, like maintenance, invalidation, dealing\nwith concurrent usages etc.\n\n> I thought since the problem that this feature is trying\n> to resolve is not to eat up the remote server connections capacity by\n> disabling connection cache, we’d like to disable connection cache to\n> the particular server, for example, which sets low max_connections.\n>\n\nCurrently, the user mapping oid acts as the key for the cache's hash\ntable, so the cache entries\nare not made directly using foreign server ids though each entry would\nhave some information related\nto foreign server.\n\nJust to reiterate, the main idea of this feature is to give the user\na way to choose whether to use connection caching or not:\nif he decides that his session uses remote queries very rarely, then\nhe can disable it, or if the remote queries are more frequent in\na particular session, he can choose to use connection caching.\n\nIn a way, this feature addresses the point that local sessions should not\neat up remote connections/sessions, by\nletting users decide (as users know better when to cache or when not\nto) to cache or not cache the remote connections\nand thus releasing them immediately if there is not much usage from the\nlocal session.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Jul 2020 17:59:02 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Thu, Jul 2, 2020 at 4:29 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > This is actually strange. 
AFAIR the code, without looking at the\n> > current code, when a query picks a foreign connection it checks its\n> > state. It's possible that the connection has not been marked bad by\n> > the time you fire new query. If the problem exists probably we should\n> > fix it anyway since the backend at the other end of the connection has\n> > higher chances of being killed while the connection was sitting idle\n> > in the cache.\n> >\n>\n> Thanks Ashutosh for the suggestion. One way, we could solve the above\n> problem is that, upon firing the new foreign query from local backend using cached\n> connection, (assuming the remote backend/session that was cached in the local backed got\n> killed by some means), instead of failing the query in the local backend/session, upon\n> detecting error from remote backend, we could just delete the cached old entry and try getting another\n> connection to remote backend/session, cache it and proceed to submit the query. This has to happen only at\n> the beginning of remote xact.\n\nYes, I believe that would be good.\n\n>\n> This way, instead of failing(as mentioned above \" server closed the connection unexpectedly\"),\n> the query succeeds if the local session is able to get a new remote backend connection.\n>\n\nIn GetConnection() there's a comment\n /*\n * We don't check the health of cached connection here, because it would\n * require some overhead. Broken connection will be detected when the\n * connection is actually used.\n */\nPossibly this is where you want to check the health of connection when\nit's being used the first time in a transaction.\n\n> I worked on a POC patch to prove the above point. 
Attaching the patch.\n> Please note that, the patch doesn't contain comments and has some issues like having some new\n> variable in PGconn structure and the things like.\n\nI don't think changing the PGConn structure for this is going to help.\nIt's a libpq construct and used by many other applications/tools other\nthan postgres_fdw. Instead you could use ConnCacheEntry for the same.\nSee how we track invalidated connection and reconnect upon\ninvalidation.\n\n>\n> If the approach makes some sense, then I can rework properly on the patch and probably\n> can open another thread for the review and other stuff.\n>\n> The way I tested the patch:\n>\n> 1. select * from foreign_tbl;\n> /*from local session - this results in a\n> remote connection being cached in\n> the connection cache and\n> a remote backend/session is opened.\n> */\n> 2. kill the remote backend/session\n> 3. select * from foreign_tbl;\n> /*from local session - without patch\n> this throws error \"ERROR: server closed the connection unexpectedly\"\n> with path - try to use\n> the cached connection at the beginning of remote xact, upon receiving\n> error from remote postgres\n> server, instead of aborting the query, delete the cached entry, try to\n> get a new connection, if it\n> gets, cache it and use that for executing the query, query succeeds.\n> */\n\nThis will work. 
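To make the idea concrete, here is a rough toy sketch of the retry flow being discussed: the cached connection's health is only discovered when it is first used at the beginning of the remote xact, and on failure the stale cache entry is dropped and a fresh connection is tried exactly once. This is Python pseudocode; the names (ConnCache, RemoteConn, and so on) are invented for illustration and are not the actual postgres_fdw structures.

```python
# Toy model of the proposed retry: a per-backend cache of remote connections
# where a connection found broken at the *start* of a remote transaction is
# evicted and replaced, instead of failing the local query.
# All names here are invented for illustration; this is not postgres_fdw code.

class ConnectionBroken(Exception):
    pass

class RemoteConn:
    def __init__(self, server):
        self.server = server
        self.alive = True  # flips to False if the remote session dies

    def begin_remote_xact(self):
        # Health is only discovered when the connection is actually used.
        if not self.alive:
            raise ConnectionBroken("server closed the connection unexpectedly")

class ConnCache:
    def __init__(self):
        self._cache = {}  # keyed per user mapping in the real implementation

    def get_connection(self, server):
        conn = self._cache.get(server)
        if conn is None:
            conn = self._cache[server] = RemoteConn(server)
        try:
            conn.begin_remote_xact()
        except ConnectionBroken:
            # Stale entry: discard it and retry exactly once with a
            # fresh connection instead of failing the local query.
            del self._cache[server]
            conn = self._cache[server] = RemoteConn(server)
            conn.begin_remote_xact()
        return conn
```

In this shape, killing the remote backend between two local queries just causes a one-time reconnect on the next use rather than an error.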
Be cognizant of the fact that the same connection may\nbe used by multiple plan nodes.\n\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 6 Jul 2020 19:07:18 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Wed, Jul 1, 2020 at 5:15 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> If I understand it correctly, your suggestion is to add\n> keep_connection option and use that while defining the server object.\n> IMO having keep_connection option at the server object level may not\n> serve the purpose being discussed here.\n> For instance, let's say I create a foreign server in session 1 with\n> keep_connection on, and I want to use that\n> server object in session 2 with keep_connection off and session 3 with\n> keep_connection on and so on.\n> One way we can change the server's keep_connection option is to alter\n> the server object, but that's not a good choice,\n> as we have to alter it at the system level.\n>\n> Overall, though we define the server object in a single session, it\n> will be used in multiple sessions, having an\n> option at the per-server level would not be a good idea.\n\nYou present this here as if it should be a Boolean (on or off) but I\ndon't see why that should be the case. You can imagine trying to close\nconnections if they have been idle for a certain length of time, or if\nthere are more than a certain number of them, rather than (or in\naddition to) always/never. 
Which one is best, and why?\n\nI tend to think this is better as an FDW property rather than a core\nfacility, but I'm not 100% sure of that and I think it likely depends\nsomewhat on the answers we choose to the questions in the preceding\nparagraph.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 6 Jul 2020 11:37:26 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "> > If I understand it correctly, your suggestion is to add\n> > keep_connection option and use that while defining the server object.\n> > IMO having keep_connection option at the server object level may not\n> > serve the purpose being discussed here.\n> > For instance, let's say I create a foreign server in session 1 with\n> > keep_connection on, and I want to use that\n> > server object in session 2 with keep_connection off and session 3 with\n> > keep_connection on and so on.\n> > One way we can change the server's keep_connection option is to alter\n> > the server object, but that's not a good choice,\n> > as we have to alter it at the system level.\n> >\n> > Overall, though we define the server object in a single session, it\n> > will be used in multiple sessions, having an\n> > option at the per-server level would not be a good idea.\n>\n> You present this here as if it should be a Boolean (on or off) but I\n> don't see why that should be the case. You can imagine trying to close\n> connections if they have been idle for a certain length of time, or if\n> there are more than a certain number of them, rather than (or in\n> addition to) always/never. 
Which one is best, and why?\n>\nIf the cached connection idle time property is used (I'm thinking we\ncan define it per server object) then the local backend might have to\nclose the connections which are lying unused more than idle time. To\nperform this task, the local backend might have to do it before it\ngoes into idle state(as suggested by you in [1]). Please correct, if\nmy understanding/thinking is wrong here.\n\nIf the connection clean up is to be done by the local backend, then a\npoint can be - let say a local session initially issues few foreign\nqueries for which connections are cached, and it keeps executing all\nlocal queries, without never going to idle mode(I think this scenario\nlooks too much impractical to me), then we may never clean the unused\ncached connections. If this scenario is really impractical if we are\nsure that there are high chances that the local backend goes to idle\nmode, then the idea of having per-server-object idle time and letting\nthe local backend clean it up before it goes to idle mode looks great\nto me.\n\n>\n> I tend to think this is better as an FDW property rather than a core\n> facility, but I'm not 100% sure of that and I think it likely depends\n> somewhat on the answers we choose to the questions in the preceding\n> paragraph.\n>\nI completely agree on having it as a FDW property.\n\n[1] - https://www.postgresql.org/message-id/CA%2BTgmob_ksTOgmbXhno%2Bk5XXPOK%2B-JYYLoU3MpXuutP4bH7gzA%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Jul 2020 18:55:50 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Wed, Jul 8, 2020 at 9:26 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> If the cached connection idle time property is 
used (I'm thinking we\n> can define it per server object) then the local backend might have to\n> close the connections which are lying unused more than idle time. To\n> perform this task, the local backend might have to do it before it\n> goes into idle state(as suggested by you in [1]). Please correct, if\n> my understanding/thinking is wrong here.\n>\n> If the connection clean up is to be done by the local backend, then a\n> point can be - let say a local session initially issues few foreign\n> queries for which connections are cached, and it keeps executing all\n> local queries, without never going to idle mode(I think this scenario\n> looks too much impractical to me), then we may never clean the unused\n> cached connections. If this scenario is really impractical if we are\n> sure that there are high chances that the local backend goes to idle\n> mode, then the idea of having per-server-object idle time and letting\n> the local backend clean it up before it goes to idle mode looks great\n> to me.\n\nIf it just did it before going idle, then what about sessions that\nhaven't reached the timeout at the point when we go idle, but do reach\nthe timeout later? And how would the FDW get control at the right time\nanyway?\n\n> > I tend to think this is better as an FDW property rather than a core\n> > facility, but I'm not 100% sure of that and I think it likely depends\n> > somewhat on the answers we choose to the questions in the preceding\n> > paragraph.\n> >\n> I completely agree on having it as a FDW property.\n\nRight, but not everyone does. 
It looks to me like there have been\nvotes on both sides.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 8 Jul 2020 10:44:11 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "Thanks all for the ideas. There have been various points/approaches\ndiscussed in the entire email chain so far.\nI would like to summarize all of them here, so that we can agree on\none of the options and proceed further with this feature.\n\nThe issue this feature is trying to solve:\nIn postgres_fdw, rarely used remote connections lie idle in the\nconnection cache (per backend) and so do the remote sessions, which for\nlong-lasting local sessions may unnecessarily eat up connections on\nremote postgres servers.\n\nApproach #1:\nA new session level GUC (proposed name \"enable_connectioncache\"), when\nset to true (which is the default) caches the remote connections,\notherwise not. When set to false, every time a foreign query is issued a\nnew connection is made at the remote xact begin and dropped from the\nconnection cache at the remote xact end. This GUC applies to all the\nforeign servers that are used in the session; it may not be possible\nto have the control at the foreign server level. It may not be a good\nidea to have postgres core controlling a postgres_fdw property.\n\nApproach #2:\nA new postgres_fdw function, similar to dblink's dblink_disconnect(),\n(possibly named postgres_fdw_disconnect_open_connections()). Seems\neasy, but users have to frequently call this function to clean up the\ncached entries. 
This may not always be possible, and requires some sort of\nmonitoring and issuing this new disconnect function from in between\napplication code.\n\nApproach #3:\nA postgres_fdw foreign server level option: keep_connection (on/off).\nWhen set to on (which is the default), it caches the entries related to\nthat particular foreign server, otherwise not. This gives control at\nthe foreign server level, which may not be possible with a single GUC.\nIt also addresses the concern about having postgres core solve a\npostgres_fdw problem. But, when the same foreign server is to be used\nin multiple other sessions with different keep_connection\noptions (on/off), then a possible solution is to have two foreign\nserver definitions for the same server, one with keep_connection on\nand another with off, and use the foreign server accordingly; and when\nthere is any change in other foreign server properties/options, one needs\nto maintain the two versions of foreign servers.\n\nApproach #4:\nA postgres_fdw foreign server level option: connection idle time, the\namount of idle time for that server's cached entry, after which the\ncached entry goes away. Probably the backend, before itself going to\nidle, has to check the cached entries and see if any of the\nentries have timed out. One problem is that, if the backend just did it\nbefore going idle, then what about sessions that haven't reached the\ntimeout at the point when we go idle, but do reach the timeout later?\n\nI tried to summarize the points in a concise manner;\nforgive me if I missed anything.\n\nThoughts?\n\nCredits and thanks to: vignesh C, David G. 
Johnston, Masahiko Sawada,\nBruce Momjian, Rushabh Lathia, Ashutosh Bapat, Robert Haas.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Jul 2020 15:38:49 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Tue, Jul 14, 2020 at 6:09 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Thanks all for the ideas. There have been various points/approaches\n> discussed in the entire email chain so far.\n> I would like to summarize all of them here, so that we can agree on\n> one of the options and proceed further with this feature.\n\nIn my opinion, approach #2 seems easy to implement and it's hard to\nimagine anyone finding much to complain about there, but it's not that\npowerful either, because it isn't automatic. Now the other approaches\nhave to do with the way in which this should be controlled, and I\nthink there are two separate questions.\n\n1. Should this be controlled by (a) a core GUC, (b) a postgres_fdw\nGUC, (c) a postgres_fdw server-level option?\n2. Should it be (a) a timeout or (b) a Boolean (keep vs. don't keep)?\n\nWith regard to #1, even if we decided on a core GUC, I cannot imagine\nthat we'd accept enable_connectioncache as a name, because most\nenable_whatever GUCs are for the planner, and this is something else.\nAlso, underscores between some words but not all words is a lousy\nconvention; let's not do more of that. Apart from those points, I\ndon't have a strong opinion; other people might. With regard to #2, a\ntimeout seems a lot more powerful, but also harder to implement\nbecause you'd need some kind of core changes to let the FDW get\ncontrol at the proper time. 
Maybe that's an argument for 2(b), but I\nhave a bit of a hard time believing that 2(b) will provide a good user\nexperience. I doubt that most people want to have to decide between\nslamming the connection shut even if it's going to be used again\nalmost immediately and keeping it open until the end of time. Those\nare two pretty extreme positions.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 14 Jul 2020 12:08:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Tue, Jul 14, 2020 at 03:38:49PM +0530, Bharath Rupireddy wrote:\n> Approach #4:\n> A postgres_fdw foreign server level option: connection idle time, the\n> amount of idle time for that server cached entry, after which the\n> cached entry goes away. Probably the backend, before itself going to\n> idle, has to be checking the cached entries and see if any of the\n> entries has timed out. One problem is that, if the backend just did it\n> before going idle, then what about sessions that haven't reached the\n> timeout at the point when we go idle, but do reach the timeout later?\n\nImagine implementing idle_in_session_timeout (which is useful on its\nown), and then, when you connect to a foreign postgres_fdw server, you\nset idle_in_session_timeout on the foreign side, and it just\ndisconnects/exits after an idle timeout. 
There is nothing the sending\nside has to do.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 14 Jul 2020 12:58:22 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Tue, Jul 14, 2020 at 10:28 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Tue, Jul 14, 2020 at 03:38:49PM +0530, Bharath Rupireddy wrote:\n> > Approach #4:\n> > A postgres_fdw foreign server level option: connection idle time, the\n> > amount of idle time for that server cached entry, after which the\n> > cached entry goes away. Probably the backend, before itself going to\n> > idle, has to be checking the cached entries and see if any of the\n> > entries has timed out. One problem is that, if the backend just did it\n> > before going idle, then what about sessions that haven't reached the\n> > timeout at the point when we go idle, but do reach the timeout later?\n>\n> Imagine implementing idle_in_session_timeout (which is useful on its\n> own), and then, when you connect to a foreign postgres_fdw server, you\n> set idle_in_session_timeout on the foreign side, and it just\n> disconnects/exits after an idle timeout. There is nothing the sending\n> side has to do.\n>\n\nAssuming we use idle_in_session_timeout on remote backends, the\nremote sessions will be closed after timeout, but the locally cached\nconnection cache entries still exist and become stale. The subsequent\nqueries that use the cached connections will fail; of course, these\nqueries can retry the connections, but only at the beginning of\na remote txn, not in the middle of one, as is being discussed\nin [1]. 
For instance, in a long-running local txn, say we use a\nremote connection at the beginning of the local txn (note that this\nopens a remote session and its entry is cached in the local connection\ncache), and we only use the cached connection later at some point in the\nlocal txn; by then, say, the idle_in_session_timeout has expired on\nthe remote backend and the remote session would have been closed. The\nlong-running local txn will fail instead of succeeding. Isn't that a\nproblem here? Please correct me if I missed anything.\n\nIMHO, we are not fully solving the problem with\nidle_in_session_timeout on remote backends though we are addressing\nthe main problem partly by letting the remote sessions close by\nthemselves.\n\n[1] - https://www.postgresql.org/message-id/flat/CALj2ACUAi23vf1WiHNar_LksM9EDOWXcbHCo-fD4Mbr1d%3D78YQ%40mail.gmail.com\n\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 Aug 2020 16:41:58 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Mon, Aug 03, 2020 at 04:41:58PM +0530, Bharath Rupireddy wrote:\n> IMHO, we are not fully solving the problem with\n> idle_in_session_timeout on remote backends though we are addressing\n> the main problem partly by letting the remote sessions close by\n> themselves.\n\nThis patch fails to compile on Windows. And while skimming through\nthe patch, I can see that you are including libpq-int.h in a place\ndifferent than src/interfaces/libpq/. 
This is incorrect as it should\nremain strictly as a header internal to libpq.\n--\nMichael", "msg_date": "Tue, 29 Sep 2020 14:50:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Tue, Sep 29, 2020 at 11:21 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Aug 03, 2020 at 04:41:58PM +0530, Bharath Rupireddy wrote:\n> > IMHO, we are not fully solving the problem with\n> > idle_in_session_timeout on remote backends though we are addressing\n> > the main problem partly by letting the remote sessions close by\n> > themselves.\n>\n> This patch fails to compile on Windows. And while skimming through\n> the patch, I can see that you are including libpq-int.h in a place\n> different than src/interfaces/libpq/. This is incorrect as it should\n> remain strictly as a header internal to libpq.\n>\n\nUnfortunately, we have not arrived at a final solution yet; please\nignore this patch. I will post a new patch once the solution is\nfinalized. I will move it to the next commit fest if okay.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 29 Sep 2020 11:29:45 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Tue, Sep 29, 2020 at 11:29:45AM +0530, Bharath Rupireddy wrote:\n> Unfortunately, we have not arrived at a final solution yet; please\n> ignore this patch. I will post a new patch once the solution is\n> finalized. I will move it to the next commit fest if okay.\n\nIf you are planning to get that addressed, moving it to the next CF is\nfine by me. 
Thanks for the update!\n--\nMichael", "msg_date": "Tue, 29 Sep 2020 15:14:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "Status update for a commitfest entry.\r\n\r\nThis thread was inactive for a while and from the latest messages, I see that the patch needs some further work.\r\nSo I move it to \"Waiting on Author\".\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Fri, 06 Nov 2020 15:56:25 +0000", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "Hi,\n\nOn 2020-11-06 18:56, Anastasia Lubennikova wrote:\n> Status update for a commitfest entry.\n> \n> This thread was inactive for a while and from the latest messages, I\n> see that the patch needs some further work.\n> So I move it to \"Waiting on Author\".\n> \n> The new status of this patch is: Waiting on Author\n\nI had a look on the initial patch and discussed options [1] to proceed \nwith this issue. I agree with Bruce about idle_session_timeout, it would \nbe a nice to have in-core feature on its own. However, this should be a \ncluster-wide option and it will start dropping all idle connection not \nonly foreign ones. So it may be not an option for some cases, when the \nsame foreign server is used for another load as well.\n\nRegarding the initial issue I prefer point #3, i.e. foreign server \noption. It has a couple of benefits IMO: 1) it may be set separately on \nper foreign server basis, 2) it will live only in the postgres_fdw \ncontrib without any need to touch core. I would only supplement this \npostgres_fdw foreign server option with a GUC, e.g. 
\npostgres_fdw.keep_connections, so one could easily define such behavior \nfor all foreign servers at once or override server-level option by \nsetting this GUC on per session basis.\n\nAttached is a small POC patch, which implements this contrib-level \npostgres_fdw.keep_connections GUC. What do you think?\n\n[1] \nhttps://www.postgresql.org/message-id/CALj2ACUFNydy0uo0JL9A1isHQ9pFe1Fgqa_HVanfG6F8g21nSQ%40mail.gmail.com\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company", "msg_date": "Tue, 17 Nov 2020 22:37:28 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "Thanks for the interest shown!\n\nOn Wed, Nov 18, 2020 at 1:07 AM Alexey Kondratov\n<a.kondratov@postgrespro.ru> wrote:\n>\n> I had a look on the initial patch and discussed options [1] to proceed\n> with this issue. I agree with Bruce about idle_session_timeout, it would\n> be a nice to have in-core feature on its own. However, this should be a\n> cluster-wide option and it will start dropping all idle connection not\n> only foreign ones. So it may be not an option for some cases, when the\n> same foreign server is used for another load as well.\n>\n\nWith idle_session_timeout the remote idle backends may go away, part\nof our problem is solved. But we also need to clear that connection\nentry from the local backend's connection cache.\n\n>\n> Regarding the initial issue I prefer point #3, i.e. foreign server\n> option. It has a couple of benefits IMO: 1) it may be set separately on\n> per foreign server basis, 2) it will live only in the postgres_fdw\n> contrib without any need to touch core. 
I would only supplement this\n> postgres_fdw foreign server option with a GUC, e.g.\n> postgres_fdw.keep_connections, so one could easily define such behavior\n> for all foreign servers at once or override server-level option by\n> setting this GUC on per session basis.\n>\n\nBelow is what I have in my mind, mostly in line with yours:\n\na) Have a server level option (keep_connection true/false, with the\ndefault being true), when set to false the connection that's made with\nthis foreign server is closed and cached entry from the connection\ncache is deleted at the end of txn in pgfdw_xact_callback.\nb) Have postgres_fdw level GUC postgres_fdw.keep_connections default\nbeing true. When set to false by the user, the connections, that are\nused after this, are closed and removed from the cache at the end of\nrespective txns. 
2) What\nhappens if there are some cached connections, user set the GUC to\nfalse and not run any foreign queries or not use those connections\nthereafter, so only the new connections will not be cached? Will the\nexisting unused connections still remain in the connection cache? See\n(b) above for a solution.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 18 Nov 2020 19:09:48 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On 2020-11-18 16:39, Bharath Rupireddy wrote:\n> Thanks for the interest shown!\n> \n> On Wed, Nov 18, 2020 at 1:07 AM Alexey Kondratov\n> <a.kondratov@postgrespro.ru> wrote:\n>> \n>> Regarding the initial issue I prefer point #3, i.e. foreign server\n>> option. It has a couple of benefits IMO: 1) it may be set separately \n>> on\n>> per foreign server basis, 2) it will live only in the postgres_fdw\n>> contrib without any need to touch core. I would only supplement this\n>> postgres_fdw foreign server option with a GUC, e.g.\n>> postgres_fdw.keep_connections, so one could easily define such \n>> behavior\n>> for all foreign servers at once or override server-level option by\n>> setting this GUC on per session basis.\n>> \n> \n> Below is what I have in my mind, mostly inline with yours:\n> \n> a) Have a server level option (keep_connetion true/false, with the\n> default being true), when set to false the connection that's made with\n> this foreign server is closed and cached entry from the connection\n> cache is deleted at the end of txn in pgfdw_xact_callback.\n> b) Have postgres_fdw level GUC postgres_fdw.keep_connections default\n> being true. When set to false by the user, the connections, that are\n> used after this, are closed and removed from the cache at the end of\n> respective txns. 
If we don't use a connection that was cached prior to\n> the user setting the GUC as false, then we may not be able to clear\n> it. We can avoid this problem by recommending users either to set the\n> GUC to false right after the CREATE EXTENSION postgres_fdw; or else\n> use the function specified in (c).\n> c) Have a new function that gets defined as part of CREATE EXTENSION\n> postgres_fdw;, say postgres_fdw_discard_connections(), similar to\n> dblink's dblink_disconnect(), which discards all the remote\n> connections and clears connection cache. And we can also have server\n> name as input to postgres_fdw_discard_connections() to discard\n> selectively.\n> \n> Thoughts? If okay with the approach, I will start working on the patch.\n> \n\nThis approach looks solid enough from my perspective to give it a try. I \nwould only make it as three separate patches for an ease of further \nreview.\n\n>> \n>> Attached is a small POC patch, which implements this contrib-level\n>> postgres_fdw.keep_connections GUC. What do you think?\n>> \n> \n> I see two problems with your patch: 1) It just disconnects the remote\n> connection at the end of txn if the GUC is set to false, but it\n> doesn't remove the connection cache entry from ConnectionHash.\n\nYes, and this looks like a valid state for postgres_fdw and it can get \ninto the same state even without my patch. Next time GetConnection() \nwill find this cache entry, figure out that entry->conn is NULL and \nestablish a fresh connection. It is not clear for me right now, what \nbenefits we will get from clearing also this cache entry, except just \ndoing this for sanity.\n\n> 2) What\n> happens if there are some cached connections, user set the GUC to\n> false and not run any foreign queries or not use those connections\n> thereafter, so only the new connections will not be cached? Will the\n> existing unused connections still remain in the connection cache? See\n> (b) above for a solution.\n> \n\nYes, they will. 
This could be solved with that additional disconnect \nfunction as you proposed in c).\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n", "msg_date": "Wed, 18 Nov 2020 20:02:41 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Wed, Nov 18, 2020 at 10:32 PM Alexey Kondratov\n<a.kondratov@postgrespro.ru> wrote:\n>\n> > Below is what I have in my mind, mostly inline with yours:\n> >\n> > a) Have a server level option (keep_connetion true/false, with the\n> > default being true), when set to false the connection that's made with\n> > this foreign server is closed and cached entry from the connection\n> > cache is deleted at the end of txn in pgfdw_xact_callback.\n> > b) Have postgres_fdw level GUC postgres_fdw.keep_connections default\n> > being true. When set to false by the user, the connections, that are\n> > used after this, are closed and removed from the cache at the end of\n> > respective txns. If we don't use a connection that was cached prior to\n> > the user setting the GUC as false, then we may not be able to clear\n> > it. We can avoid this problem by recommending users either to set the\n> > GUC to false right after the CREATE EXTENSION postgres_fdw; or else\n> > use the function specified in (c).\n> > c) Have a new function that gets defined as part of CREATE EXTENSION\n> > postgres_fdw;, say postgres_fdw_discard_connections(), similar to\n> > dblink's dblink_disconnect(), which discards all the remote\n> > connections and clears connection cache. And we can also have server\n> > name as input to postgres_fdw_discard_connections() to discard\n> > selectively.\n> >\n> > Thoughts? 
If okay with the approach, I will start working on the patch.\n>\n> This approach looks solid enough from my perspective to give it a try. I\n> would only make it as three separate patches for an ease of further\n> review.\n>\n\nThanks! I will make separate patches and post them soon.\n\n>\n> >> Attached is a small POC patch, which implements this contrib-level\n> >> postgres_fdw.keep_connections GUC. What do you think?\n >\n> > I see two problems with your patch: 1) It just disconnects the remote\n> > connection at the end of txn if the GUC is set to false, but it\n> > doesn't remove the connection cache entry from ConnectionHash.\n>\n> Yes, and this looks like a valid state for postgres_fdw and it can get\n> into the same state even without my patch. Next time GetConnection()\n> will find this cache entry, figure out that entry->conn is NULL and\n> establish a fresh connection. It is not clear for me right now, what\n> benefits we will get from clearing also this cache entry, except just\n> doing this for sanity.\n>\n\nBy clearing the cache entry we will have 2 advantages: 1) we could\nsave a(small) bit of memory 2) we could allow new connections to be\ncached, currently ConnectionHash can have only 8 entries. IMHO, along\nwith disconnecting, we can also clear off the cache entry. Thoughts?\n\n>\n> > 2) What\n> > happens if there are some cached connections, user set the GUC to\n> > false and not run any foreign queries or not use those connections\n> > thereafter, so only the new connections will not be cached? Will the\n> > existing unused connections still remain in the connection cache? See\n> > (b) above for a solution.\n> >\n>\n> Yes, they will. 
This could be solved with that additional disconnect\n> function as you proposed in c).\n>\n\nRight.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 19 Nov 2020 09:41:29 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On 2020-11-19 07:11, Bharath Rupireddy wrote:\n> On Wed, Nov 18, 2020 at 10:32 PM Alexey Kondratov\n> <a.kondratov@postgrespro.ru> wrote:\n> \n> Thanks! I will make separate patches and post them soon.\n> \n>> \n>> >> Attached is a small POC patch, which implements this contrib-level\n>> >> postgres_fdw.keep_connections GUC. What do you think?\n> >\n>> > I see two problems with your patch: 1) It just disconnects the remote\n>> > connection at the end of txn if the GUC is set to false, but it\n>> > doesn't remove the connection cache entry from ConnectionHash.\n>> \n>> Yes, and this looks like a valid state for postgres_fdw and it can get\n>> into the same state even without my patch. Next time GetConnection()\n>> will find this cache entry, figure out that entry->conn is NULL and\n>> establish a fresh connection. It is not clear for me right now, what\n>> benefits we will get from clearing also this cache entry, except just\n>> doing this for sanity.\n>> \n> \n> By clearing the cache entry we will have 2 advantages: 1) we could\n> save a(small) bit of memory 2) we could allow new connections to be\n> cached, currently ConnectionHash can have only 8 entries. IMHO, along\n> with disconnecting, we can also clear off the cache entry. Thoughts?\n> \n\nIIUC, 8 is not a hard limit, it is just a starting size. 
ConnectionHash \nis not a shared-memory hash table, so dynahash can expand it on-the-fly \nas follows, for example, from the comment before hash_create():\n\n * Note: for a shared-memory hashtable, nelem needs to be a pretty good\n * estimate, since we can't expand the table on the fly. But an \nunshared\n * hashtable can be expanded on-the-fly, so it's better for nelem to be\n * on the small side and let the table grow if it's exceeded. An overly\n * large nelem will penalize hash_seq_search speed without buying much.\n\nAlso I am not sure that by doing just a HASH_REMOVE you will free any \nmemory, since the hash table is already allocated (or expanded) to some \nsize. So HASH_REMOVE will only add the removed entry to the freeList, I \nguess.\n\nAnyway, I can hardly imagine bloating of ConnectionHash to be a problem \neven in the case when one has thousands of foreign servers all being \naccessed during a single backend life span.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n", "msg_date": "Thu, 19 Nov 2020 15:09:42 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Thu, Nov 19, 2020 at 5:39 PM Alexey Kondratov\n<a.kondratov@postgrespro.ru> wrote:\n>\n> >\n> > By clearing the cache entry we will have 2 advantages: 1) we could\n> > save a(small) bit of memory 2) we could allow new connections to be\n> > cached, currently ConnectionHash can have only 8 entries. IMHO, along\n> > with disconnecting, we can also clear off the cache entry. Thoughts?\n> >\n>\n> IIUC, 8 is not a hard limit, it is just a starting size. ConnectionHash\n> is not a shared-memory hash table, so dynahash can expand it on-the-fly\n> as follows, for example, from the comment before hash_create():\n>\n\nThanks! Yes, this is true. 
I was wrong earlier. I verified that 8 is\nnot a hard limit.\n\n>\n> Also I am not sure that by doing just a HASH_REMOVE you will free any\n> memory, since hash table is already allocated (or expanded) to some\n> size. So HASH_REMOVE will only add removed entry to the freeList, I\n> guess.\n>\n> Anyway, I can hardly imagine bloating of ConnectionHash to be a problem\n> even in the case, when one has thousands of foreign servers all being\n> accessed during a single backend life span.\n>\n\nOkay. I will not add the code to remove the entries from the cache.\n\nHere is how I'm making 4 separate patches:\n\n1. new function and its documentation.\n2. GUC and its documentation.\n3. server level option and its documentation.\n4. test cases for all of the above patches.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 19 Nov 2020 17:57:14 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": ">\n> Here is how I'm making 4 separate patches:\n>\n> 1. new function and its documentation.\n> 2. GUC and its documentation.\n> 3. server level option and its documentation.\n> 4. test cases for all of the above patches.\n>\n\nHi, I'm attaching the patches here. Note that, though the code changes\nfor this feature are small, I divided them up as separate patches to\nmake review easy.\n\nv1-0001-postgres_fdw-function-to-discard-cached-connections.patch\nThis patch adds a new function that gets defined as part of CREATE\nEXTENSION postgres_fdw; postgres_fdw_disconnect() when called with a\nforeign server name discards the connections associated with the\nserver name. 
When called without any argument, it discards all the\nexisting cached connections.\n\nv1-0002-postgres_fdw-add-keep_connections-GUC-to-not-cache-connections.patch\nThis patch adds a new GUC postgres_fdw.keep_connections, default being\non; when set to off, no remote connections are cached by the local\nsession.\n\nv1-0003-postgres_fdw-server-level-option-keep_connection.patch\nThis patch adds a new server level option, keep_connection, default\nbeing on; when set to off, the local session doesn't cache the\nconnections associated with the foreign server.\n\nv1-0004-postgres_fdw-connection-cache-discard-tests-and-documentation.patch\nThis patch adds the tests and documentation related to this feature.\n\nPlease review the patches.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 23 Nov 2020 12:18:42 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "Hi,\n\nOn 2020-11-23 09:48, Bharath Rupireddy wrote:\n>> \n>> Here is how I'm making 4 separate patches:\n>> \n>> 1. new function and its documentation.\n>> 2. GUC and its documentation.\n>> 3. server level option and its documentation.\n>> 4. test cases for all of the above patches.\n>> \n> \n> Hi, I'm attaching the patches here. 
Note that, though the code changes\n> for this feature are small, I divided them up as separate patches to\n> make review easy.\n> \n> v1-0001-postgres_fdw-function-to-discard-cached-connections.patch\n> \n\nThis patch looks pretty straightforward for me, but there are some \nthings to be addressed IMO:\n\n+\t\tserver = GetForeignServerByName(servername, true);\n+\n+\t\tif (server != NULL)\n+\t\t{\n\nYes, you return a false if no server was found, but for me it worth \nthrowing an error in this case as, for example, dblink does in the \ndblink_disconnect().\n\n+ result = disconnect_cached_connections(FOREIGNSERVEROID,\n+\t hashvalue,\n+\t false);\n\n+\t\tif (all || (!all && cacheid == FOREIGNSERVEROID &&\n+\t\t\tentry->server_hashvalue == hashvalue))\n+\t\t{\n+\t\t\tif (entry->conn != NULL &&\n+\t\t\t\t!all && cacheid == FOREIGNSERVEROID &&\n+\t\t\t\tentry->server_hashvalue == hashvalue)\n\nThese conditions look bulky for me. First, you pass FOREIGNSERVEROID to \ndisconnect_cached_connections(), but actually it just duplicates 'all' \nflag, since when it is 'FOREIGNSERVEROID', then 'all == false'; when it \nis '-1', then 'all == true'. That is all, there are only two calls of \ndisconnect_cached_connections(). 
That way, it seems that we should keep \nonly 'all' flag at least for now, doesn't it?\n\nSecond, I think that we should just rewrite this if statement in order \nto simplify it and make more readable, e.g.:\n\n\tif ((all || entry->server_hashvalue == hashvalue) &&\n\t\tentry->conn != NULL)\n\t{\n\t\tdisconnect_pg_server(entry);\n\t\tresult = true;\n\t}\n\n+\tif (all)\n+\t{\n+\t\thash_destroy(ConnectionHash);\n+\t\tConnectionHash = NULL;\n+\t\tresult = true;\n+\t}\n\nAlso, I am still not sure that it is a good idea to destroy the whole \ncache even in 'all' case, but maybe others will have a different \nopinion.\n\n> \n> v1-0002-postgres_fdw-add-keep_connections-GUC-to-not-cache-connections.patch\n> \n\n+\t\t\tentry->changing_xact_state) ||\n+\t\t\t(entry->used_in_current_xact &&\n+\t\t\t!keep_connections))\n\nI am not sure, but I think, that instead of adding this additional flag \ninto ConnCacheEntry structure we can look on entry->xact_depth and use \nlocal:\n\nbool used_in_current_xact = entry->xact_depth > 0;\n\nfor exactly the same purpose. Since we set entry->xact_depth to zero at \nthe end of xact, then it was used if it is not zero. It is set to 1 by \nbegin_remote_xact() called by GetConnection(), so everything seems to be \nfine.\n\nOtherwise, both patches seem to be working as expected. 
I am going to \nhave a look on the last two patches a bit later.\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n", "msg_date": "Mon, 23 Nov 2020 19:27:18 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "Thanks for the review comments.\n\nOn Mon, Nov 23, 2020 at 9:57 PM Alexey Kondratov\n<a.kondratov@postgrespro.ru> wrote:\n>\n> > v1-0001-postgres_fdw-function-to-discard-cached-connections.patch\n>\n> This patch looks pretty straightforward for me, but there are some\n> things to be addressed IMO:\n>\n> + server = GetForeignServerByName(servername, true);\n> +\n> + if (server != NULL)\n> + {\n>\n> Yes, you return a false if no server was found, but for me it worth\n> throwing an error in this case as, for example, dblink does in the\n> dblink_disconnect().\n>\n\ndblink_disconnect() \"Returns status, which is always OK (since any\nerror causes the function to throw an error instead of returning).\"\nThis behaviour doesn't seem okay to me.\n\nSince we throw true/false, I would prefer to throw a warning(with a\nreason) while returning false over an error.\n\n>\n> + result = disconnect_cached_connections(FOREIGNSERVEROID,\n> + hashvalue,\n> + false);\n>\n> + if (all || (!all && cacheid == FOREIGNSERVEROID &&\n> + entry->server_hashvalue == hashvalue))\n> + {\n> + if (entry->conn != NULL &&\n> + !all && cacheid == FOREIGNSERVEROID &&\n> + entry->server_hashvalue == hashvalue)\n>\n> These conditions look bulky for me. First, you pass FOREIGNSERVEROID to\n> disconnect_cached_connections(), but actually it just duplicates 'all'\n> flag, since when it is 'FOREIGNSERVEROID', then 'all == false'; when it\n> is '-1', then 'all == true'. That is all, there are only two calls of\n> disconnect_cached_connections(). 
That way, it seems that we should keep\n> only 'all' flag at least for now, doesn't it?\n>\n\nI added cachid as an argument to disconnect_cached_connections() for\nreusability. Say, someone wants to use it with a user mapping then\nthey can pass cacheid USERMAPPINGOID, hash value of user mapping. The\ncacheid == USERMAPPINGOID && entry->mapping_hashvalue == hashvalue can\nbe added to disconnect_cached_connections().\n\n>\n> Second, I think that we should just rewrite this if statement in order\n> to simplify it and make more readable, e.g.:\n>\n> if ((all || entry->server_hashvalue == hashvalue) &&\n> entry->conn != NULL)\n> {\n> disconnect_pg_server(entry);\n> result = true;\n> }\n>\n\nYeah. I will add a cacheid check and change it to below.\n\n if ((all || (cacheid == FOREIGNSERVEROID &&\nentry->server_hashvalue == hashvalue)) &&\n entry->conn != NULL)\n {\n disconnect_pg_server(entry);\n result = true;\n }\n\n>\n> + if (all)\n> + {\n> + hash_destroy(ConnectionHash);\n> + ConnectionHash = NULL;\n> + result = true;\n> + }\n>\n> Also, I am still not sure that it is a good idea to destroy the whole\n> cache even in 'all' case, but maybe others will have a different\n> opinion.\n>\n\nI think we should. When we disconnect all the connections, then no\npoint in keeping the connection cache hash data structure. If required\nit gets created at the next first foreign server usage in the same\nsession. And also, hash_destroy() frees up memory context unlike\nhash_search with HASH_REMOVE, so we can save a bit of memory.\n\n> >\n> > v1-0002-postgres_fdw-add-keep_connections-GUC-to-not-cache-connections.patch\n> >\n>\n> + entry->changing_xact_state) ||\n> + (entry->used_in_current_xact &&\n> + !keep_connections))\n>\n> I am not sure, but I think, that instead of adding this additional flag\n> into ConnCacheEntry structure we can look on entry->xact_depth and use\n> local:\n>\n> bool used_in_current_xact = entry->xact_depth > 0;\n>\n> for exactly the same purpose. 
Since we set entry->xact_depth to zero at\n> the end of xact, then it was used if it is not zero. It is set to 1 by\n> begin_remote_xact() called by GetConnection(), so everything seems to be\n> fine.\n>\n\nI missed this. Thanks, we can use the local variable as you suggested.\nI will change it.\n\n>\n> Otherwise, both patches seem to be working as expected. I am going to\n> have a look on the last two patches a bit later.\n>\n\nThanks. I will work on the comments so far and post updated patches soon.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 24 Nov 2020 09:22:58 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On 2020-11-24 06:52, Bharath Rupireddy wrote:\n> Thanks for the review comments.\n> \n> On Mon, Nov 23, 2020 at 9:57 PM Alexey Kondratov\n> <a.kondratov@postgrespro.ru> wrote:\n>> \n>> > v1-0001-postgres_fdw-function-to-discard-cached-connections.patch\n>> \n>> This patch looks pretty straightforward for me, but there are some\n>> things to be addressed IMO:\n>> \n>> + server = GetForeignServerByName(servername, true);\n>> +\n>> + if (server != NULL)\n>> + {\n>> \n>> Yes, you return a false if no server was found, but for me it worth\n>> throwing an error in this case as, for example, dblink does in the\n>> dblink_disconnect().\n>> \n> \n> dblink_disconnect() \"Returns status, which is always OK (since any\n> error causes the function to throw an error instead of returning).\"\n> This behaviour doesn't seem okay to me.\n> \n> Since we throw true/false, I would prefer to throw a warning(with a\n> reason) while returning false over an error.\n> \n\nI thought about something a bit more sophisticated:\n\n1) Return 'true' if there were open connections and we successfully \nclosed them.\n2) Return 'false' 
in the no-op case, i.e. there were no open \nconnections.\n3) Raise an error if something went wrong. And non-existing server case \nbelongs to this last category, IMO.\n\nThat looks like a semantically correct behavior, but let us wait for any \nother opinion.\n\n> \n>> \n>> + result = disconnect_cached_connections(FOREIGNSERVEROID,\n>> + hashvalue,\n>> + false);\n>> \n>> + if (all || (!all && cacheid == FOREIGNSERVEROID &&\n>> + entry->server_hashvalue == hashvalue))\n>> + {\n>> + if (entry->conn != NULL &&\n>> + !all && cacheid == FOREIGNSERVEROID &&\n>> + entry->server_hashvalue == hashvalue)\n>> \n>> These conditions look bulky for me. First, you pass FOREIGNSERVEROID \n>> to\n>> disconnect_cached_connections(), but actually it just duplicates 'all'\n>> flag, since when it is 'FOREIGNSERVEROID', then 'all == false'; when \n>> it\n>> is '-1', then 'all == true'. That is all, there are only two calls of\n>> disconnect_cached_connections(). That way, it seems that we should \n>> keep\n>> only 'all' flag at least for now, doesn't it?\n>> \n> \n> I added cachid as an argument to disconnect_cached_connections() for\n> reusability. Say, someone wants to use it with a user mapping then\n> they can pass cacheid USERMAPPINGOID, hash value of user mapping. The\n> cacheid == USERMAPPINGOID && entry->mapping_hashvalue == hashvalue can\n> be added to disconnect_cached_connections().\n> \n\nYeah, I have got your point and motivation to add this argument, but how \ncan we use it? To disconnect all connections belonging to some specific \nuser mapping? But any user mapping is hard bound to some foreign server, \nAFAIK, so we can pass serverid-based hash in this case.\n\nIn the case of pgfdw_inval_callback() this argument makes sense, since \nsyscache callbacks work that way, but here I can hardly imagine a case \nwhere we can use it. Thus, it still looks like a preliminary complication \nfor me, since we do not have plans to use it, do we? 
Anyway, everything \nseems to be working fine, so it is up to you to keep this additional \nargument.\n\n> \n> v1-0003-postgres_fdw-server-level-option-keep_connection.patch\n> This patch adds a new server level option, keep_connection, default\n> being on, when set to off, the local session doesn't cache the\n> connections associated with the foreign server.\n> \n\nThis patch looks good to me, except one note:\n\n \t\t\t(entry->used_in_current_xact &&\n-\t\t\t!keep_connections))\n+\t\t\t(!keep_connections || !entry->keep_connection)))\n \t\t{\n\nFollowing this logic:\n\n1) If keep_connections == true, then per-server keep_connection has a \n*higher* priority, so one can disable caching of a single foreign \nserver.\n\n2) But if keep_connections == false, then it works like a global switch \noff indifferently of per-server keep_connection's, i.e. they have a \n*lower* priority.\n\nIt looks fine for me, at least I cannot propose anything better, but \nmaybe it should be documented in 0004?\n\n> \n> v1-0004-postgres_fdw-connection-cache-discard-tests-and-documentation.patch\n> This patch adds the tests and documentation related to this feature.\n> \n\nI have not read all texts thoroughly, but what caught my eye:\n\n+ A GUC, <varname>postgres_fdw.keep_connections</varname>, default \nbeing\n+ <literal>on</literal>, when set to <literal>off</literal>, the local \nsession\n\nI think that GUC acronym is used widely only in the source code and \nPostgres docs tend to do not use it at all, except from acronyms list \nand a couple of 'GUC parameters' collocation usage. 
And it never used in \na singular form there, so I think that it should be rather:\n\nA configuration parameter, \n<varname>postgres_fdw.keep_connections</varname>, default being...\n\n+ <para>\n+ Note that when <varname>postgres_fdw.keep_connections</varname> \nis set to\n+ off, <filename>postgres_fdw</filename> discards either the \nconnections\n+ that are made previously and will be used by the local session or \nthe\n+ connections that will be made newly. But the connections that are \nmade\n+ previously and kept, but not used after this parameter is set to \noff, are\n+ not discarded. To discard them, use\n+ <function>postgres_fdw_disconnect</function> function.\n+ </para>\n\nThe whole paragraph is really difficult to follow. It could be something \nlike that:\n\n <para>\n Note that setting <varname>postgres_fdw.keep_connections</varname> \nto\n off does not discard any previously made and still open \nconnections immediately.\n They will be closed only at the end of a future transaction, which \noperated on them.\n\n To close all connections immediately use\n <function>postgres_fdw_disconnect</function> function.\n </para>\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n", "msg_date": "Tue, 24 Nov 2020 21:43:01 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Wed, Nov 25, 2020 at 2:43 AM Alexey Kondratov <a.kondratov@postgrespro.ru>\nwrote:\n\n> On 2020-11-24 06:52, Bharath Rupireddy wrote:\n> > Thanks for the review comments.\n> >\n> > On Mon, Nov 23, 2020 at 9:57 PM Alexey Kondratov\n> > <a.kondratov@postgrespro.ru> wrote:\n> >>\n> >> > v1-0001-postgres_fdw-function-to-discard-cached-connections.patch\n> >>\n> >> This patch looks pretty straightforward for me, but there are some\n> >> things to be 
addressed IMO:\n> >>\n> >> + server = GetForeignServerByName(servername, true);\n> >> +\n> >> + if (server != NULL)\n> >> + {\n> >>\n> >> Yes, you return a false if no server was found, but for me it worth\n> >> throwing an error in this case as, for example, dblink does in the\n> >> dblink_disconnect().\n> >>\n> >\n> > dblink_disconnect() \"Returns status, which is always OK (since any\n> > error causes the function to throw an error instead of returning).\"\n> > This behaviour doesn't seem okay to me.\n> >\n> > Since we throw true/false, I would prefer to throw a warning(with a\n> > reason) while returning false over an error.\n> >\n>\n> I thought about something a bit more sophisticated:\n>\n> 1) Return 'true' if there were open connections and we successfully\n> closed them.\n> 2) Return 'false' in the no-op case, i.e. there were no open\n> connections.\n> 3) Rise an error if something went wrong. And non-existing server case\n> belongs to this last category, IMO.\n>\n> That looks like a semantically correct behavior, but let us wait for any\n> other opinion.\n>\n> >\n> >>\n> >> + result = disconnect_cached_connections(FOREIGNSERVEROID,\n> >> + hashvalue,\n> >> + false);\n> >>\n> >> + if (all || (!all && cacheid == FOREIGNSERVEROID &&\n> >> + entry->server_hashvalue == hashvalue))\n> >> + {\n> >> + if (entry->conn != NULL &&\n> >> + !all && cacheid == FOREIGNSERVEROID &&\n> >> + entry->server_hashvalue == hashvalue)\n> >>\n> >> These conditions look bulky for me. First, you pass FOREIGNSERVEROID\n> >> to\n> >> disconnect_cached_connections(), but actually it just duplicates 'all'\n> >> flag, since when it is 'FOREIGNSERVEROID', then 'all == false'; when\n> >> it\n> >> is '-1', then 'all == true'. That is all, there are only two calls of\n> >> disconnect_cached_connections(). 
That way, it seems that we should\n> >> keep\n> >> only 'all' flag at least for now, doesn't it?\n> >>\n> >\n> > I added cachid as an argument to disconnect_cached_connections() for\n> > reusability. Say, someone wants to use it with a user mapping then\n> > they can pass cacheid USERMAPPINGOID, hash value of user mapping. The\n> > cacheid == USERMAPPINGOID && entry->mapping_hashvalue == hashvalue can\n> > be added to disconnect_cached_connections().\n> >\n>\n> Yeah, I have got your point and motivation to add this argument, but how\n> we can use it? To disconnect all connections belonging to some specific\n> user mapping? But any user mapping is hard bound to some foreign server,\n> AFAIK, so we can pass serverid-based hash in this case.\n>\n> In the case of pgfdw_inval_callback() this argument makes sense, since\n> syscache callbacks work that way, but here I can hardly imagine a case\n> where we can use it. Thus, it still looks as a preliminary complication\n> for me, since we do not have plans to use it, do we? Anyway, everything\n> seems to be working fine, so it is up to you to keep this additional\n> argument.\n>\n> >\n> > v1-0003-postgres_fdw-server-level-option-keep_connection.patch\n> > This patch adds a new server level option, keep_connection, default\n> > being on, when set to off, the local session doesn't cache the\n> > connections associated with the foreign server.\n> >\n>\n> This patch looks good to me, except one note:\n>\n> (entry->used_in_current_xact &&\n> - !keep_connections))\n> + (!keep_connections || !entry->keep_connection)))\n> {\n>\n> Following this logic:\n>\n> 1) If keep_connections == true, then per-server keep_connection has a\n> *higher* priority, so one can disable caching of a single foreign\n> server.\n>\n> 2) But if keep_connections == false, then it works like a global switch\n> off indifferently of per-server keep_connection's, i.e. 
they have a\n> *lower* priority.\n>\n> It looks fine for me, at least I cannot propose anything better, but\n> maybe it should be documented in 0004?\n>\n> >\n> >\n> v1-0004-postgres_fdw-connection-cache-discard-tests-and-documentation.patch\n> > This patch adds the tests and documentation related to this feature.\n> >\n>\n> I have not read all texts thoroughly, but what caught my eye:\n>\n> + A GUC, <varname>postgres_fdw.keep_connections</varname>, default\n> being\n> + <literal>on</literal>, when set to <literal>off</literal>, the local\n> session\n>\n> I think that GUC acronym is used widely only in the source code and\n> Postgres docs tend to do not use it at all, except from acronyms list\n> and a couple of 'GUC parameters' collocation usage. And it never used in\n> a singular form there, so I think that it should be rather:\n>\n> A configuration parameter,\n> <varname>postgres_fdw.keep_connections</varname>, default being...\n>\n>\nA quick thought here.\n\nWould it make sense to add a hook in the DISCARD ALL implementation that\npostgres_fdw can register for?\n\nThere's precedent here, since DISCARD ALL already has the same effect as\nSELECT pg_advisory_unlock_all(); amongst other things.", "msg_date": "Wed, 25 Nov 2020 09:54:22 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, {
"msg_contents": "On Wed, Nov 25, 2020 at 7:24 AM Craig Ringer\n<craig.ringer@enterprisedb.com> wrote:\n>\n> A quick thought here.\n>\n> Would it make sense to add a hook in the DISCARD ALL implementation that postgres_fdw can register for?\n>\n> There's precedent here, since DISCARD ALL already has the same effect as SELECT pg_advisory_unlock_all(); amongst other things.\n>\n\nIIUC, then it is like a core(server) function doing some work for the\npostgres_fdw module. 
Earlier in the discussion, one point raised was\nthat it's better not to have core handling something related to\npostgres_fdw. This is the reason we have come up with postgres_fdw\nspecific function and a GUC, which get defined when extension is\ncreated. Similarly, dblink also has it's own bunch of functions one\namong them is dblink_disconnect().\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 25 Nov 2020 08:47:52 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On 2020-11-25 06:17, Bharath Rupireddy wrote:\n> On Wed, Nov 25, 2020 at 7:24 AM Craig Ringer\n> <craig.ringer@enterprisedb.com> wrote:\n>> \n>> A quick thought here.\n>> \n>> Would it make sense to add a hook in the DISCARD ALL implementation \n>> that postgres_fdw can register for?\n>> \n>> There's precedent here, since DISCARD ALL already has the same effect \n>> as SELECT pg_advisory_unlock_all(); amongst other things.\n>> \n> \n> IIUC, then it is like a core(server) function doing some work for the\n> postgres_fdw module. Earlier in the discussion, one point raised was\n> that it's better not to have core handling something related to\n> postgres_fdw. This is the reason we have come up with postgres_fdw\n> specific function and a GUC, which get defined when extension is\n> created. Similarly, dblink also has it's own bunch of functions one\n> among them is dblink_disconnect().\n> \n\nIf I have got Craig correctly, he proposed that we already have a \nDISCARD ALL statement, which is processed by DiscardAll(), and it \nreleases internal resources known from the core perspective. 
That way, \nwe can introduce a general purpose hook DiscardAll_hook(), so \npostgres_fdw can make use of it to clean up its own resources \n(connections in our context) if needed. In other words, it is not like a \ncore function doing some work for the postgres_fdw module, but rather \nlike a callback/hook that postgres_fdw is able to register to do some \nadditional work.\n\nIt can be a good replacement for 0001, but wouldn't it already be \noverkill to drop all local caches along with remote connections? I mean \nthat it would be a nice-to-have hook from the extensibility perspective, \nbut postgres_fdw_disconnect() still makes sense, since it does a very \nnarrow and specific job.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n", "msg_date": "Wed, 25 Nov 2020 13:04:01 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Wed, Nov 25, 2020 at 12:13 AM Alexey Kondratov\n<a.kondratov@postgrespro.ru> wrote:\n>\n> 1) Return 'true' if there were open connections and we successfully\n> closed them.\n> 2) Return 'false' in the no-op case, i.e. there were no open\n> connections.\n> 3) Rise an error if something went wrong. And non-existing server case\n> belongs to this last category, IMO.\n>\n\nDone this way.\n\n>\n> I am not sure, but I think, that instead of adding this additional flag\n> into ConnCacheEntry structure we can look on entry->xact_depth and use\n> local:\n>\n> bool used_in_current_xact = entry->xact_depth > 0;\n>\n> for exactly the same purpose. Since we set entry->xact_depth to zero at\n> the end of xact, then it was used if it is not zero. 
It is set to 1 by\n> begin_remote_xact() called by GetConnection(), so everything seems to be\n> fine.\n>\n\nDone.\n\n>\n> In the case of pgfdw_inval_callback() this argument makes sense, since\n> syscache callbacks work that way, but here I can hardly imagine a case\n> where we can use it. Thus, it still looks as a preliminary complication\n> for me, since we do not have plans to use it, do we? Anyway, everything\n> seems to be working fine, so it is up to you to keep this additional\n> argument.\n>\n\nRemoved the cacheid variable.\n\n>\n> Following this logic:\n>\n> 1) If keep_connections == true, then per-server keep_connection has a\n> *higher* priority, so one can disable caching of a single foreign\n> server.\n>\n> 2) But if keep_connections == false, then it works like a global switch\n> off indifferently of per-server keep_connection's, i.e. they have a\n> *lower* priority.\n>\n> It looks fine for me, at least I cannot propose anything better, but\n> maybe it should be documented in 0004?\n>\n\nDone.\n\n>\n> I think that GUC acronym is used widely only in the source code and\n> Postgres docs tend to do not use it at all, except from acronyms list\n> and a couple of 'GUC parameters' collocation usage. And it never used in\n> a singular form there, so I think that it should be rather:\n>\n> A configuration parameter,\n> <varname>postgres_fdw.keep_connections</varname>, default being...\n>\n\nDone.\n\n>\n> The whole paragraph is really difficult to follow. It could be something\n> like that:\n>\n> <para>\n> Note that setting <varname>postgres_fdw.keep_connections</varname>\n> to\n> off does not discard any previously made and still open\n> connections immediately.\n> They will be closed only at the end of a future transaction, which\n> operated on them.\n>\n> To close all connections immediately use\n> <function>postgres_fdw_disconnect</function> function.\n> </para>\n>\n\nDone.\n\nAttaching the v2 patch set. 
Please review it further.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 27 Nov 2020 07:42:50 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2020/11/27 11:12, Bharath Rupireddy wrote:\n> On Wed, Nov 25, 2020 at 12:13 AM Alexey Kondratov\n> <a.kondratov@postgrespro.ru> wrote:\n>>\n>> 1) Return 'true' if there were open connections and we successfully\n>> closed them.\n>> 2) Return 'false' in the no-op case, i.e. there were no open\n>> connections.\n>> 3) Rise an error if something went wrong. And non-existing server case\n>> belongs to this last category, IMO.\n>>\n> \n> Done this way.\n> \n>>\n>> I am not sure, but I think, that instead of adding this additional flag\n>> into ConnCacheEntry structure we can look on entry->xact_depth and use\n>> local:\n>>\n>> bool used_in_current_xact = entry->xact_depth > 0;\n>>\n>> for exactly the same purpose. Since we set entry->xact_depth to zero at\n>> the end of xact, then it was used if it is not zero. It is set to 1 by\n>> begin_remote_xact() called by GetConnection(), so everything seems to be\n>> fine.\n>>\n> \n> Done.\n> \n>>\n>> In the case of pgfdw_inval_callback() this argument makes sense, since\n>> syscache callbacks work that way, but here I can hardly imagine a case\n>> where we can use it. Thus, it still looks as a preliminary complication\n>> for me, since we do not have plans to use it, do we? 
Anyway, everything\n>> seems to be working fine, so it is up to you to keep this additional\n>> argument.\n>>\n> \n> Removed the cacheid variable.\n> \n>>\n>> Following this logic:\n>>\n>> 1) If keep_connections == true, then per-server keep_connection has a\n>> *higher* priority, so one can disable caching of a single foreign\n>> server.\n>>\n>> 2) But if keep_connections == false, then it works like a global switch\n>> off indifferently of per-server keep_connection's, i.e. they have a\n>> *lower* priority.\n>>\n>> It looks fine for me, at least I cannot propose anything better, but\n>> maybe it should be documented in 0004?\n>>\n> \n> Done.\n> \n>>\n>> I think that GUC acronym is used widely only in the source code and\n>> Postgres docs tend to do not use it at all, except from acronyms list\n>> and a couple of 'GUC parameters' collocation usage. And it never used in\n>> a singular form there, so I think that it should be rather:\n>>\n>> A configuration parameter,\n>> <varname>postgres_fdw.keep_connections</varname>, default being...\n>>\n> \n> Done.\n> \n>>\n>> The whole paragraph is really difficult to follow. It could be something\n>> like that:\n>>\n>> <para>\n>> Note that setting <varname>postgres_fdw.keep_connections</varname>\n>> to\n>> off does not discard any previously made and still open\n>> connections immediately.\n>> They will be closed only at the end of a future transaction, which\n>> operated on them.\n>>\n>> To close all connections immediately use\n>> <function>postgres_fdw_disconnect</function> function.\n>> </para>\n>>\n> \n> Done.\n> \n> Attaching the v2 patch set. Please review it further.\n\nRegarding the 0001 patch, we should add the function that returns\nthe information of cached connections like dblink_get_connections(),\ntogether with 0001 patch? Otherwise it's not easy for users to\nsee how many cached connections are and determine whether to\ndisconnect them or not. 
Sorry if this was already discussed before.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 4 Dec 2020 15:19:23 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Fri, Dec 4, 2020 at 11:49 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> > Attaching the v2 patch set. Please review it further.\n>\n> Regarding the 0001 patch, we should add the function that returns\n> the information of cached connections like dblink_get_connections(),\n> together with 0001 patch? Otherwise it's not easy for users to\n> see how many cached connections are and determine whether to\n> disconnect them or not. Sorry if this was already discussed before.\n>\n\nThanks for bringing this up. Exactly this is what I was thinking a few\ndays back. Say the new function postgres_fdw_get_connections() which\ncan return an array of server names whose connections exist in the\ncache. Without this function, the user may not know how many\nconnections this backend has until he checks it manually on the remote\nserver.\n\nThoughts? If okay, I can code the function in the 0001 patch.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 4 Dec 2020 13:46:40 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Fri, Dec 4, 2020 at 1:46 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Dec 4, 2020 at 11:49 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >\n> > > Attaching the v2 patch set. 
Please review it further.\n> >\n> > Regarding the 0001 patch, we should add the function that returns\n> > the information of cached connections like dblink_get_connections(),\n> > together with 0001 patch? Otherwise it's not easy for users to\n> > see how many cached connections are and determine whether to\n> > disconnect them or not. Sorry if this was already discussed before.\n> >\n>\n> Thanks for bringing this up. Exactly this is what I was thinking a few\n> days back. Say the new function postgres_fdw_get_connections() which\n> can return an array of server names whose connections exist in the\n> cache. Without this function, the user may not know how many\n> connections this backend has until he checks it manually on the remote\n> server.\n>\n> Thoughts? If okay, I can code the function in the 0001 patch.\n>\n\nAdded a new function postgres_fdw_get_connections() into 0001 patch,\nwhich returns a list of server names for which there exists an\nexisting open and active connection.\n\nAttaching v3 patch set, please review it further.\n\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 4 Dec 2020 16:45:29 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2020/12/04 20:15, Bharath Rupireddy wrote:\n> On Fri, Dec 4, 2020 at 1:46 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Fri, Dec 4, 2020 at 11:49 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>\n>>>> Attaching the v2 patch set. Please review it further.\n>>>\n>>> Regarding the 0001 patch, we should add the function that returns\n>>> the information of cached connections like dblink_get_connections(),\n>>> together with 0001 patch? 
Otherwise it's not easy for users to\n>>> see how many cached connections are and determine whether to\n>>> disconnect them or not. Sorry if this was already discussed before.\n>>>\n>>\n>> Thanks for bringing this up. Exactly this is what I was thinking a few\n>> days back. Say the new function postgres_fdw_get_connections() which\n>> can return an array of server names whose connections exist in the\n>> cache. Without this function, the user may not know how many\n>> connections this backend has until he checks it manually on the remote\n>> server.\n>>\n>> Thoughts? If okay, I can code the function in the 0001 patch.\n>>\n> \n> Added a new function postgres_fdw_get_connections() into 0001 patch,\n\nThanks!\n\n\n> which returns a list of server names for which there exists an\n> existing open and active connection.\n> \n> Attaching v3 patch set, please review it further.\n\nI started reviewing 0001 patch.\n\nIMO the 0001 patch should be self-contained so that we can commit it at first. That is, I think that it's better to move the documents and tests for the functions 0001 patch adds from 0004 to 0001.\n\nSince 0001 introduces new user-visible functions into postgres_fdw, the version of postgres_fdw should be increased?\n\nThe similar code to get the server name from cached connection entry exists also in pgfdw_reject_incomplete_xact_state_change(). I'm tempted to make the \"common\" function for that code and use it both in postgres_fdw_get_connections() and pgfdw_reject_incomplete_xact_state_change(), to simplify the code.\n\n+\t\t\t/* We only look for active and open remote connections. */\n+\t\t\tif (entry->invalidated || !entry->conn)\n+\t\t\t\tcontinue;\n\nWe should return even invalidated entry because it has still cached connection?\nAlso this makes me wonder if we should return both the server name and boolean flag indicating whether it's invalidated or not. 
If so, users can easily find the invalidated connection entry and disconnect it because there is no need to keep invalidated connection.\n\n+\tif (all)\n+\t{\n+\t\thash_destroy(ConnectionHash);\n+\t\tConnectionHash = NULL;\n+\t\tresult = true;\n+\t}\n\nCould you tell me why ConnectionHash needs to be destroyed?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 9 Dec 2020 20:19:09 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Wed, Dec 9, 2020 at 4:49 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> I started reviewing 0001 patch.\n>\n\nThanks!\n\n> IMO the 0001 patch should be self-contained so that we can commit it at first. That is, I think that it's better to move the documents and tests for the functions 0001 patch adds from 0004 to 0001.\n>\n\n+1. I will make each patch self-contained in the next version which I\nplan to submit soon.\n\n> Since 0001 introduces new user-visible functions into postgres_fdw, the version of postgres_fdw should be increased?\n>\n\nYeah looks like we should do that, dblink has done that when it\nintroduced new functions. In case the new functions are not required\nfor anyone, they can choose to go back to 1.0.\n\nShould we make the new version as 1.1 or 2.0? I prefer to make it 1.1\nas we are just adding few functionality over 1.0. I will change the\ndefault_version from 1.0 to the 1.1 and add a new\npostgres_fdw--1.1.sql file.\n\nIf okay, I will make changes to 0001 patch.\n\n> The similar code to get the server name from cached connection entry exists also in pgfdw_reject_incomplete_xact_state_change(). 
I'm tempted to make the \"common\" function for that code and use it both in postgres_fdw_get_connections() and pgfdw_reject_incomplete_xact_state_change(), to simplify the code.\n>\n\n+1. I will move the server name finding code to a new function, say\nchar *pgfdw_get_server_name(ConnCacheEntry *entry);\n\n> + /* We only look for active and open remote connections. */\n> + if (entry->invalidated || !entry->conn)\n> + continue;\n>\n> We should return even invalidated entry because it has still cached connection?\n>\n\nI checked this point earlier, for invalidated connections, the tuple\nreturned from the cache is also invalid and the following error will\nbe thrown. So, we can not get the server name for that user mapping.\nCache entries too would have been invalidated after the connection is\nmarked as invalid in pgfdw_inval_callback().\n\numaptup = SearchSysCache1(USERMAPPINGOID, ObjectIdGetDatum(entry->key));\nif (!HeapTupleIsValid(umaptup))\n elog(ERROR, \"cache lookup failed for user mapping with OID %u\",\nentry->key);\n\nCan we reload the sys cache entries of USERMAPPINGOID (if there is a\nway) for invalid connections in our new function and then do a look\nup? If not, another way could be storing the associated server name or\noid in the ConnCacheEntry. Currently we store user mapping oid(in\nkey), its hash value(in mapping_hashvalue) and foreign server oid's\nhash value (in server_hashvalue). If we have the foreign server oid,\nthen we can just look up for the server name, but I'm not quite sure\nwhether we get the same issue i.e. invalid tuples when the entry gets\ninvalided (via pgfdw_inval_callback) due to some change in foreign\nserver options.\n\nIMHO, we can simply choose to show all the active, valid connections. Thoughts?\n\n> Also this makes me wonder if we should return both the server name and boolean flag indicating whether it's invalidated or not. 
If so, users can easily find the invalidated connection entry and disconnect it because there is no need to keep invalidated connection.\n>\n\nCurrently we are returning a list of foreing server names with whom\nthere exist active connections. If we somehow address the above\nmentioned problem for invalid connections and choose to show them as\nwell, then how should our output look like? Is it something like we\nprepare a list of pairs (servername, validflag)?\n\n> + if (all)\n> + {\n> + hash_destroy(ConnectionHash);\n> + ConnectionHash = NULL;\n> + result = true;\n> + }\n>\n> Could you tell me why ConnectionHash needs to be destroyed?\n>\n\nSay, in a session there are hundreds of different foreign server\nconnections made and if users want to disconnect all of them with the\nnew function and don't want any further foreign connections in that\nsession, they can do it. But then why keep the cache just lying around\nand holding those many entries? Instead we can destroy the cache and\nif necessary it will be allocated later on next foreign server\nconnections.\n\nIMHO, it is better to destroy the cache in case of disconnect all,\nhoping to save memory, thinking that (next time if required) the cache\nallocation doesn't take much time. Thoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Dec 2020 07:14:33 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Thu, Dec 10, 2020 at 7:14 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> > + /* We only look for active and open remote connections. 
*/\n> > + if (entry->invalidated || !entry->conn)\n> > + continue;\n> >\n> > We should return even invalidated entry because it has still cached connection?\n> >\n>\n> I checked this point earlier, for invalidated connections, the tuple\n> returned from the cache is also invalid and the following error will\n> be thrown. So, we can not get the server name for that user mapping.\n> Cache entries too would have been invalidated after the connection is\n> marked as invalid in pgfdw_inval_callback().\n>\n> umaptup = SearchSysCache1(USERMAPPINGOID, ObjectIdGetDatum(entry->key));\n> if (!HeapTupleIsValid(umaptup))\n> elog(ERROR, \"cache lookup failed for user mapping with OID %u\",\n> entry->key);\n>\n\nI further checked on returning invalidated connections in the output\nof the function. Actually, the reason I'm seeing a null tuple from sys\ncache (and hence the error \"cache lookup failed for user mapping with\nOID xxxx\") for an invalidated connection is that the user mapping\n(with OID entry->key that exists in the cache) is getting dropped, so\nthe sys cache returns null tuple. The use case is as follows:\n\n1) Create a server, role, and user mapping of the role with the server\n2) Run a foreign table query, so that the connection related to the\nserver gets cached\n3) Issue DROP OWNED BY for the role created, since the user mapping is\ndependent on that role, it gets dropped from the catalogue table and\nan invalidation message will be pending to clear the sys cache\nassociated with that user mapping.\n4) Now, if I do select * from postgres_fdw_get_connections() or for\nthat matter any query, at the beginning the txn\nAtStart_Cache()-->AcceptInvalidationMessages()-->pgfdw_inval_callback()\ngets called and marks the cached entry as invalidated. Remember the\nreason for this invalidation message is that the user mapping with the\nOID entry->key is dropped from 3). 
Later in\npostgres_fdw_get_connections(), when we search the sys cache with\nentry->key for that invalidated connection, since the user mapping is\ndropped from the system, null tuple is returned.\n\nIf we were to show invalidated connections in the output of\npostgres_fdw_get_connections(), we can ignore the entry and continue\nfurther if the user mapping sys cache search returns null tuple:\n\numaptup = SearchSysCache1(USERMAPPINGOID, ObjectIdGetDatum(entry->key));\n\nif (!HeapTupleIsValid(umaptup))\n continue;\n\nThoughts?\n\n> > Also this makes me wonder if we should return both the server name and boolean flag indicating whether it's invalidated or not. If so, users can easily find the invalidated connection entry and disconnect it because there is no need to keep invalidated connection.\n> >\n>\n> Currently we are returning a list of foreing server names with whom\n> there exist active connections. If we somehow address the above\n> mentioned problem for invalid connections and choose to show them as\n> well, then how should our output look like? Is it something like we\n> prepare a list of pairs (servername, validflag)?\n\nIf agreed on above point, we can output something like: (myserver1,\nvalid), (myserver2, valid), (myserver3, invalid), (myserver4, valid)\n....\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Dec 2020 15:46:45 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2020/12/11 19:16, Bharath Rupireddy wrote:\n> On Thu, Dec 10, 2020 at 7:14 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>> + /* We only look for active and open remote connections. 
*/\n>>> + if (entry->invalidated || !entry->conn)\n>>> + continue;\n>>>\n>>> We should return even invalidated entry because it has still cached connection?\n>>>\n>>\n>> I checked this point earlier, for invalidated connections, the tuple\n>> returned from the cache is also invalid and the following error will\n>> be thrown. So, we can not get the server name for that user mapping.\n>> Cache entries too would have been invalidated after the connection is\n>> marked as invalid in pgfdw_inval_callback().\n>>\n>> umaptup = SearchSysCache1(USERMAPPINGOID, ObjectIdGetDatum(entry->key));\n>> if (!HeapTupleIsValid(umaptup))\n>> elog(ERROR, \"cache lookup failed for user mapping with OID %u\",\n>> entry->key);\n>>\n> \n> I further checked on returning invalidated connections in the output\n> of the function. Actually, the reason I'm seeing a null tuple from sys\n> cache (and hence the error \"cache lookup failed for user mapping with\n> OID xxxx\") for an invalidated connection is that the user mapping\n> (with OID entry->key that exists in the cache) is getting dropped, so\n> the sys cache returns null tuple. The use case is as follows:\n> \n> 1) Create a server, role, and user mapping of the role with the server\n> 2) Run a foreign table query, so that the connection related to the\n> server gets cached\n> 3) Issue DROP OWNED BY for the role created, since the user mapping is\n> dependent on that role, it gets dropped from the catalogue table and\n> an invalidation message will be pending to clear the sys cache\n> associated with that user mapping.\n> 4) Now, if I do select * from postgres_fdw_get_connections() or for\n> that matter any query, at the beginning the txn\n> AtStart_Cache()-->AcceptInvalidationMessages()-->pgfdw_inval_callback()\n> gets called and marks the cached entry as invalidated. Remember the\n> reason for this invalidation message is that the user mapping with the\n> OID entry->key is dropped from 3). 
Later in\n> postgres_fdw_get_connections(), when we search the sys cache with\n> entry->key for that invalidated connection, since the user mapping is\n> dropped from the system, null tuple is returned.\n\nThanks for the analysis! This means that the cached connection invalidated by drop of server or user mapping will not be closed even by the subsequent access to the foreign server and will remain until the backend exits. Right? If so, this seems like a connection-leak bug, at least for me.... Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Sat, 12 Dec 2020 02:30:57 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2020/12/10 10:44, Bharath Rupireddy wrote:\n> On Wed, Dec 9, 2020 at 4:49 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> I started reviewing 0001 patch.\n>>\n> \n> Thanks!\n> \n>> IMO the 0001 patch should be self-contained so that we can commit it at first. That is, I think that it's better to move the documents and tests for the functions 0001 patch adds from 0004 to 0001.\n>>\n> \n> +1. I will make each patch self-contained in the next version which I\n> plan to submit soon.\n> \n>> Since 0001 introduces new user-visible functions into postgres_fdw, the version of postgres_fdw should be increased?\n>>\n> \n> Yeah looks like we should do that, dblink has done that when it\n> introduced new functions. In case the new functions are not required\n> for anyone, they can choose to go back to 1.0.\n> \n> Should we make the new version as 1.1 or 2.0? I prefer to make it 1.1\n> as we are just adding few functionality over 1.0. 
I will change the\n> default_version from 1.0 to the 1.1 and add a new\n> postgres_fdw--1.1.sql file.\n\n+1\n\n\n> \n> If okay, I will make changes to 0001 patch.\n> \n>> The similar code to get the server name from cached connection entry exists also in pgfdw_reject_incomplete_xact_state_change(). I'm tempted to make the \"common\" function for that code and use it both in postgres_fdw_get_connections() and pgfdw_reject_incomplete_xact_state_change(), to simplify the code.\n>>\n> \n> +1. I will move the server name finding code to a new function, say\n> char *pgfdw_get_server_name(ConnCacheEntry *entry);\n> \n>> + /* We only look for active and open remote connections. */\n>> + if (entry->invalidated || !entry->conn)\n>> + continue;\n>>\n>> We should return even invalidated entry because it has still cached connection?\n>>\n> \n> I checked this point earlier, for invalidated connections, the tuple\n> returned from the cache is also invalid and the following error will\n> be thrown. So, we can not get the server name for that user mapping.\n> Cache entries too would have been invalidated after the connection is\n> marked as invalid in pgfdw_inval_callback().\n> \n> umaptup = SearchSysCache1(USERMAPPINGOID, ObjectIdGetDatum(entry->key));\n> if (!HeapTupleIsValid(umaptup))\n> elog(ERROR, \"cache lookup failed for user mapping with OID %u\",\n> entry->key);\n> \n> Can we reload the sys cache entries of USERMAPPINGOID (if there is a\n> way) for invalid connections in our new function and then do a look\n> up? If not, another way could be storing the associated server name or\n> oid in the ConnCacheEntry. Currently we store user mapping oid(in\n> key), its hash value(in mapping_hashvalue) and foreign server oid's\n> hash value (in server_hashvalue). If we have the foreign server oid,\n> then we can just look up for the server name, but I'm not quite sure\n> whether we get the same issue i.e. 
invalid tuples when the entry gets\n> invalided (via pgfdw_inval_callback) due to some change in foreign\n> server options.\n> \n> IMHO, we can simply choose to show all the active, valid connections. Thoughts?\n> \n>> Also this makes me wonder if we should return both the server name and boolean flag indicating whether it's invalidated or not. If so, users can easily find the invalidated connection entry and disconnect it because there is no need to keep invalidated connection.\n>>\n> \n> Currently we are returning a list of foreing server names with whom\n> there exist active connections. If we somehow address the above\n> mentioned problem for invalid connections and choose to show them as\n> well, then how should our output look like? Is it something like we\n> prepare a list of pairs (servername, validflag)?\n> \n>> + if (all)\n>> + {\n>> + hash_destroy(ConnectionHash);\n>> + ConnectionHash = NULL;\n>> + result = true;\n>> + }\n>>\n>> Could you tell me why ConnectionHash needs to be destroyed?\n>>\n> \n> Say, in a session there are hundreds of different foreign server\n> connections made and if users want to disconnect all of them with the\n> new function and don't want any further foreign connections in that\n> session, they can do it. But then why keep the cache just lying around\n> and holding those many entries? Instead we can destroy the cache and\n> if necessary it will be allocated later on next foreign server\n> connections.\n> \n> IMHO, it is better to destroy the cache in case of disconnect all,\n> hoping to save memory, thinking that (next time if required) the cache\n> allocation doesn't take much time. Thoughts?\n\nOk, but why is ConnectionHash destroyed only when \"all\" is true? Even when \"all\" is false, for example, the following query can disconnect all the cached connections. 
Even in this case, i.e., whenever there are no cached connections, ConnectionHash should be destroyed?\n\n SELECT postgres_fdw_disconnect(srvname) FROM pg_foreign_server ;\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Sat, 12 Dec 2020 02:31:48 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Fri, Dec 11, 2020 at 11:01 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> On 2020/12/11 19:16, Bharath Rupireddy wrote:\n> > On Thu, Dec 10, 2020 at 7:14 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >>> + /* We only look for active and open remote connections. */\n> >>> + if (entry->invalidated || !entry->conn)\n> >>> + continue;\n> >>>\n> >>> We should return even invalidated entry because it has still cached connection?\n> >>>\n> >>\n> >> I checked this point earlier, for invalidated connections, the tuple\n> >> returned from the cache is also invalid and the following error will\n> >> be thrown. So, we can not get the server name for that user mapping.\n> >> Cache entries too would have been invalidated after the connection is\n> >> marked as invalid in pgfdw_inval_callback().\n> >>\n> >> umaptup = SearchSysCache1(USERMAPPINGOID, ObjectIdGetDatum(entry->key));\n> >> if (!HeapTupleIsValid(umaptup))\n> >> elog(ERROR, \"cache lookup failed for user mapping with OID %u\",\n> >> entry->key);\n> >>\n> >\n> > I further checked on returning invalidated connections in the output\n> > of the function. 
Actually, the reason I'm seeing a null tuple from sys\n> > cache (and hence the error \"cache lookup failed for user mapping with\n> > OID xxxx\") for an invalidated connection is that the user mapping\n> > (with OID entry->key that exists in the cache) is getting dropped, so\n> > the sys cache returns null tuple. The use case is as follows:\n> >\n> > 1) Create a server, role, and user mapping of the role with the server\n> > 2) Run a foreign table query, so that the connection related to the\n> > server gets cached\n> > 3) Issue DROP OWNED BY for the role created, since the user mapping is\n> > dependent on that role, it gets dropped from the catalogue table and\n> > an invalidation message will be pending to clear the sys cache\n> > associated with that user mapping.\n> > 4) Now, if I do select * from postgres_fdw_get_connections() or for\n> > that matter any query, at the beginning the txn\n> > AtStart_Cache()-->AcceptInvalidationMessages()-->pgfdw_inval_callback()\n> > gets called and marks the cached entry as invalidated. Remember the\n> > reason for this invalidation message is that the user mapping with the\n> > OID entry->key is dropped from 3). Later in\n> > postgres_fdw_get_connections(), when we search the sys cache with\n> > entry->key for that invalidated connection, since the user mapping is\n> > dropped from the system, null tuple is returned.\n>\n> Thanks for the analysis! This means that the cached connection invalidated by drop of server or user mapping will not be closed even by the subsequent access to the foreign server and will remain until the backend exits. 
Right?\n\nIt will first be marked as invalidated via\nAtStart_Cache()-->AcceptInvalidationMessages()-->pgfdw_inval_callback(),\nand on the next use of that connection, invalidated connections are\ndisconnected and reconnected.\n\n if (entry->conn != NULL && entry->invalidated && entry->xact_depth == 0)\n {\n elog(DEBUG3, "closing connection %p for option changes to take effect",\n entry->conn);\n disconnect_pg_server(entry);\n }\n\n> If so, this seems like a connection-leak bug, at least for me.... Thought?\n>\n\nIt's not a leak. The comment before pgfdw_inval_callback() [1]\nexplains why we cannot immediately close/disconnect the connections\nin pgfdw_inval_callback() after marking them as invalidated.\n\nHere is the scenario in which, in the midst of a txn, we get invalidation\nmessages (AtStart_Cache()-->AcceptInvalidationMessages()-->pgfdw_inval_callback()\nhappens):\n\n1) select from a foreign table with server1, usermapping1 in session1\n2) begin a top txn in session1 and run a few foreign queries that open up\nsub txns internally; meanwhile alter/drop server1/usermapping1 in\nsession2. Then, at the start of each sub txn too, we get to process the\ninvalidation messages via\nAtStart_Cache()-->AcceptInvalidationMessages()-->pgfdw_inval_callback().\nSo, if we disconnect right after marking invalidated in\npgfdw_inval_callback, that's a problem since we are in a sub txn under\na top txn.\n\nI don't think we can do anything here and disconnect the connections\nright after the invalidation happens. 
Thoughts?\n\n\n[1]\n/*\n * Connection invalidation callback function\n *\n * After a change to a pg_foreign_server or pg_user_mapping catalog entry,\n * mark connections depending on that entry as needing to be remade.\n * We can't immediately destroy them, since they might be in the midst of\n * a transaction, but we'll remake them at the next opportunity.\n *\n * Although most cache invalidation callbacks blow away all the related stuff\n * regardless of the given hashvalue, connections are expensive enough that\n * it's worth trying to avoid that.\n *\n * NB: We could avoid unnecessary disconnection more strictly by examining\n * individual option values, but it seems too much effort for the gain.\n */\nstatic void\npgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue)\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Dec 2020 23:31:53 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Fri, Dec 11, 2020 at 3:46 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> If we were to show invalidated connections in the output of\n> postgres_fdw_get_connections(), we can ignore the entry and continue\n> further if the user mapping sys cache search returns null tuple:\n>\n> umaptup = SearchSysCache1(USERMAPPINGOID, ObjectIdGetDatum(entry->key));\n>\n> if (!HeapTupleIsValid(umaptup))\n> continue;\n\nAny thoughts here?\n\n> > > Also this makes me wonder if we should return both the server name and boolean flag indicating whether it's invalidated or not. 
If so, users can easily find the invalidated connection entry and disconnect it because there is no need to keep invalidated connection.\n> > >\n> >\n> > Currently we are returning a list of foreing server names with whom\n> > there exist active connections. If we somehow address the above\n> > mentioned problem for invalid connections and choose to show them as\n> > well, then how should our output look like? Is it something like we\n> > prepare a list of pairs (servername, validflag)?\n>\n> If agreed on above point, we can output something like: (myserver1,\n> valid), (myserver2, valid), (myserver3, invalid), (myserver4, valid)\n\nAnd here on the output text?\n\nIn case we agreed on the above output format, one funniest thing could\noccur is that if some hypothetical person has \"valid\" or \"invalid\" as\ntheir foreign server names, they will have difficulty in reading their\noutput. (valid, valid), (valid, invalid), (invalid, valid), (invalid,\ninvalid).\n\nOr should it be something like pairs of (server_name, true/false)?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Dec 2020 23:37:56 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2020/12/12 3:01, Bharath Rupireddy wrote:\n> On Fri, Dec 11, 2020 at 11:01 PM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>> On 2020/12/11 19:16, Bharath Rupireddy wrote:\n>>> On Thu, Dec 10, 2020 at 7:14 AM Bharath Rupireddy\n>>> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>>>> + /* We only look for active and open remote connections. 
*/\n>>>>> + if (entry->invalidated || !entry->conn)\n>>>>> + continue;\n>>>>>\n>>>>> We should return even invalidated entry because it has still cached connection?\n>>>>>\n>>>>\n>>>> I checked this point earlier, for invalidated connections, the tuple\n>>>> returned from the cache is also invalid and the following error will\n>>>> be thrown. So, we can not get the server name for that user mapping.\n>>>> Cache entries too would have been invalidated after the connection is\n>>>> marked as invalid in pgfdw_inval_callback().\n>>>>\n>>>> umaptup = SearchSysCache1(USERMAPPINGOID, ObjectIdGetDatum(entry->key));\n>>>> if (!HeapTupleIsValid(umaptup))\n>>>> elog(ERROR, \"cache lookup failed for user mapping with OID %u\",\n>>>> entry->key);\n>>>>\n>>>\n>>> I further checked on returning invalidated connections in the output\n>>> of the function. Actually, the reason I'm seeing a null tuple from sys\n>>> cache (and hence the error \"cache lookup failed for user mapping with\n>>> OID xxxx\") for an invalidated connection is that the user mapping\n>>> (with OID entry->key that exists in the cache) is getting dropped, so\n>>> the sys cache returns null tuple. The use case is as follows:\n>>>\n>>> 1) Create a server, role, and user mapping of the role with the server\n>>> 2) Run a foreign table query, so that the connection related to the\n>>> server gets cached\n>>> 3) Issue DROP OWNED BY for the role created, since the user mapping is\n>>> dependent on that role, it gets dropped from the catalogue table and\n>>> an invalidation message will be pending to clear the sys cache\n>>> associated with that user mapping.\n>>> 4) Now, if I do select * from postgres_fdw_get_connections() or for\n>>> that matter any query, at the beginning the txn\n>>> AtStart_Cache()-->AcceptInvalidationMessages()-->pgfdw_inval_callback()\n>>> gets called and marks the cached entry as invalidated. 
Remember the\n>>> reason for this invalidation message is that the user mapping with the\n>>> OID entry->key is dropped from 3). Later in\n>>> postgres_fdw_get_connections(), when we search the sys cache with\n>>> entry->key for that invalidated connection, since the user mapping is\n>>> dropped from the system, null tuple is returned.\n>>\n>> Thanks for the analysis! This means that the cached connection invalidated by drop of server or user mapping will not be closed even by the subsequent access to the foreign server and will remain until the backend exits. Right?\n> \n> It will be first marked as invalidated via\n> AtStart_Cache()-->AcceptInvalidationMessages()-->pgfdw_inval_callback(),\n> and on the next use of that connection invalidated connections are\n> disconnected and reconnected.\n\nI was thinking that in the case of drop of user mapping or server, hash_search(ConnnectionHash) in GetConnection() cannot find the cached connection entry invalidated by that drop. Because \"user->umid\" used as hash key is changed. So I was thinking that that invalidated connection will not be closed nor reconnected.\n\n\n> \n> if (entry->conn != NULL && entry->invalidated && entry->xact_depth == 0)\n> {\n> elog(DEBUG3, \"closing connection %p for option changes to take effect\",\n> entry->conn);\n> disconnect_pg_server(entry);\n> }\n> \n>> If so, this seems like a connection-leak bug, at least for me.... Thought?\n>>\n> \n> It's not a leak. The comment before pgfdw_inval_callback() [1]\n> explains why we can not immediately close/disconnect the connections\n> in pgfdw_inval_callback() after marking them as invalidated.\n\n*If* invalidated connection cannot be close immediately even in the case of drop of server or user mapping, we can defer it to the subsequent call to GetConnection(). That is, GetConnection() closes not only the target invalidated connection but also the other all invalidated connections. 
Of course, invalidated connections will remain until subsequent GetConnection() is called, though.\n\n\n> \n> Here is the scenario how in the midst of a txn we get invalidation\n> messages(AtStart_Cache()-->AcceptInvalidationMessages()-->pgfdw_inval_callback()\n> happens):\n> \n> 1) select from foreign table with server1, usermapping1 in session1\n> 2) begin a top txn in session1, run a few foreign queries that open up\n> sub txns internally. meanwhile alter/drop server1/usermapping1 in\n> session2, then at each start of sub txn also we get to process the\n> invalidation messages via\n> AtStart_Cache()-->AcceptInvalidationMessages()-->pgfdw_inval_callback().\n> So, if we disconnect right after marking invalidated in\n> pgfdw_inval_callback, that's a problem since we are in a sub txn under\n> a top txn.\n\nMaybe. But what is the actual problem here?\n\nOTOH, if cached connection should not be close in the middle of transaction, postgres_fdw_disconnect() also should be disallowed to be executed during transaction?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Sat, 12 Dec 2020 03:48:58 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Sat, Dec 12, 2020 at 12:19 AM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n> I was thinking that in the case of drop of user mapping or server,\nhash_search(ConnnectionHash) in GetConnection() cannot find the cached\nconnection entry invalidated by that drop. Because \"user->umid\" used as\nhash key is changed. 
So I was thinking that that invalidated connection\nwill not be closed nor reconnected.\n>\n\nYou are right in saying that the connection leaks.\n\nUse case 1:\n1) Run a foreign query in session1 with server1, user mapping1\n2) Drop user mapping1 in another session2; an invalidation message gets logged\nwhich will have to be processed by other sessions\n3) Run the foreign query again in session1; at the start of the txn, the cached\nentry gets invalidated via pgfdw_inval_callback(). Whatever the type\nof foreign query (select, update, explain, delete, insert, analyze etc.),\nupon the next call to GetUserMapping() from postgres_fdw.c, the cache lookup\nfails (with ERROR: user mapping not found for \"XXXX\") since user\nmapping1 has been dropped in session2, and the query also fails before\nreaching GetConnection(), where the connections associated with invalidated\nentries would have been disconnected.\n\nSo, the connection associated with the invalidated entry remains until the\nlocal session exits, which is a problem to solve.\n\nUse case 2:\n1) Run a foreign query in session1 with server1, user mapping1\n2) Try to drop foreign server1; we would not be allowed to do so\nbecause of the dependency. If we use CASCADE, then the dependent user mapping1\nand foreign tables get dropped too [1].\n3) Run the foreign query again in session1; at the start of the txn, the cached\nentry gets invalidated via pgfdw_inval_callback(), and the query fails because there\nis no foreign table and no user mapping1.\n\nBut note that the connection remains open in session1, which is again a\nproblem to solve.\n\nTo solve the above connection leak problem, it looks like the right place\nto close all the invalid connections is pgfdw_xact_callback(), once\nregistered, which gets called at the end of every txn in the current\nsession (by then all the sub txns would have finished). 
Note that\nif there are too many invalidated entries, then one of the following txn\nhas to bear running this extra code, but that's better than having leaked\nconnections. Thoughts? If okay, I can code a separate patch.\n\nstatic void\npgfdw_xact_callback(XactEvent event, void *arg)\n{\n HASH_SEQ_STATUS scan;\n ConnCacheEntry *entry;\n /* HERE WE CAN LOOK FOR ALL INVALIDATED ENTRIES AND DISCONNECT THEM */\n /* Quick exit if no connections were touched in this transaction. */\n if (!xact_got_connection)\n return;\n\nAnd we can also extend postgres_fdw_disconnect() with something like:\n\npostgres_fdw_disconnect(bool invalid_only) --> default for invalid_only is\nfalse, which disconnects all connections. When invalid_only is set to true, it\ndisconnects only invalid connections.\npostgres_fdw_disconnect('server_name') --> disconnects connections\nassociated with the specified foreign server\n\nHaving said this, I'm not in favour of the invalid_only flag, because if we\nchoose to change the code in pgfdw_xact_callback to solve the connection leak\nproblem, we may not need this invalid_only flag at all, because at the end\nof txn (even for the txns in which the queries fail with error,\npgfdw_xact_callback gets called), all the existing invalid connections get\ndisconnected. Thoughts?\n\n[1]\npostgres=# drop server loopback1 ;\nERROR: cannot drop server loopback1 because other objects depend on it\nDETAIL: user mapping for bharath on server loopback1 depends on server\nloopback1\nforeign table f1 depends on server loopback1\nHINT: Use DROP ... 
CASCADE to drop the dependent objects too.\n\npostgres=# drop server loopback1 CASCADE ;\nNOTICE: drop cascades to 2 other objects\nDETAIL: drop cascades to user mapping for bharath on server loopback1\ndrop cascades to foreign table f1\nDROP SERVER\n\n> > if (entry->conn != NULL && entry->invalidated && entry->xact_depth\n== 0)\n> > {\n> > elog(DEBUG3, \"closing connection %p for option changes to take\neffect\",\n> > entry->conn);\n> > disconnect_pg_server(entry);\n> > }\n> >\n> >> If so, this seems like a connection-leak bug, at least for me....\nThought?\n> >>\n> >\n> > It's not a leak. The comment before pgfdw_inval_callback() [1]\n> > explains why we can not immediately close/disconnect the connections\n> > in pgfdw_inval_callback() after marking them as invalidated.\n>\n> *If* invalidated connection cannot be close immediately even in the case\nof drop of server or user mapping, we can defer it to the subsequent call\nto GetConnection(). That is, GetConnection() closes not only the target\ninvalidated connection but also the other all invalidated connections. Of\ncourse, invalidated connections will remain until subsequent\nGetConnection() is called, though.\n>\n\nI think my detailed response to the above comment clarifies this.\n\n> > Here is the scenario how in the midst of a txn we get invalidation\n> >\nmessages(AtStart_Cache()-->AcceptInvalidationMessages()-->pgfdw_inval_callback()\n> > happens):\n> >\n> > 1) select from foreign table with server1, usermapping1 in session1\n> > 2) begin a top txn in session1, run a few foreign queries that open up\n> > sub txns internally. 
meanwhile alter/drop server1/usermapping1 in\n> > session2, then at each start of sub txn also we get to process the\n> > invalidation messages via\n> > AtStart_Cache()-->AcceptInvalidationMessages()-->pgfdw_inval_callback().\n> > So, if we disconnect right after marking invalidated in\n> > pgfdw_inval_callback, that's a problem since we are in a sub txn under\n> > a top txn.\n>\n> Maybe. But what is the actual problem here?\n>\n> OTOH, if cached connection should not be close in the middle of\ntransaction, postgres_fdw_disconnect() also should be disallowed to be\nexecuted during transaction?\n\n+1. Yeah that makes sense. We can avoid closing the connection if\n(entry->xact_depth > 0). I will modify it in\ndisconnect_cached_connections().\n\n> >> Could you tell me why ConnectionHash needs to be destroyed?\n> >\n> > Say, in a session there are hundreds of different foreign server\n> > connections made and if users want to disconnect all of them with the\n> > new function and don't want any further foreign connections in that\n> > session, they can do it. But then why keep the cache just lying around\n> > and holding those many entries? Instead we can destroy the cache and\n> > if necessary it will be allocated later on next foreign server\n> > connections.\n> >\n> > IMHO, it is better to destroy the cache in case of disconnect all,\n> > hoping to save memory, thinking that (next time if required) the cache\n> > allocation doesn't take much time. Thoughts?\n>\n> Ok, but why is ConnectionHash destroyed only when \"all\" is true? Even\nwhen \"all\" is false, for example, the following query can disconnect all\nthe cached connections. Even in this case, i.e., whenever there are no\ncached connections, ConnectionHash should be destroyed?\n>\n> SELECT postgres_fdw_disconnect(srvname) FROM pg_foreign_server ;\n\n+1. 
I can check all the cache entries to see if there are any active\nconnections, in the same loop where I try to find the cache entry for the\ngiven foreign server, if none exists, then I destroy the cache. Thoughts?\n\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n", "msg_date": "Sat, 12 Dec 2020 11:35:42 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2020/12/12 15:05, Bharath Rupireddy wrote:\n> On Sat, Dec 12, 2020 at 12:19 AM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> > I was thinking that in the case of drop of user mapping or server, hash_search(ConnnectionHash) in GetConnection() cannot find the cached connection entry invalidated by that drop. Because \"user->umid\" used as hash key is changed. So I was thinking that that invalidated connection will not be closed nor reconnected.\n> >\n> \n> You are right in saying that the connection leaks.\n> \n> Use case 1:\n> 1) Run foreign query in session1 with server1, user mapping1\n> 2) Drop user mapping1 in another session2, invalidation message gets logged which will have to be processed by other sessions\n> 3) Run foreign query again in session1, at the start of txn, the cached entry gets invalidated via pgfdw_inval_callback(). 
Whatever may be the type of foreign query (select, update, explain, delete, insert, analyze etc.), upon next call to GetUserMapping() from postgres_fdw.c, the cache lookup fails(with ERROR:  user mapping not found for \"XXXX\") since the user mapping1 has been dropped in session2 and the query will also fail before reaching GetConnection() where the connections associated with invalidated entries would have got disconnected.\n> \n> So, the connection associated with invalidated entry would remain until the local session exits which is a problem to solve.\n> \n> Use case 2:\n> 1) Run foreign query in session1 with server1, user mapping1\n> 2) Try to drop foreign server1, then we would not be allowed to do so because of dependency. If we use CASCADE, then the dependent user mapping1 and foreign tables get dropped too [1].\n> 3) Run foreign query again in session1, at the start of txn, the cached entry gets invalidated via pgfdw_inval_callback(), it fails because there is no foreign table and user mapping1.\n> \n> But, note that the connection remains open in session1, which is again a problem to solve.\n> \n> To solve the above connection leak problem, it looks like the right place to close all the invalid connections is pgfdw_xact_callback(), once registered, which gets called at the end of every txn in the current session(by then all the sub txns also would have been finished). Note that if there are too many invalidated entries, then one of the following txn has to bear running this extra code, but that's okay than having leaked connections. Thoughts? If okay, I can code a separate patch.\n\nThanks for further analysis! Sounds good. Also +1 for making it as separate patch. 
Maybe only this patch needs to be back-patched.\n\n\n> static void\n> pgfdw_xact_callback(XactEvent event, void *arg)\n> {\n>     HASH_SEQ_STATUS scan;\n>     ConnCacheEntry *entry;\n>     /* HERE WE CAN LOOK FOR ALL INVALIDATED ENTRIES AND DISCONNECT THEM */\n\nThis may cause the connection to be closed before sending COMMIT TRANSACTION command to the foreign server, i.e., the connection is closed in the middle of the transaction. So as you explained before, we should avoid that? If my understanding is right, probably the connection should be closed after the COMMIT TRANSACTION command is sent to the foreign server. What about changing the following code in pgfdw_xact_callback() so that it closes the connection even when it's marked as invalidated?\n\n\t\tif (PQstatus(entry->conn) != CONNECTION_OK ||\n\t\t\tPQtransactionStatus(entry->conn) != PQTRANS_IDLE ||\n\t\t\tentry->changing_xact_state)\n\t\t{\n\t\t\telog(DEBUG3, \"discarding connection %p\", entry->conn);\n\t\t\tdisconnect_pg_server(entry);\n\t\t}\n\n\n>     /* Quick exit if no connections were touched in this transaction. */\n>     if (!xact_got_connection)\n>         return;\n> \n> And we can also extend postgres_fdw_disconnect() something like.\n> \n> postgres_fdw_disconnect(bool invalid_only) --> default for invalid_only false. disconnects all connections. when invalid_only is set to true then disconnects only invalid connections.\n> postgres_fdw_disconnect('server_name') --> disconnects connections associated with the specified foreign server\n> \n> Having said this, I'm not in favour of invalid_only flag, because if we choose to change the code in pgfdw_xact_callback to solve connection leak problem, we may not need this invalid_only flag at all, because at the end of txn (even for the txns in which the queries fail with error, pgfdw_xact_callback gets called), all the existing invalid connections get disconnected. Thoughts?\n\n+1 not to have invalid_only flag. 
On the other hand, I think that postgres_fdw_get_connections() should return all the cached connections including invalidated ones. Otherwise, the number of connections observed via postgres_fdw_get_connections() may be different from the number of connections actually established, and which would be confusing to users. BTW, even after fixing the connection-leak issue, postgres_fdw_get_connections() may see invalidated cached connections when it's called during the transaction.\n\n\n> \n> [1]\n> postgres=# drop server loopback1 ;\n> ERROR:  cannot drop server loopback1 because other objects depend on it\n> DETAIL:  user mapping for bharath on server loopback1 depends on server loopback1\n> foreign table f1 depends on server loopback1\n> HINT:  Use DROP ... CASCADE to drop the dependent objects too.\n> \n> postgres=# drop server loopback1 CASCADE ;\n> NOTICE:  drop cascades to 2 other objects\n> DETAIL:  drop cascades to user mapping for bharath on server loopback1\n> drop cascades to foreign table f1\n> DROP SERVER\n> \n> > >      if (entry->conn != NULL && entry->invalidated && entry->xact_depth == 0)\n> > >      {\n> > >          elog(DEBUG3, \"closing connection %p for option changes to take effect\",\n> > >               entry->conn);\n> > >          disconnect_pg_server(entry);\n> > >      }\n> > >\n> > >> If so, this seems like a connection-leak bug, at least for me.... Thought?\n> > >>\n> > >\n> > > It's not a leak. The comment before pgfdw_inval_callback() [1]\n> > > explains why we can not immediately close/disconnect the connections\n> > > in pgfdw_inval_callback() after marking them as invalidated.\n> >\n> > *If* invalidated connection cannot be close immediately even in the case of drop of server or user mapping, we can defer it to the subsequent call to GetConnection(). That is, GetConnection() closes not only the target invalidated connection but also the other all invalidated connections. 
Of course, invalidated connections will remain until subsequent GetConnection() is called, though.\n> >\n> \n> I think my detailed response to the above comment clarifies this.\n> \n> > > Here is the scenario how in the midst of a txn we get invalidation\n> > > messages(AtStart_Cache()-->AcceptInvalidationMessages()-->pgfdw_inval_callback()\n> > > happens):\n> > >\n> > > 1) select from foreign table with server1, usermapping1 in session1\n> > > 2) begin a top txn in session1, run a few foreign queries that open up\n> > > sub txns internally. meanwhile alter/drop server1/usermapping1 in\n> > > session2, then at each start of sub txn also we get to process the\n> > > invalidation messages via\n> > > AtStart_Cache()-->AcceptInvalidationMessages()-->pgfdw_inval_callback().\n> > > So, if we disconnect right after marking invalidated in\n> > > pgfdw_inval_callback, that's a problem since we are in a sub txn under\n> > > a top txn.\n> >\n> > Maybe. But what is the actual problem here?\n> >\n> > OTOH, if cached connection should not be close in the middle of transaction, postgres_fdw_disconnect() also should be disallowed to be executed during transaction?\n> \n> +1. Yeah that makes sense. We can avoid closing the connection if (entry->xact_depth > 0). I will modify it in disconnect_cached_connections().\n> \n> > >> Could you tell me why ConnectionHash needs to be destroyed?\n> > >\n> > > Say, in a session there are hundreds of different foreign server\n> > > connections made and if users want to disconnect all of them with the\n> > > new function and don't want any further foreign connections in that\n> > > session, they can do it. But then why keep the cache just lying around\n> > > and holding those many entries? 
Instead we can destroy the cache and\n> > > if necessary it will be allocated later on next foreign server\n> > > connections.\n> > >\n> > > IMHO, it is better to destroy the cache in case of disconnect all,\n> > > hoping to save memory, thinking that (next time if required) the cache\n> > > allocation doesn't take much time. Thoughts?\n> >\n> > Ok, but why is ConnectionHash destroyed only when \"all\" is true? Even when \"all\" is false, for example, the following query can disconnect all the cached connections. Even in this case, i.e., whenever there are no cached connections, ConnectionHash should be destroyed?\n> >\n> >      SELECT postgres_fdw_disconnect(srvname) FROM pg_foreign_server ;\n> \n> +1. I can check all the cache entries to see if there are any active connections, in the same loop where I try to find the cache entry for the given foreign server,  if none exists, then I destroy the cache. Thoughts?\n\n+1\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 14 Dec 2020 13:08:48 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Mon, Dec 14, 2020 at 9:38 AM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n> On 2020/12/12 15:05, Bharath Rupireddy wrote:\n> > On Sat, Dec 12, 2020 at 12:19 AM Fujii Masao <\nmasao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> > > I was thinking that in the case of drop of user mapping or server,\nhash_search(ConnnectionHash) in GetConnection() cannot find the cached\nconnection entry invalidated by that drop. Because \"user->umid\" used as\nhash key is changed. 
So I was thinking that that invalidated connection\nwill not be closed nor reconnected.\n> > >\n> >\n> > You are right in saying that the connection leaks.\n> >\n> > Use case 1:\n> > 1) Run foreign query in session1 with server1, user mapping1\n> > 2) Drop user mapping1 in another session2, invalidation message gets\nlogged which will have to be processed by other sessions\n> > 3) Run foreign query again in session1, at the start of txn, the cached\nentry gets invalidated via pgfdw_inval_callback(). Whatever may be the type\nof foreign query (select, update, explain, delete, insert, analyze etc.),\nupon next call to GetUserMapping() from postgres_fdw.c, the cache lookup\nfails(with ERROR: user mapping not found for \"XXXX\") since the user\nmapping1 has been dropped in session2 and the query will also fail before\nreaching GetConnection() where the connections associated with invalidated\nentries would have got disconnected.\n> >\n> > So, the connection associated with invalidated entry would remain until\nthe local session exits which is a problem to solve.\n> >\n> > Use case 2:\n> > 1) Run foreign query in session1 with server1, user mapping1\n> > 2) Try to drop foreign server1, then we would not be allowed to do so\nbecause of dependency. If we use CASCADE, then the dependent user mapping1\nand foreign tables get dropped too [1].\n> > 3) Run foreign query again in session1, at the start of txn, the cached\nentry gets invalidated via pgfdw_inval_callback(), it fails because there\nis no foreign table and user mapping1.\n> >\n> > But, note that the connection remains open in session1, which is again\na problem to solve.\n> >\n> > To solve the above connection leak problem, it looks like the right\nplace to close all the invalid connections is pgfdw_xact_callback(), once\nregistered, which gets called at the end of every txn in the current\nsession(by then all the sub txns also would have been finished). 
Note that\nif there are too many invalidated entries, then one of the following txn\nhas to bear running this extra code, but that's okay than having leaked\nconnections. Thoughts? If okay, I can code a separate patch.\n>\n> Thanks for further analysis! Sounds good. Also +1 for making it as\nseparate patch. Maybe only this patch needs to be back-patched.\n\nThanks. Yeah once agreed on the fix, +1 to back patch. Shall I start a\nseparate thread for connection leak issue and patch, so that others might\nhave different thoughts??\n\n> > static void\n> > pgfdw_xact_callback(XactEvent event, void *arg)\n> > {\n> > HASH_SEQ_STATUS scan;\n> > ConnCacheEntry *entry;\n> > * /* HERE WE CAN LOOK FOR ALL INVALIDATED ENTRIES AND DISCONNECT\nTHEM*/*\n>\n> This may cause the connection to be closed before sending COMMIT\nTRANSACTION command to the foreign server, i.e., the connection is closed\nin the middle of the transaction. So as you explained before, we should\navoid that? If this my understanding is right, probably the connection\nshould be closed after COMMIT TRANSACTION command is sent to the foreign\nserver. What about changing the following code in pgfdw_xact_callback() so\nthat it closes the connection even when it's marked as invalidated?\n\nYou are right! 
I'm posting what I have in my mind for fixing this\nconnection leak problem.\n\n/* tracks whether any work is needed in callback functions */\nstatic bool xact_got_connection = false;\n/* tracks whether there exists at least one invalid connection in the\nconnection cache */\nstatic bool invalid_connections_exist = false;\n\nstatic void\npgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue)\n{\n\n /* hashvalue == 0 means a cache reset, must clear all state */\n if (hashvalue == 0 ||\n (cacheid == FOREIGNSERVEROID &&\n entry->server_hashvalue == hashvalue) ||\n (cacheid == USERMAPPINGOID &&\n entry->mapping_hashvalue == hashvalue))\n {\n entry->invalidated = true;\n invalid_connections_exist = true;\n }\n\nstatic void\npgfdw_xact_callback(XactEvent event, void *arg)\n{\n HASH_SEQ_STATUS scan;\n ConnCacheEntry *entry;\n\n /* Quick exit if no connections were touched in this transaction or\nthere are no invalid connections in the cache. */\n if (!xact_got_connection && !invalid_connections_exist)\n return;\n\n /*\n * If the connection isn't in a good idle state, discard it to\n * recover. Next GetConnection will open a new connection.\n */\n if (PQstatus(entry->conn) != CONNECTION_OK ||\n PQtransactionStatus(entry->conn) != PQTRANS_IDLE ||\n entry->changing_xact_state ||\n entry->invalidated)\n {\n elog(DEBUG3, \"discarding connection %p\", entry->conn);\n disconnect_pg_server(entry);\n }\n\n/*\n* Regardless of the event type, we can now mark ourselves as out of the\n* transaction. (Note: if we are here during PRE_COMMIT or PRE_PREPARE,\n* this saves a useless scan of the hashtable during COMMIT or PREPARE.)\n*/\nxact_got_connection = false;\n\n/* We are done with closing all the invalidated connections so reset. */\ninvalid_connections_exist = false;\n}\n\n> > And we can also extend postgres_fdw_disconnect() something like.\n> >\n> > postgres_fdw_disconnect(bool invalid_only) --> default for invalid_only\nfalse. disconnects all connections. 
when invalid_only is set to true then\ndisconnects only invalid connections.\n> > postgres_fdw_disconnect('server_name') --> disconnects connections\nassociated with the specified foreign server\n> >\n> > Having said this, I'm not in favour of invalid_only flag, because if we\nchoose to change the code in pgfdw_xact_callback to solve connection leak\nproblem, we may not need this invalid_only flag at all, because at the end\nof txn (even for the txns in which the queries fail with error,\npgfdw_xact_callback gets called), all the existing invalid connections get\ndisconnected. Thoughts?\n>\n> +1 not to have invalid_only flag. On the other hand, I think that\npostgres_fdw_get_connections() should return all the cached connections\nincluding invalidated ones. Otherwise, the number of connections observed\nvia postgres_fdw_get_connections() may be different from the number of\nconnections actually established, which would be confusing to users.\n>\n\nIf postgres_fdw_get_connections() has to return invalidated connections, I\nhave a few things mentioned in [1] to be clarified. Thoughts? Please have a\nlook at the below comment before we decide whether to show the invalid entries\nor not.\n\n[1] -\nhttps://www.postgresql.org/message-id/CALj2ACUv%3DArQXs0U9PM3YXKCeSzJ1KxRokDY0g_0aGy--kDScA%40mail.gmail.com\n\n> BTW, even after fixing the connection-leak issue,\npostgres_fdw_get_connections() may see invalidated cached connections when\nit's called during the transaction.\n\nWe will not output if the invalidated entry has no active connection[2], so\nif we fix the connection leak issue with the above discussed fix, i.e.\nclosing all the invalidated connections at the end of the next xact, there is\nless chance that we will output invalidated entries in the\npostgres_fdw_get_connections() output. The only case where we may show invalidated\nconnections (which have an active connection entry->conn) in the\npostgres_fdw_get_connections() output is as follows:\n\n1) say we have a few cached active connections in session 1\n2) drop the user mapping (in another session) associated with any of the\ncached connections to make that entry invalid\n3) run select * from postgres_fdw_get_connections(); in session 1. At the\nstart of the xact, the invalidation message gets processed and the\ncorresponding entry gets marked as invalid. If we allow invalid connections\n(that have entry->conn) to show up in the output, then we show them in the\nresult of the query. At the end of the xact, we close these invalid\nconnections; in this case, the user might think that they still have invalid\nconnections active.\n\nIf the query run in 3) is not postgres_fdw_get_connections() but something\nelse, then postgres_fdw_get_connections() will never get to show invalid\nconnections, as they would already have been closed.\n\nIMO, it's better not to show the invalid connections in the\npostgres_fdw_get_connections() output, if we fix the connection leak issue\nwith the above discussed fix, i.e. closing all the invalidated connections at\nthe end of the next xact.\n\n[2]\n+Datum\n+postgres_fdw_get_connections(PG_FUNCTION_ARGS)\n+{\n+ ArrayBuildState *astate = NULL;\n+\n+ if (ConnectionHash)\n+ {\n+ HASH_SEQ_STATUS scan;\n+ ConnCacheEntry *entry;\n+\n+ hash_seq_init(&scan, ConnectionHash);\n+ while ((entry = (ConnCacheEntry *) hash_seq_search(&scan)))\n+ {\n+ Form_pg_user_mapping umap;\n+ HeapTuple umaptup;\n+ Form_pg_foreign_server fsrv;\n+ HeapTuple fsrvtup;\n+\n+ /* We only look for active and open remote connections. 
*/\n+\t\t\tif (!entry->conn)\n+\t\t\t\tcontinue;\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Mon, Dec 14, 2020 at 9:38 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2020/12/12 15:05, Bharath Rupireddy wrote:\n> > On Sat, Dec 12, 2020 at 12:19 AM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> >  > I was thinking that in the case of drop of user mapping or server, hash_search(ConnectionHash) in GetConnection() cannot find the cached connection entry invalidated by that drop, because \"user->umid\" used as hash key is changed. So I was thinking that that invalidated connection will not be closed nor reconnected.\n> >  >\n> >\n> > You are right in saying that the connection leaks.\n> >\n> > Use case 1:\n> > 1) Run foreign query in session1 with server1, user mapping1\n> > 2) Drop user mapping1 in another session2; the invalidation message gets logged, which will have to be processed by other sessions\n> > 3) Run foreign query again in session1; at the start of the txn, the cached entry gets invalidated via pgfdw_inval_callback(). Whatever the type of foreign query (select, update, explain, delete, insert, analyze etc.), upon the next call to GetUserMapping() from postgres_fdw.c, the cache lookup fails (with ERROR: user mapping not found for \"XXXX\") since user mapping1 has been dropped in session2, and the query will also fail before reaching GetConnection(), where the connections associated with invalidated entries would have been disconnected.\n> >\n> > So, the connection associated with the invalidated entry would remain until the local session exits, which is a problem to solve.\n> >\n> > Use case 2:\n> > 1) Run foreign query in session1 with server1, user mapping1\n> > 2) Try to drop foreign server1; we would not be allowed to do so because of dependency. If we use CASCADE, then the dependent user mapping1 and foreign tables get dropped too [1].\n> > 3) Run foreign query again in session1; at the start of the txn, the cached entry gets invalidated via pgfdw_inval_callback(), and the query fails because there is no foreign table and no user mapping1.\n> >\n> > But, note that the connection remains open in session1, which is again a problem to solve.\n> >\n> > To solve the above connection leak problem, it looks like the right place to close all the invalid connections is pgfdw_xact_callback(), once registered, which gets called at the end of every txn in the current session (by then all the sub txns also would have finished). Note that if there are too many invalidated entries, then one of the following txns has to bear running this extra code, but that's better than having leaked connections. Thoughts? If okay, I can code a separate patch.\n>\n> Thanks for further analysis! Sounds good. Also +1 for making it a separate patch. Maybe only this patch needs to be back-patched.\n\nThanks. Yeah, once we agree on the fix, +1 to back-patch. Shall I start a separate thread for the connection leak issue and patch, so that others might have different thoughts?\n\n> > static void\n> > pgfdw_xact_callback(XactEvent event, void *arg)\n> > {\n> >      HASH_SEQ_STATUS scan;\n> >      ConnCacheEntry *entry;\n> >      /* HERE WE CAN LOOK FOR ALL INVALIDATED ENTRIES AND DISCONNECT THEM */\n>\n> This may cause the connection to be closed before sending the COMMIT TRANSACTION command to the foreign server, i.e., the connection is closed in the middle of the transaction. So, as you explained before, we should avoid that? If this understanding of mine is right, probably the connection should be closed after the COMMIT TRANSACTION command is sent to the foreign server. What about changing the following code in pgfdw_xact_callback() so that it closes the connection even when it's marked as invalidated?\n\nYou are right! I'm posting what I have in my mind for fixing this connection leak problem.\n\n/* tracks whether any work is needed in callback functions */\nstatic bool xact_got_connection = false;\n\n/* tracks whether there exists at least one invalid connection in the connection cache */\nstatic bool invalid_connections_exist = false;\n\nstatic void\npgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue)\n{\n        /* hashvalue == 0 means a cache reset, must clear all state */\n        if (hashvalue == 0 ||\n            (cacheid == FOREIGNSERVEROID &&\n             entry->server_hashvalue == hashvalue) ||\n            (cacheid == USERMAPPINGOID &&\n             entry->mapping_hashvalue == hashvalue))\n        {\n            entry->invalidated = true;\n            invalid_connections_exist = true;\n        }\n}\n\nstatic void\npgfdw_xact_callback(XactEvent event, void *arg)\n{\n    HASH_SEQ_STATUS scan;\n    ConnCacheEntry *entry;\n\n    /* Quick exit if no connections were touched in this transaction and there are no invalid connections in the cache. */\n    if (!xact_got_connection && !invalid_connections_exist)\n        return;\n\n        /*\n         * If the connection isn't in a good idle state, discard it to\n         * recover. Next GetConnection will open a new connection.\n         */\n        if (PQstatus(entry->conn) != CONNECTION_OK ||\n            PQtransactionStatus(entry->conn) != PQTRANS_IDLE ||\n            entry->changing_xact_state ||\n            entry->invalidated)\n        {\n            elog(DEBUG3, \"discarding connection %p\", entry->conn);\n            disconnect_pg_server(entry);\n        }\n\n\t/*\n\t * Regardless of the event type, we can now mark ourselves as out of the\n\t * transaction.  (Note: if we are here during PRE_COMMIT or PRE_PREPARE,\n\t * this saves a useless scan of the hashtable during COMMIT or PREPARE.)\n\t */\n\txact_got_connection = false;\n\n\t/* We are done with closing all the invalidated connections so reset. */\n\tinvalid_connections_exist = false;\n}\n\n> > And we can also extend postgres_fdw_disconnect() something like:\n> >\n> > postgres_fdw_disconnect(bool invalid_only) --> default for invalid_only is false, disconnects all connections; when invalid_only is set to true, disconnects only invalid connections.\n> > postgres_fdw_disconnect('server_name') --> disconnects connections associated with the specified foreign server\n> >\n> > Having said this, I'm not in favour of the invalid_only flag, because if we choose to change the code in pgfdw_xact_callback to solve the connection leak problem, we may not need this invalid_only flag at all, because at the end of the txn (even for the txns in which the queries fail with error, pgfdw_xact_callback gets called), all the existing invalid connections get disconnected. Thoughts?\n>\n> +1 not to have the invalid_only flag. On the other hand, I think that postgres_fdw_get_connections() should return all the cached connections including invalidated ones. Otherwise, the number of connections observed via postgres_fdw_get_connections() may be different from the number of connections actually established, which would be confusing to users.\n\nIf postgres_fdw_get_connections() has to return invalidated connections, I have a few things mentioned in [1] to be clarified. Thoughts? Please have a look at the below comment before we decide whether to show the invalid entries or not.\n\n[1] - https://www.postgresql.org/message-id/CALj2ACUv%3DArQXs0U9PM3YXKCeSzJ1KxRokDY0g_0aGy--kDScA%40mail.gmail.com\n\n> BTW, even after fixing the connection-leak issue, postgres_fdw_get_connections() may see invalidated cached connections when it's called during the transaction.\n\nWe will not output if the invalidated entry has no active connection [2], so if we fix the connection leak issue with the above discussed fix, i.e. closing all the invalidated connections at the end of the next xact, there are fewer chances that we will output invalidated entries in the postgres_fdw_get_connections() output.
Whatever may be the type of foreign query (select, update, explain, delete, insert, analyze etc.), upon next call to GetUserMapping() from postgres_fdw.c, the cache lookup fails(with ERROR:  user mapping not found for \"XXXX\") since the user mapping1 has been dropped in session2 and the query will also fail before reaching GetConnection() where the connections associated with invalidated entries would have got disconnected.\n> > >\n> > > So, the connection associated with invalidated entry would remain until the local session exits which is a problem to solve.\n> > >\n> > > Use case 2:\n> > > 1) Run foreign query in session1 with server1, user mapping1\n> > > 2) Try to drop foreign server1, then we would not be allowed to do so because of dependency. If we use CASCADE, then the dependent user mapping1 and foreign tables get dropped too [1].\n> > > 3) Run foreign query again in session1, at the start of txn, the cached entry gets invalidated via pgfdw_inval_callback(), it fails because there is no foreign table and user mapping1.\n> > >\n> > > But, note that the connection remains open in session1, which is again a problem to solve.\n> > >\n> > > To solve the above connection leak problem, it looks like the right place to close all the invalid connections is pgfdw_xact_callback(), once registered, which gets called at the end of every txn in the current session(by then all the sub txns also would have been finished). Note that if there are too many invalidated entries, then one of the following txn has to bear running this extra code, but that's okay than having leaked connections. Thoughts? If okay, I can code a separate patch.\n> >\n> > Thanks for further analysis! Sounds good. Also +1 for making it as separate patch. Maybe only this patch needs to be back-patched.\n> \n> Thanks. Yeah once agreed on the fix, +1 to back patch. 
Shall I start a separate thread for connection leak issue and patch, so that others might have different thoughts??\n\nYes, of course!\n\n\n> \n> > > static void\n> > > pgfdw_xact_callback(XactEvent event, void *arg)\n> > > {\n> > >      HASH_SEQ_STATUS scan;\n> > >      ConnCacheEntry *entry;\n> > > *     /* HERE WE CAN LOOK FOR ALL INVALIDATED ENTRIES AND DISCONNECT THEM*/*\n> >\n> > This may cause the connection to be closed before sending COMMIT TRANSACTION command to the foreign server, i.e., the connection is closed in the middle of the transaction. So as you explained before, we should avoid that? If this my understanding is right, probably the connection should be closed after COMMIT TRANSACTION command is sent to the foreign server. What about changing the following code in pgfdw_xact_callback() so that it closes the connection even when it's marked as invalidated?\n> \n> You are right! I'm posting what I have in my mind for fixing this connection leak problem.\n> \n> /* tracks whether any work is needed in callback functions */\n> static bool xact_got_connection = false;\n> /* tracks whether there exists at least one invalid connection in the connection cache */\n> *static bool invalid_connections_exist = false;*\n> \n> static void\n> pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue)\n> {\n> \n>         /* hashvalue == 0 means a cache reset, must clear all state */\n>         if (hashvalue == 0 ||\n>             (cacheid == FOREIGNSERVEROID &&\n>              entry->server_hashvalue == hashvalue) ||\n>             (cacheid == USERMAPPINGOID &&\n>              entry->mapping_hashvalue == hashvalue))\n>         {\n>             entry->invalidated = true;\n> *            invalid_connections_exist = true;*\n>     }\n> \n> static void\n> pgfdw_xact_callback(XactEvent event, void *arg)\n> {\n>     HASH_SEQ_STATUS scan;\n>     ConnCacheEntry *entry;\n> \n>     /* Quick exit if no connections were touched in this transaction or there are no invalid 
connections in the cache. */\n>     if (!xact_got_connection *&& !invalid_connections_exist)*\n>         return;\n> \n>         /*\n>          * If the connection isn't in a good idle state, discard it to\n>          * recover. Next GetConnection will open a new connection.\n>          */\n>         if (PQstatus(entry->conn) != CONNECTION_OK ||\n>             PQtransactionStatus(entry->conn) != PQTRANS_IDLE ||\n>             entry->changing_xact_state ||\n> *            entry->invalidated)*\n>         {\n>             elog(DEBUG3, \"discarding connection %p\", entry->conn);\n>             disconnect_pg_server(entry);\n>         }\n> \n> /*\n> * Regardless of the event type, we can now mark ourselves as out of the\n> * transaction.  (Note: if we are here during PRE_COMMIT or PRE_PREPARE,\n> * this saves a useless scan of the hashtable during COMMIT or PREPARE.)\n> */\n> xact_got_connection = false;\n> \n> /* We are done with closing all the invalidated connections so reset. */\n> *invalid_connections_exist = false;*\n> }\n> \n> > > And we can also extend postgres_fdw_disconnect() something like.\n> > >\n> > > postgres_fdw_disconnect(bool invalid_only) --> default for invalid_only false. disconnects all connections. when invalid_only is set to true then disconnects only invalid connections.\n> > > postgres_fdw_disconnect('server_name') --> disconnections connections associated with the specified foreign server\n> > >\n> > > Having said this, I'm not in favour of invalid_only flag, because if we choose to change the code in pgfdw_xact_callback to solve connection leak problem, we may not need this invalid_only flag at all, because at the end of txn (even for the txns in which the queries fail with error, pgfdw_xact_callback gets called), all the existing invalid connections get disconnected. Thoughts?\n> >\n> > +1 not to have invalid_only flag. 
On the other hand, I think that postgres_fdw_get_connections() should return all the cached connections including invalidated ones. Otherwise, the number of connections observed via postgres_fdw_get_connections() may be different from the number of connections actually established, and which would be confusing to users.\n> >\n> \n> If postgres_fdw_get_connections() has to return invalidated connections, I have few things mentioned in [1] to be clarified. Thoughts? Please have a look at the below comment before we decide to show up the invalid entries or not.\n> \n> [1] - https://www.postgresql.org/message-id/CALj2ACUv%3DArQXs0U9PM3YXKCeSzJ1KxRokDY0g_0aGy--kDScA%40mail.gmail.com <https://www.postgresql.org/message-id/CALj2ACUv%3DArQXs0U9PM3YXKCeSzJ1KxRokDY0g_0aGy--kDScA%40mail.gmail.com>\n\nI was thinking to display the records having the columns for server name and boolean flag indicating whether it's invalidated or not. But I'm not sure if this is the best design for now. Probably we should revisit this after determining how to fix the connection-leak issue.\n\n\n> \n> > BTW, even after fixing the connection-leak issue, postgres_fdw_get_connections() may see invalidated cached connections when it's called during the transaction.\n> \n> We will not output if the invalidated entry has no active connection[2], so if we fix the connection leak issue with the above discussed fix i.e closing all the invalidated connections at the end of next xact, there are less chances that we will output invalidated entries in the postgres_fdw_get_connections() output. Only case we may show up invalidated connections(which have active connections entry->conn) in the postgres_fdw_get_connections() output is as follows:\n> \n> 1) say we have few cached active connections exists in session 1\n> 2) drop the user mapping (in another session) associated with any of the cached connections to make that entry invalid\n> 3) run select * from postgres_fdw_get_connections(); in session 1.  
At the start of the xact, the invalidation message gets processed and the corresponding entry gets marked as invalid. If we allow invalid connections (that have entry->conn) to show up in the output, then we show them in the result of the query. At the end of xact, we close these invalid connections, in this case, user might think that he still have invalid connections active.\n\nWhat about the case where the transaction started at the above 1) at session 1, and postgres_fdw_get_connections() in the above 3) is called within that transaction at session 1? In this case, postgres_fdw_get_connections() can return even invalidated connections?\n\n\n> \n> If the query ran in 3) is not postgres_fdw_get_connections() and something else, then postgres_fdw_get_connections() will never get to show invalid connections as they would have closed the connections.\n> \n> IMO, better not choose the invalid connections to show up in the postgres_fdw_get_connections() output, if we fix the connection leak issue with the above discussed fix i.e closing all the invalidated connections at the end of next xact\n> \n> [2]\n> +Datum\n> +postgres_fdw_get_connections(PG_FUNCTION_ARGS)\n> +{\n> + ArrayBuildState *astate = NULL;\n> +\n> + if (ConnectionHash)\n> + {\n> + HASH_SEQ_STATUS scan;\n> + ConnCacheEntry *entry;\n> +\n> + hash_seq_init(&scan, ConnectionHash);\n> + while ((entry = (ConnCacheEntry *) hash_seq_search(&scan)))\n> + {\n> + Form_pg_user_mapping umap;\n> + HeapTuple umaptup;\n> + Form_pg_foreign_server fsrv;\n> + HeapTuple fsrvtup;\n> +\n> + /* We only look for active and open remote connections. 
*/\n> + if (!entry->conn)\n> + continue;\n> \n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com>\n> \n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 14 Dec 2020 23:33:53 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Mon, Dec 14, 2020 at 8:03 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > We will not output if the invalidated entry has no active connection[2], so if we fix the connection leak issue with the above discussed fix i.e closing all the invalidated connections at the end of next xact, there are less chances that we will output invalidated entries in the postgres_fdw_get_connections() output. Only case we may show up invalidated connections(which have active connections entry->conn) in the postgres_fdw_get_connections() output is as follows:\n> >\n> > 1) say we have few cached active connections exists in session 1\n> > 2) drop the user mapping (in another session) associated with any of the cached connections to make that entry invalid\n> > 3) run select * from postgres_fdw_get_connections(); in session 1. At the start of the xact, the invalidation message gets processed and the corresponding entry gets marked as invalid. If we allow invalid connections (that have entry->conn) to show up in the output, then we show them in the result of the query. At the end of xact, we close these invalid connections, in this case, user might think that he still have invalid connections active.\n>\n> What about the case where the transaction started at the above 1) at session 1, and postgres_fdw_get_connections() in the above 3) is called within that transaction at session 1? 
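[Editor's note: the mark-at-invalidation, sweep-at-next-commit scheme discussed in the exchange above can be sketched as a self-contained toy model. A fixed array stands in for the real ConnectionHash, booleans stand in for entry->conn, and the event argument of the real pgfdw_xact_callback() is ignored; all `toy_` names are illustrative, not postgres_fdw's actual code.]

```c
/*
 * Toy model of the proposed fix: the invalidation callback only marks
 * entries and sets a session-wide flag; the transaction-end callback then
 * sweeps marked entries even when the ending transaction never touched a
 * foreign table, so invalidated connections cannot leak until session exit.
 */
#include <assert.h>
#include <stdbool.h>

#define TOY_MAX_ENTRIES 8

typedef struct ToyConnCacheEntry
{
    int  umid;        /* user mapping id, the hash key in the real code */
    bool has_conn;    /* stands in for entry->conn != NULL */
    bool invalidated; /* set by the invalidation callback */
} ToyConnCacheEntry;

static ToyConnCacheEntry toy_cache[TOY_MAX_ENTRIES];
static int  toy_nentries = 0;
static bool toy_xact_got_connection = false;
static bool toy_invalid_connections_exist = false;

/* GetConnection() stand-in: cache an open connection for a user mapping. */
void
toy_get_connection(int umid)
{
    toy_cache[toy_nentries].umid = umid;
    toy_cache[toy_nentries].has_conn = true;
    toy_cache[toy_nentries].invalidated = false;
    toy_nentries++;
    toy_xact_got_connection = true;
}

/* pgfdw_inval_callback() stand-in: mark matching entries, remember that
 * at least one invalid connection now exists in the cache. */
void
toy_inval_callback(int umid)
{
    for (int i = 0; i < toy_nentries; i++)
    {
        if (toy_cache[i].umid == umid)
        {
            toy_cache[i].invalidated = true;
            toy_invalid_connections_exist = true;
        }
    }
}

/* pgfdw_xact_callback() stand-in at transaction end: the quick-exit check
 * now also considers the invalid-connections flag, so the sweep runs even
 * for transactions that used no foreign connection themselves. */
void
toy_xact_callback(void)
{
    if (!toy_xact_got_connection && !toy_invalid_connections_exist)
        return;                 /* quick exit, nothing to do */

    for (int i = 0; i < toy_nentries; i++)
    {
        if (toy_cache[i].invalidated && toy_cache[i].has_conn)
            toy_cache[i].has_conn = false;  /* disconnect_pg_server() */
    }

    toy_xact_got_connection = false;
    toy_invalid_connections_exist = false;
}

/* Count connections still open, for observing the sweep's effect. */
int
toy_open_connections(void)
{
    int n = 0;

    for (int i = 0; i < toy_nentries; i++)
        if (toy_cache[i].has_conn)
            n++;
    return n;
}
```

Usage mirrors the scenario in the thread: cache two connections, end the transaction, have another session drop one user mapping (the callback fires), and the connection is gone after the next transaction ends rather than lingering until session exit.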
In this case, postgres_fdw_get_connections() can return even invalidated connections?\n\nIn that case, since the user mapping would have been dropped in\nanother session and we are in the middle of a txn in session 1, the\nentries would not get marked as invalid until the invalidation message\ngets processed by session 1, which may happen if session 1\nopens a sub txn. If not, then for postgres_fdw_get_connections() the\nentries will still be active, as they would not have been marked as\ninvalid yet, and postgres_fdw_get_connections() would return them in\nthe output.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n
At the start of the xact, the invalidation message gets processed and the corresponding entry gets marked as invalid. If we allow invalid connections (that have entry->conn) to show up in the output, then we show them in the result of the query. At the end of xact, we close these invalid connections, in this case, user might think that he still have invalid connections active.\n> >\n> > What about the case where the transaction started at the above 1) at session 1, and postgres_fdw_get_connections() in the above 3) is called within that transaction at session 1? In this case, postgres_fdw_get_connections() can return even invalidated connections?\n>\n> In that case, since the user mapping would have been dropped in\n> another session and we are in the middle of a txn in session 1, the\n> entries would not get marked as invalid until the invalidation message\n> gets processed by the session 1 which may happen if the session 1\n> opens a sub txn, if not then for postgres_fdw_get_connections() the\n> entries will still be active as they would not have been marked as\n> invalid yet and postgres_fdw_get_connections() would return them in\n> the output.\n\nOne more point for the above scenario: if the user mapping is dropped\nin another session, then cache lookup for that entry in the\npostgres_fdw_get_connections() returns a null tuple which I plan to\nnot throw an error, but just to skip in that case and continue. 
But if\nthe user mapping is not dropped in another session but altered, then\npostgres_fdw_get_connections() still can show that in the output.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Dec 2020 22:10:51 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2020/12/15 1:40, Bharath Rupireddy wrote:\n> On Mon, Dec 14, 2020, 9:47 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Mon, Dec 14, 2020 at 8:03 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>> We will not output if the invalidated entry has no active connection[2], so if we fix the connection leak issue with the above discussed fix i.e closing all the invalidated connections at the end of next xact, there are less chances that we will output invalidated entries in the postgres_fdw_get_connections() output. Only case we may show up invalidated connections(which have active connections entry->conn) in the postgres_fdw_get_connections() output is as follows:\n>>>>\n>>>> 1) say we have few cached active connections exists in session 1\n>>>> 2) drop the user mapping (in another session) associated with any of the cached connections to make that entry invalid\n>>>> 3) run select * from postgres_fdw_get_connections(); in session 1. At the start of the xact, the invalidation message gets processed and the corresponding entry gets marked as invalid. If we allow invalid connections (that have entry->conn) to show up in the output, then we show them in the result of the query. 
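[Editor's note: the skip-on-failed-lookup behavior proposed in the message above can be sketched as a toy in plain C. The lookup helper stands in for the syscache lookup that returns no tuple once the user mapping has been dropped in another session; all names and types are illustrative, not the real implementation.]

```c
/*
 * Toy sketch: when listing cached connections, an entry whose user mapping
 * lookup fails (the mapping was dropped concurrently) is silently skipped
 * instead of raising an error, as is an entry with no open connection.
 */
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef struct ToyEntry
{
    int umid;       /* user mapping id stored in the cache entry */
    int has_conn;   /* nonzero when entry->conn is open */
} ToyEntry;

typedef struct ToyUserMapping
{
    int         umid;
    const char *srvname;
    int         dropped;  /* mapping was dropped in another session */
} ToyUserMapping;

/* Syscache stand-in: NULL means the lookup found no tuple. */
const char *
toy_lookup_servername(const ToyUserMapping *maps, int nmaps, int umid)
{
    for (int i = 0; i < nmaps; i++)
        if (maps[i].umid == umid && !maps[i].dropped)
            return maps[i].srvname;
    return NULL;
}

/* postgres_fdw_get_connections() stand-in: collect server names for open
 * connections, skipping entries whose mapping lookup fails. */
int
toy_get_connections(const ToyEntry *cache, int ncache,
                    const ToyUserMapping *maps, int nmaps,
                    const char **out)
{
    int nout = 0;

    for (int i = 0; i < ncache; i++)
    {
        const char *srvname;

        if (!cache[i].has_conn)
            continue;               /* only active, open connections */

        srvname = toy_lookup_servername(maps, nmaps, cache[i].umid);
        if (srvname == NULL)
            continue;               /* mapping dropped: skip, don't error */

        out[nout++] = srvname;
    }
    return nout;
}
```

With one live mapping, one dropped mapping, and one entry without an open connection, only the live entry's server name comes back; the design choice being modeled is graceful degradation of a monitoring function rather than failing the whole query over a concurrent DDL.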
At the end of xact, we close these invalid connections, in this case, user might think that he still have invalid connections active.\n>>>\n>>> What about the case where the transaction started at the above 1) at session 1, and postgres_fdw_get_connections() in the above 3) is called within that transaction at session 1? In this case, postgres_fdw_get_connections() can return even invalidated connections?\n>>\n>> In that case, since the user mapping would have been dropped in\n>> another session and we are in the middle of a txn in session 1, the\n>> entries would not get marked as invalid until the invalidation message\n>> gets processed by the session 1 which may happen if the session 1\n\nYes, and this can happen by other commands, for example, CREATE TABLE.\n\n\n>> opens a sub txn, if not then for postgres_fdw_get_connections() the\n>> entries will still be active as they would not have been marked as\n>> invalid yet and postgres_fdw_get_connections() would return them in\n>> the output.\n> \n> One more point for the above scenario: if the user mapping is dropped\n> in another session, then cache lookup for that entry in the\n> postgres_fdw_get_connections() returns a null tuple which I plan to\n> not throw an error, but just to skip in that case and continue. 
But if\n> the user mapping is not dropped in another session but altered, then\n> postgres_fdw_get_connections() still can show that in the output.\n\nYes, so *if* we really want to return even connection invalidated by drop of\nuser mapping, the cached connection entry may need to store not only\nuser mapping id but also server id so that we can get the server name without\nuser mapping entry.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 15 Dec 2020 02:30:47 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Mon, Dec 14, 2020 at 11:00 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> > One more point for the above scenario: if the user mapping is dropped\n> > in another session, then cache lookup for that entry in the\n> > postgres_fdw_get_connections() returns a null tuple which I plan to\n> > not throw an error, but just to skip in that case and continue. 
But if\n> > the user mapping is not dropped in another session but altered, then\n> > postgres_fdw_get_connections() still can show that in the output.\n>\n> Yes, so *if* we really want to return even connection invalidated by drop of\n> user mapping, the cached connection entry may need to store not only\n> user mapping id but also server id so that we can get the server name without\n> user mapping entry.\n\nWe can do that, but what happens if the foreign server itself get\ndropped with cascade option in another session, use case is as\nfollows:\n\n1) Run a foreign query in session 1 with server 1, user mapping 1\n2) Try to drop foreign server 1, then we would not be allowed to do so\nbecause of dependency, if we use CASCADE, then the dependent user\nmapping 1 and foreign tables get dropped too.\n3) Run the postgres_fdw_get_connections(), at the start of txn, the\ncached entry gets invalidated via pgfdw_inval_callback() and we try to\nuse the stored server id of the invalid entry (for which the foreign\nserver would have been dropped) and lookup in sys catalogues, so again\na null tuple is returned.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Dec 2020 23:13:08 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Mon, Dec 14, 2020 at 8:03 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2020/12/14 14:36, Bharath Rupireddy wrote:\n> > On Mon, Dec 14, 2020 at 9:38 AM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> > > On 2020/12/12 15:05, Bharath Rupireddy wrote:\n> > > > On Sat, Dec 12, 2020 at 12:19 AM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>> 
wrote:\n> > > > > I was thinking that in the case of drop of user mapping or server, hash_search(ConnnectionHash) in GetConnection() cannot find the cached connection entry invalidated by that drop. Because \"user->umid\" used as hash key is changed. So I was thinking that that invalidated connection will not be closed nor reconnected.\n> > > > >\n> > > >\n> > > > You are right in saying that the connection leaks.\n> > > >\n> > > > Use case 1:\n> > > > 1) Run foreign query in session1 with server1, user mapping1\n> > > > 2) Drop user mapping1 in another session2, invalidation message gets logged which will have to be processed by other sessions\n> > > > 3) Run foreign query again in session1, at the start of txn, the cached entry gets invalidated via pgfdw_inval_callback(). Whatever may be the type of foreign query (select, update, explain, delete, insert, analyze etc.), upon next call to GetUserMapping() from postgres_fdw.c, the cache lookup fails(with ERROR: user mapping not found for \"XXXX\") since the user mapping1 has been dropped in session2 and the query will also fail before reaching GetConnection() where the connections associated with invalidated entries would have got disconnected.\n> > > >\n> > > > So, the connection associated with invalidated entry would remain until the local session exits which is a problem to solve.\n> > > >\n> > > > Use case 2:\n> > > > 1) Run foreign query in session1 with server1, user mapping1\n> > > > 2) Try to drop foreign server1, then we would not be allowed to do so because of dependency. 
If we use CASCADE, then the dependent user mapping1 and foreign tables get dropped too [1].\n> > > > 3) Run foreign query again in session1, at the start of txn, the cached entry gets invalidated via pgfdw_inval_callback(), it fails because there is no foreign table and user mapping1.\n> > > >\n> > > > But, note that the connection remains open in session1, which is again a problem to solve.\n> > > >\n> > > > To solve the above connection leak problem, it looks like the right place to close all the invalid connections is pgfdw_xact_callback(), once registered, which gets called at the end of every txn in the current session(by then all the sub txns also would have been finished). Note that if there are too many invalidated entries, then one of the following txn has to bear running this extra code, but that's okay than having leaked connections. Thoughts? If okay, I can code a separate patch.\n> > >\n> > > Thanks for further analysis! Sounds good. Also +1 for making it as separate patch. Maybe only this patch needs to be back-patched.\n> >\n> > Thanks. Yeah once agreed on the fix, +1 to back patch. Shall I start a separate thread for connection leak issue and patch, so that others might have different thoughts??\n>\n> Yes, of course!\n\nThanks. 
I posted the patch in a separate thread[1] for fixing the\nconnection leak problem.\n\n[1] - https://www.postgresql.org/message-id/flat/CALj2ACVNcGH_6qLY-4_tXz8JLvA%2B4yeBThRfxMz7Oxbk1aHcpQ%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Dec 2020 18:20:27 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Mon, Dec 14, 2020 at 11:13 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Mon, Dec 14, 2020 at 11:00 PM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n> > > One more point for the above scenario: if the user mapping is dropped\n> > > in another session, then cache lookup for that entry in the\n> > > postgres_fdw_get_connections() returns a null tuple which I plan to\n> > > not throw an error, but just to skip in that case and continue. 
But if\n> > > the user mapping is not dropped in another session but altered, then\n> > > postgres_fdw_get_connections() still can show that in the output.\n> >\n> > Yes, so *if* we really want to return even connection invalidated by drop of\n> > user mapping, the cached connection entry may need to store not only\n> > user mapping id but also server id so that we can get the server name without\n> > user mapping entry.\n>\n> We can do that, but what happens if the foreign server itself get\n> dropped with cascade option in another session, use case is as\n> follows:\n>\n> 1) Run a foreign query in session 1 with server 1, user mapping 1\n> 2) Try to drop foreign server 1, then we would not be allowed to do so\n> because of dependency, if we use CASCADE, then the dependent user\n> mapping 1 and foreign tables get dropped too.\n> 3) Run the postgres_fdw_get_connections(), at the start of txn, the\n> cached entry gets invalidated via pgfdw_inval_callback() and we try to\n> use the stored server id of the invalid entry (for which the foreign\n> server would have been dropped) and lookup in sys catalogues, so again\n> a null tuple is returned.\n\nHi,\n\nAny further thoughts on this would be really helpful.\n\nDiscussion here is on the point - whether to show up the invalidated\nconnections in the output of the new postgres_fdw_get_connections()\nfunction? 
If we were to show, then because of the solution we proposed\nfor the connection leak problem in [1], will the invalidated entries\nbe shown every time?\n\n[1] - https://www.postgresql.org/message-id/flat/CALj2ACVNcGH_6qLY-4_tXz8JLvA%2B4yeBThRfxMz7Oxbk1aHcpQ%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Dec 2020 16:43:18 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "Hi\r\n\r\n> Discussion here is on the point - whether to show up the invalidated\r\n> connections in the output of the new postgres_fdw_get_connections()\r\n> function? If we were to show, then because of the solution we proposed for\r\n> the connection leak problem in [1], will the invalidated entries be shown\r\n> every time?\r\n\r\nIMO, we introduced the function postgres_fdw_get_connections to decide \r\nwhether there are too many connections exists and we should disconnect them.\r\n\r\nIf User decide to disconnect, we have two cases:\r\n1. user decide to disconnect one of them, \r\n I think it’s ok for user to disconnect invalidated connection, so we'd better list the invalidated connections.\r\n\r\n2. User decide to disconnect all of them. 
In this case, \r\n It seems postgres_fdw_disconnect will disconnect both invalidated and not connections,\r\n And we should let user realize what connections they are disconnecting, so we should list the invalidated connections.\r\n\r\nBased on the above two cases, Personally, I think we can list the invalidated connections.\r\n\r\n-----\r\nI took a look into the patch, and have a little issue:\r\n\r\n+bool disconnect_cached_connections(uint32 hashvalue, bool all)\r\n+\tif (all)\r\n+\t{\r\n+\t\thash_destroy(ConnectionHash);\r\n+\t\tConnectionHash = NULL;\r\n+\t\tresult = true;\r\n+\t}\r\n\r\nIf disconnect_cached_connections is called to disconnect all the connections, \r\nshould we reset the 'xact_got_connection' flag ?\r\n\r\n\r\n> [1] -\r\n> https://www.postgresql.org/message-id/flat/CALj2ACVNcGH_6qLY-4_tXz8JLv\r\n> A%2B4yeBThRfxMz7Oxbk1aHcpQ%40mail.gmail.com\r\n\r\nThe patch about connection leak looks good to me.\r\nAnd I have the same issue about the new 'have_invalid_connections' flag,\r\nIf we disconnect all the connections, should we reset the flag ?\r\n\r\nBest regards,\r\nhouzj\r\n\r\n\n\n", "msg_date": "Thu, 17 Dec 2020 12:08:41 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Thu, Dec 17, 2020 at 5:38 PM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n> > Discussion here is on the point - whether to show up the invalidated\n> > connections in the output of the new postgres_fdw_get_connections()\n> > function? 
If we were to show, then because of the solution we proposed for\n> > the connection leak problem in [1], will the invalidated entries be shown\n> > every time?\n>\n> IMO, we introduced the function postgres_fdw_get_connections to decide\n> whether there are too many connections exists and we should disconnect them.\n>\n> If User decide to disconnect, we have two cases:\n> 1. user decide to disconnect one of them,\n> I think it’s ok for user to disconnect invalidated connection, so we'd better list the invalidated connections.\n>\n> 2. User decide to disconnect all of them. In this case,\n> It seems postgres_fdw_disconnect will disconnect both invalidated and not connections,\n> And we should let user realize what connections they are disconnecting, so we should list the invalidated connections.\n>\n> Based on the above two cases, Personlly, I think we can list the invalidated connections.\n\nI will do that. So, the output will have a list of pairs like\n(server_name, true/false), true/false is for valid/invalid connection.\n\n> -----\n> I took a look into the patch, and have a little issue:\n>\n> +bool disconnect_cached_connections(uint32 hashvalue, bool all)\n> + if (all)\n> + {\n> + hash_destroy(ConnectionHash);\n> + ConnectionHash = NULL;\n> + result = true;\n> + }\n>\n> If disconnect_cached_connections is called to disconnect all the connections,\n> should we reset the 'xact_got_connection' flag ?\n\nI think we must allow postgres_fdw_disconnect() to disconnect the\nparticular/all connections only when the corresponding entries have no\nopen txns or connections are not being used in that txn, otherwise\nnot. We may end up closing/disconnecting the connection that's still\nbeing in use because entry->xact_dept can even go more than 1 for sub\ntxns. 
See use case [1].\n\n+ if ((all || entry->server_hashvalue == hashvalue) &&\nentry->xact_depth == 0 &&\n+ entry->conn)\n+ {\n+ disconnect_pg_server(entry);\n+ result = true;\n+ }\n\nThoughts?\n\nAnd to reset the 'xact_got_connection' flag: I think we should reset\nit only when we close all the connections i.e. when all the\nconnections are at entry->xact_depth = 0, otherwise not. Same for\nhave_invalid_connections flag as well.\n\n[1] -\nBEGIN;\nSELECT 1 FROM ft1 LIMIT 1; --> server 1 entry->xact_depth is 1\nSAVEPOINT s;\nSELECT 1 FROM ft1 LIMIT 1; --> entry->xact_depth becomes 2\nSELECT postgres_fdw_disconnect()/postgres_fdw_disconnect('server 1');\n--> I think we should not close the connection as it's txn is still\nopen.\nCOMMIT;\n\n> > [1] -\n> > https://www.postgresql.org/message-id/flat/CALj2ACVNcGH_6qLY-4_tXz8JLv\n> > A%2B4yeBThRfxMz7Oxbk1aHcpQ%40mail.gmail.com\n>\n> The patch about connection leak looks good to me.\n> And I have a same issue about the new 'have_invalid_connections' flag,\n> If we disconnect all the connections, should we reset the flag ?\n\nYes as mentioned in the above comment.\n\nThanks for reviewing the connection leak patch. It will be good if the\nreview comments for the connection leak flag is provided separately in\nthat thread. 
I added it to commitfest -\nhttps://commitfest.postgresql.org/31/2882/.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Dec 2020 20:32:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On 2020-12-17 18:02, Bharath Rupireddy wrote:\n> On Thu, Dec 17, 2020 at 5:38 PM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> \n> wrote:\n>> I took a look into the patch, and have a little issue:\n>> \n>> +bool disconnect_cached_connections(uint32 hashvalue, bool all)\n>> + if (all)\n>> + {\n>> + hash_destroy(ConnectionHash);\n>> + ConnectionHash = NULL;\n>> + result = true;\n>> + }\n>> \n>> If disconnect_cached_connections is called to disconnect all the \n>> connections,\n>> should we reset the 'xact_got_connection' flag ?\n> \n> I think we must allow postgres_fdw_disconnect() to disconnect the\n> particular/all connections only when the corresponding entries have no\n> open txns or connections are not being used in that txn, otherwise\n> not. We may end up closing/disconnecting the connection that's still\n> being in use because entry->xact_dept can even go more than 1 for sub\n> txns. See use case [1].\n> \n> + if ((all || entry->server_hashvalue == hashvalue) &&\n> entry->xact_depth == 0 &&\n> + entry->conn)\n> + {\n> + disconnect_pg_server(entry);\n> + result = true;\n> + }\n> \n> Thoughts?\n> \n\nI think that you are right. Actually, I was thinking about much more \nsimple solution to this problem --- just restrict \npostgres_fdw_disconnect() to run only *outside* of explicit transaction \nblock. 
This should protect everyone from closing its underlying \nconnections, but seems to be a bit more restrictive than you propose.\n\nJust thought, that if we start closing fdw connections in the open xact \nblock:\n\n1) Close a couple of them.\n2) Found one with xact_depth > 0 and error out.\n3) End up in the mixed state: some of connections were closed, but some \nthem not, and it cannot be rolled back with the xact.\n\nIn other words, I have some doubts about allowing to call a \nnon-transactional by its nature function in the transaction block.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n", "msg_date": "Thu, 17 Dec 2020 20:02:19 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Thu, Dec 17, 2020 at 10:32 PM Alexey Kondratov\n<a.kondratov@postgrespro.ru> wrote:\n> On 2020-12-17 18:02, Bharath Rupireddy wrote:\n> > On Thu, Dec 17, 2020 at 5:38 PM Hou, Zhijie <houzj.fnst@cn.fujitsu.com>\n> > wrote:\n> >> I took a look into the patch, and have a little issue:\n> >>\n> >> +bool disconnect_cached_connections(uint32 hashvalue, bool all)\n> >> + if (all)\n> >> + {\n> >> + hash_destroy(ConnectionHash);\n> >> + ConnectionHash = NULL;\n> >> + result = true;\n> >> + }\n> >>\n> >> If disconnect_cached_connections is called to disconnect all the\n> >> connections,\n> >> should we reset the 'xact_got_connection' flag ?\n> >\n> > I think we must allow postgres_fdw_disconnect() to disconnect the\n> > particular/all connections only when the corresponding entries have no\n> > open txns or connections are not being used in that txn, otherwise\n> > not. We may end up closing/disconnecting the connection that's still\n> > being in use because entry->xact_dept can even go more than 1 for sub\n> > txns. 
See use case [1].\n> >\n> > + if ((all || entry->server_hashvalue == hashvalue) &&\n> > entry->xact_depth == 0 &&\n> > + entry->conn)\n> > + {\n> > + disconnect_pg_server(entry);\n> > + result = true;\n> > + }\n> >\n> > Thoughts?\n> >\n>\n> I think that you are right. Actually, I was thinking about much more\n> simple solution to this problem --- just restrict\n> postgres_fdw_disconnect() to run only *outside* of explicit transaction\n> block. This should protect everyone from closing its underlying\n> connections, but seems to be a bit more restrictive than you propose.\n\nAgree that it's restrictive from a usability point of view. I think\nhaving entry->xact_depth == 0 should be enough to protect from closing\nany connections that are currently in use.\n\nSay the user has called postgres_fdw_disconnect('myserver1'), if it's\ncurrently in use in that xact, then we can return false or even go\nfurther and issue a warning along with false. Also if\npostgres_fdw_disconnect() is called for closing all connections and\nany one of the connections are currently in use in the xact, then also\nwe can return: true and a warning if atleast one connection is closed\nor false and a warning if all the connections are in use.\n\nThe warning message can be something like - for the first case -\n\"could not close the server connection as it is in use\" and for the\nsecond case - \"could not close some of the connections as they are in\nuse\".\n\nThoughts?\n\n> Just thought, that if we start closing fdw connections in the open xact\n> block:\n>\n> 1) Close a couple of them.\n> 2) Found one with xact_depth > 0 and error out.\n> 3) End up in the mixed state: some of connections were closed, but some\n> them not, and it cannot be rolled back with the xact.\n\nWe don't error out, but we may issue a warning (if agreed on the above\nreponse) and return false, but definitely not an error.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": 
"Fri, 18 Dec 2020 07:20:07 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "Hi,\n\nI'm posting a v4-0001 patch for the new functions\npostgres_fdw_get_connections() and postgres_fdw_disconnect(). In this\npatch, I tried to address the review comments provided upthread.\n\nAt a high level, the changes include:\n1) Storing the foreign server id in the cache entry which will help to\nfetch the server name associated with it easily.\n2) postgres_fdw_get_connections now returns an open connection server\nname and true or false to indicate whether it's valid or not.\n3) postgres_fdw_get_connections can issue a warning when the cache\nlook up for server name returns null i.e. the foreign server is\ndropped. Please see the comments before postgres_fdw_get_connections\nin which situations this is possible.\n4) postgres_fdw_disconnect('myserver') disconnects the open connection\nonly when it's not being used in the current xact. If it's used, then\nfalse is returned and a warning is issued.\n5) postgres_fdw_disconnect() disconnects all the connections only when\nthey are not being used in the current xact. If at least one\nconnection that's being used exists, then it issues a warning and\nreturns true if at least one open connection gets closed otherwise\nfalse. 
If there are no connections made yet or connection cache is\nempty, then also false is returned.\n6) postgres_fdw_disconnect can discard the entire cache if there is no\nactive connection.\n\nThoughts?\n\nBelow things are still pending which I plan to post new patches after\nthe v4-0001 is reviewed:\n1) changing the version of postgres_fdw--1.0.sql to postgres_fdw--1.1.sql\n2) 0002 and 0003 patches having keep_connections GUC and\nkeep_connection server level option.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 30 Dec 2020 11:40:29 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On 2020-12-30 09:10, Bharath Rupireddy wrote:\n> Hi,\n> \n> I'm posting a v4-0001 patch for the new functions\n> postgres_fdw_get_connections() and postgres_fdw_disconnect(). In this\n> patch, I tried to address the review comments provided upthread.\n> \n> Thoughts?\n> \n\nI still have some doubts that it is worth of allowing to call \npostgres_fdw_disconnect() in the explicit transaction block, since it \nadds a lot of things to care and check for. But otherwise current logic \nlooks solid.\n\n+\t\t\t\t errdetail(\"Such connections get closed either in the next use or \nat the end of the current transaction.\")\n+\t\t\t\t : errdetail(\"Such connection gets closed either in the next use or \nat the end of the current transaction.\")));\n\nDoes it really have a chance to get closed on the next use? If foreign \nserver is dropped then user mapping should be dropped as well (either \nwith CASCADE or manually), but we do need user mapping for a local cache \nlookup. 
That way, if I understand all the discussion up-thread \ncorrectly, we can only close such connections at the end of xact, do we?\n\n+ * This function returns false if the cache doesn't exist.\n+ * When the cache exists:\n\nI think that this will be corrected later by pg_indent, but still. In \nthis comment section following points 1) and 2) have a different \ncombination of tabs/spaces.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company", "msg_date": "Wed, 30 Dec 2020 14:50:23 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Wed, Dec 30, 2020 at 5:20 PM Alexey Kondratov\n<a.kondratov@postgrespro.ru> wrote:\n>\n> On 2020-12-30 09:10, Bharath Rupireddy wrote:\n> > Hi,\n> >\n> > I'm posting a v4-0001 patch for the new functions\n> > postgres_fdw_get_connections() and postgres_fdw_disconnect(). In this\n> > patch, I tried to address the review comments provided upthread.\n> >\n> > Thoughts?\n> >\n>\n> I still have some doubts that it is worth of allowing to call\n> postgres_fdw_disconnect() in the explicit transaction block, since it\n> adds a lot of things to care and check for. But otherwise current logic\n> looks solid.\n>\n> + errdetail(\"Such connections get closed either in the next use or\n> at the end of the current transaction.\")\n> + : errdetail(\"Such connection gets closed either in the next use or\n> at the end of the current transaction.\")));\n>\n> Does it really have a chance to get closed on the next use? If foreign\n> server is dropped then user mapping should be dropped as well (either\n> with CASCADE or manually), but we do need user mapping for a local cache\n> lookup. 
That way, if I understand all the discussion up-thread\n> correctly, we can only close such connections at the end of xact, do we?\n\nThe next use of such a connection in the following query whose foreign\nserver would have been dropped fails because of the cascading that can\nhappen to drop the user mapping and the foreign table as well. During\nthe start of the next query such connection will be marked as\ninvalidated because xact_depth of that connection is > 1 and when the\nfail happens, txn gets aborted due to which pgfdw_xact_callback gets\ncalled and in that the connection gets closed. To make it more clear,\nplease have a look at the scenarios [1].\n\nI still feel the detailed message \"Such connections get closed either\nin the next use or at the end of the current transaction\" is\nappropriate. Please have a closer look at the possible use cases [1].\n\nAnd IMO, anyone dropping a foreign server inside an explicit txn block\nin which the foreign server was used is extremely rare, so still\nshowing this message and allowing postgres_fdw_disconnect() in\nexplicit txn block is useful. 
For all other cases the\npostgres_fdw_disconnect behaves as expected.\n\nThoughts?\n\n[1]\ncase 1:\nBEGIN;\nSELECT 1 FROM f1 LIMIT 1; --> xact_depth becomes 1\nDROP SERVER loopback1 CASCADE; --> drop cascades to the user mapping\nand the foreign table and the connection gets invalidated in\npgfdw_inval_callback because xact_depth is 1\nSELECT 1 FROM f1 LIMIT 1; --> since the failure occurs for this query\nand txn is aborted, the connection gets closed in pgfdw_xact_callback.\nSELECT * FROM postgres_fdw_get_connections(); --> txn was aborted\nSELECT * FROM postgres_fdw_disconnect(); --> txn was aborted\nCOMMIT;\n\ncase 2:\nBEGIN;\nSELECT 1 FROM f1 LIMIT 1; --> xact_depth becomes 1\nDROP SERVER loopback1 CASCADE; --> drop cascades to the user mapping\nand the foreign table and the connection gets invalidated in\npgfdw_inval_callback because xact_depth is 1\nSELECT * FROM postgres_fdw_get_connections(); --> shows the above\nwarning because foreign server name can not be fetched\nSELECT * FROM postgres_fdw_disconnect(); --> the connection can not be\nclosed here as well because xact_depth is 1, then it issues a warning\n\"cannot close any connection because they are still in use\"\nCOMMIT; --> finally the connection gets closed here in pgfdw_xact_callback.\n\ncase 3:\nSELECT 1 FROM f1 LIMIT 1;\nBEGIN;\nDROP SERVER loopback1 CASCADE; --> drop cascades to the user mapping\nand the foreign table and the connection gets closed in\npgfdw_inval_callback because xact_depth is 0\nSELECT 1 FROM f1 LIMIT 1; --> since the failure occurs for this query\nand the connection was closed previously then the txn gets aborted\nSELECT * FROM postgres_fdw_get_connections(); --> txn was aborted\nSELECT * FROM postgres_fdw_disconnect(); --> txn was aborted\nCOMMIT;\n\n> + * This function returns false if the cache doesn't exist.\n> + * When the cache exists:\n>\n> I think that this will be corrected later by pg_indent, but still. 
In\n> this comment section following points 1) and 2) have a different\n> combination of tabs/spaces.\n\nI can change that in the next version.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 30 Dec 2020 20:29:11 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On 2020-12-30 17:59, Bharath Rupireddy wrote:\n> On Wed, Dec 30, 2020 at 5:20 PM Alexey Kondratov\n> <a.kondratov@postgrespro.ru> wrote:\n>> \n>> On 2020-12-30 09:10, Bharath Rupireddy wrote:\n>> I still have some doubts that it is worth of allowing to call\n>> postgres_fdw_disconnect() in the explicit transaction block, since it\n>> adds a lot of things to care and check for. But otherwise current \n>> logic\n>> looks solid.\n>> \n>> + errdetail(\"Such connections get \n>> closed either in the next use or\n>> at the end of the current transaction.\")\n>> + : errdetail(\"Such connection gets \n>> closed either in the next use or\n>> at the end of the current transaction.\")));\n>> \n>> Does it really have a chance to get closed on the next use? If foreign\n>> server is dropped then user mapping should be dropped as well (either\n>> with CASCADE or manually), but we do need user mapping for a local \n>> cache\n>> lookup. That way, if I understand all the discussion up-thread\n>> correctly, we can only close such connections at the end of xact, do \n>> we?\n> \n> The next use of such a connection in the following query whose foreign\n> server would have been dropped fails because of the cascading that can\n> happen to drop the user mapping and the foreign table as well. 
During\n> the start of the next query such connection will be marked as\n> invalidated because xact_depth of that connection is > 1 and when the\n> fail happens, txn gets aborted due to which pgfdw_xact_callback gets\n> called and in that the connection gets closed. To make it more clear,\n> please have a look at the scenarios [1].\n> \n\nIn my understanding 'connection gets closed either in the next use' \nmeans that connection will be closed next time someone will try to use \nit, i.e. GetConnection() will be called and it closes this connection \nbecause of a bad state. However, if foreign server is dropped \nGetConnection() cannot lookup the connection because it needs a user \nmapping oid as a key.\n\nI had a look on your scenarios. IIUC, under **next use** you mean a \nselect attempt from a table belonging to the same foreign server, which \nleads to a transaction abort and connection gets closed in the xact \ncallback. Sorry, maybe I am missing something, but this just confirms \nthat such connections only get closed in the xact callback (taking into \naccount your recently committed patch [1]), so 'next use' looks \nmisleading.\n\n[1] \nhttps://www.postgresql.org/message-id/8b2aa1aa-c638-12a8-cb56-ea0f0a5019cf%40oss.nttdata.com\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n", "msg_date": "Wed, 30 Dec 2020 20:41:27 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Wed, Dec 30, 2020 at 11:11 PM Alexey Kondratov\n<a.kondratov@postgrespro.ru> wrote:\n> On 2020-12-30 17:59, Bharath Rupireddy wrote:\n> > On Wed, Dec 30, 2020 at 5:20 PM Alexey Kondratov\n> > <a.kondratov@postgrespro.ru> wrote:\n> >>\n> >> On 2020-12-30 09:10, Bharath Rupireddy wrote:\n> >> I still have some doubts that it is worth of 
allowing to call\n> >> postgres_fdw_disconnect() in the explicit transaction block, since it\n> >> adds a lot of things to care and check for. But otherwise current\n> >> logic\n> >> looks solid.\n> >>\n> >> + errdetail(\"Such connections get\n> >> closed either in the next use or\n> >> at the end of the current transaction.\")\n> >> + : errdetail(\"Such connection gets\n> >> closed either in the next use or\n> >> at the end of the current transaction.\")));\n> >>\n> >> Does it really have a chance to get closed on the next use? If foreign\n> >> server is dropped then user mapping should be dropped as well (either\n> >> with CASCADE or manually), but we do need user mapping for a local\n> >> cache\n> >> lookup. That way, if I understand all the discussion up-thread\n> >> correctly, we can only close such connections at the end of xact, do\n> >> we?\n> >\n> > The next use of such a connection in the following query whose foreign\n> > server would have been dropped fails because of the cascading that can\n> > happen to drop the user mapping and the foreign table as well. During\n> > the start of the next query such connection will be marked as\n> > invalidated because xact_depth of that connection is > 1 and when the\n> > fail happens, txn gets aborted due to which pgfdw_xact_callback gets\n> > called and in that the connection gets closed. To make it more clear,\n> > please have a look at the scenarios [1].\n> >\n>\n> In my understanding 'connection gets closed either in the next use'\n> means that connection will be closed next time someone will try to use\n> it, i.e. GetConnection() will be called and it closes this connection\n> because of a bad state. However, if foreign server is dropped\n> GetConnection() cannot lookup the connection because it needs a user\n> mapping oid as a key.\n\nRight. We don't reach GetConnection(). 
The look up in either\nGetForeignTable() or GetUserMapping() or GetForeignServer() fails (and\nso the query) depending one which one gets called first.\n\n> I had a look on your scenarios. IIUC, under **next use** you mean a\n> select attempt from a table belonging to the same foreign server, which\n> leads to a transaction abort and connection gets closed in the xact\n> callback. Sorry, maybe I am missing something, but this just confirms\n> that such connections only get closed in the xact callback (taking into\n> account your recently committed patch [1]), so 'next use' looks\n> misleading.\n>\n> [1]\n> https://www.postgresql.org/message-id/8b2aa1aa-c638-12a8-cb56-ea0f0a5019cf%40oss.nttdata.com\n\nRight. I meant the \"next use\" as the select attempt on a foreign table\nwith that foreign server. If no select query is run, then at the end\nof the current txn that connection gets closed. Yes internally such\nconnection gets closed in pgfdw_xact_callback.\n\nIf the errdetail(\"Such connections get closed either in the next use\nor at the end of the current transaction.\") looks confusing, how about\n\n1) errdetail(\"Such connection gets discarded while closing the remote\ntransaction.\")/errdetail(\"Such connections get discarded while closing\nthe remote transaction.\")\n2) errdetail(\"Such connection is discarded at the end of remote\ntransaction.\")/errdetail(\"Such connections are discarded at the end of\nremote transaction.\")\n\nI prefer 2) Thoughts?\n\nBecause we already print a message in pgfdw_xact_callback -\nelog(DEBUG3, \"closing remote transaction on connection %p\"\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 31 Dec 2020 08:29:09 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Thu, Dec 31, 
2020 at 8:29 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Right. I meant the \"next use\" as the select attempt on a foreign table\n> with that foreign server. If no select query is run, then at the end\n> of the current txn that connection gets closed. Yes internally such\n> connection gets closed in pgfdw_xact_callback.\n>\n> If the errdetail(\"Such connections get closed either in the next use\n> or at the end of the current transaction.\") looks confusing, how about\n>\n> 1) errdetail(\"Such connection gets discarded while closing the remote\n> transaction.\")/errdetail(\"Such connections get discarded while closing\n> the remote transaction.\")\n> 2) errdetail(\"Such connection is discarded at the end of remote\n> transaction.\")/errdetail(\"Such connections are discarded at the end of\n> remote transaction.\")\n>\n> I prefer 2) Thoughts?\n>\n> Because we already print a message in pgfdw_xact_callback -\n> elog(DEBUG3, \"closing remote transaction on connection %p\"\n\nI changed the message to \"Such connection is discarded at the end of\nremote transaction.\".\n\nI'm attaching v5 patch set i.e. all the patches 0001 ( for new\nfunctions), 0002 ( for GUC) and 0003 (for server level option). I have\nalso made the changes for increasing the version of\npostgres_fdw--1.0.sql from 1.0 to 1.1.\n\nI have no open points from my end. Please consider the v5 patch set\nfor further review.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 1 Jan 2021 15:33:33 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "Hi, Bharath:\n\nHappy new year.\n\n+ appendStringInfo(&buf, \"(%s, %s)\", server->servername,\n+ entry->invalidated ? 
\"false\" : \"true\");\n\nIs it better to use 'invalidated' than 'false' in the string ?\n\nFor the first if block of postgres_fdw_disconnect():\n\n+ * Check if the connection associated with the given foreign server\nis\n+ * in use i.e. entry->xact_depth > 0. Since we can not close it, so\n+ * error out.\n+ */\n+ if (is_in_use)\n+ ereport(WARNING,\n\nsince is_in_use is only set in the if (server) block, I think the above\nwarning can be moved into that block.\n\nCheers\n\nOn Fri, Jan 1, 2021 at 2:04 AM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Thu, Dec 31, 2020 at 8:29 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Right. I meant the \"next use\" as the select attempt on a foreign table\n> > with that foreign server. If no select query is run, then at the end\n> > of the current txn that connection gets closed. Yes internally such\n> > connection gets closed in pgfdw_xact_callback.\n> >\n> > If the errdetail(\"Such connections get closed either in the next use\n> > or at the end of the current transaction.\") looks confusing, how about\n> >\n> > 1) errdetail(\"Such connection gets discarded while closing the remote\n> > transaction.\")/errdetail(\"Such connections get discarded while closing\n> > the remote transaction.\")\n> > 2) errdetail(\"Such connection is discarded at the end of remote\n> > transaction.\")/errdetail(\"Such connections are discarded at the end of\n> > remote transaction.\")\n> >\n> > I prefer 2) Thoughts?\n> >\n> > Because we already print a message in pgfdw_xact_callback -\n> > elog(DEBUG3, \"closing remote transaction on connection %p\"\n>\n> I changed the message to \"Such connection is discarded at the end of\n> remote transaction.\".\n>\n> I'm attaching v5 patch set i.e. all the patches 0001 ( for new\n> functions), 0002 ( for GUC) and 0003 (for server level option). 
I have\n> also made the changes for increasing the version of\n> postgres_fdw--1.0.sql from 1.0 to 1.1.\n>\n> I have no open points from my end. Please consider the v5 patch set\n> for further review.\n>\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\nHi, Bharath:Happy new year.+       appendStringInfo(&buf, \"(%s, %s)\", server->servername,+                        entry->invalidated ? \"false\" : \"true\");Is it better to use 'invalidated' than 'false' in the string ?For the first if block of postgres_fdw_disconnect():+        * Check if the connection associated with the given foreign server is+        * in use i.e. entry->xact_depth > 0. Since we can not close it, so+        * error out.+        */+       if (is_in_use)+           ereport(WARNING,since is_in_use is only set in the if (server) block, I think the above warning can be moved into that block.CheersOn Fri, Jan 1, 2021 at 2:04 AM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:On Thu, Dec 31, 2020 at 8:29 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Right. I meant the \"next use\" as the select attempt on a foreign table\n> with that foreign server. If no select query is run, then at the end\n> of the current txn that connection gets closed. 
Yes internally such\n> connection gets closed in pgfdw_xact_callback.\n>\n> If the errdetail(\"Such connections get closed either in the next use\n> or at the end of the current transaction.\") looks confusing, how about\n>\n> 1) errdetail(\"Such connection gets discarded while closing the remote\n> transaction.\")/errdetail(\"Such connections get discarded while closing\n> the remote transaction.\")\n> 2) errdetail(\"Such connection is discarded at the end of remote\n> transaction.\")/errdetail(\"Such connections are discarded at the end of\n> remote transaction.\")\n>\n> I prefer 2)  Thoughts?\n>\n> Because we already print a message in pgfdw_xact_callback -\n> elog(DEBUG3, \"closing remote transaction on connection %p\"\n\nI changed the message to \"Such connection is discarded at the end of\nremote transaction.\".\n\nI'm attaching v5 patch set i.e. all the patches 0001 ( for new\nfunctions), 0002 ( for GUC) and 0003 (for server level option). I have\nalso made the changes for increasing the version of\npostgres_fdw--1.0.sql from 1.0 to 1.1.\n\nI have no open points from my end. Please consider the v5 patch set\nfor further review.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 1 Jan 2021 08:06:15 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "Thanks for taking a look at the patches.\n\nOn Fri, Jan 1, 2021 at 9:35 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> Happy new year.\n>\n> + appendStringInfo(&buf, \"(%s, %s)\", server->servername,\n> + entry->invalidated ? 
\"false\" : \"true\");\n>\n> Is it better to use 'invalidated' than 'false' in the string ?\n\nThis point was earlier discussed in [1] and [2], but the agreement was\non having true/false [2] because of a simple reason specified in [1],\nthat is when some users have foreign server names as invalid or valid,\nthen the output is difficult to interpret which one is what. With\nhaving true/false, it's easier. IMO, let's keep the true/false as is,\nsince it's also suggested in [2].\n\n[1] - https://www.postgresql.org/message-id/CALj2ACUv%3DArQXs0U9PM3YXKCeSzJ1KxRokDY0g_0aGy--kDScA%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/6da38393-6ae5-4d87-2690-11c932123403%40oss.nttdata.com\n\n> For the first if block of postgres_fdw_disconnect():\n>\n> + * Check if the connection associated with the given foreign server is\n> + * in use i.e. entry->xact_depth > 0. Since we can not close it, so\n> + * error out.\n> + */\n> + if (is_in_use)\n> + ereport(WARNING,\n>\n> since is_in_use is only set in the if (server) block, I think the above warning can be moved into that block.\n\nModified that a bit. Since we error out when no server object is\nfound, then no need of keeping the code in else part. We could save on\nsome indentation\n\n+ if (!server)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_CONNECTION_DOES_NOT_EXIST),\n+ errmsg(\"foreign server \\\"%s\\\" does not exist\",\nservername)));\n+\n+ hashvalue = GetSysCacheHashValue1(FOREIGNSERVEROID,\n+ ObjectIdGetDatum(server->serverid));\n+ result = disconnect_cached_connections(hashvalue, false, &is_in_use);\n+\n+ /*\n+ * Check if the connection associated with the given foreign server is\n+ * in use i.e. entry->xact_depth > 0. Since we can not close it, so\n+ * error out.\n+ */\n+ if (is_in_use)\n+ ereport(WARNING,\n+ (errmsg(\"cannot close the connection because it\nis still in use\")));\n\nAttaching v6 patch set. 
Please have a look.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 2 Jan 2021 10:53:40 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Sat, Jan 2, 2021 at 10:53 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Thanks for taking a look at the patches.\n>\n> On Fri, Jan 1, 2021 at 9:35 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > Happy new year.\n> >\n> > + appendStringInfo(&buf, \"(%s, %s)\", server->servername,\n> > + entry->invalidated ? \"false\" : \"true\");\n> >\n> > Is it better to use 'invalidated' than 'false' in the string ?\n>\n> This point was earlier discussed in [1] and [2], but the agreement was\n> on having true/false [2] because of a simple reason specified in [1],\n> that is when some users have foreign server names as invalid or valid,\n> then the output is difficult to interpret which one is what. With\n> having true/false, it's easier. IMO, let's keep the true/false as is,\n> since it's also suggested in [2].\n>\n> [1] - https://www.postgresql.org/message-id/CALj2ACUv%3DArQXs0U9PM3YXKCeSzJ1KxRokDY0g_0aGy--kDScA%40mail.gmail.com\n> [2] - https://www.postgresql.org/message-id/6da38393-6ae5-4d87-2690-11c932123403%40oss.nttdata.com\n>\n> > For the first if block of postgres_fdw_disconnect():\n> >\n> > + * Check if the connection associated with the given foreign server is\n> > + * in use i.e. entry->xact_depth > 0. Since we can not close it, so\n> > + * error out.\n> > + */\n> > + if (is_in_use)\n> > + ereport(WARNING,\n> >\n> > since is_in_use is only set in the if (server) block, I think the above warning can be moved into that block.\n>\n> Modified that a bit. Since we error out when no server object is\n> found, then no need of keeping the code in else part. 
We could save on\n> some indentation\n>\n> + if (!server)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_CONNECTION_DOES_NOT_EXIST),\n> + errmsg(\"foreign server \\\"%s\\\" does not exist\",\n> servername)));\n> +\n> + hashvalue = GetSysCacheHashValue1(FOREIGNSERVEROID,\n> + ObjectIdGetDatum(server->serverid));\n> + result = disconnect_cached_connections(hashvalue, false, &is_in_use);\n> +\n> + /*\n> + * Check if the connection associated with the given foreign server is\n> + * in use i.e. entry->xact_depth > 0. Since we can not close it, so\n> + * error out.\n> + */\n> + if (is_in_use)\n> + ereport(WARNING,\n> + (errmsg(\"cannot close the connection because it\n> is still in use\")));\n>\n> Attaching v6 patch set. Please have a look.\n\nI'm sorry for the mess. I missed adding the new files into the v6-0001\npatch. Please ignore the v6 patch set and consider the v7 patch set for\nfurther review. Note that 0002 and 0003 patches have no difference\nfrom v5 patch set.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 2 Jan 2021 11:19:17 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Sat, Jan 2, 2021 at 11:19 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> I'm sorry for the mess. I missed adding the new files into the v6-0001\n> patch. Please ignore the v6 patch set and consider the v7 patch set for\n> further review. Note that 0002 and 0003 patches have no difference\n> from v5 patch set.\n\nIt seems like cf bot was failing on v7 patches. On Linux, it fails\nwhile building documentation in the 0001 patch; I corrected that. On\nFreeBSD, it fails in one of the test cases I added; since it was\nunstable, I corrected it now.\n\nAttaching v8 patch set.
Hopefully, cf bot will be happy with v8.\n\nPlease consider the v8 patch set for further review.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 5 Jan 2021 13:26:14 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/05 16:56, Bharath Rupireddy wrote:\n> On Sat, Jan 2, 2021 at 11:19 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> I'm sorry for the mess. I missed adding the new files into the v6-0001\n>> patch. Please ignore the v6 patch set and consder the v7 patch set for\n>> further review. Note that 0002 and 0003 patches have no difference\n>> from v5 patch set.\n> \n> It seems like cf bot was failing on v7 patches. On Linux, it fails\n> while building documentation in 0001 patch, I corrected that. On\n> FreeBSD, it fails in one of the test cases I added, since it was\n> unstable, I corrected it now.\n> \n> Attaching v8 patch set. 
Hopefully, cf bot will be happy with v8.\n> \n> Please consider the v8 patch set for further review.\n\nThanks for the patch!\n\n-DATA = postgres_fdw--1.0.sql\n+DATA = postgres_fdw--1.1.sql postgres_fdw--1.0--1.1.sql\n\nShouldn't we leave 1.0.sql as it is and create 1.0--1.1.sql so that\nwe can run the followings?\n\n CREATE EXTENSION postgres_fdw VERSION \"1.0\";\n ALTER EXTENSION postgres_fdw UPDATE TO \"1.1\";\n\n\n+<sect2>\n+ <title>Functions</title>\n\nThe document format for functions should be consistent with\nthat in other contrib module like pgstattuple?\n\n\n+ When called in the local session, it returns an array with each element as a\n+ pair of the foreign server names of all the open connections that are\n+ previously made to the foreign servers and <literal>true</literal> or\n+ <literal>false</literal> to show whether or not the connection is valid.\n\nWe thought that the information about whether the connection is valid or\nnot was useful to, for example, identify and close explicitly the long-living\ninvalid connections because they were useless. But thanks to the recent\nbug fix for connection leak issue, that information would be no longer\nso helpful for us? False is returned only when the connection is used in\nthis local transaction but it's marked as invalidated. In this case that\nconnection cannot be explicitly closed because it's used in this transaction.\nIt will be closed at the end of transaction. Thought?\n\n\nI guess that you made postgres_fdw_get_connections() return the array\nbecause the similar function dblink_get_connections() does that. 
But\nisn't it more convenient to make that return the set of records?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 7 Jan 2021 13:19:37 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Thu, Jan 7, 2021 at 9:49 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2021/01/05 16:56, Bharath Rupireddy wrote:\n> > Attaching v8 patch set. Hopefully, cf bot will be happy with v8.\n> >\n> > Please consider the v8 patch set for further review.\n> -DATA = postgres_fdw--1.0.sql\n> +DATA = postgres_fdw--1.1.sql postgres_fdw--1.0--1.1.sql\n>\n> Shouldn't we leave 1.0.sql as it is and create 1.0--1.1.sql so that\n> we can run the followings?\n>\n> CREATE EXTENSION postgres_fdw VERSION \"1.0\";\n> ALTER EXTENSION postgres_fdw UPDATE TO \"1.1\";\n\nYes we can. In that case, to use the new functions users have to\nupdate postgres_fdw to 1.1, in that case, do we need to mention in the\ndocumentation that to make use of the new functions, update\npostgres_fdw to version 1.1?\n\nWith the above change, the contents of postgres_fdw--1.0.sql remain as\nis and in postgres_fdw--1.0--1.1.sql we will have only the new\nfunction statements:\n\n/* contrib/postgres_fdw/postgres_fdw--1.0--1.1.sql */\n\n-- complain if script is sourced in psql, rather than via ALTER EXTENSION\n\\echo Use \"ALTER EXTENSION postgres_fdw UPDATE TO '1.1'\" to load this\nfile. 
\\quit\n\nCREATE FUNCTION postgres_fdw_get_connections ()\nRETURNS text[]\nAS 'MODULE_PATHNAME','postgres_fdw_get_connections'\nLANGUAGE C STRICT PARALLEL RESTRICTED;\n\nCREATE FUNCTION postgres_fdw_disconnect ()\nRETURNS bool\nAS 'MODULE_PATHNAME','postgres_fdw_disconnect'\nLANGUAGE C STRICT PARALLEL RESTRICTED;\n\nCREATE FUNCTION postgres_fdw_disconnect (text)\nRETURNS bool\nAS 'MODULE_PATHNAME','postgres_fdw_disconnect'\nLANGUAGE C STRICT PARALLEL RESTRICTED;\n\n> +<sect2>\n> + <title>Functions</title>\n>\n> The document format for functions should be consistent with\n> that in other contrib module like pgstattuple?\n\npgstattuple has so many columns to show up in output because of that\nthey have a table listing all the output columns and their types. The\nnew functions introduced here have only one or none input and an\noutput. I think, we don't need a table listing the input and output\nnames and types.\n\nIMO, we can have something similar to what pg_visibility_map has for\nfunctions, and also an example for each function showing how it can be\nused. Thoughts?\n\n> + When called in the local session, it returns an array with each element as a\n> + pair of the foreign server names of all the open connections that are\n> + previously made to the foreign servers and <literal>true</literal> or\n> + <literal>false</literal> to show whether or not the connection is valid.\n>\n> We thought that the information about whether the connection is valid or\n> not was useful to, for example, identify and close explicitly the long-living\n> invalid connections because they were useless. But thanks to the recent\n> bug fix for connection leak issue, that information would be no longer\n> so helpful for us? False is returned only when the connection is used in\n> this local transaction but it's marked as invalidated. In this case that\n> connection cannot be explicitly closed because it's used in this transaction.\n> It will be closed at the end of transaction. 
Thought?\n\nYes, connection's validity can be false only when the connection gets\ninvalidated and postgres_fdw_get_connections is called within an\nexplicit txn i.e. begin; commit;. In implicit txn, since the\ninvalidated connections get closed either during invalidation callback\nor at the end of txn, postgres_fdw_get_connections will always show\nvalid connections. Having said that, I still feel we need the\ntrue/false for valid/invalid in the output of the\npostgres_fdw_get_connections, otherwise we might miss giving\nconnection validity information to the user in a very narrow use case\nof explicit txn. If required, we can issue a warning whenever we see\nan invalid connection saying \"invalid connections connections are\ndiscarded at the end of remote transaction\". Thoughts?\n\n> I guess that you made postgres_fdw_get_connections() return the array\n> because the similar function dblink_get_connections() does that. But\n> isn't it more convenient to make that return the set of records?\n\nYes, for postgres_fdw_get_connections we can return a set of records\nof (server_name, valid). To do so, I can refer to dblink_get_pkey.\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 Jan 2021 13:51:40 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/07 17:21, Bharath Rupireddy wrote:\n> On Thu, Jan 7, 2021 at 9:49 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> On 2021/01/05 16:56, Bharath Rupireddy wrote:\n>>> Attaching v8 patch set. 
Hopefully, cf bot will be happy with v8.\n>>>\n>>> Please consider the v8 patch set for further review.\n>> -DATA = postgres_fdw--1.0.sql\n>> +DATA = postgres_fdw--1.1.sql postgres_fdw--1.0--1.1.sql\n>>\n>> Shouldn't we leave 1.0.sql as it is and create 1.0--1.1.sql so that\n>> we can run the followings?\n>>\n>> CREATE EXTENSION postgres_fdw VERSION \"1.0\";\n>> ALTER EXTENSION postgres_fdw UPDATE TO \"1.1\";\n> \n> Yes we can. In that case, to use the new functions users have to\n> update postgres_fdw to 1.1, in that case, do we need to mention in the\n> documentation that to make use of the new functions, update\n> postgres_fdw to version 1.1?\n\nBut since postgres_fdw.control indicates that the default version is 1.1,\n\"CREATE EXTENSION postgres_fdw\" installs v1.1. So basically the users\ndon't need to update postgres_fdw from v1.0 to v1.1. Only the users of\nv1.0 need to update that to v1.1 to use new functions. No?\n\n\n> \n> With the above change, the contents of postgres_fdw--1.0.sql remain as\n> is and in postgres_fdw--1.0--1.1.sql we will have only the new\n> function statements:\n\nYes.\n\n\n> \n> /* contrib/postgres_fdw/postgres_fdw--1.0--1.1.sql */\n> \n> -- complain if script is sourced in psql, rather than via ALTER EXTENSION\n> \\echo Use \"ALTER EXTENSION postgres_fdw UPDATE TO '1.1'\" to load this\n> file. 
\\quit\n> \n> CREATE FUNCTION postgres_fdw_get_connections ()\n> RETURNS text[]\n> AS 'MODULE_PATHNAME','postgres_fdw_get_connections'\n> LANGUAGE C STRICT PARALLEL RESTRICTED;\n> \n> CREATE FUNCTION postgres_fdw_disconnect ()\n> RETURNS bool\n> AS 'MODULE_PATHNAME','postgres_fdw_disconnect'\n> LANGUAGE C STRICT PARALLEL RESTRICTED;\n> \n> CREATE FUNCTION postgres_fdw_disconnect (text)\n> RETURNS bool\n> AS 'MODULE_PATHNAME','postgres_fdw_disconnect'\n> LANGUAGE C STRICT PARALLEL RESTRICTED;\n> \n>> +<sect2>\n>> + <title>Functions</title>\n>>\n>> The document format for functions should be consistent with\n>> that in other contrib module like pgstattuple?\n> \n> pgstattuple has so many columns to show up in output because of that\n> they have a table listing all the output columns and their types. The\n> new functions introduced here have only one or none input and an\n> output. I think, we don't need a table listing the input and output\n> names and types.\n> \n> IMO, we can have something similar to what pg_visibility_map has for\n> functions, and also an example for each function showing how it can be\n> used. Thoughts?\n\nSounds good.\n\n\n> \n>> + When called in the local session, it returns an array with each element as a\n>> + pair of the foreign server names of all the open connections that are\n>> + previously made to the foreign servers and <literal>true</literal> or\n>> + <literal>false</literal> to show whether or not the connection is valid.\n>>\n>> We thought that the information about whether the connection is valid or\n>> not was useful to, for example, identify and close explicitly the long-living\n>> invalid connections because they were useless. But thanks to the recent\n>> bug fix for connection leak issue, that information would be no longer\n>> so helpful for us? False is returned only when the connection is used in\n>> this local transaction but it's marked as invalidated. 
In this case that\n>> connection cannot be explicitly closed because it's used in this transaction.\n>> It will be closed at the end of transaction. Thought?\n> \n> Yes, connection's validity can be false only when the connection gets\n> invalidated and postgres_fdw_get_connections is called within an\n> explicit txn i.e. begin; commit;. In implicit txn, since the\n> invalidated connections get closed either during invalidation callback\n> or at the end of txn, postgres_fdw_get_connections will always show\n> valid connections. Having said that, I still feel we need the\n> true/false for valid/invalid in the output of the\n> postgres_fdw_get_connections, otherwise we might miss giving\n> connection validity information to the user in a very narrow use case\n> of explicit txn.\n\nUnderstood. I withdraw my suggestion and am fine to display\nvalid/invalid information.\n\n\n> If required, we can issue a warning whenever we see\n> an invalid connection saying \"invalid connections connections are\n> discarded at the end of remote transaction\". Thoughts?\n\nIMO it's overkill to emit such warinng message because that\nsituation is normal one. OTOH, it seems worth documenting that.\n\n\n> \n>> I guess that you made postgres_fdw_get_connections() return the array\n>> because the similar function dblink_get_connections() does that. But\n>> isn't it more convenient to make that return the set of records?\n> \n> Yes, for postgres_fdw_get_connections we can return a set of records\n> of (server_name, valid). 
To do so, I can refer to dblink_get_pkey.\n> Thoughts?\n\nYes.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 8 Jan 2021 10:59:56 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Fri, Jan 8, 2021 at 7:29 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2021/01/07 17:21, Bharath Rupireddy wrote:\n> > On Thu, Jan 7, 2021 at 9:49 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >> On 2021/01/05 16:56, Bharath Rupireddy wrote:\n> >>> Attaching v8 patch set. Hopefully, cf bot will be happy with v8.\n> >>>\n> >>> Please consider the v8 patch set for further review.\n> >> -DATA = postgres_fdw--1.0.sql\n> >> +DATA = postgres_fdw--1.1.sql postgres_fdw--1.0--1.1.sql\n> >>\n> >> Shouldn't we leave 1.0.sql as it is and create 1.0--1.1.sql so that\n> >> we can run the followings?\n> >>\n> >> CREATE EXTENSION postgres_fdw VERSION \"1.0\";\n> >> ALTER EXTENSION postgres_fdw UPDATE TO \"1.1\";\n> >\n> > Yes we can. In that case, to use the new functions users have to\n> > update postgres_fdw to 1.1, in that case, do we need to mention in the\n> > documentation that to make use of the new functions, update\n> > postgres_fdw to version 1.1?\n>\n> But since postgres_fdw.control indicates that the default version is 1.1,\n> \"CREATE EXTENSION postgres_fdw\" installs v1.1. So basically the users\n> don't need to update postgres_fdw from v1.0 to v1.1. Only the users of\n> v1.0 need to update that to v1.1 to use new functions. 
No?\n\nIt works this way:\nscenario 1:\n1) create extension postgres_fdw; --> this is run before our feature\ni.e. default_version 1.0\n2) after the feature i.e. default_version 1.1, users can run alter\nextension postgres_fdw update to "1.1"; which gets the new functions\nfrom postgres_fdw--1.0--1.1.sql.\n\nscenario 2:\n1) create extension postgres_fdw; --> this is run after our feature\ni.e. default_version 1.1, then the new functions will be installed with\ncreate extension itself, no need to run alter update to get the\nfunctions.\n\nI will make the changes and post a new patch set soon.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 8 Jan 2021 09:55:38 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Fri, Jan 8, 2021 at 9:55 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> I will make the changes and post a new patch set soon.\n\nAttaching v9 patch set that has addressed the review comments on the\ndisconnect function returning setof records, documentation changes,\nand postgres_fdw--1.0-1.1.sql changes.\n\nPlease consider the v9 patch set for further review.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 9 Jan 2021 06:42:26 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On 2021/01/09 10:12, Bharath Rupireddy wrote:\n> On Fri, Jan 8, 2021 at 9:55 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> I will make the changes and post a new patch set soon.\n> \n> Attaching v9 patch set that has addressed the review
comments on the\n> disconnect function returning setof records, documentation changes,\n> and postgres_fdw--1.0-1.1.sql changes.\n> \n> Please consider the v9 patch set for further review.\n\nThanks for updating the patch! I reviewed only 0001 patch.\n\n+\t/*\n+\t * Quick exit if the cache has been destroyed in\n+\t * disconnect_cached_connections.\n+\t */\n+\tif (!ConnectionHash)\n+\t\treturn;\n\nThis code is not necessary at least in pgfdw_xact_callback() and\npgfdw_subxact_callback()? Because those functions check\n\"if (!xact_got_connection)\" before checking the above condition.\n\n-\tif (!HeapTupleIsValid(tup))\n-\t\telog(ERROR, \"cache lookup failed for user mapping %u\", entry->key);\n-\tumform = (Form_pg_user_mapping) GETSTRUCT(tup);\n-\tserver = GetForeignServer(umform->umserver);\n-\tReleaseSysCache(tup);\n+\tserver = GetForeignServer(entry->serverid);\n\nWhat about applying only the change about serverid, as a separate patch at\nfirst? This change itself is helpful to get rid of error \"cache lookup failed\"\nin pgfdw_reject_incomplete_xact_state_change(). Patch attached.\n\n+\t\tserver = GetForeignServerExtended(entry->serverid, true);\n\nSince the type of second argument in GetForeignServerExtended() is bits16,\nit's invalid to specify \"true\" there?\n\n+\tif (no_server_conn_cnt > 0)\n+\t{\n+\t\tereport(WARNING,\n+\t\t\t\t(errmsg_plural(\"found an active connection for which the foreign server would have been dropped\",\n+\t\t\t\t\t\t\t \"found some active connections for which the foreign servers would have been dropped\",\n+\t\t\t\t\t\t\t no_server_conn_cnt),\n+\t\t\t\t no_server_conn_cnt > 1 ?\n+\t\t\t\t errdetail(\"Such connections are discarded at the end of remote transaction.\")\n+\t\t\t\t : errdetail(\"Such connection is discarded at the end of remote transaction.\")));\n\nAt least for me, I like returning such connections with \"NULL\" in server_name\ncolumn and \"false\" in valid column, rather than emitting a warning. 
Because\nwhich would enable us to count the number of actual foreign connections\neasily by using SQL, for example.\n\n+\t * During the first call, we initialize the function context, get the list\n+\t * of active connections using get_connections and store this in the\n+\t * function's memory context so that it can live multiple calls.\n+\t */\n+\tif (SRF_IS_FIRSTCALL())\n\nI guess that you used value-per-call mode to make the function return\na set result since you refered to dblink_get_pkey(). But isn't it better to\nuse materialize mode like dblink_get_notify() does rather than\nvalue-per-call because this function returns not so many records? ISTM\nthat we can simplify postgres_fdw_get_connections() by using materialize mode.\n\n+\t\thash_destroy(ConnectionHash);\n+\t\tConnectionHash = NULL;\n\nIf GetConnection() is called after ConnectionHash is destroyed,\nit initialize the hashtable and registers some callback functions again\neven though the same function have already been registered. This causes\nsame function to be registered as a callback more than once. 
This is\na bug.\n\n+CREATE FUNCTION postgres_fdw_disconnect ()\n\nDo we really want postgres_fdw_disconnect() with no argument?\nIMO postgres_fdw_disconnect() with the server name specified is enough.\nBut I'd like to hear the opinion about that.\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 14 Jan 2021 19:22:19 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Thu, Jan 14, 2021 at 3:52 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> - if (!HeapTupleIsValid(tup))\n> - elog(ERROR, \"cache lookup failed for user mapping %u\", entry->key);\n> - umform = (Form_pg_user_mapping) GETSTRUCT(tup);\n> - server = GetForeignServer(umform->umserver);\n> - ReleaseSysCache(tup);\n> + server = GetForeignServer(entry->serverid);\n>\n> What about applying only the change about serverid, as a separate patch at\n> first? This change itself is helpful to get rid of error \"cache lookup failed\"\n> in pgfdw_reject_incomplete_xact_state_change(). Patch attached.\n\nRight, we can get rid of the \"cache lookup failed for user mapping\"\nerror and also storing server oid in the cache entry is helpful for\nthe new functions we are going to introduce.\n\nserverid_v1.patch looks good to me. 
Both make check and make\ncheck-world passes on my system.\n\nI will respond to other comments soon.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Jan 2021 17:06:07 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/14 20:36, Bharath Rupireddy wrote:\n> On Thu, Jan 14, 2021 at 3:52 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> - if (!HeapTupleIsValid(tup))\n>> - elog(ERROR, \"cache lookup failed for user mapping %u\", entry->key);\n>> - umform = (Form_pg_user_mapping) GETSTRUCT(tup);\n>> - server = GetForeignServer(umform->umserver);\n>> - ReleaseSysCache(tup);\n>> + server = GetForeignServer(entry->serverid);\n>>\n>> What about applying only the change about serverid, as a separate patch at\n>> first? This change itself is helpful to get rid of error \"cache lookup failed\"\n>> in pgfdw_reject_incomplete_xact_state_change(). Patch attached.\n> \n> Right, we can get rid of the \"cache lookup failed for user mapping\"\n> error and also storing server oid in the cache entry is helpful for\n> the new functions we are going to introduce.\n> \n> serverid_v1.patch looks good to me. Both make check and make\n> check-world passes on my system.\n\nThanks for the check! 
I pushed the patch.\n \n> I will respond to other comments soon.\n\nThanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 15 Jan 2021 10:32:53 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Thu, Jan 14, 2021 at 3:52 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2021/01/09 10:12, Bharath Rupireddy wrote:\n> > On Fri, Jan 8, 2021 at 9:55 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >> I will make the changes and post a new patch set soon.\n> >\n> > Attaching v9 patch set that has addressed the review comments on the\n> > disconnect function returning setof records, documentation changes,\n> > and postgres_fdw--1.0-1.1.sql changes.\n> >\n> > Please consider the v9 patch set for further review.\n>\n> Thanks for updating the patch! I reviewed only 0001 patch.\n>\n> + /*\n> + * Quick exit if the cache has been destroyed in\n> + * disconnect_cached_connections.\n> + */\n> + if (!ConnectionHash)\n> + return;\n>\n> This code is not necessary at least in pgfdw_xact_callback() and\n> pgfdw_subxact_callback()? Because those functions check\n> \"if (!xact_got_connection)\" before checking the above condition.\n\nYes, if xact_got_connection is true, then ConnectionHash wouldn't have\nbeen cleaned up in disconnect_cached_connections. +1 to remove that in\npgfdw_xact_callback and pgfdw_subxact_callback. 
But we need that check\nin pgfdw_inval_callback, because we may reach there after\nConnectionHash is destroyed and set to NULL in\ndisconnect_cached_connections.\n\n> + server = GetForeignServerExtended(entry->serverid, true);\n>\n> Since the type of second argument in GetForeignServerExtended() is bits16,\n> it's invalid to specify \"true\" there?\n\nYeah. I will change it to be something like below:\nbits16 flags = FSV_MISSING_OK;\nserver = GetForeignServerExtended(entry->serverid, flags);\n\n> + if (no_server_conn_cnt > 0)\n> + {\n> + ereport(WARNING,\n> + (errmsg_plural(\"found an active connection for which the foreign server would have been dropped\",\n> + \"found some active connections for which the foreign servers would have been dropped\",\n> + no_server_conn_cnt),\n> + no_server_conn_cnt > 1 ?\n> + errdetail(\"Such connections are discarded at the end of remote transaction.\")\n> + : errdetail(\"Such connection is discarded at the end of remote transaction.\")));\n>\n> At least for me, I like returning such connections with \"NULL\" in server_name\n> column and \"false\" in valid column, rather than emitting a warning. Because\n> which would enable us to count the number of actual foreign connections\n> easily by using SQL, for example.\n\n+1. I was also of the similar opinion about this initially. I will change this.\n\n> + * During the first call, we initialize the function context, get the list\n> + * of active connections using get_connections and store this in the\n> + * function's memory context so that it can live multiple calls.\n> + */\n> + if (SRF_IS_FIRSTCALL())\n>\n> I guess that you used value-per-call mode to make the function return\n> a set result since you refered to dblink_get_pkey(). But isn't it better to\n> use materialize mode like dblink_get_notify() does rather than\n> value-per-call because this function returns not so many records? ISTM\n> that we can simplify postgres_fdw_get_connections() by using materialize mode.\n\nYeah. 
+1 I will change it to use materialize mode.\n\n> + hash_destroy(ConnectionHash);\n> + ConnectionHash = NULL;\n>\n> If GetConnection() is called after ConnectionHash is destroyed,\n> it initialize the hashtable and registers some callback functions again\n> even though the same function have already been registered. This causes\n> same function to be registered as a callback more than once. This is\n> a bug.\n\nYeah, we will register the same callbacks many times. I'm thinking to\nhave something like below:\n\nstatic bool conn_cache_destroyed = false;\n\n if (!active_conn_exists)\n {\n hash_destroy(ConnectionHash);\n ConnectionHash = NULL;\n conn_cache_destroyed = true;\n }\n\n /*\n * Register callback functions that manage connection cleanup. This\n * should be done just once in each backend. We don't register the\n * callbacks again, if the connection cache is destroyed at least once\n * in the backend.\n */\n if (!conn_cache_destroyed)\n {\n RegisterXactCallback(pgfdw_xact_callback, NULL);\n RegisterSubXactCallback(pgfdw_subxact_callback, NULL);\n CacheRegisterSyscacheCallback(FOREIGNSERVEROID,\n pgfdw_inval_callback, (Datum) 0);\n CacheRegisterSyscacheCallback(USERMAPPINGOID,\n pgfdw_inval_callback, (Datum) 0);\n }\n\nThoughts?\n\n> +CREATE FUNCTION postgres_fdw_disconnect ()\n>\n> Do we really want postgres_fdw_disconnect() with no argument?\n> IMO postgres_fdw_disconnect() with the server name specified is enough.\n> But I'd like to hear the opinion about that.\n\nIMO, we should have that. 
Though a bit impractical use case, if we\nhave many connections which are not being used and want to disconnect\nthem at once, this function will be useful.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 16 Jan 2021 10:36:52 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Sat, Jan 16, 2021 at 10:36 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > Please consider the v9 patch set for further review.\n> >\n> > Thanks for updating the patch! I reviewed only 0001 patch.\n\nI addressed the review comments and attached v10 patch set. Please\nconsider it for further review.\n\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sun, 17 Jan 2021 13:09:27 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "Hi,\n\nThis patch introduces new function postgres_fdw_disconnect() when\ncalled with a foreign server name discards the associated\nconnections with the server name.\n\nI think the following would read better:\n\nThis patch introduces *a* new function postgres_fdw_disconnect(). 
When\ncalled with a foreign server name, it discards the associated\nconnections with the server.\n\nPlease note the removal of the 'name' at the end - connection is with\nserver, not server name.\n\n+ if (is_in_use)\n+ ereport(WARNING,\n+ (errmsg(\"cannot close the connection because it is\nstill in use\")));\n\nIt would be better to include servername in the message.\n\n+ ereport(WARNING,\n+ (errmsg(\"cannot close all connections because some\nof them are still in use\")));\n\nI think showing the number of active connections would be more informative.\nThis can be achieved by changing active_conn_exists from bool to int (named\nactive_conns, e.g.):\n\n+ if (entry->conn && !active_conn_exists)\n+ active_conn_exists = true;\n\nInstead of setting the bool value, active_conns can be incremented.\n\nCheers\n\nOn Sat, Jan 16, 2021 at 11:39 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Sat, Jan 16, 2021 at 10:36 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > Please consider the v9 patch set for further review.\n> > >\n> > > Thanks for updating the patch! I reviewed only 0001 patch.\n>\n> I addressed the review comments and attached v10 patch set. Please\n> consider it for further review.\n>\n>\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n>
", "msg_date": "Sun, 17 Jan 2021 10:02:15 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Sun, Jan 17, 2021 at 11:30 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> This patch introduces new function postgres_fdw_disconnect() when\n> called with a foreign server name discards the associated\n> connections with the server name.\n>\n> I think the following would read better:\n>\n> This patch introduces a new function postgres_fdw_disconnect(). 
When\n> called with a foreign server name, it discards the associated\n> connections with the server.\n\nThanks. I corrected the commit message.\n\n> Please note the removal of the 'name' at the end - connection is with server, not server name.\n>\n> + if (is_in_use)\n> + ereport(WARNING,\n> + (errmsg(\"cannot close the connection because it is still in use\")));\n>\n> It would be better to include servername in the message.\n\nUser would have provided the servername in\npostgres_fdw_disconnect('myserver'), I don't think we need to emit the\nwarning again with the servername. The existing warning seems fine.\n\n> + ereport(WARNING,\n> + (errmsg(\"cannot close all connections because some of them are still in use\")));\n>\n> I think showing the number of active connections would be more informative.\n> This can be achieved by changing active_conn_exists from bool to int (named active_conns, e.g.):\n>\n> + if (entry->conn && !active_conn_exists)\n> + active_conn_exists = true;\n>\n> Instead of setting the bool value, active_conns can be incremented.\n\nIMO, the number of active connections is not informative, because\nusers can not do anything with them. What's actually more informative\nwould be to list all the server names for which the connections are\nactive, instead of the warning - \"cannot close all connections because\nsome of them are still in use\". Having said that, I feel like it's an\noverkill for now to do that. If required, we can enhance the warnings\nin future. Thoughts?\n\nAttaching v11 patch set, with changes only in 0001. 
The changes are\ncommit message correction and moved the warning related code to\ndisconnect_cached_connections from postgres_fdw_disconnect.\n\nPlease review v11 further.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 18 Jan 2021 09:03:56 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On 2021/01/18 12:33, Bharath Rupireddy wrote:\n> On Sun, Jan 17, 2021 at 11:30 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>> This patch introduces new function postgres_fdw_disconnect() when\n>> called with a foreign server name discards the associated\n>> connections with the server name.\n>>\n>> I think the following would read better:\n>>\n>> This patch introduces a new function postgres_fdw_disconnect(). When\n>> called with a foreign server name, it discards the associated\n>> connections with the server.\n> \n> Thanks. I corrected the commit message.\n> \n>> Please note the removal of the 'name' at the end - connection is with server, not server name.\n>>\n>> + if (is_in_use)\n>> + ereport(WARNING,\n>> + (errmsg(\"cannot close the connection because it is still in use\")));\n>>\n>> It would be better to include servername in the message.\n> \n> User would have provided the servername in\n> postgres_fdw_disconnect('myserver'), I don't think we need to emit the\n> warning again with the servername. 
The existing warning seems fine.\n> \n>> + ereport(WARNING,\n>> + (errmsg(\"cannot close all connections because some of them are still in use\")));\n>>\n>> I think showing the number of active connections would be more informative.\n>> This can be achieved by changing active_conn_exists from bool to int (named active_conns, e.g.):\n>>\n>> + if (entry->conn && !active_conn_exists)\n>> + active_conn_exists = true;\n>>\n>> Instead of setting the bool value, active_conns can be incremented.\n> \n> IMO, the number of active connections is not informative, because\n> users can not do anything with them. What's actually more informative\n> would be to list all the server names for which the connections are\n> active, instead of the warning - \"cannot close all connections because\n> some of them are still in use\". Having said that, I feel like it's an\n> overkill for now to do that. If required, we can enhance the warnings\n> in future. Thoughts?\n> \n> Attaching v11 patch set, with changes only in 0001. The changes are\n> commit message correction and moved the warning related code to\n> disconnect_cached_connections from postgres_fdw_disconnect.\n> \n> Please review v11 further.\n\nThanks for updating the patch!\n\nThe patch for postgres_fdw_get_connections() basically looks good to me.\nSo at first I'd like to push it. 
Attached is the patch that I extracted\npostgres_fdw_get_connections() part from 0001 patch and tweaked.\nThought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Mon, 18 Jan 2021 13:08:07 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Mon, Jan 18, 2021 at 9:38 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > Please review v11 further.\n>\n> Thanks for updating the patch!\n>\n> The patch for postgres_fdw_get_connections() basically looks good to me.\n> So at first I'd like to push it. Attached is the patch that I extracted\n> postgres_fdw_get_connections() part from 0001 patch and tweaked.\n> Thought?\n\nThanks.\n\nWe need to create the loopback3 with user mapping public, otherwise\nthe test might become unstable as shown below. Note that loopback and\nloopback2 are not dropped in the test, so no problem with them.\n\n ALTER SERVER loopback OPTIONS (ADD use_remote_estimate 'off');\n DROP SERVER loopback3 CASCADE;\n NOTICE: drop cascades to 2 other objects\n-DETAIL: drop cascades to user mapping for postgres on server loopback3\n+DETAIL: drop cascades to user mapping for bharath on server loopback3\n\nAttaching v2 patch for postgres_fdw_get_connections. 
Please have a look.\n\nI will post patches for the other function postgres_fdw_disconnect,\nGUC and server level option later.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 18 Jan 2021 10:16:40 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "> We need to create the loopback3 with user mapping public, otherwise the\r\n> test might become unstable as shown below. Note that loopback and\r\n> loopback2 are not dropped in the test, so no problem with them.\r\n> \r\n> ALTER SERVER loopback OPTIONS (ADD use_remote_estimate 'off'); DROP\r\n> SERVER loopback3 CASCADE;\r\n> NOTICE: drop cascades to 2 other objects\r\n> -DETAIL: drop cascades to user mapping for postgres on server loopback3\r\n> +DETAIL: drop cascades to user mapping for bharath on server loopback3\r\n> \r\n> Attaching v2 patch for postgres_fdw_get_connections. Please have a look.\r\nHi\r\n\r\nI have a comment for the doc about postgres_fdw_get_connections.\r\n\r\n+ <term><function>postgres_fdw_get_connections(OUT server_name text, OUT valid boolean) returns setof record</function></term>\r\n+ <listitem>\r\n+ <para>\r\n+ This function returns the foreign server names of all the open\r\n+ connections that <filename>postgres_fdw</filename> established from\r\n+ the local session to the foreign servers. It also returns whether\r\n+ each connection is valid or not. <literal>false</literal> is returned\r\n+ if the foreign server connection is used in the current local\r\n+ transaction but its foreign server or user mapping is changed or\r\n+ dropped, and then such invalid connection will be closed at\r\n+ the end of that transaction. <literal>true</literal> is returned\r\n+ otherwise. 
If there are no open connections, no record is returned.\r\n+ Example usage of the function:\r\n\r\nThe doc does not seem to mention the case when the function returns NULL in server_name.\r\nUsers may be a little confused about why NULL was returned.\r\n\r\nBest regards,\r\nhouzj\r\n\n\n", "msg_date": "Mon, 18 Jan 2021 06:02:29 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/18 13:46, Bharath Rupireddy wrote:\n> On Mon, Jan 18, 2021 at 9:38 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>> Please review v11 further.\n>>\n>> Thanks for updating the patch!\n>>\n>> The patch for postgres_fdw_get_connections() basically looks good to me.\n>> So at first I'd like to push it. Attached is the patch that I extracted\n>> postgres_fdw_get_connections() part from 0001 patch and tweaked.\n>> Thought?\n> \n> Thanks.\n> \n> We need to create the loopback3 with user mapping public, otherwise\n> the test might become unstable as shown below. Note that loopback and\n> loopback2 are not dropped in the test, so no problem with them.\n> \n> ALTER SERVER loopback OPTIONS (ADD use_remote_estimate 'off');\n> DROP SERVER loopback3 CASCADE;\n> NOTICE: drop cascades to 2 other objects\n> -DETAIL: drop cascades to user mapping for postgres on server loopback3\n> +DETAIL: drop cascades to user mapping for bharath on server loopback3\n> \n> Attaching v2 patch for postgres_fdw_get_connections. Please have a look.\n\nThanks! You're right. 
I pushed the v2 patch.\n\n\n> I will post patches for the other function postgres_fdw_disconnect,\n> GUC and server level option later.\n\nThanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 18 Jan 2021 15:14:05 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/18 15:02, Hou, Zhijie wrote:\n>> We need to create the loopback3 with user mapping public, otherwise the\n>> test might become unstable as shown below. Note that loopback and\n>> loopback2 are not dropped in the test, so no problem with them.\n>>\n>> ALTER SERVER loopback OPTIONS (ADD use_remote_estimate 'off'); DROP\n>> SERVER loopback3 CASCADE;\n>> NOTICE: drop cascades to 2 other objects\n>> -DETAIL: drop cascades to user mapping for postgres on server loopback3\n>> +DETAIL: drop cascades to user mapping for bharath on server loopback3\n>>\n>> Attaching v2 patch for postgres_fdw_get_connections. Please have a look.\n> Hi\n> \n> I have a comment for the doc about postgres_fdw_get_connections.\n> \n> + <term><function>postgres_fdw_get_connections(OUT server_name text, OUT valid boolean) returns setof record</function></term>\n> + <listitem>\n> + <para>\n> + This function returns the foreign server names of all the open\n> + connections that <filename>postgres_fdw</filename> established from\n> + the local session to the foreign servers. It also returns whether\n> + each connection is valid or not. <literal>false</literal> is returned\n> + if the foreign server connection is used in the current local\n> + transaction but its foreign server or user mapping is changed or\n> + dropped, and then such invalid connection will be closed at\n> + the end of that transaction. 
<literal>true</literal> is returned\n> + otherwise. If there are no open connections, no record is returned.\n> + Example usage of the function:\n> \n> The doc seems does not memtion the case when the function returns NULL in server_name.\n> Users may be a little confused about why NULL was returned.\n\nYes, so what about adding\n\n (Note that the returned server name of invalid connection is NULL if its server is dropped)\n\ninto the following (just after \"dropped\")?\n\n+ if the foreign server connection is used in the current local\n+ transaction but its foreign server or user mapping is changed or\n+ dropped\n\nOr better description?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 18 Jan 2021 15:28:04 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Mon, Jan 18, 2021 at 11:58 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/01/18 15:02, Hou, Zhijie wrote:\n> >> We need to create the loopback3 with user mapping public, otherwise the\n> >> test might become unstable as shown below. Note that loopback and\n> >> loopback2 are not dropped in the test, so no problem with them.\n> >>\n> >> ALTER SERVER loopback OPTIONS (ADD use_remote_estimate 'off'); DROP\n> >> SERVER loopback3 CASCADE;\n> >> NOTICE: drop cascades to 2 other objects\n> >> -DETAIL: drop cascades to user mapping for postgres on server loopback3\n> >> +DETAIL: drop cascades to user mapping for bharath on server loopback3\n> >>\n> >> Attaching v2 patch for postgres_fdw_get_connections. 
Please have a look.\n> > Hi\n> >\n> > I have a comment for the doc about postgres_fdw_get_connections.\n> >\n> > + <term><function>postgres_fdw_get_connections(OUT server_name text, OUT valid boolean) returns setof record</function></term>\n> > + <listitem>\n> > + <para>\n> > + This function returns the foreign server names of all the open\n> > + connections that <filename>postgres_fdw</filename> established from\n> > + the local session to the foreign servers. It also returns whether\n> > + each connection is valid or not. <literal>false</literal> is returned\n> > + if the foreign server connection is used in the current local\n> > + transaction but its foreign server or user mapping is changed or\n> > + dropped, and then such invalid connection will be closed at\n> > + the end of that transaction. <literal>true</literal> is returned\n> > + otherwise. If there are no open connections, no record is returned.\n> > + Example usage of the function:\n> >\n> > The doc seems does not memtion the case when the function returns NULL in server_name.\n> > Users may be a little confused about why NULL was returned.\n>\n> Yes, so what about adding\n>\n> (Note that the returned server name of invalid connection is NULL if its server is dropped)\n>\n> into the following (just after \"dropped\")?\n>\n> + if the foreign server connection is used in the current local\n> + transaction but its foreign server or user mapping is changed or\n> + dropped\n>\n> Or better description?\n\n+1 to add it after \"dropped (Note ........)\", how about as follows\nwith slight changes?\n\ndropped (Note that server name of an invalid connection can be NULL if\nthe server is dropped), and then such .....\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Jan 2021 12:07:53 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause 
remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/18 15:37, Bharath Rupireddy wrote:\n> On Mon, Jan 18, 2021 at 11:58 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2021/01/18 15:02, Hou, Zhijie wrote:\n>>>> We need to create the loopback3 with user mapping public, otherwise the\n>>>> test might become unstable as shown below. Note that loopback and\n>>>> loopback2 are not dropped in the test, so no problem with them.\n>>>>\n>>>> ALTER SERVER loopback OPTIONS (ADD use_remote_estimate 'off'); DROP\n>>>> SERVER loopback3 CASCADE;\n>>>> NOTICE: drop cascades to 2 other objects\n>>>> -DETAIL: drop cascades to user mapping for postgres on server loopback3\n>>>> +DETAIL: drop cascades to user mapping for bharath on server loopback3\n>>>>\n>>>> Attaching v2 patch for postgres_fdw_get_connections. Please have a look.\n>>> Hi\n>>>\n>>> I have a comment for the doc about postgres_fdw_get_connections.\n>>>\n>>> + <term><function>postgres_fdw_get_connections(OUT server_name text, OUT valid boolean) returns setof record</function></term>\n>>> + <listitem>\n>>> + <para>\n>>> + This function returns the foreign server names of all the open\n>>> + connections that <filename>postgres_fdw</filename> established from\n>>> + the local session to the foreign servers. It also returns whether\n>>> + each connection is valid or not. <literal>false</literal> is returned\n>>> + if the foreign server connection is used in the current local\n>>> + transaction but its foreign server or user mapping is changed or\n>>> + dropped, and then such invalid connection will be closed at\n>>> + the end of that transaction. <literal>true</literal> is returned\n>>> + otherwise. 
If there are no open connections, no record is returned.\n>>> + Example usage of the function:\n>>>\n>>> The doc seems does not memtion the case when the function returns NULL in server_name.\n>>> Users may be a little confused about why NULL was returned.\n>>\n>> Yes, so what about adding\n>>\n>> (Note that the returned server name of invalid connection is NULL if its server is dropped)\n>>\n>> into the following (just after \"dropped\")?\n>>\n>> + if the foreign server connection is used in the current local\n>> + transaction but its foreign server or user mapping is changed or\n>> + dropped\n>>\n>> Or better description?\n> \n> +1 to add it after \"dropped (Note ........)\", how about as follows\n> with slight changes?\n> \n> dropped (Note that server name of an invalid connection can be NULL if\n> the server is dropped), and then such .....\n\nYes, I like this one. One question is; \"should\" or \"is\" is better than\n\"can\" in this case because the server name of invalid connection is\nalways NULL when its server is dropped?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 18 Jan 2021 21:47:15 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Mon, Jan 18, 2021 at 6:17 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > +1 to add it after \"dropped (Note ........)\", how about as follows\n> > with slight changes?\n> >\n> > dropped (Note that server name of an invalid connection can be NULL if\n> > the server is dropped), and then such .....\n>\n> Yes, I like this one. 
One question is; \"should\" or \"is\" is better than\n> \"can\" in this case because the server name of invalid connection is\n> always NULL when its server is dropped?\n\nI think \"dropped (Note that server name of an invalid connection will\nbe NULL if the server is dropped), and then such .....\"\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Jan 2021 18:33:42 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Mon, Jan 18, 2021 at 11:44 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> > I will post patches for the other function postgres_fdw_disconnect,\n> > GUC and server level option later.\n>\n> Thanks!\n\nAttaching v12 patch set. 0001 is for postgres_fdw_disconnect()\nfunction, 0002 is for keep_connections GUC and 0003 is for\nkeep_connection server level option.\n\nPlease review it further.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 18 Jan 2021 19:44:30 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/18 23:14, Bharath Rupireddy wrote:\n> On Mon, Jan 18, 2021 at 11:44 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>>> I will post patches for the other function postgres_fdw_disconnect,\n>>> GUC and server level option later.\n>>\n>> Thanks!\n> \n> Attaching v12 patch set. 
0001 is for postgres_fdw_disconnect()\n> function, 0002 is for keep_connections GUC and 0003 is for\n> keep_connection server level option.\n\nThanks!\n\n> \n> Please review it further.\n\n+\t\tserver = GetForeignServerByName(servername, true);\n+\n+\t\tif (!server)\n+\t\t\tereport(ERROR,\n+\t\t\t\t\t(errcode(ERRCODE_CONNECTION_DOES_NOT_EXIST),\n+\t\t\t\t\t errmsg(\"foreign server \\\"%s\\\" does not exist\", servername)));\n\nISTM we can simplify this code as follows.\n\n server = GetForeignServerByName(servername, false);\n\n\n+\thash_seq_init(&scan, ConnectionHash);\n+\twhile ((entry = (ConnCacheEntry *) hash_seq_search(&scan)))\n\nWhen the server name is specified, even if its connection is successfully\nclosed, postgres_fdw_disconnect() scans through all the entries to check\nwhether there are active connections. But if \"result\" is true and\nactive_conn_exists is true, we can get out of this loop to avoid unnecessary\nscans.\n\n\n+\t/*\n+\t * Destroy the cache if we discarded all active connections i.e. if there\n+\t * is no single active connection, which we can know while scanning the\n+\t * cached entries in the above loop. Destroying the cache is better than to\n+\t * keep it in the memory with all inactive entries in it to save some\n+\t * memory. Cache can get initialized on the subsequent queries to foreign\n+\t * server.\n\nHow much memory is assumed to be saved by destroying the cache in\nmany cases? I'm not sure if it's really worth destroying the cache to save\nthe memory.\n\n\n+ a warning is issued and <literal>false</literal> is returned. <literal>false</literal>\n+ is returned when there are no open connections. When there are some open\n+ connections, but there is no connection for the given foreign server,\n+ then <literal>false</literal> is returned. When no foreign server exists\n+ with the given name, an error is emitted. 
Example usage of the function:\n\nWhen a non-existent server name is specified, postgres_fdw_disconnect()\nemits an error if there is at least one open connection, but just returns\nfalse otherwise. At least for me, this behavior looks inconsistent and strange.\nIn that case, IMO the function always should emit an error.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 19 Jan 2021 00:41:29 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On 2021/01/18 22:03, Bharath Rupireddy wrote:\n> On Mon, Jan 18, 2021 at 6:17 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>> +1 to add it after \"dropped (Note ........)\", how about as follows\n>>> with slight changes?\n>>>\n>>> dropped (Note that server name of an invalid connection can be NULL if\n>>> the server is dropped), and then such .....\n>>\n>> Yes, I like this one. One question is; \"should\" or \"is\" is better than\n>> \"can\" in this case because the server name of invalid connection is\n>> always NULL when its server is dropped?\n> \n> I think \"dropped (Note that server name of an invalid connection will\n> be NULL if the server is dropped), and then such .....\"\n\nSounds good to me.
So patch attached.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 19 Jan 2021 00:57:55 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "> >>> +1 to add it after \"dropped (Note ........)\", how about as follows\r\n> >>> with slight changes?\r\n> >>>\r\n> >>> dropped (Note that server name of an invalid connection can be NULL\r\n> >>> if the server is dropped), and then such .....\r\n> >>\r\n> >> Yes, I like this one. One question is; \"should\" or \"is\" is better\r\n> >> than \"can\" in this case because the server name of invalid connection\r\n> >> is always NULL when its server is dropped?\r\n> >\r\n> > I think \"dropped (Note that server name of an invalid connection will\r\n> > be NULL if the server is dropped), and then such .....\"\r\n> \r\n> Sounds good to me. So patch attached.\r\n\r\n+1\r\n\r\nBest regards,\r\nhouzj\r\n\n\n", "msg_date": "Tue, 19 Jan 2021 00:53:41 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Mon, Jan 18, 2021 at 9:11 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > Attaching v12 patch set. 
0001 is for postgres_fdw_disconnect()\n> > function, 0002 is for keep_connections GUC and 0003 is for\n> > keep_connection server level option.\n>\n> Thanks!\n>\n> >\n> > Please review it further.\n>\n> + server = GetForeignServerByName(servername, true);\n> +\n> + if (!server)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_CONNECTION_DOES_NOT_EXIST),\n> + errmsg(\"foreign server \\\"%s\\\" does not exist\", servername)));\n>\n> ISTM we can simplify this code as follows.\n>\n> server = GetForeignServerByName(servername, false);\n\nDone.\n\n> + hash_seq_init(&scan, ConnectionHash);\n> + while ((entry = (ConnCacheEntry *) hash_seq_search(&scan)))\n>\n> When the server name is specified, even if its connection is successfully\n> closed, postgres_fdw_disconnect() scans through all the entries to check\n> whether there are active connections. But if \"result\" is true and\n> active_conn_exists is true, we can get out of this loop to avoid unnecessary\n> scans.\n\nMy initial thought was that it's possible to have two entries with the\nsame foreign server name but with different user mappings, looks like\nit's not possible. I tried associating a foreign server with two\ndifferent user mappings [1], then the cache entry is getting\nassociated initially with the user mapping that comes first in the\npg_user_mappings, if this user mapping is dropped then the cache entry\ngets invalidated, so next time the second user mapping is used.\n\nSince there's no way we can have two cache entries with the same\nforeign server name, we can get out of the loop when we find the cache\nentry match with the given server. 
I made the changes.\n\n[1]\npostgres=# select * from pg_user_mappings ;\n umid | srvid | srvname | umuser | usename | umoptions\n-------+-------+-----------+--------+---------+-----------\n 16395 | 16394 | loopback1 | 10 | bharath | -----> cache entry\nis initially made with this user mapping.\n 16399 | 16394 | loopback1 | 0 | public | -----> if the\nabove user mapping is dropped, then the cache entry is made with this\nuser mapping.\n\n> + /*\n> + * Destroy the cache if we discarded all active connections i.e. if there\n> + * is no single active connection, which we can know while scanning the\n> + * cached entries in the above loop. Destroying the cache is better than to\n> + * keep it in the memory with all inactive entries in it to save some\n> + * memory. Cache can get initialized on the subsequent queries to foreign\n> + * server.\n>\n> How much memory is assumed to be saved by destroying the cache in\n> many cases? I'm not sure if it's really worth destroying the cache to save\n> the memory.\n\nI removed the cache destroying code, if somebody complains in\nfuture(after the feature commit), we can really revisit then.\n\n> + a warning is issued and <literal>false</literal> is returned. <literal>false</literal>\n> + is returned when there are no open connections. When there are some open\n> + connections, but there is no connection for the given foreign server,\n> + then <literal>false</literal> is returned. When no foreign server exists\n> + with the given name, an error is emitted. Example usage of the function:\n>\n> When a non-existent server name is specified, postgres_fdw_disconnect()\n> emits an error if there is at least one open connection, but just returns\n> false otherwise. 
At least for me, this behavior looks inconsistent and strange.\n> In that case, IMO the function always should emit an error.\n\nDone.\n\nAttaching v13 patch set, please review it further.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 19 Jan 2021 08:39:47 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/19 9:53, Hou, Zhijie wrote:\n>>>>> +1 to add it after \"dropped (Note ........)\", how about as follows\n>>>>> with slight changes?\n>>>>>\n>>>>> dropped (Note that server name of an invalid connection can be NULL\n>>>>> if the server is dropped), and then such .....\n>>>>\n>>>> Yes, I like this one. One question is; \"should\" or \"is\" is better\n>>>> than \"can\" in this case because the server name of invalid connection\n>>>> is always NULL when its server is dropped?\n>>>\n>>> I think \"dropped (Note that server name of an invalid connection will\n>>> be NULL if the server is dropped), and then such .....\"\n>>\n>> Sounds good to me. So patch attached.\n> \n> +1\n\nThanks! I pushed the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 19 Jan 2021 15:06:40 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/19 12:09, Bharath Rupireddy wrote:\n> On Mon, Jan 18, 2021 at 9:11 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>> Attaching v12 patch set.
0001 is for postgres_fdw_disconnect()\n>>> function, 0002 is for keep_connections GUC and 0003 is for\n>>> keep_connection server level option.\n>>\n>> Thanks!\n>>\n>>>\n>>> Please review it further.\n>>\n>> + server = GetForeignServerByName(servername, true);\n>> +\n>> + if (!server)\n>> + ereport(ERROR,\n>> + (errcode(ERRCODE_CONNECTION_DOES_NOT_EXIST),\n>> + errmsg(\"foreign server \\\"%s\\\" does not exist\", servername)));\n>>\n>> ISTM we can simplify this code as follows.\n>>\n>> server = GetForeignServerByName(servername, false);\n> \n> Done.\n> \n>> + hash_seq_init(&scan, ConnectionHash);\n>> + while ((entry = (ConnCacheEntry *) hash_seq_search(&scan)))\n>>\n>> When the server name is specified, even if its connection is successfully\n>> closed, postgres_fdw_disconnect() scans through all the entries to check\n>> whether there are active connections. But if \"result\" is true and\n>> active_conn_exists is true, we can get out of this loop to avoid unnecessary\n>> scans.\n> \n> My initial thought was that it's possible to have two entries with the\n> same foreign server name but with different user mappings, looks like\n> it's not possible. I tried associating a foreign server with two\n> different user mappings [1], then the cache entry is getting\n> associated initially with the user mapping that comes first in the\n> pg_user_mappings, if this user mapping is dropped then the cache entry\n> gets invalidated, so next time the second user mapping is used.\n> \n> Since there's no way we can have two cache entries with the same\n> foreign server name, we can get out of the loop when we find the cache\n> entry match with the given server. I made the changes.\n\nSo, furthermore, we can use hash_search() to find the target cached\nconnection, instead of using hash_seq_search(), when the server name\nis given. This would simplify the code a bit more? 
Of course,\nhash_seq_search() is necessary when closing all the connections, though.\n\n\n> \n> [1]\n> postgres=# select * from pg_user_mappings ;\n> umid | srvid | srvname | umuser | usename | umoptions\n> -------+-------+-----------+--------+---------+-----------\n> 16395 | 16394 | loopback1 | 10 | bharath | -----> cache entry\n> is initially made with this user mapping.\n> 16399 | 16394 | loopback1 | 0 | public | -----> if the\n> above user mapping is dropped, then the cache entry is made with this\n> user mapping.\n> \n>> + /*\n>> + * Destroy the cache if we discarded all active connections i.e. if there\n>> + * is no single active connection, which we can know while scanning the\n>> + * cached entries in the above loop. Destroying the cache is better than to\n>> + * keep it in the memory with all inactive entries in it to save some\n>> + * memory. Cache can get initialized on the subsequent queries to foreign\n>> + * server.\n>>\n>> How much memory is assumed to be saved by destroying the cache in\n>> many cases? I'm not sure if it's really worth destroying the cache to save\n>> the memory.\n> \n> I removed the cache destroying code, if somebody complains in\n> future(after the feature commit), we can really revisit then.\n> \n>> + a warning is issued and <literal>false</literal> is returned. <literal>false</literal>\n>> + is returned when there are no open connections. When there are some open\n>> + connections, but there is no connection for the given foreign server,\n>> + then <literal>false</literal> is returned. When no foreign server exists\n>> + with the given name, an error is emitted. Example usage of the function:\n>>\n>> When a non-existent server name is specified, postgres_fdw_disconnect()\n>> emits an error if there is at least one open connection, but just returns\n>> false otherwise. 
At least for me, this behavior looks inconsistent and strange.\n>> In that case, IMO the function always should emit an error.\n> \n> Done.\n> \n> Attaching v13 patch set, please review it further.\n\nThanks!\n\n+ *\t2) If no input argument is provided, then it tries to disconnect all the\n+ *\t connections.\n\nI'm concerned that users can easily forget to specify the argument and\naccidentally discard all the connections. So, IMO, to alleviate this situation,\nwhat about changing the function name (only when closing all the connections)\nto something postgres_fdw_disconnect_all(), like we have\npg_advisory_unlock_all() against pg_advisory_unlock()?\n\n+\t\t\tif (result)\n+\t\t\t{\n+\t\t\t\t/* We closed at least one connection, others are in use. */\n+\t\t\t\tereport(WARNING,\n+\t\t\t\t\t\t(errmsg(\"cannot close all connections because some of them are still in use\")));\n+\t\t\t}\n\nSorry if this was already discussed upthread. Isn't it more helpful to\nemit a warning for every connection that fails to be closed?
For example,\n\nWARNING: cannot close connection for server \"loopback1\" because it is still in use\nWARNING: cannot close connection for server \"loopback2\" because it is still in use\nWARNING: cannot close connection for server \"loopback3\" because it is still in use\n...\n\nThis enables us to identify easily which server connections cannot be\nclosed for now.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 20 Jan 2021 15:23:52 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Wed, Jan 20, 2021 at 11:53 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> So, furthermore, we can use hash_search() to find the target cached\n> connection, instead of using hash_seq_search(), when the server name\n> is given. This would simplify the code a bit more? Of course,\n> hash_seq_search() is necessary when closing all the connections, though.\n\nNote that the cache entry key is user mapping oid and to use\nhash_search() we need the user mapping oid. But in\npostgres_fdw_disconnect we can get server oid and we can also get user\nmapping id using GetUserMapping, but it requires\nGetUserId()/CurrentUserId as an input, I doubt we will have problems\nif CurrentUserId is changed somehow with the change of current user in\nthe session. And user mapping may be dropped but still the connection\ncan exist if it's in use, in that case GetUserMapping fails in cache\nlookup.\n\nAnd yes, disconnecting all connections requires hash_seq_search().\n\nKeeping above in mind, I feel we can do hash_seq_search(), as we do\ncurrently, even when the server name is given as input. 
This way, we\ndon't need to bother much on the above points.\n\nThoughts?\n\n> + * 2) If no input argument is provided, then it tries to disconnect all the\n> + * connections.\n>\n> I'm concerned that users can easily forget to specify the argument and\n> accidentally discard all the connections. So, IMO, to alleviate this situation,\n> what about changing the function name (only when closing all the connections)\n> to something postgres_fdw_disconnect_all(), like we have\n> pg_advisory_unlock_all() against pg_advisory_unlock()?\n\n+1. We will have two functions postgres_fdw_disconnect(server name),\npostgres_fdw_disconnect_all.\n\n> + if (result)\n> + {\n> + /* We closed at least one connection, others are in use. */\n> + ereport(WARNING,\n> + (errmsg(\"cannot close all connections because some of them are still in use\")));\n> + }\n>\n> Sorry if this was already discussed upthread. Isn't it more helpful to\n> emit a warning for every connections that fail to be closed? For example,\n>\n> WARNING: cannot close connection for server \"loopback1\" because it is still in use\n> WARNING: cannot close connection for server \"loopback2\" because it is still in use\n> WARNING: cannot close connection for server \"loopback3\" because it is still in use\n> ...\n>\n> This enables us to identify easily which server connections cannot be\n> closed for now.\n\n+1. Looks like pg_advisory_unlock is doing that. 
Given the fact that\nstill in use connections are possible only in explicit txns, we might\nnot have many still in use connections in the real world use case, so\nI'm okay to change that way.\n\nI will address all these comments and post an updated patch set soon.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Jan 2021 14:11:39 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/20 17:41, Bharath Rupireddy wrote:\n> On Wed, Jan 20, 2021 at 11:53 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>> So, furthermore, we can use hash_search() to find the target cached\n>> connection, instead of using hash_seq_search(), when the server name\n>> is given. This would simplify the code a bit more? Of course,\n>> hash_seq_search() is necessary when closing all the connections, though.\n> \n> Note that the cache entry key is user mapping oid and to use\n> hash_search() we need the user mapping oid. But in\n> postgres_fdw_disconnect we can get server oid and we can also get user\n> mapping id using GetUserMapping, but it requires\n> GetUserId()/CurrentUserId as an input, I doubt we will have problems\n> if CurrentUserId is changed somehow with the change of current user in\n> the session. And user mapping may be dropped but still the connection\n> can exist if it's in use, in that case GetUserMapping fails in cache\n> lookup.\n> \n> And yes, disconnecting all connections requires hash_seq_search().\n> \n> Keeping above in mind, I feel we can do hash_seq_search(), as we do\n> currently, even when the server name is given as input. This way, we\n> don't need to bother much on the above points.\n> \n> Thoughts?\n\nThanks for explaining this! You're right. 
I'd withdraw my suggestion.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 20 Jan 2021 18:54:09 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Wed, Jan 20, 2021 at 3:24 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > Keeping above in mind, I feel we can do hash_seq_search(), as we do\n> > currently, even when the server name is given as input. This way, we\n> > don't need to bother much on the above points.\n> >\n> > Thoughts?\n>\n> Thanks for explaining this! You're right. I'd withdraw my suggestion.\n\nAttaching v14 patch set with review comments addressed. Please review\nit further.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 20 Jan 2021 15:47:01 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/20 19:17, Bharath Rupireddy wrote:\n> On Wed, Jan 20, 2021 at 3:24 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>> Keeping above in mind, I feel we can do hash_seq_search(), as we do\n>>> currently, even when the server name is given as input. This way, we\n>>> don't need to bother much on the above points.\n>>>\n>>> Thoughts?\n>>\n>> Thanks for explaining this! You're right. I'd withdraw my suggestion.\n> \n> Attaching v14 patch set with review comments addressed. Please review\n> it further.\n\nThanks for updating the patch!\n\n+ * It checks if the cache has a connection for the given foreign server that's\n+ * not being used within current transaction, then returns true. 
If the\n+ * connection is in use, then it emits a warning and returns false.\n\nThe comment also should mention the case where no open connection\nfor the given server is found? What about rewriting this to the following?\n\n---------------------\nIf the cached connection for the given foreign server is found and has not\nbeen used within current transaction yet, close the connection and return\ntrue. Even when it's found, if it's already used, keep the connection, emit\na warning and return false. If it's not found, return false.\n---------------------\n\n+ * It returns true, if it closes at least one connection, otherwise false.\n+ *\n+ * It returns false, if the cache doesn't exit.\n\nThe above second comment looks redundant.\n\n+\tif (ConnectionHash)\n+\t\tresult = disconnect_cached_connections(0, true);\n\nIsn't it smarter to make disconnect_cached_connections() check\nConnectionHash and return false if it's NULL? If we do that, we can\nsimplify the code of postgres_fdw_disconnect() and _all().\n\n+ * current transaction are disconnected. Otherwise, the unused entries with the\n+ * given hashvalue are disconnected.\n\nIn the above second comment, a singular form should be used instead?\nBecause there must be no multiple entries with the given hashvalue.\n\n+\t\t\t\tserver = GetForeignServer(entry->serverid);\n\nThis seems to cause an error \"cache lookup failed\"\nif postgres_fdw_disconnect_all() is called when there is\na connection in use but its server is dropped. To avoid this error,\nGetForeignServerExtended() with FSV_MISSING_OK should be used\ninstead, like postgres_fdw_get_connections() does?\n\n+\t\t\tif (entry->server_hashvalue == hashvalue &&\n+\t\t\t\t(entry->xact_depth > 0 || result))\n+\t\t\t{\n+\t\t\t\thash_seq_term(&scan);\n+\t\t\t\tbreak;\n\nentry->server_hashvalue can be 0? If yes, since postgres_fdw_disconnect_all()\nspecifies 0 as hashvalue, ISTM that the above condition can be true\nunexpectedly. 
Can we replace this condition with just \"if (!all)\"?\n\n+-- Closes loopback connection, returns true and issues a warning as loopback2\n+-- connection is still in use and can not be closed.\n+SELECT * FROM postgres_fdw_disconnect_all();\n+WARNING: cannot close connection for server \"loopback2\" because it is still in use\n+ postgres_fdw_disconnect_all\n+-----------------------------\n+ t\n+(1 row)\n\nAfter the above test, isn't it better to call postgres_fdw_get_connections()\nto check that loopback is not output?\n\n+WARNING: cannot close connection for server \"loopback\" because it is still in use\n+WARNING: cannot close connection for server \"loopback2\" because it is still in use\n\nJust in case, please let me confirm that the order of these warning\nmessages is always stable?\n\n+ <varlistentry>\n+ <term><function>postgres_fdw_disconnect(IN servername text) returns boolean</function></term>\n\nI think that \"IN\" of \"IN servername text\" is not necessary.\n\nI'd like to replace \"servername\" with \"server_name\" because\npostgres_fdw_get_connections() uses \"server_name\" as the output\ncolumn name.\n\n+ <listitem>\n+ <para>\n+ When called in local session with foreign server name as input, it\n+ discards the unused open connection previously made to the foreign server\n+ and returns <literal>true</literal>.\n\n\"unused open connection\" sounds confusing to me. What about the following?\n\n---------------------\nThis function discards the open connection that postgres_fdw established\nfrom the local session to the foreign server with the given name if it's not\nused in the current local transaction yet, and then returns true. If it's\nalready used, the function doesn't discard the connection, emits\na warning and then returns false. If there is no open connection to\nthe given foreign server, false is returned. If no foreign server with\nthe given name is found, an error is emitted.
Example usage of the function:\n---------------------\n\n+postgres=# SELECT * FROM postgres_fdw_disconnect('loopback1');\n\n\"SELECT postgres_fdw_disconnect('loopback1')\" is more common?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 20 Jan 2021 22:28:34 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Wed, Jan 20, 2021 at 6:58 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> + * It checks if the cache has a connection for the given foreign server that's\n> + * not being used within current transaction, then returns true. If the\n> + * connection is in use, then it emits a warning and returns false.\n>\n> The comment also should mention the case where no open connection\n> for the given server is found? What about rewriting this to the following?\n>\n> ---------------------\n> If the cached connection for the given foreign server is found and has not\n> been used within current transaction yet, close the connection and return\n> true. Even when it's found, if it's already used, keep the connection, emit\n> a warning and return false. If it's not found, return false.\n> ---------------------\n\nDone.\n\n> + * It returns true, if it closes at least one connection, otherwise false.\n> + *\n> + * It returns false, if the cache doesn't exit.\n>\n> The above second comment looks redundant.\n\nYes. \"otherwise false\" means it.\n\n> + if (ConnectionHash)\n> + result = disconnect_cached_connections(0, true);\n>\n> Isn't it smarter to make disconnect_cached_connections() check\n> ConnectionHash and return false if it's NULL? If we do that, we can\n> simplify the code of postgres_fdw_disconnect() and _all().\n\nDone.\n\n> + * current transaction are disconnected. 
Otherwise, the unused entries with the\n> + * given hashvalue are disconnected.\n>\n> In the above second comment, a singular form should be used instead?\n> Because there must be no multiple entries with the given hashvalue.\n\nRephrased the function comment a bit. Mentioned the _disconnect and\n_disconnect_all in comments because we have said enough what each of\nthose two functions do.\n\n+/*\n+ * Workhorse to disconnect cached connections.\n+ *\n+ * This function disconnects either all unused connections when called from\n+ * postgres_fdw_disconnect_all or a given foreign server unused connection when\n+ * called from postgres_fdw_disconnect.\n+ *\n+ * This function returns true if at least one connection is disconnected,\n+ * otherwise false.\n+ */\n\n> + server = GetForeignServer(entry->serverid);\n>\n> This seems to cause an error \"cache lookup failed\"\n> if postgres_fdw_disconnect_all() is called when there is\n> a connection in use but its server is dropped. To avoid this error,\n> GetForeignServerExtended() with FSV_MISSING_OK should be used\n> instead, like postgres_fdw_get_connections() does?\n\n+1. So, I changed it to GetForeignServerExtended, added an assertion\nfor invalidation just like postgres_fdw_get_connections. I also added\na test case for this, we now emit a slightly different warning for\nthis case alone that is (errmsg(\"cannot close dropped server\nconnection because it is still in use\")));. This warning looks okay as\nwe cannot show any other server name in the output and we know that\nthis rare case can exist when someone drops the server in an explicit\ntransaction.\n\n> + if (entry->server_hashvalue == hashvalue &&\n> + (entry->xact_depth > 0 || result))\n> + {\n> + hash_seq_term(&scan);\n> + break;\n>\n> entry->server_hashvalue can be 0? If yes, since postgres_fdw_disconnect_all()\n> specifies 0 as hashvalue, ISTM that the above condition can be true\n> unexpectedly.
Can we replace this condition with just \"if (!all)\"?\n\nI don't think entry->server_hashvalue can be zero, because\nGetSysCacheHashValue1/CatalogCacheComputeHashValue will not return 0\nas hash value. I have not seen someone comparing hashvalue with an\nexpectation that it has 0 value, for instance see if (hashvalue == 0\n|| riinfo->oidHashValue == hashvalue).\n\nHaving if(!all) something like below there doesn't suffice because we\nmight call hash_seq_term, when some connection other than the given\nforeign server connection is in use. Our intention to call\nhash_seq_term is only when a given server is found and either it's in\nuse or is closed.\n\n if (!all && (entry->xact_depth > 0 || result))\n {\n hash_seq_term(&scan);\n break;\n }\n\nGiven the above points, the existing check looks good to me.\n\n> +-- Closes loopback connection, returns true and issues a warning as loopback2\n> +-- connection is still in use and can not be closed.\n> +SELECT * FROM postgres_fdw_disconnect_all();\n> +WARNING: cannot close connection for server \"loopback2\" because it is still in use\n> + postgres_fdw_disconnect_all\n> +-----------------------------\n> + t\n> +(1 row)\n>\n> After the above test, isn't it better to call postgres_fdw_get_connections()\n> to check that loopback is not output?\n\n+1.\n\n> +WARNING: cannot close connection for server \"loopback\" because it is still in use\n> +WARNING: cannot close connection for server \"loopback2\" because it is still in use\n>\n> Just in case, please let me confirm that the order of these warning\n> messages is always stable?\n\nI think the order of the above warnings depends on how the connections\nare stored in cache and we emit the warnings. Looks like new cached\nconnections are stored at the beginning of the cache always and the\nwarnings also will show up in that order i.e. new entries to old\nentries.
I think it's stable and I didn't see cfbot complaining about\nthat on v14.\n\n> + <varlistentry>\n> + <term><function>postgres_fdw_disconnect(IN servername text) returns boolean</function></term>\n>\n> I think that \"IN\" of \"IN servername text\" is not necessary.\n\nDone.\n\n> I'd like to replace \"servername\" with \"server_name\" because\n> postgres_fdw_get_connections() uses \"server_name\" as the output\n> column name.\n\nDone.\n\n> + <listitem>\n> + <para>\n> + When called in local session with foreign server name as input, it\n> + discards the unused open connection previously made to the foreign server\n> + and returns <literal>true</literal>.\n>\n> \"unused open connection\" sounds confusing to me. What about the following?\n>\n> ---------------------\n> This function discards the open connection that postgres_fdw established\n> from the local session to the foreign server with the given name if it's not\n> used in the current local transaction yet, and then returns true. If it's\n> already used, the function doesn't discard the connection, emits\n> a warning and then returns false. If there is no open connection to\n> the given foreign server, false is returned. If no foreign server with\n> the given name is found, an error is emitted. Example usage of the function:\n> ---------------------\n\nDone.\n\n> +postgres=# SELECT * FROM postgres_fdw_disconnect('loopback1');\n>\n> \"SELECT postgres_fdw_disconnect('loopback1')\" is more common?\n\nDone.\n\nAttaching v15 patch set.
Please consider it for further review.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 21 Jan 2021 08:30:22 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/21 12:00, Bharath Rupireddy wrote:\n> On Wed, Jan 20, 2021 at 6:58 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> + * It checks if the cache has a connection for the given foreign server that's\n>> + * not being used within current transaction, then returns true. If the\n>> + * connection is in use, then it emits a warning and returns false.\n>>\n>> The comment also should mention the case where no open connection\n>> for the given server is found? What about rewriting this to the following?\n>>\n>> ---------------------\n>> If the cached connection for the given foreign server is found and has not\n>> been used within current transaction yet, close the connection and return\n>> true. Even when it's found, if it's already used, keep the connection, emit\n>> a warning and return false. If it's not found, return false.\n>> ---------------------\n> \n> Done.\n> \n>> + * It returns true, if it closes at least one connection, otherwise false.\n>> + *\n>> + * It returns false, if the cache doesn't exit.\n>>\n>> The above second comment looks redundant.\n> \n> Yes. \"otherwise false\" means it.\n> \n>> + if (ConnectionHash)\n>> + result = disconnect_cached_connections(0, true);\n>>\n>> Isn't it smarter to make disconnect_cached_connections() check\n>> ConnectionHash and return false if it's NULL? If we do that, we can\n>> simplify the code of postgres_fdw_disconnect() and _all().\n> \n> Done.\n> \n>> + * current transaction are disconnected. 
Otherwise, the unused entries with the\n>> + * given hashvalue are disconnected.\n>>\n>> In the above second comment, a singular form should be used instead?\n>> Because there must be no multiple entries with the given hashvalue.\n> \n> Rephrased the function comment a bit. Mentioned the _disconnect and\n> _disconnect_all in comments because we have said enough what each of\n> those two functions do.\n> \n> +/*\n> + * Workhorse to disconnect cached connections.\n> + *\n> + * This function disconnects either all unused connections when called from\n> + * postgres_fdw_disconnect_all or a given foreign server unused connection when\n> + * called from postgres_fdw_disconnect.\n> + *\n> + * This function returns true if at least one connection is disconnected,\n> + * otherwise false.\n> + */\n> \n>> + server = GetForeignServer(entry->serverid);\n>>\n>> This seems to cause an error \"cache lookup failed\"\n>> if postgres_fdw_disconnect_all() is called when there is\n>> a connection in use but its server is dropped. To avoid this error,\n>> GetForeignServerExtended() with FSV_MISSING_OK should be used\n>> instead, like postgres_fdw_get_connections() does?\n> \n> +1. So, I changed it to GetForeignServerExtended, added an assertion\n> for invalidation just like postgres_fdw_get_connections. I also added\n> a test case for this, we now emit a slightly different warning for\n> this case alone that is (errmsg(\"cannot close dropped server\n> connection because it is still in use\")));. This warning looks okay as\n> we cannot show any other server name in the output and we know that\n> this rare case can exist when someone drops the server in an explicit\n> transaction.\n> \n>> + if (entry->server_hashvalue == hashvalue &&\n>> + (entry->xact_depth > 0 || result))\n>> + {\n>> + hash_seq_term(&scan);\n>> + break;\n>>\n>> entry->server_hashvalue can be 0?
If yes, since postgres_fdw_disconnect_all()\n>> specifies 0 as hashvalue, ISTM that the above condition can be true\n>> unexpectedly. Can we replace this condition with just \"if (!all)\"?\n> \n> I don't think so entry->server_hashvalue can be zero, because\n> GetSysCacheHashValue1/CatalogCacheComputeHashValue will not return 0\n> as hash value. I have not seen someone comparing hashvalue with an\n> expectation that it has 0 value, for instance see if (hashvalue == 0\n> || riinfo->oidHashValue == hashvalue).\n> \n> Having if(!all) something like below there doesn't suffice because we\n> might call hash_seq_term, when some connection other than the given\n> foreign server connection is in use.\n\nNo because we check the following condition before reaching that code. No?\n\n+\t\tif ((all || entry->server_hashvalue == hashvalue) &&\n\n\nI was thinking that \"(entry->xact_depth > 0 || result))\" condition is not\nnecessary because \"result\" is set to true when xact_depth <= 0 and that\ncondition always indicates true.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 21 Jan 2021 13:36:46 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "> Attaching v15 patch set. Please consider it for further review.\r\n\r\nHi\r\n\r\nI have some comments for the 0001 patch\r\n\r\nIn v15-0001-postgres_fdw-function-to-discard-cached-connecti\r\n\r\n1.\r\n+ If there is no open connection to the given foreign server, <literal>false</literal>\r\n+ is returned.
If no foreign server with the given name is found, an error\r\n\r\nDo you think it's better to add some testcases about:\r\n\tcall postgres_fdw_disconnect and postgres_fdw_disconnect_all when there is no open connection to the given foreign server\r\n\r\n2.\r\n+\t\t\t/*\r\n+\t\t\t * For the given server, if we closed connection or it is still in\r\n+\t\t\t * use, then no need of scanning the cache further.\r\n+\t\t\t */\r\n+\t\t\tif (entry->server_hashvalue == hashvalue &&\r\n+\t\t\t\t(entry->xact_depth > 0 || result))\r\n+\t\t\t{\r\n+\t\t\t\thash_seq_term(&scan);\r\n+\t\t\t\tbreak;\r\n+\t\t\t}\r\n\r\nIf I am not wrong, is the following condition always true ?\r\n\t(entry->xact_depth > 0 || result)\r\n\r\nBest regards,\r\nhouzj\r\n\r\n\n\n", "msg_date": "Thu, 21 Jan 2021 05:45:18 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Thu, Jan 21, 2021 at 10:06 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> >> + if (entry->server_hashvalue == hashvalue &&\n> >> + (entry->xact_depth > 0 || result))\n> >> + {\n> >> + hash_seq_term(&scan);\n> >> + break;\n> >>\n> >> entry->server_hashvalue can be 0? If yes, since postgres_fdw_disconnect_all()\n> >> specifies 0 as hashvalue, ISTM that the above condition can be true\n> >> unexpectedly. Can we replace this condition with just \"if (!all)\"?\n> >\n> > I don't think so entry->server_hashvalue can be zero, because\n> > GetSysCacheHashValue1/CatalogCacheComputeHashValue will not return 0\n> > as hash value.
I have not seen someone comparing hashvalue with an\n> > expectation that it has 0 value, for instance see if (hashvalue == 0\n> > || riinfo->oidHashValue == hashvalue).\n> >\n> > Having if(!all) something like below there doesn't suffice because we\n> > might call hash_seq_term, when some connection other than the given\n> > foreign server connection is in use.\n>\n> No because we check the following condition before reaching that code. No?\n>\n> + if ((all || entry->server_hashvalue == hashvalue) &&\n>\n>\n> I was thinking that \"(entry->xact_depth > 0 || result))\" condition is not\n> necessary because \"result\" is set to true when xact_depth <= 0 and that\n> condition always indicates true.\n\nI think that condition is too confusing. How about having a boolean\ncan_terminate_scan like below?\n\n    while ((entry = (ConnCacheEntry *) hash_seq_search(&scan)))\n    {\n        bool can_terminate_scan = false;\n\n        /*\n         * Either disconnect given or all the active and not in use cached\n         * connections.\n         */\n        if ((all || entry->server_hashvalue == hashvalue) &&\n            entry->conn)\n        {\n            /* We cannot close connection that's in use, so issue a warning. */\n            if (entry->xact_depth > 0)\n            {\n                ForeignServer *server;\n\n                if (!all)\n                    can_terminate_scan = true;\n\n                server = GetForeignServerExtended(entry->serverid,\n                                                  FSV_MISSING_OK);\n\n                if (!server)\n                {\n                    /*\n                     * If the server has been dropped in the current explicit\n                     * transaction, then this entry would have been invalidated\n                     * in pgfdw_inval_callback at the end of drop server\n                     * command. Note that this connection would not have been\n                     * closed in pgfdw_inval_callback because it is still being\n                     * used in the current explicit transaction.
So, assert\n                     * that here.\n                     */\n                    Assert(entry->invalidated);\n\n                    ereport(WARNING,\n                            (errmsg(\"cannot close dropped server\nconnection because it is still in use\")));\n                }\n                else\n                    ereport(WARNING,\n                            (errmsg(\"cannot close connection for\nserver \\\"%s\\\" because it is still in use\",\n                                    server->servername)));\n            }\n            else\n            {\n                elog(DEBUG3, \"discarding connection %p\", entry->conn);\n                disconnect_pg_server(entry);\n                result = true;\n\n                if (!all)\n                    can_terminate_scan = true;\n            }\n\n            /*\n             * For the given server, if we closed connection or it is still in\n             * use, then no need of scanning the cache further.\n             */\n            if (can_terminate_scan)\n            {\n                hash_seq_term(&scan);\n                break;\n            }\n        }\n    }\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 21 Jan 2021 11:16:19 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Thu, Jan 21, 2021 at 11:15 AM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n>\n> > Attaching v15 patch set. Please consider it for further review.\n>\n> Hi\n>\n> I have some comments for the 0001 patch\n>\n> In v15-0001-postgres_fdw-function-to-discard-cached-connecti\n>\n> 1.\n> + If there is no open connection to the given foreign server, <literal>false</literal>\n> + is returned.
If no foreign server with the given name is found, an error\n>\n> Do you think it's better to add some testcases about:\n> call postgres_fdw_disconnect and postgres_fdw_disconnect_all when there is no open connection to the given foreign server\n\nDo you mean a test case where foreign server exists but\npostgres_fdw_disconnect() returns false because there's no connection\nfor that server?\n\n> 2.\n> + /*\n> + * For the given server, if we closed connection or it is still in\n> + * use, then no need of scanning the cache further.\n> + */\n> + if (entry->server_hashvalue == hashvalue &&\n> + (entry->xact_depth > 0 || result))\n> + {\n> + hash_seq_term(&scan);\n> + break;\n> + }\n>\n> If I am not wrong, is the following condition always true ?\n> (entry->xact_depth > 0 || result)\n\nIt's not always true. But it seems like it's too confusing, please\nhave a look at the upthread suggestion to change this with\ncan_terminate_scan boolean.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 21 Jan 2021 11:22:07 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "> > > Attaching v15 patch set. Please consider it for further review.\r\n> >\r\n> > Hi\r\n> >\r\n> > I have some comments for the 0001 patch\r\n> >\r\n> > In v15-0001-postgres_fdw-function-to-discard-cached-connecti\r\n> >\r\n> > 1.\r\n> > + If there is no open connection to the given foreign server,\r\n> <literal>false</literal>\r\n> > + is returned.
If no foreign server with the given name is found,\r\n> > + an error\r\n> >\r\n> > Do you think it's better to add some testcases about:\r\n> > \tcall postgres_fdw_disconnect and postgres_fdw_disconnect_all\r\n> > when there is no open connection to the given foreign server\r\n> \r\n> Do you mean a test case where foreign server exists but\r\n> postgres_fdw_disconnect() returns false because there's no connection for\r\n> that server?\r\n\r\n\r\nYes, I read this from the doc, so I think it's better to test this.\r\n\r\n\r\n\r\n\r\n> > 2.\r\n> > + /*\r\n> > + * For the given server, if we closed connection\r\n> or it is still in\r\n> > + * use, then no need of scanning the cache\r\n> further.\r\n> > + */\r\n> > + if (entry->server_hashvalue == hashvalue &&\r\n> > + (entry->xact_depth > 0 || result))\r\n> > + {\r\n> > + hash_seq_term(&scan);\r\n> > + break;\r\n> > + }\r\n> >\r\n> > If I am not wrong, is the following condition always true ?\r\n> > (entry->xact_depth > 0 || result)\r\n> \r\n> It's not always true. But it seems like it's too confusing, please have\r\n> a look at the upthread suggestion to change this with can_terminate_scan\r\n> boolean.\r\n\r\nThanks for the reminder, I will look at that.\r\n\r\n\r\n\r\nBest regards,\r\nhouzj\r\n\n\n", "msg_date": "Thu, 21 Jan 2021 06:00:49 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/21 14:46, Bharath Rupireddy wrote:\n> On Thu, Jan 21, 2021 at 10:06 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n> > >> + if (entry->server_hashvalue == hashvalue &&\n>>>> + (entry->xact_depth > 0 || result))\n>>>> + {\n>>>> + hash_seq_term(&scan);\n>>>> + break;\n>>>>\n>>>> entry->server_hashvalue can be 0?
If yes, since postgres_fdw_disconnect_all()\n>>>> specifies 0 as hashvalue, ISTM that the above condition can be true\n>>>> unexpectedly. Can we replace this condition with just \"if (!all)\"?\n>>>\n>>> I don't think so entry->server_hashvalue can be zero, because\n>>> GetSysCacheHashValue1/CatalogCacheComputeHashValue will not return 0\n>>> as hash value. I have not seen someone comparing hashvalue with an\n>>> expectation that it has 0 value, for instance see if (hashvalue == 0\n>>> || riinfo->oidHashValue == hashvalue).\n>>>\n>>> Having if(!all) something like below there doesn't suffice because we\n>>> might call hash_seq_term, when some connection other than the given\n>>> foreign server connection is in use.\n>>\n>> No because we check the following condition before reaching that code. No?\n>>\n>> + if ((all || entry->server_hashvalue == hashvalue) &&\n>>\n>>\n>> I was thinking that \"(entry->xact_depth > 0 || result))\" condition is not\n>> necessary because \"result\" is set to true when xact_depth <= 0 and that\n>> condition always indicates true.\n>\n> I think that condition is too confusing. How about having a boolean\n> can_terminate_scan like below?\n\nThanks for thinking this.
But at least for me, \"if (!all)\" looks not so confusing.\nAnd the comment seems to explain why we can end the scan.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 21 Jan 2021 15:47:31 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Thu, Jan 21, 2021 at 12:17 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> On 2021/01/21 14:46, Bharath Rupireddy wrote:\n> > On Thu, Jan 21, 2021 at 10:06 AM Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote:\n> > > >> + if (entry->server_hashvalue == hashvalue &&\n> >>>> + (entry->xact_depth > 0 || result))\n> >>>> + {\n> >>>> + hash_seq_term(&scan);\n> >>>> + break;\n> >>>>\n> >>>> entry->server_hashvalue can be 0? If yes, since postgres_fdw_disconnect_all()\n> >>>> specifies 0 as hashvalue, ISTM that the above condition can be true\n> >>>> unexpectedly. Can we replace this condition with just \"if (!all)\"?\n> >>>\n> >>> I don't think so entry->server_hashvalue can be zero, because\n> >>> GetSysCacheHashValue1/CatalogCacheComputeHashValue will not return 0\n> >>> as hash value. I have not seen someone comparing hashvalue with an\n> >>> expectation that it has 0 value, for instance see if (hashvalue == 0\n> >>> || riinfo->oidHashValue == hashvalue).\n> >>>\n> >>> Having if(!all) something like below there doesn't suffice because we\n> >>> might call hash_seq_term, when some connection other than the given\n> >>> foreign server connection is in use.\n> >>\n> >> No because we check the following condition before reaching that code.
No?\n> >>\n> >> + if ((all || entry->server_hashvalue == hashvalue) &&\n> >>\n> >>\n> >> I was thinking that \"(entry->xact_depth > 0 || result))\" condition is not\n> >> necessary because \"result\" is set to true when xact_depth <= 0 and that\n> >> condition always indicates true.\n> >\n> > I think that condition is too confusing. How about having a boolean\n> > can_terminate_scan like below?\n>\n> Thanks for thinking this. But at least for me, \"if (!all)\" looks not so confusing.\n> And the comment seems to explain why we can end the scan.\n\nMay I know if it's okay to have the boolean can_terminate_scan as shown in [1]?\n\n[1] - https://www.postgresql.org/message-id/flat/CALj2ACVx0%2BiOsrAA-wXbo3RLAKqUoNvvEd7foJ0vLwOdu8XjXw%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 21 Jan 2021 12:46:44 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/21 16:16, Bharath Rupireddy wrote:\n> On Thu, Jan 21, 2021 at 12:17 PM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>> On 2021/01/21 14:46, Bharath Rupireddy wrote:\n>>> On Thu, Jan 21, 2021 at 10:06 AM Fujii Masao\n>>> <masao.fujii@oss.nttdata.com> wrote:\n>>> > >> + if (entry->server_hashvalue == hashvalue &&\n>>>>>> + (entry->xact_depth > 0 || result))\n>>>>>> + {\n>>>>>> + hash_seq_term(&scan);\n>>>>>> + break;\n>>>>>>\n>>>>>> entry->server_hashvalue can be 0? If yes, since postgres_fdw_disconnect_all()\n>>>>>> specifies 0 as hashvalue, ISTM that the above condition can be true\n>>>>>> unexpectedly. Can we replace this condition with just \"if (!all)\"?\n>>>>>\n>>>>> I don't think so entry->server_hashvalue can be zero, because\n>>>>> GetSysCacheHashValue1/CatalogCacheComputeHashValue will not return 0\n>>>>> as hash value.
I have not seen someone comparing hashvalue with an\n>>>>> expectation that it has 0 value, for instance see if (hashvalue == 0\n>>>>> || riinfo->oidHashValue == hashvalue).\n>>>>>\n>>>>> Having if(!all) something like below there doesn't suffice because we\n>>>>> might call hash_seq_term, when some connection other than the given\n>>>>> foreign server connection is in use.\n>>>>\n>>>> No because we check the following condition before reaching that code. No?\n>>>>\n>>>> + if ((all || entry->server_hashvalue == hashvalue) &&\n>>>>\n>>>>\n>>>> I was thinking that \"(entry->xact_depth > 0 || result))\" condition is not\n>>>> necessary because \"result\" is set to true when xact_depth <= 0 and that\n>>>> condition always indicates true.\n>>>\n>>> I think that condition is too confusing. How about having a boolean\n>>> can_terminate_scan like below?\n>>\n>> Thanks for thinking this. But at least for me, \"if (!all)\" looks not so confusing.\n>> And the comment seems to explain why we can end the scan.\n>\n> May I know if it's okay to have the boolean can_terminate_scan as shown in [1]?\n\nMy opinion is to check \"!all\", but if others prefer using such boolean flag,\nI'd withdraw my opinion.\n\n+\t\tif ((all || entry->server_hashvalue == hashvalue) &&\n\nWhat about making disconnect_cached_connections() accept serverid instead\nof hashvalue, and perform the above comparison based on serverid? That is,\nI'm thinking \"if (all || entry->serverid == serverid)\". If we do that, we can\nsimplify postgres_fdw_disconnect() a bit more by getting rid of the calculation\nof hashvalue.\n\n+\t\tif ((all || entry->server_hashvalue == hashvalue) &&\n+\t\t\t entry->conn)\n\nI think that it's better to make the check of \"entry->conn\" independent\nlike other functions in postgres_fdw/connection.c.
What about adding\nthe following check before the above?\n\n\t\t/* Ignore cache entry if no open connection right now */\n\t\tif (entry->conn == NULL)\n\t\t\tcontinue;\n\n+\t\t\t\t\t/*\n+\t\t\t\t\t * If the server has been dropped in the current explicit\n+\t\t\t\t\t * transaction, then this entry would have been invalidated\n+\t\t\t\t\t * in pgfdw_inval_callback at the end of drop server\n+\t\t\t\t\t * command. Note that this connection would not have been\n+\t\t\t\t\t * closed in pgfdw_inval_callback because it is still being\n+\t\t\t\t\t * used in the current explicit transaction. So, assert\n+\t\t\t\t\t * that here.\n+\t\t\t\t\t */\n+\t\t\t\t\tAssert(entry->invalidated);\n\nAs this comment explains, even when the connection is used in the transaction,\nits server can be dropped in the same transaction. The connection can remain\nuntil the end of transaction even though its server has been already dropped.\nI'm now wondering if this behavior itself is problematic and should be forbidden.\nOf course, this is separate topic from this patch, though..\n\nBTW, my just idea for that is;\n1. change postgres_fdw_get_connections() return also serverid and xact_depth.\n2.
make postgres_fdw define the event trigger on DROP SERVER command so that\n an error is thrown if the connection to the server is still in use.\n The event trigger function uses postgres_fdw_get_connections() to check\n if the server connection is still in use or not.\n\nI'm not sure if this just idea is really feasible or not, though...\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Fri, 22 Jan 2021 00:28:05 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Thu, Jan 21, 2021 at 8:58 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> My opinion is to check \"!all\", but if others prefer using such boolean flag,\n> I'd withdraw my opinion.\n\nI'm really sorry, actually if (!all) is enough there, my earlier\nunderstanding was wrong.\n\n> + if ((all || entry->server_hashvalue == hashvalue) &&\n>\n> What about making disconnect_cached_connections() accept serverid instead\n> of hashvalue, and perform the above comparison based on serverid? That is,\n> I'm thinking \"if (all || entry->serverid == serverid)\". If we do that, we can\n> simplify postgres_fdw_disconnect() a bit more by getting rid of the calculation\n> of hashvalue.\n\nThat's a good idea. I missed this point. Thanks.\n\n> + if ((all || entry->server_hashvalue == hashvalue) &&\n> + entry->conn)\n>\n> I think that it's better to make the check of \"entry->conn\" independent\n> like other functions in postgres_fdw/connection.c.
What about adding\n> the following check before the above?\n>\n> /* Ignore cache entry if no open connection right now */\n> if (entry->conn == NULL)\n> continue;\n\nDone.\n\n> + /*\n> + * If the server has been dropped in the current explicit\n> + * transaction, then this entry would have been invalidated\n> + * in pgfdw_inval_callback at the end of drop server\n> + * command. Note that this connection would not have been\n> + * closed in pgfdw_inval_callback because it is still being\n> + * used in the current explicit transaction. So, assert\n> + * that here.\n> + */\n> + Assert(entry->invalidated);\n>\n> As this comment explains, even when the connection is used in the transaction,\n> its server can be dropped in the same transaction. The connection can remain\n> until the end of transaction even though its server has been already dropped.\n> I'm now wondering if this behavior itself is problematic and should be forbidden.\n> Of course, this is separate topic from this patch, though..\n>\n> BTW, my just idea for that is;\n> 1. change postgres_fdw_get_connections() return also serverid and xact_depth.\n> 2. make postgres_fdw define the event trigger on DROP SERVER command so that\n> an error is thrown if the connection to the server is still in use.\n> The event trigger function uses postgres_fdw_get_connections() to check\n> if the server connection is still in use or not.\n>\n> I'm not sure if this just idea is really feasible or not, though...\n\nI'm not quite sure if we can create such a dependency i.e. blocking\n\"drop foreign server\" when at least one session has an in use cached\nconnection on it? What if a user wants to drop a server from one\nsession, all other sessions one after the other keep having in-use\nconnections related to that server, (though this use case sounds\nimpractical) will the drop server ever be successful?
Since we can\nhave hundreds of sessions in real world postgres environment, I don't\nknow if it's a good idea to create such dependency.\n\nAs you suggested, this point can be discussed in a separate thread and\nif any of the approaches proposed by you above is finalized we can\nextend postgres_fdw_get_connections anytime.\n\nThoughts?\n\nAttaching v16 patch set, addressing above review comments and also\nadded a test case suggested upthread that postgres_fdw_disconnect()\nwith existing server name returns false that is when the cache doesn't\nhave active connection.\n\nPlease review the v16 patch set further.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 21 Jan 2021 21:47:40 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/22 1:17, Bharath Rupireddy wrote:\n> On Thu, Jan 21, 2021 at 8:58 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> My opinion is to check \"!all\", but if others prefer using such boolean flag,\n>> I'd withdraw my opinion.\n> \n> I'm really sorry, actually if (!all) is enough there, my earlier\n> understanding was wrong.\n> \n>> +               if ((all || entry->server_hashvalue == hashvalue) &&\n>>\n>> What about making disconnect_cached_connections() accept serverid instead\n>> of hashvalue, and perform the above comparison based on serverid? That is,\n>> I'm thinking \"if (all || entry->serverid == serverid)\". If we do that, we can\n>> simplify postgres_fdw_disconnect() a bit more by getting rid of the calculation\n>> of hashvalue.\n> \n> That's a good idea. I missed this point.
Thanks.\n> \n>> +               if ((all || entry->server_hashvalue == hashvalue) &&\n>> +                        entry->conn)\n>>\n>> I think that it's better to make the check of \"entry->conn\" independent\n>> like other functions in postgres_fdw/connection.c. What about adding\n>> the following check before the above?\n>>\n>>                  /* Ignore cache entry if no open connection right now */\n>>                  if (entry->conn == NULL)\n>>                          continue;\n> \n> Done.\n> \n>> +                                       /*\n>> +                                        * If the server has been dropped in the current explicit\n>> +                                        * transaction, then this entry would have been invalidated\n>> +                                        * in pgfdw_inval_callback at the end of drop server\n>> +                                        * command. Note that this connection would not have been\n>> +                                        * closed in pgfdw_inval_callback because it is still being\n>> +                                        * used in the current explicit transaction. So, assert\n>> +                                        * that here.\n>> +                                        */\n>> +                                       Assert(entry->invalidated);\n>>\n>> As this comment explains, even when the connection is used in the transaction,\n>> its server can be dropped in the same transaction. The connection can remain\n>> until the end of transaction even though its server has been already dropped.\n>> I'm now wondering if this behavior itself is problematic and should be forbidden.\n>> Of course, this is separate topic from this patch, though..\n>>\n>> BTW, my just idea for that is;\n>> 1. change postgres_fdw_get_connections() return also serverid and xact_depth.\n>> 2. make postgres_fdw define the event trigger on DROP SERVER command so that\n>>      an error is thrown if the connection to the server is still in use.\n>>      The event trigger function uses postgres_fdw_get_connections() to check\n>>      if the server connection is still in use or not.\n>>\n>> I'm not sure if this just idea is really feasible or not, though...\n> \n> I'm not quite sure if we can create such a dependency i.e. blocking\n> \"drop foreign server\" when at least one session has an in use cached\n> connection on it?\n\nMaybe my explanation was not clear...
I was thinking to prevent the server whose connection is used *within the current transaction* from being dropped. IOW, I was thinking to forbid the drop of server if xact_depth of its connection is more than zero. So one session can drop the server even when its connection is open in other session if it's not used within the transaction (i.e., xact_depth == 0).\n\nBTW, for now, if the connection is used within the transaction, other session cannot drop the corresponding server because the transaction holds the lock on the relations that depend on the server. Only the session running that transaction can drop the server. This can cause the issue in discussion.\n\nSo, my just idea is to disallow even that session running the transaction to drop the server. This means that no session can drop the server while its connection is used within the transaction (xact_depth > 0).\n\n\n> What if a user wants to drop a server from one\n> session, all other sessions one after the other keep having in-use\n> connections related to that server, (though this use case sounds\n> impractical) will the drop server ever be successful? Since we can\n> have hundreds of sessions in real world postgres environment, I don't\n> know if it's a good idea to create such dependency.\n> \n> As you suggested, this point can be discussed in a separate thread and\n> if any of the approaches proposed by you above is finalized we can\n> extend postgres_fdw_get_connections anytime.\n> \n> Thoughts?\n\nI will consider more before starting separate discussion!\n\n\n> \n> Attaching v16 patch set, addressing above review comments and also\n> added a test case suggested upthread that postgres_fdw_disconnect()\n> with existing server name returns false that is when the cache doesn't\n> have active connection.\n> \n> Please review the v16 patch set further.\n\nThanks!
Will review that later.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Fri, 22 Jan 2021 03:29:02 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/22 3:29, Fujii Masao wrote:\n> \n> \n> On 2021/01/22 1:17, Bharath Rupireddy wrote:\n>> On Thu, Jan 21, 2021 at 8:58 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>> My opinion is to check \"!all\", but if others prefer using such boolean flag,\n>>> I'd withdraw my opinion.\n>>\n>> I'm really sorry, actually if (!all) is enough there, my earlier\n>> understanding was wrong.\n>>\n>>> +               if ((all || entry->server_hashvalue == hashvalue) &&\n>>>\n>>> What about making disconnect_cached_connections() accept serverid instead\n>>> of hashvalue, and perform the above comparison based on serverid? That is,\n>>> I'm thinking \"if (all || entry->serverid == serverid)\". If we do that, we can\n>>> simplify postgres_fdw_disconnect() a bit more by getting rid of the calculation\n>>> of hashvalue.\n>>\n>> That's a good idea. I missed this point. Thanks.\n>>\n>>> +               if ((all || entry->server_hashvalue == hashvalue) &&\n>>> +                        entry->conn)\n>>>\n>>> I think that it's better to make the check of \"entry->conn\" independent\n>>> like other functions in postgres_fdw/connection.c.
What about adding\n>>> the following check before the above?\n>>>\n>>>                  /* Ignore cache entry if no open connection right now */\n>>>                  if (entry->conn == NULL)\n>>>                          continue;\n>>\n>> Done.\n>>\n>>> +                                       /*\n>>> +                                        * If the server has been dropped in the current explicit\n>>> +                                        * transaction, then this entry would have been invalidated\n>>> +                                        * in pgfdw_inval_callback at the end of drop server\n>>> +                                        * command. Note that this connection would not have been\n>>> +                                        * closed in pgfdw_inval_callback because it is still being\n>>> +                                        * used in the current explicit transaction. So, assert\n>>> +                                        * that here.\n>>> +                                        */\n>>> +                                       Assert(entry->invalidated);\n>>>\n>>> As this comment explains, even when the connection is used in the transaction,\n>>> its server can be dropped in the same transaction. The connection can remain\n>>> until the end of transaction even though its server has been already dropped.\n>>> I'm now wondering if this behavior itself is problematic and should be forbidden.\n>>> Of course, this is separate topic from this patch, though..\n>>>\n>>> BTW, my just idea for that is;\n>>> 1. change postgres_fdw_get_connections() return also serverid and xact_depth.\n>>> 2.
make postgres_fdw define the event trigger on DROP SERVER command so that\n>>>       an error is thrown if the connection to the server is still in use.\n>>>       The event trigger function uses postgres_fdw_get_connections() to check\n>>>       if the server connection is still in use or not.\n>>>\n>>> I'm not sure if this just idea is really feasible or not, though...\n>>\n>> I'm not quite sure if we can create such a dependency i.e. blocking\n>> \"drop foreign server\" when at least one session has an in use cached\n>> connection on it?\n> \n> Maybe my explanation was not clear... I was thinking to prevent the server whose connection is used *within the current transaction* from being dropped. IOW, I was thinking to forbid the drop of server if xact_depth of its connection is more than one. So one session can drop the server even when its connection is open in other session if it's not used within the transaction (i.e., xact_depth == 0).\n> \n> BTW, for now, if the connection is used within the transaction, other session cannot drop the corresponding server because the transaction holds the lock on the relations that depend on the server. Only the session running that transaction can drop the server. This can cause the issue in discussion.\n> \n> So, my just idea is to disallow even that session running the transaction to drop the server. This means that no session can drop the server while its connection is used within the transaction (xact_depth > 0).\n> \n> \n>> What if a user wants to drop a server from one\n>> session, all other sessions one after the other keep having in-use\n>> connections related to that server, (though this use case sounds\n>> impractical) will the drop server ever be successful? 
Since we can\n>> have hundreds of sessions in real world postgres environment, I don't\n>> know if it's a good idea to create such dependency.\n>>\n>> As you suggested, this point can be discussed in a separate thread and\n>> if any of the approaches proposed by you above is finalized we can\n>> extend postgres_fdw_get_connections anytime.\n>>\n>> Thoughts?\n> \n> I will consider more before starting separate discussion!\n> \n> \n>>\n>> Attaching v16 patch set, addressing above review comments and also\n>> added a test case suggested upthread that postgres_fdw_disconnect()\n>> with existing server name returns false that is when the cache doesn't\n>> have active connection.\n>>\n>> Please review the v16 patch set further.\n> \n> Thanks! Will review that later.\n\n+\t\t\t/*\n+\t\t\t * For the given server, if we closed connection or it is still in\n+\t\t\t * use, then no need of scanning the cache further. We do this\n+\t\t\t * because the cache can not have multiple cache entries for a\n+\t\t\t * single foreign server.\n+\t\t\t */\n\nOn second thought, ISTM that single foreign server can have multiple cache\nentries. For example,\n\nCREATE ROLE foo1 SUPERUSER;\nCREATE ROLE foo2 SUPERUSER;\nCREATE EXTENSION postgres_fdw;\nCREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw OPTIONS (port '5432');\nCREATE USER MAPPING FOR foo1 SERVER loopback OPTIONS (user 'postgres');\nCREATE USER MAPPING FOR foo2 SERVER loopback OPTIONS (user 'postgres');\nCREATE TABLE t (i int);\nCREATE FOREIGN TABLE ft (i int) SERVER loopback OPTIONS (table_name 't');\nSET SESSION AUTHORIZATION foo1;\nSELECT * FROM ft;\nSET SESSION AUTHORIZATION foo2;\nSELECT * FROM ft;\n\n\nThen you can see there are multiple open connections for the same server\nas follows. 
So we need to scan all the entries even when the serverid is\nspecified.\n\nSELECT * FROM postgres_fdw_get_connections();\n\n server_name | valid\n-------------+-------\n loopback | t\n loopback | t\n(2 rows)\n\n\nThis means that user (even non-superuser) can disconnect the connection\nestablished by another user (superuser), by using postgres_fdw_disconnect_all().\nIs this really OK?\n\n\n+\t\tif (all || (OidIsValid(serverid) && entry->serverid == serverid))\n+\t\t{\n\nI don't think that \"OidIsValid(serverid)\" condition is necessary here.\nBut you're just concerned about the case where the caller mistakenly\nspecifies invalid oid and all=false? One idea to avoid that inconsistent\ncombination of inputs is to change disconnect_cached_connections()\nas follows.\n\n-disconnect_cached_connections(Oid serverid, bool all)\n+disconnect_cached_connections(Oid serverid)\n {\n \tHASH_SEQ_STATUS\tscan;\n \tConnCacheEntry\t*entry;\n+\tbool\tall = !OidIsValid(serverid);\n\n\n+\t\t\t\t\t * in pgfdw_inval_callback at the end of drop sever\n\nTypo: \"sever\" should be \"server\".\n\n\n+-- ===================================================================\n+-- test postgres_fdw_disconnect function\n+-- ===================================================================\n\nThis regression test is placed at the end of test file. 
But isn't it better\nto place that just after the regression test \"test connection invalidation\n cases\" because they are related?\n\n\n+ <screen>\n+postgres=# SELECT * FROM postgres_fdw_disconnect('loopback1');\n+ postgres_fdw_disconnect\n\nThe tag <screen> should start from the beginning.\n\nAs I commented upthread, what about replacing the example query with\n\"SELECT postgres_fdw_disconnect('loopback1');\" because it's more common?\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 22 Jan 2021 22:13:04 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Fri, Jan 22, 2021 at 6:43 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >> Please review the v16 patch set further.\n> >\n> > Thanks! Will review that later.\n>\n> + /*\n> + * For the given server, if we closed connection or it is still in\n> + * use, then no need of scanning the cache further. We do this\n> + * because the cache can not have multiple cache entries for a\n> + * single foreign server.\n> + */\n>\n> On second thought, ISTM that single foreign server can have multiple cache\n> entries. 
For example,\n>\n> CREATE ROLE foo1 SUPERUSER;\n> CREATE ROLE foo2 SUPERUSER;\n> CREATE EXTENSION postgres_fdw;\n> CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw OPTIONS (port '5432');\n> CREATE USER MAPPING FOR foo1 SERVER loopback OPTIONS (user 'postgres');\n> CREATE USER MAPPING FOR foo2 SERVER loopback OPTIONS (user 'postgres');\n> CREATE TABLE t (i int);\n> CREATE FOREIGN TABLE ft (i int) SERVER loopback OPTIONS (table_name 't');\n> SET SESSION AUTHORIZATION foo1;\n> SELECT * FROM ft;\n> SET SESSION AUTHORIZATION foo2;\n> SELECT * FROM ft;\n>\n>\n> Then you can see there are multiple open connections for the same server\n> as follows. So we need to scan all the entries even when the serverid is\n> specified.\n>\n> SELECT * FROM postgres_fdw_get_connections();\n>\n> server_name | valid\n> -------------+-------\n> loopback | t\n> loopback | t\n> (2 rows)\n\nThis is a great finding. Thanks a lot. I will remove\nhash_seq_term(&scan); in disconnect_cached_connections and add this as\na test case for postgres_fdw_get_connections function, just to show\nthere can be multiple connections with a single server name.\n\n> This means that user (even non-superuser) can disconnect the connection\n> established by another user (superuser), by using postgres_fdw_disconnect_all().\n> Is this really OK?\n\nYeah, connections can be discarded by non-super users using\npostgres_fdw_disconnect_all and postgres_fdw_disconnect. Given the\nfact that a non-super user requires a password to access foreign\ntables [1], IMO a non-super user changing something related to a super\nuser makes no sense at all. 
If okay, we can have a check in\ndisconnect_cached_connections something like below:\n\n+static bool\n+disconnect_cached_connections(Oid serverid)\n+{\n+ HASH_SEQ_STATUS scan;\n+ ConnCacheEntry *entry;\n+ bool all = !OidIsValid(serverid);\n+ bool result = false;\n+\n+ if (!superuser())\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n+ errmsg(\"must be superuser to discard open connections\")));\n+\n+ if (!ConnectionHash)\n\nHaving said that, it looks like dblink_disconnect doesn't perform\nsuperuser checks.\n\nThoughts?\n\n[1]\nSELECT * FROM ft1_nopw LIMIT 1;\nERROR: password is required\nDETAIL: Non-superusers must provide a password in the user mapping.\n\n> + if (all || (OidIsValid(serverid) && entry->serverid == serverid))\n> + {\n>\n> I don't think that \"OidIsValid(serverid)\" condition is necessary here.\n> But you're just concerned about the case where the caller mistakenly\n> specifies invalid oid and all=false? One idea to avoid that inconsistent\n> combination of inputs is to change disconnect_cached_connections()\n> as follows.\n>\n> -disconnect_cached_connections(Oid serverid, bool all)\n> +disconnect_cached_connections(Oid serverid)\n> {\n> HASH_SEQ_STATUS scan;\n> ConnCacheEntry *entry;\n> + bool all = !OidIsValid(serverid);\n\n+1. Will change it.\n\n> + * in pgfdw_inval_callback at the end of drop sever\n>\n> Typo: \"sever\" should be \"server\".\n\n+1. Will change it.\n\n> +-- ===================================================================\n> +-- test postgres_fdw_disconnect function\n> +-- ===================================================================\n>\n> This regression test is placed at the end of test file. But isn't it better\n> to place that just after the regression test \"test connection invalidation\n> cases\" because they are related?\n\n+1. 
Will change it.\n\n> + <screen>\n> +postgres=# SELECT * FROM postgres_fdw_disconnect('loopback1');\n> + postgres_fdw_disconnect\n>\n> The tag <screen> should start from the beginning.\n\n+1. Will change it.\n\n> As I commented upthread, what about replacing the example query with\n> \"SELECT postgres_fdw_disconnect('loopback1');\" because it's more common?\n\nSorry, I forgot to check that in the documentation earlier. +1. Will change it.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 23 Jan 2021 10:10:26 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/23 13:40, Bharath Rupireddy wrote:\n> On Fri, Jan 22, 2021 at 6:43 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>> Please review the v16 patch set further.\n>>>\n>>> Thanks! Will review that later.\n>>\n>> + /*\n>> + * For the given server, if we closed connection or it is still in\n>> + * use, then no need of scanning the cache further. We do this\n>> + * because the cache can not have multiple cache entries for a\n>> + * single foreign server.\n>> + */\n>>\n>> On second thought, ISTM that single foreign server can have multiple cache\n>> entries. 
For example,\n>>\n>> CREATE ROLE foo1 SUPERUSER;\n>> CREATE ROLE foo2 SUPERUSER;\n>> CREATE EXTENSION postgres_fdw;\n>> CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw OPTIONS (port '5432');\n>> CREATE USER MAPPING FOR foo1 SERVER loopback OPTIONS (user 'postgres');\n>> CREATE USER MAPPING FOR foo2 SERVER loopback OPTIONS (user 'postgres');\n>> CREATE TABLE t (i int);\n>> CREATE FOREIGN TABLE ft (i int) SERVER loopback OPTIONS (table_name 't');\n>> SET SESSION AUTHORIZATION foo1;\n>> SELECT * FROM ft;\n>> SET SESSION AUTHORIZATION foo2;\n>> SELECT * FROM ft;\n>>\n>>\n>> Then you can see there are multiple open connections for the same server\n>> as follows. So we need to scan all the entries even when the serverid is\n>> specified.\n>>\n>> SELECT * FROM postgres_fdw_get_connections();\n>>\n>> server_name | valid\n>> -------------+-------\n>> loopback | t\n>> loopback | t\n>> (2 rows)\n> \n> This is a great finding. Thanks a lot. I will remove\n> hash_seq_term(&scan); in disconnect_cached_connections and add this as\n> a test case for postgres_fdw_get_connections function, just to show\n> there can be multiple connections with a single server name.\n> \n>> This means that user (even non-superuser) can disconnect the connection\n>> established by another user (superuser), by using postgres_fdw_disconnect_all().\n>> Is this really OK?\n> \n> Yeah, connections can be discarded by non-super users using\n> postgres_fdw_disconnect_all and postgres_fdw_disconnect. Given the\n> fact that a non-super user requires a password to access foreign\n> tables [1], IMO a non-super user changing something related to a super\n> user makes no sense at all. If okay, we can have a check in\n> disconnect_cached_connections something like below:\n\nAlso like pg_terminate_backend(), we should disallow non-superuser to disconnect the connections established by other non-superuser if the requesting user is not a member of the other? 
Or that's overkill because the target to discard is just a connection and it can be established again if necessary?\n\nFor now I'm thinking that it might better to add the restriction like pg_terminate_backend() at first and relax that later if possible. But I'd like hear more opinions about this.\n\n\n> \n> +static bool\n> +disconnect_cached_connections(Oid serverid)\n> +{\n> + HASH_SEQ_STATUS scan;\n> + ConnCacheEntry *entry;\n> + bool all = !OidIsValid(serverid);\n> + bool result = false;\n> +\n> + if (!superuser())\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> + errmsg(\"must be superuser to discard open connections\")));\n> +\n> + if (!ConnectionHash)\n> \n> Having said that, it looks like dblink_disconnect doesn't perform\n> superuser checks.\n\nAlso non-superuser (set by SET ROLE or SET SESSION AUTHORIZATION) seems to be able to run SQL using the dblink connection established by superuser. If we didn't think that this is a problem, we also might not need to care about issue even for postgres_fdw.\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 25 Jan 2021 16:50:39 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Mon, Jan 25, 2021 at 1:20 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > Yeah, connections can be discarded by non-super users using\n> > postgres_fdw_disconnect_all and postgres_fdw_disconnect. Given the\n> > fact that a non-super user requires a password to access foreign\n> > tables [1], IMO a non-super user changing something related to a super\n> > user makes no sense at all. 
If okay, we can have a check in\n> > disconnect_cached_connections something like below:\n>\n> Also like pg_terminate_backend(), we should disallow non-superuser to disconnect the connections established by other non-superuser if the requesting user is not a member of the other? Or that's overkill because the target to discard is just a connection and it can be established again if necessary?\n\nYes, if required backends can establish the connection again. But my\nworry is this - a non-super user disconnecting all or a given\nconnection created by a super user?\n\n> For now I'm thinking that it might better to add the restriction like pg_terminate_backend() at first and relax that later if possible. But I'd like hear more opinions about this.\n\nI agree. If required we can lift it later, once we get the users using\nthese functions? Maybe we can have a comment near superchecks in\ndisconnect_cached_connections saying, we can lift this in future?\n\nDo you want me to add these checks like in pg_signal_backend?\n\n /* Only allow superusers to signal superuser-owned backends. */\n if (superuser_arg(proc->roleId) && !superuser())\n return SIGNAL_BACKEND_NOSUPERUSER;\n\n /* Users can signal backends they have role membership in. */\n if (!has_privs_of_role(GetUserId(), proc->roleId) &&\n !has_privs_of_role(GetUserId(), DEFAULT_ROLE_SIGNAL_BACKENDID))\n return SIGNAL_BACKEND_NOPERMISSION;\n\nor only below is enough?\n\n+ /* Non-super users are not allowed to disconnect cached connections. 
*/\n+ if (!superuser())\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n+ errmsg(\"must be superuser to discard open connections\")));\n\n> > +static bool\n> > +disconnect_cached_connections(Oid serverid)\n> > +{\n> > + HASH_SEQ_STATUS scan;\n> > + ConnCacheEntry *entry;\n> > + bool all = !OidIsValid(serverid);\n> > + bool result = false;\n> > +\n> > + if (!superuser())\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> > + errmsg(\"must be superuser to discard open connections\")));\n> > +\n> > + if (!ConnectionHash)\n> >\n> > Having said that, it looks like dblink_disconnect doesn't perform\n> > superuser checks.\n>\n> Also non-superuser (set by SET ROLE or SET SESSION AUTHORIZATION) seems to be able to run SQL using the dblink connection established by superuser. If we didn't think that this is a problem, we also might not need to care about issue even for postgres_fdw.\n\nIMO, we can have superuser checks for postgres_fdw new functions for now.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Jan 2021 14:43:01 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/25 18:13, Bharath Rupireddy wrote:\n> On Mon, Jan 25, 2021 at 1:20 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>> Yeah, connections can be discarded by non-super users using\n>>> postgres_fdw_disconnect_all and postgres_fdw_disconnect. Given the\n>>> fact that a non-super user requires a password to access foreign\n>>> tables [1], IMO a non-super user changing something related to a super\n>>> user makes no sense at all. 
If okay, we can have a check in\n>>> disconnect_cached_connections something like below:\n>>\n>> Also like pg_terminate_backend(), we should disallow non-superuser to disconnect the connections established by other non-superuser if the requesting user is not a member of the other? Or that's overkill because the target to discard is just a connection and it can be established again if necessary?\n> \n> Yes, if required backends can establish the connection again. But my\n> worry is this - a non-super user disconnecting all or a given\n> connection created by a super user?\n\nYes, I was also worried about that. But I found that there are other similar cases, for example,\n\n- a cursor that superuser declared can be closed by non-superuser (set by SET ROLE or SET SESSION AUTHORIZATION) in the same session.\n- a prepared statement that superuser created can be deallocated by non-superuser in the same session.\n\nThis makes me think that it's OK even for non-superuser to disconnect the connections established by superuser in the same session. For now I've not found any real security issue by doing that yet. Thought? Am I missing something?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 25 Jan 2021 18:47:05 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Mon, Jan 25, 2021 at 3:17 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > Yes, if required backends can establish the connection again. But my\n> > worry is this - a non-super user disconnecting all or a given\n> > connection created by a super user?\n>\n> Yes, I was also worried about that. 
But I found that there are other similar cases, for example,\n>\n> - a cursor that superuser declared can be closed by non-superuser (set by SET ROLE or SET SESSION AUTHORIZATION) in the same session.\n> - a prepared statement that superuser created can be deallocated by non-superuser in the same session.\n>\n> This makes me think that it's OK even for non-superuser to disconnect the connections established by superuser in the same session. For now I've not found any real security issue by doing that yet. Thought? Am I missing something?\n\nOh, and added to that list is dblink_disconnect(). I don't know\nwhether there's any security risk if we allow non-superusers to\ndiscard the super users connections. In this case, the super users\nwill just have to re make the connection.\n\n> > For now I'm thinking that it might better to add the restriction like pg_terminate_backend() at first and relax that later if possible. But I'd like hear more opinions about this.\n>\n> I agree. If required we can lift it later, once we get the users using\n> these functions? Maybe we can have a comment near superchecks in\n> disconnect_cached_connections saying, we can lift this in future?\n\nMaybe we can do the opposite of the above that is not doing any\nsuperuser checks in disconnect functions for now, and later if some\nusers complain we can add it? We can leave a comment there that \"As of\nnow we don't see any security risks if a non-super user disconnects\nthe connections made by super users. 
If required, non-supers can be\ndisallowed to disconnect the connections\" ?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Jan 2021 15:58:18 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/25 19:28, Bharath Rupireddy wrote:\n> On Mon, Jan 25, 2021 at 3:17 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>> Yes, if required backends can establish the connection again. But my\n>>> worry is this - a non-super user disconnecting all or a given\n>>> connection created by a super user?\n>>\n>> Yes, I was also worried about that. But I found that there are other similar cases, for example,\n>>\n>> - a cursor that superuser declared can be closed by non-superuser (set by SET ROLE or SET SESSION AUTHORIZATION) in the same session.\n>> - a prepared statement that superuser created can be deallocated by non-superuser in the same session.\n>>\n>> This makes me think that it's OK even for non-superuser to disconnect the connections established by superuser in the same session. For now I've not found any real security issue by doing that yet. Thought? Am I missing something?\n> \n> Oh, and added to that list is dblink_disconnect(). I don't know\n> whether there's any security risk if we allow non-superusers to\n> discard the super users connections.\n\nI guess that's ok because superuser and nonsuperuser are running in the same session. That is, since this is the case where superuser switches to nonsuperuser intentionally, interactions between them are also intentional.\n\nOTOH, if nonsuperuser in one session can affect superuser in another session that way, that would be problematic. 
So, for example, for now pg_stat_activity disallows nonsuperuser to see the query that superuser in another session is running, from it.\n\n\n> In this case, the super users\n> will just have to re make the connection.\n> \n>>> For now I'm thinking that it might better to add the restriction like pg_terminate_backend() at first and relax that later if possible. But I'd like hear more opinions about this.\n>>\n>> I agree. If required we can lift it later, once we get the users using\n>> these functions? Maybe we can have a comment near superchecks in\n>> disconnect_cached_connections saying, we can lift this in future?\n> \n> Maybe we can do the opposite of the above that is not doing any\n> superuser checks in disconnect functions for now, and later if some\n> users complain we can add it?\n\n+1\n\n> We can leave a comment there that \"As of\n> now we don't see any security risks if a non-super user disconnects\n> the connections made by super users. If required, non-supers can be\n> disallowed to disconnct the connections\" ?\n\nYes. Also we should note that that's ok because they are in the same session.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 25 Jan 2021 22:50:06 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Mon, Jan 25, 2021 at 7:20 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2021/01/25 19:28, Bharath Rupireddy wrote:\n> > On Mon, Jan 25, 2021 at 3:17 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>> Yes, if required backends can establish the connection again. But my\n> >>> worry is this - a non-super user disconnecting all or a given\n> >>> connection created by a super user?\n> >>\n> >> Yes, I was also worried about that. 
But I found that there are other similar cases, for example,\n> >>\n> >> - a cursor that superuser declared can be closed by non-superuser (set by SET ROLE or SET SESSION AUTHORIZATION) in the same session.\n> >> - a prepared statement that superuser created can be deallocated by non-superuser in the same session.\n> >>\n> >> This makes me think that it's OK even for non-superuser to disconnect the connections established by superuser in the same session. For now I've not found any real security issue by doing that yet. Thought? Am I missing something?\n> >\n> > Oh, and added to that list is dblink_disconnect(). I don't know\n> > whether there's any security risk if we allow non-superusers to\n> > discard the super users connections.\n>\n> I guess that's ok because superuser and nonsuperuser are running in the same session. That is, since this is the case where superuser switches to nonsuperuser intentionally, interactions between them is also intentional.\n>\n> OTOH, if nonsuperuser in one session can affect superuser in another session that way, which would be problematic. So, for example, for now pg_stat_activity disallows nonsuperuser to see the query that superuser in another session is running, from it.\n\nHmm, that makes sense.\n\n> > In this case, the super users\n> > will just have to re make the connection.\n> >\n> >>> For now I'm thinking that it might better to add the restriction like pg_terminate_backend() at first and relax that later if possible. But I'd like hear more opinions about this.\n> >>\n> >> I agree. If required we can lift it later, once we get the users using\n> >> these functions? 
Maybe we can have a comment near superchecks in\n> >> disconnect_cached_connections saying, we can lift this in future?\n> >\n> > Maybe we can do the opposite of the above that is not doing any\n> > superuser checks in disconnect functions for now, and later if some\n> > users complain we can add it?\n>\n> +1\n\nThanks, will send the updated patch set soon.\n\n> > We can leave a comment there that \"As of\n> > now we don't see any security risks if a non-super user disconnects\n> > the connections made by super users. If required, non-supers can be\n> > disallowed to disconnct the connections\" ?\n>\n> Yes. Also we should note that that's ok because they are in the same session.\n\nI will add this comment in disconnect_cached_connections so that we\ndon't lose track of it.\n\nI will provide the updated patch set soon.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Jan 2021 19:28:18 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Mon, Jan 25, 2021 at 7:28 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> I will provide the updated patch set soon.\n\nAttaching v17 patch set, please review it further.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 25 Jan 2021 20:42:39 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On 2021/01/26 0:12, Bharath Rupireddy wrote:\n> On Mon, Jan 25, 2021 at 7:28 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> I will provide the updated patch set soon.\n> \n> Attaching v17 patch set, 
please review it further.\n\nThanks for updating the patch!\n\nAttached is the tweaked version of the patch. I didn't change any logic,\nbut I updated some comments and docs. Also I added the regression test\nto check that postgres_fdw_disconnect() closes multiple connections.\nBarring any objection, I will commit this version.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 26 Jan 2021 04:08:27 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Tue, Jan 26, 2021 at 12:38 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> > Attaching v17 patch set, please review it further.\n>\n> Thanks for updating the patch!\n>\n> Attached is the tweaked version of the patch. I didn't change any logic,\n> but I updated some comments and docs. Also I added the regression test\n> to check that postgres_fdw_disconnect() closes multiple connections.\n> Barring any objection, I will commit this version.\n\nThanks. 
The patch LGTM, except a few typos:\n1) in the commit message \"a warning messsage is emitted.\" it's\n\"message\" not \"messsage\".\n2) in the documentation \"+ a user mapping, the correspoinding\nconnections are closed.\" it's \"corresponding\" not \"correspoinding\".\n\nI will post \"keep_connections\" GUC and \"keep_connection\" server level\noption patches later.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 26 Jan 2021 08:38:02 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/26 12:08, Bharath Rupireddy wrote:\n> On Tue, Jan 26, 2021 at 12:38 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>>> Attaching v17 patch set, please review it further.\n>>\n>> Thanks for updating the patch!\n>>\n>> Attached is the tweaked version of the patch. I didn't change any logic,\n>> but I updated some comments and docs. Also I added the regression test\n>> to check that postgres_fdw_disconnect() closes multiple connections.\n>> Barring any objection, I will commit this version.\n> \n> Thanks. The patch LGTM, except a few typos:\n> 1) in the commit message \"a warning messsage is emitted.\" it's\n> \"message\" not \"messsage\".\n> 2) in the documentation \"+ a user mapping, the correspoinding\n> connections are closed.\" it's \"corresponding\" not \"correspoinding\".\n\nThanks for the review! 
I fixed them and pushed the patch!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 26 Jan 2021 15:38:33 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> Thanks for the review! I fixed them and pushed the patch!\n\nBuildfarm is very not happy ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 Jan 2021 02:05:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/26 16:05, Tom Lane wrote:\n> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>> Thanks for the review! I fixed them and pushed the patch!\n> \n> Buildfarm is very not happy ...\n\nYes.... I'm investigating that.\n\n -- Return false as connections are still in use, warnings are issued.\n SELECT postgres_fdw_disconnect_all();\n-WARNING: cannot close dropped server connection because it is still in use\n-WARNING: cannot close connection for server \"loopback\" because it is still in use\n WARNING: cannot close connection for server \"loopback2\" because it is still in use\n+WARNING: cannot close connection for server \"loopback\" because it is still in use\n+WARNING: cannot close dropped server connection because it is still in use\n\nThe cause of the regression test failure is that the order of warning messages\nis not stable. 
So I'm thinking to set client_min_messages to ERROR temporarily\nwhen doing the above test.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 26 Jan 2021 16:24:43 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Tue, Jan 26, 2021 at 12:54 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> On 2021/01/26 16:05, Tom Lane wrote:\n> > Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> >> Thanks for the review! I fixed them and pushed the patch!\n> >\n> > Buildfarm is very not happy ...\n>\n> Yes.... I'm investigating that.\n>\n> -- Return false as connections are still in use, warnings are issued.\n> SELECT postgres_fdw_disconnect_all();\n> -WARNING: cannot close dropped server connection because it is still in use\n> -WARNING: cannot close connection for server \"loopback\" because it is still in use\n> WARNING: cannot close connection for server \"loopback2\" because it is still in use\n> +WARNING: cannot close connection for server \"loopback\" because it is still in use\n> +WARNING: cannot close dropped server connection because it is still in use\n>\n> The cause of the regression test failure is that the order of warning messages\n> is not stable. So I'm thinking to set client_min_messages to ERROR temporarily\n> when doing the above test.\n\nLooks like we do suppress warnings/notices by setting\nclient_min_messages to ERROR/WARNING. 
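[Editor's note: a minimal sketch of the stabilization being discussed, in the style of the regression test shown above; the exact placement in postgres_fdw's test file is hypothetical.]

```sql
-- The WARNING order is not stable across platforms, so silence the
-- warnings and rely on the function's return value (false) instead.
SET client_min_messages = 'ERROR';
SELECT postgres_fdw_disconnect_all();
RESET client_min_messages;
```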
For instance, \"suppress warning\nthat depends on wal_level\" and \"Suppress NOTICE messages when\nusers/groups don't exist\".\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 26 Jan 2021 13:03:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/26 16:33, Bharath Rupireddy wrote:\n> On Tue, Jan 26, 2021 at 12:54 PM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>> On 2021/01/26 16:05, Tom Lane wrote:\n>>> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>>>> Thanks for the review! I fixed them and pushed the patch!\n>>>\n>>> Buildfarm is very not happy ...\n>>\n>> Yes.... I'm investigating that.\n>>\n>> -- Return false as connections are still in use, warnings are issued.\n>> SELECT postgres_fdw_disconnect_all();\n>> -WARNING: cannot close dropped server connection because it is still in use\n>> -WARNING: cannot close connection for server \"loopback\" because it is still in use\n>> WARNING: cannot close connection for server \"loopback2\" because it is still in use\n>> +WARNING: cannot close connection for server \"loopback\" because it is still in use\n>> +WARNING: cannot close dropped server connection because it is still in use\n>>\n>> The cause of the regression test failure is that the order of warning messages\n>> is not stable. So I'm thinking to set client_min_messages to ERROR temporarily\n>> when doing the above test.\n> \n> Looks like we do suppress warnings/notices by setting\n> client_min_messages to ERROR/WARNING. 
For instance, \"suppress warning\n> that depends on wal_level\" and \"Suppress NOTICE messages when\n> users/groups don't exist\".\n\nYes, so I pushed that change to stabilize the regression test.\nLet's keep checking how the results of buildfarm members are changed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 26 Jan 2021 16:39:54 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/26 16:39, Fujii Masao wrote:\n> \n> \n> On 2021/01/26 16:33, Bharath Rupireddy wrote:\n>> On Tue, Jan 26, 2021 at 12:54 PM Fujii Masao\n>> <masao.fujii@oss.nttdata.com> wrote:\n>>> On 2021/01/26 16:05, Tom Lane wrote:\n>>>> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>>>>> Thanks for the review! I fixed them and pushed the patch!\n>>>>\n>>>> Buildfarm is very not happy ...\n>>>\n>>> Yes.... I'm investigating that.\n>>>\n>>>    -- Return false as connections are still in use, warnings are issued.\n>>>    SELECT postgres_fdw_disconnect_all();\n>>> -WARNING:  cannot close dropped server connection because it is still in use\n>>> -WARNING:  cannot close connection for server \"loopback\" because it is still in use\n>>>    WARNING:  cannot close connection for server \"loopback2\" because it is still in use\n>>> +WARNING:  cannot close connection for server \"loopback\" because it is still in use\n>>> +WARNING:  cannot close dropped server connection because it is still in use\n>>>\n>>> The cause of the regression test failure is that the order of warning messages\n>>> is not stable. So I'm thinking to set client_min_messages to ERROR temporarily\n>>> when doing the above test.\n>>\n>> Looks like we do suppress warnings/notices by setting\n>> client_min_messages to ERROR/WARNING. 
For instance, \"suppress warning\n>> that depends on wal_level\" and  \"Suppress NOTICE messages when\n>> users/groups don't exist\".\n> \n> Yes, so I pushed that change to stabilize the regression test.\n> Let's keep checking how the results of buildfarm members are changed.\n\n+WARNING: roles created by regression test cases should have names starting with \"regress_\"\n CREATE ROLE multi_conn_user2 SUPERUSER;\n+WARNING: roles created by regression test cases should have names starting with \"regress_\"\n\nHmm... another failure happened.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 26 Jan 2021 16:57:39 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Tue, Jan 26, 2021 at 1:27 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > Yes, so I pushed that change to stabilize the regression test.\n> > Let's keep checking how the results of buildfarm members are changed.\n\nSorry, I'm unfamiliar with checking the system status on the build\nfarm website - https://buildfarm.postgresql.org/cgi-bin/show_failures.pl.\nI'm trying to figure that out.\n\n> +WARNING: roles created by regression test cases should have names starting with \"regress_\"\n> CREATE ROLE multi_conn_user2 SUPERUSER;\n> +WARNING: roles created by regression test cases should have names starting with \"regress_\"\n>\n> Hmm... another failure happened.\n\nMy bad. I should have caught that earlier. 
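[Editor's note: a sketch of the rename that the warning is asking for; the regress_-prefixed names here are hypothetical renames of the roles in the failing test.]

```sql
-- Builds with -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS emit a
-- WARNING for unprefixed role names, which breaks the expected output,
-- so regression-test roles need the regress_ prefix.
CREATE ROLE regress_multi_conn_user1 SUPERUSER;
CREATE ROLE regress_multi_conn_user2 SUPERUSER;
```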
I will take care in future.\n\nAttaching a patch to fix it.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 26 Jan 2021 13:37:03 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/26 17:07, Bharath Rupireddy wrote:\n> On Tue, Jan 26, 2021 at 1:27 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>> Yes, so I pushed that change to stabilize the regression test.\n>>> Let's keep checking how the results of buildfarm members are changed.\n> \n> Sorry, I'm unfamiliar with checking the system status on the build\n> farm website - https://buildfarm.postgresql.org/cgi-bin/show_failures.pl.\n> I'm trying to figure that out.\n> \n>> +WARNING: roles created by regression test cases should have names starting with \"regress_\"\n>> CREATE ROLE multi_conn_user2 SUPERUSER;\n>> +WARNING: roles created by regression test cases should have names starting with \"regress_\"\n>>\n>> Hmm... another failure happened.\n> \n> My bad. I should have caught that earlier. I will take care in future.\n> \n> Attaching a patch to fix it.\n\nThanks for the patch! I also created that patch, confirmed that the test\nsuccessfully passed with -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS,\nand pushed the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 26 Jan 2021 17:25:29 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Tue, Jan 26, 2021 at 1:55 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> Thanks for the patch! 
I also created that patch, confirmed that the test\n> successfully passed with -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS,\n> and pushed the patch.\n\nThanks a lot!\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 26 Jan 2021 13:58:48 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Tue, Jan 26, 2021 at 8:38 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> I will post \"keep_connections\" GUC and \"keep_connection\" server level\n> option patches later.\n\nAttaching v19 patch set for \"keep_connections\" GUC and\n\"keep_connection\" server level option. Please review them further.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 27 Jan 2021 06:36:41 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> On Tue, Jan 26, 2021 at 1:55 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> Thanks for the patch! 
I also created that patch, confirmed that the test\n>> successfully passed with -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS,\n>> and pushed the patch.\n\n> Thanks a lot!\n\nSeems you're not out of the woods yet:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=trilobite&dt=2021-01-26%2019%3A59%3A40\n\nThis is a CLOBBER_CACHE_ALWAYS build, so I suspect what it's\ntelling us is that the patch's behavior is unstable in the face\nof unexpected cache flushes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Jan 2021 15:22:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Fri, Jan 29, 2021 at 1:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > On Tue, Jan 26, 2021 at 1:55 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >> Thanks for the patch! I also created that patch, confirmed that the test\n> >> successfully passed with -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS,\n> >> and pushed the patch.\n>\n> > Thanks a lot!\n>\n> Seems you're not out of the woods yet:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=trilobite&dt=2021-01-26%2019%3A59%3A40\n>\n> This is a CLOBBER_CACHE_ALWAYS build, so I suspect what it's\n> telling us is that the patch's behavior is unstable in the face\n> of unexpected cache flushes.\n\nThanks a lot! It looks like the syscache invalidation messages are\ngenerated too frequently with -DCLOBBER_CACHE_ALWAYS build due to\nwhich pgfdw_inval_callback gets called many times in which the cached\nentries are marked as invalid and closed if they are not used in the\ntxn. The new function postgres_fdw_get_connections outputs the\ninformation of the cached connections such as name if the connection\nis still open and their validity. 
Hence the output of the\npostgres_fdw_get_connections became unstable in the buildfarm member.\n\nI will further analyze making tests stable, meanwhile any suggestions\nare welcome.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Jan 2021 06:48:36 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> On Fri, Jan 29, 2021 at 1:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=trilobite&dt=2021-01-26%2019%3A59%3A40\n>> This is a CLOBBER_CACHE_ALWAYS build, so I suspect what it's\n>> telling us is that the patch's behavior is unstable in the face\n>> of unexpected cache flushes.\n\n> Thanks a lot! It looks like the syscache invalidation messages are\n> generated too frequently with -DCLOBBER_CACHE_ALWAYS build due to\n> which pgfdw_inval_callback gets called many times in which the cached\n> entries are marked as invalid and closed if they are not used in the\n> txn. The new function postgres_fdw_get_connections outputs the\n> information of the cached connections such as name if the connection\n> is still open and their validity. Hence the output of the\n> postgres_fdw_get_connections became unstable in the buildfarm member.\n> I will further analyze making tests stable, meanwhile any suggestions\n> are welcome.\n\nI do not think you should regard this as \"we need to hack the test\nto make it stable\". I think you should regard this as \"this is a\nbug\". 
A cache flush should not cause user-visible state changes.\nIn particular, the above analysis implies that you think a cache\nflush is equivalent to end-of-transaction, which it absolutely\nis not.\n\nAlso, now that I've looked at pgfdw_inval_callback, it scares\nthe heck out of me. Actually disconnecting a connection during\na cache inval callback seems quite unsafe --- what if that happens\nwhile we're using the connection?\n\nI fear this patch needs to be reverted and redesigned.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 Jan 2021 21:09:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/29 11:09, Tom Lane wrote:\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n>> On Fri, Jan 29, 2021 at 1:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=trilobite&dt=2021-01-26%2019%3A59%3A40\n>>> This is a CLOBBER_CACHE_ALWAYS build, so I suspect what it's\n>>> telling us is that the patch's behavior is unstable in the face\n>>> of unexpected cache flushes.\n> \n>> Thanks a lot! It looks like the syscache invalidation messages are\n>> generated too frequently with -DCLOBBER_CACHE_ALWAYS build due to\n>> which pgfdw_inval_callback gets called many times in which the cached\n>> entries are marked as invalid and closed if they are not used in the\n>> txn. The new function postgres_fdw_get_connections outputs the\n>> information of the cached connections such as name if the connection\n>> is still open and their validity. Hence the output of the\n>> postgres_fdw_get_connections became unstable in the buildfarm member.\n>> I will further analyze making tests stable, meanwhile any suggestions\n>> are welcome.\n> \n> I do not think you should regard this as \"we need to hack the test\n> to make it stable\". 
I think you should regard this as \"this is a\n> bug\". A cache flush should not cause user-visible state changes.\n> In particular, the above analysis implies that you think a cache\n> flush is equivalent to end-of-transaction, which it absolutely\n> is not.\n> \n> Also, now that I've looked at pgfdw_inval_callback, it scares\n> the heck out of me. Actually disconnecting a connection during\n> a cache inval callback seems quite unsafe --- what if that happens\n> while we're using the connection?\n\nIf the connection is still used in the transaction, pgfdw_inval_callback()\nmarks it as invalidated and doesn't close it. So I was not thinking that\nthis is so unsafe.\n\nThe disconnection code in pgfdw_inval_callback() was added in commit\ne3ebcca843 to fix connection leak issue, and it's back-patched. If this\nchange is really unsafe, we need to revert it immediately at least from back\nbranches because the next minor release is scheduled soon.\n\nBTW, even if we change pgfdw_inval_callback() so that it doesn't close\nthe connection at all, ISTM that the results of postgres_fdw_get_connections()\nwould not be stable because entry->invalidated would vary based on\nwhether CLOBBER_CACHE_ALWAYS is used or not.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 29 Jan 2021 13:58:11 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Fri, Jan 29, 2021 at 10:28 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> On 2021/01/29 11:09, Tom Lane wrote:\n> > Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> >> On Fri, Jan 29, 2021 at 1:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>> 
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=trilobite&dt=2021-01-26%2019%3A59%3A40\n> >>> This is a CLOBBER_CACHE_ALWAYS build, so I suspect what it's\n> >>> telling us is that the patch's behavior is unstable in the face\n> >>> of unexpected cache flushes.\n> >\n> >> Thanks a lot! It looks like the syscache invalidation messages are\n> >> generated too frequently with -DCLOBBER_CACHE_ALWAYS build due to\n> >> which pgfdw_inval_callback gets called many times in which the cached\n> >> entries are marked as invalid and closed if they are not used in the\n> >> txn. The new function postgres_fdw_get_connections outputs the\n> >> information of the cached connections such as name if the connection\n> >> is still open and their validity. Hence the output of the\n> >> postgres_fdw_get_connections became unstable in the buildfarm member.\n> >> I will further analyze making tests stable, meanwhile any suggestions\n> >> are welcome.\n> >\n> > I do not think you should regard this as \"we need to hack the test\n> > to make it stable\". I think you should regard this as \"this is a\n> > bug\". A cache flush should not cause user-visible state changes.\n> > In particular, the above analysis implies that you think a cache\n> > flush is equivalent to end-of-transaction, which it absolutely\n> > is not.\n> >\n> > Also, now that I've looked at pgfdw_inval_callback, it scares\n> > the heck out of me. Actually disconnecting a connection during\n> > a cache inval callback seems quite unsafe --- what if that happens\n> > while we're using the connection?\n>\n> If the connection is still used in the transaction, pgfdw_inval_callback()\n> marks it as invalidated and doesn't close it. So I was not thinking that\n> this is so unsafe.\n>\n> The disconnection code in pgfdw_inval_callback() was added in commit\n> e3ebcca843 to fix connection leak issue, and it's back-patched. 
If this\n> change is really unsafe, we need to revert it immediately at least from back\n> branches because the next minor release is scheduled soon.\n\nI think we can remove disconnect_pg_server in pgfdw_inval_callback and\nmake entries only invalidated. Anyways, those connections can get\nclosed at the end of main txn in pgfdw_xact_callback. Thoughts?\n\nIf okay, I can make a patch for this.\n\n> BTW, even if we change pgfdw_inval_callback() so that it doesn't close\n> the connection at all, ISTM that the results of postgres_fdw_get_connections()\n> would not be stable because entry->invalidated would vary based on\n> whether CLOBBER_CACHE_ALWAYS is used or not.\n\nYes, after the above change (removing disconnect_pg_server in\npgfdw_inval_callback), our tests still won't be stable because\npostgres_fdw_get_connections shows the valid state of the connections.\nI think we can change postgres_fdw_get_connections so that it only\nshows the server names of the active connections, not their valid\nstate. The valid state depends on internal state changes that users\ndon't expect, and yet we are exposing it to them. Thoughts?\n\nIf okay, I can work on the patch for this.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Jan 2021 10:42:54 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Fri, Jan 29, 2021 at 10:42 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > Also, now that I've looked at pgfdw_inval_callback, it scares\n> > > the heck out of me.
Actually disconnecting a connection during\n> > > a cache inval callback seems quite unsafe --- what if that happens\n> > > while we're using the connection?\n> >\n> > If the connection is still used in the transaction, pgfdw_inval_callback()\n> > marks it as invalidated and doesn't close it. So I was not thinking that\n> > this is so unsafe.\n> >\n> > The disconnection code in pgfdw_inval_callback() was added in commit\n> > e3ebcca843 to fix connection leak issue, and it's back-patched. If this\n> > change is really unsafe, we need to revert it immediately at least from back\n> > branches because the next minor release is scheduled soon.\n>\n> I think we can remove disconnect_pg_server in pgfdw_inval_callback and\n> make entries only invalidated. Anyways, those connections can get\n> closed at the end of main txn in pgfdw_xact_callback. Thoughts?\n>\n> If okay, I can make a patch for this.\n\nAttaching a patch for this, which can be back patched.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 29 Jan 2021 10:55:35 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/29 14:12, Bharath Rupireddy wrote:\n> On Fri, Jan 29, 2021 at 10:28 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>> On 2021/01/29 11:09, Tom Lane wrote:\n>>> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n>>>> On Fri, Jan 29, 2021 at 1:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=trilobite&dt=2021-01-26%2019%3A59%3A40\n>>>>> This is a CLOBBER_CACHE_ALWAYS build, so I suspect what it's\n>>>>> telling us is that the patch's behavior is unstable in the face\n>>>>> of unexpected cache flushes.\n>>>\n>>>> Thanks a lot! 
It looks like the syscache invalidation messages are\n>>>> generated too frequently with -DCLOBBER_CACHE_ALWAYS build due to\n>>>> which pgfdw_inval_callback gets called many times in which the cached\n>>>> entries are marked as invalid and closed if they are not used in the\n>>>> txn. The new function postgres_fdw_get_connections outputs the\n>>>> information of the cached connections such as name if the connection\n>>>> is still open and their validity. Hence the output of the\n>>>> postgres_fdw_get_connections became unstable in the buildfarm member.\n>>>> I will further analyze making tests stable, meanwhile any suggestions\n>>>> are welcome.\n>>>\n>>> I do not think you should regard this as \"we need to hack the test\n>>> to make it stable\". I think you should regard this as \"this is a\n>>> bug\". A cache flush should not cause user-visible state changes.\n>>> In particular, the above analysis implies that you think a cache\n>>> flush is equivalent to end-of-transaction, which it absolutely\n>>> is not.\n>>>\n>>> Also, now that I've looked at pgfdw_inval_callback, it scares\n>>> the heck out of me. Actually disconnecting a connection during\n>>> a cache inval callback seems quite unsafe --- what if that happens\n>>> while we're using the connection?\n>>\n>> If the connection is still used in the transaction, pgfdw_inval_callback()\n>> marks it as invalidated and doesn't close it. So I was not thinking that\n>> this is so unsafe.\n>>\n>> The disconnection code in pgfdw_inval_callback() was added in commit\n>> e3ebcca843 to fix connection leak issue, and it's back-patched. If this\n>> change is really unsafe, we need to revert it immediately at least from back\n>> branches because the next minor release is scheduled soon.\n> \n> I think we can remove disconnect_pg_server in pgfdw_inval_callback and\n> make entries only invalidated. Anyways, those connections can get\n> closed at the end of main txn in pgfdw_xact_callback. 
Thoughts?\n\nBut this revives the connection leak issue. So isn't it better\nto do that after we confirm that the current code is really unsafe?\n\n> \n> If okay, I can make a patch for this.\n> \n>> BTW, even if we change pgfdw_inval_callback() so that it doesn't close\n>> the connection at all, ISTM that the results of postgres_fdw_get_connections()\n>> would not be stable because entry->invalidated would vary based on\n>> whether CLOBBER_CACHE_ALWAYS is used or not.\n> \n> Yes, after the above change (removing disconnect_pg_server in\n> pgfdw_inval_callback), our tests still won't be stable because\n> postgres_fdw_get_connections shows the valid state of the connections.\n> I think we can change postgres_fdw_get_connections so that it only\n> shows the server names of the active connections, not their valid\n> state. The valid state depends on internal state changes that users\n> don't expect, and yet we are exposing it to them. Thoughts?\n\nI don't think that's enough because even the following simple\nqueries return different results, depending on whether\nCLOBBER_CACHE_ALWAYS is used or not.\n\n SELECT * FROM ft6; -- ft6 is the foreign table\n SELECT server_name FROM postgres_fdw_get_connections();\n\nWhen CLOBBER_CACHE_ALWAYS is used, postgres_fdw_get_connections()\nreturns no records because the connection is marked as invalidated,\nand then closed at the xact callback in the SELECT query.
Otherwise,\npostgres_fdw_get_connections() returns at least one connection that\nwas established in the SELECT query.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 29 Jan 2021 14:25:48 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Fri, Jan 29, 2021 at 10:55 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> On 2021/01/29 14:12, Bharath Rupireddy wrote:\n> > On Fri, Jan 29, 2021 at 10:28 AM Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote:\n> >> On 2021/01/29 11:09, Tom Lane wrote:\n> >>> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> >>>> On Fri, Jan 29, 2021 at 1:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=trilobite&dt=2021-01-26%2019%3A59%3A40\n> >>>>> This is a CLOBBER_CACHE_ALWAYS build, so I suspect what it's\n> >>>>> telling us is that the patch's behavior is unstable in the face\n> >>>>> of unexpected cache flushes.\n> >>>\n> >>>> Thanks a lot! It looks like the syscache invalidation messages are\n> >>>> generated too frequently with -DCLOBBER_CACHE_ALWAYS build due to\n> >>>> which pgfdw_inval_callback gets called many times in which the cached\n> >>>> entries are marked as invalid and closed if they are not used in the\n> >>>> txn. The new function postgres_fdw_get_connections outputs the\n> >>>> information of the cached connections such as name if the connection\n> >>>> is still open and their validity. 
Hence the output of the\n> >>>> postgres_fdw_get_connections became unstable in the buildfarm member.\n> >>>> I will further analyze making tests stable, meanwhile any suggestions\n> >>>> are welcome.\n> >>>\n> >>> I do not think you should regard this as \"we need to hack the test\n> >>> to make it stable\". I think you should regard this as \"this is a\n> >>> bug\". A cache flush should not cause user-visible state changes.\n> >>> In particular, the above analysis implies that you think a cache\n> >>> flush is equivalent to end-of-transaction, which it absolutely\n> >>> is not.\n> >>>\n> >>> Also, now that I've looked at pgfdw_inval_callback, it scares\n> >>> the heck out of me. Actually disconnecting a connection during\n> >>> a cache inval callback seems quite unsafe --- what if that happens\n> >>> while we're using the connection?\n> >>\n> >> If the connection is still used in the transaction, pgfdw_inval_callback()\n> >> marks it as invalidated and doesn't close it. So I was not thinking that\n> >> this is so unsafe.\n> >>\n> >> The disconnection code in pgfdw_inval_callback() was added in commit\n> >> e3ebcca843 to fix connection leak issue, and it's back-patched. If this\n> >> change is really unsafe, we need to revert it immediately at least from back\n> >> branches because the next minor release is scheduled soon.\n> >\n> > I think we can remove disconnect_pg_server in pgfdw_inval_callback and\n> > make entries only invalidated. Anyways, those connections can get\n> > closed at the end of main txn in pgfdw_xact_callback. Thoughts?\n>\n> But this revives the connection leak issue. 
So isn't it better to\n> to do that after we confirm that the current code is really unsafe?\n\nIMO, connections will not leak, because the invalidated connections\neventually will get closed in pgfdw_xact_callback at the main txn end.\n\nIIRC, when we were finding a way to close the invalidated connections\nso that they don't leaked, we had two options:\n\n1) let those connections (whether currently being used in the xact or\nnot) get marked invalidated in pgfdw_inval_callback and closed in\npgfdw_xact_callback at the main txn end as shown below\n\n if (PQstatus(entry->conn) != CONNECTION_OK ||\n PQtransactionStatus(entry->conn) != PQTRANS_IDLE ||\n entry->changing_xact_state ||\n entry->invalidated). ----> by adding this\n {\n elog(DEBUG3, \"discarding connection %p\", entry->conn);\n disconnect_pg_server(entry);\n }\n\n2) close the unused connections right away in pgfdw_inval_callback\ninstead of marking them invalidated. Mark used connections as\ninvalidated in pgfdw_inval_callback and close them in\npgfdw_xact_callback at the main txn end.\n\nWe went with option (2) because we thought this would ease some burden\non pgfdw_xact_callback closing a lot of invalid connections at once.\n\nHope that's fine.\n\nI will respond to postgres_fdw_get_connections issue separately.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Jan 2021 11:08:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Fri, Jan 29, 2021 at 11:08 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Jan 29, 2021 at 10:55 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n> > On 2021/01/29 14:12, Bharath Rupireddy wrote:\n> > > On Fri, Jan 29, 2021 at 10:28 AM Fujii Masao\n> > > <masao.fujii@oss.nttdata.com> 
wrote:\n> > >> On 2021/01/29 11:09, Tom Lane wrote:\n> > >>> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > >>>> On Fri, Jan 29, 2021 at 1:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >>>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=trilobite&dt=2021-01-26%2019%3A59%3A40\n> > >>>>> This is a CLOBBER_CACHE_ALWAYS build, so I suspect what it's\n> > >>>>> telling us is that the patch's behavior is unstable in the face\n> > >>>>> of unexpected cache flushes.\n> > >>>\n> > >>>> Thanks a lot! It looks like the syscache invalidation messages are\n> > >>>> generated too frequently with -DCLOBBER_CACHE_ALWAYS build due to\n> > >>>> which pgfdw_inval_callback gets called many times in which the cached\n> > >>>> entries are marked as invalid and closed if they are not used in the\n> > >>>> txn. The new function postgres_fdw_get_connections outputs the\n> > >>>> information of the cached connections such as name if the connection\n> > >>>> is still open and their validity. Hence the output of the\n> > >>>> postgres_fdw_get_connections became unstable in the buildfarm member.\n> > >>>> I will further analyze making tests stable, meanwhile any suggestions\n> > >>>> are welcome.\n> > >>>\n> > >>> I do not think you should regard this as \"we need to hack the test\n> > >>> to make it stable\". I think you should regard this as \"this is a\n> > >>> bug\". A cache flush should not cause user-visible state changes.\n> > >>> In particular, the above analysis implies that you think a cache\n> > >>> flush is equivalent to end-of-transaction, which it absolutely\n> > >>> is not.\n> > >>>\n> > >>> Also, now that I've looked at pgfdw_inval_callback, it scares\n> > >>> the heck out of me.
Actually disconnecting a connection during\n> > >>> a cache inval callback seems quite unsafe --- what if that happens\n> > >>> while we're using the connection?\n> > >>\n> > >> If the connection is still used in the transaction, pgfdw_inval_callback()\n> > >> marks it as invalidated and doesn't close it. So I was not thinking that\n> > >> this is so unsafe.\n> > >>\n> > >> The disconnection code in pgfdw_inval_callback() was added in commit\n> > >> e3ebcca843 to fix connection leak issue, and it's back-patched. If this\n> > >> change is really unsafe, we need to revert it immediately at least from back\n> > >> branches because the next minor release is scheduled soon.\n> > >\n> > > I think we can remove disconnect_pg_server in pgfdw_inval_callback and\n> > > make entries only invalidated. Anyways, those connections can get\n> > > closed at the end of main txn in pgfdw_xact_callback. Thoughts?\n> >\n> > But this revives the connection leak issue. So isn't it better to\n> > to do that after we confirm that the current code is really unsafe?\n>\n> IMO, connections will not leak, because the invalidated connections\n> eventually will get closed in pgfdw_xact_callback at the main txn end.\n>\n> IIRC, when we were finding a way to close the invalidated connections\n> so that they don't leaked, we had two options:\n>\n> 1) let those connections (whether currently being used in the xact or\n> not) get marked invalidated in pgfdw_inval_callback and closed in\n> pgfdw_xact_callback at the main txn end as shown below\n>\n> if (PQstatus(entry->conn) != CONNECTION_OK ||\n> PQtransactionStatus(entry->conn) != PQTRANS_IDLE ||\n> entry->changing_xact_state ||\n> entry->invalidated). ----> by adding this\n> {\n> elog(DEBUG3, \"discarding connection %p\", entry->conn);\n> disconnect_pg_server(entry);\n> }\n>\n> 2) close the unused connections right away in pgfdw_inval_callback\ninstead of marking them invalidated.
Mark used connections as\n> invalidated in pgfdw_inval_callback and close them in\n> pgfdw_xact_callback at the main txn end.\n>\n> We went with option (2) because we thought this would ease some burden\n> on pgfdw_xact_callback closing a lot of invalid connections at once.\n\nAlso, note that the original patch for the connection leak issue just does\noption (1), see [1]. But in [2] and [3], we chose option (2).\n\nI feel we can go for option (1), with the patch attached in [1] i.e.\nhaving have_invalid_connections whenever any connection gets invalidated\nso that we don't quickly exit in pgfdw_xact_callback and the\ninvalidated connections get closed properly. Thoughts?\n\nstatic void\npgfdw_xact_callback(XactEvent event, void *arg)\n{\n HASH_SEQ_STATUS scan;\n ConnCacheEntry *entry;\n\n /* Quick exit if no connections were touched in this transaction. */\n if (!xact_got_connection)\n return;\n\n[1] https://www.postgresql.org/message-id/CALj2ACVNcGH_6qLY-4_tXz8JLvA%2B4yeBThRfxMz7Oxbk1aHcpQ%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/f57dd9c3-0664-5f4c-41f0-0713047ae7b7%40oss.nttdata.com\n[3] https://www.postgresql.org/message-id/CALj2ACVNjV1%2B72f3nVCngC7RsGSiGXZQ2mAzYx_Dij7oJpV8iA%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Jan 2021 11:16:54 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Fri, Jan 29, 2021 at 10:55 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> >> BTW, even if we change pgfdw_inval_callback() so that it doesn't close\n> >> the connection at all, ISTM that the results of postgres_fdw_get_connections()\n> >> would not be stable because entry->invalidated would vary based on\n> >> whether CLOBBER_CACHE_ALWAYS is used or not.\n> >\n> > Yes, after the
above change (removing disconnect_pg_server in\n> > pgfdw_inval_callback), our tests don't get stable because\n> > postgres_fdw_get_connections shows the valid state of the connections.\n> > I think we can change postgres_fdw_get_connections so that it only\n> > shows the active connections server name but not valid state. Because,\n> > the valid state is something dependent on the internal state change\n> > and is not consistent with the user expectation but we are exposing it\n> > to the user. Thoughts?\n>\n> I don't think that's enough because even the following simple\n> queries return the different results, depending on whether\n> CLOBBER_CACHE_ALWAYS is used or not.\n>\n> SELECT * FROM ft6; -- ft6 is the foreign table\n> SELECT server_name FROM postgres_fdw_get_connections();\n>\n> When CLOBBER_CACHE_ALWAYS is used, postgres_fdw_get_connections()\n> returns no records because the connection is marked as invalidated,\n> and then closed at xact callback in SELECT query. Otherwise,\n> postgres_fdw_get_connections() returns at least one connection that\n> was established in the SELECT query.\n\nRight.
In that case, after changing postgres_fdw_get_connections() so\nthat it doesn't output the valid state of the connections at all, we\ncan have all the new function test cases inside an explicit txn block.\nSo even if the clobber cache invalidates the connections, they don't\nget closed until the end of main xact, the tests will be stable.\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Jan 2021 11:23:17 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/29 14:53, Bharath Rupireddy wrote:\n> On Fri, Jan 29, 2021 at 10:55 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>>>> BTW, even if we change pgfdw_inval_callback() so that it doesn't close\n>>>> the connection at all, ISTM that the results of postgres_fdw_get_connections()\n>>>> would not be stable because entry->invalidated would vary based on\n>>>> whether CLOBBER_CACHE_ALWAYS is used or not.\n>>>\n>>> Yes, after the above change (removing disconnect_pg_server in\n>>> pgfdw_inval_callback), our tests don't get stable because\n>>> postgres_fdw_get_connections shows the valid state of the connections.\n>>> I think we can change postgres_fdw_get_connections so that it only\n>>> shows the active connections server name but not valid state. Because,\n>>> the valid state is something dependent on the internal state change\n>>> and is not consistent with the user expectation but we are exposing it\n>>> to the user.
Thoughts?\n>>\n>> I don't think that's enough because even the following simple\n>> queries return the different results, depending on whether\n>> CLOBBER_CACHE_ALWAYS is used or not.\n>>\n>> SELECT * FROM ft6; -- ft6 is the foreign table\n>> SELECT server_name FROM postgres_fdw_get_connections();\n>>\n>> When CLOBBER_CACHE_ALWAYS is used, postgres_fdw_get_connections()\n>> returns no records because the connection is marked as invalidated,\n>> and then closed at xact callback in SELECT query. Otherwise,\n>> postgres_fdw_get_connections() returns at least one connection that\n>> was established in the SELECT query.\n> \n> Right. In that case, after changing postgres_fdw_get_connections() so\n> that it doesn't output the valid state of the connections at all, we\n\nYou're thinking to get rid of \"valid\" column? Or hide it from the test query\n(e.g., SELECT server_name from postgres_fdw_get_connections())?\n\n> can have all the new function test cases inside an explicit txn block.\n> So even if the clobber cache invalidates the connections, they don't\n> get closed until the end of main xact, the tests will be stable.\n> Thoughts?\n\nAlso if there are cached connections before starting that transaction,\nthey should be closed or established again before executing\npostgres_fdw_get_connections().
Otherwise, those connections are\nreturned from postgres_fdw_get_connections() when\nCLOBBER_CACHE_ALWAYS is not used, but not when it's used.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 29 Jan 2021 15:08:04 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Fri, Jan 29, 2021 at 11:38 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> On 2021/01/29 14:53, Bharath Rupireddy wrote:\n> > On Fri, Jan 29, 2021 at 10:55 AM Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote:\n> >>>> BTW, even if we change pgfdw_inval_callback() so that it doesn't close\n> >>>> the connection at all, ISTM that the results of postgres_fdw_get_connections()\n> >>>> would not be stable because entry->invalidated would vary based on\n> >>>> whether CLOBBER_CACHE_ALWAYS is used or not.\n> >>>\n> >>> Yes, after the above change (removing disconnect_pg_server in\n> >>> pgfdw_inval_callback), our tests don't get stable because\n> >>> postgres_fdw_get_connections shows the valid state of the connections.\n> >>> I think we can change postgres_fdw_get_connections so that it only\n> >>> shows the active connections server name but not valid state. Because,\n> >>> the valid state is something dependent on the internal state change\n> >>> and is not consistent with the user expectation but we are exposing it\n> >>> to the user.
Thoughts?\n> >>\n> >> I don't think that's enough because even the following simple\n> >> queries return the different results, depending on whether\n> >> CLOBBER_CACHE_ALWAYS is used or not.\n> >>\n> >> SELECT * FROM ft6; -- ft6 is the foreign table\n> >> SELECT server_name FROM postgres_fdw_get_connections();\n> >>\n> >> When CLOBBER_CACHE_ALWAYS is used, postgres_fdw_get_connections()\n> >> returns no records because the connection is marked as invalidated,\n> >> and then closed at xact callback in SELECT query. Otherwise,\n> >> postgres_fdw_get_connections() returns at least one connection that\n> >> was established in the SELECT query.\n> >\n> > Right. In that case, after changing postgres_fdw_get_connections() so\n> > that it doesn't output the valid state of the connections at all, we\n>\n> You're thinking to get rid of \"valid\" column? Or hide it from the test query\n> (e.g., SELECT server_name from postgres_fdw_get_connections())?\n\nI'm thinking we can get rid of the \"valid\" column from the\npostgres_fdw_get_connections() function, not from the tests. Seems\nlike we are exposing some internal state (connection is valid or not)\nwhich can change because of internal events. And also with the\nexisting postgres_fdw_get_connections(), the valid will always be true\nif the user calls postgres_fdw_get_connections() outside an explicit\nxact block, it can become false only when it's used in an explicit txn\nblock. So, the valid column may not be very useful for the user.\nThoughts?\n\n> > can have all the new function test cases inside an explicit txn block.\n> > So even if the clobber cache invalidates the connections, they don't\n> > get closed until the end of main xact, the tests will be stable.\n> > Thoughts?\n>\n> Also if there are cached connections before starting that transaction,\n> they should be closed or established again before executing\n> postgres_fdw_get_connections().
Otherwise, those connections are\n> returned from postgres_fdw_get_connections() when\n> CLOBBER_CACHE_ALWAYS is not used, but not when it's used.\n\nYes, we need to move the test to the place where cache wouldn't have\nbeen initialized yet or no foreign connection has been made yet in the\nsession.\n\nALTER FOREIGN TABLE ft2 ALTER COLUMN c1 OPTIONS (column_name 'C 1');\n\\det+\n\n<<<<<<<<<<<<MAY BE HERE>>>>>>>>>>>>\n\n-- Test that alteration of server options causes reconnection\n-- Remote's errors might be non-English, so hide them to ensure stable results\n\\set VERBOSITY terse\nSELECT c3, c4 FROM ft1 ORDER BY c3, c1 LIMIT 1; -- should work\nALTER SERVER loopback OPTIONS (SET dbname 'no such database');\nSELECT c3, c4 FROM ft1 ORDER BY c3, c1 LIMIT 1; -- should fail\nDO $d$\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Jan 2021 11:49:16 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/29 14:46, Bharath Rupireddy wrote:\n> On Fri, Jan 29, 2021 at 11:08 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Fri, Jan 29, 2021 at 10:55 AM Fujii Masao\n>> <masao.fujii@oss.nttdata.com> wrote:\n>>> On 2021/01/29 14:12, Bharath Rupireddy wrote:\n>>>> On Fri, Jan 29, 2021 at 10:28 AM Fujii Masao\n>>>> <masao.fujii@oss.nttdata.com> wrote:\n>>>>> On 2021/01/29 11:09, Tom Lane wrote:\n>>>>>> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n>>>>>>> On Fri, Jan 29, 2021 at 1:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>>>>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=trilobite&dt=2021-01-26%2019%3A59%3A40\n>>>>>>>> This is a CLOBBER_CACHE_ALWAYS build, so I suspect what it's\n>>>>>>>> telling us is that the patch's behavior is unstable in the
face\n>>>>>> of unexpected cache flushes.\n>>>>>>\n>>>>>>> Thanks a lot! It looks like the syscache invalidation messages are\n>>>>>>> generated too frequently with -DCLOBBER_CACHE_ALWAYS build due to\n>>>>>>> which pgfdw_inval_callback gets called many times in which the cached\n>>>>>>> entries are marked as invalid and closed if they are not used in the\n>>>>>>> txn. The new function postgres_fdw_get_connections outputs the\n>>>>>>> information of the cached connections such as name if the connection\n>>>>>>> is still open and their validity. Hence the output of the\n>>>>>>> postgres_fdw_get_connections became unstable in the buildfarm member.\n>>>>>>> I will further analyze making tests stable, meanwhile any suggestions\n>>>>>>> are welcome.\n>>>>>>\n>>>>>> I do not think you should regard this as \"we need to hack the test\n>>>>>> to make it stable\". I think you should regard this as \"this is a\n>>>>>> bug\". A cache flush should not cause user-visible state changes.\n>>>>>> In particular, the above analysis implies that you think a cache\n>>>>>> flush is equivalent to end-of-transaction, which it absolutely\n>>>>>> is not.\n>>>>>>\n>>>>>> Also, now that I've looked at pgfdw_inval_callback, it scares\n>>>>>> the heck out of me. Actually disconnecting a connection during\n>>>>>> a cache inval callback seems quite unsafe --- what if that happens\n>>>>>> while we're using the connection?\n>>>>>\n>>>>> If the connection is still used in the transaction, pgfdw_inval_callback()\n>>>>> marks it as invalidated and doesn't close it. So I was not thinking that\n>>>>> this is so unsafe.\n>>>>>\n>>>>> The disconnection code in pgfdw_inval_callback() was added in commit\n>>>>> e3ebcca843 to fix connection leak issue, and it's back-patched.
If this\n>>>>> change is really unsafe, we need to revert it immediately at least from back\n>>>>> branches because the next minor release is scheduled soon.\n>>>>\n>>>> I think we can remove disconnect_pg_server in pgfdw_inval_callback and\n>>>> make entries only invalidated. Anyways, those connections can get\n>>>> closed at the end of main txn in pgfdw_xact_callback. Thoughts?\n>>>\n>>> But this revives the connection leak issue. So isn't it better to\n>>> to do that after we confirm that the current code is really unsafe?\n>>\n>> IMO, connections will not leak, because the invalidated connections\n>> eventually will get closed in pgfdw_xact_callback at the main txn end.\n>>\n>> IIRC, when we were finding a way to close the invalidated connections\n>> so that they don't leaked, we had two options:\n>>\n>> 1) let those connections (whether currently being used in the xact or\n>> not) get marked invalidated in pgfdw_inval_callback and closed in\n>> pgfdw_xact_callback at the main txn end as shown below\n>>\n>> if (PQstatus(entry->conn) != CONNECTION_OK ||\n>> PQtransactionStatus(entry->conn) != PQTRANS_IDLE ||\n>> entry->changing_xact_state ||\n>> entry->invalidated). ----> by adding this\n>> {\n>> elog(DEBUG3, \"discarding connection %p\", entry->conn);\n>> disconnect_pg_server(entry);\n>> }\n>>\n>> 2) close the unused connections right away in pgfdw_inval_callback\n>> instead of marking them invalidated. Mark used connections as\n>> invalidated in pgfdw_inval_callback and close them in\n>> pgfdw_xact_callback at the main txn end.\n>>\n>> We went with option (2) because we thought this would ease some burden\n>> on pgfdw_xact_callback closing a lot of invalid connections at once.\n> \n> Also, see the original patch for the connection leak issue just does\n> option (1), see [1].
But in [2] and [3], we chose option (2).\n> \n> I feel, we can go for option (1), with the patch attached in [1] i.e.\n> having have_invalid_connections whenever any connection gets invalided\n> so that we don't quickly exit in pgfdw_xact_callback and the\n> invalidated connections get closed properly. Thoughts?\n\nBefore going for (1) or something, I'd like to understand what the actual\nissue of (2), i.e., the current code is. Otherwise other approaches might\nhave the same issue.\n\n\nRegarding (1), as far as I understand correctly, even when the transaction\ndoesn't use foreign tables at all, it needs to scan the connection cache\nentries if necessary. I was thinking to avoid this. I guess that this doesn't\nwork with at least the postgres_fdw 2PC patch that Sawada-san is proposing\nbecause with the patch the commit/rollback callback is performed only\nfor the connections used in the transaction.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 29 Jan 2021 15:24:40 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Fri, Jan 29, 2021 at 11:54 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> >> IIRC, when we were finding a way to close the invalidated connections\n> >> so that they don't leaked, we had two options:\n> >>\n> >> 1) let those connections (whether currently being used in the xact or\n> >> not) get marked invalidated in pgfdw_inval_callback and closed in\n> >> pgfdw_xact_callback at the main txn end as shown below\n> >>\n> >> if (PQstatus(entry->conn) != CONNECTION_OK ||\n> >> PQtransactionStatus(entry->conn) != PQTRANS_IDLE ||\n> >> entry->changing_xact_state ||\n> >> entry->invalidated).
----> by adding this\n> >> {\n> >> elog(DEBUG3, \"discarding connection %p\", entry->conn);\n> >> disconnect_pg_server(entry);\n> >> }\n> >>\n> >> 2) close the unused connections right away in pgfdw_inval_callback\n> >> instead of marking them invalidated. Mark used connections as\n> >> invalidated in pgfdw_inval_callback and close them in\n> >> pgfdw_xact_callback at the main txn end.\n> >>\n> >> We went with option (2) because we thought this would ease some burden\n> >> on pgfdw_xact_callback closing a lot of invalid connections at once.\n> >\n> > Also, see the original patch for the connection leak issue just does\n> > option (1), see [1]. But in [2] and [3], we chose option (2).\n> >\n> > I feel, we can go for option (1), with the patch attached in [1] i.e.\n> > having have_invalid_connections whenever any connection gets invalided\n> > so that we don't quickly exit in pgfdw_xact_callback and the\n> > invalidated connections get closed properly. Thoughts?\n>\n> Before going for (1) or something, I'd like to understand what the actual\n> issue of (2), i.e., the current code is. Otherwise other approaches might\n> have the same issue.\n\nThe problem with option (2) is that because of CLOBBER_CACHE_ALWAYS,\npgfdw_inval_callback is getting called many times and the connections\nthat are not used, i.e. xact_depth == 0, are getting disconnected\nthere, so we are not seeing the consistent results for\npostgres_fdw_get_connections test cases. If the connections are being\nused within the xact, then the valid option for those connections is\nbeing shown as false again making postgres_fdw_get_connections output\ninconsistent. This is what happened on the build farm member with\nCLOBBER_CACHE_ALWAYS build.\n\nSo if we go with option (1), get rid of the valid state from\npostgres_fdw_get_connections, and have the test cases inside an\nexplicit xact block at the beginning of the postgres_fdw.sql test\nfile, we don't see CLOBBER_CACHE_ALWAYS inconsistencies.
I'm not sure\nif this is the correct way.\n\n> Regarding (1), as far as I understand correctly, even when the transaction\n> doesn't use foreign tables at all, it needs to scan the connection cache\n> entries if necessary. I was thinking to avoid this. I guess that this doesn't\n> work with at least the postgres_fdw 2PC patch that Sawada-san is proposing\n> because with the patch the commit/rollback callback is performed only\n> for the connections used in the transaction.\n\nYou mean to say, pgfdw_xact_callback will not get called when the xact\nuses no foreign server connection or is it that pgfdw_xact_callback\ngets called but exits quickly from it? I'm not sure what the 2PC patch\ndoes.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Jan 2021 12:14:17 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/29 15:44, Bharath Rupireddy wrote:\n> On Fri, Jan 29, 2021 at 11:54 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>>>> IIRC, when we were finding a way to close the invalidated connections\n>>>> so that they don't leaked, we had two options:\n>>>>\n>>>> 1) let those connections (whether currently being used in the xact or\n>>>> not) get marked invalidated in pgfdw_inval_callback and closed in\n>>>> pgfdw_xact_callback at the main txn end as shown below\n>>>>\n>>>> if (PQstatus(entry->conn) != CONNECTION_OK ||\n>>>> PQtransactionStatus(entry->conn) != PQTRANS_IDLE ||\n>>>> entry->changing_xact_state ||\n>>>> entry->invalidated). ----> by adding this\n>>>> {\n>>>> elog(DEBUG3, \"discarding connection %p\", entry->conn);\n>>>> disconnect_pg_server(entry);\n>>>> }\n>>>>\n>>>> 2) close the unused connections right away in pgfdw_inval_callback\n>>>> instead of marking them invalidated.
Mark used connections as\n>>>> invalidated in pgfdw_inval_callback and close them in\n>>>> pgfdw_xact_callback at the main txn end.\n>>>>\n>>>> We went with option (2) because we thought this would ease some burden\n>>>> on pgfdw_xact_callback closing a lot of invalid connections at once.\n>>>\n>>> Also, see the original patch for the connection leak issue just does\n>>> option (1), see [1]. But in [2] and [3], we chose option (2).\n>>>\n>>> I feel, we can go for option (1), with the patch attached in [1] i.e.\n>>> having have_invalid_connections whenever any connection gets invalided\n>>> so that we don't quickly exit in pgfdw_xact_callback and the\n>>> invalidated connections get closed properly. Thoughts?\n>>\n>> Before going for (1) or something, I'd like to understand what the actual\n>> issue of (2), i.e., the current code is. Otherwise other approaches might\n>> have the same issue.\n> \n> The problem with option (2) is that because of CLOBBER_CACHE_ALWAYS,\n> pgfdw_inval_callback is getting called many times and the connections\n> that are not used i..e xact_depth == 0, are getting disconnected\n> there, so we are not seeing the consistent results for\n> postgres_fdw_get_connectionstest cases. If the connections are being\n> used within the xact, then the valid option for those connections are\n> being shown as false again making postgres_fdw_get_connections output\n> inconsistent. This is what happened on the build farm member with\n> CLOBBER_CACHE_ALWAYS build.\n\nBut if the issue is only the inconsistency of test results,\nwe can go with the option (2)?
Even with (2), we can make the test\nstable by removing \"valid\" column and executing\npostgres_fdw_get_connections() within the transaction?\n\n> \n> So if we go with option (1), get rid of valid state from\n> postgres_fdw_get_connectionstest and having the test cases inside an\n> explicit xact block at the beginning of the postgres_fdw.sql test\n> file, we don't see CLOBBER_CACHE_ALWAYS inconsistencies. I'm not sure\n> if this is the correct way.\n> \n>> Regarding (1), as far as I understand correctly, even when the transaction\n>> doesn't use foreign tables at all, it needs to scan the connection cache\n>> entries if necessary. I was thinking to avoid this. I guess that this doesn't\n>> work with at least the postgres_fdw 2PC patch that Sawada-san is proposing\n>> because with the patch the commit/rollback callback is performed only\n>> for the connections used in the transaction.\n> \n> You mean to say, pgfdw_xact_callback will not get called when the xact\n> uses no foreign server connection or is it that pgfdw_xact_callback\n> gets called but exits quickly from it? I'm not sure what the 2PC patch\n> does.\n\nMaybe it's a chance to review the patch! ;P\n\nBTW his patch tries to add new callback interfaces for commit/rollback of\nforeign transactions, and make postgres_fdw use them instead of\nXactCallback.
And those new interfaces are executed only when\nthe transaction has started the foreign transactions.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 29 Jan 2021 16:06:34 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Fri, Jan 29, 2021 at 12:36 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> On 2021/01/29 15:44, Bharath Rupireddy wrote:\n> > On Fri, Jan 29, 2021 at 11:54 AM Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote:\n> >>>> IIRC, when we were finding a way to close the invalidated connections\n> >>>> so that they don't leaked, we had two options:\n> >>>>\n> >>>> 1) let those connections (whether currently being used in the xact or\n> >>>> not) get marked invalidated in pgfdw_inval_callback and closed in\n> >>>> pgfdw_xact_callback at the main txn end as shown below\n> >>>>\n> >>>> if (PQstatus(entry->conn) != CONNECTION_OK ||\n> >>>> PQtransactionStatus(entry->conn) != PQTRANS_IDLE ||\n> >>>> entry->changing_xact_state ||\n> >>>> entry->invalidated). ----> by adding this\n> >>>> {\n> >>>> elog(DEBUG3, \"discarding connection %p\", entry->conn);\n> >>>> disconnect_pg_server(entry);\n> >>>> }\n> >>>>\n> >>>> 2) close the unused connections right away in pgfdw_inval_callback\n> >>>> instead of marking them invalidated. Mark used connections as\n> >>>> invalidated in pgfdw_inval_callback and close them in\n> >>>> pgfdw_xact_callback at the main txn end.\n> >>>>\n> >>>> We went with option (2) because we thought this would ease some burden\n> >>>> on pgfdw_xact_callback closing a lot of invalid connections at once.\n> >>>\n> >>> Also, see the original patch for the connection leak issue just does\n> >>> option (1), see [1].
But in [2] and [3], we chose option (2).\n> >>>\n> >>> I feel, we can go for option (1), with the patch attached in [1] i.e.\n> >>> having have_invalid_connections whenever any connection gets invalided\n> >>> so that we don't quickly exit in pgfdw_xact_callback and the\n> >>> invalidated connections get closed properly. Thoughts?\n> >>\n> >> Before going for (1) or something, I'd like to understand what the actual\n> >> issue of (2), i.e., the current code is. Otherwise other approaches might\n> >> have the same issue.\n> >\n> > The problem with option (2) is that because of CLOBBER_CACHE_ALWAYS,\n> > pgfdw_inval_callback is getting called many times and the connections\n> > that are not used i..e xact_depth == 0, are getting disconnected\n> > there, so we are not seeing the consistent results for\n> > postgres_fdw_get_connectionstest cases. If the connections are being\n> > used within the xact, then the valid option for those connections are\n> > being shown as false again making postgres_fdw_get_connections output\n> > inconsistent. This is what happened on the build farm member with\n> > CLOBBER_CACHE_ALWAYS build.\n>\n> But if the issue is only the inconsistency of test results,\n> we can go with the option (2)? Even with (2), we can make the test\n> stable by removing \"valid\" column and executing\n> postgres_fdw_get_connections() within the transaction?\n\nHmmm, and we should have the tests at the start of the file\npostgres_fdw.sql before even we make any foreign server connections.\n\nIf okay, I can prepare the patch and run with clobber cache build locally.\n\n> >\n> > So if we go with option (1), get rid of valid state from\n> > postgres_fdw_get_connectionstest and having the test cases inside an\n> > explicit xact block at the beginning of the postgres_fdw.sql test\n> > file, we don't see CLOBBER_CACHE_ALWAYS inconsistencies.
I'm not sure\n> > if this is the correct way.\n> >\n> >> Regarding (1), as far as I understand correctly, even when the transaction\n> >> doesn't use foreign tables at all, it needs to scan the connection cache\n> >> entries if necessary. I was thinking to avoid this. I guess that this doesn't\n> >> work with at least the postgres_fdw 2PC patch that Sawada-san is proposing\n> >> because with the patch the commit/rollback callback is performed only\n> >> for the connections used in the transaction.\n> >\n> > You mean to say, pgfdw_xact_callback will not get called when the xact\n> > uses no foreign server connection or is it that pgfdw_xact_callback\n> > gets called but exits quickly from it? I'm not sure what the 2PC patch\n> > does.\n>\n> Maybe it's chance to review the patch! ;P\n>\n> BTW his patch tries to add new callback interfaces for commit/rollback of\n> foreign transactions, and make postgres_fdw use them instead of\n> XactCallback. And those new interfaces are executed only when\n> the transaction has started the foreign transactions.\n\nIMHO, it's better to keep it as a separate discussion.
I will try to\nreview that patch later.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Jan 2021 12:42:02 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/29 16:12, Bharath Rupireddy wrote:\n> On Fri, Jan 29, 2021 at 12:36 PM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>> On 2021/01/29 15:44, Bharath Rupireddy wrote:\n>>> On Fri, Jan 29, 2021 at 11:54 AM Fujii Masao\n>>> <masao.fujii@oss.nttdata.com> wrote:\n>>>>>> IIRC, when we were finding a way to close the invalidated connections\n>>>>>> so that they don't leaked, we had two options:\n>>>>>>\n>>>>>> 1) let those connections (whether currently being used in the xact or\n>>>>>> not) get marked invalidated in pgfdw_inval_callback and closed in\n>>>>>> pgfdw_xact_callback at the main txn end as shown below\n>>>>>>\n>>>>>> if (PQstatus(entry->conn) != CONNECTION_OK ||\n>>>>>> PQtransactionStatus(entry->conn) != PQTRANS_IDLE ||\n>>>>>> entry->changing_xact_state ||\n>>>>>> entry->invalidated). ----> by adding this\n>>>>>> {\n>>>>>> elog(DEBUG3, \"discarding connection %p\", entry->conn);\n>>>>>> disconnect_pg_server(entry);\n>>>>>> }\n>>>>>>\n>>>>>> 2) close the unused connections right away in pgfdw_inval_callback\n>>>>>> instead of marking them invalidated. Mark used connections as\n>>>>>> invalidated in pgfdw_inval_callback and close them in\n>>>>>> pgfdw_xact_callback at the main txn end.\n>>>>>>\n>>>>>> We went with option (2) because we thought this would ease some burden\n>>>>>> on pgfdw_xact_callback closing a lot of invalid connections at once.\n>>>>>\n>>>>> Also, see the original patch for the connection leak issue just does\n>>>>> option (1), see [1]. 
But in [2] and [3], we chose option (2).\n>>>>>\n>>>>> I feel, we can go for option (1), with the patch attached in [1] i.e.\n>>>>> having have_invalid_connections whenever any connection gets invalided\n>>>>> so that we don't quickly exit in pgfdw_xact_callback and the\n>>>>> invalidated connections get closed properly. Thoughts?\n>>>>\n>>>> Before going for (1) or something, I'd like to understand what the actual\n>>>> issue of (2), i.e., the current code is. Otherwise other approaches might\n>>>> have the same issue.\n>>>\n>>> The problem with option (2) is that because of CLOBBER_CACHE_ALWAYS,\n>>> pgfdw_inval_callback is getting called many times and the connections\n>>> that are not used i..e xact_depth == 0, are getting disconnected\n>>> there, so we are not seeing the consistent results for\n>>> postgres_fdw_get_connectionstest cases. If the connections are being\n>>> used within the xact, then the valid option for those connections are\n>>> being shown as false again making postgres_fdw_get_connections output\n>>> inconsistent. This is what happened on the build farm member with\n>>> CLOBBER_CACHE_ALWAYS build.\n>>\n>> But if the issue is only the inconsistency of test results,\n>> we can go with the option (2)? 
Even with (2), we can make the test\n>> stable by removing \"valid\" column and executing\n>> postgres_fdw_get_connections() within the transaction?\n> \n> Hmmm, and we should have the tests at the start of the file\n> postgres_fdw.sql before even we make any foreign server connections.\n\nWe don't need to move the test if we always call postgres_fdw_disconnect_all() just before starting new transaction and calling postgres_fdw_get_connections() as follows?\n\nSELECT 1 FROM postgres_fdw_disconnect_all();\nBEGIN;\n...\nSELECT * FROM postgres_fdw_get_connections();\n...\n\n\n> \n> If okay, I can prepare the patch and run with clobber cache build locally.\n\nMany thanks!\n\n\n> \n>>>\n>>> So if we go with option (1), get rid of valid state from\n>>> postgres_fdw_get_connectionstest and having the test cases inside an\n>>> explicit xact block at the beginning of the postgres_fdw.sql test\n>>> file, we don't see CLOBBER_CACHE_ALWAYS inconsistencies. I'm not sure\n>>> if this is the correct way.\n>>>\n>>>> Regarding (1), as far as I understand correctly, even when the transaction\n>>>> doesn't use foreign tables at all, it needs to scan the connection cache\n>>>> entries if necessary. I was thinking to avoid this. I guess that this doesn't\n>>>> work with at least the postgres_fdw 2PC patch that Sawada-san is proposing\n>>>> because with the patch the commit/rollback callback is performed only\n>>>> for the connections used in the transaction.\n>>>\n>>> You mean to say, pgfdw_xact_callback will not get called when the xact\n>>> uses no foreign server connection or is it that pgfdw_xact_callback\n>>> gets called but exits quickly from it? I'm not sure what the 2PC patch\n>>> does.\n>>\n>> Maybe it's chance to review the patch! ;P\n>>\n>> BTW his patch tries to add new callback interfaces for commit/rollback of\n>> foreign transactions, and make postgres_fdw use them instead of\n>> XactCallback. 
And those new interfaces are executed only when\n>> the transaction has started the foreign transactions.\n> \n> IMHO, it's better to keep it as a separate discussion.\n\nYes, of course!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 29 Jan 2021 16:47:36 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Fri, Jan 29, 2021 at 1:17 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >> But if the issue is only the inconsistency of test results,\n> >> we can go with the option (2)? Even with (2), we can make the test\n> >> stable by removing \"valid\" column and executing\n> >> postgres_fdw_get_connections() within the transaction?\n> >\n> > Hmmm, and we should have the tests at the start of the file\n> > postgres_fdw.sql before even we make any foreign server connections.\n>\n> We don't need to move the test if we always call postgres_fdw_disconnect_all() just before starting new transaction and calling postgres_fdw_get_connections() as follows?\n>\n> SELECT 1 FROM postgres_fdw_disconnect_all();\n> BEGIN;\n> ...\n> SELECT * FROM postgres_fdw_get_connections();\n> ...\n\nYes, that works, but we cannot show true/false for the\npostgres_fdw_disconnect_all output.\n\nI will post the patch soon. 
Thanks a lot.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Jan 2021 13:24:53 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Fri, Jan 29, 2021 at 1:24 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Jan 29, 2021 at 1:17 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > >> But if the issue is only the inconsistency of test results,\n> > >> we can go with the option (2)? Even with (2), we can make the test\n> > >> stable by removing \"valid\" column and executing\n> > >> postgres_fdw_get_connections() within the transaction?\n> > >\n> > > Hmmm, and we should have the tests at the start of the file\n> > > postgres_fdw.sql before even we make any foreign server connections.\n> >\n> > We don't need to move the test if we always call postgres_fdw_disconnect_all() just before starting new transaction and calling postgres_fdw_get_connections() as follows?\n> >\n> > SELECT 1 FROM postgres_fdw_disconnect_all();\n> > BEGIN;\n> > ...\n> > SELECT * FROM postgres_fdw_get_connections();\n> > ...\n>\n> Yes, that works, but we cannot show true/false for the\n> postgres_fdw_disconnect_all output.\n>\n> I will post the patch soon. 
Thanks a lot.\n\nAttaching a patch that has following changes: 1) Now,\npostgres_fdw_get_connections will only return set of active\nconnections server names not their valid state 2) The functions\npostgres_fdw_get_connections, postgres_fdw_disconnect and\npostgres_fdw_disconnect_all are now being tested within an explicit\nxact block, this way the tests are more stable even with clobber cache\nalways builds.\n\nI tested the patch here on my development system with\n-DCLOBBER_CACHE_ALWAYS configuration, the tests look consistent.\n\nPlease review the patch.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 29 Jan 2021 16:15:43 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On 2021/01/29 19:45, Bharath Rupireddy wrote:\n> On Fri, Jan 29, 2021 at 1:24 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Fri, Jan 29, 2021 at 1:17 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>> But if the issue is only the inconsistency of test results,\n>>>>> we can go with the option (2)? 
Even with (2), we can make the test\n>>>>> stable by removing \"valid\" column and executing\n>>>>> postgres_fdw_get_connections() within the transaction?\n>>>>\n>>>> Hmmm, and we should have the tests at the start of the file\n>>>> postgres_fdw.sql before even we make any foreign server connections.\n>>>\n>>> We don't need to move the test if we always call postgres_fdw_disconnect_all() just before starting new transaction and calling postgres_fdw_get_connections() as follows?\n>>>\n>>> SELECT 1 FROM postgres_fdw_disconnect_all();\n>>> BEGIN;\n>>> ...\n>>> SELECT * FROM postgres_fdw_get_connections();\n>>> ...\n>>\n>> Yes, that works, but we cannot show true/false for the\n>> postgres_fdw_disconnect_all output.\n>>\n>> I will post the patch soon. Thanks a lot.\n> \n> Attaching a patch that has following changes: 1) Now,\n> postgres_fdw_get_connections will only return set of active\n> connections server names not their valid state 2) The functions\n> postgres_fdw_get_connections, postgres_fdw_disconnect and\n> postgres_fdw_disconnect_all are now being tested within an explicit\n> xact block, this way the tests are more stable even with clobber cache\n> always builds.\n> \n> I tested the patch here on my development system with\n> -DCLOBBER_CACHE_ALWAYS configuration, the tests look consistent.\n> \n> Please review the patch.\n\nThanks for the patch!\n\n--- Return false as loopback2 connectin is closed already.\n-SELECT postgres_fdw_disconnect('loopback2');\n- postgres_fdw_disconnect\n--------------------------\n- f\n-(1 row)\n-\n--- Return an error as there is no foreign server with given name.\n-SELECT postgres_fdw_disconnect('unknownserver');\n-ERROR: server \"unknownserver\" does not exist\n\nWhy do we need to remove these? 
These seem to work fine even in\nCLOBBER_CACHE_ALWAYS.\n\n+\t\t\t/*\n+\t\t\t * It doesn't make sense to show this entry in the output with a\n+\t\t\t * NULL server_name as it will be closed at the xact end.\n+\t\t\t */\n+\t\t\tcontinue;\n\n-1 with this change because I still think that it's more useful to list\nall the open connections.\n\nThis makes me think that more discussion would be necessary before\nchanging the interface of postgres_fdw_get_connections(). On the other\nhand, we should address the issue ASAP to make the buildfarm member fine.\nSo at first I'd like to push only the change of regression test.\nPatch attached. I tested it both with CLOBBER_CACHE_ALWAYS set and unset,\nand the results were stable.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Sat, 30 Jan 2021 03:44:30 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Sat, Jan 30, 2021 at 12:14 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> + /*\n> + * It doesn't make sense to show this entry in the output with a\n> + * NULL server_name as it will be closed at the xact end.\n> + */\n> + continue;\n>\n> -1 with this change because I still think that it's more useful to list\n> all the open connections.\n\nIf postgres_fdw_get_connections doesn't have a \"valid\" column, then I\nthought it's better not showing server_name NULL in the output. Do you\nthink that we need to output some fixed strings for such connections\nlike \"<unknown server>\" or \"<server doesn't exist>\" or \"<dropped\nserver>\" or \"<server information not available>\"? 
I'm not sure whether\nwe are allowed to have fixed strings as column output.\n\n> This makes me think that more discussion would be necessary before\n> changing the interface of postgres_fdw_get_connections(). On the other\n> hand, we should address the issue ASAP to make the buildfarm member fine.\n> So at first I'd like to push only the change of regression test.\n> Patch attached. I tested it both with CLOBBER_CACHE_ALWAYS set and unset,\n> and the results were stable.\n\nThanks, the postgres_fdw.patch looks good to me.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 30 Jan 2021 05:58:19 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/30 9:28, Bharath Rupireddy wrote:\n> On Sat, Jan 30, 2021 at 12:14 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>> + /*\n>> + * It doesn't make sense to show this entry in the output with a\n>> + * NULL server_name as it will be closed at the xact end.\n>> + */\n>> + continue;\n>>\n>> -1 with this change because I still think that it's more useful to list\n>> all the open connections.\n> \n> If postgres_fdw_get_connections doesn't have a \"valid\" column, then I\n> thought it's better not showing server_name NULL in the output.\n\nOr if we don't have strong reason to remove \"valid\" column,\nthe current design is enough?\n\n\n> Do you\n> think that we need to output some fixed strings for such connections\n> like \"<unknown server>\" or \"<server doesn't exist>\" or \"<dropped\n> server>\" or \"<server information not available>\"? I'm not sure whether\n> we are allowed to have fixed strings as column output.\n> \n>> This makes me think that more discussion would be necessary before\n>> changing the interface of postgres_fdw_get_connections(). 
On the other\n>> hand, we should address the issue ASAP to make the buildfarm member fine.\n>> So at first I'd like to push only the change of regression test.\n>> Patch attached. I tested it both with CLOBBER_CACHE_ALWAYS set and unset,\n>> and the results were stable.\n> \n> Thanks, the postgres_fdw.patch looks good to me.\n\nThanks for checking the patch! I pushed that.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 1 Feb 2021 15:59:31 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/01/27 10:06, Bharath Rupireddy wrote:\n> On Tue, Jan 26, 2021 at 8:38 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> I will post \"keep_connections\" GUC and \"keep_connection\" server level\n>> option patches later.\n> \n> Attaching v19 patch set for \"keep_connections\" GUC and\n> \"keep_connection\" server level option. Please review them further.\n\nThese options are no longer necessary because we now support idle_session_timeout? If we want to disconnect the foreign server connections that sit idle to prevent them from eating up the connection capacities in the foreign servers, we can just set idle_session_timeout in those foreign servers. If we want to avoid the cluster-wide setting of idle_session_timeout, we can set that per role. One issue for this approach is that the connection entry remains even after idle_session_timeout happens. So postgres_fdw_get_connections() returns that connection even though it's actually closed by the timeout. Which is confusing. But which doesn't cause any actual problem, right? 
When the foreign table is accessed the next time, that connection entry is dropped, an error is detected, and then a new connection will be made.\n\nSorry I've not read the past long discussion about this feature. If there is the consensus that these options are still necessary and useful even when we have idle_session_timeout, please correct me.\n\nISTM that it's intuitive (at least for me) to add this kind of option into the foreign server. But I'm not sure if it's a good idea to expose the option as GUC. Also if there is the consensus about this, please correct me.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 1 Feb 2021 16:13:27 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Mon, Feb 1, 2021 at 12:29 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2021/01/30 9:28, Bharath Rupireddy wrote:\n> > On Sat, Jan 30, 2021 at 12:14 AM Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote:\n> >> + /*\n> >> + * It doesn't make sense to show this entry in the output with a\n> >> + * NULL server_name as it will be closed at the xact end.\n> >> + */\n> >> + continue;\n> >>\n> >> -1 with this change because I still think that it's more useful to list\n> >> all the open connections.\n> >\n> > If postgres_fdw_get_connections doesn't have a \"valid\" column, then I\n> > thought it's better not showing server_name NULL in the output.\n>\n> Or if we don't have strong reason to remove \"valid\" column,\n> the current design is enough?\n\nMy only worry was the statement from [1]: \"A cache flush should\nnot cause user-visible state changes.\" But the newly added function\npostgres_fdw_get_connections is VOLATILE, which means that the results\nreturned by 
postgres_fdw_get_connections() is also VOLATILE. Isn't\nthis enough, so that users will not get surprised with different\nresults in case invalidations occur within the server by the time they\nrun the query subsequent times and see different results than what\nthey saw in the first run?\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/flat/2724627.1611886184%40sss.pgh.pa.us\n\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 1 Feb 2021 12:43:35 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Mon, Feb 1, 2021 at 12:43 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2021/01/27 10:06, Bharath Rupireddy wrote:\n> > On Tue, Jan 26, 2021 at 8:38 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >> I will post \"keep_connections\" GUC and \"keep_connection\" server level\n> >> option patches later.\n> >\n> > Attaching v19 patch set for \"keep_connections\" GUC and\n> > \"keep_connection\" server level option. Please review them further.\n>\n> These options are no longer necessary because we now support idle_session_timeout? If we want to disconnect the foreign server connections that sit on idle to prevent them from eating up the connection capacities in the foriegn servers, we can just set idle_session_timeout in those foreign servers. If we want to avoid the cluster-wide setting of idle_session_timeout, we can set that per role. One issue for this approach is that the connection entry remains even after idle_session_timeout happens. So postgres_fdw_get_connections() returns that connection even though it's actually closed by the timeout. Which is confusing. But which doesn't cause any actual problem, right? 
When the foreign table is accessed the next time, that connection entry is dropped, an error is detected, and then new connection will be remade.\n\nFirst of all, idle_session_timeout is by default 0 i.e. disabled,\nthere are chances that users may not use that and don't want to set it\njust for not caching any foreign server connection. A simple use case\nwhere server level option can be useful is that, users are accessing\nforeign tables (may be not that frequently, once in a while) from a\nlong running local session using foreign servers and they don't want\nto keep the local session cache those connections, then setting this\nserver level option, keep_connections to false makes their life\neasier, without having to depend on setting idle_session_timeout on\nthe remote server.\n\nAnd, just using idle_session_timeout on a remote server may not help\nus completely. Because the remote session may go away, while we are\nstill using that cached connection in an explicit txn on the local\nsession. Our connection retry will also not work because we are in the\nmiddle of an xact, so the local explicit txn gets aborted.\n\nSo, IMO, we can still have both server level option as well as\npostgres_fdw contrib level GUC (to tell the local session that \"I\ndon't want to keep any foreign connections active\" instead of setting\nkeep_connection server level option for each foreign server).\n\n> Sorry I've not read the past long discussion about this feature. If there is the consensus that these options are still necessary and useful even when we have idle_session_timeout, please correct me.\n>\n> ISTM that it's intuitive (at least for me) to add this kind of option into the foreign server. But I'm not sure if it's good idea to expose the option as GUC. 
Also if there is the consensus about this, please correct me.\n\nSee here [1].\n\n[1] - https://www.postgresql.org/message-id/f58d1df4ae58f6cf3bfa560f923462e0%40postgrespro.ru\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 1 Feb 2021 13:09:48 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/02/01 16:13, Bharath Rupireddy wrote:\n> On Mon, Feb 1, 2021 at 12:29 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> On 2021/01/30 9:28, Bharath Rupireddy wrote:\n>>> On Sat, Jan 30, 2021 at 12:14 AM Fujii Masao\n>>> <masao.fujii@oss.nttdata.com> wrote:\n>>>> + /*\n>>>> + * It doesn't make sense to show this entry in the output with a\n>>>> + * NULL server_name as it will be closed at the xact end.\n>>>> + */\n>>>> + continue;\n>>>>\n>>>> -1 with this change because I still think that it's more useful to list\n>>>> all the open connections.\n>>>\n>>> If postgres_fdw_get_connections doesn't have a \"valid\" column, then I\n>>> thought it's better not showing server_name NULL in the output.\n>>\n>> Or if we don't have strong reason to remove \"valid\" column,\n>> the current design is enough?\n> \n> My only worry was that the statement from [1] \"A cache flush should\n> not cause user-visible state changes.\"\n\nIf we follow this strictly, I'm afraid that postgres_fdw_get_connections()\nitself would also be a problem because the cached connections are affected\nby cache flush and postgres_fdw_get_connections() shows that to users.\nI'm not sure if removing \"valid\" column is actually helpful for that statement.\n\nAnyway, for now we have the following options;\n\n(1) keep the feature as it is\n(2) remove \"valid\" column\n (2-1) show NULL for the connection whose server was dropped\n\t(2-2) show fixed value 
(e.g., <dropped>) for the connection whose server was dropped\n(3) remove \"valid\" column and don't display connection whose server was dropped\n(4) remove postgres_fdw_get_connections()\n\nFor now I like (1), but if others think \"valid\" column should be dropped,\nI'm fine with (2). But I'd like to avoid (3) because I think that\npostgres_fdw_get_connections() should list all the connections that\nare actually being established. I have no strong opinion about whether\n(2-1) or (2-2) is better, for now.\n\n> But the newly added function\n> postgres_fdw_get_connections is VOLATILE which means that the results\n> returned by postgres_fdw_get_connections() is also VOLATILE. Isn't\n> this enough, so that users will not get surprised with different\n> results in case invalidations occur within the server by the time they\n> run the query subsequent times and see different results than what\n> they saw in the first run?\n\nI'm not sure about this...\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 2 Feb 2021 11:32:30 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/02/01 16:39, Bharath Rupireddy wrote:\n> On Mon, Feb 1, 2021 at 12:43 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> On 2021/01/27 10:06, Bharath Rupireddy wrote:\n>>> On Tue, Jan 26, 2021 at 8:38 AM Bharath Rupireddy\n>>> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>>> I will post \"keep_connections\" GUC and \"keep_connection\" server level\n>>>> option patches later.\n>>>\n>>> Attaching v19 patch set for \"keep_connections\" GUC and\n>>> \"keep_connection\" server level option. Please review them further.\n>>\n>> These options are no longer necessary because we now support idle_session_timeout? 
If we want to disconnect the foreign server connections that sit on idle to prevent them from eating up the connection capacities in the foriegn servers, we can just set idle_session_timeout in those foreign servers. If we want to avoid the cluster-wide setting of idle_session_timeout, we can set that per role. One issue for this approach is that the connection entry remains even after idle_session_timeout happens. So postgres_fdw_get_connections() returns that connection even though it's actually closed by the timeout. Which is confusing. But which doesn't cause any actual problem, right? When the foreign table is accessed the next time, that connection entry is dropped, an error is detected, and then new connection will be remade.\n> \n> First of all, idle_session_timeout is by default 0 i.e. disabled,\n> there are chances that users may not use that and don't want to set it\n> just for not caching any foreign server connection. A simple use case\n> where server level option can be useful is that, users are accessing\n> foreign tables (may be not that frequently, once in a while) from a\n> long running local session using foreign servers and they don't want\n> to keep the local session cache those connections, then setting this\n> server level option, keep_connections to false makes their life\n> easier, without having to depend on setting idle_session_timeout on\n> the remote server.\n\nThanks for explaining this!\n\nI understand that use case. But I still think that we can use\nidle_session_timeout for that use case without keep_connections.\nPer the past discussion, Robert seems to prefer controling the cached\nconnection by timeout rather than boolean, at [1]. Bruce seems to think\nthat idle_session_timeout is enough for the use case, at [2]. 
So I'm not\nsure what the current consensus is...\n\nAlso Alexey seems to have thought that idle_session_timeout is not\nsuitable for cached connection because it's the cluster-wide option, at [3].\nBut since it's marked as PGC_USERSET, we can set it per-role, e.g.,\nby using ALTER ROLE SET, so that it can affect only the foreign server\nconnections.\n\nOne merit of keep_connections that I found is that we can use it even\nwhen connecting to the older PostgreSQL that doesn't support\nidle_session_timeout. Also it seems simpler to use keep_connections\nrather than setting idle_session_timeout in multiple remote servers.\nSo I'm inclined to add this feature, but I'd like to hear more opinions.\n\n[1]\nhttps://www.postgresql.org/message-id/CA%2BTgmob_nF7NkBfVLUhmQ%2Bt8JGVV4hXy%2BzkuMUtTSd-%3DHPBeuA%40mail.gmail.com\n\n[2]\nhttps://www.postgresql.org/message-id/20200714165822.GE7628%40momjian.us\n\n[3]\nhttps://www.postgresql.org/message-id/6df6525ca7a4b54a4a39f55e4dd6b3e9%40postgrespro.ru\n\n\n> \n> And, just using idle_session_timeout on a remote server may not help\n> us completely. Because the remote session may go away, while we are\n> still using that cached connection in an explicit txn on the local\n> session. Our connection retry will also not work because we are in the\n> middle of an xact, so the local explicit txn gets aborted.\n\nRegarding idle_in_transaction_session_timeout, this seems true. But\nI was thinking that idle_session_timeout doesn't cause this issue because\nit doesn't close the connection in the middle of transaction. No?\n\n\n> \n> So, IMO, we can still have both server level option as well as\n> postgres_fdw contrib level GUC (to tell the local session that \"I\n> don't want to keep any foreign connections active\" instead of setting\n> keep_connection server level option for each foreign server).\n> \n>> Sorry I've not read the past long discussion about this feature. 
If there is the consensus that these options are still necessary and useful even when we have idle_session_timeout, please correct me.\n>>\n>> ISTM that it's intuitive (at least for me) to add this kind of option into the foreign server. But I'm not sure if it's good idea to expose the option as GUC. Also if there is the consensus about this, please correct me.\n> \n> See here [1].\n> \n> [1] - https://www.postgresql.org/message-id/f58d1df4ae58f6cf3bfa560f923462e0%40postgrespro.ru\n\nThanks!\n\n\nHere are some review comments.\n\n-\t\t\t(used_in_current_xact && !keep_connections))\n+\t\t\t(used_in_current_xact &&\n+\t\t\t(!keep_connections || !entry->keep_connection)))\n\nThe names of GUC and server-level option should be the same,\nto make the thing less confusing?\n\nIMO the server-level option should override GUC. IOW, GUC setting\nshould be used only when the server-level option is not specified.\nBut the above code doesn't seem to do that. Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 2 Feb 2021 13:15:46 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Tue, Feb 2, 2021 at 9:45 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> One merit of keep_connections that I found is that we can use it even\n> when connecting to the older PostgreSQL that doesn't support\n> idle_session_timeout. Also it seems simpler to use keep_connections\n> rather than setting idle_session_timeout in multiple remote servers.\n> So I'm inclined to add this feature, but I'd like to hear more opinions.\n\nThanks.\n\n> > And, just using idle_session_timeout on a remote server may not help\n> > us completely. 
Because the remote session may go away, while we are\n> > still using that cached connection in an explicit txn on the local\n> > session. Our connection retry will also not work because we are in the\n> > middle of an xact, so the local explicit txn gets aborted.\n>\n> Regarding idle_in_transaction_session_timeout, this seems true. But\n> I was thinking that idle_session_timeout doesn't cause this issue because\n> it doesn't close the connection in the middle of transaction. No?\n\nYou are right. idle_session_timeout doesn't take effect when in the\nmiddle of an explicit txn. I missed this point.\n\n> Here are some review comments.\n>\n> - (used_in_current_xact && !keep_connections))\n> + (used_in_current_xact &&\n> + (!keep_connections || !entry->keep_connection)))\n>\n> The names of GUC and server-level option should be the same,\n> to make the thing less confusing?\n\nWe can have GUC name keep_connections as there can be multiple\nconnections within a local session and I can change the server level\noption keep_connection to keep_connections because a single foreign\nserver can have multiple connections as we have seen that in the use\ncase identified by you. I will change that in the next patch set.\n\n> IMO the server-level option should override GUC. IOW, GUC setting\n> should be used only when the server-level option is not specified.\n> But the above code doesn't seem to do that. Thought?\n\nNote that default values for GUC and server level option are on i.e.\nconnections are cached.\n\nThe main intention of the GUC is to not set server level options to\nfalse for all the foreign servers in case users don't want to keep any\nforeign server connections. If the server level option overrides GUC,\nthen even if users set GUC to off, they have to set the server level\noption to false for all the foreign servers.\n\nSo, the below code in the patch, first checks the GUC. If the GUC is\noff, then discards the connections. 
If the GUC is on, then it further\nchecks the server level option. If it's off discards the connection,\notherwise not.\n\nI would like it to keep this behaviour as is. Thoughts?\n\n if (PQstatus(entry->conn) != CONNECTION_OK ||\n PQtransactionStatus(entry->conn) != PQTRANS_IDLE ||\n entry->changing_xact_state ||\n entry->invalidated ||\n+ (used_in_current_xact &&\n+ (!keep_connections || !entry->keep_connection)))\n {\n elog(DEBUG3, \"discarding connection %p\", entry->conn);\n disconnect_pg_server(entry);\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Feb 2021 10:26:59 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/02/03 13:56, Bharath Rupireddy wrote:\n> On Tue, Feb 2, 2021 at 9:45 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> One merit of keep_connections that I found is that we can use it even\n>> when connecting to the older PostgreSQL that doesn't support\n>> idle_session_timeout. Also it seems simpler to use keep_connections\n>> rather than setting idle_session_timeout in multiple remote servers.\n>> So I'm inclined to add this feature, but I'd like to hear more opinions.\n> \n> Thanks.\n> \n>>> And, just using idle_session_timeout on a remote server may not help\n>>> us completely. Because the remote session may go away, while we are\n>>> still using that cached connection in an explicit txn on the local\n>>> session. Our connection retry will also not work because we are in the\n>>> middle of an xact, so the local explicit txn gets aborted.\n>>\n>> Regarding idle_in_transaction_session_timeout, this seems true. But\n>> I was thinking that idle_session_timeout doesn't cause this issue because\n>> it doesn't close the connection in the middle of transaction. 
No?\n> \n> You are right. idle_session_timeout doesn't take effect when in the\n> middle of an explicit txn. I missed this point.\n> \n>> Here are some review comments.\n>>\n>> - (used_in_current_xact && !keep_connections))\n>> + (used_in_current_xact &&\n>> + (!keep_connections || !entry->keep_connection)))\n>>\n>> The names of GUC and server-level option should be the same,\n>> to make the thing less confusing?\n> \n> We can have GUC name keep_connections as there can be multiple\n> connections within a local session and I can change the server level\n> option keep_connection to keep_connections because a single foreign\n> server can have multiple connections as we have seen that in the use\n> case identified by you. I will change that in the next patch set.\n> \n>> IMO the server-level option should override GUC. IOW, GUC setting\n>> should be used only when the server-level option is not specified.\n>> But the above code doesn't seem to do that. Thought?\n> \n> Note that default values for GUC and server level option are on i.e.\n> connections are cached.\n> \n> The main intention of the GUC is to not set server level options to\n> false for all the foreign servers in case users don't want to keep any\n> foreign server connections. If the server level option overrides GUC,\n> then even if users set GUC to off, they have to set the server level\n> option to false for all the foreign servers.\n\nMaybe my explanation in the previous email was unclear. What I think is; If the server-level option is explicitly specified, its setting is used whatever GUC is. On the other hand, if the server-level option is NOT specified, GUC setting is used. 
For example, if we define the server as follows, GUC setting is used because the server-level option is NOT specified.\n\n CREATE SERVER loopback FOREIGN DATA WRAPPER postgres;\n\nIf we define the server as follows, the server-level setting is used.\n\n CREATE SERVER loopback FOREIGN DATA WRAPPER postgres OPTIONS (keep_connections 'on');\n\n\nFor example, log_autovacuum_min_duration GUC and reloption work in the similar way. That is, reloption setting overrides GUC. If reltion is not specified, GUC is used.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 3 Feb 2021 19:52:26 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Wed, Feb 3, 2021 at 4:22 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> Maybe my explanation in the previous email was unclear. What I think is; If the server-level option is explicitly specified, its setting is used whatever GUC is. On the other hand, if the server-level option is NOT specified, GUC setting is used. For example, if we define the server as follows, GUC setting is used because the server-level option is NOT specified.\n>\n> CREATE SERVER loopback FOREIGN DATA WRAPPER postgres;\n>\n> If we define the server as follows, the server-level setting is used.\n>\n> CREATE SERVER loopback FOREIGN DATA WRAPPER postgres OPTIONS (keep_connections 'on');\n\nAttaching v20 patch set. Now, server level option if provided\noverrides the GUC.The GUC will be used only if server level option is\nnot provided. 
And also, both server level option and GUC are named the\nsame - \"keep_connections\".\n\nPlease have a look.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 4 Feb 2021 09:36:42 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Thu, Feb 4, 2021 at 9:36 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Feb 3, 2021 at 4:22 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > Maybe my explanation in the previous email was unclear. What I think is; If the server-level option is explicitly specified, its setting is used whatever GUC is. On the other hand, if the server-level option is NOT specified, GUC setting is used. For example, if we define the server as follows, GUC setting is used because the server-level option is NOT specified.\n> >\n> > CREATE SERVER loopback FOREIGN DATA WRAPPER postgres;\n> >\n> > If we define the server as follows, the server-level setting is used.\n> >\n> > CREATE SERVER loopback FOREIGN DATA WRAPPER postgres OPTIONS (keep_connections 'on');\n>\n> Attaching v20 patch set. Now, server level option if provided\n> overrides the GUC.The GUC will be used only if server level option is\n> not provided. 
And also, both server level option and GUC are named the\n> same - \"keep_connections\".\n>\n> Please have a look.\n\nAttaching v21 patch set, rebased onto the latest master.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 22 Feb 2021 11:25:31 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On 2021/02/22 14:55, Bharath Rupireddy wrote:\n> On Thu, Feb 4, 2021 at 9:36 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Wed, Feb 3, 2021 at 4:22 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>> Maybe my explanation in the previous email was unclear. What I think is; If the server-level option is explicitly specified, its setting is used whatever GUC is. On the other hand, if the server-level option is NOT specified, GUC setting is used. For example, if we define the server as follows, GUC setting is used because the server-level option is NOT specified.\n>>>\n>>> CREATE SERVER loopback FOREIGN DATA WRAPPER postgres;\n>>>\n>>> If we define the server as follows, the server-level setting is used.\n>>>\n>>> CREATE SERVER loopback FOREIGN DATA WRAPPER postgres OPTIONS (keep_connections 'on');\n>>\n>> Attaching v20 patch set. Now, server level option if provided\n>> overrides the GUC.The GUC will be used only if server level option is\n>> not provided. And also, both server level option and GUC are named the\n>> same - \"keep_connections\".\n>>\n>> Please have a look.\n> \n> Attaching v21 patch set, rebased onto the latest master.\n\nI agree to add the server-level option. But I'm still not sure if it's good idea to also expose that option as GUC. 
Isn't the server-level option enough for most cases?\n\nAlso it's strange to expose only this option as GUC while there are other many postgres_fdw options?\n\nWith v21-002 patch, even when keep_connections GUC is disabled, the existing open connections are not close immediately. Only connections used in the transaction are closed at the end of that transaction. That is, the existing connections that no transactions use will never be closed. I'm not sure if this behavior is intuitive for users.\n\nTherefore for now I'm thinking to support the server-level option at first... Then if we find it's not enough for most cases in practice, I'd like to consider to expose postgres_fdw options including keep_connections as GUC.\n\nThought?\n\nBTW these patches fail to be applied to the master because of commit 27e1f14563. I updated and simplified the 003 patch. Patch attached.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Fri, 2 Apr 2021 00:26:06 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "On Thu, Apr 1, 2021 at 8:56 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> > Attaching v21 patch set, rebased onto the latest master.\n>\n> I agree to add the server-level option. But I'm still not sure if it's good idea to also expose that option as GUC. Isn't the server-level option enough for most cases?\n>\n> Also it's strange to expose only this option as GUC while there are other many postgres_fdw options?\n>\n> With v21-002 patch, even when keep_connections GUC is disabled, the existing open connections are not close immediately. Only connections used in the transaction are closed at the end of that transaction. 
That is, the existing connections that no transactions use will never be closed. I'm not sure if this behavior is intuitive for users.\n>\n> Therefore for now I'm thinking to support the server-level option at first... Then if we find it's not enough for most cases in practice, I'd like to consider to expose postgres_fdw options including keep_connections as GUC.\n>\n> Thought?\n\n+1 to have only a server-level option for now and if the need arises\nwe could expose it as a GUC.\n\n> BTW these patches fail to be applied to the master because of commit 27e1f14563. I updated and simplified the 003 patch. Patch attached.\n\nThanks for updating the patch. It looks good to me. Just a minor\nchange, instead of using \"true\" and \"off\" for the option, I used \"on\"\nand \"off\" in the docs. Attaching v23.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 1 Apr 2021 21:43:42 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/04/02 1:13, Bharath Rupireddy wrote:\n> On Thu, Apr 1, 2021 at 8:56 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>> Attaching v21 patch set, rebased onto the latest master.\n>>\n>> I agree to add the server-level option. But I'm still not sure if it's good idea to also expose that option as GUC. Isn't the server-level option enough for most cases?\n>>\n>> Also it's strange to expose only this option as GUC while there are other many postgres_fdw options?\n>>\n>> With v21-002 patch, even when keep_connections GUC is disabled, the existing open connections are not close immediately. Only connections used in the transaction are closed at the end of that transaction. That is, the existing connections that no transactions use will never be closed. 
I'm not sure if this behavior is intuitive for users.\n>>\n>> Therefore for now I'm thinking to support the server-level option at first... Then if we find it's not enough for most cases in practice, I'd like to consider to expose postgres_fdw options including keep_connections as GUC.\n>>\n>> Thought?\n> \n> +1 to have only a server-level option for now and if the need arises\n> we could expose it as a GUC.\n> \n>> BTW these patches fail to be applied to the master because of commit 27e1f14563. I updated and simplified the 003 patch. Patch attached.\n> \n> Thanks for updating the patch. It looks good to me. Just a minor\n> change, instead of using \"true\" and \"off\" for the option, I used \"on\"\n> and \"off\" in the docs. Attaching v23.\n\nThanks a lot! Barring any objection, I will commit this version.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 2 Apr 2021 02:22:08 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" }, { "msg_contents": "\n\nOn 2021/04/02 2:22, Fujii Masao wrote:\n> Thanks a lot! Barring any objection, I will commit this version.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 2 Apr 2021 19:47:39 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgres_fdw connection caching - cause remote sessions\n linger till the local session exit" } ]
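For reference, the connection-discard test that the patch versions in the thread above add to postgres_fdw's transaction-end callback can be sketched in Python. This is a simplified illustration of the quoted C condition from the v19/v20-era patches (which still had both the GUC and the per-server option; the committed feature kept only the server-level option), and the parameter names here are illustrative, not the actual C identifiers:

```python
def discard_at_xact_end(conn_ok, idle, changing_xact_state, invalidated,
                        used_in_current_xact, keep_connections_guc,
                        server_keep_connections):
    # A cached connection is dropped at transaction end if it is broken,
    # not idle, mid-state-change, or invalidated -- or if it was used in
    # the current transaction and either the GUC or the server-level
    # keep_connections option is off (the patch-era semantics quoted above).
    return (not conn_ok
            or not idle
            or changing_xact_state
            or invalidated
            or (used_in_current_xact
                and (not keep_connections_guc or not server_keep_connections)))
```

For example, a healthy idle connection used in the current transaction is kept only when both settings are on; turning either one off causes it to be discarded at commit.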
[ { "msg_contents": "Hi,\n\nThis is something I've wanted several times in the past, so I thought\nothers here could be interested: if you're looking for a way to run\nyour development branch through check-world on a big endian box, the\nnew s390x support[1] on Travis is good for that. Capacity is a bit\nlimited, so I don't think I'll point cfbot.cputube.org at it just yet\n(maybe I need to invent a separate slow build cycle).\n\nI tried it just now and found that cfbot's .travis.yml file[2] just\nneeded this at the top:\n\narch:\n - s390x\n\n... and then it needed these lines commented out:\n\n#before_install:\n# - echo '/tmp/%e-%s-%p.core' | sudo tee /proc/sys/kernel/core_pattern\n\nI didn't look into why, but otherwise that fails with a permission\nerror on that environment, so it'd be nice to figure out what's up\nwith that so we can still get back traces from cores.\n\n[1] https://blog.travis-ci.com/2019-11-12-multi-cpu-architecture-ibm-power-ibm-z\n[2] https://github.com/macdice/cfbot/blob/master/travis/.travis.yml\n\n\n", "msg_date": "Mon, 22 Jun 2020 18:27:19 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Testing big endian code with Travis CI's new s390x support" }, { "msg_contents": "On Mon, Jun 22, 2020 at 6:27 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> arch:\n> - s390x\n>\n> ... and then it needed these lines commented out:\n>\n> #before_install:\n> # - echo '/tmp/%e-%s-%p.core' | sudo tee /proc/sys/kernel/core_pattern\n\nOne thing I forgot to mention: for some reason slapd is strangely\nbroken on that system. Removing the ldap test with PG_TEST_EXTRA=\"ssl\nkerberos\" works around that.\n\n\n", "msg_date": "Tue, 23 Jun 2020 16:04:36 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Testing big endian code with Travis CI's new s390x support" } ]
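The Travis configuration changes Thomas describes in the thread above boil down to a fragment like the following (a hypothetical minimal excerpt of cfbot's .travis.yml, not the real file, which carries many more keys):

```yaml
# Request IBM Z (big endian) builders; capacity is limited, so this suits
# an occasional/slow build cycle better than running on every push.
arch:
  - s390x

# The core_pattern tweak fails with a permission error on s390x, so it is
# commented out until the cause is understood (this loses core backtraces):
#before_install:
#  - echo '/tmp/%e-%s-%p.core' | sudo tee /proc/sys/kernel/core_pattern

# slapd is strangely broken on these builders, so the ldap test is dropped:
env:
  global:
    - PG_TEST_EXTRA="ssl kerberos"
```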
[ { "msg_contents": "hello hackers,\r\n\r\nWhen I try to use pg_resetwal tool to skip some transaction ID, I get a problem that is\r\nthe tool can accept all transaction id I offered with '-x' option, however, the database\r\nmay failed to restart because of can not read file under $PGDATA/pg_xact. For\r\nexample, the 'NextXID' in a database is 1000, if you offer '-x 32769' then the database\r\nfailed to restart.\r\n\r\nI read the document of pg_resetwal tool, it told me to write a 'safe value', but I think\r\npg_resetwal tool should report it and refuse to exec walreset work when using an unsafe\r\nvalue, rather than remaining it until the user restarts the database.\r\n\r\nI do a initial patch to limit the input, now it accepts transaction in two ways:\r\n1. The transaction ID is on the same CLOG page with the 'NextXID' in pg_control.\r\n2. The transaction ID is right at the end of a CLOG page.\r\nThe input limited above can ensure the database restart successfully.\r\n\r\nThe same situation with multixact and multixact-offset option and I make\r\nthe same change in the patch.\r\n\r\nDo you think it is an issue?\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Mon, 22 Jun 2020 14:31:37 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "pg_resetwal --next-transaction-id may cause database failed to\n restart." }, { "msg_contents": "On 2020-Jun-22, movead.li@highgo.ca wrote:\n\n> hello hackers,\n> \n> When I try to use pg_resetwal tool to skip some transaction ID, I get a problem that is\n> the tool can accept all transaction id I offered with '-x' option, however, the database\n> may failed to restart because of can not read file under $PGDATA/pg_xact. 
For\n> example, the 'NextXID' in a database is 1000, if you offer '-x 32769' then the database\n> failed to restart.\n\nYeah, the normal workaround is to create the necessary file manually in\norder to let the system start after such an operation; they are\nsometimes necessary to enable testing weird cases with wraparound and\nsuch. So a total rejection to work for these cases would be unhelpful\nprecisely for the scenario that those switches were intended to serve.\n\nMaybe a better answer is to have a new switch in postmaster that creates\nany needed files (incl. producing associated WAL etc); so you'd run\npg_resetwal -x some-value\npostmaster --create-special-stuff\nthen start your server and off you go.\n\nNow maybe this is too much complication for a mechanism that really\nisn't for general consumption anyway. I mean, if you're using\npg_resetwal, you're already playing with fire.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 23 Jun 2020 12:22:12 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_resetwal --next-transaction-id may cause database failed to\n restart." }, { "msg_contents": ">Yeah, the normal workaround is to create the necessary file manually in\r\n>order to let the system start after such an operation; they are\r\n>sometimes necessary to enable testing weird cases with wraparound and\r\n>such. So a total rejection to work for these cases would be unhelpful\r\n>precisely for the scenario that those switches were intended to serve.\r\nI think these words should appear in pg_resetwal document if we decide\r\nto do nothing for this issue. \r\n\r\n>Maybe a better answer is to have a new switch in postmaster that creates\r\n>any needed files (incl.
producing associated WAL etc); so you'd run\r\n>pg_resetwal -x some-value\r\n>postmaster --create-special-stuff\r\n>then start your server and off you go.\r\nAs shown in the document, it looks like to rule a safe input, so I think it's better\r\nto rule it and add an option to focus write an unsafe value if necessary.\r\n \r\n>Now maybe this is too much complication for a mechanism that really\r\n>isn't for general consumption anyway. I mean, if you're using\r\n>pg_resetwal, you're already playing with fire.\r\nYes, that's true, I always heard the word \"You'd better not use pg_walreset\".\r\nBut the tool appear in PG code, it's better to improve it than do nothing.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Wed, 24 Jun 2020 16:49:31 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: pg_resetwal --next-transaction-id may cause database failed to\n restart." }, { "msg_contents": "On 2020-Jun-24, movead.li@highgo.ca wrote:\n\n> >Maybe a better answer is to have a new switch in postmaster that creates\n> >any needed files (incl. producing associated WAL etc); so you'd run\n> >pg_resetwal -x some-value\n> >postmaster --create-special-stuff\n> >then start your server and off you go.\n>\n> As shown in the document, it looks like to rule a safe input, so I think it's better\n> to rule it and add an option to focus write an unsafe value if necessary.\n\nISTM that a reasonable compromise is that if you use -x (or -c, -m, -O)\nand the input value is outside the range supported by existing files,\nthen it's a fatal error; unless you use --force, which turns it into\njust a warning.\n\n> >Now maybe this is too much complication for a mechanism that really\n> >isn't for general consumption anyway.
I mean, if you're using\n> >pg_resetwal, you're already playing with fire.\n> Yes, that's true, I always heard the word \"You'd better not use pg_walreset\".\n> But the tool appear in PG code, it's better to improve it than do nothing.\n\nSure.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 24 Jun 2020 11:04:22 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_resetwal --next-transaction-id may cause database failed to\n restart." }, { "msg_contents": ">ISTM that a reasonable compromise is that if you use -x (or -c, -m, -O)\r\n>and the input value is outside the range supported by existing files,\r\n>then it's a fatal error; unless you use --force, which turns it into\r\n>just a warning.\r\nI do not think '--force' is a good choice, so I add a '--test, -t' option to\r\nforce to write a unsafe value to pg_control.\r\nDo you think it is an acceptable method?\r\n\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Tue, 7 Jul 2020 11:22:46 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: pg_resetwal --next-transaction-id may cause database failed to\n restart." }, { "msg_contents": "On 2020-Jul-07, movead.li@highgo.ca wrote:\n\n> >ISTM that a reasonable compromise is that if you use -x (or -c, -m, -O)\n> >and the input value is outside the range supported by existing files,\n> >then it's a fatal error; unless you use --force, which turns it into\n> >just a warning.\n>\n> I do not think '--force' is a good choice, so I add a '--test, -t' option to\n> force to write a unsafe value to pg_control.\n> Do you think it is an acceptable method?\n\nThe rationale for this interface is unclear to me.
Please explain what\nhappens in each case?\n\nIn my proposal, we'd have:\n\n* Bad value, no --force:\n  - program raises error, no work done.\n* Bad value with --force:\n  - program raises warning but changes anyway.\n* Good value, no --force:\n  - program changes value without saying anything\n* Good value with --force:\n  - same\n\nThe rationale for this interface is convenient knowledgeable access: the\nDBA runs the program with value X, and if the value is good, then\nthey're done. If the program raises an error, DBA has a choice: either\nrun with --force because they know what they're doing, or don't do\nanything because they know that they would make a mess.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 7 Jul 2020 11:06:39 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_resetwal --next-transaction-id may cause database failed to\n restart." }, { "msg_contents": ">The rationale for this interface is unclear to me. Please explain what\r\n>happens in each case?\r\n>In my proposal, we'd have:\r\n>* Bad value, no --force:\r\n>  - program raises error, no work done.\r\n>* Bad value with --force:\r\n>  - program raises warning but changes anyway.\r\n>* Good value, no --force:\r\n>  - program changes value without saying anything\r\n>* Good value with --force:\r\n>  - same\r\nYou have list all cases, maybe you are right it needs to raise a warning\r\nwhen force a Bad value write which missed in the patch.\r\nAnd I use '--test' in the patch, not '--force' temporary, maybe it needs\r\na deep research and discuss.\r\n\r\n>The rationale for this interface is convenient knowledgeable access: the\r\n>DBA runs the program with value X, and if the value is good, then\r\n>they're done.
If the program raises an error, DBA has a choice: either\r\n>run with --force because they know what they're doing, or don't do\r\n>anything because they know that they would make a mess.\r\nYes that's it, in addition the raised error, can tell the DBA to input a good\r\nvalue.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Wed, 8 Jul 2020 09:21:08 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: pg_resetwal --next-transaction-id may cause database failed to\n restart."
}, { "msg_contents": "On Wed, Jun 24, 2020 at 11:04 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> ISTM that a reasonable compromise is that if you use -x (or -c, -m, -O)\n> and the input value is outside the range supported by existing files,\n> then it's a fatal error; unless you use --force, which turns it into\n> just a warning.\n\nOne potential problem is that you might be using --force for some\nother reason and end up forcing this, too. But maybe that's OK.\n\nPerhaps we should consider the idea of having pg_resetwal create the\nrelevant clog file and zero-fill it, if it doesn't exist already,\nrather than leaving that to to the DBA or the postmaster binary to do\nit. It seems like that is what people would want to happen in this\nsituation.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 8 Jul 2020 11:10:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_resetwal --next-transaction-id may cause database failed to\n restart." }, { "msg_contents": ">> ISTM that a reasonable compromise is that if you use -x (or -c, -m, -O)\r\n>> and the input value is outside the range supported by existing files,\r\n>> then it's a fatal error; unless you use --force, which turns it into\r\n>> just a warning.\r\n \r\n>One potential problem is that you might be using --force for some\r\n>other reason and end up forcing this, too. But maybe that's OK.\r\nYes it's true, so I try to add a new option to control this behavior, you\r\ncan see it in the last mail with attach.\r\n \r\n>Perhaps we should consider the idea of having pg_resetwal create the\r\n>relevant clog file and zero-fill it, if it doesn't exist already,\r\n>rather than leaving that to to the DBA or the postmaster binary to do\r\n>it. 
It seems like that is what people would want to happen in this\r\n>situation.\r\nI have considered this idea, but I think it produces files uncontrolled\r\nby postmaster, so I think it may be unacceptable and give up.\r\n\r\nIn the case we force to write an unsafe value, we can create or extend\r\nrelated files I think. Do you have any further idea, I can work out a new\r\npatch.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n", "msg_date": "Thu, 9 Jul 2020 09:31:51 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: pg_resetwal --next-transaction-id may cause database failed to\n restart." 
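Robert's zero-fill idea quoted above can be illustrated outside the server. The helper below is hypothetical (it is not pg_resetwal code, and the file name in the test is made up), but the sizes match the usual SLRU layout: 32 pages of BLCKSZ (8 kB) each, so an empty segment is 256 kB of zeros.

```c
#include <stdio.h>

#define BLCKSZ 8192                 /* PostgreSQL's default block size */
#define SLRU_PAGES_PER_SEGMENT 32   /* pages per SLRU segment file */

/* Create "path" filled with zeros, the way an empty SLRU segment looks
 * on disk.  Returns 0 on success, -1 on I/O error.  Hypothetical helper,
 * only a sketch of the idea discussed in the thread. */
static int
zero_fill_segment(const char *path)
{
    static const char zeros[BLCKSZ];    /* one zero-initialized page */
    FILE *f = fopen(path, "wb");

    if (f == NULL)
        return -1;
    for (int i = 0; i < SLRU_PAGES_PER_SEGMENT; i++)
    {
        if (fwrite(zeros, 1, BLCKSZ, f) != BLCKSZ)
        {
            fclose(f);
            return -1;
        }
    }
    return fclose(f) == 0 ? 0 : -1;
}
```

The resulting file is byte-for-byte what the postmaster would otherwise create lazily; whether pg_resetwal should do this itself is exactly the open question in the mails above.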
}, { "msg_contents": "On 2020-Jul-09, movead.li@highgo.ca wrote:\n\n> >> ISTM that a reasonable compromise is that if you use -x (or -c, -m, -O)\n> >> and the input value is outside the range supported by existing files,\n> >> then it's a fatal error; unless you use --force, which turns it into\n> >> just a warning.\n> \n> >One potential problem is that you might be using --force for some\n> >other reason and end up forcing this, too. But maybe that's OK.\n> Yes it's true, so I try to add a new option to control this behavior, you\n> can see it in the last mail with attach.\n\nIt may be OK actually; if you're doing multiple dangerous changes, you'd\nuse --dry-run beforehand ... No? (It's what *I* would do, for sure.)\nWhich in turns suggests that it would good to ensure that --dry-run\n*also* emits a warning (not an error, so that any other warnings can\nalso be thrown and the user gets the full picture).\n\nI think adding multiple different --force switches makes the UI more\ncomplex for little added value.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Jul 2020 01:18:35 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_resetwal --next-transaction-id may cause database failed to\n restart." }, { "msg_contents": ">It may be OK actually; if you're doing multiple dangerous changes, you'd\r\n>use --dry-run beforehand ... No? 
(It's what *I* would do, for sure.)\r\n>Which in turns suggests that it would good to ensure that --dry-run\r\n>*also* emits a warning (not an error, so that any other warnings can\r\n>also be thrown and the user gets the full picture).\r\nYes that's true, I have chaged the patch and will get a warning rather than\r\nerror when we point a --dry-run option.\r\nAnd I remake the code which looks more clearly.\r\n\r\n>I think adding multiple different --force switches makes the UI more\r\n>complex for little added value.\r\nYes I also feel about that, but I can't convince myself to use --force\r\nto finish the mission, because --force is used when something wrong with\r\npg_control file and we can listen to hackers' proposals.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Mon, 20 Jul 2020 13:42:59 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: pg_resetwal --next-transaction-id may cause database failed to\n restart." } ]
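The option semantics being negotiated in this thread amount to a small decision table. As a standalone illustration only (the names below are invented, not code from the patch): a bad value without --force is a hard error and no work is done, a bad value with --force degrades to a warning plus the change, and --dry-run reports without changing anything.

```c
#include <stdbool.h>

typedef enum
{
    ACT_ERROR,            /* refuse, no work done */
    ACT_WARN_AND_CHANGE,  /* warn, but change anyway */
    ACT_CHANGE,           /* change silently */
    ACT_REPORT_ONLY       /* --dry-run: report, never change */
} reset_action;

/* Hypothetical summary of the behavior discussed for pg_resetwal:
 * value_ok = the requested value is within the range covered by the
 *            existing on-disk files; force/dry_run = the switches. */
static reset_action
decide(bool value_ok, bool force, bool dry_run)
{
    if (dry_run)
        return ACT_REPORT_ONLY;         /* warnings may still be printed */
    if (value_ok)
        return ACT_CHANGE;              /* same with or without --force */
    return force ? ACT_WARN_AND_CHANGE  /* bad value, forced */
                 : ACT_ERROR;           /* bad value, not forced */
}
```

Note that in this table --force is irrelevant for a good value, which is why a separate switch for just this check was argued to add little over reusing --force plus --dry-run.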
[ { "msg_contents": "Hi All,\nExecInitPartitionInfo() has code\n 536 /*\n 537 * Open partition indices. The user may have asked to check for\nconflicts\n 538 * within this leaf partition and do \"nothing\" instead of throwing\nan\n 539 * error. Be prepared in that case by initializing the index\ninformation\n 540 * needed by ExecInsert() to perform speculative insertions.\n 541 */\n 542 if (partrel->rd_rel->relhasindex &&\n 543 leaf_part_rri->ri_IndexRelationDescs == NULL)\n 544 ExecOpenIndices(leaf_part_rri,\n 545 (node != NULL &&\n 546 node->onConflictAction != ONCONFLICT_NONE));\n\nThis calls ExecOpenIndices only when ri_indexRelationDescs has not been\ninitialized which is fine. I think this is done so that we don't open\nindices again and again when multiple tuples are routed to the same\npartition. But as part of opening indices, we also open corresponding index\nrelations using index_open.\n\nThe indices opened here are closed in ExecCleanupTupleRouting(), but it\ndoes this unconditionally. This means that for any reason we had called\nExecOpenIndices on a partition before ExecInitPartitionInfo() following\nthings will happen\n1. ExecOpenIndices will overwrite the old arrays for index descriptors\nleaking memory.\n2. ExecCleanupTupleRouting will close index relations that were opened\nsecond time cleaning up memory. But the relcache references corresponding\nto the first call to ExecOpenIndices will leak.\n\nSimilar situation can happen if ExecOpenIndices is called on a partition\nafter it has been called in ExecInitPartitionInfo.\n\nI couldn't find code where this can happen but I don't see any code which\nprevents this. This looks like a recipe for memory and reference leaks.\n\nWe could fix this by\n1. Make ExecOpenIndices and ExecCloseIndices so that they can be called\nmultiple times on the same relation similar to heap_open. The second time\nonwards ExecOpenIndices doesn't allocate memory and open indexes but\nincreases a refcount. 
ExecCloseIndices releases memory and closes the index\nrelations when the refcount drops to 0. Then we don't need to\ncheck leaf_part_rri->ri_IndexRelationDescs == NULL in\nExecInitPartitionInfo().\n\n2. Throw an error in ExecOpenIndices if all the arrays are present. We will\nneed to check leaf_part_rri->ri_IndexRelationDescs == NULL in\nExecInitPartitionInfo().\n\n-- \nBest Wishes,\nAshutosh", "msg_date": "Mon, 22 Jun 2020 19:48:52 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Asymmetry in opening and closing indices for partition routing" }, { "msg_contents": "Hi Ashutosh,\n\nOn 2020-Jun-22, Ashutosh Bapat wrote:\n\n> I couldn't find code where this can happen but I don't see any code which\n> prevents this. This looks like a recipe for memory and reference leaks.\n> \n> We could fix this by\n> 1. Make ExecOpenIndices and ExecCloseIndices so that they can be called\n> multiple times on the same relation similar to heap_open. The second time\n> onwards ExecOpenIndices doesn't allocate memory and open indexes but\n> increases a refcount. ExecCloseIndices releases memory and closes the index\n> relations when the refcount drops to 0. 
Then we don't need to\n> check leaf_part_rri->ri_IndexRelationDescs == NULL in\n> ExecInitPartitionInfo().\n\nI think there are a couple of places in executor related to partition\ntuple routing where code is a bit weirdly structured.  It might be nice\nto improve on that if you either find inefficiencies that can be fixed,\nor clear code structure improvements, as long as they don't make\nperformance worse.  Feel free to have a look around and see if you can\npropose some concrete proposals.\n\nI'm not sure that expecting the relcache entry's refcount drops to zero\nat the right time is a good approach; that may cause leaks some other\nplace might have refcounts you're not expecting (say, an open cursor\nthat's not fully read).\n\n(I'm not terribly worried about refcount leakage as a theoretical\nconcern, since the ResourceOwner mechanism will warn us about that if it\nhappens.)\n\n> 2. Throw an error in ExecOpenIndices if all the arrays are present. We will\n> need to check leaf_part_rri->ri_IndexRelationDescs == NULL in\n> ExecInitPartitionInfo().\n\nThis sounds like a job for an assert rather than an error.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 22 Jun 2020 13:52:22 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Asymmetry in opening and closing indices for partition routing" }, { "msg_contents": "On Mon, 22 Jun 2020 at 23:22, Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n>\n> I'm not sure that expecting the relcache entry's refcount drops to zero\n> at the right time is a good approach; that may cause leaks some other\n> place might have refcounts you're not expecting (say, an open cursor\n> that's not fully read).\n>\n\nMy proposal was to maintain a refcount counting the number of times an\nindex is opened in ResultRelInfo itself, not to rely on the relcache ref\ncount. 
But I think that would be an overkill. Please read ahead\n\n\n>\n> (I'm not terribly worried about refcount leakage as a theoretical\n> concern, since the ResourceOwner mechanism will warn us about that if it\n> happens.)\n>\n> > 2. Throw an error in ExecOpenIndices if all the arrays are present. We\n> will\n> > need to check leaf_part_rri->ri_IndexRelationDescs == NULL in\n> > ExecInitPartitionInfo().\n>\n> This sounds like a job for an assert rather than an error.\n>\n\nI agree. Here's a patch to fix to add Assert'ion in ExecOpenIndices(). I\nran make check with this patch and the assertion didn't trip. I think this\nwill be a good step forward.\n\n-- \nBest Wishes,\nAshutosh", "msg_date": "Mon, 29 Jun 2020 15:33:55 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Asymmetry in opening and closing indices for partition routing" } ]
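Option 1 from the first mail, making the open/close pair callable multiple times with only the first open allocating and only the last close releasing, is essentially a reference count kept next to the opened state. The sketch below uses invented names and a plain pointer as a stand-in for ri_IndexRelationDescs; it only shows the shape of the idea, not executor code.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for the per-result-relation index state. */
typedef struct IndexState
{
    int   refcount;   /* times "opened" minus times "closed" */
    void *descs;      /* stands in for ri_IndexRelationDescs */
} IndexState;

static void
open_indices(IndexState *s)
{
    if (s->refcount++ == 0)
    {
        /* first open: actually allocate and open */
        s->descs = malloc(1);
    }
    /* later opens only bump the count, never overwrite the arrays */
}

static void
close_indices(IndexState *s)
{
    assert(s->refcount > 0);    /* close without matching open is a bug */
    if (--s->refcount == 0)
    {
        /* last close: actually release */
        free(s->descs);
        s->descs = NULL;
    }
}
```

With this shape, the asymmetry described in the thread disappears: a second open never leaks the first allocation, and an unconditional close in cleanup only releases once the count reaches zero.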
[ { "msg_contents": "The comment for InsertPgAttributeTuple no longer, since 911e702077037996, match\nreality as attoptions isn't always initialized to NULL. The attached removes\nmention of attoptions, and updates the list of always-NULL attributes to match\nwhat the code does (the git history didn't provide rationale for why they were\nomitted so it seemed like an oversight).\n\ncheers ./daniel", "msg_date": "Mon, 22 Jun 2020 16:27:18 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Update InsertPgAttributeTuple comment to match new signature" }, { "msg_contents": "On Mon, Jun 22, 2020 at 04:27:18PM +0200, Daniel Gustafsson wrote:\n> The comment for InsertPgAttributeTuple no longer, since 911e702077037996, match\n> reality as attoptions isn't always initialized to NULL. The attached removes\n> mention of attoptions, and updates the list of always-NULL attributes to match\n> what the code does (the git history didn't provide rationale for why they were\n> omitted so it seemed like an oversight).\n\nLooks right to me, good catch. I'll apply that tomorrow my time\nexcept if there are any objections in-between.\n--\nMichael", "msg_date": "Tue, 23 Jun 2020 14:31:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Update InsertPgAttributeTuple comment to match new signature" }, { "msg_contents": "On Tue, Jun 23, 2020 at 02:31:05PM +0900, Michael Paquier wrote:\n> Looks right to me, good catch. I'll apply that tomorrow my time\n> except if there are any objections in-between.\n\nAnd done.\n--\nMichael", "msg_date": "Wed, 24 Jun 2020 15:16:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Update InsertPgAttributeTuple comment to match new signature" } ]
[ { "msg_contents": "Hi,\n\nIf restart_lsn of logical replication slot gets behind more than\nmax_slot_wal_keep_size from the current LSN, the logical replication\nslot would be invalidated and its restart_lsn is reset to an invalid LSN.\nIf this logical replication slot with an invalid restart_lsn is specified\nas the source slot in pg_copy_logical_replication_slot(), the function\ncauses the following assertion failure.\n\n TRAP: FailedAssertion(\"!logical_slot\", File: \"slotfuncs.c\", Line: 727)\n\nThis assertion failure is caused by\n\n\t/* Copying non-reserved slot doesn't make sense */\n\tif (XLogRecPtrIsInvalid(src_restart_lsn))\n\t\tereport(ERROR,\n\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n\t\t\t\t errmsg(\"cannot copy a replication slot that doesn't reserve WAL\")));\n\nI *guess* this assertion check was added because restart_lsn should\nnot be invalid before. But in v13, it can be invalid thanks to max_slot_wal_keep_size.\nI think that this assertion check seems useless and should be removed in v13.\nPatch attached. 
Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 23 Jun 2020 00:17:47 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Assertion failure in pg_copy_logical_replication_slot()" }, { "msg_contents": "At Tue, 23 Jun 2020 00:17:47 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> Hi,\n> \n> If restart_lsn of logical replication slot gets behind more than\n> max_slot_wal_keep_size from the current LSN, the logical replication\n> slot would be invalidated and its restart_lsn is reset to an invalid\n> LSN.\n> If this logical replication slot with an invalid restart_lsn is\n> specified\n> as the source slot in pg_copy_logical_replication_slot(), the function\n> causes the following assertion failure.\n\nGood catch!\n\n> TRAP: FailedAssertion(\"!logical_slot\", File: \"slotfuncs.c\", Line: 727)\n> \n> This assertion failure is caused by\n> \n> \t/* Copying non-reserved slot doesn't make sense */\n> \tif (XLogRecPtrIsInvalid(src_restart_lsn))\n> \t\tereport(ERROR,\n> \t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> \t\t\t\t errmsg(\"cannot copy a replication slot that doesn't reserve\n> \t\t\t\t WAL\")));\n> \n> I *guess* this assertion check was added because restart_lsn should\n> not be invalid before. But in v13, it can be invalid thanks to\n> max_slot_wal_keep_size.\n> I think that this assertion check seems useless and should be removed\n> in v13.\n> Patch attached. Thought?\n\nYour diagnosis looks correct to me. The assertion failure means that\ncopy_replication_slot was not exercised at least for a non-reserving\nlogical slots. 
Greping \"pg_copy_logical_replication_slot\" on src/test\nshowed nothing so I doubt we are exercising the function.\n\nDon't we need some?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 23 Jun 2020 18:42:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in pg_copy_logical_replication_slot()" }, { "msg_contents": "\n\nOn 2020/06/23 18:42, Kyotaro Horiguchi wrote:\n> At Tue, 23 Jun 2020 00:17:47 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> Hi,\n>>\n>> If restart_lsn of logical replication slot gets behind more than\n>> max_slot_wal_keep_size from the current LSN, the logical replication\n>> slot would be invalidated and its restart_lsn is reset to an invalid\n>> LSN.\n>> If this logical replication slot with an invalid restart_lsn is\n>> specified\n>> as the source slot in pg_copy_logical_replication_slot(), the function\n>> causes the following assertion failure.\n> \n> Good catch!\n> \n>> TRAP: FailedAssertion(\"!logical_slot\", File: \"slotfuncs.c\", Line: 727)\n>>\n>> This assertion failure is caused by\n>>\n>> \t/* Copying non-reserved slot doesn't make sense */\n>> \tif (XLogRecPtrIsInvalid(src_restart_lsn))\n>> \t\tereport(ERROR,\n>> \t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n>> \t\t\t\t errmsg(\"cannot copy a replication slot that doesn't reserve\n>> \t\t\t\t WAL\")));\n>>\n>> I *guess* this assertion check was added because restart_lsn should\n>> not be invalid before. But in v13, it can be invalid thanks to\n>> max_slot_wal_keep_size.\n>> I think that this assertion check seems useless and should be removed\n>> in v13.\n>> Patch attached. Thought?\n> \n> Your diagnosis looks correct to me.\n\nThanks for the check! I will commit the patch later.\n\n> The assertion failure means that\n> copy_replication_slot was not exercised at least for a non-reserving\n> logical slots. 
Greping \"pg_copy_logical_replication_slot\" on src/test\n> showed nothing so I doubt we are exercising the function.\n> \n> Don't we need some?\n\nYes, increasing the test coverage sounds helpful!\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 24 Jun 2020 03:29:32 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Assertion failure in pg_copy_logical_replication_slot()" }, { "msg_contents": "On 2020-Jun-23, Fujii Masao wrote:\n\n> If restart_lsn of logical replication slot gets behind more than\n> max_slot_wal_keep_size from the current LSN, the logical replication\n> slot would be invalidated and its restart_lsn is reset to an invalid LSN.\n> If this logical replication slot with an invalid restart_lsn is specified\n> as the source slot in pg_copy_logical_replication_slot(), the function\n> causes the following assertion failure.\n> \n> TRAP: FailedAssertion(\"!logical_slot\", File: \"slotfuncs.c\", Line: 727)\n\nOops.\n\n> This assertion failure is caused by\n> \n> \t/* Copying non-reserved slot doesn't make sense */\n> \tif (XLogRecPtrIsInvalid(src_restart_lsn))\n> \t\tereport(ERROR,\n> \t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> \t\t\t\t errmsg(\"cannot copy a replication slot that doesn't reserve WAL\")));\n\nHeh, you pasted the code after your patch rather than the original.\n\nI think the errcode is a bit bogus considering the new case.\nIMO ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE is more appropriate.\n\nOne could argue that the error message could also be different for the\ncase of a logical slot (or even a physical slot that has the upcoming\n\"invalidated_at\" LSN set), maybe \"cannot copy a replication slot that\nhas been invalidated\" but maybe that's a pointless distinction.\nI don't object to the patch as presented.\n\n\n-- \nÁlvaro Herrera 
https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 23 Jun 2020 20:38:41 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in pg_copy_logical_replication_slot()" }, { "msg_contents": "On 2020/06/24 9:38, Alvaro Herrera wrote:\n> On 2020-Jun-23, Fujii Masao wrote:\n> \n>> If restart_lsn of logical replication slot gets behind more than\n>> max_slot_wal_keep_size from the current LSN, the logical replication\n>> slot would be invalidated and its restart_lsn is reset to an invalid LSN.\n>> If this logical replication slot with an invalid restart_lsn is specified\n>> as the source slot in pg_copy_logical_replication_slot(), the function\n>> causes the following assertion failure.\n>>\n>> TRAP: FailedAssertion(\"!logical_slot\", File: \"slotfuncs.c\", Line: 727)\n> \n> Oops.\n> \n>> This assertion failure is caused by\n>>\n>> \t/* Copying non-reserved slot doesn't make sense */\n>> \tif (XLogRecPtrIsInvalid(src_restart_lsn))\n>> \t\tereport(ERROR,\n>> \t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n>> \t\t\t\t errmsg(\"cannot copy a replication slot that doesn't reserve WAL\")));\n> \n> Heh, you pasted the code after your patch rather than the original.\n\noh.... sorry.\n\n\n> I think the errcode is a bit bogus considering the new case.\n> IMO ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE is more appropriate.\n\nAgreed. 
So I updated the patch so this errcode is used instead.\nPatch attached.\n\n\n> One could argue that the error message could also be different for the\n> case of a logical slot (or even a physical slot that has the upcoming\n> \"invalidated_at\" LSN set), maybe \"cannot copy a replication slot that\n> has been invalidated\" but maybe that's a pointless distinction.\n> I don't object to the patch as presented.\n\nI have no strong opinion about this, but for now I kept the message as it is.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Wed, 24 Jun 2020 18:16:04 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Assertion failure in pg_copy_logical_replication_slot()" }, { "msg_contents": "On 2020-Jun-24, Fujii Masao wrote:\n\n> > I think the errcode is a bit bogus considering the new case.\n> > IMO ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE is more appropriate.\n> \n> Agreed. So I updated the patch so this errcode is used instead.\n> Patch attached.\n\nLGTM.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 24 Jun 2020 10:58:47 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in pg_copy_logical_replication_slot()" }, { "msg_contents": "\n\nOn 2020/06/24 23:58, Alvaro Herrera wrote:\n> On 2020-Jun-24, Fujii Masao wrote:\n> \n>>> I think the errcode is a bit bogus considering the new case.\n>>> IMO ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE is more appropriate.\n>>\n>> Agreed. So I updated the patch so this errcode is used instead.\n>> Patch attached.\n> \n> LGTM.\n\nThanks! 
Pushed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 25 Jun 2020 11:16:04 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Assertion failure in pg_copy_logical_replication_slot()" } ]
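The shape of the committed fix, turning the no-longer-valid assumption into the ordinary error path, can be shown standalone. The typedef and macros below mirror the usual PostgreSQL definitions (XLogRecPtr is a 64-bit integer and InvalidXLogRecPtr is 0), while the check function is a simplified stand-in for the slotfuncs.c logic, not the actual code.

```c
#include <stdint.h>

typedef uint64_t XLogRecPtr;
#define InvalidXLogRecPtr       ((XLogRecPtr) 0)
#define XLogRecPtrIsInvalid(r)  ((r) == InvalidXLogRecPtr)

/* Simplified stand-in for the copy-slot precondition: rather than
 * asserting that a logical slot always has a valid restart_lsn (it may
 * not, once max_slot_wal_keep_size has invalidated the slot), reject
 * any source slot that no longer reserves WAL.  Returns 0 if the copy
 * may proceed, -1 if it must be refused with an error. */
static int
check_copy_source(XLogRecPtr src_restart_lsn)
{
    if (XLogRecPtrIsInvalid(src_restart_lsn))
        return -1;  /* "cannot copy a replication slot that doesn't reserve WAL" */
    return 0;
}
```

The point of the thread is exactly this distinction: an invalid restart_lsn is now a reachable user-facing state, so it must produce an error, never trip an assertion.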
[ { "msg_contents": "Hi All,\n\nI have been programming Perl for over several years and would like to help\nif needed. Will be looking through the issues for anything that jumps out\nat me, but if there is something that someone is aware of that needs\nattention please let me know.\n\nIf I am going about this the wrong way please let me know that too.\n\nBest regards,\n\nJim Woodworth", "msg_date": "Mon, 22 Jun 2020 12:09:35 -0400", "msg_from": "Jim Woodworth <jimw54321@gmail.com>", "msg_from_op": true, "msg_subject": "may I help with Perl?" }, { "msg_contents": "\n\n> On Jun 22, 2020, at 9:09 AM, Jim Woodworth <jimw54321@gmail.com> wrote:\n> \n> I have been programming Perl for over several years and would like to help if needed. Will be looking through the issues for anything that jumps out at me, but if there is something that someone is aware of that needs attention please let me know.\n\nThanks for volunteering!\n\n> If I am going about this the wrong way please let me know that too.\n\nOne of the easiest ways to contribute is to review patches already submitted in the commitfest application. See https://commitfest.postgresql.org/\n\nThere may be patches in the list that have some perl component that you could review. Even if the patch itself does not contain any perl code, you could always consider whether the patch would be improved with a TAP test, which is our perl regression test system. 
Testing of that kind might usefully expose flaws in the patch even if the TAP tests you write are themselves not accepted into the project.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 22 Jun 2020 09:25:32 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: may I help with Perl?" } ]
[ { "msg_contents": "In PostgreSQL there are a function table_to_xml to map the table content\nto xml value but there are no functionality to decompose xml back into\ntable which can be used in system that uses xml for transport only or there\nare a need to migrate to database system to use database functionality. I\npropose to have this by extending copy to handle xml format as well because\nfile parsing and tuple formation functions is in there and it also seems to\nme that implement it without using xml library is simpler\n\nComments?\n\nregards\n\nSurafel", "msg_date": "Mon, 22 Jun 2020 21:49:33 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Decomposing xml into table" }, { "msg_contents": "On Mon, 22 Jun 2020 at 20:49, Surafel Temesgen <surafel3000@gmail.com>\nwrote:\n\n> In PostgreSQL there are a function table_to_xml to map the table content\n> to xml value but there are no functionality to decompose xml back into\n> table which can be used in system that uses xml for transport only or there\n> are a need to migrate to database system to use database functionality. 
I\n> propose to have this by extending copy to handle xml format as well because\n> file parsing and tuple formation functions is in there and it also seems to\n> me that implement it without using xml library is simpler\n>\n\nDid you try the xmltable function?\n\nhttps://www.postgresql.org/docs/10/functions-xml.html\n\nRegards\n\nPavel\n\nComments?\n>\n> regards\n>\n> Surafel\n>", "msg_date": "Mon, 22 Jun 2020 20:59:00 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decomposing xml into table" }, { "msg_contents": "Surafel Temesgen <surafel3000@gmail.com> writes:\n> In PostgreSQL there are a function table_to_xml to map the table content\n> to xml value but there are no functionality to decompose xml back into\n> table\n\nHuh?  XMLTABLE does that, and it's even SQL-standard.\n\n> I propose to have this by extending copy to handle xml format as well because\n> file parsing and tuple formation functions is in there\n\nBig -1 on that.  COPY is not for general-purpose data transformation.\nThe more unrelated features we load onto it, the slower it will get,\nand probably also the more buggy and unmaintainable. 
There's also a\nreally fundamental mismatch, in that COPY is designed to do row-by-row\nprocessing with essentially no cross-row state. How would you square\nthat with the inherently nested nature of XML?\n\n> and it also seems to\n> me that implement it without using xml library is simpler\n\nI'm not in favor of implementing our own XML functionality, at least\nnot unless we go all the way and remove the dependency on libxml2\naltogether. That wouldn't be a terrible idea --- libxml2 has a long\nand sad track record of bugs, including security issues. But it'd be\nquite a big job, and it'd still have nothing to do with COPY.\n\nThe big-picture question here, though, is why expend effort on XML at all?\nIt seems like JSON is where it's at these days for that problem space.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Jun 2020 15:13:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Decomposing xml into table" }, { "msg_contents": "hey Pavel\n\nOn Mon, Jun 22, 2020 at 9:59 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n> Did you try the xmltable function?\n>\n>\nyes i know it but i am proposing changing given xml data in to relational\nform and insert it to desired table at once\n\nregards\nSurafel\n\nhey PavelOn Mon, Jun 22, 2020 at 9:59 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:Did you try the xmltable function?yes i know it  but i am proposing changing given xml data\nin to relational form and insert it to desired table at onceregards Surafel", "msg_date": "Tue, 23 Jun 2020 14:59:45 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Decomposing xml into table" }, { "msg_contents": "út 23. 6. 
2020 v 13:59 odesílatel Surafel Temesgen <surafel3000@gmail.com>\nnapsal:\n\n> hey Pavel\n>\n> On Mon, Jun 22, 2020 at 9:59 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>>\n>> Did you try the xmltable function?\n>>\n>>\n> yes i know it but i am proposing changing given xml data in to relational\n> form and insert it to desired table at once\n>\n\nIt is a question of how common it is. Because there is no common format\nfor xml, I agree with Tom, so it should not be part of core. A import from\nXML can be done with COPY PROGRAM\n\nor some special tools like https://github.com/okbob/pgimportdoc\n\nThere is too high variability so some special external tool will be better\n(more cleaner, more user friendly).\n\nRegards\n\nPavel\n\n\n\n> regards\n> Surafel\n>\n", "msg_date": "Tue, 23 Jun 2020 14:08:52 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decomposing xml into table" }, { "msg_contents": "Hey Tom\n\nOn Mon, Jun 22, 2020 at 10:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Big -1 on that. 
COPY is not for general-purpose data transformation.\n> The more unrelated features we load onto it, the slower it will get,\n> and probably also the more buggy and unmaintainable.\n\n\nwhat new format handling takes to add regards to performance is a check to\na few place and I don’t think that have noticeable performance impact and\nas far as I can see copy is extendable by design and I don’t think adding\nadditional format will be a huge undertaking\n\n\n> There's also a\n> really fundamental mismatch, in that COPY is designed to do row-by-row\n> processing with essentially no cross-row state. How would you square\n> that with the inherently nested nature of XML?\n>\n>\nIn xml case the difference is row delimiter . In xml mode user specifies\nrow delimiter tag name and starting from start tag of specified name up to\nits end tag treated as single row and every text content in between will be\nour columns value filed\n\n\n>\n> The big-picture question here, though, is why expend effort on XML at all?\n> It seems like JSON is where it's at these days for that problem space.\n>\n\nthere are a legacy systems and I think xml is still popular\n\nregards\nSurafel\n", "msg_date": "Tue, 23 Jun 2020 15:25:47 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Decomposing xml into table" }, { "msg_contents": "Surafel Temesgen schrieb am 23.06.2020 um 13:59:\n>> Did you try the xmltable function?\n>\n> yes i know it but i am proposing changing given xml data in to\n> relational form and insert it to desired table at once\nWell, xmltable() does change the XML data to a relational form and\nthe result can directly be used to insert into a table\n\n insert into target_table (...)\n select ...\n from xmltable(...);\n\n\n\n", "msg_date": "Tue, 23 Jun 2020 14:57:56 +0200", "msg_from": "Thomas Kellerer <shammat@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Decomposing xml into table" }, { "msg_contents": "On 6/22/20 8:49 PM, Surafel Temesgen wrote:\n> Comments?\n\nI feel it would make more sense to add features like this to an external \ntool, e.g pgloader. But even if we add it to the core PostgreSQL project \nI feel the XML parsing should be done in the client, not in the database \nserver. 
The COPY command is already very complex.\n\nAndreas\n\n\n", "msg_date": "Tue, 23 Jun 2020 15:06:12 +0200", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: Decomposing xml into table" }, { "msg_contents": "On 06/23/20 08:57, Thomas Kellerer wrote:\n> Surafel Temesgen schrieb am 23.06.2020 um 13:59:\n>>> Did you try the xmltable function?\n>>\n>> yes i know it but i am proposing changing given xml data in to\n>> relational form and insert it to desired table at once\n> Well, xmltable() does change the XML data to a relational form and\n> the result can directly be used to insert into a table\n> \n> insert into target_table (...)\n> select ...\n> from xmltable(...);\n\n\nThe use case that I imagine might be driving this would be where the XML\nsource is not deeply or elaborately nested, but is yuuge. In such a case,\nPostgreSQL's XML handling and xmltable will not be doing beautiful things:\n\n- the data coming from the frontend will have to be completely buffered\n in backend memory, and then parsed as XML once by the XML data type\n input routine, only for the purpose of confirming it's XML. The unparsed\n form is what becomes the Datum value, which means\n\n- xmltable gets to parse it a second time, again all in memory, and then\n generate the set-returning function result tuples from it.\n\n- as I last understood it [1], even the tuples generated as a result\n get all piled up in a tuplestore before the next part of the (what\n you would otherwise hope to call) \"pipeline\" can happen. 
(There may\n be work on better pipelining that part.)\n\nSo I would say for that use case, it will be hard to do better than an\nexternal process acting as a filter from XML in to COPY-formatted tuples\nout.\n\nThe XML-processing library I'm most familiar with, Saxon, can do some\nsophisticated analysis of an XML Query or XSLT transformation and\ndetermine when it can be done while consuming the XML in streaming\nmode rather than building a complete tree first. (The open-source\n\"community edition\" doesn't have that trick, only the paid editions,\nbut they're otherwise compatible, so you can prototype stuff using\nthe community edition, and then drop in a paid version and poof, it\ngoes faster.)\n\n\nOn 06/23/20 08:25, Surafel Temesgen wrote:\n> On Mon, Jun 22, 2020 at 10:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The big-picture question here, though, is why expend effort on XML\n>> at all?\n>> It seems like JSON is where it's at these days for that problem space.\n>\n> there are a legacy systems and I think xml is still popular\n\nI had an interesting conversation about that at PGCon a year ago, with\nsomeone who crystallized this idea better than I had at the time (but\nmay or may not want his name on it):\n\nWe tend to repeat a cycle of: a new technology is introduced, minimal\nat first, then develops a good ecosystem of sophisticated tooling, then\nlooks complicated and gets replaced with something minimal that needs to\nrepeat the same process.\n\nBy this point, we're on to 3.x versions of XML Query, XPath, and XSLT,\nvery mature languages that can express sophisticated transformations\nand optimize the daylights out of them.\n\nJSON now has JSONPATH, which is coming along, and relearning the lessons\nof XPath and XQuery, and by the time it has, there will be something else\nthat's appealing because it looks more minimal, and we'll be having the\n\"why expend effort on JSON at all?\" conversation.\n\n\nRegards,\n-Chap\n\n\n[1] 
https://www.postgresql.org/message-id/12389.1563746057%40sss.pgh.pa.us\n\n\n", "msg_date": "Tue, 23 Jun 2020 09:59:08 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Decomposing xml into table" } ]
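The external-filter route this thread converges on — a process that streams XML in and writes COPY-format tuples out, as Chapman Flack describes — can be sketched in a few lines. This is an illustrative sketch only, not code from the thread: the element names (`row`, `a`, `b`), the column list, and the sample document are invented for the example, and a production filter would also need to escape tabs, newlines, and backslashes per COPY's text format.

```python
# Hypothetical sketch of an XML -> COPY filter; it streams with iterparse so
# a huge document is never held in memory at once (cf. the tuplestore concern
# raised in the thread). Element and column names here are assumptions.
import io
import xml.etree.ElementTree as ET

def xml_to_copy(xml_stream, row_tag, columns):
    """Yield one tab-separated COPY-format line per <row_tag> element."""
    for _event, elem in ET.iterparse(xml_stream, events=("end",)):
        if elem.tag == row_tag:
            fields = []
            for col in columns:
                child = elem.find(col)
                # \N is COPY's default NULL marker for a missing column
                fields.append(child.text if child is not None and child.text else "\\N")
            yield "\t".join(fields)
            elem.clear()  # drop the processed subtree to keep memory flat

sample = b"<table><row><a>1</a><b>x</b></row><row><a>2</a><b>y</b></row></table>"
lines = list(xml_to_copy(io.BytesIO(sample), "row", ["a", "b"]))
# lines == ["1\tx", "2\ty"]
```

Piping such output into `COPY target FROM STDIN` (or running the filter via COPY FROM PROGRAM, as Pavel suggests) keeps the XML parsing outside the server.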
[ { "msg_contents": "Hello,\n\nI had some questions about the behavior of some accounting in parallel\nEXPLAIN plans. Take the following plan:\n\n```\nGather (cost=1000.43..750173.74 rows=2 width=235) (actual\ntime=1665.122..1665.122 rows=0 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=27683 read=239573\n I/O Timings: read=687.358\n -> Nested Loop (cost=0.43..749173.54 rows=1 width=235) (actual\ntime=1660.095..1660.095 rows=0 loops=3)\n Inner Unique: true\n Buffers: shared hit=77579 read=657847\n I/O Timings: read=2090.189\n Worker 0: actual time=1657.443..1657.443 rows=0 loops=1\n Buffers: shared hit=23645 read=208365\n I/O Timings: read=702.270\n Worker 1: actual time=1658.277..1658.277 rows=0 loops=1\n Buffers: shared hit=26251 read=209909\n I/O Timings: read=700.560\n -> Parallel Seq Scan on public.schema_indices\n(cost=0.00..748877.88 rows=35 width=235) (actual\ntime=136.744..1659.629 rows=32 loops=3)\n Filter: ((schema_indices.invalidated_at_snapshot_id IS\nNULL) AND (NOT schema_indices.is_valid))\n Rows Removed by Filter: 703421\n Buffers: shared hit=77193 read=657847\n I/O Timings: read=2090.189\n Worker 0: actual time=69.248..1656.950 rows=32 loops=1\n Buffers: shared hit=23516 read=208365\n I/O Timings: read=702.270\n Worker 1: actual time=260.188..1657.875 rows=27 loops=1\n Buffers: shared hit=26140 read=209909\n I/O Timings: read=700.560\n -> Index Scan using schema_tables_pkey on\npublic.schema_tables (cost=0.43..8.45 rows=1 width=8) (actual\ntime=0.011..0.011 rows=0 loops=95)\n Index Cond: (schema_tables.id = schema_indices.table_id)\n Filter: (schema_tables.database_id = 123)\n Rows Removed by Filter: 1\n Buffers: shared hit=386\n Worker 0: actual time=0.011..0.011 rows=0 loops=32\n Buffers: shared hit=129\n Worker 1: actual time=0.011..0.011 rows=0 loops=27\n Buffers: shared hit=111\nPlanning Time: 0.429 ms\nExecution Time: 1667.373 ms\n```\n\nThe Nested Loop here aggregates data for metrics like `buffers read`\nfrom its 
workers, and to calculate a metric like `buffers read` for\nthe parallel leader, we can subtract the values recorded in each\nindividual worker. This happens in the Seq Scan and Index Scan\nchildren, as well. However, the Gather node appears to only include\nvalues from its direct parallel leader child (excluding that child's\nworkers).\n\nThis leads to the odd situation that the Gather has lower values for\nsome of these metrics than its child (because the child node reporting\nincludes the worker metrics) even though the values are supposed to be\ncumulative. This is even more surprising for something like I/O\ntiming, where the Gather has a lower `read` value than one of the\nNested Loop workers, which doesn't make sense in terms of wall-clock\ntime.\n\nIs this behavior intentional? If so, is there an explanation of the\nreasoning or the trade-offs involved? Would it not make sense to\npropagate those cumulative parallel costs up the tree all the way to\nthe root, instead of only using the parallel leader metrics under\nGather?\n\nThanks,\nMaciek\n\n\n", "msg_date": "Mon, 22 Jun 2020 12:24:51 -0700", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": true, "msg_subject": "EXPLAIN: Non-parallel ancestor plan nodes exclude parallel worker\n instrumentation" }, { "msg_contents": "On Tue, Jun 23, 2020 at 12:55 AM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n>\n> Hello,\n>\n> I had some questions about the behavior of some accounting in parallel\n> EXPLAIN plans. 
Take the following plan:\n>\n> ```\n> Gather (cost=1000.43..750173.74 rows=2 width=235) (actual\n> time=1665.122..1665.122 rows=0 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> Buffers: shared hit=27683 read=239573\n> I/O Timings: read=687.358\n> -> Nested Loop (cost=0.43..749173.54 rows=1 width=235) (actual\n> time=1660.095..1660.095 rows=0 loops=3)\n> Inner Unique: true\n> Buffers: shared hit=77579 read=657847\n> I/O Timings: read=2090.189\n..\n> ```\n>\n> The Nested Loop here aggregates data for metrics like `buffers read`\n> from its workers, and to calculate a metric like `buffers read` for\n> the parallel leader, we can subtract the values recorded in each\n> individual worker. This happens in the Seq Scan and Index Scan\n> children, as well. However, the Gather node appears to only include\n> values from its direct parallel leader child (excluding that child's\n> workers).\n>\n> This leads to the odd situation that the Gather has lower values for\n> some of these metrics than its child (because the child node reporting\n> includes the worker metrics) even though the values are supposed to be\n> cumulative.\n>\n\nI don't think this is an odd situation because in this case, child\nnodes like \"Nested Loop\" and \"Parallel Seq Scan\" has a value of\n'loops' as 3. 
So, to get the correct stats at those nodes, you need\nto divide it by 3 whereas, at Gather node, the value of 'loops' is 1.\nIf you want to verify this thing then try with a plan where loops\nshould be 1 for child nodes as well, you should get the same value at\nboth Gather and Parallel Seq Scan nodes.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 23 Jun 2020 15:26:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN: Non-parallel ancestor plan nodes exclude parallel worker\n instrumentation" }, { "msg_contents": "On Tue, Jun 23, 2020 at 2:57 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I don't think this is an odd situation because in this case, child\n> nodes like \"Nested Loop\" and \"Parallel Seq Scan\" has a value of\n> 'loops' as 3. So, to get the correct stats at those nodes, you need\n> to divide it by 3 whereas, at Gather node, the value of 'loops' is 1.\n> If you want to verify this thing then try with a plan where loops\n> should be 1 for child nodes as well, you should get the same value at\n> both Gather and Parallel Seq Scan nodes.\n\nThanks for the response, but I still don't follow. I had assumed that\nloops=3 was just from loops=1 for the parallel leader plus loops=1 for\neach worker--is that not right? I don't see any other reason for\nlooping over the NL node itself in this plan. The Gather itself\ndoesn't do any real looping, right?\n\nBut even so, the documentation [1] states:\n\n>In some query plans, it is possible for a subplan node to be executed more than once. For example, the inner index scan will be executed once per outer row in the above nested-loop plan. In such cases, the loops value reports the total number of executions of the node, and the actual time and rows values shown are averages per-execution. This is done to make the numbers comparable with the way that the cost estimates are shown. 
Multiply by the loops value to get the total time actually spent in the node.\n\nSo we should be seeing an average, not a sum, right?\n\n[1]: https://www.postgresql.org/docs/current/using-explain.html#USING-EXPLAIN-ANALYZE\n\n\n", "msg_date": "Tue, 23 Jun 2020 14:47:57 -0700", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": true, "msg_subject": "Re: EXPLAIN: Non-parallel ancestor plan nodes exclude parallel worker\n instrumentation" }, { "msg_contents": "On Wed, Jun 24, 2020 at 3:18 AM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n>\n> On Tue, Jun 23, 2020 at 2:57 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I don't think this is an odd situation because in this case, child\n> > nodes like \"Nested Loop\" and \"Parallel Seq Scan\" has a value of\n> > 'loops' as 3. So, to get the correct stats at those nodes, you need\n> > to divide it by 3 whereas, at Gather node, the value of 'loops' is 1.\n> > If you want to verify this thing then try with a plan where loops\n> > should be 1 for child nodes as well, you should get the same value at\n> > both Gather and Parallel Seq Scan nodes.\n>\n> Thanks for the response, but I still don't follow. I had assumed that\n> loops=3 was just from loops=1 for the parallel leader plus loops=1 for\n> each worker--is that not right?\n>\n\nNo, I don't think so.\n\n> I don't see any other reason for\n> looping over the NL node itself in this plan. 
The Gather itself\n> doesn't do any real looping, right?\n\nIt is right that Gather doesn't do looping but Parallel Seq Scan node does so.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Jun 2020 08:25:15 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN: Non-parallel ancestor plan nodes exclude parallel worker\n instrumentation" }, { "msg_contents": "On Tue, Jun 23, 2020 at 7:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I don't see any other reason for\n> > looping over the NL node itself in this plan. The Gather itself\n> > doesn't do any real looping, right?\n>\n> It is right that Gather doesn't do looping but Parallel Seq Scan node does so.\n\nSorry, I still don't follow. How does a Parallel Seq Scan do looping?\nI looked at the parallel plan docs but I don't see looping mentioned\nanywhere[1]. Also, is looping not normally indicated on children,\nrather than on the node doing the looping? 
E.g., with a standard\nNested Loop, the outer child will have loops=1 and the inner child\nwill have loops equal to the row count produced by the outer child\n(and the Nested Loop itself will have loops=1 unless it also is being\nlooped over by a parent node), right?\n\nBut even aside from that, why do I need to divide by the number of\nloops here, when normally Postgres presents a per-loop average?\n\n[1]: https://www.postgresql.org/docs/current/parallel-plans.html\n\n\n", "msg_date": "Wed, 24 Jun 2020 00:11:45 -0700", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": true, "msg_subject": "Re: EXPLAIN: Non-parallel ancestor plan nodes exclude parallel worker\n instrumentation" }, { "msg_contents": "On Wed, Jun 24, 2020 at 12:41 PM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n>\n> On Tue, Jun 23, 2020 at 7:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > I don't see any other reason for\n> > > looping over the NL node itself in this plan. The Gather itself\n> > > doesn't do any real looping, right?\n> >\n> > It is right that Gather doesn't do looping but Parallel Seq Scan node does so.\n>\n> Sorry, I still don't follow. How does a Parallel Seq Scan do looping?\n\nSorry, I intend to say that Parallel Seq Scan is involved in looping.\nLet me try by example:\n\nGather (actual time=6.444..722.642 rows=10000 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Nested Loop (actual time=0.046..705.936 rows=5000 loops=2)\n -> Parallel Seq Scan on t1 (actual time=0.010..45.423\nrows=250000 loops=2)\n -> Index Scan using idx_t2 on t2 (actual time=0.002..0.002\nrows=0 loops=500000)\n Index Cond: (c1 = t1.c1)\n\nIn the above plan, each of the worker runs\nNestLoop\n -> Parallel Seq Scan on t1\n -> Index Scan using idx_t2 on t2\n\nSo, that leads to loops as 2 on \"Parallel Seq Scan\" and \"Nested Loop\"\nnodes. Does this make sense now?\n\n> I looked at the parallel plan docs but I don't see looping mentioned\n> anywhere[1]. 
Also, is looping not normally indicated on children,\n> rather than on the node doing the looping? E.g., with a standard\n> Nested Loop, the outer child will have loops=1 and the inner child\n> will have loops equal to the row count produced by the outer child\n> (and the Nested Loop itself will have loops=1 unless it also is being\n> looped over by a parent node), right?\n>\n\nYeah, I hope the above has clarified it.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Jun 2020 14:37:36 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN: Non-parallel ancestor plan nodes exclude parallel worker\n instrumentation" }, { "msg_contents": "On Tue, Jun 23, 2020 at 12:55 AM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n>\n> Hello,\n>\n> I had some questions about the behavior of some accounting in parallel\n> EXPLAIN plans. Take the following plan:\n>\n>\n..\n>\n> The Nested Loop here aggregates data for metrics like `buffers read`\n> from its workers, and to calculate a metric like `buffers read` for\n> the parallel leader, we can subtract the values recorded in each\n> individual worker. This happens in the Seq Scan and Index Scan\n> children, as well. However, the Gather node appears to only include\n> values from its direct parallel leader child (excluding that child's\n> workers).\n>\n\nI have tried to check a similar plan and for me, the values at Gather\nnode seems to be considering the values from all workers and leader\n(aka whatever is displayed at \"Nested Loop \" node), see below. I have\ntried the test on HEAD. Which version of PostgreSQL are you using?\nIf you are also using the latest version then it is possible that in\nsome cases it is not displaying correct data. If that turns out to be\nthe case, then feel to share the test case. 
Sorry, for the confusion\ncaused by my previous reply.\n\n Gather (actual time=2.083..550.093 rows=10000 loops=1)\n Output: t1.c1, t1.c2, t2.c1, t2.c2\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=1012621 read=84\n I/O Timings: read=0.819\n -> Nested Loop (actual time=0.084..541.882 rows=3333 loops=3)\n Output: t1.c1, t1.c2, t2.c1, t2.c2\n Buffers: shared hit=1012621 read=84\n I/O Timings: read=0.819\n Worker 0: actual time=0.069..541.249 rows=3155 loops=1\n Buffers: shared hit=326529 read=29\n I/O Timings: read=0.325\n Worker 1: actual time=0.063..541.376 rows=3330 loops=1\n Buffers: shared hit=352045 read=26\n I/O Timings: read=0.179\n -> Parallel Seq Scan on public.t1 (actual time=0.011..34.250\nrows=166667 loops=3)\n Output: t1.c1, t1.c2\n Buffers: shared hit=2703\n Worker 0: actual time=0.011..33.785 rows=161265 loops=1\n Buffers: shared hit=872\n Worker 1: actual time=0.009..34.582 rows=173900 loops=1\n Buffers: shared hit=940\n -> Index Scan using idx_t2 on public.t2 (actual\ntime=0.003..0.003 rows=0 loops=500000)\n Output: t2.c1, t2.c2\n Index Cond: (t2.c1 = t1.c1)\n Buffers: shared hit=1009918 read=84\n I/O Timings: read=0.819\n Worker 0: actual time=0.003..0.003 rows=0 loops=161265\n Buffers: shared hit=325657 read=29\n I/O Timings: read=0.325\n Worker 1: actual time=0.002..0.002 rows=0 loops=173900\n Buffers: shared hit=351105 read=26\n I/O Timings: read=0.179\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Jun 2020 15:14:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: EXPLAIN: Non-parallel ancestor plan nodes exclude parallel worker\n instrumentation" }, { "msg_contents": "On Wed, Jun 24, 2020 at 2:44 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> So, that leads to loops as 2 on \"Parallel Seq Scan\" and \"Nested Loop\" nodes. Does this make sense now?\n\nYes, I think we're on the same page. 
Thanks for the additional details.\n\nIt turns out that the plan I sent at the top of the thread is actually\nan older plan we had saved, all the way from April 2018. We're fairly\ncertain this was Postgres 10, but not sure what point release. I tried\nto reproduce this on 10, 11, 12, and 13 beta, but I am now seeing\nsimilar results to yours: Buffers and I/O Timings are rolled up into\nthe parallel leader, and that is propagated as expected to the Gather.\nSorry for the confusion.\n\nOn Wed, Jun 24, 2020 at 3:18 AM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\n>So we should be seeing an average, not a sum, right?\n\nAnd here I missed that the documentation specifies rows and actual\ntime as per-loop, but other metrics are not--they're just cumulative.\nSo actual time and rows are still per-\"loop\" values, but while rows\nvalues are additive (the Gather combines rows from the parallel leader\nand the workers), the actual time is not because the whole point is\nthat this work happens in parallel.\n\nI'll report back if I can reproduce the weird numbers we saw in that\noriginal plan or find out exactly what Postgres version it was from.\n\nThanks,\nMaciek\n\n\n", "msg_date": "Thu, 25 Jun 2020 22:42:00 -0700", "msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>", "msg_from_op": true, "msg_subject": "Re: EXPLAIN: Non-parallel ancestor plan nodes exclude parallel worker\n instrumentation" } ]
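The per-loop convention the thread settles on — `actual rows` is an average per loop, so a node's total contribution is rows times loops — can be checked against the figures from the two-worker plan Amit quoted (Gather over a Nested Loop with a Parallel Seq Scan outer and Index Scan inner). This is plain arithmetic, shown only to make the accounting concrete:

```python
# "actual rows" in EXPLAIN ANALYZE is a per-loop average, so the total
# number of rows a node produced is rows_per_loop * loops. The figures
# below are taken from the two-worker plan quoted in this thread.

def total_rows(rows_per_loop, loops):
    return rows_per_loop * loops

# Nested Loop: rows=5000 loops=2 -> same total as Gather's rows=10000 loops=1
assert total_rows(5000, 2) == total_rows(10000, 1) == 10000

# Parallel Seq Scan: rows=250000 loops=2 -> 500000 outer rows in total,
# exactly the loops=500000 reported on the inner Index Scan
assert total_rows(250000, 2) == 500000
```

Note the rounding caveat: in Amit's three-loop example (rows=3333, loops=3) the displayed per-loop average is truncated, so the multiplied total (9999) only approximates the Gather's 10000.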
[ { "msg_contents": "Hi hackers,\n\nI was talking about PostgreSQL and threading on IRC the other day - which I\nknow is a frowned upon topic - and just wanted to frame the same questions\nhere and hopefully get a discussion going.\n\nOn IRC RhodiumToad offered the following advice (after a standard there be\ndragons here disclaimer, as well as noting this may not be a complete list):\n\nThreading (may) be safe if:\n\n 1. all signals will be delivered on the main thread and nowhere else\n 2. no postgres function will ever be called from anything that's not the\n main thread\n 3. it's safe for postgres to call any system library function, even ones\n explicitly marked as not thread safe\n 4. it's safe for postgres to call sigprocmask()\n\nI can live with 1. and 2 - they are fairly easy as long as you know the\nrules.\n\n3. needs to be converted to a list of possible calls - which can be done\nand checked, I suppose against the POSIX standards?\n\n4. is not fine (I suppose this is a specific example of 3.), as I think\nPostgres would need to be using pthread_sigmask() instead - given looks\nlike a one line change could pthread_sigmask be used when available?\n\nAre there any other rules which need to be adhered to?\n\nAny thoughts, comments, dire warnings, hand waving? On IRC the general\nthought was that any changes could be seen as encouraging threading which\nis a bad thing - I would argue if you're writing BGWorkers which have SPI\naccess you've already got a pretty large area to screw things up in anyway\n(if you aren't following the standards / code comments).\n\nJames\n\n-- \nThe contents of this email are confidential and may be subject to legal or \nprofessional privilege and copyright. No representation is made that this \nemail is free of viruses or other defects. If you have received this \ncommunication in error, you may not copy or distribute any part of it or \notherwise disclose its contents to anyone. 
Please advise the sender of your \nincorrect receipt of this correspondence.\n", "msg_date": "Tue, 23 Jun 2020 10:17:25 +1000", "msg_from": "James Sewell <james.sewell@jirotech.com>", "msg_from_op": true, "msg_subject": "Threading in BGWorkers (!)" }, { "msg_contents": "James Sewell <james.sewell@jirotech.com> writes:\n> I was talking about PostgreSQL and threading on IRC the other day - which I\n> know is a frowned upon topic - and just wanted to frame the same questions\n> here and hopefully get a discussion going.\n\nI think the short answer about threading in bgworkers (or any other\nbackend process) is \"we don't support it; if you try it and it breaks,\nwhich it likely will, you get to keep both pieces\". I'm not sure that\nthere's any merit in making small dents in that policy. I suspect that\nat some point, somebody will try to move those goalposts a long way,\nbut it will be a large and controversial patch.\n\nWhy do you want threads in a bgworker anyway? You could spawn multiple\nbgworkers, or you could dispatch the threaded work to a non-Postgres-ish\nprocess as PL/Java does. 
The only advantage I can see of doing work in a\nprocess that's not at arm's-length is to have access to PG computational\nor IPC facilities, and none of that is likely to work safely in a threaded\ncontext.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Jun 2020 23:38:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": "> dispatch the threaded work to a non-Postgres-ish process\n\n\nI’m no expert here but all your solid points about threading with Postgres\nnotwithstanding....\n\n\nI think there’s some issues around interrupt handling and general syscalls\nthat doesn’t otherwise play nice with “non-Postgres-ish” *threads* when\nPostgres is still the main thread.\n\n\nThis is all purely hypothetical, but it seems that Postgres’ use of\nsigprocmask can cause problems with threads that are otherwise 100%\n“disconnected” from Postgres.\n\n\nHow can we start a dialog about this kind of situation? Nobody here is\ntrying to make Postgres thread-safe, maybe only thread-friendly.\n\n\nI think Mr. Sewell, has a better handle around these topics. But he ain’t\nthe only one interested.\n\n\neric\n\nOn Mon, Jun 22, 2020 at 9:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> James Sewell <james.sewell@jirotech.com> writes:\n> > I was talking about PostgreSQL and threading on IRC the other day -\n> which I\n> > know is a frowned upon topic - and just wanted to frame the same\n> questions\n> > here and hopefully get a discussion going.\n>\n> I think the short answer about threading in bgworkers (or any other\n> backend process) is \"we don't support it; if you try it and it breaks,\n> which it likely will, you get to keep both pieces\". I'm not sure that\n> there's any merit in making small dents in that policy. 
I suspect that\n> at some point, somebody will try to move those goalposts a long way,\n> but it will be a large and controversial patch.\n>\n> Why do you want threads in a bgworker anyway? You could spawn multiple\n> bgworkers, or you could dispatch the threaded work to a non-Postgres-ish\n> process as PL/Java does. The only advantage I can see of doing work in a\n> process that's not at arm's-length is to have access to PG computational\n> or IPC facilities, and none of that is likely to work safely in a threaded\n> context.\n>\n> regards, tom lane\n>\n>\n>", "msg_date": "Mon, 22 Jun 2020 22:46:11 -0600", "msg_from": "Eric Ridge <eebbrr@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": "On Tue, 23 Jun 2020 at 13:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> James Sewell <james.sewell@jirotech.com> writes:\n> > I was talking about PostgreSQL and threading on IRC the other day -\n> which I\n> > know is a frowned upon topic - and just wanted to frame the same\n> questions\n> > here and hopefully get a discussion going.\n>\n> I think the short answer about threading in bgworkers (or any other\n> backend process) is \"we don't support it; if you try it and it breaks,\n> which it likely will, you get to keep both pieces\".\n\n\nI'm hoping that from this a set of rules rather than a blanket ban can be\nagreed upon.\n\n\n> I'm not sure that\n> there's any merit in making small dents in that policy. I suspect that\n> at some point, somebody will try to move those goalposts a long way,\n> but it will be a large and controversial patch.\n>\n\nUnderstood, and I do agree with keeping the policy simple - but it looks\nlike (potentially) the only blocker is a one line change to swap\nout sigprocmask. From my perspective this is a very large win - I'll do\nsome testing.\n\nWhy do you want threads in a bgworker anyway? 
You could spawn multiple\n> bgworkers, or you could dispatch the threaded work to a non-Postgres-ish\n> process as PL/Java does. The only advantage I can see of doing work in a\n> process that's not at arm's-length is to have access to PG computational\n> or IPC facilities, and none of that is likely to work safely in a threaded\n> context.\n>\n\nI'm writing the workers in Rust - it would be nice to be able to safely\naccess Rust crates which make use of threads.\n\ncheers,\nJames\n", "msg_date": "Tue, 23 Jun 2020 16:01:51 +1000", "msg_from": "James Sewell <james.sewell@jirotech.com>", "msg_from_op": true, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": "\n\nOn 23.06.2020 06:38, Tom Lane wrote:\n> James Sewell <james.sewell@jirotech.com> writes:\n>> I was talking about PostgreSQL and threading on IRC the other day - which I\n>> know is a frowned upon topic - and just wanted to frame the same questions\n>> here and hopefully get a discussion going.\n> I think the short answer about threading in bgworkers (or any other\n> backend process) is \"we don't support it; if you try it and it breaks,\n> which it likely will, you get to keep both pieces\". I'm not sure that\n> there's any merit in making small dents in that policy. I suspect that\n> at some point, somebody will try to move those goalposts a long way,\n> but it will be a large and controversial patch.\n>\n> Why do you want threads in a bgworker anyway? You could spawn multiple\n> bgworkers, or you could dispatch the threaded work to a non-Postgres-ish\n> process as PL/Java does. 
The only advantage I can see of doing work in a\n> process that's not at arm's-length is to have access to PG computational\n> or IPC facilities, and none of that is likely to work safely in a threaded\n> context.\n\nJust an example of using threads in bgworker: right now I am working on \nFDW for RocksDB.\nRocksDB's LSM implementation shows quite promising performance advantages \ncompared with the classical Postgres B-Tree\n(almost no degradation of insert speed with increasing number of records).\n\nRocksDB is a multithreaded database. Maybe it is possible to port it to \na multiprocess model\nbut it will be a very non-trivial task. And even if such a fork of RocksDB \nwere created, somebody would have to permanently back-patch changes from \nthe main trunk.\n\nAn alternative solution is to launch a multithreaded RocksDB worker and let \nbackends redirect requests to this worker.\nClient-server architecture inside the server :)\n\nUsing multithreading in bgworker is possible if you do not use any \nPostgres runtime inside thread procedures or do it in exclusive critical \nsection.\nIt is not so convenient but possible. The most difficult thing from my \npoint of view is error reporting.\n\n\n\n", "msg_date": "Tue, 23 Jun 2020 10:07:04 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": "> Using multithreading in bgworker is possible if you do not use any\n> Postgres runtime inside thread procedures or do it in exclusive critical\n> section.\n> It is not so convenient but possible. 
The most difficult thing from my\n> point of view is error reporting.\n>\n\nHappy to be proved wrong, but I don't think this is correct.\n\nPostgreSQL can call sigprocmask() in your BGWorker whenever it wants, and\n\"The use of sigprocmask() is unspecified in a multithreaded process\" [1]\n\n[1] https://pubs.opengroup.org/onlinepubs/9699919799/\n", "msg_date": "Tue, 23 Jun 2020 17:15:38 +1000", "msg_from": "James Sewell <james.sewell@jirotech.com>", "msg_from_op": true, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": "On Tue, 23 Jun 2020 at 17:15, James Sewell <james.sewell@jirotech.com>\nwrote:\n\n> Using multithreading in bgworker is possible if you do not use any\n>> Postgres runtime inside thread procedures or do it in exclusive critical\n>> section.\n>> It is not so convenient but possible. The most difficult thing from my\n>> point of view is error reporting.\n>>\n>\n> Happy to be proved wrong, but I don't think this is correct.\n>\n> PostgreSQL can call sigprocmask() in your BGWorker whenever it wants, and\n> \"The use of sigprocmask() is unspecified in a multithreaded process\" [1]\n>\n> [1] https://pubs.opengroup.org/onlinepubs/9699919799/\n>\n\nSorry link should be [1]\nhttps://pubs.opengroup.org/onlinepubs/9699919799/functions/sigprocmask.html\n", "msg_date": "Tue, 23 Jun 2020 17:18:35 +1000", "msg_from": "James Sewell <james.sewell@jirotech.com>", "msg_from_op": true, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": "On 23.06.2020 10:15, James Sewell wrote:\n>\n> Using multithreading in bgworker is possible if you do not use any\n> Postgres runtime inside thread procedures or do it in exclusive\n> critical\n> section.\n> It is not so convenient but possible. The most difficult thing\n> from my\n> point of view is error reporting.\n>\n>\n> Happy to be proved wrong, but I don't think this is correct.\n> PostgreSQL can call sigprocmask() in your BGWorker whenever it wants, \n> and  \"The use of sigprocmask() is unspecified in a multithreaded \n> process\" [1]\n\nSorry, maybe I missed something.\nBut in my bgworker I am not using Postgres runtime at all (except \ninitial bgworker startup code).\nSo I am not using latches (which are based on signals), snapshots,...\nIn my case bgworker has no connection to Postgres at all.\nYes, it can still receive signals from Postmaster (SIGTERM, SIGHUP). 
\nBut their handlers are trivial and do not need to mask any signals.\n\nSo maybe in the general case the combination of signals and threads may cause \nsome problems,\nbut it doesn't mean that you can't create a multithreaded bgworker.\n", "msg_date": "Tue, 23 Jun 2020 10:26:16 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": "On Tue, 23 Jun 2020 at 17:26, Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\nwrote:\n\n> On 23.06.2020 10:15, James Sewell wrote:\n>\n> Using multithreading in bgworker is possible if you do not use any\n>> Postgres runtime inside thread procedures or do it in exclusive critical\n>> section.\n>> It is not so convenient but possible. 
The most difficult thing from my\n>> point of view is error reporting.\n>>\n>\n> Happy to be proved wrong, but I don't think this is correct.\n>\n> PostgreSQL can call sigprocmask() in your BGWorker whenever it wants, and\n> \"The use of sigprocmask() is unspecified in a multithreaded process\" [1]\n>\n>\n> Sorry, may be I missed something.\n> But in my bgworker I am not using Postgres runtime at all (except initial\n> bgworker startup code).\n> So I am not using latches (which are based on signals), snapshots,...\n> In my case bgworker has no connection to Postgres at all.\n> Yes, it can still receives signals from Postmaster (SIGTERM, SIGHUP). But\n> their handler are trivial and do not need to mask any signals.\n>\n> So may be in general case combination of signals and threads may cause\n> some problems,\n> but it doesn't mean that you can't create multithreaded bgworker.\n>\n\nAh yes - sorry *I* missed something.\n\nA multi-threaded BGWorker which accesses shared memory and the database via\nSPI.\n", "msg_date": "Tue, 23 Jun 2020 17:34:21 +1000", "msg_from": "James Sewell <james.sewell@jirotech.com>", "msg_from_op": true, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": "On 06/22/20 23:38, Tom Lane wrote:\n> bgworkers, or you could dispatch the threaded work to a non-Postgres-ish\n> process as PL/Java does. 
The only advantage I can see of doing work in a\n> process that's not at arm's-length is to have access to PG computational\n> or IPC facilities, and none of that is likely to work safely in a threaded\n> context.\n\nYou might be thinking of Dave Cramer's PL/JVM, which runs a JVM in another\nprocess and does IPC to it.\n\nPL/Java, by contrast, runs the JVM in-process and keeps both pieces.\nIt only lets one thread downcall into PostgreSQL.\n\n\nOn 06/23/20 00:46, Eric Ridge wrote:\n> How can we start a dialog about this kind of situation? Nobody here is\n> trying to make Postgres thread-safe, maybe only thread-friendly.\n\nThere are just a couple of things I've been wanting to suggest, based on\nPL/Java experience.\n\n1) It would be nice to be able to ereport from an arbitrary thread. There\n is already support in core to forward messages from parallel workers:\n the worker signals the lead process after adding a message to a shm_mq\n referenced from its ParallelWorkerInfo struct. The signal handler\n asynchronously sets ParallelMessagePending, which ProcessInterrupts\n will check at some convenient point and ereport the message.\n\n It seems like it would be no sweat for another thread in the same\n process to add something to an mq (could be the same structure as\n shm_mq but would not need to really be in shared memory) and do a\n volatile write of ParallelMessagePending. The rest is already there.\n Only missing ingredient would be a way for an extension to allocate\n something like a ParallelWorkerInfo struct good for the life of the\n backend (the current parallel worker infrastructure makes them all\n go away at the completion of a parallel query).\n\n2) It would be nice to be able to request service. 
If J Random thread\n in PL/Java generates a bit of work requiring some PostgreSQL API,\n at present that bit of work has to queue up until the next time a\n call into PL/Java is occasioned by a query, which might be never.\n\n It would be nice to be able to also asynchronously set some flag\n like ExtensionServiceRequested, which could be checked as part of\n CHECK_FOR_INTERRUPTS or even at more limited times, such as idle.\n An extension could populate an ExtensionServiceInfo struct with\n a service entry point and a flag indicating that the extension has work\n pending.\n\n\nThose are the two thread-friendlier ideas I have been thinking of.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 23 Jun 2020 09:19:36 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": "Hi,\n\nOn 2020-06-23 09:19:36 -0400, Chapman Flack wrote:\n> 1) It would be nice to be able to ereport from an arbitrary thread. There\n> is already support in core to forward messages from parallel workers:\n> the worker signals the lead process after adding a message to a shm_mq\n> referenced from its ParallelWorkerInfo struct. The signal handler\n> asynchronously sets ParallelMessagePending, which ProcessInterrupts\n> will check at some convenient point and ereport the message.\n> \n> It seems like it would be no sweat for another thread in the same\n> process to add something to an mq (could be the same structure as\n> shm_mq but would not need to really be in shared memory) and do a\n> volatile write of ParallelMessagePending. The rest is already there.\n> Only missing ingredient would be a way for an extension to allocate\n> something like a ParallelWorkerInfo struct good for the life of the\n> backend (the current parallel worker infrastructure makes them all\n> go away at the completion of a parallel query).\n\nI think that's way harder than what you make it sound here. 
The locking\nfor shm_mq doesn't really work inside a process. In contrast to the\nsingle threaded case something like a volatile write to\nParallelMessagePending doesn't guarantee much, because there's no\nguaranteed memory ordering between threads. And more.\n\n\nI'm very doubtful this is a good direction to go in. Kinda maybe\nsomewhat partially converting tiny parts of the backend code into\nthreadsafe code will leave us with some baroque code.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 23 Jun 2020 18:44:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": "On 06/23/20 21:44, Andres Freund wrote:\n\n> I think that's way harder than what you make it sound here. The locking\n> for shm_mq doesn't really work inside a process. In contrast to the\n> single threaded case something like a volatile write to\n> ParallelMessagePending doesn't guarantee much, because there's no\n> guaranteed memory ordering between threads. And more.\n\nIt occurred to me after I sent the message this morning that my suggestion\n(2) could subsume (1). And requires nothing more than a single volatile\nwrite of a boolean, and getting called back at a convenient time on the\nsingle main thread.\n\nSo perhaps I shouldn't have suggested (1) at all - just muddies the waters.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 23 Jun 2020 21:50:26 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": "Hi,\n\nOn 2020-06-23 21:50:26 -0400, Chapman Flack wrote:\n> On 06/23/20 21:44, Andres Freund wrote:\n> \n> > I think that's way harder than what you make it sound here. The locking\n> > for shm_mq doesn't really work inside a process. 
In contrast to the\n> > single threaded case something like a volatile write to\n> > ParallelMessagePending doesn't guarantee much, because there's no\n> > guaranteed memory ordering between threads. And more.\n> \n> It occurred to me after I sent the message this morning that my suggestion\n> (2) could subsume (1). And requires nothing more than a single volatile\n> write of a boolean, and getting called back at a convenient time on the\n> single main thread.\n\nA single volatile write wouldn't guarantee you much in the presence of\nmultiple threads. You could very well end up with a concurrent\nCHECK_FOR_INTERRUPTS() in the main thread unsetting InterruptPending,\nbut not yet seeing / processing ParallelMessagePending. Nor would it\nwake up the main process if it's currently waiting on a latch.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 23 Jun 2020 19:06:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-06-23 09:19:36 -0400, Chapman Flack wrote:\n>> 1) It would be nice to be able to ereport from an arbitrary thread.\n\n> I think that's way harder than what you make it sound here.\n\nIndeed. Just for starters:\n\n1. elog.c is in itself not-thread-safe, because it uses a static data\nstructure to track the message being built.\n\n2. It uses palloc, another large pile of not-thread-safe infrastructure.\n\n3. What exactly would the semantics of elog(ERROR) be? You can't make it\nsomething other than \"abort the transaction\" without mind-boggling levels\nof breakage. But how are you going to enforce a transaction abort across\nmultiple threads? 
What if one of the other threads reports an independent\nerror concurrently, or worse tries to COMMIT concurrently?\n\nSo that's already two fundamental, and non-trivial, subsystems that have\nto be made fully thread-safe before you can get anything off the ground;\nplus basic architectural issues to be settled. I imagine that somebody\nwill take a run at this at some point, but the idea that it's an easy\nproblem to bite off seems nonsensical.\n\nI'm not sure whether the other idea\n\n>> It would be nice to be able to also asynchronously set some flag\n>> like ExtensionServiceRequested, which could be checked as part of\n>> CHECK_FOR_INTERRUPTS or even at more limited times, such as idle.\n\nis much easier. In the barest terms, we already have things like that\n(such as NOTIFY interrupts), so it doesn't sound hard at first. The\nproblem is to figure out whether action X that you wish to do is safe\nto do at CHECK_FOR_INTERRUPTS call site Y. The answer is certainly not\nalways \"yes\", but how would we build an infrastructure for deciding?\n(NOTIFY largely punts on this, by decreeing that it won't do anything\ntill we reach an idle state. That's surely not adequate for a lot\nof likely actions X.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Jun 2020 22:13:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": "On 06/23/20 22:13, Tom Lane wrote:\n> 1. elog.c is in itself not-thread-safe, because it uses a static data\n> structure to track the message being built.\n> \n> 2. It uses palloc, another large pile of not-thread-safe infrastructure.\n...\n\nI'm sure now that I shouldn't have mentioned (1) - muddied waters. 
The idea\nin my head had been to make what the PG code sees as close to the parallel-\nmessage case as possible: \"here is an error structure that your elog code\ndid not build, in a region of memory your palloc code did not manage.\" But\nleave that aside, because a way to request a service callback would clearly\nallow the regular elog and the regular palloc to do their regular thing\non the regular thread, and be the more general and desirable idea anyway.\n\n> I'm not sure whether the other idea\n> \n>>> It would be nice to be able to also asynchronously set some flag\n>>> like ExtensionServiceRequested, which could be checked as part of\n>>> CHECK_FOR_INTERRUPTS or even at more limited times, such as idle.\n> \n> is much easier. In the barest terms, we already have things like that\n> (such as NOTIFY interrupts), so it doesn't sound hard at first. The\n> problem is to figure out whether action X that you wish to do is safe\n> to do at CHECK_FOR_INTERRUPTS call site Y. The answer is certainly not\n> always \"yes\", but how would we build an infrastructure for deciding?\n> (NOTIFY largely punts on this, by decreeing that it won't do anything\n> till we reach an idle state. That's surely not adequate for a lot\n> of likely actions X.)\n\nI think it could be adequate for a lot of them. (I even said \"more\nlimited times, such as idle\" up there.) In PL/Java's case,\nthere clearly aren't people running code now that functionally depends\non this ability, because it wouldn't work. Even if the JVM uses multiple\nthreads to accomplish something, if it is something the Java function\nresult depends on, it obviously has to happen before the function returns,\nwhile PL/Java has the main thread and can just serialize the work onto it.\n\nThe likeliest cases where something might want to happen after the\nfunction has returned are resource releases, which can sometimes\nbe discovered by the garbage collector a little after the fact, and if\nthe Java resource that's being collected is the dual of some palloc'd\nor reference-counted PostgreSQL object, it would be nice to not have to\nenqueue that cleanup for the next time some query calls into PL/Java.\nEven an \"only in an idle state\" rule would be an improvement over\n\"who knows when and maybe never\".\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 23 Jun 2020 22:57:06 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 06/23/20 22:13, Tom Lane wrote:\n>> I'm not sure whether the other idea\n\n>>> It would be nice to be able to also asynchronously set some flag\n>>> like ExtensionServiceRequested, which could be checked as part of\n>>> CHECK_FOR_INTERRUPTS or even at more limited times, such as idle.\n\n>> is much easier. In the barest terms, we already have things like that\n>> (such as NOTIFY interrupts), so it doesn't sound hard at first. The\n>> problem is to figure out whether action X that you wish to do is safe\n>> to do at CHECK_FOR_INTERRUPTS call site Y. The answer is certainly not\n>> always \"yes\", but how would we build an infrastructure for deciding?\n\n> I think it could be adequate for a lot of them.\n\nI dunno. It's not even adequate for the use-case of reporting an\nerror, because waiting till after the current transaction commits\nis surely not what should happen in that case. It happens to be\nokay for NOTIFY, because that's reporting an outside event that\ndid occur regardless of the local transaction's success ... but\nreally, how many use-cases does that argument apply to?\n\nI'm not trying to be completely negative here, but I think these\nissues are a lot harder than they might seem at first glance.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Jun 2020 23:08:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": "On 06/23/20 23:08, Tom Lane wrote:\n\n> I dunno. It's not even adequate for the use-case of reporting an\n> error, because waiting till after the current transaction commits\n> is surely not what should happen in that case.\n\nIn the case of the kind of exuberantly-threaded language runtime of\nwhich Java's an example, most of the threads running at any given time\nare doing somewhat obscure things that the language runtime knows about\nbut might not be directly relevant to whether your current transaction\ncommits. (The garbage collector thread was my earlier example because it\nroutinely discovers reclaimable things, which can have implications for\nresources in PG but usually not for whether a commit should succeed.)\n\nIf you're going to write a function and explicitly use threads in your\ncomputation, those are threads you're going to manage, catch exceptions\nin, and forward those back to the main thread to be ereported at the\nappropriate time.\n\nIn other cases where some thread you're but dimly aware of has encountered\nsome problem, generally what happens now is a default message and stacktrace\nget directly written to the backend's stderr and you don't otherwise\nfind out anything happened.
If something doesn't work later\nbecause that thread got wedged, then you piece together the story.\nIf the logging_collector is running then at least the stuff written to\nstderr ends up in the log anyway, though without any log prefix info added.\nIf the collector isn't running, then the messages went somewhere else,\nmaybe the systemd journal, maybe the floor.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 23 Jun 2020 23:30:40 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": "Hackers,\n\nIn the hope of this not being derailed by larger/more unpopular pieces of\nwork, I'm attaching a tiny patch which I don't believe will have any\nnegative impact - but will remove one blocker for $subject (sigprocmask\nusage is \"unspecified\" in multithreaded code [1]).\n\nThe patch replaces *sigprocmask *with *pthread_sigmask*. They have\nidentical APIs (\"[pthread_sigmask] shall be equivalent to sigprocmask(),\nwithout the restriction that the call be made in a single-threaded\nprocess\"[1])\n\nThe rationale here is that as far as I can tell this is the **only**\nblocker to using multithreaded code in a BGWorker which can't be avoided by\nadhering to strict code rules (eg: no PG calls from non-main threads, no\ninteraction with signals from non-main threads).\n\nBefore this went in the rules would need to be agreed upon and documented -\nbut hopefully it's at least a way forward / a way to progress this\ndiscussion.\n\nCheers,\nJames\n\n[1]\nhttps://pubs.opengroup.org/onlinepubs/9699919799/functions/sigprocmask.html\n\n-- \nThe contents of this email are confidential and may be subject to legal or \nprofessional privilege and copyright. No representation is made that this \nemail is free of viruses or other defects. If you have received this \ncommunication in error, you may not copy or distribute any part of it or \notherwise disclose its contents to anyone. 
Please advise the sender of your \nincorrect receipt of this correspondence.", "msg_date": "Thu, 2 Jul 2020 16:39:22 +1000", "msg_from": "James Sewell <james.sewell@jirotech.com>", "msg_from_op": true, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": "On Thu, Jul 2, 2020 at 6:39 PM James Sewell <james.sewell@jirotech.com> wrote:\n> The patch replaces sigprocmask with pthread_sigmask. They have identical APIs (\"[pthread_sigmask] shall be equivalent to sigprocmask(), without the restriction that the call be made in a single-threaded process\"[1])\n\n-#define PG_SETMASK(mask) sigprocmask(SIG_SETMASK, mask, NULL)\n+#define PG_SETMASK(mask) pthread_sigmask(SIG_SETMASK, mask, NULL)\n\nSo you're assuming that <signal.h> declares pthread_sigmask(). I was\ntrying to understand what POSIX's \"The functionality described is\noptional\" means; could there be <signal.h> headers without the\ndeclaration? I mean, I know the practical answer: we support all the\nremaining Unixes, you can count them on two hands, and they all have\npthreads, so it doesn't matter, and like Dr Stonebraker said, the plan\nis \"converting POSTGRES to use lightweight processes available in the\noperating systems we are using. These include PRESTO for the Sequent\nSymmetry and threads in Version 4 of Sun/OS.\" so we'll actually\n*require* that stuff eventually anyway.\n\nOne practical problem with this change is that some systems have a\nstub definition of pthread_sigmask() that does nothing, when you don't\nlink against the threading library. Realistically, most *useful*\nbuilds of PostgreSQL bring it in indirectly (via SSL, LDAP, blah\nblah), but it so happens that a bare bones build and make check on\nthis here FreeBSD box hangs in CHECK DATABASE waiting for the\ncheckpointer to signal it. 
I can fix that by putting -lthr into\nLDFLAGS, so we'd probably have to figure out how to do that on our\nsupported systems.\n\n> The rationale here is that as far as I can tell this is the *only* blocker to using multithreaded code in a BGWorker which can't be avoided by adhering to strict code rules (eg: no PG calls from non-main threads, no interaction with signals from non-main threads).\n\nI guess you'd have to mask at least all the signals we care about\nbefore every call to pthread_create() and trust that the threads never\nunmask them. I guess you could interpose a checker function to abort\nif something tries to break the programming rule.\n\n\n", "msg_date": "Wed, 29 Jul 2020 13:41:02 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Jul 2, 2020 at 6:39 PM James Sewell <james.sewell@jirotech.com> wrote:\n>> The patch replaces sigprocmask with pthread_sigmask. They have identical APIs (\"[pthread_sigmask] shall be equivalent to sigprocmask(), without the restriction that the call be made in a single-threaded process\"[1])\n\n> -#define PG_SETMASK(mask) sigprocmask(SIG_SETMASK, mask, NULL)\n> +#define PG_SETMASK(mask) pthread_sigmask(SIG_SETMASK, mask, NULL)\n\n> So you're assuming that <signal.h> declares pthread_sigmask().\n\nIf we were going to accept this patch, I'd say it should be conditional\non a configure test for pthread_sigmask being present. We could allow\nthat to require an additional library, or not.\n\n>> The rationale here is that as far as I can tell this is the *only* blocker to using multithreaded code in a BGWorker which can't be avoided by adhering to strict code rules (eg: no PG calls from non-main threads, no interaction with signals from non-main threads).\n\nTBH, though, I do not buy this argument for a millisecond. 
I don't\nthink that anything is going to come out of multithreading a bgworker\nbut blood and tears. Perhaps someday we'll make a major push to\nmake the backend code (somewhat(?)) thread safe ... but I'm not on\nboard with making one-line-at-a-time changes in hopes of getting\npartway there. We need some kind of concrete plan for what is a\nusable amount of functionality and what has to be done to get it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 28 Jul 2020 21:52:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": "> We need some kind of concrete plan for what is a\n> usable amount of functionality and what has to be done to get it.\n>\n\nThis is exactly the type of discussion I'm after.\n\nI'll start.\n\nA usable amount of functionality would be the ability to start threads from:\n\n - a background worker\n\nThese cases should be bound by *at least* the following rules:\n\n - no signal handling from threads\n - no calls into PostgreSQL functions from threads\n\n\nThe patch I supplied is one of the requirements to get there - I would love\nhelp to discover the others.\n", "msg_date": "Wed, 29 Jul 2020 13:00:54 +1000", "msg_from": "James Sewell <james.sewell@jirotech.com>", "msg_from_op": true, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": "Hi,\n\nOn 2020-07-29 13:41:02 +1200, Thomas Munro wrote:\n> One practical problem with this change is that some systems have a\n> stub definition of pthread_sigmask() that does nothing, when you don't\n> link against the threading library. Realistically, most *useful*\n> builds of PostgreSQL bring it in indirectly (via SSL, LDAP, blah\n> blah), but it so happens that a bare bones build and make check on\n> this here FreeBSD box hangs in CHECK DATABASE waiting for the\n> checkpointer to signal it.
I can fix that by putting -lthr into\n> LDFLAGS, so we'd probably have to figure out how to do that on our\n> supported systems.\n\nCouldn't this be driven by --disable-thread-safety?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 30 Jul 2020 11:44:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": "Hi,\n\nOn 2020-07-28 21:52:20 -0400, Tom Lane wrote:\n> >> The rationale here is that as far as I can tell this is the *only* blocker to using multithreaded code in a BGWorker which can't be avoided by adhering to strict code rules (eg: no PG calls from non-main threads, no interaction with signals from non-main threads).\n> \n> TBH, though, I do not buy this argument for a millisecond. I don't\n> think that anything is going to come out of multithreading a bgworker\n> but blood and tears. Perhaps someday we'll make a major push to\n> make the backend code (somewhat(?)) thread safe ... but I'm not on\n> board with making one-line-at-a-time changes in hopes of getting\n> partway there. We need some kind of concrete plan for what is a\n> usable amount of functionality and what has to be done to get it.\n\nWhy not? Our answer to threading inside functions has been, for quite a\nwhile, that it's kinda ok if the threads never call into postgres and\ncan never escape the lifetime of a function. But that's not actually\nreally safe due to the signal handler issue. Whether it's a normal\nbackend or a bgworker doesn't really play a role here, no?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 30 Jul 2020 11:46:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-07-28 21:52:20 -0400, Tom Lane wrote:\n>> TBH, though, I do not buy this argument for a millisecond. 
I don't\n>> think that anything is going to come out of multithreading a bgworker\n>> but blood and tears. Perhaps someday we'll make a major push to\n>> make the backend code (somewhat(?)) thread safe ... but I'm not on\n>> board with making one-line-at-a-time changes in hopes of getting\n>> partway there. We need some kind of concrete plan for what is a\n>> usable amount of functionality and what has to be done to get it.\n\n> Why not? Our answer to threading inside functions has been, for quite a\n> while, that it's kinda ok if the threads never call into postgres and\n> can never escape the lifetime of a function. But that's not actually\n> really safe due to the signal handler issue.\n\nIn other words, it's *not* safe and never has been. I see no good reason\nto believe that the signal handler issue is the only one. Even if it is,\nnot being able to call any postgres infrastructure is a pretty huge\nhandicap.\n\nSo I stand by the position that we need an actual plan here, not just\nchipping away at one-liner things that might or might not improve\nmatters noticeably.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 30 Jul 2020 14:54:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": "On Thu, Jul 30, 2020 at 2:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Why not? Our answer to threading inside functions has been, for quite a\n> > while, that it's kinda ok if the threads never call into postgres and\n> > can never escape the lifetime of a function. But that's not actually\n> > really safe due to the signal handler issue.\n>\n> In other words, it's *not* safe and never has been. I see no good reason\n> to believe that the signal handler issue is the only one. Even if it is,\n> not being able to call any postgres infrastructure is a pretty huge\n> handicap.\n\nI find this line of argument really unfair. 
It's true that there might\nbe issues other than the signal handler one, but so what? That is not\na principled argument against fixing the signal handler part of the\nproblem, which is the only *known* problem with the use case Andres\ndescribes. It is also true that it would be more useful to enable\nadditional use cases rather than just this one, but that is not a\nprincipled argument against enabling this one.\n\nMy only present concern about the proposal actually in front of us --\nthat is to say, use pthread_sigmask() rather than sigprocmask() -- is\nThomas's observation that on his system doing so breaks the world.\nThat seems to be quite a serious problem. If we are deciding whether\nto use one function or another some purpose and they are equally good\nfor the core code but one is better for people who want to play around\nwith extensions, we may as well use the one that's better for that\npurpose. We need not give such experimentation our unqualified\nendorsement; we need only decide against obstructing it unnecessarily.\nBut when such a substitution risks breaking things that work today,\nthe calculus gets a lot more complicated. Unless we can find a way to\navoid that risk, I don't think this is a good trade-off.\n\nBut more broadly I think it is well past time that we look into making\nthe backend more thread-friendly. The fact that \"it's *not* safe and\nnever has been\" has not prevented people from doing it. 
We don't end\nup with people going \"oh, well sigprocmask uh oh so I better give up\nnow.\" What we end up with is people just going right ahead and doing\nit, probably without even thinking about sigprocmask, and ending up\nwith low-probability failure conditions, which seems likely to hurt\nPostgreSQL's reputation for reliability with no compensating benefit.\nOr alternatively they hack core, which sucks, or they switch to some\nnon-PG database, which also sucks.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 30 Jul 2020 15:51:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Threading in BGWorkers (!)" }, { "msg_contents": ">\n> I see no good reason to believe that the signal handler issue is the only\n> one. Even if it is,\n> not being able to call any postgres infrastructure is a pretty huge\n> handicap.\n>\n\n(changed emails to get rid of the nasty employer notice...)\n\nIt's at least a workable handicap that I'm happy to deal with.\n\nI can say with 100% confidence that people coming from non C languages will\nbe using threading in Postgres backends as interop matures (and it's\nhappening fast now). A lot of the time they won't even know they are using\nthreads as it will be done by libraries they make use of transparently.\n\nLet's help them to avoid unsafe code now, not wait until they show up on\nthis list with a critical failure and tap at the big sign that says \"NO\nTHREADING\".\n\n- james\n", "msg_date": "Fri, 31 Jul 2020 15:29:48 +1000", "msg_from": "James Sewell <james.sewell@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Threading in BGWorkers (!)" } ]
[ { "msg_contents": "Hi,\n\nIt's a bit odd that syncscan.c is used by both heapam.c and tableam.c,\nand provides a generic block-synchronization mechanism that other\ntable AMs might want to use too, but it lives under\nsrc/backend/access/heap. It doesn't actually do anything heap\nspecific (beyond being block-oriented), and it's weird that tableam.c\nhas to include heapam.h.\n\nPerhaps we should move the .c file under src/backend/access/table, as attached.\n\nI suppose it's remotely possible that someone might invent\nphysical-order index scans, and once you have those you might sync\nscans of those too, and then even table would be too specific, but\nthat may be a bit far fetched.", "msg_date": "Tue, 23 Jun 2020 13:30:39 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Move syncscan.c?" }, { "msg_contents": "Hi,\n\nOn 2020-06-23 13:30:39 +1200, Thomas Munro wrote:\n> It's a bit odd that syncscan.c is used by both heapam.c and tableam.c,\n> and provides a generic block-synchronization mechanism that other\n> table AMs might want to use too, but it lives under\n> src/backend/access/heap. It doesn't actually do anything heap\n> specific (beyond being block-oriented), and it's weird that tableam.c\n> has to include heapam.h.\n> \n> Perhaps we should move the .c file under src/backend/access/table, as attached.\n\nSounds reasonable. I suspect there's a few more files (and definitely\nfunctions) that could be de-heapified.\n\n\n> I suppose it's remotely possible that someone might invent\n> physical-order index scans, and once you have those you might sync\n> scans of those too, and then even table would be too specific, but\n> that may be a bit far fetched.\n\nHm. That'd be an argument for moving it to access/common. 
I don't really\nsee a reason not to go for that?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 23 Jun 2020 11:28:52 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Move syncscan.c?" }, { "msg_contents": "On Wed, Jun 24, 2020 at 6:28 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-06-23 13:30:39 +1200, Thomas Munro wrote:\n> > I suppose it's remotely possible that someone might invent\n> > physical-order index scans, and once you have those you might sync\n> > scans of those too, and then even table would be too specific, but\n> > that may be a bit far fetched.\n>\n> Hm. That'd be an argument for moving it to access/common. I don't really\n> see a reason not to go for that?\n\nOk, done that way. Thanks.\n\n\n", "msg_date": "Wed, 29 Jul 2020 17:04:15 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Move syncscan.c?" } ]
[ { "msg_contents": "A few months ago while writing the initial version of a patch to extract\nthe columns that need to be scanned at plan time for use by table AMs,\n[1] Ashwin and I noticed some peculiar aspects of the reltarget exprs\nfor partition tables for DELETE FROM ... USING... WHERE... RETURNING ...\nqueries (mentioned in this [2] specific mail in the thread)\n\nRecently, I was looking into this again and decided to ask in a separate\nthread if the current implementation is correct.\n\nGiven a partitioned table t(a,b,c) (with partitions tp1(a,b,c) and\ntp2(a,b,c) and a non-partitioned table foo(a,b)\n\nand the following query\n\nDELETE FROM t USING foo where foo.a = t.a RETURNING *;\n\nThe processed_tlist of the child partitions (which are the targets of\nthe DELETE) include *all* of the columns from foo and t.\n\nThe columns from foo are added to the child partition's querytree's\nprocessed_tlist in preprocess_targetlist() from the child partitions's\nquerytree's returningList so that those vars are \"available for the\nRETURNING calculation\", however, neither the qual evaluation nor the\nDELETE execution require the other columns in t to be added to the\ntargetlist.\n\nIn fact, the root/parent partition, t , only adds those columns from foo\n(a and b) from its returningList to its processed_targetlist. It does\nnot end up adding any other columns from t to its processed_tlist than\nthose that are already there.\n\nThis alone didn't bother me that much. I assume that this could be this\nway for some valid reason. However, the part that seems odd to me is the\nexact way in which these other columns are added to the child\npartition's processed_tlist.\n\nIn preprocess_targetlist(), this code determines if the var pulled out\nof the querytree's returningList is added to its processed_tlist\n\nif (IsA(var, Var) &&\nvar->varno == result_relation)\ncontinue;\n\nFor the root partition, result_relation will be the index into the range\ntable of the relation that is the target of the DELETE -- so, in this\ncase, the index into the range table for the leaf partitions.\nFor the child partitions, however, result_relation is 0 because in\ninheritance_planner(), we copy the parent querytree (including the\nreturningList) and then set the result_relation to 0. The comment\nreads:\n\n /*\n * Make a deep copy of the parsetree for this planning cycle to mess\n * around with, and change it to look like a SELECT.\n ...\n\nThat means that in preprocess_targetlist() for child partitions,\nresult_relation in the querytree is 0 and all of the returningList will\nalways be added to the processed_tlist.\n\nIn practice, I don't actually know if this breaks anything. I poked\naround a bit, but I don't see how having too many target list entries\nfor a leaf partition could ever, for example, produce wrong results.\n\nAnyway, I think it makes the code harder to understand.\n\nA simple fix would be to also guard that if statement with a check that\nresult_relation is not 0.\n\n- if (parse->returningList && list_length(parse->rtable) > 1)\n+ if (result_relation && parse->returningList &&\nlist_length(parse->rtable) > 1)\n\n(this does pass regress)\nThat doesn't make that much sense, though, I think, since SELECT\nstatements shouldn't have a returningList, so the existing check should\nbe sufficient.\n\nMaybe a more complete solution would do something to make sure that\nchild partitions are treated the same as root partitions for DELETE\nqueries, similar to what seems to be described to be happening for\nUPDATE queries in the same comment about making the deep copy of the\nquerytree for the child partition:\n\n /*\n * Make a deep copy of the parsetree for this planning cycle to mess\n * around with, and change it to look like a SELECT. (Hack alert: the\n * target RTE still has updatedCols set if this is an UPDATE, so that\n * expand_partitioned_rtentry will correctly update\n * subroot->partColsUpdated.)\n */\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAAKRu_ZQ0Jy7LfZDCY0JdxChdpja9rf-S8Y5%2BU4vX7cYJd62dA%40mail.gmail.com#f16fb3bdf33519c0d547a4b7ae2fc3c3\n[2]\nhttps://www.postgresql.org/message-id/CAAKRu_ZQ0Jy7LfZDCY0JdxChdpja9rf-S8Y5%2BU4vX7cYJd62dA%40mail.gmail.com\n\n-- \nMelanie Plageman", "msg_date": "Mon, 22 Jun 2020 19:07:23 -0700", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Extra target list entries for child partitions in\n DELETE...USING...RETURNING" } ]
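[Editorial note on the thread above: the effect Melanie describes — that with result_relation forced to 0 the varno test in preprocess_targetlist() can never match, so every RETURNING var is added — can be modeled in a tiny standalone sketch. The names below deliberately mirror the PostgreSQL source (varno, result_relation, the proposed guard), but this is a simplified illustration, not the real planner code; both function names are invented for the example.]

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for the planner's Var node: only the range-table index. */
typedef struct Var
{
    int varno;
} Var;

/* Models the existing test: a returningList var is skipped (already
 * available) only when it refers to the result relation itself. With
 * result_relation == 0 -- as inheritance_planner() sets it for the
 * SELECT-ified child query -- no real varno (always >= 1) can match,
 * so every var gets a target-list entry. */
bool returning_var_needs_tlist_entry(const Var *var, int result_relation)
{
    return var->varno != result_relation;
}

/* Models the proposed tightened guard: only walk the returningList at
 * all when there is a genuine result relation. */
bool apply_returning_scan(int result_relation, bool has_returning,
                          int rtable_len)
{
    return result_relation != 0 && has_returning && rtable_len > 1;
}
```

Under this model the proposed one-line guard simply short-circuits the child-partition (result_relation == 0) case, which is the behavior difference the thread is debating.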
[ { "msg_contents": "Hi,\n\nFix typo on sum size table_parallelscan_estimate.\nIf IsMVCCSnapshot(snapshot is true, add_size return is being lost.\n\nregards,\nRanier Vilela", "msg_date": "Tue, 23 Jun 2020 09:31:51 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] fix size sum table_parallelscan_estimate\n (src/backend/access/table/tableam.c)" }, { "msg_contents": "On 2020-Jun-23, Ranier Vilela wrote:\n\n> Hi,\n> \n> Fix typo on sum size table_parallelscan_estimate.\n> If IsMVCCSnapshot(snapshot is true, add_size return is being lost.\n\nadd_size() already adds, no? You don't need to add again.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 23 Jun 2020 09:22:50 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] fix size sum table_parallelscan_estimate\n (src/backend/access/table/tableam.c)" } ]
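[Editorial note on the thread above: Álvaro's point is that add_size() already returns the checked sum, so assigning its result back is the whole accumulation — adding it to the running total a second time would double-count. The sketch below is a simplified model of that overflow-checked-addition pattern, with an invented caller (estimate_scan_size); it is not the actual PostgreSQL implementation.]

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

typedef size_t Size;

/* Overflow-checked addition in the style of PostgreSQL's add_size():
 * returns s1 + s2, aborting if the unsigned sum wrapped around. */
Size add_size(Size s1, Size s2)
{
    Size result = s1 + s2;

    if (result < s1)            /* wraparound means overflow */
    {
        fprintf(stderr, "requested size overflows size_t\n");
        abort();
    }
    return result;
}

/* Correct usage: the return value *is* the new running total, so it is
 * assigned back.  Writing sz += add_size(sz, x) instead would add sz
 * twice; discarding the return value would lose x entirely. */
Size estimate_scan_size(Size base, Size snapshot_sz)
{
    Size sz = base;

    sz = add_size(sz, snapshot_sz);
    return sz;
}
```

The idiom is analogous to chained checked arithmetic anywhere: each call consumes the previous total and yields the next one, so no extra `+` belongs at the call site.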
[ { "msg_contents": "Here is a patch to reorganize dumpFunc() and dumpAgg() in pg_dump, \nsimilar to daa9fe8a5264a3f192efa5ddee8fb011ad9da365. Instead of \nrepeating the almost same large query in each version branch, use one\nquery and add a few columns to the SELECT list depending on the\nversion. This saves a lot of duplication.\n\nI have tested this with various old versions of PostgreSQL I had \navailable, but a bit more random testing with old versions would be welcome.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 23 Jun 2020 14:57:23 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "some more pg_dump refactoring" }, { "msg_contents": "> On 23 Jun 2020, at 14:57, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> Here is a patch to reorganize dumpFunc() and dumpAgg() in pg_dump, similar to daa9fe8a5264a3f192efa5ddee8fb011ad9da365. Instead of repeating the almost same large query in each version branch, use one\n> query and add a few columns to the SELECT list depending on the\n> version. This saves a lot of duplication.\n> \n> I have tested this with various old versions of PostgreSQL I had available, but a bit more random testing with old versions would be welcome.\n\n+1 from reading the patch and lightly testing it with the older versions I had\nhandy, and another big +1 on the refactoring to remove the duplication.\n\ncheers ./daniel\n\n", "msg_date": "Tue, 23 Jun 2020 22:22:05 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: some more pg_dump refactoring" }, { "msg_contents": "Hallo Peter,\n\nMy 0.02 €:\n\nPatch applies cleanly, compiles, make check and pg_dump tap tests ok. The \nrefactoring is a definite improvements.\n\nYou changed the query strings to use \"\\n\" instead of \" \". 
I would not have \nchanged that, because it departs from the style around, and I do not think \nit improves readability at the C code level.\n\nI tried to check manually and randomly that the same query is built for \nthe same version, although my check may have been partial, especially on \nthe aggregate query which does not comment about what is changed between \nversions, and my eyes are not very good at diffing.\n\nI've notice that some attributes are given systematic replacements (eg \nproparallel), removing the need to test for presence afterwards. This \nlooks fine to me.\n\nHowever, on version < 8.4, ISTM that funcargs and funciargs are always \nadded: is this voluntary?.\n\nWould it make sense to accumulate in the other direction, older to newer, \nso that new attributes are added at the end of the select?\n\nShould array_to_string be pg_catalog.array_to_string? All other calls seem \nto have an explicit schema.\n\nI'm fine with inlining most PQfnumber calls.\n\nI do not have old versions at hand for testing.\n\n> Here is a patch to reorganize dumpFunc() and dumpAgg() in pg_dump, similar to \n> daa9fe8a5264a3f192efa5ddee8fb011ad9da365. Instead of repeating the almost \n> same large query in each version branch, use one\n> query and add a few columns to the SELECT list depending on the\n> version. This saves a lot of duplication.\n>\n> I have tested this with various old versions of PostgreSQL I had available, \n> but a bit more random testing with old versions would be welcome.\n\n-- \nFabien.", "msg_date": "Thu, 25 Jun 2020 08:58:58 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: some more pg_dump refactoring" }, { "msg_contents": "On 2020-06-25 08:58, Fabien COELHO wrote:\n> You changed the query strings to use \"\\n\" instead of \" \". 
I would not have\n> changed that, because it departs from the style around, and I do not think\n> it improves readability at the C code level.\n\nThis was the style that was introduced by \ndaa9fe8a5264a3f192efa5ddee8fb011ad9da365.\n\n> However, on version < 8.4, ISTM that funcargs and funciargs are always\n> added: is this voluntary?.\n\nThat was a mistake.\n\n> Would it make sense to accumulate in the other direction, older to newer,\n> so that new attributes are added at the end of the select?\n\nI think that could make sense, but the current style was introduced by \ndaa9fe8a5264a3f192efa5ddee8fb011ad9da365. Should we revise that?\n\n> Should array_to_string be pg_catalog.array_to_string? All other calls seem\n> to have an explicit schema.\n\nIt's not handled fully consistently in pg_dump. But my understanding is \nthat this is no longer necessary because pg_dump explicitly sets the \nsearch path before running.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 29 Jun 2020 15:13:24 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: some more pg_dump refactoring" }, { "msg_contents": "\nHello,\n\n>> You changed the query strings to use \"\\n\" instead of \" \". I would not have\n>> changed that, because it departs from the style around, and I do not think\n>> it improves readability at the C code level.\n>\n> This was the style that was introduced by \n> daa9fe8a5264a3f192efa5ddee8fb011ad9da365.\n\nYep. This does not imply that it is better, or worst. Currently it is not \nconsistent within the file, and there are only few instances, so maybe it \ncould be discussed anyway.\n\nAfter giving it some thought, I'd say that at least I'd like the query to \nbe easy to read when dumped. 
This is not incompatible with some embedded \neol, on the contrary, but ISTM that it could keep some indentation as \nwell, which would be kind-of a middle ground. For readability, I'd also \nconsider turning keywords to upper case. Maybe it could look like:\n\n \"SELECT\\n\"\n \" somefield,\\n\"\n \" someotherfiled,\\n\"\n ...\n \"FROM some_table\\n\"\n \"JOIN ... ON ...\\n\" ...\n\nAll this is highly debatable, so ignore if you feel like it.\n\n>> Would it make sense to accumulate in the other direction, older to newer,\n>> so that new attributes are added at the end of the select?\n>\n> I think that could make sense, but the current style was introduced by \n> daa9fe8a5264a3f192efa5ddee8fb011ad9da365. Should we revise that?\n\nIt seems to me more logical to do it while you're at it, but you are the \none writing the patches:-)\n\n>> Should array_to_string be pg_catalog.array_to_string? All other calls seem\n>> to have an explicit schema.\n>\n> It's not handled fully consistently in pg_dump. But my understanding is that \n> this is no longer necessary because pg_dump explicitly sets the search path \n> before running.\n\nDunno. It does not look consistent with a mix because the wary reviewer \nthink that there may be a potential bug:-) ISTM that explicit is better \nthan implicit in this context: not relying on search path would allow to \ntest the query independently, and anyway what you see is what you get.\n\nOtherwise: v2 patch applies cleanly, compiles, global make check ok, \npg_dump tap ok.\n\n\"progargnames\" is added in both branches of an if, which looks awkward. 
\nI'd suggest maybe to add it once unconditionnaly.\n\nI cannot test easily on older versions.\n\n-- \nFabien.\n\n\n", "msg_date": "Tue, 30 Jun 2020 16:56:01 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: some more pg_dump refactoring" }, { "msg_contents": "On 2020-06-30 16:56, Fabien COELHO wrote:\n>>> Would it make sense to accumulate in the other direction, older to newer,\n>>> so that new attributes are added at the end of the select?\n>> I think that could make sense, but the current style was introduced by\n>> daa9fe8a5264a3f192efa5ddee8fb011ad9da365. Should we revise that?\n> It seems to me more logical to do it while you're at it, but you are the\n> one writing the patches:-)\n\nWhat do you think about this patch to reorganize the existing code from \nthat old commit?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 7 Jul 2020 09:00:38 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: some more pg_dump refactoring" }, { "msg_contents": "\nHallo Peter,\n\n>>>> Would it make sense to accumulate in the other direction, older to newer,\n>>>> so that new attributes are added at the end of the select?\n>>> I think that could make sense, but the current style was introduced by\n>>> daa9fe8a5264a3f192efa5ddee8fb011ad9da365. 
Should we revise that?\n>> It seems to me more logical to do it while you're at it, but you are the\n>> one writing the patches:-)\n>\n> What do you think about this patch to reorganize the existing code from that \n> old commit?\n\nI think it is a definite further improvement.\n\nPatch applies cleanly, compiles, global & pg_dump tap test ok, looks ok to \nme.\n\n-- \nFabien.\n\n\n", "msg_date": "Wed, 8 Jul 2020 06:42:14 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: some more pg_dump refactoring" }, { "msg_contents": "On 2020-07-08 06:42, Fabien COELHO wrote:\n>> What do you think about this patch to reorganize the existing code from that\n>> old commit?\n> \n> I think it is a definite further improvement.\n> \n> Patch applies cleanly, compiles, global & pg_dump tap test ok, looks ok to\n> me.\n\nThanks. I have committed that, and attached is my original patch \nadjusted to this newer style.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 9 Jul 2020 16:14:47 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: some more pg_dump refactoring" }, { "msg_contents": "On 2020-Jul-09, Peter Eisentraut wrote:\n\n> On 2020-07-08 06:42, Fabien COELHO wrote:\n> > > What do you think about this patch to reorganize the existing code from that\n> > > old commit?\n> > \n> > I think it is a definite further improvement.\n> > \n> > Patch applies cleanly, compiles, global & pg_dump tap test ok, looks ok to\n> > me.\n> \n> Thanks. I have committed that, and attached is my original patch adjusted\n> to this newer style.\n\nThanks, I too prefer the style where queries are split in lines instead\nof a single very long one, at least when looking at log_statement=all\nlines generated by pg_dump runs. 
It's not a *huge* usability\nimprovement, but I see no reason to make things worse.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Jul 2020 12:56:27 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: some more pg_dump refactoring" }, { "msg_contents": "On 2020-07-09 16:14, Peter Eisentraut wrote:\n> On 2020-07-08 06:42, Fabien COELHO wrote:\n>>> What do you think about this patch to reorganize the existing code from that\n>>> old commit?\n>>\n>> I think it is a definite further improvement.\n>>\n>> Patch applies cleanly, compiles, global & pg_dump tap test ok, looks ok to\n>> me.\n> \n> Thanks. I have committed that, and attached is my original patch\n> adjusted to this newer style.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 15 Jul 2020 15:05:32 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: some more pg_dump refactoring" } ]
[ { "msg_contents": "Hello all,\n\nas to my recent findings, I'm not able to build postgresql 10.13\nagainst libpq 12.1, as in that case, postgresql is missing changes\nimplemented in libpq 10.12 (and 12.2) [1]. So to rebase to postgresql\n10.13 on a system with a separated libpq package shipped at version\n12.1, I'm required to rebase the libpq package as well (even though\nversion 12.1 is technically higher than 10.13, it has been released\nprior to 10.13, and is missing changes included in that version).\n\nWhile I suppose that such compatibility is intended to be preserved\nonly for minor releases of separate major versions, I thought I'd\nbring this up anyway, as it is something I haven't considered before.\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=e60b480d39ee3401727a994988dd9117a3b48466\n\n--\nPatrik Novotný\nAssociate Software Engineer\nRed Hat\npanovotn@redhat.com\n\n\n\n", "msg_date": "Tue, 23 Jun 2020 15:24:59 +0200", "msg_from": "Patrik Novotny <panovotn@redhat.com>", "msg_from_op": true, "msg_subject": "Building postgresql with higher major version of separate libpq\n package" }, { "msg_contents": "Patrik Novotny <panovotn@redhat.com> writes:\n> as to my recent findings, I'm not able to build postgresql 10.13\n> against libpq 12.1, as in that case, postgresql is missing changes\n> implemented in libpq 10.12 (and 12.2) [1].\n\nThe commit you cite changed only private, internal details in libpq,\nso it's not apparent to me why it would cause FTBFS problems for\nclients. Having said that, our build system is in no sense set up\nto compile against an external copy of libpq. I suspect that maybe\nthere's something wrong with the way you've done that. 
Could you\nprovide more details about how that's done and what exact problem you\nare seeing?\n\n> So to rebase to postgresql\n> 10.13 on a system with a separated libpq package shipped at version\n> 12.1, I'm required to rebase the libpq package as well (even though\n> version 12.1 is technically higher than 10.13, it has been released\n> prior to 10.13, and is missing changes included in that version).\n\nOn the whole I'm not sure that's a bad thing. Bug fixes made in 10.13\nvery likely went into the same-vintage 12.x update (ie 12.3) as well;\nso if you don't update libpq that may negatively impact stability of\nthe 10.13 installation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Jun 2020 09:54:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Building postgresql with higher major version of separate libpq\n package" } ]
[ { "msg_contents": "In the following log file the presence of \"exit code 1\" after performing a\n\"pg_ctl stop -m smart\" shutdown is bugging me. I take it most people would\njust ignore it as noise but a clean install from source, startup, and\nshutdown would ideally not result in a non-zero exit code being sent to the\nlog. But I've decided to stop trying to track it down on my own and at\nleast mention it here. It seems like f669c09989 probably introduced the\nbehavior, and I can see how \"restarting\" is generally a good thing, but we\nare not going to restart the launcher during a clean shutdown and the exit\ncode isn't being done conditionally.\n\n2020-06-23 19:49:07.177 UTC [2772] LOG: starting PostgreSQL 14devel on\nx86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0,\n64-bit\n2020-06-23 19:49:07.178 UTC [2772] LOG: listening on IPv6 address \"::1\",\nport 5432\n2020-06-23 19:49:07.178 UTC [2772] LOG: listening on IPv4 address\n\"127.0.0.1\", port 5432\n2020-06-23 19:49:07.189 UTC [2772] LOG: listening on Unix socket\n\"/tmp/.s.PGSQL.5432\"\n2020-06-23 19:49:07.260 UTC [2773] LOG: database system was shut down at\n2020-06-18 15:39:29 UTC\n2020-06-23 19:49:07.373 UTC [2772] LOG: database system is ready to accept\nconnections\n2020-06-23 20:01:59.014 UTC [2772] LOG: received smart shutdown request\n2020-06-23 20:01:59.016 UTC [2772] LOG: background worker \"logical\nreplication launcher\" (PID 2779) exited with exit code 1\n2020-06-23 20:01:59.017 UTC [2774] LOG: shutting down\n2020-06-23 20:01:59.024 UTC [2772] LOG: database system is shut down\n\nDavid J.\n\nIn the following log file the presence of \"exit code 1\" after performing a \"pg_ctl stop -m smart\" shutdown is bugging me.  I take it most people would just ignore it as noise but a clean install from source, startup, and shutdown would ideally not result in a non-zero exit code being sent to the log.  
But I've decided to stop trying to track it down on my own and at least mention it here.  It seems like f669c09989 probably introduced the behavior, and I can see how \"restarting\" is generally a good thing, but we are not going to restart the launcher during a clean shutdown and the exit code isn't being done conditionally.2020-06-23 19:49:07.177 UTC [2772] LOG:  starting PostgreSQL 14devel on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit2020-06-23 19:49:07.178 UTC [2772] LOG:  listening on IPv6 address \"::1\", port 54322020-06-23 19:49:07.178 UTC [2772] LOG:  listening on IPv4 address \"127.0.0.1\", port 54322020-06-23 19:49:07.189 UTC [2772] LOG:  listening on Unix socket \"/tmp/.s.PGSQL.5432\"2020-06-23 19:49:07.260 UTC [2773] LOG:  database system was shut down at 2020-06-18 15:39:29 UTC2020-06-23 19:49:07.373 UTC [2772] LOG:  database system is ready to accept connections2020-06-23 20:01:59.014 UTC [2772] LOG:  received smart shutdown request2020-06-23 20:01:59.016 UTC [2772] LOG:  background worker \"logical replication launcher\" (PID 2779) exited with exit code 12020-06-23 20:01:59.017 UTC [2774] LOG:  shutting down2020-06-23 20:01:59.024 UTC [2772] LOG:  database system is shut downDavid J.", "msg_date": "Tue, 23 Jun 2020 13:48:25 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Curious - \"logical replication launcher\" (PID) existed with exit code\n 1" }, { "msg_contents": "On 2020-06-23 22:48, David G. Johnston wrote:\n> In the following log file the presence of \"exit code 1\" after performing \n> a \"pg_ctl stop -m smart\" shutdown is bugging me.  I take it most people \n> would just ignore it as noise but a clean install from source, startup, \n> and shutdown would ideally not result in a non-zero exit code being sent \n> to the log.  But I've decided to stop trying to track it down on my own \n> and at least mention it here.  
It seems like f669c09989 probably \n> introduced the behavior, and I can see how \"restarting\" is generally a \n> good thing, but we are not going to restart the launcher during a clean \n> shutdown and the exit code isn't being done conditionally.\n\nYeah, this is just a consequence of how background workers work. One \nwould need to adjust the handling of signaling, restart flags, exit \ncodes, etc. to be able to do this more elegantly. It's probably not \nthat hard, but it needs some leg work to go through all the cases and \nissues.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 3 Jul 2020 10:06:35 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Curious - \"logical replication launcher\" (PID) existed with exit\n code 1" } ]
[ { "msg_contents": "I just noticed that when you compile pg_bsd_indent with a PG tree that\nhas --enable-jit (or something around that), then it compiles the source\nfiles into bytecode.\n\nObviously this is not harmful since these files don't get installed, but\nI wonder if our compiles aren't being excessively generous.\n\n-- \n�lvaro Herrera http://www.twitter.com/alvherre\n\n\n", "msg_date": "Tue, 23 Jun 2020 17:56:10 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "pg_bsd_indent compiles bytecode" }, { "msg_contents": "On Tue, Jun 23, 2020 at 05:56:10PM -0400, Alvaro Herrera wrote:\n> I just noticed that when you compile pg_bsd_indent with a PG tree that\n> has --enable-jit (or something around that), then it compiles the source\n> files into bytecode.\n> \n> Obviously this is not harmful since these files don't get installed, but\n> I wonder if our compiles aren't being excessively generous.\n\nAre you saying pg_bsd_indent indents the JIT output files? I assumed\npeople only ran pg_bsd_indent on dist-clean trees.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Sat, 27 Jun 2020 16:41:36 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: pg_bsd_indent compiles bytecode" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Tue, Jun 23, 2020 at 05:56:10PM -0400, Alvaro Herrera wrote:\n>> I just noticed that when you compile pg_bsd_indent with a PG tree that\n>> has --enable-jit (or something around that), then it compiles the source\n>> files into bytecode.\n>> Obviously this is not harmful since these files don't get installed, but\n>> I wonder if our compiles aren't being excessively generous.\n\n> Are you saying pg_bsd_indent indents the JIT output files? 
I assumed\n> people only ran pg_bsd_indent on dist-clean trees.\n\nI think what he means is that when pg_bsd_indent absorbs the CFLAGS\nsettings that PG uses (because it uses the pgxs build infrastructure),\nit ends up also building .bc files.\n\nI wouldn't care about this particularly for pg_bsd_indent itself,\nbut it suggests that we're probably building .bc files for client-side\nfiles, which seems like a substantial waste of time. Maybe we need\ndifferent CFLAGS for client and server?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 27 Jun 2020 17:12:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_bsd_indent compiles bytecode" }, { "msg_contents": "On Sat, Jun 27, 2020 at 05:12:57PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Tue, Jun 23, 2020 at 05:56:10PM -0400, Alvaro Herrera wrote:\n> >> I just noticed that when you compile pg_bsd_indent with a PG tree that\n> >> has --enable-jit (or something around that), then it compiles the source\n> >> files into bytecode.\n> >> Obviously this is not harmful since these files don't get installed, but\n> >> I wonder if our compiles aren't being excessively generous.\n> \n> > Are you saying pg_bsd_indent indents the JIT output files? I assumed\n> > people only ran pg_bsd_indent on dist-clean trees.\n> \n> I think what he means is that when pg_bsd_indent absorbs the CFLAGS\n> settings that PG uses (because it uses the pgxs build infrastructure),\n> it ends up also building .bc files.\n\nWow, OK, I was confused then.\n\n> I wouldn't care about this particularly for pg_bsd_indent itself,\n> but it suggests that we're probably building .bc files for client-side\n> files, which seems like a substantial waste of time. 
Maybe we need\n> different CFLAGS for client and server?\n\nUnderstood.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Sat, 27 Jun 2020 17:14:56 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: pg_bsd_indent compiles bytecode" }, { "msg_contents": "Hi,\n\nOn 2020-06-27 17:12:57 -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Tue, Jun 23, 2020 at 05:56:10PM -0400, Alvaro Herrera wrote:\n> >> I just noticed that when you compile pg_bsd_indent with a PG tree that\n> >> has --enable-jit (or something around that), then it compiles the source\n> >> files into bytecode.\n> >> Obviously this is not harmful since these files don't get installed, but\n> >> I wonder if our compiles aren't being excessively generous.\n> \n> > Are you saying pg_bsd_indent indents the JIT output files? I assumed\n> > people only ran pg_bsd_indent on dist-clean trees.\n> \n> I think what he means is that when pg_bsd_indent absorbs the CFLAGS\n> settings that PG uses (because it uses the pgxs build infrastructure),\n> it ends up also building .bc files.\n\nHm. Yea, I think I see the problem. OBJS should only be expanded if\nMODULE_big is set.\n\n\n> I wouldn't care about this particularly for pg_bsd_indent itself,\n> but it suggests that we're probably building .bc files for client-side\n> files, which seems like a substantial waste of time. Maybe we need\n> different CFLAGS for client and server?\n\nI don't think it'll apply to most in-tree client side programs, so it\nshouldn't be too bad currently. Still should fix it, of course.\n\nI can test that with another program, but for some reason pg_bsd_indent\nfails to build against 13/HEAD, but builds fine against 12. 
Not sure yet\nwhat's up:\n\n/usr/bin/ld.gold: error: indent.o: multiple definition of 'input'\n/usr/bin/ld.gold: args.o: previous definition here\n/usr/bin/ld.gold: error: indent.o: multiple definition of 'output'\n/usr/bin/ld.gold: args.o: previous definition here\n/usr/bin/ld.gold: error: indent.o: multiple definition of 'labbuf'\n/usr/bin/ld.gold: args.o: previous definition here\n...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 27 Jun 2020 15:34:27 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_bsd_indent compiles bytecode" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I can test that with another program, but for some reason pg_bsd_indent\n> fails to build against 13/HEAD, but builds fine against 12. Not sure yet\n> what's up:\n\nHuh. Works here on RHEL8 ... what platform are you using?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 27 Jun 2020 18:43:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_bsd_indent compiles bytecode" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-06-27 17:12:57 -0400, Tom Lane wrote:\n>> I wouldn't care about this particularly for pg_bsd_indent itself,\n>> but it suggests that we're probably building .bc files for client-side\n>> files, which seems like a substantial waste of time. Maybe we need\n>> different CFLAGS for client and server?\n\n> I don't think it'll apply to most in-tree client side programs, so it\n> shouldn't be too bad currently. Still should fix it, of course.\n\nHaving now checked, there isn't any such problem. 
No .bc files are\ngetting built except in src/backend and in other modules that feed\ninto the backend, such as src/timezone and most of contrib.\n\nI do see .bc files getting built for pg_bsd_indent, as Alvaro reported.\nSeems like it must be a bug in the pgxs make logic, not anything more\ngeneric.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 27 Jun 2020 18:54:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_bsd_indent compiles bytecode" }, { "msg_contents": "On Sat, Jun 27, 2020 at 06:54:04PM -0400, Tom Lane wrote:\n> Having now checked, there isn't any such problem. No .bc files are\n> getting built except in src/backend and in other modules that feed\n> into the backend, such as src/timezone and most of contrib.\n> \n> I do see .bc files getting built for pg_bsd_indent, as Alvaro reported.\n> Seems like it must be a bug in the pgxs make logic, not anything more\n> generic.\n\nYeah, and I think that it is caused by the following bit:\nifeq ($(with_llvm), yes)\nall: $(addsuffix .bc, $(MODULES)) $(patsubst %.o,%.bc, $(OBJS))\nendif\n\nShouldn't the latter part be ignored if PROGRAM is used?\n--\nMichael", "msg_date": "Mon, 29 Jun 2020 16:48:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_bsd_indent compiles bytecode" }, { "msg_contents": "Hi,\n\nOn 2020-06-27 18:43:40 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I can test that with another program, but for some reason pg_bsd_indent\n> > fails to build against 13/HEAD, but builds fine against 12. Not sure yet\n> > what's up:\n> \n> Huh. Works here on RHEL8 ... what platform are you using?\n\nThat was on Debian unstable, but I don't think it's really related. 
The\nissue turns out to be that gcc 10 changed the default from -fno-common\nto -fcommon, and I had 13/HEAD set up to use gcc 10, but 12 to use gcc\n9.\n\nThe way that pg_bsd_indent defines its variables isn't standard C, as\nfar as I can tell, which explains the errors I was getting. All the\nindividual files include indent_globs.h, which declares/defines a bunch\nof variables. Since it doesn't use extern, they'll all end up as full\ndefinitions in each .o when -fno-common is used (the default now), hence\nthe multiple definition errors. The only reason it works with -fcommon\nis that they'll end up processed as weak symbols and 'deduplicated' at\nlink time.\n\nIck.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 29 Jun 2020 09:50:51 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_bsd_indent compiles bytecode" }, { "msg_contents": "Hi,\n\nOn 2020-06-27 18:54:04 -0400, Tom Lane wrote:\n> Having now checked, there isn't any such problem. No .bc files are\n> getting built except in src/backend and in other modules that feed\n> into the backend, such as src/timezone and most of contrib.\n\n> I do see .bc files getting built for pg_bsd_indent, as Alvaro reported.\n> Seems like it must be a bug in the pgxs make logic, not anything more\n> generic.\n\nYea. The issue is in pgxs.mk. So it does affect contrib/ modules that\nuse PROGRAM (e.g. contrib/pg_standby/pg_standby.bc is built), but not\nother parts of the tree.\n\nIt's easy enough to fix by just adding a ifndef PROGRAM around the piece\nadding the depency to the .bc files:\n\nifeq ($(with_llvm), yes)\nifndef PROGRAM\nall: $(addsuffix .bc, $(MODULES)) $(patsubst %.o,%.bc, $(OBJS))\nendif # PROGRAM\nendif # with_llvm\n\nbut it's not particularly pretty. 
But given the way pgxs.mk is\nstructured, I'm not sure there's really a great answer?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 29 Jun 2020 09:59:24 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_bsd_indent compiles bytecode" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> The way that pg_bsd_indent defines its variables isn't standard C, as\n> far as I can tell, which explains the errors I was getting. All the\n> individual files include indent_globs.h, which declares/defines a bunch\n> of variables. Since it doesn't use extern, they'll all end up as full\n> definitions in each .o when -fno-common is used (the default now), hence\n> the multiple definition errors. The only reason it works with -fcommon\n> is that they'll end up processed as weak symbols and 'deduplicated' at\n> link time.\n\nUgh. I agree that's pretty bogus, even if there's anything in the\nC standard that allows it. I'll put it on my to-do list.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Jun 2020 14:58:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_bsd_indent compiles bytecode" }, { "msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> The way that pg_bsd_indent defines its variables isn't standard C, as\n>> far as I can tell, which explains the errors I was getting. All the\n>> individual files include indent_globs.h, which declares/defines a bunch\n>> of variables. Since it doesn't use extern, they'll all end up as full\n>> definitions in each .o when -fno-common is used (the default now), hence\n>> the multiple definition errors. The only reason it works with -fcommon\n>> is that they'll end up processed as weak symbols and 'deduplicated' at\n>> link time.\n\n> Ugh. I agree that's pretty bogus, even if there's anything in the\n> C standard that allows it. 
I'll put it on my to-do list.\n\nI pushed the attached patch to the pg_bsd_indent repo. Perhaps Piotr\nwould like to absorb it into upstream.\n\nI don't intend to mark pg_bsd_indent with a new release number for\nthis --- for people who successfully compiled, it's the same as before.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 29 Jun 2020 21:27:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_bsd_indent compiles bytecode" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> It's easy enough to fix by just adding a ifndef PROGRAM around the piece\n> adding the depency to the .bc files:\n\n> ifeq ($(with_llvm), yes)\n> ifndef PROGRAM\n> all: $(addsuffix .bc, $(MODULES)) $(patsubst %.o,%.bc, $(OBJS))\n> endif # PROGRAM\n> endif # with_llvm\n\n> but it's not particularly pretty. But given the way pgxs.mk is\n> structured, I'm not sure there's really a great answer?\n\nYeah. The only other plausible alternative I see is like this:\n\nifeq ($(with_llvm), yes)\nifdef MODULES\nall: $(addsuffix .bc, $(MODULES))\nendif # MODULES\nifdef MODULE_big\nall: $(patsubst %.o,%.bc, $(OBJS))\nendif # MODULE_big\nendif # with_llvm\n\nwhich might be a little nicer because it squares better with,\neg, the install/uninstall rules. But it's not much better.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Jun 2020 21:38:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_bsd_indent compiles bytecode" }, { "msg_contents": "Hi,\n\nOn 2020-06-29 21:27:48 -0400, Tom Lane wrote:\n> I wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> >> The way that pg_bsd_indent defines its variables isn't standard C, as\n> >> far as I can tell, which explains the errors I was getting. All the\n> >> individual files include indent_globs.h, which declares/defines a bunch\n> >> of variables. 
Since it doesn't use extern, they'll all end up as full\n> >> definitions in each .o when -fno-common is used (the default now), hence\n> >> the multiple definition errors. The only reason it works with -fcommon\n> >> is that they'll end up processed as weak symbols and 'deduplicated' at\n> >> link time.\n> \n> > Ugh. I agree that's pretty bogus, even if there's anything in the\n> > C standard that allows it. I'll put it on my to-do list.\n> \n> I pushed the attached patch to the pg_bsd_indent repo. Perhaps Piotr\n> would like to absorb it into upstream.\n\nThanks!\n\n\n> I don't intend to mark pg_bsd_indent with a new release number for\n> this --- for people who successfully compiled, it's the same as before.\n\nMakes sense to me.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 30 Jun 2020 12:29:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_bsd_indent compiles bytecode" } ]
[ { "msg_contents": "I was checking some loose ends in SQL conformance, when I noticed: We \nsupport GRANT role ... GRANTED BY CURRENT_USER, but we don't support \nCURRENT_ROLE in that place, even though in PostgreSQL they are \nequivalent. Here is a trivial patch to add that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 24 Jun 2020 08:35:58 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Allow CURRENT_ROLE in GRANTED BY" }, { "msg_contents": "On 6/24/20 8:35 AM, Peter Eisentraut wrote:\n> I was checking some loose ends in SQL conformance, when I noticed: We\n> support GRANT role ... GRANTED BY CURRENT_USER, but we don't support\n> CURRENT_ROLE in that place, even though in PostgreSQL they are\n> equivalent.  Here is a trivial patch to add that.\n\n\nThe only thing that isn't dead-obvious about this patch is the commit\nmessage says \"[PATCH 1/2]\". What is in the other part?\n\nAssuming that's just a remnant of development, this LGTM.\n-- \nVik Fearing\n\n\n", "msg_date": "Wed, 24 Jun 2020 10:12:38 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: Allow CURRENT_ROLE in GRANTED BY" }, { "msg_contents": "On 2020-06-24 10:12, Vik Fearing wrote:\n> On 6/24/20 8:35 AM, Peter Eisentraut wrote:\n>> I was checking some loose ends in SQL conformance, when I noticed: We\n>> support GRANT role ... GRANTED BY CURRENT_USER, but we don't support\n>> CURRENT_ROLE in that place, even though in PostgreSQL they are\n>> equivalent.  Here is a trivial patch to add that.\n> \n> \n> The only thing that isn't dead-obvious about this patch is the commit\n> message says \"[PATCH 1/2]\". What is in the other part?\n\nHehe. The second patch is some in-progress work to add the GRANTED BY \nclause to the regular GRANT command.
More on that perhaps at a later date.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 24 Jun 2020 20:21:29 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allow CURRENT_ROLE in GRANTED BY" }, { "msg_contents": "On 2020-Jun-24, Peter Eisentraut wrote:\n\n> I was checking some loose ends in SQL conformance, when I noticed: We\n> support GRANT role ... GRANTED BY CURRENT_USER, but we don't support\n> CURRENT_ROLE in that place, even though in PostgreSQL they are equivalent.\n> Here is a trivial patch to add that.\n\nHmm, since this adds to RoleSpec, this change makes every place that\nuses that production also take CURRENT_ROLE, so we'd need to document in\nall those places. For example, alter_role.sgml, create_schema.sgml,\netc.\n\nThis also affects role_list (but maybe the docs for those are already\nvague enough -- eg. ALTER INDEX .. OWNED BY only says \"role_name\" with\nno further explanation, even though it does take \"current_user\".)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 24 Jun 2020 17:08:31 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Allow CURRENT_ROLE in GRANTED BY" }, { "msg_contents": "On 2020-06-24 23:08, Alvaro Herrera wrote:\n> On 2020-Jun-24, Peter Eisentraut wrote:\n> \n>> I was checking some loose ends in SQL conformance, when I noticed: We\n>> support GRANT role ... GRANTED BY CURRENT_USER, but we don't support\n>> CURRENT_ROLE in that place, even though in PostgreSQL they are equivalent.\n>> Here is a trivial patch to add that.\n> \n> Hmm, since this adds to RoleSpec, this change makes every place that\n> uses that production also take CURRENT_ROLE, so we'd need to document in\n> all those places.
For example, alter_role.sgml, create_schema.sgml,\n> etc.\n\nGood point. Here is an updated patch that updates all the documentation \nplaces where CURRENT_USER is mentioned.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 29 Jun 2020 14:47:38 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allow CURRENT_ROLE in GRANTED BY" }, { "msg_contents": "On 2020-06-29 14:47, Peter Eisentraut wrote:\n> On 2020-06-24 23:08, Alvaro Herrera wrote:\n>> On 2020-Jun-24, Peter Eisentraut wrote:\n>>\n>>> I was checking some loose ends in SQL conformance, when I noticed: We\n>>> support GRANT role ... GRANTED BY CURRENT_USER, but we don't support\n>>> CURRENT_ROLE in that place, even though in PostgreSQL they are equivalent.\n>>> Here is a trivial patch to add that.\n>>\n>> Hmm, since this adds to RoleSpec, this change makes every place that\n>> uses that production also take CURRENT_ROLE, so we'd need to document in\n>> all those places. For example, alter_role.sgml, create_schema.sgml,\n>> etc.\n> \n> Good point.
Here is an updated patch that updates all the documentation\n> places where CURRENT_USER is mentioned.\n\nHere is another patch that also makes comprehensive updates to the \nrolenames tests under src/test/modules/unsafe_tests/.\n\nI think this should now cover all the required ancillary changes.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 26 Aug 2020 13:00:57 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allow CURRENT_ROLE in GRANTED BY" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nThe patch applies cleanly and looks fine to me. However wouldn't it be better to just map the CURRENT_ROLE to CURRENT_USER in backend grammar?\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Mon, 07 Sep 2020 10:02:49 +0000", "msg_from": "Asif Rehman <asifr.rehman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow CURRENT_ROLE in GRANTED BY" }, { "msg_contents": "On 2020-09-07 12:02, Asif Rehman wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: tested, passed\n> \n> The patch applies cleanly and looks fine to me. However wouldn't it be better to just map the CURRENT_ROLE to CURRENT_USER in backend grammar?\n\nExisting code treats them differently.
I think, given that the code is \nalready written, it is good to preserve what the user wrote.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 10 Sep 2020 18:21:54 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allow CURRENT_ROLE in GRANTED BY" }, { "msg_contents": "On 2020-Aug-26, Peter Eisentraut wrote:\n\n> Here is another patch that also makes comprehensive updates to the rolenames\n> tests under src/test/modules/unsafe_tests/.\n\nLooks good to me.
You need to DROP ROLE \"current_role\" at the bottom of\nrolenames.sql, though (as well as DROP OWNED BY, I suppose.)\n\n> I think this should now cover all the required ancillary changes.\n\n\\o/\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 11 Sep 2020 17:05:13 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Allow CURRENT_ROLE in GRANTED BY" }, { "msg_contents": "On 2020-09-11 22:05, Alvaro Herrera wrote:\n> On 2020-Aug-26, Peter Eisentraut wrote:\n> \n>> Here is another patch that also makes comprehensive updates to the rolenames\n>> tests under src/test/modules/unsafe_tests/.\n> \n> Looks good to me. You need to DROP ROLE \"current_role\" at the bottom of\n> rolenames.sql, though (as well as DROP OWNED BY, I suppose.)\n> \n>> I think this should now cover all the required ancillary changes.\n> \n> \\o/\n> \n\ncommitted\n\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 17 Sep 2020 12:02:35 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allow CURRENT_ROLE in GRANTED BY" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> committed\n\nA couple of buildfarm animals are reporting instability in the\nmodified rolenames test, eg\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hoverfly&dt=2020-09-17%2010%3A27%3A36\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2020-09-17%2011%3A17%3A08\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2020-09-17%2011%3A47%3A07\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Sep 2020 10:12:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow CURRENT_ROLE in GRANTED BY" }, { "msg_contents": "On 2020-06-24 20:21, Peter Eisentraut wrote:\n> On 2020-06-24 10:12, Vik Fearing wrote:\n>> On 6/24/20 8:35 AM, Peter Eisentraut wrote:\n>>> I was checking some loose ends in SQL conformance, when I noticed: We\n>>> support GRANT role ... GRANTED BY CURRENT_USER, but we don't support\n>>> CURRENT_ROLE in that place, even though in PostgreSQL they are\n>>> equivalent.  Here is a trivial patch to add that.\n>>\n>>\n>> The only thing that isn't dead-obvious about this patch is the commit\n>> message says \"[PATCH 1/2]\". What is in the other part?\n>\n> Hehe. The second patch is some in-progress work to add the GRANTED BY\n> clause to the regular GRANT command.
More on that perhaps at a later date.\n\nHere is the highly anticipated and quite underwhelming second part of \nthis patch set.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/", "msg_date": "Thu, 10 Dec 2020 19:39:49 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allow CURRENT_ROLE in GRANTED BY" }, { "msg_contents": "On Thu, 10 Dec 2020 at 18:40, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-06-24 20:21, Peter Eisentraut wrote:\n> > On 2020-06-24 10:12, Vik Fearing wrote:\n> >> On 6/24/20 8:35 AM, Peter Eisentraut wrote:\n> >>> I was checking some loose ends in SQL conformance, when I noticed: We\n> >>> support GRANT role ... GRANTED BY CURRENT_USER, but we don't support\n> >>> CURRENT_ROLE in that place, even though in PostgreSQL they are\n> >>> equivalent. Here is a trivial patch to add that.\n> >>\n> >>\n> >> The only thing that isn't dead-obvious about this patch is the commit\n> >> message says \"[PATCH 1/2]\". What is in the other part?\n> >\n> > Hehe. The second patch is some in-progress work to add the GRANTED BY\n> > clause to the regular GRANT command. More on that perhaps at a later date.\n>\n> Here is the highly anticipated and quite underwhelming second part of\n> this patch set.\n\nLooks great, but no test to confirm it works.
I would suggest adding a\ntest and committing directly since I don't see any cause for further\ndiscussion.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 30 Dec 2020 12:43:39 +0000", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Allow CURRENT_ROLE in GRANTED BY" }, { "msg_contents": "On 2020-12-30 13:43, Simon Riggs wrote:\n> On Thu, 10 Dec 2020 at 18:40, Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>>\n>> On 2020-06-24 20:21, Peter Eisentraut wrote:\n>>> On 2020-06-24 10:12, Vik Fearing wrote:\n>>>> On 6/24/20 8:35 AM, Peter Eisentraut wrote:\n>>>>> I was checking some loose ends in SQL conformance, when I noticed: We\n>>>>> support GRANT role ... GRANTED BY CURRENT_USER, but we don't support\n>>>>> CURRENT_ROLE in that place, even though in PostgreSQL they are\n>>>>> equivalent. Here is a trivial patch to add that.\n>>>>\n>>>>\n>>>> The only thing that isn't dead-obvious about this patch is the commit\n>>>> message says \"[PATCH 1/2]\". What is in the other part?\n>>>\n>>> Hehe. The second patch is some in-progress work to add the GRANTED BY\n>>> clause to the regular GRANT command. More on that perhaps at a later date.\n>>\n>> Here is the highly anticipated and quite underwhelming second part of\n>> this patch set.\n> \n> Looks great, but no test to confirm it works. I would suggest adding a\n> test and committing directly since I don't see any cause for further\n> discussion.\n\nCommitted with some tests.
Thanks.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/\n\n\n", "msg_date": "Sat, 30 Jan 2021 09:51:03 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allow CURRENT_ROLE in GRANTED BY" }, { "msg_contents": "> On 30 Jan 2021, at 09:51, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> On 2020-12-30 13:43, Simon Riggs wrote:\n>> On Thu, 10 Dec 2020 at 18:40, Peter Eisentraut\n>> <peter.eisentraut@2ndquadrant.com> wrote:\n>>> \n>>> On 2020-06-24 20:21, Peter Eisentraut wrote:\n>>>> On 2020-06-24 10:12, Vik Fearing wrote:\n>>>>> On 6/24/20 8:35 AM, Peter Eisentraut wrote:\n>>>>>> I was checking some loose ends in SQL conformance, when I noticed: We\n>>>>>> support GRANT role ... GRANTED BY CURRENT_USER, but we don't support\n>>>>>> CURRENT_ROLE in that place, even though in PostgreSQL they are\n>>>>>> equivalent. Here is a trivial patch to add that.\n>>>>> \n>>>>> \n>>>>> The only thing that isn't dead-obvious about this patch is the commit\n>>>>> message says \"[PATCH 1/2]\". What is in the other part?\n>>>> \n>>>> Hehe. The second patch is some in-progress work to add the GRANTED BY\n>>>> clause to the regular GRANT command. More on that perhaps at a later date.\n>>> \n>>> Here is the highly anticipated and quite underwhelming second part of\n>>> this patch set.\n>> Looks great, but no test to confirm it works. I would suggest adding a\n>> test and committing directly since I don't see any cause for further\n>> discussion.\n> \n> Committed with some tests. Thanks.\n\nWhile looking at the proposed privileges.sql test patch from Mark Dilger [0] I\nrealized that the commit above seems to have missed the RevokeRoleStmt syntax.\nAs per the SQL Spec it should be supported there as well AFAICT.
Was this\nintentional or should the attached small diff be applied to fix it?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n[0] 333B0203-D19B-4335-AE64-90EB0FAF46F0@enterprisedb.com", "msg_date": "Tue, 16 Nov 2021 15:04:11 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Allow CURRENT_ROLE in GRANTED BY" }, { "msg_contents": "> On 16 Nov 2021, at 15:04, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> ..or should the attached small diff be applied to fix it?\n\nActually it shouldn't, I realized when hitting Send that it was the wrong\nversion. The attached is the proposed diff.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Tue, 16 Nov 2021 15:27:25 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Allow CURRENT_ROLE in GRANTED BY" }, { "msg_contents": "On 16.11.21 15:27, Daniel Gustafsson wrote:\n>> On 16 Nov 2021, at 15:04, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> ..or should the attached small diff be applied to fix it?\n> \n> Actually it shouldn't, I realized when hitting Send that it was the wrong\n> version. The attached is the proposed diff.\n\nThis appears to have been an oversight.\n\n\n", "msg_date": "Thu, 18 Nov 2021 14:41:07 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Allow CURRENT_ROLE in GRANTED BY" }, { "msg_contents": "\n\n> On 18 Nov 2021, at 14:41, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 16.11.21 15:27, Daniel Gustafsson wrote:\n>>>> On 16 Nov 2021, at 15:04, Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> ..or should the attached small diff be applied to fix it?\n>> Actually it shouldn't, I realized when hitting Send that it was the wrong\n>> version.
The attached is the proposed diff.\n> \n> This appears to have been an oversight.\n\nThanks for confirming, I’ll take another pass over the proposed diff in a bit.\n\n", "msg_date": "Thu, 18 Nov 2021 14:42:46 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Allow CURRENT_ROLE in GRANTED BY" }, { "msg_contents": "> On 18 Nov 2021, at 14:42, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 18 Nov 2021, at 14:41, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>> \n>> On 16.11.21 15:27, Daniel Gustafsson wrote:\n>>>>> On 16 Nov 2021, at 15:04, Daniel Gustafsson <daniel@yesql.se> wrote:\n>>>> ..or should the attached small diff be applied to fix it?\n>>> Actually it shouldn't, I realized when hitting Send that it was the wrong\n>>> version. The attached is the proposed diff.\n>> \n>> This appears to have been an oversight.\n> \n> Thanks for confirming, I’ll take another pass over the proposed diff in a bit.\n\nPolished a little and pushed to master with a backpatch to 14 where it was\nintroduced.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 26 Nov 2021 14:17:05 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Allow CURRENT_ROLE in GRANTED BY" } ]
[ { "msg_contents": "In PG13, we raised the server-side default of ssl_min_protocol_version \nto TLSv1.2. We also added a connection setting named \nssl_min_protocol_version to libpq. But AFAICT, the default value of the \nlibpq setting is empty, so any protocol version will be accepted. Is \nthis what we wanted? Should we raise the default in libpq as well?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 24 Jun 2020 08:39:26 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "should libpq also require TLSv1.2 by default?" }, { "msg_contents": "> On 24 Jun 2020, at 08:39, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> In PG13, we raised the server-side default of ssl_min_protocol_version to TLSv1.2. We also added a connection setting named ssl_min_protocol_version to libpq. But AFAICT, the default value of the libpq setting is empty, so any protocol version will be accepted. Is this what we wanted? Should we raise the default in libpq as well?\n\nThis was discussed [0] when the connection settings were introduced, and the\nconcensus was to leave them alone [1] to allow for example a new pg_dump to\nwork against an old server. Re-reading the thread I think the argument still\nholds, but I was about to respond \"yes, let's do this\" before refreshing my\nmemory. Perhaps we should add a comment explaining this along the lines of the\nattached?\n\ncheers ./daniel\n\n[0] https://www.postgresql.org/message-id/157800160408.1198.1714906047977693148.pgcf%40coridan.postgresql.org\n[1] https://www.postgresql.org/message-id/31993.1578321474%40sss.pgh.pa.us", "msg_date": "Wed, 24 Jun 2020 10:33:22 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: should libpq also require TLSv1.2 by default?"
}, { "msg_contents": "On Wed, Jun 24, 2020 at 10:33 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 24 Jun 2020, at 08:39, Peter Eisentraut <\n> peter.eisentraut@2ndquadrant.com> wrote:\n> >\n> > In PG13, we raised the server-side default of ssl_min_protocol_version\n> to TLSv1.2. We also added a connection setting named\n> ssl_min_protocol_version to libpq. But AFAICT, the default value of the\n> libpq setting is empty, so any protocol version will be accepted. Is this\n> what we wanted? Should we raise the default in libpq as well?\n>\n> This was discussed [0] when the connection settings were introduced, and\n> the\n> concensus was to leave them alone [1] to allow for example a new pg_dump to\n> work against an old server. Re-reading the thread I think the argument\n> still\n> holds, but I was about to respond \"yes, let's do this\" before refreshing my\n> memory. Perhaps we should add a comment explaining this along the lines\n> of the\n> attached?\n>\n>\nAnother argument for not changing the default is that if you want to use\nSSL in any meaningful way you have to *already* change the connection\nstring (with sslmode=require or verify-*), so it's not unreasonable to make\nthat consideration at the same time.\n\nIt might also be worth noting that it's not really \"any protocol version\",\nit means it will be \"whatever the openssl configuration says\", I think? For\nexample, debian buster sets:\n\n[system_default_sect]\nMinProtocol = TLSv1.2\n\nWhich I believe means that if your libpq app is running on debian buster,\nit will be min v1.2 already (and it would likely be more useful to use\nssl_min_protocol_version to *lower* that when connecting to older servers).\n\n//Magnus", "msg_date": "Wed, 24 Jun 2020 10:46:17 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: should libpq also require TLSv1.2 by default?" }, { "msg_contents": "> On 24 Jun 2020, at 10:46, Magnus Hagander <magnus@hagander.net> wrote:\n\n> It might also be worth noting that it's not really \"any protocol version\", it means it will be \"whatever the openssl configuration says\", I think?
For example, debian buster sets:\n> \n> [system_default_sect]\n> MinProtocol = TLSv1.2\n> \n> Which I believe means that if your libpq app is running on debian buster, it will be min v1.2 already\n\nCorrect, that being said I'm not sure how common it is for distributions to set\na default protocol version. The macOS versions I have handy doesn't enforce a\ndefault version, nor does Ubuntu 20.04, FreeBSD 12 or OpenBSD 6.5 AFAICT.\n\n> (and it would likely be more useful to use ssl_min_protocol_version to *lower* that when connecting to older servers).\n\nThat is indeed one use-case for the connection parameter.\n\ncheers ./daniel\n\n", "msg_date": "Wed, 24 Jun 2020 11:01:48 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: should libpq also require TLSv1.2 by default?" }, { "msg_contents": "On 2020-06-24 10:33, Daniel Gustafsson wrote:\n>> In PG13, we raised the server-side default of ssl_min_protocol_version to TLSv1.2. We also added a connection setting named ssl_min_protocol_version to libpq. But AFAICT, the default value of the libpq setting is empty, so any protocol version will be accepted. Is this what we wanted? Should we raise the default in libpq as well?\n> \n> This was discussed [0] when the connection settings were introduced, and the\n> concensus was to leave them alone [1] to allow for example a new pg_dump to\n> work against an old server. Re-reading the thread I think the argument still\n> holds, but I was about to respond \"yes, let's do this\" before refreshing my\n> memory.
Perhaps we should add a comment explaining this along the lines of the\n> attached?\n> \n> [0] https://www.postgresql.org/message-id/157800160408.1198.1714906047977693148.pgcf%40coridan.postgresql.org\n> [1] https://www.postgresql.org/message-id/31993.1578321474%40sss.pgh.pa.us\n\nISTM that these discussions went through the same questions and \narguments that were made regarding the server-side change but arrived at \na different conclusion. So I suggest to reconsider this so that we \ndon't ship with contradictory results.\n\nThat doesn't necessarily mean that we have to make a change, but we \nshould make sure our rationale is sound.\n\nNote that all OpenSSL versions that do not support TLSv1.2 also do not \nsupport TLSv1.1. So by saying, in effect, that TLSv1.2 is too new to \nrequire, we are saying that we need to keep supporting TLSv1.0 -- which \nis heavily deprecated. Also note that the first OpenSSL version with \nsupport for TLSv1.2 shipped on March 14, 2012.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 24 Jun 2020 19:57:31 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: should libpq also require TLSv1.2 by default?" }, { "msg_contents": "On Wed, Jun 24, 2020 at 07:57:31PM +0200, Peter Eisentraut wrote:\n> On 2020-06-24 10:33, Daniel Gustafsson wrote:\n> > > In PG13, we raised the server-side default of ssl_min_protocol_version to TLSv1.2. We also added a connection setting named ssl_min_protocol_version to libpq. But AFAICT, the default value of the libpq setting is empty, so any protocol version will be accepted. Is this what we wanted? Should we raise the default in libpq as well?\n> > \n> > This was discussed [0] when the connection settings were introduced, and the\n> > concensus was to leave them alone [1] to allow for example a new pg_dump to\n> > work against an old server. Re-reading the thread I think the argument still\n> > holds, but I was about to respond \"yes, let's do this\" before refreshing my\n> > memory.
Re-reading the thread I think the argument still\n> > holds, but I was about to respond \"yes, let's do this\" before refreshing my\n> > memory. Perhaps we should add a comment explaining this along the lines of the\n> > attached?\n> > \n> > [0] https://www.postgresql.org/message-id/157800160408.1198.1714906047977693148.pgcf%40coridan.postgresql.org\n> > [1] https://www.postgresql.org/message-id/31993.1578321474%40sss.pgh.pa.us\n> \n> ISTM that these discussions went through the same questions and arguments\n> that were made regarding the server-side change but arrived at a different\n> conclusion. So I suggest to reconsider this so that we don't ship with\n> contradictory results.\n> \n> That doesn't necessarily mean that we have to make a change, but we should\n> make sure our rationale is sound.\n> \n> Note that all OpenSSL versions that do not support TLSv1.2 also do not\n> support TLSv1.1. So by saying, in effect, that TLSv1.2 is too new to\n> require, we are saying that we need to keep supporting TLSv1.0 -- which is\n> heavily deprecated. Also note that the first OpenSSL version with support\n> for TLSv1.2 shipped on March 14, 2012.\n\nI do think mismatched SSL requirements between client and server is\nconfusing, though I can see the back-version pg_dump being an issue. \nMaybe a clear error message would help here.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 24 Jun 2020 14:22:10 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: should libpq also require TLSv1.2 by default?" }, { "msg_contents": "> On 24 Jun 2020, at 19:57, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> On 2020-06-24 10:33, Daniel Gustafsson wrote:\n>>> In PG13, we raised the server-side default of ssl_min_protocol_version to TLSv1.2. 
We also added a connection setting named ssl_min_protocol_version to libpq. But AFAICT, the default value of the libpq setting is empty, so any protocol version will be accepted. Is this what we wanted? Should we raise the default in libpq as well?\n>> This was discussed [0] when the connection settings were introduced, and the\n>> concensus was to leave them alone [1] to allow for example a new pg_dump to\n>> work against an old server. Re-reading the thread I think the argument still\n>> holds, but I was about to respond \"yes, let's do this\" before refreshing my\n>> memory. Perhaps we should add a comment explaining this along the lines of the\n>> attached?\n>> [0] https://www.postgresql.org/message-id/157800160408.1198.1714906047977693148.pgcf%40coridan.postgresql.org\n>> [1] https://www.postgresql.org/message-id/31993.1578321474%40sss.pgh.pa.us\n> \n> ISTM that these discussions went through the same questions and arguments that were made regarding the server-side change but arrived at a different conclusion. So I suggest to reconsider this so that we don't ship with contradictory results.\n\nI don't think anyone argues against safe defaults for communication between\nupgraded clients and upgraded servers. That being said; out of the box, an\nupgraded client *will* use TLSv1.2 when connecting to a upgraded server due to\nthe server defaults requirements (assuming the server hasn't been reconfigured\nwith a lower TLS version, but since we're talking defaults we have to assume\nthat).\n\nThe problem comes when an updated client needs to talk to an old server which\nhasn't been upgraded and which use an older OpenSSL version. If we set TLSv1.2\nas the minimum client version, the user will have to amend a connection string\nwhich used to work when talking to a server which hasn't changed.
If we don't\nraise the default, a user to wants nothing lower than TLSv1.2 will have to\namend the connection string.\n\n> That doesn't necessarily mean that we have to make a change, but we should make sure our rationale is sound.\n\nTotally agree. I think I, FWIW, still vote for keeping it at 1.0 to not break\nconnections to old servers, since upgraded/new servers will enforce 1.2\nanyways.\n\nAs mentioned elsewhere in the thread, maybe this is also something which can be\ndone more easily if we improve the error reporting? Right now it's fairly\ncryptic IMO.\n\n> Note that all OpenSSL versions that do not support TLSv1.2 also do not support TLSv1.1. So by saying, in effect, that TLSv1.2 is too new to require, we are saying that we need to keep supporting TLSv1.0 -- which is heavily deprecated. Also note that the first OpenSSL version with support for TLSv1.2 shipped on March 14, 2012.\n\nCorrect, this being the 1.0.1 release which is referred to.\n\ncheers ./daniel\n\n", "msg_date": "Thu, 25 Jun 2020 00:30:03 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: should libpq also require TLSv1.2 by default?" }, { "msg_contents": "On Thu, Jun 25, 2020 at 12:30:03AM +0200, Daniel Gustafsson wrote:\n> I don't think anyone argues against safe defaults for communication between\n> upgraded clients and upgraded servers. That being said; out of the box, an\n> upgraded client *will* use TLSv1.2 when connecting to a upgraded server due to\n> the server defaults requirements (assuming the server hasn't been reconfigured\n> with a lower TLS version, but since we're talking defaults we have to assume\n> that).\n\nMy take here is to let things as they are for libpq.
pg_dump is a very\ngood argument, because we allow backward compatibility with a newer\nversion of the tool, not upward compatibility.\n\n> The problem comes when an updated client needs to talk to an old server which\n> hasn't been upgraded and which uses an older OpenSSL version. If we set TLSv1.2\n> as the minimum client version, the user will have to amend a connection string\n> which used to work when talking to a server which hasn't changed. If we don't\n> raise the default, a user who wants nothing lower than TLSv1.2 will have to\n> amend the connection string.\n\nYeah, and I would not be surprised to see cases where people complain\nto us about that when attempting to reach one of their old boxes,\nbreaking some stuff they have been relying on for years by forcing the\naddition of an ssl_min_protocol_version in the connection string. It is\na more important step that we enforce TLSv1.2 on the server side\nactually, and libpq just follows up automatically with that.\n\n> As mentioned elsewhere in the thread, maybe this is also something which can be\n> done more easily if we improve the error reporting? Right now it's fairly\n> cryptic IMO.\n\nThis part may be tricky to get right, I think, because the error comes\ndirectly from OpenSSL when negotiating the protocol used between the\nclient and the server, like \"no protocols available\" or such.\n--\nMichael", "msg_date": "Thu, 25 Jun 2020 13:41:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: should libpq also require TLSv1.2 by default?" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, Jun 25, 2020 at 12:30:03AM +0200, Daniel Gustafsson wrote:\n>> As mentioned elsewhere in the thread, maybe this is also something which can be\n>> done more easily if we improve the error reporting? 
Right now it's fairly\n>> cryptic IMO.\n\n> This part may be tricky to get right, I think, because the error comes\n> directly from OpenSSL when negotiating the protocol used between the\n> client and the server, like \"no protocols available\" or such.\n\nCan we do something comparable to the backend's HINT protocol, where\nwe add on a comment that's only mostly likely to be right?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Jun 2020 22:50:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: should libpq also require TLSv1.2 by default?" }, { "msg_contents": "On Wed, Jun 24, 2020 at 10:50:39PM -0400, Tom Lane wrote:\n> Can we do something comparable to the backend's HINT protocol, where\n> we add on a comment that's only mostly likely to be right?\n\nOpenSSL publishes its error codes in openssl/sslerr.h, and it looks\nlike the two error codes we would need to worry about are\nSSL_R_UNSUPPORTED_PROTOCOL and SSL_R_NO_PROTOCOLS_AVAILABLE. So we\ncould for example amend open_client_SSL() when negotiating the SSL\nconnection in libpq with error messages or hints that help better than\nthe current state of things, but that also means extra maintenance\non our side to make sure that we keep in sync with new error codes\ncoming from the OpenSSL world.\n--\nMichael", "msg_date": "Thu, 25 Jun 2020 13:41:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: should libpq also require TLSv1.2 by default?" }, { "msg_contents": "On Wed, Jun 24, 2020 at 9:50 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Yeah, and I would not be surprised to see cases where people complain\n> to us about that when attempting to reach one of their old boxes,\n> breaking some stuff they have been relying on for years by forcing the\n> addition of an ssl_min_protocol_version in the connection string. 
It is\n> a more important step that we enforce TLSv1.2 on the server side\n> actually, and libpq just follows up automatically with that.\n\nI wonder how much of a problem this really is. A few quick Google\nsearches suggest that support for TLSv1.2 was added to OpenSSL in\nv1.0.1, which was released in March 2012. If packagers adopted that\nversion for the following PostgreSQL release, they would have had\nTLSv1.2 support from PostgreSQL 9.2 onward. Some people may have taken\nlonger to adopt it, but even if they waited a year or two, all\nversions that they built with older OpenSSL versions would now be out\nof support. It doesn't seem that likely that there are going to be\nthat many people using pg_dump to upgrade directly from PostgreSQL 9.2\n-- or even 9.4 -- to PostgreSQL 13. Skipping six or eight major\nversions in a single upgrade is a little unusual, in my experience.\nAnd even if someone does want to do that, we haven't broken it; it'll\nstill work fine if they are connecting through a UNIX socket, and if\nnot, they can disable SSL or just specify that they're OK with an\nolder protocol version. That doesn't seem like a big deal, especially\nif we can adopt Tom's suggestion of giving them a warning about what\nwent wrong.\n\nLet's also consider the other side of this argument, which is that a\ndecent number of PostgreSQL users are operating in environments where\nthey are required for regulatory compliance to prohibit the use of\nTLSv1.0 and TLSv1.1. Those people will be happy if that is the default\non both the client and the server side. They will probably\nbe somewhat happy anyway, because now we have an option for it, which\nwe didn't before. But they'll be happier if it's the default. Now,\nwe can't please everybody here. Is it more important to please people\nwho would like insecure TLS versions disabled by default, or to please\npeople who want to use insecure TLS versions to back up old servers?\nSeems debatable. 
Based on my own experience, I'd guess there are more\nusers who want to avoid insecure TLS versions than there are users who\nwant to use them to back up very old servers, so I'd tentatively favor\nchanging the default. However, I don't know whether my experiences are\nrepresentative.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 25 Jun 2020 15:41:55 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: should libpq also require TLSv1.2 by default?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I wonder how much of a problem this really is.\n\nYeah. I find Robert's points about that pretty persuasive: by now\nneeding to connect to a server without TLSv1.2 support, *and* needing to\ndo so with SSL on, ought to be a tiny niche use case (much smaller than\nthe number of people who would like a more secure default). If we can\nmake the error message about this be reasonably clear then I don't have\nan objection to changing libpq's default.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Jun 2020 15:57:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: should libpq also require TLSv1.2 by default?" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 24 Jun 2020, at 10:46, Magnus Hagander <magnus@hagander.net> wrote:\n>> It might also be worth noting that it's not really \"any protocol version\", it means it will be \"whatever the openssl configuration says\", I think? For example, debian buster sets:\n>> \n>> [system_default_sect]\n>> MinProtocol = TLSv1.2\n>> \n>> Which I believe means that if your libpq app is running on debian buster, it will be min v1.2 already\n\n> Correct, that being said I'm not sure how common it is for distributions to set\n> a default protocol version. 
The macOS versions I have handy don't enforce a\n> default version, nor does Ubuntu 20.04, FreeBSD 12 or OpenBSD 6.5 AFAICT.\n\nYeah, this. I experimented with connecting current libpq to a 9.2-vintage\nserver I'd built with openssl 0.9.8x, and was surprised to find I couldn't\ndo so unless I explicitly set \"ssl_min_protocol_version=tlsv1\". After\nsome digging I verified that that's because RHEL8's openssl.cnf sets\n\nMinProtocol = TLSv1.2\nMaxProtocol = TLSv1.3\n\nInterestingly, Fedora 32 is laxer:\n\nMinProtocol = TLSv1\nMaxProtocol = TLSv1.3\n\nI concur with Daniel's finding that current macOS and FreeBSD don't\nenforce anything in this area. Nonetheless, for a pretty significant\nfraction of the world, our claim that the default is to not enforce\nany minimum protocol version is a lie.\n\nMy feeling now is that we'd be better off defaulting\nssl_min_protocol_version to something nonempty, just to make this\nbehavior platform-independent. We certainly can't leave the docs\nas they are.\n\nAlso, I confirm that the failure looks like\n\n$ psql -h ... -d \"dbname=postgres sslmode=require\"\npsql: error: could not connect to server: SSL error: unsupported protocol\n\nWhile that's not *that* awful if you realize that \"protocol\" means\nTLS version, many people probably won't realize that without a hint. It does not\nhelp any that the message doesn't mention either the offered TLS version\nor the version limits being enforced. I'm not sure we can do anything\nabout the former, but reducing the number of variables affecting the\nlatter seems like a smart idea.\n\nBTW, the server-side report of the problem looks like\n\nLOG: could not accept SSL connection: wrong version number\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Jun 2020 18:44:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: should libpq also require TLSv1.2 by default?" 
}, { "msg_contents": "On Thu, Jun 25, 2020 at 06:44:05PM -0400, Tom Lane wrote:\n> BTW, the server-side report of the problem looks like\n> \n> LOG: could not accept SSL connection: wrong version number\n\nI like that one. ;-)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Thu, 25 Jun 2020 18:55:59 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: should libpq also require TLSv1.2 by default?" }, { "msg_contents": "> On 26 Jun 2020, at 00:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> My feeling now is that we'd be better off defaulting\n> ssl_min_protocol_version to something nonempty, just to make this\n> behavior platform-independent. We certainly can't leave the docs\n> as they are.\n\nYeah, given the consensus in this thread and your findings I think we should\ndefault to TLSv1.2 as originally proposed.\n\nI still think there will be instances of existing connections to old servers\nthat will all of a sudden break, but it's probably true that it's not a common\nsetup. Optimizing for the majority and helping the minority with documentation\nis IMO the winning move.\n\n> Also, I confirm that the failure looks like\n> \n> $ psql -h ... -d \"dbname=postgres sslmode=require\"\n> psql: error: could not connect to server: SSL error: unsupported protocol\n> \n> While that's not *that* awful, if you realize that \"protocol\" means\n> TLS version, many people probably won't without a hint. It does not\n> help any that the message doesn't mention either the offered TLS version\n> or the version limits being enforced. 
I'm not sure we can do anything\n> about the former, but reducing the number of variables affecting the\n> latter seems like a smart idea.\n\n+1\n\n> BTW, the server-side report of the problem looks like\n> \n> LOG: could not accept SSL connection: wrong version number\n\nI can totally see someone thinking that it's the psql version at client side which\nis referred to and not the TLS protocol version. Perhaps we should add a hint\nthere as well?\n\ncheers ./daniel\n\n", "msg_date": "Fri, 26 Jun 2020 14:33:04 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: should libpq also require TLSv1.2 by default?" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 26 Jun 2020, at 00:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> BTW, the server-side report of the problem looks like\n>> LOG: could not accept SSL connection: wrong version number\n\n> I can totally see someone thinking that it's the psql version at client side which\n> is referred to and not the TLS protocol version. Perhaps we should add a hint\n> there as well?\n\nNot sure. We can't fix it in the case we're mainly concerned about,\nnamely an out-of-support server version. At the same time, it's certainly\ntrue that \"version number\" is way too under-specified in this context.\nMaybe improving this against the day that TLSv2 exists would be smart.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Jun 2020 09:19:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: should libpq also require TLSv1.2 by default?" 
}, { "msg_contents": "Here is a quick attempt at getting libpq and the server to report\nsuitable hint messages about SSL version problems.\n\nThe main thing I don't like about this as formulated is that I hard-wired\nknowledge of the minimum and maximum SSL versions into the hint messages.\nThat's clearly not very maintainable, but it seems really hard to keep the\nmessages readable/useful without giving concrete version numbers.\nAnybody have a better idea? Is there a reasonably direct way to ask\nOpenSSL what its min and max versions are?\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 26 Jun 2020 11:36:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: should libpq also require TLSv1.2 by default?" }, { "msg_contents": "I wrote:\n> Anybody have a better idea? Is there a reasonably direct way to ask\n> OpenSSL what its min and max versions are?\n\nAfter some digging, there apparently is not. At first glance it would\nseem that SSL_get_min_proto_version/SSL_get_max_proto_version should\nhelp, but in reality they're just blindingly useless, because they\nreturn zero in most cases of interest. And when they don't return zero\nthey might give us a code that we don't recognize, so there's no future\nproofing to be had from using them. Plus they don't exist before\nopenssl 1.1.1.\n\nIt looks like, when they exist, we could use them to discover any\nrestrictions openssl.cnf has set on the allowed protocol versions ...\nbut I'm not really convinced that's worth the trouble. If we up the\nlibpq default to TLSv1.2 then there probably won't be any real-world\ncases where openssl.cnf affects our results.\n\nSo I propose the attached. 
The hack in openssl.h to guess the\nmin/max supported versions is certainly nothing but a hack;\nbut I see no way to do better.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 26 Jun 2020 16:22:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: should libpq also require TLSv1.2 by default?" }, { "msg_contents": "> On 26 Jun 2020, at 22:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> I wrote:\n>> Anybody have a better idea? Is there a reasonably direct way to ask\n>> OpenSSL what its min and max versions are?\n> \n> After some digging, there apparently is not.\n\nAFAIK everyone either #ifdefs around the TLS1_x_VERSION macros or around the\nOpenSSL version, and uses hardcoded knowledge based on that. The latter is fairly\nshaky since configure options can disable protocols. At least in past\nversions, the validation for protocol range in OpenSSL ssl_lib was doing pretty\nmuch that too.\n\n> So I propose the attached.\n\nSSL_R_UNKNOWN_PROTOCOL seems to cover cases when someone manages to perform\nsomething which OpenSSL believes is a broken SSLv2 connection, but their own\nclient-level code uses it to refer to SSL as well as TLS. Maybe it's worth\nadding as a belts and suspenders type thing?\n\nI've only had a chance to read the patches, but they read pretty much just like\nwhat I had in mind. +1 on both patches from an eye-ball POV.\n\nIs this targeting v13 or v14? 
In case of the former, the release notes entry\nfor raising the default minimum version should perhaps be tweaked as it now\njust refers to the GUC, which is a tad misleading.\n\n> The hack in openssl.h to guess the\n> min/max supported versions is certainly nothing but a hack;\n> but I see no way to do better.\n\nIf anything it might be useful to document in the comment that we're only\nconcerned with TLS versions; SSL2/3 are disabled in the library initialization.\n\ncheers ./daniel\n\n", "msg_date": "Fri, 26 Jun 2020 23:18:58 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: should libpq also require TLSv1.2 by default?" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> SSL_R_UNKNOWN_PROTOCOL seems to cover cases when someone manages to perform\n> something which OpenSSL believes is a broken SSLv2 connection, but their own\n> client-level code uses it to refer to SSL as well as TLS. Maybe it's worth\n> adding as a belts and suspenders type thing?\n\nNo objection on my part.\n\n> Is this targeting v13 or v14? In case of the former, the release notes entry\n> for raising the default minimum version should perhaps be tweaked as it now\n> just refers to the GUC, which is a tad misleading.\n\nI think Peter is proposing that we change this in v13. I didn't look\nat the release notes; usually we cover this sort of thing in bulk\nwhen we update the release notes later in beta.\n\n> If anything it might be useful to document in the comment that we're only\n> concerned with TLS versions; SSL2/3 are disabled in the library initialization.\n\nGood point.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Jun 2020 17:27:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: should libpq also require TLSv1.2 by default?" 
}, { "msg_contents": "I wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> SSL_R_UNKNOWN_PROTOCOL seems to cover cases when someone manages to perform\n>> something which OpenSSL believes is a broken SSLv2 connection, but their own\n>> client-level code uses it to refer to SSL as well as TLS. Maybe it's worth\n>> adding as a belts and suspenders type thing?\n\n> No objection on my part.\n\n>> If anything it might be useful to document in the comment that we're only\n>> concerned with TLS versions; SSL2/3 are disabled in the library initialization.\n\n> Good point.\n\nPushed with those corrections. I also rewrote the comment about which\nerror codes we'd seen in practice, after realizing that one of my tests\nhad been affected by the presence of \"MinProtocol = TLSv1.2\" in\nRHEL8's openssl.cnf (causing a max setting less than that to be a local\nconfiguration error, not something the server had rejected).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 27 Jun 2020 12:55:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: should libpq also require TLSv1.2 by default?" } ]
[ { "msg_contents": "Hi\nI would like to use a Foreign Data Wrapper (FDW) to connect to a HADOOP cluster which uses KERBEROS authentication.\nis it possible to achieve this ? which FDW should be used ?\n\nThanks in advance\n\nBest Regards\nDidier ROS\nEDF\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. 
If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.", "msg_date": "Wed, 24 Jun 2020 09:05:30 +0000", "msg_from": "ROS Didier <didier.ros@edf.fr>", "msg_from_op": true, "msg_subject": "PostgreSQL and big data - FDW" }, { "msg_contents": "On Wed, Jun 24, 2020 at 09:05:30AM +0000, ROS Didier wrote:\n> Hi\n> \n> I would like to use a Foreign Data Wrapper (FDW) to connect to a HADOOP cluster\n> which uses KERBEROS authentication.\n> \n> is it possible to achieve this ? which FDW should be used ?\n\nWell, I would use the Hadoop FDW:\n\n\thttps://github.com/EnterpriseDB/hdfs_fdw\n\nand it only supports these authentication methods:\n\n\tAuthentication Support\n\n\tThe FDW supports NOSASL and LDAP authentication modes. In order to use\n\tNOSASL do not specify any OPTIONS while creating user mapping. For LDAP\n\tusername and password must be specified in OPTIONS while creating user mapping.\n\nNot every FDW supports every Postgres server authentication method.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 24 Jun 2020 05:13:10 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and big data - FDW" }, { "msg_contents": "Hi Bruce\n\n\tIn the following link : https://www.enterprisedb.com/blog/connecting-hadoop-and-edb-postgres-shrink-big-data-challenges\nWe can see : \n\"Support for various authentication methods (i.e. Kerberos, NOSASL, etc.)\"\n\nSo HDFS_FDW supports Kerberos authentication. How can we be sure of that ? \nCould EDB make a clear statement on this point?\n\nIf so, how do we implement this method ? 
is there any document on this subject ?\n\nThanks in advance.\nBest Regards\n\nDidier ROS\ndidier.ros@edf.fr\nTél. : +33 6 49 51 11 88\n\n\n\n\n-----Message d'origine-----\nDe : bruce@momjian.us [mailto:bruce@momjian.us] \nEnvoyé : mercredi 24 juin 2020 11:13\nÀ : ROS Didier <didier.ros@edf.fr>\nCc : pgsql-hackers@lists.postgresql.org\nObjet : Re: PostgreSQL and big data - FDW\n\nOn Wed, Jun 24, 2020 at 09:05:30AM +0000, ROS Didier wrote:\n> Hi\n> \n> I would like to use a \n> HADOOP cluster which uses KERBEROS authentication.\n> \n> is it possible to achieve this ? which FDW should be used ?\n\nWell, I would use the Hadoop FDW:\n\n\thttps://github.com/EnterpriseDB/hdfs_fdw\n\nand it only supports these authentication methods:\n\n\tAuthentication Support\n\n\tThe FDW supports NOSASL and LDAP authentication modes. In order to use\n\tNOSASL do not specify any OPTIONS while creating user mapping. For LDAP\n\tusername and password must be specified in OPTIONS while creating user mapping.\n\nNot every FDW supports every Postgres server authentication method.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 24 Jun 2020 12:39:38 +0000", "msg_from": "ROS Didier <didier.ros@edf.fr>", "msg_from_op": true, "msg_subject": "RE: PostgreSQL and big data - FDW" }, { "msg_contents": "On Wed, Jun 24, 2020 at 6:09 PM ROS Didier <didier.ros@edf.fr> wrote:\n\n> Hi Bruce\n>\n> In the following link :\n> https://www.enterprisedb.com/blog/connecting-hadoop-and-edb-postgres-shrink-big-data-challenges\n> We can see :\n> \"Support for various authentication methods (i.e. Kerberos, NOSASL, etc.)\"\n>\n> So HDFS_FDW support kerberos authentication . how to be sure of that ?\n> Could EDB make a clear statement on this point?\n>\n\nHDFS_FDW does not support kerberos authentication.\nThe sentence you have pasted above is from the wish list or say TODO\nlist, here is what it says:\n\"Currently the HDFS_FDW only provides READ capabilities but EDB is planning\nthe following additional functionality:\"\n\nThe functionality was not implemented. 
I think the part of confusion might\nbe\ndue to the formatting of the list in the blog.\n\nYou can follow the README[1] of HDFS_FDW to get an idea of how to use it.\n\n[1] https://github.com/EnterpriseDB/hdfs_fdw/blob/master/README.md\n\nRegards,\nJeevan", "msg_date": "Wed, 24 Jun 2020 21:10:40 +0530", "msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and big data - FDW" }, { "msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Wed, Jun 24, 2020 at 09:05:30AM +0000, ROS Didier wrote:\n> > I would like to use a Foreign Data Wrapper (FDW) to connect to a HADOOP cluster\n> > which uses KERBEROS authentication.\n\nSadly, not really.\n\n> > is it possible to achieve this ? 
which FDW should be used ?\n> \n> Well, I would use the Hadoop FDW:\n> \n> \thttps://github.com/EnterpriseDB/hdfs_fdw\n> \n> and it only supports these authentication methods:\n> \n> \tAuthentication Support\n> \n> \tThe FDW supports NOSASL and LDAP authentication modes. In order to use\n> \tNOSASL do not specify any OPTIONS while creating user mapping. For LDAP\n> \tusername and password must be specified in OPTIONS while creating user mapping.\n> \n> Not every FDW supports every Postgres server authentication method.\n\nThat isn't really the issue here; the problem is really that the GSSAPI\nsupport in PG today doesn't support credential delegation- if it did,\nthen the HDFS FDW (and the postgres FDW) could be easily extended to\nleverage those delegated credentials to connect.\n\nThat's been something that's been on my personal todo list of things to\nwork on but unfortunately I've not, as yet, had time to go implement. I\ndon't actually think it would be very hard- if someone writes it, I'd\ndefinitely review it.\n\nThanks,\n\nStephen", "msg_date": "Wed, 24 Jun 2020 12:53:28 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and big data - FDW" }, { "msg_contents": "Hi Stephen\n\nMy EDF company is very interested in this feature (KERBEROS authentication method and hdfs_fdw ). \nIs it possible to know how many days of development this represents ? who can develop this implementation ? 
what cost ?\n\nBest Regards\nDidier ROS\nEDF\n-----Message d'origine-----\nDe : sfrost@snowman.net [mailto:sfrost@snowman.net] \nEnvoyé : mercredi 24 juin 2020 18:53\nÀ : Bruce Momjian <bruce@momjian.us>\nCc : ROS Didier <didier.ros@edf.fr>; pgsql-hackers@lists.postgresql.org\nObjet : Re: PostgreSQL and big data - FDW\n\nGreetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Wed, Jun 24, 2020 at 09:05:30AM +0000, ROS Didier wrote:\n> > I would like to use a \n> > HADOOP cluster which uses KERBEROS authentication.\n\nSadly, not really.\n\n> > is it possible to achieve this ? which FDW should be used ?\n> \n> Well, I would use the Hadoop FDW:\n> \n> \thttps://github.com/EnterpriseDB/hdfs_fdw\n> \n> and it only supports these authentication methods:\n> \n> \tAuthentication Support\n> \n> \tThe FDW supports NOSASL and LDAP authentication modes. In order to use\n> \tNOSASL do not specify any OPTIONS while creating user mapping. For LDAP\n> \tusername and password must be specified in OPTIONS while creating user mapping.\n> \n> Not every FDW supports every Postgres server authentication method.\n\nThat isn't really the issue here, the problem is really that the GSSAPI support in PG today doesn't support credential delegation- if it did, then the HDFS FDW (and the postgres FDW) could be easily extended to leverage those delegated credentials to connect.\n\nThat's been something that's been on my personal todo list of things to work on but unfortunately I've not, as yet, had time to go implement. I don't actually think it would be very hard- if someone writes it, I'd definitely review it.\n\nThanks,\n\nStephen\n\n\n\n", "msg_date": "Thu, 25 Jun 2020 07:02:37 +0000", "msg_from": "ROS Didier <didier.ros@edf.fr>", "msg_from_op": true, "msg_subject": "RE: PostgreSQL and big data - FDW" }, { "msg_contents": "On Thu, Jun 25, 2020 at 07:02:37AM +0000, ROS Didier wrote:\n> Hi Stephen\n> \n> My EDF company is very interested in this feature (KERBEROS authentication method and hdfs_fdw ). \n> Is it possible to know how many days of development does this represent ? who can develop this implementation ? 
what cost ?\n\nUh, the only thing I can suggest is to contact one of the larger\nPostgres support companies (ones that have developers who understand the\nserver code, or at least the FDW code), and ask them for estimates. The\ncommunity really can't supply any of that, unless you want to do the\nwork and want source code tips.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Thu, 25 Jun 2020 11:36:22 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and big data - FDW" } ]
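For reference, a minimal sketch of the LDAP mode that the hdfs_fdw documentation quoted above describes. Only the username/password user-mapping options come directly from the quoted text; the server option names and values (host, port, auth_type) are assumptions based on that project's README and should be checked against the installed version:

```sql
CREATE EXTENSION hdfs_fdw;

-- Server pointing at a HiveServer2-style endpoint (option names assumed,
-- not verified here; adjust to the hdfs_fdw version actually installed).
CREATE SERVER hdfs_server
    FOREIGN DATA WRAPPER hdfs_fdw
    OPTIONS (host 'hive.example.com', port '10000', auth_type 'LDAP');

-- Per the quoted docs: for LDAP, username and password go in the user mapping.
CREATE USER MAPPING FOR CURRENT_USER
    SERVER hdfs_server
    OPTIONS (username 'didier', password 'secret');

-- For NOSASL, the quoted docs say to create the mapping with no OPTIONS:
-- CREATE USER MAPPING FOR CURRENT_USER SERVER hdfs_server;
```

As Stephen notes, neither mode delegates the client's Kerberos credentials; that would require GSSAPI credential-delegation support in the server first.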
[ { "msg_contents": "Hi,\n\nCOPY command's FORMAT option allows only all lowercase csv, text or\nbinary, this is true because strcmp is being used while parsing these\nvalues.\n\nIt would be nice if uppercase or mixed-case format options such as\nCSV, TEXT, BINARY, Csv, Text, Binary and so on were also allowed.\n\nTo achieve this, pg_strcasecmp() is used instead of strcmp().\n\nAttached is a patch with the above changes.\n\nRequest the community to review the patch, if it makes sense.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 24 Jun 2020 15:20:34 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] COPY command's data format option allows only lowercase csv,\n text or binary" }, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> COPY command's FORMAT option allows only all lowercase csv, text or\n> binary, this is true because strcmp is being used while parsing these\n> values.\n\nThis is nonsense, actually:\n\nregression=# create table foo (f1 int);\nCREATE TABLE\nregression=# copy foo from stdin (format CSV);\nEnter data to be copied followed by a newline.\nEnd with a backslash and a period on a line by itself, or an EOF signal.\n\nAs that shows, there's already a round of lowercasing done by the parser.\nThe only way that strcasecmp in copy.c would be useful is if you wanted to\naccept things like\n\tcopy foo from stdin (format \"CSV\");\nI don't find that to be a terribly good idea. The normal implication\nof quoting is that it *prevents* case folding, so why should that\nhappen anyway?\n\nMore generally, though, why would we want to change this policy only\nhere? 
I believe we're reasonably consistent about letting the parser\ndo any required down-casing and then just checking keyword matches\nwith strcmp.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Jun 2020 10:27:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] COPY command's data format option allows only lowercase\n csv, text or binary" }, { "msg_contents": "On Wed, Jun 24, 2020 at 10:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> More generally, though, why would we want to change this policy only\n> here? I believe we're reasonably consistent about letting the parser\n> do any required down-casing and then just checking keyword matches\n> with strcmp.\n\nI've had the feeling in the past that our use of pg_strcasecmp() was a\nbit random. Looking through the output of 'git grep pg_strcasecmp', it\nseems like we don't typically don't use it on option names, but\nsometimes we use it on option values. For instance, DefineCollation()\nuses pg_strcasecmp() on the collproviderstr, and DefineType() uses it\non storageEl; and also, not to be overlooked, defGetBoolean() uses it\nwhen matching true/false/on/off, which probably affects a lot of\nplaces. On the other hand, ExplainQuery() matches the format using\nplain old strcmp(), despite indirectly using pg_strcasecmp() for the\nBoolean parameters. So I guess I'm not really convinced that there is\nall that much consistency here.\n\nMind you, I'm not sure whether or not anything really needs to be\nchanged, or exactly what ought to be done. 
I'm just making the\nobservation that it might not be as consistent as you may think.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 24 Jun 2020 12:09:35 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] COPY command's data format option allows only lowercase\n csv, text or binary" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Jun 24, 2020 at 10:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> More generally, though, why would we want to change this policy only\n>> here? I believe we're reasonably consistent about letting the parser\n>> do any required down-casing and then just checking keyword matches\n>> with strcmp.\n\n> ... Mind you, I'm not sure whether or not anything really needs to be\n> changed, or exactly what ought to be done. I'm just making the\n> observation that it might not be as consistent as you may think.\n\nYeah, I'm sure there are a few inconsistencies. We previously made a\npass to get rid of pg_strcasecmp for anything that had been through\nthe parser's downcasing (commit fb8697b31) but I wouldn't be surprised\nif that missed a few cases, or if new ones have snuck in. Anyway,\n\"don't use pg_strcasecmp unnecessarily\" was definitely the agreed-to\npolicy as of Jan 2018.\n\nMy vague recollection is that there are a few exceptions (defGetBoolean\nmay well be one of them) where pg_strcasecmp still seemed necessary\nbecause the input might not have come through the parser in some usages.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Jun 2020 12:55:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] COPY command's data format option allows only lowercase\n csv, text or binary" }, { "msg_contents": "On Wed, Jun 24, 2020 at 12:55:22PM -0400, Tom Lane wrote:\n> Yeah, I'm sure there are a few inconsistencies. 
We previously made a\n> pass to get rid of pg_strcasecmp for anything that had been through\n> the parser's downcasing (commit fb8697b31) but I wouldn't be surprised\n> if that missed a few cases, or if new ones have snuck in. Anyway,\n> \"don't use pg_strcasecmp unnecessarily\" was definitely the agreed-to\n> policy as of Jan 2018.\n\n0d8c9c1 has introduced some in parse_basebackup_options() for the\nnew manifest option, and fe30e7e for AlterType(), no?\n\n> My vague recollection is that there are a few exceptions (defGetBoolean\n> may well be one of them) where pg_strcasecmp still seemed necessary\n> because the input might not have come through the parser in some usages.\n\nYep, there were a couple of exceptions. What was done at this time\nwas a case-by-case lookup to check what came only from the parser.\n--\nMichael", "msg_date": "Thu, 25 Jun 2020 11:07:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] COPY command's data format option allows only lowercase\n csv, text or binary" }, { "msg_contents": "> As that shows, there's already a round of lowercasing done by the parser.\n> The only way that strcasecmp in copy.c would be useful is if you wanted to\n> accept things like\n> copy foo from stdin (format \"CSV\");\n> I don't find that to be a terribly good idea. 
The normal implication\n> of quoting is that it *prevents* case folding, so why should that\n> happen anyway?\n>\n\nI was able to see that the parser does the lowercasing for other\nparts of the query; what I had missed was the proper meaning of quoting.\n\nThanks for letting me know this point.\n\nI agree with the above and will not pursue changing that behavior.\n\nPlease ignore this patch.\n\nThank you all for your responses.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Jun 2020 11:38:54 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] COPY command's data format option allows only lowercase\n csv, text or binary" }, { "msg_contents": "On Thu, Jun 25, 2020 at 11:07:33AM +0900, Michael Paquier wrote:\n> 0d8c9c1 has introduced some in parse_basebackup_options() for the\n> new manifest option, and fe30e7e for AlterType(), no?\n\nPlease forget this one. I had a moment of brain fade. Those have\nbeen added for the option values, and on the option names we use\ndirectly strcmp(), so I am not actually seeing a code path on HEAD\nwhere we use pg_strcasecmp for something coming only from the parser.\n--\nMichael", "msg_date": "Mon, 29 Jun 2020 16:08:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] COPY command's data format option allows only lowercase\n csv, text or binary" } ]
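Tom's point about parser-level case folding in this thread can be seen from SQL alone; the following sketch (using COPY TO so no input data is needed) shows unquoted format names being folded before copy.c compares them, while a quoted name is passed through verbatim:

```sql
CREATE TABLE foo (f1 int);

-- All equivalent: the parser folds unquoted identifiers to lower case,
-- so copy.c's plain strcmp() sees 'csv' every time.
COPY foo TO stdout (FORMAT csv);
COPY foo TO stdout (FORMAT CSV);
COPY foo TO stdout (FORMAT Csv);

-- Quoting suppresses case folding, so copy.c receives the string CSV
-- verbatim and rejects it -- which is exactly what quoting is meant to do.
COPY foo TO stdout (FORMAT "CSV");  -- ERROR: COPY format "CSV" not recognized
```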
[ { "msg_contents": "\nHello devs,\n\nI would like to create an \"all defaults\" row, i.e. a row composed of the \ndefault values for all attributes, so I wrote:\n\n   INSERT INTO t() VALUES ();\n\nThis is forbidden by postgres, and also sqlite.\n\nIs there any good reason why this should be the case?\n\n-- \nFabien.\n\n\n", "msg_date": "Wed, 24 Jun 2020 14:18:01 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Why forbid \"INSERT INTO t () VALUES ();\"" }, { "msg_contents": "Fabien COELHO schrieb am 24.06.2020 um 14:18:\n> I would like to create an \"all defaults\" row, i.e. a row composed of the default values for all attributes, so I wrote:\n>\n>   INSERT INTO t() VALUES ();\n>\n> This is forbidden by postgres, and also sqlite.\n>\n> Is there any good reason why this should be the case?\n>\n\nMaybe because\n\n insert into t default values;\n\nexists (and is standard SQL if I'm not mistaken)\n\nThomas\n\n\n\n", "msg_date": "Wed, 24 Jun 2020 14:23:08 +0200", "msg_from": "Thomas Kellerer <shammat@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Why forbid \"INSERT INTO t () VALUES ();\"" }, { "msg_contents": "Hallo Thomas,\n\n>>   INSERT INTO t() VALUES ();\n>>\n>> This is forbidden by postgres, and also sqlite.\n>>\n>> Is there any good reason why this should be the case?\n>\n> Maybe because\n>\n> insert into t default values;\n>\n> exists (and is standard SQL if I'm not mistaken)\n\nThat's a nice alternative I did not notice. 
Well, not an alternative as \nthe other one does not work.\n\nI'm still unclear why it would be forbidden though, it seems logical to \ntry that, whereas the working one is quite away from the usual syntax.\n\n-- \nFabien.", "msg_date": "Wed, 24 Jun 2020 21:34:34 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: Why forbid \"INSERT INTO t () VALUES ();\"" }, { "msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>>> INSERT INTO t() VALUES ();\n\n> I'm still unclear why it would be forbidden though, it seems logical to \n> try that, whereas the working one is quite away from the usual syntax.\n\nIt's forbidden because the SQL standard forbids it.\n\nWe allow zero-column syntaxes in some other places where SQL forbids\nthem, but that's only because there is no reasonable alternative.\nIn this case, there's a perfectly good, standards-compliant alternative.\nSo why encourage people to write unportable code?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Jun 2020 15:54:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why forbid \"INSERT INTO t () VALUES ();\"" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>>>> INSERT INTO t() VALUES ();\n>\n>> I'm still unclear why it would be forbidden though, it seems logical to \n>> try that, whereas the working one is quite away from the usual syntax.\n>\n> It's forbidden because the SQL standard forbids it.\n>\n> We allow zero-column syntaxes in some other places where SQL forbids\n> them, but that's only because there is no reasonable alternative.\n> In this case, there's a perfectly good, standards-compliant alternative.\n> So why encourage people to write unportable code?\n\nFWIW, MySQL (and MariaDB) only support INSERT INTO t () VALUES (), not\nDEFAULT VALUES. We have added syntax for MySQL compatibility in the\npast, e.g. 
the CONCAT() function.\n\n- ilmari\n-- \n"A disappointingly low fraction of the human race is,\n at any given time, on fire." - Stig Sandbeck Mathisen\n\n\n", "msg_date": "Wed, 24 Jun 2020 23:31:00 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": false, "msg_subject": "Re: Why forbid \"INSERT INTO t () VALUES ();\"" }, { "msg_contents": "On Wed, Jun 24, 2020 at 3:31 PM Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>\nwrote:\n\n> FWIW, MySQL (and MariaDB) only support INSERT INTO t () VALUES (), not\n> DEFAULT VALUES.\n\n\nWe have added syntax for MySQL compatibility in the\n> past, e.g. the CONCAT() function.\n>\n\nI don't see the similarities. IIUC there isn't a standard mandated\nfunction that provides the behavior that the concat function does. There\nis an operator but the treatment of NULL is different. So for concat we\ndecided to add a custom function modelled on another DB's custom function.\nAdding custom syntax here when an identically behaving standard syntax\nalready exists has considerably less going for it. I would say that\naccepting the compatibility hit while being the ones that are\nstandard-compliant is in line with project values.\n\nDavid J.\n", "msg_date": "Wed, 24 Jun 2020 16:03:40 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why forbid \"INSERT INTO t () VALUES ();\"" }, { "msg_contents": "\nHello Tom,\n\n>>>> INSERT INTO t() VALUES ();\n>\n>> I'm still unclear why it would be forbidden though, it seems logical to\n>> try that, whereas the working one is quite away from the usual syntax.\n>\n> It's forbidden because the SQL standard forbids it.\n\nOk, that is definitely a reason. I'm not sure it is a good reason, though.\n\nWhy would the standard forbid it? From the language design point of view, \nit is basically having a syntax for lists which would not work for empty \nlists, or a syntax for strings which would not work for empty strings.\n\nIt also means that if for some reason someone wants to insert several such \nall-default rows, they have to repeat the insert, as \"VALUES (), ();\" \nwould not work, so it is also losing a corner-corner case capability \nwithout obvious reason.\n\n> We allow zero-column syntaxes in some other places where SQL forbids\n> them,\n\nThen forbidding there it just adds awkwardness: the same thing works in \none place but not in another. That does not help users.\n\n> but that's only because there is no reasonable alternative. In this \n> case, there's a perfectly good, standards-compliant alternative. So why \n> encourage people to write unportable code?\n\nI doubt that people look at the (costly) standard when writing corner case \nqueries, they just try something logical as I did.\n\nAs some other databases accepts it, and if it is already allowed elsewhere \nin pg, encouraging portability is not the main issue here. I'd rather have 
I'd rather have \nlogic and uniformity accross commands.\n\nIf I'm annoyed enough to send a patch some day, would you veto it because \nit departs from the standard?\n\nAnyway, thanks for the answer!\n\n-- \nFabien.\n\n\n", "msg_date": "Thu, 25 Jun 2020 06:56:10 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: Why forbid \"INSERT INTO t () VALUES ();\"" }, { "msg_contents": "On 6/25/20 6:56 AM, Fabien COELHO wrote:\n> \n> Hello Tom,\n> \n>>>>>   INSERT INTO t() VALUES ();\n>>\n>>> I'm still unclear why it would be forbidden though, it seems logical to\n>>> try that, whereas the working one is quite away from the usual syntax.\n>>\n>> It's forbidden because the SQL standard forbids it.\n> \n> Ok, that is definitely a reason. I'm not sure it is a good reason, though.\n\n\nIt's a very good reason. It might not be good *enough*, but it is a\ngood reason.\n\n\n> Why would the standard forbid it? From the language design point of\n> view[...]\n\n\nDon't go there. There is nothing but pain there.\n\n-- \nVik Fearing\n\n\n", "msg_date": "Thu, 25 Jun 2020 10:39:16 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: Why forbid \"INSERT INTO t () VALUES ();\"" }, { "msg_contents": "On Thu, 25 Jun 2020 at 16:56, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> It also means that if for some reason someone wants to insert several such\n> all-default rows, they have to repeat the insert, as \"VALUES (), ();\"\n> would not work, so it is also losing a corner-corner case capability\n> without obvious reason.\n\nThis is not a vote in either direction but just wanted to say that\nduring 7e413a0f8 where multi-row inserts were added to pg_dump, a\nspecial case had to be added to support tables with no columns. 
We\ncannot do multi-inserts for that so are forced to fall back on\none-row-per-INSERT.\n\nHowever, even if we had this syntax I imagine it would be unlikely\nwe'd change pg_dump to use it since we want to be as standard\ncompliant as possible when dumping INSERTs since it appears the only\ngenuine use-case for that is for importing the data into some other\nrelational database.\n\nDavid\n\n\n", "msg_date": "Thu, 25 Jun 2020 21:27:39 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why forbid \"INSERT INTO t () VALUES ();\"" }, { "msg_contents": "\nBonjour Vik,\n\n>>> It's forbidden because the SQL standard forbids it.\n>>\n>> Ok, that is definitely a reason. I'm not sure it is a good reason, though.\n\n> It's a very good reason. It might not be good *enough*, but it is a\n> good reason.\n\nOk for good, although paradoxically not \"good enough\":-)\n\n>> Why would the standard forbid it? From the language design point of \n>> view[...]\n>\n> Don't go there. There is nothing but pain there.\n\nHmmm. I like to understand. Basically it is my job.\n\nOtherwise, yes and no. 
Postgres could decide (has sometimes decided) to \nextend the syntax or semantics wrt the standard if it makes sense, so that \nwhen a syntax is allowed by the standard it does what the standard says, \nwhich I would call positive compliance and I would support that, but keep \nsome freedom elsewhere.\n\n-- \nFabien.\n\n\n", "msg_date": "Thu, 25 Jun 2020 16:00:03 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: Why forbid \"INSERT INTO t () VALUES ();\"" }, { "msg_contents": "On Thu, Jun 25, 2020 at 12:56 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> It also means that if for some reason someone wants to insert several such\n> all-default rows, they have to repeat the insert, as \"VALUES (), ();\"\n> would not work, so it is also losing a corner-corner case capability\n> without obvious reason.\n\nThat, and a desire to make things work in PostgreSQL that work in\nMySQL, seems like a good-enough reason to me, but YMMV.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 25 Jun 2020 10:51:11 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why forbid \"INSERT INTO t () VALUES ();\"" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Jun 25, 2020 at 12:56 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>> It also means that if for some reason someone wants to insert several such\n>> all-default rows, they have to repeat the insert, as \"VALUES (), ();\"\n>> would not work, so it is also losing a corner-corner case capability\n>> without obvious reason.\n\n> That, and a desire to make things work in PostgreSQL that work in\n> MySQL, seems like a good-enough reason to me, but YMMV.\n\nYeah, the multi-insert case is a plausible reason that hadn't been\nmentioned before. 
On the other hand, you can already do that pretty\npainlessly:\n\nregression=# create table foo(x float8 default random());\nCREATE TABLE\nregression=# insert into foo select from generate_series(1,10);\nINSERT 0 10\nregression=# table foo;\n x \n---------------------\n 0.08414037203059621\n 0.2921176461398325\n 0.8760821189460586\n 0.6266325419285828\n 0.9946880079739273\n 0.4547070342142696\n 0.09683985675118834\n 0.3172576600666268\n 0.5122428845812195\n 0.8823697407826394\n(10 rows)\n\nSo I'm still not convinced we should do this. \"MySQL is incapable\nof conforming to the standard\" is a really lousy reason for us to do\nsomething.\n\nAnyway, to answer Fabien's question about why things are like this:\nthe standard doesn't allow zero-column tables, so most of these\nsyntactic edge cases are forbidden on that ground. We decided we\ndidn't like that restriction (because, for example, it creates a\npainful special case for DROP COLUMN). So we've adjusted a minimal\nset of syntactic edge cases to go along with that semantic change.\nThere's room to argue that INSERT's edge case should be included,\nbut there's also room to argue that it doesn't need to be.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Jun 2020 12:07:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why forbid \"INSERT INTO t () VALUES ();\"" }, { "msg_contents": "On Wed, 24 Jun 2020 at 08:18, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> I would like to create an \"all defaults\" row, i.e. a row composed of the\n> default values for all attributes, so I wrote:\n>\n> INSERT INTO t() VALUES ();\n>\n> This is forbidden by postgres, and also sqlite.\n>\n\nThis is not the only area where empty tuples are not supported. 
Consider:\n\nPRIMARY KEY ()\n\nThis should mean the table may only contain a single row, but is not\nsupported.\n\nAlso, GROUP BY supports grouping by no columns, but not in a systematic\nway: Using aggregate functions with no explicit GROUP BY clause will result\nin grouping by no columns (i.e., entire result set is one group); I also\nfound that I could GROUP BY NULL::integer, abusing the column number\nsyntax. But things like GROUP BY ROLLUP () are not supported.\n\nOn the plus side, empty rows are supported, although the explicit ROW\nkeyword is required.\n", "msg_date": "Thu, 25 Jun 2020 12:35:20 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why forbid \"INSERT INTO t () VALUES ();\"" }, { "msg_contents": "On Thu, Jun 25, 2020 at 12:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, the multi-insert case is a plausible reason that hadn't been\n> mentioned before. 
On the other hand, you can already do that pretty\n> painlessly:\n\nSure, but it means if you're writing code to generate queries\nprogrammatically, then you have to handle the 0-column case completely\ndifferently from all the others. Seems like unnecessary pain for no\nreal reason.\n\nI mean, I generally agree that if the standard says that syntax X\nmeans Y, we should either make X mean Y, or not support X. But if the\nstandard says that X has no meaning at all, I don't think it's a\nproblem for us to make it mean something logical. If we thought\notherwise, we'd have to rip out support for indexes, which would\nprobably not be a winning move. Now, various people, including you and\nI, have made the point that it's bad to give a meaning to some piece\nof syntax that is not current part of the standard but might become\npart of the standard in the future, because then we might end up with\nthe standard saying that X means one thing and PostgreSQL thinking\nthat it means something else. However, that can quickly turn into an\nargument against doing anything that we happen not to like, even if\nthe reason we don't like it has more to do with needing a Snickers bar\nthan any underlying engineering reality. In a case like this, it's\nhard to imagine that () can reasonably mean anything other than a\n0-column tuple. 
It's not impossible that someone could invent another\ninterpretation, and there's been much discussion on this list about\nhow the SQL standards committee is more likely than you'd think to\ncome up with unusual ideas, but I still don't think it's a bad gamble,\nespecially given the MySQL/MariaDB precedent.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 25 Jun 2020 15:06:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why forbid \"INSERT INTO t () VALUES ();\"" }, { "msg_contents": "On 2020-06-25 18:07, Tom Lane wrote:\n> So I'm still not convinced we should do this. \"MySQL is incapable\n> of conforming to the standard\" is a really lousy reason for us to do\n> something.\n\nConformance to the standard means that the syntax described in the \nstandard behaves as specified in the standard. It doesn't mean you \ncan't have additional syntax that is not in the standard.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 26 Jun 2020 09:09:07 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Why forbid \"INSERT INTO t () VALUES ();\"" }, { "msg_contents": "\nHello Isaac,\n\n> This is not the only area where empty tuples are not supported. Consider:\n>\n> PRIMARY KEY ()\n>\n> This should mean the table may only contain a single row, but is not\n> supported.\n\nYep. 
This is exactly the kind of case about which I was trying the \ncommand, after reading Bruce Momjian blog \n(https://momjian.us/main/blogs/pgblog/2020.html#June_22_2020) about \none-row tables and thinking about how to improve it and allow enforcing a \nsingleton simply, which is a thing I needed several times in the past.\n\n> On the plus side, empty rows are supported, although the explicit ROW\n> keyword is required.\n\nYet another weirdness.\n\nThanks for the comments.\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 26 Jun 2020 22:26:24 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: Why forbid \"INSERT INTO t () VALUES ();\"" }, { "msg_contents": "Greetings,\n\n* Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:\n> On 2020-06-25 18:07, Tom Lane wrote:\n> >So I'm still not convinced we should do this. \"MySQL is incapable\n> >of conforming to the standard\" is a really lousy reason for us to do\n> >something.\n> \n> Conformance to the standard means that the syntax described in the standard\n> behaves as specified in the standard. It doesn't mean you can't have\n> additional syntax that is not in the standard.\n\nAgreed in general with the caveat that we don't want to support syntax\nthat the standard might decide later means something else.\n\nFor this case, however, I tend to agree with the other folks on this\nthread who feel that we should add it- since it seems quite unlikely\nthat the standard folks would define this syntax to somehow mean\nsomething else.\n\nThanks,\n\nStephen", "msg_date": "Mon, 29 Jun 2020 10:12:46 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Why forbid \"INSERT INTO t () VALUES ();\"" } ]
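Pulling the thread's workarounds together: the standard-compliant way to get one all-defaults row, and Tom's zero-column SELECT trick for getting several at once, look like this (the table definition is illustrative):

```sql
CREATE TABLE t (id serial PRIMARY KEY, created timestamptz DEFAULT now());

-- INSERT INTO t () VALUES ();   -- rejected: PostgreSQL has no such syntax

-- Standard SQL for a single all-defaults row:
INSERT INTO t DEFAULT VALUES;

-- Several all-defaults rows in one statement: a zero-column SELECT
-- yields one (empty) row per series element, each filled from defaults.
INSERT INTO t SELECT FROM generate_series(1, 3);

TABLE t;  -- four rows, all columns filled from their defaults
```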
[ { "msg_contents": "Hi hackers,\n\nWhile working with Chris Hajas on merging Postgres 12 with Greenplum\nDatabase we stumbled upon the following strange behavior in the geometry\ntype polygon:\n\n------ >8 --------\n\nCREATE TEMP TABLE foo (p point);\nCREATE INDEX ON foo USING gist(p);\n\nINSERT INTO foo VALUES ('0,0'), ('1,1'), ('NaN,NaN');\n\nSELECT $q$\nSELECT * FROM foo WHERE p <@ polygon '(0,0), (0, 100), (100, 100), (100, 0)'\n$q$ AS qry \\gset\n\nBEGIN;\nSAVEPOINT yolo;\nSET LOCAL enable_seqscan TO off;\n:qry;\n\nROLLBACK TO SAVEPOINT yolo;\nSET LOCAL enable_indexscan TO off;\nSET LOCAL enable_bitmapscan TO off;\n:qry;\n\n------ 8< --------\n\nIf you run the above repro SQL in HEAD (and 12, and likely all older\nversions), you get the following output:\n\nCREATE TABLE\nCREATE INDEX\nINSERT 0 3\nBEGIN\nSAVEPOINT\nSET\n p\n-------\n (0,0)\n (1,1)\n(2 rows)\n\nROLLBACK\nSET\nSET\n p\n-----------\n (0,0)\n (1,1)\n (NaN,NaN)\n(3 rows)\n\n\nAt first glance, you'd think this is the gist AM's bad, but on a second\nthought, something else is strange here. The following query returns\ntrue:\n\nSELECT point '(NaN, NaN)' <@ polygon '(0,0), (0, 100), (100, 100), (100, 0)'\n\nThe above behavior of the \"contained in\" operator is surprising, and\nit's probably not what the GiST AM is expecting. I took a look at\npoint_inside() in geo_ops.c, and it doesn't seem well equipped to handle\nNaN. Similarly ill-equipped is dist_ppoly_internal() which underlies the\ndistance operator for polygon. It gives the following interesting\noutput:\n\nSELECT *, c <-> polygon '(0,0),(0,100),(100,100),(100,0)' as distance\nFROM (\n SELECT circle(point(100 * i, 'NaN'), 50) AS c\n FROM generate_series(-2, 4) i\n) t(c)\nORDER BY 2;\n\n c | distance\n-----------------+----------\n <(-200,NaN),50> | 0\n <(-100,NaN),50> | 0\n <(0,NaN),50> | 0\n <(100,NaN),50> | 0\n <(200,NaN),50> | NaN\n <(300,NaN),50> | NaN\n <(400,NaN),50> | NaN\n(7 rows)\n\nShould they all be NaN? 
Am I alone in thinking the index is right but\nthe operators are wrong? Or should we call the indexes wrong here?\n\nCheers,\nJesse and Chris\n\n\n", "msg_date": "Wed, 24 Jun 2020 15:11:03 -0700", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": true, "msg_subject": "Strange behavior with polygon and NaN" }, { "msg_contents": "\nI can confirm that this two-month old email report still produces\ndifferent results with indexes on/off in git master, which I don't think\nis ever correct behavior.\n\n---------------------------------------------------------------------------\n\nOn Wed, Jun 24, 2020 at 03:11:03PM -0700, Jesse Zhang wrote:\n> Hi hackers,\n> \n> While working with Chris Hajas on merging Postgres 12 with Greenplum\n> Database we stumbled upon the following strange behavior in the geometry\n> type polygon:\n> \n> ------ >8 --------\n> \n> CREATE TEMP TABLE foo (p point);\n> CREATE INDEX ON foo USING gist(p);\n> \n> INSERT INTO foo VALUES ('0,0'), ('1,1'), ('NaN,NaN');\n> \n> SELECT $q$\n> SELECT * FROM foo WHERE p <@ polygon '(0,0), (0, 100), (100, 100), (100, 0)'\n> $q$ AS qry \\gset\n> \n> BEGIN;\n> SAVEPOINT yolo;\n> SET LOCAL enable_seqscan TO off;\n> :qry;\n> \n> ROLLBACK TO SAVEPOINT yolo;\n> SET LOCAL enable_indexscan TO off;\n> SET LOCAL enable_bitmapscan TO off;\n> :qry;\n> \n> ------ 8< --------\n> \n> If you run the above repro SQL in HEAD (and 12, and likely all older\n> versions), you get the following output:\n> \n> CREATE TABLE\n> CREATE INDEX\n> INSERT 0 3\n> BEGIN\n> SAVEPOINT\n> SET\n> p\n> -------\n> (0,0)\n> (1,1)\n> (2 rows)\n> \n> ROLLBACK\n> SET\n> SET\n> p\n> -----------\n> (0,0)\n> (1,1)\n> (NaN,NaN)\n> (3 rows)\n> \n> \n> At first glance, you'd think this is the gist AM's bad, but on a second\n> thought, something else is strange here. 
The following query returns\n> true:\n> \n> SELECT point '(NaN, NaN)' <@ polygon '(0,0), (0, 100), (100, 100), (100, 0)'\n> \n> The above behavior of the \"contained in\" operator is surprising, and\n> it's probably not what the GiST AM is expecting. I took a look at\n> point_inside() in geo_ops.c, and it doesn't seem well equipped to handle\n> NaN. Similarly ill-equipped is dist_ppoly_internal() which underlies the\n> distance operator for polygon. It gives the following interesting\n> output:\n> \n> SELECT *, c <-> polygon '(0,0),(0,100),(100,100),(100,0)' as distance\n> FROM (\n> SELECT circle(point(100 * i, 'NaN'), 50) AS c\n> FROM generate_series(-2, 4) i\n> ) t(c)\n> ORDER BY 2;\n> \n> c | distance\n> -----------------+----------\n> <(-200,NaN),50> | 0\n> <(-100,NaN),50> | 0\n> <(0,NaN),50> | 0\n> <(100,NaN),50> | 0\n> <(200,NaN),50> | NaN\n> <(300,NaN),50> | NaN\n> <(400,NaN),50> | NaN\n> (7 rows)\n> \n> Should they all be NaN? Am I alone in thinking the index is right but\n> the operators are wrong? Or should we call the indexes wrong here?\n> \n> Cheers,\n> Jesse and Chris\n> \n> \n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 25 Aug 2020 19:03:50 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "At Tue, 25 Aug 2020 19:03:50 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> \n> I can confirm that this two-month-old email report still produces\n> different results with indexes on/off in git master, which I don't think\n> is ever correct behavior.\n\nI agree that the behavior is broken.
\n\n> ---------------------------------------------------------------------------\n> \n> On Wed, Jun 24, 2020 at 03:11:03PM -0700, Jesse Zhang wrote:\n> > Hi hackers,\n> > \n> > While working with Chris Hajas on merging Postgres 12 with Greenplum\n> > Database we stumbled upon the following strange behavior in the geometry\n> > type polygon:\n> > \n> > ------ >8 --------\n> > \n> > CREATE TEMP TABLE foo (p point);\n> > CREATE INDEX ON foo USING gist(p);\n> > \n> > INSERT INTO foo VALUES ('0,0'), ('1,1'), ('NaN,NaN');\n> > \n> > SELECT $q$\n> > SELECT * FROM foo WHERE p <@ polygon '(0,0), (0, 100), (100, 100), (100, 0)'\n> > $q$ AS qry \\gset\n> > \n> > BEGIN;\n> > SAVEPOINT yolo;\n> > SET LOCAL enable_seqscan TO off;\n> > :qry;\n> > \n> > ROLLBACK TO SAVEPOINT yolo;\n> > SET LOCAL enable_indexscan TO off;\n> > SET LOCAL enable_bitmapscan TO off;\n> > :qry;\n> > \n> > ------ 8< --------\n> > \n> > If you run the above repro SQL in HEAD (and 12, and likely all older\n> > versions), you get the following output:\n> > \n> > CREATE TABLE\n> > CREATE INDEX\n> > INSERT 0 3\n> > BEGIN\n> > SAVEPOINT\n> > SET\n> > p\n> > -------\n> > (0,0)\n> > (1,1)\n> > (2 rows)\n> > \n> > ROLLBACK\n> > SET\n> > SET\n> > p\n> > -----------\n> > (0,0)\n> > (1,1)\n> > (NaN,NaN)\n> > (3 rows)\n> > \n> > \n> > At first glance, you'd think this is the gist AM's bad, but on second\n> > thought, something else is strange here. The following query returns\n> > true:\n> > \n> > SELECT point '(NaN, NaN)' <@ polygon '(0,0), (0, 100), (100, 100), (100, 0)'\n> > \n> > The above behavior of the \"contained in\" operator is surprising, and\n> > it's probably not what the GiST AM is expecting. I took a look at\n> > point_inside() in geo_ops.c, and it doesn't seem well equipped to handle\n> > NaN. Similarly ill-equipped is dist_ppoly_internal() which underlies the\n> > distance operator for polygon.
It gives the following interesting\n> > output:\n> > \n> > SELECT *, c <-> polygon '(0,0),(0,100),(100,100),(100,0)' as distance\n> > FROM (\n> > SELECT circle(point(100 * i, 'NaN'), 50) AS c\n> > FROM generate_series(-2, 4) i\n> > ) t(c)\n> > ORDER BY 2;\n> > \n> > c | distance\n> > -----------------+----------\n> > <(-200,NaN),50> | 0\n> > <(-100,NaN),50> | 0\n> > <(0,NaN),50> | 0\n> > <(100,NaN),50> | 0\n> > <(200,NaN),50> | NaN\n> > <(300,NaN),50> | NaN\n> > <(400,NaN),50> | NaN\n> > (7 rows)\n> > \n> > Should they all be NaN? Am I alone in thinking the index is right but\n> > the operators are wrong? Or should we call the indexes wrong here?\n\n\nThere may be other places where NaN is not fully considered. For\nexample, the following (perpendicular op) returns true.\n\nSELECT lseg '[(0,0),(0,NaN)]' ?-| lseg '[(0,0),(10,0)]';\n\nIt is quite hard to fix all of the defects..\n\nFor the above cases, it's enough to make sure that point_inside doesn't\npass NaNs to lseg_crossing(), but it's a lot of labor to fill all the\nholes in a reasonable and principled way..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 26 Aug 2020 17:25:25 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Tue, 25 Aug 2020 19:03:50 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n>> I can confirm that this two-month-old email report still produces\n>> different results with indexes on/off in git master, which I don't think\n>> is ever correct behavior.\n\n> I agree that the behavior is broken. \n\nYeah, but ... what is \"non broken\" in this case?
I'm not convinced\nthat having point_inside() return zero for any case involving NaN\nis going to lead to noticeably saner behavior than today.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Aug 2020 08:18:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "At Wed, 26 Aug 2020 08:18:49 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > At Tue, 25 Aug 2020 19:03:50 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> >> I can confirm that this two-month-old email report still produces\n> >> different results with indexes on/off in git master, which I don't think\n> >> is ever correct behavior.\n> \n> > I agree that the behavior is broken. \n> \n> Yeah, but ... what is \"non broken\" in this case? I'm not convinced\n> that having point_inside() return zero for any case involving NaN\n> is going to lead to noticeably saner behavior than today.\n\nYes, just doing that leaves much NaN-derived behavior unfixed, but\nat least it seems to me a sane candidate definition that a point\ncannot be inside a polygon when NaN is involved. It's similar to how\nFpxx() returns false if NaN is involved. As mentioned, I hadn't fully\nchecked and haven't considered this seriously, but I changed my mind\nand decided to check all the callers.
I started checking all the callers of\npoint_inside, then finally I had to check all functions in geo_ops.c:(\n\nThe attached is the result as of now.\n\n=== Resulting behavior\n\nWith the attached:\n\n - All boolean functions return false if NaN is involved.\n - All float8 functions return NaN if NaN is involved.\n - All geometric arithmetics return NaN as output if NaN is involved.\n\nWith some exceptions:\n - line_eq: needs to consider that NaNs are equal to each other.\n - point_eq/ne (point_eq_pint): ditto\n - lseg_eq/ne: ditto\n\nThe change makes some difference in the regression test.\nFor example,\n\n<obj containing NaN> <-> <any obj> changed from 0 to NaN. (distance)\n<obj containing NaN> <@ <any obj> changed from true to false. (contained)\n<obj containing NaN> <-> <any obj> changed from 0 to NaN. (distance)\n<obj containing NaN> ?# <any obj> changed from true to false (overlaps)\n\n\n=== pg_hypot mistake?\n\nI noticed that pg_hypot returns inf for the parameters (NaN, Inf) but\nI think NaN is appropriate here since other operators behave that\nway. This change causes a change of distance between point(1e+300,Inf)\nand line (1,-1,0) from infinity to NaN, which I think is correct\nbecause the arithmetic generates NaN as an intermediate value.\n\n\n=== Infinity annoyances\n\nInfinity makes some not-great changes in regression results. For example:\n\n- point '(1e+300,Infinity)' <-> path '((10,20))' returns\n NaN (previously Infinity), but point '(1e+300,Infinity)' <-> path\n '[(1,2),(3,4)]' returns Infinity. The difference of the two\n expressions is whether (0 * Inf = NaN) is performed or not. The\n former performs it but that was concealed by not propagating NaN to\n the upper layer without the patch.\n\n- Without the patch, point '(1e+300,Infinity)' ## box '(2,2),(0,0)'\n generates '(0,2)', which is utterly wrong. It is because\n box_closept_point evaluates float8_lt(Inf, NaN) as true(!) and sets\n the wrong point when the distance is NaN.
With the patch, the NaN\n makes the result NULL.\n\n- This is not a difference caused by this patch, but for both patched\n and unpatched, point '(1e+300,Inf)' <-> line '{3,0,0}' returns NaN,\n which should be 1e+300. However, the behavior comes from arithmetic\n reasons and wouldn't be a problem.\n\ncreate_index.out is changed since point(NaN,NaN) <@ polygon changed\nfrom true to false, which seems rather saner.\n\nI haven't checked unchanged results but at least changed results seem\nsaner to me.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 27 Aug 2020 20:24:12 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Thursday, 27 August 2020 14:24, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> At Wed, 26 Aug 2020 08:18:49 -0400, Tom Lane tgl@sss.pgh.pa.us wrote in\n>\n> > Kyotaro Horiguchi horikyota.ntt@gmail.com writes:\n> >\n> > > At Tue, 25 Aug 2020 19:03:50 -0400, Bruce Momjian bruce@momjian.us wrote in\n> > >\n> > > > I can confirm that this two-month-old email report still produces\n> > > > different results with indexes on/off in git master, which I don't think\n> > > > is ever correct behavior.\n> >\n> > > I agree that the behavior is broken.\n> >\n> > Yeah, but ... what is \"non broken\" in this case? I'm not convinced\n> > that having point_inside() return zero for any case involving NaN\n> > is going to lead to noticeably saner behavior than today.\n>\n> Yes, just doing that leaves much NaN-derived behavior unfixed, but\n> at least it seems to me a sane candidate definition that a point\n> cannot be inside a polygon when NaN is involved. It's similar to how\n> Fpxx() returns false if NaN is involved. As mentioned, I hadn't fully\n> checked and haven't considered this seriously, but I changed my mind\n> and decided to check all the callers.
I started checking all the callers of\n> point_inside, then finally I had to check all functions in geo_ops.c:(\n>\n\nFor what it's worth, I agree with this definition.\n\n\n> The attached is the result as of now.\n>\n> === Resulting behavior\n>\n> With the attached:\n>\n> - All boolean functions return false if NaN is involved.\n> - All float8 functions return NaN if NaN is involved.\n> - All geometric arithmetics return NaN as output if NaN is involved.\n\nAgreed! This behaviour both conforms to the definition above and is what the patch provides, with the exceptions below.\n\n>\n> With some exceptions:\n>\n> - line_eq: needs to consider that NaNs are equal to each other.\n> - point_eq/ne (point_eq_pint): ditto\n> - lseg_eq/ne: ditto\n>\n> The change makes some difference in the regression test.\n> For example,\n>\n> <obj containing NaN> <-> <any obj> changed from 0 to NaN. (distance)\n>\n>\n> <obj containing NaN> <@ <any obj> changed from true to false. (contained)\n> <obj containing NaN> <-> <any obj> changed from 0 to NaN. (distance)\n> <obj containing NaN> ?# <any obj> changed from true to false (overlaps)\n>\n> === pg_hypot mistake?\n>\n> I noticed that pg_hypot returns inf for the parameters (NaN, Inf) but\n> I think NaN is appropriate here since other operators behave that\n> way. This change causes a change of distance between point(1e+300,Inf)\n> and line (1,-1,0) from infinity to NaN, which I think is correct\n> because the arithmetic generates NaN as an intermediate value.\n>\n> === Infinity annoyances\n>\n> Infinity makes some not-great changes in regression results. For example:\n>\n> - point '(1e+300,Infinity)' <-> path '((10,20))' returns\n> NaN (previously Infinity), but point '(1e+300,Infinity)' <-> path\n> '[(1,2),(3,4)]' returns Infinity. The difference of the two\n> expressions is whether (0 * Inf = NaN) is performed or not.
The\n> former performs it but that was concealed by not propagating NaN to\n> the upper layer without the patch.\n\nAlthough I understand the reasoning for this change, I am not certain I agree with the result. I feel that:\n point '(1e+300,Infinity)' <-> path '((10,20))'\nshould return Infinity. Even if I am wrong to think that, the two results should be expected to behave the same. Am I wrong on that too?\n\n\n>\n> - Without the patch, point '(1e+300,Infinity)' ## box '(2,2),(0,0)'\n> generates '(0,2)', which is utterly wrong. It is because\n> box_closept_point evaluates float8_lt(Inf, NaN) as true(!) and sets\n> the wrong point when the distance is NaN. With the patch, the NaN\n> makes the result NULL.\n\nAgreed.\n\n>\n> - This is not a difference caused by this patch, but for both patched\n> and unpatched, point '(1e+300,Inf)' <-> line '{3,0,0}' returns NaN,\n> which should be 1e+300. However, the behavior comes from arithmetic\n> reasons and wouldn't be a problem.\n>\n> create_index.out is changed since point(NaN,NaN) <@ polygon changed\n> from true to false, which seems rather saner.\n>\n> I haven't checked unchanged results but at least changed results seem\n> saner to me.\n\nAll in all a great patch!\n\nIt is clean, well reasoned and carefully crafted.\n\nDo you think that the documentation needs to get updated to the 'new' behaviour?\n\n\n//Georgios\n\n\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center", "msg_date": "Mon, 07 Sep 2020 12:46:50 +0000", "msg_from": "gkokolatos@pm.me", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "Hello, Georgios.\n\nAt Mon, 07 Sep 2020 12:46:50 +0000, gkokolatos@pm.me wrote in \n> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n> On Thursday, 27 August 2020 14:24, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> \n> > At Wed, 26 Aug 2020 08:18:49 -0400, Tom Lane tgl@sss.pgh.pa.us wrote in\n> >\n> > > Kyotaro Horiguchi horikyota.ntt@gmail.com
writes:\n> > >\n> > > > At Tue, 25 Aug 2020 19:03:50 -0400, Bruce Momjian bruce@momjian.us wrote in\n> > > >\n> > > > > I can confirm that this two-month-old email report still produces\n> > > > > different results with indexes on/off in git master, which I don't think\n> > > > > is ever correct behavior.\n> > >\n> > > > I agree that the behavior is broken.\n> > >\n> > > Yeah, but ... what is \"non broken\" in this case? I'm not convinced\n> > > that having point_inside() return zero for any case involving NaN\n> > > is going to lead to noticeably saner behavior than today.\n> >\n> > Yes, just doing that leaves much NaN-derived behavior unfixed, but\n> > at least it seems to me a sane candidate definition that a point\n> > cannot be inside a polygon when NaN is involved. It's similar to how\n> > Fpxx() returns false if NaN is involved. As mentioned, I hadn't fully\n> > checked and haven't considered this seriously, but I changed my mind\n> > and decided to check all the callers. I started checking all the callers of\n> > point_inside, then finally I had to check all functions in geo_ops.c:(\n> >\n> \n> For what it's worth, I agree with this definition.\n\nThanks.\n\n> > The attached is the result as of now.\n> >\n> > === Resulting behavior\n> >\n> > With the attached:\n> >\n> > - All boolean functions return false if NaN is involved.\n> > - All float8 functions return NaN if NaN is involved.\n> > - All geometric arithmetics return NaN as output if NaN is involved.\n> \n> Agreed! This behaviour both conforms to the definition above and is what the patch provides, with the exceptions below.\n> \n> >\n> > With some exceptions:\n> >\n> > - line_eq: needs to consider that NaNs are equal to each other.\n> > - point_eq/ne (point_eq_pint): ditto\n> > - lseg_eq/ne: ditto\n...\n> > === pg_hypot mistake?\n> >\n> > I noticed that pg_hypot returns inf for the parameters (NaN, Inf) but\n> > I think NaN is appropriate here since other operators behave that\n> > way.
This change causes a change of distance between point(1e+300,Inf)\n> > and line (1,-1,0) from infinity to NaN, which I think is correct\n> > because the arithmetic generates NaN as an intermediate value.\n> >\n> > === Infinity annoyances\n> >\n> > Infinity makes some not-great changes in regression results. For example:\n> >\n> > - point '(1e+300,Infinity)' <-> path '((10,20))' returns\n> > NaN (previously Infinity), but point '(1e+300,Infinity)' <-> path\n> > '[(1,2),(3,4)]' returns Infinity. The difference of the two\n> > expressions is whether (0 * Inf = NaN) is performed or not. The\n> > former performs it but that was concealed by not propagating NaN to\n> > the upper layer without the patch.\n> \n> Although I understand the reasoning for this change, I am not certain I agree with the result. I feel that:\n> point '(1e+300,Infinity)' <-> path '((10,20))'\n> should return Infinity. Even if I am wrong to think that, the two results should be expected to behave the same. Am I wrong on that too?\n\nNo. Actually that's not correct and that just comes from avoiding\nspecial code paths for Infinity. I put more thought into\nline_interpt_line and found that that issue is \"fixed\" by just\nsimplifying formulas by removing invariants. But one more if-block is\nneeded to make the function work in a symmetrical way, though..\n\nHowever, we still have a similar issue.\n\npoint '(Infinity,1e+300)' <-> line '{-1,0,5}' => Infinity\npoint '(Infinity,1e+300)' <-> line '{0,-1,5}' => NaN\npoint '(Infinity,1e+300)' <-> line '{1,1,5}' => NaN\n\nThe second should be 1e+300 and the third infinity. This is because\nline_closept_point takes the distance between the foot of the\nperpendicular line from the point and the point. We can fix the second\ncase by adding special code paths for vertical and horizontal lines,\nbut the third needs another special code path explicitly identifying\nInfinity.
It seems kind of too much..\n\nFinally, I gave up fixing that and added documentation.\n\nAs another issue, (point '(Infinity, 1e+300)' <-> path '((10,20))')\nresults in NaN. That is \"fixed\" by adding a special path for the \"m ==\n0.0\" case, but I'm not sure it's worth doing..\n\nBy the way, I found that float8_div(<normal number>, infinity) errors\nout as underflow. It is inconsistent with the behavior that float8_mul\ndoesn't error out as overflow when Infinity is given. So I fixed it.\n\n> > - This is not a difference caused by this patch, but for both patched\n> > and unpatched, point '(1e+300,Inf)' <-> line '{3,0,0}' returns NaN,\n> > which should be 1e+300. However, the behavior comes from arithmetic\n> > reasons and wouldn't be a problem.\n> >\n> > create_index.out is changed since point(NaN,NaN) <@ polygon changed\n> > from true to false, which seems rather saner.\n> >\n> > I haven't checked unchanged results but at least changed results seem\n> > saner to me.\n> \n> All in all a great patch!\n> \n> It is clean, well reasoned and carefully crafted.\n> \n> Do you think that the documentation needs to get updated to the 'new' behaviour?\n\nHmm. I'm not sure we can guarantee the behavior as documented, but I\ntried writing it in functions-geometry.html.\n\n> NaN and Infinity make geometric functions and operators behave\n> inconsistently. Geometric operators or functions that return a boolean\n> return false for operands that contain NaNs. Number-returning\n> functions and operators return NaN in most cases but sometimes\n> return a valid value if no NaNs are met during\n> calculation. Object-returning ones yield an object that contains NaNs\n> depending on the operation. Likewise the objects containing Infinity\n> can make geometric operators and functions behave inconsistently. For\n> example (point '(Infinity,Infinity)' <-> line '{-1,0,5}') is Infinity\n> but (point '(Infinity,Infinity)' <-> line '{0,-1,5}') is NaN.
It can\n> never be a value other than these, but you should consider it\n> uncertain how geometric operators behave for objects containing\n> Infinity.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 10 Sep 2020 18:37:09 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "Hi,\r\n\r\napologies for the very, very late reply to your fixes.\r\n\r\nYou have answered/addressed all my questions and concerns. The added documentation\r\nreads well, at least to a non-native English speaker.\r\n\r\nThe patch still applies and as far as I can see the tests are passing.\r\n\r\nIt gets my :+1: and I am changing the status to \"Ready for Committer\".\r\n\r\nFor what little it's worth, I learned a lot from this patch, thank you.\r\n\r\nCheers,\r\nGeorgios\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Mon, 02 Nov 2020 14:43:32 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@protonmail.com>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "At Mon, 02 Nov 2020 14:43:32 +0000, Georgios Kokolatos <gkokolatos@protonmail.com> wrote in \n> Hi,\n> \n> apologies for the very, very late reply to your fixes.\n> \n> You have answered/addressed all my questions and concerns. The added documentation\n> reads well, at least to a non-native English speaker.\n> \n> The patch still applies and as far as I can see the tests are passing.\n> \n> It gets my :+1: and I am changing the status to \"Ready for Committer\".\n> \n> For what little it's worth, I learned a lot from this patch, thank you.\n> \n> Cheers,\n> Georgios\n> \n> The new status of this patch is: Ready for Committer\n\nOh! Thanks.
Since a part of this patch has been committed (thanks to Tom),\nthis is a version rebased on that commit.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 05 Nov 2020 14:07:32 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Thursday, November 5, 2020 6:07 AM, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> At Mon, 02 Nov 2020 14:43:32 +0000, Georgios Kokolatos gkokolatos@protonmail.com wrote in\n>\n> > Hi,\n> > apologies for the very, very late reply to your fixes.\n> > You have answered/addressed all my questions and concerns. The added documentation\n> > reads well, at least to a non-native English speaker.\n> > The patch still applies and as far as I can see the tests are passing.\n> > It gets my :+1: and I am changing the status to \"Ready for Committer\".\n> > For what little it's worth, I learned a lot from this patch, thank you.\n> > Cheers,\n> > Georgios\n> > The new status of this patch is: Ready for Committer\n>\n> Oh! Thanks. Since a part of this patch has been committed (thanks to Tom),\n> this is a version rebased on that commit.\n\nI completely missed that a part got committed.\n\nThank you for your rebased version of the rest. I went through it\nand my initial assessment of '+1' still stands.\n\nThe status remains: Ready for Committer.\n\n//Georgios\n\n>\n> regards.\n>\n> --------------------------------------------------------------------------------------------------------------------------\n>\n> Kyotaro Horiguchi\n> NTT Open Source Software Center", "msg_date": "Mon, 09 Nov 2020 14:35:09 +0000", "msg_from": "gkokolatos@pm.me", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "I spent some time looking this over, and have a few thoughts:\n\n1.
I think it's useful to split the test changes into two patches,\nas I've done below: first, just add the additional row in point_tbl\nand let the fallout from that happen, and then in the second patch\nmake the code changes. This way, it's much clearer what the actual\nbehavioral changes are. Some of them don't look right, either.\nFor instance, in the very first hunk in geometry.out, we have\nthis:\n\n- (Infinity,1e+300) | {1,0,5} | NaN | NaN\n+ (Infinity,1e+300) | {1,0,5} | Infinity | Infinity\n\nwhich seems right, and also this:\n\n- (1e+300,Infinity) | {1,-1,0} | Infinity | Infinity\n- (1e+300,Infinity) | {-0.4,-1,-6} | Infinity | Infinity\n- (1e+300,Infinity) | {-0.000184615384615,-1,15.3846153846} | Infinity | Infinity\n+ (1e+300,Infinity) | {1,-1,0} | NaN | NaN\n+ (1e+300,Infinity) | {-0.4,-1,-6} | NaN | NaN\n+ (1e+300,Infinity) | {-0.000184615384615,-1,15.3846153846} | NaN | NaN\n\nwhich does not. Why aren't these distances infinite as well?\nFor instance, {1,-1,0} is the line \"x = y\". We could argue about\nwhether it'd be sensible to return zero for the distance between that\nand the point (inf,inf), but surely any point with one inf and one\nfinite coordinate must be an infinite distance away from that line.\nThere's nothing ill-defined about that situation.\n\n2. Rather than coding around undesirable behavior of float8_min,\nit seems like it's better to add a primitive to float.h that\ndoes what you want, ie \"NaN if either input is NaN, else the\nsmaller input\". This is more readable, and possibly more efficient\n(depending on whether the compiler is smart enough to optimize\naway redundant isnan checks). I did that in the attached.\n\n3. Looking for other calls of float8_min, I wonder why you did not\ntouch the bounding-box calculations in box_interpt_lseg() or\nboxes_bound_box().\n\n4. The line changes feel a bit weird, like there's no clear idea\nof what a \"valid\" or \"invalid\" line is.
For instance the first\nhunk in line_construct():\n\n+\t\t/* Avoid creating a valid line from an invalid point */\n+\t\tif (unlikely(isnan(pt->y)))\n+\t\t\tresult->C = get_float8_nan();\n\nWhy's it appropriate to set C and only C to NaN?\n\n5. But actually there's a bigger issue with that particular hunk.\nThis code branch is dealing with \"draw a vertical line through this\npoint\", so why should we care what the point's y coordinate is --- that\nis, why is this particular change appropriate at all? The usual rule as\nI understand it is that if a function's result is determined by some of\nits arguments independently of what another argument's value is, then it\ndoesn't matter if that one is NaN, you can still return the same result.\n\n6. I'm a bit uncomfortable with the use of \"bool isnan\" in a couple\nof places. I think it's confusing to use that alongside the isnan()\nmacro. Moreover, it's at least possible that some platforms implement\nisnan() in a way that would break this usage. The C spec specifically\nsays that isnan() is a macro not a function ... but it doesn't commit\nto it being a macro-with-arguments. I think \"anynan\" or something\nlike that would be a better choice of name.\n\n[ a bit later... ] Indeed, I get a compile failure on gaur:\n\ngeo_ops.c: In function 'lseg_closept_lseg':\ngeo_ops.c:2906:17: error: called object 'isnan' is not a function\ngeo_ops.c:2906:32: error: called object 'isnan' is not a function\ngeo_ops.c:2916:16: error: called object 'isnan' is not a function\ngeo_ops.c:2924:16: error: called object 'isnan' is not a function\ngeo_ops.c: In function 'box_closept_point':\ngeo_ops.c:2989:16: error: called object 'isnan' is not a function\ngeo_ops.c:2992:16: error: called object 'isnan' is not a function\ngeo_ops.c:3004:16: error: called object 'isnan' is not a function\ngeo_ops.c:3014:16: error: called object 'isnan' is not a function\nmake: *** [geo_ops.o] Error 1\n\nSo that scenario isn't hypothetical.
Please rename the variables.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 10 Nov 2020 14:30:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "Thank you for the review, Georgios and Tom.\n\nAt Tue, 10 Nov 2020 14:30:08 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> I spent some time looking this over, and have a few thoughts:\n> \n> 1. I think it's useful to split the test changes into two patches,\n> as I've done below: first, just add the additional row in point_tbl\n> and let the fallout from that happen, and then in the second patch\n> make the code changes. This way, it's much clearer what the actual\n> behavioral changes are. Some of them don't look right, either.\n> For instance, in the very first hunk in geometry.out, we have\n> this:\n> \n> - (Infinity,1e+300) | {1,0,5} | NaN | NaN\n> + (Infinity,1e+300) | {1,0,5} | Infinity | Infinity\n> \n> which seems right, and also this:\n\nFor example, ('Infinity', 1e300) <-> {1,0,5}, that is:\n\n line \"x = -5\" <-> point(Inf, 1e300)\n\nSo sqrt((Inf - (-5))^2 + 0^2) = Inf, which looks right.\n\n\n> - (1e+300,Infinity) | {1,-1,0} | Infinity | Infinity\n> - (1e+300,Infinity) | {-0.4,-1,-6} | Infinity | Infinity\n> - (1e+300,Infinity) | {-0.000184615384615,-1,15.3846153846} | Infinity | Infinity\n> + (1e+300,Infinity) | {1,-1,0} | NaN | NaN\n> + (1e+300,Infinity) | {-0.4,-1,-6} | NaN | NaN\n> + (1e+300,Infinity) | {-0.000184615384615,-1,15.3846153846} | NaN | NaN\n> \n> which does not. Why aren't these distances infinite as well?\n> \n> For instance, {1,-1,0} is the line \"x = y\". We could argue about\n> whether it'd be sensible to return zero for the distance between that\n> and the point (inf,inf), but surely any point with one inf and one\n> finite coordinate must be an infinite distance away from that line.\n> There's nothing ill-defined about that situation.\n\nMmm...
(swinging my arms to mimic lines..)\ndist(x = y, (1e300, Inf)) looks indeterminate to me..\n\nThe calculation is performed in the following steps.\n\n1. construct the perpendicular line for the line.\n perpine(1e300, 'Infinity') => {-1, -1, Inf}\n\n2. calculate the cross point.\n crosspoint({-1, -1, Inf}, {1,-1,0}) => (Inf, NaN)\n\n3. calculate the distance from the crosspoint to the point.\n point_dt((Inf, NaN), (1e300, 'Infinity'))\n = HYPOT(Inf - 1e300, NaN - Inf);\n = HYPOT(Inf, NaN);\n\n4. HYPOT's behavior is changed by the patch\n\n Before: HYPOT(Inf, NaN) = Inf\n After : HYPOT(Inf, NaN) = NaN - Result A\n\n\nSo if we want to \"fix\" that, we should fix any, some, or all of 1-3.\n\n1. seems to have no other possible result.\n\n2. crosspoint (x = - y + Inf, x = y) could be (Inf, Inf)?\n\n3. point_dt((Inf, Inf), (1e300, Inf))\n = HYPOT(Inf - 1e300, Inf - Inf)\n = HYPOT(Inf, -NaN)\n = NaN. - Result B\n\n I'm not sure why Inf - Inf is negative, but |Inf-Inf| = NaN is\n reasonable.\n\nThat is, we don't get a \"reasonable\" result this way.\n\n\nThe formula for the distance((x0,y0) - (ax + by + c = 0)) is\n\n |ax0 + by0 + c|/sqrt(a^2 + b^2)\n\n where a = -1, b = -1, c = Inf, x0 = 1e300, y0 = Inf,\n\n abs(-1 * 1e300 + -1 * Inf + Inf) / sqrt(1 + 1)\n = abs(-1e300 - Inf + Inf) / C\n = NaN. - Result C\n\nAll of Results A - C are NaN. In the end, NaN looks to be the right\nresult.\n\nBy the way, that formula is far simpler than what we are doing\nnow. Is there any reason to take the above steps for the calculation?\n\n\n> 2. Rather than coding around undesirable behavior of float8_min,\n> it seems like it's better to add a primitive to float.h that\n> does what you want, ie \"NaN if either input is NaN, else the\n> smaller input\". This is more readable, and possibly more efficient\n> (depending on whether the compiler is smart enough to optimize\n> away redundant isnan checks). I did that in the attached.\n\nSounds reasonable.
I found that I forgot to do the same thing to y\ncoordinate.\n\n> 3. Looking for other calls of float8_min, I wonder why you did not\n> touch the bounding-box calculations in box_interpt_lseg() or\n> boxes_bound_box().\n\nWhile doing that, I didn't make changes just by looking a code locally\nsince I thought that that might be looked as overdone. Maybe, for\nexample box_interpt_lseg, even if bounding-box check overlooked NaNs,\nI thought that the following calcualaions reflect any involved NaNs to\nthe result. (But I'm not confident that that is perfect, though..)\n\n> 4. The line changes feel a bit weird, like there's no clear idea\n> of what a \"valid\" or \"invalid\" line is. For instance the first\n> hunk in line_construct():\n> \n> +\t\t/* Avoid creating a valid line from an invalid point */\n> +\t\tif (unlikely(isnan(pt->y)))\n> +\t\t\tresult->C = get_float8_nan();\n> \n> Why's it appropriate to set C and only C to NaN?\n\nNot limited to here, I intended to reduce the patch footprint as much\nas possible and it seemed that only set C to NaN is sufficient. (But\nI'm not con<snip..>) I don't object to make that change more\ncomprehensively. Do we go that direction?\n\n> 5. But actually there's a bigger issue with that particular hunk.\n> This code branch is dealing with \"draw a vertical line through this\n> point\", so why should we care what the point's y coordinate is --- that\n> is, why is this particular change appropriate at all? The usual rule as\n\nThe calculation mess comes from omitting a part of the component\nvalues during calculation. So:\n\n+ <para>\n+ NaN and Infinity make geometric functions and operators behave\n+ inconsistently. Geometric operators or functions that return a boolean\n+ return false for operands that contain NaNs. Number-returning functions\n+ and operators return NaN in most cases but sometimes return a valid\n+ value if no NaNs are met while actual calculation. 
Object-returning one\n+ yield an object that contain NaNs depending to the operation. Likewise\n\nThe code is following this policy. A point containing NaN yields an\n\"invalid\" line, that is, a line containg NaN.\n\n> I understand it is that if a function's result is determined by some of\n> its arguments independently of what another argument's value is, then it\n> doesn't matter if that one is NaN, you can still return the same result.\n\nThat's true looking from pure calculation point of view, which caused\nsome of the messes.\n\n> 6. I'm a bit uncomfortable with the use of \"bool isnan\" in a couple\n> of places. I think it's confusing to use that alongside the isnan()\n> macro. Moreover, it's at least possible that some platforms implement\n> isnan() in a way that would break this usage. The C spec specifically\n> says that isnan() is a macro not a function ... but it doesn't commit\n> to it being a macro-with-arguments. I think \"anynan\" or something\n> like that would be a better choice of name.\n\nOoo! Rright. I agreed to that. Will fix them.\n\n> [ a bit later... ] Indeed, I get a compile failure on gaur:\n> \n> geo_ops.c: In function 'lseg_closept_lseg':\n> geo_ops.c:2906:17: error: called object 'isnan' is not a function\n> geo_ops.c:2906:32: error: called object 'isnan' is not a function\n> geo_ops.c:2916:16: error: called object 'isnan' is not a function\n> geo_ops.c:2924:16: error: called object 'isnan' is not a function\n> geo_ops.c: In function 'box_closept_point':\n> geo_ops.c:2989:16: error: called object 'isnan' is not a function\n> geo_ops.c:2992:16: error: called object 'isnan' is not a function\n> geo_ops.c:3004:16: error: called object 'isnan' is not a function\n> geo_ops.c:3014:16: error: called object 'isnan' is not a function\n> make: *** [geo_ops.o] Error 1\n> \n> So that scenario isn't hypothetical. Please rename the variables.\n\nlol! gaur looks like coal mine canary.\n\n1. Won't fix the dist_pl/lp's changed behavior.\n2. 
(already fixed?) Will find other instances.\n3. Will do more comprehensive NaN-detection (as another patch)\n4. Ditto.\n5. Keep the curent state. Do we revert that?\n6. Will fix.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 13 Nov 2020 15:35:58 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "At Fri, 13 Nov 2020 15:35:58 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Thank you for the review, Georgios and Tom.\n> \n> At Tue, 10 Nov 2020 14:30:08 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> > I spent some time looking this over, and have a few thoughts:\n> > \n> > 1. I think it's useful to split the test changes into two patches,\n> > as I've done below: first, just add the additional row in point_tbl\n> > and let the fallout from that happen, and then in the second patch\n> > make the code changes. This way, it's much clearer what the actual\n> > behavioral changes are. Some of them don't look right, either.\n> > For instance, in the very first hunk in geometry.out, we have\n> > this:\n> > \n> > - (Infinity,1e+300) | {1,0,5} | NaN | NaN\n> > + (Infinity,1e+300) | {1,0,5} | Infinity | Infinity\n> > \n> > which seems right, and also this:\n> \n> For example, ('Infinity', 1e300) <-> {1,0,5}, that is:\n> \n> line \"x = -5\" <-> point(1e300, Inf)\n> \n> So sqrt((1e300 - 5)^2 + Inf^2) = Inf, which looks right.\n\n??! 
Correction:\n\n It's sqrt((1e300 - 5)^2 + 0^2) = Inf, which looks right.\n\nreagrds.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 13 Nov 2020 15:39:23 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Tue, 10 Nov 2020 14:30:08 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n>> For instance, {1,-1,0} is the line \"x = y\". We could argue about\n>> whether it'd be sensible to return zero for the distance between that\n>> and the point (inf,inf), but surely any point with one inf and one\n>> finite coordinate must be an infinite distance away from that line.\n>> There's nothing ill-defined about that situation.\n\n> Mmm... (swinging my arms to mimic lines..)\n> dist(x = y, (1e300, Inf)) looks indeterminant to me..\n\nWell, what you're showing is that we get an internal overflow,\nessentially, on the way to calculating the result. Which is true,\nso it's sort of accidental that we got a sensible result before.\nNonetheless, we *did* get a sensible result, so producing NaN\ninstead seems like a regression.\n\nWe might need to introduce special-case handling to protect the\nlow-level calculations from ever seeing NaN or Inf in their inputs.\nGetting the right answer to \"just fall out\" of those calculations\nmight be an unreasonable hope.\n\nFor example, for a line with positive slope (A and B of opposite\nsigns), I think that the right answer for points (Inf,Inf) and\n(-Inf,-Inf) should be NaN, on much the same grounds that Inf\nminus Inf is NaN not zero. 
But all other points involving any Inf\ncoordinates are clearly an infinite distance away from that line.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 13 Nov 2020 11:26:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "At Fri, 13 Nov 2020 11:26:21 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > At Tue, 10 Nov 2020 14:30:08 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> >> For instance, {1,-1,0} is the line \"x = y\". We could argue about\n> >> whether it'd be sensible to return zero for the distance between that\n> >> and the point (inf,inf), but surely any point with one inf and one\n> >> finite coordinate must be an infinite distance away from that line.\n> >> There's nothing ill-defined about that situation.\n> \n> > Mmm... (swinging my arms to mimic lines..)\n> > dist(x = y, (1e300, Inf)) looks indeterminant to me..\n> \n> Well, what you're showing is that we get an internal overflow,\n> essentially, on the way to calculating the result. Which is true,\n> so it's sort of accidental that we got a sensible result before.\n> Nonetheless, we *did* get a sensible result, so producing NaN\n> instead seems like a regression.\n\nIndependently of the discussion, the following was wrong.\n\n> 2. calculate the cross point.\n> corsspoint({-1, -1, Inf}, {1,-1,0}) => (Inf, NaN)\n\nThe cross point must be on line 2, that is, x equals y. If we\navoid using x to calculate y, the result comes out right. 
But that doesn't\n\"fix\" the result.\n\n> We might need to introduce special-case handling to protect the\n> low-level calculations from ever seeing NaN or Inf in their inputs.\n> Getting the right answer to \"just fall out\" of those calculations\n> might be an unreasonable hope.\n\nHowever, as long as we calculate the distance between the point and\nthe foot of the perpendicular line from the point to the line, (inf -\ninf) is inevitable and we cannot avoid that \"wrong\" result.\n\n> For example, for a line with positive slope (A and B of opposite\n> signs), I think that the right answer for points (Inf,Inf) and\n> (-Inf,-Inf) should be NaN, on much the same grounds that Inf\n> minus Inf is NaN not zero. But all other points involving any Inf\n> coordinates are clearly an infinite distance away from that line.\n\nAfter some checking I noticed that the calculation with the well-known\nformula was wrong.\n\n> The formula for the distance((x0,y0) - (ax + by + c = 0)) is\n> \n> |ax0 + by0 + c|/sqrt(a^2 + b^2)\n> \n> where a = -1, b = -1, c = Inf, x0 = 1e300, y0 = Inf,\n\na = -1, b = -1, c = \"0\", x0=1e300, y0=Inf results in Inf. Sorry for\nthe mistake.\n\nSo, we can recalculate the result using the formula if we get NaN based\non the perpendicular foot. The reason I left the existing calculation\nis the consistency between the returned perpendicular foot and the\ndistance value, and the reduced complexity in the major code path.\n\n1. So the attached yields \"Inf\" in those cases.\n\n2. Independently of that point, I noticed that the y-coord of the\n perpendicular foot is miscalculated as NaN instead of Inf for the\n cases that are discussed here. (line_interpt_line)\n\n3. I fixed line_construct to construct (NaN, NaN, NaN) if the input\n contains NaNs.\n\n4. Renamed the variable \"isnan\" to \"anynan\" in lseg_closept_lseg() and\n box_closept_point().\n\n5. 
(not in the past comments) line_interpt() needs to check if any of\n the coordinates is NaN since line_interpt_line() is defined to return\n such a result.\n\nA. I'm not sure how to treat addtion/subtruct/multiply between\n points. But thinking that operations as vector calculation returning\n such values are valid. So I left them as it is.\n\n -- Add point\n SELECT p1.f1, p2.f1, p1.f1 + p2.f1 FROM POINT_TBL p1, POINT_TBL p2;\n (NaN,NaN) | (0,0) | (NaN,NaN)\n\nB. @@ lseg (center) returns NaN-containing results. I'm not sure this\n is regarded whether as a vector calculation or as a geometric\n operation. If it is the former we don't fix it and otherwise we\n should reutrn NULL for such input.\n\n =# select @@ lseg('[(NaN,1),(NaN,90)]');\n ?column? \n ------------\n (NaN,45.5)\n (1 row)\n\n\n== Changes in the result ============\n\n1 and 2 above cause visible diffence in some results at the least\nsignificant digit in mantissa, but that difference doesn't matter.\n\n> - (-3,4) | {-0.000184615384615,-1,15.3846153846} | 11.3851690368 | 11.3851690368\n> + (-3,4) | {-0.000184615384615,-1,15.3846153846} | 11.3851690367 | 11.3851690367\n\n1 restored the previous results.\n\n> - (1e+300,Infinity) | {1,-1,0} | NaN | NaN\n> - (1e+300,Infinity) | {-0.4,-1,-6} | NaN | NaN\n> - (1e+300,Infinity) | {-0.000184615384615,-1,15.3846153846} | NaN | NaN\n> + (1e+300,Infinity) | {1,-1,0} | Infinity | Infinity\n> + (1e+300,Infinity) | {-0.4,-1,-6} | Infinity | Infinity\n> + (1e+300,Infinity) | {-0.000184615384615,-1,15.3846153846} | Infinity | Infinity\n> \n> \n> - (Infinity,1e+300) | [(0,-20),(30,-20)] | NaN | NaN\n> + (Infinity,1e+300) | [(0,-20),(30,-20)] | Infinity | Infinity\n> - (Infinity,1e+300) | [(0,0),(3,0),(4,5),(1,6)] | NaN | NaN\n> + (Infinity,1e+300) | [(0,0),(3,0),(4,5),(1,6)] | Infinity | Infinity\n\nLooks fine.\n\n> -- Closest point to line\n> SELECT p.f1, l.s, p.f1 ## l.s FROM POINT_TBL p, LINE_TBL l;\n> - (1e+300,Infinity) | {1,-1,0} | \n> - (1e+300,Infinity) | 
{-0.4,-1,-6} | \n> - (1e+300,Infinity) | {-0.000184615384615,-1,15.3846153846} | \n> + (1e+300,Infinity) | {1,-1,0} | (Infinity,Infinity)\n> + (1e+300,Infinity) | {-0.4,-1,-6} | (-Infinity,Infinity)\n> + (1e+300,Infinity) | {-0.000184615384615,-1,15.3846153846} | (-Infinity,Infinity)\n> \n> -- Distance to line segment\n> SELECT p.f1, l.s, p.f1 <-> l.s AS dist_ps, l.s <-> p.f1 AS dist_sp FROM POINT_TBL p, LSEG_TBL l;\n> - (Infinity,1e+300) | [(0,-20),(30,-20)] | \n> + (Infinity,1e+300) | [(0,-20),(30,-20)] | (30,-20)\n> \n> -- Intersection point with line\n> SELECT l1.s, l2.s, l1.s # l2.s FROM LINE_TBL l1, LINE_TBL l2;\n> - {-0.000184615384615,-1,15.3846153846} | {0,3,0} | (83333.3333333,-1.7763568394e-15)\n> + {-0.000184615384615,-1,15.3846153846} | {0,3,0} | (83333.3333333,0)\n\nThese are fixed by 2.\n\n\n> -- Distance to line\n> SELECT p.f1, l.s, p.f1 <-> l.s AS dist_pl, l.s <-> p.f1 AS dist_lp FROM POINT_TBL p, LINE_TBL l;\n> (1e+300,Infinity) | {-1,0,3} | NaN | NaN\n\nThis should be 1e+300, not NaN, but neither 1 nor 2 fixes this. The\nreason is that line->B(0) * point->y(Infinity) results in NaN. But from the\nmeaning of this expression, it should be 0.\n\nI made line_closept_point() do that but I found a similar issue in\nline_interpt_line().\n\n> -- Closest point to line\n> SELECT p.f1, l.s, p.f1 ## l.s FROM POINT_TBL p, LINE_TBL l;\n> (1e+300,Infinity) | {1,0,5} | (NaN,Infinity)\n\nSo, what is needed here is a special multiplication function\nthat supersedes the 0*Inf = NaN rule with \"0\"*Inf = 0. I introduced that\nfunction as float8_coef_mul(). The reason that the function is in\ngeo_ops.c is that it is geo_ops specific and uses FPzero(), which is\nnot used in float.h. 
By using the function the results are fixed as:\n\n> -- Distance to line\n> SELECT p.f1, l.s, p.f1 <-> l.s AS dist_pl, l.s <-> p.f1 AS dist_lp FROM POINT_TBL p, LINE_TBL l;\n> (1e+300,Infinity) | {-1,0,3} | 1e+300 | 1e+300\n> (Infinity,1e+300) | {0,-1,5} | 1e+300 | 1e+300\n> \n> -- Closest point to line\n> SELECT p.f1, l.s, p.f1 ## l.s FROM POINT_TBL p, LINE_TBL l;\n> (1e+300,Infinity) | {1,0,5} | (-5,Infinity)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 16 Nov 2020 15:16:36 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "I spent some more time looking at this patch.\n\nSome experimentation shows that the changes around bounding box\ncalculation (ie float8_min_nan() and its call sites) seem to be\ncompletely pointless: removing them doesn't change any of the regression\nresults. Nor does using float8_min_nan() in the other two bounding-box\ncalculations I'd asked about. So I think we should just drop that set\nof changes and stick with the rule that bounding box upper and lower\nvalues are sorted as per float.h comparison rules. This isn't that hard\nto work with: if you want to know whether any NaNs are in the box, test\nthe upper limit (and *not* the lower limit) for isnan(). Moreover, even\nif we wanted a different coding rule, we really can't have it because we\nwill still need to work with existing on-disk values that have bounding\nboxes computed the old way.\n\nI don't much like anything about float8_coef_mul(). In the first place,\nFPzero() is an ugly, badly designed condition that we should be trying\nto get rid of not add more dependencies on. In the second place, it's\nreally unclear why allowing 0 times Inf to be something other than NaN\nis a good idea, and it's even less clear why allowing small-but-not-zero\ntimes Inf to be zero rather than Inf is a good idea. 
In the third\nplace, the asymmetry between the two inputs looks more like a bug than\nsomething we should actually want.\n\nAfter some time spent staring at the specific case of line_closept_point\nand its callers, I decided that the real problems there are twofold.\nFirst, the API, or at least the callers' interpretation of this\nundocumented point, is that whether the distance is undefined (NaN) is\nequivalent to whether the closest point is undefined. This is not so;\nin some cases we know that the distance is infinite even though we can't\ncalculate a unique closest point. Second, it's not making any attempt\nto eliminate undefined cases up front. We can do that pretty easily\nby plugging the point's coordinates into the line's equation Ax+By+C\nand seeing whether we get a NaN. The attached 0002 is a subset patch\nthat just fixes these two issues, and I like the results it produces.\n\nI wonder now whether the problems around line_interpt_line() and the\nother intersection-ish functions wouldn't be better handled in a similar\nway, by making changes to their API specs to be clearer about what\nhappens with NaNs and trying to eliminate ill-defined cases explicitly.\nI've not tried to code that though.\n\nChanging pg_hypot() the way you've done here is right out. See the\ncomment for the function: what it is doing now is per all the relevant\nstandards, and your patch breaks that. It's extremely unlikely that\ndoing it differently from IEEE and POSIX is a good idea.\n\nAttached are the same old 0001 (adding the extra point_tbl entry)\nand a small 0002 that fixes just line_closept_point. I've not\ntried to figure out just which of the rest of your diffs should be\ndropped given that. I did note though that the example you add\nto func.sgml doesn't apply to this version of line_closept_point:\n\nregression=# select point '(Infinity,Infinity)' <-> line '{-1,0,5}';\n ?column? 
\n----------\n NaN\n(1 row)\n\nregression=# select point '(Infinity,Infinity)' <-> line '{0,-1,5}';\n ?column? \n----------\n NaN\n(1 row)\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 20 Nov 2020 15:57:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "Further to this ...\n\nI realized after looking at things some more that one of\nline_closept_point's issues is really a bug in line_construct:\nit fails to draw a horizontal line through a point with x = Inf,\nthough surely that's not particularly ill-defined. The reason\nis that somebody thought they could dispense with a special case\nfor m == 0, but then we end up doing\n\n\tresult->C = float8_mi(pt->y, float8_mul(m, pt->x));\n\nand if m = 0 and pt->x = Inf, we get NaN.\n\nIt also annoyed me that the code was still using DBL_MAX instead of a\ntrue Inf to represent infinite slope. That's sort of okay as long as\nit's just a communication mechanism between line_construct and places\nlike line_sl, but it's not really okay, because in some places you can\nget a true infinity from a slope calculation. Thus in HEAD you get\ndifferent results from\n\nregression=# select line(point(1,2),point(1,'inf'));\n line \n----------\n {-1,0,1}\n(1 row)\n\nregression=# select line(point(1,2),point(4,'inf'));\n line \n-------------------------\n {Infinity,-1,-Infinity}\n(1 row)\n\nwhich is completely silly: we ought to \"round off\" that infinitesimal\nslope to a true vertical, rather than producing a line representation\nwe can't do anything with.\n\nSo I fixed that too, but then I got a weird regression test diff:\nthe case of\n\tlseg '[(-10,2),(-10,3)]' ?|| lseg '[(-10,2),(-10,3)]'\nwas no longer returning true. 
The reason turned out to be that\nlseg_parallel does\n\n\tPG_RETURN_BOOL(FPeq(lseg_sl(l1), lseg_sl(l2)));\n\nand now lseg_sl is returning true infinities for vertical lines, and\nFPeq *gets the wrong answer* when asked to compare Inf to Inf. It\nshould say equal, surely, but internally it computes a NaN and ends up\nwith false.\n\nSo the attached 0003 patch also fixes FPeq() and friends to give\nsane answers for Inf-vs-Inf comparisons. That part seems like\na fairly fundamental bug fix, and so I feel like we ought to\ngo ahead and apply it before we do too much more messing with\nthe logic in this area.\n\n(Note that the apparently-large results diff in 0003 is mostly\na whitespace change: the first hunk just reflects slopes coming\nout as Infinity not DBL_MAX.)\n\nI'm reposting 0001 and 0002 just to keep the cfbot happy,\nthey're the same as in my previous message.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 20 Nov 2020 17:26:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "I went ahead and pushed 0001 and 0003 (the latter in two parts), since\nthey didn't seem particularly controversial to me. Just to keep the\ncfbot from whining, here's a rebased version of 0002.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 21 Nov 2020 17:33:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "At Sat, 21 Nov 2020 17:33:53 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> I went ahead and pushed 0001 and 0003 (the latter in two parts), since\n> they didn't seem particularly controversial to me. 
Just to keep the\n> cfbot from whining, here's a rebased version of 0002.\n\nI didn't noticed that inf == inf sould be true (in IEEE754).\n\n# (inf - inf == 0) => false but (inf == inf + 0) == false is somewhat\n# uneasy but, yes, it's the standare we are basing on.\n\nSo, I agree that the changes of line_construct() and line_(inv)sl()\nlooks good to me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 24 Nov 2020 11:39:33 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "At Fri, 20 Nov 2020 15:57:46 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> I spent some more time looking at this patch.\n> \n> Some experimentation shows that the changes around bounding box\n> calculation (ie float8_min_nan() and its call sites) seem to be\n> completely pointless: removing them doesn't change any of the regression\n> results. Nor does using float8_min_nan() in the other two bounding-box\n> calculations I'd asked about. So I think we should just drop that set\n> of changes and stick with the rule that bounding box upper and lower\n> values are sorted as per float.h comparison rules. This isn't that hard\n> to work with: if you want to know whether any NaNs are in the box, test\n> the upper limit (and *not* the lower limit) for isnan(). Moreover, even\n> if we wanted a different coding rule, we really can't have it because we\n> will still need to work with existing on-disk values that have bounding\n> boxes computed the old way.\n\nActually that changes the result since that code gives a shortcut of\nchecking NaNs in the object coordinates. I don't think that the it is\npointless to avoid full calculations that are performed only to find\nNaNs are involved, if bounding box check is meaningful.\n\n> I don't much like anything about float8_coef_mul(). 
In the first place,\n> FPzero() is an ugly, badly designed condition that we should be trying\n> to get rid of not add more dependencies on. In the second place, it's\n> really unclear why allowing 0 times Inf to be something other than NaN\n> is a good idea, and it's even less clear why allowing small-but-not-zero\n> times Inf to be zero rather than Inf is a good idea. In the third\n> place, the asymmetry between the two inputs looks more like a bug than\n> something we should actually want.\n\nI have the same feeling on the function, but I concluded that\ncoefficients and coordinates should be regarded as different things in\nthe practical standpoint.\n\nFor example, consider Ax + By + C == 0, if B is 0.0, we can remove the\nsecond term from the equation, regardless of the value of y, of course\neven if it were inf. that is, The function imitates that kind of\nremovals.\n\n> After some time spent staring at the specific case of line_closept_point\n> and its callers, I decided that the real problems there are twofold.\n> First, the API, or at least the callers' interpretation of this\n> undocumented point, is that whether the distance is undefined (NaN) is\n> equivalent to whether the closest point is undefined. This is not so;\n> in some cases we know that the distance is infinite even though we can't\n> calculate a unique closest point. Second, it's not making any attempt\n> to eliminate undefined cases up front. We can do that pretty easily\n> by plugging the point's coordinates into the line's equation Ax+By+C\n> and seeing whether we get a NaN. The attached 0002 is a subset patch\n> that just fixes these two issues, and I like the results it produces.\n\nActually the code reacts to some \"problem\" cases in a \"wrong\" way:\n\n+\t * If it is unclear whether the point is on the line or not, then the\n+\t * results are ill-defined. 
This eliminates cases where any of the given\n+\t * coordinates are NaN, as well as cases where infinite coordinates give\n+\t * rise to Inf - Inf, 0 * Inf, etc.\n+\t */\n+\tif (unlikely(isnan(float8_pl(float8_pl(float8_mul(line->A, point->x),\n+\t\t\t\t\t\t\t\t\t\t float8_mul(line->B, point->y)),\n+\t\t\t\t\t\t\t\t line->C))))\n\n| postgres=# select point(1e+300, 'Infinity') <-> line('{1,0,5}');\n| ?column? \n| ----------\n| NaN\n\nAren't our guts telling that is 1e+300? You might be thinking to put\nsome special case handling into that path (as mentioned below?), but\notherwise it yeildsa \"wrong\" result. The reason for the expectation\nis that we assume that \"completely vertical\" lines have a constant x\nvalue regardless of the y coordinate. That is the reason for the\nfloat8_coef_mul() function.\n\n> I wonder now whether the problems around line_interpt_line() and the\n> other intersection-ish functions wouldn't be better handled in a similar\n> way, by making changes to their API specs to be clearer about what\n> happens with NaNs and trying to eliminate ill-defined cases explicitly.\n> I've not tried to code that though.\n\nOne of the \"ill-defined\" cases is the zero-coefficient issue. The\nasymmetric multiply function \"fixes\" it, at least. Of course it could\nbe open-coded instead of being as a function that looks as if having\nsome general interpretation.\n\n> Changing pg_hypot() the way you've done here is right out. See the\n> comment for the function: what it is doing now is per all the relevant\n> standards, and your patch breaks that. It's extremely unlikely that\n> doing it differently from IEEE and POSIX is a good idea.\n\nMmm. Ok, I agree to that.\n\n> Attached are the same old 0001 (adding the extra point_tbl entry)\n> and a small 0002 that fixes just line_closept_point. I've not\n> tried to figure out just which of the rest of your diffs should be\n> dropped given that. 
I did note though that the example you add\n> to func.sgml doesn't apply to this version of line_closept_point:\n> \n> regression=# select point '(Infinity,Infinity)' <-> line '{-1,0,5}';\n> ?column? \n> ----------\n> NaN\n> (1 row)\n> \n> regression=# select point '(Infinity,Infinity)' <-> line '{0,-1,5}';\n> ?column? \n> ----------\n> NaN\n> (1 row)\n\nThey root on the same \"zero-coefficient issue\" with my example shown\nabove.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 24 Nov 2020 13:55:44 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Fri, 20 Nov 2020 15:57:46 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n>> I don't much like anything about float8_coef_mul().\n\n> I have the same feeling on the function, but I concluded that\n> coefficients and coordinates should be regarded as different things in\n> the practical standpoint.\n\n> For example, consider Ax + By + C == 0, if B is 0.0, we can remove the\n> second term from the equation, regardless of the value of y, of course\n> even if it were inf. that is, The function imitates that kind of\n> removals.\n\nMeh --- I can see where you're going with that, but I don't much like it.\nI fear that it's as likely to introduce weird behaviors as remove any.\n\nThe core of the issue in\n\n> | postgres=# select point(1e+300, 'Infinity') <-> line('{1,0,5}');\n> | ?column? \n> | ----------\n> | NaN\n\nis that we generate the line y = Inf:\n\n(gdb) p tmp\n$1 = {A = 0, B = -1, C = inf}\n\nand then try to find the intersection with {1,0,5} (x = -5), but that\ncalculation involves 0 * Inf so we get NaNs. It seems reasonable that\nthe intersection should be (-5,Inf), but I don't think we should try\nto force the normal calculation to produce that. 
I think we'd be\nbetter off to explicitly special-case vertical and/or horizontal lines\nin line_interpt_line.\n\nActually though ... even if we successfully got that intersection\npoint, we'd still end up with a NaN distance between (1e300,Inf) and\n(-5,Inf), on account of Inf - Inf being NaN. I think this is correct\nand we'd be ill-advised to try to force it to be something else.\nAlthough we pretend that two Infs are equal for purposes such as\nsorting, they aren't really, so we should not assume that their\ndifference is zero.\n\nSo that line of thought prompts me to tread *very* carefully when\ntrying to dodge NaN results. We need to be certain that we\nintroduce only logically-defensible special cases. Something like\nfloat8_coef_mul() seems much more likely to lead us into errors\nthan away from them.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Nov 2020 12:29:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "(My mailer seems to have recovered from unresponsiveness.)\n\nAt Tue, 24 Nov 2020 12:29:41 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > At Fri, 20 Nov 2020 15:57:46 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> >> I don't much like anything about float8_coef_mul().\n> \n> > I have the same feeling on the function, but I concluded that\n> > coefficients and coordinates should be regarded as different things in\n> > the practical standpoint.\n> \n> > For example, consider Ax + By + C == 0, if B is 0.0, we can remove the\n> > second term from the equation, regardless of the value of y, of course\n> > even if it were inf. 
that is, The function imitates that kind of\n> > removals.\n> \n> Meh --- I can see where you're going with that, but I don't much like it.\n> I fear that it's as likely to introduce weird behaviors as remove any.\n>\n> The core of the issue in\n> \n> > | postgres=# select point(1e+300, 'Infinity') <-> line('{1,0,5}');\n> > | ?column? \n> > | ----------\n> > | NaN\n> \n> is that we generate the line y = Inf:\n> \n> (gdb) p tmp\n> $1 = {A = 0, B = -1, C = inf}\n> \n> and then try to find the intersection with {1,0,5} (x = -5), but that\n> calculation involves 0 * Inf so we get NaNs. It seems reasonable that\n> the intersection should be (-5,Inf), but I don't think we should try\n> to force the normal calculation to produce that. I think we'd be\n> better off to explicitly special-case vertical and/or horizontal lines\n> in line_interpt_line.\n\nI don't object to have explicit special case for vertical lines since\nit is clear than embedding such a function in the formula, but it\nseems equivalent to what the function is doing, that is, treating inf\n* 0.0 as 0.0 in some special cases.\n\n# And after rethinking, the FPzero() used in the function is wrong\n# since the macro (function) is expected to be applied to coordinates,\n# not to coefficients.\n\n> Actually though ... even if we successfully got that intersection\n> point, we'd still end up with a NaN distance between (1e300,Inf) and\n> (-5,Inf), on account of Inf - Inf being NaN. I think this is correct\n> and we'd be ill-advised to try to force it to be something else.\n> Although we pretend that two Infs are equal for purposes such as\n> sorting, they aren't really, so we should not assume that their\n> difference is zero.\n\nThe definition \"inf == inf\" comes from some practical reasons\nuncertain to me, and actually inf - inf yields NaN in IEEE\n754. However, aren't we going to assume a line on which B is exactly\n0.0 as a completely vertical line? 
Thus things are slightiy different\nfrom the IEEE's definition. The \"Inf\" as the y-coord of the\nperpendicular foot is actually \"the same y-coord with the point\". So\nwhat we should do on our definition for the calculation is:\n\nperp-foot (line {1,0,5}, point(1e300, Inf)) => point(-5, <y of the point>)\ndistance (point(1e300, Inf), point(-5, <y of the point>)) => 1e300 (+5)\n\nThis is what the code below is doing:\n\n+\treturn float8_div(fabs(float8_pl(\n+\t\t\t\t\t\t\t float8_pl(\n+\t\t\t\t\t\t\t\t float8_coef_mul(line->A, point->x, false),\n+\t\t\t\t\t\t\t\t float8_coef_mul(line->B, point->y, false)),\n+\t\t\t\t\t\t\t line->C)),\n+\t\t\t\t\t HYPOT(line->A, line->B));\n\n> So that line of thought prompts me to tread *very* carefully when\n> trying to dodge NaN results. We need to be certain that we\n> introduce only logically-defensible special cases. Something like\n> float8_coef_mul() seems much more likely to lead us into errors\n> than away from them.\n\nAgreed on that point. I'm going to rewirte the patch in that\ndirection.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 25 Nov 2020 11:39:39 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "At Wed, 25 Nov 2020 11:39:39 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > So that line of thought prompts me to tread *very* carefully when\n> > trying to dodge NaN results. We need to be certain that we\n> > introduce only logically-defensible special cases. Something like\n> > float8_coef_mul() seems much more likely to lead us into errors\n> > than away from them.\n> \n> Agreed on that point. 
I'm going to rewirte the patch in that\n> direction.\n\nRemoved the function float8_coef_mul().\n\n\nI noticed that the check you proposed to add to line_closept_point\ndoesn't work for the following case:\n\nselect line('{1,-1,0}') <-> point(1e300, 'Infinity');\n\nAx + By + C = 1 * 1e300 + -1 * Inf + 0 = -Inf is not NaN so we go on\nthe following steps.\n\nderive the perpendicular line: => line(-1, -1, Inf}\nderive the cross point : => point(Inf, Inf)\ncalculate the distance : => NaN (which should be Infinity)\n\nSo I left the check whether distance is NaN in this version. In the previous version the check is done before directly calculating the distance, but since we already have the result of Ax+Bx+C so I decided not to use point_dt() in this\nversion.\n\nAlthough I wrote that it should be wrong that applying FPzero() to\ncoefficients, there are some places already doing that so I followed\nthose predecessors.\n\n\nReverted the change of pg_hypot().\n\n\nWhile checking the regression results, I noticed that the follwoing\ncalculation, which seems wrong.\n\nselect line('{3,NaN,5}') = line('{3,NaN,5}');\n ?column? \n----------\n t\n\nBut after looking point_eq(), I decided to let the behavior alone\nsince I'm not sure the reason for the behavior of the functions. At\nleast the comment for point_eq() says that is the delibarate\nbehvior. box_same, poly_same base on the point_eq_point so they behave\nthe same way.\n\n\nBy the way, '=' doesn't compare the shape but compares the area.\nHowever, what is the area of a line? That should be always 0 even if\nwe considered it. And it is also strange that we don't have\ncorresponding comparison ('<' and so) operators. It seems to me as if\na mistake of '~='. 
If it is correct, I should revert the change of\nline_eq() along with fixing operator assignment.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 25 Nov 2020 17:14:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "On 25.11.2020 11:14, Kyotaro Horiguchi wrote:\n> At Wed, 25 Nov 2020 11:39:39 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n>>> So that line of thought prompts me to tread *very* carefully when\n>>> trying to dodge NaN results. We need to be certain that we\n>>> introduce only logically-defensible special cases. Something like\n>>> float8_coef_mul() seems much more likely to lead us into errors\n>>> than away from them.\n>> Agreed on that point. I'm going to rewirte the patch in that\n>> direction.\n> Removed the function float8_coef_mul().\n>\n>\n> I noticed that the check you proposed to add to line_closept_point\n> doesn't work for the following case:\n>\n> select line('{1,-1,0}') <-> point(1e300, 'Infinity');\n>\n> Ax + By + C = 1 * 1e300 + -1 * Inf + 0 = -Inf is not NaN so we go on\n> the following steps.\n>\n> derive the perpendicular line: => line(-1, -1, Inf}\n> derive the cross point : => point(Inf, Inf)\n> calculate the distance : => NaN (which should be Infinity)\n>\n> So I left the check whether distance is NaN in this version. 
In the previous version the check is done before directly calculating the distance, but since we already have the result of Ax+Bx+C so I decided not to use point_dt() in this\n> version.\n>\n> Although I wrote that it should be wrong that applying FPzero() to\n> coefficients, there are some places already doing that so I followed\n> those predecessors.\n>\n>\n> Reverted the change of pg_hypot().\n>\n>\n> While checking the regression results, I noticed that the follwoing\n> calculation, which seems wrong.\n>\n> select line('{3,NaN,5}') = line('{3,NaN,5}');\n> ?column?\n> ----------\n> t\n>\n> But after looking point_eq(), I decided to let the behavior alone\n> since I'm not sure the reason for the behavior of the functions. At\n> least the comment for point_eq() says that is the delibarate\n> behvior. box_same, poly_same base on the point_eq_point so they behave\n> the same way.\n>\n>\n> By the way, '=' doesn't compare the shape but compares the area.\n> However, what is the area of a line? That should be always 0 even if\n> we considered it. And it is also strange that we don't have\n> corresponding comparison ('<' and so) operators. It seems to me as if\n> a mistake of '~='. If it is correct, I should revert the change of\n> line_eq() along with fixing operator assignment.\n>\n> regards.\n>\n\nStatus update for a commitfest entry.\n\nThe commitfest is closed now and this entry is \"Waiting on author\".\nAs far as I see, part of the fixes is already committed. 
Is there \nanything left to work on or this patch needs review/ ready for committer \nnow?\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Tue, 1 Dec 2020 17:06:44 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru> writes:\n> The commitfest is closed now and this entry is \"Waiting on author\".\n> As far as I see, part of the fixes is already committed. Is there \n> anything left to work on or this patch needs review/ ready for committer \n> now?\n\nI think it should be \"needs review\" now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 01 Dec 2020 10:03:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "At Tue, 01 Dec 2020 10:03:42 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> I think it should be \"needs review\" now.\n\nConflicted with some commit(s) uncertain to me. Rebased.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 21 Dec 2020 17:30:11 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "On 12/21/20 3:30 AM, Kyotaro Horiguchi wrote:\n> At Tue, 01 Dec 2020 10:03:42 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n>> I think it should be \"needs review\" now.\n> \n> Conflicted with some commit(s) uncertain to me. 
Rebased.\n\nTom, Georgios, thoughts on the new patch?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Mon, 15 Mar 2021 08:34:00 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "At Mon, 15 Mar 2021 08:34:00 -0400, David Steele <david@pgmasters.net> wrote in \n> On 12/21/20 3:30 AM, Kyotaro Horiguchi wrote:\n> > At Tue, 01 Dec 2020 10:03:42 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote\n> > in\n> >> I think it should be \"needs review\" now.\n> > Conflicted with some commit(s) uncertain to me. Rebased.\n\nWhen I recently re-found the working tree on this topic in my laptop,\nit contained an unfinished work of tristate geometric comparisons. I\nhad forgotten how come I started it but I found the starting thread\n[1]. I've done that work to the level it anyhow looks working.\n\nThe first patch fixes arithmetic functions to handle NaNs, which is\nthe first motive of this thread. The second introduces tri-state\ncomparison as requested in [1]. The tri-state comparison returns\nTS_NULL or SQL-NULL for comparisons involving NaNs. GiST index is\nadjusted so that it treats null as false. 
On the way doing this, the\nbug #17334 [2] and another bug raised earlier [3] are naturally fixed.\n\nThe first patch changes/fixes the behavior of geometric arithmetics as\nfollows *as the result*.\n\n==== 1-a\nDistance between an valid object and an object containing NaNs was 0.\n\nselect line '{-0.4,-1,-6}' <-> line '{3,NaN,5}'; -- 0 => NaN\n\n==== 1-b\nThe distance between a point and a vertical or horizontal line was NaN\nfor some valid cases.\n\nselect point '(1e+300,Infinity)' <-> line '{-1,0,3}'; -- NaN -> Infinity\n\n==== 1-c\nThe closest point of two objects could not be solved if it contains\nInfinity.\n\nselect point '(1e+300,Infinity)' ## line '{1,0,5}'; -- null -> (-5,Infinity)\n\n(I'm not sure the fix works for all possible cases..)\n\n==== 1-d\nContainment involving NaNs was falsely true for some kind of objects.\n\nselect point '(NaN,NaN)' <@ path '((1,2),(3,4))'; -- true -> null\nselect point '(NaN,NaN)' <@ polygon '((2,0),(2,4),(0,0))';-- true -> null\n\n=== 1-e\nThe intersection detection of two objects containing NaNs was true.\n\nselect line '{-1,0,3}' ?# line '{3,NaN,5}'; -- true -> null\n\n\nThe second patch as the result changes/fixes the behavior of\ngeometrical arithmetics as the follows.\n\n==== 2-a\nThe containment detection between a valid shape and a point containing\nNaN(s) was false. This is not necessarirly wrong but it is changed\naccording to [1]\n\nselect point '(0.5, NaN)' <@ box '(0,0,1,1)'; -- false -> null\nselect point '(NaN, NaN)' <@ path '[(0,0),(1,0),(1,1),(0,1)]'; -- false -> null\n\n==== 2-b\nThe equality of two lines containing NaNs can be true or false. This\nis right assuming NaN == NaN. But it is changed according to the\npolicy that NaN makes an object invalid.\n\nselect line '{NaN, 1, NaN}' = line '{NaN, 1, NaN}'; -- true -> null\nselect line '{NaN, 1, NaN}' = line '{NaN, 2, NaN}'; -- false -> null\n\n==== 2-c\nThe result of the following expression changed from Infinity to\nNaN. 
The distance from the point to to the box is indeterminant.\n\nselect box '(-Infinity,500),(-Infinity,100)' <-> point '123,456';\n\nThe internal function is dist_bp(). The difference comes from the\nbehavior that lseg_closept_line() ignores the NaN returned from\nline_closept_point() and uses the last one of the two ends of the lseg\nas the result, which is (-inf, 500). With this patch the same\nfunction returns (NaN, NaN) which leads to null as the final result.\nThe previos behavior (for these particular values) may be correct at a\nglance but it is wrong at the time lseg_closept_line ignores the fact\nthat it could not determine which end is closer.\n\n==== 2-d\n\nThe comparison between objects involving NaNs results was false, but\nit is now null. So NaNs that were shown as a part of the following\nquery are now excluded. (I'm not sure this is the desired result.)\n\nSELECT p1.f1, p2.f1 FROM\n(VALUES (point '0,0'), (point '1,1'), (point 'NaN,NaN')) p1(f1),\n(VALUES (point '0,0'), (point '1,1'), (point 'NaN,NaN')) p2(f1)\nWHERE p1.f1 <> p2.f1;\n\n f1 | f1 \n -----------+-----------\n (0,0) | (1,1)\n- (0,0) | (NaN,NaN)\n (1,1) | (0,0)\n- (1,1) | (NaN,NaN)\n- (NaN,NaN) | (0,0)\n- (NaN,NaN) | (1,1)\n(6 rows)\n\n==== 2-e\n\ncircle_same(~=) returned true for '<(3,5),NaN>' and circle\n'<(3,5),0>', which is definitely bogus.\nnull.\n\nSELECT circle '<(3,5),NaN>' ~= circle '<(3,5),0>'; -- true -> null\nSELECT circle '<(3,5),NaN>' ~= circle '<(3,5),NaN>'; -- true -> null\n\n\n[1] https://www.postgresql.org/message-id/CAOBaU_ZvJGkAuKqfFxQxnsirpaVci_-S3F3M5M1Wzrq1kGyC%3Dg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/17334-135f485c21739caa%40postgresql.org\n[3] https://www.postgresql.org/message-id/CAMbWs4-C9K-8V=cAY7q0ciZmJKBMiUnp_xBGzxgKpEWPKd0bng@mail.gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 15 Dec 2021 16:20:55 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", 
"msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "At Wed, 15 Dec 2021 16:20:55 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> adjusted so that it treats null as false. On the way doing this, the\n> bug #17334 [2] and another bug raised earlier [3] are naturally fixed.\n\nThat being said, even if this patch were committed to the master\nbranch, we won't apply the whole patch set for backbranches.\n\nI guess applying the computeDistance part in the patch works.\n\n# But I found that the part contains a bugX(\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 15 Dec 2021 17:31:00 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "As discussed in [1], we're taking this opportunity to return some\npatchsets that don't appear to be getting enough reviewer interest.\n\nThis is not a rejection, since we don't necessarily think there's\nanything unacceptable about the entry, but it differs from a standard\n\"Returned with Feedback\" in that there's probably not much actionable\nfeedback at all. Rather than code changes, what this patch needs is more\ncommunity interest. 
You might\n\n- ask people for help with your approach,\n- see if there are similar patches that your code could supplement,\n- get interested parties to agree to review your patch in a CF, or\n- possibly present the functionality in a way that's easier to review\n overall.\n\n(Doing these things is no guarantee that there will be interest, but\nit's hopefully better than endlessly rebasing a patchset that is not\nreceiving any feedback from the community.)\n\nOnce you think you've built up some community support and the patchset\nis ready for review, you (or any interested party) can resurrect the\npatch entry by visiting\n\n https://commitfest.postgresql.org/38/2710/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". (Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n[1]\nhttps://postgr.es/m/flat/0ab66589-2f71-69b3-2002-49e821740b0d%40timescale.com\n\n\n", "msg_date": "Mon, 1 Aug 2022 13:29:09 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" }, { "msg_contents": "At Mon, 1 Aug 2022 13:29:09 -0700, Jacob Champion <jchampion@timescale.com> wrote in \n> As discussed in [1], we're taking this opportunity to return some\n> patchsets that don't appear to be getting enough reviewer interest.\n\nOh, sorry. I missed that thread. Thank you for kindly noticing that.\n\n> This is not a rejection, since we don't necessarily think there's\n> anything unacceptable about the entry, but it differs from a standard\n> \"Returned with Feedback\" in that there's probably not much actionable\n> feedback at all. Rather than code changes, what this patch needs is more\n> community interest. 
You might\n> \n> - ask people for help with your approach,\n> - see if there are similar patches that your code could supplement,\n> - get interested parties to agree to review your patch in a CF, or\n> - possibly present the functionality in a way that's easier to review\n> overall.\n> \n> (Doing these things is no guarantee that there will be interest, but\n> it's hopefully better than endlessly rebasing a patchset that is not\n> receiving any feedback from the community.)\n> \n> Once you think you've built up some community support and the patchset\n> is ready for review, you (or any interested party) can resurrect the\n> patch entry by visiting\n> \n> https://commitfest.postgresql.org/38/2710/\n> \n> and changing the status to \"Needs Review\", and then changing the\n> status again to \"Move to next CF\". (Don't forget the second step;\n> hopefully we will have streamlined this in the near future!)\n\nThanks. I don't insist on this patch unless some other people are\ninterested in.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 02 Aug 2022 16:44:23 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Strange behavior with polygon and NaN" } ]
[ { "msg_contents": "Back in [1] I experimented with a patch to coax compilers to build all\nelog/ereport calls that were >= ERROR into a cold path away from the\nfunction rasing the error. At the time, I really just wanted to test\nhow much of a speedup we could get by doing this and ended up just\nwriting up a patch that basically changed all elog(ERROR) calls from:\n\nif (<error situation check>)\n{\n <do stuff>;\n elog(ERROR, \"...\");\n}\n\nto add an unlikely() and become;\n\nif (unlikely(<error situation check>)\n{\n <do stuff>;\n elog(ERROR, \"...\");\n}\n\nPer the report in [1] I did see some pretty good gains in performance\nfrom doing this. The problem was, that at the time I couldn't figure\nout a way to do this without an invasive patch that changed the code\nin the thousands of elog/ereport calls.\n\nIn the attached, I've come up with a way that works. Basically I've\njust added a function named errstart_cold() that is attributed with\n__attribute__((cold)), which will hint to the compiler to keep\nbranches which call this function in a cold path. To make the\ncompiler properly mark just >= ERROR calls as cold, and only when the\nelevel is a constant, I modified the ereport_domain macro to do:\n\nif (__builtin_constant_p(elevel) && (elevel) >= ERROR ? \\\nerrstart_cold(elevel, domain) : \\\nerrstart(elevel, domain)) \\\n\nI see no reason why the compiler shouldn't always fold that constant\nexpression at compile-time and correctly select the correct version of\nthe function for the job. (I also tried using __builtin_choose_expr()\nbut GCC complained when the elevel was not constant, even with the\n__builtin_constant_p() test in the condition)\n\nI sampled a few .s files to inspect what code had changed. Looking at\nmcxt.s, fmgr.s and xlog.s, the first two of these because I found in\n[1] that elogs were being done from those files quite often and xlog.s\nbecause it's pretty big. 
As far as I could see, GCC correctly moved\nall the error reporting stuff where the elevel was a constant and >=\nERROR into the cold path and left the lower-level or non-consts elevel\ncalls alone.\n\nFor clang, I didn't see any changes in the .s files. I suspect that\nthey might have a few smarts in there and see the\n__builtin_unreachable() call and assume the path is cold already based\non that. That was with clang 10. Perhaps older versions are not as\nsmart.\n\nBenchmarking:\n\nFor benchmarking, I've not done a huge amount to test the impacts of\nthis change. However, I can say that I am seeing some fairly good\nimprovements. There seems to be some small improvements to execution\nspeed using TPCH-Q1 and also some good results from a pgbench -S test.\n\nFor TPCH-Q1:\n\nMaster:\n$ pgbench -n -f pg-tpch/queries/q01.sql -T 120 tpch\nlatency average = 5272.630 ms\nlatency average = 5258.610 ms\nlatency average = 5250.871 ms\n\nMaster + elog_ereport_attribute_cold.patch\n$ pgbench -n -f pg-tpch/queries/q01.sql -T 120 tpch\nlatency average = 5182.761 ms\nlatency average = 5194.851 ms\nlatency average = 5183.128 ms\n\nWhich is about a 1.42% increase in performance. 
That's not exactly\ngroundbreaking, but pretty useful to have if that happens to apply\nacross the board for execution performance.\n\nFor pgbench -S:\n\nMy results were a bit noisier than the TPCH test, but the results I\nobtained did show about a 10% increase in performance:\n\nMaster:\ndrowley@amd3990x:~$ pgbench -S -T 120 postgres\ntps = 25245.903255 (excluding connections establishing)\ntps = 26144.454208 (excluding connections establishing)\ntps = 25931.850518 (excluding connections establishing)\n\nMaster + elog_ereport_attribute_cold.patch\ndrowley@amd3990x:~$ pgbench -S -T 120 postgres\ntps = 28351.480631 (excluding connections establishing)\ntps = 27763.656557 (excluding connections establishing)\ntps = 28896.427929 (excluding connections establishing)\n\nIt would be useful if someone with some server-grade Intel hardware\ncould run a few tests on this. The above results are all from AMD\nhardware.\n\nI've attached the patch for this. I'll add it to the July 'fest.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAKJS1f8yqRW3qx2CO9r4bqqvA2Vx68=3awbh8CJWTP9zXeoHMw@mail.gmail.com", "msg_date": "Thu, 25 Jun 2020 13:50:37 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On Wed, Jun 24, 2020 at 9:51 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> $ pgbench -n -f pg-tpch/queries/q01.sql -T 120 tpch\n>\n> Which is about a 1.42% increase in performance. That's not exactly\n> groundbreaking, but pretty useful to have if that happens to apply\n> across the board for execution performance.\n>\n> For pgbench -S:\n>\n> My results were a bit noisier than the TPCH test, but the results I\n> obtained did show about a 10% increase in performance:\n\nThis is pretty cool, particularly because it affects single-client\nperformance. 
It seems like a lot of ideas people have had about\nspeeding up pgbench performance - including me - have improved\nperformance under concurrency at the cost of very slightly degrading\nsingle-client performance. It would be nice to claw some of that back.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 25 Jun 2020 11:53:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "Hi,\n\nThanks for picking this up again!\n\nOn 2020-06-25 13:50:37 +1200, David Rowley wrote:\n> In the attached, I've come up with a way that works. Basically I've\n> just added a function named errstart_cold() that is attributed with\n> __attribute__((cold)), which will hint to the compiler to keep\n> branches which call this function in a cold path.\n\nI recall you trying this before? Has that gotten easier because we\nevolved ereport()/elog(), or has gcc become smarter, or ...?\n\n\n> To make the compiler properly mark just >= ERROR calls as cold, and\n> only when the elevel is a constant, I modified the ereport_domain\n> macro to do:\n> \n> if (__builtin_constant_p(elevel) && (elevel) >= ERROR ? \\\n> errstart_cold(elevel, domain) : \\\n> errstart(elevel, domain)) \\\n\nI think it'd be good to not just do this for ERROR, but also for <=\nDEBUG1. I recall seing quite a few debug elogs that made the code worse\njust by \"being there\".\n\nI suspect that doing this for DEBUG* could also improve the code for\nclang, because we obviously don't have __builtin_unreachable after those.\n\n\n> I see no reason why the compiler shouldn't always fold that constant\n> expression at compile-time and correctly select the correct version of\n> the function for the job. 
(I also tried using __builtin_choose_expr()\n> but GCC complained when the elevel was not constant, even with the\n> __builtin_constant_p() test in the condition)\n\nI think it has to be constant in all paths for that...\n\n\n> Master:\n> drowley@amd3990x:~$ pgbench -S -T 120 postgres\n> tps = 25245.903255 (excluding connections establishing)\n> tps = 26144.454208 (excluding connections establishing)\n> tps = 25931.850518 (excluding connections establishing)\n> \n> Master + elog_ereport_attribute_cold.patch\n> drowley@amd3990x:~$ pgbench -S -T 120 postgres\n> tps = 28351.480631 (excluding connections establishing)\n> tps = 27763.656557 (excluding connections establishing)\n> tps = 28896.427929 (excluding connections establishing)\n\nThat is pretty damn cool.\n\n\n> diff --git a/src/backend/utils/error/elog.c b/src/backend/utils/error/elog.c\n> index e976201030..8076e8af24 100644\n> --- a/src/backend/utils/error/elog.c\n> +++ b/src/backend/utils/error/elog.c\n> @@ -219,6 +219,19 @@ err_gettext(const char *str)\n> #endif\n> }\n> \n> +#if defined(HAVE_PG_ATTRIBUTE_HOT_AND_COLD) && defined(HAVE__BUILTIN_CONSTANT_P)\n> +/*\n> + * errstart_cold\n> + *\t\tA simple wrapper around errstart, but hinted to be cold so that the\n> + *\t\tcompiler is more likely to put error code in a cold area away from the\n> + *\t\tmain function body.\n> + */\n> +bool\n> +pg_attribute_cold errstart_cold(int elevel, const char *domain)\n> +{\n> +\treturn errstart(elevel, domain);\n> +}\n> +#endif\n\nHm. 
Would it make sense to have this be a static inline?\n\n\n> /*\n> * errstart --- begin an error-reporting cycle\n> diff --git a/src/include/c.h b/src/include/c.h\n> index d72b23afe4..087b8af6cb 100644\n> --- a/src/include/c.h\n> +++ b/src/include/c.h\n> @@ -178,6 +178,21 @@\n> #define pg_noinline\n> #endif\n> \n> +/*\n> + * Marking certain functions as \"hot\" or \"cold\" can be useful to assist the\n> + * compiler in arranging the assembly code in a more efficient way.\n> + * These are supported from GCC >= 4.3 and clang >= 3.2\n> + */\n> +#if (defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3))) || \\\n> +\t(defined(__clang__) && (__clang_major__ > 3 || (__clang_major__ == 3 && __clang_minor__ >= 2)))\n> +#define HAVE_PG_ATTRIBUTE_HOT_AND_COLD 1\n> +#define pg_attribute_hot __attribute__((hot))\n> +#define pg_attribute_cold __attribute__((cold))\n> +#else\n> +#define pg_attribute_hot\n> +#define pg_attribute_cold\n> +#endif\n\nWonder if we should start using __has_attribute() for things like this.\n\nhttps://gcc.gnu.org/onlinedocs/cpp/_005f_005fhas_005fattribute.html\n\nI.e. we could do something like\n#ifndef __has_attribute\n#define __has_attribute(attribute) 0\n#endif\n\n#if __has_attribute(hot)\n#define pg_attribute_hot __attribute__((hot))\n#else\n#define pg_attribute_hot\n#endif\n\nclang added __has_attribute in 2.9 (2010), gcc added it in 5 (2014), so\nI don't think we'd loose too much.\n\n\n\n\n> #ifdef HAVE__BUILTIN_CONSTANT_P\n> +#ifdef HAVE_PG_ATTRIBUTE_HOT_AND_COLD\n> +#define ereport_domain(elevel, domain, ...)\t\\\n> +\tdo { \\\n> +\t\tpg_prevent_errno_in_scope(); \\\n> +\t\tif (__builtin_constant_p(elevel) && (elevel) >= ERROR ? 
\\\n> +\t\t\t errstart_cold(elevel, domain) : \\\n> +\t\t\t errstart(elevel, domain)) \\\n> +\t\t\t__VA_ARGS__, errfinish(__FILE__, __LINE__, PG_FUNCNAME_MACRO); \\\n> +\t\tif (__builtin_constant_p(elevel) && (elevel) >= ERROR) \\\n> +\t\t\tpg_unreachable(); \\\n> +\t} while(0)\n> +#else\t\t\t\t\t\t\t/* !HAVE_PG_ATTRIBUTE_HOT_AND_COLD */\n> #define ereport_domain(elevel, domain, ...)\t\\\n> \tdo { \\\n> \t\tpg_prevent_errno_in_scope(); \\\n> @@ -129,6 +141,7 @@\n> \t\tif (__builtin_constant_p(elevel) && (elevel) >= ERROR) \\\n> \t\t\tpg_unreachable(); \\\n> \t} while(0)\n> +#endif\t\t\t\t\t\t\t/* HAVE_PG_ATTRIBUTE_HOT_AND_COLD */\n> #else\t\t\t\t\t\t\t/* !HAVE__BUILTIN_CONSTANT_P */\n> #define ereport_domain(elevel, domain, ...)\t\\\n> \tdo { \\\n> @@ -146,6 +159,9 @@\n\nCould we do this without another copy? Feels like we should be able to\njust do that in the existing #ifdef HAVE__BUILTIN_CONSTANT_P if we just\nadd errstart_cold() independent HAVE_PG_ATTRIBUTE_HOT_AND_COLD.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 25 Jun 2020 09:35:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On Fri, 26 Jun 2020 at 04:35, Andres Freund <andres@anarazel.de> wrote:\n> On 2020-06-25 13:50:37 +1200, David Rowley wrote:\n> > In the attached, I've come up with a way that works. Basically I've\n> > just added a function named errstart_cold() that is attributed with\n> > __attribute__((cold)), which will hint to the compiler to keep\n> > branches which call this function in a cold path.\n>\n> I recall you trying this before? Has that gotten easier because we\n> evolved ereport()/elog(), or has gcc become smarter, or ...?\n\nYeah, I appear to have tried it and found it not to work in [1]. I can\nonly assume GCC got smarter in regards to how it marks a path as cold.\nPreviously it seemed not do due to the do/while(0). 
I'm pretty sure\nback when I tested last that ditching the do while made it work, just\nwe can't really get rid of it.\n\n> > To make the compiler properly mark just >= ERROR calls as cold, and\n> > only when the elevel is a constant, I modified the ereport_domain\n> > macro to do:\n> >\n> > if (__builtin_constant_p(elevel) && (elevel) >= ERROR ? \\\n> > errstart_cold(elevel, domain) : \\\n> > errstart(elevel, domain)) \\\n>\n> I think it'd be good to not just do this for ERROR, but also for <=\n> DEBUG1. I recall seing quite a few debug elogs that made the code worse\n> just by \"being there\".\n\nI think that case is different. We don't want to move the entire elog\npath into the cold path for that. We'd only want to hint that errstart\nis unlikely to return true if elevel <= DEBUG1\n\n> > diff --git a/src/backend/utils/error/elog.c b/src/backend/utils/error/elog.c\n> > index e976201030..8076e8af24 100644\n> > --- a/src/backend/utils/error/elog.c\n> > +++ b/src/backend/utils/error/elog.c\n> > @@ -219,6 +219,19 @@ err_gettext(const char *str)\n> > #endif\n> > }\n> >\n> > +#if defined(HAVE_PG_ATTRIBUTE_HOT_AND_COLD) && defined(HAVE__BUILTIN_CONSTANT_P)\n> > +/*\n> > + * errstart_cold\n> > + * A simple wrapper around errstart, but hinted to be cold so that the\n> > + * compiler is more likely to put error code in a cold area away from the\n> > + * main function body.\n> > + */\n> > +bool\n> > +pg_attribute_cold errstart_cold(int elevel, const char *domain)\n> > +{\n> > + return errstart(elevel, domain);\n> > +}\n> > +#endif\n>\n> Hm. Would it make sense to have this be a static inline?\n\nI thought about that but didn't try it to ensure it still worked ok. I\ndidn't think it was that important to make sure we don't get the extra\nfunction hop for ERRORs. 
It seemed like a case we'd not want to really\noptimise for.\n\n> > /*\n> > * errstart --- begin an error-reporting cycle\n> > diff --git a/src/include/c.h b/src/include/c.h\n> > index d72b23afe4..087b8af6cb 100644\n> > --- a/src/include/c.h\n> > +++ b/src/include/c.h\n> > @@ -178,6 +178,21 @@\n> > #define pg_noinline\n> > #endif\n> >\n> > +/*\n> > + * Marking certain functions as \"hot\" or \"cold\" can be useful to assist the\n> > + * compiler in arranging the assembly code in a more efficient way.\n> > + * These are supported from GCC >= 4.3 and clang >= 3.2\n> > + */\n> > +#if (defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3))) || \\\n> > + (defined(__clang__) && (__clang_major__ > 3 || (__clang_major__ == 3 && __clang_minor__ >= 2)))\n> > +#define HAVE_PG_ATTRIBUTE_HOT_AND_COLD 1\n> > +#define pg_attribute_hot __attribute__((hot))\n> > +#define pg_attribute_cold __attribute__((cold))\n> > +#else\n> > +#define pg_attribute_hot\n> > +#define pg_attribute_cold\n> > +#endif\n>\n> Wonder if we should start using __has_attribute() for things like this.\n>\n> https://gcc.gnu.org/onlinedocs/cpp/_005f_005fhas_005fattribute.html\n>\n> I.e. we could do something like\n> #ifndef __has_attribute\n> #define __has_attribute(attribute) 0\n> #endif\n>\n> #if __has_attribute(hot)\n> #define pg_attribute_hot __attribute__((hot))\n> #else\n> #define pg_attribute_hot\n> #endif\n>\n> clang added __has_attribute in 2.9 (2010), gcc added it in 5 (2014), so\n> I don't think we'd loose too much.\n\nThanks for pointing that out. Seems like a good idea to me. I don't\nthink we'll upset too many people running GCC 4.4 to 5.0. I can't\nimagine many people serious about performance will be using a\nPostgreSQL version that'll be released in 2021 with a pre 2014\ncompiler.\n\n> > #ifdef HAVE__BUILTIN_CONSTANT_P\n> > +#ifdef HAVE_PG_ATTRIBUTE_HOT_AND_COLD\n> > +#define ereport_domain(elevel, domain, ...) 
\\\n> > + do { \\\n> > + pg_prevent_errno_in_scope(); \\\n> > + if (__builtin_constant_p(elevel) && (elevel) >= ERROR ? \\\n> > + errstart_cold(elevel, domain) : \\\n> > + errstart(elevel, domain)) \\\n> > + __VA_ARGS__, errfinish(__FILE__, __LINE__, PG_FUNCNAME_MACRO); \\\n> > + if (__builtin_constant_p(elevel) && (elevel) >= ERROR) \\\n> > + pg_unreachable(); \\\n> > + } while(0)\n> > +#else /* !HAVE_PG_ATTRIBUTE_HOT_AND_COLD */\n> > #define ereport_domain(elevel, domain, ...) \\\n> > do { \\\n> > pg_prevent_errno_in_scope(); \\\n> > @@ -129,6 +141,7 @@\n> > if (__builtin_constant_p(elevel) && (elevel) >= ERROR) \\\n> > pg_unreachable(); \\\n> > } while(0)\n> > +#endif /* HAVE_PG_ATTRIBUTE_HOT_AND_COLD */\n> > #else /* !HAVE__BUILTIN_CONSTANT_P */\n> > #define ereport_domain(elevel, domain, ...) \\\n> > do { \\\n> > @@ -146,6 +159,9 @@\n>\n> Could we do this without another copy? Feels like we should be able to\n> just do that in the existing #ifdef HAVE__BUILTIN_CONSTANT_P if we just\n> add errstart_cold() independent HAVE_PG_ATTRIBUTE_HOT_AND_COLD.\n\nYeah. I just did it that way so we didn't get the extra function hop\nin compilers that don't support __attribute((cold)). If I can inline\nerrstart_cold() and have the compiler still properly determine that\nit's a cold function, then it seems wise to do it that way. If not,\nthen I'll need to keep a separate macro.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/20171030094449.ffqhvt5n623zvyja%40alap3.anarazel.de\n\n\n", "msg_date": "Fri, 26 Jun 2020 13:21:53 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On Fri, 26 Jun 2020 at 13:21, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Fri, 26 Jun 2020 at 04:35, Andres Freund <andres@anarazel.de> wrote:\n> > On 2020-06-25 13:50:37 +1200, David Rowley wrote:\n> > > In the attached, I've come up with a way that works. 
Basically I've\n> > > just added a function named errstart_cold() that is attributed with\n> > > __attribute__((cold)), which will hint to the compiler to keep\n> > > branches which call this function in a cold path.\n> >\n> > I recall you trying this before? Has that gotten easier because we\n> > evolved ereport()/elog(), or has gcc become smarter, or ...?\n>\n> Yeah, I appear to have tried it and found it not to work in [1]. I can\n> only assume GCC got smarter with regard to how it marks a path as cold.\n> Previously it seemed not to, due to the do/while(0). I'm pretty sure\n> back when I tested last that ditching the do while made it work, just\n> we can't really get rid of it.\n>\n> > > To make the compiler properly mark just >= ERROR calls as cold, and\n> > > only when the elevel is a constant, I modified the ereport_domain\n> > > macro to do:\n> > >\n> > > if (__builtin_constant_p(elevel) && (elevel) >= ERROR ? \\\n> > > errstart_cold(elevel, domain) : \\\n> > > errstart(elevel, domain)) \\\n> >\n> > I think it'd be good to not just do this for ERROR, but also for <=\n> > DEBUG1. I recall seeing quite a few debug elogs that made the code worse\n> > just by \"being there\".\n>\n> I think that case is different. We don't want to move the entire elog\n> path into the cold path for that. We'd only want to hint that errstart\n> is unlikely to return true if elevel <= DEBUG1.\n\nI played around with this trying to find if there was a way to make this work.\n\nv2 patch includes the change you mentioned about using __has_attribute\n(cold) and removes the additional ereport_domain macro.\nv3 is v2 plus an additional change to mark the branch within\nereport_domain as unlikely when elevel <= DEBUG1.\nv4 is v2 plus it marks the errstart call as unlikely regardless of elevel.\n\nI tried v4 because I was having trouble with v3, which was showing worse\nperformance than v2.
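(The hinting difference between these versions can be sketched in miniature. The demo_* names are illustrative stand-ins, not PostgreSQL's symbols: rather than moving whole call sites into a cold section, only the branch guarding the report is hinted as rarely taken, so just the in-branch code leaves the fall-through path:)

```c
#include <assert.h>

/*
 * Sketch of the unlikely() approach from the v3/v4 patches: hint that
 * the guard is rarely true, without marking anything cold.  Only the
 * code inside the if() moves off the straight-line path.
 */
#if defined(__GNUC__)
#define demo_unlikely(x) __builtin_expect((x) != 0, 0)
#else
#define demo_unlikely(x) (x)
#endif

static int demo_emitted = 0;	/* counts how often the "cold" work ran */

/* Stand-in for errstart(): does this message pass the level filter? */
static int
demo_errstart(int elevel, int min_level)
{
	return elevel >= min_level;
}

static void
demo_maybe_report(int elevel, int min_level)
{
	/* The branch body is predicted not-taken; the body is the part
	 * the compiler is encouraged to place out of line. */
	if (demo_unlikely(demo_errstart(elevel, min_level)))
		demo_emitted++;			/* the rarely executed reporting work */
}
```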
v4 appears better on the AMD system, but that system is\nproducing noisy results (very obvious if looking at attached\namd3990x_elog_cold.png)\n\nI ran pgbench -S T 600 -P 10 with each patch and for the AMD machine I got:\n\nmaster = 27817.32167 tps\nv2 = 28991.65667 tps (104.22% of master)\nv3 = 28622.775 tps (102.90% of master)\nv4 = 29648.91 tps (106.58% of master)\n\n(I attribute the speedup here not being the same as my last report due\nto noise. A recent bios update partially fixed the problem, but not\ncompletely)\n\nFor the intel laptop I got:\n\nmaster = 25452.38167 tps\nv2 = 25473.695 tps (100.08% of master)\nv3 = 25434.89333 tps (99.93% of master)\nv4 = 25389.02833 tps (99.75% of master)\n\nLooking at the assembly for the v3 patch, it does appear that the\nelevel <= DEBUG1 calls were correctly moved to the cold area and that\nthe ones > DEBUG1 and < ERROR were left alone. However, I did only\nlook at xlog.s. The intel results don't look very promising, but\nperhaps this is not the ideal test to show improvements with\ninstruction cache efficiency.\n\n> > > +bool\n> > > +pg_attribute_cold errstart_cold(int elevel, const char *domain)\n> > > +{\n> > > + return errstart(elevel, domain);\n> > > +}\n> > > +#endif\n> >\n> > Hm. Would it make sense to have this be a static inline?\n\nI didn't find a way to make this work (using gcc-10). Inlining, of\ncourse, makes the assembly just call errstart(). errstart_cold() is\nnowhere to be seen. The __attribute(cold) does not seem to be applied\nto the errstart() call where the errstart_cold() call was inlined.\n\nI've attached a graph showing the results for the AMD and Intel runs\nand also csv files of the pgbench tps output. 
I've also attached each\nversion of the patch I tested.\n\nIt would be good to see some testing done on other hardware.\n\nDavid", "msg_date": "Mon, 29 Jun 2020 21:36:56 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On Mon, Jun 29, 2020 at 09:36:56PM +1200, David Rowley wrote:\n> I've attached a graph showing the results for the AMD and Intel runs\n> and also csv files of the pgbench tps output. I've also attached each\n> version of the patch I tested.\n> \n> It would be good to see some testing done on other hardware.\n\nWorth noting that the patch set fails to apply. I have moved this\nentry to the next CF, waiting on author.\n--\nMichael", "msg_date": "Mon, 3 Aug 2020 16:54:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On Mon, 3 Aug 2020 at 19:54, Michael Paquier <michael@paquier.xyz> wrote:\n> Worth noting that the patch set fails to apply. I have moved this\n> entry to the next CF, waiting on author.\n\nThanks.\n\nNB: It's not a patch set. It's 3 different versions of the patch.\nThey're not all meant to apply at once. Probably the CF bot wasn't\naware of that though :-(\n\nDavid\n\n\n", "msg_date": "Mon, 3 Aug 2020 21:05:52 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On Mon, 29 Jun 2020 at 21:36, David Rowley <dgrowleyml@gmail.com> wrote:\n> (I attribute the speedup here not being the same as my last report due\n> to noise. 
A recent bios update partially fixed the problem, but not\n> completely)\n\nI managed to fix the unstable performance on this AMD machine by\ntweaking some bios power management settings.\n\nI did some further testing with the v4 patch using both each of:\n\n1. pgbench -S\n2. pgbench -S -M prepared\n3. pgbench -S -M prepared -c 64 -j 64.\n4. TPC-H @ 5GB scale (per recommendation from Andres offlist)\n\nThe results of 1-3 above don't really show much of a win, which really\ndoes contradict what I saw about 5 years ago when testing unlikely()\naround elog calls in [1]. The experiment I did back then did pre-date\nthe use of unlikely() in the source code, so I thought perhaps that\nsince we now have a sprinkling of unlikely() in various of the hottest\ncode paths that the use of those already gained most of what we were\ngoing to gain from today's patch. To see if this was the case, I\ndecided to hack up a test patch which removes all those unlikely()\ncalls that exist in an if test above an elog/ereport ERROR and I\nconfirm that I *do* see a small regression in performance from doing\nthat. This patch only serves to confirm if the existing unlikely()\nmacros are already giving us most of what we'd get from today's v4\npatch, and the results do seem to confirm that.\n\nThe 5GB scaled TPC-H test does show some performance gains from the v4\npatch and shows an obvious regression from removing the unlikely()\ncalls too.\n\nBased, mostly on the TPC-H results where performance did improve close\nto 2%, I'm starting to think it would be a good idea just to go for\nthe v4 patch. It means that future hot elog/ereport calls should make\nit into the cold path.\n\nCurrently, I'm just unsure how other CPUs will benefit from this. The\n3990x I've been testing with is pretty new and has some pretty large\ncaches. 
I suspect older CPUs may see larger gains.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAKJS1f8yqRW3qx2CO9r4bqqvA2Vx68%3D3awbh8CJWTP9zXeoHMw%40mail.gmail.com", "msg_date": "Wed, 5 Aug 2020 15:00:01 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On 2020-08-05 05:00, David Rowley wrote:\n> The 5GB scaled TPC-H test does show some performance gains from the v4\n> patch and shows an obvious regression from removing the unlikely()\n> calls too.\n> \n> Based, mostly on the TPC-H results where performance did improve close\n> to 2%, I'm starting to think it would be a good idea just to go for\n> the v4 patch. It means that future hot elog/ereport calls should make\n> it into the cold path.\n\nSomething based on the v4 patch makes sense.\n\nI would add DEBUG1 back into the conditional, like\n\nif (__builtin_constant_p(elevel) && ((elevel) >= ERROR || (elevel) <= \nDEBUG1) ? \\\n\nAlso, for the __has_attribute handling, I'd prefer the style that Andres \nillustrated earlier, using:\n\n#ifndef __has_attribute\n#define __has_attribute(attribute) 0\n#endif\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 4 Sep 2020 22:36:55 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On Sat, 5 Sep 2020 at 08:36, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> Something based on the v4 patch makes sense.\n\nThanks for having a look at this.\n\n> I would add DEBUG1 back into the conditional, like\n>\n> if (__builtin_constant_p(elevel) && ((elevel) >= ERROR || (elevel) <=\n> DEBUG1) ? 
\\\n\nhmm, but surely we don't want to move all code that's in the same\nbranch as an elog(DEBUG1) call into a cold area.\n\nWith elog(ERROR) we generally have the form:\n\nif (<some condition we hope never to see>)\n elog(ERROR, \"something bad happened\");\n\nIn this case, we'd quite like for the compiler to put code relating to\nthe elog in some cold area of the binary.\n\nWith DEBUG we often have the form:\n\n<do normal stuff>\n\nelog(DEBUG1, \"some interesting information\");\n\n<do normal stuff>\n\nI don't think we'd want to move the <do normal stuff> into a cold area.\n\nThe v3 patch just put an unlikely() around the errstart() call if the\nlevel was <= DEBUG1. That just to move the code that's inside the if\n(errstart(...)) in the macro into a cold area.\n\n> Also, for the __has_attribute handling, I'd prefer the style that Andres\n> illustrated earlier, using:\n>\n> #ifndef __has_attribute\n> #define __has_attribute(attribute) 0\n> #endif\n\nYip, for sure. That way is much nicer.\n\nDavid\n\n\n", "msg_date": "Sun, 6 Sep 2020 12:24:31 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On 2020-09-06 02:24, David Rowley wrote:\n>> I would add DEBUG1 back into the conditional, like\n>>\n>> if (__builtin_constant_p(elevel) && ((elevel) >= ERROR || (elevel) <=\n>> DEBUG1) ? \\\n> \n> hmm, but surely we don't want to move all code that's in the same\n> branch as an elog(DEBUG1) call into a cold area.\n\nYeah, nevermind that.\n\n> The v3 patch just put an unlikely() around the errstart() call if the\n> level was <= DEBUG1. That just to move the code that's inside the if\n> (errstart(...)) in the macro into a cold area.\n\nThat could be useful. 
Depends on how much effect it has.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 10 Sep 2020 16:02:14 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On Fri, 11 Sep 2020 at 02:01, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-09-06 02:24, David Rowley wrote:\n> >> I would add DEBUG1 back into the conditional, like\n> >>\n> >> if (__builtin_constant_p(elevel) && ((elevel) >= ERROR || (elevel) <=\n> >> DEBUG1) ? \\\n> >\n> > hmm, but surely we don't want to move all code that's in the same\n> > branch as an elog(DEBUG1) call into a cold area.\n>\n> Yeah, nevermind that.\n\nI've reattached the v4 patch since it just does the >= ERROR case.\n\n> > The v3 patch just put an unlikely() around the errstart() call if the\n> > level was <= DEBUG1. That just to move the code that's inside the if\n> > (errstart(...)) in the macro into a cold area.\n>\n> That could be useful. Depends on how much effect it has.\n\nI wonder if it is. I'm having trouble even seeing gains from the ERROR\ncase and I'm considering dropping this patch due to that.\n\nI ran another scale=5 TPCH benchmark on v4 against f859c2ffa using gcc\n9.3. I'm unable to see any gains with this, however, the results were\npretty noisy. I only ran pgbench for 60 seconds per query. I'll likely\nneed to run that a bit longer. 
I'll do that tonight.\n\nIt would be good if someone else could run some tests on their own\nhardware to see if they can see any gains.\n\nDavid", "msg_date": "Tue, 22 Sep 2020 19:08:12 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On Tue, 22 Sep 2020 at 19:08, David Rowley <dgrowleyml@gmail.com> wrote:\n> I ran another scale=5 TPCH benchmark on v4 against f859c2ffa using gcc\n> 9.3. I'm unable to see any gains with this, however, the results were\n> pretty noisy. I only ran pgbench for 60 seconds per query. I'll likely\n> need to run that a bit longer. I'll do that tonight.\n\nI've attached the results of a TPCH scale=5 run master (f859c2ffa) vs\nmaster + elog_ereport_attribute_cold_v4.patch\n\nIt does not look great. The patched version seems to have done about\n1.17% less work than master did.\n\nDavid", "msg_date": "Wed, 23 Sep 2020 08:42:33 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On 2020-09-22 22:42, David Rowley wrote:\n> On Tue, 22 Sep 2020 at 19:08, David Rowley <dgrowleyml@gmail.com> wrote:\n>> I ran another scale=5 TPCH benchmark on v4 against f859c2ffa using gcc\n>> 9.3. I'm unable to see any gains with this, however, the results were\n>> pretty noisy. I only ran pgbench for 60 seconds per query. I'll likely\n>> need to run that a bit longer. I'll do that tonight.\n> \n> I've attached the results of a TPCH scale=5 run master (f859c2ffa) vs\n> master + elog_ereport_attribute_cold_v4.patch\n> \n> It does not look great. 
The patched version seems to have done about\n> 1.17% less work than master did.\n\nI wonder how much benefit you'd get from\n\na) compiling with -O3 instead of -O2, or\nb) compiling with profile-driven optimization\n\nI think that would indicate a target and/or a ceiling of what we should \nbe expecting from hot/cold/likely/unlikely optimization techniques like \nthis.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 23 Sep 2020 08:07:10 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On Wed, 23 Sep 2020 at 08:42, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 22 Sep 2020 at 19:08, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I ran another scale=5 TPCH benchmark on v4 against f859c2ffa using gcc\n> > 9.3. I'm unable to see any gains with this, however, the results were\n> > pretty noisy. I only ran pgbench for 60 seconds per query. I'll likely\n> > need to run that a bit longer. I'll do that tonight.\n>\n> I've attached the results of a TPCH scale=5 run master (f859c2ffa) vs\n> master + elog_ereport_attribute_cold_v4.patch\n>\n> It does not look great. The patched version seems to have done about\n> 1.17% less work than master did.\n\nI've marked this patch back as waiting for review. 
It would be good if\nsomeone could run some tests on some intel hardware and see if they\ncan see any speedup.\n\nDavid\n\n\n", "msg_date": "Tue, 29 Sep 2020 22:26:06 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On 2020-09-29 11:26, David Rowley wrote:\n> On Wed, 23 Sep 2020 at 08:42, David Rowley <dgrowleyml@gmail.com> wrote:\n>>\n>> On Tue, 22 Sep 2020 at 19:08, David Rowley <dgrowleyml@gmail.com> wrote:\n>>> I ran another scale=5 TPCH benchmark on v4 against f859c2ffa using gcc\n>>> 9.3. I'm unable to see any gains with this, however, the results were\n>>> pretty noisy. I only ran pgbench for 60 seconds per query. I'll likely\n>>> need to run that a bit longer. I'll do that tonight.\n>>\n>> I've attached the results of a TPCH scale=5 run master (f859c2ffa) vs\n>> master + elog_ereport_attribute_cold_v4.patch\n>>\n>> It does not look great. The patched version seems to have done about\n>> 1.17% less work than master did.\n> \n> I've marked this patch back as waiting for review. It would be good if\n> someone could run some tests on some intel hardware and see if they\n> can see any speedup.\n\nWhat is the way forward here? What exactly would you like to have tested?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 3 Nov 2020 08:08:14 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On Tue, 3 Nov 2020 at 20:08, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-09-29 11:26, David Rowley wrote:\n> > I've marked this patch back as waiting for review. 
It would be good if\n> > someone could run some tests on some intel hardware and see if they\n> > can see any speedup.\n>\n> What is the way forward here? What exactly would you like to have tested?\n\nIt would be good to see some small-scale pgbench -S tests with and\nwithout -M prepared.\n\nAlso, small-scale TPC-H tests would be good.
I really only did\n> testing on new AMD hardware, so some testing on intel hardware would\n> be good.\n\nI did tests of elog_ereport_attribute_cold_v4.patch on an oldish Mac \nIntel laptop with pgbench scale 1 (default), and then:\n\npgbench -S -T 60\n\nmaster: tps = 8251.883229 (excluding connections establishing)\npatched: tps = 9556.836232 (excluding connections establishing)\n\npgbench -S -T 60 -M prepared\n\nmaster: tps = 14713.821837 (excluding connections establishing)\npatched: tps = 16200.066185 (excluding connections establishing)\n\nSo from that this seems like an easy win.\n\nI also tested on a newish Mac ARM laptop, and there the patch did not do \nanything, but that was because clang does not support the cold \nattribute, so that part works as well. ;-)\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/\n\n\n", "msg_date": "Fri, 20 Nov 2020 15:26:39 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On Sat, 21 Nov 2020 at 03:26, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> I did tests of elog_ereport_attribute_cold_v4.patch on an oldish Mac\n> Intel laptop with pgbench scale 1 (default), and then:\n>\n> pgbench -S -T 60\n>\n> master: tps = 8251.883229 (excluding connections establishing)\n> patched: tps = 9556.836232 (excluding connections establishing)\n>\n> pgbench -S -T 60 -M prepared\n>\n> master: tps = 14713.821837 (excluding connections establishing)\n> patched: tps = 16200.066185 (excluding connections establishing)\n>\n> So from that this seems like an easy win.\n\nWell, that makes it look pretty good. 
If we can get 10-15% on some\nmachines without making things slower on any other machines, then that\nseems like a good win to me.\n\nThanks for testing that.\n\nDavid\n\n\n", "msg_date": "Tue, 24 Nov 2020 09:36:55 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On Tue, 24 Nov 2020 at 09:36, David Rowley <dgrowleyml@gmail.com> wrote:\n> Well, that makes it look pretty good. If we can get 10-15% on some\n> machines without making things slower on any other machines, then that\n> seems like a good win to me.\n\nPushed.\n\nThank you both for reviewing this.\n\nDavid\n\n\n", "msg_date": "Tue, 24 Nov 2020 12:06:22 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On Tue, Nov 24, 2020 at 10:06 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 24 Nov 2020 at 09:36, David Rowley <dgrowleyml@gmail.com> wrote:\n> > Well, that makes it look pretty good. 
If we can get 10-15% on some\n> > machines without making things slower on any other machines, then that\n> > seems like a good win to me.\n>\n> Pushed.\n>\n> Thank you both for reviewing this.\n>\n> David\n>\n>\n\nHmmm, unfortunately this seems to break my build ...\n\nmake[1]: Entering directory `/space2/pg13/postgres/src'\nmake -C common all\nmake[2]: Entering directory `/space2/pg13/postgres/src/common'\ngcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith\n-Wdeclaration-after-statement -Werror=vla -Wendif-labels\n-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing\n-fwrapv -fexcess-precision=standard -g -O0 -DFRONTEND -I.\n-I../../src/common -I../../src/include -D_GNU_SOURCE -DVAL_CC=\"\\\"gcc\n-std=gnu99\\\"\" -DVAL_CPPFLAGS=\"\\\"-D_GNU_SOURCE\\\"\" -DVAL_CFLAGS=\"\\\"-Wall\n-Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement\n-Werror=vla -Wendif-labels -Wmissing-format-attribute\n-Wformat-security -fno-strict-aliasing -fwrapv\n-fexcess-precision=standard -g -O0\\\"\" -DVAL_CFLAGS_SL=\"\\\"-fPIC\\\"\"\n-DVAL_LDFLAGS=\"\\\"-Wl,--as-needed\n-Wl,-rpath,'/usr/local/pg14/lib',--enable-new-dtags\\\"\"\n-DVAL_LDFLAGS_EX=\"\\\"\\\"\" -DVAL_LDFLAGS_SL=\"\\\"\\\"\"\n-DVAL_LIBS=\"\\\"-lpgcommon -lpgport -lpthread -lz -lreadline -lrt -ldl\n-lm \\\"\" -c -o archive.o archive.c\nIn file included from ../../src/include/postgres_fe.h:25:0,\n from archive.c:19:\n../../src/include/c.h:198:49: error: missing binary operator before token \"(\"\n #if defined(__has_attribute) && __has_attribute (cold)\n ^\n../../src/include/c.h:204:49: error: missing binary operator before token \"(\"\n #if defined(__has_attribute) && __has_attribute (hot)\n ^\nmake[2]: *** [archive.o] Error 1\nmake[2]: Leaving directory `/space2/pg13/postgres/src/common'\nmake[1]: *** [all-common-recurse] Error 2\nmake[1]: Leaving directory `/space2/pg13/postgres/src'\nmake: *** [world-src-recurse] Error 2\n\n[gregn@localhost postgres]$ gcc --version\ngcc (GCC) 4.8.5 20150623 
(Red Hat 4.8.5-44)\nCopyright (C) 2015 Free Software Foundation, Inc.\nThis is free software; see the source for copying conditions. There is NO\nwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n\n\nI think your commit needs to be fixed based on the following documentation:\n\nhttps://gcc.gnu.org/onlinedocs/cpp/_005f_005fhas_005fattribute.html#g_t_005f_005fhas_005fattribute\n\n\"The first ‘#if’ test succeeds only when the operator is supported by\nthe version of GCC (or another compiler) being used. Only when that\ntest succeeds is it valid to use __has_attribute as a preprocessor\noperator. As a result, combining the two tests into a single\nexpression as shown below would only be valid with a compiler that\nsupports the operator but not with others that don’t. \"\n\n(Thanks to my colleague Peter Smith for finding the doc explanation)\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Tue, 24 Nov 2020 10:49:54 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On Tue, 24 Nov 2020 at 12:50, Greg Nancarrow <gregn4422@gmail.com> wrote:\n> Hmmm, unfortunately this seems to break my build ...\n\n> I think your commit needs to be fixed based on the following documentation:\n>\n> https://gcc.gnu.org/onlinedocs/cpp/_005f_005fhas_005fattribute.html#g_t_005f_005fhas_005fattribute\n\nAgreed. I tested on https://godbolt.org/ with a GCC version < 5 and\nupdating to what's mentioned in the GCC manual works fine. 
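(The portable feature test the GCC documentation describes can be sketched as follows — the demo_* names are illustrative. The key point is to supply a fallback definition first and then use `__has_attribute` by itself in `#if`, because a preprocessor that lacks the operator, like GCC 4.8 above, must still parse the whole combined `defined(__has_attribute) && __has_attribute(...)` line:)

```c
#include <assert.h>

/*
 * Safe __has_attribute usage: define a zero-valued fallback before the
 * first use, so older preprocessors see an ordinary function-like macro
 * instead of an operator they cannot parse.
 */
#ifndef __has_attribute
#define __has_attribute(x) 0	/* fallback for pre-GCC-5 / pre-Clang-2.9 */
#endif

#if __has_attribute(cold)
#define demo_attribute_cold __attribute__((cold))
#else
#define demo_attribute_cold		/* attribute unsupported: expands to nothing */
#endif

/* A rarely taken error path, hinted cold where the compiler allows. */
static int demo_attribute_cold
demo_error_exit(void)
{
	return -1;
}

static int
demo_check(int v)
{
	return (v < 0) ? demo_error_exit() : v;
}
```

This compiles identically on compilers with and without the operator; only the layout hint changes.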
What I had\ndid not.\n\nThanks for the report.\n\nI pushed a fix.\n\nDavid\n\n\n", "msg_date": "Tue, 24 Nov 2020 13:12:09 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n\n> On Tue, 24 Nov 2020 at 12:50, Greg Nancarrow <gregn4422@gmail.com> wrote:\n>> Hmmm, unfortunately this seems to break my build ...\n>\n>> I think your commit needs to be fixed based on the following documentation:\n>>\n>> https://gcc.gnu.org/onlinedocs/cpp/_005f_005fhas_005fattribute.html#g_t_005f_005fhas_005fattribute\n>\n> Agreed. I tested on https://godbolt.org/ with a GCC version < 5 and\n> updating to what's mentioned in the GCC manual works fine. What I had\n> did not.\n>\n> Thanks for the report.\n>\n> I pushed a fix.\n\nThe Clang documentation¹ suggest an even neater solution, which would\neliminate the repetitive empty pg_attribute_foo #defines in the trailing\n#else/#endif block in commit 1fa22a43a56e1fe44c7bb3a3d5ef31be5bcac41d:\n\n#ifndef __has_attribute\n#define __has_attribute(x) 0\n#endif\n\n[1] http://clang.llvm.org/docs/LanguageExtensions.html#has-attribute\n\n- ilmari\n-- \n\"The surreality of the universe tends towards a maximum\" -- Skud's Law\n\"Never formulate a law or axiom that you're not prepared to live with\n the consequences of.\" -- Skud's Meta-Law\n\n\n", "msg_date": "Tue, 24 Nov 2020 00:52:59 +0000", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": false, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On 2020-11-24 01:52, Dagfinn Ilmari Mannsåker wrote:\n> The Clang documentation¹ suggest an even neater solution, which would\n> eliminate the repetitive empty pg_attribute_foo #defines in the trailing\n> #else/#endif block in commit 1fa22a43a56e1fe44c7bb3a3d5ef31be5bcac41d:\n> \n> #ifndef 
__has_attribute\n> #define __has_attribute(x) 0\n> #endif\n\nYes, this was also mentioned and agreed earlier in the thread, but then \nwe apparently forgot to update the patch.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/\n\n\n", "msg_date": "Tue, 24 Nov 2020 16:48:19 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Pushed.\n\nwalleye's been failing since this patchset went in:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=walleye&dt=2020-11-24%2000%3A25%3A31\n\nccache gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -g -O2 -I../../../src/include -I./src/include/port/win32 -I/c/msys/local/include -I/c/Python35/include -I/c/OpenSSL-Win64/include -I/c/msys/local/include \"-I../../../src/include/port/win32\" -DWIN32_STACK_RLIMIT=4194304 -DBUILDING_DLL -c -o autovacuum.o autovacuum.c\nC:\\\\Users\\\\BUILDE~1.SER\\\\AppData\\\\Local\\\\Temp\\\\cc4HR3xZ.s: Assembler messages:\nC:\\\\Users\\\\BUILDE~1.SER\\\\AppData\\\\Local\\\\Temp\\\\cc4HR3xZ.s:5900: Error: .seh_savexmm offset is negative\nmake[3]: *** [autovacuum.o] Error 1\n\nI have no idea what to make of that, but it looks more like a compiler bug\nthan anything else.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Nov 2020 10:55:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On Wed, 25 Nov 2020 at 04:48, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-11-24 
01:52, Dagfinn Ilmari Mannsåker wrote:\n> > The Clang documentation¹ suggest an even neater solution, which would\n> > eliminate the repetitive empty pg_attribute_foo #defines in the trailing\n> > #else/#endif block in commit 1fa22a43a56e1fe44c7bb3a3d5ef31be5bcac41d:\n> >\n> > #ifndef __has_attribute\n> > #define __has_attribute(x) 0\n> > #endif\n>\n> Yes, this was also mentioned and agreed earlier in the thread, but then\n> we apparently forgot to update the patch.\n\nI wanted to let the buildfarm settle a bit before changing this again.\nI plan on making the change today.\n\n(I know walleye is still not happy)\n\nDavid\n\n\n", "msg_date": "Wed, 25 Nov 2020 09:43:47 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On Wed, 25 Nov 2020 at 04:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> walleye's been failing since this patchset went in:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=walleye&dt=2020-11-24%2000%3A25%3A31\n>\n> ccache gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -g -O2 -I../../../src/include -I./src/include/port/win32 -I/c/msys/local/include -I/c/Python35/include -I/c/OpenSSL-Win64/include -I/c/msys/local/include \"-I../../../src/include/port/win32\" -DWIN32_STACK_RLIMIT=4194304 -DBUILDING_DLL -c -o autovacuum.o autovacuum.c\n> C:\\\\Users\\\\BUILDE~1.SER\\\\AppData\\\\Local\\\\Temp\\\\cc4HR3xZ.s: Assembler messages:\n> C:\\\\Users\\\\BUILDE~1.SER\\\\AppData\\\\Local\\\\Temp\\\\cc4HR3xZ.s:5900: Error: .seh_savexmm offset is negative\n> make[3]: *** [autovacuum.o] Error 1\n>\n> I have no idea what to make of that, but it looks more like a compiler bug\n> 
than anything else.\n\nThat's about the best I could come up with too when looking at that\nyesterday. The message gives me the impression that it might be\nrelated to code arrangement. It does seem to be the assembler that's\ncomplaining.\n\nI wondered if #if !defined(__MINGW32__) && !defined(__MINGW64__) would\nbe the correct fix for it... aka, just define the new\npg_attribute_(hot|cold) macros to empty on MinGW.\n\nDavid\n\n\n", "msg_date": "Wed, 25 Nov 2020 09:48:15 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Wed, 25 Nov 2020 at 04:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> walleye's been failing since this patchset went in:\n>> I have no idea what to make of that, but it looks more like a compiler bug\n>> than anything else.\n\n> I wondered if #if !defined(__MINGW32__) && !defined(__MINGW64__) would\n> be the correct fix for it... aka, just define the new\n> pg_attribute_(hot|cold) macros to empty on MinGW.\n\nI'd make any such fix as narrow as possible (ie MINGW64 only, based on\npresent evidence). It'd be nice to have a compiler version upper bound\ntoo, in the hopes that they'd fix it in future. 
Maybe something like\n\"#if defined(__MINGW64__) && defined(__GNUC__) && __GNUC__ <= 8\" ?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Nov 2020 15:59:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On 2020-Nov-24, Tom Lane wrote:\n\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > On Wed, 25 Nov 2020 at 04:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> walleye's been failing since this patchset went in:\n> >> I have no idea what to make of that, but it looks more like a compiler bug\n> >> than anything else.\n> \n> > I wondered if #if !defined(__MINGW32__) && !defined(__MINGW64__) would\n> > be the correct fix for it... aka, just define the new\n> > pg_attribute_(hot|cold) macros to empty on MinGW.\n> \n> I'd make any such fix as narrow as possible (ie MINGW64 only, based on\n> present evidence). It'd be nice to have a compiler version upper bound\n> too, in the hopes that they'd fix it in future. Maybe something like\n> \"#if defined(__MINGW64__) && defined(__GNUC__) && __GNUC__ <= 8\" ?\n\nApparently the bug was fixed days after it was reported,\nhttps://gcc.gnu.org/bugzilla/show_bug.cgi?id=86048\nbut they haven't made a release containing the fix yet.\n\n\n", "msg_date": "Tue, 24 Nov 2020 18:04:31 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2020-Nov-24, Tom Lane wrote:\n>> I'd make any such fix as narrow as possible (ie MINGW64 only, based on\n>> present evidence). It'd be nice to have a compiler version upper bound\n>> too, in the hopes that they'd fix it in future. 
Maybe something like\n>> \"#if defined(__MINGW64__) && defined(__GNUC__) && __GNUC__ <= 8\" ?\n\n> Apparently the bug was fixed days after it was reported,\n> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=86048\n> but they haven't made a release containing the fix yet.\n\nAh, great sleuthing! So that says it occurs in 8.1 only, meaning\nthe version test could be like\n\n#if defined(__MINGW64__) && __GNUC__ == 8 && __GNUC_MINOR__ == 1\n// lobotomized code here\n#else ...\n\nIt's not entirely clear from that bug report whether it can manifest on\ngcc 8.1 on other platforms; maybe we should test for x86 in general\nnot __MINGW64__.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Nov 2020 16:11:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> On Wed, 25 Nov 2020 at 04:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> walleye's been failing since this patchset went in:\n>>> I have no idea what to make of that, but it looks more like a compiler bug\n>>> than anything else.\n\n> Apparently the bug was fixed days after it was reported,\n> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=86048\n> but they haven't made a release containing the fix yet.\n\nWait ... the second part of that doesn't seem to be true.\nAccording to\n\nhttp://mingw-w64.org/doku.php/versions\n\nmingw-w64 has made at least three releases since this\nbug was fixed. 
Surely they're shipping something newer\nthan 8.1.0 by now.\n\nSo maybe, rather than hacking up the attribute stuff for\na bug that might bite us again anyway in future, we ought\nto press walleye's owner to install a more recent compiler.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Nov 2020 20:28:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On Wed, 25 Nov 2020 at 14:28, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So maybe, rather than hacking up the attribute stuff for\n> a bug that might bite us again anyway in future, we ought\n> to press walleye's owner to install a more recent compiler.\n\nI think that seems like a better idea. I had thoughts about\ninstalling a quick fix for now to give the owner of walleye a bit of time\nfor the upgrade. From what I can tell, the latest version of minGW\ncomes with GCC 9.2 [1]\n\nDavid\n\n[1] https://osdn.net/projects/mingw/releases/\n\n\n", "msg_date": "Wed, 25 Nov 2020 14:35:25 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "On Wed, 25 Nov 2020 at 14:35, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Wed, 25 Nov 2020 at 14:28, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > So maybe, rather than hacking up the attribute stuff for\n> > a bug that might bite us again anyway in future, we ought\n> > to press walleye's owner to install a more recent compiler.\n>\n> I think that seems like a better idea. I had thoughts about\n> installing a quick fix for now to give the owner of walleye a bit of time\n> for the upgrade. 
From what I can tell, the latest version of minGW\n> comes with GCC 9.2 [1]\n\nSo, how about the attached today and I'll email Joseph about walleye\nand see if he can upgrade to a newer minGW version.\n\nDavid", "msg_date": "Wed, 25 Nov 2020 14:46:12 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Wed, 25 Nov 2020 at 14:28, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> So maybe, rather than hacking up the attribute stuff for\n>> a bug that might bite us again anyway in future, we ought\n>> to press walleye's owner to install a more recent compiler.\n\n> I think that seems like a better idea. I had thoughts about\n> installing a quick for now to give the owner of walleye a bit of time\n> for the upgrade. From what I can tell, the latest version of minGW\n> comes with GCC 9.2 [1]\n\nmingw and mingw-w64 seem to be distinct projects with separate\nrelease schedules. The latter's webpage isn't too clear about\nwhich gcc version is in each of their releases. But they seem\nto be organized enough to put out releases roughly annually,\nso I'm supposing they aren't falling too far behind gcc upstream.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Nov 2020 20:47:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> So, how about the attached today and I'll email Joseph about walleye\n> and see if he can upgrade to a newer minGW version.\n\nWFM. (Note I already cc'd Joseph on this thread.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Nov 2020 20:59:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Keep elog(ERROR) and ereport(ERROR) calls in the cold path" } ]
[ { "msg_contents": "Hi all,\n\nAs subject tells, we have in src/common/ four files that are only\ncompiled as part of the frontend: fe_memutils.c, file_utils.c,\nlogging.c and restricted_token.c. Two of them are missing the\nfollowing, to make sure that we never try to compile them with the\nbackend:\n+#ifndef FRONTEND\n+#error \"This file is not expected to be compiled for backend code\"\n+#endif\n\nSo, shouldn't that stuff be added as per the attached?\n\nThanks.\n--\nMichael", "msg_date": "Thu, 25 Jun 2020 17:07:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Missing some ifndef FRONTEND at the top of logging.c and file_utils.c" }, { "msg_contents": "> On 25 Jun 2020, at 10:07, Michael Paquier <michael@paquier.xyz> wrote:\n\n> So, shouldn't that stuff be added as per the attached?\n\nThat makes sense, logging.c and file_utils.c are indeed only part of\nlibpgcommon.a and should only be compiled for frontend.\n\ncheers ./daniel\n\n\n", "msg_date": "Thu, 25 Jun 2020 11:15:03 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Missing some ifndef FRONTEND at the top of logging.c and\n file_utils.c" }, { "msg_contents": "On Thu, Jun 25, 2020 at 11:15:03AM +0200, Daniel Gustafsson wrote:\n> That makes sense, logging.c and file_utils.c are indeed only part of\n> libpgcommon.a and should only be compiled for frontend.\n\nThanks. This list is provided by OBJS_FRONTEND in\nsrc/common/Makefile, and pgcommonfrontendfiles in Mkvcbuild.pm. Let's\nsee if others have comments, as it just looks like something that was\nforgotten in bf5bb2e and fc9a62a when this code was moved to\nsrc/common/. 
If there are no objections, I'll revisit that some time\nnext week and fix it on HEAD.\n--\nMichael", "msg_date": "Fri, 26 Jun 2020 09:59:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Missing some ifndef FRONTEND at the top of logging.c and\n file_utils.c" }, { "msg_contents": "On Fri, Jun 26, 2020 at 09:59:30AM +0900, Michael Paquier wrote:\n> Thanks. This list is provided by OBJS_FRONTEND in\n> src/common/Makefile, and pgcommonfrontendfiles in Mkvcbuild.pm. Let's\n> see if others have comments, as it just looks like something that was\n> forgotten in bf5bb2e and fc9a62a when this code was moved to\n> src/common/. If there are no objections, I'll revisit that some time\n> next week and fix it on HEAD.\n\nAnd committed as of 324435e.\n--\nMichael", "msg_date": "Tue, 30 Jun 2020 21:18:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Missing some ifndef FRONTEND at the top of logging.c and\n file_utils.c" } ]
[ { "msg_contents": "Over on [1] Justin mentions that the non-text EXPLAIN ANALYZE should\nalways show the \"Disk Usage\" and \"HashAgg Batches\" properties. I\nagree with this. show_wal_usage() is a good example of how we normally\ndo things. We try to keep the text format as humanly readable as\npossible but don't really expect humans to be commonly reading the\nother supported formats, so we care less about including additional\ndetails there.\n\nThere's also an open item regarding this for Incremental Sort, so I've\nCC'd James and Tomas here. This seems like a good place to discuss\nboth.\n\nI've attached a small patch that changes the Hash Aggregate behaviour\nto always show these properties for non-text formats.\n\nDoes anyone object to this?\n\nDavid\n\n[1] https://www.postgresql.org/message-id/20200619040624.GA17995%40telsasoft.com", "msg_date": "Thu, 25 Jun 2020 21:15:21 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Open Item: Should non-text EXPLAIN always show properties?" }, { "msg_contents": "On Thu, Jun 25, 2020 at 5:15 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> Over on [1] Justin mentions that the non-text EXPLAIN ANALYZE should\n> always show the \"Disk Usage\" and \"HashAgg Batches\" properties. I\n> agree with this. show_wal_usage() is a good example of how we normally\n> do things. We try to keep the text format as humanly readable as\n> possible but don't really expect humans to be commonly reading the\n> other supported formats, so we care less about including additional\n> details there.\n>\n> There's also an open item regarding this for Incremental Sort, so I've\n> CC'd James and Tomas here. 
This seems like a good place to discuss\n> both.\n\nYesterday I'd replied [1] to Justin's proposal for this WRT\nincremental sort and expressed my opinion that including both\nunnecessarily (i.e., including disk when an in-memory sort was used)\nis undesirable and confusing and leads to shortcuts I believe to be\nbad habits when using the data programmatically.\n\nOn a somewhat related note, memory can be 0 but that doesn't mean no\nmemory was used: it's a result of how tuplesort.c doesn't properly\ntrack memory usage when it switches to disk sort. The same isn't true\nin reverse (we don't have 0 disk when disk was used), but I do think\nit does show the idea that showing \"empty\" data isn't an inherent\ngood.\n\nIf there's a clear established pattern and/or most others seem to\nprefer Justin's proposed approach, then I'm not going to fight it\nhard. I just don't think it's the best approach.\n\nJames\n\n[1] https://www.postgresql.org/message-id/CAAaqYe-LswZFUL4k5Dr6%3DEN6MJG1HurggcH4QzUs6UFqBbnQzQ%40mail.gmail.com\n\n\n", "msg_date": "Thu, 25 Jun 2020 08:41:43 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Open Item: Should non-text EXPLAIN always show properties?" }, { "msg_contents": "On Thu, Jun 25, 2020 at 8:42 AM James Coleman <jtc331@gmail.com> wrote:\n> Yesterday I'd replied [1] to Justin's proposal for this WRT\n> incremental sort and expressed my opinion that including both\n> unnecessarily (i.e., including disk when an in-memory sort was used)\n> is undesirable and confusing and leads to shortcuts I believe to be\n> bad habits when using the data programmatically.\n\n+1.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 25 Jun 2020 11:34:42 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Open Item: Should non-text EXPLAIN always show properties?" 
}, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Jun 25, 2020 at 8:42 AM James Coleman <jtc331@gmail.com> wrote:\n>> Yesterday I'd replied [1] to Justin's proposal for this WRT\n>> incremental sort and expressed my opinion that including both\n>> unnecessarily (i.e., including disk when an in-memory sort was used)\n>> is undesirable and confusing and leads to shortcuts I believe to be\n>> bad habits when using the data programmatically.\n\n> +1.\n\nI think the policy about non-text output formats is \"all applicable\nfields should be included automatically\". But the key word there is\n\"applicable\". Are disk-sort numbers applicable when no disk sort\nhappened?\n\nI think the right way to think about this is that we are building\nan output data structure according to a schema that should be fixed\nfor any particular plan shape. If event X happened zero times in\na given execution, but it could have happened in a different execution\nof the same plan, then we should print X with a zero count. If X\ncould not happen period in this plan, we should omit X's entry.\n\nSo the real question here is whether the disk vs memory decision is\nplan time vs run time. AFAIK it's run time, which leads me to think\nwe ought to print the zeroes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Jun 2020 12:33:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Open Item: Should non-text EXPLAIN always show properties?" 
}, { "msg_contents": "On Thu, Jun 25, 2020 at 12:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Thu, Jun 25, 2020 at 8:42 AM James Coleman <jtc331@gmail.com> wrote:\n> >> Yesterday I'd replied [1] to Justin's proposal for this WRT\n> >> incremental sort and expressed my opinion that including both\n> >> unnecessarily (i.e., including disk when an in-memory sort was used)\n> >> is undesirable and confusing and leads to shortcuts I believe to be\n> >> bad habits when using the data programmatically.\n>\n> > +1.\n>\n> I think the policy about non-text output formats is \"all applicable\n> fields should be included automatically\". But the key word there is\n> \"applicable\". Are disk-sort numbers applicable when no disk sort\n> happened?\n>\n> I think the right way to think about this is that we are building\n> an output data structure according to a schema that should be fixed\n> for any particular plan shape. If event X happened zero times in\n> a given execution, but it could have happened in a different execution\n> of the same plan, then we should print X with a zero count. If X\n> could not happen period in this plan, we should omit X's entry.\n>\n> So the real question here is whether the disk vs memory decision is\n> plan time vs run time. AFAIK it's run time, which leads me to think\n> we ought to print the zeroes.\n\nDo we print zeroes for memory usage when all sorts ended up spilling\nto disk then? That might be the current behavior; I'd have to check.\nBecause that's a lie, but we don't have any better information\ncurrently (which is unfortunate, but hardly in scope for fixing here.)\n\nJames\n\n\n", "msg_date": "Thu, 25 Jun 2020 15:29:17 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Open Item: Should non-text EXPLAIN always show properties?" 
}, { "msg_contents": "James Coleman <jtc331@gmail.com> writes:\n> On Thu, Jun 25, 2020 at 12:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think the right way to think about this is that we are building\n>> an output data structure according to a schema that should be fixed\n>> for any particular plan shape. If event X happened zero times in\n>> a given execution, but it could have happened in a different execution\n>> of the same plan, then we should print X with a zero count. If X\n>> could not happen period in this plan, we should omit X's entry.\n\n> Do we print zeroes for memory usage when all sorts ended up spilling\n> to disk then?\n\nI did not claim that the pre-existing code adheres to this model\ncompletely faithfully ;-). But we ought to have a clear mental\npicture of what it is we're trying to achieve. If you don't like\nthe above design, propose a different one.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Jun 2020 15:46:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Open Item: Should non-text EXPLAIN always show properties?" }, { "msg_contents": "On Fri, 26 Jun 2020 at 04:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Thu, Jun 25, 2020 at 8:42 AM James Coleman <jtc331@gmail.com> wrote:\n> >> Yesterday I'd replied [1] to Justin's proposal for this WRT\n> >> incremental sort and expressed my opinion that including both\n> >> unnecessarily (i.e., including disk when an in-memory sort was used)\n> >> is undesirable and confusing and leads to shortcuts I believe to be\n> >> bad habits when using the data programmatically.\n>\n> > +1.\n>\n> I think the policy about non-text output formats is \"all applicable\n> fields should be included automatically\". But the key word there is\n> \"applicable\". 
Are disk-sort numbers applicable when no disk sort\n> happened?\n>\n> I think the right way to think about this is that we are building\n> an output data structure according to a schema that should be fixed\n> for any particular plan shape. If event X happened zero times in\n> a given execution, but it could have happened in a different execution\n> of the same plan, then we should print X with a zero count. If X\n> could not happen period in this plan, we should omit X's entry.\n>\n> So the real question here is whether the disk vs memory decision is\n> plan time vs run time. AFAIK it's run time, which leads me to think\n> we ought to print the zeroes.\n\nI think that's a pretty good way of thinking about it.\n\nFor the HashAgg case, the plan could end up spilling, so based on what\nyou've said, we should be printing those zeros as some other execution\nof the same plan could spill.\n\nIf nobody objects to that very soon, then I'll go ahead and push the\nchanges for HashAgg's non-text EXPLAIN ANALYZE\n\nDavid\n\n\n", "msg_date": "Tue, 30 Jun 2020 11:22:02 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Open Item: Should non-text EXPLAIN always show properties?" }, { "msg_contents": "On Thu, 25 Jun 2020 at 21:15, David Rowley <dgrowleyml@gmail.com> wrote:\n> I've attached a small patch that changes the Hash Aggregate behaviour\n> to always show these properties for non-text formats.\n\nI've pushed this change for HashAgg only and marked the open item as\ncompleted for hash agg. I'll leave it up to Justin, Tomas and James\nto decide what to do with the incremental sort EXPLAIN open item.\n\nDavid\n\n\n", "msg_date": "Wed, 1 Jul 2020 12:22:44 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Open Item: Should non-text EXPLAIN always show properties?" 
}, { "msg_contents": "On Thu, Jun 25, 2020 at 08:41:43AM -0400, James Coleman wrote:\n> On Thu, Jun 25, 2020 at 5:15 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > Over on [1] Justin mentions that the non-text EXPLAIN ANALYZE should\n> > always show the \"Disk Usage\" and \"HashAgg Batches\" properties. I\n> > agree with this. show_wal_usage() is a good example of how we normally\n> > do things. We try to keep the text format as humanly readable as\n> > possible but don't really expect humans to be commonly reading the\n> > other supported formats, so we care less about including additional\n> > details there.\n> >\n> > There's also an open item regarding this for Incremental Sort, so I've\n> > CC'd James and Tomas here. This seems like a good place to discuss\n> > both.\n> \n> Yesterday I'd replied [1] to Justin's proposal for this WRT\n> incremental sort and expressed my opinion that including both\n> unnecessarily (i.e., including disk when an in-memory sort was used)\n> is undesirable and confusing and leads to shortcuts I believe to be\n> bad habits when using the data programmatically.\n\nI have gone back and forth about this.\n\nThe current non-text output for Incremental Sort is like:\n\n Sort Methods Used: +\n - \"quicksort\" +\n Sort Space Memory: +\n Average Sort Space Used: 26 +\n Peak Sort Space Used: 26 +\n\nexplain.c determines whether to output in non-text mode by checking:\n| if (groupInfo->maxDiskSpaceUsed > 0)\n\nWhich I think is per se wrong. Either it should use a test like:\n| if (groupInfo->sortMethods & SORT_TYPE_QUICKSORT != 0)\nor it should output the \"Sort Space\" unconditionally.\n\nIt does seem wrong if Incr Sort says \"Sort Space Disk / Average: 0, Peak: 0\"\nwhen there was no disk sort at all, and it wasn't listed as a \"Sort Method\".\n\nOn the other hand, that's determined during execution, right? (Based on things\nlike table content and table order and tuple width) ? 
David's argument in\nmaking the HashAgg's explain output unconditionally show Disk/Batch was that\nthis is not known until execution time (based on table content).\n\nHashAgg now shows:\n\nSET work_mem='64 MB'; explain(format yaml, analyze) SELECT a ,COUNT(1) FROM generate_series(1,99999)a GROUP BY 1;\n...\n Disk Usage: 0 +\n HashAgg Batches: 0 +\n\nSo I think I still think incr sort should do the same, showing Disk:0.\n\nOtherwise, I think it should at least use a test like this, rather than (DiskSpaceUsed > 0):\n| if (groupInfo->sortMethods & SORT_TYPE_QUICKSORT != 0)\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 23 Jul 2020 09:14:54 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Open Item: Should non-text EXPLAIN always show properties?" } ]
[ { "msg_contents": "Hi,\n\nSomeone contacted me about increasing CUBE_MAX_DIM\nin contrib/cube/cubedata.h (in the community RPMs). The current value\nis 100 with the following comment:\n\n* This limit is pretty arbitrary, but don't make it so large that you\n* risk overflow in sizing calculations.\n\n\nThey said they use 500, and never had a problem. I never added such patches to the RPMS, and will not -- but wanted to ask if we can safely increase it in upstream?\n\nRegards,\n\n-- \nDevrim Gündüz\nOpen Source Solution Architect, Red Hat Certified Engineer\nTwitter: @DevrimGunduz , @DevrimGunduzTR", "msg_date": "Thu, 25 Jun 2020 11:00:55 +0100", "msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org>", "msg_from_op": true, "msg_subject": "CUBE_MAX_DIM" }, { "msg_contents": "Hello,\n\nThe problem with higher dimension cubes is that starting with\ndimensionality of ~52 the \"distance\" metrics in 64-bit float have less than\na single bit per dimension in mantissa, making cubes indistinguishable.\nDevelopers for facial recognition software had a chat about that on russian\npostgres telegram group https://t.me/pgsql. Their problem was that they had\n128-dimensional points, recompiled postgres - distances weren't helpful,\nand GIST KNN severely degraded to almost full scans. They had to change the\nnumber of facial features to smaller in order to make KNN search work.\n\nFloating point overflow isn't that much of a risk per se, worst\ncase scenario it becomes an Infinity or 0 which are usually acceptable in\nthose contexts.\n\nWhile mathematically possible, there are implementation issues with higher\ndimension cubes. I'm ok with raising the limit if such nuances get a\nmention in docs.\n\nOn Thu, Jun 25, 2020 at 1:01 PM Devrim Gündüz <devrim@gunduz.org> wrote:\n\n>\n> Hi,\n>\n> Someone contacted me about increasing CUBE_MAX_DIM\n> in contrib/cube/cubedata.h (in the community RPMs). 
The current value\n> is 100 with the following comment:\n>\n> * This limit is pretty arbitrary, but don't make it so large that you\n> * risk overflow in sizing calculations.\n>\n>\n> They said they use 500, and never had a problem. I never added such\n> patches to the RPMS, and will not -- but wanted to ask if we can safely\n> increase it in upstream?\n>\n> Regards,\n>\n> --\n> Devrim Gündüz\n> Open Source Solution Architect, Red Hat Certified Engineer\n> Twitter: @DevrimGunduz , @DevrimGunduzTR\n>\n\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa", "msg_date": "Thu, 25 Jun 2020 16:31:36 +0300", "msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>", "msg_from_op": false, "msg_subject": "Re: CUBE_MAX_DIM" }, { "msg_contents": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org> writes:\n> Someone contacted me about increasing CUBE_MAX_DIM\n> in contrib/cube/cubedata.h (in the community RPMs). The current value\n> is 100 with the following comment:\n\n> * This limit is pretty arbitrary, but don't make it so large that you\n> * risk overflow in sizing calculations.\n\n> They said they use 500, and never had a problem.\n\nI guess I'm wondering what's the use-case. 100 already seems an order of\nmagnitude more than anyone could want. Or, if it's not enough, why does\nraising the limit just 5x enable any large set of new applications?\n\nThe practical issue here is that, since the data requires 16 bytes per\ndimension (plus a little bit of overhead), we'd be talking about\nincreasing the maximum size of a cube field from ~ 1600 bytes to ~ 8000\nbytes. And cube is not toastable, so that couldn't be compressed or\nshoved out-of-line. Maybe your OP never had a problem with it, but\nplenty of use-cases would have \"tuple too large\" failures due to not\nhaving room on a heap page for whatever other data they want in the row.\n\nEven a non-toastable 2KB field is going to give the tuple toaster\nalgorithm problems, as it'll end up shoving every other toastable field\nout-of-line in an ultimately vain attempt to bring the tuple size below\n2KB. 
So I'm really quite hesitant to raise CUBE_MAX_DIM much past where\nit is now without any other changes.\n\nA more credible proposal would be to make cube toast-aware and then\nraise the limit to ~1GB ... but that would take a significant amount\nof work, and we still haven't got a use-case justifying it.\n\nI think I'd counsel storing such data as plain float8 arrays, which\ndo have the necessary storage infrastructure. Is there something\nabout the cube operators that's particularly missing?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Jun 2020 11:03:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CUBE_MAX_DIM" }, { "msg_contents": "> \n> Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org> writes:\n> > Someone contacted me about increasing CUBE_MAX_DIM\n> > in contrib/cube/cubedata.h (in the community RPMs). The current value\n> > is 100 with the following comment:\n> \n> > * This limit is pretty arbitrary, but don't make it so large that you\n> > * risk overflow in sizing calculations.\n> \n> > They said they use 500, and never had a problem.\n> \n> I guess I'm wondering what's the use-case. 100 already seems an order of\n> magnitude more than anyone could want. Or, if it's not enough, why does\n> raising the limit just 5x enable any large set of new applications?\n\nThe dimensionality of embeddings generated by deep neural networks can be high. \nGoogle BERT has 768 dimensions for example.\n\nI know that Cube in it's current form isn't suitable for nearest-neighbour searching these vectors in their raw form (I have tried recompilation with higher CUBE_MAX_DIM myself), but conceptually kNN GiST searches using Cubes can be useful for these applications. 
There are other pre-processing techniques that can be used to improved the speed of the search, but it still ends up with a kNN search in a high-ish dimensional space.\n\n> The practical issue here is that, since the data requires 16 bytes per\n> dimension (plus a little bit of overhead), we'd be talking about\n> increasing the maximum size of a cube field from ~ 1600 bytes to ~ 8000\n> bytes. And cube is not toastable, so that couldn't be compressed or\n> shoved out-of-line. Maybe your OP never had a problem with it, but\n> plenty of use-cases would have \"tuple too large\" failures due to not\n> having room on a heap page for whatever other data they want in the row.\n> \n> Even a non-toastable 2KB field is going to give the tuple toaster\n> algorithm problems, as it'll end up shoving every other toastable field\n> out-of-line in an ultimately vain attempt to bring the tuple size below\n> 2KB. So I'm really quite hesitant to raise CUBE_MAX_DIM much past where\n> it is now without any other changes.\n> \n> A more credible proposal would be to make cube toast-aware and then\n> raise the limit to ~1GB ... but that would take a significant amount\n> of work, and we still haven't got a use-case justifying it.\n> \n> I think I'd counsel storing such data as plain float8 arrays, which\n> do have the necessary storage infrastructure. 
Is there something\n> about the cube operators that's particularly missing?\n> \n\nThe indexable nearest-neighbour searches are one of the great cube features not available with float8 arrays.\n\n> regards, tom lane\n\nBest regards, \nAlastair\n\n\n\n\n\n\n\n", "msg_date": "Thu, 25 Jun 2020 16:31:21 +0000", "msg_from": "Alastair McKinley <a.mckinley@analyticsengines.com>", "msg_from_op": false, "msg_subject": "Re: CUBE_MAX_DIM" }, { "msg_contents": "Alastair McKinley <a.mckinley@analyticsengines.com> writes:\n> I know that Cube in it's current form isn't suitable for nearest-neighbour searching these vectors in their raw form (I have tried recompilation with higher CUBE_MAX_DIM myself), but conceptually kNN GiST searches using Cubes can be useful for these applications. There are other pre-processing techniques that can be used to improved the speed of the search, but it still ends up with a kNN search in a high-ish dimensional space.\n\nIs there a way to fix the numerical instability involved? If we could do\nthat, then we'd definitely have a use-case justifying the work to make\ncube toastable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Jun 2020 12:43:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CUBE_MAX_DIM" }, { "msg_contents": "> From: Tom Lane <tgl@sss.pgh.pa.us>\n> Sent: 25 June 2020 17:43\n> \n> Alastair McKinley <a.mckinley@analyticsengines.com> writes:\n> > I know that Cube in it's current form isn't suitable for nearest-neighbour searching these vectors in their raw form (I have tried recompilation with higher CUBE_MAX_DIM myself), but conceptually kNN GiST searches using Cubes can be useful for these applications. There are other pre-processing techniques that can be used to improved the speed of the search, but it still ends up with a kNN search in a high-ish dimensional space.\n> \n> Is there a way to fix the numerical instability involved? 
If we could do\n> that, then we'd definitely have a use-case justifying the work to make\n> cube toastable.\n\nI am not that familiar with the nature of the numerical instability, but it might be worth noting for additional context that for the NN use case:\n\n- The value of each dimension is likely to be between 0 and 1 \n- The L1 distance is meaningful for high numbers of dimensions, which *possibly* suffers less from the numeric issues than euclidean distance.\n\nThe numerical stability isn't the only issue for high dimensional kNN, the GiST search performance currently degrades with increasing N towards sequential scan performance, although maybe they are related?\n\n> regards, tom lane\n\nBest regards, \nAlastair\n\n", "msg_date": "Thu, 25 Jun 2020 20:47:30 +0000", "msg_from": "Alastair McKinley <a.mckinley@analyticsengines.com>", "msg_from_op": false, "msg_subject": "Re: CUBE_MAX_DIM" } ]
[ { "msg_contents": "Hello,\n\nA one line change to remove a duplicate check. This duplicate check was detected during testing my contribution to a static code analysis tool. There is no functional change, no new tests needed.\n\nRegards,\n\nÁdám Balogh\nCodeChecker Team\nEricsson Hungary", "msg_date": "Thu, 25 Jun 2020 15:27:14 +0000", "msg_from": "=?iso-8859-1?Q?=C1d=E1m_Balogh?= <adam.balogh@ericsson.com>", "msg_from_op": true, "msg_subject": "Remove a redundant condition check" }, { "msg_contents": "On Thu, Jun 25, 2020 at 11:23 PM Ádám Balogh <adam.balogh@ericsson.com> wrote:\n>\n>\n> A one line change to remove a duplicate check. This duplicate check was detected during testing my contribution to a static code analysis tool. There is no functional change, no new tests needed.\n>\n>\n\nYeah, this duplicate check is added as part of commit b2a5545bd6. See\nbelow part of change.\n\n- /*\n- * If this record was a timeline switch, wake up any\n- * walsenders to notice that we are on a new timeline.\n- */\n- if (switchedTLI && AllowCascadeReplication())\n- WalSndWakeup();\n+ /* Is this a timeline switch? */\n+ if (switchedTLI)\n+ {\n+ /*\n+ * Before we continue on the new timeline, clean up any\n+ * (possibly bogus) future WAL segments on the old timeline.\n+ */\n+ RemoveNonParentXlogFiles(EndRecPtr, ThisTimeLineID);\n+\n+ /*\n+ * Wake up any walsenders to notice that we are on a new\n+ * timeline.\n+ */\n+ if (switchedTLI && AllowCascadeReplication())\n+ WalSndWakeup();\n+ }\n\nIt seems we forgot to remove the additional check for switchedTLI\nwhile adding a new check. I think we can remove this duplicate check\nin the HEAD code. 
I am not sure if it is worth to backpatch such a\nchange.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Jun 2020 14:39:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove a redundant condition check" }, { "msg_contents": "On Fri, Jun 26, 2020 at 02:39:22PM +0530, Amit Kapila wrote:\n> It seems we forgot to remove the additional check for switchedTLI\n> while adding a new check. I think we can remove this duplicate check\n> in the HEAD code. I am not sure if it is worth to backpatch such a\n> change.\n\nYes, there is no point to keep this check so let's clean up this\ncode. I also see no need to do a backpatch here, this is purely\ncosmetic.\n--\nMichael", "msg_date": "Fri, 26 Jun 2020 19:02:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove a redundant condition check" }, { "msg_contents": "Hello,\r\n\r\n-----Original Message-----\r\nFrom: Amit Kapila <amit.kapila16@gmail.com> \r\nSent: 2020. június 26., péntek 11:09\r\nTo: Ádám Balogh <adam.balogh@ericsson.com>\r\nCc: PostgreSQL Hackers <pgsql-hackers@postgresql.org>\r\nSubject: Re: Remove a redundant condition check\r\n\r\n>On Thu, Jun 25, 2020 at 11:23 PM Ádám Balogh <adam.balogh@ericsson.com> wrote:\r\n>>\r\n>>\r\n>> A one line change to remove a duplicate check. This duplicate check was detected during testing my contribution to a static code analysis tool. There is no functional change, no new tests needed.\r\n>\r\n> Yeah, this duplicate check is added as part of commit b2a5545bd6. See below part of change.\r\n>\r\n> - /*\r\n> - * If this record was a timeline switch, wake up any\r\n> - * walsenders to notice that we are on a new timeline.\r\n> - */\r\n> - if (switchedTLI && AllowCascadeReplication())\r\n> - WalSndWakeup();\r\n> + /* Is this a timeline switch? 
*/\r\n> + if (switchedTLI)\r\n> + {\r\n> + /*\r\n> + * Before we continue on the new timeline, clean up any\r\n> + * (possibly bogus) future WAL segments on the old timeline.\r\n> + */\r\n> + RemoveNonParentXlogFiles(EndRecPtr, ThisTimeLineID);\r\n> +\r\n> + /*\r\n> + * Wake up any walsenders to notice that we are on a new\r\n> + * timeline.\r\n> + */\r\n> + if (switchedTLI && AllowCascadeReplication()) WalSndWakeup(); }\r\n>\r\n> It seems we forgot to remove the additional check for switchedTLI while adding a new check. I think we can remove this duplicate > > check in the HEAD code. I am not sure if it is worth to backpatch such a change.\r\n\r\nThank you for confirming it. I do not think it is worth to backpatch, it is just a readability issue. \r\nRegards,\r\n\r\nÁdám\r\n\r\n", "msg_date": "Fri, 26 Jun 2020 10:43:48 +0000", "msg_from": "=?utf-8?B?w4Fkw6FtIEJhbG9naA==?= <adam.balogh@ericsson.com>", "msg_from_op": false, "msg_subject": "RE: Remove a redundant condition check" }, { "msg_contents": "Em sex., 26 de jun. de 2020 às 06:09, Amit Kapila <amit.kapila16@gmail.com>\nescreveu:\n\n> On Thu, Jun 25, 2020 at 11:23 PM Ádám Balogh <adam.balogh@ericsson.com>\n> wrote:\n> >\n> >\n> > A one line change to remove a duplicate check. This duplicate check was\n> detected during testing my contribution to a static code analysis tool.\n> There is no functional change, no new tests needed.\n> >\n> >\n>\n> Yeah, this duplicate check is added as part of commit b2a5545bd6. See\n> below part of change.\n>\n> - /*\n> - * If this record was a timeline switch, wake up any\n> - * walsenders to notice that we are on a new timeline.\n> - */\n> - if (switchedTLI && AllowCascadeReplication())\n> - WalSndWakeup();\n> + /* Is this a timeline switch? 
*/\n> + if (switchedTLI)\n> + {\n> + /*\n> + * Before we continue on the new timeline, clean up any\n> + * (possibly bogus) future WAL segments on the old timeline.\n> + */\n> + RemoveNonParentXlogFiles(EndRecPtr, ThisTimeLineID);\n> +\n> + /*\n> + * Wake up any walsenders to notice that we are on a new\n> + * timeline.\n> + */\n> + if (switchedTLI && AllowCascadeReplication())\n> + WalSndWakeup();\n> + }\n>\n> It seems we forgot to remove the additional check for switchedTLI\n> while adding a new check. I think we can remove this duplicate check\n> in the HEAD code. I am not sure if it is worth to backpatch such a\n> change.\n>\n+1\nGreat to know, that this is finally going to be fixed. (1)\n\nregards,\nRanier Vilela\n1.\nhttps://www.postgresql.org/message-id/CAEudQAocMqfqt0t64HNo39Z73jMey60WmeryB%2BWFDg3BZpCf%3Dg%40mail.gmail.com\n\nEm sex., 26 de jun. de 2020 às 06:09, Amit Kapila <amit.kapila16@gmail.com> escreveu:On Thu, Jun 25, 2020 at 11:23 PM Ádám Balogh <adam.balogh@ericsson.com> wrote:\n>\n>\n> A one line change to remove a duplicate check. This duplicate check was detected during testing my contribution to a static code analysis tool. There is no functional change, no new tests needed.\n>\n>\n\nYeah, this duplicate check is added as part of commit b2a5545bd6.  See\nbelow part of change.\n\n- /*\n- * If this record was a timeline switch, wake up any\n- * walsenders to notice that we are on a new timeline.\n- */\n- if (switchedTLI && AllowCascadeReplication())\n- WalSndWakeup();\n+ /* Is this a timeline switch? 
*/\n+ if (switchedTLI)\n+ {\n+ /*\n+ * Before we continue on the new timeline, clean up any\n+ * (possibly bogus) future WAL segments on the old timeline.\n+ */\n+ RemoveNonParentXlogFiles(EndRecPtr, ThisTimeLineID);\n+\n+ /*\n+ * Wake up any walsenders to notice that we are on a new\n+ * timeline.\n+ */\n+ if (switchedTLI && AllowCascadeReplication())\n+ WalSndWakeup();\n+ }\n\nIt seems we forgot to remove the additional check for switchedTLI\nwhile adding a new check.  I think we can remove this duplicate check\nin the HEAD code.  I am not sure if it is worth to backpatch such a\nchange.+1Great to know, that this is finally going to be fixed. (1)regards,Ranier Vilela1. https://www.postgresql.org/message-id/CAEudQAocMqfqt0t64HNo39Z73jMey60WmeryB%2BWFDg3BZpCf%3Dg%40mail.gmail.com", "msg_date": "Fri, 26 Jun 2020 08:14:55 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove a redundant condition check" }, { "msg_contents": "On Fri, Jun 26, 2020 at 3:32 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jun 26, 2020 at 02:39:22PM +0530, Amit Kapila wrote:\n> > It seems we forgot to remove the additional check for switchedTLI\n> > while adding a new check. I think we can remove this duplicate check\n> > in the HEAD code. I am not sure if it is worth to backpatch such a\n> > change.\n>\n> Yes, there is no point to keep this check so let's clean up this\n> code. I also see no need to do a backpatch here, this is purely\n> cosmetic.\n>\n\nThanks for the confirmation, pushed!\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 27 Jun 2020 10:52:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove a redundant condition check" } ]
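The cleanup pushed in this thread boils down to dropping a retest of `switchedTLI` that the enclosing branch already guarantees. A minimal sketch of the resulting shape, with stand-in names (`on_record_replayed`, `wal_snd_wakeup`, `allow_cascade`) rather than the real xlog.c symbols:

```c
#include <stdbool.h>

/* Stand-ins for the real xlog.c state; illustration only. */
bool        allow_cascade;  /* plays the role of AllowCascadeReplication() */
int         wakeups;        /* counts wal_snd_wakeup() calls */

void
wal_snd_wakeup(void)
{
    wakeups++;
}

/*
 * Shape of the code after the cleanup: the outer test already
 * establishes switchedTLI, so retesting it in the inner condition
 * (as the pre-cleanup code from commit b2a5545bd6 did) was redundant.
 */
void
on_record_replayed(bool switchedTLI)
{
    if (switchedTLI)
    {
        /* ... RemoveNonParentXlogFiles(...) would run here ... */
        if (allow_cascade)      /* was: switchedTLI && allow_cascade */
            wal_snd_wakeup();
    }
}
```

The behavior is identical either way, which is why the thread treats this as a purely cosmetic, HEAD-only change.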
[ { "msg_contents": "The bit/varbit type input functions cause file_fdw to fail to read the \nlogfile normally.\n\n1. Server conf:\n server_encoding = UTF8\n locale = zh_CN.UTF-8\n\n2. Create external tables using file_fdw\n\nCREATE EXTENSION file_fdw;\nCREATE SERVER pglog FOREIGN DATA WRAPPER file_fdw;\n\nCREATE FOREIGN TABLE pglog (\n log_time timestamp(3) with time zone,\n user_name text,\n database_name text,\n process_id integer,\n connection_from text,\n session_id text,\n session_line_num bigint,\n command_tag text,\n session_start_time timestamp with time zone,\n virtual_transaction_id text,\n transaction_id bigint,\n error_severity text,\n sql_state_code text,\n message text,\n detail text,\n hint text,\n internal_query text,\n internal_query_pos integer,\n context text,\n query text,\n query_pos integer,\n location text,\n application_name text\n) SERVER pglog\nOPTIONS ( filename 'log/postgresql-2020-06-16_213409.csv',\n format 'csv');\n\nIt's normal to be here.\n\n3. bit/varbit input\n select b'Ù';\n\nThe foreign table cannot be accessed. SELECT * FROM pglog will get:\ninvalid byte sequence for encoding \"UTF8\": 0xc3 0x22\n\n\nThe reason is that the error message in the bit_in / varbit_in function \nis output directly using %c. Causes the log file to not be decoded \ncorrectly.\n\nThe attachment is a patch.", "msg_date": "Fri, 26 Jun 2020 14:44:40 +0800", "msg_from": "Quan Zongliang <quanzongliang@gmail.com>", "msg_from_op": true, "msg_subject": "bugfix: invalid bit/varbit input causes the log file to be unreadable" }, { "msg_contents": "Quan Zongliang <quanzongliang@gmail.com> writes:\n> The reason is that the error message in the bit_in / varbit_in function \n> is output directly using %c. Causes the log file to not be decoded \n> correctly.\n\n> The attachment is a patch.\n\nI'm really quite skeptical of the premise here. 
We do not guarantee that\nthe postmaster log file is valid in any particular encoding; it'd be\nnearly impossible to do so if the cluster contains databases using\ndifferent encodings. So I think you'd be way better off to reformulate\nyour log-reading code to be less fragile.\n\nEven granting the premise, the proposed patch seems like a significant\ndecrease in user-friendliness for typical cases. I'd rather see us\nmake an effort to print one valid-per-the-DB-encoding character.\nNow that we can rely on snprintf to count %s restrictions in bytes,\nI think something like this should work:\n\n errmsg(\"\\\"%.*s\\\" is not a valid binary digit\",\n pg_mblen(sp), sp)));\n\nBut the real problem is that this is only the tip of the iceberg.\nYou didn't even hit all the %c usages in varbit.c. A quick grep finds\nthese other spots that can doubtless be made to do the same thing:\n\nacl.c:899:\t\t\telog(ERROR, \"unrecognized objtype abbreviation: %c\", objtypec);\narrayfuncs.c:507:\t\t\t\t\t\t\t\t errdetail(\"Unexpected \\\"%c\\\" character.\",\narrayfuncs.c:554:\t\t\t\t\t\t\t\t\t errdetail(\"Unexpected \\\"%c\\\" character.\",\narrayfuncs.c:584:\t\t\t\t\t\t\t\t\t errdetail(\"Unexpected \\\"%c\\\" character.\",\narrayfuncs.c:591:\t\t\t\t\t\t\t\t\t errdetail(\"Unmatched \\\"%c\\\" character.\", '}')));\narrayfuncs.c:633:\t\t\t\t\t\t\t\t\t\t errdetail(\"Unexpected \\\"%c\\\" character.\",\nencode.c:184:\t\t\t\t errmsg(\"invalid hexadecimal digit: \\\"%c\\\"\", c)));\nencode.c:341:\t\t\t\t\t\t errmsg(\"invalid symbol \\\"%c\\\" while decoding base64 sequence\", (int) c)));\nformatting.c:3298:\t\t\t\t\t\t\t\t\t\t errmsg(\"unmatched format separator \\\"%c\\\"\",\njsonpath_gram.c:2390:\t\t\t\t\t\t errdetail(\"unrecognized flag character \\\"%c\\\" in LIKE_REGEX predicate\",\nregexp.c:426:\t\t\t\t\t\t\t errmsg(\"invalid regular expression option: \\\"%c\\\"\",\ntsvector_op.c:312:\t\t\telog(ERROR, \"unrecognized weight: %c\", char_weight);\ntsvector_op.c:872:\t\t\t\t\t\t 
errmsg(\"unrecognized weight: \\\"%c\\\"\", char_weight)));\nvarbit.c:233:\t\t\t\t\t\t errmsg(\"\\\"%c\\\" is not a valid binary digit\",\nvarbit.c:258:\t\t\t\t\t\t errmsg(\"\\\"%c\\\" is not a valid hexadecimal digit\",\nvarbit.c:534:\t\t\t\t\t\t errmsg(\"\\\"%c\\\" is not a valid binary digit\",\nvarbit.c:559:\t\t\t\t\t\t errmsg(\"\\\"%c\\\" is not a valid hexadecimal digit\",\nvarlena.c:5589:\t\t\t\t\t errmsg(\"unrecognized format() type specifier \\\"%c\\\"\",\nvarlena.c:5710:\t\t\t\t\t\t errmsg(\"unrecognized format() type specifier \\\"%c\\\"\",\n\nand that's just in src/backend/utils/adt/.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Jun 2020 12:45:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: bugfix: invalid bit/varbit input causes the log file to be\n unreadable" }, { "msg_contents": "I wrote:\n> Even granting the premise, the proposed patch seems like a significant\n> decrease in user-friendliness for typical cases. I'd rather see us\n> make an effort to print one valid-per-the-DB-encoding character.\n> Now that we can rely on snprintf to count %s restrictions in bytes,\n> I think something like this should work:\n> errmsg(\"\\\"%.*s\\\" is not a valid binary digit\",\n> pg_mblen(sp), sp)));\n> But the real problem is that this is only the tip of the iceberg.\n> You didn't even hit all the %c usages in varbit.c.\n\nI went through all the %c format sequences in the backend to see which\nones could use this type of fix. There were not as many as I'd expected,\nbut still a fair number. (I skipped cases where the input was coming from\nthe catalogs, as well as some non-user-facing debug printouts.) 
That\nleads to the attached patch, which seems to do the job without breaking\nanything that works today.\n\n\t\t\tregards, tom lane\n\nPS: I failed to resist the temptation to improve some shoddy error\nmessages nearby in pageinspect/heapfuncs.c.", "msg_date": "Sun, 28 Jun 2020 13:10:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: bugfix: invalid bit/varbit input causes the log file to be\n unreadable" }, { "msg_contents": "Good.\n\nI tested it, and it looks fine.\n\nThank you.\n\n\nOn 2020/6/29 1:10 上午, Tom Lane wrote:\n> I wrote:\n>> Even granting the premise, the proposed patch seems like a significant\n>> decrease in user-friendliness for typical cases. I'd rather see us\n>> make an effort to print one valid-per-the-DB-encoding character.\n>> Now that we can rely on snprintf to count %s restrictions in bytes,\n>> I think something like this should work:\n>> errmsg(\"\\\"%.*s\\\" is not a valid binary digit\",\n>> pg_mblen(sp), sp)));\n>> But the real problem is that this is only the tip of the iceberg.\n>> You didn't even hit all the %c usages in varbit.c.\n> I went through all the %c format sequences in the backend to see which\n> ones could use this type of fix. There were not as many as I'd expected,\n> but still a fair number. (I skipped cases where the input was coming from\n> the catalogs, as well as some non-user-facing debug printouts.) That\n> leads to the attached patch, which seems to do the job without breaking\n> anything that works today.\n>\n> \t\t\tregards, tom lane\n>\n> PS: I failed to resist the temptation to improve some shoddy error\n> messages nearby in pageinspect/heapfuncs.c.\n>\n\n\n\n\n\n\nGood.\nI tested\n it, and it looks fine.\nThank you.\n\n\n\n\nOn 2020/6/29 1:10 上午, Tom Lane wrote:\n\n\nI wrote:\n\n\nEven granting the premise, the proposed patch seems like a significant\ndecrease in user-friendliness for typical cases. 
I'd rather see us\nmake an effort to print one valid-per-the-DB-encoding character.\nNow that we can rely on snprintf to count %s restrictions in bytes,\nI think something like this should work:\n errmsg(\"\\\"%.*s\\\" is not a valid binary digit\",\n pg_mblen(sp), sp)));\nBut the real problem is that this is only the tip of the iceberg.\nYou didn't even hit all the %c usages in varbit.c.\n\n\n\nI went through all the %c format sequences in the backend to see which\nones could use this type of fix. There were not as many as I'd expected,\nbut still a fair number. (I skipped cases where the input was coming from\nthe catalogs, as well as some non-user-facing debug printouts.) That\nleads to the attached patch, which seems to do the job without breaking\nanything that works today.\n\n\t\t\tregards, tom lane\n\nPS: I failed to resist the temptation to improve some shoddy error\nmessages nearby in pageinspect/heapfuncs.c.", "msg_date": "Mon, 29 Jun 2020 18:45:47 +0800", "msg_from": "Quan Zongliang <quanzongliang@gmail.com>", "msg_from_op": true, "msg_subject": "Re: bugfix: invalid bit/varbit input causes the log file to be\n unreadable" }, { "msg_contents": "Quan Zongliang <quanzongliang@gmail.com> writes:\n> I tested it, and it looks fine.\n\nPushed, thanks for reporting the issue!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Jun 2020 11:42:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: bugfix: invalid bit/varbit input causes the log file to be\n unreadable" }, { "msg_contents": "Hi, \n\nWe recently saw a similar issue in v12 and wondered why the corresponding fix for v14 (https://github.com/postgres/postgres/commit/16e3ad5d143) was not backported to v13 and before. The commit message did mention that this fix might have problem with translatable string messages - would you mind providing a bit more context about what is needed to backport this fix? 
Thank you.\n\nRegards,\nHuansong\nhttps://vmware.com/ <https://vmware.com/>\n\n> On Jun 29, 2020, at 11:42 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Quan Zongliang <quanzongliang@gmail.com> writes:\n>> I tested it, and it looks fine.\n> \n> Pushed, thanks for reporting the issue!\n> \n> \t\t\tregards, tom lane\n> \n", "msg_date": "Mon, 13 Dec 2021 14:42:11 -0500", "msg_from": "Huansong Fu <huansong.fu.info@gmail.com>", "msg_from_op": false, "msg_subject": "Re: bugfix: invalid bit/varbit input causes the log file to be\n unreadable" }, { "msg_contents": "Huansong Fu <huansong.fu.info@gmail.com> writes:\n> We recently saw a similar issue in v12 and wondered why the corresponding fix for v14 (https://github.com/postgres/postgres/commit/16e3ad5d143) was not backported to v13 and before. The commit message did mention that this fix might have problem with translatable string messages - would you mind providing a bit more context about what is needed to backport this fix? 
Thank you.\n\nWell, the commit message lists the reasons for not back-patching:\n\n* we've seen few field complaints about such problems\n* it'd add work for translators\n* it wouldn't work reliably before v12.\n\nPerhaps there's a case for back-patching as far as v12,\nbut I can't get very excited about it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Dec 2021 15:33:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: bugfix: invalid bit/varbit input causes the log file to be\n unreadable" } ]
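The fix committed in this thread replaces `%c` with a byte-counted `%.*s`, using `pg_mblen()` so the whole multibyte character is emitted. A rough illustration of why that keeps the log decodable: the original report's `invalid byte sequence for encoding "UTF8": 0xc3 0x22` is exactly a UTF-8 lead byte (0xc3) followed by the message's closing quote (0x22), i.e. a character cut in half by `%c`. Below, `utf8_len` is a simplified UTF-8-only stand-in for the real `pg_mblen()`, and `format_bad_digit` is a hypothetical helper, not PostgreSQL code.

```c
#include <stdio.h>
#include <string.h>

/*
 * Simplified UTF-8-only stand-in for pg_mblen(): byte length of the
 * character starting at s, derived from its lead byte.
 */
int
utf8_len(const char *s)
{
    unsigned char c = (unsigned char) *s;

    if (c < 0x80)
        return 1;
    if ((c & 0xE0) == 0xC0)
        return 2;
    if ((c & 0xF0) == 0xE0)
        return 3;
    return 4;
}

/*
 * Format the error the way the committed fix does: "%.*s" with a byte
 * count emits the whole character, so the log line stays valid in the
 * database encoding; "%c" would have emitted only the first byte.
 */
void
format_bad_digit(char *buf, size_t bufsz, const char *sp)
{
    snprintf(buf, bufsz, "\"%.*s\" is not a valid binary digit",
             utf8_len(sp), sp);
}
```

Note this relies on `snprintf` counting `%s` precision in bytes, which (as Tom points out upthread) PostgreSQL can only assume as of v12, one reason the fix was not backpatched further.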
[ { "msg_contents": "Hi Hackers,\n\nThere seems to be an extra palloc of 64KB of raw_buf for binary format\nfiles which is not required\nas copy logic for binary files don't use raw_buf, instead, attribute_buf\nis used in CopyReadBinaryAttribute.\n\nAttached is a patch, which places a check to avoid this unnecessary 64KB palloc.\n\nRequest the community to take this patch, if it is useful.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 26 Jun 2020 15:16:30 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Remove Extra palloc Of raw_buf For Binary Format In COPY FROM" }, { "msg_contents": "On Fri, Jun 26, 2020 at 3:16 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> Hi Hackers,\n>\n> There seems to be an extra palloc of 64KB of raw_buf for binary format\n> files which is not required\n> as copy logic for binary files don't use raw_buf, instead, attribute_buf\n> is used in CopyReadBinaryAttribute.\n>\n\n+1\n\nI looked at the patch and the changes looked good. Couple of comments;\n\n1)\n\n+\n+ /* For binary files raw_buf is not used,\n+ * instead, attribute_buf is used in\n+ * CopyReadBinaryAttribute. 
Hence, don't palloc\n+ * raw_buf.\n+ */\n\nNot a PG style of commenting.\n\n2) In non-binary mode, should assign NULL the raw_buf.\n\nAttaching patch with those changes.\n\n\n\n> Attached is a patch, which places a check to avoid this unnecessary 64KB\n> palloc.\n>\n> Request the community to take this patch, if it is useful.\n>\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\n\nThanks,\nRushabh Lathia\nwww.EnterpriseDB.com", "msg_date": "Fri, 26 Jun 2020 18:15:02 +0530", "msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Remove Extra palloc Of raw_buf For Binary Format In COPY\n FROM" }, { "msg_contents": "On Fri, Jun 26, 2020 at 6:15 PM Rushabh Lathia <rushabh.lathia@gmail.com> wrote:\n>\n>\n>\n> On Fri, Jun 26, 2020 at 3:16 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> Hi Hackers,\n>>\n>> There seems to be an extra palloc of 64KB of raw_buf for binary format\n>> files which is not required\n>> as copy logic for binary files don't use raw_buf, instead, attribute_buf\n>> is used in CopyReadBinaryAttribute.\n>\n>\n> +1\n>\n> I looked at the patch and the changes looked good. Couple of comments;\n>\n> 1)\n>\n> +\n> + /* For binary files raw_buf is not used,\n> + * instead, attribute_buf is used in\n> + * CopyReadBinaryAttribute. Hence, don't palloc\n> + * raw_buf.\n> + */\n>\n> Not a PG style of commenting.\n>\n> 2) In non-binary mode, should assign NULL the raw_buf.\n>\n> Attaching patch with those changes.\n>\n\n+1 for the patch.\n\nOne comment:\nWe could change below code:\n+ */\n+ if (!cstate->binary)\n+ cstate->raw_buf = (char *) palloc(RAW_BUF_SIZE + 1);\n+ else\n+ cstate->raw_buf = NULL;\nto:\ncstate->raw_buf = (cstate->binary) ? 
NULL : (char *) palloc(RAW_BUF_SIZE + 1);\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 27 Jun 2020 07:05:09 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Remove Extra palloc Of raw_buf For Binary Format In COPY\n FROM" }, { "msg_contents": "Thanks Rushabh and Vignesh for the comments.\n\n>\n> One comment:\n> We could change below code:\n> + */\n> + if (!cstate->binary)\n> + cstate->raw_buf = (char *) palloc(RAW_BUF_SIZE + 1);\n> + else\n> + cstate->raw_buf = NULL;\n> to:\n> cstate->raw_buf = (cstate->binary) ? NULL : (char *) palloc(RAW_BUF_SIZE + 1);\n>\n\nAttached the patch with the above changes.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 27 Jun 2020 09:23:24 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Remove Extra palloc Of raw_buf For Binary Format In COPY\n FROM" }, { "msg_contents": "On Sat, Jun 27, 2020 at 9:23 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Thanks Rushabh and Vignesh for the comments.\n>\n> >\n> > One comment:\n> > We could change below code:\n> > + */\n> > + if (!cstate->binary)\n> > + cstate->raw_buf = (char *) palloc(RAW_BUF_SIZE + 1);\n> > + else\n> > + cstate->raw_buf = NULL;\n> > to:\n> > cstate->raw_buf = (cstate->binary) ? 
NULL : (char *) palloc(RAW_BUF_SIZE + 1);\n> >\n>\n> Attached the patch with the above changes.\n\nChanges look fine to me.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 27 Jun 2020 18:30:47 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Remove Extra palloc Of raw_buf For Binary Format In COPY\n FROM" }, { "msg_contents": "Thanks Vignesh and Rushabh for reviewing this.\n\nI've added this patch to commitfest - https://commitfest.postgresql.org/28/.\n\nRequest community take this patch further if there are no further issues.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Sat, Jun 27, 2020 at 6:30 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sat, Jun 27, 2020 at 9:23 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Thanks Rushabh and Vignesh for the comments.\n> >\n> > >\n> > > One comment:\n> > > We could change below code:\n> > > + */\n> > > + if (!cstate->binary)\n> > > + cstate->raw_buf = (char *) palloc(RAW_BUF_SIZE + 1);\n> > > + else\n> > > + cstate->raw_buf = NULL;\n> > > to:\n> > > cstate->raw_buf = (cstate->binary) ? 
NULL : (char *) palloc(RAW_BUF_SIZE + 1);\n> > >\n> >\n> > Attached the patch with the above changes.\n>\n> Changes look fine to me.\n>\n> Regards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 30 Jun 2020 14:40:56 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Remove Extra palloc Of raw_buf For Binary Format In COPY\n FROM" }, { "msg_contents": "On Tue, Jun 30, 2020 at 2:41 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Thanks Vignesh and Rushabh for reviewing this.\n>\n> I've added this patch to commitfest - https://commitfest.postgresql.org/28/.\n\nI felt this patch is ready for committer, changing the status to ready\nfor committer.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 11 Jul 2020 18:57:11 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Remove Extra palloc Of raw_buf For Binary Format In COPY\n FROM" }, { "msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n> On Tue, Jun 30, 2020 at 2:41 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> I've added this patch to commitfest - https://commitfest.postgresql.org/28/.\n\n> I felt this patch is ready for committer, changing the status to ready\n> for committer.\n\nPushed with some fiddling. 
Mainly, if we're going to the trouble of\nchecking for binary mode here, we might as well skip allocating the\nline_buf too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 11 Jul 2020 14:23:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Remove Extra palloc Of raw_buf For Binary Format In COPY\n FROM" }, { "msg_contents": "On Sat, Jul 11, 2020 at 11:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> vignesh C <vignesh21@gmail.com> writes:\n> > On Tue, Jun 30, 2020 at 2:41 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >> I've added this patch to commitfest - https://commitfest.postgresql.org/28/.\n>\n> > I felt this patch is ready for committer, changing the status to ready\n> > for committer.\n>\n> Pushed with some fiddling. Mainly, if we're going to the trouble of\n> checking for binary mode here, we might as well skip allocating the\n> line_buf too.\n>\n\nHi Tom,\n\nIsn't it good if we backpatch this to versions 13, 12, 11 and so on?\nAs we can save good amount of memory with this patch for non-binary\ncopy.\n\nAttaching the patch which applies on versions 13, 12, 11.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 18 Jul 2020 10:08:55 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Remove Extra palloc Of raw_buf For Binary Format In COPY\n FROM" }, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> On Sat, Jul 11, 2020 at 11:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Pushed with some fiddling. Mainly, if we're going to the trouble of\n>> checking for binary mode here, we might as well skip allocating the\n>> line_buf too.\n\n> Isn't it good if we backpatch this to versions 13, 12, 11 and so on?\n\nGiven the lack of complaints, I wasn't excited about it. 
Transient\nconsumption of 64K is not a huge deal these days. (And yes, I've\nworked on machines where that was the entire address space. But that\nwas a very long time ago.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 Jul 2020 01:03:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Remove Extra palloc Of raw_buf For Binary Format In COPY\n FROM" } ]
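The idea settled on in the thread above — when COPY FROM runs in binary mode, skip allocating raw_buf (and, per Tom's fiddling, line_buf too) because binary input is read field-by-field into attribute_buf — can be sketched as follows. This is a standalone toy model, not the actual copy.c code; CopyBuffers, begin_copy_from_buffers, and raw_buf_bytes_allocated are invented names for illustration (the backend would use palloc, not malloc):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

#define RAW_BUF_SIZE 65536      /* the value copy.c uses */

/* Toy stand-in for the buffer fields of CopyState. */
typedef struct CopyBuffers
{
    char *raw_buf;      /* raw input buffer; unused by binary COPY FROM */
    char *line_buf;     /* per-line buffer; also unused by binary */
} CopyBuffers;

/*
 * Sketch of the committed change: only allocate the text-mode buffers
 * when we are actually doing text/CSV COPY FROM.
 */
static void
begin_copy_from_buffers(CopyBuffers *buf, bool binary)
{
    buf->raw_buf = binary ? NULL : malloc(RAW_BUF_SIZE + 1);
    buf->line_buf = binary ? NULL : malloc(1024);
}

/* Bytes of raw_buf memory a COPY FROM would allocate in each mode. */
static size_t
raw_buf_bytes_allocated(bool binary)
{
    return binary ? 0 : (size_t) RAW_BUF_SIZE + 1;
}
```

The point of the patch is visible in the second function: each binary-mode COPY FROM simply never pays the 64K (plus line buffer) allocation.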
[ { "msg_contents": "Hi,\n\nI would like to discuss a refactoring patch that builds on top of the\npatches at [1] to address $subject. To get an idea for what\neliminating these overheads looks like, take a look at the following\nbenchmarking results.\n\nNote 1: I've forced the use of generic plan by setting plan_cache_mode\nto 'force_generic_plan'\n\nNote 2: The individual TPS figures as measured are quite noisy, though\nI just want to show the rough trend with increasing number of\npartitions.\n\npgbench -i -s 10 --partitions={0, 10, 100, 1000}\npgbench -T120 -f test.sql -M prepared\n\ntest.sql:\n\\set aid random(1, 1000000)\nupdate pgbench_accounts set abalance = abalance + 1 where aid = :aid;\n\nWithout any of the patches:\n\n0 tps = 13045.485121 (excluding connections establishing)\n10 tps = 9358.157433 (excluding connections establishing)\n100 tps = 1878.274500 (excluding connections establishing)\n1000 tps = 84.684695 (excluding connections establishing)\n\nThe slowdown as the partition count increases can be explained by the\nfact that UPDATE and DELETE can't currently use runtime partition\npruning. So, even if any given transaction is only updating a single\ntuple in a single partition, the plans for *all* partitions are being\ninitialized and also the ResultRelInfos. 
That is, a lot of useless\nwork being done in InitPlan() and ExecInitModifyTable().\n\nWith the patches at [1] (latest 0001+0002 posted there), whereby the\ngeneric plan for UPDATE can now perform runtime pruning, numbers can\nbe seen to improve, slightly:\n\n0 tps = 12743.487196 (excluding connections establishing)\n10 tps = 12644.240748 (excluding connections establishing)\n100 tps = 4158.123345 (excluding connections establishing)\n1000 tps = 391.248067 (excluding connections establishing)\n\nSo even though runtime pruning enabled by those patches ensures that\nthe useless plans are left untouched by the executor, the\nResultRelInfos are still being made assuming *all* result relations\nwill be processed. With the attached patches (0001+0002+0003) that I\nwant to discuss here in this thread, numbers are further improved:\n\n0 tps = 13419.283168 (excluding connections establishing)\n10 tps = 12588.016095 (excluding connections establishing)\n100 tps = 8560.824225 (excluding connections establishing)\n1000 tps = 1926.553901 (excluding connections establishing)\n\n0001 and 0002 are preparatory patches. 0003 teaches nodeModifyTable.c\nto make the ResultRelInfo for a given result relation lazily, that is,\nwhen the plan producing tuples to be updated/deleted actually produces\none that belongs to that relation. So, if a transaction only updates\none tuple, then only one ResultRelInfo would be made. For larger\npartition counts, that saves significant amount of work.\n\nHowever, there's one new loop in ExecInitModifyTable() added by the\npatches at [1] that loops over all partitions, which I haven't been\nable to eliminate so far and I'm seeing it cause significant\nbottleneck at higher partition counts. The loop is meant to create a\nhash table that maps result relation OIDs to their offsets in the\nPlannedStmt.resultRelations list. We need this mapping, because the\nResultRelInfos are accessed from the query-global array using that\noffset. 
One approach that was mentioned by David Rowley at [1] to not\nhave do this mapping is to make the result relation's scan node's\ntargetlist emit the relation's RT index or ordinal position to begin\nwith, instead of the table OID, but I haven't figured out a way to do\nthat.\n\nHaving taken care of the ModifyTable overheads (except the one\nmentioned in the last paragraph), a few more bottlenecks are seen to\npop up at higher partition counts. Basically, they result from doing\nsome pre-execution actions on relations contained in the plan by\ntraversing the flat range table in whole.\n\n1. AcquireExecutorLocks(): locks *all* partitions before executing the\nplan tree but runtime pruning allows to skip scanning all but one\n\n2. ExecCheckRTPerms(): checks permissions of *all* partitions before\nexecuting the plan tree, but maybe it's okay to check only the ones\nthat will be accessed\n\nProblem 1 has been discussed before and David Rowley even developed a\npatch that was discussed at [2]. The approach taken in the patch was\nto delay locking of the partitions contained in a generic plan that\nare potentially runtime pruneable, although as also described in the\nlinked thread, that approach has a race condition whereby a concurrent\nsession may invalidate the generic plan by altering a partition in the\nwindow between when AcquireExecutorLocks() runs on the plan and the\nplan is executed.\n\nAnother solution suggested to me by Robert Haas in an off-list\ndiscussion is to teach AcquireExecutorLocks() or the nearby code to\nperform EXTERN parameter based pruning before passing the plan tree to\nthe executor and lock partitions that survive that pruning. It's\nperhaps doable if we refactor the ExecFindInitialMatchingSubPlans() to\nnot require a full-blown execution context. 
Or maybe we could do\nsomething more invasive by rewriting AcquireExecutorLocks() to walk\nthe plan tree instead of the flat range table, looking for scan nodes\nand nodes that support runtime pruning to lock the appropriate\nrelations.\n\nRegarding problem 2, I wonder if we shouldn't simply move the\npermission check to ExecGetRangeTableRelation(), which will be\nperformed the first time a given relation is accessed during\nexecution.\n\nAnyway, applying David's aforementioned patch gives the following numbers:\n\n0 tps = 12325.890487 (excluding connections establishing)\n10 tps = 12146.420443 (excluding connections establishing)\n100 tps = 12807.850709 (excluding connections establishing)\n1000 tps = 7578.652893 (excluding connections establishing)\n\nwhich suggests that it might be worthwhile try to find a solution for this.\n\nFinally, there's one more place that shows up in perf profiles at\nhigher partition counts and that is LockReleaseAll(). David,\nTsunakawa-san had worked on a patch [3], which still applies and can\nbe shown to be quite beneficial when generic plans are involved. 
I\ncouldn't get it to show major improvement over the above numbers in\nthis case (for UPDATE that is), but maybe that's because the loop in\nExecInitModifyTable() mentioned above is still in the way.\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n[1] https://commitfest.postgresql.org/28/2575/\n[2] https://www.postgresql.org/message-id/flat/CAKJS1f_kfRQ3ZpjQyHC7%3DPK9vrhxiHBQFZ%2Bhc0JCwwnRKkF3hg%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/flat/CAKJS1f-7T9F1xLw5PqgOApcV6YX3WYC4XJHHCpxh8hzcZsA-xA%40mail.gmail.com#c57f2f592484bca76310f4c551d4de15", "msg_date": "Fri, 26 Jun 2020 21:36:01 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "ModifyTable overheads in generic plans" }, { "msg_contents": "On Fri, Jun 26, 2020 at 9:36 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> I would like to discuss a refactoring patch that builds on top of the\n> patches at [1] to address $subject.\n\nI've added this to the next CF: https://commitfest.postgresql.org/28/2621/\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Jun 2020 10:17:18 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Sat, 27 Jun 2020 at 00:36, Amit Langote <amitlangote09@gmail.com> wrote:\n> 2. ExecCheckRTPerms(): checks permissions of *all* partitions before\n> executing the plan tree, but maybe it's okay to check only the ones\n> that will be accessed\n\nI don't think it needs to be quite as complex as that.\nexpand_single_inheritance_child will set the\nRangeTblEntry.requiredPerms to 0, so we never need to check\npermissions on a partition. The overhead of permission checking when\nthere are many partitions is just down to the fact that\nExecCheckRTPerms() loops over the entire rangetable and calls\nExecCheckRTEPerms for each one. 
ExecCheckRTEPerms() does have very\nlittle work to do when requiredPerms is 0, but the loop itself and the\nfunction call overhead show up when you remove the other bottlenecks.\n\nI have a patch somewhere that just had the planner add the RTindexes\nwith a non-zero requiredPerms and set that in the plan so that\nExecCheckRTPerms could just look at the ones that actually needed\nsomething checked. There's a slight disadvantage there that for\nqueries to non-partitioned tables that we need to build a Bitmapset\nthat has all items from the rangetable. That's likely a small\noverhead, but not free, so perhaps there is a better way.\n\nDavid\n\n\n", "msg_date": "Mon, 29 Jun 2020 13:39:01 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Mon, Jun 29, 2020 at 10:39 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sat, 27 Jun 2020 at 00:36, Amit Langote <amitlangote09@gmail.com> wrote:\n> > 2. ExecCheckRTPerms(): checks permissions of *all* partitions before\n> > executing the plan tree, but maybe it's okay to check only the ones\n> > that will be accessed\n>\n> I don't think it needs to be quite as complex as that.\n> expand_single_inheritance_child will set the\n> RangeTblEntry.requiredPerms to 0, so we never need to check\n> permissions on a partition. The overhead of permission checking when\n> there are many partitions is just down to the fact that\n> ExecCheckRTPerms() loops over the entire rangetable and calls\n> ExecCheckRTEPerms for each one. 
ExecCheckRTEPerms() does have very\n> little work to do when requiredPerms is 0, but the loop itself and the\n> function call overhead show up when you remove the other bottlenecks.\n\nI had forgotten that we set requiredPerms to 0 for the inheritance child tables.\n\n> I have a patch somewhere that just had the planner add the RTindexes\n> with a non-zero requiredPerms and set that in the plan so that\n> ExecCheckRTPerms could just look at the ones that actually needed\n> something checked. There's a slight disadvantage there that for\n> queries to non-partitioned tables that we need to build a Bitmapset\n> that has all items from the rangetable. That's likely a small\n> overhead, but not free, so perhaps there is a better way.\n\nI can't think of anything for this that doesn't involve having one\nmore list of RTEs or bitmapset of RT indexes in PlannedStmt.\n\n\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Jun 2020 18:04:18 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Fri, Jun 26, 2020 at 9:36 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> I would like to discuss a refactoring patch that builds on top of the\n> patches at [1] to address $subject.\n\nI forgot to update a place in postgres_fdw causing one of its tests to crash.\n\nFixed in the attached updated version.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 1 Jul 2020 15:30:39 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "> On 1 Jul 2020, at 08:30, Amit Langote <amitlangote09@gmail.com> wrote:\n> \n> On Fri, Jun 26, 2020 at 9:36 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> I would like to discuss a refactoring patch that builds on top of the\n>> patches at [1] to address 
$subject.\n> \n> I forgot to update a place in postgres_fdw causing one of its tests to crash.\n> \n> Fixed in the attached updated version.\n\nThe attached 0003 fails to apply to current HEAD, please submit another rebased\nversion. Marking the entry as Waiting on Author in the meantime.\n\ncheers ./daniel\n\n", "msg_date": "Wed, 1 Jul 2020 11:50:19 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "Hi Daniel,\n\nOn Wed, Jul 1, 2020 at 6:50 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 1 Jul 2020, at 08:30, Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > On Fri, Jun 26, 2020 at 9:36 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >> I would like to discuss a refactoring patch that builds on top of the\n> >> patches at [1] to address $subject.\n> >\n> > I forgot to update a place in postgres_fdw causing one of its tests to crash.\n> >\n> > Fixed in the attached updated version.\n>\n> The attached 0003 fails to apply to current HEAD, please submit another rebased\n> version. Marking the entry as Waiting on Author in the meantime.\n\nThank you for the heads up.\n\nActually, as I noted in the first email, the patches here are to be\napplied on top of patches of another thread that I chose not to post\nhere. 
But I can see how that is inconvenient both for the CF bot and\nother humans, so I'm attaching all of the patches.\n\nAnother thing I could do is decouple the patches to discuss here from\nthe patches of the other thread, which should be possible and might be\ngood to avoid back and forth between the two threads.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 1 Jul 2020 22:38:45 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "> On 1 Jul 2020, at 15:38, Amit Langote <amitlangote09@gmail.com> wrote:\n\n> Another thing I could do is decouple the patches to discuss here from\n> the patches of the other thread, which should be possible and might be\n> good to avoid back and forth between the two threads.\n\nIt sounds like it would make it easier for reviewers, so if it's possible with\na reasonable effort it might be worth it. I've moved this entry to the next CF\nfor now.\n\ncheers ./daniel\n\n", "msg_date": "Fri, 31 Jul 2020 00:30:43 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Fri, Jun 26, 2020 at 8:36 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> 0001 and 0002 are preparatory patches.\n\nI read through these patches a bit but it's really unclear what the\npoint of them is. 
I think they need better commit messages, or better\ncomments, or both.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 31 Jul 2020 15:46:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Sat, Aug 1, 2020 at 4:46 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Jun 26, 2020 at 8:36 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > 0001 and 0002 are preparatory patches.\n>\n> I read through these patches a bit but it's really unclear what the\n> point of them is. I think they need better commit messages, or better\n> comments, or both.\n\nThanks for taking a look. Sorry about the lack of good commentary,\nwhich I have tried to address in the attached updated version. I\nextracted one more part as preparatory from the earlier 0003 patch, so\nthere are 4 patches now.\n\nAlso as discussed with Daniel, I have changed the patches so that they\ncan be applied on plain HEAD instead of having to first apply the\npatches at [1]. 
Without runtime pruning for UPDATE/DELETE proposed in\n[1], optimizing ResultRelInfo creation by itself does not improve the\nperformance/scalability by that much, but the benefit of lazily\ncreating ResultRelInfos seems clear so I think maybe it's okay to\npursue this independently.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/CA%2BHiwqHpHdqdDn48yCEhynnniahH78rwcrv1rEX65-fsZGBOLQ%40mail.gmail.com", "msg_date": "Tue, 4 Aug 2020 15:15:00 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Tue, Aug 4, 2020 at 3:15 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Sat, Aug 1, 2020 at 4:46 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Fri, Jun 26, 2020 at 8:36 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > 0001 and 0002 are preparatory patches.\n> >\n> > I read through these patches a bit but it's really unclear what the\n> > point of them is. I think they need better commit messages, or better\n> > comments, or both.\n>\n> Thanks for taking a look. Sorry about the lack of good commentary,\n> which I have tried to address in the attached updated version. I\n> extracted one more part as preparatory from the earlier 0003 patch, so\n> there are 4 patches now.\n>\n> Also as discussed with Daniel, I have changed the patches so that they\n> can be applied on plain HEAD instead of having to first apply the\n> patches at [1]. 
Without runtime pruning for UPDATE/DELETE proposed in\n> [1], optimizing ResultRelInfo creation by itself does not improve the\n> performance/scalability by that much, but the benefit of lazily\n> creating ResultRelInfos seems clear so I think maybe it's okay to\n> pursue this independently.\n\nPer cfbot's automatic patch tester, there were some issues in the 0004 patch:\n\nnodeModifyTable.c: In function ‘ExecModifyTable’:\n1529nodeModifyTable.c:2484:24: error: ‘junkfilter’ may be used\nuninitialized in this function [-Werror=maybe-uninitialized]\n1530 junkfilter->jf_junkAttNo,\n1531 ^\n1532nodeModifyTable.c:2309:14: note: ‘junkfilter’ was declared here\n1533 JunkFilter *junkfilter;\n1534 ^\n1535cc1: all warnings being treated as errors\n1536<builtin>: recipe for target 'nodeModifyTable.o' failed\n1537make[3]: *** [nodeModifyTable.o] Error 1\n\nFixed in the attached updated version\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 7 Aug 2020 21:26:45 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "Hello,\n\nOn Fri, Aug 7, 2020 at 9:26 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Aug 4, 2020 at 3:15 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Sat, Aug 1, 2020 at 4:46 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > On Fri, Jun 26, 2020 at 8:36 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > > 0001 and 0002 are preparatory patches.\n> > >\n> > > I read through these patches a bit but it's really unclear what the\n> > > point of them is. I think they need better commit messages, or better\n> > > comments, or both.\n> >\n> > Thanks for taking a look. Sorry about the lack of good commentary,\n> > which I have tried to address in the attached updated version. 
I\n> > extracted one more part as preparatory from the earlier 0003 patch, so\n> > there are 4 patches now.\n> >\n> > Also as discussed with Daniel, I have changed the patches so that they\n> > can be applied on plain HEAD instead of having to first apply the\n> > patches at [1]. Without runtime pruning for UPDATE/DELETE proposed in\n> > [1], optimizing ResultRelInfo creation by itself does not improve the\n> > performance/scalability by that much, but the benefit of lazily\n> > creating ResultRelInfos seems clear so I think maybe it's okay to\n> > pursue this independently.\n>\n> Per cfbot's automatic patch tester, there were some issues in the 0004 patch:\n>\n> nodeModifyTable.c: In function ‘ExecModifyTable’:\n> 1529nodeModifyTable.c:2484:24: error: ‘junkfilter’ may be used\n> uninitialized in this function [-Werror=maybe-uninitialized]\n> 1530 junkfilter->jf_junkAttNo,\n> 1531 ^\n> 1532nodeModifyTable.c:2309:14: note: ‘junkfilter’ was declared here\n> 1533 JunkFilter *junkfilter;\n> 1534 ^\n> 1535cc1: all warnings being treated as errors\n> 1536<builtin>: recipe for target 'nodeModifyTable.o' failed\n> 1537make[3]: *** [nodeModifyTable.o] Error 1\n>\n> Fixed in the attached updated version\n\nNeeded a rebase due to f481d28232. 
Attached updated patches.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 14 Sep 2020 12:51:33 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "Attached updated patches based on the recent discussion at:\n\n* Re: partition routing layering in nodeModifyTable.c *\nhttps://www.postgresql.org/message-id/CA%2BHiwqHpmMjenQqNpMHrhg3DRhqqQfby2RCT1HWVwMin3_5vMA%40mail.gmail.com\n\n0001 adjusts how ForeignScanState.resultRelInfo is initialized for use\nby direct modify operations.\n\n0002 refactors ResultRelInfo initialization to be done lazily on first use\n\nI call these v6, because the last version posted on this thread was\nv5, even though it went through a couple of iterations on the above\nthread. Sorry about the confusion.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 30 Oct 2020 15:13:46 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On 30/10/2020 08:13, Amit Langote wrote:\n> /*\n> * Perform WITH CHECK OPTIONS check, if any.\n> */\n> static void\n> ExecProcessWithCheckOptions(ModifyTableState *mtstate, ResultRelInfo *resultRelInfo,\n> \t\t\t\t\t\t\tTupleTableSlot *slot, WCOKind wco_kind)\n> {\n> \tModifyTable *node = (ModifyTable *) mtstate->ps.plan;\n> \tEState *estate = mtstate->ps.state;\n> \n> \tif (node->withCheckOptionLists == NIL)\n> \t\treturn;\n> \n> \t/* Initialize expression state if not already done. 
*/\n> \tif (resultRelInfo->ri_WithCheckOptions == NIL)\n> \t{\n> \t\tint\t\twhichrel = resultRelInfo - mtstate->resultRelInfo;\n> \t\tList *wcoList;\n> \t\tList *wcoExprs = NIL;\n> \t\tListCell *ll;\n> \n> \t\tAssert(whichrel >= 0 && whichrel < mtstate->mt_nplans);\n> \t\twcoList = (List *) list_nth(node->withCheckOptionLists, whichrel);\n> \t\tforeach(ll, wcoList)\n> \t\t{\n> \t\t\tWithCheckOption *wco = (WithCheckOption *) lfirst(ll);\n> \t\t\tExprState *wcoExpr = ExecInitQual((List *) wco->qual,\n> \t\t\t\t\t\t\t\t\t\t\t &mtstate->ps);\n> \n> \t\t\twcoExprs = lappend(wcoExprs, wcoExpr);\n> \t\t}\n> \n> \t\tresultRelInfo->ri_WithCheckOptions = wcoList;\n> \t\tresultRelInfo->ri_WithCheckOptionExprs = wcoExprs;\n> \t}\n> \n> \t/*\n> \t * ExecWithCheckOptions() will skip any WCOs which are not of the kind\n> \t * we are looking for at this point.\n> \t */\n> \tExecWithCheckOptions(wco_kind, resultRelInfo, slot, estate);\n> }\n\nCan we do this initialization in ExecGetResultRelation()? That would \nseem much more straightforward. Is there any advantage to delaying it \nhere? And same thing with the junk filter and the RETURNING list.\n\n(/me reads patch further) I presume that's what you referred to in the \ncommit message:\n\n> Also, extend this lazy initialization approach to some of the\n> individual fields of ResultRelInfo such that even for the result\n> relations that are initialized, those fields are only initialized on\n> first access. While no performance improvement is to be expected\n> there, it can lead to a simpler initialization logic of the\n> ResultRelInfo itself, because the conditions for whether a given\n> field is needed or not tends to look confusing. 
One side-effect\n> of this is that any \"SubPlans\" referenced in the expressions of\n> those fields are also lazily initialized and hence changes the\n> output of EXPLAIN (without ANALYZE) in some regression tests.\n\n\nI'm now curious what the initialization logic would look like, if we \ninitialized those fields in ExecGetResultRelation(). At a quick glance \non the conditions on when those initializations are done in the patch \nnow, it would seem pretty straightforward. If the target list contains \nany junk columns, initialize junk filter, and if \nModifyTable->returningLists is set, initialize RETURNING list. Maybe I'm \nmissing something.\n\n- Heikki\n\n\n", "msg_date": "Mon, 2 Nov 2020 15:19:49 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Mon, Nov 2, 2020 at 10:19 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 30/10/2020 08:13, Amit Langote wrote:\n> > /*\n> > * Perform WITH CHECK OPTIONS check, if any.\n> > */\n> > static void\n> > ExecProcessWithCheckOptions(ModifyTableState *mtstate, ResultRelInfo *resultRelInfo,\n> > TupleTableSlot *slot, WCOKind wco_kind)\n> > {\n> > ModifyTable *node = (ModifyTable *) mtstate->ps.plan;\n> > EState *estate = mtstate->ps.state;\n> >\n> > if (node->withCheckOptionLists == NIL)\n> > return;\n> >\n> > /* Initialize expression state if not already done. 
*/\n> > if (resultRelInfo->ri_WithCheckOptions == NIL)\n> > {\n> > int whichrel = resultRelInfo - mtstate->resultRelInfo;\n> > List *wcoList;\n> > List *wcoExprs = NIL;\n> > ListCell *ll;\n> >\n> > Assert(whichrel >= 0 && whichrel < mtstate->mt_nplans);\n> > wcoList = (List *) list_nth(node->withCheckOptionLists, whichrel);\n> > foreach(ll, wcoList)\n> > {\n> > WithCheckOption *wco = (WithCheckOption *) lfirst(ll);\n> > ExprState *wcoExpr = ExecInitQual((List *) wco->qual,\n> > &mtstate->ps);\n> >\n> > wcoExprs = lappend(wcoExprs, wcoExpr);\n> > }\n> >\n> > resultRelInfo->ri_WithCheckOptions = wcoList;\n> > resultRelInfo->ri_WithCheckOptionExprs = wcoExprs;\n> > }\n> >\n> > /*\n> > * ExecWithCheckOptions() will skip any WCOs which are not of the kind\n> > * we are looking for at this point.\n> > */\n> > ExecWithCheckOptions(wco_kind, resultRelInfo, slot, estate);\n> > }\n>\n> Can we do this initialization in ExecGetResultRelation()? That would\n> seem much more straightforward. Is there any advantage to delaying it\n> here? And same thing with the junk filter and the RETURNING list.\n>\n> (/me reads patch further) I presume that's what you referred to in the\n> commit message:\n>\n> > Also, extend this lazy initialization approach to some of the\n> > individual fields of ResultRelInfo such that even for the result\n> > relations that are initialized, those fields are only initialized on\n> > first access. While no performance improvement is to be expected\n> > there, it can lead to a simpler initialization logic of the\n> > ResultRelInfo itself, because the conditions for whether a given\n> > field is needed or not tends to look confusing. 
One side-effect\n> > of this is that any \"SubPlans\" referenced in the expressions of\n> > those fields are also lazily initialized and hence changes the\n> > output of EXPLAIN (without ANALYZE) in some regression tests.\n>\n>\n> I'm now curious what the initialization logic would look like, if we\n> initialized those fields in ExecGetResultRelation(). At a quick glance\n> on the conditions on when those initializations are done in the patch\n> now, it would seem pretty straightforward. If the target list contains\n> any junk columns, initialize junk filter, and if\n> ModifyTable->returningLists is set, initialize RETURNING list. Maybe I'm\n> missing something.\n\nYeah, it's not that complicated to initialize those things in\nExecGetResultRelation(). In fact, ExecGetResultRelation() (or its\nsubroutine ExecBuildResultRelation()) housed those initializations in\nthe earlier versions of this patch, but I changed that after our\ndiscussion about being lazy about initializing as much stuff as we\ncan. Maybe I should revert that?\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 2 Nov 2020 22:53:39 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Mon, Nov 2, 2020 at 10:53 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Mon, Nov 2, 2020 at 10:19 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > (/me reads patch further) I presume that's what you referred to in the\n> > commit message:\n> >\n> > > Also, extend this lazy initialization approach to some of the\n> > > individual fields of ResultRelInfo such that even for the result\n> > > relations that are initialized, those fields are only initialized on\n> > > first access. 
While no performance improvement is to be expected\n> > > there, it can lead to a simpler initialization logic of the\n> > > ResultRelInfo itself, because the conditions for whether a given\n> > > field is needed or not tends to look confusing. One side-effect\n> > > of this is that any \"SubPlans\" referenced in the expressions of\n> > > those fields are also lazily initialized and hence changes the\n> > > output of EXPLAIN (without ANALYZE) in some regression tests.\n> >\n> >\n> > I'm now curious what the initialization logic would look like, if we\n> > initialized those fields in ExecGetResultRelation(). At a quick glance\n> > on the conditions on when those initializations are done in the patch\n> > now, it would seem pretty straightforward. If the target list contains\n> > any junk columns, initialize junk filter, and if\n> > ModifyTable->returningLists is set, initialize RETURNING list. Maybe I'm\n> > missing something.\n>\n> Yeah, it's not that complicated to initialize those things in\n> ExecGetResultRelation(). In fact, ExecGetResultRelation() (or its\n> subroutine ExecBuildResultRelation()) housed those initializations in\n> the earlier versions of this patch, but I changed that after our\n> discussion about being lazy about initializing as much stuff as we\n> can. Maybe I should revert that?\n\nPlease check the attached if that looks better.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 3 Nov 2020 17:27:52 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On 03/11/2020 10:27, Amit Langote wrote:\n> Please check the attached if that looks better.\n\nGreat, thanks! 
Yeah, I like that much better.\n\nThis makes me a bit unhappy:\n\n> \n> \t\t/* Also let FDWs init themselves for foreign-table result rels */\n> \t\tif (resultRelInfo->ri_FdwRoutine != NULL)\n> \t\t{\n> \t\t\tif (resultRelInfo->ri_usesFdwDirectModify)\n> \t\t\t{\n> \t\t\t\tForeignScanState *fscan = (ForeignScanState *) mtstate->mt_plans[i];\n> \n> \t\t\t\t/*\n> \t\t\t\t * For the FDW's convenience, set the ForeignScanState node's\n> \t\t\t\t * ResultRelInfo to let the FDW know which result relation it\n> \t\t\t\t * is going to work with.\n> \t\t\t\t */\n> \t\t\t\tAssert(IsA(fscan, ForeignScanState));\n> \t\t\t\tfscan->resultRelInfo = resultRelInfo;\n> \t\t\t\tresultRelInfo->ri_FdwRoutine->BeginDirectModify(fscan, eflags);\n> \t\t\t}\n> \t\t\telse if (resultRelInfo->ri_FdwRoutine->BeginForeignModify != NULL)\n> \t\t\t{\n> \t\t\t\tList *fdw_private = (List *) list_nth(node->fdwPrivLists, i);\n> \n> \t\t\t\tresultRelInfo->ri_FdwRoutine->BeginForeignModify(mtstate,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t resultRelInfo,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t fdw_private,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t i,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t eflags);\n> \t\t\t}\n> \t\t}\n\nIf you remember, I was unhappy with a similar assertion in the earlier \npatches [1]. I'm not sure what to do instead though. A few options:\n\nA) We could change FDW API so that BeginDirectModify takes the same \narguments as BeginForeignModify(). That avoids the assumption that it's \na ForeignScan node, because BeginForeignModify() doesn't take \nForeignScanState as argument. That would be consistent, which is nice. \nBut I think we'd somehow still need to pass the ResultRelInfo to the \ncorresponding ForeignScan, and I'm not sure how.\n\nB) Look up the ResultRelInfo, and call BeginDirectModify(), on the first \ncall to ForeignNext().\n\nC) Accept the Assertion. 
And add an elog() check in the planner for that \nwith a proper error message.\n\nI'm leaning towards B), but maybe there's some better solution I didn't \nthink of? Perhaps changing the API would make sense in any case, it is a \nbit weird as it is. Backwards-incompatible API changes are not nice, but \nI don't think there are many FDWs out there that implement the \nDirectModify functions. And those functions are pretty tightly coupled \nwith the executor and ModifyTable node details anyway, so I don't feel \nlike we can, or need to, guarantee that they stay unchanged across major \nversions.\n\n[1] \nhttps://www.postgresql.org/message-id/19c23dd9-89ce-75a3-9105-5fc05a46f94a%40iki.fi\n\n- Heikki\n\n\n", "msg_date": "Tue, 3 Nov 2020 14:05:58 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Tue, Nov 3, 2020 at 9:05 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 03/11/2020 10:27, Amit Langote wrote:\n> > Please check the attached if that looks better.\n>\n> Great, thanks! 
Yeah, I like that much better.\n>\n> This makes me a bit unhappy:\n>\n> >\n> > /* Also let FDWs init themselves for foreign-table result rels */\n> > if (resultRelInfo->ri_FdwRoutine != NULL)\n> > {\n> > if (resultRelInfo->ri_usesFdwDirectModify)\n> > {\n> > ForeignScanState *fscan = (ForeignScanState *) mtstate->mt_plans[i];\n> >\n> > /*\n> > * For the FDW's convenience, set the ForeignScanState node's\n> > * ResultRelInfo to let the FDW know which result relation it\n> > * is going to work with.\n> > */\n> > Assert(IsA(fscan, ForeignScanState));\n> > fscan->resultRelInfo = resultRelInfo;\n> > resultRelInfo->ri_FdwRoutine->BeginDirectModify(fscan, eflags);\n> > }\n> > else if (resultRelInfo->ri_FdwRoutine->BeginForeignModify != NULL)\n> > {\n> > List *fdw_private = (List *) list_nth(node->fdwPrivLists, i);\n> >\n> > resultRelInfo->ri_FdwRoutine->BeginForeignModify(mtstate,\n> > resultRelInfo,\n> > fdw_private,\n> > i,\n> > eflags);\n> > }\n> > }\n>\n> If you remember, I was unhappy with a similar assertion in the earlier\n> patches [1]. I'm not sure what to do instead though. A few options:\n>\n> A) We could change FDW API so that BeginDirectModify takes the same\n> arguments as BeginForeignModify(). That avoids the assumption that it's\n> a ForeignScan node, because BeginForeignModify() doesn't take\n> ForeignScanState as argument. That would be consistent, which is nice.\n> But I think we'd somehow still need to pass the ResultRelInfo to the\n> corresponding ForeignScan, and I'm not sure how.\n\nMaybe ForeignScan doesn't need to contain any result relation info\nthen? ForeignScan.operation != CMD_SELECT is enough to tell it to\ncall IterateDirectModify() as today.\n\n> B) Look up the ResultRelInfo, and call BeginDirectModify(), on the first\n> call to ForeignNext().\n>\n> C) Accept the Assertion. 
And add an elog() check in the planner for that\n> with a proper error message.\n>\n> I'm leaning towards B), but maybe there's some better solution I didn't\n> think of? Perhaps changing the API would make sense in any case, it is a\n> bit weird as it is. Backwards-incompatible API changes are not nice, but\n> I don't think there are many FDWs out there that implement the\n> DirectModify functions. And those functions are pretty tightly coupled\n> with the executor and ModifyTable node details anyway, so I don't feel\n> like we can, or need to, guarantee that they stay unchanged across major\n> versions.\n\nB is not too bad, but I tend to prefer doing A too.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 4 Nov 2020 11:32:18 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Wed, Nov 4, 2020 at 11:32 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Nov 3, 2020 at 9:05 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > A) We could change FDW API so that BeginDirectModify takes the same\n> > arguments as BeginForeignModify(). That avoids the assumption that it's\n> > a ForeignScan node, because BeginForeignModify() doesn't take\n> > ForeignScanState as argument. That would be consistent, which is nice.\n> > But I think we'd somehow still need to pass the ResultRelInfo to the\n> > corresponding ForeignScan, and I'm not sure how.\n>\n> Maybe ForeignScan doesn't need to contain any result relation info\n> then? ForeignScan.operation != CMD_SELECT is enough to tell it to\n> call IterateDirectModify() as today.\n>\n> > B) Look up the ResultRelInfo, and call BeginDirectModify(), on the first\n> > call to ForeignNext().\n> >\n> > C) Accept the Assertion. 
And add an elog() check in the planner for that\n> > with a proper error message.\n> >\n> > I'm leaning towards B), but maybe there's some better solution I didn't\n> > think of? Perhaps changing the API would make sense in any case, it is a\n> > bit weird as it is. Backwards-incompatible API changes are not nice, but\n> > I don't think there are many FDWs out there that implement the\n> > DirectModify functions. And those functions are pretty tightly coupled\n> > with the executor and ModifyTable node details anyway, so I don't feel\n> > like we can, or need to, guarantee that they stay unchanged across major\n> > versions.\n>\n> B is not too bad, but I tend to prefer doing A too.\n\nHow about I update the 0001 patch to implement A?\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 5 Nov 2020 21:54:54 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Wed, Nov 4, 2020 at 11:32 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Nov 3, 2020 at 9:05 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > On 03/11/2020 10:27, Amit Langote wrote:\n> > > Please check the attached if that looks better.\n> >\n> > Great, thanks! 
Yeah, I like that much better.\n> >\n> > This makes me a bit unhappy:\n> >\n> > >\n> > > /* Also let FDWs init themselves for foreign-table result rels */\n> > > if (resultRelInfo->ri_FdwRoutine != NULL)\n> > > {\n> > > if (resultRelInfo->ri_usesFdwDirectModify)\n> > > {\n> > > ForeignScanState *fscan = (ForeignScanState *) mtstate->mt_plans[i];\n> > >\n> > > /*\n> > > * For the FDW's convenience, set the ForeignScanState node's\n> > > * ResultRelInfo to let the FDW know which result relation it\n> > > * is going to work with.\n> > > */\n> > > Assert(IsA(fscan, ForeignScanState));\n> > > fscan->resultRelInfo = resultRelInfo;\n> > > resultRelInfo->ri_FdwRoutine->BeginDirectModify(fscan, eflags);\n> > > }\n> > > else if (resultRelInfo->ri_FdwRoutine->BeginForeignModify != NULL)\n> > > {\n> > > List *fdw_private = (List *) list_nth(node->fdwPrivLists, i);\n> > >\n> > > resultRelInfo->ri_FdwRoutine->BeginForeignModify(mtstate,\n> > > resultRelInfo,\n> > > fdw_private,\n> > > i,\n> > > eflags);\n> > > }\n> > > }\n> >\n> > If you remember, I was unhappy with a similar assertion in the earlier\n> > patches [1]. I'm not sure what to do instead though. A few options:\n> >\n> > A) We could change FDW API so that BeginDirectModify takes the same\n> > arguments as BeginForeignModify(). That avoids the assumption that it's\n> > a ForeignScan node, because BeginForeignModify() doesn't take\n> > ForeignScanState as argument. That would be consistent, which is nice.\n> > But I think we'd somehow still need to pass the ResultRelInfo to the\n> > corresponding ForeignScan, and I'm not sure how.\n>\n> Maybe ForeignScan doesn't need to contain any result relation info\n> then? ForeignScan.operation != CMD_SELECT is enough to tell it to\n> call IterateDirectModify() as today.\n\nHmm, I misspoke. 
We do still need ForeignScanState.resultRelInfo,\nbecause the IterateDirectModify() API uses it to return the remotely\ninserted/updated/deleted tuple for the RETURNING projection performed\nby ExecModifyTable().\n\n> > B) Look up the ResultRelInfo, and call BeginDirectModify(), on the first\n> > call to ForeignNext().\n> >\n> > C) Accept the Assertion. And add an elog() check in the planner for that\n> > with a proper error message.\n> >\n> > I'm leaning towards B), but maybe there's some better solution I didn't\n> > think of? Perhaps changing the API would make sense in any case, it is a\n> > bit weird as it is. Backwards-incompatible API changes are not nice, but\n> > I don't think there are many FDWs out there that implement the\n> > DirectModify functions. And those functions are pretty tightly coupled\n> > with the executor and ModifyTable node details anyway, so I don't feel\n> > like we can, or need to, guarantee that they stay unchanged across major\n> > versions.\n>\n> B is not too bad, but I tend to prefer doing A too.\n\nOn second thought, it seems A would amount to merely a cosmetic\nadjustment of the API, nothing more. B seems to get the job done for\nme and also doesn't unnecessarily break compatibility, so I've updated\n0001 to implement B. Please give it a look.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 10 Nov 2020 20:12:03 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On 10/11/2020 13:12, Amit Langote wrote:\n> On Wed, Nov 4, 2020 at 11:32 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>> On Tue, Nov 3, 2020 at 9:05 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>> A) We could change FDW API so that BeginDirectModify takes the same\n>>> arguments as BeginForeignModify(). 
That avoids the assumption that it's\n>>> a ForeignScan node, because BeginForeignModify() doesn't take\n>>> ForeignScanState as argument. That would be consistent, which is nice.\n>>> But I think we'd somehow still need to pass the ResultRelInfo to the\n>>> corresponding ForeignScan, and I'm not sure how.\n>>\n>> Maybe ForeignScan doesn't need to contain any result relation info\n>> then? ForeignScan.operation != CMD_SELECT is enough to tell it to\n>> call IterateDirectModify() as today.\n> \n> Hmm, I misspoke. We do still need ForeignScanState.resultRelInfo,\n> because the IterateDirectModify() API uses it to return the remotely\n> inserted/updated/deleted tuple for the RETURNING projection performed\n> by ExecModifyTable().\n> \n>>> B) Look up the ResultRelInfo, and call BeginDirectModify(), on the first\n>>> call to ForeignNext().\n>>>\n>>> C) Accept the Assertion. And add an elog() check in the planner for that\n>>> with a proper error message.\n>>>\n>>> I'm leaning towards B), but maybe there's some better solution I didn't\n>>> think of? Perhaps changing the API would make sense in any case, it is a\n>>> bit weird as it is. Backwards-incompatible API changes are not nice, but\n>>> I don't think there are many FDWs out there that implement the\n>>> DirectModify functions. And those functions are pretty tightly coupled\n>>> with the executor and ModifyTable node details anyway, so I don't feel\n>>> like we can, or need to, guarantee that they stay unchanged across major\n>>> versions.\n>>\n>> B is not too bad, but I tend to prefer doing A too.\n> \n> On second thought, it seems A would amount to merely a cosmetic\n> adjustment of the API, nothing more. B seems to get the job done for\n> me and also doesn't unnecessarily break compatibility, so I've updated\n> 0001 to implement B. Please give it a look.\n\nLooks good at a quick glance. 
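For illustration, the deferred-startup pattern behind "option B" above, postponing a per-relation Begin callback until the first Iterate call actually needs it, can be sketched in miniature. All names here are invented for the example; this is not PostgreSQL's real FDW API.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Sketch of "option B": instead of invoking a Begin callback for every
 * result relation at executor startup, defer it to the first Iterate
 * call.  Relations that are never iterated never pay the startup cost.
 */
typedef struct ScanState
{
    bool begun;          /* has the Begin callback run yet? */
    int  begin_calls;    /* how many times Begin actually ran */
    int  rows_returned;  /* rows produced so far */
} ScanState;

static void
begin_scan(ScanState *ss)
{
    /* expensive per-relation setup would happen here */
    ss->begin_calls++;
    ss->begun = true;
}

/* Returns a fake "row" (just a counter); runs Begin lazily on first call. */
static int
iterate_scan(ScanState *ss)
{
    if (!ss->begun)
        begin_scan(ss);     /* deferred: only paid if the scan is used */
    return ++ss->rows_returned;
}
```

With many result relations in a generic plan, only the ones a given execution actually touches incur the Begin work; from the callback's point of view it still runs before the first Iterate, as the discussion above notes.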
It is a small API break that \nBeginDirectModify() is now called during execution, not at executor \nstartup, but I don't think that's going to break FDWs in practice. One \ncould argue, though, that if we're going to change the API, we should do \nit more loudly. So changing the arguments might be a good thing.\n\nThe BeginDirectModify() and BeginForeignModify() interfaces are \ninconsistent, but that's not this patch's fault. I wonder if we could \nmove the call to BeginForeignModify() also to ForeignNext(), though? And \nBeginForeignScan() too, while we're at it.\n\nOverall, this is probably fine as it is though. I'll review more \nthoroughly tomorrow.\n\n- Heikki\n\n\n", "msg_date": "Tue, 10 Nov 2020 17:32:00 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On 10/11/2020 17:32, Heikki Linnakangas wrote:\n> On 10/11/2020 13:12, Amit Langote wrote:\n>> On second thought, it seems A would amount to merely a cosmetic\n>> adjustment of the API, nothing more. B seems to get the job done for\n>> me and also doesn't unnecessarily break compatibility, so I've updated\n>> 0001 to implement B. Please give it a look.\n> \n> Looks good at a quick glance. It is a small API break that\n> BeginDirectModify() is now called during execution, not at executor\n> startup, but I don't think that's going to break FDWs in practice. One\n> could argue, though, that if we're going to change the API, we should do\n> it more loudly. So changing the arguments might be a good thing.\n> \n> The BeginDirectModify() and BeginForeignModify() interfaces are\n> inconsistent, but that's not this patch's fault. I wonder if we could\n> move the call to BeginForeignModify() also to ForeignNext(), though? 
And\n> BeginForeignScan() too, while we're at it.\n\nWith these patches, BeginForeignModify() and BeginDirectModify() are \nboth called during execution, before the first \nIterateForeignScan/IterateDirectModify call. The documentation for \nBeginForeignModify() needs to be updated, it still claims that it's run \nat executor startup, but that's not true after these patches. So that \nneeds to be updated.\n\nI think that's a good thing, because it means that BeginForeignModify() \nand BeginDirectModify() are called at the same stage, from the FDW's \npoint of view. Even though BeginDirectModify() is called from \nForeignNext(), and BeginForeignModify() from ExecModifyTable(), that \ndifference isn't visible to the FDW; both are after executor startup but \nbefore the first Iterate call.\n\n- Heikki\n\n\n", "msg_date": "Wed, 11 Nov 2020 10:55:46 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "Thanks for the review.\n\nOn Wed, Nov 11, 2020 at 5:55 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 10/11/2020 17:32, Heikki Linnakangas wrote:\n> > On 10/11/2020 13:12, Amit Langote wrote:\n> >> On second thought, it seems A would amount to merely a cosmetic\n> >> adjustment of the API, nothing more. B seems to get the job done for\n> >> me and also doesn't unnecessarily break compatibility, so I've updated\n> >> 0001 to implement B. Please give it a look.\n> >\n> > Looks good at a quick glance. It is a small API break that\n> > BeginDirectModify() is now called during execution, not at executor\n> > startup, but I don't think that's going to break FDWs in practice. One\n> > could argue, though, that if we're going to change the API, we should do\n> > it more loudly. So changing the arguments might be a good thing.\n> >\n> > The BeginDirectModify() and BeginForeignModify() interfaces are\n> > inconsistent, but that's not this patch's fault. 
I wonder if we could\n> > move the call to BeginForeignModify() also to ForeignNext(), though? And\n> > BeginForeignScan() too, while we're at it.\n>\n> With these patches, BeginForeignModify() and BeginDirectModify() are\n> both called during execution, before the first\n> IterateForeignScan/IterateDirectModify call. The documentation for\n> BeginForeignModify() needs to be updated, it still claims that it's run\n> at executor startup, but that's not true after these patches. So that\n> needs to be updated.\n\nGood point, I've updated the patch to note that.\n\n> I think that's a good thing, because it means that BeginForeignModify()\n> and BeginDirectModify() are called at the same stage, from the FDW's\n> point of view. Even though BeginDirectModify() is called from\n> ForeignNext(), and BeginForeignModify() from ExecModifyTable(), that\n> difference isn't visible to the FDW; both are after executor startup but\n> before the first Iterate call.\n\nRight.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 11 Nov 2020 18:52:01 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "I'm still a bit confused and unhappy about the initialization of \nResultRelInfos and the various fields in them. We've made progress in \nthe previous patches, but it's still a bit messy.\n\n> \t\t/*\n> \t\t * If transition tuples will be captured, initialize a map to convert\n> \t\t * child tuples into the format of the table mentioned in the query\n> \t\t * (root relation), because the transition tuple store can only store\n> \t\t * tuples in the root table format. However for INSERT, the map is\n> \t\t * only initialized for a given partition when the partition itself is\n> \t\t * first initialized by ExecFindPartition. 
Also, this map is also\n> \t\t * needed if an UPDATE ends up having to move tuples across\n> \t\t * partitions, because in that case the child tuple to be moved first\n> \t\t * needs to be converted into the root table's format. In that case,\n> \t\t * we use GetChildToRootMap() to either create one from scratch if\n> \t\t * we didn't already create it here.\n> \t\t *\n> \t\t * Note: We cannot always initialize this map lazily, that is, use\n> \t\t * GetChildToRootMap(), because AfterTriggerSaveEvent(), which needs\n> \t\t * the map, doesn't have access to the \"target\" relation that is\n> \t\t * needed to create the map.\n> \t\t */\n> \t\tif (mtstate->mt_transition_capture && operation != CMD_INSERT)\n> \t\t{\n> \t\t\tRelation\trelation = resultRelInfo->ri_RelationDesc;\n> \t\t\tRelation\ttargetRel = mtstate->rootResultRelInfo->ri_RelationDesc;\n> \n> \t\t\tresultRelInfo->ri_ChildToRootMap =\n> \t\t\t\tconvert_tuples_by_name(RelationGetDescr(relation),\n> \t\t\t\t\t\t\t\t\t RelationGetDescr(targetRel));\n> \t\t\t/* First time creating the map for this result relation. */\n> \t\t\tAssert(!resultRelInfo->ri_ChildToRootMapValid);\n> \t\t\tresultRelInfo->ri_ChildToRootMapValid = true;\n> \t\t}\n\nThe comment explains that AfterTriggerSaveEvent() cannot use \nGetChildToRootMap(), because it doesn't have access to the root target \nrelation. But there is a field for that in ResultRelInfo: \nri_PartitionRoot. However, that's only set up when we do partition routing.\n\nHow about we rename ri_PartitionRoot to e.g ri_RootTarget, and set it \nalways, even for non-partition inheritance? We have that information \navailable when we initialize the ResultRelInfo, so might as well.\n\nSome code currently checks ri_PartitionRoot, to determine if a tuple \nthat's been inserted, has been routed. 
For example:\n\n> \t\t/*\n> \t\t * Also check the tuple against the partition constraint, if there is\n> \t\t * one; except that if we got here via tuple-routing, we don't need to\n> \t\t * if there's no BR trigger defined on the partition.\n> \t\t */\n> \t\tif (resultRelationDesc->rd_rel->relispartition &&\n> \t\t\t(resultRelInfo->ri_PartitionRoot == NULL ||\n> \t\t\t (resultRelInfo->ri_TrigDesc &&\n> \t\t\t resultRelInfo->ri_TrigDesc->trig_insert_before_row)))\n> \t\t\tExecPartitionCheck(resultRelInfo, slot, estate, true);\n\nSo if we set ri_PartitionRoot always, we would need some other way to \ndetermine if the tuple at hand has actually been routed or not. But \nwouldn't that be a good thing anyway? Isn't it possible that the same \nResultRelInfo is sometimes used for routed tuples, and sometimes for \ntuples that have been inserted/updated \"directly\"? \nExecLookupUpdateResultRelByOid() sets that field lazily, so I think it \nwould be possible to get here with ri_PartitionRoot either set or not, \ndepending on whether an earlier cross-partition update was routed to the \ntable.\n\nThe above check is just an optimization, to skip unnecessary \nExecPartitionCheck() calls, but I think this snippet in \nExecConstraints() needs to get this right:\n\n> \t\t\t\t/*\n> \t\t\t\t * If the tuple has been routed, it's been converted to the\n> \t\t\t\t * partition's rowtype, which might differ from the root\n> \t\t\t\t * table's. 
We must convert it back to the root table's\n> \t\t\t\t * rowtype so that val_desc shown error message matches the\n> \t\t\t\t * input tuple.\n> \t\t\t\t */\n> \t\t\t\tif (resultRelInfo->ri_PartitionRoot)\n> \t\t\t\t{\n> \t\t\t\t\tAttrMap *map;\n> \n> \t\t\t\t\trel = resultRelInfo->ri_PartitionRoot;\n> \t\t\t\t\ttupdesc = RelationGetDescr(rel);\n> \t\t\t\t\t/* a reverse map */\n> \t\t\t\t\tmap = build_attrmap_by_name_if_req(orig_tupdesc,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t tupdesc);\n> \n> \t\t\t\t\t/*\n> \t\t\t\t\t * Partition-specific slot's tupdesc can't be changed, so\n> \t\t\t\t\t * allocate a new one.\n> \t\t\t\t\t */\n> \t\t\t\t\tif (map != NULL)\n> \t\t\t\t\t\tslot = execute_attr_map_slot(map, slot,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t MakeTupleTableSlot(tupdesc, &TTSOpsVirtual));\n> \t\t\t\t}\n\nIs that an existing bug, or am I missing?\n\n- Heikki\n\n\n", "msg_date": "Wed, 11 Nov 2020 15:14:58 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Wed, Nov 11, 2020 at 10:14 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> I'm still a bit confused and unhappy about the initialization of\n> ResultRelInfos and the various fields in them. We've made progress in\n> the previous patches, but it's still a bit messy.\n>\n> > /*\n> > * If transition tuples will be captured, initialize a map to convert\n> > * child tuples into the format of the table mentioned in the query\n> > * (root relation), because the transition tuple store can only store\n> > * tuples in the root table format. However for INSERT, the map is\n> > * only initialized for a given partition when the partition itself is\n> > * first initialized by ExecFindPartition. Also, this map is also\n> > * needed if an UPDATE ends up having to move tuples across\n> > * partitions, because in that case the child tuple to be moved first\n> > * needs to be converted into the root table's format. 
In that case,\n> > * we use GetChildToRootMap() to either create one from scratch if\n> > * we didn't already create it here.\n> > *\n> > * Note: We cannot always initialize this map lazily, that is, use\n> > * GetChildToRootMap(), because AfterTriggerSaveEvent(), which needs\n> > * the map, doesn't have access to the \"target\" relation that is\n> > * needed to create the map.\n> > */\n> > if (mtstate->mt_transition_capture && operation != CMD_INSERT)\n> > {\n> > Relation relation = resultRelInfo->ri_RelationDesc;\n> > Relation targetRel = mtstate->rootResultRelInfo->ri_RelationDesc;\n> >\n> > resultRelInfo->ri_ChildToRootMap =\n> > convert_tuples_by_name(RelationGetDescr(relation),\n> > RelationGetDescr(targetRel));\n> > /* First time creating the map for this result relation. */\n> > Assert(!resultRelInfo->ri_ChildToRootMapValid);\n> > resultRelInfo->ri_ChildToRootMapValid = true;\n> > }\n>\n> The comment explains that AfterTriggerSaveEvent() cannot use\n> GetChildToRootMap(), because it doesn't have access to the root target\n> relation. But there is a field for that in ResultRelInfo:\n> ri_PartitionRoot. However, that's only set up when we do partition routing.\n>\n> How about we rename ri_PartitionRoot to e.g ri_RootTarget, and set it\n> always, even for non-partition inheritance? We have that information\n> available when we initialize the ResultRelInfo, so might as well.\n\nYeah, I agree it's better to use ri_PartitionRoot more generally like\nyou describe here.\n\n> Some code currently checks ri_PartitionRoot, to determine if a tuple\n> that's been inserted, has been routed. 
For example:\n>\n> > /*\n> > * Also check the tuple against the partition constraint, if there is\n> > * one; except that if we got here via tuple-routing, we don't need to\n> > * if there's no BR trigger defined on the partition.\n> > */\n> > if (resultRelationDesc->rd_rel->relispartition &&\n> > (resultRelInfo->ri_PartitionRoot == NULL ||\n> > (resultRelInfo->ri_TrigDesc &&\n> > resultRelInfo->ri_TrigDesc->trig_insert_before_row)))\n> > ExecPartitionCheck(resultRelInfo, slot, estate, true);\n>\n> So if we set ri_PartitionRoot always, we would need some other way to\n> determine if the tuple at hand has actually been routed or not. But\n> wouldn't that be a good thing anyway? Isn't it possible that the same\n> ResultRelInfo is sometimes used for routed tuples, and sometimes for\n> tuples that have been inserted/updated \"directly\"?\n> ExecLookupUpdateResultRelByOid() sets that field lazily, so I think it\n> would be possible to get here with ri_PartitionRoot either set or not,\n> depending on whether an earlier cross-partition update was routed to the\n> table.\n\nri_RelationDesc != ri_PartitionRoot gives whether the result relation\nis the original target relation of the query or not, so checking that\nshould be enough here.\n\n> The above check is just an optimization, to skip unnecessary\n> ExecPartitionCheck() calls, but I think this snippet in\n> ExecConstraints() needs to get this right:\n>\n> > /*\n> > * If the tuple has been routed, it's been converted to the\n> > * partition's rowtype, which might differ from the root\n> > * table's. 
We must convert it back to the root table's\n> > * rowtype so that val_desc shown error message matches the\n> > * input tuple.\n> > */\n> > if (resultRelInfo->ri_PartitionRoot)\n> > {\n> > AttrMap *map;\n> >\n> > rel = resultRelInfo->ri_PartitionRoot;\n> > tupdesc = RelationGetDescr(rel);\n> > /* a reverse map */\n> > map = build_attrmap_by_name_if_req(orig_tupdesc,\n> > tupdesc);\n> >\n> > /*\n> > * Partition-specific slot's tupdesc can't be changed, so\n> > * allocate a new one.\n> > */\n> > if (map != NULL)\n> > slot = execute_attr_map_slot(map, slot,\n> > MakeTupleTableSlot(tupdesc, &TTSOpsVirtual));\n> > }\n>\n> Is that an existing bug, or am I missing?\n\nWhat it's doing is converting a routed tuple in the partition's tuple\nformat back into the original target relation's format before showing\nthe tuple in the error message. Note that we do this reverse\nconversion only for tuple routing target relations, not all child\nresult relations, so in that sense it's a bit inconsistent. Maybe we\ndon't need to be too pedantic about showing the exact same tuple as\nthe user inserted (that is, one matching the \"root\" table's column\norder), so it seems okay to just remove these reverse-conversion\nblocks that are repeated in a number of places that show an error\nmessage after failing a constraint check.\n\nAttached new 0002 which does these adjustments. I went with\nri_RootTargetDesc to go along with ri_RelationDesc.\n\nAlso, I have updated the original 0002 (now 0003) to make\nGetChildToRootMap() use ri_RootTargetDesc instead of\nModifyTableState.rootResultRelInfo.ri_RelationDesc, so that even\nAfterTriggerSaveEvent() can now use that function. This allows us to\navoid having to initialize ri_ChildToRootMap anywhere but inside\nGetChildRootMap(), with that long comment defending doing so. 
:-)\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 12 Nov 2020 17:04:43 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Thu, Nov 12, 2020 at 5:04 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Attached new 0002 which does these adjustments. I went with\n> ri_RootTargetDesc to go along with ri_RelationDesc.\n>\n> Also, I have updated the original 0002 (now 0003) to make\n> GetChildToRootMap() use ri_RootTargetDesc instead of\n> ModifyTableState.rootResultRelInfo.ri_RelationDesc, so that even\n> AfterTriggerSaveEvent() can now use that function. This allows us to\n> avoid having to initialize ri_ChildToRootMap anywhere but inside\n> GetChildRootMap(), with that long comment defending doing so. :-)\n\nThese needed to be rebased due to recent copy.c upheavals. Attached.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 7 Dec 2020 15:53:37 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Mon, Dec 7, 2020 at 3:53 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Thu, Nov 12, 2020 at 5:04 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Attached new 0002 which does these adjustments. I went with\n> > ri_RootTargetDesc to go along with ri_RelationDesc.\n> >\n> > Also, I have updated the original 0002 (now 0003) to make\n> > GetChildToRootMap() use ri_RootTargetDesc instead of\n> > ModifyTableState.rootResultRelInfo.ri_RelationDesc, so that even\n> > AfterTriggerSaveEvent() can now use that function. This allows us to\n> > avoid having to initialize ri_ChildToRootMap anywhere but inside\n> > GetChildRootMap(), with that long comment defending doing so. :-)\n>\n> These needed to be rebased due to recent copy.c upheavals. 
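The lazily built, cached child-to-root map discussed above (constructed at most once inside GetChildToRootMap(), with a separate "valid" flag so a cached NULL result is distinguishable from "not built yet") can be sketched as follows. The types and names are invented for illustration, not the actual executor structures.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Sketch of a lazily built, cached conversion map: built on first
 * request, and the result -- which may legitimately be NULL when the
 * child and root tuple layouts already match -- is cached behind a
 * separate "valid" flag.
 */
typedef struct AttrMapStub { int ncols; } AttrMapStub;

typedef struct ResultRelStub
{
    int          child_ncols;
    int          root_ncols;
    AttrMapStub  map_storage;     /* backing storage for the demo map */
    AttrMapStub *child_to_root;   /* cached map, may be NULL */
    bool         map_valid;       /* distinguishes "not built" from NULL */
    int          build_count;     /* how many times we actually built it */
} ResultRelStub;

static AttrMapStub *
get_child_to_root_map(ResultRelStub *rri)
{
    if (!rri->map_valid)
    {
        /* Only build a map when the tuple layouts actually differ. */
        if (rri->child_ncols != rri->root_ncols)
        {
            rri->map_storage.ncols = rri->root_ncols;
            rri->child_to_root = &rri->map_storage;
        }
        else
            rri->child_to_root = NULL;  /* cached NULL is meaningful */
        rri->build_count++;
        rri->map_valid = true;
    }
    return rri->child_to_root;
}
```

The separate flag is the point: a plain NULL-pointer check cannot express "we already decided no map is needed", so every caller would redo the decision.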
Attached.\n\nNeeded to be rebased again.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 22 Dec 2020 17:16:46 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Tue, Dec 22, 2020 at 5:16 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Mon, Dec 7, 2020 at 3:53 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Thu, Nov 12, 2020 at 5:04 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > Attached new 0002 which does these adjustments. I went with\n> > > ri_RootTargetDesc to go along with ri_RelationDesc.\n> > >\n> > > Also, I have updated the original 0002 (now 0003) to make\n> > > GetChildToRootMap() use ri_RootTargetDesc instead of\n> > > ModifyTableState.rootResultRelInfo.ri_RelationDesc, so that even\n> > > AfterTriggerSaveEvent() can now use that function. This allows us to\n> > > avoid having to initialize ri_ChildToRootMap anywhere but inside\n> > > GetChildRootMap(), with that long comment defending doing so. :-)\n> >\n> > These needed to be rebased due to recent copy.c upheavals. Attached.\n>\n> Needed to be rebased again.\n\nAnd again, this time over the recent batch insert API related patches.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 25 Jan 2021 14:23:35 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Mon, Jan 25, 2021 at 2:23 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Dec 22, 2020 at 5:16 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Mon, Dec 7, 2020 at 3:53 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > On Thu, Nov 12, 2020 at 5:04 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > > Attached new 0002 which does these adjustments. 
I went with\n> > > > ri_RootTargetDesc to go along with ri_RelationDesc.\n> > > >\n> > > > Also, I have updated the original 0002 (now 0003) to make\n> > > > GetChildToRootMap() use ri_RootTargetDesc instead of\n> > > > ModifyTableState.rootResultRelInfo.ri_RelationDesc, so that even\n> > > > AfterTriggerSaveEvent() can now use that function. This allows us to\n> > > > avoid having to initialize ri_ChildToRootMap anywhere but inside\n> > > > GetChildRootMap(), with that long comment defending doing so. :-)\n> > >\n> > > These needed to be rebased due to recent copy.c upheavals. Attached.\n> >\n> > Needed to be rebased again.\n>\n> And again, this time over the recent batch insert API related patches.\n\nAnother rebase.\n\nI've dropped what was patch 0001 in the previous set, because I think\nit has been rendered unnecessary due to recently committed changes.\nHowever, the rebase led to a couple of additional regression test\noutput changes that I think are harmless. The changes are caused by\nthe fact that ri_RootResultRelInfo now gets initialized in *all* child\nresult relations, not just those that participate in tuple routing.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 9 Feb 2021 17:38:06 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> [ v14-0002-Initialize-result-relation-information-lazily.patch ]\n\nNeeds YA rebase over 86dc90056.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 31 Mar 2021 14:12:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Thu, Apr 1, 2021 at 3:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > [ v14-0002-Initialize-result-relation-information-lazily.patch ]\n> Needs YA rebase over 
86dc90056.\n\nDone. I will post the updated results for -Mprepared benchmarks I did\nin the other thread shortly.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 1 Apr 2021 22:12:47 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Thu, Apr 1, 2021 at 10:12 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Thu, Apr 1, 2021 at 3:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Amit Langote <amitlangote09@gmail.com> writes:\n> > > [ v14-0002-Initialize-result-relation-information-lazily.patch ]\n> > Needs YA rebase over 86dc90056.\n>\n> Done. I will post the updated results for -Mprepared benchmarks I did\n> in the other thread shortly.\n\nTest details:\n\npgbench -n -T60 -Mprepared -f nojoin.sql\n\nnojoin.sql:\n\n\\set a random(1, 1000000)\nupdate test_table t set b = :a where a = :a;\n\n* test_table has 40 columns and partitions as shown below\n* plan_cache_mode = force_generic_plan\n\nResults:\n\nnparts master patched\n\n64 6262 17118\n128 3449 12082\n256 1722 7643\n1024 359 2099\n\n* tps figures shown are the median of 3 runs.\n\nSo, drastic speedup can be seen by even just not creating\nResultRelInfos for child relations that are not updated, as the patch\ndoes. 
I haven't yet included any changes for AcquireExecutorLocks()\nand ExecCheckRTPerms() bottlenecks that still remain and cause the\ndrop in tps as partition count increases.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Apr 2021 23:56:02 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Thu, Apr 1, 2021 at 3:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Amit Langote <amitlangote09@gmail.com> writes:\n> [ v14-0002-Initialize-result-relation-information-lazily.patch ]\n>> Needs YA rebase over 86dc90056.\n\n> Done.\n\nI spent some time looking this over. There are bits of it we can\nadopt without too much trouble, but I'm afraid that 0001 (delay\nFDW BeginDirectModify until the first actual update) is a nonstarter,\nwhich makes the main idea of delaying ExecInitResultRelation unworkable.\n\nMy fear about 0001 is that it will destroy any hope of direct updates\non different remote partitions executing with consistent semantics\n(i.e. compatible snapshots), because some row updates triggered by the\nlocal query may have already happened before a given partition gets to\nstart its remote query. Maybe we can work around that, but I do not\nwant to commit a major restructuring that assumes we can dodge this\nproblem when we don't yet even have a fix for cross-partition updates\nthat does rely on the assumption of synchronous startup.\n\nIn some desultory performance testing here, it seemed like a\nsignificant part of the cost is ExecOpenIndices, and I don't see\na reason offhand why we could not delay/skip that. 
I also concur\nwith delaying construction of ri_ChildToRootMap and the\npartition_tuple_routing data structures, since many queries will\nnever need those at all.\n\n> * PartitionTupleRouting.subplan_resultrel_htab is removed in favor\n> of using ModifyTableState.mt_resultOidHash to look up an UPDATE\n> result relation by OID.\n\nHmm, that sounds promising too, though I didn't look at the details.\n\nAnyway, I think the way to proceed for now is to grab the low-hanging\nfruit of things that clearly won't change any semantics. But tail end\nof the dev cycle is no time to be making really fundamental changes\nin how FDW direct modify works.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 03 Apr 2021 21:20:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Sun, Apr 4, 2021 at 10:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Thu, Apr 1, 2021 at 3:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Amit Langote <amitlangote09@gmail.com> writes:\n> > [ v14-0002-Initialize-result-relation-information-lazily.patch ]\n> >> Needs YA rebase over 86dc90056.\n>\n> > Done.\n>\n> I spent some time looking this over.\n\nThanks.\n\n> There are bits of it we can\n> adopt without too much trouble, but I'm afraid that 0001 (delay\n> FDW BeginDirectModify until the first actual update) is a nonstarter,\n> which makes the main idea of delaying ExecInitResultRelation unworkable.\n>\n> My fear about 0001 is that it will destroy any hope of direct updates\n> on different remote partitions executing with consistent semantics\n> (i.e. compatible snapshots), because some row updates triggered by the\n> local query may have already happened before a given partition gets to\n> start its remote query. 
Maybe we can work around that, but I do not\n> want to commit a major restructuring that assumes we can dodge this\n> problem when we don't yet even have a fix for cross-partition updates\n> that does rely on the assumption of synchronous startup.\n\nHmm, okay, I can understand the concern.\n\n> In some desultory performance testing here, it seemed like a\n> significant part of the cost is ExecOpenIndices, and I don't see\n> a reason offhand why we could not delay/skip that. I also concur\n> with delaying construction of ri_ChildToRootMap and the\n> partition_tuple_routing data structures, since many queries will\n> never need those at all.\n\nAs I mentioned in [1], creating ri_projectNew can be expensive too,\nespecially as column count (and partition count for the generic plan\ncase) grows. I think we should have a static inline\ninitialize-on-first-access accessor function for that field too.\n\nActually, I remember considering having such accessor functions (all\nstatic inline) for ri_WithCheckOptionExprs, ri_projectReturning,\nri_onConflictArbiterIndexes, and ri_onConflict (needed by ON CONFLICT\nUPDATE) as well, prompted by Heikki's comments earlier in the\ndiscussion. I also remember, before even writing this patch, not\nliking that WCO and RETURNING expressions are initialized in their own\nseparate loops, rather than being part of the earlier loop that says:\n\n /*\n * Do additional per-result-relation initialization.\n */\n for (i = 0; i < nrels; i++)\n {\n\nI guess ri_RowIdAttNo initialization can go into the same loop.\n\n> > * PartitionTupleRouting.subplan_resultrel_htab is removed in favor\n> > of using ModifyTableState.mt_resultOidHash to look up an UPDATE\n> > result relation by OID.\n>\n> Hmm, that sounds promising too, though I didn't look at the details.\n>\n> Anyway, I think the way to proceed for now is to grab the low-hanging\n> fruit of things that clearly won't change any semantics. 
But tail end\n> of the dev cycle is no time to be making really fundamental changes\n> in how FDW direct modify works.\n\nI agree.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/CA+HiwqHLUNhMxy46Mrb04VJpN=HUdm9bD7xdZ6f5h2o4imX79g@mail.gmail.com\n\n\n", "msg_date": "Sun, 4 Apr 2021 23:34:32 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Sun, Apr 4, 2021 at 10:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> In some desultory performance testing here, it seemed like a\n>> significant part of the cost is ExecOpenIndices, and I don't see\n>> a reason offhand why we could not delay/skip that. I also concur\n>> with delaying construction of ri_ChildToRootMap and the\n>> partition_tuple_routing data structures, since many queries will\n>> never need those at all.\n\n> As I mentioned in [1], creating ri_projectNew can be expensive too,\n> especially as column count (and partition count for the generic plan\n> case) grows. I think we should have an static inline\n> initialize-on-first-access accessor function for that field too.\n\n> Actually, I remember considering having such accessor functions (all\n> static inline) for ri_WithCheckOptionExprs, ri_projectReturning,\n> ri_onConflictArbiterIndexes, and ri_onConflict (needed by ON CONFLICT\n> UPDATE) as well, prompted by Heikki's comments earlier in the\n> discussion. I also remember, before even writing this patch, not\n> liking that WCO and RETURNING expressions are initialized in their own\n> separate loops, rather than being part of the earlier loop that says:\n\nSure, we might as well try to improve the cosmetics here.\n\n>> Anyway, I think the way to proceed for now is to grab the low-hanging\n>> fruit of things that clearly won't change any semantics. 
But tail end\n>> of the dev cycle is no time to be making really fundamental changes\n>> in how FDW direct modify works.\n\n> I agree.\n\nOK. Do you want to pull out the bits of the patch that we can still\ndo without postponing BeginDirectModify?\n\nAnother thing we could consider, perhaps, is keeping the behavior\nthe same for foreign tables but postponing init of local ones.\nTo avoid opening the relations to figure out which kind they are,\nwe'd have to rely on the RTE copies of relkind, which is a bit\nworrisome --- I'm not certain that those are guaranteed to be\nup-to-date --- but it's probably okay since there is no way to\nconvert a regular table to foreign or vice versa. Anyway, that\nidea seems fairly messy so I'm inclined to just pursue the\nlower-hanging fruit for now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 04 Apr 2021 12:43:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Mon, Apr 5, 2021 at 1:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Sun, Apr 4, 2021 at 10:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> In some desultory performance testing here, it seemed like a\n> >> significant part of the cost is ExecOpenIndices, and I don't see\n> >> a reason offhand why we could not delay/skip that. I also concur\n> >> with delaying construction of ri_ChildToRootMap and the\n> >> partition_tuple_routing data structures, since many queries will\n> >> never need those at all.\n>\n> > As I mentioned in [1], creating ri_projectNew can be expensive too,\n> > especially as column count (and partition count for the generic plan\n> > case) grows. 
I think we should have a static inline\n> > initialize-on-first-access accessor function for that field too.\n>\n> > Actually, I remember considering having such accessor functions (all\n> > static inline) for ri_WithCheckOptionExprs, ri_projectReturning,\n> > ri_onConflictArbiterIndexes, and ri_onConflict (needed by ON CONFLICT\n> > UPDATE) as well, prompted by Heikki's comments earlier in the\n> > discussion. I also remember, before even writing this patch, not\n> > liking that WCO and RETURNING expressions are initialized in their own\n> > separate loops, rather than being part of the earlier loop that says:\n>\n> Sure, we might as well try to improve the cosmetics here.\n>\n> >> Anyway, I think the way to proceed for now is to grab the low-hanging\n> >> fruit of things that clearly won't change any semantics. But tail end\n> >> of the dev cycle is no time to be making really fundamental changes\n> >> in how FDW direct modify works.\n>\n> > I agree.\n>\n> OK. Do you want to pull out the bits of the patch that we can still\n> do without postponing BeginDirectModify?\n\nI ended up with the attached, whereby ExecInitResultRelation() is now\nperformed for all relations before calling ExecInitNode() on the\nsubplan. As mentioned, I moved other per-result-rel initializations\ninto the same loop that does ExecInitResultRelation(), while moving\ncode related to some initializations into initialize-on-first-access\naccessor functions for the concerned fields. I chose to do that for\nri_WithCheckOptionExprs, ri_projectReturning, and ri_projectNew.\n\nExecInitNode() is called on the subplan (to set\nouterPlanState(mtstate) that is) after all of the per-result-rel\ninitializations are done. One of the initializations is calling\nBeginForeignModify() for non-direct modifications, an API to which we\ncurrently pass mtstate. 
Moving that to before setting\nouterPlanState(mtstate) so as to be in the same loop as other\ninitializations had me worried just a little bit given a modification\nI had to perform in postgresBeginForeignModify():\n\n@@ -1879,7 +1879,7 @@ postgresBeginForeignModify(ModifyTableState *mtstate,\n rte,\n resultRelInfo,\n mtstate->operation,\n- outerPlanState(mtstate)->plan,\n+ outerPlan(mtstate->ps.plan),\n query,\n target_attrs,\n values_end_len,\n\nThough I think that this is harmless, because I'd think that the\nimplementers of this API shouldn't really rely too strongly on\nassuming that outerPlanState(mtstate) is valid when it is called, if\npostgres_fdw's implementation is any indication.\n\nAnother slightly ugly bit is the dependence of direct modify API on\nri_projectReturning being set even if it doesn't care for anything\nelse in the ResultRelInfo. So in ExecInitModifyTable()\nri_projectReturning initialization is not skipped for\ndirectly-modified foreign result relations.\n\nNotes on regression test changes:\n\n* Initializing WCO quals during execution instead of during\nExecInitNode() of ModifyTable() causes a couple of regression test\nchanges in updatable_view.out that were a bit unexpected for me --\nSubplans that are referenced in WCO quals are no longer shown in the\nplain EXPLAIN output. Even though that's a user-visible change, maybe\nwe can live with that?\n\n* ri_RootResultRelInfo in *all* child relations instead of just in\ntuple-routing result relations has caused changes to inherit.out and\nprivileges.out. I think that's basically down to ExecConstraints() et\nal doing one thing for child relations in which ri_RootResultRelInfo\nis set and another for those in which it is not. Now it's set in\n*all* child relations, so it always does the former thing. 
I remember\nhaving checked that those changes are only cosmetic when I first\nencountered them.\n\n* Moving PartitionTupleRouting initialization to be done lazily for\ncross-partition update cases causes changes to update.out. They have\nto do with the fact that the violations of the actual target table's\npartition constraint are now shown as such, instead of reporting them\nas occurring on one of the leaf partitions. Again, only cosmetic.\n\n> Another thing we could consider, perhaps, is keeping the behavior\n> the same for foreign tables but postponing init of local ones.\n> To avoid opening the relations to figure out which kind they are,\n> we'd have to rely on the RTE copies of relkind, which is a bit\n> worrisome --- I'm not certain that those are guaranteed to be\n> up-to-date --- but it's probably okay since there is no way to\n> convert a regular table to foreign or vice versa. Anyway, that\n> idea seems fairly messy so I'm inclined to just pursue the\n> lower-hanging fruit for now.\n\nIt would be nice to try that idea out, but I tend to agree with the last part.\n\nAlso, I'm fairly happy with the kind of performance improvement I see\neven with the lower-hanging fruit patch for my earlier shared\nbenchmark that tests the performance of generic plan execution:\n\nHEAD (especially with 86dc90056 now in):\n\nnparts 10cols 20cols 40cols\n\n64 6926 6394 6253\n128 3758 3501 3482\n256 1938 1822 1776\n1024 406 371 406\n\nPatched:\n\n64 13147 12554 14787\n128 7850 9788 9631\n256 4472 5599 5638\n1024 1218 1503 1309\n\nI also tried with a version where the new tuple projections are built\nin ExecInitModifyTable() as opposed to lazily:\n\n64 10937 9969 8535\n128 6586 5903 4887\n256 3613 3118 2654\n1024 884 749 652\n\nThis tells us that delaying initializing new tuple projection for\nupdates can have a sizable speedup and better scalability.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 5 Apr 2021 22:42:48 +0900", "msg_from": 
"Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Mon, Apr 5, 2021 at 1:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> OK. Do you want to pull out the bits of the patch that we can still\n>> do without postponing BeginDirectModify?\n\n> I ended up with the attached, whereby ExecInitResultRelation() is now\n> performed for all relations before calling ExecInitNode() on the\n> subplan. As mentioned, I moved other per-result-rel initializations\n> into the same loop that does ExecInitResultRelation(), while moving\n> code related to some initializations into initialize-on-first-access\n> accessor functions for the concerned fields. I chose to do that for\n> ri_WIthCheckOptionExprs, ri_projectReturning, and ri_projectNew.\n\nI pushed the parts of this that I thought were safe and productive.\n\nThe business about moving the subplan tree initialization to after\ncalling FDWs' BeginForeignModify functions seems to me to be a\nnonstarter. Existing FDWs are going to expect their scan initializations\nto have been done first. I'm surprised that postgres_fdw seemed to\nneed only a one-line fix; it could have been far worse. The amount of\ntrouble that could cause is absolutely not worth it to remove one loop\nover the result relations.\n\nI also could not get excited about postponing initialization of RETURNING\nor WITH CHECK OPTIONS expressions. I grant that that can be helpful\nwhen those features are used, but I doubt that RETURNING is used that\nheavily, and WITH CHECK OPTIONS is surely next door to nonexistent\nin performance-critical queries. If the feature isn't used, the cost\nof the existing code is about zero. So I couldn't see that it was worth\nthe amount of code thrashing and risk of new bugs involved. 
The bit you\nnoted about EXPLAIN missing a subplan is pretty scary in this connection;\nI'm not at all sure that that's just cosmetic.\n\n(Having said that, I'm wondering if there are bugs in these cases for\ncross-partition updates that target a previously-not-used partition.\nSo we might have things to fix anyway.)\n\nAnyway, looking at the test case you posted at the very top of this\nthread, I was getting this with HEAD on Friday:\n\nnparts\tTPS\n0\t12152\n10\t8672\n100\t2753\n1000\t314\n\nand after the two patches I just pushed, it looks like:\n\n0\t12105\n10\t9928\n100\t5433\n1000\t938\n\nSo while there's certainly work left to do, that's not bad for\nsome low-hanging-fruit grabbing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 06 Apr 2021 19:24:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "Hi,\n\nOn 2021-04-06 19:24:11 -0400, Tom Lane wrote:\n> I also could not get excited about postponing initialization of RETURNING\n> or WITH CHECK OPTIONS expressions. I grant that that can be helpful\n> when those features are used, but I doubt that RETURNING is used that\n> heavily, and WITH CHECK OPTIONS is surely next door to nonexistent\n> in performance-critical queries.\n\nFWIW, there's a number of ORMs etc that use it on every insert (there's\nnot really a better way to get the serial when you also want to do\npipelining).\n\n> nparts\tTPS\n> 0\t12152\n> 10\t8672\n> 100\t2753\n> 1000\t314\n> \n> and after the two patches I just pushed, it looks like:\n> \n> 0\t12105\n> 10\t9928\n> 100\t5433\n> 1000\t938\n> \n> So while there's certainly work left to do, that's not bad for\n> some low-hanging-fruit grabbing.\n\nNice. 
3x at the upper end is pretty good.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 6 Apr 2021 17:00:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Wed, Apr 7, 2021 at 8:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Mon, Apr 5, 2021 at 1:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> OK. Do you want to pull out the bits of the patch that we can still\n> >> do without postponing BeginDirectModify?\n>\n> > I ended up with the attached, whereby ExecInitResultRelation() is now\n> > performed for all relations before calling ExecInitNode() on the\n> > subplan. As mentioned, I moved other per-result-rel initializations\n> > into the same loop that does ExecInitResultRelation(), while moving\n> > code related to some initializations into initialize-on-first-access\n> > accessor functions for the concerned fields. I chose to do that for\n> > ri_WIthCheckOptionExprs, ri_projectReturning, and ri_projectNew.\n>\n> I pushed the parts of this that I thought were safe and productive.\n\nThank you.\n\n+/*\n+ * ExecInitInsertProjection\n+ * Do one-time initialization of projection data for INSERT tuples.\n+ *\n+ * INSERT queries may need a projection to filter out junk attrs in the tlist.\n+ *\n+ * This is \"one-time\" for any given result rel, but we might touch\n+ * more than one result rel in the course of a partitioned INSERT.\n\nI don't think we need this last bit for INSERT, because the result\nrels for leaf partitions will never have to go through\nExecInitInsertProjection(). Leaf partitions are never directly fed\ntuples that ExecModifyTable() extracts out of the subplan, because\nthose tuples will have gone through the root target table's projection\nbefore being passed to tuple routing. 
So, if INSERTs will ever need a\nprojection, only the partitioned table being inserted into will need\nto have one built for it.\n\nAlso, I think we should update the commentary around ri_projectNew a\nbit to make it clear that noplace beside ExecGet{Insert|Update}Tuple\nshould be touching it and the associated slots.\n\n+ * This is \"one-time\" for any given result rel, but we might touch more than\n+ * one result rel in the course of a partitioned UPDATE, and each one needs\n+ * its own projection due to possible column order variation.\n\nMinor quibble, but should we write it as \"...in the course of an\ninherited UPDATE\"?\n\nAttached patch contains these changes.\n\n> The business about moving the subplan tree initialization to after\n> calling FDWs' BeginForeignModify functions seems to me to be a\n> nonstarter. Existing FDWs are going to expect their scan initializations\n> to have been done first. I'm surprised that postgres_fdw seemed to\n> need only a one-line fix; it could have been far worse. The amount of\n> trouble that could cause is absolutely not worth it to remove one loop\n> over the result relations.\n\nOkay, that sounds fair. After all, we write this about 'mtstate' in\nthe description of BeginForeignModify(), which I had failed to notice:\n\n\"mtstate is the overall state of the ModifyTable plan node being\nexecuted; global data about the plan and execution state is available\nvia this structure.\"\n\n> I also could not get excited about postponing initialization of RETURNING\n> or WITH CHECK OPTIONS expressions. I grant that that can be helpful\n> when those features are used, but I doubt that RETURNING is used that\n> heavily, and WITH CHECK OPTIONS is surely next door to nonexistent\n> in performance-critical queries. If the feature isn't used, the cost\n> of the existing code is about zero. 
So I couldn't see that it was worth\n> the amount of code thrashing and risk of new bugs involved.\n\nOkay.\n\n> The bit you\n> noted about EXPLAIN missing a subplan is pretty scary in this connection;\n> I'm not at all sure that that's just cosmetic.\n\nYeah, this and...\n\n> (Having said that, I'm wondering if there are bugs in these cases for\n> cross-partition updates that target a previously-not-used partition.\n> So we might have things to fix anyway.)\n\n...this would need to be looked at a bit more closely, which I'll try\nto do sometime later this week.\n\n> Anyway, looking at the test case you posted at the very top of this\n> thread, I was getting this with HEAD on Friday:\n>\n> nparts TPS\n> 0 12152\n> 10 8672\n> 100 2753\n> 1000 314\n>\n> and after the two patches I just pushed, it looks like:\n>\n> 0 12105\n> 10 9928\n> 100 5433\n> 1000 938\n>\n> So while there's certainly work left to do, that's not bad for\n> some low-hanging-fruit grabbing.\n\nYes, certainly.\n\nI reran my usual benchmark and got the following numbers, this time\ncomparing v13.2 against the latest HEAD:\n\nnparts 10cols 20cols 40cols\n\nv13.2\n\n64 3231 2747 2217\n128 1528 1269 1121\n256 709 652 491\n1024 96 78 67\n\nv14dev HEAD\n\n64 14835 14360 14563\n128 9469 9601 9490\n256 5523 5383 5268\n1024 1482 1415 1366\n\nClearly, we've made some very good progress here. Thanks.\n\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 7 Apr 2021 17:18:21 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> Also, I think we should update the commentary around ri_projectNew a\n> bit to make it clear that noplace beside ExecGet{Insert|Update}Tuple\n> should be touching it and the associated slots.\n\nHm. 
I pushed your comment fixes in nodeModifyTable.c, but not this\nchange, because it seemed to be more verbose and not really an\nimprovement. Why are these fields any more hands-off than any others?\nBesides which, there certainly is other code touching ri_oldTupleSlot.\n\nAnyway, I've marked the CF entry closed, because I think this is about\nas far as we can get for v14. I'm not averse to revisiting the\nRETURNING and WITH CHECK OPTIONS issues later, but it looks to me like\nthat needs more study.\n\n> I reran my usual benchmark and got the following numbers, this time\n> comparing v13.2 against the latest HEAD:\n\n> nparts 10cols 20cols 40cols\n\n> v13.2\n> 64 3231 2747 2217\n> 128 1528 1269 1121\n> 256 709 652 491\n> 1024 96 78 67\n\n> v14dev HEAD\n> 64 14835 14360 14563\n> 128 9469 9601 9490\n> 256 5523 5383 5268\n> 1024 1482 1415 1366\n\n> Clearly, we've made some very good progress here. Thanks.\n\nIndeed, that's a pretty impressive comparison.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 07 Apr 2021 12:34:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Wed, Apr 7, 2021 at 12:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > v13.2\n> > 64 3231 2747 2217\n> > 128 1528 1269 1121\n> > 256 709 652 491\n> > 1024 96 78 67\n>\n> > v14dev HEAD\n> > 64 14835 14360 14563\n> > 128 9469 9601 9490\n> > 256 5523 5383 5268\n> > 1024 1482 1415 1366\n>\n> > Clearly, we've made some very good progress here. Thanks.\n>\n> Indeed, that's a pretty impressive comparison.\n\n+1. That looks like a big improvement.\n\nIn a vacuum, you'd hope that partitioning a table would make things\nfaster rather than slower, when only one partition is implicated. Or\nat least that the speed would stay about the same. And, while this is\na lot better, we're clearly not there yet. 
So I hope that, in future\nreleases, we can continue to find ways to whittle down the overhead.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 7 Apr 2021 13:42:30 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Apr 7, 2021 at 12:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Indeed, that's a pretty impressive comparison.\n\n> +1. That looks like a big improvement.\n\n> In a vacuum, you'd hope that partitioning a table would make things\n> faster rather than slower, when only one partition is implicated. Or\n> at least that the speed would stay about the same. And, while this is\n> a lot better, we're clearly not there yet. So I hope that, in future\n> releases, we can continue to find ways to whittle down the overhead.\n\nNote that this test case includes plan_cache_mode = force_generic_plan,\nso it's deliberately kneecapping our ability to tell that \"only one\npartition is implicated\". I think things would often be better in\nproduction cases. No argument that there's not work left to do, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 07 Apr 2021 14:02:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Thu, Apr 8, 2021 at 1:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > Also, I think we should update the commentary around ri_projectNew a\n> > bit to make it clear that noplace beside ExecGet{Insert|Update}Tuple\n> > should be touching it and the associated slots.\n>\n> Hm. I pushed your comment fixes in nodeModifyTable.c, but not this\n> change, because it seemed to be more verbose and not really an\n> improvement. 
Why are these fields any more hands-off than any others?\n> Besides which, there certainly is other code touching ri_oldTupleSlot.\n\nOops, that's right.\n\n> Anyway, I've marked the CF entry closed, because I think this is about\n> as far as we can get for v14. I'm not averse to revisiting the\n> RETURNING and WITH CHECK OPTIONS issues later, but it looks to me like\n> that needs more study.\n\nSure, I will look into that.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 8 Apr 2021 11:12:13 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Thu, Apr 8, 2021 at 3:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Wed, Apr 7, 2021 at 12:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Indeed, that's a pretty impressive comparison.\n>\n> > +1. That looks like a big improvement.\n>\n> > In a vacuum, you'd hope that partitioning a table would make things\n> > faster rather than slower, when only one partition is implicated. Or\n> > at least that the speed would stay about the same. And, while this is\n> > a lot better, we're clearly not there yet. 
So I hope that, in future\n> > releases, we can continue to find ways to whittle down the overhead.\n>\n> Note that this test case includes plan_cache_mode = force_generic_plan,\n> so it's deliberately kneecapping our ability to tell that \"only one\n> partition is implicated\".\n\nFor the record, here are the numbers for plan_cache_mode = auto.\n(Actually, plancache.c always goes with custom planning for\npartitioned tables.)\n\nv13.2\nnparts 10cols 20cols 40cols\n\n64 13391 12140 10958\n128 13436 12297 10643\n256 12564 12294 10355\n1024 11450 10600 9020\n\nv14dev HEAD\n\n64 14925 14648 13361\n128 14379 14333 13138\n256 14478 14246 13316\n1024 12744 12621 11579\n\nThere's 10-20% improvement in this case too for various partition\ncounts, which really has more to do with 86dc90056 than the work done\nhere.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 8 Apr 2021 12:32:43 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Thu, 8 Apr 2021 at 15:32, Amit Langote <amitlangote09@gmail.com> wrote:\n> There's 10-20% improvement in this case too for various partition\n> counts, which really has more to do with 86dc90056 than the work done\n> here.\n\nI'm not sure of the exact query you're running, but I imagine the\nreason that it wasn't that slow with custom plans was down to\n428b260f87.\n\nSo the remaining slowness for the generic plan case with large numbers\nof partitions in the plan vs custom plans plan-time pruning is a)\nlocking run-time pruned partitions; and; b) permission checks during\nexecutor startup?\n\nAside from the WCO and RETURNING stuff you mentioned, I mean.\n\nDavid\n\n\n", "msg_date": "Thu, 8 Apr 2021 16:54:39 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Thu, Apr 8, 2021 at 1:54 PM 
David Rowley <dgrowleyml@gmail.com> wrote:\n> On Thu, 8 Apr 2021 at 15:32, Amit Langote <amitlangote09@gmail.com> wrote:\n> > There's 10-20% improvement in this case too for various partition\n> > counts, which really has more to do with 86dc90056 than the work done\n> > here.\n>\n> I'm not sure of the exact query you're running,\n\nThe query is basically this:\n\n\\set a random(1, 1000000)\nupdate test_table set b = :a where a = :a;\n\n> but I imagine the\n> reason that it wasn't that slow with custom plans was down to\n> 428b260f87.\n\nRight, 428b260f87 is certainly why we are seeing numbers this big at\nall. However, I was saying that 86dc90056 is what makes v14 HEAD run\nabout 10-20% faster than *v13.2* in this benchmark. Note that\ninheritance_planner() in v13, which, although not as bad as it used to\nbe in v11, is still more expensive than a single grouping_planner()\ncall for a given query that we now get thanks to 86dc90056.\n\n> So the remaining slowness for the generic plan case with large numbers\n> of partitions in the plan vs custom plans plan-time pruning is a)\n> locking run-time pruned partitions; and; b) permission checks during\n> executor startup?\n\nActually, we didn't move ahead with making the ResulRelInfos\nthemselves lazily as I had proposed in the original patch, so\nExecInitModifyTable() still builds ResultRelInfos for all partitions.\n Although we did move initializations of some fields of it out of\nExecInitModifyTable() --- commits a1115fa0, c5b7ba4e, saving a decent\namount of time spent there. 
We need to study closely whether\ninitializing foreign partition's updates (direct or otherwise) lazily\ndoesn't produce wrong semantics before we can do that and we need the\nResultRelInfos to pass to those APIs.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 8 Apr 2021 14:33:31 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" }, { "msg_contents": "On Wed, Apr 7, 2021 at 5:18 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Apr 7, 2021 at 8:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I also could not get excited about postponing initialization of RETURNING\n> > or WITH CHECK OPTIONS expressions. I grant that that can be helpful\n> > when those features are used, but I doubt that RETURNING is used that\n> > heavily, and WITH CHECK OPTIONS is surely next door to nonexistent\n> > in performance-critical queries. If the feature isn't used, the cost\n> > of the existing code is about zero. So I couldn't see that it was worth\n> > the amount of code thrashing and risk of new bugs involved.\n>\n> Okay.\n>\n> > The bit you\n> > noted about EXPLAIN missing a subplan is pretty scary in this connection;\n> > I'm not at all sure that that's just cosmetic.\n>\n> Yeah, this and...\n\nI looked into this and can't see why this isn't just cosmetic as far\nas ModifyTable is concerned.\n\n\"EXPLAIN missing a subplan\" here just means that\nModifyTableState.PlanState.subPlan is not set. 
Besides ExplainNode(),\nonly ExecReScan() looks at PlanState.subPlan, and that does not seem\nrelevant to ModifyTable, because it doesn't support rescanning.\n\nI don't see any such problems with creating RETURNING projections\non-demand either.\n\n> > (Having said that, I'm wondering if there are bugs in these cases for\n> > cross-partition updates that target a previously-not-used partition.\n> > So we might have things to fix anyway.)\n>\n> ...this would need to be looked at a bit more closely, which I'll try\n> to do sometime later this week.\n\nGiven the above, I can't think of any undiscovered problems related to\nWCO and RETURNING expression states in the cases where cross-partition\nupdates target partitions that need to be initialized by\nExecInitPartitionInfo(). Here is the result for the test case in\nupdatable_views.sql modified to use partitioning and cross-partition\nupdates:\n\nCREATE TABLE base_tbl (a int) partition by range (a);\nCREATE TABLE base_tbl1 PARTITION OF base_tbl FOR VALUES FROM (1) TO (6);\nCREATE TABLE base_tbl2 PARTITION OF base_tbl FOR VALUES FROM (6) TO (11);\nCREATE TABLE base_tbl3 PARTITION OF base_tbl FOR VALUES FROM (11) TO (15);\nCREATE TABLE ref_tbl (a int PRIMARY KEY);\nINSERT INTO ref_tbl SELECT * FROM generate_series(1,10);\nCREATE VIEW rw_view1 AS\n SELECT * FROM base_tbl b\n WHERE EXISTS(SELECT 1 FROM ref_tbl r WHERE r.a = b.a)\n WITH CHECK OPTION;\n\nINSERT INTO rw_view1 VALUES (1);\nINSERT 0 1\n\nINSERT INTO rw_view1 VALUES (11);\nERROR: new row violates check option for view \"rw_view1\"\nDETAIL: Failing row contains (11).\n\n-- Both are cross-partition updates where the target relation is\n-- lazily initialized in ExecInitPartitionInfo(), along with the WCO\n-- qual ExprState\nUPDATE rw_view1 SET a = a + 5 WHERE a = 1;\nUPDATE 1\n\nUPDATE rw_view1 SET a = a + 5 WHERE a = 6;\nERROR: new row violates check option for view \"rw_view1\"\nDETAIL: Failing row contains (11).\n\nEXPLAIN (costs off) INSERT INTO rw_view1 VALUES 
(5);\n QUERY PLAN\n----------------------\n Insert on base_tbl b\n -> Result\n(2 rows)\n\nEXPLAIN (costs off) UPDATE rw_view1 SET a = a + 5 WHERE a = 1;\n QUERY PLAN\n--------------------------------------------------------\n Update on base_tbl b\n Update on base_tbl1 b_1\n -> Nested Loop\n -> Index Scan using ref_tbl_pkey on ref_tbl r\n Index Cond: (a = 1)\n -> Seq Scan on base_tbl1 b_1\n Filter: (a = 1)\n(7 rows)\n\nEXPLAIN (costs off) UPDATE rw_view1 SET a = a + 5 WHERE a = 6;\n QUERY PLAN\n--------------------------------------------------------\n Update on base_tbl b\n Update on base_tbl2 b_1\n -> Nested Loop\n -> Index Scan using ref_tbl_pkey on ref_tbl r\n Index Cond: (a = 6)\n -> Seq Scan on base_tbl2 b_1\n Filter: (a = 6)\n(7 rows)\n\nPatch attached. I tested the performance benefit of doing this by\nmodifying the update query used in earlier benchmarks to have a\nRETURNING * clause, getting the following TPS numbers:\n\n-Mprepared (plan_cache_mode=force_generic_plan)\n\nnparts 10cols 20cols 40cols\n\nHEAD\n64 10909 9067 7171\n128 6903 5624 4161\n256 3748 3056 2219\n1024 953 738 427\n\nPatched\n64 13817 13395 12754\n128 9271 9102 8279\n256 5345 5207 5083\n1024 1463 1443 1389\n\nAlso, I don't see much impact of checking if (node->returningLists) in\nthe per-result-rel initialization loop in the common cases where\nthere's no RETURNING.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 14 Apr 2021 12:06:56 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ModifyTable overheads in generic plans" } ]
[ { "msg_contents": "\nFound when working against release 12.\n\n\nGiven the following extension:\n\n\n::::::::::::::\nshare/postgresql/extension/dummy--1.0.0.sql\n::::::::::::::\n-- complain if script is sourced in psql, rather than via CREATE EXTENSION\n\\echo Use \"CREATE EXTENSION dummy\" to load this file. \\quit\n\nCREATE TABLE @extschema@.dummytab (\n       a int,\n       b int,\n       c int);\nSELECT pg_catalog.pg_extension_config_dump('dummytab', '');\n\n\n::::::::::::::\nshare/postgresql/extension/dummy.control\n::::::::::::::\n# dummy extension\ncomment = 'dummy'\ndefault_version = '1.0.0'\nrelocatable = false\n\n\nand this use of it:\n\n\nbin/psql -c 'create schema dummy; create extension  dummy schema dummy;\ninsert into dummy.dummytab values(1,2,3);'\n\n\nthis command segfaults:\n\n\nbin/pg_dump -a --column-inserts -n dummy\n\n\nIt appears that for extension owned tables tbinfo.attgenerated isn't\nbeing properly populated, so line 2050 in REL_12_STABLE, which is line\n2109 in git tip, is failing.\n\n\nI'm looking for a fix, but if anyone has a quick fix that would be nice :-)\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 26 Jun 2020 09:57:04 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "pg_dump bug for extension owned tables" }, { "msg_contents": "\nOn 6/26/20 9:57 AM, Andrew Dunstan wrote:\n> It appears that for extension owned tables tbinfo.attgenerated isn't\n> being properly populated, so line 2050 in REL_12_STABLE, which is line\n> 2109 in git tip, is failing.\n>\n>\n\nShould have mentioned this is in src/bin/pg_dump/pg_dump.c\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 26 Jun 2020 10:24:33 -0400", "msg_from": "Andrew Dunstan 
<andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump bug for extension owned tables" }, { "msg_contents": "On Fri, Jun 26, 2020 at 11:24 AM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n>\n>\n> On 6/26/20 9:57 AM, Andrew Dunstan wrote:\n> > It appears that for extension owned tables tbinfo.attgenerated isn't\n> > being properly populated, so line 2050 in REL_12_STABLE, which is line\n> > 2109 in git tip, is failing.\n> >\n> >\n>\n> Should have mentioned this is in src/bin/pg_dump/pg_dump.c\n>\n\nHaving a look on it.\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Fri, 26 Jun 2020 11:55:26 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump bug for extension owned tables" }, { "msg_contents": "On Fri, Jun 26, 2020 at 11:55 AM Fabrízio de Royes Mello <\nfabriziomello@gmail.com> wrote:\n>\n>\n> On Fri, Jun 26, 2020 at 11:24 AM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n> >\n> >\n> > On 6/26/20 9:57 AM, Andrew Dunstan wrote:\n> > > It appears that for extension owned tables tbinfo.attgenerated isn't\n> > > being properly populated, so line 2050 in REL_12_STABLE, which is line\n> > > 2109 in git tip, is failing.\n> > >\n> > >\n> >\n> > Should have mentioned this is in 
src/bin/pg_dump/pg_dump.c\n> >\n>\n> Having a look on it.\n>\n\nSeems when qualify the schemaname the the \"tbinfo->interesting\" field is\nnot setted for extensions objects, so the getTableAttrs can't fill the\nattgenerated field properly.\n\nI'm not 100% sure it's the correct way but the attached patch works for me\nand all tests passed. Maybe we should add more TAP tests?\n\nRegards,\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Fri, 26 Jun 2020 15:10:17 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump bug for extension owned tables" }, { "msg_contents": "\nOn 6/26/20 2:10 PM, Fabrízio de Royes Mello wrote:\n>\n> On Fri, Jun 26, 2020 at 11:55 AM Fabrízio de Royes Mello\n> <fabriziomello@gmail.com <mailto:fabriziomello@gmail.com>> wrote:\n> >\n> >\n> > On Fri, Jun 26, 2020 at 11:24 AM Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com\n> <mailto:andrew.dunstan@2ndquadrant.com>> wrote:\n> > >\n> > >\n> > > On 6/26/20 9:57 AM, Andrew Dunstan wrote:\n> > > > It appears that for extension owned tables tbinfo.attgenerated isn't\n> > > > being properly populated, so line 2050 in REL_12_STABLE, which\n> is line\n> > > > 2109 in git tip, is failing.\n> > > >\n> > > >\n> > >\n> > > Should have mentioned this is in src/bin/pg_dump/pg_dump.c\n> > >\n> >\n> > Having a look on it.\n> >\n>\n> Seems when qualify the schemaname the the \"tbinfo->interesting\" field\n> is not setted for extensions objects, so the getTableAttrs can't fill\n> the attgenerated field properly.\n>\n> I'm not 100% sure it's the correct way but the attached patch works\n> for me and all tests passed. Maybe we should add more TAP tests?\n>\n>\n\n\nThanks for this.\n\nIt looks sane to me, I'll let others weigh in on it though. 
Yes we\nshould have a TAP test for it.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 26 Jun 2020 17:48:23 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump bug for extension owned tables" }, { "msg_contents": "\nOn 6/26/20 2:10 PM, Fabrízio de Royes Mello wrote:\n>\n> On Fri, Jun 26, 2020 at 11:55 AM Fabrízio de Royes Mello\n> <fabriziomello@gmail.com <mailto:fabriziomello@gmail.com>> wrote:\n> >\n> >\n> > On Fri, Jun 26, 2020 at 11:24 AM Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com\n> <mailto:andrew.dunstan@2ndquadrant.com>> wrote:\n> > >\n> > >\n> > > On 6/26/20 9:57 AM, Andrew Dunstan wrote:\n> > > > It appears that for extension owned tables tbinfo.attgenerated isn't\n> > > > being properly populated, so line 2050 in REL_12_STABLE, which\n> is line\n> > > > 2109 in git tip, is failing.\n> > > >\n> > > >\n> > >\n> > > Should have mentioned this is in src/bin/pg_dump/pg_dump.c\n> > >\n> >\n> > Having a look on it.\n> >\n>\n> Seems when qualify the schemaname the the \"tbinfo->interesting\" field\n> is not setted for extensions objects, so the getTableAttrs can't fill\n> the attgenerated field properly.\n>\n> I'm not 100% sure it's the correct way but the attached patch works\n> for me and all tests passed. Maybe we should add more TAP tests?\n>\n>\n\n\nI just tried this patch out on master, with the test case I gave\nupthread. 
It's not working, still getting a segfault.\n\n\nWill look closer.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 11 Jul 2020 19:07:07 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump bug for extension owned tables" }, { "msg_contents": "On Sat, Jul 11, 2020 at 8:07 PM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n>\n>\n> On 6/26/20 2:10 PM, Fabrízio de Royes Mello wrote:\n> >\n> > On Fri, Jun 26, 2020 at 11:55 AM Fabrízio de Royes Mello\n> > <fabriziomello@gmail.com <mailto:fabriziomello@gmail.com>> wrote:\n> > >\n> > >\n> > > On Fri, Jun 26, 2020 at 11:24 AM Andrew Dunstan\n> > <andrew.dunstan@2ndquadrant.com\n> > <mailto:andrew.dunstan@2ndquadrant.com>> wrote:\n> > > >\n> > > >\n> > > > On 6/26/20 9:57 AM, Andrew Dunstan wrote:\n> > > > > It appears that for extension owned tables tbinfo.attgenerated\nisn't\n> > > > > being properly populated, so line 2050 in REL_12_STABLE, which\n> > is line\n> > > > > 2109 in git tip, is failing.\n> > > > >\n> > > > >\n> > > >\n> > > > Should have mentioned this is in src/bin/pg_dump/pg_dump.c\n> > > >\n> > >\n> > > Having a look on it.\n> > >\n> >\n> > Seems when qualify the schemaname the the \"tbinfo->interesting\" field\n> > is not setted for extensions objects, so the getTableAttrs can't fill\n> > the attgenerated field properly.\n> >\n> > I'm not 100% sure it's the correct way but the attached patch works\n> > for me and all tests passed. Maybe we should add more TAP tests?\n> >\n> >\n>\n>\n> I just tried this patch out on master, with the test case I gave\n> upthread. It's not working, still getting a segfault.\n>\n\nOhh really sorry about it... my bad... i completely forgot about it!!!\n\nDue to my rush I ended up adding the wrong patch version. 
Attached the\ncorrect version.\n\nWill add TAP tests to src/test/modules/test_pg_dump\n\nRegards,\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Mon, 13 Jul 2020 11:52:23 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump bug for extension owned tables" }, { "msg_contents": "On Mon, Jul 13, 2020 at 11:52 AM Fabrízio de Royes Mello <\nfabriziomello@gmail.com> wrote:\n>\n>\n> On Sat, Jul 11, 2020 at 8:07 PM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n> >\n> >\n> > On 6/26/20 2:10 PM, Fabrízio de Royes Mello wrote:\n> > >\n> > > On Fri, Jun 26, 2020 at 11:55 AM Fabrízio de Royes Mello\n> > > <fabriziomello@gmail.com <mailto:fabriziomello@gmail.com>> wrote:\n> > > >\n> > > >\n> > > > On Fri, Jun 26, 2020 at 11:24 AM Andrew Dunstan\n> > > <andrew.dunstan@2ndquadrant.com\n> > > <mailto:andrew.dunstan@2ndquadrant.com>> wrote:\n> > > > >\n> > > > >\n> > > > > On 6/26/20 9:57 AM, Andrew Dunstan wrote:\n> > > > > > It appears that for extension owned tables tbinfo.attgenerated\nisn't\n> > > > > > being properly populated, so line 2050 in REL_12_STABLE, which\n> > > is line\n> > > > > > 2109 in git tip, is failing.\n> > > > > >\n> > > > > >\n> > > > >\n> > > > > Should have mentioned this is in src/bin/pg_dump/pg_dump.c\n> > > > >\n> > > >\n> > > > Having a look on it.\n> > > >\n> > >\n> > > Seems when qualify the schemaname the the \"tbinfo->interesting\" field\n> > > is not setted for extensions objects, so the getTableAttrs can't fill\n> > > the attgenerated field properly.\n> > >\n> > > I'm not 100% sure it's the correct way but the attached patch works\n> > > for me and all tests passed. Maybe we should add more TAP tests?\n> > >\n> > >\n> >\n> >\n> > I just tried this patch out on master, with the test case I gave\n> > upthread. 
It's not working, still getting a segfault.\n> >\n>\n> Ohh really sorry about it... my bad... i completely forgot about it!!!\n>\n> Due to my rush I ended up adding the wrong patch version. Attached the\ncorrect version.\n>\n> Will add TAP tests to src/test/modules/test_pg_dump\n>\n\nAttached the patch including TAP tests.\n\nRegards,\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Mon, 13 Jul 2020 14:37:50 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump bug for extension owned tables" }, { "msg_contents": "\nOn 7/13/20 10:52 AM, Fabrízio de Royes Mello wrote:\n>\n> On Sat, Jul 11, 2020 at 8:07 PM Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com\n> <mailto:andrew.dunstan@2ndquadrant.com>> wrote:\n> >\n> >\n> > On 6/26/20 2:10 PM, Fabrízio de Royes Mello wrote:\n> > >\n> > > On Fri, Jun 26, 2020 at 11:55 AM Fabrízio de Royes Mello\n> > > <fabriziomello@gmail.com <mailto:fabriziomello@gmail.com>\n> <mailto:fabriziomello@gmail.com <mailto:fabriziomello@gmail.com>>> wrote:\n> > > >\n> > > >\n> > > > On Fri, Jun 26, 2020 at 11:24 AM Andrew Dunstan\n> > > <andrew.dunstan@2ndquadrant.com\n> <mailto:andrew.dunstan@2ndquadrant.com>\n> > > <mailto:andrew.dunstan@2ndquadrant.com\n> <mailto:andrew.dunstan@2ndquadrant.com>>> wrote:\n> > > > >\n> > > > >\n> > > > > On 6/26/20 9:57 AM, Andrew Dunstan wrote:\n> > > > > > It appears that for extension owned tables\n> tbinfo.attgenerated isn't\n> > > > > > being properly populated, so line 2050 in REL_12_STABLE, which\n> > > is line\n> > > > > > 2109 in git tip, is failing.\n> > > > > >\n> > > > > >\n> > > > >\n> > > > > Should have mentioned this is in src/bin/pg_dump/pg_dump.c\n> > > > >\n> > > >\n> > > > Having a look on it.\n> > > >\n> > >\n> > > Seems when qualify the schemaname the the \"tbinfo->interesting\" field\n> > > is not 
setted for extensions objects, so the getTableAttrs can't fill\n> > > the attgenerated field properly.\n> > >\n> > > I'm not 100% sure it's the correct way but the attached patch works\n> > > for me and all tests passed. Maybe we should add more TAP tests?\n> > >\n> > >\n> >\n> >\n> > I just tried this patch out on master, with the test case I gave\n> > upthread. It's not working, still getting a segfault.\n> >\n>\n> Ohh really sorry about it... my bad... i completely forgot about it!!!\n>\n> Due to my rush I ended up adding the wrong patch version. Attached the\n> correct version.\n>\n> Will add TAP tests to src/test/modules/test_pg_dump\n\n\nyeah, that's the fix I came up with too. The only thing I added was\n\"Assert(tbinfo->attgenerated);\" at about line 2097.\n\n\nWill wait for your TAP tests.\n\n\nthanks\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Mon, 13 Jul 2020 14:29:14 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump bug for extension owned tables" }, { "msg_contents": "On Mon, Jul 13, 2020 at 3:29 PM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n>\n>\n> On 7/13/20 10:52 AM, Fabrízio de Royes Mello wrote:\n> >\n> > On Sat, Jul 11, 2020 at 8:07 PM Andrew Dunstan\n> > <andrew.dunstan@2ndquadrant.com\n> > <mailto:andrew.dunstan@2ndquadrant.com>> wrote:\n> > >\n> > >\n> > > On 6/26/20 2:10 PM, Fabrízio de Royes Mello wrote:\n> > > >\n> > > > On Fri, Jun 26, 2020 at 11:55 AM Fabrízio de Royes Mello\n> > > > <fabriziomello@gmail.com <mailto:fabriziomello@gmail.com>\n> > <mailto:fabriziomello@gmail.com <mailto:fabriziomello@gmail.com>>>\nwrote:\n> > > > >\n> > > > >\n> > > > > On Fri, Jun 26, 2020 at 11:24 AM Andrew Dunstan\n> > > > <andrew.dunstan@2ndquadrant.com\n> > <mailto:andrew.dunstan@2ndquadrant.com>\n> > > > 
<mailto:andrew.dunstan@2ndquadrant.com\n> > <mailto:andrew.dunstan@2ndquadrant.com>>> wrote:\n> > > > > >\n> > > > > >\n> > > > > > On 6/26/20 9:57 AM, Andrew Dunstan wrote:\n> > > > > > > It appears that for extension owned tables\n> > tbinfo.attgenerated isn't\n> > > > > > > being properly populated, so line 2050 in REL_12_STABLE, which\n> > > > is line\n> > > > > > > 2109 in git tip, is failing.\n> > > > > > >\n> > > > > > >\n> > > > > >\n> > > > > > Should have mentioned this is in src/bin/pg_dump/pg_dump.c\n> > > > > >\n> > > > >\n> > > > > Having a look on it.\n> > > > >\n> > > >\n> > > > Seems when qualify the schemaname the the \"tbinfo->interesting\"\nfield\n> > > > is not setted for extensions objects, so the getTableAttrs can't\nfill\n> > > > the attgenerated field properly.\n> > > >\n> > > > I'm not 100% sure it's the correct way but the attached patch works\n> > > > for me and all tests passed. Maybe we should add more TAP tests?\n> > > >\n> > > >\n> > >\n> > >\n> > > I just tried this patch out on master, with the test case I gave\n> > > upthread. It's not working, still getting a segfault.\n> > >\n> >\n> > Ohh really sorry about it... my bad... i completely forgot about it!!!\n> >\n> > Due to my rush I ended up adding the wrong patch version. Attached the\n> > correct version.\n> >\n> > Will add TAP tests to src/test/modules/test_pg_dump\n>\n>\n> yeah, that's the fix I came up with too. 
The only thing I added was\n> \"Assert(tbinfo->attgenerated);\" at about line 2097.\n>\n\nCool.\n\n>\n> Will wait for your TAP tests.\n>\n\nActually I've sent it already but it seems to have gone to the moderation\nqueue.\n\nAnyway attached with your assertion and TAP tests.\n\nRegards,\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Mon, 13 Jul 2020 15:46:18 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump bug for extension owned tables" }, { "msg_contents": "\nOn 7/13/20 2:46 PM, Fabrízio de Royes Mello wrote:\n>\n>\n> >\n> > yeah, that's the fix I came up with too. The only thing I added was\n> > \"Assert(tbinfo->attgenerated);\" at about line 2097.\n> >\n>\n> Cool.\n>\n> >\n> > Will wait for your TAP tests.\n> >\n>\n> Actually I've sent it already but it seems to have gone to the\n> moderation queue.\n>\n> Anyway attached with your assertion and TAP tests.\n>\n>\n\n\n\nThanks, that all seems fine. The TAP test changes are a bit of a pain in\nthe neck before release 11, so I think I'll just do those back that far,\nbut the main fix for all live branches.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Mon, 13 Jul 2020 16:05:20 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump bug for extension owned tables" }, { "msg_contents": "On Mon, Jul 13, 2020 at 5:05 PM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n>\n>\n> On 7/13/20 2:46 PM, Fabrízio de Royes Mello wrote:\n> >\n> >\n> > >\n> > > yeah, that's the fix I came up with too. 
The only thing I added was\n> > > \"Assert(tbinfo->attgenerated);\" at about line 2097.\n> > >\n> >\n> > Cool.\n> >\n> > >\n> > > Will wait for your TAP tests.\n> > >\n> >\n> > Actually I've sent it already but it seems to have gone to the\n> > moderation queue.\n> >\n> > Anyway attached with your assertion and TAP tests.\n> >\n> >\n>\n>\n>\n> Thanks, that all seems fine. The TAP test changes are a bit of a pain in\n> the neck before release 11, so I think I'll just do those back that far,\n> but the main fix for all live branches.\n>\n\nSounds good to me.\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Mon, 13 Jul 2020 18:18:37 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump bug for extension owned tables" }, { "msg_contents": "On Mon, Jul 13, 2020 at 6:18 PM Fabrízio de Royes Mello <\nfabriziomello@gmail.com> wrote:\n\n>\n> On Mon, Jul 13, 2020 at 5:05 PM Andrew Dunstan <\n> andrew.dunstan@2ndquadrant.com> wrote:\n> >\n> >\n> > On 7/13/20 2:46 PM, Fabrízio de Royes Mello wrote:\n> > >\n> > >\n> > > >\n> > > > yeah, that's the fix I came up with too. The only thing I added was\n> > > > \"Assert(tbinfo->attgenerated);\" at about line 2097.\n> > > >\n> > >\n> > > Cool.\n> > >\n> > > >\n> > > > Will wait for your TAP tests.\n> > > >\n> > >\n> > > Actually I've sent it already but it seems to have gone to the\n> > > moderation queue.\n> > >\n> > > Anyway attached with your assertion and TAP tests.\n> > >\n> > >\n> >\n> >\n> >\n> > Thanks, that all seems fine. 
The TAP test changes are a bit of a pain in\n> > the neck before release 11, so I think I'll just do those back that far,\n> > but the main fix for all live branches.\n> >\n>\n> Sounds good to me.\n>\n>\nJust added to the next commitfest [1] to make sure we'll not lose it.\n\nRegards,\n\n[1] https://commitfest.postgresql.org/29/2671/\n\n-- \n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Thu, 6 Aug 2020 17:11:49 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump bug for extension owned tables" }, { "msg_contents": "On Thu, Aug 6, 2020 at 4:12 PM Fabrízio de Royes Mello\n<fabriziomello@gmail.com> wrote:\n>\n>\n> On Mon, Jul 13, 2020 at 6:18 PM Fabrízio de Royes Mello <fabriziomello@gmail.com> wrote:\n>>\n>>\n>> On Mon, Jul 13, 2020 at 5:05 PM Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n>> >\n>> >\n>> > On 7/13/20 2:46 PM, Fabrízio de Royes Mello wrote:\n>> > >\n>> > >\n>> > > >\n>> > > > yeah, that's the fix I came up with too. The only thing I added was\n>> > > > \"Assert(tbinfo->attgenerated);\" at about line 2097.\n>> > > >\n>> > >\n>> > > Cool.\n>> > >\n>> > > >\n>> > > > Will wait for your TAP tests.\n>> > > >\n>> > >\n>> > > Actually I've sent it already but it seems to have gone to the\n>> > > moderation queue.\n>> > >\n>> > > Anyway attached with your assertion and TAP tests.\n>> > >\n>> > >\n>> >\n>> >\n>> >\n>> > Thanks, that all seems fine. The TAP test changes are a bit of a pain in\n>> > the neck before release 11, so I think I'll just do those back that far,\n>> > but the main fix for all live branches.\n>> >\n>>\n>> Sounds good to me.\n>>\n>\n> Just added to the next commitfest [1] to make sure we'll not lose it.\n>\n> Regards,\n>\n> [1] https://commitfest.postgresql.org/29/2671/\n>\n\n\nThanks, Committed. 
Further investigation shows this was introduced in\nrelease 12, so that's how far back I went.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 4 Sep 2020 14:00:13 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump bug for extension owned tables" }, { "msg_contents": "On Fri, Sep 4, 2020 at 3:00 PM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n>\n>\n> Thanks, Committed. Further investigation shows this was introduced in\n> release 12, so that's how far back I went.\n>\n\nThanks!\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Fri, 4 Sep 2020 16:10:53 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump bug for extension owned tables" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> Thanks, Committed. Further investigation shows this was introduced in\n> release 12, so that's how far back I went.\n\nStill further investigation shows that this patch caused bug #16655 [1].\nIt should *not* have been designed to unconditionally clear the\ntable's \"interesting\" flag, as there may have been other reasons\nwhy that was set. 
The right way to think about it is \"if we are\ngoing to dump the table's data, then the table certainly needs its\ninteresting flag set, so that we'll collect the per-attribute info.\nOtherwise leave well enough alone\".\n\nThe patches I proposed in the other thread seem like they really ought\nto go all the way back for safety's sake. However, I do not observe\nany crash on the test case in v11, and I'm kind of wondering why not.\nDid you identify exactly where this was \"introduced in release 12\"?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/16655-5c92d6b3a9438137%40postgresql.org\n\n\n", "msg_date": "Tue, 06 Oct 2020 17:19:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump bug for extension owned tables" }, { "msg_contents": "\nOn 10/6/20 5:19 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> Thanks, Committed. Further investigation shows this was introduced in\n>> release 12, so that's how far back I went.\n> Still further investigation shows that this patch caused bug #16655 [1].\n> It should *not* have been designed to unconditionally clear the\n> table's \"interesting\" flag, as there may have been other reasons\n> why that was set. The right way to think about it is \"if we are\n> going to dump the table's data, then the table certainly needs its\n> interesting flag set, so that we'll collect the per-attribute info.\n> Otherwise leave well enough alone\".\n\n\n\nYes, I see the issue. Mea culpa :-(\n\n\n\n>\n> The patches I proposed in the other thread seem like they really ought\n> to go all the way back for safety's sake. However, I do not observe\n> any crash on the test case in v11, and I'm kind of wondering why not.\n> Did you identify exactly where this was \"introduced in release 12\"?\n\n\n\nIt looks like you've since discovered the cause here. 
Do you need me to\ndig more?\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Wed, 7 Oct 2020 08:46:03 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump bug for extension owned tables" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> It looks like you've since discovered the cause here. Do you need me to\n> dig more?\n\nNah, I've got it. Thanks.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 07 Oct 2020 09:37:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump bug for extension owned tables" } ]
[ { "msg_contents": "Hi,\n\nPer Coverity.\nPerhaps it is excessive caution.\nProbably assertion check has already caught all possible errors.\nBut, redundancy may not cost as much and is worth it.\n\n1.Assertion check\n/* Caller messed up if we have neither a ready query nor held data. */\nAssert(queryDesc || portal->holdStore);\n\nBut in release, if QueryDesc is NULL and portal->holdStore is NULL too,\nwhen Call PushActiveSnapshot *deference* NULL check can happen.\n\n2. if (portal->atEnd || count <= 0) is True\nNo need to recheck count against FETCH_ALL.\n\nIs it worth correcting them?\n\nregards,\nRanier Vilela", "msg_date": "Fri, 26 Jun 2020 11:31:18 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Possible NULL dereferencing (src/backend/tcop/pquery.c)" }, { "msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> 1.Assertion check\n> /* Caller messed up if we have neither a ready query nor held data. */\n> Assert(queryDesc || portal->holdStore);\n\n> But in release, if QueryDesc is NULL and portal->holdStore is NULL too,\n> when Call PushActiveSnapshot *deference* NULL check can happen.\n\n> 2. if (portal->atEnd || count <= 0) is True\n> No need to recheck count against FETCH_ALL.\n\n> Is it worth correcting them?\n\nNo.\n\nThe assertion already says that that's a case that cannot happen.\nOr to look at it another way: if the case were to occur in a devel\nbuild, you'd get a core dump at the assertion. If the case were\nto occur in a production build, you'd get a core dump at the\ndereference. Not much difference. Either way, it's a *caller*\nbug, because the caller is supposed to make sure this cannot happen.\nIf we thought that it could possibly happen, we would use an ereport\nbut not an assertion; having both for the same condition is quite\nmisguided.\n\n(If Coverity is whining about this for you, there's something wrong\nwith your Coverity settings. 
In the project's instance, Coverity\naccepts assertions as assertions.)\n\nI'm unimpressed with the other proposed change too; it's making the logic\nmore complicated and fragile for a completely negligible \"performance\ngain\". Moreover the compiler could probably make the same optimization.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Jun 2020 17:24:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible NULL dereferencing (src/backend/tcop/pquery.c)" }, { "msg_contents": "Em sex., 26 de jun. de 2020 às 18:24, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > 1.Assertion check\n> > /* Caller messed up if we have neither a ready query nor held data. */\n> > Assert(queryDesc || portal->holdStore);\n>\n> > But in release, if QueryDesc is NULL and portal->holdStore is NULL too,\n> > when Call PushActiveSnapshot *deference* NULL check can happen.\n>\n> > 2. if (portal->atEnd || count <= 0) is True\n> > No need to recheck count against FETCH_ALL.\n>\n> > Is it worth correcting them?\n>\n> No.\n>\n> The assertion already says that that's a case that cannot happen.\n> Or to look at it another way: if the case were to occur in a devel\n> build, you'd get a core dump at the assertion. If the case were\n> to occur in a production build, you'd get a core dump at the\n> dereference. Not much difference. Either way, it's a *caller*\n> bug, because the caller is supposed to make sure this cannot happen.\n> If we thought that it could possibly happen, we would use an ereport\n> but not an assertion; having both for the same condition is quite\n> misguided.\n>\nOk, thats a job of Assertion.\nBut I still worry that, in some rare cases, portal-> holdStore might be\ncorrupted in some way\nand the function is called, causing a segmentation fault.\n\n>\n> (If Coverity is whining about this for you, there's something wrong\n> with your Coverity settings. 
In the project's instance, Coverity\n> accepts assertions as assertions.)\n>\nProbable, because reports this:\nCID 10127 (#2 of 2): Dereference after null check (FORWARD_NULL)8.\nvar_deref_op: Dereferencing null pointer queryDesc.\n\n>\n> I'm unimpressed with the other proposed change too; it's making the logic\n> more complicated and fragile for a completely negligible \"performance\n> gain\". Moreover the compiler could probably make the same optimization.\n>\nOk.\n\nAnyway, thank you for by responding, your observations are always valuable\nand help learn \"the postgres way\" to develop.\nIt's not easy.\n\nbest regards,\nRanier Vilela\n\nEm sex., 26 de jun. de 2020 às 18:24, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Ranier Vilela <ranier.vf@gmail.com> writes:\n> 1.Assertion check\n> /* Caller messed up if we have neither a ready query nor held data. */\n> Assert(queryDesc || portal->holdStore);\n\n> But in release, if QueryDesc is NULL and portal->holdStore is NULL too,\n> when Call PushActiveSnapshot  *deference* NULL check can happen.\n\n> 2. if (portal->atEnd || count <= 0) is True\n> No need to recheck count against FETCH_ALL.\n\n> Is it worth correcting them?\n\nNo.\n\nThe assertion already says that that's a case that cannot happen.\nOr to look at it another way: if the case were to occur in a devel\nbuild, you'd get a core dump at the assertion.  If the case were\nto occur in a production build, you'd get a core dump at the\ndereference.  Not much difference.  Either way, it's a *caller*\nbug, because the caller is supposed to make sure this cannot happen.\nIf we thought that it could possibly happen, we would use an ereport\nbut not an assertion; having both for the same condition is quite\nmisguided.Ok, thats a job of Assertion. 
But I still worry that, in some rare cases, portal-> holdStore might be corrupted in some way and the function is called, causing a segmentation fault.\n\n(If Coverity is whining about this for you, there's something wrong\nwith your Coverity settings.  In the project's instance, Coverity\naccepts assertions as assertions.)Probable, because reports this:\nCID 10127 (#2 of 2): Dereference after null check (FORWARD_NULL)8. var_deref_op: Dereferencing null pointer queryDesc.\n\n\nI'm unimpressed with the other proposed change too; it's making the logic\nmore complicated and fragile for a completely negligible \"performance\ngain\".  Moreover the compiler could probably make the same optimization.Ok.  Anyway, thank you for by responding, your observations are always valuable and help learn \"the postgres way\" to develop.It's not easy.best regards,Ranier Vilela", "msg_date": "Fri, 26 Jun 2020 20:07:42 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible NULL dereferencing (src/backend/tcop/pquery.c)" } ]
[ { "msg_contents": "Hi,\n\nIs anyone here interested in helping to evaluate an experimental patch\nfor wolfSSL support?\n\nAttached please find a WIP patch for wolfSSL support in postgresql-12.\nAs a shortcut, you may find this merge request helpful:\n\n https://salsa.debian.org/postgresql/postgresql/-/merge_requests/4\n\nI used Debian stable (buster) with backports enabled and preferred.\n\nThe wolfssl.patch in d/patches builds and completes all tests, as long\nas libwolfssl-dev version 4.4.0+dfsg-2~bpo10+1 is installed and\npatched with the included libwolfssl-dev-rename-types.patch.\n\nYou can do so as root with:\n\n cd /usr/include/wolfssl\n patch -p1 < libwolfssl-dev-rename-types.patch\n\nPatching the library was easier than resolving type conflicts for\ntwenty-five files. An attempt was made but resulted in failing tests.\n\nThe offending types are called 'ValidateDate' and 'Hash'. They do not\nseem to be part of the wolfSSL ABI.\n\nThe patch operates with the following caveats:\n\n1. DH parameters are not currently loaded from a database-internal PEM\ncertificate. The function OBJ_find_sigid_algs is not available. The\nsecurity implications should be discussed with a cryptographer.\n\n2. The contrib module pgcrypto was not compiled with OpenSSL support\nand currently offers only native algorithms. wolfSSL's compatibility\nsupport for OpenSSL's EVP interface is incomplete and offers only a\nfew algorithms. The module should work directly with wolfCrypt.\n\n3. The error reporting in wolfSSL_set_fd seems to be different from\nOpenSSL. I could not locate SSLerr and decided to return BAD_FUNC_ARG.\nThat is what the routine being mimicked does in wolfSSL. If you see an\nSSL connection error, it may be wise to simply remove these two\nstatements in src/interfaces/libpq/fe-secure-openssl.c:\n\n ret = BAD_FUNC_ARG;\n\nUnsupported functions or features can probably be replaced with\nwolfSSL's or wolfCrypt's native interfaces. 
The company may be happy\nto assist.\n\nThe patch includes modifications toward missing goals. Some parts\nmodify code, for example in util/pgpcrypto, that is not actually called.\n\nPlease note that the wolfSSL team prefers the styling of their brand\nto be capitalized as recorded in this sentence. Thank you!\n\nKind regards\nFelix Lechner", "msg_date": "Fri, 26 Jun 2020 15:33:47 -0700", "msg_from": "Felix Lechner <felix.lechner@lease-up.com>", "msg_from_op": true, "msg_subject": "Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "On 2020-06-27 00:33, Felix Lechner wrote:\n> Is anyone here interested in helping to evaluate an experimental patch\n> for wolfSSL support?\n\nWhat would be the advantage of using wolfSSL over OpenSSL?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 27 Jun 2020 14:30:37 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "Re: Peter Eisentraut\n> What would be the advantage of using wolfSSL over OpenSSL?\n\nAvoiding the OpenSSL-vs-GPL linkage problem with readline.\n\nChristoph\n\n\n", "msg_date": "Sat, 27 Jun 2020 14:50:27 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "On Sat, Jun 27, 2020 at 02:50:27PM +0200, Christoph Berg wrote:\n> Re: Peter Eisentraut\n> > What would be the advantage of using wolfSSL over OpenSSL?\n> \n> Avoiding the OpenSSL-vs-GPL linkage problem with readline.\n\nUh, wolfSSL is GPL2:\n\n\thttps://www.wolfssl.com/license/\n\nNot sure why we would want to lock Postgres into a GPL-style\nrequirement. 
As I understand it, we don't normally ship readline or\nopenssl.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Sat, 27 Jun 2020 08:56:38 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Sat, Jun 27, 2020 at 02:50:27PM +0200, Christoph Berg wrote:\n>> Re: Peter Eisentraut\n>>> What would be the advantage of using wolfSSL over OpenSSL?\n\n>> Avoiding the OpenSSL-vs-GPL linkage problem with readline.\n\n> Uh, wolfSSL is GPL2:\n> \thttps://www.wolfssl.com/license/\n\nReadline is GPLv3+ (according to Red Hat's labeling of that package\nanyway, didn't check the source). So they'd be compatible, while\nopenssl's license is nominally incompatible with GPL. As I recall,\nDebian jumps through some silly hoops to pretend that they're not\nusing openssl and readline at the same time with Postgres, so I\ncan definitely understand Christoph's interest in an alternative.\n\nHowever, judging from the caveats mentioned in the initial message,\nmy inclination would be to wait awhile for wolfSSL to mature.\n\nIn any case, the patch as written seems to *remove* the option\nto compile PG with OpenSSL. The chance of it being accepted that\nway is indistinguishable from zero. 
We've made some efforts towards\nseparating out the openssl-specific bits, so the shape I'd expect\nfrom a patch like this is to add some parallel wolfssl-specific bits.\nThere probably are more such bits to separate, but this isn't the\nway to proceed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 27 Jun 2020 10:56:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "On Sat, Jun 27, 2020 at 10:56:46AM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Sat, Jun 27, 2020 at 02:50:27PM +0200, Christoph Berg wrote:\n> >> Re: Peter Eisentraut\n> >>> What would be the advantage of using wolfSSL over OpenSSL?\n> \n> >> Avoiding the OpenSSL-vs-GPL linkage problem with readline.\n> \n> > Uh, wolfSSL is GPL2:\n> > \thttps://www.wolfssl.com/license/\n> \n> Readline is GPLv3+ (according to Red Hat's labeling of that package\n> anyway, didn't check the source). So they'd be compatible, while\n> openssl's license is nominally incompatible with GPL. 
As I recall,\n> Debian jumps through some silly hoops to pretend that they're not\n> using openssl and readline at the same time with Postgres, so I\n> can definitely understand Christoph's interest in an alternative.\n> \n> However, judging from the caveats mentioned in the initial message,\n> my inclination would be to wait awhile for wolfSSL to mature.\n\nAlso, wolfSSL is developed by a company and dual GPL/commercial\nlicenses, so it seems like a mismatch to me.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Sat, 27 Jun 2020 11:10:35 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> Also, wolfSSL is developed by a company and dual GPL/commercial\n> licenses, so it seems like a mismatch to me.\n\nYeah, that's definitely a factor behind my disinterest in\nmaking wolfSSL be the only alternative.  However, as long as\nit's available on GPL terms, I don't see a problem with it\nbeing one alternative.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 27 Jun 2020 11:16:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "On Sat, Jun 27, 2020 at 11:16:26AM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > Also, wolfSSL is developed by a company and dual GPL/commercial\n> > licenses, so it seems like a mismatch to me.\n> \n> Yeah, that's definitely a factor behind my disinterest in\n> making wolfSSL be the only alternative.  However, as long as\n> it's available on GPL terms, I don't see a problem with it\n> being one alternative.\n\nYeah, I guess it depends on how much Postgres code it takes to support\nit. 
Company-developed open source stuff usually goes into pay mode once\nit gets popular, so I am not super-excited to be going in this\ndirection.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Sat, 27 Jun 2020 11:49:40 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "Re: Tom Lane\n> In any case, the patch as written seems to *remove* the option\n> to compile PG with OpenSSL.\n\nIt's a WIP patch, meant to see if it works at all. Of course OpenSSL\nwould stay as the default option.\n\nChristoph\n\n\n", "msg_date": "Sat, 27 Jun 2020 20:48:29 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "Christoph Berg <myon@debian.org> writes:\n> It's a WIP patch, meant to see if it works at all. Of course OpenSSL\n> would stay as the default option.\n\nFair enough. One thing that struck me as I looked at it was that\nmost of the #include hackery seemed unnecessary. The configure\nscript could add -I/usr/include/wolfssl (or wherever those files\nare) to CPPFLAGS instead of touching all those #includes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 27 Jun 2020 14:52:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "Em sáb., 27 de jun. 
de 2020 às 09:50, Christoph Berg <myon@debian.org>\nescreveu:\n\n> Re: Peter Eisentraut\n> > What would be the advantage of using wolfSSL over OpenSSL?\n>\n> Avoiding the OpenSSL-vs-GPL linkage problem with readline.\n>\nI'm curious, how do you intend to solve a linking problem with\nOpenSSL-vs-GPL-readline, with another GPL product?\nWolfSSL, will provide a commercial license for PostgreSQL?\nIsn't LIbreSSL a better alternative?\n\nregards,\nRanier Vilela\n\nEm sáb., 27 de jun. de 2020 às 09:50, Christoph Berg <myon@debian.org> escreveu:Re: Peter Eisentraut\n> What would be the advantage of using wolfSSL over OpenSSL?\n\nAvoiding the OpenSSL-vs-GPL linkage problem with readline.I'm curious, how do you intend to solve a linking problem with OpenSSL-vs-GPL-readline, with another GPL product?WolfSSL, will provide a commercial license for PostgreSQL?Isn't LIbreSSL a better alternative? regards,Ranier Vilela", "msg_date": "Sat, 27 Jun 2020 16:22:51 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "On Sat, Jun 27, 2020 at 3:25 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> Em sáb., 27 de jun. de 2020 às 09:50, Christoph Berg <myon@debian.org>\n> escreveu:\n>\n>> Re: Peter Eisentraut\n>> > What would be the advantage of using wolfSSL over OpenSSL?\n>>\n>> Avoiding the OpenSSL-vs-GPL linkage problem with readline.\n>>\n> I'm curious, how do you intend to solve a linking problem with\n> OpenSSL-vs-GPL-readline, with another GPL product?\n> WolfSSL, will provide a commercial license for PostgreSQL?\n> Isn't LIbreSSL a better alternative?\n>\n\nSomewhere, I recall seeing an open-source OpenSSL compatibility wrapper for\nWolfSSL. Assuming that still exists, this patch seems entirely unnecessary.\n\n-- \nJonah H. Harris\n\nOn Sat, Jun 27, 2020 at 3:25 PM Ranier Vilela <ranier.vf@gmail.com> wrote:Em sáb., 27 de jun. 
de 2020 às 09:50, Christoph Berg <myon@debian.org> escreveu:Re: Peter Eisentraut\n> What would be the advantage of using wolfSSL over OpenSSL?\n\nAvoiding the OpenSSL-vs-GPL linkage problem with readline.I'm curious, how do you intend to solve a linking problem with OpenSSL-vs-GPL-readline, with another GPL product?WolfSSL, will provide a commercial license for PostgreSQL?Isn't LIbreSSL a better alternative? Somewhere, I recall seeing an open-source OpenSSL compatibility wrapper for WolfSSL. Assuming that still exists, this patch seems entirely unnecessary.-- Jonah H. Harris", "msg_date": "Sat, 27 Jun 2020 15:35:20 -0400", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "Re: Jonah H. Harris\n> Somewhere, I recall seeing an open-source OpenSSL compatibility wrapper for\n> WolfSSL. Assuming that still exists, this patch seems entirely unnecessary.\n\nUnless you actually tried.\n\nChristoph\n\n\n", "msg_date": "Sat, 27 Jun 2020 21:37:42 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "Re: Ranier Vilela\n> I'm curious, how do you intend to solve a linking problem with\n> OpenSSL-vs-GPL-readline, with another GPL product?\n> WolfSSL, will provide a commercial license for PostgreSQL?\n\nIt's replacing OpenSSL+GPL with GPL+GPL.\n\n> Isn't LIbreSSL a better alternative?\n\nI don't know.\n\nChristoph\n\n\n", "msg_date": "Sat, 27 Jun 2020 21:38:50 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "On Sat, Jun 27, 2020 at 04:22:51PM -0300, Ranier Vilela wrote:\n> Em sáb., 27 de jun. 
de 2020 às 09:50, Christoph Berg <myon@debian.org>\n> escreveu:\n> \n> Re: Peter Eisentraut\n> > What would be the advantage of using wolfSSL over OpenSSL?\n> \n> Avoiding the OpenSSL-vs-GPL linkage problem with readline.\n> \n> I'm curious, how do you intend to solve a linking problem with\n> OpenSSL-vs-GPL-readline, with another GPL product?\n\nI assume you can use wolfSSL as long as the result is GPL, which is the\nsame requirement libreadline causes for Postgres, particularly if\nPostgres is statically linked to libreadline.\n\n> WolfSSL, will provide a commercial license for PostgreSQL?\n> Isn't LIbreSSL a better alternative?\n\nSeems it might be.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Sat, 27 Jun 2020 15:40:26 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "Em sáb., 27 de jun. de 2020 às 16:40, Bruce Momjian <bruce@momjian.us>\nescreveu:\n\n> On Sat, Jun 27, 2020 at 04:22:51PM -0300, Ranier Vilela wrote:\n> > Em sáb., 27 de jun. 
de 2020 às 09:50, Christoph Berg <myon@debian.org>\n> > escreveu:\n> >\n> > Re: Peter Eisentraut\n> > > What would be the advantage of using wolfSSL over OpenSSL?\n> >\n> > Avoiding the OpenSSL-vs-GPL linkage problem with readline.\n> >\n> > I'm curious, how do you intend to solve a linking problem with\n> > OpenSSL-vs-GPL-readline, with another GPL product?\n>\n> I assume you can use wolfSSL as long as the result is GPL, which is the\n> same requirement libreadline causes for Postgres, particularly if\n> Postgres is statically linked to libreadline.\n>\nI don't want to divert the focus from the theread, but this subject has a\ncontroversial potential, in my opinion.\nI participated in a speech on another list, where I make contributions (IUP\nlibrary: https://www.tecgraf.puc-rio.br/iup/).\nWhere a user, upon discovering that two sub-libraries, were GPL licenses,\ncaused an uproar, bringing the speech to Mr.Stallman himself.\nIn short, the best thing for the project will be to remove the two GPL\nsub-libraries.\n\nregards,\nRanier Vilela\n\nEm sáb., 27 de jun. de 2020 às 16:40, Bruce Momjian <bruce@momjian.us> escreveu:On Sat, Jun 27, 2020 at 04:22:51PM -0300, Ranier Vilela wrote:\n> Em sáb., 27 de jun. 
de 2020 às 09:50, Christoph Berg <myon@debian.org>\n> escreveu:\n> \n>     Re: Peter Eisentraut\n>     > What would be the advantage of using wolfSSL over OpenSSL?\n> \n>     Avoiding the OpenSSL-vs-GPL linkage problem with readline.\n> \n> I'm curious, how do you intend to solve a linking problem with\n> OpenSSL-vs-GPL-readline, with another GPL product?\n\nI assume you can use wolfSSL as long as the result is GPL, which is the\nsame requirement libreadline causes for Postgres, particularly if\nPostgres is statically linked to libreadline.I don't want to divert the focus from the theread, but this subject has a controversial potential, in my opinion.I participated in a speech on another list, where I make contributions (IUP library: https://www.tecgraf.puc-rio.br/iup/). Where a user, upon discovering that two sub-libraries, were GPL licenses, caused an uproar, bringing the speech to Mr.Stallman himself.In short, the best thing for the project will be to remove the two GPL sub-libraries. regards,Ranier Vilela", "msg_date": "Sat, 27 Jun 2020 18:14:21 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "On Sat, Jun 27, 2020 at 06:14:21PM -0300, Ranier Vilela wrote:\n> Em sáb., 27 de jun. de 2020 às 16:40, Bruce Momjian <bruce@momjian.us>\n> escreveu:\n> \n> On Sat, Jun 27, 2020 at 04:22:51PM -0300, Ranier Vilela wrote:\n> > Em sáb., 27 de jun. 
de 2020 às 09:50, Christoph Berg <myon@debian.org>\n> > escreveu:\n> >\n> >     Re: Peter Eisentraut\n> >     > What would be the advantage of using wolfSSL over OpenSSL?\n> >\n> >     Avoiding the OpenSSL-vs-GPL linkage problem with readline.\n> >\n> > I'm curious, how do you intend to solve a linking problem with\n> > OpenSSL-vs-GPL-readline, with another GPL product?\n> \n> I assume you can use wolfSSL as long as the result is GPL, which is the\n> same requirement libreadline causes for Postgres, particularly if\n> Postgres is statically linked to libreadline.\n> \n> I don't want to divert the focus from the theread, but this subject has a\n> controversial potential, in my opinion.\n> I participated in a speech on another list, where I make contributions (IUP\n> library: https://www.tecgraf.puc-rio.br/iup/).\n> Where a user, upon discovering that two sub-libraries, were GPL licenses,\n> caused an uproar, bringing the speech to Mr.Stallman himself.\n> In short, the best thing for the project will be to remove the two GPL\n> sub-libraries.\n\nWe already try to do that by trying to use BSD-licensed libedit if\ninstalled:\n\n\thttps://github.com/freebsd/freebsd/tree/master/lib/libedit\n\thttps://certif.com/spec_print/readline.html\n\nI would love to see libedit fully functional so we don't need to rely on\nlibreadline anymore, but I seem to remember there are a few libreadline\nfeatures that libedit doesn't implement, so we use libreadline if it is\nalready installed. 
(I am still not clear if dynamic linking is a GPL\nviolation.)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Sat, 27 Jun 2020 17:23:09 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "On Sat, Jun 27, 2020 at 3:37 PM Christoph Berg <myon@debian.org> wrote:\n\n> Re: Jonah H. Harris\n> > Somewhere, I recall seeing an open-source OpenSSL compatibility wrapper\n> for\n> > WolfSSL. Assuming that still exists, this patch seems entirely\n> unnecessary.\n>\n> Unless you actually tried.\n\n\nDid you? It worked for me in the past on a similarly large system...\n\n-- \nJonah H. Harris\n\nOn Sat, Jun 27, 2020 at 3:37 PM Christoph Berg <myon@debian.org> wrote:Re: Jonah H. Harris\n> Somewhere, I recall seeing an open-source OpenSSL compatibility wrapper for\n> WolfSSL. Assuming that still exists, this patch seems entirely unnecessary.\n\nUnless you actually tried.Did you? It worked for me in the past on a similarly large system...-- Jonah H. Harris", "msg_date": "Sat, 27 Jun 2020 17:25:15 -0400", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "Em sáb., 27 de jun. de 2020 às 18:23, Bruce Momjian <bruce@momjian.us>\nescreveu:\n\n> On Sat, Jun 27, 2020 at 06:14:21PM -0300, Ranier Vilela wrote:\n> > Em sáb., 27 de jun. de 2020 às 09:50, Christoph Berg <\n> myon@debian.org>\n> > escreveu:\n> >\n> > Re: Peter Eisentraut\n> > > What would be the advantage of using wolfSSL over OpenSSL?\n> >\n> > Avoiding the OpenSSL-vs-GPL linkage problem with readline.\n> >\n> > I'm curious, how do you intend to solve a linking problem with\n> > OpenSSL-vs-GPL-readline, with another GPL product?\n>\n> I assume you can use wolfSSL as long as the result is GPL, which is\n> the\n> same requirement libreadline causes for Postgres, particularly if\n> Postgres is statically linked to libreadline.\n>\n> I don't want to divert the focus from the theread, but this subject has a\n> controversial potential, in my opinion.\n> I participated in a speech on another list, where I make contributions\n> (IUP\n> library: https://www.tecgraf.puc-rio.br/iup/).\n> Where a user, upon discovering that two sub-libraries, were GPL licenses,\n> caused an uproar, bringing the speech to Mr.Stallman himself.\n> In short, the best thing for the project will be to remove the two GPL\n> sub-libraries.\n\nWe aleady try to do that by trying to use BSD-licensed libedit if\ninstalled:\n\n https://github.com/freebsd/freebsd/tree/master/lib/libedit\n https://certif.com/spec_print/readline.html\n\nI would love to see libedit fully functional so we don't need to rely on\nlibreadline anymore, but I seem to remember there are a few libreadline\nfeatures that libedit doesn't implement, so we use libreadline if it is\nalready installed. (I am still not clear if dynamic linking is a GPL\nviolation.)\n\nPersonally, the dynamic link does not hurt the GPL.\nBut some people, do not think so, it was also unclear what Mr Stallman\nthinks of the subject (dynamic link).\n\nregards,\nRanier Vilela\n\nEm sáb., 27 de jun. de 2020 às 18:23, Bruce Momjian <bruce@momjian.us> escreveu:On Sat, Jun 27, 2020 at 06:14:21PM -0300, Ranier Vilela wrote:\n> Em sáb., 27 de jun. de 2020 às 16:40, Bruce Momjian <bruce@momjian.us>\n> escreveu:\n> \n>     On Sat, Jun 27, 2020 at 04:22:51PM -0300, Ranier Vilela wrote:\n>     > Em sáb., 27 de jun. de 2020 às 09:50, Christoph Berg <myon@debian.org>\n>     > escreveu:\n>     >\n>     >     Re: Peter Eisentraut\n>     >     > What would be the advantage of using wolfSSL over OpenSSL?\n>     >\n>     >     Avoiding the OpenSSL-vs-GPL linkage problem with readline.\n>     >\n>     > I'm curious, how do you intend to solve a linking problem with\n>     > OpenSSL-vs-GPL-readline, with another GPL product?\n> \n>     I assume you can use wolfSSL as long as the result is GPL, which is the\n>     same requirement libreadline causes for Postgres, particularly if\n>     Postgres is statically linked to libreadline.\n> \n> I don't want to divert the focus from the theread, but this subject has a\n> controversial potential, in my opinion.\n> I participated in a speech on another list, where I make contributions (IUP\n> library: https://www.tecgraf.puc-rio.br/iup/).\n> Where a user, upon discovering that two sub-libraries, were GPL licenses,\n> caused an uproar, bringing the speech to Mr.Stallman himself.\n> In short, the best thing for the project will be to remove the two GPL\n> sub-libraries.\n\nWe aleady try to do that by trying to use BSD-licensed libedit if\ninstalled:\n\n        https://github.com/freebsd/freebsd/tree/master/lib/libedit\n        https://certif.com/spec_print/readline.html\n\nI would love to see libedit fully functional so we don't need to rely on\nlibreadline anymore, but I seem to remember there are a few libreadline\nfeatures that libedit doesn't implement, so we use libreadline if it is\nalready installed.  (I am still not clear if dynamic linking is a GPL\nviolation.)Personally, the dynamic link does not hurt the GPL.But some people, do not think so, it was also unclear what Mr Stallman thinks of the subject (dynamic link).regards,Ranier Vilela", "msg_date": "Sat, 27 Jun 2020 18:25:21 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "On Sat, Jun 27, 2020 at 06:25:21PM -0300, Ranier Vilela wrote:\n> Personally, the dynamic link does not hurt the GPL.\n> But some people, do not think so, it was also unclear what Mr Stallman thinks\n> of the subject (dynamic link).\n\nI think Stallman says the courts have to decide, which kind of makes\nsense.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Sat, 27 Jun 2020 17:30:10 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "Christoph Berg <myon@debian.org> writes:\n> Re: Ranier Vilela\n>> Isn't LIbreSSL a better alternative?\n\n> I don't know.\n\nIt should work all right --- it's the default ssl library on OpenBSD\nand some other platforms, so we have some buildfarm coverage for it.\n(AFAICT, none of the OpenBSD machines are running the ssl test, but\nI tried that just now on OpenBSD 6.4 and it passed.)\n\nHowever, I'm not exactly convinced that using LibreSSL gets you out\nof the license compatibility bind. LibreSSL is a fork of OpenSSL,\nand IIUC a fairly hostile fork at that, so how did they get permission\nto remove OpenSSL's problematic license clauses? Did they remove them\nat all? A quick look at the header files on my OpenBSD installation\nshows a whole lot of ancient copyright text.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 27 Jun 2020 17:39:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Sat, Jun 27, 2020 at 06:25:21PM -0300, Ranier Vilela wrote:\n>> Personally, the dynamic link does not hurt the GPL.\n>> But some people, do not think so, it was also unclear what Mr Stallman thinks\n>> of the subject (dynamic link).\n\n> I think Stallman says the courts have to decide, which kind of makes\n> sense.\n\nThis subject (openssl vs readline) has been discussed to death in the\npast, with varying opinions --- for example, Red Hat's well-qualified\nlawyers think building PG with openssl + readline poses no problem,\nDebian's lawyers apparently think otherwise. Please see the archives\nbefore re-opening the topic. And, if you're not a lawyer, it's quite\nunlikely you'll bring any new insights.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 27 Jun 2020 17:46:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "On Sat, Jun 27, 2020 at 05:46:17PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Sat, Jun 27, 2020 at 06:25:21PM -0300, Ranier Vilela wrote:\n> >> Personally, the dynamic link does not hurt the GPL.\n> >> But some people, do not think so, it was also unclear what Mr Stallman thinks\n> >> of the subject (dynamic link).\n> \n> > I think Stallman says the courts have to decide, which kind of makes\n> > sense.\n> \n> This subject (openssl vs readline) has been discussed to death in the\n> past, with varying opinions --- for example, Red Hat's well-qualified\n> lawyers think building PG with openssl + readline poses no problem,\n> Debian's lawyers apparently think otherwise. Please see the archives\n> before re-opening the topic. And, if you're not a lawyer, it's quite\n> unlikely you'll bring any new insights.\n\nI think the larger problem is that different jurisdictions, e.g., USA,\nEU, could rule differently. Also, the FSF is not the only organization\nthat can bring violation suits, e.g. Oracle with MySQL, so there\nprobably isn't one answer to this until it is thoroughly litigated.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Sat, 27 Jun 2020 17:53:15 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "On Saturday, June 27, 2020, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Christoph Berg <myon@debian.org> writes:\n> > Re: Ranier Vilela\n> >> Isn't LIbreSSL a better alternative?\n>\n> > I don't know.\n>\n> It should work all right --- it's the default ssl library on OpenBSD\n> and some other platforms, so we have some buildfarm coverage for it.\n> (AFAICT, none of the OpenBSD machines are running the ssl test, but\n> I tried that just now on OpenBSD 6.4 and it passed.)\n>\n> However, I'm not exactly convinced that using LibreSSL gets you out\n> of the license compatibility bind. LibreSSL is a fork of OpenSSL,\n> and IIUC a fairly hostile fork at that, so how did they get permission\n> to remove OpenSSL's problematic license clauses? Did they remove them\n> at all? A quick look at the header files on my OpenBSD installation\n> shows a whole lot of ancient copyright text.\n\n\nAs I understand Libressl objective is not to change the license of existing\ncode but to deprecate features they don't want in it.\n\nThey also include in Libressl a new libtls which is ISC licensed, but it's\nanother history\n\n\n\n> regards, tom lane\n>\n>\n>\n\nOn Saturday, June 27, 2020, Tom Lane <tgl@sss.pgh.pa.us> wrote:Christoph Berg <myon@debian.org> writes:\n> Re: Ranier Vilela\n>> Isn't LIbreSSL a better alternative?\n\n> I don't know.\n\nIt should work all right --- it's the default ssl library on OpenBSD\nand some other platforms, so we have some buildfarm coverage for it.\n(AFAICT, none of the OpenBSD machines are running the ssl test, but\nI tried that just now on OpenBSD 6.4 and it passed.)\n\nHowever, I'm not exactly convinced that using LibreSSL gets you out\nof the license compatibility bind.  LibreSSL is a fork of OpenSSL,\nand IIUC a fairly hostile fork at that, so how did they get permission\nto remove OpenSSL's problematic license clauses?  Did they remove them\nat all?  A quick look at the header files on my OpenBSD installation\nshows a whole lot of ancient copyright text.As I understand Libressl objective is not to change the license of existing code but to deprecate features they don't want in it.They also include in Libressl a new libtls which is ISC licensed, but it's another history\n\n                        regards, tom lane", "msg_date": "Sat, 27 Jun 2020 21:52:38 -0500", "msg_from": "Abel Abraham Camarillo Ojeda <acamari@verlet.org>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL: WolfSSL support" }, { "msg_contents": "On 2020-06-27 14:50, Christoph Berg wrote:\n> Re: Peter Eisentraut\n>> What would be the advantage of using wolfSSL over OpenSSL?\n> \n> Avoiding the OpenSSL-vs-GPL linkage problem with readline.\n\nWe have added support for allegedly-OpenSSL compatible libraries such as \nLibreSSL before, so some tweaks for wolfSSL would seem acceptable. \nHowever, I doubt we are going to backpatch them, so unless you want to \ntake responsibility for that as a packager, it's not really going to \nhelp anyone soon. And OpenSSL 3.0.0 will have a new license, so for the \nnext PostgreSQL release, this problem might be gone.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 28 Jun 2020 10:18:12 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "> On 27 Jun 2020, at 21:40, Bruce Momjian <bruce@momjian.us> wrote:\n> On Sat, Jun 27, 2020 at 04:22:51PM -0300, Ranier Vilela wrote:\n\n>> WolfSSL, will provide a commercial license for PostgreSQL?\n>> Isn't LIbreSSL a better alternative?\n> \n> Seems it might be.\n\nThat's not really an apples/apples comparison as the projects have vastly\ndifferent goals (wolfSSL offers FIPS certification for example).\n\ncheers ./daniel\n\n\n", "msg_date": "Sun, 28 Jun 2020 12:37:02 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL: WolfSSL support" }, { "msg_contents": "Personally I'm more interested in a library like Amazon's which is\ntrying to do less rather than more. I would rather a simpler, better\ntested, easier to audit code-base than one with more features and more\ncomplications.\n\nhttps://github.com/awslabs/s2n\n\n\n", "msg_date": "Sun, 28 Jun 2020 09:40:00 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL: WolfSSL support" }, { "msg_contents": "On Sun, Jun 28, 2020 at 10:18:12AM +0200, Peter Eisentraut wrote:\n> On 2020-06-27 14:50, Christoph Berg wrote:\n> > Re: Peter Eisentraut\n> > > What would be the advantage of using wolfSSL over OpenSSL?\n> > \n> > Avoiding the OpenSSL-vs-GPL linkage problem with readline.\n> \n> We have added support for allegedly-OpenSSL compatible libraries such as\n> LibreSSL before, so some tweaks for wolfSSL would seem acceptable. However,\n> I doubt we are going to backpatch them, so unless you want to take\n> responsibility for that as a packager, it's not really going to help anyone\n> soon. And OpenSSL 3.0.0 will have a new license, so for the next PostgreSQL\n> release, this problem might be gone.\n\nOh, that is a long time coming --- it would be nice.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Sun, 28 Jun 2020 09:56:13 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "Hi Greg,\n\nOn Sun, Jun 28, 2020 at 6:40 AM Greg Stark <stark@mit.edu> wrote:\n>\n> I'm more interested in a library like Amazon's\n\nDoes S2N support TLS 1.3?\n\n https://github.com/awslabs/s2n/issues/388\n\nKind regards\nFelix Lechner\n\n\n", "msg_date": "Sun, 28 Jun 2020 07:22:34 -0700", "msg_from": "Felix Lechner <felix.lechner@lease-up.com>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL: WolfSSL support" }, { "msg_contents": "Hi Jonah,\n\nOn Sat, Jun 27, 2020 at 12:35 PM Jonah H. Harris <jonah.harris@gmail.com> wrote:\n>\n> Somewhere, I recall seeing an open-source OpenSSL compatibility wrapper for WolfSSL. Assuming that still exists, this patch seems entirely unnecessary.\n\nThe patch uses the OpenSSL compatibility layer.\n\nKind regards\nFelix Lechner\n\n\n", "msg_date": "Sun, 28 Jun 2020 07:25:10 -0700", "msg_from": "Felix Lechner <felix.lechner@lease-up.com>", "msg_from_op": true, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "Hi Tom,\n\nOn Sat, Jun 27, 2020 at 11:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> The configure\n> script could add -I/usr/include/wolfssl (or wherever those files\n> are) to CPPFLAGS instead of touching all those #includes.\n\nThat does not work well when OpenSSL's development files are\ninstalled. I did not think a segmentation fault was a good way to make\nfriends.\n\n> However, as long as\n> it's available on GPL terms, I don't see a problem with it\n> [wolfSSL] being one alternative.\n\nA minimal patch against -13 is on its way.\n\nKind regards\nFelix Lechner\n\n\n", "msg_date": "Sun, 28 Jun 2020 07:34:43 -0700", "msg_from": "Felix Lechner <felix.lechner@lease-up.com>", "msg_from_op": true, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "Hi Tom,\n\nOn Sat, Jun 27, 2020 at 7:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> However, judging from the caveats mentioned in the initial message,\n> my inclination would be to wait awhile for wolfSSL to mature.\n\nPlease have a closer look. The library has been around since 2004 and\nis popular in embedded systems. (It was previously known as cyaSSL.)\nIf you bought a car or an appliance in the past ten years, you may be\nusing it already.\n\nwolfSSL's original claim to fame was that MySQL relied on it (I think\nOracle changed that). MariaDB still bundles an older, captive version.\nThe software is mature, and widely deployed.\n\nKind regards\nFelix Lechner\n\n\n", "msg_date": "Sun, 28 Jun 2020 09:33:52 -0700", "msg_from": "Felix Lechner <felix.lechner@lease-up.com>", "msg_from_op": true, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "Felix Lechner <felix.lechner@lease-up.com> writes:\n> On Sat, Jun 27, 2020 at 7:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> However, judging from the caveats mentioned in the initial message,\n>> my inclination would be to wait awhile for wolfSSL to mature.\n\n> Please have a closer look. The library has been around since 2004 and\n> is popular in embedded systems. (It was previously known as cyaSSL.)\n\nI don't really care where else it's used. If we have to hack the source\ncode before we can use it, it's not mature for our purposes. Even when\n(if) that requirement reduces to \"you have to use the latest bleeding\nedge release\", it'll be problematic for people whose platforms supply\na less bleeding edge version. I was signifying a desire to wait until\ncompatible versions are reasonably common in the wild before we spend\nmuch time on this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 28 Jun 2020 13:21:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "On Sun, Jun 28, 2020 at 10:18:12AM +0200, Peter Eisentraut wrote:\n> We have added support for allegedly-OpenSSL compatible libraries such as\n> LibreSSL before, so some tweaks for wolfSSL would seem acceptable. However,\n> I doubt we are going to backpatch them, so unless you want to take\n> responsibility for that as a packager, it's not really going to help anyone\n> soon.\n\nThat's a new feature to me.\n\n> And OpenSSL 3.0.0 will have a new license, so for the next PostgreSQL\n> release, this problem might be gone.\n\nAnd there is this part too to consider, but I am no lawyer.\n\n@@ -131,11 +131,11 @@ typedef union {\n #ifdef WOLFSSL_SHA3\n wc_Sha3 sha3;\n #endif\n-} Hash;\n+} WolfSSLHash;\n[...]\n #endif\n #if !defined(XVALIDATE_DATE) && !defined(HAVE_VALIDATE_DATE)\n #define USE_WOLF_VALIDDATE\n- #define XVALIDATE_DATE(d, f, t) ValidateDate((d), (f), (t))\n+ #define XVALIDATE_DATE(d, f, t) WolfSSLValidateDate((d), (f), (t))\n #endif\nLooking at the patches, it seems to me that the part applying only to\nWolfSSL should be done anyway, at least for the Hash part which is a\nrather generic name, and that it may be better to do something as well\non the Postgres part for the same plan node to avoid conflicts, but\nthat's something old enough that it could vote (1054097).\nValidateTime() is present in the Postgres tree since f901bb5, but it\nis always annoying to break stuff that could be used by external\nplugins...\n\nRegarding the Postgres part of the WIP, the hard part is that we need\nmore thinking about the refactoring bits, so as people compiling\nPostgres can choose between OpenSSL or something else. And as Tom\nmentioned upthread there is no need for that:\n-#include <openssl/x509.h>\n-#include <openssl/x509v3.h>\n-#include <openssl/asn1.h>\n+#include <wolfssl/options.h>\n+#include <wolfssl/openssl/x509.h>\n+#include <wolfssl/openssl/x509v3.h>\n+#include <wolfssl/openssl/asn1.h>\n\n./configure should just append the correct path with -I.\n\n- my_bio_methods->bread = my_sock_read;\n- my_bio_methods->bwrite = my_sock_write;\n+ my_bio_methods->readCb = my_sock_read;\n+ my_bio_methods->writeCb = my_sock_write;\nThese parts could also be consolidated between OpenSSL and WolfSSL?\n\n- dh = PEM_read_DHparams(fp, NULL, NULL, NULL);\n FreeFile(fp);\n+ return NULL;\nThis part is not acceptable as-is. As a proof of concept, that's\nfine of course.\n--\nMichael", "msg_date": "Mon, 29 Jun 2020 10:20:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "Greetings,\n\n* Felix Lechner (felix.lechner@lease-up.com) wrote:\n> Attached please find a WIP patch for wolfSSL support in postgresql-12.\n\nWould really be best to have this off of HEAD if we're going to be\nlooking at it rather than v12. We certainly aren't going to add new\nsupport for something new into the back-branches.\n\nFurther, I'd definitely suggest seeing how this plays with the patch to\nadd support for NSS which was posted recently to -hackers by Daniel.\n\nThanks,\n\nStephen", "msg_date": "Mon, 29 Jun 2020 10:30:36 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" }, { "msg_contents": "Greetings,\n\n* Christoph Berg (myon@debian.org) wrote:\n> Re: Peter Eisentraut\n> > What would be the advantage of using wolfSSL over OpenSSL?\n> \n> Avoiding the OpenSSL-vs-GPL linkage problem with readline.\n\nI'd further say \"folks are interested in an alternative to OpenSSL\" as\nbeing a generally good reason to add support for alternatives, such as\nthe patch to add NSS support, which would also help with the GPL linkage\nproblem and add a higher FIPS rating option than OpenSSL.\n\nThanks,\n\nStephen", "msg_date": "Mon, 29 Jun 2020 10:31:59 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Fwd: PostgreSQL: WolfSSL support" } ]
[ { "msg_contents": "Hi,\n\nI found that tab completion for some parts of the copy statement was\nmissing. The Tab completion was missing for the following cases:\n1) COPY [BINARY] <sth> FROM filename -> \"BINARY\", \"DELIMITER\", \"NULL\",\n\"CSV\", \"ENCODING\", \"WITH (\", \"WHERE\" should be shown.\n2) COPY [BINARY] <sth> TO filename -> \"BINARY\", \"DELIMITER\", \"NULL\",\n\"CSV\", \"ENCODING\", \"WITH (\" should be shown.\n3) COPY [BINARY] <sth> FROM filename WITH options -> \"WHERE\" should be shown.\n\nI could not find any test cases for tab completion, hence no tests\nwere added. Attached a patch which has the fix for the same.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 27 Jun 2020 06:52:56 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Added tab completion for the missing options in copy statement" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nTested the tab complete for copy command, it provides the tab completion after providing the \"TO|FROM filename With|Where\". Does this require any doc change?\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Tue, 07 Jul 2020 13:06:38 +0000", "msg_from": "ahsan hadi <ahsan.hadi@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Added tab completion for the missing options in copy statement" }, { "msg_contents": "On Sat, Jun 27, 2020 at 6:52 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Hi,\n>\n> I found that tab completion for some parts of the copy statement was\n> missing. The Tab completion was missing for the following cases:\n> 1) COPY [BINARY] <sth> FROM filename -> \"BINARY\", \"DELIMITER\", \"NULL\",\n> \"CSV\", \"ENCODING\", \"WITH (\", \"WHERE\" should be shown.\n> 2) COPY [BINARY] <sth> TO filename -> \"BINARY\", \"DELIMITER\", \"NULL\",\n> \"CSV\", \"ENCODING\", \"WITH (\" should be shown.\n> 3) COPY [BINARY] <sth> FROM filename WITH options -> \"WHERE\" should be shown.\n>\n> I could not find any test cases for tab completion, hence no tests\n> were added. Attached a patch which has the fix for the same.\n> Thoughts?\n>\n\n>The following review has been posted through the commitfest application:\n>make installcheck-world: tested, passed\n>Implements feature: tested, passed\n>Spec compliant: tested, passed\n>Documentation: not tested\n>Tested the tab complete for copy command, it provides the tab completion after providing the \"TO|FROM filename With|Where\". Does this require any doc change?\n\nThanks for reviewing the patch.\nThis changes is already present in the document, no need to make any\nchanges as shown below:\n\nCOPY table_name [ ( column_name [, ...] ) ]\n    FROM { 'filename' | PROGRAM 'command' | STDIN }\n    [ [ WITH ] ( option [, ...] ) ]\n    [ WHERE condition ]\n\nPlease have a look and let me know if you feel anything needs to be\nadded on top of it.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 10 Jul 2020 09:58:28 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Added tab completion for the missing options in copy statement" }, { "msg_contents": "On Tue, Jul 07, 2020 at 01:06:38PM +0000, ahsan hadi wrote:\n> Tested the tab complete for copy command, it provides the tab\n> completion after providing the \"TO|FROM filename With|Where\". Does\n> this require any doc change?\n\nNo documentation changes are required for that, as long as they match\nthe supported grammar.\n--\nMichael", "msg_date": "Fri, 17 Jul 2020 14:13:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Added tab completion for the missing options in copy statement" }, { "msg_contents": "On Fri, Jul 10, 2020 at 09:58:28AM +0530, vignesh C wrote:\n> Thanks for reviewing the patch.\n> This changes is already present in the document, no need to make any\n> changes as shown below:\n> \n> COPY table_name [ ( column_name [, ...] ) ]\n>     FROM { 'filename' | PROGRAM 'command' | STDIN }\n>     [ [ WITH ] ( option [, ...] ) ]\n>     [ WHERE condition ]\n\nNot completely actually. The page of psql for \\copy does not mention\nthe optional where clause, and I think that it would be better to add\nthat for consistency (perhaps that's the point raised by Ahsan?). I\ndon't see much point in splitting the description of the meta-command\ninto two lines as we already mix stdin and stdout for example which\nonly apply to respectively \"FROM\" and \"TO\", so let's just append the\nconditional where clause at its end. Attached is a patch doing so\nthat I intend to back-patch down to v12.\n\nComing back to your proposal, another thing is that with your patch\nyou recommend a syntax still present for compatibility reasons, but I\ndon't think that we should recommend it to the users anymore, giving\npriority to the new grammar of the post-9.0 era. I would actually go\nas far as removing BINARY from the completion when specified just\nafter COPY to simplify the code, and specify the list of available\noptions after typing \"COPY ... WITH (FORMAT \", with \"text\", \"csv\" and\n\"binary\". Adding completion for WHERE after COPY FROM is of course a\ngood idea.\n--\nMichael", "msg_date": "Fri, 17 Jul 2020 14:45:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Added tab completion for the missing options in copy statement" }, { "msg_contents": "On Fri, Jul 17, 2020 at 11:15 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jul 10, 2020 at 09:58:28AM +0530, vignesh C wrote:\n> > Thanks for reviewing the patch.\n> > This changes is already present in the document, no need to make any\n> > changes as shown below:\n> >\n> > COPY table_name [ ( column_name [, ...] ) ]\n> >     FROM { 'filename' | PROGRAM 'command' | STDIN }\n> >     [ [ WITH ] ( option [, ...] ) ]\n> >     [ WHERE condition ]\n>\n> Not completely actually. The page of psql for \\copy does not mention\n> the optional where clause, and I think that it would be better to add\n> that for consistency (perhaps that's the point raised by Ahsan?). I\n> don't see much point in splitting the description of the meta-command\n> into two lines as we already mix stdin and stdout for example which\n> only apply to respectively \"FROM\" and \"TO\", so let's just append the\n> conditional where clause at its end. Attached is a patch doing so\n> that I intend to back-patch down to v12.\n\nI would like to split into 2 lines similar to documentation of\nsql-copy which gives better readability, attaching a new patch in\nsimilar lines.\n\n> Coming back to your proposal, another thing is that with your patch\n> you recommend a syntax still present for compatibility reasons, but I\n> don't think that we should recommend it to the users anymore, giving\n> priority to the new grammar of the post-9.0 era. I would actually go\n> as far as removing BINARY from the completion when specified just\n> after COPY to simplify the code, and specify the list of available\n> options after typing \"COPY ... WITH (FORMAT \", with \"text\", \"csv\" and\n> \"binary\". Adding completion for WHERE after COPY FROM is of course a\n> good idea.\n\nI agree with your comments, and have made a new patch accordingly.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 17 Jul 2020 17:28:51 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Added tab completion for the missing options in copy statement" }, { "msg_contents": "On Fri, Jul 17, 2020 at 05:28:51PM +0530, vignesh C wrote:\n> On Fri, Jul 17, 2020 at 11:15 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> Not completely actually. The page of psql for \\copy does not mention\n>> the optional where clause, and I think that it would be better to add\n>> that for consistency (perhaps that's the point raised by Ahsan?). I\n>> don't see much point in splitting the description of the meta-command\n>> into two lines as we already mix stdin and stdout for example which\n>> only apply to respectively \"FROM\" and \"TO\", so let's just append the\n>> conditional where clause at its end. Attached is a patch doing so\n>> that I intend to back-patch down to v12.\n> \n> I would like to split into 2 lines similar to documentation of\n> sql-copy which gives better readability, attaching a new patch in\n> similar lines.\n\nFine by me. I have applied and back-patched this part down to 12.\n\n>> Coming back to your proposal, another thing is that with your patch\n>> you recommend a syntax still present for compatibility reasons, but I\n>> don't think that we should recommend it to the users anymore, giving\n>> priority to the new grammar of the post-9.0 era. I would actually go\n>> as far as removing BINARY from the completion when specified just\n>> after COPY to simplify the code, and specify the list of available\n>> options after typing \"COPY ... WITH (FORMAT \", with \"text\", \"csv\" and\n>> \"binary\". Adding completion for WHERE after COPY FROM is of course a\n>> good idea.\n> \n> I agree with your comments, and have made a new patch accordingly.\n> Thoughts?\n\nNope, that's not what I meant. My point was to drop completely from\nthe completion the past grammar we are keeping around for\ncompatibility reasons, and just complete with the new grammar\ndocumented at the top of the COPY page. This leads me to the\nattached, which actually simplifies the code, adds more completion\npatterns with the mixes of WHERE/WITH depending on if FROM or TO is\nused, and at the end is less bug-prone if the grammar gets more\nextended. I have also added some completion for \"WITH (FORMAT\" for\ntext, csv and binary.\n--\nMichael", "msg_date": "Sat, 18 Jul 2020 11:38:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Added tab completion for the missing options in copy statement" }, { "msg_contents": "On Sat, Jul 18, 2020 at 8:08 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jul 17, 2020 at 05:28:51PM +0530, vignesh C wrote:\n> > On Fri, Jul 17, 2020 at 11:15 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >> Not completely actually. The page of psql for \\copy does not mention\n> >> the optional where clause, and I think that it would be better to add\n> >> that for consistency (perhaps that's the point raised by Ahsan?). I\n> >> don't see much point in splitting the description of the meta-command\n> >> into two lines as we already mix stdin and stdout for example which\n> >> only apply to respectively \"FROM\" and \"TO\", so let's just append the\n> >> conditional where clause at its end. Attached is a patch doing so\n> >> that I intend to back-patch down to v12.\n> >\n> > I would like to split into 2 lines similar to documentation of\n> > sql-copy which gives better readability, attaching a new patch in\n> > similar lines.\n>\n> Fine by me. I have applied and back-patched this part down to 12.\n\nThanks for pushing the patch.\n\n>\n> >> Coming back to your proposal, another thing is that with your patch\n> >> you recommend a syntax still present for compatibility reasons, but I\n> >> don't think that we should recommend it to the users anymore, giving\n> >> priority to the new grammar of the post-9.0 era. I would actually go\n> >> as far as removing BINARY from the completion when specified just\n> >> after COPY to simplify the code, and specify the list of available\n> >> options after typing \"COPY ... WITH (FORMAT \", with \"text\", \"csv\" and\n> >> \"binary\". Adding completion for WHERE after COPY FROM is of course a\n> >> good idea.\n> >\n> > I agree with your comments, and have made a new patch accordingly.\n> > Thoughts?\n>\n> Nope, that's not what I meant. My point was to drop completely from\n> the completion the past grammar we are keeping around for\n> compatibility reasons, and just complete with the new grammar\n> documented at the top of the COPY page. This leads me to the\n> attached, which actually simplifies the code, adds more completion\n> patterns with the mixes of WHERE/WITH depending on if FROM or TO is\n> used, and at the end is less bug-prone if the grammar gets more\n> extended. I have also added some completion for \"WITH (FORMAT\" for\n> text, csv and binary.\n\nThis version of patch looks good, patch applies, make check & make\ncheck-world passes.\nThis is not part of the new changes, this change already exists, I had\none small clarification on the below code:\n/* Complete COPY ( with legal query commands */\nelse if (Matches(\"COPY|\\\\copy\", \"(\"))\nCOMPLETE_WITH(\"SELECT\", \"TABLE\", \"VALUES\", \"INSERT\", \"UPDATE\",\n\"DELETE\", \"WITH\");\n\nCan we specify Insert/Update or delete with copy?\nWhen I tried few scenarios I was getting the following error:\nERROR: COPY query must have a RETURNING clause\n\nI might be missing some scenarios, just wanted to confirm if this is\nkept intentionally.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 18 Jul 2020 16:12:02 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Added tab completion for the missing options in copy statement" }, { "msg_contents": "On Sat, Jul 18, 2020 at 04:12:02PM +0530, vignesh C wrote:\n> Can we specify Insert/Update or delete with copy?\n> When I tried few scenarios I was getting the following error:\n> ERROR: COPY query must have a RETURNING clause\n> \n> I might be missing some scenarios, just wanted to confirm if this is\n> kept intentionally.\n\nThis error message says it all, this is supported for a DML that\nincludes a RETURNING clause:\n=# create table aa (a int);\nCREATE TABLE\n=# copy (insert into aa values (generate_series(2,5)) returning a)\n to '/tmp/data.txt';\nCOPY 4\n=# \\! cat /tmp/data.txt\n2\n3\n4\n5\n\nThanks,\n--\nMichael", "msg_date": "Sat, 18 Jul 2020 20:06:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Added tab completion for the missing options in copy statement" }, { "msg_contents": "On Sat, Jul 18, 2020 at 4:36 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Jul 18, 2020 at 04:12:02PM +0530, vignesh C wrote:\n> > Can we specify Insert/Update or delete with copy?\n> > When I tried few scenarios I was getting the following error:\n> > ERROR: COPY query must have a RETURNING clause\n> >\n> > I might be missing some scenarios, just wanted to confirm if this is\n> > kept intentionally.\n>\n> This error message says it all, this is supported for a DML that\n> includes a RETURNING clause:\n> =# create table aa (a int);\n> CREATE TABLE\n> =# copy (insert into aa values (generate_series(2,5)) returning a)\n> to '/tmp/data.txt';\n> COPY 4\n> =# \\! cat /tmp/data.txt\n> 2\n> 3\n> 4\n> 5\n>\n\nThanks Michael for the clarification, the patch looks fine to me.\n\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 18 Jul 2020 16:49:23 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Added tab completion for the missing options in copy statement" }, { "msg_contents": "On Sat, Jul 18, 2020 at 04:49:23PM +0530, vignesh C wrote:\n> Thanks Michael for the clarification, the patch looks fine to me.\n\nThanks for the confirmation, committed.\n--\nMichael", "msg_date": "Tue, 21 Jul 2020 12:21:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Added tab completion for the missing options in copy statement" } ]
[ { "msg_contents": "Hello,\n\nOn \"Debian GNU/Linux 9 (stretch)\", compiling master just now, I get the \nfollowing (interspersed with some output fom my build script):\n\n-- [2020.06.27 19:07:42 HEAD/1] ./configure \n--prefix=/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD \n--bindir=/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD/bin.fast \n--l\nibdir=/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD/lib.fast \n--with-pgport=6514 --quiet --enable-depend --with-openssl --with-perl \n--with-libxml --with-libxslt --with-zlib\n--enable-tap-tests --with-extra-version=_0627_b63d\n\n-- [2020.06.27 19:08:06 HEAD/1] make core: make --quiet -j 4\nbe-secure-openssl.c: In function ‘be_tls_open_server’:\nbe-secure-openssl.c:477:11: error: ‘SSL_R_VERSION_TOO_HIGH’ undeclared \n(first use in this function)\n 477 | case SSL_R_VERSION_TOO_HIGH:\n | ^~~~~~~~~~~~~~~~~~~~~~\nbe-secure-openssl.c:477:11: note: each undeclared identifier is reported \nonly once for each function it appears in\nbe-secure-openssl.c:478:11: error: ‘SSL_R_VERSION_TOO_LOW’ undeclared \n(first use in this function); did you mean ‘SSL_R_MESSAGE_TOO_LONG’?\n 478 | case SSL_R_VERSION_TOO_LOW:\n | ^~~~~~~~~~~~~~~~~~~~~\n | SSL_R_MESSAGE_TOO_LONG\nmake[3]: *** [be-secure-openssl.o] Error 1\nmake[2]: *** [libpq-recursive] Error 2\nmake[2]: *** Waiting for unfinished jobs....\nmake[1]: *** [all-backend-recurse] Error 2\nmake: *** [all-src-recurse] Error 2\n../../../src/Makefile.global:919: recipe for target \n'be-secure-openssl.o' failed\ncommon.mk:39: recipe for target 'libpq-recursive' failed\nMakefile:42: recipe for target 'all-backend-recurse' failed\nGNUmakefile:11: recipe for target 'all-src-recurse' failed\n\n\nTo be honest I have no idea what needs to be fixed...\n\n\nThanks,\n\nErik Rijkers\n\n\n", "msg_date": "Sat, 27 Jun 2020 19:18:50 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": true, "msg_subject": "compile error master SSL_R_VERSION_TOO_HIGH:" }, { "msg_contents": "Erik 
Rijkers <er@xs4all.nl> writes:\n> On \"Debian GNU/Linux 9 (stretch)\", compiling master just now, I get the \n> following (interspersed with some output fom my build script):\n\nYeah, just saw that in the buildfarm. Should be OK now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 27 Jun 2020 13:28:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: compile error master SSL_R_VERSION_TOO_HIGH:" }, { "msg_contents": "On Sat, Jun 27, 2020 at 01:28:15PM -0400, Tom Lane wrote:\n> Erik Rijkers <er@xs4all.nl> writes:\n> > On \"Debian GNU/Linux 9 (stretch)\", compiling master just now, I get the \n> > following (interspersed with some output fom my build script):\n> \n> Yeah, just saw that in the buildfarm. Should be OK now.\n\nI can confirm a successful \"Debian 10/Buster\" compile here with master.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Sat, 27 Jun 2020 13:30:39 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: compile error master SSL_R_VERSION_TOO_HIGH:" }, { "msg_contents": "On 2020-06-27 19:28, Tom Lane wrote:\n> Erik Rijkers <er@xs4all.nl> writes:\n>> On \"Debian GNU/Linux 9 (stretch)\", compiling master just now, I get \n>> the\n>> following (interspersed with some output fom my build script):\n> \n> Yeah, just saw that in the buildfarm. Should be OK now.\n> \n\nIt is. I should've checked the farm before complaining...\n\nThanks!\n\n\n\n\n\n\n\n\n", "msg_date": "Sat, 27 Jun 2020 19:31:41 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": true, "msg_subject": "Re: compile error master SSL_R_VERSION_TOO_HIGH:" } ]
[ { "msg_contents": "Since pg11 pg_read_file() and friends can be used with absolute paths as long as\nthe user is superuser or explicitly granted the role pg_read_server_files.\n\nI noticed that when trying to read a virtual file, e.g.:\n\n SELECT pg_read_file('/proc/self/status');\n\nthe returned result is a zero length string.\n\nHowever this works fine:\n\n SELECT pg_read_file('/proc/self/status', 127, 128);\n\nThe reason for that is pg_read_file_v2() sets bytes_to_read=-1 if no offset and\nlength are supplied as arguments when it is called. It passes bytes_to_read down\nto read_binary_file().\n\nWhen the latter function sees bytes_to_read < 0 it tries to read the entire file\nby getting the file size via stat, which returns 0 for a virtual file size.\n\nThe attached patch fixes this for me. I think it ought to be backpatched through\npg11.\n\nComments?\n\nJoe\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Sat, 27 Jun 2020 15:00:21 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "pg_read_file() with virtual files returns empty string" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> The attached patch fixes this for me. I think it ought to be backpatched through\n> pg11.\n\n> Comments?\n\n1. Doesn't seem to be accounting for the possibility of an error in fread().\n\n2. Don't we want to remove the stat() call altogether, if we're not\ngoing to believe its length?\n\n3. This bit might need to cast the RHS to int64:\n\tif (bytes_to_read > (MaxAllocSize - VARHDRSZ))\notherwise it might be treated as an unsigned comparison.\nOr you could check for bytes_to_read < 0 separately.\n\n4. appendStringInfoString seems like quite the wrong thing to use\nwhen the input is binary data.\n\n5. Don't like the comment. Whether the file is virtual or not isn't\nvery relevant here.\n\n6. 
If the file size exceeds 1GB, I fear we'll get some rather opaque\nfailure from the stringinfo infrastructure. It'd be better to\ncheck for that here and give a file-too-large error.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 27 Jun 2020 15:43:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_read_file() with virtual files returns empty string" }, { "msg_contents": "On 6/27/20 3:43 PM, Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>> The attached patch fixes this for me. I think it ought to be backpatched through\n>> pg11.\n> \n>> Comments?\n> \n> 1. Doesn't seem to be accounting for the possibility of an error in fread().\n> \n> 2. Don't we want to remove the stat() call altogether, if we're not\n> going to believe its length?\n> \n> 3. This bit might need to cast the RHS to int64:\n> \tif (bytes_to_read > (MaxAllocSize - VARHDRSZ))\n> otherwise it might be treated as an unsigned comparison.\n> Or you could check for bytes_to_read < 0 separately.\n> \n> 4. appendStringInfoString seems like quite the wrong thing to use\n> when the input is binary data.\n> \n> 5. Don't like the comment. Whether the file is virtual or not isn't\n> very relevant here.\n> \n> 6. If the file size exceeds 1GB, I fear we'll get some rather opaque\n> failure from the stringinfo infrastructure. It'd be better to\n> check for that here and give a file-too-large error.\n\n\nAll good stuff -- I believe the attached checks all the boxes.\n\nI noted while at this, that the current code can never hit this case:\n\n! \tif (bytes_to_read < 0)\n! \t{\n! \t\tif (seek_offset < 0)\n! \t\t\tbytes_to_read = -seek_offset;\n\nThe intention here seems to be that if you pass bytes_to_read = -1 with a\nnegative offset, it will give you the last offset bytes of the file.\n\nBut all of the SQL exposed paths disallow an explicit negative value for\nbytes_to_read. 
This was also not documented as far as I can tell so I eliminated\nthat case in the attached. Is that actually a case I should fix/support instead?\n\nSeparately, it seems to me that a two argument version of pg_read_file() would\nbe useful:\n\n pg_read_file('filename', offset)\n\nIn that case bytes_to_read = -1 could be passed down in order to read the entire\nfile after the offset. In fact I think that would nicely handle the negative\noffset case as well.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Sun, 28 Jun 2020 13:00:29 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: pg_read_file() with virtual files returns empty string" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> All good stuff -- I believe the attached checks all the boxes.\n\nLooks okay to me, except I think you want\n\n! \tif (bytes_to_read > 0)\n\nto be\n\n! \tif (bytes_to_read >= 0)\n\nAs it stands, a zero request will be treated like -1 (read all the\nrest of the file) while ISTM it ought to be an expensive way to\nread zero bytes --- perhaps useful to check the filename and seek\noffset validity?\n\n> The intention here seems to be that if you pass bytes_to_read = -1 with a\n> negative offset, it will give you the last offset bytes of the file.\n\nI think it's just trying to convert bytes_to_read = -1 into an explicit\npositive length-to-read in all cases. 
We don't need that anymore with\nthis code, so dropping it is fine.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 28 Jun 2020 18:00:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_read_file() with virtual files returns empty string" }, { "msg_contents": "On 6/28/20 6:00 PM, Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>> All good stuff -- I believe the attached checks all the boxes.\n> \n> Looks okay to me, except I think you want\n> \n> ! \tif (bytes_to_read > 0)\n> \n> to be\n> \n> ! \tif (bytes_to_read >= 0)\n\nYep -- thanks.\n\nI did some performance testing of the worst case/largest possible file and found\nthat skipping the stat and bulk read does cause a significant regression.\nCurrent HEAD takes about 400ms on my desktop, and with that version of the patch\nmore like 1100ms.\n\nIn the attached patch I was able to get most of the performance degradation back\n-- ~600ms. Hopefully you don't think what I did was \"too cute by half\" :-). 
Do\nyou think this is good enough or should we go back to using the stat file size\nwhen it is > 0?\n\nAs noted in the comment, the downside of that method is that the largest\nsupported file size is 1 byte smaller when \"reading the entire file\" versus\n\"reading a specified size\" due to StringInfo reserving the last byte for a\ntrailing null.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Tue, 30 Jun 2020 11:52:26 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: pg_read_file() with virtual files returns empty string" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> I did some performance testing of the worst case/largest possible file and found\n> that skipping the stat and bulk read does cause a significant regression.\n\nYeah, I was wondering a little bit if that'd be an issue.\n\n> In the attached patch I was able to get most of the performance degradation back\n> -- ~600ms. Hopefully you don't think what I did was \"too cute by half\" :-). 
Do\n> you think this is good enough or should we go back to using the stat file size\n> when it is > 0?\n\nI don't think it's unreasonable to \"get in bed\" with the innards of the\nStringInfo; plenty of other places do already, such as pqformat.h or\npgp_armor_decode, just to name the first couple that I came across in a\nquick grep.\n\nHowever, if we're going to get in bed with it, let's get all the way in\nand just read directly into the StringInfo's buffer, as per attached.\nThis saves all the extra memcpy'ing and reduces the number of fread calls\nto at most log(N).\n\n(This also fixes a bug in your version, which is that it captured\nthe buf.data pointer before any repalloc that might happen.)\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 01 Jul 2020 16:12:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_read_file() with virtual files returns empty string" }, { "msg_contents": "On 7/1/20 4:12 PM, Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>> I did some performance testing of the worst case/largest possible file and found\n>> that skipping the stat and bulk read does cause a significant regression.\n> \n> Yeah, I was wondering a little bit if that'd be an issue.\n> \n>> In the attached patch I was able to get most of the performance degradation back\n>> -- ~600ms. Hopefully you don't think what I did was \"too cute by half\" :-). 
Do\n>> you think this is good enough or should we go back to using the stat file size\n>> when it is > 0?\n> \n> I don't think it's unreasonable to \"get in bed\" with the innards of the\n> StringInfo; plenty of other places do already, such as pqformat.h or\n> pgp_armor_decode, just to name the first couple that I came across in a\n> quick grep.\n> \n> However, if we're going to get in bed with it, let's get all the way in\n> and just read directly into the StringInfo's buffer, as per attached.\n> This saves all the extra memcpy'ing and reduces the number of fread calls\n> to at most log(N).\n\n\nWorks for me. I'll retest to see how well it does performance-wise and report back.\n\n> (This also fixes a bug in your version, which is that it captured\n> the buf.data pointer before any repalloc that might happen.)\n\nYeah, I saw that after sending this.\n\nThanks,\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Wed, 1 Jul 2020 17:17:16 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: pg_read_file() with virtual files returns empty string" }, { "msg_contents": "On 7/1/20 5:17 PM, Joe Conway wrote:\n> On 7/1/20 4:12 PM, Tom Lane wrote:\n>> Joe Conway <mail@joeconway.com> writes:\n>>> I did some performance testing of the worst case/largest possible file and found\n>>> that skipping the stat and bulk read does cause a significant regression.\n>> \n>> Yeah, I was wondering a little bit if that'd be an issue.\n>> \n>>> In the attached patch I was able to get most of the performance degradation back\n>>> -- ~600ms. Hopefully you don't think what I did was \"too cute by half\" :-). 
Do\n>>> you think this is good enough or should we go back to using the stat file size\n>>> when it is > 0?\n>> \n>> I don't think it's unreasonable to \"get in bed\" with the innards of the\n>> StringInfo; plenty of other places do already, such as pqformat.h or\n>> pgp_armor_decode, just to name the first couple that I came across in a\n>> quick grep.\n>> \n>> However, if we're going to get in bed with it, let's get all the way in\n>> and just read directly into the StringInfo's buffer, as per attached.\n>> This saves all the extra memcpy'ing and reduces the number of fread calls\n>> to at most log(N).\n> \n> Works for me. I'll retest to see how well it does performance-wise and report back.\n\nA quick test shows that this gets performance back on par with HEAD.\n\nThe only downside is that the max filesize is reduced to (MaxAllocSize -\nMIN_READ_SIZE - 1) compared to MaxAllocSize with the old method.\n\nBut anyone pushing that size limit is going to run into other issues anyway. I.e\n(on pg11):\n\n8<---------------\nselect length(pg_read_binary_file('/tmp/rbftest4.bin'));\n length\n\n------------\n 1073737726\n(1 row)\n\nselect pg_read_binary_file('/tmp/rbftest4.bin');\nERROR: invalid memory alloc request size 2147475455\n8<---------------\n\nSo probably not worth worrying about.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Wed, 1 Jul 2020 17:43:37 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: pg_read_file() with virtual files returns empty string" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> The only downside is that the max filesize is reduced to (MaxAllocSize -\n> MIN_READ_SIZE - 1) compared to MaxAllocSize with the old method.\n\nHm, I was expecting that the last successful iteration of\nenlargeStringInfo would increase the buffer size to MaxAllocSize,\nso that we'd really 
only be losing one byte (which we can't avoid\nif we use stringinfo). But you're right that it's most likely moot\nsince later manipulations of such a result would risk hitting overflows.\n\nI marked the CF entry as RFC.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Jul 2020 18:22:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_read_file() with virtual files returns empty string" }, { "msg_contents": "On 7/1/20 6:22 PM, Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>> The only downside is that the max filesize is reduced to (MaxAllocSize -\n>> MIN_READ_SIZE - 1) compared to MaxAllocSize with the old method.\n> \n> Hm, I was expecting that the last successful iteration of\n> enlargeStringInfo would increase the buffer size to MaxAllocSize,\n> so that we'd really only be losing one byte (which we can't avoid\n> if we use stringinfo). But you're right that it's most likely moot\n> since later manipulations of such a result would risk hitting overflows.\n> \n> I marked the CF entry as RFC.\n\nSorry to open this can of worms again, but I couldn't get my head past the fact\nthat reading the entire file would have a different size limit than reading the\nexact number of bytes in the file.\n\nSo, inspired by what you did (and StringInfo itself) I came up with the\nattached. This version performs equivalently to your patch (and HEAD), and\nallows files up to and including (MaxAllocSize - VARHDRSZ) -- i.e. 
exactly the\nsame as the specified-length case and legacy behavior for the full file read.\n\nBut if you object I will just go with your version barring any other opinions.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Thu, 2 Jul 2020 14:05:54 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: pg_read_file() with virtual files returns empty string" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> On 7/1/20 6:22 PM, Tom Lane wrote:\n>> Hm, I was expecting that the last successful iteration of\n>> enlargeStringInfo would increase the buffer size to MaxAllocSize,\n>> so that we'd really only be losing one byte (which we can't avoid\n>> if we use stringinfo). But you're right that it's most likely moot\n>> since later manipulations of such a result would risk hitting overflows.\n\n> Sorry to open this can of worms again, but I couldn't get my head past the fact\n> that reading the entire file would have a different size limit than reading the\n> exact number of bytes in the file.\n\nAre you sure there actually is any such limit in the other code,\nafter accounting for the way that stringinfo.c will enlarge its\nbuffer? That is, I believe that the limit is MaxAllocSize minus\nfive bytes, not something less.\n\n> So, inspired by what you did (and StringInfo itself) I came up with the\n> attached. This version performs equivalently to your patch (and HEAD), and\n> allows files up to and including (MaxAllocSize - VARHDRSZ) -- i.e. exactly the\n> same as the specified-length case and legacy behavior for the full file read.\n\nI find this way overcomplicated for what it accomplishes. 
In the\nreal world there's not much difference between MaxAllocSize minus\nfive and MaxAllocSize minus four.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Jul 2020 15:36:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_read_file() with virtual files returns empty string" }, { "msg_contents": "On 7/2/20 3:36 PM, Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>> On 7/1/20 6:22 PM, Tom Lane wrote:\n>>> Hm, I was expecting that the last successful iteration of\n>>> enlargeStringInfo would increase the buffer size to MaxAllocSize,\n>>> so that we'd really only be losing one byte (which we can't avoid\n>>> if we use stringinfo). But you're right that it's most likely moot\n>>> since later manipulations of such a result would risk hitting overflows.\n> \n>> Sorry to open this can of worms again, but I couldn't get my head past the fact\n>> that reading the entire file would have a different size limit than reading the\n>> exact number of bytes in the file.\n> \n> Are you sure there actually is any such limit in the other code,\n> after accounting for the way that stringinfo.c will enlarge its\n> buffer? That is, I believe that the limit is MaxAllocSize minus\n> five bytes, not something less.\n> \n>> So, inspired by what you did (and StringInfo itself) I came up with the\n>> attached. This version performs equivalently to your patch (and HEAD), and\n>> allows files up to and including (MaxAllocSize - VARHDRSZ) -- i.e. exactly the\n>> same as the specified-length case and legacy behavior for the full file read.\n> \n> I find this way overcomplicated for what it accomplishes. 
In the\n> real world there's not much difference between MaxAllocSize minus\n> five and MaxAllocSize minus four.\n\nOk, so your version was not as bad as I thought.:\n\nll /tmp/rbftest*.bin\n-rw-r--r-- 1 postgres postgres 1073741819 Jul 2 15:48 /tmp/rbftest1.bin\n-rw-r--r-- 1 postgres postgres 1073741818 Jul 2 15:47 /tmp/rbftest2.bin\n-rw-r--r-- 1 postgres postgres 1073741817 Jul 2 15:53 /tmp/rbftest3.bin\n\nrbftest1.bin == MaxAllocSize - 4\nrbftest2.bin == MaxAllocSize - 5\nrbftest3.bin == MaxAllocSize - 6\n\npostgres=# select length(pg_read_binary_file('/tmp/rbftest1.bin'));\nERROR: requested length too large\npostgres=# select length(pg_read_binary_file('/tmp/rbftest2.bin'));\nERROR: requested length too large\npostgres=# select length(pg_read_binary_file('/tmp/rbftest3.bin'));\n length\n------------\n 1073741817\n\nWhen I saw originally MaxAllocSize - 5 fail I skipped to something smaller by\n4096 and it worked. But here I see that the actual max size is MaxAllocSize - 6.\nI guess I can live with that.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Thu, 2 Jul 2020 16:05:23 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: pg_read_file() with virtual files returns empty string" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> When I saw originally MaxAllocSize - 5 fail I skipped to something smaller by\n> 4096 and it worked. But here I see that the actual max size is MaxAllocSize - 6.\n\nHuh, I wonder why it's not max - 5. 
Probably not worth worrying about,\nthough.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Jul 2020 16:27:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_read_file() with virtual files returns empty string" }, { "msg_contents": "On 7/2/20 4:27 PM, Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>> When I saw originally MaxAllocSize - 5 fail I skipped to something smaller by\n>> 4096 and it worked. But here I see that the actual max size is MaxAllocSize - 6.\n> \n> Huh, I wonder why it's not max - 5. Probably not worth worrying about,\n> though.\n\nWell this part:\n\n+\trbytes = fread(sbuf.data + sbuf.len, 1,\n+\t (size_t) (sbuf.maxlen - sbuf.len - 1), file);\n\ncould actually be:\n\n+\trbytes = fread(sbuf.data + sbuf.len, 1,\n+\t (size_t) (sbuf.maxlen - sbuf.len), file);\n\nbecause there is no actual need to reserve a byte for the trailing null, since\nwe are not using appendBinaryStringInfo() anymore, and that is where the\ntrailing NULL gets written.\n\nWith that change (and some elog(NOTICE,...) calls) we have:\n\nselect length(pg_read_binary_file('/tmp/rbftest2.bin'));\nNOTICE: loop start - buf max len: 1024; buf len 4\nNOTICE: loop end - buf max len: 8192; buf len 8192\nNOTICE: loop start - buf max len: 8192; buf len 8192\nNOTICE: loop end - buf max len: 16384; buf len 16384\nNOTICE: loop start - buf max len: 16384; buf len 16384\n[...]\nNOTICE: loop end - buf max len: 536870912; buf len 536870912\nNOTICE: loop start - buf max len: 536870912; buf len 536870912\nNOTICE: loop end - buf max len: 1073741823; buf len 1073741822\n length\n------------\n 1073741818\n(1 row)\n\nOr max - 5, so we got our byte back :-)\n\nIn fact, in principle there is no reason we can't get to max - 4 with this code\nexcept that when the filesize is exactly 1073741819, we need to try to read one\nmore byte to find the EOF that way I did in my patch. 
I.e.:\n\n-- use 1073741819 byte file\nselect length(pg_read_binary_file('/tmp/rbftest1.bin'));\nNOTICE: loop start - buf max len: 1024; buf len 4\nNOTICE: loop end - buf max len: 8192; buf len 8192\nNOTICE: loop start - buf max len: 8192; buf len 8192\nNOTICE: loop end - buf max len: 16384; buf len 16384\nNOTICE: loop start - buf max len: 16384; buf len 16384\n[...]\nNOTICE: loop end - buf max len: 536870912; buf len 536870912\nNOTICE: loop start - buf max len: 536870912; buf len 536870912\nNOTICE: loop end - buf max len: 1073741823; buf len 1073741823\nNOTICE: loop start - buf max len: 1073741823; buf len 1073741823\nERROR: requested length too large\n\nBecause we read the last byte, but not beyond, EOF is not reached, so on the\nnext loop iteration we continue and fail on max size rather than exit the loop.\n\nBut I am guessing that test in particular was what you thought too complicated\nfor what it accomplishes?\n\nJoe\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Thu, 2 Jul 2020 17:30:49 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: pg_read_file() with virtual files returns empty string" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> On 7/2/20 4:27 PM, Tom Lane wrote:\n>> Huh, I wonder why it's not max - 5. 
Probably not worth worrying about,\n>> though.\n\n> Well this part:\n\n> +\trbytes = fread(sbuf.data + sbuf.len, 1,\n> +\t (size_t) (sbuf.maxlen - sbuf.len - 1), file);\n> could actually be:\n> +\trbytes = fread(sbuf.data + sbuf.len, 1,\n> +\t (size_t) (sbuf.maxlen - sbuf.len), file);\n> because there is no actual need to reserve a byte for the trailing null, since\n> we are not using appendBinaryStringInfo() anymore, and that is where the\n> trailing NULL gets written.\n\nNo, I'd put a big -1 on that, because so far as stringinfo.c is concerned\nyou're violating the invariant that len must be less than maxlen. The fact\nthat you happen to not hit any assertions right at the moment does not\nmake this code okay.\n\n> In fact, in principle there is no reason we can't get to max - 4 with this code\n> except that when the filesize is exactly 1073741819, we need to try to read one\n> more byte to find the EOF that way I did in my patch. I.e.:\n\nAh, right, *that* is where the extra byte is lost: we need a buffer\nworkspace one byte more than the file size, or we won't ever actually\nsee the EOF indication.\n\nI still can't get excited about contorting the code to remove that\nissue.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Jul 2020 17:37:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_read_file() with virtual files returns empty string" }, { "msg_contents": "On 7/2/20 5:37 PM, Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>> In fact, in principle there is no reason we can't get to max - 4 with this code\n>> except that when the filesize is exactly 1073741819, we need to try to read one\n>> more byte to find the EOF that way I did in my patch. 
I.e.:\n> \n> Ah, right, *that* is where the extra byte is lost: we need a buffer\n> workspace one byte more than the file size, or we won't ever actually\n> see the EOF indication.\n> \n> I still can't get excited about contorting the code to remove that\n> issue.\n\nIt doesn't seem much worse than the oom test that was there before -- see attached.\n\nIn any case I will give you the last word and then quit bugging you about it ;-)\n\nAre we in agreement that whatever gets pushed should be backpatched through pg11\n(see start of thread)?\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Thu, 2 Jul 2020 18:24:44 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: pg_read_file() with virtual files returns empty string" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> On 7/2/20 5:37 PM, Tom Lane wrote:\n>> I still can't get excited about contorting the code to remove that\n>> issue.\n\n> It doesn't seem much worse than the oom test that was there before -- see attached.\n\nPersonally I would not bother, but it's your patch.\n\n> Are we in agreement that whatever gets pushed should be backpatched through pg11\n> (see start of thread)?\n\nOK by me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Jul 2020 18:29:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_read_file() with virtual files returns empty string" }, { "msg_contents": "On 7/2/20 6:29 PM, Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>> On 7/2/20 5:37 PM, Tom Lane wrote:\n>>> I still can't get excited about contorting the code to remove that\n>>> issue.\n> \n>> It doesn't seem much worse than the oom test that was there before -- see attached.\n> \n> Personally I would not bother, but it's your patch.\n\nThanks, committed that way, ...\n\n>> Are we in agreement that 
whatever gets pushed should be backpatched through pg11\n>> (see start of thread)?\n> \n> OK by me.\n\n... and backpatched to v11.\n\nI changed the new error message to \"file length too large\" instead of \"requested\nlength too large\" since that seems more descriptive of what is actually\nhappening there. I also changed the corresponding error code to match the one\nenlargeStringInfo() would have used because I thought it was more apropos.\n\nThanks for all the help with this!\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Sat, 4 Jul 2020 09:53:28 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: pg_read_file() with virtual files returns empty string" }, { "msg_contents": "Hi Joe\n\nThanks for addressing this.\n\nBut I noticed that cfbot is now populating with failures like:\n\nhttps://travis-ci.org/github/postgresql-cfbot/postgresql/builds/704898559\ngenfile.c: In function ‘read_binary_file’:\ngenfile.c:192:5: error: ignoring return value of ‘fread’, declared with attribute warn_unused_result [-Werror=unused-result]\n fread(rbuf, 1, 1, file);\n ^\ncc1: all warnings being treated as errors\n<builtin>: recipe for target 'genfile.o' failed\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 4 Jul 2020 11:39:10 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pg_read_file() with virtual files returns empty string" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> But I noticed that cfbot is now populating with failures like:\n\n> genfile.c: In function ‘read_binary_file’:\n> genfile.c:192:5: error: ignoring return value of ‘fread’, declared with attribute warn_unused_result [-Werror=unused-result]\n> fread(rbuf, 1, 1, file);\n> ^\n\nYeah, some of the pickier buildfarm members (eg spurfowl) are showing\nthat as a warning, too. Maybe make it like\n\n if (fread(rbuf, 1, 1, file) != 0 || !feof(file))\n ereport(ERROR,\n\nProbably the feof test is redundant this way, but I'd be inclined to\nleave it in anyhow.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 04 Jul 2020 12:52:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_read_file() with virtual files returns empty string" }, { "msg_contents": "On 7/4/20 12:52 PM, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n>> But I noticed that cfbot is now populating with failures like:\n> \n>> genfile.c: In function ‘read_binary_file’:\n>> genfile.c:192:5: error: ignoring return value of ‘fread’, declared with attribute warn_unused_result [-Werror=unused-result]\n>> fread(rbuf, 1, 1, file);\n>> ^\n> \n> Yeah, some of the pickier buildfarm members (eg spurfowl) are showing\n> that as a warning, too. Maybe make it like\n> \n> if (fread(rbuf, 1, 1, file) != 0 || !feof(file))\n> ereport(ERROR,\n> \n> Probably the feof test is redundant this way, but I'd be inclined to\n> leave it in anyhow.\n\nOk, will fix.
Thanks for the heads up.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Sat, 4 Jul 2020 13:10:25 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: pg_read_file() with virtual files returns empty string" }, { "msg_contents": "On 7/4/20 1:10 PM, Joe Conway wrote:\n> On 7/4/20 12:52 PM, Tom Lane wrote:\n>> Justin Pryzby <pryzby@telsasoft.com> writes:\n>>> But I noticed that cfbot is now populating with failures like:\n>> \n>>> genfile.c: In function ‘read_binary_file’:\n>>> genfile.c:192:5: error: ignoring return value of ‘fread’, declared with attribute warn_unused_result [-Werror=unused-result]\n>>> fread(rbuf, 1, 1, file);\n>>> ^\n>> \n>> Yeah, some of the pickier buildfarm members (eg spurfowl) are showing\n>> that as a warning, too. Maybe make it like\n>> \n>> if (fread(rbuf, 1, 1, file) != 0 || !feof(file))\n>> ereport(ERROR,\n>> \n>> Probably the feof test is redundant this way, but I'd be inclined to\n>> leave it in anyhow.\n> \n> Ok, will fix. Thanks for the heads up.\n\nAnd pushed -- thanks!\n\nJoe\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Sat, 4 Jul 2020 13:50:03 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: pg_read_file() with virtual files returns empty string" } ]
[ { "msg_contents": "In a few days, the first commitfest of the 14 cycle - 2020-07 - will start.\nUnless anyone has already spoken up that I've missed, I'm happy to volunteer to\nrun CFM for this one.\n\ncheers ./daniel\n\n", "msg_date": "Sun, 28 Jun 2020 13:10:48 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Commitfest 2020-07" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> In a few days, the first commitfest of the 14 cycle - 2020-07 - will start.\n> Unless anyone has already spoken up that I've missed, I'm happy to volunteer to\n> run CFM for this one.\n\nNo one has volunteered that I recall, so the baton is yours.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 28 Jun 2020 10:49:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2020-07" }, { "msg_contents": "On Sun, Jun 28, 2020 at 4:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> > In a few days, the first commitfest of the 14 cycle - 2020-07 - will\n> start.\n> > Unless anyone has already spoken up that I've missed, I'm happy to\n> volunteer to\n> > run CFM for this one.\n>\n> No one has volunteered that I recall, so the baton is yours.\n>\n>\nEnjoy!\n\n//Magnus\n\n", "msg_date": "Sun, 28 Jun 2020 20:50:20 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2020-07" }, { "msg_contents": "On Mon, Jun 29, 2020 at 2:50 AM Magnus Hagander <magnus@hagander.net> wrote:\n\n> On Sun, Jun 28, 2020 at 4:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Daniel Gustafsson <daniel@yesql.se> writes:\n>> > In a few days, the first commitfest of the 14 cycle - 2020-07 - will\n>> start.\n>> > Unless anyone has already spoken up that I've missed, I'm happy to\n>> volunteer to\n>> > run CFM for this one.\n>>\n>> No one has volunteered that I recall, so the baton is yours.\n>>\n>>\n> Enjoy!\n>\n> //Magnus\n>\n>\n\nThanks for the volunteering!\n\n-- \nBest Regards\nAndy Fan\n\n", "msg_date": "Mon, 29 Jun 2020 08:04:35 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2020-07" }, { "msg_contents": "On Mon, Jun 29, 2020 at 08:04:35AM +0800, Andy Fan wrote:\n> Thanks for the volunteering!\n\n+1.\n--\nMichael", "msg_date": "Mon, 29 Jun 2020 09:32:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2020-07" } ]
[ { "msg_contents": "As I mentioned in [1], checking (struct Port)->ssl for NULL to determine\nwhether TLS is used for connection is a bit of a leaky abstraction, as that's\nan OpenSSL specific struct member. This sets the requirement that all TLS\nimplementations use a pointer named SSL, and that the pointer is set to NULL in\ncase of a failed connection, which may or may not fit.\n\nIs there a reason to not use (struct Port)->ssl_in_use flag which tracks just\nwhat we're looking for here? This also maps against other parts of the\nabstraction in be-secure.c which do just that. The attached implements this.\n\ncheers ./daniel\n\n[1] FAB21FC8-0F62-434F-AA78-6BD9336D630A@yesql.se", "msg_date": "Sun, 28 Jun 2020 13:39:38 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "TLS checking in pgstat" }, { "msg_contents": "On Sun, Jun 28, 2020 at 1:39 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> As I mentioned in [1], checking (struct Port)->ssl for NULL to determine\n> whether TLS is used for connection is a bit of a leaky abstraction, as\n> that's\n> an OpenSSL specific struct member. This sets the requirement that all TLS\n> implementations use a pointer named SSL, and that the pointer is set to\n> NULL in\n> case of a failed connection, which may or may not fit.\n>\n> Is there a reason to not use (struct Port)->ssl_in_use flag which tracks\n> just\n> what we're looking for here? This also maps against other parts of the\n> abstraction in be-secure.c which do just that. The attached implements\n> this.\n>\n\nYeah, this seems perfectly reasonable.\n\nI would argue this is a bug, but given how internal it is I don't think it\nhas any user visible effects yet (since we don't have more than one\nprovider), and thus isn't worthy of a backpatch.\n\nPushed.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\n", "msg_date": "Tue, 7 Jul 2020 17:01:30 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: TLS checking in pgstat" } ]
[ { "msg_contents": "Hi,\n\nOne of the issues I'm fairly regularly reminded of by users/customers is\nthat inserting into tables sharded using FDWs is rather slow. We do\neven get it reported on pgsql-bugs from time to time [1].\n\nSome of the slowness / overhead is expected, due to the latency between\nmachines in the sharded setup. Even just 1ms latency will make it way\nmore expensive than a single instance.\n\nBut let's do a simple experiment, comparing a hash-partitioned table with regular\npartitions, and one with FDW partitions in the same instance. Scripts to\nrun this are attached. The duration of inserting 1M rows to this table\n(average of 10 runs on my laptop) looks like this:\n\n regular: 2872 ms\n FDW: 64454 ms\n\nYep, it's ~20x slower. On setup with ping latency well below 0.05ms.\nImagine how it would look on sharded setups with 0.1ms or 1ms latency,\nwhich is probably where most single-DC clusters are :-(\n\nNow, the primary reason why the performance degrades like this is that\nwhile FDW has batching for SELECT queries (i.e. we read larger chunks of\ndata from the cursors), we don't have that for INSERTs (or other DML).\nEvery time you insert a row, it has to go all the way down into the\npartition synchronously.\n\nFor some use cases this may be reduced by having many independent\nconnections from different users, so the per-user latency is higher but\nacceptable. But if you need to import larger amounts of data (say, a CSV\nfile for analytics, ...) this may not work.\n\nSome time ago I wrote an ugly PoC adding batching, just to see how far\nit would get us, and it seems quite promising - results for the same\nINSERT benchmarks look like this:\n\n FDW batching: 4584 ms\n\nSo, rather nice improvement, I'd say ...\n\nBefore I spend more time hacking on this, I have a couple open questions\nabout the design, restrictions etc.\n\n\n1) Extend the FDW API?\n\nIn the patch, the batching is simply \"injected\" into the existing insert\nAPI method, i.e. ExecForeignInsert et al. I wonder if it'd be better to\nextend the API with a \"batched\" version of the method, so that we can\neasily determine whether the FDW supports batching or not - it would\nrequire changes in the callers, though. OTOH it might be useful for\nCOPY, where we could do something similar to multi_insert (COPY already\nbenefits from this patch, but it does not use the batching built into\nCOPY).\n\n\n2) What about the insert results?\n\nI'm not sure what to do about \"result\" status for the inserted rows. We\nonly really \"stash\" the rows into a buffer, so we don't know if it will\nsucceed or not. The patch simply assumes it will succeed, but that's\nclearly wrong, and it may result in reporting a wrong number of rows.\n\nThe patch also disables the batching when the insert has a RETURNING\nclause, because there's just a single slot (for the currently inserted\nrow). I suppose a \"batching\" method would take an array of slots.\n\n\n3) What about the other DML operations (DELETE/UPDATE)?\n\nThe other DML operations could probably benefit from the batching too.\nINSERT was good enough for a PoC, but having batching only for INSERT\nseems somewhat asymmetric. DELETE/UPDATE seem more complicated because\nof quals, but likely doable.\n\n\n3) Should we do batching for COPY instead?\n\nWhile looking at multi_insert, I've realized it's mostly exactly what\nthe new \"batching insert\" API function would need to be. But it's only\nreally used in COPY, so I wonder if we should just abandon the idea of\nbatching INSERTs and do batching COPY for FDW tables.\n\nFor cases that can replace INSERT with COPY this would be enough, but\nunfortunately it does nothing for DELETE/UPDATE so I'm hesitant to do\nthis :-(\n\n\n4) Expected consistency?\n\nI'm not entirely sure what are the consistency expectations for FDWs.\nCurrently the FDW nodes pointing to the same server share a connection,\nso the inserted rows might be visible to other nodes.
But if we only\nstash the rows in a local buffer for a while, that's no longer true. So\nmaybe this breaks the consistency expectations?\n\nBut maybe that's OK - I'm not sure how the prepared statements/cursors\naffect this. I can imagine restricting the batching only to plans where\nthis is not an issue (single FDW node or something), but it seems rather\nfragile and undesirable.\n\nI was thinking about adding a GUC to enable/disable the batching at some\nlevel (global, server, table, ...) but it seems like a bad match because\nthe consistency expectations likely depend on a query. There should be a\nGUC to set the batch size, though (it's hardcoded to 100 for now).\n\n\nregards\n\n\n\n[1] https://www.postgresql.org/message-id/CACnz%2BQ1q0%2B2KoJam9LyNMk8JmdC6qYHXWB895Wu2xcpoip18xQ%40mail.gmail.com\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 28 Jun 2020 17:10:02 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "POC: postgres_fdw insert batching" }, { "msg_contents": "Hi Tomas,\n\nOn Mon, Jun 29, 2020 at 12:10 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> Hi,\n>\n> One of the issues I'm fairly regularly reminded by users/customers is\n> that inserting into tables sharded using FDWs are rather slow. We do\n> even get it reported on pgsql-bugs from time to time [1].\n>\n> Some of the slowness / overhead is expected, doe to the latency between\n> machines in the sharded setup. Even just 1ms latency will make it way\n> more expensive than a single instance.\n>\n> But let's do a simple experiment, comparing a hash-partitioned regular\n> partitions, and one with FDW partitions in the same instance. Scripts to\n> run this are attached. The duration of inserting 1M rows to this table\n> (average of 10 runs on my laptop) looks like this:\n>\n> regular: 2872 ms\n> FDW: 64454 ms\n>\n> Yep, it's ~20x slower. 
On setup with ping latency well below 0.05ms.\n> Imagine how would it look on sharded setups with 0.1ms or 1ms latency,\n> which is probably where most single-DC clusters are :-(\n>\n> Now, the primary reason why the performance degrades like this is that\n> while FDW has batching for SELECT queries (i.e. we read larger chunks of\n> data from the cursors), we don't have that for INSERTs (or other DML).\n> Every time you insert a row, it has to go all the way down into the\n> partition synchronously.\n>\n> For some use cases this may be reduced by having many independent\n> connnections from different users, so the per-user latency is higher but\n> acceptable. But if you need to import larger amounts of data (say, a CSV\n> file for analytics, ...) this may not work.\n>\n> Some time ago I wrote an ugly PoC adding batching, just to see how far\n> would it get us, and it seems quite promising - results for he same\n> INSERT benchmarks look like this:\n>\n> FDW batching: 4584 ms\n>\n> So, rather nice improvement, I'd say ...\n\nVery nice indeed.\n\n> Before I spend more time hacking on this, I have a couple open questions\n> about the design, restrictions etc.\n\nI think you may want to take a look this recent proposal by Andrey Lepikhov:\n\n* [POC] Fast COPY FROM command for the table with foreign partitions *\nhttps://www.postgresql.org/message-id/flat/3d0909dc-3691-a576-208a-90986e55489f%40postgrespro.ru\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Jun 2020 14:00:28 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On Sun, Jun 28, 2020 at 8:40 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n\n>\n> FDW batching: 4584 ms\n>\n> So, rather nice improvement, I'd say ...\n\nVery nice.\n\n>\n> Before I spend more time hacking on this, I have a couple open questions\n> about the design, restrictions 
etc.\n>\n>\n> 1) Extend the FDW API?\n>\n> In the patch, the batching is simply \"injected\" into the existing insert\n> API method, i.e. ExecForeignInsert et al. I wonder if it'd be better to\n> extend the API with a \"batched\" version of the method, so that we can\n> easily determine whether the FDW supports batching or not - it would\n> require changes in the callers, though. OTOH it might be useful for\n> COPY, where we could do something similar to multi_insert (COPY already\n> benefits from this patch, but it does not use the batching built-into\n> COPY).\n\nAmit Langote has pointed out a related patch being discussed on hackers at [1].\n\nThat patch introduces a new API. But if we can do it without\nintroducing a new API that will be good. FDWs which can support\nbatching can just modify their code and don't have to implement and\nmanage a new API. We already have a handful of those APIs.\n\n>\n> 2) What about the insert results?\n>\n> I'm not sure what to do about \"result\" status for the inserted rows. We\n> only really \"stash\" the rows into a buffer, so we don't know if it will\n> succeed or not. The patch simply assumes it will succeed, but that's\n> clearly wrong, and it may result in reporting a wrong number or rows.\n\nI didn't get this. We are executing an INSERT on the foreign server,\nso we get the number of rows INSERTed from that server. We should just\nadd those up across batches. If there's a failure, it would abort the\ntransaction, local as well as remote.\n\n>\n> The patch also disables the batching when the insert has a RETURNING\n> clause, because there's just a single slot (for the currently inserted\n> row). I suppose a \"batching\" method would take an array of slots.\n>\n\nIt will be a rare case when a bulk load also has a RETURNING clause.\nSo, we can leave with this restriction. We should try to choose a\ndesign which allows that restriction to be lifted in the future. 
But I\ndoubt that restriction will be a serious one.\n\n>\n> 3) What about the other DML operations (DELETE/UPDATE)?\n>\n> The other DML operations could probably benefit from the batching too.\n> INSERT was good enough for a PoC, but having batching only for INSERT\n> seems somewhat asmymetric. DELETE/UPDATE seem more complicated because\n> of quals, but likely doable.\n\nBulk INSERTs are more common in a sharded environment because of data\nload in say OLAP systems. Bulk update/delete are rare, although not\nthat rare. So if an approach just supports bulk insert and not bulk\nUPDATE/DELETE that will address a large number of usecases IMO. But if\nwe can make everything work together that would be good as well.\n\nIn your patch, I see that an INSERT statement with batch is\nconstructed as INSERT INTO ... VALUES (...), (...) as many values as\nthe batch size. That won't work as is for UPDATE/DELETE since we can't\npass multiple pairs of ctids and columns to be updated for each ctid\nin one statement. Maybe we could build as many UPDATE/DELETE\nstatements as the size of a batch, but that would be ugly. What we\nneed is a feature like a batch prepared statement in libpq similar to\nwhat JDBC supports\n((https://mkyong.com/jdbc/jdbc-preparedstatement-example-batch-update/).\nThis will allow a single prepared statement to be executed with a\nbatch of parameters, each batch corresponding to one foreign DML\nstatement.\n\n>\n>\n> 3) Should we do batching for COPY insteads?\n>\n> While looking at multi_insert, I've realized it's mostly exactly what\n> the new \"batching insert\" API function would need to be. But it's only\n> really used in COPY, so I wonder if we should just abandon the idea of\n> batching INSERTs and do batching COPY for FDW tables.\n\nI think this won't support RETURNING as well. But if we could somehow\nuse copy protocol to send the data to the foreign server and yet treat\nit as INSERT, that might work. 
I think we have to find out which performs\nbetter, COPY or batch INSERT.\n\n>\n> For cases that can replace INSERT with COPY this would be enough, but\n> unfortunately it does nothing for DELETE/UPDATE so I'm hesitant to do\n> this :-(\n\nAgreed, if we want to support bulk UPDATE/DELETE as well.\n\n>\n>\n> 4) Expected consistency?\n>\n> I'm not entirely sure what are the consistency expectations for FDWs.\n> Currently the FDW nodes pointing to the same server share a connection,\n> so the inserted rows might be visible to other nodes. But if we only\n> stash the rows in a local buffer for a while, that's no longer true. So\n> maybe this breaks the consistency expectations?\n>\n> But maybe that's OK - I'm not sure how the prepared statements/cursors\n> affect this. I can imagine restricting the batching only to plans where\n> this is not an issue (single FDW node or something), but it seems rather\n> fragile and undesirable.\n\nI think that area is grey. Depending upon where the cursor is\npositioned when a DML node executes a query, the data fetched from\ncursor may or may not see the effect of DML. The cursor position is\nbased on the batch size so we already have problems in this area I\nthink. Assuming that the DML and SELECT are independent this will\nwork. So, the consistency problems exist, it will just be modulated\nby batching DML. I doubt that's related to this feature exclusively\nand should be solved independent of this feature.\n\n>\n> I was thinking about adding a GUC to enable/disable the batching at some\n> level (global, server, table, ...) but it seems like a bad match because\n> the consistency expectations likely depend on a query.
There should be a\n> GUC to set the batch size, though (it's hardcoded to 100 for now).\n>\n\nSimilar to fetch_size, it should foreign server, table level setting, IMO.\n\n[1] https://www.postgresql.org/message-id/flat/3d0909dc-3691-a576-208a-90986e55489f%40postgrespro.ru\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 29 Jun 2020 16:22:15 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On Mon, Jun 29, 2020 at 7:52 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> On Sun, Jun 28, 2020 at 8:40 PM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n\n> > 3) What about the other DML operations (DELETE/UPDATE)?\n> >\n> > The other DML operations could probably benefit from the batching too.\n> > INSERT was good enough for a PoC, but having batching only for INSERT\n> > seems somewhat asmymetric. DELETE/UPDATE seem more complicated because\n> > of quals, but likely doable.\n>\n> Bulk INSERTs are more common in a sharded environment because of data\n> load in say OLAP systems. Bulk update/delete are rare, although not\n> that rare. So if an approach just supports bulk insert and not bulk\n> UPDATE/DELETE that will address a large number of usecases IMO. But if\n> we can make everything work together that would be good as well.\n\nIn most cases, I think the entire UPDATE/DELETE operations would be\npushed down to the remote side by DirectModify. So, I'm not sure we\nreally need the bulk UPDATE/DELETE.\n\n> > 3) Should we do batching for COPY insteads?\n> >\n> > While looking at multi_insert, I've realized it's mostly exactly what\n> > the new \"batching insert\" API function would need to be. 
But it's only\n> > really used in COPY, so I wonder if we should just abandon the idea of\n> > batching INSERTs and do batching COPY for FDW tables.\n\n> I think we have find out which performs\n> better COPY or batch INSERT.\n\nMaybe I'm missing something, but I think the COPY patch [1] seems more\npromising to me, because 1) it would not get the remote side's planner\nand executor involved, and 2) the data would be loaded more\nefficiently by multi-insert on the remote side. (Yeah, COPY doesn't\nsupport RETURNING, but it's rare that RETURNING is needed in a bulk\nload, as you mentioned.)\n\n> [1] https://www.postgresql.org/message-id/flat/3d0909dc-3691-a576-208a-90986e55489f%40postgrespro.ru\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Tue, 30 Jun 2020 12:18:03 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On Tue, 30 Jun 2020 at 08:47, Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n\n> On Mon, Jun 29, 2020 at 7:52 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> > On Sun, Jun 28, 2020 at 8:40 PM Tomas Vondra\n> > <tomas.vondra@2ndquadrant.com> wrote:\n>\n> > > 3) What about the other DML operations (DELETE/UPDATE)?\n> > >\n> > > The other DML operations could probably benefit from the batching too.\n> > > INSERT was good enough for a PoC, but having batching only for INSERT\n> > > seems somewhat asmymetric. DELETE/UPDATE seem more complicated because\n> > > of quals, but likely doable.\n> >\n> > Bulk INSERTs are more common in a sharded environment because of data\n> > load in say OLAP systems. Bulk update/delete are rare, although not\n> > that rare. So if an approach just supports bulk insert and not bulk\n> > UPDATE/DELETE that will address a large number of usecases IMO.
But if\n> > we can make everything work together that would be good as well.\n>\n> In most cases, I think the entire UPDATE/DELETE operations would be\n> pushed down to the remote side by DirectModify. So, I'm not sure we\n> really need the bulk UPDATE/DELETE.\n>\n\nThat may not be true for a partitioned table whose partitions are foreign\ntables. Esp. given the work that Amit Langote is doing [1]. It really\ndepends on the ability of postgres_fdw to detect that the DML modifying\neach of the partitions can be pushed down. That may not come easily.\n\n\n>\n> > > 3) Should we do batching for COPY insteads?\n> > >\n> > > While looking at multi_insert, I've realized it's mostly exactly what\n> > > the new \"batching insert\" API function would need to be. But it's only\n> > really used in COPY, so I wonder if we should just abandon the idea of\n> > batching INSERTs and do batching COPY for FDW tables.\n>\n> > I think we have find out which performs\n> > better COPY or batch INSERT.\n>\n> Maybe I'm missing something, but I think the COPY patch [1] seems more\n> promising to me, because 1) it would not get the remote side's planner\n> and executor involved, and 2) the data would be loaded more\n> efficiently by multi-insert on the demote side. (Yeah, COPY doesn't\nsupport RETURNING, but it's rare that RETURNING is needed in a bulk\nload, as you mentioned.)\n>\n> > [1]\n> https://www.postgresql.org/message-id/flat/3d0909dc-3691-a576-208a-90986e55489f%40postgrespro.ru\n>\n> Best regards,\n> Etsuro Fujita\n>\n\n[1]\nhttps://www.postgresql.org/message-id/CA+HiwqHpHdqdDn48yCEhynnniahH78rwcrv1rEX65-fsZGBOLQ@mail.gmail.com\n-- \nBest Wishes,\nAshutosh\n\n", "msg_date": "Tue, 30 Jun 2020 09:52:44 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On Tue, Jun 30, 2020 at 1:22 PM Ashutosh Bapat\n<ashutosh.bapat@2ndquadrant.com> wrote:\n> On Tue, 30 Jun 2020 at 08:47, Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n>> On Mon, Jun 29, 2020 at 7:52 PM Ashutosh Bapat\n>> <ashutosh.bapat.oss@gmail.com> wrote:\n>> > On Sun, Jun 28, 2020 at 8:40 PM Tomas Vondra\n>> > <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> > > 3) What about the other DML operations (DELETE/UPDATE)?\n>> > >\n>> > > The other DML operations could probably benefit from the batching too.\n>> > > INSERT was good enough for a PoC, but having batching only for INSERT\n>> > > seems somewhat asmymetric.
That\nis, that work shouldn't result in losing the foreign partition's\nability to use DirectModify API to optimize updates/deletes.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 30 Jun 2020 14:53:55 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On Tue, Jun 30, 2020 at 2:54 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Jun 30, 2020 at 1:22 PM Ashutosh Bapat\n> <ashutosh.bapat@2ndquadrant.com> wrote:\n> > On Tue, 30 Jun 2020 at 08:47, Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> >> On Mon, Jun 29, 2020 at 7:52 PM Ashutosh Bapat\n> >> <ashutosh.bapat.oss@gmail.com> wrote:\n> >> > On Sun, Jun 28, 2020 at 8:40 PM Tomas Vondra\n> >> > <tomas.vondra@2ndquadrant.com> wrote:\n> >>\n> >> > > 3) What about the other DML operations (DELETE/UPDATE)?\n> >> > >\n> >> > > The other DML operations could probably benefit from the batching too.\n> >> > > INSERT was good enough for a PoC, but having batching only for INSERT\n> >> > > seems somewhat asmymetric. DELETE/UPDATE seem more complicated because\n> >> > > of quals, but likely doable.\n> >> >\n> >> > Bulk INSERTs are more common in a sharded environment because of data\n> >> > load in say OLAP systems. Bulk update/delete are rare, although not\n> >> > that rare. So if an approach just supports bulk insert and not bulk\n> >> > UPDATE/DELETE that will address a large number of usecases IMO. But if\n> >> > we can make everything work together that would be good as well.\n> >>\n> >> In most cases, I think the entire UPDATE/DELETE operations would be\n> >> pushed down to the remote side by DirectModify. So, I'm not sure we\n> >> really need the bulk UPDATE/DELETE.\n> > That may not be true for a partitioned table whose partitions are foreign tables. Esp. given the work that Amit Langote is doing [1]. 
It really depends on the ability of postgres_fdw to detect that the DML modifying each of the partitions can be pushed down. That may not come easily.\n>\n> While it's true that how to accommodate the DirectModify API in the\n> new inherited update/delete planning approach is an open question on\n> that thread, I would eventually like to find an answer to that. That\n> is, that work shouldn't result in losing the foreign partition's\n> ability to use DirectModify API to optimize updates/deletes.\n\nThat would be great!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Tue, 30 Jun 2020 17:10:23 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On Mon, Jun 29, 2020 at 04:22:15PM +0530, Ashutosh Bapat wrote:\n>On Sun, Jun 28, 2020 at 8:40 PM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>\n>>\n>> FDW batching: 4584 ms\n>>\n>> So, rather nice improvement, I'd say ...\n>\n>Very nice.\n>\n>>\n>> Before I spend more time hacking on this, I have a couple open questions\n>> about the design, restrictions etc.\n>>\n>>\n>> 1) Extend the FDW API?\n>>\n>> In the patch, the batching is simply \"injected\" into the existing insert\n>> API method, i.e. ExecForeignInsert et al. I wonder if it'd be better to\n>> extend the API with a \"batched\" version of the method, so that we can\n>> easily determine whether the FDW supports batching or not - it would\n>> require changes in the callers, though. OTOH it might be useful for\n>> COPY, where we could do something similar to multi_insert (COPY already\n>> benefits from this patch, but it does not use the batching built-into\n>> COPY).\n>\n>Amit Langote has pointed out a related patch being discussed on hackers at [1].\n>\n>That patch introduces a new API. But if we can do it without\n>introducing a new API that will be good. 
FDWs which can support\n>batching can just modify their code and don't have to implement and\n>manage a new API. We already have a handful of those APIs.\n>\n\nI don't think extending the API is a big issue - the FDW code will need\nchanging anyway, so this seems minor.\n\nI'll take a look at the COPY patch - I agree it seems like a good idea,\nalthough it can be less convenient in various cases (e.g. I've seen a lot\nof INSERT ... SELECT queries in sharded systems, etc.). \n\n>>\n>> 2) What about the insert results?\n>>\n>> I'm not sure what to do about \"result\" status for the inserted rows. We\n>> only really \"stash\" the rows into a buffer, so we don't know if it will\n>> succeed or not. The patch simply assumes it will succeed, but that's\n>> clearly wrong, and it may result in reporting a wrong number of rows.\n>\n>I didn't get this. We are executing an INSERT on the foreign server,\n>so we get the number of rows INSERTed from that server. We should just\n>add those up across batches. If there's a failure, it would abort the\n>transaction, local as well as remote.\n>\n\nTrue, but it's not the FDW code doing the counting - it's the caller,\ndepending on whether the ExecForeignInsert returns a valid slot or NULL.\nSo it's not quite possible to just return a number of inserted tuples,\nas returned by the remote server.\n\n>>\n>> The patch also disables the batching when the insert has a RETURNING\n>> clause, because there's just a single slot (for the currently inserted\n>> row). I suppose a \"batching\" method would take an array of slots.\n>>\n>\n>It will be a rare case when a bulk load also has a RETURNING clause.\n>So, we can live with this restriction. We should try to choose a\n>design which allows that restriction to be lifted in the future. 
But I\n>doubt that restriction will be a serious one.\n>\n>>\n>> 3) What about the other DML operations (DELETE/UPDATE)?\n>>\n>> The other DML operations could probably benefit from the batching too.\n>> INSERT was good enough for a PoC, but having batching only for INSERT\n>> seems somewhat asmymetric. DELETE/UPDATE seem more complicated because\n>> of quals, but likely doable.\n>\n>Bulk INSERTs are more common in a sharded environment because of data\n>load in say OLAP systems. Bulk update/delete are rare, although not\n>that rare. So if an approach just supports bulk insert and not bulk\n>UPDATE/DELETE that will address a large number of usecases IMO. But if\n>we can make everything work together that would be good as well.\n>\n>In your patch, I see that an INSERT statement with batch is\n>constructed as INSERT INTO ... VALUES (...), (...) as many values as\n>the batch size. That won't work as is for UPDATE/DELETE since we can't\n>pass multiple pairs of ctids and columns to be updated for each ctid\n>in one statement. Maybe we could build as many UPDATE/DELETE\n>statements as the size of a batch, but that would be ugly. What we\n>need is a feature like a batch prepared statement in libpq similar to\n>what JDBC supports\n>((https://mkyong.com/jdbc/jdbc-preparedstatement-example-batch-update/).\n>This will allow a single prepared statement to be executed with a\n>batch of parameters, each batch corresponding to one foreign DML\n>statement.\n>\n\nI'm pretty sure we could make it work with some array/unnest tricks to\nbuild a relation, and use that as a source of data.\n\n>>\n>>\n>> 3) Should we do batching for COPY insteads?\n>>\n>> While looking at multi_insert, I've realized it's mostly exactly what\n>> the new \"batching insert\" API function would need to be. But it's only\n>> really used in COPY, so I wonder if we should just abandon the idea of\n>> batching INSERTs and do batching COPY for FDW tables.\n>\n>I think this won't support RETURNING as well. 
But if we could somehow\n>use copy protocol to send the data to the foreign server and yet treat\n>it as INSERT, that might work. I think we have find out which performs\n>better COPY or batch INSERT.\n>\n\nI don't see why not support both, the use cases are somewhat different I\nthink.\n\n>>\n>> For cases that can replace INSERT with COPY this would be enough, but\n>> unfortunately it does nothing for DELETE/UPDATE so I'm hesitant to do\n>> this :-(\n>\n>Agreed, if we want to support bulk UPDATE/DELETE as well.\n>\n>>\n>>\n>> 4) Expected consistency?\n>>\n>> I'm not entirely sure what are the consistency expectations for FDWs.\n>> Currently the FDW nodes pointing to the same server share a connection,\n>> so the inserted rows might be visible to other nodes. But if we only\n>> stash the rows in a local buffer for a while, that's no longer true. So\n>> maybe this breaks the consistency expectations?\n>>\n>> But maybe that's OK - I'm not sure how the prepared statements/cursors\n>> affect this. I can imagine restricting the batching only to plans where\n>> this is not an issue (single FDW node or something), but it seems rather\n>> fragile and undesirable.\n>\n>I think that area is grey. Depending upon where the cursor is\n>positioned when a DML node executes a query, the data fetched from\n>cursor may or may not see the effect of DML. The cursor position is\n>based on the batch size so we already have problems in this area I\n>think. Assuming that the DML and SELECT are independent this will\n>work. So, the consistency problems exists, it will just be modulated\n>by batching DML. I doubt that's related to this feature exclusively\n>and should be solved independent of this feature.\n>\n\nOK, thanks for the feedback.\n\n>>\n>> I was thinking about adding a GUC to enable/disable the batching at some\n>> level (global, server, table, ...) but it seems like a bad match because\n>> the consistency expectations likely depend on a query. 
There should be a\n>> GUC to set the batch size, though (it's hardcoded to 100 for now).\n>>\n>\n>Similar to fetch_size, it should foreign server, table level setting, IMO.\n>\n>[1] https://www.postgresql.org/message-id/flat/3d0909dc-3691-a576-208a-90986e55489f%40postgrespro.ru\n>\n\nYeah, I agree we should have a GUC to define the batch size. What I had\nin mind was something that would allow us to enable/disable batching to\nincrease the consistency guarantees, or something like that. I think\nsimple GUCs are a poor solution for that.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 30 Jun 2020 18:53:37 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On Tue, 30 Jun 2020 at 22:23, Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> >I didn't get this. We are executing an INSERT on the foreign server,\n> >so we get the number of rows INSERTed from that server. We should just\n> >add those up across batches. If there's a failure, it would abort the\n> >transaction, local as well as remote.\n> >\n>\n> True, but it's not the FDW code doing the counting - it's the caller,\n> depending on whether the ExecForeignInsert returns a valid slot or NULL.\n> So it's not quite possible to just return a number of inserted tuples,\n> as returned by the remote server.\n>\n\nHmm yes, now I remember that bit. So for every row buffered, we return a\nvalid slot without knowing whether that row was inserted on the remote\nserver or not. I think we have that problem even now where a single INSERT\nmight result in multiple INSERTs on the remote server (rare but not\ncompletely impossible).\n\n\n>\n> >In your patch, I see that an INSERT statement with batch is\n> >constructed as INSERT INTO ... VALUES (...), (...) as many values as\n> >the batch size. 
That won't work as is for UPDATE/DELETE since we can't\n> >pass multiple pairs of ctids and columns to be updated for each ctid\n> >in one statement. Maybe we could build as many UPDATE/DELETE\n> >statements as the size of a batch, but that would be ugly. What we\n> >need is a feature like a batch prepared statement in libpq similar to\n> >what JDBC supports\n> >((https://mkyong.com/jdbc/jdbc-preparedstatement-example-batch-update/).\n> >This will allow a single prepared statement to be executed with a\n> >batch of parameters, each batch corresponding to one foreign DML\n> >statement.\n> >\n>\n> I'm pretty sure we could make it work with some array/unnest tricks to\n> build a relation, and use that as a source of data.\n>\n\nThat sounds great. The solution will be limited to postgres_fdw only.\n\n\n> I don't see why not support both, the use cases are somewhat different I\n> think.\n>\n\n+1, if we can do both.\n\n-- \nBest Wishes,\nAshutosh
", "msg_date": "Wed, 1 Jul 2020 10:17:00 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "Hi,\n\nOn 2020-06-28 17:10:02 +0200, Tomas Vondra wrote:\n> 3) Should we do batching for COPY insteads?\n> \n> While looking at multi_insert, I've realized it's mostly exactly what\n> the new \"batching insert\" API function would need to be. But it's only\n> really used in COPY, so I wonder if we should just abandon the idea of\n> batching INSERTs and do batching COPY for FDW tables.\n> \n> For cases that can replace INSERT with COPY this would be enough, but\n> unfortunately it does nothing for DELETE/UPDATE so I'm hesitant to do\n> this :-(\n\nI personally think - and I realize that that might be annoying to\nsomebody looking to make an incremental improvement - that the\nnodeModifyTable.c and copy.c code dealing with DML has become too\ncomplicated to add features like this without a larger\nrefactoring. 
Leading to choices like this, whether to add a feature in\none place but not the other.\n\nI think before we add more complexity, we ought to centralize and clean\nup the DML handling code so most is shared between copy.c and\nnodeModifyTable.c. Then we can much more easily add batching to FDWs, to\nCTAS, to INSERT INTO SELECT etc, for which there are patches already.\n\n\n> 4) Expected consistency?\n> \n> I'm not entirely sure what are the consistency expectations for FDWs.\n> Currently the FDW nodes pointing to the same server share a connection,\n> so the inserted rows might be visible to other nodes. But if we only\n> stash the rows in a local buffer for a while, that's no longer true. So\n> maybe this breaks the consistency expectations?\n\nGiven that for local queries that's not the case (since the snapshot\nwon't have those changes visible), I think we shouldn't be too concerned\nabout that. If anything we should be concerned about the opposite.\n\nIf we are concerned, perhaps we could add functionality to flush all\npending changes before executing further statements?\n\n\n\n> I was thinking about adding a GUC to enable/disable the batching at some\n> level (global, server, table, ...) but it seems like a bad match because\n> the consistency expectations likely depend on a query. There should be a\n> GUC to set the batch size, though (it's hardcoded to 100 for now).\n\nHm. If libpq allowed to utilize pipelining ISTM the answer here would be\nto not batch by building a single statement with all rows as a VALUES,\nbut issue the single INSERTs in a pipelined manner. That'd probably\nremove all behavioural differences. 
I really wish somebody would pick\nup that libpq patch again.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Jul 2020 11:34:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On 6/28/20 8:10 PM, Tomas Vondra wrote:\n> Now, the primary reason why the performance degrades like this is that\n> while FDW has batching for SELECT queries (i.e. we read larger chunks of\n> data from the cursors), we don't have that for INSERTs (or other DML).\n> Every time you insert a row, it has to go all the way down into the\n> partition synchronously.\n\nYou added new fields into the PgFdwModifyState struct. Why you didn't \nreused ResultRelInfo::ri_CopyMultiInsertBuffer field and \nCopyMultiInsertBuffer machinery as storage for incoming tuples?\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Fri, 10 Jul 2020 09:28:44 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On Fri, Jul 10, 2020 at 09:28:44AM +0500, Andrey V. Lepikhov wrote:\n>On 6/28/20 8:10 PM, Tomas Vondra wrote:\n>>Now, the primary reason why the performance degrades like this is that\n>>while FDW has batching for SELECT queries (i.e. we read larger chunks of\n>>data from the cursors), we don't have that for INSERTs (or other DML).\n>>Every time you insert a row, it has to go all the way down into the\n>>partition synchronously.\n>\n>You added new fields into the PgFdwModifyState struct. Why you didn't \n>reused ResultRelInfo::ri_CopyMultiInsertBuffer field and \n>CopyMultiInsertBuffer machinery as storage for incoming tuples?\n>\n\nBecause I was focused on speeding-up inserts, and that is not using\nCopyMultiInsertBuffer I think. 
I agree the way the tuples are stored\nmay be improved, of course.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sun, 12 Jul 2020 02:11:01 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On Sun, Jul 12, 2020 at 02:11:01AM +0200, Tomas Vondra wrote:\n> Because I was focused on speeding-up inserts, and that is not using\n> CopyMultiInsertBuffer I think. I agree the way the tuples are stored\n> may be improved, of course.\n\nThe CF bot is telling that the regression tests of postgres_fdw are\ncrashing. Could you look at that?\n--\nMichael", "msg_date": "Thu, 1 Oct 2020 13:12:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "Hello Tomas san,\n\n\nThank you for picking up this. I'm interested in this topic, too. (As an aside, we'd like to submit a bulk insert patch for ECPG in the near future.)\n\nAs others referred, Andrey-san's fast COPY to foreign partitions is also promising. But I think your bulk INSERT is a separate feature and offers COPY cannot do -- data transformation during loading with INSERT SELECT and CREATE TABLE AS SELECT.\n\nIs there anything that makes you worry and stops development? Could I give it a try to implement this (I'm not sure I can, sorry. I'm worried if we can change the executor's call chain easily.)\n\n\n> 1) Extend the FDW API?\n\nYes, I think, because FDWs for other DBMSs will benefit from this. (But it's questionable whether we want users to transfer data in Postgres database to other DBMSs...)\n\nMySQL and SQL Server has the same bulk insert syntax as Postgres, i.e., INSERT INTO table VALUES(record1), (record2), ... 
Oracle doesn't have this syntax, but it can use CTE as follows:\n\n INSERT INTO table\n WITH t AS (\n SELECT record1 FROM DUAL UNION ALL\n SELECT record2 FROM DUAL UNION ALL\n ...\n )\n SELECT * FROM t;\n\nAnd many DBMSs should have CTAS, INSERT SELECT, and INSERT SELECT record1 UNION ALL SELECT record2 ...\n\nThe API would simply be:\n\nTupleTableSlot **\nExecForeignMultiInsert(EState *estate,\n ResultRelInfo *rinfo,\n TupleTableSlot **slot,\n TupleTableSlot **planSlot,\n int numSlots);\n\n\n> 2) What about the insert results?\n\nI'm wondering if we can report success or failure of each inserted row, because the remote INSERT will fail entirely. Other FDWs may be able to do it, so the API can be like above.\n\nFor the same reason, support for RETURNING clause will vary from DBMS to DBMS.\n\n\n> 3) What about the other DML operations (DELETE/UPDATE)?\n\nI don't think they are necessary for the time being. If we want them, they will be implemented using the libpq batch/pipelining as Andres-san said.\n\n\n> 3) Should we do batching for COPY insteads?\n\nI'm thinking of issuing INSERT with multiple records as your patch does, because:\n\n* When the user executed INSERT statements, it would look strange to the user if the remote SQL is displayed as COPY.\n\n* COPY doesn't invoke rules unlike INSERT. (I don't think the rule is a feature what users care about, though.) 
Also, I'm a bit concerned that there might be, or will be, other differences between INSERT and COPY.\n\n\n[1]\nFast COPY FROM command for the table with foreign partitions\nhttps://www.postgresql.org/message-id/flat/3d0909dc-3691-a576-208a-90986e55489f%40postgrespro.ru\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Thu, 8 Oct 2020 02:40:10 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "On Thu, Oct 08, 2020 at 02:40:10AM +0000, tsunakawa.takay@fujitsu.com wrote:\n>Hello Tomas san,\n>\n>\n>Thank you for picking up this. I'm interested in this topic, too. (As an aside, we'd like to submit a bulk insert patch for ECPG in the near future.)\n>\n>As others referred, Andrey-san's fast COPY to foreign partitions is also promising. But I think your bulk INSERT is a separate feature and offers COPY cannot do -- data transformation during loading with INSERT SELECT and CREATE TABLE AS SELECT.\n>\n>Is there anything that makes you worry and stops development? Could I give it a try to implement this (I'm not sure I can, sorry. I'm worried if we can change the executor's call chain easily.)\n>\n\nIt's primarily a matter of having too much other stuff on my plate, thus\nnot having time to work on this feature. I was not too worried about any\nparticular issue, but I wanted some feedback before spending more time\non extending the API.\n\nI'm not sure when I'll have time to work on this again, so if you are\ninterested and willing to work on it, please go ahead. I'll gladly do\nreviews and help you with it.\n\n>\n>> 1) Extend the FDW API?\n>\n>Yes, I think, because FDWs for other DBMSs will benefit from this. 
(But it's questionable whether we want users to transfer data in Postgres database to other DBMSs...)\n>\n\nI think transferring data to other databases is fine - interoperability\nis a big advantage for users, I don't see it as something threatening\nthe PostgreSQL project. I doubt this would make it more likely for users\nto migrate from PostgreSQL - there are many ways to do that already.\n\n\n>MySQL and SQL Server has the same bulk insert syntax as Postgres, i.e., INSERT INTO table VALUES(record1), (record2), ... Oracle doesn't have this syntax, but it can use CTE as follows:\n>\n> INSERT INTO table\n> WITH t AS (\n> SELECT record1 FROM DUAL UNION ALL\n> SELECT record2 FROM DUAL UNION ALL\n> ...\n> )\n> SELECT * FROM t;\n>\n>And many DBMSs should have CTAS, INSERT SELECT, and INSERT SELECT record1 UNION ALL SELECT record2 ...\n>\n\nTrue. In some cases INSERT may be replaced by COPY, but it has various\nother features too.\n\n>The API would simply be:\n>\n>TupleTableSlot **\n>ExecForeignMultiInsert(EState *estate,\n> ResultRelInfo *rinfo,\n> TupleTableSlot **slot,\n> TupleTableSlot **planSlot,\n> int numSlots);\n>\n>\n\n+1, seems quite reasonable\n\n>> 2) What about the insert results?\n>\n>I'm wondering if we can report success or failure of each inserted row, because the remote INSERT will fail entirely. Other FDWs may be able to do it, so the API can be like above.\n>\n\nYeah. I think handling complete failure should not be very difficult,\nbut there are cases that worry me more. For example, what if there's a\nbefore trigger (on the remote db) that \"skips\" inserting some of the\nrows by returning NULL?\n\n>For the same reason, support for RETURNING clause will vary from DBMS to DBMS.\n>\n\nYeah. I wonder if the FDW needs to indicate which features are supported\nby the ExecForeignMultiInsert, e.g. 
by adding a function that decides\nwhether batch insert is supported (it might also do that internally by\ncalling ExecForeignInsert, of course).\n\n>\n>> 3) What about the other DML operations (DELETE/UPDATE)?\n>\n>I don't think they are necessary for the time being. If we want them, they will be implemented using the libpq batch/pipelining as Andres-san said.\n>\n\nI agree.\n\n>\n>> 3) Should we do batching for COPY insteads?\n>\n>I'm thinking of issuing INSERT with multiple records as your patch does, because:\n>\n>* When the user executed INSERT statements, it would look strange to the user if the remote SQL is displayed as COPY.\n>\n>* COPY doesn't invoke rules unlike INSERT. (I don't think the rule is a feature what users care about, though.) Also, I'm a bit concerned that there might be, or will be, other differences between INSERT and COPY.\n>\n\nI agree.\n\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 9 Oct 2020 00:14:21 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Tomas Vondra <tomas.vondra@2ndquadrant.com>\n> I'm not sure when I'll have time to work on this again, so if you are\n> interested and willing to work on it, please go ahead. I'll gladly do\n> reviews and help you with it.\n\nThank you very much.\n\n\n> I think transferring data to other databases is fine - interoperability\n> is a big advantage for users, I don't see it as something threatening\n> the PostgreSQL project. I doubt this would make it more likely for users\n> to migrate from PostgreSQL - there are many ways to do that already.\n\nDefinitely true. Users may want to use INSERT SELECT to do some data transformation in their OLTP database and load it into a non-Postgres data warehouse.\n\n\n> Yeah. 
I think handling complete failure should not be very difficult,\n> but there are cases that worry me more. For example, what if there's a\n> before trigger (on the remote db) that \"skips\" inserting some of the\n> rows by returning NULL?\n\n> Yeah. I wonder if the FDW needs to indicate which features are supported\n> by the ExecForeignMultiInsert, e.g. by adding a function that decides\n> whether batch insert is supported (it might also do that internally by\n> calling ExecForeignInsert, of course).\n\nThanks for your advice. I'll try to address them.\n\n\n Regards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Fri, 9 Oct 2020 03:01:53 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "Hello,\n\n\nThe attached patch implements the new bulk insert routine for postgres_fdw and the executor utilizing it. It passes make check-world.\n\nI measured performance in a basic non-partitioned case by modifying Tomas-san's scripts. They perform an INSERT SELECT statement that copies one million records. The table consists of two integer columns, with a primary key on one of them. You can run the attached prepare.sql to set up once. local.sql inserts to the table directly, while fdw.sql inserts through a foreign table.\n\nThe performance results, the average time of 5 runs, were as follows on a Linux host where the average round-trip time of \"ping localhost\" was 34 us:\n\n master, local: 6.1 seconds\n master, fdw: 125.3 seconds\n patched, fdw: 11.1 seconds (11x improvement)\n\n\nThe patch accumulates at most 100 records in ModifyTableState before inserting in bulk. Also, when an input record is targeted for a different relation (= partition) than that for already accumulated records, insert the accumulated records and store the new record for later insert.\n\n[Issues]\n\n1. 
Do we want a GUC parameter, say, max_bulk_insert_records = (integer), to control the number of records inserted at once?\nThe range of allowed values would be between 1 and 1,000. 1 disables bulk insert.\nThe possible reason of the need for this kind of parameter would be to limit the amount of memory used for accumulated records, which could be prohibitively large if each record is big. I don't think this is a must, but I think we can have it.\n\n2. Should we accumulate records per relation in ResultRelInfo instead?\nThat is, when inserting into a partitioned table that has foreign partitions, delay insertion until a certain number of input records accumulate, and then insert accumulated records per relation (e.g., 50 records to relation A, 30 records to relation B, and 20 records to relation C.) If we do that,\n\n* The order of insertion differs from the order of input records. Is it OK?\n\n* Should the maximum count of accumulated records be applied per relation or the query?\nWhen many foreign partitions belong to a partitioned table, if the former is chosen, it may use much memory in total. If the latter is chosen, the records per relation could be few and thus the benefit of bulk insert could be small.\n\n\nRegards\nTakayuki Tsunakawa", "msg_date": "Tue, 10 Nov 2020 00:45:50 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "Hi,\n\nThanks for working on this!\n\nOn 11/10/20 1:45 AM, tsunakawa.takay@fujitsu.com wrote:\n> Hello,\n> \n> \n> The attached patch implements the new bulk insert routine for\n> postgres_fdw and the executor utilizing it. It passes make\n> check-world.\n> \n\nI haven't done any testing yet, just a quick review.\n\nI see the patch builds the \"bulk\" query in execute_foreign_modify. IMO\nthat's something we should do earlier, when we're building the simple\nquery (for 1-row inserts). 
I'd understand if you were concerned about\noverhead in case of 1-row inserts, trying to not plan the bulk query\nuntil necessary, but I'm not sure this actually helps.\n\nOr was the goal to build a query for every possible number of slots? I\ndon't think that's really useful, considering it requires deallocating\nthe old plan, preparing a new one, etc. IMO it should be sufficient to\nhave two queries - one for 1-row inserts, one for the full batch. The\nlast incomplete batch can be inserted using a loop of 1-row queries.\n\nThat's what my patch was doing, but I'm not insisting on that - it just\nseems like a better approach to me. So feel free to argue why this is\nbetter.\n\n\n> I measured performance in a basic non-partitioned case by modifying\n> Tomas-san's scripts. They perform an INSERT SELECT statement that\n> copies one million records. The table consists of two integer\n> columns, with a primary key on one of those them. You can run the\n> attached prepare.sql to set up once. local.sql inserts to the table\n> directly, while fdw.sql inserts through a foreign table.\n> \n> The performance results, the average time of 5 runs, were as follows\n> on a Linux host where the average round-trip time of \"ping localhost\"\n> was 34 us:\n> \n> master, local: 6.1 seconds master, fdw: 125.3 seconds patched, fdw:\n> 11.1 seconds (11x improvement)\n> \n\nNice. I think we can't really get much closer to local master, so 6.1\nvs. 11.1 seconds look quite acceptable.\n\n> \n> The patch accumulates at most 100 records in ModifyTableState before\n> inserting in bulk. Also, when an input record is targeted for a\n> different relation (= partition) than that for already accumulated\n> records, insert the accumulated records and store the new record for\n> later insert.\n> \n> [Issues]\n> \n> 1. Do we want a GUC parameter, say, max_bulk_insert_records =\n> (integer), to control the number of records inserted at once? The\n> range of allowed values would be between 1 and 1,000. 
1 disables\n> bulk insert. The possible reason of the need for this kind of\n> parameter would be to limit the amount of memory used for accumulated\n> records, which could be prohibitively large if each record is big. I\n> don't think this is a must, but I think we can have it.\n> \n\nI think it'd be good to have such GUC, even if only for testing and\ndevelopment. We should probably have a way to disable the batching,\nwhich the GUC could also do, I think. So +1 to have the GUC.\n\n> 2. Should we accumulate records per relation in ResultRelInfo\n> instead? That is, when inserting into a partitioned table that has\n> foreign partitions, delay insertion until a certain number of input\n> records accumulate, and then insert accumulated records per relation\n> (e.g., 50 records to relation A, 30 records to relation B, and 20\n> records to relation C.) If we do that,\n> \n\nI think there's a chunk of text missing here? If we do that, then what?\n\nAnyway, I don't see why accumulating the records in ResultRelInfo would\nbe better than what the patch does now. It seems to me like fairly\nspecific to FDWs, so keeping it in FDW state seems appropriate. What\nwould be the advantage of stashing it in ResultRelInfo?\n\n> * The order of insertion differs from the order of input records. Is\n> it OK?\n> \n\nI think that's OK for most use cases, and if it's not (e.g. when there's\nsomething requiring the exact order of writes) then it's not possible to\nuse batching. That's one of the reasons why I think we should have a GUC\nto disable the batching.\n\n> * Should the maximum count of accumulated records be applied per\n> relation or the query?\nWhen many foreign partitions belong to a\n> partitioned table, if the former is chosen, it may use much memory in\n> total. 
If the latter is chosen, the records per relation could be\n> few and thus the benefit of bulk insert could be small.\n> \n\nI think it needs to be applied per relation, because that's the level at\nwhich we can do it easily and consistently. The whole point is to send\ndata in sufficiently large chunks to minimize the communication overhead\n(latency etc.), but if you enforce it \"per query\" that seems hard.\n\nImagine you're inserting data into a table with many partitions - how do\nyou pick the number of rows to accumulate? The table may have 10 or 1000\npartitions, we may be inserting into all partitions or just a small\nsubset, not all partitions may be foreign, etc. It seems pretty\ndifficult to pick and enforce a reliable limit at the query level. But\nmaybe I'm missing something and it's easier than I think?\n\nOf course, you're entirely correct enforcing this at the partition level\nmay require a lot of memory. Sadly, I don't see a way around that,\nexcept for (a) disabling batching or (b) ordering the data to insert\ndata into one partition at a time.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 10 Nov 2020 16:05:14 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "\n\nOn 11/10/20 4:05 PM, Tomas Vondra wrote:\n> Hi,\n> \n> Thanks for working on this!\n> \n> On 11/10/20 1:45 AM, tsunakawa.takay@fujitsu.com wrote:\n>> Hello,\n>>\n>>\n>> The attached patch implements the new bulk insert routine for\n>> postgres_fdw and the executor utilizing it. It passes make\n>> check-world.\n>>\n> \n> I haven't done any testing yet, just a quick review.\n> \n> I see the patch builds the \"bulk\" query in execute_foreign_modify. IMO\n> that's something we should do earlier, when we're building the simple\n> query (for 1-row inserts). 
I'd understand if you were concerned about\n> overhead in case of 1-row inserts, trying to not plan the bulk query\n> until necessary, but I'm not sure this actually helps.\n> \n> Or was the goal to build a query for every possible number of slots? I\n> don't think that's really useful, considering it requires deallocating\n> the old plan, preparing a new one, etc. IMO it should be sufficient to\n> have two queries - one for 1-row inserts, one for the full batch. The\n> last incomplete batch can be inserted using a loop of 1-row queries.\n> \n> That's what my patch was doing, but I'm not insisting on that - it just\n> seems like a better approach to me. So feel free to argue why this is\n> better.\n> \n> \n>> I measured performance in a basic non-partitioned case by modifying\n>> Tomas-san's scripts. They perform an INSERT SELECT statement that\n>> copies one million records. The table consists of two integer\n>> columns, with a primary key on one of those them. You can run the\n>> attached prepare.sql to set up once. local.sql inserts to the table\n>> directly, while fdw.sql inserts through a foreign table.\n>>\n>> The performance results, the average time of 5 runs, were as follows\n>> on a Linux host where the average round-trip time of \"ping localhost\"\n>> was 34 us:\n>>\n>> master, local: 6.1 seconds master, fdw: 125.3 seconds patched, fdw:\n>> 11.1 seconds (11x improvement)\n>>\n> \n> Nice. I think we can't really get much closer to local master, so 6.1\n> vs. 11.1 seconds look quite acceptable.\n> \n>>\n>> The patch accumulates at most 100 records in ModifyTableState before\n>> inserting in bulk. Also, when an input record is targeted for a\n>> different relation (= partition) than that for already accumulated\n>> records, insert the accumulated records and store the new record for\n>> later insert.\n>>\n>> [Issues]\n>>\n>> 1. Do we want a GUC parameter, say, max_bulk_insert_records =\n>> (integer), to control the number of records inserted at once? 
The\n>> range of allowed values would be between 1 and 1,000. 1 disables\n>> bulk insert. The possible reason of the need for this kind of\n>> parameter would be to limit the amount of memory used for accumulated\n>> records, which could be prohibitively large if each record is big. I\n>> don't think this is a must, but I think we can have it.\n>>\n> \n> I think it'd be good to have such GUC, even if only for testing and\n> development. We should probably have a way to disable the batching,\n> which the GUC could also do, I think. So +1 to have the GUC.\n> \n>> 2. Should we accumulate records per relation in ResultRelInfo\n>> instead? That is, when inserting into a partitioned table that has\n>> foreign partitions, delay insertion until a certain number of input\n>> records accumulate, and then insert accumulated records per relation\n>> (e.g., 50 records to relation A, 30 records to relation B, and 20\n>> records to relation C.) If we do that,\n>>\n> \n> I think there's a chunk of text missing here? If we do that, then what?\n> \n> Anyway, I don't see why accumulating the records in ResultRelInfo would\n> be better than what the patch does now. It seems to me like fairly\n> specific to FDWs, so keeping it int FDW state seems appropriate. What\n> would be the advantage of stashing it in ResultRelInfo?\n> \n>> * The order of insertion differs from the order of input records. Is\n>> it OK?\n>>\n> \n> I think that's OK for most use cases, and if it's not (e.g. when there's\n> something requiring the exact order of writes) then it's not possible to\n> use batching. That's one of the reasons why I think we should have a GUC\n> to disable the batching.\n> \n>> * Should the maximum count of accumulated records be applied per\n>> relation or the query? When many foreign partitions belong to a\n>> partitioned table, if the former is chosen, it may use much memory in\n>> total. 
If the latter is chosen, the records per relation could be\n>> few and thus the benefit of bulk insert could be small.\n>>\n> \n> I think it needs to be applied per relation, because that's the level at\n> which we can do it easily and consistently. The whole point is to send\n> data in sufficiently large chunks to minimize the communication overhead\n> (latency etc.), but if you enforce it \"per query\" that seems hard.\n> \n> Imagine you're inserting data into a table with many partitions - how do\n> you pick the number of rows to accumulate? The table may have 10 or 1000\n> partitions, we may be inserting into all partitions or just a small\n> subset, not all partitions may be foreign, etc. It seems pretty\n> difficult to pick and enforce a reliable limit at the query level. But\n> maybe I'm missing something and it's easier than I think?\n> \n> Of course, you're entirely correct enforcing this at the partition level\n> may require a lot of memory. Sadly, I don't see a way around that,\n> except for (a) disabling batching or (b) ordering the data to insert\n> data into one partition at a time.\n> \n\nTwo more comments regarding this:\n\n1) If we want to be more strict about the memory consumption, we should\nprobably set the limit in terms of memory, not number of rows. Currently\nthe 100 rows may be 10kB or 10MB, there's no way to know. Of course,\nthis is not the only place with this issue.\n\n2) I wonder what the COPY FROM patch [1] does in this regard. 
I don't\nhave time to check right now, but I suggest we try to do the same thing,\nif only to be consistent.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/3d0909dc-3691-a576-208a-90986e55489f%40postgrespro.ru\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 10 Nov 2020 18:16:38 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Tomas Vondra <tomas.vondra@enterprisedb.com>\r\n> I see the patch builds the \"bulk\" query in execute_foreign_modify. IMO\r\n> that's something we should do earlier, when we're building the simple\r\n> query (for 1-row inserts). I'd understand if you were concerned about\r\n> overhead in case of 1-row inserts, trying to not plan the bulk query\r\n> until necessary, but I'm not sure this actually helps.\r\n> \r\n> Or was the goal to build a query for every possible number of slots? I\r\n> don't think that's really useful, considering it requires deallocating\r\n> the old plan, preparing a new one, etc. IMO it should be sufficient to\r\n> have two queries - one for 1-row inserts, one for the full batch. The\r\n> last incomplete batch can be inserted using a loop of 1-row queries.\r\n> \r\n> That's what my patch was doing, but I'm not insisting on that - it just\r\n> seems like a better approach to me. 
So feel free to argue why this is\r\n> better.\r\n\r\nDon't be concerned, the processing is not changed for 1-row inserts: the INSERT query string is built in PlanForeignModify(), and the remote statement is prepared in execute_foreign_modify() during the first call to ExecForeignInsert() and it's reused for subsequent ExecForeignInsert() calls.\r\n\r\nThe re-creation of INSERT query string and its corresponding PREPARE happen when the number of tuples to be inserted is different from the previous call to ExecForeignInsert()/ExecForeignBulkInsert(). That's because we don't know how many tuples will be inserted during planning (PlanForeignModify) or execution (until the scan ends for SELECT). For example, if we insert 10,030 rows with the bulk size 100, the flow is:\r\n\r\n PlanForeignModify():\r\n build the INSERT query string for 1 row\r\n ExecForeignBulkInsert(100):\r\n drop the INSERT query string and prepared statement for 1 row\r\n build the query string and prepare statement for 100 row INSERT\r\n execute it\r\n ExecForeignBulkInsert(100):\r\n reuse the prepared statement for 100 row INSERT and execute it\r\n...\r\n ExecForeignBulkInsert(30):\r\n drop the INSERT query string and prepared statement for 100 row\r\n build the query string and prepare statement for 30 row INSERT\r\n execute it\r\n\r\n\r\n> I think it'd be good to have such GUC, even if only for testing and\r\n> development. We should probably have a way to disable the batching,\r\n> which the GUC could also do, I think. So +1 to have the GUC.\r\n\r\nOK, I'll add it. 
The name would be max_bulk_insert_tuples, because a) it might cover bulk insert for local relations in the future, and b) \"tuple\" is used in cpu_(index_)tuple_cost and parallel_tuple_cost, while \"row\" or \"record\" is not used in GUC (except for row_security).\r\n\r\nThe valid range would be between 1 and 1,000 (I said 10,000 previously, but I think it's an overreaction and am a bit worried about unforeseen trouble too many tuples might cause.) 1 disables the bulk processing and uses the traditional ExecForeignInsert(). The default value is 100 (would 1 be sensible as a default value to avoid surprising users by increased memory usage?)\r\n\r\n\r\n> > 2. Should we accumulate records per relation in ResultRelInfo\r\n> > instead? That is, when inserting into a partitioned table that has\r\n> > foreign partitions, delay insertion until a certain number of input\r\n> > records accumulate, and then insert accumulated records per relation\r\n> > (e.g., 50 records to relation A, 30 records to relation B, and 20\r\n> > records to relation C.) If we do that,\r\n> >\r\n> \r\n> I think there's a chunk of text missing here? If we do that, then what?\r\n\r\nSorry, the two bullets below are what follows. Perhaps I should have written \":\" instead of \",\".\r\n\r\n\r\n> Anyway, I don't see why accumulating the records in ResultRelInfo would\r\n> be better than what the patch does now. It seems to me like fairly\r\n> specific to FDWs, so keeping it in FDW state seems appropriate. What\r\n> would be the advantage of stashing it in ResultRelInfo?\r\n\r\nI thought of distributing input records to their corresponding partitions' ResultRelInfos. For example, when an input record for partition 1 comes, store it in the ResultRelInfo for partition 1; when an input record for partition 2 comes, store it in the ResultRelInfo for partition 2. When a ResultRelInfo accumulates some number of rows, insert the accumulated rows therein into the partition. 
When the input ends, perform bulk inserts for ResultRelInfos that have accumulated rows.\r\n\r\n\r\n\r\n> I think that's OK for most use cases, and if it's not (e.g. when there's\r\n> something requiring the exact order of writes) then it's not possible to\r\n> use batching. That's one of the reasons why I think we should have a GUC\r\n> to disable the batching.\r\n\r\nAgreed.\r\n\r\n\r\n> > * Should the maximum count of accumulated records be applied per\r\n> > relation or the query? When many foreign partitions belong to a\r\n> > partitioned table, if the former is chosen, it may use much memory in\r\n> > total. If the latter is chosen, the records per relation could be\r\n> > few and thus the benefit of bulk insert could be small.\r\n> >\r\n> \r\n> I think it needs to be applied per relation, because that's the level at\r\n> which we can do it easily and consistently. The whole point is to send\r\n> data in sufficiently large chunks to minimize the communication overhead\r\n> (latency etc.), but if you enforce it \"per query\" that seems hard.\r\n> \r\n> Imagine you're inserting data into a table with many partitions - how do\r\n> you pick the number of rows to accumulate? The table may have 10 or 1000\r\n> partitions, we may be inserting into all partitions or just a small\r\n> subset, not all partitions may be foreign, etc. It seems pretty\r\n> difficult to pick and enforce a reliable limit at the query level. But\r\n> maybe I'm missing something and it's easier than I think?\r\n> \r\n> Of course, you're entirely correct enforcing this at the partition level\r\n> may require a lot of memory. 
Sadly, I don't see a way around that,\r\n> except for (a) disabling batching or (b) ordering the data to insert\r\n> data into one partition at a time.\r\n\r\nOK, I think I'll try doing like that, after waiting for other opinions some days.\r\n\r\n\r\n> Two more comments regarding this:\r\n> \r\n> 1) If we want to be more strict about the memory consumption, we should\r\n> probably set the limit in terms of memory, not number of rows. Currently\r\n> the 100 rows may be 10kB or 10MB, there's no way to know. Of course,\r\n> this is not the only place with this issue.\r\n> \r\n> 2) I wonder what the COPY FROM patch [1] does in this regard. I don't\r\n> have time to check right now, but I suggest we try to do the same thing,\r\n> if only to be consistent.\r\n> \r\n> [1]\r\n> https://www.postgresql.org/message-id/flat/3d0909dc-3691-a576-208a-909\r\n> 86e55489f%40postgrespro.ru\r\n\r\nThat COPY FROM patch uses the tuple accumulation mechanism for local tables as-is. That is, it accumulates at most 1,000 tuples per partition.\r\n\r\n/*\r\n * No more than this many tuples per CopyMultiInsertBuffer\r\n *\r\n * Caution: Don't make this too big, as we could end up with this many\r\n * CopyMultiInsertBuffer items stored in CopyMultiInsertInfo's\r\n * multiInsertBuffers list. 
Increasing this can cause quadratic growth in\r\n * memory requirements during copies into partitioned tables with a large\r\n * number of partitions.\r\n */\r\n#define MAX_BUFFERED_TUPLES 1000\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Wed, 11 Nov 2020 07:20:15 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "On Wed, 11 Nov 2020, tsunakawa.takay@fujitsu.com wrote:\n\n> From: Tomas Vondra <tomas.vondra@enterprisedb.com>\n>> I see the patch builds the \"bulk\" query in execute_foreign_modify. IMO\n>> that's something we should do earlier, when we're building the simple\n>> query (for 1-row inserts). I'd understand if you were concerned about\n>> overhead in case of 1-row inserts, trying to not plan the bulk query\n>> until necessary, but I'm not sure this actually helps.\n>>\n>> Or was the goal to build a query for every possible number of slots? I\n>> don't think that's really useful, considering it requires deallocating\n>> the old plan, preparing a new one, etc. IMO it should be sufficient to\n>> have two queries - one for 1-row inserts, one for the full batch. The\n>> last incomplete batch can be inserted using a loop of 1-row queries.\n>>\n>> That's what my patch was doing, but I'm not insisting on that - it just\n>> seems like a better approach to me. 
So feel free to argue why this is\n>> better.\n>\n> Don't be concerned, the processing is not changed for 1-row inserts: the INSERT query string is built in PlanForeignModify(), and the remote statement is prepared in execute_foreign_modify() during the first call to ExecForeignInsert() and it's reused for subsequent ExecForeignInsert() calls.\n>\n> The re-creation of INSERT query string and its corresponding PREPARE happen when the number of tuples to be inserted is different from the previous call to ExecForeignInsert()/ExecForeignBulkInsert(). That's because we don't know how many tuples will be inserted during planning (PlanForeignModify) or execution (until the scan ends for SELECT). For example, if we insert 10,030 rows with the bulk size 100, the flow is:\n>\n> PlanForeignModify():\n> build the INSERT query string for 1 row\n> ExecForeignBulkInsert(100):\n> drop the INSERT query string and prepared statement for 1 row\n> build the query string and prepare statement for 100 row INSERT\n> execute it\n> ExecForeignBulkInsert(100):\n> reuse the prepared statement for 100 row INSERT and execute it\n> ...\n> ExecForeignBulkInsert(30):\n> drop the INSERT query string and prepared statement for 100 row\n> build the query string and prepare statement for 30 row INSERT\n> execute it\n>\n>\n>> I think it'd be good to have such GUC, even if only for testing and\n>> development. We should probably have a way to disable the batching,\n>> which the GUC could also do, I think. So +1 to have the GUC.\n>\n> OK, I'll add it. The name would be max_bulk_insert_tuples, because a) it might cover bulk insert for local relations in the future, and b) \"tuple\" is used in cpu_(index_)tuple_cost and parallel_tuple_cost, while \"row\" or \"record\" is not used in GUC (except for row_security).\n>\n> The valid range would be between 1 and 1,000 (I said 10,000 previously, but I think it's overreaction and am a bit worried about unforseen trouble too many tuples might cause.) 
1 disables the bulk processing and uses the traditonal ExecForeignInsert(). The default value is 100 (would 1 be sensible as a default value to avoid surprising users by increased memory usage?)\n>\n>\n>>> 2. Should we accumulate records per relation in ResultRelInfo\n>>> instead? That is, when inserting into a partitioned table that has\n>>> foreign partitions, delay insertion until a certain number of input\n>>> records accumulate, and then insert accumulated records per relation\n>>> (e.g., 50 records to relation A, 30 records to relation B, and 20\n>>> records to relation C.) If we do that,\n>>>\n>>\n>> I think there's a chunk of text missing here? If we do that, then what?\n>\n> Sorry, the two bullets below there are what follows. Perhaps I should have written \":\" instead of \",\".\n>\n>\n>> Anyway, I don't see why accumulating the records in ResultRelInfo would\n>> be better than what the patch does now. It seems to me like fairly\n>> specific to FDWs, so keeping it int FDW state seems appropriate. What\n>> would be the advantage of stashing it in ResultRelInfo?\n>\n> I thought of distributing input records to their corresponding partitions' ResultRelInfos. For example, input record for partition 1 comes, store it in the ResultRelInfo for partition 1, then input record for partition 2 comes, store it in the ResultRelInfo for partition 2. When a ResultRelInfo accumulates some number of rows, insert the accumulated rows therein into the partition. When the input endds, perform bulk inserts for ResultRelInfos that have accumulated rows.\n>\n>\n>\n>> I think that's OK for most use cases, and if it's not (e.g. when there's\n>> something requiring the exact order of writes) then it's not possible to\n>> use batching. That's one of the reasons why I think we should have a GUC\n>> to disable the batching.\n>\n> Agreed.\n>\n>\n>>> * Should the maximum count of accumulated records be applied per\n>>> relation or the query? 
When many foreign partitions belong to a\n>>> partitioned table, if the former is chosen, it may use much memory in\n>>> total. If the latter is chosen, the records per relation could be\n>>> few and thus the benefit of bulk insert could be small.\n>>>\n>>\n>> I think it needs to be applied per relation, because that's the level at\n>> which we can do it easily and consistently. The whole point is to send\n>> data in sufficiently large chunks to minimize the communication overhead\n>> (latency etc.), but if you enforce it \"per query\" that seems hard.\n>>\n>> Imagine you're inserting data into a table with many partitions - how do\n>> you pick the number of rows to accumulate? The table may have 10 or 1000\n>> partitions, we may be inserting into all partitions or just a small\n>> subset, not all partitions may be foreign, etc. It seems pretty\n>> difficult to pick and enforce a reliable limit at the query level. But\n>> maybe I'm missing something and it's easier than I think?\n>>\n>> Of course, you're entirely correct enforcing this at the partition level\n>> may require a lot of memory. Sadly, I don't see a way around that,\n>> except for (a) disabling batching or (b) ordering the data to insert\n>> data into one partition at a time.\n>\n> OK, I think I'll try doing like that, after waiting for other opinions some days.\n>\n>\n>> Two more comments regarding this:\n>>\n>> 1) If we want to be more strict about the memory consumption, we should\n>> probably set the limit in terms of memory, not number of rows. Currently\n>> the 100 rows may be 10kB or 10MB, there's no way to know. Of course,\n>> this is not the only place with this issue.\n>>\n>> 2) I wonder what the COPY FROM patch [1] does in this regard. 
I don't\n>> have time to check right now, but I suggest we try to do the same thing,\n>> if only to be consistent.\n>>\n>> [1]\n>> https://www.postgresql.org/message-id/flat/3d0909dc-3691-a576-208a-909\n>> 86e55489f%40postgrespro.ru\n>\n> That COPY FROM patch uses the tuple accumulation mechanism for local tables as-is. That is, it accumulates at most 1,000 tuples per partition.\n>\n> /*\n> * No more than this many tuples per CopyMultiInsertBuffer\n> *\n> * Caution: Don't make this too big, as we could end up with this many\n> * CopyMultiInsertBuffer items stored in CopyMultiInsertInfo's\n> * multiInsertBuffers list. Increasing this can cause quadratic growth in\n> * memory requirements during copies into partitioned tables with a large\n> * number of partitions.\n> */\n> #define MAX_BUFFERED_TUPLES 1000\n>\n>\n> Regards\n> Takayuki Tsunakawa\n>\n>\n\nDoes this patch affect trigger semantics on the base table?\n\nAt the moment when I insert 1000 rows into a postgres_fdw table using a\nsingle insert statement (e.g. INSERT INTO fdw_foo SELECT ... FROM bar) I\nnaively expect a \"statement level\" trigger on the base table to trigger\nonce. But this is not the case. The postgres_fdw implements this\noperation as 1000 separate insert statements on the base table, so the\ntrigger happens 1000 times instead of once. Hence there is no\ndistinction between using a statement level and a row level trigger on\nthe base table in this context.\n\nSo would this patch change the behaviour so only 10 separate insert\nstatements (each of 100 rows) would be made against the base table?\nIf so that's useful as it means improving performance using statement\nlevel triggers becomes possible. 
But it would also result in more\nobscure semantics and might break user processes dependent on the\nexisting behaviour after the patch is applied.\n\nBTW, is this subtlety documented? I haven't found anything, but I'm happy\nto be proved wrong.\n\nTim\n\n-- \nThe University of Edinburgh is a charitable body, registered in\nScotland, with registration number SC005336.\n\n\n\n", "msg_date": "Wed, 11 Nov 2020 16:04:37 +0000 (GMT)", "msg_from": "Tim.Colles@ed.ac.uk", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "From: timc@corona.is.ed.ac.uk <timc@corona.is.ed.ac.uk> On Behalf Of\n> Does this patch affect trigger semantics on the base table?\n> \n> At the moment when I insert 1000 rows into a postgres_fdw table using a\n> single insert statement (e.g. INSERT INTO fdw_foo SELECT ... FROM bar) I\n> naively expect a \"statement level\" trigger on the base table to trigger\n> once. But this is not the case. The postgres_fdw implements this\n> operation as 1000 separate insert statements on the base table, so the\n> trigger happens 1000 times instead of once. Hence there is no\n> distinction between using a statement level and a row level trigger on\n> the base table in this context.\n> \n> So would this patch change the behaviour so only 10 separate insert\n> statements (each of 100 rows) would be made against the base table?\n> If so that's useful as it means improving performance using statement\n> level triggers becomes possible. But it would also result in more\n> obscure semantics and might break user processes dependent on the\n> existing behaviour after the patch is applied.\n\nYes, the number of times the statement-level trigger defined on the base (remote) table fires will be reduced, as you said.\n\n\n> BTW, is this subtlety documented? I haven't found anything, but I'm happy\n> to be proved wrong.\n\nUnfortunately, there doesn't seem to be any description on triggers on base tables. 
For example, if the local foreign table has an AFTER ROW trigger and its remote base table has a BEFORE ROW trigger that modifies the input record, it seems that the AFTER ROW trigger doesn't see the modified record.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Thu, 12 Nov 2020 02:06:49 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "Hello,\r\n\r\n\r\nModified the patch as I talked with Tomas-san. The performance results of loading one million records into a hash-partitioned table with 8 partitions are as follows:\r\n\r\n unpatched, local: 8.6 seconds\r\n\t unpatched, fdw: 113.7 seconds\r\n patched, fdw: 12.5 seconds (9x improvement)\r\n\r\nThe test scripts are also attached. Run prepare.sql once to set up tables and source data. Run local_part.sql and fdw_part.sql to load source data into a partitioned table with local partitions and a partitioned table with foreign tables respectively.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa", "msg_date": "Tue, 17 Nov 2020 09:11:55 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "On 11/17/20 10:11 AM, tsunakawa.takay@fujitsu.com wrote:\n> Hello,\n> \n> \n> Modified the patch as I talked with Tomas-san. The performance\n> results of loading one million records into a hash-partitioned table\n> with 8 partitions are as follows:\n> \n> unpatched, local: 8.6 seconds unpatched, fdw: 113.7 seconds patched,\n> fdw: 12.5 seconds (9x improvement)\n> \n> The test scripts are also attached. Run prepare.sql once to set up\n> tables and source data. 
Run local_part.sql and fdw_part.sql to load\n> source data into a partitioned table with local partitions and a\n> partitioned table with foreign tables respectively.\n> \n\nUnfortunately, this does not compile for me, because nodeModifyTable\ncalls ExecGetTouchedPartitions, which is not defined anywhere. Not sure\nwhat that's about, so I simply commented it out. That probably fails\nthe partitioned cases, but it allowed me to do some review and testing.\n\nAs for the patch, I have a couple of comments:\n\n1) As I mentioned before, I really don't think we should be doing\ndeparsing in execute_foreign_modify - that's something that should\nhappen earlier, and should be in a deparse.c function.\n\n2) I think the GUC should be replaced with a server/table option,\nsimilar to fetch_size.\n\nThe attached patch tries to address both of these points.\n\nFirstly, it adds a new deparseBulkInsertSql function that builds a\nquery for the \"full\" batch, and then uses those two queries - when we\nget a full batch we use the bulk query, otherwise we use the single-row\nquery in a loop. IMO this is cleaner than deparsing queries ad hoc in\nthe execute_foreign_modify.\n\nOf course, this might be worse when we don't have a full batch, e.g. for\na query that inserts only 50 rows with batch_size=100. If this case is\ncommon, one option would be lowering the batch_size accordingly. If we\nreally want to improve this case too, I suggest we pass more info than\njust a position of the VALUES clause - that seems a bit too hackish.\n\n\nSecondly, it adds the batch_size option to server/foreign table, and\nuses that. This is not complete, though. postgresPlanForeignModify\npasses a hard-coded value at the moment; it needs to look up\nthe correct value for the server/table from RelOptInfo or something. And\nI suppose ModifyTable infrastructure will need to determine the value\nin order to pass the correct number of slots to the FDW API.\n\nThere are a couple of other smaller changes. 
E.g. it undoes changes to\nfinish_foreign_modify, and instead calls separate functions to prepare\nthe bulk statement. It also adds list_make5/list_make6 macros, so as to\nnot have to do strange stuff with the parameter lists.\n\n\nAnd finally, this should probably add a bunch of regression tests.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 18 Nov 2020 19:57:30 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Tomas Vondra <tomas.vondra@enterprisedb.com>\r\n> Unfortunately, this does not compile for me, because nodeModifyTable calls\r\n> ExecGetTouchedPartitions, which is not defined anywhere. Not sure what's\r\n> that about, so I simply commented-out this. That probably fails the partitioned\r\n> cases, but it allowed me to do some review and testing.\r\n\r\nOuch, sorry. I'm ashamed to have forgotten including execPartition.c.\r\n\r\n\r\n> The are a couple other smaller changes. E.g. it undoes changes to\r\n> finish_foreign_modify, and instead calls separate functions to prepare the bulk\r\n> statement. It also adds list_make5/list_make6 macros, so as to not have to do\r\n> strange stuff with the parameter lists.\r\n\r\nThanks, I'll gladly take them! I wonder why I didn't think of separating deallocate_query() from finish_foreign_modify() ... perhaps my brain was dying. As for list_make5/6(), I saw your first patch avoided adding them, so I thought you found them ugly (and I felt so, too.)
But thinking about it now, there's no reason to hesitate.\r\n\r\n\r\n> A finally, this should probably add a bunch of regression tests.\r\n\r\nSure.\r\n\r\n\r\n> 1) As I mentioned before, I really don't think we should be doing deparsing in\r\n> execute_foreign_modify - that's something that should happen earlier, and\r\n> should be in a deparse.c function.\r\n...\r\n> The attached patch tries to address both of these points.\r\n> \r\n> Firstly, it adds a new deparseBulkInsertSql function, that builds a query for the\r\n> \"full\" batch, and then uses those two queries - when we get a full batch we use\r\n> the bulk query, otherwise we use the single-row query in a loop. IMO this is\r\n> cleaner than deparsing queries ad hoc in the execute_foreign_modify.\r\n...\r\n> Of course, this might be worse when we don't have a full batch, e.g. for a query\r\n> that insert only 50 rows with batch_size=100. If this case is common, one\r\n> option would be lowering the batch_size accordingly. If we really want to\r\n> improve this case too, I suggest we pass more info than just a position of the\r\n> VALUES clause - that seems a bit too hackish.\r\n...\r\n> Secondly, it adds the batch_size option to server/foreign table, and uses that.\r\n> This is not complete, though. postgresPlanForeignModify currently passes a\r\n> hard-coded value at the moment, it needs to lookup the correct value for the\r\n> server/table from RelOptInfo or something. And I suppose ModifyTable\r\n> inftractructure will need to determine the value in order to pass the correct\r\n> number of slots to the FDW API.\r\n\r\nI can sort of understand your feeling, but I'd like to reconstruct the query and prepare it in execute_foreign_modify() because:\r\n\r\n* Some of our customers use bulk insert in ECPG (INSERT ... VALUES(record1, (record2), ...) to insert a variable number of records per query. (Oracle's Pro*C has such a feature.)
So, I want to be prepared to enable such a thing with FDW.\r\n\r\n* The number of records to insert is not known during planning (in general), so it feels natural to get prepared during execution phase, or not unnatural at least.\r\n\r\n* I wanted to avoid the overhead of building the full query string for 100-record insert statement during query planning, because it may be a bit costly for usual 1-record inserts. (The overhead may be hidden behind the high communication cost of postgres_fdw, though.)\r\n\r\nSo, in terms of code cleanness, how about moving my code for rebuilding query string from execute_foreign_modify() to some new function in deparse.c?\r\n\r\n\r\n> 2) I think the GUC should be replaced with an server/table option, similar to\r\n> fetch_size.\r\n\r\nHmm, batch_size differs from fetch_size. fetch_size is a postgres_fdw-specific feature with no relevant FDW routine, while batch_size is a configuration parameter for all FDWs that implement ExecForeignBulkInsert(). The ideas I can think of are:\r\n\r\n1. Follow JDBC/ODBC and add standard FDW properties. For example, the JDBC standard defines standard connection pool properties such as maxPoolSize and minPoolSize. JDBC drivers have to provide them with those defined names. Likewise, the FDW interface requires FDW implementors to handle the foreign server option name \"max_bulk_insert_tuples\" if he/she wants to provide bulk insert feature and implement ExecForeignBulkInsert(). The core executor gets that setting from the FDW by calling a new FDW routine like GetMaxBulkInsertTuples(). Sigh...\r\n\r\n2. Add a new max_bulk_insert_tuples reloption to CREATE/ALTER FOREIGN TABLE. executor gets the value from Relation and uses it. (But is this a table-specific configuration? I don't think so, sigh...)\r\n\r\n3. Adopt the current USERSET GUC max_bulk_insert_tuples. 
I think this is enough because the user can change the setting per session, application, and database.\r\n\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Thu, 19 Nov 2020 02:43:07 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "On 11/19/20 3:43 AM, tsunakawa.takay@fujitsu.com wrote:\n> From: Tomas Vondra <tomas.vondra@enterprisedb.com>\n>> Unfortunately, this does not compile for me, because\n>> nodeModifyTable calls ExecGetTouchedPartitions, which is not\n>> defined anywhere. Not sure what's that about, so I simply\n>> commented-out this. That probably fails the partitioned cases, but\n>> it allowed me to do some review and testing.\n> \n> Ouch, sorry. I'm ashamed to have forgotten including\n> execPartition.c.\n> \n\nNo reason to feel ashamed. Mistakes do happen from time to time.\n\n> \n>> The are a couple other smaller changes. E.g. it undoes changes to \n>> finish_foreign_modify, and instead calls separate functions to\n>> prepare the bulk statement. It also adds list_make5/list_make6\n>> macros, so as to not have to do strange stuff with the parameter\n>> lists.\n> \n> Thanks, I'll take them thankfully! I wonder why I didn't think of\n> separating deallocate_query() from finish_foreign_modify() ...\n> perhaps my brain was dying. As for list_make5/6(), I saw your first\n> patch avoid adding them, so I thought you found them ugly (and I felt\n> so, too.) But thinking now, there's no reason to hesitate it.\n> \n\nI think it's often easier to look changes like deallocate_query with a\nbit of distance, not while hacking on the patch and just trying to make\nit work somehow.\n\nFor the list_make# stuff, I think I've decided to do the simplest thing\npossible in extension, without having to recompile the server. 
But I\nthink for a proper patch it's better to keep it more readable.\n\n> ...\n> \n>> 1) As I mentioned before, I really don't think we should be doing\n>> deparsing in execute_foreign_modify - that's something that should\n>> happen earlier, and should be in a deparse.c function.\n> ...\n>> The attached patch tries to address both of these points.\n>> \n>> Firstly, it adds a new deparseBulkInsertSql function, that builds a\n>> query for the \"full\" batch, and then uses those two queries - when\n>> we get a full batch we use the bulk query, otherwise we use the\n>> single-row query in a loop. IMO this is cleaner than deparsing\n>> queries ad hoc in the execute_foreign_modify.\n> ...\n>> Of course, this might be worse when we don't have a full batch,\n>> e.g. for a query that insert only 50 rows with batch_size=100. If\n>> this case is common, one option would be lowering the batch_size\n>> accordingly. If we really want to improve this case too, I suggest\n>> we pass more info than just a position of the VALUES clause - that\n>> seems a bit too hackish.\n> ...\n>> Secondly, it adds the batch_size option to server/foreign table,\n>> and uses that. This is not complete, though.\n>> postgresPlanForeignModify currently passes a hard-coded value at\n>> the moment, it needs to lookup the correct value for the \n>> server/table from RelOptInfo or something. And I suppose\n>> ModifyTable inftractructure will need to determine the value in\n>> order to pass the correct number of slots to the FDW API.\n> \n> I can sort of understand your feeling, but I'd like to reconstruct\n> the query and prepare it in execute_foreign_modify() because:\n> \n> * Some of our customers use bulk insert in ECPG (INSERT ...\n> VALUES(record1, (record2), ...) to insert variable number of records\n> per query. (Oracle's Pro*C has such a feature.) 
So, I want to be\n> prepared to enable such a thing with FDW.\n> \n> * The number of records to insert is not known during planning (in\n> general), so it feels natural to get prepared during execution phase,\n> or not unnatural at least.\n> \n\nI think we should differentiate between \"deparsing\" and \"preparing\".\n\n> * I wanted to avoid the overhead of building the full query string\n> for 100-record insert statement during query planning, because it may\n> be a bit costly for usual 1-record inserts. (The overhead may be\n> hidden behind the high communication cost of postgres_fdw, though.)\n> \n\nHmm, ok. I haven't tried how expensive that would be, but my assumption\nwas it's much cheaper than the latency we save. But maybe I'm wrong.\n\n> So, in terms of code cleanness, how about moving my code for\n> rebuilding query string from execute_foreign_modify() to some new\n> function in deparse.c?\n> \n\nThat might work, yeah. I suggest we do this:\n\n1) try to use the same approach for both single-row inserts and larger\nbatches, to not have a lot of different branches\n\n2) modify deparseInsertSql to produce not the \"final\" query but some\nintermediate representation useful to generate queries inserting\narbitrary number of rows\n\n3) in execute_foreign_modify remember the last number of rows, and only\nrebuild/replan the query when it changes\n\n> \n>> 2) I think the GUC should be replaced with an server/table option,\n>> similar to fetch_size.\n> \n> Hmm, batch_size differs from fetch_size. fetch_size is a\n> postgres_fdw-specific feature with no relevant FDW routine, while\n> batch_size is a configuration parameter for all FDWs that implement\n> ExecForeignBulkInsert(). The ideas I can think of are:\n> \n> 1. Follow JDBC/ODBC and add standard FDW properties. For example,\n> the JDBC standard defines standard connection pool properties such as\n> maxPoolSize and minPoolSize. JDBC drivers have to provide them with\n> those defined names. 
Likewise, the FDW interface requires FDW\n> implementors to handle the foreign server option name\n> \"max_bulk_insert_tuples\" if he/she wants to provide bulk insert\n> feature and implement ExecForeignBulkInsert(). The core executor\n> gets that setting from the FDW by calling a new FDW routine like\n> GetMaxBulkInsertTuples(). Sigh...\n> \n> 2. Add a new max_bulk_insert_tuples reloption to CREATE/ALTER FOREIGN\n> TABLE. executor gets the value from Relation and uses it. (But is\n> this a table-specific configuration? I don't think so, sigh...)\n> \n\nI do agree there's a difference between fetch_size and batch_size. For\nfetch_size, it's internal to postgres_fdw - no external code needs to\nknow about it. For batch_size that's not the case, the ModifyTable core\ncode needs to be aware of that.\n\nThat means the \"batch_size\" is becoming part of the API, and IMO the way\nto do that is by exposing it as an explicit API method. So +1 to add\nsomething like GetMaxBulkInsertTuples.\n\nIt still needs to be configurable at the server/table level, though. The\nnew API method should only inform ModifyTable about the final max batch\nsize the FDW decided to use.\n\n> 3. Adopt the current USERSET GUC max_bulk_insert_tuples. I think\n> this is enough because the user can change the setting per session,\n> application, and database.\n> \n\nI don't think this is usable in practice, because a single session may\nbe using multiple FDW servers, with different implementations, latency\nto the data nodes, etc. 
It's unlikely a single GUC value will be\nsuitable for all of them.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 19 Nov 2020 15:04:59 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Tomas Vondra <tomas.vondra@enterprisedb.com>\r\n> I don't think this is usable in practice, because a single session may\r\n> be using multiple FDW servers, with different implementations, latency\r\n> to the data nodes, etc. It's unlikely a single GUC value will be\r\n> suitable for all of them.\r\n\r\nThat makes sense. The row size varies from table to table, so the user may want to tune this option to reduce memory consumption.\r\n\r\nI think the attached patch has reflected all your comments. I hope this will pass..\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa", "msg_date": "Mon, 23 Nov 2020 02:17:14 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "\nOn 11/23/20 3:17 AM, tsunakawa.takay@fujitsu.com wrote:\n> From: Tomas Vondra <tomas.vondra@enterprisedb.com>\n>> I don't think this is usable in practice, because a single session\n>> may be using multiple FDW servers, with different implementations,\n>> latency to the data nodes, etc. It's unlikely a single GUC value\n>> will be suitable for all of them.\n> \n> That makes sense. The row size varies from table to table, so the\n> user may want to tune this option to reduce memory consumption.\n> \n> I think the attached patch has reflected all your comments. I hope\n> this will pass..\n> \n\nThanks - I didn't have time for a thorough review at the moment, so I\nonly skimmed through the diff and did a couple very simple tests. 
And I\nthink overall it looks quite nice.\n\nA couple of minor comments/questions:\n\n1) We're calling it \"batch_size\" but the API function is named\npostgresGetMaxBulkInsertTuples(). Perhaps we should rename the function\nto postgresGetModifyBatchSize()? That has the advantage it'd work if we\never add support for batching to UPDATE/DELETE.\n\n2) Do we have to look up the batch_size in create_foreign_modify (in\nserver/table options)? I'd have expected to look it up while planning\nthe modify and then pass it through the list, just like the other\nFdwModifyPrivateIndex stuff. But maybe that's not possible.\n\n3) That reminds me - should we show the batching info on EXPLAIN? That\nseems like a fairly interesting thing to show to the user. Perhaps\nshowing the average batch size would also be useful? Or maybe not, since we\ncreate the batches as large as possible, with the last one smaller.\n\n4) It seems that ExecInsert executes GetMaxBulkInsertTuples() over and\nover for every tuple. I don't know if that has a measurable impact, but it\nseems a bit excessive IMO. I don't think we should support the batch\nsize changing during execution (seems tricky).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 24 Nov 2020 02:44:45 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Tomas Vondra <tomas.vondra@enterprisedb.com>\r\n> 1) We're calling it \"batch_size\" but the API function is named\r\n> postgresGetMaxBulkInsertTuples(). Perhaps we should rename the function\r\n> to postgresGetModifyBatchSize()? That has the advantage it'd work if we\r\n> ever add support for batching to UPDATE/DELETE.\r\n\r\nActually, I was in two minds whether the term batch or bulk is better.
Because Oracle uses \"bulk insert\" and \"bulk fetch\", like in FETCH cur BULK COLLECT INTO array and FORALL in array INSERT INTO, while JDBC uses batch as in \"batch updates\" and its API method names (addBatch, executeBatch).\r\n\r\nBut it seems better or common to use batch according to the etymology and the following Stack Overflow page:\r\n\r\nhttps://english.stackexchange.com/questions/141884/which-is-a-better-and-commonly-used-word-bulk-or-batch\r\n\r\nOTOH, as for the name GetModifyBatchSize() you suggest, I think GetInsertBatchSize may be better. That is, this API deals with multiple records in a single INSERT statement. Your GetModifyBatchSize will be reserved for statement batching when libpq has supported batch/pipelining to execute multiple INSERT/UPDATE/DELETE statements, as in the following JDBC batch updates. What do you think?\r\n\r\nCODE EXAMPLE 14-1 Creating and executing a batch of insert statements \r\n--------------------------------------------------\r\nStatement stmt = con.createStatement(); \r\nstmt.addBatch(\"INSERT INTO employees VALUES (1000, 'Joe Jones')\"); \r\nstmt.addBatch(\"INSERT INTO departments VALUES (260, 'Shoe')\"); \r\nstmt.addBatch(\"INSERT INTO emp_dept VALUES (1000, 260)\"); \r\n\r\n// submit a batch of update commands for execution \r\nint[] updateCounts = stmt.executeBatch(); \r\n--------------------------------------------------\r\n\r\n\r\n> 2) Do we have to lookup the batch_size in create_foreign_modify (in\r\n> server/table options)? I'd have expected to look it up while planning\r\n> the modify and then pass it through the list, just like the other\r\n> FdwModifyPrivateIndex stuff. But maybe that's not possible.\r\n\r\nDon't worry, create_foreign_modify() is called from PlanForeignModify() during planning. 
Unfortunately, it's also called from BeginForeignInsert(), but other stuff passed to create_foreign_modify() including the query string is constructed there.\r\n\r\n\r\n> 3) That reminds me - should we show the batching info on EXPLAIN? That\r\n> seems like a fairly interesting thing to show to the user. Perhaps\r\n> showing the average batch size would also be useful? Or maybe not, we\r\n> create the batches as large as possible, with the last one smaller.\r\n\r\nHmm, maybe batch_size is not for EXPLAIN because its value doesn't change dynamically based on the planning or system state unlike shared buffers and parallel workers. OTOH, I sometimes want to see what configuration parameter values the user set, such as work_mem, enable_*, and shared_buffers, together with the query plan (EXPLAIN and auto_explain). For example, it'd be nice if EXPLAIN (parameters on) could do that. Some relevant FDW-related parameters could be included in that output.\r\n\r\n> 4) It seems that ExecInsert executes GetMaxBulkInsertTuples() over and\r\n> over for every tuple. I don't know it that has measurable impact, but it\r\n> seems a bit excessive IMO. I don't think we should support the batch\r\n> size changing during execution (seems tricky).\r\n\r\nDon't worry about this, too. GetMaxBulkInsertTuples() just returns a value that was already saved in a struct in create_foreign_modify().\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Tue, 24 Nov 2020 08:45:40 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "\n\nOn 11/24/20 9:45 AM, tsunakawa.takay@fujitsu.com wrote:\n> From: Tomas Vondra <tomas.vondra@enterprisedb.com>\n>> 1) We're calling it \"batch_size\" but the API function is named\n>> postgresGetMaxBulkInsertTuples(). Perhaps we should rename the function\n>> to postgresGetModifyBatchSize()? 
That has the advantage it'd work if we\n>> ever add support for batching to UPDATE/DELETE.\n> \n> Actually, I was in two minds whether the term batch or bulk is better. Because Oracle uses \"bulk insert\" and \"bulk fetch\", like in FETCH cur BULK COLLECT INTO array and FORALL in array INSERT INTO, while JDBC uses batch as in \"batch updates\" and its API method names (addBatch, executeBatch).\n> \n> But it seems better or common to use batch according to the etymology and the following Stack Overflow page:\n> \n> https://english.stackexchange.com/questions/141884/which-is-a-better-and-commonly-used-word-bulk-or-batch\n> \n> OTOH, as for the name GetModifyBatchSize() you suggest, I think GetInsertBatchSize may be better. That is, this API deals with multiple records in a single INSERT statement. Your GetModifyBatchSize will be reserved for statement batching when libpq has supported batch/pipelining to execute multiple INSERT/UPDATE/DELETE statements, as in the following JDBC batch updates. What do you think?\n> \n\nI don't know. I was really only thinking about batching in the context\nof a single DML command, not about batching of multiple commands at the\nprotocol level. IMHO it's far more likely we'll add support for batching\nfor DELETE/UPDATE than libpq pipelining, which seems rather different\nfrom how the FDW API works. 
Which is why I was suggesting to use a name\nthat would work for all DML commands, not just for inserts.\n\n> CODE EXAMPLE 14-1 Creating and executing a batch of insert statements \n> --------------------------------------------------\n> Statement stmt = con.createStatement(); \n> stmt.addBatch(\"INSERT INTO employees VALUES (1000, 'Joe Jones')\"); \n> stmt.addBatch(\"INSERT INTO departments VALUES (260, 'Shoe')\"); \n> stmt.addBatch(\"INSERT INTO emp_dept VALUES (1000, 260)\"); \n> \n> // submit a batch of update commands for execution \n> int[] updateCounts = stmt.executeBatch(); \n> --------------------------------------------------\n> \n\nSure. We already have a patch to support something like this at the\nlibpq level, IIRC. But I'm not sure how well that matches the FDW API\napproach in general.\n\n> \n>> 2) Do we have to lookup the batch_size in create_foreign_modify (in\n>> server/table options)? I'd have expected to look it up while planning\n>> the modify and then pass it through the list, just like the other\n>> FdwModifyPrivateIndex stuff. But maybe that's not possible.\n> \n> Don't worry, create_foreign_modify() is called from PlanForeignModify() during planning. Unfortunately, it's also called from BeginForeignInsert(), but other stuff passed to create_foreign_modify() including the query string is constructed there.\n> \n\nHmm, ok.\n\n> \n>> 3) That reminds me - should we show the batching info on EXPLAIN? That\n>> seems like a fairly interesting thing to show to the user. Perhaps\n>> showing the average batch size would also be useful? Or maybe not, we\n>> create the batches as large as possible, with the last one smaller.\n> \n> Hmm, maybe batch_size is not for EXPLAIN because its value doesn't change dynamically based on the planning or system state unlike shared buffers and parallel workers. 
OTOH, I sometimes want to see what configuration parameter values the user set, such as work_mem, enable_*, and shared_buffers, together with the query plan (EXPLAIN and auto_explain). For example, it'd be nice if EXPLAIN (parameters on) could do that. Some relevant FDW-related parameters could be included in that output.\n> \n\nNot sure, but I'd guess knowing whether batching is used would be\nuseful. We only print the single-row SQL query, which kinda gives the\nimpression that there's no batching.\n\n>> 4) It seems that ExecInsert executes GetMaxBulkInsertTuples() over and\n>> over for every tuple. I don't know it that has measurable impact, but it\n>> seems a bit excessive IMO. I don't think we should support the batch\n>> size changing during execution (seems tricky).\n> \n> Don't worry about this, too. GetMaxBulkInsertTuples() just returns a value that was already saved in a struct in create_foreign_modify().\n> \n\nWell, I do worry for two reasons.\n\nFirstly, the fact that in postgres_fdw the call is cheap does not mean\nit'll be like that in every other FDW. Presumably, the other FDWs might\ncache it in the struct and do the same thing, of course.\n\nBut the fact that we're calling it over and over for each row kinda\nseems like we allow the value to change during execution, but I very\nmuch doubt the code is expecting that. I haven't tried, but assume the\nfunction first returns 10 and then 100. ISTM the code will allocate\nri_Slots with 25 slots, but then we'll try stashing 100 tuples there.\nThat can't end well. 
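To make that hazard concrete, here is a toy, self-contained sketch of the invariant being argued for - query the batch size exactly once at setup time, and use that single cached value both to size the slot array and to decide when to flush. All names here (ModifyState, fdw_get_batch_size, insert_one) are made up for illustration; none of this is actual PostgreSQL executor or FDW code:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model of the executor side. The batch size is read from the
 * "FDW" exactly once, at setup, and never re-queried per tuple, so the
 * allocation of the slot array and the flush condition can't diverge. */
typedef struct ModifyState
{
    int   batch_size;    /* cached once at begin_modify() */
    int   num_buffered;
    int  *slots;         /* stand-in for ri_Slots */
} ModifyState;

static int
fdw_get_batch_size(void)
{
    return 100;          /* e.g. taken from a server/table option */
}

static ModifyState *
begin_modify(void)
{
    ModifyState *state = malloc(sizeof(ModifyState));

    state->batch_size = fdw_get_batch_size();   /* the one and only call */
    state->slots = malloc(sizeof(int) * state->batch_size);
    state->num_buffered = 0;
    return state;
}

/* Buffer one tuple; returns the number of rows flushed (0 if only buffered). */
static int
insert_one(ModifyState *state, int tuple)
{
    state->slots[state->num_buffered++] = tuple;
    if (state->num_buffered == state->batch_size)
    {
        int n = state->num_buffered;

        state->num_buffered = 0;    /* pretend we sent the batch */
        return n;
    }
    return 0;
}
```

With the size cached up front, a later change in whatever the FDW would report cannot desynchronize the allocation from the flush check - which is exactly the failure mode of 10-then-100 above.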
Sure, we can claim it's a bug in the FDW extension,\nbut it's also due to the API design.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 24 Nov 2020 18:08:57 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On Thu, Oct 8, 2020 at 10:40 AM tsunakawa.takay@fujitsu.com <\ntsunakawa.takay@fujitsu.com> wrote:\n\n>\n> Thank you for picking up this. I'm interested in this topic, too. (As an\n> aside, we'd like to submit a bulk insert patch for ECPG in the near future.)\n>\n> As others referred, Andrey-san's fast COPY to foreign partitions is also\n> promising. But I think your bulk INSERT is a separate feature and offers\n> COPY cannot do -- data transformation during loading with INSERT SELECT and\n> CREATE TABLE AS SELECT.\n>\n> Is there anything that makes you worry and stops development? Could I\n> give it a try to implement this (I'm not sure I can, sorry. I'm worried if\n> we can change the executor's call chain easily.)\n>\n>\nI suggest that when developing this, you keep in mind the ongoing work on\nthe libpq pipelining/batching enhancements, and also the way many\ninterfaces to foreign data sources support asynchronous, concurrent\noperations.\n\nBest results with postgres_fdw insert batching would be achieved if it can\nalso send its batches as asynchronous queries and only block when it's\nrequired to report on the results of the work. 
This will also be true of\nany other FDW where the backing remote interface can support asynchronous\nconcurrent or pipelined operation.\n\nI'd argue it's pretty much vital for decent performance when talking to a\ncloud database from an on-prem server for example, or any other time that\nround-trip-time reduction is important.\n\nThe most important characteristic of an FDW API to permit this would be\ndecoupling of request and response into separate non-blocking calls that\ndon't have to occur in ordered pairs. Instead of \"insert_foo(foo) ->\ninsert_result\", have \"queue_insert_foo(foo) -> future_result\",\n\"get_result_if_available(future_result) -> maybe result\" and\n\"get_result_blocking(future_result) -> result\". Permit multiple\nqueue_insert_foo(...)s without a/b interleaving with result fetches being\nrequired.\n\nIdeally it'd be able to accumulate small batches of inserts locally and\nsend a batch to the remote end once it's accumulated enough. But instead of\nblocking waiting for the result, return control to the executor after\nsending, without forcing a socket flush (which might block) and without\nwaiting to learn what the outcome was. 
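As a toy model of that decoupled request/future shape (all names are hypothetical - this is neither libpq nor the FDW API, just the pattern sketched above, with the round trip simulated by a drain step):

```c
#include <assert.h>

/* Illustrative only: a request is queued without blocking and returns a
 * handle immediately; the result is materialized only when demanded. */
#define MAX_PENDING 64

typedef struct FutureResult
{
    int ready;
    int rowcount;
} FutureResult;

typedef struct Connection
{
    int          num_pending;
    FutureResult pending[MAX_PENDING];
} Connection;

/* Non-blocking: stash the request, hand back a future. */
static FutureResult *
queue_insert(Connection *conn, int nrows)
{
    FutureResult *f;

    assert(conn->num_pending < MAX_PENDING);
    f = &conn->pending[conn->num_pending++];
    f->ready = 0;
    f->rowcount = nrows;    /* remembered, but not yet "confirmed" */
    return f;
}

/* Simulate the round trip: everything queued so far completes at once. */
static void
drain(Connection *conn)
{
    for (int i = 0; i < conn->num_pending; i++)
        conn->pending[i].ready = 1;
}

/* Blocking fetch: only here do we pay for a (shared) round trip. */
static int
get_result_blocking(Connection *conn, FutureResult *f)
{
    if (!f->ready)
        drain(conn);        /* one flush covers every outstanding request */
    return f->rowcount;
}
```

The point of the shape is that get_result_blocking pays for a single round trip that completes every outstanding request, instead of one round trip per insert.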
Allow new batches to be accumulated\nand sent before the results of the first batch are received, so long as\nit's within the same executor node so we don't make any unfortunate\nmistakes with mixing things up in compound statements or functions etc.\nOnly report outcomes like rowcounts lazily when results are received, or\nwhen required to do so.\n\nIf now we have\n\nREQUEST -> [block] -> RESULT\n~~ round trip delay ~~\nREQUEST -> [block] -> RESULT\n~~ round trip delay ~~\nREQUEST -> [block] -> RESULT\n~~ round trip delay ~~\nREQUEST -> [block] -> RESULT\n\nand batching would give us\n\n{ REQUEST, REQUEST} -> [block] -> { RESULT, RESULT }\n~~ round trip delay ~~\n{ REQUEST, REQUEST} -> [block] -> { RESULT, RESULT }\n\nconsider if room can be left in the batching API to permit:\n\n{ REQUEST, REQUEST} -> [nonblocking send...]\n{ REQUEST, REQUEST} -> [nonblocking send...]\n~~ round trip delay ~~\n[....] -> RESULT, RESULT\n[....] -> RESULT, RESULT\n\n\n... where we only actually block at the point where the result is required\nas input into the next node.\n\nHonestly I don't know the executor structure well enough to say if this is\neven remotely feasible right now. Maybe Andres may be able to comment. But\nplease keep it in mind if you're thinking of making FDW API changes.", "msg_date": "Wed, 25 Nov 2020 10:10:44 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Tomas Vondra <tomas.vondra@enterprisedb.com>\r\n> On 11/24/20 9:45 AM, tsunakawa.takay@fujitsu.com wrote:\r\n> > OTOH, as for the name GetModifyBatchSize() you suggest, I think\r\n> GetInsertBatchSize may be better. That is, this API deals with multiple\r\n> records in a single INSERT statement. Your GetModifyBatchSize will be\r\n> reserved for statement batching when libpq has supported batch/pipelining to\r\n> execute multiple INSERT/UPDATE/DELETE statements, as in the following\r\n> JDBC batch updates. What do you think?\r\n> >\r\n> \r\n> I don't know. I was really only thinking about batching in the context\r\n> of a single DML command, not about batching of multiple commands at the\r\n> protocol level.
IMHO it's far more likely we'll add support for batching\r\n> for DELETE/UPDATE than libpq pipelining, which seems rather different\r\n> from how the FDW API works. Which is why I was suggesting to use a name\r\n> that would work for all DML commands, not just for inserts.\r\n\r\nRight, I can't imagine now how the interaction among the client, server core and FDWs would work regarding statement batching. So I'll take your suggested name.\r\n\r\n\r\n> Not sure, but I'd guess knowing whether batching is used would be\r\n> useful. We only print the single-row SQL query, which kinda gives the\r\n> impression that there's no batching.\r\n\r\nAdded in postgres_fdw like \"Remote SQL\" when EXPLAIN VERBOSE is run.\r\n\r\n\r\n> > Don't worry about this, too. GetMaxBulkInsertTuples() just returns a value\r\n> that was already saved in a struct in create_foreign_modify().\r\n> >\r\n> \r\n> Well, I do worry for two reasons.\r\n> \r\n> Firstly, the fact that in postgres_fdw the call is cheap does not mean\r\n> it'll be like that in every other FDW. Presumably, the other FDWs might\r\n> cache it in the struct and do the same thing, of course.\r\n> \r\n> But the fact that we're calling it over and over for each row kinda\r\n> seems like we allow the value to change during execution, but I very\r\n> much doubt the code is expecting that. I haven't tried, but assume the\r\n> function first returns 10 and then 100. ISTM the code will allocate\r\n> ri_Slots with 25 slots, but then we'll try stashing 100 tuples there.\r\n> That can't end well. Sure, we can claim it's a bug in the FDW extension,\r\n> but it's also due to the API design.\r\n\r\nYou're worried about FDWs other than postgres_fdw. That's reasonable. I insisted in other threads that PG developers care only about postgres_fdw, not other FDWs, when designing the FDW interface, but I myself made the same mistake.
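To illustrate the hazard Tomas describes, and the obvious way around it, here is a toy C sketch (all names — ModifyState, get_modify_batch_size, begin_modify, buffer_tuple — are invented for the example; this is not the actual executor or FDW code): the batch-size callback is consulted exactly once per relation and cached, so a later, different answer can never overrun a buffer sized from the first one.

```c
/* Toy illustration of the "ask once, cache the answer" pattern for a
 * batch-size callback. All names are invented for this example.
 */
#include <assert.h>

typedef struct ModifyState
{
    int batch_size;  /* cached once per relation per statement */
    int n_slots;     /* tuples currently buffered */
} ModifyState;

/* Stand-in for a misbehaving FDW callback whose answer changes. */
static int
get_modify_batch_size(void)
{
    static int call = 0;

    return (call++ == 0) ? 10 : 100;
}

/* Called once at setup: the later "100" is never seen, so a buffer
 * sized for 10 tuples cannot be overrun. */
static void
begin_modify(ModifyState *state)
{
    state->batch_size = get_modify_batch_size();
    state->n_slots = 0;
}

/* The per-row path uses only the cached value; returns 0 when the
 * caller must flush before buffering more. */
static int
buffer_tuple(ModifyState *state)
{
    if (state->n_slots >= state->batch_size)
        return 0;
    state->n_slots++;
    return 1;
}
```

Calling the callback per row instead would let the second answer (100) race past the buffer sized from the first (10), which is exactly the "that can't end well" case above.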
I made changes so that the executor calls GetModifyBatchSize() once per relation per statement.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa", "msg_date": "Wed, 25 Nov 2020 05:04:36 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Craig Ringer <craig.ringer@enterprisedb.com> \r\n> I suggest that when developing this, you keep in mind the ongoing work on the libpq pipelining/batching enhancements, and also the way many interfaces to foreign data sources support asynchronous, concurrent operations.\r\n\r\nYes, thank you, I'll bear it in mind. I understand it's a feature for batching multiple kinds of SQL statements like JDBC's batch updates.\r\n\r\n\r\n> I'd argue it's pretty much vital for decent performance when talking to a cloud database from an on-prem server for example, or any other time that round-trip-time reduction is important.\r\n\r\nYeah, I'm thinking of the data migration and integration as the prominent use case.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Wed, 25 Nov 2020 06:31:53 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "On 11/25/20 7:31 AM, tsunakawa.takay@fujitsu.com wrote:\n> From: Craig Ringer <craig.ringer@enterprisedb.com>\n>> I suggest that when developing this, you keep in mind the ongoing\n>> work on the libpq pipelining/batching enhancements, and also the\n>> way many interfaces to foreign data sources support asynchronous,\n>> concurrent operations.\n> \n> Yes, thank you, I'll bear it in mind. I understand it's a feature for\n> batching multiple kinds of SQL statements like JDBC's batch updates.\n> \n\nI haven't followed the libpq pipelining thread very closely.
It does\nseem related, but I'm not sure if it's a good match for this patch, or\nhow far it is from being committable ...\n\n> \n>> I'd argue it's pretty much vital for decent performance when\n>> talking to a cloud database from an on-prem server for example, or\n>> any other time that round-trip-time reduction is important.\n> \n> Yeah, I'm thinking of the data migration and integration as the\n> prominent use case.\n> \n\nWell, good that we all agree this is a useful feature to have (in\ngeneral). The question is whether postgres_fdw should be doing batching\non its own (per this thread) or rely on some other feature (libpq\npipelining). I haven't followed the other thread, so I don't have an\nopinion on that.\n\nNote however we're doing two things here, actually - we're implementing\ncustom batching for postgres_fdw, but we're also extending the FDW API\nto allow other implementations to do the same thing. And most of them won't\nbe able to rely on the connection library providing that, I believe.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 25 Nov 2020 21:04:51 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Tomas Vondra <tomas.vondra@enterprisedb.com>\r\n> Well, good that we all agree this is a useful feature to have (in\r\n> general). The question is whether postgres_fdw should be doing batching\r\n> on its own (per this thread) or rely on some other feature (libpq\r\n> pipelining). I haven't followed the other thread, so I don't have an\r\n> opinion on that.\r\n\r\nWell, as someone said in this thread, I think bulk insert is much more common than updates/deletes. Thus, major DBMSs have INSERT VALUES(record1), (record2)... and INSERT SELECT. Oracle has direct path INSERT in addition.
As for the comparison of INSERT with multiple records and libpq batching (= multiple INSERTs), I think the former is more efficient because the amount of data transfer is less and the parsing-planning of INSERT for each record is eliminated.\r\n\r\nI never deny the usefulness of libpq batch/pipelining, but I'm not sure if app developers would really use it. If they want to reduce the client-server round-trips, won't they use traditional stored procedures? Yes, the stored procedure language is very DBMS-specific. Then, I'd like to know what kind of well-known applications are using a standard batching API like JDBC's batch updates. (Sorry, I think that should be discussed in the libpq batch/pipelining thread and this thread should not be polluted.)\r\n\r\n\r\n> Note however we're doing two things here, actually - we're implementing\r\n> custom batching for postgres_fdw, but we're also extending the FDW API\r\n> to allow other implementations to do the same thing. And most of them won't\r\n> be able to rely on the connection library providing that, I believe.\r\n\r\nI'm afraid so, too. Then, postgres_fdw would be an example that other FDW developers would look at when they use INSERT with multiple records.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n\r\n\r\n", "msg_date": "Thu, 26 Nov 2020 01:48:08 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "\n\nOn 11/26/20 2:48 AM, tsunakawa.takay@fujitsu.com wrote:\n> From: Tomas Vondra <tomas.vondra@enterprisedb.com>\n>> Well, good that we all agree this is a useful feature to have (in \n>> general). The question is whether postgres_fdw should be doing \n>> batching on its own (per this thread) or rely on some other \n>> feature (libpq pipelining).
I haven't followed the other thread,\n>> so I don't have an opinion on that.\n> \n> Well, as someone said in this thread, I think bulk insert is much \n> more common than updates/deletes. Thus, major DBMSs have INSERT \n> VALUES(record1), (record2)... and INSERT SELECT. Oracle has direct \n> path INSERT in addition. As for the comparison of INSERT with \n> multiple records and libpq batching (= multiple INSERTs), I think\n> the former is more efficient because the amount of data transfer is\n> less and the parsing-planning of INSERT for each record is\n> eliminated.\n> \n> I never deny the usefulness of libpq batch/pipelining, but I'm not \n> sure if app developers would really use it. If they want to reduce \n> the client-server round-trips, won't they use traditional stored \n> procedures? Yes, the stored procedure language is very \n> DBMS-specific. Then, I'd like to know what kind of well-known \n> applications are using a standard batching API like JDBC's batch \n> updates. (Sorry, I think that should be discussed in the libpq \n> batch/pipelining thread and this thread should not be polluted.)\n> \n\nNot sure how this is related to app developers? I think the idea was\nthat the libpq features might be useful between the two PostgreSQL\ninstances. I.e. the postgres_fdw would use the libpq batching to send\nchunks of data to the other side.\n\n> \n>> Note however we're doing two things here, actually - we're \n>> implementing custom batching for postgres_fdw, but we're also \n>> extending the FDW API to allow other implementations to do the same \n>> thing. And most of them won't be able to rely on the connection \n>> library providing that, I believe.\n> \n> I'm afraid so, too. Then, postgres_fdw would be an example that \n> other FDW developers would look at when they use INSERT with\n> multiple records.\n> \n\nWell, my point was that we could keep the API, but maybe it should be\nimplemented using the proposed libpq batching.
They could still use the\npostgres_fdw example how to use the API, but the internals would need to\nbe different, of course.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 26 Nov 2020 20:34:04 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On Fri, Nov 27, 2020 at 3:34 AM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n\n> Not sure how is this related to app developers? I think the idea was\n> that the libpq features might be useful between the two PostgreSQL\n> instances. I.e. the postgres_fdw would use the libpq batching to send\n> chunks of data to the other side.\n>\n\nRight. Or at least, when designing the FDW API, do so in a way that doesn't\nstrictly enforce Request/Response alternation without interleaving, so you\ncan benefit from it in the future.\n\nIt's hardly just libpq after all. A *lot* of client libraries and drivers\nwill be capable of non-blocking reads or writes with multiple ones in\nflight at once. Any REST-like API generally can, for example. So for\nperformance reasons we should if possible avoid baking the assumption that\na request cannot be made until the response from the previous request is\nreceived, and instead have a wait interface to use for when a new request\nrequires the prior response's result before it can proceed.\n\nWell, my point was that we could keep the API, but maybe it should be\n> implemented using the proposed libpq batching. They could still use the\n> postgres_fdw example how to use the API, but the internals would need to\n> be different, of course.\n>\n\nSure. 
Or just allow room for it in the FDW API, though using the pipelining\nsupport natively would be nice.\n\nIf the FDW interface allows Pg to\n\nInsert A\nInsert B\nInsert C\nWait for outcome of insert A\n...\n\nthen that'll be useful for using libpq pipelining, but also FDWs for all\nsorts of other DBs, especially cloud-y ones where latency is a big concern.", "msg_date": "Fri, 27 Nov 2020 09:46:38 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Tomas Vondra <tomas.vondra@enterprisedb.com>\r\n> Not sure how is this related to app developers? I think the idea was\r\n> that the libpq features might be useful between the two PostgreSQL\r\n> instances. I.e. the postgres_fdw would use the libpq batching to send\r\n> chunks of data to the other side.\r\n\r\n> Well, my point was that we could keep the API, but maybe it should be\r\n> implemented using the proposed libpq batching. They could still use the\r\n> postgres_fdw example how to use the API, but the internals would need to\r\n> be different, of course.\r\n\r\nYes, I understand them. I just wondered if app developers use the statement batching API for libpq or JDBC in what kind of apps. That is, I talked about the batching API itself, not related to FDW. (So, I mentioned I think I should ask such a question in the libpq batching thread.)\r\n\r\nI expect postgresExecForeignBatchInsert() would be able to use the libpq batching API, because it receives an array of tuples and can generate and issue INSERT statement for each tuple. But I'm not sure either if the libpq batching is likely to be committed in the near future. (The thread looks too long...)
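As a concrete reference for the multi-record INSERT being discussed, folding a batch of buffered tuples into a single remote statement can be sketched like this in C (a toy sketch with an invented function name and a caller-supplied column count; the real postgres_fdw builds the deparsed SQL from the relation's attribute list):

```c
/* Toy sketch: build the placeholder list for a batched remote INSERT,
 * e.g. "INSERT INTO t1 VALUES ($1, $2), ($3, $4), ($5, $6)".
 * Invented for illustration; not the actual postgres_fdw deparser.
 */
#include <assert.h>
#include <stdio.h>
#include <string.h>

static void
build_batch_insert(char *buf, size_t buflen,
                   const char *target, int ncols, int ntuples)
{
    size_t off = (size_t) snprintf(buf, buflen,
                                   "INSERT INTO %s VALUES ", target);
    int param = 1;

    for (int t = 0; t < ntuples; t++)
    {
        /* separate tuples with ", " after the first one */
        off += (size_t) snprintf(buf + off, buflen - off,
                                 "%s(", t ? ", " : "");
        for (int c = 0; c < ncols; c++)
            off += (size_t) snprintf(buf + off, buflen - off,
                                     "%s$%d", c ? ", " : "", param++);
        off += (size_t) snprintf(buf + off, buflen - off, ")");
    }
}
```

One round trip then carries ntuples rows and the remote side parses and plans a single statement, which is where the data-transfer and parse/plan savings mentioned earlier in the thread come from.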
Anyway, this thread's batch insert can be progressed (and hopefully committed), and once the libpq batching has been committed, we can give it a try to use it and modify postgres_fdw to see if we can get further performance boost.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n", "msg_date": "Fri, 27 Nov 2020 02:47:51 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "On Fri, Nov 27, 2020 at 10:47 AM tsunakawa.takay@fujitsu.com <\ntsunakawa.takay@fujitsu.com> wrote:\n\nCovering this one first:\n\nI expect postgresExecForeignBatchInsert() would be able to use the libpq\n> batching API, because it receives an array of tuples and can generate and\n> issue INSERT statement for each tuple.\n\n\nSure, you can generate big multi-inserts. Or even do a COPY. But you still\nhave to block for a full round-trip until the foreign server replies. So if\nyou have 6000 calls to postgresExecForeignBatchInsert() during a single\nquery, and a 100ms round trip time to the foreign server, you're going to\nwaste 6000*0.1 = 600s = 10 min blocked in postgresExecForeignBatchInsert()\nwaiting for results from the foreign server.\n\nSuch batches have some major downsides:\n\n* The foreign server cannot start executing the first query in the batch\nuntil the last query in the batch has been accumulated and the whole batch\nhas been sent to the foreign server;\n* The FDW has to block waiting for the batch to execute on the foreign\nserver and for a full network round-trip before it can start another batch\nor let the backend do other work\nThis means RTTs get multiplied by batch counts. 
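To put rough numbers on the blocking cost described above (toy arithmetic, not a benchmark): every synchronous batch stalls for a full round trip, while a pipelined stream pays roughly one round trip in total.

```c
/* Back-of-envelope model of time spent blocked on the network, using
 * the example above: 6000 batched calls at a 100 ms round trip.
 */
#include <assert.h>

/* Synchronous batching: every batch waits for its own reply. */
static long
blocked_ms_batched(long n_batches, long rtt_ms)
{
    return n_batches * rtt_ms;
}

/* Pipelining: requests stream out and we wait once, at the end
 * (assuming the remote keeps up with the request rate). */
static long
blocked_ms_pipelined(long rtt_ms)
{
    return rtt_ms;
}
```

Under this simplistic model the batched plan spends 600000 ms (10 min) blocked versus 100 ms pipelined, which is where the order-of-magnitude wins come from.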
Still a lot better than\nindividual statements, but plenty slow for high latency connections.\n\n* Prepare 1000 rows to insert [10ms]\n* INSERT 1000 values [100ms RTT + 50ms foreign server execution time]\n* Prepare 1000 rows to insert [10ms]\n* INSERT 1000 values [100ms RTT + 50ms foreign server execution time]\n* ...\n\nIf you can instead send new inserts (or sets of inserts) to the foreign\nserver without having to wait for the result of the previous batch to\narrive, you can spend 100ms total waiting for results instead of 10 mins.\nYou can start the execution of the first query earlier, spend less time\nblocked waiting on network, and let the local backend continue doing other\nwork while the foreign server is busy executing the statements.\n\nThe time spent preparing local rows to insert now overlaps with the RTT and\nremote execution time, instead of happening serially. And there only has to\nbe one RTT wait, assuming the foreign server and network can keep up with\nthe rate we are generating requests at.\n\nI can throw together some diagrams if it'll help. But in the libpq\npipelining patch I demonstrated a 300 times (3000%) performance improvement\non a test workload...\n\nAnyway, this thread's batch insert can be progressed (and hopefully\n> committed), and once the libpq batching has been committed, we can give it\n> a try to use it and modify postgres_fdw to see if we can get further\n> performance boost.\n>\n\nMy point is that you should seriously consider whether batching is the\nappropriate interface here, or whether the FDW can expose a pipeline-like\n\"queue work\" then \"wait for results\" interface. That can be used to\nimplement batching exactly as currently proposed, it does not have to wait\nfor any libpq pipelining features. 
But it can *also* be used to implement\nconcurrent async requests in other FDWs, and to implement pipelining in\npostgres_fdw once the needed libpq support is available.\n\nI don't know the FDW to postgres API well enough, and it's possible I'm\ntalking entirely out of my hat here.\n\n\n> From: Tomas Vondra <tomas.vondra@enterprisedb.com>\n> > Not sure how is this related to app developers? I think the idea was\n> > that the libpq features might be useful between the two PostgreSQL\n> > instances. I.e. the postgres_fdw would use the libpq batching to send\n> > chunks of data to the other side.\n>\n> > Well, my point was that we could keep the API, but maybe it should be\n> > implemented using the proposed libpq batching. They could still use the\n> > postgres_fdw example how to use the API, but the internals would need to\n> > be different, of course.\n>\n> Yes, I understand them. I just wondered if app developers use the\n> statement batching API for libpq or JDBC in what kind of apps.\n\n\nFor JDBC, yes, it's used very heavily and has been for a long time, because\nPgJDBC doesn't rely on libpq - it implements the protocol directly and\nisn't bound by libpq's limitations. The application interface for it in\nJDBC is a batch interface [1][2], not a pipelined interface, so that's what\nPgJDBC users interact with [3] but batch execution is implemented using\nprotocol pipelining support inside PgJDBC [4]. A while ago I did some work\non deadlock prevention to work around issues with PgJDBC's implementation\n[5] which was needed because the feature was so heavily used. Both were to\naddress customer needs in real world applications. The latter increased\napplication performance over 50x through round-trip elimination.\n\nFor libpq, no, batching and pipelining are not yet used by anybody because\napplication authors have to write to the libpq API and there hasn't been\nany in-core support for batching. 
We've had async / non-blocking support\nfor a while, but it still enforces strict request/response ordering without\ninterleaving, so application authors cannot make use of the same postgres\nserver and protocol capabilities as PgJDBC. Most other drivers (like\npsqlODBC and psycopg2) are implemented on top of libpq, so they inherit the\nsame limitations.\n\nI don't expect most application authors to adopt pipelining directly,\nmainly because hardly anyone writes application code against libpq anyway.\nBut drivers written on top of libpq will be able to adopt it to expose the\nbatching, pipeline, or async/callback/event driven interfaces supported by\ntheir client database language interface specifications, or expose their\nown extension interfaces to give users callback-driven or batched query\ncapabilities. In particular, psqlODBC will be able to implement ODBC batch\nquery [6] efficiently. Right now psqlODBC can't execute batches efficiently\nvia libpq, since it must perform one round-trip per query. It will be able\nto use the libpq pipelining API to greatly reduce round trips.\n\n\n>\n> But I'm not sure either if the libpq batching is likely to be committed\n> in the near future. 
(The thread looks too long...)\n\n\nI think it's getting there tbh.\n\n\n\n>\n> Regards\n> Takayuki Tsunakawa\n>\n>\n[1]\nhttps://docs.oracle.com/javase/7/docs/api/java/sql/Statement.html#executeBatch()\n[2]\nhttps://docs.oracle.com/javase/7/docs/api/java/sql/PreparedStatement.html#addBatch()\n[3]\nhttps://github.com/pgjdbc/pgjdbc/blob/master/pgjdbc/src/test/java/org/postgresql/test/jdbc2/BatchExecuteTest.java\n[4]\nhttps://github.com/pgjdbc/pgjdbc/blob/ff22a3c31bb423b08637c237cb2e5bc288008e18/pgjdbc/src/main/java/org/postgresql/core/v3/QueryExecutorImpl.java#L492\n[5] https://github.com/pgjdbc/pgjdbc/issues/194\n[6]\nhttps://docs.microsoft.com/en-us/sql/odbc/reference/develop-app/executing-batches?view=sql-server-ver15", "msg_date": "Fri, 27 Nov 2020 11:56:15 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Craig Ringer <craig.ringer@enterprisedb.com> \r\n> But in the libpq pipelining patch I demonstrated a 300 times (3000%) performance improvement on a test workload...\r\n\r\nWow, impressive number. I've just seen it in the beginning of the libpq pipelining thread (oh, already four years ago..!) Could you share the workload and the network latency (ping time)? I'm sorry I'm just overlooking it.\r\n\r\nThank you for your (always) concise explanation. I'd like to check other DBMSs and your rich references for the FDW interface. (My first intuition is that many major DBMSs might not have client C APIs that can be used to implement an async pipelining FDW interface. Also, I'm afraid it requires major surgery or reform of executor. I don't want it to delay the release of reasonably good (10x) improvement with the synchronous interface.)\r\n\r\n(It'd be kind of you to send emails in text format.
I've changed the format of this reply from HTML to text.)\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n \r\n", "msg_date": "Fri, 27 Nov 2020 06:05:55 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "\n\nOn 11/27/20 7:05 AM, tsunakawa.takay@fujitsu.com wrote:\n> From: Craig Ringer <craig.ringer@enterprisedb.com>\n>> But in the libpq pipelining patch I demonstrated a 300 times\n>> (3000%) performance improvement on a test workload...\n> \n> Wow, impressive number. I've just seen it in the beginning of the\n> libpq pipelining thread (oh, already four years ago..!) Could you\n> share the workload and the network latency (ping time)? I'm sorry\n> I'm just overlooking it.\n> \n> Thank you for your (always) concise explanation. I'd like to check\n> other DBMSs and your rich references for the FDW interface. (My\n> first intuition is that many major DBMSs might not have client C APIs\n> that can be used to implement an async pipelining FDW interface.\n> Also, I'm afraid it requires major surgery or reform of executor. I\n> don't want it to delay the release of reasonably good (10x)\n> improvement with the synchronous interface.)\n> \n\nI do agree that pipelining is nice, and can bring huge improvements.\n\nHowever, the FDW interface as it's implemented today is not designed to\nallow that, I believe (we pretty much just invoke the FDW callbacks as\nif it was a local AM).
It assumes the calls are synchronous, and\nredesigning it to work in async way is a much larger/complex patch than\nwhat's being discussed here.\n\nI do think the FDW extension proposed here (adding the bulk-insert\ncallback) is useful in general, for two reasons: (a) even if most client\nlibraries support some sort of pipelining, some don't, and (b) I'd bet\nit's still more efficient to send one large insert than pipelining many\nindividual inserts.\n\nThat being said, I'm against expanding the scope of this patch to also\nrequire redesign of the whole FDW infrastructure - that would likely\nmean no such improvement landing in PG14. If the libpq pipelining patch\nseems likely to get committed, we can try using it for the bulk insert\ncallback (instead of the current multi-value stuff).\n\n\n> (It'd be kind of you to send emails in text format. I've changed the\n> format of this reply from HTML to text.)\n> \n\nCraig's client is sending messages in both text/plain and text/html. You\nprobably need to tell your client to prefer that over html, somehow.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 28 Nov 2020 03:10:40 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On Sat, 28 Nov 2020, 10:10 Tomas Vondra, <tomas.vondra@enterprisedb.com>\nwrote:\n\n>\n>\n> On 11/27/20 7:05 AM, tsunakawa.takay@fujitsu.com wrote:\n>\n> However, the FDW interface as it's implemented today is not designed to\n> allow that, I believe (we pretty much just invoke the FDW callbacks as\n> if it was a local AM).
It assumes the calls are synchronous, and\n> redesigning it to work in async way is a much larger/complex patch than\n> what's being discussed here.\n>\n> I do think the FDW extension proposed here (adding the bulk-insert\n> callback) is useful in general, for two reasons: (a) even if most client\n> libraries support some sort of pipelining, some don't, and (b) I'd bet\n> it's still more efficient to send one large insert than pipelining many\n> individual inserts.\n>\n> That being said, I'm against expanding the scope of this patch to also\n> require redesign of the whole FDW infrastructure - that would likely\n> mean no such improvement landing in PG14. If the libpq pipelining patch\n> seems likely to get committed, we can try using it for the bulk insert\n> callback (instead of the current multi-value stuff).\n>\n\nI totally agree on all points. It was not my intent to expand the scope of\nthis significantly and I really don't want to hold it back.\n\nI raised the interface consideration in case it was something easy to\naccommodate. It's not, so that's done, topic over.", "msg_date": "Sat, 28 Nov 2020 15:00:31 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On Fri, 27 Nov 2020, 14:06 tsunakawa.takay@fujitsu.com,\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n>\nAlso, I'm afraid it requires major surgery or reform of executor. I\ndon't want it to delay the release of reasonably good (10x)\nimprovement with the synchronous interface.)\n\n\nTotally sensible. If it isn't feasible without significant executor\nchange that's all that needs to be said.\n\nI was afraid that'd be the case given the executor's pull flow but\njust didn't know enough.\n\nIt was not my intention to hold this patch up or greatly expand its\nscope.
I'll spend some time testing it out and have a closer read soon\nto see if I can help progress it.\n\nI know Andres did some initial work on executor parallelism and\npipelining. I should take a look.\n\n> > But in the libpq pipelining patch I demonstrated a 300 times (3000%) performance improvement on a test workload...\n>\n> Wow, impressive number. I've just seen it in the beginning of the libpq pipelining thread (oh, already four years ago..!)\n\nYikes.\n\n> Could you share the workload and the network latency (ping time)? I'm sorry I'm just overlooking it.\n\nI thought I gave it at the time, and a demo program. IIRC it was just\ndoing small multi row inserts or single row inserts. Latency would've\nbeen a couple of hundred ms probably, I think I did something like\nrunning on my laptop (Australia, ADSL) to a server on AWS US or EU.\n\n> Thank you for your (always) concise explanation.\n\nYou joke! I am many things but despite my best efforts concise is\nrarely one of them.\n\n> I'd like to check other DBMSs and your rich references for the FDW interface. (My first intuition is that many major DBMSs might not have client C APIs that can be used to implement an async pipelining FDW interface.\n\nLikely correct for C APIs of other traditional DBMSes. I'd be less\nsure about newer non SQL ones, especially cloud oriented. For example\nDynamoDB supports at least async requests in the Java client [3] and\nC++ client [4]; it's not immediately clear if requests can be\npipelined, but the API suggests they can.\n\nMost things with a REST-like API can do a fair bit of concurrency\nthough. Multiple async nonblocking HTTP connections can be serviced at\nonce. Or HTTP/1.1 pipelining can be used [1], or even better HTTP/2.0\nstreams [2]. This is relevant for any REST-like API.\n\n> (It'd be kind of you to send emails in text format. I've changed the format of this reply from HTML to text.)\n\nI try to remember. Stupid Gmail. Sorry.
On mobile it offers very\nlittle control over format but I'll do my best when I can.\n\n[1] https://en.wikipedia.org/wiki/HTTP_pipelining\n[2] https://blog.restcase.com/http2-benefits-for-rest-apis/\n[3] https://aws.amazon.com/blogs/developer/asynchronous-requests-with-the-aws-sdk-for-java/\n[4] https://sdk.amazonaws.com/cpp/api/LATEST/class_aws_1_1_dynamo_d_b_1_1_dynamo_d_b_client.html#ab631edaccca5f3f8988af15e7e9aa4f0\n\n\n", "msg_date": "Mon, 30 Nov 2020 10:34:00 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On Wed, Nov 25, 2020 at 05:04:36AM +0000, tsunakawa.takay@fujitsu.com wrote:\n> From: Tomas Vondra <tomas.vondra@enterprisedb.com>\n> > On 11/24/20 9:45 AM, tsunakawa.takay@fujitsu.com wrote:\n> > > OTOH, as for the name GetModifyBatchSize() you suggest, I think\n> > GetInsertBatchSize may be better. That is, this API deals with multiple\n> > records in a single INSERT statement. Your GetModifyBatchSize will be\n> > reserved for statement batching when libpq has supported batch/pipelining to\n> > execute multiple INSERT/UPDATE/DELETE statements, as in the following\n> > JDBC batch updates. What do you think?\n> > >\n> > \n> > I don't know. I was really only thinking about batching in the context\n> > of a single DML command, not about batching of multiple commands at the\n> > protocol level. IMHO it's far more likely we'll add support for batching\n> > for DELETE/UPDATE than libpq pipelining, which seems rather different\n> > from how the FDW API works. Which is why I was suggesting to use a name\n> > that would work for all DML commands, not just for inserts.\n> \n> Right, I can't imagine now how the interaction among the client, server core and FDWs would be regarding the statement batching. So I'll take your suggested name.\n> \n> \n> > Not sure, but I'd guess knowing whether batching is used would be\n> > useful.
We only print the single-row SQL query, which kinda gives the\n> > impression that there's no batching.\n> \n> Added in postgres_fdw like \"Remote SQL\" when EXPLAIN VERBOSE is run.\n> \n> \n> > > Don't worry about this, too. GetMaxBulkInsertTuples() just returns a value\n> > that was already saved in a struct in create_foreign_modify().\n> > >\n> > \n> > Well, I do worry for two reasons.\n> > \n> > Firstly, the fact that in postgres_fdw the call is cheap does not mean\n> > it'll be like that in every other FDW. Presumably, the other FDWs might\n> > cache it in the struct and do the same thing, of course.\n> > \n> > But the fact that we're calling it over and over for each row kinda\n> > seems like we allow the value to change during execution, but I very\n> > much doubt the code is expecting that. I haven't tried, but assume the\n> > function first returns 10 and then 100. ISTM the code will allocate\n> > ri_Slots with 25 slots, but then we'll try stashing 100 tuples there.\n> > That can't end well. Sure, we can claim it's a bug in the FDW extension,\n> > but it's also due to the API design.\n> \n> You worried about other FDWs than postgres_fdw. That's reasonable. I insisted in other threads that PG developers care only about postgres_fdw, not other FDWs, when designing the FDW interface, but I myself made the same mistake.
I made changes so that the executor calls GetModifyBatchSize() once per relation per statement.\n\nPlease pardon me for barging in late in this discussion, but if we're\ngoing to be using a bulk API here, wouldn't it make more sense to use\nCOPY, except where RETURNING is specified, in place of INSERT?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Mon, 30 Nov 2020 09:29:02 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Craig Ringer <craig.ringer@enterprisedb.com>\r\n> It was not my intention to hold this patch up or greatly expand its\r\n> scope. I'll spend some time testing it out and have a closer read soon\r\n> to see if I can help progress it.\r\n\r\nThank you, I'm relieved to hear that. Last weekend, I was scared of a possible mood that's something like \"We won't accept the insert speedup patch for foreign tables unless you take full advantage of pipelining and achieve maximum conceivable speed!\"\r\n\r\n\r\n> I thought I gave it at the time, and a demo program. IIRC it was just\r\n> doing small multi row inserts or single row inserts. Latency would've\r\n> been a couple of hundred ms probably, I think I did something like\r\n> running on my laptop (Australia, ADSL) to a server on AWS US or EU.\r\n\r\nA couple of hundred ms, so that would be dominant in each prepare-send-execute-receive, possibly even for batch insert with hundreds of rows in each batch.
Then, the synchronous batch insert of the current patch may achieve a few hundred times speedup compared to single-row inserts when the batch size is hundreds or more.\r\n\r\n\r\n> > I'd like to check other DBMSs and your rich references for the FDW interface.\r\n> (My first intuition is that many major DBMSs might not have client C APIs that\r\n> can be used to implement an async pipelining FDW interface.\r\n> \r\n> Likely correct for C APIs of other traditional DBMSes. I'd be less\r\n> sure about newer non SQL ones, especially cloud oriented. For example\r\n> DynamoDB supports at least async requests in the Java client [3] and\r\n> C++ client [4]; it's not immediately clear if requests can be\r\n> pipelined, but the API suggests they can.\r\n\r\nI've checked ODBC, MySQL, Microsoft Synapse Analytics, Redshift, and BigQuery, guessing that these data warehouses may have an asynchronous/pipelining API that enables efficient data integration/migration. But none of them had one. (I seem to have spent too long and am a bit tired... but it was a bit fun as well.) They all support INSERT with multiple records in its VALUES clause. So, it will be useful to provide a synchronous batch insert FDW API. I guess Oracle's OCI has an asynchronous API, but I didn't check it.\r\n\r\nAs an aside, MySQL 8.0.16 added support for asynchronous execution in its C API, but it allows only one active SQL statement in each connection. Likewise, although the ODBC standard defines asynchronous execution (SQLSetStmtAttr(SQL_ASYNC_ENABLE) and SQLCompleteAsync), SQL Server and Synapse Analytics allow only one active statement per connection. psqlODBC doesn't support asynchronous execution.\r\n\r\n\r\n> Most things with a REST-like API can do a fair bit of concurrency\r\n> though. Multiple async nonblocking HTTP connections can be serviced at\r\n> once. Or HTTP/1.1 pipelining can be used [1], or even better HTTP/2.0\r\n> streams [2].
This is relevant for any REST-like API.\r\n\r\nI'm not sure if this is related, but Google deprecated the Batch HTTP API [1].\r\n\r\n\r\n[1]\r\nhttps://cloud.google.com/bigquery/batch\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Mon, 30 Nov 2020 09:13:58 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "From: David Fetter <david@fetter.org>\n> Please pardon me for barging in late in this discussion, but if we're\n> going to be using a bulk API here, wouldn't it make more sense to use\n> COPY, except where RETURNING is specified, in place of INSERT?\n\nPlease do not hesitate. I mentioned earlier in this thread that I think INSERT is better because:\n\n\n--------------------------------------------------\n* When the user executed INSERT statements, it would look strange to the user if the remote SQL is displayed as COPY.\n\n* COPY doesn't invoke rules, unlike INSERT. (I don't think the rule is a feature that users care about, though.) Also, I'm a bit concerned that there might be, or will be, other differences between INSERT and COPY.\n--------------------------------------------------\n\n\nAlso, COPY to foreign tables currently uses INSERTs; the improvement of using COPY instead of INSERT is in progress [1].
Keeping \"COPY uses COPY, INSERT uses INSERT\" correspondence seems natural, and it makes COPY's high-speed advantage stand out.\n\n\n[1]\nFast COPY FROM command for the table with foreign partitions\nhttps://www.postgresql.org/message-id/flat/3d0909dc-3691-a576-208a-90986e55489f%40postgrespro.ru\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Mon, 30 Nov 2020 09:34:57 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "Hi,\n\nAttached is a v6 of this patch, rebased to current master and with some \nminor improvements (mostly comments and renaming the \"end\" struct field \nto \"values_end\" which I think is more descriptive).\n\nThe one thing that keeps bugging me is convert_prep_stmt_params - it \ndies the right thing, but the code is somewhat confusing.\n\n\nAFAICS the discussions about making this use COPY and/or libpq \npipelining (neither of which is committed yet) ended with the conclusion \nthat those changes are somewhat independent, and that it's worth getting \nthis committed in the current form. Barring objections, I'll push this \nwithin the next couple days.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 12 Jan 2021 03:06:29 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Tomas Vondra <tomas.vondra@enterprisedb.com>\r\n> Attached is a v6 of this patch, rebased to current master and with some minor\r\n> improvements (mostly comments and renaming the \"end\" struct field to\r\n> \"values_end\" which I think is more descriptive).\r\n\r\nThank you very much. In fact, my initial patches used values_end, and I changed it to len considering that it may be used for bulk UPDATEand DELETE in the future. 
But I think values_end makes its role easier to understand, too.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Tue, 12 Jan 2021 03:04:54 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "Hi Tomas, Tsunakawa-san,\n\nThanks for your work on this.\n\nOn Tue, Jan 12, 2021 at 11:06 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> AFAICS the discussions about making this use COPY and/or libpq\n> pipelining (neither of which is committed yet) ended with the conclusion\n> that those changes are somewhat independent, and that it's worth getting\n> this committed in the current form. Barring objections, I'll push this\n> within the next couple days.\n\nI was trying this out today (been meaning to do so for a while) and\nnoticed that this fails when there are AFTER ROW triggers on the\nforeign table. Here's an example:\n\ncreate extension postgres_fdw ;\ncreate server lb foreign data wrapper postgres_fdw ;\ncreate user mapping for current_user server lb;\ncreate table p (a numeric primary key);\ncreate foreign table fp (a int) server lb options (table_name 'p');\ncreate function print_row () returns trigger as $$ begin raise notice\n'%', new; return null; end; $$ language plpgsql;\ncreate trigger after_insert_trig after insert on fp for each row\nexecute function print_row();\ninsert into fp select generate_series (1, 10);\n<crashes>\n\nApparently, the new code seems to assume that batching wouldn't be\nactive when the original query contains a RETURNING clause but some\nparts fail to account for the case where RETURNING is added to the\nquery to retrieve the tuple to pass to the AFTER TRIGGER.\nSpecifically, the Assert in the following block in\nexecute_foreign_modify() is problematic:\n\n /* Check number of rows affected, and fetch RETURNING tuple if any */\n if (fmstate->has_returning)\n {\n Assert(*numSlots == 1);
n_rows = PQntuples(res);\n if (n_rows > 0)\n store_returning_result(fmstate, slots[0], res);\n }\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 Jan 2021 18:15:05 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On 1/13/21 10:15 AM, Amit Langote wrote:\n> Hi Tomas, Tsunakawa-san,\n> \n> Thanks for your work on this.\n> \n> On Tue, Jan 12, 2021 at 11:06 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> AFAICS the discussions about making this use COPY and/or libpq\n>> pipelining (neither of which is committed yet) ended with the conclusion\n>> that those changes are somewhat independent, and that it's worth getting\n>> this committed in the current form. Barring objections, I'll push this\n>> within the next couple days.\n> \n> I was trying this out today (been meaning to do so for a while) and\n> noticed that this fails when there are AFTER ROW triggers on the\n> foreign table.
Here's an example:\n> \n> create extension postgres_fdw ;\n> create server lb foreign data wrapper postgres_fdw ;\n> create user mapping for current_user server lb;\n> create table p (a numeric primary key);\n> create foreign table fp (a int) server lb options (table_name 'p');\n> create function print_row () returns trigger as $$ begin raise notice\n> '%', new; return null; end; $$ language plpgsql;\n> create trigger after_insert_trig after insert on fp for each row\n> execute function print_row();\n> insert into fp select generate_series (1, 10);\n> <crashes>\n> \n> Apparently, the new code seems to assume that batching wouldn't be\n> active when the original query contains RETURNING clause but some\n> parts fail to account for the case where RETURNING is added to the\n> query to retrieve the tuple to pass to the AFTER TRIGGER.\n> Specifically, the Assert in the following block in\n> execute_foreign_modify() is problematic:\n> \n> /* Check number of rows affected, and fetch RETURNING tuple if any */\n> if (fmstate->has_returning)\n> {\n> Assert(*numSlots == 1);\n> n_rows = PQntuples(res);\n> if (n_rows > 0)\n> store_returning_result(fmstate, slots[0], res);\n> }\n> \n\nThanks for the report. Yeah, I think there's a missing check in\nExecInsert. Adding\n\n (!resultRelInfo->ri_TrigDesc->trig_insert_after_row)\n\nsolves this. But now I'm wondering if this is the wrong place to make\nthis decision. I mean, why should we make the decision here, when the\ndecision whether to have a RETURNING clause is made in postgres_fdw in\ndeparseReturningList? We don't really know what the other FDWs will do,\nfor example.\n\nSo I think we should just move all of this into GetModifyBatchSize. We\ncan start with ri_BatchSize = 0. And then do\n\n if (resultRelInfo->ri_BatchSize == 0)\n resultRelInfo->ri_BatchSize =\n resultRelInfo->ri_FdwRoutine->GetModifyBatchSize(resultRelInfo);\n\n if (resultRelInfo->ri_BatchSize > 1)\n {\n ...
do batching ...\n }\n\nThe GetModifyBatchSize would always return value > 0, so either 1 (no\nbatching) or >1 (batching).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 13 Jan 2021 15:43:29 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On 1/13/21 3:43 PM, Tomas Vondra wrote:\n>\n> ...\n>\n> Thanks for the report. Yeah, I think there's a missing check in\n> ExecInsert. Adding\n> \n> (!resultRelInfo->ri_TrigDesc->trig_insert_after_row)\n> \n> solves this. But now I'm wondering if this is the wrong place to make\n> this decision. I mean, why should we make the decision here, when the\n> decision whether to have a RETURNING clause is made in postgres_fdw in\n> deparseReturningList? We don't really know what the other FDWs will do,\n> for example.\n> \n> So I think we should just move all of this into GetModifyBatchSize. We\n> can start with ri_BatchSize = 0. And then do\n> \n> if (resultRelInfo->ri_BatchSize == 0)\n> resultRelInfo->ri_BatchSize =\n> resultRelInfo->ri_FdwRoutine->GetModifyBatchSize(resultRelInfo);\n> \n> if (resultRelInfo->ri_BatchSize > 1)\n> {\n> ... do batching ...\n> }\n> \n> The GetModifyBatchSize would always return value > 0, so either 1 (no\n> batching) or >1 (batching).\n> \n\nFWIW the attached v8 patch does this - most of the conditions are moved\nto the GetModifyBatchSize() callback. I've removed the check for the\nBatchInsert callback, though - the FDW knows whether it supports that,\nand it seems a bit pointless at the moment as there are no other batch\ncallbacks.
Maybe we should add an Assert somewhere, though?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 13 Jan 2021 18:41:09 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Tomas Vondra <tomas.vondra@enterprisedb.com>\r\n> FWIW the attached v8 patch does this - most of the conditions are moved to the\r\n> GetModifyBatchSize() callback. I've removed the check for the BatchInsert\r\n> callback, though - the FDW knows whether it supports that, and it seems a bit\r\n> pointless at the moment as there are no other batch callbacks. Maybe we\r\n> should add an Assert somewhere, though?\r\n\r\nThank you. I'm in favor of this idea that the decision to support RETURNING and triggers is left to the FDW. I don't see the need for another Assert, as the caller has one for the returned batch size.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Thu, 14 Jan 2021 01:14:45 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "Hi,\n\nOn Thu, Jan 14, 2021 at 2:41 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 1/13/21 3:43 PM, Tomas Vondra wrote:\n> > Thanks for the report. Yeah, I think there's a missing check in\n> > ExecInsert. Adding\n> >\n> > (!resultRelInfo->ri_TrigDesc->trig_insert_after_row)\n> >\n> > solves this. But now I'm wondering if this is the wrong place to make\n> > this decision. I mean, why should we make the decision here, when the\n> > decision whether to have a RETURNING clause is made in postgres_fdw in\n> > deparseReturningList? We don't really know what the other FDWs will do,\n> > for example.\n> >\n> > So I think we should just move all of this into GetModifyBatchSize.
We\n> > can start with ri_BatchSize = 0. And then do\n> >\n> > if (resultRelInfo->ri_BatchSize == 0)\n> > resultRelInfo->ri_BatchSize =\n> > resultRelInfo->ri_FdwRoutine->GetModifyBatchSize(resultRelInfo);\n> >\n> > if (resultRelInfo->ri_BatchSize > 1)\n> > {\n> > ... do batching ...\n> > }\n> >\n> > The GetModifyBatchSize would always return value > 0, so either 1 (no\n> > batching) or >1 (batching).\n> >\n>\n> FWIW the attached v8 patch does this - most of the conditions are moved\n> to the GetModifyBatchSize() callback.\n\nThanks.
Maybe we should add an Assert somewhere, though?\n\nHmm, not checking whether BatchInsert() exists may not be good idea,\nbecause if an FDW's GetModifyBatchSize() returns a value > 1 but\nthere's no BatchInsert() function to call, ExecBatchInsert() would\ntrip. I don't see the newly added documentation telling FDW authors\nto either define both or none.\n\nRegarding how this plays with partitions, I don't think we need\nExecGetTouchedPartitions(), because you can get the routed-to\npartitions using es_tuple_routing_result_relations. Also, perhaps\nit's a good idea to put the \"finishing\" ExecBatchInsert() calls into a\nfunction ExecFinishBatchInsert(). Maybe the logic to choose the\nrelations to perform the finishing calls on will get complicated in\nthe future as batching is added for updates/deletes too and it seems\nbetter to encapsulate that in the separate function than have it out\nin the open in ExecModifyTable().\n\n(Sorry about being so late reviewing this.)\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Jan 2021 17:58:53 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On 1/14/21 9:58 AM, Amit Langote wrote:\n> Hi,\n> \n> On Thu, Jan 14, 2021 at 2:41 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> On 1/13/21 3:43 PM, Tomas Vondra wrote:\n>>> Thanks for the report. Yeah, I think there's a missing check in\n>>> ExecInsert. Adding\n>>>\n>>> (!resultRelInfo->ri_TrigDesc->trig_insert_after_row)\n>>>\n>>> solves this. But now I'm wondering if this is the wrong place to make\n>>> this decision. I mean, why should we make the decision here, when the\n>>> decision whether to have a RETURNING clause is made in postgres_fdw in\n>>> deparseReturningList? We don't really know what the other FDWs will do,\n>>> for example.\n>>>\n>>> So I think we should just move all of this into GetModifyBatchSize. 
We\n>>> can start with ri_BatchSize = 0. And then do\n>>>\n>>> if (resultRelInfo->ri_BatchSize == 0)\n>>> resultRelInfo->ri_BatchSize =\n>>> resultRelInfo->ri_FdwRoutine->GetModifyBatchSize(resultRelInfo);\n>>>\n>>> if (resultRelInfo->ri_BatchSize > 1)\n>>> {\n>>> ... do batching ...\n>>> }\n>>>\n>>> The GetModifyBatchSize would always return value > 0, so either 1 (no\n>>> batching) or >1 (batching).\n>>>\n>>\n>> FWIW the attached v8 patch does this - most of the conditions are moved\n>> to the GetModifyBatchSize() callback.\n> \n> Thanks. A few comments:\n> \n> * I agree with leaving it up to an FDW to look at the properties of\n> the table and of the operation being performed to decide whether or\n> not to use batching, although maybe BeginForeignModify() is a better\n> place for putting that logic instead of GetModifyBatchSize()? So, in\n> create_foreign_modify(), instead of PgFdwModifyState.batch_size simply\n> being set to match the table's or the server's value for the\n> batch_size option, make it also consider the things that prevent\n> batching and set the execution state's batch_size based on that.\n> GetModifyBatchSize() simply returns that value.\n> \n> * Regarding the timing of calling GetModifyBatchSize() to set\n> ri_BatchSize, I wonder if it wouldn't be better to call it just once,\n> say from ExecInitModifyTable(), right after BeginForeignModify()\n> returns? I don't quite understand why it is being called from\n> ExecInsert(). Can the batch size change once the execution starts?\n> \n\nBut it should be called just once. The idea is that initially we have\nbatch_size=0, and the fist call returns value that is >= 1. So we never\ncall it again. 
But maybe it could be called from BeginForeignModify, in\nwhich case we'd not need this logic with first setting it to 0 etc.\n\n> * Lastly, how about calling it GetForeignModifyBatchSize() to be\n> consistent with other nearby callbacks?\n> \n\nYeah, good point.\n\n>> I've removed the check for the\n>> BatchInsert callback, though - the FDW knows whether it supports that,\n>> and it seems a bit pointless at the moment as there are no other batch\n>> callbacks. Maybe we should add an Assert somewhere, though?\n> \n> Hmm, not checking whether BatchInsert() exists may not be good idea,\n> because if an FDW's GetModifyBatchSize() returns a value > 1 but\n> there's no BatchInsert() function to call, ExecBatchInsert() would\n> trip. I don't see the newly added documentation telling FDW authors\n> to either define both or none.\n> \n\nHmm. The BatchInsert check seemed somewhat unnecessary to me, but OTOH\nit can't hurt, I guess. I'll ad it back.\n\n> Regarding how this plays with partitions, I don't think we need\n> ExecGetTouchedPartitions(), because you can get the routed-to\n> partitions using es_tuple_routing_result_relations. Also, perhaps\n\nI'm not very familiar with es_tuple_routing_result_relations, but that\ndoesn't seem to work. I've replaced the flushing code at the end of\nExecModifyTable with a loop over es_tuple_routing_result_relations, but\nthen some of the rows are missing (i.e. not flushed).\n\n> it's a good idea to put the \"finishing\" ExecBatchInsert() calls into a\n> function ExecFinishBatchInsert(). Maybe the logic to choose the\n> relations to perform the finishing calls on will get complicated in\n> the future as batching is added for updates/deletes too and it seems\n> better to encapsulate that in the separate function than have it out\n> in the open in ExecModifyTable().\n> \n\nIMO that'd be an over-engineering at this point. We don't need such\nseparate function yet, so why complicate the API? 
If we need it in the\nfuture, we can add it.\n\n> (Sorry about being so late reviewing this.)\n\nthanks\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 14 Jan 2021 13:57:44 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On Thu, Jan 14, 2021 at 21:57 Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> On 1/14/21 9:58 AM, Amit Langote wrote:\n> > Hi,\n> >\n> > On Thu, Jan 14, 2021 at 2:41 AM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >> On 1/13/21 3:43 PM, Tomas Vondra wrote:\n> >>> Thanks for the report. Yeah, I think there's a missing check in\n> >>> ExecInsert. Adding\n> >>>\n> >>> (!resultRelInfo->ri_TrigDesc->trig_insert_after_row)\n> >>>\n> >>> solves this. But now I'm wondering if this is the wrong place to make\n> >>> this decision. I mean, why should we make the decision here, when the\n> >>> decision whether to have a RETURNING clause is made in postgres_fdw in\n> >>> deparseReturningList? We don't really know what the other FDWs will do,\n> >>> for example.\n> >>>\n> >>> So I think we should just move all of this into GetModifyBatchSize. We\n> >>> can start with ri_BatchSize = 0. And then do\n> >>>\n> >>> if (resultRelInfo->ri_BatchSize == 0)\n> >>> resultRelInfo->ri_BatchSize =\n> >>> resultRelInfo->ri_FdwRoutine->GetModifyBatchSize(resultRelInfo);\n> >>>\n> >>> if (resultRelInfo->ri_BatchSize > 1)\n> >>> {\n> >>> ... do batching ...\n> >>> }\n> >>>\n> >>> The GetModifyBatchSize would always return value > 0, so either 1 (no\n> >>> batching) or >1 (batching).\n> >>>\n> >>\n> >> FWIW the attached v8 patch does this - most of the conditions are moved\n> >> to the GetModifyBatchSize() callback.\n> >\n> > Thanks. 
A few comments:\n> >\n> > * I agree with leaving it up to an FDW to look at the properties of\n> > the table and of the operation being performed to decide whether or\n> > not to use batching, although maybe BeginForeignModify() is a better\n> > place for putting that logic instead of GetModifyBatchSize()? So, in\n> > create_foreign_modify(), instead of PgFdwModifyState.batch_size simply\n> > being set to match the table's or the server's value for the\n> > batch_size option, make it also consider the things that prevent\n> > batching and set the execution state's batch_size based on that.\n> > GetModifyBatchSize() simply returns that value.\n> >\n> > * Regarding the timing of calling GetModifyBatchSize() to set\n> > ri_BatchSize, I wonder if it wouldn't be better to call it just once,\n> > say from ExecInitModifyTable(), right after BeginForeignModify()\n> > returns? I don't quite understand why it is being called from\n> > ExecInsert(). Can the batch size change once the execution starts?\n> >\n>\n> But it should be called just once. The idea is that initially we have\n> batch_size=0, and the first call returns a value that is >= 1. So we never\n> call it again. But maybe it could be called from BeginForeignModify, in\n> which case we'd not need this logic with first setting it to 0 etc.\n\n\nRight, although I was thinking that maybe ri_BatchSize itself is not to be\nwritten to by the FDW. Not to say that’s doing anything wrong though.\n\n> * Lastly, how about calling it GetForeignModifyBatchSize() to be\n> consistent with other nearby callbacks?\n> >\n>\n> Yeah, good point.\n>\n> >> I've removed the check for the\n> >> BatchInsert callback, though - the FDW knows whether it supports that,\n> >> and it seems a bit pointless at the moment as there are no other batch\n> >> callbacks. 
Maybe we should add an Assert somewhere, though?\n> >\n> > Hmm, not checking whether BatchInsert() exists may not be a good idea,\n> > because if an FDW's GetModifyBatchSize() returns a value > 1 but\n> > there's no BatchInsert() function to call, ExecBatchInsert() would\n> > trip. I don't see the newly added documentation telling FDW authors\n> > to either define both or none.\n> >\n>\n> Hmm. The BatchInsert check seemed somewhat unnecessary to me, but OTOH\n> it can't hurt, I guess. I'll add it back.\n>\n> > Regarding how this plays with partitions, I don't think we need\n> > ExecGetTouchedPartitions(), because you can get the routed-to\n> > partitions using es_tuple_routing_result_relations. Also, perhaps\n>\n> I'm not very familiar with es_tuple_routing_result_relations, but that\n> doesn't seem to work. I've replaced the flushing code at the end of\n> ExecModifyTable with a loop over es_tuple_routing_result_relations, but\n> then some of the rows are missing (i.e. not flushed).\n\n\nI should’ve mentioned es_opened_result_relations too which contain\nnon-routing result relations. So I really meant if (proute) then use\nes_tuple_routing_result_relations, else es_opened_result_relations. This\nshould work as long as batching is only used for inserts.\n\n\n> it's a good idea to put the \"finishing\" ExecBatchInsert() calls into a\n> function ExecFinishBatchInsert(). Maybe the logic to choose the\n> relations to perform the finishing calls on will get complicated in\n> the future as batching is added for updates/deletes too and it seems\n> better to encapsulate that in the separate function than have it out\n> in the open in ExecModifyTable().\n> \n\nIMO that'd be over-engineering at this point. We don't need such a\nseparate function yet, so why complicate the API? 
If we need it in the\nfuture, we can add it.\n\n\nFair enough.\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n", "msg_date": "Thu, 14 Jan 2021 22:57:10 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On 1/14/21 2:57 PM, Amit Langote wrote:\n> On Thu, Jan 14, 2021 at 21:57 Tomas Vondra\n> <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>>\n> wrote:\n> \n> On 1/14/21 9:58 AM, Amit Langote wrote:\n> > Hi,\n> >\n> > On Thu, Jan 14, 2021 at 2:41 AM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com\n> <mailto:tomas.vondra@enterprisedb.com>> wrote:\n> >> On 1/13/21 3:43 PM, Tomas Vondra wrote:\n> >>> Thanks for the report. Yeah, I think there's a missing check in\n> >>> ExecInsert. Adding\n> >>>\n> >>>   (!resultRelInfo->ri_TrigDesc->trig_insert_after_row)\n> >>>\n> >>> solves this. But now I'm wondering if this is the wrong place to\n> make\n> >>> this decision. I mean, why should we make the decision here,\n> when the\n> >>> decision whether to have a RETURNING clause is made in\n> postgres_fdw in\n> >>> deparseReturningList? We don't really know what the other FDWs\n> will do,\n> >>> for example.\n> >>>\n> >>> So I think we should just move all of this into\n> GetModifyBatchSize. We\n> >>> can start with ri_BatchSize = 0. And then do\n> >>>\n> >>>   if (resultRelInfo->ri_BatchSize == 0)\n> >>>     resultRelInfo->ri_BatchSize =\n> >>>     \n>  resultRelInfo->ri_FdwRoutine->GetModifyBatchSize(resultRelInfo);\n> >>>\n> >>>   if (resultRelInfo->ri_BatchSize > 1)\n> >>>   {\n> >>>     ... do batching ...\n> >>>   }\n> >>>\n> >>> The GetModifyBatchSize would always return value > 0, so either\n> 1 (no\n> >>> batching) or >1 (batching).\n> >>>\n> >>\n> >> FWIW the attached v8 patch does this - most of the conditions are\n> moved\n> >> to the GetModifyBatchSize() callback.\n> >\n> > Thanks.  
A few comments:\n> >\n> > * I agree with leaving it up to an FDW to look at the properties of\n> > the table and of the operation being performed to decide whether or\n> > not to use batching, although maybe BeginForeignModify() is a better\n> > place for putting that logic instead of GetModifyBatchSize()?  So, in\n> > create_foreign_modify(), instead of PgFdwModifyState.batch_size simply\n> > being set to match the table's or the server's value for the\n> > batch_size option, make it also consider the things that prevent\n> > batching and set the execution state's batch_size based on that.\n> > GetModifyBatchSize() simply returns that value.\n> >\n> > * Regarding the timing of calling GetModifyBatchSize() to set\n> > ri_BatchSize, I wonder if it wouldn't be better to call it just once,\n> > say from ExecInitModifyTable(), right after BeginForeignModify()\n> > returns?  I don't quite understand why it is being called from\n> > ExecInsert().  Can the batch size change once the execution starts?\n> >\n> \n> But it should be called just once. The idea is that initially we have\n> batch_size=0, and the first call returns a value that is >= 1. So we never\n> call it again. But maybe it could be called from BeginForeignModify, in\n> which case we'd not need this logic with first setting it to 0 etc.\n> \n> \n> Right, although I was thinking that maybe ri_BatchSize itself is not to\n> be written to by the FDW.  Not to say that’s doing anything wrong though.\n> \n> > * Lastly, how about calling it GetForeignModifyBatchSize() to be\n> > consistent with other nearby callbacks?\n> >\n> \n> Yeah, good point.\n> \n> >> I've removed the check for the\n> >> BatchInsert callback, though - the FDW knows whether it supports\n> that,\n> >> and it seems a bit pointless at the moment as there are no other\n> batch\n> >> callbacks. 
Maybe we should add an Assert somewhere, though?\n> >\n> > Hmm, not checking whether BatchInsert() exists may not be a good idea,\n> > because if an FDW's GetModifyBatchSize() returns a value > 1 but\n> > there's no BatchInsert() function to call, ExecBatchInsert() would\n> > trip.  I don't see the newly added documentation telling FDW authors\n> > to either define both or none.\n> >\n> \n> Hmm. The BatchInsert check seemed somewhat unnecessary to me, but OTOH\n> it can't hurt, I guess. I'll add it back.\n> \n> > Regarding how this plays with partitions, I don't think we need\n> > ExecGetTouchedPartitions(), because you can get the routed-to\n> > partitions using es_tuple_routing_result_relations.  Also, perhaps\n> \n> I'm not very familiar with es_tuple_routing_result_relations, but that\n> doesn't seem to work. I've replaced the flushing code at the end of\n> ExecModifyTable with a loop over es_tuple_routing_result_relations, but\n> then some of the rows are missing (i.e. not flushed).\n> \n> \n> I should’ve mentioned es_opened_result_relations too which contain\n> non-routing result relations.  So I really meant if (proute) then use\n> es_tuple_routing_result_relations, else es_opened_result_relations.  This\n> should work as long as batching is only used for inserts.\n> \n\nAh, right. That did the trick.\n\nAttached is v9 with all of those tweaks, except for moving the BatchSize\ncall to BeginForeignModify - I tried that, but it did not seem like an\nimprovement, because we'd still need the checks for API callbacks in\nExecInsert for example. 
So I decided not to do that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 14 Jan 2021 16:05:25 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Tomas Vondra <tomas.vondra@enterprisedb.com>\r\n> Attached is v9 with all of those tweaks, except for moving the BatchSize call to\r\n> BeginForeignModify - I tried that, but it did not seem like an improvement,\r\n> because we'd still need the checks for API callbacks in ExecInsert for example.\r\n> So I decided not to do that.\r\n\r\nThanks, Tomas-san. The patch looks good again.\r\n\r\nAmit-san, thank you for teaching us about es_tuple_routing_result_relations and es_opened_result_relations.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Fri, 15 Jan 2021 02:47:12 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "On Fri, Jan 15, 2021 at 12:05 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Attached is v9 with all of those tweaks,\n\nThanks.\n\n> except for moving the BatchSize\n> call to BeginForeignModify - I tried that, but it did not seem like an\n> improvement, because we'd still need the checks for API callbacks in\n> ExecInsert for example. So I decided not to do that.\n\nOkay, so maybe not moving the whole logic into the FDW's\nBeginForeignModify(), but at least if we move this...\n\n@@ -441,6 +449,72 @@ ExecInsert(ModifyTableState *mtstate,\n+ /*\n+ * Determine if the FDW supports batch insert and determine the batch\n+ * size (a FDW may support batching, but it may be disabled for the\n+ * server/table). 
Do this only once, at the beginning - we don't want\n+ * the batch size to change during execution.\n+ */\n+ if (resultRelInfo->ri_FdwRoutine->GetForeignModifyBatchSize &&\n+ resultRelInfo->ri_FdwRoutine->ExecForeignBatchInsert &&\n+ resultRelInfo->ri_BatchSize == 0)\n+ resultRelInfo->ri_BatchSize =\n+\nresultRelInfo->ri_FdwRoutine->GetForeignModifyBatchSize(resultRelInfo);\n\n...into ExecInitModifyTable(), ExecInsert() only needs the following block:\n\n /*\n+ * If the FDW supports batching, and batching is requested, accumulate\n+ * rows and insert them in batches. Otherwise use the per-row inserts.\n+ */\n+ if (resultRelInfo->ri_BatchSize > 1)\n+ {\n+ ...\n\nAFAICS, I don't see anything that will cause ri_BatchSize to become 0\nonce set so don't see the point of checking whether it needs to be set\nagain on every ExecInsert() call. Also, maybe not that important, but\nshaving off 3 comparisons for every tuple would add up nicely IMHO\nespecially given that we're targeting bulk loads.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 15 Jan 2021 22:48:49 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Amit Langote <amitlangote09@gmail.com>\r\n> Okay, so maybe not moving the whole logic into the FDW's\r\n> BeginForeignModify(), but at least if we move this...\r\n> \r\n> @@ -441,6 +449,72 @@ ExecInsert(ModifyTableState *mtstate,\r\n> + /*\r\n> + * Determine if the FDW supports batch insert and determine the\r\n> batch\r\n> + * size (a FDW may support batching, but it may be disabled for the\r\n> + * server/table). 
Do this only once, at the beginning - we don't want\r\n> + * the batch size to change during execution.\r\n> + */\r\n> + if (resultRelInfo->ri_FdwRoutine->GetForeignModifyBatchSize &&\r\n> + resultRelInfo->ri_FdwRoutine->ExecForeignBatchInsert &&\r\n> + resultRelInfo->ri_BatchSize == 0)\r\n> + resultRelInfo->ri_BatchSize =\r\n> +\r\n> resultRelInfo->ri_FdwRoutine->GetForeignModifyBatchSize(resultRelInfo);\r\n> \r\n> ...into ExecInitModifyTable(), ExecInsert() only needs the following block:\r\n\r\nDoes ExecInitModifyTable() know all leaf partitions where the tuples produced by VALUES or SELECT go? ExecInsert() doesn't find the target leaf partition for the first time through the call to ExecPrepareTupleRouting()? Leaf partitions can have different batch_size settings.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Fri, 15 Jan 2021 14:59:55 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "On Sat, Jan 16, 2021 at 12:00 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n> From: Amit Langote <amitlangote09@gmail.com>\n> > Okay, so maybe not moving the whole logic into the FDW's\n> > BeginForeignModify(), but at least if we move this...\n> >\n> > @@ -441,6 +449,72 @@ ExecInsert(ModifyTableState *mtstate,\n> > + /*\n> > + * Determine if the FDW supports batch insert and determine the\n> > batch\n> > + * size (a FDW may support batching, but it may be disabled for the\n> > + * server/table). 
Do this only once, at the beginning - we don't want\n> > + * the batch size to change during execution.\n> > + */\n> > + if (resultRelInfo->ri_FdwRoutine->GetForeignModifyBatchSize &&\n> > + resultRelInfo->ri_FdwRoutine->ExecForeignBatchInsert &&\n> > + resultRelInfo->ri_BatchSize == 0)\n> > + resultRelInfo->ri_BatchSize =\n> > +\n> > resultRelInfo->ri_FdwRoutine->GetForeignModifyBatchSize(resultRelInfo);\n> >\n> > ...into ExecInitModifyTable(), ExecInsert() only needs the following block:\n>\n> Does ExecInitModifyTable() know all leaf partitions where the tuples produced by VALUES or SELECT go? ExecInsert() doesn't find the target leaf partition for the first time through the call to ExecPrepareTupleRouting()? Leaf partitions can have different batch_size settings.\n\nGood thing you reminded me that this is about inserts, and in that\ncase no, ExecInitModifyTable() doesn't know all leaf partitions, it\nonly sees the root table whose batch_size doesn't really matter. So\nit's really ExecInitRoutingInfo() that I would recommend to set\nri_BatchSize; right after this block:\n\n/*\n * If the partition is a foreign table, let the FDW init itself for\n * routing tuples to the partition.\n */\nif (partRelInfo->ri_FdwRoutine != NULL &&\n partRelInfo->ri_FdwRoutine->BeginForeignInsert != NULL)\n partRelInfo->ri_FdwRoutine->BeginForeignInsert(mtstate, partRelInfo);\n\nNote that ExecInitRoutingInfo() is called only once for a partition\nwhen it is initialized after being inserted into for the first time.\n\nFor non-partitioned targets, I'd still say set ri_BatchSize in\nExecInitModifyTable().\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 16 Jan 2021 00:34:09 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "Tomas-san,\r\n\r\nFrom: Amit Langote <amitlangote09@gmail.com>\r\n> Good thing you reminded me that this is about inserts, 
and in that\r\n> case no, ExecInitModifyTable() doesn't know all leaf partitions, it\r\n> only sees the root table whose batch_size doesn't really matter. So\r\n> it's really ExecInitRoutingInfo() that I would recommend to set\r\n> ri_BatchSize; right after this block:\r\n> \r\n> /*\r\n> * If the partition is a foreign table, let the FDW init itself for\r\n> * routing tuples to the partition.\r\n> */\r\n> if (partRelInfo->ri_FdwRoutine != NULL &&\r\n> partRelInfo->ri_FdwRoutine->BeginForeignInsert != NULL)\r\n> partRelInfo->ri_FdwRoutine->BeginForeignInsert(mtstate, partRelInfo);\r\n> \r\n> Note that ExecInitRoutingInfo() is called only once for a partition\r\n> when it is initialized after being inserted into for the first time.\r\n> \r\n> For non-partitioned targets, I'd still say set ri_BatchSize in\r\n> ExecInitModifyTable().\r\n\r\nAttached is the patch that added a call to GetModifyBatchSize() to the above two places. The regression test passes.\r\n\r\n(FWIW, frankly, I prefer the previous version because the code is a bit smaller... Maybe we should refactor the code someday to reduce similar processing in both the partitioned case and the non-partitioned case.)\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa", "msg_date": "Mon, 18 Jan 2021 06:51:45 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "Hi, Takayuki-san:\n\n+           if (batch_size <= 0)\n+               ereport(ERROR,\n+                       (errcode(ERRCODE_SYNTAX_ERROR),\n+                        errmsg(\"%s requires a non-negative integer value\",\n\nIt seems the message doesn't match the check w.r.t. 
the batch size of 0.\n\n+   int         numInserted = numSlots;\n\nSince numInserted is filled by ExecForeignBatchInsert(), the initialization\ncan be done with 0.\n\nCheers\n\nOn Sun, Jan 17, 2021 at 10:52 PM tsunakawa.takay@fujitsu.com <\ntsunakawa.takay@fujitsu.com> wrote:\n\n> Tomas-san,\n>\n> From: Amit Langote <amitlangote09@gmail.com>\n> > Good thing you reminded me that this is about inserts, and in that\n> > case no, ExecInitModifyTable() doesn't know all leaf partitions, it\n> > only sees the root table whose batch_size doesn't really matter.  So\n> > it's really ExecInitRoutingInfo() that I would recommend to set\n> > ri_BatchSize; right after this block:\n> >\n> > /*\n> >  * If the partition is a foreign table, let the FDW init itself for\n> >  * routing tuples to the partition.\n> >  */\n> > if (partRelInfo->ri_FdwRoutine != NULL &&\n> >     partRelInfo->ri_FdwRoutine->BeginForeignInsert != NULL)\n> >     partRelInfo->ri_FdwRoutine->BeginForeignInsert(mtstate, partRelInfo);\n> >\n> > Note that ExecInitRoutingInfo() is called only once for a partition\n> > when it is initialized after being inserted into for the first time.\n> >\n> > For a non-partitioned targets, I'd still say set ri_BatchSize in\n> > ExecInitModifyTable().\n>\n> Attached is the patch that added call to GetModifyBatchSize() to the above\n> two places.  The regression test passes.\n>\n> (FWIW, frankly, I prefer the previous version because the code is a bit\n> smaller...  Maybe we should refactor the code someday to reduce similar\n> processings in both the partitioned case and non-partitioned case.)\n>\n>\n> Regards\n> Takayuki Tsunakawa\n>\n>\n", "msg_date": "Mon, 18 Jan 2021 07:27:54 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On 1/18/21 7:51 AM, tsunakawa.takay@fujitsu.com wrote:\n> Tomas-san,\n> \n> From: Amit Langote <amitlangote09@gmail.com>\n>> Good thing you reminded me that this is about inserts, and in that \n>> case no, ExecInitModifyTable() doesn't know all leaf partitions,\n>> it only sees the root table whose batch_size doesn't really matter.\n>> So it's really ExecInitRoutingInfo() that I would recommend to set \n>> ri_BatchSize; right after this block:\n>> \n>> /* * If the partition is a foreign table, let the FDW init itself\n>> for * routing tuples to the partition. */ if\n>> (partRelInfo->ri_FdwRoutine != NULL && \n>> partRelInfo->ri_FdwRoutine->BeginForeignInsert != NULL) \n>> partRelInfo->ri_FdwRoutine->BeginForeignInsert(mtstate,\n>> partRelInfo);\n>> \n>> Note that ExecInitRoutingInfo() is called only once for a\n>> partition when it is initialized after being inserted into for the\n>> first time.\n>> \n>> For a non-partitioned targets, I'd still say set ri_BatchSize in \n>> ExecInitModifyTable().\n> \n> Attached is the patch that added call to GetModifyBatchSize() to the\n> above two places. The regression test passes.\n> \n> (FWIW, frankly, I prefer the previous version because the code is a\n> bit smaller... 
Maybe we should refactor the code someday to reduce\n> similar processings in both the partitioned case and non-partitioned\n> case.)\n> \n\nLess code would be nice, but it's not always the right thing to do, \nunfortunately :-(\n\nI took a look at this - there's a bit of bitrot due to 708d165ddb92c, so \nattached is a rebased patch (0001) fixing that.\n\n0002 adds a couple comments and minor tweaks\n\n0003 addresses a couple shortcomings related to explain - we haven't \nbeen showing the batch size for EXPLAIN (VERBOSE), because there'd be no \nFdwState, so this tries to fix that. Furthermore, there were no tests \nfor EXPLAIN output with batch size, so I added a couple.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 18 Jan 2021 18:56:23 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Tomas Vondra <tomas.vondra@enterprisedb.com>\r\n> I took a look at this - there's a bit of bitrot due to 708d165ddb92c, so attached is\r\n> a rebased patch (0001) fixing that.\r\n> \r\n> 0002 adds a couple comments and minor tweaks\r\n> \r\n> 0003 addresses a couple shortcomings related to explain - we haven't been\r\n> showing the batch size for EXPLAIN (VERBOSE), because there'd be no\r\n> FdwState, so this tries to fix that. Furthermore, there were no tests for EXPLAIN\r\n> output with batch size, so I added a couple.\r\n\r\nThank you, good additions. 
They all look good.\r\nOnly one point: I think the code for retrieving batch_size in create_foreign_modify() can be replaced with a call to the new function in 0003.\r\n\r\nGod bless us.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Tue, 19 Jan 2021 01:28:44 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "Tomas-san, Zhihong-san,\r\n\r\nFrom: Zhihong Yu <zyu@yugabyte.com> \r\n> + if (batch_size <= 0)\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_SYNTAX_ERROR),\r\n> + errmsg(\"%s requires a non-negative integer value\",\r\n> \r\n> It seems the message doesn't match the check w.r.t. the batch size of 0.\r\n\r\nAh, \"non-negative\" should be \"positive\". The message for the existing fetch_size should be fixed too. Tomas-san, could you include this as well? I'm sorry to trouble you.\r\n\r\n\r\n> + int numInserted = numSlots;\r\n> \r\n> Since numInserted is filled by ExecForeignBatchInsert(), the initialization can be done with 0.\r\n\r\nNo, the code is correct, since the batch function requires the number of rows to insert as input.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Tue, 19 Jan 2021 01:56:50 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "On 1/19/21 2:28 AM, tsunakawa.takay@fujitsu.com wrote:\n> From: Tomas Vondra <tomas.vondra@enterprisedb.com>\n>> I took a look at this - there's a bit of bitrot due to\n>> 708d165ddb92c, so attached is a rebased patch (0001) fixing that.\n>> \n>> 0002 adds a couple comments and minor tweaks\n>> \n>> 0003 addresses a couple shortcomings related to explain - we\n>> haven't been showing the batch size for EXPLAIN (VERBOSE), because\n>> there'd be no FdwState, so this tries to fix that. 
Furthermore,\n>> there were no tests for EXPLAIN output with batch size, so I added\n>> a couple.\n> \n> Thank you, good additions. They all look good. Only one point: I\n> think the code for retrieving batch_size in create_foreign_modify()\n> can be replaced with a call to the new function in 0003.\n> \n\nOK. Can you prepare a final patch, squashing all the commits into a \nsingle one, and perhaps use the function in create_foreign_modify?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 19 Jan 2021 03:35:42 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Tomas Vondra <tomas.vondra@enterprisedb.com>\r\n> OK. Can you prepare a final patch, squashing all the commits into a\r\n> single one, and perhaps use the function in create_foreign_modify?\r\n\r\nAttached, including the message fix pointed out by Zhihong-san.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa", "msg_date": "Tue, 19 Jan 2021 03:50:03 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "Tsunakawa-san,\n\nOn Tue, Jan 19, 2021 at 12:50 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n> From: Tomas Vondra <tomas.vondra@enterprisedb.com>\n> > OK. 
Can you prepare a final patch, squashing all the commits into a\n> > single one, and perhaps use the function in create_foreign_modify?\n>\n> Attached, including the message fix pointed by Zaihong-san.\n\nThanks for adopting my suggestions regarding GetForeignModifyBatchSize().\n\nI apologize in advance for being maybe overly pedantic, but I noticed\nthat, in ExecInitModifyTable(), you decided to place the call outside\nthe loop that goes over resultRelations (shown below), although my\nintent was to ask to place it next to the BeginForeignModify() in that\nloop.\n\n resultRelInfo = mtstate->resultRelInfo;\n i = 0;\n forboth(l, node->resultRelations, l1, node->plans)\n {\n ...\n /* Also let FDWs init themselves for foreign-table result rels */\n if (!resultRelInfo->ri_usesFdwDirectModify &&\n resultRelInfo->ri_FdwRoutine != NULL &&\n resultRelInfo->ri_FdwRoutine->BeginForeignModify != NULL)\n {\n List *fdw_private = (List *) list_nth(node->fdwPrivLists, i);\n\n resultRelInfo->ri_FdwRoutine->BeginForeignModify(mtstate,\n resultRelInfo,\n fdw_private,\n i,\n eflags);\n }\n\nMaybe it's fine today because we only care about inserts and there's\nalways only one entry in the resultRelations list in that case, but\nthat may not remain the case in the future.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Jan 2021 13:51:53 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Amit Langote <amitlangote09@gmail.com>\r\n> I apologize in advance for being maybe overly pedantic, but I noticed\r\n> that, in ExecInitModifyTable(), you decided to place the call outside\r\n> the loop that goes over resultRelations (shown below), although my\r\n> intent was to ask to place it next to the BeginForeignModify() in that\r\n> loop.\r\n\r\nActually, I tried to do it (adding the GetModifyBatchSize() call after BeginForeignModify()), but it 
failed. Because postgresfdwGetModifyBatchSize() wants to know if RETURNING is specified, and ResultRelInfo->projectReturning is created after the above part. Considering the context where GetModifyBatchSize() implementations may want to know the environment, I placed the call as late as possible in the initialization phase. As for the future(?) multi-target DML statements, I think we can change this together with other many(?) parts that assume a single target table.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Tue, 19 Jan 2021 05:06:34 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "On Tue, Jan 19, 2021 at 2:06 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n> From: Amit Langote <amitlangote09@gmail.com>\n> > I apologize in advance for being maybe overly pedantic, but I noticed\n> > that, in ExecInitModifyTable(), you decided to place the call outside\n> > the loop that goes over resultRelations (shown below), although my\n> > intent was to ask to place it next to the BeginForeignModify() in that\n> > loop.\n>\n> Actually, I tried to do it (adding the GetModifyBatchSize() call after BeginForeignModify()), but it failed. Because postgresfdwGetModifyBatchSize() wants to know if RETURNING is specified, and ResultRelInfo->projectReturning is created after the above part. Considering the context where GetModifyBatchSize() implementations may want to know the environment, I placed the call as late as possible in the initialization phase. As for the future(?) multi-target DML statements, I think we can change this together with other many(?) 
parts that assume a single target table.\n\nOkay, sometime later then.\n\nI wasn't sure if bringing it up here would be appropriate, but there's\na patch by me to refactor ModfiyTable result relation allocation that\nwill have to remember to move this code along to an appropriate place\n[1]. Thanks for the tip about the dependency on how RETURNING is\nhandled. I will remember it when rebasing my patch over this.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://commitfest.postgresql.org/31/2621/\n\n\n", "msg_date": "Tue, 19 Jan 2021 15:23:51 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "\n\nOn 1/19/21 7:23 AM, Amit Langote wrote:\n> On Tue, Jan 19, 2021 at 2:06 PM tsunakawa.takay@fujitsu.com\n> <tsunakawa.takay@fujitsu.com> wrote:\n>> From: Amit Langote <amitlangote09@gmail.com>\n>>> I apologize in advance for being maybe overly pedantic, but I noticed\n>>> that, in ExecInitModifyTable(), you decided to place the call outside\n>>> the loop that goes over resultRelations (shown below), although my\n>>> intent was to ask to place it next to the BeginForeignModify() in that\n>>> loop.\n>>\n>> Actually, I tried to do it (adding the GetModifyBatchSize() call after BeginForeignModify()), but it failed. Because postgresfdwGetModifyBatchSize() wants to know if RETURNING is specified, and ResultRelInfo->projectReturning is created after the above part. Considering the context where GetModifyBatchSize() implementations may want to know the environment, I placed the call as late as possible in the initialization phase. As for the future(?) multi-target DML statements, I think we can change this together with other many(?) 
parts that assume a single target table.\n> \n> Okay, sometime later then.\n> \n> I wasn't sure if bringing it up here would be appropriate, but there's\n> a patch by me to refactor ModfiyTable result relation allocation that\n> will have to remember to move this code along to an appropriate place\n> [1]. Thanks for the tip about the dependency on how RETURNING is\n> handled. I will remember it when rebasing my patch over this.\n> \n\nThanks. The last version (v12) should be addressing all the comments and \nseems fine to me, so barring objections I'll get that pushed shortly.\n\nOne thing that seems a bit annoying is that with the partitioned table \nthe explain (verbose) looks like this:\n\n QUERY PLAN\n-----------------------------------------------------\n Insert on public.batch_table\n -> Function Scan on pg_catalog.generate_series i\n Output: i.i\n Function Call: generate_series(1, 66)\n(4 rows)\n\nThat is, there's no information about the batch size :-( But AFAICS \nthat's due to how explain shows (or rather does not) partitions in this \ntype of plan.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 19 Jan 2021 17:01:05 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On Wed, Jan 20, 2021 at 1:01 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 1/19/21 7:23 AM, Amit Langote wrote:\n> > On Tue, Jan 19, 2021 at 2:06 PM tsunakawa.takay@fujitsu.com\n> >> Actually, I tried to do it (adding the GetModifyBatchSize() call after BeginForeignModify()), but it failed. Because postgresfdwGetModifyBatchSize() wants to know if RETURNING is specified, and ResultRelInfo->projectReturning is created after the above part. 
Considering the context where GetModifyBatchSize() implementations may want to know the environment, I placed the call as late as possible in the initialization phase. As for the future(?) multi-target DML statements, I think we can change this together with other many(?) parts that assume a single target table.\n> >\n> > Okay, sometime later then.\n> >\n> > I wasn't sure if bringing it up here would be appropriate, but there's\n> > a patch by me to refactor ModfiyTable result relation allocation that\n> > will have to remember to move this code along to an appropriate place\n> > [1]. Thanks for the tip about the dependency on how RETURNING is\n> > handled. I will remember it when rebasing my patch over this.\n> >\n>\n> Thanks. The last version (v12) should be addressing all the comments and\n> seems fine to me, so barring objections I'll get that pushed shortly.\n\n+1\n\n> One thing that seems a bit annoying is that with the partitioned table\n> the explain (verbose) looks like this:\n>\n> QUERY PLAN\n> -----------------------------------------------------\n> Insert on public.batch_table\n> -> Function Scan on pg_catalog.generate_series i\n> Output: i.i\n> Function Call: generate_series(1, 66)\n> (4 rows)\n>\n> That is, there's no information about the batch size :-( But AFAICS\n> that's due to how explain shows (or rather does not) partitions in this\n> type of plan.\n\nYeah. Partition result relations are always lazily allocated for\nINSERT, so EXPLAIN (without ANALYZE) has no idea what to show for\nthem, nor does it know which partitions will be used in the first\nplace. 
With ANALYZE however, you could get them from\nes_tuple_routing_result_relations and maybe list them if you want, but\nthat sounds like a project on its own.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Jan 2021 11:05:32 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "OK, pushed after a little bit of additional polishing (mostly comments).\n\nThanks everyone!\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 21 Jan 2021 00:00:34 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "Hmm, seems that florican doesn't like this :-(\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=florican&dt=2021-01-20%2023%3A08%3A15\n\nIt's a i386 machine running FreeBSD, so not sure what exactly it's picky \nabout. But when I tried running this under valgrind, I get some strange \nfailures in the new chunk in ExecInitModifyTable:\n\n /*\n * Determine if the FDW supports batch insert and determine the batch\n * size (a FDW may support batching, but it may be disabled for the\n * server/table).\n */\n if (!resultRelInfo->ri_usesFdwDirectModify &&\n operation == CMD_INSERT &&\n resultRelInfo->ri_FdwRoutine != NULL &&\n resultRelInfo->ri_FdwRoutine->GetForeignModifyBatchSize &&\n resultRelInfo->ri_FdwRoutine->ExecForeignBatchInsert)\n resultRelInfo->ri_BatchSize =\n \nresultRelInfo->ri_FdwRoutine->GetForeignModifyBatchSize(resultRelInfo);\n else\n resultRelInfo->ri_BatchSize = 1;\n\n Assert(resultRelInfo->ri_BatchSize >= 1);\n\nIt seems as if the resultRelInfo is not initialized, or something like \nthat. I wouldn't be surprised if the 32-bit machine was pickier and \nfailing because of that.\n\nA sample of the valgrind log is attached. 
It's pretty much just \nrepetitions of these three reports.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 21 Jan 2021 00:52:25 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> OK, pushed after a little bit of additional polishing (mostly comments).\n> Thanks everyone!\n\nflorican reports this is seriously broken on 32-bit hardware:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=florican&dt=2021-01-20%2023%3A08%3A15\n\nFirst guess is incorrect memory-allocation computations ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 20 Jan 2021 18:59:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "Hi,\nThe assignment to resultRelInfo is done when junk_filter_needed is true:\n\n if (junk_filter_needed)\n {\n resultRelInfo = mtstate->resultRelInfo;\n\nShould the code for determining batch size access mtstate->resultRelInfo\ndirectly ?\n\ndiff --git a/src/backend/executor/nodeModifyTable.c\nb/src/backend/executor/nodeModifyTable.c\nindex 9c36860704..a6a814454d 100644\n--- a/src/backend/executor/nodeModifyTable.c\n+++ b/src/backend/executor/nodeModifyTable.c\n@@ -2798,17 +2798,17 @@ ExecInitModifyTable(ModifyTable *node, EState\n*estate, int eflags)\n * size (a FDW may support batching, but it may be disabled for the\n * server/table).\n */\n- if (!resultRelInfo->ri_usesFdwDirectModify &&\n+ if (!mtstate->resultRelInfo->ri_usesFdwDirectModify &&\n operation == CMD_INSERT &&\n- resultRelInfo->ri_FdwRoutine != NULL &&\n- resultRelInfo->ri_FdwRoutine->GetForeignModifyBatchSize &&\n- resultRelInfo->ri_FdwRoutine->ExecForeignBatchInsert)\n- resultRelInfo->ri_BatchSize =\n-\n 
resultRelInfo->ri_FdwRoutine->GetForeignModifyBatchSize(resultRelInfo);\n+ mtstate->resultRelInfo->ri_FdwRoutine != NULL &&\n+ mtstate->resultRelInfo->ri_FdwRoutine->GetForeignModifyBatchSize &&\n+ mtstate->resultRelInfo->ri_FdwRoutine->ExecForeignBatchInsert)\n+ mtstate->resultRelInfo->ri_BatchSize =\n+\n mtstate->resultRelInfo->ri_FdwRoutine->GetForeignModifyBatchSize(mtstate->resultRelInfo);\n else\n- resultRelInfo->ri_BatchSize = 1;\n+ mtstate->resultRelInfo->ri_BatchSize = 1;\n\n- Assert(resultRelInfo->ri_BatchSize >= 1);\n+ Assert(mtstate->resultRelInfo->ri_BatchSize >= 1);\n\n /*\n * Lastly, if this is not the primary (canSetTag) ModifyTable node,\nadd it\n\nCheers\n\nOn Wed, Jan 20, 2021 at 3:52 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> Hmm, seems that florican doesn't like this :-(\n>\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=florican&dt=2021-01-20%2023%3A08%3A15\n>\n> It's a i386 machine running FreeBSD, so not sure what exactly it's picky\n> about. But when I tried running this under valgrind, I get some strange\n> failures in the new chunk in ExecInitModifyTable:\n>\n> /*\n> * Determine if the FDW supports batch insert and determine the batch\n> * size (a FDW may support batching, but it may be disabled for the\n> * server/table).\n> */\n> if (!resultRelInfo->ri_usesFdwDirectModify &&\n> operation == CMD_INSERT &&\n> resultRelInfo->ri_FdwRoutine != NULL &&\n> resultRelInfo->ri_FdwRoutine->GetForeignModifyBatchSize &&\n> resultRelInfo->ri_FdwRoutine->ExecForeignBatchInsert)\n> resultRelInfo->ri_BatchSize =\n>\n> resultRelInfo->ri_FdwRoutine->GetForeignModifyBatchSize(resultRelInfo);\n> else\n> resultRelInfo->ri_BatchSize = 1;\n>\n> Assert(resultRelInfo->ri_BatchSize >= 1);\n>\n> It seems as if the resultRelInfo is not initialized, or something like\n> that. I wouldn't be surprised if the 32-bit machine was pickier and\n> failing because of that.\n>\n> A sample of the valgrind log is attached. 
It's pretty much just\n> repetitions of these three reports.\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n", "msg_date": "Wed, 20 Jan 2021 16:17:39 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On 1/21/21 12:52 AM, Tomas Vondra wrote:\n> Hmm, seems that florican doesn't like this :-(\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=florican&dt=2021-01-20%2023%3A08%3A15 \n> \n> \n> It's a i386 machine running FreeBSD, so not sure what exactly it's picky \n> about. 
But when I tried running this under valgrind, I get some strange \n> failures in the new chunk in ExecInitModifyTable:\n> \n>   /*\n>    * Determine if the FDW supports batch insert and determine the batch\n>    * size (a FDW may support batching, but it may be disabled for the\n>    * server/table).\n>    */\n>   if (!resultRelInfo->ri_usesFdwDirectModify &&\n>       operation == CMD_INSERT &&\n>       resultRelInfo->ri_FdwRoutine != NULL &&\n>       resultRelInfo->ri_FdwRoutine->GetForeignModifyBatchSize &&\n>       resultRelInfo->ri_FdwRoutine->ExecForeignBatchInsert)\n>       resultRelInfo->ri_BatchSize =\n> \n> resultRelInfo->ri_FdwRoutine->GetForeignModifyBatchSize(resultRelInfo);\n>   else\n>       resultRelInfo->ri_BatchSize = 1;\n> \n>   Assert(resultRelInfo->ri_BatchSize >= 1);\n> \n> It seems as if the resultRelInfo is not initialized, or something like \n> that. I wouldn't be surprised if the 32-bit machine was pickier and \n> failing because of that.\n> \n> A sample of the valgrind log is attached. It's pretty much just \n> repetitions of these three reports.\n> \n\nOK, it's definitely accessing uninitialized memory, because the \nresultRelInfo (on line 2801, i.e. 
the \"if\" condition) looks like this:\n\n(gdb) p resultRelInfo\n$1 = (ResultRelInfo *) 0xe595988\n(gdb) p *resultRelInfo\n$2 = {type = 2139062142, ri_RangeTableIndex = 2139062143, \nri_RelationDesc = 0x7f7f7f7f7f7f7f7f, ri_NumIndices = 2139062143, \nri_IndexRelationDescs = 0x7f7f7f7f7f7f7f7f, ri_IndexRelationInfo = \n0x7f7f7f7f7f7f7f7f,\n ri_TrigDesc = 0x7f7f7f7f7f7f7f7f, ri_TrigFunctions = \n0x7f7f7f7f7f7f7f7f, ri_TrigWhenExprs = 0x7f7f7f7f7f7f7f7f, \nri_TrigInstrument = 0x7f7f7f7f7f7f7f7f, ri_ReturningSlot = \n0x7f7f7f7f7f7f7f7f, ri_TrigOldSlot = 0x7f7f7f7f7f7f7f7f,\n ri_TrigNewSlot = 0x7f7f7f7f7f7f7f7f, ri_FdwRoutine = \n0x7f7f7f7f7f7f7f7f, ri_FdwState = 0x7f7f7f7f7f7f7f7f, \nri_usesFdwDirectModify = 127, ri_NumSlots = 2139062143, ri_BatchSize = \n2139062143, ri_Slots = 0x7f7f7f7f7f7f7f7f,\n ri_PlanSlots = 0x7f7f7f7f7f7f7f7f, ri_WithCheckOptions = \n0x7f7f7f7f7f7f7f7f, ri_WithCheckOptionExprs = 0x7f7f7f7f7f7f7f7f, \nri_ConstraintExprs = 0x7f7f7f7f7f7f7f7f, ri_GeneratedExprs = \n0x7f7f7f7f7f7f7f7f,\n ri_NumGeneratedNeeded = 2139062143, ri_junkFilter = \n0x7f7f7f7f7f7f7f7f, ri_returningList = 0x7f7f7f7f7f7f7f7f, \nri_projectReturning = 0x7f7f7f7f7f7f7f7f, ri_onConflictArbiterIndexes = \n0x7f7f7f7f7f7f7f7f,\n ri_onConflict = 0x7f7f7f7f7f7f7f7f, ri_PartitionCheckExpr = \n0x7f7f7f7f7f7f7f7f, ri_PartitionRoot = 0x7f7f7f7f7f7f7f7f, \nri_RootToPartitionMap = 0x8, ri_PartitionTupleSlot = 0x8, \nri_ChildToRootMap = 0xe5952b0,\n ri_CopyMultiInsertBuffer = 0xe596740}\n(gdb)\n\nI may be wrong, but the most likely explanation seems to be this is due \nto the junk filter initialization, which simply moves past the end of \nthe mtstate->resultRelInfo array.\n\nIt kinda seems the GetForeignModifyBatchSize call should happen before \nthat block. The attached patch fixes this for me (i.e. 
regression tests \npass with no valgrind reports).\n\nOr did I get that wrong?\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 21 Jan 2021 01:21:29 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On 1/21/21 12:59 AM, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> OK, pushed after a little bit of additional polishing (mostly comments).\n>> Thanks everyone!\n> \n> florican reports this is seriously broken on 32-bit hardware:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=florican&dt=2021-01-20%2023%3A08%3A15\n> \n> First guess is incorrect memory-allocation computations ...\n> \n\nI know, although it seems more like an access to uninitialized memory. \nI've already posted a patch that resolves that for me on 64-bits (per \nvalgrind, I suppose it's the same issue).\n\nI'm working on reproducing it on 32-bits, hopefully it won't take long.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 21 Jan 2021 01:46:29 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "\n\nOn 1/21/21 1:17 AM, Zhihong Yu wrote:\n> Hi,\n> The assignment to resultRelInfo is done when junk_filter_needed is true:\n> \n>          if (junk_filter_needed)\n>          {\n>              resultRelInfo = mtstate->resultRelInfo;\n> \n> Should the code for determining batch size access mtstate->resultRelInfo \n> directly ?\n> \n\nIMO the issue is that code iterates over all plans and moves to the next \nfor each one:\n\n     resultRelInfo++;\n\nso it ends up pointing past the last element, hence the failures. 
So \nyeah, either the code needs to move before the loop (per my patch), or \nwe need to access mtstate->resultRelInfo directly.\n\nI'm pretty amazed this did not crash during any of the many regression \nruns I did recently.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 21 Jan 2021 01:56:25 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "Hi, Tomas:\nIn my opinion, my patch is a little better.\nSuppose one of the conditions in the if block changes in between the start\nof loop and the end of the loop:\n\n * Determine if the FDW supports batch insert and determine the batch\n * size (a FDW may support batching, but it may be disabled for the\n * server/table).\n\nMy patch would reflect that change. I guess this was the reason the if /\nelse block was placed there in the first place.\n\nCheers\n\nOn Wed, Jan 20, 2021 at 4:56 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n>\n>\n> On 1/21/21 1:17 AM, Zhihong Yu wrote:\n> > Hi,\n> > The assignment to resultRelInfo is done when junk_filter_needed is true:\n> >\n> > if (junk_filter_needed)\n> > {\n> > resultRelInfo = mtstate->resultRelInfo;\n> >\n> > Should the code for determining batch size access mtstate->resultRelInfo\n> > directly ?\n> >\n>\n> IMO the issue is that code iterates over all plans and moves to the next\n> for each one:\n>\n> resultRelInfo++;\n>\n> so it ends up pointing past the last element, hence the failures. 
So\n> yeah, either the code needs to move before the loop (per my patch), or\n> we need to access mtstate->resultRelInfo directly.\n>\n> I'm pretty amazed this did not crash during any of the many regression\n> runs I did recently.\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n", "msg_date": "Wed, 20 Jan 2021 17:02:56 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "\n\nOn 1/21/21 2:02 AM, Zhihong Yu wrote:\n> Hi, Tomas:\n> In my opinion, my patch is a little better.\n> Suppose one of the conditions in the if block changes in between the \n> start of loop and the end of the loop:\n> \n>      * Determine if the FDW supports batch insert and determine the batch\n>      * size (a FDW may support batching, but it may be disabled for the\n>      * server/table).\n> \n> My patch would reflect that change. I guess this was the reason the if / \n> else block was placed there in the first place.\n> \n\nBut can it change? All the loop does is extracting junk attributes from \nthe plans, it does not modify anything related to the batching. 
Or maybe \nI just don't understand what you mean.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 21 Jan 2021 02:11:32 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "Hi,\nDo we need to consider how this part of code inside ExecInitModifyTable()\nwould evolve ?\n\nI think placing the compound condition toward the end\nof ExecInitModifyTable() is reasonable because it checks the latest\ninformation.\n\nRegards\n\nOn Wed, Jan 20, 2021 at 5:11 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n>\n>\n> On 1/21/21 2:02 AM, Zhihong Yu wrote:\n> > Hi, Tomas:\n> > In my opinion, my patch is a little better.\n> > Suppose one of the conditions in the if block changes in between the\n> > start of loop and the end of the loop:\n> >\n> > * Determine if the FDW supports batch insert and determine the\n> batch\n> > * size (a FDW may support batching, but it may be disabled for the\n> > * server/table).\n> >\n> > My patch would reflect that change. I guess this was the reason the if /\n> > else block was placed there in the first place.\n> >\n>\n> But can it change? All the loop does is extracting junk attributes from\n> the plans, it does not modify anything related to the batching. 
Or maybe\n> I just don't understand what you mean.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n", "msg_date": "Wed, 20 Jan 2021 17:16:43 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> I may be wrong, but the most likely explanation seems to be this is due \n> to the junk filter initialization, which simply moves past the end of \n> the mtstate->resultRelInfo array.\n\nresultRelInfo is certainly pointing at garbage at that point.\n\n> It kinda seems the GetForeignModifyBatchSize call should happen before \n> that block. The attached patch fixes this for me (i.e. 
regression tests \n> pass with no valgrind reports.\n\n> Or did I get that wrong?\n\nDon't we need to initialize ri_BatchSize for *each* resultrelinfo,\nnot merely the first one? That is, this new code needs to be\nsomewhere inside a loop over the result rels.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 20 Jan 2021 20:22:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On Thu, Jan 21, 2021 at 9:56 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 1/21/21 1:17 AM, Zhihong Yu wrote:\n> > Hi,\n> > The assignment to resultRelInfo is done when junk_filter_needed is true:\n> >\n> > if (junk_filter_needed)\n> > {\n> > resultRelInfo = mtstate->resultRelInfo;\n> >\n> > Should the code for determining batch size access mtstate->resultRelInfo\n> > directly ?\n> >\n>\n> IMO the issue is that code iterates over all plans and moves to the next\n> for each one:\n>\n> resultRelInfo++;\n>\n> so it ends up pointing past the last element, hence the failures. So\n> yeah, either the code needs to move before the loop (per my patch), or\n> we need to access mtstate->resultRelInfo directly.\n\nAccessing mtstate->resultRelInfo directly would do. 
The only\nconstraint on where this block should be placed is that\nri_projectReturning must be valid as of calling\nGetForeignModifyBatchSize(), as Tsunakawa-san pointed out upthread.\nSo, after this block in ExecInitModifyTable:\n\n /*\n * Initialize RETURNING projections if needed.\n */\n if (node->returningLists)\n {\n ....\n /*\n * Build a projection for each result rel.\n */\n resultRelInfo = mtstate->resultRelInfo;\n foreach(l, node->returningLists)\n {\n List *rlist = (List *) lfirst(l);\n\n resultRelInfo->ri_returningList = rlist;\n resultRelInfo->ri_projectReturning =\n ExecBuildProjectionInfo(rlist, econtext, slot, &mtstate->ps,\n resultRelInfo->ri_RelationDesc->rd_att);\n resultRelInfo++;\n }\n }\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Jan 2021 10:24:55 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Zhihong Yu <zyu@yugabyte.com>\r\n> Do we need to consider how this part of code inside ExecInitModifyTable() would evolve ?\r\n\r\n> I think placing the compound condition toward the end of ExecInitModifyTable() is reasonable because it checks the latest information.\r\n\r\n+1 for Zaihong-san's idea. But instead of rewriting every resultRelInfo to mtstate->resultRelInfo, which makes it a bit harder to read, I'd like to suggest just adding \"resultRelInfo = mtstate->resultRelInfo;\" immediately before the if block.\r\n\r\nThanks a lot, all for helping to solve the problem quickly!\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Thu, 21 Jan 2021 01:37:35 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "On 1/21/21 2:24 AM, Amit Langote wrote:\n> On Thu, Jan 21, 2021 at 9:56 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> On 1/21/21 1:17 AM, Zhihong Yu wrote:\n>>> Hi,\n>>> The assignment to resultRelInfo is done when junk_filter_needed is true:\n>>>\n>>> if (junk_filter_needed)\n>>> {\n>>> resultRelInfo = mtstate->resultRelInfo;\n>>>\n>>> Should the code for determining batch size access mtstate->resultRelInfo\n>>> directly ?\n>>>\n>>\n>> IMO the issue is that code iterates over all plans and moves to the next\n>> for each one:\n>>\n>> resultRelInfo++;\n>>\n>> so it ends up pointing past the last element, hence the failures. So\n>> yeah, either the code needs to move before the loop (per my patch), or\n>> we need to access mtstate->resultRelInfo directly.\n> \n> Accessing mtstate->resultRelInfo directly would do. 
The only\n> constraint on where this block should be placed is that\n> ri_projectReturning must be valid as of calling\n> GetForeignModifyBatchSize(), as Tsunakawa-san pointed out upthread.\n> So, after this block in ExecInitModifyTable:\n> \n> /*\n> * Initialize RETURNING projections if needed.\n> */\n> if (node->returningLists)\n> {\n> ....\n> /*\n> * Build a projection for each result rel.\n> */\n> resultRelInfo = mtstate->resultRelInfo;\n> foreach(l, node->returningLists)\n> {\n> List *rlist = (List *) lfirst(l);\n> \n> resultRelInfo->ri_returningList = rlist;\n> resultRelInfo->ri_projectReturning =\n> ExecBuildProjectionInfo(rlist, econtext, slot, &mtstate->ps,\n> resultRelInfo->ri_RelationDesc->rd_att);\n> resultRelInfo++;\n> }\n> }\n> \n\nRight. But I think Tom is right this should initialize ri_BatchSize for \nall the resultRelInfo elements, not just the first one. Per the attached \npatch, which resolves the issue both on x86_64 and armv7l for me.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 21 Jan 2021 02:42:03 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "\n\nOn 1/21/21 2:22 AM, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> I may be wrong, but the most likely explanation seems to be this is due\n>> to the junk filter initialization, which simply moves past the end of\n>> the mtstate->resultRelInfo array.\n> \n> resultRelInfo is certainly pointing at garbage at that point.\n> \n\nYup. It's pretty amazing the x86 machines seem to be mostly OK with it.\n\n>> It kinda seems the GetForeignModifyBatchSize call should happen before\n>> that block. The attached patch fixes this for me (i.e. 
regression tests\n>> pass with no valgrind reports.\n> \n>> Or did I get that wrong?\n> \n> Don't we need to initialize ri_BatchSize for *each* resultrelinfo,\n> not merely the first one? That is, this new code needs to be\n> somewhere inside a loop over the result rels.\n> \n\nYeah, I think you're right. That's an embarrassing oversight :-(\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 21 Jan 2021 02:45:12 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "Hi, Takayuki-san:\nMy first name is Zhihong.\n\nYou can call me Ted if you want to save some typing :-)\n\nCheers\n\nOn Wed, Jan 20, 2021 at 5:37 PM tsunakawa.takay@fujitsu.com <\ntsunakawa.takay@fujitsu.com> wrote:\n\n> From: Zhihong Yu <zyu@yugabyte.com>\n>\n> > Do we need to consider how this part of code inside\n> ExecInitModifyTable() would evolve ?\n>\n>\n>\n> > I think placing the compound condition toward the end of\n> ExecInitModifyTable() is reasonable because it checks the latest\n> information.\n>\n>\n>\n> +1 for Zaihong-san's idea. 
But instead of rewriting every resultRelInfo\n> to mtstate->resultRelInfo, which makes it a bit harder to read, I'd like to\n> suggest just adding \"resultRelInfo = mtstate->resultRelInfo;\" immediately\n> before the if block.\n>\n>\n>\n> Thanks a lot, all for helping to solve the problem quickly!\n>\n>\n>\n>\n>\n> Regards\n>\n> Takayuki Tsunakawa\n>\n>\n>\n", "msg_date": "Wed, 20 Jan 2021 17:49:00 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Tomas Vondra <tomas.vondra@enterprisedb.com>\r\n> Right, that's pretty much what I ended up doing (without the CMD_INSERT\r\n> check it'd add batching info to explain for updates too, for example).\r\n> I'll do a bit more testing on the attached patch, but I think that's the right fix to\r\n> push.\r\n\r\nI think your patch is perfect in the sense that it's ready for the future multi-target DML support. +1\r\n\r\nJust for learning, could anyone tell me what this loop is for? I thought current Postgres's DML supports a single target table, so it's enough to handle the first element of mtstate->resultRelInfo. 
In that sense, Amit-san and I agreed that we don't put the if block in the for loop yet.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Thu, 21 Jan 2021 01:52:48 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "On Thu, Jan 21, 2021 at 10:42 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 1/21/21 2:24 AM, Amit Langote wrote:\n> > On Thu, Jan 21, 2021 at 9:56 AM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >> On 1/21/21 1:17 AM, Zhihong Yu wrote:\n> >>> Hi,\n> >>> The assignment to resultRelInfo is done when junk_filter_needed is true:\n> >>>\n> >>> if (junk_filter_needed)\n> >>> {\n> >>> resultRelInfo = mtstate->resultRelInfo;\n> >>>\n> >>> Should the code for determining batch size access mtstate->resultRelInfo\n> >>> directly ?\n> >>>\n> >>\n> >> IMO the issue is that code iterates over all plans and moves to the next\n> >> for each one:\n> >>\n> >> resultRelInfo++;\n> >>\n> >> so it ends up pointing past the last element, hence the failures. So\n> >> yeah, either the code needs to move before the loop (per my patch), or\n> >> we need to access mtstate->resultRelInfo directly.\n> >\n> > Accessing mtstate->resultRelInfo directly would do. 
The only\n> > constraint on where this block should be placed is that\n> > ri_projectReturning must be valid as of calling\n> > GetForeignModifyBatchSize(), as Tsunakawa-san pointed out upthread.\n> > So, after this block in ExecInitModifyTable:\n> >\n> > /*\n> > * Initialize RETURNING projections if needed.\n> > */\n> > if (node->returningLists)\n> > {\n> > ....\n> > /*\n> > * Build a projection for each result rel.\n> > */\n> > resultRelInfo = mtstate->resultRelInfo;\n> > foreach(l, node->returningLists)\n> > {\n> > List *rlist = (List *) lfirst(l);\n> >\n> > resultRelInfo->ri_returningList = rlist;\n> > resultRelInfo->ri_projectReturning =\n> > ExecBuildProjectionInfo(rlist, econtext, slot, &mtstate->ps,\n> > resultRelInfo->ri_RelationDesc->rd_att);\n> > resultRelInfo++;\n> > }\n> > }\n> >\n>\n> Right. But I think Tom is right this should initialize ri_BatchSize for\n> all the resultRelInfo elements, not just the first one. Per the attached\n> patch, which resolves the issue both on x86_64 and armv7l for me.\n\n+1 in general. To avoid looping uselessly in the case of\nUPDATE/DELETE where batching can't be used today, I'd suggest putting\nif (operation == CMD_INSERT) around the loop.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Jan 2021 10:53:14 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Zhihong Yu <zyu@yugabyte.com>\r\n> My first name is Zhihong.\r\n\r\n> You can call me Ted if you want to save some typing :-)\r\n\r\nAh, I'm very sorry. Thank you, let me call you Ted then. That can't be mistaken.\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n\n\n\n\n\n\n\n\n\n\n\n\nFrom: Zhihong Yu <zyu@yugabyte.com>\r\n\n> My first name is Zhihong.\n\n\n> You can call me Ted if you want to save some typing :-)\n\n\n \nAh, I'm very sorry.  Thank you, let me call you Ted then.  
That can't be mistaken.\n \nRegards\nTakayuki Tsunakawa", "msg_date": "Thu, 21 Jan 2021 01:57:29 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "On 1/21/21 2:53 AM, Amit Langote wrote:\n> On Thu, Jan 21, 2021 at 10:42 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> On 1/21/21 2:24 AM, Amit Langote wrote:\n>>> On Thu, Jan 21, 2021 at 9:56 AM Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>> On 1/21/21 1:17 AM, Zhihong Yu wrote:\n>>>>> Hi,\n>>>>> The assignment to resultRelInfo is done when junk_filter_needed is true:\n>>>>>\n>>>>> if (junk_filter_needed)\n>>>>> {\n>>>>> resultRelInfo = mtstate->resultRelInfo;\n>>>>>\n>>>>> Should the code for determining batch size access mtstate->resultRelInfo\n>>>>> directly ?\n>>>>>\n>>>>\n>>>> IMO the issue is that code iterates over all plans and moves to the next\n>>>> for each one:\n>>>>\n>>>> resultRelInfo++;\n>>>>\n>>>> so it ends up pointing past the last element, hence the failures. So\n>>>> yeah, either the code needs to move before the loop (per my patch), or\n>>>> we need to access mtstate->resultRelInfo directly.\n>>>\n>>> Accessing mtstate->resultRelInfo directly would do. 
The only\n>>> constraint on where this block should be placed is that\n>>> ri_projectReturning must be valid as of calling\n>>> GetForeignModifyBatchSize(), as Tsunakawa-san pointed out upthread.\n>>> So, after this block in ExecInitModifyTable:\n>>>\n>>> /*\n>>> * Initialize RETURNING projections if needed.\n>>> */\n>>> if (node->returningLists)\n>>> {\n>>> ....\n>>> /*\n>>> * Build a projection for each result rel.\n>>> */\n>>> resultRelInfo = mtstate->resultRelInfo;\n>>> foreach(l, node->returningLists)\n>>> {\n>>> List *rlist = (List *) lfirst(l);\n>>>\n>>> resultRelInfo->ri_returningList = rlist;\n>>> resultRelInfo->ri_projectReturning =\n>>> ExecBuildProjectionInfo(rlist, econtext, slot, &mtstate->ps,\n>>> resultRelInfo->ri_RelationDesc->rd_att);\n>>> resultRelInfo++;\n>>> }\n>>> }\n>>>\n>>\n>> Right. But I think Tom is right this should initialize ri_BatchSize for\n>> all the resultRelInfo elements, not just the first one. Per the attached\n>> patch, which resolves the issue both on x86_64 and armv7l for me.\n> \n> +1 in general. To avoid looping uselessly in the case of\n> UPDATE/DELETE where batching can't be used today, I'd suggest putting\n> if (operation == CMD_INSERT) around the loop.\n> \n\nRight, that's pretty much what I ended up doing (without the CMD_INSERT \ncheck it'd add batching info to explain for updates too, for example). \nI'll do a bit more testing on the attached patch, but I think that's the \nright fix to push.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 21 Jan 2021 03:00:42 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> writes:\n> Just for learning, could anyone tell me what this loop for? 
I thought current Postgres's DML supports a single target table, so it's enough to handle the first element of mtstate->resultRelInfo.\n\nThe \"single target table\" could be partitioned, in which case there'll be\nmultiple resultrelinfos, some of which could be foreign tables.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 20 Jan 2021 21:03:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Tomas Vondra <tomas.vondra@enterprisedb.com>\r\n> Right, that's pretty much what I ended up doing (without the CMD_INSERT\r\n> check it'd add batching info to explain for updates too, for example).\r\n> I'll do a bit more testing on the attached patch, but I think that's the right fix to\r\n> push.\r\n\r\nThanks to the outer check for operation == CMD_INSERT, the inner one became unnecessary.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Thu, 21 Jan 2021 02:09:17 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Tom Lane <tgl@sss.pgh.pa.us>\n> The \"single target table\" could be partitioned, in which case there'll be\n> multiple resultrelinfos, some of which could be foreign tables.\n\nThank you. I thought so at first, but later I found that ExecInsert() only handles one element in mtstate->resultRelInfo. So I thought just the first element is processed in INSERT case.\n\nI understood (guessed) the for loop is for UPDATE and DELETE. EXPLAIN without ANALYZE UPDATE/DELETE on a partitioned table shows partitions, which would be mtstate->resultRelInfo. 
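The per-result-rel loop that prompted the question can be sketched in miniature. The types and the function below (CmdKind, Rel, init_batch_sizes) are hypothetical simplifications, not the committed code; they only mirror its shape — a partitioned target expands to several result rels, and for INSERT each one gets a batch size of at least 1:

```c
#include <assert.h>

/* Hypothetical, simplified types -- not the real executor structs. */
typedef enum CmdKind { K_INSERT, K_UPDATE, K_DELETE } CmdKind;

typedef struct Rel
{
    int is_foreign;             /* partition backed by an FDW? */
    int batch_size;             /* analogue of ri_BatchSize */
} Rel;

/* Only INSERT supports batching, and because a single partitioned
 * target can expand to many result rels (some foreign, some local),
 * every element has its batch size set, not just the first. */
void
init_batch_sizes(CmdKind op, Rel *rels, int nrels, int fdw_batch)
{
    int i;

    if (op != K_INSERT)
        return;                 /* updates/deletes don't batch */

    for (i = 0; i < nrels; i++)
        rels[i].batch_size = rels[i].is_foreign ? fdw_batch : 1;
}
```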
EXPLAIN on INSERT doesn't show partitions, so I think INSERT will find relevant partitions based on input rows during execution.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Thu, 21 Jan 2021 02:22:34 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "On 1/21/21 3:09 AM, tsunakawa.takay@fujitsu.com wrote:\n> From: Tomas Vondra <tomas.vondra@enterprisedb.com>\n>> Right, that's pretty much what I ended up doing (without the CMD_INSERT\n>> check it'd add batching info to explain for updates too, for example).\n>> I'll do a bit more testing on the attached patch, but I think that's the right fix to\n>> push.\n> \n> Thanks to the outer check for operation == CMD_INSERT, the inner one became unnecessary.\n> \n\nRight. I've pushed the fix, hopefully buildfarm will get happy again.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 21 Jan 2021 03:36:31 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "Hi\n\n2021年1月21日(木) 8:00 Tomas Vondra <tomas.vondra@enterprisedb.com>:\n\n> OK, pushed after a little bit of additional polishing (mostly comments).\n>\n> Thanks everyone!\n>\n\nThere's a minor typo in the doc's version of the ExecForeignBatchInsert()\ndeclaration;\nis:\n\n TupleTableSlot **\n ExecForeignBatchInsert(EState *estate,\n ResultRelInfo *rinfo,\n TupleTableSlot **slots,\n TupleTableSlot *planSlots,\n int *numSlots);\n\nshould be:\n\n TupleTableSlot **\n ExecForeignBatchInsert(EState *estate,\n ResultRelInfo *rinfo,\n TupleTableSlot **slots,\n TupleTableSlot **planSlots,\n int *numSlots);\n\n(Trivial patch attached).\n\n\nRegards\n\nIan Barwick\n\n-- \nEnterpriseDB: https://www.enterprisedb.com", "msg_date": "Fri, 22 Jan 2021 
14:50:49 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On Thu, Jan 21, 2021 at 11:36 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 1/21/21 3:09 AM, tsunakawa.takay@fujitsu.com wrote:\n> > From: Tomas Vondra <tomas.vondra@enterprisedb.com>\n> >> Right, that's pretty much what I ended up doing (without the CMD_INSERT\n> >> check it'd add batching info to explain for updates too, for example).\n> >> I'll do a bit more testing on the attached patch, but I think that's the right fix to\n> >> push.\n> >\n> > Thanks to the outer check for operation == CMD_INSERT, the inner one became unnecessary.\n> >\n>\n> Right. I've pushed the fix, hopefully buildfarm will get happy again.\n\nI was looking at this and it looks like we've got a problematic case\nwhere postgresGetForeignModifyBatchSize() is called from\nExecInitRoutingInfo().\n\nThat case is when the insert is performed as part of a cross-partition\nupdate of a partitioned table containing postgres_fdw foreign table\npartitions. Because we don't check the operation in\nExecInitRoutingInfo() when calling GetForeignModifyBatchSize(), such\ninserts attempt to use batching. However the ResultRelInfo may be one\nfor the original update operation, so ri_FdwState contains a\nPgFdwModifyState with batch_size set to 0, because updates don't\nsupport batching. 
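The failure mode just described can be sketched in a self-contained way. The types below (Op, FdwState) and both functions are hypothetical miniatures, not postgres_fdw's real ones; they only illustrate how an UPDATE-built state carrying batch_size 0 leaks through an unguarded call, and how an operation check keeps it out:

```c
#include <assert.h>

typedef enum Op { OP_INSERT, OP_UPDATE } Op;

typedef struct FdwState
{
    int batch_size;             /* 0 when the state was built for an UPDATE */
} FdwState;

/* Analogue of the batch-size callback: it just reports whatever the
 * existing per-rel state says, so an UPDATE-built state yields 0 --
 * which would trip a caller asserting batch size >= 1. */
static int
get_batch_size(const FdwState *st)
{
    return st ? st->batch_size : 1;
}

/* Routing-info setup with the guard sketched in the thread: only ask
 * the FDW for a batch size when the operation is an INSERT; a
 * cross-partition UPDATE falls back to no batching. */
int
routing_batch_size(Op op, const FdwState *st)
{
    if (op != OP_INSERT)
        return 1;               /* the guard: updates never batch */
    return get_batch_size(st);
}
```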
As things stand now,\npostgresGetForeignModifyBatchSize() simply returns that, tripping the\nfollowing Assert in the caller.\n\nAssert(partRelInfo->ri_BatchSize >= 1);\n\nUse this example to see the crash:\n\ncreate table p (a int) partition by list (a);\ncreate table p1 (like p);\ncreate extension postgres_fdw;\ncreate server lb foreign data wrapper postgres_fdw ;\ncreate user mapping for current_user server lb;\ncreate foreign table fp1 (a int) server lb options (table_name 'p1');\nalter table p attach partition fp1 for values in (1);\ncreate or replace function report_trig_details() returns trigger as $$\nbegin raise notice '% % on %', tg_when, tg_op, tg_relname; if tg_op =\n'DELETE' then return old; end if; return new; end; $$ language\nplpgsql;\ncreate trigger trig before update on fp1 for each row execute function\nreport_trig_details();\ncreate table p2 partition of p for values in (2);\ninsert into p values (2);\nupdate p set a = 1; -- crashes\n\nSo let's check mtstate->operation == CMD_INSERT in\nExecInitRoutingInfo() to prevent calling GetForeignModifyBatchSize()\nin cross-update situations where mtstate->operation would be\nCMD_UPDATE.\n\nI've attached a patch.\n\n\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Sat, 23 Jan 2021 17:31:18 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "Amit:\nGood catch.\n\nbq. 
ExecInitRoutingInfo() that is in the charge of initialing\n\nShould be 'ExecInitRoutingInfo() that is in charge of initializing'\n\nCheers\n\nOn Sat, Jan 23, 2021 at 12:31 AM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> On Thu, Jan 21, 2021 at 11:36 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> > On 1/21/21 3:09 AM, tsunakawa.takay@fujitsu.com wrote:\n> > > From: Tomas Vondra <tomas.vondra@enterprisedb.com>\n> > >> Right, that's pretty much what I ended up doing (without the\n> CMD_INSERT\n> > >> check it'd add batching info to explain for updates too, for example).\n> > >> I'll do a bit more testing on the attached patch, but I think that's\n> the right fix to\n> > >> push.\n> > >\n> > > Thanks to the outer check for operation == CMD_INSERT, the inner one\n> became unnecessary.\n> > >\n> >\n> > Right. I've pushed the fix, hopefully buildfarm will get happy again.\n>\n> I was looking at this and it looks like we've got a problematic case\n> where postgresGetForeignModifyBatchSize() is called from\n> ExecInitRoutingInfo().\n>\n> That case is when the insert is performed as part of a cross-partition\n> update of a partitioned table containing postgres_fdw foreign table\n> partitions. Because we don't check the operation in\n> ExecInitRoutingInfo() when calling GetForeignModifyBatchSize(), such\n> inserts attempt to use batching. However the ResultRelInfo may be one\n> for the original update operation, so ri_FdwState contains a\n> PgFdwModifyState with batch_size set to 0, because updates don't\n> support batching. 
As things stand now,\n> postgresGetForeignModifyBatchSize() simply returns that, tripping the\n> following Assert in the caller.\n>\n> Assert(partRelInfo->ri_BatchSize >= 1);\n>\n> Use this example to see the crash:\n>\n> create table p (a int) partition by list (a);\n> create table p1 (like p);\n> create extension postgres_fdw;\n> create server lb foreign data wrapper postgres_fdw ;\n> create user mapping for current_user server lb;\n> create foreign table fp1 (a int) server lb options (table_name 'p1');\n> alter table p attach partition fp1 for values in (1);\n> create or replace function report_trig_details() returns trigger as $$\n> begin raise notice '% % on %', tg_when, tg_op, tg_relname; if tg_op =\n> 'DELETE' then return old; end if; return new; end; $$ language\n> plpgsql;\n> create trigger trig before update on fp1 for each row execute function\n> report_trig_details();\n> create table p2 partition of p for values in (2);\n> insert into p values (2);\n> update p set a = 1; -- crashes\n>\n> So we let's check mtstate->operation == CMD_INSERT in\n> ExecInitRoutingInfo() to prevent calling GetForeignModifyBatchSize()\n> in cross-update situations where mtstate->operation would be\n> CMD_UPDATE.\n>\n> I've attached a patch.\n>\n>\n>\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n>\n\nAmit:Good catch.bq. 
ExecInitRoutingInfo() that is in the charge of initialingShould be 'ExecInitRoutingInfo() that is in charge of initializing'CheersOn Sat, Jan 23, 2021 at 12:31 AM Amit Langote <amitlangote09@gmail.com> wrote:On Thu, Jan 21, 2021 at 11:36 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 1/21/21 3:09 AM, tsunakawa.takay@fujitsu.com wrote:\n> > From: Tomas Vondra <tomas.vondra@enterprisedb.com>\n> >> Right, that's pretty much what I ended up doing (without the CMD_INSERT\n> >> check it'd add batching info to explain for updates too, for example).\n> >> I'll do a bit more testing on the attached patch, but I think that's the right fix to\n> >> push.\n> >\n> > Thanks to the outer check for operation ==  CMD_INSERT, the inner one became unnecessary.\n> >\n>\n> Right. I've pushed the fix, hopefully buildfarm will get happy again.\n\nI was looking at this and it looks like we've got a problematic case\nwhere postgresGetForeignModifyBatchSize() is called from\nExecInitRoutingInfo().\n\nThat case is when the insert is performed as part of a cross-partition\nupdate of a partitioned table containing postgres_fdw foreign table\npartitions.  Because we don't check the operation in\nExecInitRoutingInfo() when calling GetForeignModifyBatchSize(), such\ninserts attempt to use batching.  However the ResultRelInfo may be one\nfor the original update operation, so ri_FdwState contains a\nPgFdwModifyState with batch_size set to 0, because updates don't\nsupport batching.  
As things stand now,\npostgresGetForeignModifyBatchSize() simply returns that, tripping the\nfollowing Assert in the caller.\n\nAssert(partRelInfo->ri_BatchSize >= 1);\n\nUse this example to see the crash:\n\ncreate table p (a int) partition by list (a);\ncreate table p1 (like p);\ncreate extension postgres_fdw;\ncreate server lb foreign data wrapper postgres_fdw ;\ncreate user mapping for current_user server lb;\ncreate foreign table fp1 (a int) server lb options (table_name 'p1');\nalter table p attach partition fp1 for values in (1);\ncreate or replace function report_trig_details() returns trigger as $$\nbegin raise notice '% % on %', tg_when, tg_op, tg_relname; if tg_op =\n'DELETE' then return old; end if; return new; end; $$ language\nplpgsql;\ncreate trigger trig before update on fp1 for each row execute function\nreport_trig_details();\ncreate table p2 partition of p for values in (2);\ninsert into p values (2);\nupdate p set a = 1;  -- crashes\n\nSo we let's check mtstate->operation == CMD_INSERT in\nExecInitRoutingInfo() to prevent calling GetForeignModifyBatchSize()\nin cross-update situations where mtstate->operation would be\nCMD_UPDATE.\n\nI've attached a patch.\n\n\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Sat, 23 Jan 2021 06:58:33 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "\n\nOn 1/23/21 9:31 AM, Amit Langote wrote:\n> On Thu, Jan 21, 2021 at 11:36 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> On 1/21/21 3:09 AM, tsunakawa.takay@fujitsu.com wrote:\n>>> From: Tomas Vondra <tomas.vondra@enterprisedb.com>\n>>>> Right, that's pretty much what I ended up doing (without the CMD_INSERT\n>>>> check it'd add batching info to explain for updates too, for example).\n>>>> I'll do a bit more testing on the attached patch, but I think that's the right fix to\n>>>> push.\n>>>\n>>> Thanks to the outer check for 
operation == CMD_INSERT, the inner one became unnecessary.\n>>>\n>>\n>> Right. I've pushed the fix, hopefully buildfarm will get happy again.\n> \n> I was looking at this and it looks like we've got a problematic case\n> where postgresGetForeignModifyBatchSize() is called from\n> ExecInitRoutingInfo().\n> \n> That case is when the insert is performed as part of a cross-partition\n> update of a partitioned table containing postgres_fdw foreign table\n> partitions. Because we don't check the operation in\n> ExecInitRoutingInfo() when calling GetForeignModifyBatchSize(), such\n> inserts attempt to use batching. However the ResultRelInfo may be one\n> for the original update operation, so ri_FdwState contains a\n> PgFdwModifyState with batch_size set to 0, because updates don't\n> support batching. As things stand now,\n> postgresGetForeignModifyBatchSize() simply returns that, tripping the\n> following Assert in the caller.\n> \n> Assert(partRelInfo->ri_BatchSize >= 1);\n> \n> Use this example to see the crash:\n> \n> create table p (a int) partition by list (a);\n> create table p1 (like p);\n> create extension postgres_fdw;\n> create server lb foreign data wrapper postgres_fdw ;\n> create user mapping for current_user server lb;\n> create foreign table fp1 (a int) server lb options (table_name 'p1');\n> alter table p attach partition fp1 for values in (1);\n> create or replace function report_trig_details() returns trigger as $$\n> begin raise notice '% % on %', tg_when, tg_op, tg_relname; if tg_op =\n> 'DELETE' then return old; end if; return new; end; $$ language\n> plpgsql;\n> create trigger trig before update on fp1 for each row execute function\n> report_trig_details();\n> create table p2 partition of p for values in (2);\n> insert into p values (2);\n> update p set a = 1; -- crashes\n> \n> So we let's check mtstate->operation == CMD_INSERT in\n> ExecInitRoutingInfo() to prevent calling GetForeignModifyBatchSize()\n> in cross-update situations where 
mtstate->operation would be\n> CMD_UPDATE.\n> \n> I've attached a patch.\n> \n\nThanks for catching this. I think it'd be good if the fix included a \nregression test. The example seems like a good starting point, not sure \nif it can be simplified further.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 23 Jan 2021 18:16:54 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On Sun, Jan 24, 2021 at 2:17 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 1/23/21 9:31 AM, Amit Langote wrote:\n> > I was looking at this and it looks like we've got a problematic case\n> > where postgresGetForeignModifyBatchSize() is called from\n> > ExecInitRoutingInfo().\n> >\n> > That case is when the insert is performed as part of a cross-partition\n> > update of a partitioned table containing postgres_fdw foreign table\n> > partitions. Because we don't check the operation in\n> > ExecInitRoutingInfo() when calling GetForeignModifyBatchSize(), such\n> > inserts attempt to use batching. However the ResultRelInfo may be one\n> > for the original update operation, so ri_FdwState contains a\n> > PgFdwModifyState with batch_size set to 0, because updates don't\n> > support batching. 
As things stand now,\n> > postgresGetForeignModifyBatchSize() simply returns that, tripping the\n> > following Assert in the caller.\n> >\n> > Assert(partRelInfo->ri_BatchSize >= 1);\n> >\n> > Use this example to see the crash:\n> >\n> > create table p (a int) partition by list (a);\n> > create table p1 (like p);\n> > create extension postgres_fdw;\n> > create server lb foreign data wrapper postgres_fdw ;\n> > create user mapping for current_user server lb;\n> > create foreign table fp1 (a int) server lb options (table_name 'p1');\n> > alter table p attach partition fp1 for values in (1);\n> > create or replace function report_trig_details() returns trigger as $$\n> > begin raise notice '% % on %', tg_when, tg_op, tg_relname; if tg_op =\n> > 'DELETE' then return old; end if; return new; end; $$ language\n> > plpgsql;\n> > create trigger trig before update on fp1 for each row execute function\n> > report_trig_details();\n> > create table p2 partition of p for values in (2);\n> > insert into p values (2);\n> > update p set a = 1; -- crashes\n> >\n> > So we let's check mtstate->operation == CMD_INSERT in\n> > ExecInitRoutingInfo() to prevent calling GetForeignModifyBatchSize()\n> > in cross-update situations where mtstate->operation would be\n> > CMD_UPDATE.\n> >\n> > I've attached a patch.\n>\n> Thanks for catching this. I think it'd be good if the fix included a\n> regression test. The example seems like a good starting point, not sure\n> if it can be simplified further.\n\nYes, it can be simplified by using a local join to prevent the update\nof the foreign partition from being pushed to the remote server, for\nwhich my example in the previous email used a local trigger. 
Note\nthat the update of the foreign partition to be done locally is a\nprerequisite for this bug to occur.\n\nI've added that simplified test case in the attached updated patch.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Sun, 24 Jan 2021 21:31:42 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Amit Langote <amitlangote09@gmail.com>\r\n> Yes, it can be simplified by using a local join to prevent the update of the foreign\r\n> partition from being pushed to the remote server, for which my example in the\r\n> previous email used a local trigger. Note that the update of the foreign\r\n> partition to be done locally is a prerequisite for this bug to occur.\r\n\r\nThank you, I was aware that UPDATE calls ExecInsert() but forgot about it partway. Good catch (and my bad miss.)\r\n\r\n\r\n+\tPgFdwModifyState *fmstate = resultRelInfo->ri_FdwState ?\r\n+\t\t\t\t\t\t\t(PgFdwModifyState *) resultRelInfo->ri_FdwState :\r\n+\t\t\t\t\t\t\tNULL;\r\n\r\nThis can be written as:\r\n\r\n+\tPgFdwModifyState *fmstate = (PgFdwModifyState *) resultRelInfo->ri_FdwState;\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Mon, 25 Jan 2021 04:21:39 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "2021年1月22日(金) 14:50 Ian Lawrence Barwick <barwick@gmail.com>:\n\n> Hi\n>\n> 2021年1月21日(木) 8:00 Tomas Vondra <tomas.vondra@enterprisedb.com>:\n>\n>> OK, pushed after a little bit of additional polishing (mostly comments).\n>>\n>> Thanks everyone!\n>>\n>\n> There's a minor typo in the doc's version of the ExecForeignBatchInsert()\n> declaration;\n> is:\n>\n> TupleTableSlot **\n> ExecForeignBatchInsert(EState *estate,\n> ResultRelInfo *rinfo,\n> TupleTableSlot **slots,\n> TupleTableSlot *planSlots,\n> int 
*numSlots);\n>\n> should be:\n>\n> TupleTableSlot **\n> ExecForeignBatchInsert(EState *estate,\n> ResultRelInfo *rinfo,\n> TupleTableSlot **slots,\n> TupleTableSlot **planSlots,\n> int *numSlots);\n>\n> (Trivial patch attached).\n>\n>\n> Regards\n>\n> Ian Barwick\n>\n> --\n> EnterpriseDB: https://www.enterprisedb.com\n>\n>\n\n-- \nEnterpriseDB: https://www.enterprisedb.com", "msg_date": "Fri, 5 Feb 2021 10:54:15 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "2021年1月22日(金) 14:50 Ian Lawrence Barwick <barwick@gmail.com>:\n\n> Hi\n>\n> 2021年1月21日(木) 8:00 Tomas Vondra <tomas.vondra@enterprisedb.com>:\n>\n>> OK, pushed after a little bit of additional polishing (mostly comments).\n>>\n>> Thanks everyone!\n>>\n>\n> There's a minor typo in the doc's version of the ExecForeignBatchInsert()\n> declaration;\n> is:\n>\n> TupleTableSlot **\n> ExecForeignBatchInsert(EState *estate,\n> ResultRelInfo *rinfo,\n> TupleTableSlot **slots,\n> 
TupleTableSlot *planSlots,\n> int *numSlots);\n>\n> should be:\n>\n> TupleTableSlot **\n> ExecForeignBatchInsert(EState *estate,\n> ResultRelInfo *rinfo,\n> TupleTableSlot **slots,\n> TupleTableSlot **planSlots,\n> int *numSlots);\n>\n> (Trivial patch attached).\n>\n>\nForgot to mention the relevant doc link:\n\n\nhttps://www.postgresql.org/docs/devel/fdw-callbacks.html#FDW-CALLBACKS-UPDATE\n\nRegards\n\nIan Barwick\n\n-- \nEnterpriseDB: https://www.enterprisedb.com", "msg_date": "Fri, 5 Feb 2021 10:55:04 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "Tsunakwa-san,\n\nOn Mon, Jan 25, 2021 at 1:21 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n> From: Amit Langote <amitlangote09@gmail.com>\n> > Yes, it can be simplified by using a local join to prevent the update of the foreign\n> > partition from being pushed to the remote server, for which 
my example in the\n> > previous email used a local trigger. Note that the update of the foreign\n> > partition to be done locally is a prerequisite for this bug to occur.\n>\n> Thank you, I was aware that UPDATE calls ExecInsert() but forgot about it partway. Good catch (and my bad miss.)\n\nIt appears I had missed your reply, sorry.\n\n> + PgFdwModifyState *fmstate = resultRelInfo->ri_FdwState ?\n> + (PgFdwModifyState *) resultRelInfo->ri_FdwState :\n> + NULL;\n>\n> This can be written as:\n>\n> + PgFdwModifyState *fmstate = (PgFdwModifyState *) resultRelInfo->ri_FdwState;\n\nFacepalm, yes.\n\nPatch updated. Thanks for the review.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 5 Feb 2021 11:52:00 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "From: Amit Langote <amitlangote09@gmail.com>\r\n> It appears I had missed your reply, sorry.\r\n> \r\n> > + PgFdwModifyState *fmstate = resultRelInfo->ri_FdwState ?\r\n> > +\r\n> (PgFdwModifyState *) resultRelInfo->ri_FdwState :\r\n> > + NULL;\r\n> >\r\n> > This can be written as:\r\n> >\r\n> > + PgFdwModifyState *fmstate = (PgFdwModifyState *)\r\n> > + resultRelInfo->ri_FdwState;\r\n> \r\n> Facepalm, yes.\r\n> \r\n> Patch updated. 
Thanks for the review.\r\n\r\nThank you for picking this up.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Fri, 5 Feb 2021 04:11:52 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: postgres_fdw insert batching" }, { "msg_contents": "On 2/5/21 2:55 AM, Ian Lawrence Barwick wrote:\n> ...\n> \n> There's a minor typo in the doc's version of the\n> ExecForeignBatchInsert() declaration;\n> is:\n> \n>     TupleTableSlot **\n>     ExecForeignBatchInsert(EState *estate,\n>                       ResultRelInfo *rinfo,\n>                       TupleTableSlot **slots,\n>                       TupleTableSlot *planSlots,\n>                       int *numSlots);\n> \n> should be:\n> \n>     TupleTableSlot **\n>     ExecForeignBatchInsert(EState *estate,\n>                       ResultRelInfo *rinfo,\n>                       TupleTableSlot **slots,\n>                       TupleTableSlot **planSlots,\n>                       int *numSlots);\n> \n> (Trivial patch attached).\n> \n> \n> Forgot to mention the relevant doc link:\n> \n>    \n> https://www.postgresql.org/docs/devel/fdw-callbacks.html#FDW-CALLBACKS-UPDATE\n> <https://www.postgresql.org/docs/devel/fdw-callbacks.html#FDW-CALLBACKS-UPDATE>\n> \n\nThanks, I'll get this fixed.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 15 Feb 2021 17:32:43 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On 2/5/21 3:52 AM, Amit Langote wrote:\n> Tsunakwa-san,\n> \n> On Mon, Jan 25, 2021 at 1:21 PM tsunakawa.takay@fujitsu.com\n> <tsunakawa.takay@fujitsu.com> wrote:\n>> From: Amit Langote <amitlangote09@gmail.com>\n>>> Yes, it can be simplified by using a local join to prevent the update of the foreign\n>>> partition from being pushed to 
the remote server, for which my example in the\n>>> previous email used a local trigger. Note that the update of the foreign\n>>> partition to be done locally is a prerequisite for this bug to occur.\n>>\n>> Thank you, I was aware that UPDATE calls ExecInsert() but forgot about it partway. Good catch (and my bad miss.)\n> \n> It appears I had missed your reply, sorry.\n> \n>> + PgFdwModifyState *fmstate = resultRelInfo->ri_FdwState ?\n>> + (PgFdwModifyState *) resultRelInfo->ri_FdwState :\n>> + NULL;\n>>\n>> This can be written as:\n>>\n>> + PgFdwModifyState *fmstate = (PgFdwModifyState *) resultRelInfo->ri_FdwState;\n> \n> Facepalm, yes.\n> \n> Patch updated. Thanks for the review.\n> \n\nThanks for the patch, it seems fine to me. I wonder if the commit\nmessage needs some tweaks, though. At the moment it says:\n\n  Prevent FDW insert batching during cross-partition updates\n\nbut what the patch seems to be doing is simply initializing the info\nonly for CMD_INSERT operations. Which does the trick, but it affects\neverything, i.e. all updates, no? Not just cross-partition updates.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 15 Feb 2021 17:36:00 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On Tue, Feb 16, 2021 at 1:36 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 2/5/21 3:52 AM, Amit Langote wrote:\n> > Tsunakwa-san,\n> >\n> > On Mon, Jan 25, 2021 at 1:21 PM tsunakawa.takay@fujitsu.com\n> > <tsunakawa.takay@fujitsu.com> wrote:\n> >> From: Amit Langote <amitlangote09@gmail.com>\n> >>> Yes, it can be simplified by using a local join to prevent the update of the foreign\n> >>> partition from being pushed to the remote server, for which my example in the\n> >>> previous email used a local trigger. 
Note that the update of the foreign\n> >>> partition to be done locally is a prerequisite for this bug to occur.\n> >>\n> >> Thank you, I was aware that UPDATE calls ExecInsert() but forgot about it partway. Good catch (and my bad miss.)\n> >\n> > It appears I had missed your reply, sorry.\n> >\n> >> + PgFdwModifyState *fmstate = resultRelInfo->ri_FdwState ?\n> >> + (PgFdwModifyState *) resultRelInfo->ri_FdwState :\n> >> + NULL;\n> >>\n> >> This can be written as:\n> >>\n> >> + PgFdwModifyState *fmstate = (PgFdwModifyState *) resultRelInfo->ri_FdwState;\n> >\n> > Facepalm, yes.\n> >\n> > Patch updated. Thanks for the review.\n> >\n>\n> Thanks for the patch, it seems fine to me.\n\nThanks for checking.\n\n> I wonder it the commit\n> message needs some tweaks, though. At the moment it says:\n>\n> Prevent FDW insert batching during cross-partition updates\n>\n> but what the patch seems to be doing is simply initializing the info\n> only for CMD_INSERT operations. Which does the trick, but it affects\n> everything, i.e. all updates, no? Not just cross-partition updates.\n\nYou're right. Please check the message in the updated patch.\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 16 Feb 2021 18:25:25 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "\n\nOn 2/16/21 10:25 AM, Amit Langote wrote:\n> On Tue, Feb 16, 2021 at 1:36 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> On 2/5/21 3:52 AM, Amit Langote wrote:\n>>> Tsunakwa-san,\n>>>\n>>> On Mon, Jan 25, 2021 at 1:21 PM tsunakawa.takay@fujitsu.com\n>>> <tsunakawa.takay@fujitsu.com> wrote:\n>>>> From: Amit Langote <amitlangote09@gmail.com>\n>>>>> Yes, it can be simplified by using a local join to prevent the update of the foreign\n>>>>> partition from being pushed to the remote server, for which my example in the\n>>>>> previous email used a local trigger. 
Note that the update of the foreign\n>>>>> partition to be done locally is a prerequisite for this bug to occur.\n>>>>\n>>>> Thank you, I was aware that UPDATE calls ExecInsert() but forgot about it partway. Good catch (and my bad miss.)\n>>>\n>>> It appears I had missed your reply, sorry.\n>>>\n>>>> + PgFdwModifyState *fmstate = resultRelInfo->ri_FdwState ?\n>>>> + (PgFdwModifyState *) resultRelInfo->ri_FdwState :\n>>>> + NULL;\n>>>>\n>>>> This can be written as:\n>>>>\n>>>> + PgFdwModifyState *fmstate = (PgFdwModifyState *) resultRelInfo->ri_FdwState;\n>>>\n>>> Facepalm, yes.\n>>>\n>>> Patch updated. Thanks for the review.\n>>>\n>>\n>> Thanks for the patch, it seems fine to me.\n> \n> Thanks for checking.\n> \n>> I wonder it the commit\n>> message needs some tweaks, though. At the moment it says:\n>>\n>> Prevent FDW insert batching during cross-partition updates\n>>\n>> but what the patch seems to be doing is simply initializing the info\n>> only for CMD_INSERT operations. Which does the trick, but it affects\n>> everything, i.e. all updates, no? Not just cross-partition updates.\n> \n> You're right. Please check the message in the updated patch.\n>\n\nThanks. I'm not sure I understand what \"FDW may not be able to handle \nboth the original update operation and the batched insert operation \nbeing performed at the same time\" means. I mean, if we translate the \nUPDATE into DELETE+INSERT, then we don't run both the update and insert \nat the same time, right? What exactly is the problem with allowing \nbatching for inserts in cross-partition updates?\n\nOn a closer look, it seems the problem actually lies in a small \ninconsistency between create_foreign_modify and ExecInitRoutingInfo. The \nformer only set batch_size for CMD_INSERT while the latter called the \nBatchSize() for all operations, expecting >= 1 result. 
So we may either \nrelax create_foreign_modify and set batch_size for all DML, or make \nExecInitRoutingInfo stricter (which is what the patches here do).\n\nIs there a reason not to do the first thing, allowing batching of \ninserts during cross-partition updates? I tried to do that, but it \ndawned on me that we can't mix batched and un-batched operations, e.g. \nDELETE + INSERT, because that'd break the order of execution, leading to \nbogus results in case the same row is modified repeatedly, etc.\n\nAm I getting this right?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 16 Feb 2021 16:04:44 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On Wed, Feb 17, 2021 at 5:46 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Feb 17, 2021 at 12:04 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> > On 2/16/21 10:25 AM, Amit Langote wrote:\n> > > On Tue, Feb 16, 2021 at 1:36 AM Tomas Vondra\n> > >> Thanks for the patch, it seems fine to me.\n> > >\n> > > Thanks for checking.\n> > >\n> > >> I wonder it the commit\n> > >> message needs some tweaks, though. At the moment it says:\n> > >>\n> > >> Prevent FDW insert batching during cross-partition updates\n> > >>\n> > >> but what the patch seems to be doing is simply initializing the info\n> > >> only for CMD_INSERT operations. Which does the trick, but it affects\n> > >> everything, i.e. all updates, no? Not just cross-partition updates.\n> > >\n> > > You're right. Please check the message in the updated patch.\n> >\n> > Thanks. I'm not sure I understand what \"FDW may not be able to handle\n> > both the original update operation and the batched insert operation\n> > being performed at the same time\" means. 
I mean, if we translate the\n> > UPDATE into DELETE+INSERT, then we don't run both the update and insert\n> > at the same time, right? What exactly is the problem with allowing\n> > batching for inserts in cross-partition updates?\n>\n> Sorry, I hadn't shared enough details of my investigations when I\n> originally ran into this. Such as that I had considered implementing\n> the use of batching for these inserts too but had given up.\n>\n> Now that you mention it, I think I gave a less convincing reason for\n> why we should avoid doing it at all. Maybe it would have been more\n> right to say that it is the core code, not necessarily the FDWs, that\n> currently fails to deal with the use of batching by the insert\n> component of a cross-partition update. Those failures could be\n> addressed as I'll describe below.\n>\n> For postgres_fdw, postgresGetForeignModifyBatchSize() could be taught\n> to simply use the PgFdwModifyTable that is installed to handle the\n> insert component of a cross-partition update (one can get that one via\n> aux_fmstate field of the original PgFdwModifyState). However, even\n> though that's fine for postgres_fdw to do, what worries (had worried)\n> me is that it also results in scribbling on ri_BatchSize that the core\n> code may see to determine what to do with a particular tuple, and I\n> just have to hope that nodeModifyTable.c doesn't end up doing anything\n> unwarranted with the original update based on seeing a non-zero\n> ri_BatchSize. AFAICS, we are fine on that front.\n>\n> That said, there are some deficiencies in the code that have to be\n> addressed before we can let postgres_fdw do as mentioned above. For\n> example, the code in ExecModifyTable() that runs after breaking out of\n> the loop to insert any remaining batched tuples appears to miss the\n> tuples batched by such inserts. Apparently, that is because the\n> ResultRelInfos used by those inserts are not present in\n> es_tuple_routing_result_relations. 
Turns out I had forgotten that\n> execPartition.c doesn't add the ResultRelInfos to that list if they\n> are made by ExecInitModifyTable() for the original update operation\n> and simply reused by ExecFindPartition() when tuples were routed to\n> those partitions. It can be \"fixed\" by reverting to the original\n> design in Tsunakawa-san's patch where the tuple routing result\n> relations were obtained from the PartitionTupleRouting data structure,\n> which fortunately stores all tuple routing result relations. (Sorry,\n> I gave wrong advice in [1] in retrospect.)\n>\n> > On a closer look, it seems the problem actually lies in a small\n> > inconsistency between create_foreign_modify and ExecInitRoutingInfo. The\n> > former only set batch_size for CMD_INSERT while the latter called the\n> > BatchSize() for all operations, expecting >= 1 result. So we may either\n> > relax create_foreign_modify and set batch_size for all DML, or make\n> > ExecInitRoutingInfo stricter (which is what the patches here do).\n>\n> I think we should be fine if we make\n> postgresGetForeignModifyBatchSize() use the correct PgFdwModifyState\n> as described above. We can be sure that we are not mixing the\n> information used by the batched insert with that of the original\n> unbatched update.\n>\n> > Is there a reason not to do the first thing, allowing batching of\n> > inserts during cross-partition updates? I tried to do that, but it\n> > dawned on me that we can't mix batched and un-batched operations, e.g.\n> > DELETE + INSERT, because that'd break the order of execution, leading to\n> > bogus results in case the same row is modified repeatedly, etc.\n>\n> Actually, postgres_fdw only supports moving a row into a partition (as\n> part of a cross-partition update that is) if it has already finished\n> performing any updates on it. 
So there is no worry of rows that are\n> moved into a partition subsequently getting updated due to the\n> original command.\n>\n> The attached patch implements the changes necessary to make these\n> inserts use batching too.\n>\n> [1] https://www.postgresql.org/message-id/CA%2BHiwqEbnhwVJMsukTP-S9Kv1ynC7Da3yuqSPZC0Y7oWWOwoHQ%40mail.gmail.com\n\nOops, I had mistakenly not hit \"Reply All\". Attaching the patch again.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 17 Feb 2021 17:51:12 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On 2/17/21 9:51 AM, Amit Langote wrote:\n> On Wed, Feb 17, 2021 at 5:46 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> On Wed, Feb 17, 2021 at 12:04 AM Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>> On 2/16/21 10:25 AM, Amit Langote wrote:\n>>>> On Tue, Feb 16, 2021 at 1:36 AM Tomas Vondra\n>>>>> Thanks for the patch, it seems fine to me.\n>>>>\n>>>> Thanks for checking.\n>>>>\n>>>>> I wonder it the commit\n>>>>> message needs some tweaks, though. At the moment it says:\n>>>>>\n>>>>> Prevent FDW insert batching during cross-partition updates\n>>>>>\n>>>>> but what the patch seems to be doing is simply initializing the info\n>>>>> only for CMD_INSERT operations. Which does the trick, but it affects\n>>>>> everything, i.e. all updates, no? Not just cross-partition updates.\n>>>>\n>>>> You're right. Please check the message in the updated patch.\n>>>\n>>> Thanks. I'm not sure I understand what \"FDW may not be able to handle\n>>> both the original update operation and the batched insert operation\n>>> being performed at the same time\" means. I mean, if we translate the\n>>> UPDATE into DELETE+INSERT, then we don't run both the update and insert\n>>> at the same time, right? 
What exactly is the problem with allowing\n>>> batching for inserts in cross-partition updates?\n>>\n>> Sorry, I hadn't shared enough details of my investigations when I\n>> originally ran into this. Such as that I had considered implementing\n>> the use of batching for these inserts too but had given up.\n>>\n>> Now that you mention it, I think I gave a less convincing reason for\n>> why we should avoid doing it at all. Maybe it would have been more\n>> right to say that it is the core code, not necessarily the FDWs, that\n>> currently fails to deal with the use of batching by the insert\n>> component of a cross-partition update. Those failures could be\n>> addressed as I'll describe below.\n>>\n>> For postgres_fdw, postgresGetForeignModifyBatchSize() could be taught\n>> to simply use the PgFdwModifyTable that is installed to handle the\n>> insert component of a cross-partition update (one can get that one via\n>> aux_fmstate field of the original PgFdwModifyState). However, even\n>> though that's fine for postgres_fdw to do, what worries (had worried)\n>> me is that it also results in scribbling on ri_BatchSize that the core\n>> code may see to determine what to do with a particular tuple, and I\n>> just have to hope that nodeModifyTable.c doesn't end up doing anything\n>> unwarranted with the original update based on seeing a non-zero\n>> ri_BatchSize. AFAICS, we are fine on that front.\n>>\n>> That said, there are some deficiencies in the code that have to be\n>> addressed before we can let postgres_fdw do as mentioned above. For\n>> example, the code in ExecModifyTable() that runs after breaking out of\n>> the loop to insert any remaining batched tuples appears to miss the\n>> tuples batched by such inserts. Apparently, that is because the\n>> ResultRelInfos used by those inserts are not present in\n>> es_tuple_routing_result_relations. 
Turns out I had forgotten that\n>> execPartition.c doesn't add the ResultRelInfos to that list if they\n>> are made by ExecInitModifyTable() for the original update operation\n>> and simply reused by ExecFindPartition() when tuples were routed to\n>> those partitions. It can be \"fixed\" by reverting to the original\n>> design in Tsunakawa-san's patch where the tuple routing result\n>> relations were obtained from the PartitionTupleRouting data structure,\n>> which fortunately stores all tuple routing result relations. (Sorry,\n>> I gave wrong advice in [1] in retrospect.)\n>>\n>>> On a closer look, it seems the problem actually lies in a small\n>>> inconsistency between create_foreign_modify and ExecInitRoutingInfo. The\n>>> former only set batch_size for CMD_INSERT while the latter called the\n>>> BatchSize() for all operations, expecting >= 1 result. So we may either\n>>> relax create_foreign_modify and set batch_size for all DML, or make\n>>> ExecInitRoutingInfo stricter (which is what the patches here do).\n>>\n>> I think we should be fine if we make\n>> postgresGetForeignModifyBatchSize() use the correct PgFdwModifyState\n>> as described above. We can be sure that we are not mixing the\n>> information used by the batched insert with that of the original\n>> unbatched update.\n>>\n>>> Is there a reason not to do the first thing, allowing batching of\n>>> inserts during cross-partition updates? I tried to do that, but it\n>>> dawned on me that we can't mix batched and un-batched operations, e.g.\n>>> DELETE + INSERT, because that'd break the order of execution, leading to\n>>> bogus results in case the same row is modified repeatedly, etc.\n>>\n>> Actually, postgres_fdw only supports moving a row into a partition (as\n>> part of a cross-partition update that is) if it has already finished\n>> performing any updates on it. 
So there is no worry of rows that are\n>> moved into a partition subsequently getting updated due to the\n>> original command.\n>>\n>> The attached patch implements the changes necessary to make these\n>> inserts use batching too.\n>>\n>> [1] https://www.postgresql.org/message-id/CA%2BHiwqEbnhwVJMsukTP-S9Kv1ynC7Da3yuqSPZC0Y7oWWOwoHQ%40mail.gmail.com\n> \n> Oops, I had mistakenly not hit \"Reply All\". Attaching the patch again.\n> \n\nThanks. The patch seems reasonable, but it's a bit too large for a fix,\nso I'll go ahead and push one of the previous fixes restricting batching\nto plain INSERT commands. But this seems useful, so I suggest adding it\nto the next commitfest.\n\nOne thing that surprised me is that we only move the rows *to* the\nforeign partition, not from it (even on pg13, i.e. before the batching\netc.). I mean, using the example you posted earlier, with one foreign\nand one local partition, consider this:\n\n delete from p;\n insert into p values (2);\n\n test=# select * from p2;\n a\n ---\n 2\n (1 row)\n\n test=# update p set a = 1;\n UPDATE 1\n\n test=# select * from p1;\n a\n ---\n 1\n (1 row)\n\nOK, so it was moved to the foreign partition, which is for rows with\nvalue in (1). So far so good. Let's do another update:\n\n test=# update p set a = 2;\n UPDATE 1\n test=# select * from p1;\n a\n ---\n 2\n (1 row)\n\nSo now the foreign partition contains value (2), which is however wrong\nwith respect to the partitioning rules - this should be in p2, the local\npartition. 
This however causes pretty annoying issue:\n\ntest=# explain analyze select * from p where a = 2;\n\n QUERY PLAN\n ---------------------------------------------------------------\n Seq Scan on p2 p (cost=0.00..41.88 rows=13 width=4)\n (actual time=0.024..0.028 rows=0 loops=1)\n Filter: (a = 2)\n Planning Time: 0.355 ms\n Execution Time: 0.089 ms\n (4 rows)\n\nThat is, we fail to find the row, because we eliminate the partition.\n\nNow, maybe this is expected, but it seems like a rather mind-boggling\nviolation of POLA principle. I've checked if postgres_fdw mentions this\nsomewhere, but all I see about row movement is this:\n\n Note also that postgres_fdw supports row movement invoked by UPDATE\n statements executed on partitioned tables, but it currently does not\n handle the case where a remote partition chosen to insert a moved row\n into is also an UPDATE target partition that will be updated later.\n\nand if I understand that correctly, that's about something different.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 17 Feb 2021 20:36:36 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On 2/17/21 8:36 PM, Tomas Vondra wrote:\n>\n> ...\n> \n> Thanks. The patch seems reasonable, but it's a bit too large for a fix,\n> so I'll go ahead and push one of the previous fixes restricting batching\n> to plain INSERT commands. But this seems useful, so I suggest adding it\n> to the next commitfest.\n\nI've pushed the v4 fix, adding the CMD_INSERT to execPartition.\n\nI think we may need to revise the relationship between FDW and places\nthat (may) call GetForeignModifyBatchSize. 
Currently, these places need\nto be quite well synchronized - in a way, the issue was (partially) due\nto postgres_fdw and core not agreeing on the details.\n\nIn particular, create_foreign_modify sets batch_size for CMD_INSERT and\nleaves it 0 otherwise. So GetForeignModifyBatchSize() returned 0 later,\ntriggering an assert.\n\nIt's probably better to require GetForeignModifyBatchSize() to always\nreturn a valid batch size (>= 1). If batching is not supported, just\nreturn 1.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 18 Feb 2021 00:38:08 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On Thu, Feb 18, 2021 at 8:38 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 2/17/21 8:36 PM, Tomas Vondra wrote:\n> > Thanks. The patch seems reasonable, but it's a bit too large for a fix,\n> > so I'll go ahead and push one of the previous fixes restricting batching\n> > to plain INSERT commands. But this seems useful, so I suggest adding it\n> > to the next commitfest.\n>\n> I've pushed the v4 fix, adding the CMD_INSERT to execPartition.\n>\n> I think we may need to revise the relationship between FDW and places\n> that (may) call GetForeignModifyBatchSize. Currently, these places need\n> to be quite well synchronized - in a way, the issue was (partially) due\n> to postgres_fdw and core not agreeing on the details.\n\nAgreed.\n\n> In particular, create_foreign_modify sets batch_size for CMD_INSERT and\n> leaves it 0 otherwise. So GetForeignModifyBatchSize() returned 0 later,\n> triggering an assert.\n>\n> It's probably better to require GetForeignModifyBatchSize() to always\n> return a valid batch size (>= 1). 
If batching is not supported, just\n> return 1.\n\nThat makes sense.\n\nHow about the attached?\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 18 Feb 2021 13:51:31 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" }, { "msg_contents": "On Thu, Feb 18, 2021 at 4:36 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 2/17/21 9:51 AM, Amit Langote wrote:\n> > On Wed, Feb 17, 2021 at 5:46 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >> Sorry, I hadn't shared enough details of my investigations when I\n> >> originally ran into this. Such as that I had considered implementing\n> >> the use of batching for these inserts too but had given up.\n> >>\n> >> Now that you mention it, I think I gave a less convincing reason for\n> >> why we should avoid doing it at all. Maybe it would have been more\n> >> right to say that it is the core code, not necessarily the FDWs, that\n> >> currently fails to deal with the use of batching by the insert\n> >> component of a cross-partition update. Those failures could be\n> >> addressed as I'll describe below.\n> >>\n> >> For postgres_fdw, postgresGetForeignModifyBatchSize() could be taught\n> >> to simply use the PgFdwModifyTable that is installed to handle the\n> >> insert component of a cross-partition update (one can get that one via\n> >> aux_fmstate field of the original PgFdwModifyState). However, even\n> >> though that's fine for postgres_fdw to do, what worries (had worried)\n> >> me is that it also results in scribbling on ri_BatchSize that the core\n> >> code may see to determine what to do with a particular tuple, and I\n> >> just have to hope that nodeModifyTable.c doesn't end up doing anything\n> >> unwarranted with the original update based on seeing a non-zero\n> >> ri_BatchSize. 
AFAICS, we are fine on that front.\n> >>\n> >> That said, there are some deficiencies in the code that have to be\n> >> addressed before we can let postgres_fdw do as mentioned above. For\n> >> example, the code in ExecModifyTable() that runs after breaking out of\n> >> the loop to insert any remaining batched tuples appears to miss the\n> >> tuples batched by such inserts. Apparently, that is because the\n> >> ResultRelInfos used by those inserts are not present in\n> >> es_tuple_routing_result_relations. Turns out I had forgotten that\n> >> execPartition.c doesn't add the ResultRelInfos to that list if they\n> >> are made by ExecInitModifyTable() for the original update operation\n> >> and simply reused by ExecFindPartition() when tuples were routed to\n> >> those partitions. It can be \"fixed\" by reverting to the original\n> >> design in Tsunakawa-san's patch where the tuple routing result\n> >> relations were obtained from the PartitionTupleRouting data structure,\n> >> which fortunately stores all tuple routing result relations. (Sorry,\n> >> I gave wrong advice in [1] in retrospect.)\n> >>\n> >>> On a closer look, it seems the problem actually lies in a small\n> >>> inconsistency between create_foreign_modify and ExecInitRoutingInfo. The\n> >>> former only set batch_size for CMD_INSERT while the latter called the\n> >>> BatchSize() for all operations, expecting >= 1 result. So we may either\n> >>> relax create_foreign_modify and set batch_size for all DML, or make\n> >>> ExecInitRoutingInfo stricter (which is what the patches here do).\n> >>\n> >> I think we should be fine if we make\n> >> postgresGetForeignModifyBatchSize() use the correct PgFdwModifyState\n> >> as described above. We can be sure that we are not mixing the\n> >> information used by the batched insert with that of the original\n> >> unbatched update.\n> >>\n> >>> Is there a reason not to do the first thing, allowing batching of\n> >>> inserts during cross-partition updates? 
I tried to do that, but it\n> >>> dawned on me that we can't mix batched and un-batched operations, e.g.\n> >>> DELETE + INSERT, because that'd break the order of execution, leading to\n> >>> bogus results in case the same row is modified repeatedly, etc.\n> >>\n> >> Actually, postgres_fdw only supports moving a row into a partition (as\n> >> part of a cross-partition update that is) if it has already finished\n> >> performing any updates on it. So there is no worry of rows that are\n> >> moved into a partition subsequently getting updated due to the\n> >> original command.\n> >>\n> >> The attached patch implements the changes necessary to make these\n> >> inserts use batching too.\n> >>\n> >> [1] https://www.postgresql.org/message-id/CA%2BHiwqEbnhwVJMsukTP-S9Kv1ynC7Da3yuqSPZC0Y7oWWOwoHQ%40mail.gmail.com\n> >\n> > Oops, I had mistakenly not hit \"Reply All\". Attaching the patch again.\n> >\n>\n> Thanks. The patch seems reasonable, but it's a bit too large for a fix,\n> so I'll go ahead and push one of the previous fixes restricting batching\n> to plain INSERT commands. But this seems useful, so I suggest adding it\n> to the next commitfest.\n\nOkay will post the rebased patch to a new thread.\n\n> One thing that surprised me is that we only move the rows *to* the\n> foreign partition, not from it (even on pg13, i.e. before the batching\n> etc.). I mean, using the example you posted earlier, with one foreign\n> and one local partition, consider this:\n>\n> delete from p;\n> insert into p values (2);\n>\n> test=# select * from p2;\n> a\n> ---\n> 2\n> (1 row)\n>\n> test=# update p set a = 1;\n> UPDATE 1\n>\n> test=# select * from p1;\n> a\n> ---\n> 1\n> (1 row)\n>\n> OK, so it was moved to the foreign partition, which is for rows with\n> value in (1). So far so good. 
Let's do another update:\n>\n> test=# update p set a = 2;\n> UPDATE 1\n> test=# select * from p1;\n> a\n> ---\n> 2\n> (1 row)\n>\n> So now the foreign partition contains value (2), which is however wrong\n> with respect to the partitioning rules - this should be in p2, the local\n> partition.\n>\n> This however causes pretty annoying issue:\n>\n> test=# explain analyze select * from p where a = 2;\n>\n> QUERY PLAN\n> ---------------------------------------------------------------\n> Seq Scan on p2 p (cost=0.00..41.88 rows=13 width=4)\n> (actual time=0.024..0.028 rows=0 loops=1)\n> Filter: (a = 2)\n> Planning Time: 0.355 ms\n> Execution Time: 0.089 ms\n> (4 rows)\n>\n> That is, we fail to find the row, because we eliminate the partition.\n>\n> Now, maybe this is expected, but it seems like a rather mind-boggling\n> violation of POLA principle.\n\nYeah, I think we knowingly allow this behavior. The documentation\nstates that a foreign table's constraints are not enforced by the core\nserver nor by the FDW, but I suppose we still allow declaring them\nmostly for the planner's consumption:\n\nhttps://www.postgresql.org/docs/current/sql-createforeigntable.html\n\n\"Constraints on foreign tables (such as CHECK or NOT NULL clauses) are\nnot enforced by the core PostgreSQL system, and most foreign data\nwrappers do not attempt to enforce them either; that is, the\nconstraint is simply assumed to hold true. There would be little point\nin such enforcement since it would only apply to rows inserted or\nupdated via the foreign table, and not to rows modified by other\nmeans, such as directly on the remote server. Instead, a constraint\nattached to a foreign table should represent a constraint that is\nbeing enforced by the remote server.\"\n\nPartitioning constraints are not treated any differently for those\nreasons. 
It's a good idea to declare a CHECK constraint on the remote\ntable matching with the local partition constraint, though IIRC we\ndon't mention that advice anywhere in our documentation.\n\n> I've checked if postgres_fdw mentions this\n> somewhere, but all I see about row movement is this:\n>\n> Note also that postgres_fdw supports row movement invoked by UPDATE\n> statements executed on partitioned tables, but it currently does not\n> handle the case where a remote partition chosen to insert a moved row\n> into is also an UPDATE target partition that will be updated later.\n>\n> and if I understand that correctly, that's about something different.\n\nYeah, that's a note saying that while we do support moving a row from\na local partition to a postgres_fdw foreign partition, it's only\nallowed if the foreign partition won't subsequently be updated. So to\nreiterate, the cases we don't support:\n\n* Moving a row from a foreign partition to a local one\n\n* Moving a row from a local partition to a foreign one if the latter\nwill be updated subsequent to moving a row into it\n\npostgres_fdw detects the second case with the following code in\npostgresBeginForeignInsert():\n\n /*\n * If the foreign table we are about to insert routed rows into is also an\n * UPDATE subplan result rel that will be updated later, proceeding with\n * the INSERT will result in the later UPDATE incorrectly modifying those\n * routed rows, so prevent the INSERT --- it would be nice if we could\n * handle this case; but for now, throw an error for safety.\n */\n if (plan && plan->operation == CMD_UPDATE &&\n (resultRelInfo->ri_usesFdwDirectModify ||\n resultRelInfo->ri_FdwState) &&\n resultRelInfo > mtstate->resultRelInfo + mtstate->mt_whichplan)\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"cannot route tuples into foreign table to be\nupdated \\\"%s\\\"\",\n RelationGetRelationName(rel))));\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Feb 2021 16:35:20 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: postgres_fdw insert batching" } ]
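The convention the thread above settles on — GetForeignModifyBatchSize() must always return a valid batch size (>= 1), with 1 meaning "no batching" — can be sketched in isolation. The following is a hypothetical, self-contained model of that clamp, not the actual postgres_fdw callback; the CmdType enum and the function name here are stand-ins invented for illustration.

```c
/* Hypothetical stand-in for the executor's command type */
typedef enum CmdType { CMD_UNKNOWN, CMD_INSERT, CMD_UPDATE, CMD_DELETE } CmdType;

/*
 * Sketch of the proposed contract: always return a valid batch size.
 * Anything other than a plain INSERT, or a missing/zero configured
 * size, degenerates to 1 (i.e., no batching), so callers never see 0.
 */
int
get_foreign_modify_batch_size(CmdType operation, int configured_batch_size)
{
    if (operation != CMD_INSERT || configured_batch_size < 1)
        return 1;
    return configured_batch_size;
}
```

With this contract, core code calling the callback for any operation gets a usable value instead of tripping an assert on 0.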
[ { "msg_contents": "In connection with the discussion at [1], I realized that we could unwind\nthe hacks we've introduced --- mostly in commit 54cd4f045 --- to avoid\ndepending on the behavior of %.*s format in printf. Now that we always\nuse our own snprintf.c code, we know that it measures field widths in\nbytes not characters, and we also know that use of this format won't\ncause random encoding-related failures.\n\nSome of the changes are not worth undoing; for example using strlcpy\ninstead of snprintf to truncate a string is a net win by any measure.\nBut places where we introduced a temporary buffer, such as the\nchange in truncate_identifier() in 54cd4f045, would be better off\nwith the old coding. In any case we could remove all the comments\nwarning against using this feature.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/a120087c-4c88-d9d4-1ec5-808d7a7f133d%40gmail.com\n\n\n", "msg_date": "Sun, 28 Jun 2020 15:48:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Revert workarounds for unportability of printf %.*s format?" } ]
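As a side note to the message above: the behavior being relied on — a %.*s precision that counts bytes, not characters — can be demonstrated with a small standalone check. This is an illustrative sketch against the standard C snprintf (whose %s precision is likewise a byte limit), not PostgreSQL's snprintf.c itself, and the helper name is invented for the example.

```c
#include <stdio.h>
#include <string.h>

/*
 * Format src with %.*s, limiting output to nbytes BYTES, and compare
 * the result against expected.  Returns 1 on a match.
 */
int
check_byte_truncation(const char *src, int nbytes, const char *expected)
{
    char buf[64];

    snprintf(buf, sizeof(buf), "%.*s", nbytes, src);
    return strcmp(buf, expected) == 0;
}
```

Note that a byte budget can split a multibyte sequence — e.g., a 2-byte limit on "héllo" keeps only the first byte of the UTF-8 "é" — which is exactly why field widths measured in bytes were the portability worry the workarounds addressed.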
[ { "msg_contents": "If I use the attached sql file to set up the database with loop-back\npostgres_fdw, and then turn on use_remote_estimate for this query:\n\ndistinct on (id) id, z from fgn.priority order by id, priority desc,z\n\nIt issues two queries for the foreign estimate, one with a sort and one\nwithout:\n\nEXPLAIN SELECT id, priority, z FROM public.priority\n\nEXPLAIN SELECT id, priority, z FROM public.priority ORDER BY id ASC NULLS\nLAST, priority DESC NULLS FIRST, z ASC NULLS LAST\n\nIt doesn't cost out the plan of pushing the DISTINCT ON down to the foreign\nside, which is probably the best way to run the query. I guess it makes\nsense that FDW machinery in general doesn't want to try to push a\nPostgreSQL specific construct.\n\nBut much worse than that, it horribly misestmates the number of unique rows\nit will get back, having never asked the remote side for an estimate of\nthat.\n\n Result (cost=100.51..88635.90 rows=1 width=16)\n -> Unique (cost=100.51..88635.90 rows=1 width=16)\n -> Foreign Scan on priority (cost=100.51..86135.90 rows=1000000\nwidth=16)\n\nWhere does it come up with the idea that these 1,000,000 rows will\nDISTINCT/Unique down to just 1 row? I can't find the place in the code\nwhere that happens. I suspect it is happening somewhere in the core code\nbased on data fed into it by postgres_fdw, not in postgres_fdw itself.\n\nThis leads to horrible plans if the DISTINCT ON is actually in a subquery\nwhich is joined to other tables, for example.\n\nIf you don't use the remote estimates, it at least comes up with a roughly\nsane estimate of 200 distinct rows, which is enough to inhibit selection of\nthe worst plans. Why does an uninformative remote estimate do so much worse\nthan no remote estimate at all?\n\nOf course I could just disable remote estimates for this table, but then\nother queries that use the table without DISTINCT ON suffer. 
Another\nsolution is to ANALYZE the foreign table, but that opens up a can of worms\nof its own.\n\nI see this behavior in all supported or in-development versions.\n\nCheers,\n\nJeff", "msg_date": "Sun, 28 Jun 2020 23:23:04 -0400", "msg_from": "Jeff Janes <jeff.janes@gmail.com>", "msg_from_op": true, "msg_subject": "estimation problems for DISTINCT ON with FDW" }, { "msg_contents": "> It doesn't cost out the plan of pushing the DISTINCT ON down to the foreign side, which is probably the best way to run the query. I guess it makes sense that FDW machinery in general doesn't want to try to push a PostgreSQL specific construct.\n\nI think you are right, the DISTINCT operation is not being pushed to\nremote(I may be wrong here. just for info - I looked at remote SQL\nfrom explain(verbose) on the query to find this out) and so is for\nestimates. There might be problems pushing DISTINCTs to remote servers\nwith the usage of fdw for sharding configurations. But when fdw is\nused for non-sharded configurations such as just to get existing data\nfrom another remote postgres server, oracle, hadoop or some other\nremote database engines where DISTINCT operation is supported, it's\ngood to push that to remote for both explains/estimates as well as in\nthe actual queries itself, to reduce data transferred from remote\ndatabase server to local postgres database server.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Jun 2020 15:32:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: estimation problems for DISTINCT ON with FDW" }, { "msg_contents": "On Mon, Jun 29, 2020 at 7:02 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> > It doesn't cost out the plan of pushing the DISTINCT ON down to the foreign side, which is probably the best way to run the query. 
I guess it makes sense that FDW machinery in general doesn't want to try to push a PostgreSQL specific construct.\n>\n> I think you are right, the DISTINCT operation is not being pushed to\n> remote(I may be wrong here. just for info - I looked at remote SQL\n> from explain(verbose) on the query to find this out) and so is for\n> estimates.\n\nI think you are right.\n\n> But when fdw is\n> used for non-sharded configurations such as just to get existing data\n> from another remote postgres server, oracle, hadoop or some other\n> remote database engines where DISTINCT operation is supported, it's\n> good to push that to remote for both explains/estimates as well as in\n> the actual queries itself, to reduce data transferred from remote\n> database server to local postgres database server.\n\nI think so too. And I think we could do so using the upper-planner\npathification (ie, GetForeignUpperPaths() with UPPERREL_DISTINCT in\ncreate_distinct_paths()). It's on my long-term TODO list to implement\nthat in postgres_fdw.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Tue, 30 Jun 2020 18:30:49 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: estimation problems for DISTINCT ON with FDW" }, { "msg_contents": "Jeff Janes <jeff.janes@gmail.com> writes:\n> It doesn't cost out the plan of pushing the DISTINCT ON down to the foreign\n> side, which is probably the best way to run the query. I guess it makes\n> sense that FDW machinery in general doesn't want to try to push a\n> PostgreSQL specific construct.\n\nWell, that's an unimplemented feature anyway. 
But people hared off after\nthat without addressing your actual bug report:\n\n> But much worse than that, it horribly misestmates the number of unique rows\n> it will get back, having never asked the remote side for an estimate of\n> that.\n\nI poked into that and found that the problem is in estimate_num_groups,\nwhich effectively just disregards any relation that has rel->tuples = 0.\nThat is the case for a postgres_fdw foreign table if use_remote_estimate\nis true, because postgres_fdw never bothers to set any other value.\n(On the other hand, if use_remote_estimate is false, it does fill in a\npretty-bogus value, mainly so it can use set_baserel_size_estimates.\nSee postgresGetForeignRelSize.)\n\nIt seems like we could make estimate_num_groups a bit more robust here;\nit could just skip its attempts to clamp based on total size or\nrestriction selectivity, but still include the reldistinct value for the\nrel into the total numdistinct. I wonder though if this is the only\nproblem caused by failing to fill in any value for rel->tuples ...\nshould we make postgres_fdw install some value for that?\n\n(Note that the question of whether we should ask the remote server for\nan estimate of ndistinct is kind of orthogonal to any of these points.\nEven if we had obtained one that way, estimate_num_groups would not pay\nany attention to it without a fix for the point at hand.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Jun 2020 13:13:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: estimation problems for DISTINCT ON with FDW" }, { "msg_contents": "I wrote:\n> I poked into that and found that the problem is in estimate_num_groups,\n> which effectively just disregards any relation that has rel->tuples = 0.\n> That is the case for a postgres_fdw foreign table if use_remote_estimate\n> is true, because postgres_fdw never bothers to set any other value.\n> (On the other hand, if use_remote_estimate is false, it 
does fill in a\n> pretty-bogus value, mainly so it can use set_baserel_size_estimates.\n> See postgresGetForeignRelSize.)\n\n> It seems like we could make estimate_num_groups a bit more robust here;\n> it could just skip its attempts to clamp based on total size or\n> restriction selectivity, but still include the reldistinct value for the\n> rel into the total numdistinct. I wonder though if this is the only\n> problem caused by failing to fill in any value for rel->tuples ...\n> should we make postgres_fdw install some value for that?\n\nAttached are a couple of quick-hack patches along each of those lines.\nEither one resolves the crazy number-of-groups estimate for Jeff's\nexample; neither changes any existing regression test results.\n\nOn the whole I'm not sure I like 0001 (ie, changing estimate_num_groups).\nSure, it makes that function \"more robust\", but it does so at the cost\nof believing what might be a default or otherwise pretty insane\nreldistinct estimate. We put in the clamping behavior for a reason,\nand I'm not sure we should disable it just because reltuples = 0.\n\n0002 seems like a better answer on the whole, but it has a pretty\nsignificant issue as well: it's changing the API for FDW\nGetForeignRelSize functions, because now we're expecting them to set\nboth rows and tuples to something sane, contrary to the existing docs.\n\nWhat I'm sort of inclined to do is neither of these exactly, but\ninstead put the\n\n\tbaserel->tuples = Max(baserel->tuples, baserel->rows);\n\nclamping behavior into the core code, immediately after the call to\nGetForeignRelSize. 
This'd still let the FDW set baserel->tuples if\nit has a mind to, while not requiring that; and it prevents the\nsituation where the rows and tuples estimates are inconsistent.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 30 Jun 2020 18:21:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: estimation problems for DISTINCT ON with FDW" }, { "msg_contents": "On Wed, Jul 1, 2020 at 7:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Attached are a couple of quick-hack patches along each of those lines.\n> Either one resolves the crazy number-of-groups estimate for Jeff's\n> example; neither changes any existing regression test results.\n>\n> On the whole I'm not sure I like 0001 (ie, changing estimate_num_groups).\n> Sure, it makes that function \"more robust\", but it does so at the cost\n> of believing what might be a default or otherwise pretty insane\n> reldistinct estimate. We put in the clamping behavior for a reason,\n> and I'm not sure we should disable it just because reltuples = 0.\n>\n> 0002 seems like a better answer on the whole, but it has a pretty\n> significant issue as well: it's changing the API for FDW\n> GetForeignRelSize functions, because now we're expecting them to set\n> both rows and tuples to something sane, contrary to the existing docs.\n\npostgres_fdw already sets both rows and tuples if use_remote_estimate\nis false, and we have pages=0 and tuples=0, so the contrary seems OK\nto me.\n\nIn the 0002 patch:\n\n+ /*\n+ * plancat.c copied baserel->pages and baserel->tuples from pg_class.\n+ * If the foreign table has never been ANALYZEd, or if its stats are\n+ * out of date, baserel->tuples might now be less than baserel->rows,\n+ * which will confuse assorted logic. Hack it to appear minimally\n+ * sensible. 
(Do we need to hack baserel->pages too?)\n+ */\n+ baserel->tuples = Max(baserel->tuples, baserel->rows);\n\nfor consistency, this should be\n\n baserel->tuples = clamp_row_est(baserel->rows / sel);\n\nwhere sel is the selectivity of the baserestrictinfo clauses?\n\n> What I'm sort of inclined to do is neither of these exactly, but\n> instead put the\n>\n> baserel->tuples = Max(baserel->tuples, baserel->rows);\n>\n> clamping behavior into the core code, immediately after the call to\n> GetForeignRelSize. This'd still let the FDW set baserel->tuples if\n> it has a mind to, while not requiring that; and it prevents the\n> situation where the rows and tuples estimates are inconsistent.\n\nI'm not sure this would address the inconsistency. Consider the\npostgres_fdw case where use_remote_estimate is true, and the stats are\nout of date, eg, baserel->tuples copied from pg_class is much larger\nthan the actual tuples and hence baserel->rows (I assume here that\npostgres_fdw doesn't do anything about baserel->tuples). In such a\ncase the inconsistency would make the estimate_num_groups() estimate\nmore inaccurate. I think the consistency is the responsibility of the\nFDW rather than the core, so I would vote for the 0002 patch.
Maybe\nI'm missing something.\n\nThanks for working on this!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 1 Jul 2020 20:06:04 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: estimation problems for DISTINCT ON with FDW" }, { "msg_contents": "Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> On Wed, Jul 1, 2020 at 7:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> +\tbaserel->tuples = Max(baserel->tuples, baserel->rows);\n\n> for consistency, this should be\n> baserel->tuples = clamp_row_est(baserel->rows / sel);\n> where sel is the selectivity of the baserestrictinfo clauses?\n\nIf we had the selectivity available, maybe so, but we don't.\n(And even less so if we put this logic in the core code.)\n\nShort of sending a whole second query to the remote server, it's\nnot clear to me how we could get the full table size (or equivalently\nthe target query's selectivity for that table). The best we realistically\ncan do is to adopt pg_class.reltuples if there's been an ANALYZE of\nthe foreign table. That case already works (and this proposal doesn't\nbreak it). The problem is what to do when pg_class.reltuples is zero\nor otherwise badly out-of-date.\n\n>> What I'm sort of inclined to do is neither of these exactly, but\n>> instead put the\n>> \tbaserel->tuples = Max(baserel->tuples, baserel->rows);\n>> clamping behavior into the core code, immediately after the call to\n>> GetForeignRelSize. This'd still let the FDW set baserel->tuples if\n>> it has a mind to, while not requiring that; and it prevents the\n>> situation where the rows and tuples estimates are inconsistent.\n\n> I'm not sure this would address the inconsistency. 
Consider the\n> postgres_fdw case where use_remote_estimate is true, and the stats are\n> out of date, eg, baserel->tuples copied from pg_class is much larger\n> than the actual tuples and hence baserel->rows (I assume here that\n> postgres_fdw doesn't do anything about baserel->tuples). In such a\n> case the inconsistency would make the estimate_num_groups() estimate\n> more inaccurate. I think the consistency is the responsibility of the\n> FDW rather than the core, so I would vote for the 0002 patch. Maybe\n> I'm missing something.\n\nNothing about this proposal is stopping the FDW from inserting a better\nvalue for rel->tuples if it's got one. But it's not necessarily easy\nor cheap to get that info. In any case I think that upgrading the\nrequirements for what GetForeignRelSize must set is a hard sell.\nWe certainly could not back-patch a fix that required that, and even\ngoing forward, it seems likely that many FDWs would never get the word.\n(Well, maybe we could force the issue by throwing an error if\nrel->tuples < rel->rows after GetForeignRelSize, but it's not hard\nto imagine that routine testing could fail to trigger such a check.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Jul 2020 10:40:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: estimation problems for DISTINCT ON with FDW" }, { "msg_contents": "On Wed, Jul 1, 2020 at 11:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> > On Wed, Jul 1, 2020 at 7:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> + baserel->tuples = Max(baserel->tuples, baserel->rows);\n>\n> > for consistency, this should be\n> > baserel->tuples = clamp_row_est(baserel->rows / sel);\n> > where sel is the selectivity of the baserestrictinfo clauses?\n>\n> If we had the selectivity available, maybe so, but we don't.\n> (And even less so if we put this logic in the core code.)\n>\n> Short of sending a whole second query to the 
remote server, it's\n> not clear to me how we could get the full table size (or equivalently\n> the target query's selectivity for that table). The best we realistically\n> can do is to adopt pg_class.reltuples if there's been an ANALYZE of\n> the foreign table. That case already works (and this proposal doesn't\n> break it). The problem is what to do when pg_class.reltuples is zero\n> or otherwise badly out-of-date.\n\nIn estimate_path_cost_size(), if use_remote_estimate is true, we\nadjust the rows estimate returned from the remote server, by factoring\nin the selectivity of the locally-checked quals. I thought what I\nproposed above would be more consistent with that.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 2 Jul 2020 11:46:37 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: estimation problems for DISTINCT ON with FDW" }, { "msg_contents": "Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> On Wed, Jul 1, 2020 at 11:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Short of sending a whole second query to the remote server, it's\n>> not clear to me how we could get the full table size (or equivalently\n>> the target query's selectivity for that table). The best we realistically\n>> can do is to adopt pg_class.reltuples if there's been an ANALYZE of\n>> the foreign table. That case already works (and this proposal doesn't\n>> break it). The problem is what to do when pg_class.reltuples is zero\n>> or otherwise badly out-of-date.\n\n> In estimate_path_cost_size(), if use_remote_estimate is true, we\n> adjust the rows estimate returned from the remote server, by factoring\n> in the selectivity of the locally-checked quals. I thought what I\n> proposed above would be more consistent with that.\n\nNo, I don't think that would be very helpful. There are really three\ndifferent numbers of interest here:\n\n1. The actual total rowcount of the remote table.\n\n2. 
The number of rows returned by the remote query (which is #1 times\nthe selectivity of the shippable quals).\n\n3. The number of rows returned by the foreign scan (which is #2 times\nthe selectivity of the non-shippable quals)).\n\nClearly, rel->rows should be set to #3. However, what we really want\nfor rel->tuples is #1. That's because, to the extent that the planner\ninspects rel->tuples at all, it's to adjust whole-table stats such as\nwe might have from ANALYZE. What you're suggesting is that we use #2,\nbut I doubt that that's a big improvement. In a decently tuned query\nit's going to be a lot closer to #3 than to #1.\n\nWe could perhaps try to make our own estimate of the selectivity of the\nshippable quals and then back into #1 from the value we got for #2 from\nthe remote server. But that sounds mighty error-prone, so I doubt it'd\nmake for much of an improvement. It also doesn't sound like something\nI'd want to back-patch.\n\nAnother point here is that, to the extent we are relying on whole-table\nstats from the last ANALYZE, pg_class.reltuples is actually the right\nvalue to go along with that.
We could spend a lot of cycles doing\nwhat I just suggested and end up with net-worse estimates.\n\nIn any case, the proposal I'm making is just to add a sanity-check\nclamp to prevent the worst effects of not setting rel->tuples sanely.\nIt doesn't foreclose future improvements inside the FDW.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Jul 2020 10:46:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: estimation problems for DISTINCT ON with FDW" }, { "msg_contents": "Concretely, I now propose the attached, which seems entirely\nsafe to back-patch.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 02 Jul 2020 16:19:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: estimation problems for DISTINCT ON with FDW" }, { "msg_contents": "On Thu, Jul 2, 2020 at 11:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> > On Wed, Jul 1, 2020 at 11:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Short of sending a whole second query to the remote server, it's\n> >> not clear to me how we could get the full table size (or equivalently\n> >> the target query's selectivity for that table). The best we realistically\n> >> can do is to adopt pg_class.reltuples if there's been an ANALYZE of\n> >> the foreign table. That case already works (and this proposal doesn't\n> >> break it). The problem is what to do when pg_class.reltuples is zero\n> >> or otherwise badly out-of-date.\n>\n> > In estimate_path_cost_size(), if use_remote_estimate is true, we\n> > adjust the rows estimate returned from the remote server, by factoring\n> > in the selectivity of the locally-checked quals. I thought what I\n> > proposed above would be more consistent with that.\n>\n> No, I don't think that would be very helpful. There are really three\n> different numbers of interest here:\n>\n> 1. The actual total rowcount of the remote table.\n>\n> 2. 
The number of rows returned by the remote query (which is #1 times\n> the selectivity of the shippable quals).\n>\n> 3. The number of rows returned by the foreign scan (which is #2 times\n> the selectivity of the non-shippable quals)).\n>\n> Clearly, rel->rows should be set to #3. However, what we really want\n> for rel->tuples is #1. That's because, to the extent that the planner\n> inspects rel->tuples at all, it's to adjust whole-table stats such as\n> we might have from ANALYZE. What you're suggesting is that we use #2,\n> but I doubt that that's a big improvement. In a decently tuned query\n> it's going to be a lot closer to #3 than to #1.\n>\n> We could perhaps try to make our own estimate of the selectivity of the\n> shippable quals and then back into #1 from the value we got for #2 from\n> the remote server.\n\nActually, that is what I suggested:\n\n + /*\n + * plancat.c copied baserel->pages and baserel->tuples from pg_class.\n + * If the foreign table has never been ANALYZEd, or if its stats are\n + * out of date, baserel->tuples might now be less than baserel->rows,\n + * which will confuse assorted logic. Hack it to appear minimally\n + * sensible. (Do we need to hack baserel->pages too?)\n + */\n + baserel->tuples = Max(baserel->tuples, baserel->rows);\n\n for consistency, this should be\n\n baserel->tuples = clamp_row_est(baserel->rows / sel);\n\n where sel is the selectivity of the baserestrictinfo clauses?\n\nBy \"the baserestrictinfo clauses\", I mean the shippable clauses as\nwell as the non-shippable clauses. 
Since baserel->rows stores the\nrows estimate returned by estimate_path_cost_size(), which is #3, this\nestimates #1.\n\n> But that sounds mighty error-prone, so I doubt it'd\n> make for much of an improvement.\n\nI have to admit the error-proneness.\n\n> In any case, the proposal I'm making is just to add a sanity-check\n> clamp to prevent the worst effects of not setting rel->tuples sanely.\n> It doesn't foreclose future improvements inside the FDW.\n\nAgreed.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 3 Jul 2020 11:56:29 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: estimation problems for DISTINCT ON with FDW" }, { "msg_contents": "On Fri, Jul 3, 2020 at 5:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Concretely, I now propose the attached, which seems entirely\n> safe to back-patch.\n\nThe patch looks good to me. And +1 for back-patching.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 3 Jul 2020 20:55:44 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: estimation problems for DISTINCT ON with FDW" }, { "msg_contents": "Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> On Thu, Jul 2, 2020 at 11:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> We could perhaps try to make our own estimate of the selectivity of the\n>> shippable quals and then back into #1 from the value we got for #2 from\n>> the remote server.\n\n> Actually, that is what I suggested:\n> ... By \"the baserestrictinfo clauses\", I mean the shippable clauses as\n> well as the non-shippable clauses. Since baserel->rows stores the\n> rows estimate returned by estimate_path_cost_size(), which is #3, this\n> estimates #1.\n\nAh. That isn't a number we compute in this code path at the moment,\nbut you're right that we could do so. 
However ...\n\n>> But that sounds mighty error-prone, so I doubt it'd\n>> make for much of an improvement.\n\n> I have to admit the error-proneness.\n\n... that is the crux of the problem. The entire reason why we're\nexpending all these cycles to get a remote estimate is that we don't\ntrust the local estimate of the shippable quals' selectivity to be\nany good. So relying on it anyway doesn't seem very smart, even if\nit's for the usually-not-too-important purpose of estimating the\ntotal table size.\n\nI suppose there is one case where this approach could win: if the\nlocal selectivity estimate is just fine, but the remote table size has\nchanged a lot since we last did an ANALYZE, then this would give us a\ndecent table size estimate with no additional remote traffic. But\nthat doesn't really seem like a great bet --- if the table size has\nchanged that much, our local stats are probably obsolete too.\n\nI wonder whether someday we ought to invent a new API that's more\nsuited to postgres_fdw's needs than EXPLAIN is. It's not like the\nremote planner doesn't know the number we want; it just fails to\ninclude it in EXPLAIN.\n\n>> In any case, the proposal I'm making is just to add a sanity-check\n>> clamp to prevent the worst effects of not setting rel->tuples sanely.\n>> It doesn't foreclose future improvements inside the FDW.\n\n> Agreed.\n\nOK, I'll go ahead and push the patch I proposed yesterday.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Jul 2020 17:50:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: estimation problems for DISTINCT ON with FDW" }, { "msg_contents": "On Fri, Jul 3, 2020 at 5:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> OK, I'll go ahead and push the patch I proposed yesterday.\n>\n\nThank you. 
I tested 12_STABLE with my real queries on the real data set,\nand the \"hard coded\" estimate of 200 distinct rows (when use_remote_estimate\nis turned back on) is enough to get rid of the worst plans I was seeing in\n12.3.\n\nCheers,\n\nJeff\n", "msg_date": "Sat, 25 Jul 2020 13:30:22 -0400", "msg_from": "Jeff Janes <jeff.janes@gmail.com>", "msg_from_op": true, "msg_subject": "Re: estimation problems for DISTINCT ON with FDW" } ]
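An aside on the thread above: the two ways of repairing baserel->tuples that were weighed here can be sketched in a few lines of Python. This is an illustration only — the function names echo the planner's C helpers (such as clamp_row_est), but the code below is not the server's implementation, and the selectivity `sel` is assumed to be a purely local estimate of all baserestrictinfo clauses (shippable plus non-shippable).

```python
def clamp_row_est(rows):
    # Rough analogue of the planner's clamp_row_est(): force the
    # estimate to a sane, rounded value of at least one row.
    return max(1.0, float(round(rows)))

def sanity_clamp_tuples(tuples, rows):
    # The committed sanity check: never let the whole-table estimate
    # (rel->tuples) fall below the post-qual row estimate (rel->rows).
    return max(tuples, rows)

def back_into_tuples(rows, sel):
    # The alternative discussed: divide the remote row estimate (#3)
    # by the locally estimated selectivity of the quals to reconstruct
    # the raw table size (#1).  Error-prone whenever the local
    # selectivity estimate is itself poor.
    return clamp_row_est(rows / sel)
```

With stale local stats of 100 tuples but a remote estimate of 5000 rows, the first approach simply lifts tuples to 5000; the second, given an assumed selectivity of 0.05, would report 100000.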
[ { "msg_contents": "Hi,\n\nI found the bug about archive_timeout parameter.\nThere is the case archive_timeout parameter is ignored after recovery works.\n\n[Problem]\nWhen the value of archive_timeout is smaller than that of checkpoint_timeout and recovery works, archive_timeout is ignored in the first WAL archiving.\nOnce WAL is archived, the archive_timeout seems to be valid after that.\n\nI attached the simple script for reproducing this problem on version 12. \nI also confirmed that PostgreSQL10, 11 and 12. I think other supported versions have this problem. \n\n[Investigation]\nIn the CheckpointerMain(), calculate the time (cur_timeout) to wait on WaitLatch.\n\n-----------------------------------------------------------------\nnow = (pg_time_t) time(NULL);\nelapsed_secs = now - last_checkpoint_time;\nif (elapsed_secs >= CheckPointTimeout)\n continue; /* no sleep for us ... */\ncur_timeout = CheckPointTimeout - elapsed_secs;\nif (XLogArchiveTimeout > 0 && !RecoveryInProgress())\n{\n elapsed_secs = now - last_xlog_switch_time;\n if (elapsed_secs >= XLogArchiveTimeout)\n continue; /* no sleep for us ... */\n cur_timeout = Min(cur_timeout, XLogArchiveTimeout - elapsed_secs);\n}\n\n(void) WaitLatch(MyLatch,\n WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n cur_timeout * 1000L /* convert to ms */ ,\n WAIT_EVENT_CHECKPOINTER_MAIN);\n-----------------------------------------------------------------\n\nCurrently, cur_timeout is set according to only checkpoint_timeout when it is during recovery.\nEven during recovery, the cur_timeout should be calculated including archive_timeout as well as checkpoint_timeout, I think.\nI attached the patch to solve this problem.\n\nRegards,\nDaisuke, Higuchi", "msg_date": "Mon, 29 Jun 2020 04:35:11 +0000", "msg_from": "\"higuchi.daisuke@fujitsu.com\" <higuchi.daisuke@fujitsu.com>", "msg_from_op": true, "msg_subject": "[Bug fix]There is the case archive_timeout parameter is ignored after\n recovery works." 
}, { "msg_contents": "Hello.\n\nAt Mon, 29 Jun 2020 04:35:11 +0000, \"higuchi.daisuke@fujitsu.com\" <higuchi.daisuke@fujitsu.com> wrote in \n> Hi,\n> \n> I found the bug about archive_timeout parameter.\n> There is the case archive_timeout parameter is ignored after recovery works.\n...\n> [Problem]\n> When the value of archive_timeout is smaller than that of checkpoint_timeout and recovery works, archive_timeout is ignored in the first WAL archiving.\n> Once WAL is archived, the archive_timeout seems to be valid after that.\n...\n> Currently, cur_timeout is set according to only checkpoint_timeout when it is during recovery.\n> Even during recovery, the cur_timeout should be calculated including archive_timeout as well as checkpoint_timeout, I think.\n> I attached the patch to solve this problem.\n\nUnfortunately the diff command in your test script doesn't show me\nanything, but I can understand what you are thinking is a problem,\nmaybe. But the patch doesn't seem the fix for the issue.\n\nArchiving works irrelevantly from that parameter. Completed WAL\nsegments are immediately marked as \".ready\" and archiver does its task\nimmediately independently from checkpointer. The parameter, as\ndescribed in documentation, forces the server to switch to a new WAL\nsegment file periodically so that it can be archived, that is, it\nworks only on primary. On the other hand on standby, a WAL segment is\nnot marked as \".ready\" until any data for the *next* segment comes. 
So\nthe patch is not the fix for the issue.\n\nIf primary switched segment and archived it but standby didn't archive\nthe same immediately, you could force that by writing something on the\nmaster.\n\nAnyway, the attached patch would resolve your problem.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 29 Jun 2020 16:41:11 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug fix]There is the case archive_timeout parameter is\n ignored after recovery works." }, { "msg_contents": "At Mon, 29 Jun 2020 16:41:11 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Anyway, the attached patch would resolve your problem.\n\nI found another issue related to my last patch.\n\nFor the current master (and older versions) if walreceiver is signaled\nto exit just after a segment is completed, walreceiver exits without\nmarking the last segment as \".ready\". After restart, it doesn't\nremember that it didn't notified the last segment and the segment is\nmissing in archive. I think this is really a bug.\n\nWith the patch, that failure won't happen.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 29 Jun 2020 17:27:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug fix]There is the case archive_timeout parameter is\n ignored after recovery works." 
}, { "msg_contents": "\n\nOn 2020/06/29 16:41, Kyotaro Horiguchi wrote:\n> Hello.\n> \n> At Mon, 29 Jun 2020 04:35:11 +0000, \"higuchi.daisuke@fujitsu.com\" <higuchi.daisuke@fujitsu.com> wrote in\n>> Hi,\n>>\n>> I found the bug about archive_timeout parameter.\n>> There is the case archive_timeout parameter is ignored after recovery works.\n> ...\n>> [Problem]\n>> When the value of archive_timeout is smaller than that of checkpoint_timeout and recovery works, archive_timeout is ignored in the first WAL archiving.\n>> Once WAL is archived, the archive_timeout seems to be valid after that.\n> ...\n>> Currently, cur_timeout is set according to only checkpoint_timeout when it is during recovery.\n>> Even during recovery, the cur_timeout should be calculated including archive_timeout as well as checkpoint_timeout, I think.\n>> I attached the patch to solve this problem.\n> \n> Unfortunately the diff command in your test script doesn't show me\n> anything, but I can understand what you are thinking is a problem,\n> maybe. But the patch doesn't seem the fix for the issue.\n> \n> Archiving works irrelevantly from that parameter. Completed WAL\n> segments are immediately marked as \".ready\" and archiver does its task\n> immediately independently from checkpointer. The parameter, as\n> described in documentation, forces the server to switch to a new WAL\n> segment file periodically so that it can be archived, that is, it\n> works only on primary. On the other hand on standby, a WAL segment is\n> not marked as \".ready\" until any data for the *next* segment comes. So\n> the patch is not the fix for the issue.\n\nThe problems that you're describing and Daisuke-san reported are really\nthe same? 
The reported problem seems that checkpointer can sleep on\nthe latch for more than archive_timeout just after recovery and cannot\nswitch WAL files promptly even if necessary.\n\nThe cause of this problem is that the checkpointer's sleep time is calculated\nfrom both checkpoint_timeout and archive_timeout during normal running,\nbut calculated only from checkpoint_timeout during recovery. So Daisuke-san's\npatch tries to change that so that it's calculated from both of them even\nduring recovery. No?\n\n- if (XLogArchiveTimeout > 0 && !RecoveryInProgress())\n+ if (XLogArchiveTimeout > 0)\n {\n elapsed_secs = now - last_xlog_switch_time;\n- if (elapsed_secs >= XLogArchiveTimeout)\n+ if (elapsed_secs >= XLogArchiveTimeout && !RecoveryInProgress())\n continue; /* no sleep for us ... */\n cur_timeout = Min(cur_timeout, XLogArchiveTimeout - elapsed_secs);\n\nlast_xlog_switch_time is not updated during recovery. So \"elapsed_secs\" can be\nlarge and cur_timeout can be negative. Isn't this problematic?\n\nAs another approach, what about waking the checkpointer up at the end of\nrecovery like we already do for walsenders?\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 29 Jun 2020 19:34:03 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [Bug fix]There is the case archive_timeout parameter is ignored\n after recovery works." }, { "msg_contents": "Thank you for comments.\n\n>Unfortunately the diff command in your test script doesn't show me\n>anything, but I can understand what you are thinking is a problem,\n>maybe.\n\nI'm sorry but I might have confused you... I explain how to use my test script.\nI use diff command to check if the archiver has started. 
The diff command outputs nothing to stdout.\nSo, please see the times displayed by the two date commands in the output of my test script.\nI think you can confirm that the difference between the results of the date commands does not match the archive_timeout setting of 10 seconds.\nIf my test script runs for a few minutes, it means that my problem is reproduced.\n\n>immediately independently from checkpointer. The parameter, as\n>described in documentation, forces the server to switch to a new WAL\n>segment file periodically so that it can be archived, that is, it\n>works only on primary.\n\nI confirmed that this problem occurs in a non-replication environment.\nThe problem occurs when the database tries to archive WAL during or after archive recovery.\nSo your patch may be good to solve another problem, but unfortunately it didn't fix my problem.\n\nRegards,\nDaisuke, Higuchi\n\n\n\n", "msg_date": "Mon, 29 Jun 2020 12:34:10 +0000", "msg_from": "\"higuchi.daisuke@fujitsu.com\" <higuchi.daisuke@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Bug fix]There is the case archive_timeout parameter is ignored\n after recovery works." }, { "msg_contents": "Fujii-san, thank you for comments.\n\n>The cause of this problem is that the checkpointer's sleep time is calculated\n>from both checkpoint_timeout and archive_timeout during normal running,\n>but calculated only from checkpoint_timeout during recovery. So Daisuke-san's\n>patch tries to change that so that it's calculated from both of them even\n>during recovery. No?\n\nYes, it's exactly so.\n\n>last_xlog_switch_time is not updated during recovery. So \"elapsed_secs\" can be\n>large and cur_timeout can be negative. Isn't this problematic?\n\nYes...
My patch was missing this.\nHow about using the original archive_timeout value for calculating cur_timeout during recovery?\n\n if (XLogArchiveTimeout > 0 && !RecoveryInProgress())\n {\n elapsed_secs = now - last_xlog_switch_time;\n if (elapsed_secs >= XLogArchiveTimeout)\n continue; /* no sleep for us ... */\n cur_timeout = Min(cur_timeout, XLogArchiveTimeout - elapsed_secs);\n }\n+ else if (XLogArchiveTimeout > 0)\n+ cur_timeout = Min(cur_timeout, XLogArchiveTimeout);\n\nDuring recovery, accurate cur_timeout is not calculated because elapsed_secs is not used.\nHowever, after recovery is complete, WAL archiving will start by the next archive_timeout is reached.\nI felt it is enough to solve this problem.\n\n>As another approach, what about waking the checkpointer up at the end of\n>recovery like we already do for walsenders?\n\nIf the above solution is not good, I will consider this approach.\n\nRegards,\nDaisuke, Higuchi\n\n\n", "msg_date": "Mon, 29 Jun 2020 13:00:25 +0000", "msg_from": "\"higuchi.daisuke@fujitsu.com\" <higuchi.daisuke@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Bug fix]There is the case archive_timeout parameter is ignored\n after recovery works." }, { "msg_contents": "Opps! I misunderstood that.\n\nAt Mon, 29 Jun 2020 13:00:25 +0000, \"higuchi.daisuke@fujitsu.com\" <higuchi.daisuke@fujitsu.com> wrote in \n> Fujii-san, thank you for comments.\n> \n> >The cause of this problem is that the checkpointer's sleep time is calculated\n> >from both checkpoint_timeout and archive_timeout during normal running,\n> >but calculated only from checkpoint_timeout during recovery. So Daisuke-san's\n> >patch tries to change that so that it's calculated from both of them even\n> >during recovery. No?\n> \n> Yes, it's exactly so.\n> \n> >last_xlog_switch_time is not updated during recovery. So \"elapsed_secs\" can be\n> >large and cur_timeout can be negative. Isn't this problematic?\n> \n> Yes... 
My patch was missing this.\n\nThe patch also makes WaitLatch get called with a zero timeout, which causes\nan assertion failure.\n\n> How about using the original archive_timeout value for calculating cur_timeout during recovery?\n> \n> if (XLogArchiveTimeout > 0 && !RecoveryInProgress())\n> {\n> elapsed_secs = now - last_xlog_switch_time;\n> if (elapsed_secs >= XLogArchiveTimeout)\n> continue; /* no sleep for us ... */\n> cur_timeout = Min(cur_timeout, XLogArchiveTimeout - elapsed_secs);\n> }\n> + else if (XLogArchiveTimeout > 0)\n> + cur_timeout = Min(cur_timeout, XLogArchiveTimeout);\n> \n> During recovery, accurate cur_timeout is not calculated because elapsed_secs is not used.\n> However, after recovery is complete, WAL archiving will start by the next archive_timeout is reached.\n> I felt it is enough to solve this problem.\n\nThat causes an unwanted change of cur_timeout during recovery.\n\n> >As another approach, what about waking the checkpointer up at the end of\n> >recovery like we already do for walsenders?\n\nWe don't want to change the checkpoint interval during recovery, which means\nwe cannot consider archive_timeout at the first checkpoint after\nrecovery ends. So I think that the suggestion from Fujii-san is the\ndirection.\n\n> If the above solution is not good, I will consider this approach.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 30 Jun 2020 09:14:23 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Bug fix]There is the case archive_timeout parameter is\n ignored after recovery works." }, { "msg_contents": "\n\nOn 2020/06/30 9:14, Kyotaro Horiguchi wrote:\n> Opps!
I misunderstood that.\n> \n> At Mon, 29 Jun 2020 13:00:25 +0000, \"higuchi.daisuke@fujitsu.com\" <higuchi.daisuke@fujitsu.com> wrote in\n>> Fujii-san, thank you for comments.\n>>\n>>> The cause of this problem is that the checkpointer's sleep time is calculated\n>> >from both checkpoint_timeout and archive_timeout during normal running,\n>>> but calculated only from checkpoint_timeout during recovery. So Daisuke-san's\n>>> patch tries to change that so that it's calculated from both of them even\n>>> during recovery. No?\n>>\n>> Yes, it's exactly so.\n>>\n>>> last_xlog_switch_time is not updated during recovery. So \"elapsed_secs\" can be\n>>> large and cur_timeout can be negative. Isn't this problematic?\n>>\n>> Yes... My patch was missing this.\n> \n> The patch also makes WaitLatch called with zero timeout, which causes\n> assertion failure.\n> \n>> How about using the original archive_timeout value for calculating cur_timeout during recovery?\n>>\n>> if (XLogArchiveTimeout > 0 && !RecoveryInProgress())\n>> {\n>> elapsed_secs = now - last_xlog_switch_time;\n>> if (elapsed_secs >= XLogArchiveTimeout)\n>> continue; /* no sleep for us ... */\n>> cur_timeout = Min(cur_timeout, XLogArchiveTimeout - elapsed_secs);\n>> }\n>> + else if (XLogArchiveTimeout > 0)\n>> + cur_timeout = Min(cur_timeout, XLogArchiveTimeout);\n>>\n>> During recovery, accurate cur_timeout is not calculated because elapsed_secs is not used.\n\nYes, that's an idea. But I'm a bit concerned that this change makes\nthe checkpointer wake up more frequently than necessary during recovery,\nwhich may increase the idle power consumption of the checkpointer during\nrecovery. Of course, this would not be so problematic in practice because\nwe can expect that archive_timeout is not so small.
But it seems better to\navoid unnecessary wake-ups if we can easily do that.\n\n>> However, after recovery is complete, WAL archiving will start by the next archive_timeout is reached.\n>> I felt it is enough to solve this problem.\n> \n> That causes an unwanted change of cur_timeout during recovery.\n> \n>>> As another approach, what about waking the checkpointer up at the end of\n>>> recovery like we already do for walsenders?\n> \n> We don't want to change the checkpoint interval during recovery, which means\n> we cannot consider archive_timeout at the first checkpoint after\n> recovery ends. So I think that the suggestion from Fujii-san is the\n> direction.\n\n+1\nIf this idea has some problems, we can revisit Daisuke-san's idea.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 30 Jun 2020 10:01:02 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [Bug fix]There is the case archive_timeout parameter is ignored\n after recovery works." }, { "msg_contents": ">> We don't want to change the checkpoint interval during recovery, which means\r\n>> we cannot consider archive_timeout at the first checkpoint after\r\n>> recovery ends. So I think that the suggestion from Fujii-san is the\r\n>> direction.\r\n>+1\r\n>If this idea has some problems, we can revisit Daisuke-san's idea.\r\n\r\nThanks for your comments. \r\nOk, I will work on the fix to wake the checkpointer up at the end of recovery.\r\n\r\nRegards,\r\nDaisuke, Higuchi\r\n", "msg_date": "Tue, 30 Jun 2020 04:01:15 +0000", "msg_from": "\"higuchi.daisuke@fujitsu.com\" <higuchi.daisuke@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [Bug fix]There is the case archive_timeout parameter is ignored\n after recovery works." } ]
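To make the hazard debated in the thread above concrete, here is a small Python model of the sleep-time arithmetic in CheckpointerMain(). The constants and the extra consider_in_recovery flag are illustrative assumptions, not server code; the point is that reusing a stale last_xlog_switch_time during recovery can drive the computed timeout to zero or below, which is what Fujii-san flagged in the first patch.

```python
CHECKPOINT_TIMEOUT = 300  # seconds; an assumed checkpoint_timeout
ARCHIVE_TIMEOUT = 10      # seconds; an assumed archive_timeout

def cur_timeout(now, last_checkpoint_time, last_xlog_switch_time,
                in_recovery, consider_in_recovery):
    # Sleep until the next checkpoint is due ...
    timeout = CHECKPOINT_TIMEOUT - (now - last_checkpoint_time)
    if ARCHIVE_TIMEOUT > 0 and (not in_recovery or consider_in_recovery):
        # ... or until the next forced WAL switch, whichever is sooner.
        # last_xlog_switch_time is not advanced during recovery, so
        # consulting it there can make the result zero or negative.
        timeout = min(timeout,
                      ARCHIVE_TIMEOUT - (now - last_xlog_switch_time))
    return timeout
```

With consider_in_recovery=False the model matches current behavior (recovery sleeps purely on the checkpoint schedule); naively enabling the archive branch against a stale switch time yields a negative timeout.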
[ { "msg_contents": "Hi Hackers,\n\nFor Copy From Binary files, there exists below information for each\ntuple/row.\n1. field count(number of columns)\n2. for every field, field size(column data length)\n3. field data of field size(actual column data)\n\nCurrently, all the above data required at each step is read directly from\nfile using fread() and this happens for all the tuples/rows.\n\nOne observation is that in the total execution time of a copy from binary\nfile, the fread() call is taking upto 20% of time and the fread() function\ncall count is also too high.\n\nFor instance, with a dataset of size 5.3GB, 10million tuples with 10\ncolumns,\ntotal exec time in sec total time taken for fread() fread() function call\ncount\n101.193 *21.33* 210000005\n101.345 *21.436* 210000005\n\nThe total time taken for fread() and the corresponding function call count\nmay increase if we have more number of columns for instance 1000.\n\nOne solution to this problem is to read data from binary file in\nRAW_BUF_SIZE(64KB) chunks to avoid repeatedly calling fread()(thus possibly\navoiding few disk IOs). 
This is similar to the approach followed for\ncsv/text files.\n\nAttaching a patch implementing the above solution for binary format files.\n\nBelow is the improvement gained:\ntotal exec time in sec | total time taken for fread() | fread() call count\n75.757 | 2.73 | 160884\n75.351 | 2.742 | 160884\n\n*Execution is 1.36X faster, fread() time is reduced by 87%, and the fread()\ncall count is reduced by 99%.*\n\nI request the community to take this patch for review if this approach and\nimprovement seem beneficial.\n\nAny suggestions to improve it further are most welcome.\n\nAlso attached is the config file used for testing the above use case.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 29 Jun 2020 10:50:59 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": "Hi,\n\nAdded this to the commitfest in case this is useful -\nhttps://commitfest.postgresql.org/28/\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Mon, Jun 29, 2020 at 10:50 AM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> Hi Hackers,\n>\n> For Copy From Binary files, there exists below information for each\n> tuple/row.\n> 1. field count(number of columns)\n> 2. for every field, field size(column data length)\n> 3.
field data of field size(actual column data)\n>\n> Currently, all the above data required at each step is read directly from\n> file using fread() and this happens for all the tuples/rows.\n>\n> One observation is that in the total execution time of a copy from binary\n> file, the fread() call is taking upto 20% of time and the fread() function\n> call count is also too high.\n>\n> For instance, with a dataset of size 5.3GB, 10million tuples with 10\n> columns,\n> total exec time in sec total time taken for fread() fread() function call\n> count\n> 101.193 *21.33* 210000005\n> 101.345 *21.436* 210000005\n>\n> The total time taken for fread() and the corresponding function call count\n> may increase if we have more number of columns for instance 1000.\n>\n> One solution to this problem is to read data from binary file in\n> RAW_BUF_SIZE(64KB) chunks to avoid repeatedly calling fread()(thus possibly\n> avoiding few disk IOs). This is similar to the approach followed for\n> csv/text files.\n>\n> Attaching a patch, implementing the above solution for binary format files.\n>\n> Below is the improvement gained.\n> total exec time in sec total time taken for fread() fread() function call\n> count\n> 75.757 *2.73* 160884\n> 75.351 *2.742* 160884\n>\n> *Execution is 1.36X times faster, fread() time is reduced by 87%, fread()\n> call count is reduced by 99%.*\n>\n> Request the community to take this patch for review if this approach and\n> improvement seem beneficial.\n>\n> Any suggestions to improve further are most welcome.\n>\n> Attached also is the config file used for testing the above use case.\n>\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n>\n", "msg_date": "Wed, 1 Jul 2020 15:03:06 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": "Hi Bharath,\n\nOn Mon, Jun 29, 2020 at 2:21 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> For Copy From Binary files, there exists below information for each tuple/row.\n> 1. field count(number of columns)\n> 2. for every field, field size(column data length)\n> 3.
field data of field size(actual column data)\n>\n> Currently, all the above data required at each step is read directly from file using fread() and this happens for all the tuples/rows.\n>\n> One observation is that in the total execution time of a copy from binary file, the fread() call is taking upto 20% of time and the fread() function call count is also too high.\n>\n> For instance, with a dataset of size 5.3GB, 10million tuples with 10 columns,\n> total exec time in sec total time taken for fread() fread() function call count\n> 101.193 21.33 210000005\n> 101.345 21.436 210000005\n>\n> The total time taken for fread() and the corresponding function call count may increase if we have more number of columns for instance 1000.\n>\n> One solution to this problem is to read data from binary file in RAW_BUF_SIZE(64KB) chunks to avoid repeatedly calling fread()(thus possibly avoiding few disk IOs). This is similar to the approach followed for csv/text files.\n\nI agree that having the buffer in front of the file makes sense,\nalthough we do now have an extra memcpy, that is, from raw_buf to\nattribute_buf.data. Currently, fread() reads directly into\nattribute_buf.data. 
But maybe that's okay as I don't see the new copy\nbeing all that bad.\n\n> Attaching a patch, implementing the above solution for binary format files.\n>\n> Below is the improvement gained.\n> total exec time in sec total time taken for fread() fread() function call count\n> 75.757 2.73 160884\n> 75.351 2.742 160884\n>\n> Execution is 1.36X times faster, fread() time is reduced by 87%, fread() call count is reduced by 99%.\n>\n> Request the community to take this patch for review if this approach and improvement seem beneficial.\n>\n> Any suggestions to improve further are most welcome.\n\nNoticed the following misbehaviors when trying to test the patch:\n\ncreate table foo5 (a text, b text, c text, d text, e text);\ninsert into foo5 select repeat('a', (random()*100)::int), 'bbb', 'cc',\n'd', 'eee' from generate_series(1, 10000000);\ncopy foo5 to '/tmp/foo5.bin' binary;\ntruncate foo5;\ncopy foo5 from '/tmp/foo5.bin' binary;\nERROR: unexpected EOF in COPY data\nCONTEXT: COPY foo5, line 33, column a\n\ncreate table bar (a numeric);\ninsert into bar select sqrt(a) from generate_series(1, 10000) a;\ncopy bar to '/tmp/bar.bin' binary;\ncopy bar from '/tmp/bar.bin' binary;\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\nTrying to figure what was going wrong in each of these cases, I found\nthe new code a bit hard to navigate and debug :(. Here are a couple of\npoints that I think could have made things a bit easier:\n\n* Avoid spreading the new buffering logic in multiple existing\nfunctions, with similar pieces of code repeated in multiple places. I\nwould add a single new function that takes care of the various\nbuffering details and call it where CopyGetData() is being used\ncurrently.\n\n* You could've reused CopyLoadRawBuffer()'s functionality instead of\nreimplementing it. 
I also see multiple instances of special case\nhandling, which often suggests that bugs are lurking.\n\nConsidering these points, I came up with the attached patch with a\nmuch smaller footprint. As far as I can see, it implements the same\nbasic idea as your patch. With it, I was able to see an improvement\nin loading time consistent with your numbers. I measured the time of\nloading 10 million rows into tables with 5, 10, 20 text columns as\nfollows:\n\ncreate table foo5 (a text, b text, c text, d text, e text);\ninsert into foo5 select repeat('a', (random()*100)::int), 'bbb', 'cc',\n'd', 'eee' from generate_series(1, 10000000);\ncopy foo5 to '/tmp/foo5.bin' binary;\ntruncate foo5;\ncopy foo5 from '/tmp/foo5.bin' binary;\n\ncreate table foo10 (a text, b text, c text, d text, e text, f text, g\ntext, h text, i text, j text);\ninsert into foo10 select repeat('a', (random()*100)::int), 'bbb',\n'cc', 'd', 'eee', 'f', 'gg', 'hh', 'i', 'jjj' from generate_series(1,\n10000000);\ncopy foo10 to '/tmp/foo10.bin' binary;\ntruncate foo10;\ncopy foo10 from '/tmp/foo10.bin' binary;\n\ncreate table foo20 (a text, b text, c text, d text, e text, f numeric,\ng text, h text, i text, j text, k text, l text, m text, n text, o\ntext, p text, q text, r text, s text, t text);\ninsert into foo20 select repeat('a', (random()*100)::int), 'bbb',\n'cc', 'd', 'eee', '123.456', 'gg', 'hh', 'ii', 'jjjj', 'kkk', 'llll',\n'mm', 'n', 'ooooo', 'pppp', 'q', 'rrrrr', 'ss', 'tttttttttttt' from\ngenerate_series(1, 10000000);\ncopy foo20 to '/tmp/foo20.bin' binary;\ntruncate foo20;\ncopy foo20 from '/tmp/foo20.bin' binary;\n\nThe median times for the COPY FROM commands above, with and without\nthe patch, are as follows:\n\n HEAD patched\nfoo5 8.5 6.5\nfoo10 14 10\nfoo20 25 18\n\n A few more points to remember in the future:\n\n* Commenting style:\n\n+ /* If readbytes are lesser than the requested bytes, then initialize the\n+ * remaining bytes in the raw_buf to 0. 
This will be useful for checking\n+ * error \"received copy data after EOF marker\".\n+ */\n\nMulti-line comments are started like this:\n\n /*\n * <Start here>\n */\n\n* As also mentioned above, it's a good idea in general to avoid having\nspecial cases like these in the code:\n\n+ if (cstate->cur_lineno == 1)\n {\n- /* EOF detected (end of file, or protocol-level EOF) */\n- return false;\n+ /* This is for the first time, so read in buff size amount\n+ * of data from file.\n+ */\n\n...\n\n+\n+ /* Move bytes can either be 0, 1, or 2. */\n+ movebytes = RAW_BUF_SIZE - cstate->raw_buf_index;\n\n...\n\n+ uint8 movebytes = 0;\n+\n+ /* Move bytes can either be 0, 1, 2, 3 or 4. */\n+ movebytes = RAW_BUF_SIZE - cstate->raw_buf_index;\n\n* Please try to make variable names short if you can or follow the\nguidelines around long names:\n\n+ int32 remainingbytesincurrdatablock = RAW_BUF_SIZE -\ncstate->raw_buf_index;\n\nMaybe, remaining_bytes would've sufficed here, because \"in the current\ndata block\" might be clear to most readers by looking at the\nsurrounding code.\n\n* The above point also helps avoid long code lines that don't fit\nwithin 78 characters, like these:\n\n+ memcpy(&cstate->attribute_buf.data[0],\n&cstate->raw_buf[cstate->raw_buf_index],\nremainingbytesincurrdatablock);\n+\n+ if (CopyGetData(cstate,\n&cstate->attribute_buf.data[remainingbytesincurrdatablock],\n+ (fld_size - remainingbytesincurrdatablock),\n(fld_size - remainingbytesincurrdatablock)) != (fld_size -\nremainingbytesincurrdatablock))\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 7 Jul 2020 16:28:20 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": "On Tue, Jul 7, 2020 at 4:28 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> The median times for the COPY FROM commands above, with and without\n> the patch, are as 
follows:\n>\n> HEAD patched\n> foo5 8.5 6.5\n> foo10 14 10\n> foo20 25 18\n\nSorry, I forgot to mention that these times are in seconds.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Jul 2020 17:16:12 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": ">\n> Considering these points, I came up with the attached patch with a\n> much smaller footprint. As far as I can see, it implements the same\n> basic idea as your patch. With it, I was able to see an improvement\n> in loading time consistent with your numbers. I measured the time of\n> loading 10 million rows into tables with 5, 10, 20 text columns as\n> follows:\n>\n\nThanks Amit for buying into the idea. I agree that your patch looks\nclean and simple compared to mine and I'm okay with your patch.\n\nI reviewed and tested your patch, below are few comments:\n\nI think we can remove(and delete the function from the code) the\nCopyGetInt16() and have the code directly to save the function call\ncost. It gets called for each attribute/column for each row/tuple to\njust call CopyReadFromRawBuf() and set the byte order. From a\nreadability perspective it's okay to have this function, but cost wise\nI feel no need for that function at all. In one of our earlier\nwork(parallel copy), we observed that having a new function or few\nextra statements in this copy from path which gets hit for each row,\nincurs noticeable execution cost.\n\nThe same way, we can also avoid using CopyGetInt32() function call in\nCopyReadBinaryAttribute() for the same reason stated above.\n\nIn CopyReadFromRawBuf(), can the \"saveTo\" parameter be named \"dest\"\nand use that with (char *) typecast directly, instead of having a\nlocal variable? 
Though it may/may not be a standard practice, let's\nhave the parameter name all lower case to keep it consistent with\nother function's parameters in the copy.c file.\n\nSeems like there's a bug in below part of the code. Use case is\nsimple, have some junk value at the end of the binary file, then with\nyour patch the query succeeds, but it should report the below error.\nHere, on fld_count == -1 instead of reading from file, we must be\nreading it from the buffer, as we would have already read all the data\nfrom the file into the buffer.\n if (cstate->copy_dest != COPY_OLD_FE &&\n CopyGetData(cstate, &dummy, 1, 1) > 0)\n ereport(ERROR,\n (errcode(ERRCODE_BAD_COPY_FILE_FORMAT),\n errmsg(\"received copy data after EOF marker\")));\n\nI also tried with some intentionally corrupted binary datasets, (apart\nfrom above issue) patch works fine.\n\nFor the case where required nbytes may not fit into the buffer in\nCopyReadFromRawBuf, I'm sure this can happen only for field data,\n(field count , and field size are of fixed length and can fit in the\nbuffer), instead of reading them in parts of buff size into the buffer\n(using CopyLoadRawBuf) and then DRAIN_COPY_RAW_BUF() to the\ndestination, I think we can detect this condition using requested\nbytes and the buffer size and directly read from the file to the\ndestination buffer and then reload the raw_buffer for further\nprocessing. I think this way, it will be good.\n\nI have few synthesized test cases where fields can be of larger size.\nI executed them on your patch, but didn't debug to see whether\nactually we hit the code where required nbytes can't fit in the entire\nbuffer. 
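To make that idea concrete, a rough standalone sketch of the "drain the buffered bytes, then read any oversized remainder straight from the file" approach could look like the following. This is only an illustration of the idea under discussion -- the struct, the function names, and the use of plain fread() are all invented here and are not the actual copy.c code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define BUF_CAP 65536			/* stands in for RAW_BUF_SIZE (64KB) */

typedef struct ReadState
{
	FILE   *fp;
	char	buf[BUF_CAP];
	int		buf_index;			/* next unread byte in buf */
	int		buf_len;			/* number of valid bytes in buf */
} ReadState;

/* Refill the buffer from the file; returns the number of bytes loaded. */
static int
load_buf(ReadState *s)
{
	s->buf_len = (int) fread(s->buf, 1, BUF_CAP, s->fp);
	s->buf_index = 0;
	return s->buf_len;
}

/*
 * Read exactly nbytes into dest.  Small requests are served from the
 * buffer (refilling it as needed); a request too large for the buffer
 * drains the buffered bytes and then reads the remainder directly from
 * the file, avoiding an extra buffer-filling pass.
 */
static bool
read_bytes(ReadState *s, char *dest, int nbytes)
{
	int		avail = s->buf_len - s->buf_index;

	if (nbytes <= avail)
	{
		memcpy(dest, s->buf + s->buf_index, nbytes);
		s->buf_index += nbytes;
		return true;
	}

	/* Drain whatever is already buffered. */
	memcpy(dest, s->buf + s->buf_index, avail);
	s->buf_index = s->buf_len;

	if (nbytes - avail >= BUF_CAP)
	{
		/* Oversized remainder: read straight into the destination. */
		return (int) fread(dest + avail, 1, nbytes - avail, s->fp)
			== nbytes - avail;
	}

	/* Otherwise refill the buffer and serve the rest from it. */
	if (load_buf(s) < nbytes - avail)
		return false;			/* unexpected EOF */
	memcpy(dest + avail, s->buf, nbytes - avail);
	s->buf_index = nbytes - avail;
	return true;
}
```

With this shape, a field larger than the buffer capacity takes the direct-read branch once, so the extra memcpy is paid only on the bytes that happened to be buffered already.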
I will try this on the next version of the patch.\n\n>\n> HEAD patched\n> foo5 8.5 6.5\n> foo10 14 10\n> foo20 25 18\n>\n\nNumbers might improve a bit, if we remove the extra function calls as\nstated above.\n\nOverall, thanks for your suggestions in the previous mail, my patch\nwas prepared in a bit hurried manner, anyways, will take care next\ntime.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Jul 2020 16:03:42 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": "Hi Bharath,\n\nOn Thu, Jul 9, 2020 at 7:33 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Thanks Amit for buying into the idea. I agree that your patch looks\n> clean and simple compared to mine and I'm okay with your patch.\n>\n> I reviewed and tested your patch, below are few comments:\n\nThanks for checking it out.\n\n> I think we can remove(and delete the function from the code) the\n> CopyGetInt16() and have the code directly to save the function call\n> cost. It gets called for each attribute/column for each row/tuple to\n> just call CopyReadFromRawBuf() and set the byte order. From a\n> readability perspective it's okay to have this function, but cost wise\n> I feel no need for that function at all. 
In one of our earlier\n> work(parallel copy), we observed that having a new function or few\n> extra statements in this copy from path which gets hit for each row,\n> incurs noticeable execution cost.\n>\n> The same way, we can also avoid using CopyGetInt32() function call in\n> CopyReadBinaryAttribute() for the same reason stated above.\n\nI agree that removing the function call overhead in this case is worth\nthe slight loss of readability.\n\n> In CopyReadFromRawBuf(), can the \"saveTo\" parameter be named \"dest\"\n> and use that with (char *) typecast directly, instead of having a\n> local variable? Though it may/may not be a standard practice, let's\n> have the parameter name all lower case to keep it consistent with\n> other function's parameters in the copy.c file.\n\nAgreed.\n\n> Seems like there's a bug in below part of the code. Use case is\n> simple, have some junk value at the end of the binary file, then with\n> your patch the query succeeds, but it should report the below error.\n> Here, on fld_count == -1 instead of reading from file, we must be\n> reading it from the buffer, as we would have already read all the data\n> from the file into the buffer.\n> if (cstate->copy_dest != COPY_OLD_FE &&\n> CopyGetData(cstate, &dummy, 1, 1) > 0)\n> ereport(ERROR,\n> (errcode(ERRCODE_BAD_COPY_FILE_FORMAT),\n> errmsg(\"received copy data after EOF marker\")));\n>\n> I also tried with some intentionally corrupted binary datasets, (apart\n> from above issue) patch works fine.\n\nYeah, I see the bug. 
I should've checked all the call sites of\nCopyGetData() and made sure there is only one left, that is,\nCopyLoadRawBuffer().\n\n> For the case where required nbytes may not fit into the buffer in\n> CopyReadFromRawBuf, I'm sure this can happen only for field data,\n> (field count , and field size are of fixed length and can fit in the\n> buffer), instead of reading them in parts of buff size into the buffer\n> (using CopyLoadRawBuf) and then DRAIN_COPY_RAW_BUF() to the\n> destination, I think we can detect this condition using requested\n> bytes and the buffer size and directly read from the file to the\n> destination buffer and then reload the raw_buffer for further\n> processing. I think this way, it will be good.\n\nHmm, I'm afraid that this will make the code more complex for\napparently small benefit. Is this really that much of a problem\nperformance wise?\n\n> I have few synthesized test cases where fields can be of larger size.\n> I executed them on your patch, but didn't debug to see whether\n> actually we hit the code where required nbytes can't fit in the entire\n> buffer. 
I will try this on the next version of the patch.\n>\n> >\n> > HEAD patched\n> > foo5 8.5 6.5\n> > foo10 14 10\n> > foo20 25 18\n> >\n>\n> Numbers might improve a bit, if we remove the extra function calls as\n> stated above.\n\nHere the numbers with the updated patch:\n\n HEAD patched (v2)\nfoo5 8.5 6.1\nfoo10 14 9.4\nfoo20 25 16.7\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 10 Jul 2020 12:21:12 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": "On Fri, Jul 10, 2020 at 8:51 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Hi Bharath,\n> Here the numbers with the updated patch:\n>\n> HEAD patched (v2)\n> foo5 8.5 6.1\n> foo10 14 9.4\n> foo20 25 16.7\n>\n\nPatch applies cleanly, make check & make check-world passes.\nI had reviewed the changes. I felt one minor change required:\n+ * CopyReadFromRawBuf\n+ * Reads 'nbytes' bytes from cstate->copy_file via\ncstate->raw_buf and\n+ * writes then to 'saveTo'\n+ *\n+ * Useful when reading binary data from the file.\nShould \"writes then to 'saveTo'\" be \"writes them to 'dest'\"?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 12 Jul 2020 18:43:31 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": "On Mon, Jul 13, 2020 at 1:13 AM vignesh C <vignesh21@gmail.com> wrote:\n> On Fri, Jul 10, 2020 at 8:51 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Here the numbers with the updated patch:\n> >\n> > HEAD patched (v2)\n> > foo5 8.5 6.1\n> > foo10 14 9.4\n> > foo20 25 16.7\n> >\n>\n> Patch applies cleanly, make check & make check-world passes.\n\nThis error showed up when cfbot tried it:\n\n COPY BINARY stud_emp 
FROM\n'/home/travis/build/postgresql-cfbot/postgresql/src/test/regress/results/stud_emp.data';\n+ERROR: could not read from COPY file: Bad address\n\n\n", "msg_date": "Mon, 13 Jul 2020 10:36:02 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": "Thanks Thomas for checking this feature.\n\n> On Mon, Jul 13, 2020 at 4:06 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> This error showed up when cfbot tried it:\n>\n> COPY BINARY stud_emp FROM\n> '/home/travis/build/postgresql-cfbot/postgresql/src/test/regress/results/stud_emp.data';\n> +ERROR: could not read from COPY file: Bad address\n\nThis is due to the recent commit\ncd22d3cdb9bd9963c694c01a8c0232bbae3ddcfb, in which we restricted the\nraw_buf and line_buf allocations for binary files. Since we are using\nraw_buf for this performance improvement feature, now, it's enough to\nrestrict only line_buf for binary files. I made the changes\naccordingly in the v3 patch attached here.\n\nRegression tests(make check & make check-world) ran cleanly with the v3 patch.\n\nPlease also find my responses for:\n\nVignesh's comment:\n\n> On Sun, Jul 12, 2020 at 6:43 PM vignesh C <vignesh21@gmail.com> wrote:\n> I had reviewed the changes. I felt one minor change required:\n> + * CopyReadFromRawBuf\n> + * Reads 'nbytes' bytes from cstate->copy_file via\n> cstate->raw_buf and\n> + * writes then to 'saveTo'\n> + *\n> + * Useful when reading binary data from the file.\n> Should \"writes then to 'saveTo'\" be \"writes them to 'dest'\"?\n>\n\nThanks Vignesh for reviewing the patch. 
Modified 'saveTo' to 'dest' in v3 patch.\n\nAmit's comment:\n\n>\n> > For the case where required nbytes may not fit into the buffer in\n> > CopyReadFromRawBuf, I'm sure this can happen only for field data,\n> > (field count , and field size are of fixed length and can fit in the\n> > buffer), instead of reading them in parts of buff size into the buffer\n> > (using CopyLoadRawBuf) and then DRAIN_COPY_RAW_BUF() to the\n> > destination, I think we can detect this condition using requested\n> > bytes and the buffer size and directly read from the file to the\n> > destination buffer and then reload the raw_buffer for further\n> > processing. I think this way, it will be good.\n>\n> Hmm, I'm afraid that this will make the code more complex for\n> apparently small benefit. Is this really that much of a problem\n> performance wise?\n>\n\nYes it makes CopyReadFromRawBuf(), code a bit complex from a\nreadability perspective. I'm convinced not to have the\nabovementioned(by me) change, due to 3 reasons,1) the\nreadability/understandability 2) how many use cases can we have where\nrequested field size greater than RAW_BUF_SIZE(64KB)? I think very few\ncases. I may be wrong here. 3) Performance wise it may not be much as\nwe do one extra memcpy only in situations where field sizes are\ngreater than 64KB(as we have already seen and observed by you as well\nin one of the response [1]) that memcpy cost for this case may be\nnegligible.\n\nConsidering all of above, I'm okay to have CopyReadFromRawBuf()\nfunction, the way it is currently.\n\n[1]\n> >\n> > One solution to this problem is to read data from binary file in RAW_BUF_SIZE(64KB) chunks to avoid repeatedly calling fread()(thus possibly avoiding few disk IOs). This is similar to the approach followed for csv/text files.\n>\n> I agree that having the buffer in front of the file makes sense,\n> although we do now have an extra memcpy, that is, from raw_buf to\n> attribute_buf.data. 
Currently, fread() reads directly into\n> attribute_buf.data. But maybe that's okay as I don't see the new copy\n> being all that bad.\n>\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 13 Jul 2020 06:49:17 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": "On Mon, Jul 13, 2020 at 10:19 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> > On Mon, Jul 13, 2020 at 4:06 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > This error showed up when cfbot tried it:\n> >\n> > COPY BINARY stud_emp FROM\n> > '/home/travis/build/postgresql-cfbot/postgresql/src/test/regress/results/stud_emp.data';\n> > +ERROR: could not read from COPY file: Bad address\n>\n> This is due to the recent commit\n> cd22d3cdb9bd9963c694c01a8c0232bbae3ddcfb, in which we restricted the\n> raw_buf and line_buf allocations for binary files. Since we are using\n> raw_buf for this performance improvement feature, now, it's enough to\n> restrict only line_buf for binary files. I made the changes\n> accordingly in the v3 patch attached here.\n>\n> Regression tests(make check & make check-world) ran cleanly with the v3 patch.\n\nThank you Bharath. I was a bit surprised that you had also submitted\na patch to NOT allocate raw_buf for COPY FROM ... BINARY. :-)\n\n> Please also find my responses for:\n>\n> Vignesh's comment:\n>\n> > On Sun, Jul 12, 2020 at 6:43 PM vignesh C <vignesh21@gmail.com> wrote:\n> > I had reviewed the changes. 
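(A side note on the function-call-overhead point raised earlier in the thread: the conversion that CopyGetInt16()/CopyGetInt32() perform after fetching the raw bytes is plain big-endian decoding, since COPY BINARY stores field counts and field lengths in network byte order. In isolation it amounts to something like the helpers below -- hypothetical names for illustration, not PostgreSQL's code:)

```c
#include <assert.h>
#include <stdint.h>

/*
 * Decode big-endian (network-order) integers from a raw byte buffer.
 * These spell out the host-order conversion a COPY BINARY reader must
 * apply to field counts (int16) and field lengths (int32).
 */
static uint16_t
be16_to_host(const unsigned char *p)
{
	return (uint16_t) (((uint16_t) p[0] << 8) | p[1]);
}

static uint32_t
be32_to_host(const unsigned char *p)
{
	return ((uint32_t) p[0] << 24) | ((uint32_t) p[1] << 16) |
		((uint32_t) p[2] << 8) | (uint32_t) p[3];
}
```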
I felt one minor change required:\n> > + * CopyReadFromRawBuf\n> > + * Reads 'nbytes' bytes from cstate->copy_file via\n> > cstate->raw_buf and\n> > + * writes then to 'saveTo'\n> > + *\n> > + * Useful when reading binary data from the file.\n> > Should \"writes then to 'saveTo'\" be \"writes them to 'dest'\"?\n> >\n>\n> Thanks Vignesh for reviewing the patch. Modified 'saveTo' to 'dest' in v3 patch.\n\nMy bad.\n\n> Amit's comment:\n>\n> >\n> > > For the case where required nbytes may not fit into the buffer in\n> > > CopyReadFromRawBuf, I'm sure this can happen only for field data,\n> > > (field count , and field size are of fixed length and can fit in the\n> > > buffer), instead of reading them in parts of buff size into the buffer\n> > > (using CopyLoadRawBuf) and then DRAIN_COPY_RAW_BUF() to the\n> > > destination, I think we can detect this condition using requested\n> > > bytes and the buffer size and directly read from the file to the\n> > > destination buffer and then reload the raw_buffer for further\n> > > processing. I think this way, it will be good.\n> >\n> > Hmm, I'm afraid that this will make the code more complex for\n> > apparently small benefit. Is this really that much of a problem\n> > performance wise?\n> >\n>\n> Yes it makes CopyReadFromRawBuf(), code a bit complex from a\n> readability perspective. I'm convinced not to have the\n> abovementioned(by me) change, due to 3 reasons,1) the\n> readability/understandability 2) how many use cases can we have where\n> requested field size greater than RAW_BUF_SIZE(64KB)? I think very few\n> cases. I may be wrong here. 
3) Performance wise it may not be much as\n> we do one extra memcpy only in situations where field sizes are\n> greater than 64KB(as we have already seen and observed by you as well\n> in one of the response [1]) that memcpy cost for this case may be\n> negligible.\n\nActually, an extra memcpy is incurred on every call of\nCopyReadFromRawBuf(), but I haven't seen it to be very problematic.\n\nBy the way, considering the rebase over cd22d3cdb9b, it seemed to me\nthat we needed to update the comments in CopyStateData struct\ndefinition a bit more. While doing that, I realized\nCopyReadFromRawBuf as a name for the new function might be misleading\nas long as we are only using it for binary data. Maybe\nCopyReadBinaryData is more appropriate? See attached v4 with these\nand a few other cosmetic changes.\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 13 Jul 2020 11:32:30 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": ">\n> > This is due to the recent commit\n> > cd22d3cdb9bd9963c694c01a8c0232bbae3ddcfb, in which we restricted the\n> > raw_buf and line_buf allocations for binary files. Since we are using\n> > raw_buf for this performance improvement feature, now, it's enough to\n> > restrict only line_buf for binary files. I made the changes\n> > accordingly in the v3 patch attached here.\n> >\n> > Regression tests(make check & make check-world) ran cleanly with the v3 patch.\n>\n> Thank you Bharath. I was a bit surprised that you had also submitted\n> a patch to NOT allocate raw_buf for COPY FROM ... BINARY. :-)\n>\n\nYes that was by me, before I started to work on this feature. 
I think\nwe can backpatch that change(assuming we don't backpatch this\nfeature), I will make the request accordingly.\n\nAnyways, now we don't allow line_buf allocation for binary files,\nwhich is also a good thing.\n\n>\n> > > > For the case where required nbytes may not fit into the buffer in\n> > > > CopyReadFromRawBuf, I'm sure this can happen only for field data,\n> > > > (field count , and field size are of fixed length and can fit in the\n> > > > buffer), instead of reading them in parts of buff size into the buffer\n> > > > (using CopyLoadRawBuf) and then DRAIN_COPY_RAW_BUF() to the\n> > > > destination, I think we can detect this condition using requested\n> > > > bytes and the buffer size and directly read from the file to the\n> > > > destination buffer and then reload the raw_buffer for further\n> > > > processing. I think this way, it will be good.\n> > >\n> > > Hmm, I'm afraid that this will make the code more complex for\n> > > apparently small benefit. Is this really that much of a problem\n> > > performance wise?\n> > >\n> >\n> > Yes it makes CopyReadFromRawBuf(), code a bit complex from a\n> > readability perspective. I'm convinced not to have the\n> > abovementioned(by me) change, due to 3 reasons,1) the\n> > readability/understandability 2) how many use cases can we have where\n> > requested field size greater than RAW_BUF_SIZE(64KB)? I think very few\n> > cases. I may be wrong here. 3) Performance wise it may not be much as\n> > we do one extra memcpy only in situations where field sizes are\n> > greater than 64KB(as we have already seen and observed by you as well\n> > in one of the response [1]) that memcpy cost for this case may be\n> > negligible.\n>\n> Actually, an extra memcpy is incurred on every call of\n> CopyReadFromRawBuf(), but I haven't seen it to be very problematic.\n>\n\nYes.\n\n>\n> CopyReadFromRawBuf as a name for the new function might be misleading\n> as long as we are only using it for binary data. 
Maybe\n> CopyReadBinaryData is more appropriate? See attached v4 with these\n> and a few other cosmetic changes.\n>\n\nCopyReadBinaryData() looks meaningful. +1.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Jul 2020 08:46:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": "On Mon, Jul 13, 2020 at 12:17 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> > CopyReadFromRawBuf as a name for the new function might be misleading\n> > as long as we are only using it for binary data. Maybe\n> > CopyReadBinaryData is more appropriate? See attached v4 with these\n> > and a few other cosmetic changes.\n> >\n>\n> CopyReadBinaryData() looks meaningful. +1.\n\nOkay, thanks. Let's have a committer take a look at this then?\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Jul 2020 16:34:46 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": ">\n> > > CopyReadFromRawBuf as a name for the new function might be misleading\n> > > as long as we are only using it for binary data. Maybe\n> > > CopyReadBinaryData is more appropriate? See attached v4 with these\n> > > and a few other cosmetic changes.\n> > >\n> >\n> > CopyReadBinaryData() looks meaningful. +1.\n>\n> Okay, thanks. 
Let's have a committer take a look at this then?\n>\n\nI think yes, unless someone has any more points/review comments.\nAccordingly the status in the commitfest can be changed.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Jul 2020 17:04:45 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": "On Mon, Jul 13, 2020 at 8:02 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> By the way, considering the rebase over cd22d3cdb9b, it seemed to me\n> that we needed to update the comments in CopyStateData struct\n> definition a bit more. While doing that, I realized\n> CopyReadFromRawBuf as a name for the new function might be misleading\n> as long as we are only using it for binary data. Maybe\n> CopyReadBinaryData is more appropriate? See attached v4 with these\n> and a few other cosmetic changes.\n>\n\nI had one small comment:\n+{\n+ int copied_bytes = 0;\n+\n+#define BUF_BYTES (cstate->raw_buf_len - cstate->raw_buf_index)\n+#define DRAIN_COPY_RAW_BUF(cstate, dest, nbytes)\\\n+ do {\\\n+ memcpy((dest), (cstate)->raw_buf +\n(cstate)->raw_buf_index, (nbytes));\\\n+ (cstate)->raw_buf_index += (nbytes);\\\n+ } while(0)\n\nBUF_BYTES could be used in CopyLoadRawBuf function also.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Jul 2020 18:50:24 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": ">\n> I had one small comment:\n> +{\n> + int copied_bytes = 0;\n> +\n> +#define BUF_BYTES (cstate->raw_buf_len - cstate->raw_buf_index)\n> +#define DRAIN_COPY_RAW_BUF(cstate, dest, nbytes)\\\n> + do {\\\n> + memcpy((dest), (cstate)->raw_buf +\n> (cstate)->raw_buf_index, (nbytes));\\\n> + 
(cstate)->raw_buf_index += (nbytes);\\\n> + } while(0)\n>\n> BUF_BYTES could be used in CopyLoadRawBuf function also.\n>\n\nThanks Vignesh for the find out. I changed and attached the v5 patch.\nThe regression tests(make check and make check-world) ran\nsuccessfully.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 13 Jul 2020 20:28:03 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": "On Mon, Jul 13, 2020 at 11:58 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> > I had one small comment:\n> > +{\n> > + int copied_bytes = 0;\n> > +\n> > +#define BUF_BYTES (cstate->raw_buf_len - cstate->raw_buf_index)\n> > +#define DRAIN_COPY_RAW_BUF(cstate, dest, nbytes)\\\n> > + do {\\\n> > + memcpy((dest), (cstate)->raw_buf +\n> > (cstate)->raw_buf_index, (nbytes));\\\n> > + (cstate)->raw_buf_index += (nbytes);\\\n> > + } while(0)\n> >\n> > BUF_BYTES could be used in CopyLoadRawBuf function also.\n> >\n>\n> Thanks Vignesh for the find out. I changed and attached the v5 patch.\n> The regression tests(make check and make check-world) ran\n> successfully.\n\nGood idea, thanks.\n\nIn CopyLoadRawBuf(), we could also change the condition if\n(cstate->raw_buf_index < cstate->raw_buf_len) to if (BUF_BYTES > 0),\nwhich looks clearer.\n\nAlso, if we are going to use the macro more generally, let's make it\nlook less localized. For example, rename it to RAW_BUF_BYTES similar\nto RAW_BUF_SIZE and place their definitions close by. 
It also seems\nlike a good idea to make 'cstate' a parameter for clarity.\n\nAttached v6.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 14 Jul 2020 10:56:15 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": "On Tue, Jul 14, 2020 at 7:26 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Good idea, thanks.\n>\n> In CopyLoadRawBuf(), we could also change the condition if\n> (cstate->raw_buf_index < cstate->raw_buf_len) to if (BUF_BYTES > 0),\n> which looks clearer.\n>\n> Also, if we are going to use the macro more generally, let's make it\n> look less localized. For example, rename it to RAW_BUF_BYTES similar\n> to RAW_BUF_SIZE and place their definitions close by. It also seems\n> like a good idea to make 'cstate' a parameter for clarity.\n>\n> Attached v6.\n>\n\nThanks for making the changes.\n\n- if (cstate->raw_buf_index < cstate->raw_buf_len)\n+ if (RAW_BUF_BYTES(cstate) > 0)\n {\n /* Copy down the unprocessed data */\n- nbytes = cstate->raw_buf_len - cstate->raw_buf_index;\n+ nbytes = RAW_BUF_BYTES(cstate);\n memmove(cstate->raw_buf, cstate->raw_buf +\ncstate->raw_buf_index,\n nbytes);\n }\n\nOne small improvement could be to change it like below to reduce a few\nmore instructions:\n\nstatic bool\nCopyLoadRawBuf(CopyState cstate)\n{\n    int nbytes = RAW_BUF_BYTES(cstate);\n    int inbytes;\n\n    /* Copy down the unprocessed data */\n    if (nbytes > 0)\n        memmove(cstate->raw_buf, cstate->raw_buf + cstate->raw_buf_index,\n                nbytes);\n\n    inbytes = CopyGetData(cstate, cstate->raw_buf + nbytes,\n                          1, RAW_BUF_SIZE - nbytes);\n    nbytes += inbytes;\n    cstate->raw_buf[nbytes] = '\\0';\n    cstate->raw_buf_index = 0;\n    cstate->raw_buf_len = nbytes;\n    return (inbytes > 0);\n}\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Jul 2020 10:31:58 +0530", "msg_from": "vignesh C 
<vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": "On Tue, Jul 14, 2020 at 2:02 PM vignesh C <vignesh21@gmail.com> wrote:\n> On Tue, Jul 14, 2020 at 7:26 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > In CopyLoadRawBuf(), we could also change the condition if\n> > (cstate->raw_buf_index < cstate->raw_buf_len) to if (BUF_BYTES > 0),\n> > which looks clearer.\n> >\n> > Also, if we are going to use the macro more generally, let's make it\n> > look less localized. For example, rename it to RAW_BUF_BYTES similar\n> > to RAW_BUF_SIZE and place their definitions close by. It also seems\n> > like a good idea to make 'cstate' a parameter for clarity.\n> >\n> > Attached v6.\n> >\n>\n> Thanks for making the changes.\n>\n> - if (cstate->raw_buf_index < cstate->raw_buf_len)\n> + if (RAW_BUF_BYTES(cstate) > 0)\n> {\n> /* Copy down the unprocessed data */\n> - nbytes = cstate->raw_buf_len - cstate->raw_buf_index;\n> + nbytes = RAW_BUF_BYTES(cstate);\n> memmove(cstate->raw_buf, cstate->raw_buf +\n> cstate->raw_buf_index,\n> nbytes);\n> }\n>\n> One small improvement could be to change it like below to reduce few\n> more instructions:\n> static bool\n> CopyLoadRawBuf(CopyState cstate)\n> {\n> int nbytes = RAW_BUF_BYTES(cstate);\n> int inbytes;\n>\n> /* Copy down the unprocessed data */\n> if (nbytes > 0)\n> memmove(cstate->raw_buf, cstate->raw_buf + cstate->raw_buf_index,\n> nbytes);\n>\n> inbytes = CopyGetData(cstate, cstate->raw_buf + nbytes,\n> 1, RAW_BUF_SIZE - nbytes);\n> nbytes += inbytes;\n> cstate->raw_buf[nbytes] = '\\0';\n> cstate->raw_buf_index = 0;\n> cstate->raw_buf_len = nbytes;\n> return (inbytes > 0);\n> }\n\nSounds fine to me. 
Although CopyLoadRawBuf() does not seem to be a\ncandidate for rigorous code optimization as it does not get called\nthat often.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Jul 2020 14:49:30 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": "On Tue, Jul 14, 2020 at 11:19 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n>\n> Sounds fine to me. Although CopyLoadRawBuf() does not seem to be a\n> candidate for rigorous code optimization as it does not get called\n> that often.\n>\n\nI thought we could include that change as we are making changes around\nthat code. Rest of the changes looked fine to me. 
Also I noticed that\n> commit message was missing in the patch.\n\nPlease see the attached v7.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 15 Jul 2020 11:32:58 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": "On Wed, Jul 15, 2020 at 8:03 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Hi Vignesh,\n>\n> On Tue, Jul 14, 2020 at 10:23 PM vignesh C <vignesh21@gmail.com> wrote:\n> > On Tue, Jul 14, 2020 at 11:19 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > Sounds fine to me. Although CopyLoadRawBuf() does not seem to a\n> > > candidate for rigorous code optimization as it does not get called\n> > > that often.\n> >\n> > I thought we could include that change as we are making changes around\n> > that code.\n>\n> Sure, done.\n>\n> > Rest of the changes looked fine to me. Also I noticed that\n> > commit message was missing in the patch.\n>\n> Please see the attached v7.\n>\n\nThanks for fixing the comments.\nPatch applies cleanly, make check & make check-world passes.\nThe changes looks fine to me.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 15 Jul 2020 09:36:23 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": "On Wed, Jul 15, 2020 at 1:06 PM vignesh C <vignesh21@gmail.com> wrote:\n> On Wed, Jul 15, 2020 at 8:03 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Tue, Jul 14, 2020 at 10:23 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > Rest of the changes looked fine to me. 
Also I noticed that\n> > > commit message was missing in the patch.\n>>\n> > Please see the attached v7.\n>\n> Thanks for fixing the comments.\n> Patch applies cleanly, make check & make check-world passes.\n> The changes looks fine to me.\n\nThanks for checking. Sorry, I hadn't credited Bharath as an author in\nthe commit message, so here's v7 again.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 16 Jul 2020 23:14:40 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": "On Thu, Jul 16, 2020 at 7:44 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Wed, Jul 15, 2020 at 1:06 PM vignesh C <vignesh21@gmail.com> wrote:\n> > On Wed, Jul 15, 2020 at 8:03 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > On Tue, Jul 14, 2020 at 10:23 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > Rest of the changes looked fine to me. Also I noticed that\n> > > > commit message was missing in the patch.\n> >>\n> > > Please see the attached v7.\n> >\n> > Thanks for fixing the comments.\n> > Patch applies cleanly, make check & make check-world passes.\n> > The changes looks fine to me.\n>\n> Thanks for checking. Sorry, I hadn't credited Bharath as an author in\n> the commit message, so here's v7 again.\n>\n\nPatch looks good. It applies on latest commit\n932f9fb504a57f296cf698d15bd93462ddfe2776 and make check, make\ncheck-world were run successfully.\n\nI will change the status to \"ready for committer\" in commitfest\ntomorrow. 
Hope that's fine.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 16 Jul 2020 20:52:22 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": "On Thu, Jul 16, 2020 at 8:52 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Jul 16, 2020 at 7:44 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > On Wed, Jul 15, 2020 at 1:06 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > On Wed, Jul 15, 2020 at 8:03 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > > On Tue, Jul 14, 2020 at 10:23 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > > Rest of the changes looked fine to me. Also I noticed that\n> > > > > commit message was missing in the patch.\n> > >>\n> > > > Please see the attached v7.\n> > >\n> > > Thanks for fixing the comments.\n> > > Patch applies cleanly, make check & make check-world passes.\n> > > The changes looks fine to me.\n> >\n> > Thanks for checking. Sorry, I hadn't credited Bharath as an author in\n> > the commit message, so here's v7 again.\n> >\n>\n> Patch looks good. It applies on latest commit\n> 932f9fb504a57f296cf698d15bd93462ddfe2776 and make check, make\n> check-world were run successfully.\n>\n> I will change the status to \"ready for committer\" in commitfest\n> tomorrow. Hope that's fine.\n\nI agree, a committer can have a look at this.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 16 Jul 2020 22:43:20 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": "> >\n> > I will change the status to \"ready for committer\" in commitfest\n> > tomorrow. 
Hope that's fine.\n>\n> I agree, a committer can have a look at this.\n>\n\nI changed the status in the commit fest to \"Ready for Committer\".\n\nhttps://commitfest.postgresql.org/28/2632/\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 Jul 2020 17:26:36 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> [ v7-0001-Improve-performance-of-binary-COPY-FROM-with-buff.patch ]\n\nPushed with cosmetic changes.\n\nI'd always supposed that stdio does enough internal buffering that short\nfread()s shouldn't be much worse than memcpy(). But I reproduced your\nresult of ~30% speedup for data with a lot of narrow text columns, using\nRHEL 8.2. Thinking this an indictment of glibc, I also tried it on\ncurrent macOS, and saw an even bigger speedup, approaching 50%. So\nthere's definitely something to this. 
I wonder if we ought to check\nother I/O-constrained users of fread and fwrite, like pg_dump/pg_restore.\n\nA point that I did not see addressed in the thread is whether this\nhas any negative impact on the copy-from-frontend code path, where\nthere's no fread() to avoid; short reads from CopyGetData() are\nalready going to be satisfied by memcpy'ing from the fe_msgbuf.\nHowever, a quick check suggests that this patch is still a small\nwin for that case too --- apparently the control overhead in\nCopyGetData() is not negligible.\n\nSo the patch seems fine functionally, but there were some cosmetic\nthings I didn't like:\n\n* Removing CopyGetInt32 and CopyGetInt16 seemed like a pretty bad\nidea, because it made the callers much uglier and more error-prone.\nThis is a particularly bad example:\n\n \t\t/* Header extension length */\n-\t\tif (!CopyGetInt32(cstate, &tmp) ||\n-\t\t\ttmp < 0)\n+\t\tif (CopyReadBinaryData(cstate, (char *) &tmp, sizeof(tmp)) !=\n+\t\t\tsizeof(tmp) || (tmp = (int32) pg_ntoh32(tmp)) < 0)\n\nPutting side-effects into late stages of an if-condition is just\nawful coding practice. They're easy for a reader to miss and they\nare magnets for bugs, because of the possibility that control doesn't\nreach that part of the condition.\n\nYou can get the exact same speedup without any of those disadvantages\nby marking these two functions \"inline\", so that's what I did.\n\n* I dropped the DRAIN_COPY_RAW_BUF macro too, as in my estimation it was\na net negative for readability. With only two use-cases, having it made\nthe code longer not shorter; I was also pretty unconvinced about the\nwisdom of having some of the loop's control logic inside the macro and\nsome outside.\n\n* BTW, the macro definitions weren't particularly per project style\nanyway. We generally put at least one space before line-ending\nbackslashes. 
I don't think pgindent will fix this for you; IME\nit doesn't touch macro definitions at all.\n\n* Did some more work on the comments.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 25 Jul 2020 17:06:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" }, { "msg_contents": "On Sun, Jul 26, 2020 at 6:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > [ v7-0001-Improve-performance-of-binary-COPY-FROM-with-buff.patch ]\n>\n> Pushed with cosmetic changes.\n\nThanks for that.\n\n> I'd always supposed that stdio does enough internal buffering that short\n> fread()s shouldn't be much worse than memcpy(). But I reproduced your\n> result of ~30% speedup for data with a lot of narrow text columns, using\n> RHEL 8.2. Thinking this an indictment of glibc, I also tried it on\n> current macOS, and saw an even bigger speedup, approaching 50%. So\n> there's definitely something to this. 
I wonder if we ought to check\n> other I/O-constrained users of fread and fwrite, like pg_dump/pg_restore.\n\nAh, maybe a good idea to check that.\n\n> A point that I did not see addressed in the thread is whether this\n> has any negative impact on the copy-from-frontend code path, where\n> there's no fread() to avoid; short reads from CopyGetData() are\n> already going to be satisfied by memcpy'ing from the fe_msgbuf.\n> However, a quick check suggests that this patch is still a small\n> win for that case too --- apparently the control overhead in\n> CopyGetData() is not negligible.\n\nIndeed.\n\n> So the patch seems fine functionally, but there were some cosmetic\n> things I didn't like:\n>\n> * Removing CopyGetInt32 and CopyGetInt16 seemed like a pretty bad\n> idea, because it made the callers much uglier and more error-prone.\n> This is a particularly bad example:\n>\n> /* Header extension length */\n> - if (!CopyGetInt32(cstate, &tmp) ||\n> - tmp < 0)\n> + if (CopyReadBinaryData(cstate, (char *) &tmp, sizeof(tmp)) !=\n> + sizeof(tmp) || (tmp = (int32) pg_ntoh32(tmp)) < 0)\n>\n> Putting side-effects into late stages of an if-condition is just\n> awful coding practice. They're easy for a reader to miss and they\n> are magnets for bugs, because of the possibility that control doesn't\n> reach that part of the condition.\n>\n> You can get the exact same speedup without any of those disadvantages\n> by marking these two functions \"inline\", so that's what I did.\n>\n> * I dropped the DRAIN_COPY_RAW_BUF macro too, as in my estimation it was\n> a net negative for readability. With only two use-cases, having it made\n> the code longer not shorter; I was also pretty unconvinced about the\n> wisdom of having some of the loop's control logic inside the macro and\n> some outside.\n>\n> * BTW, the macro definitions weren't particularly per project style\n> anyway. We generally put at least one space before line-ending\n> backslashes. 
I don't think pgindent will fix this for you; IME\n> it doesn't touch macro definitions at all.\n>\n> * Did some more work on the comments.\n\nThanks for these changes.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 27 Jul 2020 11:29:54 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Performance Improvement For Copy From Binary Files" } ]
[ { "msg_contents": "Hi,\n\nDuring fully-cached SELECT-only test using pgbench, Postgres v13Beta1 shows\n~45% performance drop [2] at high DB connection counts (when compared with v12.3)\n\nDisabling pg_stat_statements.track_planning (which is 'On' by default)\nbrings the TPS numbers up to v12.3 levels.\n\nThe inflection point (in this test-case) is 128 Connections, beyond which the\nTPS numbers are consistently low. Looking at the mailing list [1], this issue\ndidn't surface earlier possibly since the regression is trivial at low connection counts.\n\nIt would be great if this could be optimized further, or track_planning\ndisabled (by default) so as to not trip users upgrading from v12 with pg_stat_statement\nenabled (but otherwise not particularly interested in track_planning).\n\nThese are some details around the above test:\n\npgbench: scale - 100 / threads - 16\ntest-duration - 30s each\nserver - 96 vCPUs / 768GB - r5.24xl (AWS EC2 instance)\nclient - 72 vCPUs / 144GB - c5.18xl (AWS EC2 instance) (co-located with the DB server - Same AZ) \nv12 - REL_12_STABLE (v12.3)\nv13Beta1 - REL_13_STABLE (v13Beta1)\nmax_connections = 10000\nshared_preload_libraries = 'pg_stat_statements'\nshared_buffers 128MB\n\n\nReference:\n1) https://www.postgresql.org/message-id/1554150919882-0.post%40n3.nabble.com\n\n2) Fully-cached-select-only TPS drops >= 128 connections.\n\nConn v12.3 v13Beta1 v13Beta1 (track_planning=off)\n1 6,764 6,734 6,905\n2 14,978 14,961 15,316\n4 31,641 32,012 36,961\n8 71,989 68,848 69,204\n16 129,056 131,157 132,773\n32 231,910 226,718 253,316\n64 381,778 371,782 385,402\n128 534,661 ====> 353,944 539,231\n256 636,794 ====> 248,825 643,631\n512 574,447 ====> 213,033 555,099\n768 493,912 ====> 214,801 502,014\n1024 484,993 ====> 222,492 490,716\n1280 480,571 ====> 223,296 483,843\n1536 475,030 ====> 228,137 477,153\n1792 472,145 ====> 229,027 474,423\n2048 471,385 ====> 228,665 470,238\n\n\n3) perf - v13Beta1\n\n- 88.38% 0.17% postgres postgres [.] 
PostgresMain\n - 88.21% PostgresMain \n - 80.09% exec_simple_query \n - 25.34% pg_plan_queries \n - 25.28% pg_plan_query \n - 25.21% pgss_planner \n - 14.36% pgss_store \n + 13.54% s_lock \n + 10.71% standard_planner \n + 18.29% PortalRun \n - 15.12% PortalDrop \n - 14.73% PortalCleanup \n - 13.78% pgss_ExecutorEnd \n - 13.72% pgss_store \n + 12.83% s_lock \n 0.72% standard_ExecutorEnd \n + 6.18% PortalStart \n + 4.86% pg_analyze_and_rewrite \n + 3.52% GetTransactionSnapshot \n + 2.56% pg_parse_query \n + 1.83% finish_xact_command \n 0.51% start_xact_command \n + 3.93% pq_getbyte \n + 3.40% ReadyForQuery \n\n\n\n4) perf - v12.3\n\nv12.3\n- 84.32% 0.21% postgres postgres [.] PostgresMain\n - 84.11% PostgresMain \n - 72.56% exec_simple_query \n + 26.71% PortalRun \n - 15.33% pg_plan_queries \n - 15.29% pg_plan_query \n + 15.21% standard_planner \n + 7.81% PortalStart \n + 6.76% pg_analyze_and_rewrite \n + 4.37% GetTransactionSnapshot \n + 3.69% pg_parse_query \n - 2.96% PortalDrop \n - 2.42% PortalCleanup \n - 1.35% pgss_ExecutorEnd \n - 1.22% pgss_store \n 0.57% s_lock \n 0.77% standard_ExecutorEnd \n + 2.16% finish_xact_command \n + 0.78% start_xact_command \n + 0.59% pg_rewrite_query \n + 5.67% pq_getbyte \n + 4.73% ReadyForQuery\n\n-\nrobins\n\n\n", "msg_date": "Mon, 29 Jun 2020 05:48:35 +0000", "msg_from": "\"Tharakan, Robins\" <tharar@amazon.com>", "msg_from_op": true, "msg_subject": "track_planning causing performance regression" }, { "msg_contents": "Hi,\n\nOn Mon, Jun 29, 2020 at 7:49 AM Tharakan, Robins <tharar@amazon.com> wrote:\n>\n> During fully-cached SELECT-only test using pgbench, Postgres v13Beta1 shows\n> ~45% performance drop [2] at high DB connection counts (when compared with v12.3)\n>\n> Disabling pg_stat_statements.track_planning (which is 'On' by default)\n> brings the TPS numbers up to v12.3 levels.\n>\n> The inflection point (in this test-case) is 128 Connections, beyond which the\n> TPS numbers are consistently low. 
Looking at the mailing list [1], this issue\n> didn't surface earlier possibly since the regression is trivial at low connection counts.\n>\n> It would be great if this could be optimized further, or track_planning\n> disabled (by default) so as to not trip users upgrading from v12 with pg_stat_statement\n> enabled (but otherwise not particularly interested in track_planning).\n>\n> These are some details around the above test:\n>\n> pgbench: scale - 100 / threads - 16\n> test-duration - 30s each\n> server - 96 vCPUs / 768GB - r5.24xl (AWS EC2 instance)\n> client - 72 vCPUs / 144GB - c5.18xl (AWS EC2 instance) (co-located with the DB server - Same AZ)\n> v12 - REL_12_STABLE (v12.3)\n> v13Beta1 - REL_13_STABLE (v13Beta1)\n> max_connections = 10000\n> shared_preload_libraries = 'pg_stat_statements'\n> shared_buffers 128MB\n\nI can't reproduce this on my laptop, but I can certainly believe that\nrunning the same 3 queries using more connections than available cores\nwill lead to extra overhead.\n\nI disagree with the conclusion though. It seems to me that if you\nreally have this workload that consists in these few queries and want\nto get better performance, you'll anyway use a connection pooler\nand/or use prepared statements, which will make this overhead\ndisappear entirely, and will also yield an even bigger performance\nimprovement. 
A quick test using pgbench -M prepared, with\ntrack_planning enabled, with still way too many connections already\nshows a 25% improvement over the -M simple without track_planning.\n\n\n", "msg_date": "Mon, 29 Jun 2020 09:05:18 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "\n\nOn 2020/06/29 16:05, Julien Rouhaud wrote:\n> Hi,\n> \n> On Mon, Jun 29, 2020 at 7:49 AM Tharakan, Robins <tharar@amazon.com> wrote:\n>>\n>> During fully-cached SELECT-only test using pgbench, Postgres v13Beta1 shows\n\nThanks for the benchmark!\n\n\n>> ~45% performance drop [2] at high DB connection counts (when compared with v12.3)\n\nThat's bad :(\n\n\n>>\n>> Disabling pg_stat_statements.track_planning (which is 'On' by default)\n>> brings the TPS numbers up to v12.3 levels.\n>>\n>> The inflection point (in this test-case) is 128 Connections, beyond which the\n>> TPS numbers are consistently low. Looking at the mailing list [1], this issue\n>> didn't surface earlier possibly since the regression is trivial at low connection counts.\n>>\n>> It would be great if this could be optimized further, or track_planning\n>> disabled (by default) so as to not trip users upgrading from v12 with pg_stat_statement\n>> enabled (but otherwise not particularly interested in track_planning).\n\nYour benchmark result seems to suggest that the cause of the problem is\nthe contention of per-query spinlock in pgss_store(). Right?\nThis lock contention is likely to happen when multiple sessions run\nthe same queries.\n\nOne idea to reduce that lock contention is to separate per-query spinlock\ninto two; one is for planning, and the other is for execution. 
pgss_store()\ndetermines which lock to use based on the given \"kind\" argument.\nTo make this idea work, also every pgss counters like shared_blks_hit\nneed to be separated into two, i.e., for planning and execution.\n\n\n>> These are some details around the above test:\n>>\n>> pgbench: scale - 100 / threads - 16\n>> test-duration - 30s each\n>> server - 96 vCPUs / 768GB - r5.24xl (AWS EC2 instance)\n>> client - 72 vCPUs / 144GB - c5.18xl (AWS EC2 instance) (co-located with the DB server - Same AZ)\n>> v12 - REL_12_STABLE (v12.3)\n>> v13Beta1 - REL_13_STABLE (v13Beta1)\n>> max_connections = 10000\n>> shared_preload_libraries = 'pg_stat_statements'\n>> shared_buffers 128MB\n> \n> I can't reproduce this on my laptop, but I can certainly believe that\n> running the same 3 queries using more connections than available cores\n> will lead to extra overhead.\n> \n> I disagree with the conclusion though. It seems to me that if you\n> really have this workload that consists in these few queries and want\n> to get better performance, you'll anyway use a connection pooler\n> and/or use prepared statements, which will make this overhead\n> disappear entirely, and will also yield an even bigger performance\n> improvement. A quick test using pgbench -M prepared, with\n> track_planning enabled, with still way too many connections already\n> shows a 25% improvement over the -M simple without track_planning.\n\nI understand your point. 
But IMO the default setting basically should\nbe safer value, i.e., off at least until the problem disappears.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 29 Jun 2020 17:55:28 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On Mon, Jun 29, 2020 at 10:55 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/06/29 16:05, Julien Rouhaud wrote:\n> > On Mon, Jun 29, 2020 at 7:49 AM Tharakan, Robins <tharar@amazon.com> wrote:\n> >>\n> >> During fully-cached SELECT-only test using pgbench, Postgres v13Beta1 shows\n>\n> Thanks for the benchmark!\n>\n>\n> >> ~45% performance drop [2] at high DB connection counts (when compared with v12.3)\n>\n> That's bad :(\n>\n>\n> >>\n> >> Disabling pg_stat_statements.track_planning (which is 'On' by default)\n> >> brings the TPS numbers up to v12.3 levels.\n> >>\n> >> The inflection point (in this test-case) is 128 Connections, beyond which the\n> >> TPS numbers are consistently low. Looking at the mailing list [1], this issue\n> >> didn't surface earlier possibly since the regression is trivial at low connection counts.\n> >>\n> >> It would be great if this could be optimized further, or track_planning\n> >> disabled (by default) so as to not trip users upgrading from v12 with pg_stat_statement\n> >> enabled (but otherwise not particularly interested in track_planning).\n>\n> Your benchmark result seems to suggest that the cause of the problem is\n> the contention of per-query spinlock in pgss_store(). Right?\n> This lock contention is likely to happen when multiple sessions run\n> the same queries.\n>\n> One idea to reduce that lock contention is to separate per-query spinlock\n> into two; one is for planning, and the other is for execution. 
pgss_store()\n> determines which lock to use based on the given \"kind\" argument.\n> To make this idea work, also every pgss counters like shared_blks_hit\n> need to be separated into two, i.e., for planning and execution.\n\nThis can probably remove some overhead, but won't it eventually hit\nthe same issue when multiple connections try to plan the same query,\ngiven the number of different queries and very low execution runtime?\nIt'll also quite increase the shared memory consumption.\n\nI'm wondering if we could instead use atomics to store the counters.\nThe only downside is that we won't guarantee per-row consistency\nanymore, which may be problematic.\n\n\n", "msg_date": "Mon, 29 Jun 2020 11:17:14 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "\n\nOn 2020/06/29 18:17, Julien Rouhaud wrote:\n> On Mon, Jun 29, 2020 at 10:55 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> On 2020/06/29 16:05, Julien Rouhaud wrote:\n>>> On Mon, Jun 29, 2020 at 7:49 AM Tharakan, Robins <tharar@amazon.com> wrote:\n>>>>\n>>>> During fully-cached SELECT-only test using pgbench, Postgres v13Beta1 shows\n>>\n>> Thanks for the benchmark!\n>>\n>>\n>>>> ~45% performance drop [2] at high DB connection counts (when compared with v12.3)\n>>\n>> That's bad :(\n>>\n>>\n>>>>\n>>>> Disabling pg_stat_statements.track_planning (which is 'On' by default)\n>>>> brings the TPS numbers up to v12.3 levels.\n>>>>\n>>>> The inflection point (in this test-case) is 128 Connections, beyond which the\n>>>> TPS numbers are consistently low. 
Looking at the mailing list [1], this issue\n>>>> didn't surface earlier possibly since the regression is trivial at low connection counts.\n>>>>\n>>>> It would be great if this could be optimized further, or track_planning\n>>>> disabled (by default) so as to not trip users upgrading from v12 with pg_stat_statement\n>>>> enabled (but otherwise not particularly interested in track_planning).\n>>\n>> Your benchmark result seems to suggest that the cause of the problem is\n>> the contention of per-query spinlock in pgss_store(). Right?\n>> This lock contention is likely to happen when multiple sessions run\n>> the same queries.\n>>\n>> One idea to reduce that lock contention is to separate per-query spinlock\n>> into two; one is for planning, and the other is for execution. pgss_store()\n>> determines which lock to use based on the given \"kind\" argument.\n>> To make this idea work, also every pgss counters like shared_blks_hit\n>> need to be separated into two, i.e., for planning and execution.\n> \n> This can probably remove some overhead, but won't it eventually hit\n> the same issue when multiple connections try to plan the same query,\n> given the number of different queries and very low execution runtime?\n\nYes. 
But maybe we can expect that the idea would improve\nthe performance to the near same level as v12?\n\n\n> It'll also quite increase the shared memory consumption.\n\nYes.\n\n\n> I'm wondering if we could instead use atomics to store the counters.\n> The only downside is that we won't guarantee per-row consistency\n> anymore, which may be problematic.\n\nYeah, we can consider more improvements against this issue.\nBut I'm afraid these (maybe including my idea) basically should\nbe items for v14...\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 29 Jun 2020 18:38:17 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On Mon, Jun 29, 2020 at 11:38 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n>\n> >> Your benchmark result seems to suggest that the cause of the problem is\n> >> the contention of per-query spinlock in pgss_store(). Right?\n> >> This lock contention is likely to happen when multiple sessions run\n> >> the same queries.\n> >>\n> >> One idea to reduce that lock contention is to separate per-query spinlock\n> >> into two; one is for planning, and the other is for execution. pgss_store()\n> >> determines which lock to use based on the given \"kind\" argument.\n> >> To make this idea work, also every pgss counters like shared_blks_hit\n> >> need to be separated into two, i.e., for planning and execution.\n> >\n> > This can probably remove some overhead, but won't it eventually hit\n> > the same issue when multiple connections try to plan the same query,\n> > given the number of different queries and very low execution runtime?\n>\n> Yes. But maybe we can expect that the idea would improve\n> the performance to the near same level as v12?\n\nA POC patch should be easy to do and see how much it solves this\nproblem. 
However I'm not able to reproduce the issue, and IMHO unless\nwe specifically want to be able to distinguish planner-time counters\nfrom execution-time counters, I'd prefer to disable track_planning by\ndefault than going this way, so that users with a sane usage won't\nhave to suffer from a memory increase.\n\n> > I'm wondering if we could instead use atomics to store the counters.\n> > The only downside is that we won't guarantee per-row consistency\n> > anymore, which may be problematic.\n>\n> Yeah, we can consider more improvements against this issue.\n> But I'm afraid these (maybe including my idea) basically should\n> be items for v14...\n\nYes, that's clearly not something I'd vote to push in v13 at this point.\n\n\n", "msg_date": "Mon, 29 Jun 2020 11:53:27 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "\n\nOn 2020/06/29 18:53, Julien Rouhaud wrote:\n> On Mon, Jun 29, 2020 at 11:38 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>>> Your benchmark result seems to suggest that the cause of the problem is\n>>>> the contention of per-query spinlock in pgss_store(). Right?\n>>>> This lock contention is likely to happen when multiple sessions run\n>>>> the same queries.\n>>>>\n>>>> One idea to reduce that lock contention is to separate per-query spinlock\n>>>> into two; one is for planning, and the other is for execution. pgss_store()\n>>>> determines which lock to use based on the given \"kind\" argument.\n>>>> To make this idea work, also every pgss counters like shared_blks_hit\n>>>> need to be separated into two, i.e., for planning and execution.\n>>>\n>>> This can probably remove some overhead, but won't it eventually hit\n>>> the same issue when multiple connections try to plan the same query,\n>>> given the number of different queries and very low execution runtime?\n>>\n>> Yes. 
But maybe we can expect that the idea would improve\n>> the performance to the near same level as v12?\n> \n> A POC patch should be easy to do and see how much it solves this\n> problem. However I'm not able to reproduce the issue, and IMHO unless\n> we specifically want to be able to distinguish planner-time counters\n> from execution-time counters, I'd prefer to disable track_planning by\n> default than going this way, so that users with a sane usage won't\n> have to suffer from a memory increase.\n\nAgreed. +1 to change that default to off.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 29 Jun 2020 18:56:02 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On 2020/06/29 18:56, Fujii Masao wrote:\n> \n> \n> On 2020/06/29 18:53, Julien Rouhaud wrote:\n>> On Mon, Jun 29, 2020 at 11:38 AM Fujii Masao\n>> <masao.fujii@oss.nttdata.com> wrote:\n>>>\n>>>>> Your benchmark result seems to suggest that the cause of the problem is\n>>>>> the contention of per-query spinlock in pgss_store(). Right?\n>>>>> This lock contention is likely to happen when multiple sessions run\n>>>>> the same queries.\n>>>>>\n>>>>> One idea to reduce that lock contention is to separate per-query spinlock\n>>>>> into two; one is for planning, and the other is for execution. pgss_store()\n>>>>> determines which lock to use based on the given \"kind\" argument.\n>>>>> To make this idea work, also every pgss counters like shared_blks_hit\n>>>>> need to be separated into two, i.e., for planning and execution.\n>>>>\n>>>> This can probably remove some overhead, but won't it eventually hit\n>>>> the same issue when multiple connections try to plan the same query,\n>>>> given the number of different queries and very low execution runtime?\n>>>\n>>> Yes. 
But maybe we can expect that the idea would improve\n>>> the performance to the near same level as v12?\n>>\n>> A POC patch should be easy to do and see how much it solves this\n>> problem.  However I'm not able to reproduce the issue, and IMHO unless\n>> we specifically want to be able to distinguish planner-time counters\n>> from execution-time counters, I'd prefer to disable track_planning by\n>> default than going this way, so that users with a sane usage won't\n>> have to suffer from a memory increase.\n> \n> Agreed. +1 to change that default to off.\n\nAttached patch does this.\n\nI also add the following into the description about each *_plan_time column\nin the docs. IMO this is helpful for users when they see that those columns\nreport zero by default and try to understand why.\n\n(if <varname>pg_stat_statements.track_planning</varname> is enabled, otherwise zero)\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Mon, 29 Jun 2020 20:14:34 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On Mon, Jun 29, 2020 at 1:14 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/06/29 18:56, Fujii Masao wrote:\n> >\n> >\n> > On 2020/06/29 18:53, Julien Rouhaud wrote:\n> >> On Mon, Jun 29, 2020 at 11:38 AM Fujii Masao\n> >> <masao.fujii@oss.nttdata.com> wrote:\n> >>>\n> >>>>> Your benchmark result seems to suggest that the cause of the problem is\n> >>>>> the contention of per-query spinlock in pgss_store(). Right?\n> >>>>> This lock contention is likely to happen when multiple sessions run\n> >>>>> the same queries.\n> >>>>>\n> >>>>> One idea to reduce that lock contention is to separate per-query spinlock\n> >>>>> into two; one is for planning, and the other is for execution. 
pgss_store()\n> >>>>> determines which lock to use based on the given \"kind\" argument.\n> >>>>> To make this idea work, also every pgss counters like shared_blks_hit\n> >>>>> need to be separated into two, i.e., for planning and execution.\n> >>>>\n> >>>> This can probably remove some overhead, but won't it eventually hit\n> >>>> the same issue when multiple connections try to plan the same query,\n> >>>> given the number of different queries and very low execution runtime?\n> >>>\n> >>> Yes. But maybe we can expect that the idea would improve\n> >>> the performance to the near same level as v12?\n> >>\n> >> A POC patch should be easy to do and see how much it solves this\n> >> problem. However I'm not able to reproduce the issue, and IMHO unless\n> >> we specifically want to be able to distinguish planner-time counters\n> >> from execution-time counters, I'd prefer to disable track_planning by\n> >> default than going this way, so that users with a sane usage won't\n> >> have to suffer from a memory increase.\n> >\n> > Agreed. +1 to change that default to off.\n>\n> Attached patch does this.\n\nPatch looks good to me.\n\n> I also add the following into the description about each *_plan_time column\n> in the docs. IMO this is helpful for users when they see that those columns\n> report zero by default and try to understand why.\n>\n> (if <varname>pg_stat_statements.track_planning</varname> is enabled, otherwise zero)\n\n+1\n\nDo you intend to wait for other input before pushing? FWIW I'm still\nnot convinced that the exposed problem is representative of any\nrealistic workload. 
I of course entirely agree with the other\ndocumentation changes.\n\n\n", "msg_date": "Mon, 29 Jun 2020 13:55:53 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On Mon, 29 Jun 2020 at 12:17, Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Mon, Jun 29, 2020 at 10:55 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n> >\n> > On 2020/06/29 16:05, Julien Rouhaud wrote:\n> > > On Mon, Jun 29, 2020 at 7:49 AM Tharakan, Robins <tharar@amazon.com>\n> wrote:\n> > >>\n> > >> During fully-cached SELECT-only test using pgbench, Postgres v13Beta1\n> shows\n> >\n> > Thanks for the benchmark!\n> >\n> >\n> > >> ~45% performance drop [2] at high DB connection counts (when compared\n> with v12.3)\n> >\n> > That's bad :(\n> >\n> >\n> > >>\n> > >> Disabling pg_stat_statements.track_planning (which is 'On' by default)\n> > >> brings the TPS numbers up to v12.3 levels.\n> > >>\n> > >> The inflection point (in this test-case) is 128 Connections, beyond\n> which the\n> > >> TPS numbers are consistently low. Looking at the mailing list [1],\n> this issue\n> > >> didn't surface earlier possibly since the regression is trivial at\n> low connection counts.\n> > >>\n> > >> It would be great if this could be optimized further, or\n> track_planning\n> > >> disabled (by default) so as to not trip users upgrading from v12 with\n> pg_stat_statement\n> > >> enabled (but otherwise not particularly interested in track_planning).\n> >\n> > Your benchmark result seems to suggest that the cause of the problem is\n> > the contention of per-query spinlock in pgss_store(). 
Right?\n> > This lock contention is likely to happen when multiple sessions run\n> > the same queries.\n> >\n> > One idea to reduce that lock contention is to separate per-query spinlock\n> > into two; one is for planning, and the other is for execution.\n> pgss_store()\n> > determines which lock to use based on the given \"kind\" argument.\n> > To make this idea work, also every pgss counters like shared_blks_hit\n> > need to be separated into two, i.e., for planning and execution.\n>\n> This can probably remove some overhead, but won't it eventually hit\n> the same issue when multiple connections try to plan the same query,\n> given the number of different queries and very low execution runtime?\n> It'll also quite increase the shared memory consumption.\n>\n> I'm wondering if we could instead use atomics to store the counters.\n> The only downside is that we won't guarantee per-row consistency\n> anymore, which may be problematic.\n>\n\n\nThe problem looks to be that spinlocks are terrible with overloaded CPU and\na contended spinlock. A process holding the spinlock might easily get\nscheduled out leading to excessive spinning by everybody. I think a simple\nthing to try would be to replace the spinlock with LWLock.\n\nI did a prototype patch that replaces spinlocks with futexes, but was not\nable to find a workload where it mattered. We have done a great job at\neliminating spinlocks from contended code paths. Robins, perhaps you could\ntry it to see if it reduces the regression you are observing. The patch is\nagainst v13 stable branch.\n\n-- \nAnts Aasma\nSenior Database Engineerwww.cybertec-postgresql.com", "msg_date": "Mon, 29 Jun 2020 16:23:41 +0300", "msg_from": "Ants Aasma <ants@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On Mon, Jun 29, 2020 at 1:55 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > I disagree with the conclusion though. 
It seems to me that if you\n> > really have this workload that consists in these few queries and want\n> > to get better performance, you'll anyway use a connection pooler\n> > and/or use prepared statements, which will make this overhead\n> > disappear entirely, and will also yield an even bigger performance\n> > improvement. A quick test using pgbench -M prepared, with\n> > track_planning enabled, with still way too many connections already\n> > shows a 25% improvement over the -M simple without track_planning.\n>\n> I understand your point. But IMO the default setting basically should\n> be safer value, i.e., off at least until the problem disappears.\n\n+1 -- this regression seems unacceptable to me.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 29 Jun 2020 15:23:49 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On Mon, Jun 29, 2020 at 3:23 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> +1 -- this regression seems unacceptable to me.\n\nI added an open item to track this.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 29 Jun 2020 15:29:06 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "Hi,\n\nOn 2020-06-29 09:05:18 +0200, Julien Rouhaud wrote:\n> I can't reproduce this on my laptop, but I can certainly believe that\n> running the same 3 queries using more connections than available cores\n> will lead to extra overhead.\n\n> I disagree with the conclusion though. 
It seems to me that if you\n> really have this workload that consists in these few queries and want\n> to get better performance, you'll anyway use a connection pooler\n> and/or use prepared statements, which will make this overhead\n> disappear entirely, and will also yield an even bigger performance\n> improvement.\n\nIt's an extremely common to have have times where there's more active\nqueries than CPUs. And a pooler won't avoid that fully, at least not\nwithout drastically reducing overall throughput.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 29 Jun 2020 16:00:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "Hi,\n\nOn 2020-06-29 17:55:28 +0900, Fujii Masao wrote:\n> One idea to reduce that lock contention is to separate per-query spinlock\n> into two; one is for planning, and the other is for execution. pgss_store()\n> determines which lock to use based on the given \"kind\" argument.\n> To make this idea work, also every pgss counters like shared_blks_hit\n> need to be separated into two, i.e., for planning and execution.\n\nI suspect that the best thing would be to just turn the spinlock into an\nlwlock. Spinlocks deal terribly with contention. I suspect it'd solve\nthe performance issue entirely. 
And it might even be possible, further\ndown the line, to just use a shared lock, and use atomics for the\ncounters.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 29 Jun 2020 16:10:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On 2020/06/29 22:23, Ants Aasma wrote:\n> On Mon, 29 Jun 2020 at 12:17, Julien Rouhaud <rjuju123@gmail.com <mailto:rjuju123@gmail.com>> wrote:\n> \n> On Mon, Jun 29, 2020 at 10:55 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> >\n> > On 2020/06/29 16:05, Julien Rouhaud wrote:\n> > > On Mon, Jun 29, 2020 at 7:49 AM Tharakan, Robins <tharar@amazon.com <mailto:tharar@amazon.com>> wrote:\n> > >>\n> > >> During fully-cached SELECT-only test using pgbench, Postgres v13Beta1 shows\n> >\n> > Thanks for the benchmark!\n> >\n> >\n> > >> ~45% performance drop [2] at high DB connection counts (when compared with v12.3)\n> >\n> > That's bad :(\n> >\n> >\n> > >>\n> > >> Disabling pg_stat_statements.track_planning (which is 'On' by default)\n> > >> brings the TPS numbers up to v12.3 levels.\n> > >>\n> > >> The inflection point (in this test-case) is 128 Connections, beyond which the\n> > >> TPS numbers are consistently low. Looking at the mailing list [1], this issue\n> > >> didn't surface earlier possibly since the regression is trivial at low connection counts.\n> > >>\n> > >> It would be great if this could be optimized further, or track_planning\n> > >> disabled (by default) so as to not trip users upgrading from v12 with pg_stat_statement\n> > >> enabled (but otherwise not particularly interested in track_planning).\n> >\n> > Your benchmark result seems to suggest that the cause of the problem is\n> > the contention of per-query spinlock in pgss_store(). 
Right?\n> > This lock contention is likely to happen when multiple sessions run\n> > the same queries.\n> >\n> > One idea to reduce that lock contention is to separate per-query spinlock\n> > into two; one is for planning, and the other is for execution. pgss_store()\n> > determines which lock to use based on the given \"kind\" argument.\n> > To make this idea work, also every pgss counters like shared_blks_hit\n> > need to be separated into two, i.e., for planning and execution.\n> \n> This can probably remove some overhead, but won't it eventually hit\n> the same issue when multiple connections try to plan the same query,\n> given the number of different queries and very low execution runtime?\n> It'll also quite increase the shared memory consumption.\n> \n> I'm wondering if we could instead use atomics to store the counters.\n> The only downside is that we won't guarantee per-row consistency\n> anymore, which may be problematic.\n> \n> \n> \n> The problem looks to be that spinlocks are terrible with overloaded CPU and a contended spinlock. A process holding the spinlock might easily get scheduled out leading to excessive spinning by everybody. I think a simple thing to try would be to replace the spinlock with LWLock.\n\nYes. Attached is the POC patch that replaces per-counter spinlock with LWLock.\n\n> \n> I did a prototype patch that replaces spinlocks with futexes, but was not able to find a workload where it mattered.\n\nI'm not familiar with futex, but could you tell me why you used futex instead\nof LWLock that we already have? Is futex portable?\n\n> We have done a great job at eliminating spinlocks from contended code paths. Robins, perhaps you could try it to see if it reduces the regression you are observing.\n\nYes. 
Also we need to check that this change doesn't increase performance\noverhead in other workloads.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 30 Jun 2020 14:43:39 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On Tue, 30 Jun 2020 at 08:43, Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n> > The problem looks to be that spinlocks are terrible with overloaded\n> CPU and a contended spinlock. A process holding the spinlock might easily\n> get scheduled out leading to excessive spinning by everybody. I think a\n> simple thing to try would be to replace the spinlock with LWLock.\n>\n> Yes. Attached is the POC patch that replaces per-counter spinlock with\n> LWLock.\n>\n\nGreat. I think this is the one that should get considered for testing.\n\n\n> > I did a prototype patch that replaces spinlocks with futexes, but was\n> not able to find a workload where it mattered.\n>\n> I'm not familiar with futex, but could you tell me why you used futex\n> instead\n> of LWLock that we already have? Is futex portable?\n>\n\nFutex is a Linux kernel call that allows to build a lock that has\nuncontended cases work fully in user space almost exactly like a spinlock,\nwhile falling back to syscalls that wait for wakeup in case of contention.\nIt's not portable, but probably something similar could be implemented for\nother operating systems. I did not pursue this further because it became\napparent that every performance critical spinlock had already been removed.\n\nTo be clear, I am not advocating for this patch to get included. 
I just had\nthe patch immediately available and it could have confirmed that using a\nbetter lock fixes things.\n\n-- \nAnts Aasma\nSenior Database Engineer\nwww.cybertec-postgresql.com", "msg_date": "Tue, 30 Jun 2020 14:30:03 +0300", "msg_from": "Ants Aasma <ants@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "\n\nOn 2020/06/30 20:30, Ants Aasma wrote:\n> On Tue, 30 Jun 2020 at 08:43, Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> \n> > The problem looks to be that spinlocks are terrible with overloaded CPU and a contended spinlock. A process holding the spinlock might easily get scheduled out leading to excessive spinning by everybody. I think a simple thing to try would be to replace the spinlock with LWLock.\n> \n> Yes. Attached is the POC patch that replaces per-counter spinlock with LWLock.\n> \n> \n> Great. I think this is the one that should get considered for testing.\n> \n> > I did a prototype patch that replaces spinlocks with futexes, but was not able to find a workload where it mattered.\n> \n> I'm not familiar with futex, but could you tell me why you used futex instead\n> of LWLock that we already have? Is futex portable?\n> \n> \n> Futex is a Linux kernel call that allows to build a lock that has uncontended cases work fully in user space almost exactly like a spinlock, while falling back to syscalls that wait for wakeup in case of contention. It's not portable, but probably something similar could be implemented for other operating systems. I did not pursue this further because it became apparent that every performance critical spinlock had already been removed.\n> \n> To be clear, I am not advocating for this patch to get included. I just had the patch immediately available and it could have confirmed that using a better lock fixes things.\n\nUnderstood.
Thanks for the explanation!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 30 Jun 2020 22:40:09 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "\n\nOn 2020/06/30 7:29, Peter Geoghegan wrote:\n> On Mon, Jun 29, 2020 at 3:23 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> +1 -- this regression seems unacceptable to me.\n> \n> I added an open item to track this.\n\nThanks!\nI'm thinking to change the default value of track_planning to off for v13.\n\nAnts and Andres suggested to replace the spinlock used in pgss_store() with\nLWLock. I agreed with them and posted the POC patch doing that. But I think\nthe patch is an item for v14. The patch may address the reported performance\nissue, but may cause other performance issues in other workloads. We would\nneed to measure how the patch affects the performance in various workloads.\nIt seems too late to do that at this stage of v13. Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 30 Jun 2020 22:40:40 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "Hi,\n\nOn 2020-06-30 14:43:39 +0900, Fujii Masao wrote:\n> > I did a prototype patch that replaces spinlocks with futexes, but was not able to find a workload where it mattered.\n> \n> I'm not familiar with futex, but could you tell me why you used futex instead\n> of LWLock that we already have? Is futex portable?\n\nWe can't rely on futexes, they're linux only. 
I also don't see much of a\nreason to use spinlocks (rather than lwlocks) here in the first place.\n\n\n> diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c\n> index cef8bb5a49..aa506f6c11 100644\n> --- a/contrib/pg_stat_statements/pg_stat_statements.c\n> +++ b/contrib/pg_stat_statements/pg_stat_statements.c\n> @@ -39,7 +39,7 @@\n> * in an entry except the counters requires the same. To look up an entry,\n> * one must hold the lock shared. To read or update the counters within\n> * an entry, one must hold the lock shared or exclusive (so the entry doesn't\n> - * disappear!) and also take the entry's mutex spinlock.\n> + * disappear!) and also take the entry's partition lock.\n> * The shared state variable pgss->extent (the next free spot in the external\n> * query-text file) should be accessed only while holding either the\n> * pgss->mutex spinlock, or exclusive lock on pgss->lock. We use the mutex to\n> @@ -115,6 +115,11 @@ static const uint32 PGSS_PG_MAJOR_VERSION = PG_VERSION_NUM / 100;\n> \n> #define JUMBLE_SIZE\t\t\t\t1024\t/* query serialization buffer size */\n> \n> +#define\tPGSS_NUM_LOCK_PARTITIONS()\t\t(pgss_max)\n> +#define\tPGSS_HASH_PARTITION_LOCK(key)\t\\\n> +\t(&(pgss->base +\t\\\n> +\t (get_hash_value(pgss_hash, key) % PGSS_NUM_LOCK_PARTITIONS()))->lock)\n> +\n> /*\n> * Extension version number, for supporting older extension versions' objects\n> */\n> @@ -207,7 +212,7 @@ typedef struct pgssEntry\n> \tSize\t\tquery_offset;\t/* query text offset in external file */\n> \tint\t\t\tquery_len;\t\t/* # of valid bytes in query string, or -1 */\n> \tint\t\t\tencoding;\t\t/* query text encoding */\n> -\tslock_t\t\tmutex;\t\t\t/* protects the counters only */\n> +\tLWLock\t \t*lock;\t\t\t/* protects the counters only */\n> } pgssEntry;\n\nWhy did you add the hashing here? It seems a lot better to just add an\nlwlock in-place instead of the spinlock? 
The added size is neglegible\ncompared to the size of pgssEntry.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 30 Jun 2020 12:03:20 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "Hi,\n\nOn 2020-06-30 14:30:03 +0300, Ants Aasma wrote:\n> Futex is a Linux kernel call that allows to build a lock that has\n> uncontended cases work fully in user space almost exactly like a spinlock,\n> while falling back to syscalls that wait for wakeup in case of contention.\n> It's not portable, but probably something similar could be implemented for\n> other operating systems. I did not pursue this further because it became\n> apparent that every performance critical spinlock had already been removed.\n\nOur lwlock implementation does have that property already, though. While\nthe kernel wait is implemented using semaphores, those are implemented\nusing futexes internally (posix ones, not sysv ones, so only after\nwhatever version we switched the default to posix semas on linux).\n\nI'd rather move towards removing spinlocks from postgres than making\ntheir implementation more complicated...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 30 Jun 2020 12:06:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On Tue, Jun 30, 2020 at 6:40 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> Ants and Andres suggested to replace the spinlock used in pgss_store() with\n> LWLock. I agreed with them and posted the POC patch doing that. But I think\n> the patch is an item for v14. The patch may address the reported performance\n> issue, but may cause other performance issues in other workloads. We would\n> need to measure how the patch affects the performance in various workloads.\n> It seems too late to do that at this stage of v13. 
Thought?\n\nI agree that it's too late for v13.\n\nThanks\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 30 Jun 2020 15:37:05 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "\n\nOn 2020/07/01 4:03, Andres Freund wrote:\n> Hi,\n> \n> On 2020-06-30 14:43:39 +0900, Fujii Masao wrote:\n>>> I did a prototype patch that replaces spinlocks with futexes, but was not able to find a workload where it mattered.\n>>\n>> I'm not familiar with futex, but could you tell me why you used futex instead\n>> of LWLock that we already have? Is futex portable?\n> \n> We can't rely on futexes, they're linux only.\n\nUnderstood. Thanks!\n\n\n\n> I also don't see much of a\n> reason to use spinlocks (rather than lwlocks) here in the first place.\n> \n> \n>> diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c\n>> index cef8bb5a49..aa506f6c11 100644\n>> --- a/contrib/pg_stat_statements/pg_stat_statements.c\n>> +++ b/contrib/pg_stat_statements/pg_stat_statements.c\n>> @@ -39,7 +39,7 @@\n>> * in an entry except the counters requires the same. To look up an entry,\n>> * one must hold the lock shared. To read or update the counters within\n>> * an entry, one must hold the lock shared or exclusive (so the entry doesn't\n>> - * disappear!) and also take the entry's mutex spinlock.\n>> + * disappear!) and also take the entry's partition lock.\n>> * The shared state variable pgss->extent (the next free spot in the external\n>> * query-text file) should be accessed only while holding either the\n>> * pgss->mutex spinlock, or exclusive lock on pgss->lock. 
We use the mutex to\n>> @@ -115,6 +115,11 @@ static const uint32 PGSS_PG_MAJOR_VERSION = PG_VERSION_NUM / 100;\n>> \n>> #define JUMBLE_SIZE\t\t\t\t1024\t/* query serialization buffer size */\n>> \n>> +#define\tPGSS_NUM_LOCK_PARTITIONS()\t\t(pgss_max)\n>> +#define\tPGSS_HASH_PARTITION_LOCK(key)\t\\\n>> +\t(&(pgss->base +\t\\\n>> +\t (get_hash_value(pgss_hash, key) % PGSS_NUM_LOCK_PARTITIONS()))->lock)\n>> +\n>> /*\n>> * Extension version number, for supporting older extension versions' objects\n>> */\n>> @@ -207,7 +212,7 @@ typedef struct pgssEntry\n>> \tSize\t\tquery_offset;\t/* query text offset in external file */\n>> \tint\t\t\tquery_len;\t\t/* # of valid bytes in query string, or -1 */\n>> \tint\t\t\tencoding;\t\t/* query text encoding */\n>> -\tslock_t\t\tmutex;\t\t\t/* protects the counters only */\n>> +\tLWLock\t \t*lock;\t\t\t/* protects the counters only */\n>> } pgssEntry;\n> \n> Why did you add the hashing here? It seems a lot better to just add an\n> lwlock in-place instead of the spinlock? The added size is neglegible\n> compared to the size of pgssEntry.\n\nBecause pgssEntry is not array entry but hashtable entry. First I was\nthinking to assign per-process lwlock to each entry in the array at the\nstartup. But each entry is created every time new entry is required.\nSo lwlock needs to be assigned to each entry at that creation time.\nWe cannnot easily assign lwlock to all the entries at the startup.\n\nAlso each entry can be dropped from the hashtable. In this case,\nmaybe already-assigned lwlock needs to be moved back to \"freelist\"\nso that it will be able to be assigned again to new entry later. 
We can\nimplement this probably, but which looks a bit complicated.\n\nSince the hasing addresses these issues, I just used it in POC patch.\nBut I'd like to hear better idea!\n\n> +#define PGSS_NUM_LOCK_PARTITIONS() (pgss_max)\n\nCurrently pgss_max is used as the number of lwlock for entries.\nBut if too large number of lwlock is useless (or a bit harmful?), we can\nset the upper limit here, e.g., max(pgss_max, 10000).\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 1 Jul 2020 22:20:50 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "Hi,\n\nOn 2020-07-01 22:20:50 +0900, Fujii Masao wrote:\n> On 2020/07/01 4:03, Andres Freund wrote:\n> > Why did you add the hashing here? It seems a lot better to just add an\n> > lwlock in-place instead of the spinlock? The added size is neglegible\n> > compared to the size of pgssEntry.\n> \n> Because pgssEntry is not array entry but hashtable entry. First I was\n> thinking to assign per-process lwlock to each entry in the array at the\n> startup. But each entry is created every time new entry is required.\n> So lwlock needs to be assigned to each entry at that creation time.\n> We cannnot easily assign lwlock to all the entries at the startup.\n\nBut why not just do it exactly at the place the SpinLockInit() is done\ncurrently?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Jul 2020 09:54:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "\n\nOn 2020/07/02 1:54, Andres Freund wrote:\n> Hi,\n> \n> On 2020-07-01 22:20:50 +0900, Fujii Masao wrote:\n>> On 2020/07/01 4:03, Andres Freund wrote:\n>>> Why did you add the hashing here? 
It seems a lot better to just add an\n>>> lwlock in-place instead of the spinlock? The added size is neglegible\n>>> compared to the size of pgssEntry.\n>>\n>> Because pgssEntry is not array entry but hashtable entry. First I was\n>> thinking to assign per-process lwlock to each entry in the array at the\n>> startup. But each entry is created every time new entry is required.\n>> So lwlock needs to be assigned to each entry at that creation time.\n>> We cannnot easily assign lwlock to all the entries at the startup.\n> \n> But why not just do it exactly at the place the SpinLockInit() is done\n> currently?\n\nSorry I failed to understand your point... You mean that new lwlock should\nbe initialized at the place the SpinLockInit() is done currently instead of\nrequesting postmaster to initialize all the lwlocks required for pgss\nat _PG_init()?\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 3 Jul 2020 10:56:51 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "\n\nOn 2020/07/01 7:37, Peter Geoghegan wrote:\n> On Tue, Jun 30, 2020 at 6:40 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> Ants and Andres suggested to replace the spinlock used in pgss_store() with\n>> LWLock. I agreed with them and posted the POC patch doing that. But I think\n>> the patch is an item for v14. The patch may address the reported performance\n>> issue, but may cause other performance issues in other workloads. We would\n>> need to measure how the patch affects the performance in various workloads.\n>> It seems too late to do that at this stage of v13. 
Thought?\n> \n> I agree that it's too late for v13.\n\nThanks for the comment!\n\nSo I pushed the patch and changed default of track_planning to off.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 3 Jul 2020 11:39:34 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On Thu, Jul 2, 2020 at 7:39 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> So I pushed the patch and changed default of track_planning to off.\n\nI have closed out the open item I created for this.\n\nThanks!\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 2 Jul 2020 19:43:46 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "\n\nOn 2020/07/03 11:43, Peter Geoghegan wrote:\n> On Thu, Jul 2, 2020 at 7:39 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> So I pushed the patch and changed default of track_planning to off.\n> \n> I have closed out the open item I created for this.\n\nThanks!!\n\nI added the patch that replaces spinlock with lwlock in pgss, into CF-2020-09.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 3 Jul 2020 11:48:58 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "Hi\n\npá 3. 7. 2020 v 4:39 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com>\nnapsal:\n\n>\n>\n> On 2020/07/01 7:37, Peter Geoghegan wrote:\n> > On Tue, Jun 30, 2020 at 6:40 AM Fujii Masao <masao.fujii@oss.nttdata.com>\n> wrote:\n> >> Ants and Andres suggested to replace the spinlock used in pgss_store()\n> with\n> >> LWLock. 
I agreed with them and posted the POC patch doing that. But I\n> think\n> >> the patch is an item for v14. The patch may address the reported\n> performance\n> >> issue, but may cause other performance issues in other\n> workloads. We would\n> >> need to measure how the patch affects the performance in various\n> workloads.\n> >> It seems too late to do that at this stage of v13. Thought?\n> >\n> > I agree that it's too late for v13.\n>\n> Thanks for the comment!\n>\n> So I pushed the patch and changed default of track_planning to off.\n>\n\nMaybe there can be documented so enabling this option can have a negative\nimpact on performance.\n\nRegards\n\nPavel\n\n>\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n>\n>\n>", "msg_date": "Fri, 3 Jul 2020 06:05:10 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "\n\nOn 2020/07/03 13:05, Pavel Stehule wrote:\n> Hi\n> \n> pá 3. 7. 2020 v 4:39 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> napsal:\n> \n> \n> \n> On 2020/07/01 7:37, Peter Geoghegan wrote:\n> > On Tue, Jun 30, 2020 at 6:40 AM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> >> Ants and Andres suggested to replace the spinlock used in pgss_store() with\n> >> LWLock. I agreed with them and posted the POC patch doing that. But I think\n> >> the patch is an item for v14. The patch may address the reported performance\n> >> issue, but may cause other performance issues in other workloads. We would\n> >> need to measure how the patch affects the performance in various workloads.\n> >> It seems too late to do that at this stage of v13. Thought?\n> >\n> > I agree that it's too late for v13.\n> \n> Thanks for the comment!\n> \n> So I pushed the patch and changed default of track_planning to off.\n> \n> \n> Maybe there can be documented so enabling this option can have a negative impact on performance.\n\nYes.
What about adding either of the followings into the doc?\n\n Enabling this parameter may incur a noticeable performance penalty.\n\nor\n\n Enabling this parameter may incur a noticeable performance penalty,\n especially when a fewer kinds of queries are executed on many\n concurrent connections.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 3 Jul 2020 15:57:38 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "pá 3. 7. 2020 v 8:57 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com>\nnapsal:\n\n>\n>\n> On 2020/07/03 13:05, Pavel Stehule wrote:\n> > Hi\n> >\n> > pá 3. 7. 2020 v 4:39 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com\n> <mailto:masao.fujii@oss.nttdata.com>> napsal:\n> >\n> >\n> >\n> > On 2020/07/01 7:37, Peter Geoghegan wrote:\n> > > On Tue, Jun 30, 2020 at 6:40 AM Fujii Masao <\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> > >> Ants and Andres suggested to replace the spinlock used in\n> pgss_store() with\n> > >> LWLock. I agreed with them and posted the POC patch doing that.\n> But I think\n> > >> the patch is an item for v14. The patch may address the reported\n> performance\n> > >> issue, but may cause other performance issues in other\n> workloads. We would\n> > >> need to measure how the patch affects the performance in various\n> workloads.\n> > >> It seems too late to do that at this stage of v13. Thought?\n> > >\n> > > I agree that it's too late for v13.\n> >\n> > Thanks for the comment!\n> >\n> > So I pushed the patch and changed default of track_planning to off.\n> >\n> >\n> > Maybe there can be documented so enabling this option can have a\n> negative impact on performance.\n>\n> Yes. 
What about adding either of the followings into the doc?\n>\n>      Enabling this parameter may incur a noticeable performance penalty.\n>\n>     or\n>\n>      Enabling this parameter may incur a noticeable performance penalty,\n>      especially when a fewer kinds of queries are executed on many\n>      concurrent connections.\n>\n\nThis second variant looks perfect for this case.\n\nThank you\n\nPavel\n\n\n\n\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n>\n", "msg_date": "Fri, 3 Jul 2020 09:02:10 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On 2020/07/03 16:02, Pavel Stehule wrote:\n> \n> \n> pá 3. 7. 2020 v 8:57 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> napsal:\n> \n> \n> \n>     On 2020/07/03 13:05, Pavel Stehule wrote:\n>      > Hi\n>      >\n>      > pá 3. 7. 2020 v 4:39 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>> napsal:\n>      >\n>      >\n>      >\n>      >     On 2020/07/01 7:37, Peter Geoghegan wrote:\n>      >      > On Tue, Jun 30, 2020 at 6:40 AM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>> wrote:\n>      >      >> Ants and Andres suggested to replace the spinlock used in pgss_store() with\n>      >      >> LWLock. I agreed with them and posted the POC patch doing that. But I think\n>      >      >> the patch is an item for v14. The patch may address the reported performance\n>      >      >> issue, but may cause other performance issues in other workloads. We would\n>      >      >> need to measure how the patch affects the performance in various workloads.\n>      >      >> It seems too late to do that at this stage of v13. 
Thought?\n> >      >\n> >      > I agree that it's too late for v13.\n> >\n> >     Thanks for the comment!\n> >\n> >     So I pushed the patch and changed default of track_planning to off.\n> >\n> >\n> > Maybe there can be documented so enabling this option can have a negative impact on performance.\n> \n> Yes. What about adding either of the followings into the doc?\n> \n>      Enabling this parameter may incur a noticeable performance penalty.\n> \n> or\n> \n>      Enabling this parameter may incur a noticeable performance penalty,\n>      especially when a fewer kinds of queries are executed on many\n>      concurrent connections.\n> \n> \n> This second variant looks perfect for this case.\n\nOk, so patch attached.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Fri, 3 Jul 2020 20:02:03 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "pá 3. 7. 2020 v 13:02 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com>\nnapsal:\n\n>\n>\n> On 2020/07/03 16:02, Pavel Stehule wrote:\n> >\n> >\n> > pá 3. 7. 2020 v 8:57 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com\n> <mailto:masao.fujii@oss.nttdata.com>> napsal:\n> >\n> >\n> >\n> > On 2020/07/03 13:05, Pavel Stehule wrote:\n> > > Hi\n> > >\n> > > pá 3. 7. 
2020 v 4:39 odesílatel Fujii Masao <\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>> napsal:\n> > >\n> > >\n> > >\n> > > On 2020/07/01 7:37, Peter Geoghegan wrote:\n> > > > On Tue, Jun 30, 2020 at 6:40 AM Fujii Masao <\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>> wrote:\n> > > >> Ants and Andres suggested to replace the spinlock used in\n> pgss_store() with\n> > > >> LWLock. I agreed with them and posted the POC patch doing\n> that. But I think\n> > > >> the patch is an item for v14. The patch may address the\n> reported performance\n> > > >> issue, but may cause other performance issues in other\n> workloads. We would\n> > > >> need to measure how the patch affects the performance in\n> various workloads.\n> > > >> It seems too late to do that at this stage of v13.\n> Thought?\n> > > >\n> > > > I agree that it's too late for v13.\n> > >\n> > > Thanks for the comment!\n> > >\n> > > So I pushed the patch and changed default of track_planning\n> to off.\n> > >\n> > >\n> > > Maybe there can be documented so enabling this option can have a\n> negative impact on performance.\n> >\n> > Yes. What about adding either of the followings into the doc?\n> >\n> > Enabling this parameter may incur a noticeable performance\n> penalty.\n> >\n> > or\n> >\n> > Enabling this parameter may incur a noticeable performance\n> penalty,\n> > especially when a fewer kinds of queries are executed on many\n> > concurrent connections.\n> >\n> >\n> > This second variant looks perfect for this case.\n>\n> Ok, so patch attached.\n>\n\n+1\n\nThank you\n\nPavel\n\n\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n>\n\npá 3. 7. 
2020 v 13:02 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com> napsal:\n\nOn 2020/07/03 16:02, Pavel Stehule wrote:\n> \n> \n> pá 3. 7. 2020 v 8:57 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> napsal:\n> \n> \n> \n>     On 2020/07/03 13:05, Pavel Stehule wrote:\n>      > Hi\n>      >\n>      > pá 3. 7. 2020 v 4:39 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>> napsal:\n>      >\n>      >\n>      >\n>      >     On 2020/07/01 7:37, Peter Geoghegan wrote:\n>      >      > On Tue, Jun 30, 2020 at 6:40 AM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>> wrote:\n>      >      >> Ants and Andres suggested to replace the spinlock used in pgss_store() with\n>      >      >> LWLock. I agreed with them and posted the POC patch doing that. But I think\n>      >      >> the patch is an item for v14. The patch may address the reported performance\n>      >      >> issue, but may cause other performance issues in other workloads. We would\n>      >      >> need to measure how the patch affects the performance in various workloads.\n>      >      >> It seems too late to do that at this stage of v13. Thought?\n>      >      >\n>      >      > I agree that it's too late for v13.\n>      >\n>      >     Thanks for the comment!\n>      >\n>      >     So I pushed the patch and changed default of track_planning to off.\n>      >\n>      >\n>      > Maybe there can be documented so enabling this option can have a negative impact on performance.\n> \n>     Yes. 
What about adding either of the followings into the doc?\n> \n>           Enabling this parameter may incur a noticeable performance penalty.\n> \n>     or\n> \n>           Enabling this parameter may incur a noticeable performance penalty,\n>           especially when a fewer kinds of queries are executed on many\n>           concurrent connections.\n> \n> \n> This second variant looks perfect for this case.\n\nOk, so patch attached.+1Thank youPavel\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Sat, 4 Jul 2020 05:22:31 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "\n\nOn 2020/07/04 12:22, Pavel Stehule wrote:\n> \n> \n> pá 3. 7. 2020 v 13:02 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> napsal:\n> \n> \n> \n> On 2020/07/03 16:02, Pavel Stehule wrote:\n> >\n> >\n> > pá 3. 7. 2020 v 8:57 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>> napsal:\n> >\n> >\n> >\n> >     On 2020/07/03 13:05, Pavel Stehule wrote:\n> >      > Hi\n> >      >\n> >      > pá 3. 7. 
2020 v 4:39 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>>> napsal:\n> >      >\n> >      >\n> >      >\n> >      >     On 2020/07/01 7:37, Peter Geoghegan wrote:\n> >      >      > On Tue, Jun 30, 2020 at 6:40 AM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>>> wrote:\n> >      >      >> Ants and Andres suggested to replace the spinlock used in pgss_store() with\n> >      >      >> LWLock. I agreed with them and posted the POC patch doing that. But I think\n> >      >      >> the patch is an item for v14. The patch may address the reported performance\n> >      >      >> issue, but may cause other performance issues in other workloads. We would\n> >      >      >> need to measure how the patch affects the performance in various workloads.\n> >      >      >> It seems too late to do that at this stage of v13. Thought?\n> >      >      >\n> >      >      > I agree that it's too late for v13.\n> >      >\n> >      >     Thanks for the comment!\n> >      >\n> >      >     So I pushed the patch and changed default of track_planning to off.\n> >      >\n> >      >\n> >      > Maybe there can be documented so enabling this option can have a negative impact on performance.\n> >\n> >     Yes. 
What about adding either of the followings into the doc?\n> >\n> >           Enabling this parameter may incur a noticeable performance penalty.\n> >\n> >     or\n> >\n> >           Enabling this parameter may incur a noticeable performance penalty,\n> >           especially when a fewer kinds of queries are executed on many\n> >           concurrent connections.\n> >\n> >\n> > This second variant looks perfect for this case.\n> \n> Ok, so patch attached.\n> \n> \n> +1\n\nThanks for the review! Pushed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 6 Jul 2020 14:29:28 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "<https://commitfest.postgresql.org/29/2634/>\n\nOn Mon, Jul 6, 2020 at 10:29 AM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n>\n>\n> On 2020/07/04 12:22, Pavel Stehule wrote:\n> >\n> >\n> > pá 3. 7. 2020 v 13:02 odesílatel Fujii Masao <\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> napsal:\n> >\n> >\n> >\n> > On 2020/07/03 16:02, Pavel Stehule wrote:\n> > >\n> > >\n> > > pá 3. 7. 2020 v 8:57 odesílatel Fujii Masao <\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>> napsal:\n> > >\n> > >\n> > >\n> > > On 2020/07/03 13:05, Pavel Stehule wrote:\n> > > > Hi\n> > > >\n> > > > pá 3. 7. 
2020 v 4:39 odesílatel Fujii Masao <\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> <mailto:\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>>>\n> napsal:\n> > > >\n> > > >\n> > > >\n> > > > On 2020/07/01 7:37, Peter Geoghegan wrote:\n> > > > > On Tue, Jun 30, 2020 at 6:40 AM Fujii Masao <\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> <mailto:\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>>> wrote:\n> > > > >> Ants and Andres suggested to replace the spinlock\n> used in pgss_store() with\n> > > > >> LWLock. I agreed with them and posted the POC\n> patch doing that. But I think\n> > > > >> the patch is an item for v14. The patch may\n> address the reported performance\n> > > > >> issue, but may cause other performance issues in\n> other workloads. We would\n> > > > >> need to measure how the patch affects the\n> performance in various workloads.\n> > > > >> It seems too late to do that at this stage of v13.\n> Thought?\n> > > > >\n> > > > > I agree that it's too late for v13.\n> > > >\n> > > > Thanks for the comment!\n> > > >\n> > > > So I pushed the patch and changed default of\n> track_planning to off.\n> > > >\n> > > >\n> > > > Maybe there can be documented so enabling this option can\n> have a negative impact on performance.\n> > >\n> > > Yes. 
What about adding either of the followings into\n> the doc?\n> > >\n> > > Enabling this parameter may incur a noticeable\n> performance penalty.\n> > >\n> > > or\n> > >\n> > > Enabling this parameter may incur a noticeable\n> performance penalty,\n> > > especially when a fewer kinds of queries are\n> executed on many\n> > > concurrent connections.\n> > >\n> > >\n> > > This second variant looks perfect for this case.\n> >\n> > Ok, so patch attached.\n> >\n> >\n> > +1\n>\n> Thanks for the review! Pushed.\n>\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n>\n>\n>\nYou might also want to update this patch's status in the commitfest:\nhttps://commitfest.postgresql.org/29/2634/\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus", "msg_date": "Fri, 31 Jul 2020 17:40:34 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "\n\nOn 2020/07/31 21:40, Hamid Akhtar wrote:\n> <https://commitfest.postgresql.org/29/2634/>\n> \n> On Mon, Jul 6, 2020 at 10:29 AM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> \n> \n> \n>     On 2020/07/04 12:22, Pavel Stehule wrote:\n>      >\n>      >\n>      > pá 3. 7. 
2020 v 8:57 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>>> napsal:\n> >      >\n> >      >\n> >      >\n> >      >     On 2020/07/03 13:05, Pavel Stehule wrote:\n> >      >      > Hi\n> >      >      >\n> >      >      > pá 3. 7. 2020 v 4:39 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>>>> napsal:\n> >      >      >\n> >      >      >\n> >      >      >\n> >      >      >     On 2020/07/01 7:37, Peter Geoghegan wrote:\n> >      >      >      > On Tue, Jun 30, 2020 at 6:40 AM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>>>> wrote:\n> >      >      >      >> Ants and Andres suggested to replace the spinlock used in 
pgss_store() with\n> >      >      >      >> LWLock. I agreed with them and posted the POC patch doing that. But I think\n> >      >      >      >> the patch is an item for v14. The patch may address the reported performance\n> >      >      >      >> issue, but may cause other performance issues in other workloads. We would\n> >      >      >      >> need to measure how the patch affects the performance in various workloads.\n> >      >      >      >> It seems too late to do that at this stage of v13. Thought?\n> >      >      >      >\n> >      >      >      > I agree that it's too late for v13.\n> >      >      >\n> >      >      >     Thanks for the comment!\n> >      >      >\n> >      >      >     So I pushed the patch and changed default of track_planning to off.\n> >      >      >\n> >      >      >\n> >      >      > Maybe there can be documented so enabling this option can have a negative impact on performance.\n> >      >\n> >      >     Yes. What about adding either of the followings into the doc?\n> >      >\n> >      >           Enabling this parameter may incur a noticeable performance penalty.\n> >      >\n> >      >     or\n> >      >\n> >      >           Enabling this parameter may incur a noticeable performance penalty,\n> >      >           especially when a fewer kinds of queries are executed on many\n> >      >           concurrent connections.\n> >      >\n> >      >\n> >      > This second variant looks perfect for this case.\n> >\n> >     Ok, so patch attached.\n> >\n> >\n> > +1\n> \n> Thanks for the review! Pushed.\n> \n> Regards,\n> \n> -- \n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n> \n> \n> \n> You might also want to update this patch's status in the commitfest:\n> https://commitfest.postgresql.org/29/2634/\n\nThe patch added into this CF entry has not been committed yet.\nSo I was thinking that there is no need to update the status yet. 
No?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 17 Aug 2020 18:21:41 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On Mon, Aug 17, 2020 at 2:21 PM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n>\n>\n> On 2020/07/31 21:40, Hamid Akhtar wrote:\n> > <https://commitfest.postgresql.org/29/2634/>\n> >\n> > On Mon, Jul 6, 2020 at 10:29 AM Fujii Masao <masao.fujii@oss.nttdata.com\n> <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> >\n> >\n> >\n> > On 2020/07/04 12:22, Pavel Stehule wrote:\n> > >\n> > >\n> > > pá 3. 7. 2020 v 13:02 odesílatel Fujii Masao <\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>> napsal:\n> > >\n> > >\n> > >\n> > > On 2020/07/03 16:02, Pavel Stehule wrote:\n> > > >\n> > > >\n> > > > pá 3. 7. 2020 v 8:57 odesílatel Fujii Masao <\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> <mailto:\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>>>\n> napsal:\n> > > >\n> > > >\n> > > >\n> > > > On 2020/07/03 13:05, Pavel Stehule wrote:\n> > > > > Hi\n> > > > >\n> > > > > pá 3. 7. 
2020 v 4:39 odesílatel Fujii Masao <\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> <mailto:\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>>\n> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>\n> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>\n> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>\n> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>>>>\n> napsal:\n> > > > >\n> > > > >\n> > > > >\n> > > > > On 2020/07/01 7:37, Peter Geoghegan wrote:\n> > > > > > On Tue, Jun 30, 2020 at 6:40 AM Fujii Masao <\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> <mailto:\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com> <mailto:\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>>\n> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>\n> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>\n> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>\n> <mailto:masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>>>>\n> wrote:\n> > > > > >> Ants and Andres suggested to replace the\n> spinlock used in pgss_store() with\n> > > > > >> LWLock. I agreed with them and posted the\n> POC patch doing that. But I think\n> > > > > >> the patch is an item for v14. The patch may\n> address the reported performance\n> > > > > >> issue, but may cause other performance\n> issues in other workloads. We would\n> > > > > >> need to measure how the patch affects the\n> performance in various workloads.\n> > > > > >> It seems too late to do that at this stage\n> of v13. 
Thought?\n> > > > > >\n> > > > > > I agree that it's too late for v13.\n> > > > >\n> > > > > Thanks for the comment!\n> > > > >\n> > > > > So I pushed the patch and changed default of\n> track_planning to off.\n> > > > >\n> > > > >\n> > > > > Maybe there can be documented so enabling this\n> option can have a negative impact on performance.\n> > > >\n> > > > Yes. What about adding either of the followings into\n> the doc?\n> > > >\n> > > > Enabling this parameter may incur a noticeable\n> performance penalty.\n> > > >\n> > > > or\n> > > >\n> > > > Enabling this parameter may incur a noticeable\n> performance penalty,\n> > > > especially when a fewer kinds of queries are\n> executed on many\n> > > > concurrent connections.\n> > > >\n> > > >\n> > > > This second variant looks perfect for this case.\n> > >\n> > > Ok, so patch attached.\n> > >\n> > >\n> > > +1\n> >\n> > Thanks for the review! Pushed.\n> >\n> > Regards,\n> >\n> > --\n> > Fujii Masao\n> > Advanced Computing Technology Center\n> > Research and Development Headquarters\n> > NTT DATA CORPORATION\n> >\n> >\n> >\n> > You might also want to update this patch's status in the commitfest:\n> > https://commitfest.postgresql.org/29/2634/\n>\n> The patch added into this CF entry has not been committed yet.\n> So I was thinking that there is no need to update the status yet. 
No?\n>\n\nYour previous email suggested that it's been pushed, hence my comment.\nChecking the git log, I see a commit was pushed on July 6 (321fa6a) with\nthe changes that match the latest patch.\n\nAm I missing something here?\n\n\n>\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus", "msg_date": "Mon, 17 Aug 2020 14:34:18 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "\r\n\r\nOn 2020/08/17 18:34, Hamid Akhtar wrote:\r\n> \r\n> \r\n> On Mon, Aug 17, 2020 at 2:21 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\r\n> \r\n> \r\n> \r\n> On 2020/07/31 21:40, Hamid Akhtar wrote:\r\n> > <https://commitfest.postgresql.org/29/2634/>\r\n> >\r\n> > On Mon, Jul 6, 2020 at 10:29 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\r\n> >\r\n> >\r\n> >\r\n> >     On 2020/07/04 12:22, Pavel Stehule wrote:\r\n> >      >\r\n> >      >\r\n> >      > pá 3. 7. 2020 v 13:02 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com> napsal:\r\n> >      >\r\n> >      >\r\n> >      >\r\n> >      >     On 2020/07/03 16:02, Pavel Stehule wrote:\r\n> >      >      >\r\n> >      >      >\r\n> >      >      > pá 3. 7. 
2020 v 8:57 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com> napsal:\r\n> >      >      >\r\n> >      >      >\r\n> >      >      >\r\n> >      >      >     On 2020/07/03 13:05, Pavel Stehule wrote:\r\n> >      >      >      > Hi\r\n> >      >      >      >\r\n> >      >      >      > pá 3. 7. 2020 v 4:39 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com> napsal:\r\n> >      >      >      >\r\n> >      >      >      >\r\n> >      >      >      >\r\n> >      >      >      >     On 2020/07/01 7:37, Peter Geoghegan wrote:\r\n> >      >      >      >      > On Tue, Jun 30, 2020 at 6:40 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\r\n> >      >      >      >      >> Ants and Andres suggested to replace the spinlock used in pgss_store() with\r\n> >      >      >      >      >> LWLock. I agreed with them and posted the POC patch doing that. But I think\r\n> >      >      >      >      >> the patch is an item for v14. The patch may address the reported performance\r\n> >      >      >      >      >> issue, but may cause other performance issues in other workloads. We would\r\n> >      >      >      >      >> need to measure how the patch affects the performance in various workloads.\r\n> >      >      >      >      >> It seems too late to do that at this stage of v13. Thought?\r\n> >      >      >      >      >\r\n> >      >      >      >      > I agree that it's too late for v13.\r\n> >      >      >      >\r\n> >      >      >      >     Thanks for the comment!\r\n> >      >      >      >\r\n> >      >      >      >     So I pushed the patch and changed default of track_planning to off.\r\n> >      >      >      >\r\n> >      >      >      >\r\n> >      >      >      > Maybe there can be documented so enabling this option can have a negative impact on performance.\r\n> >      >      >\r\n> >      >      >     Yes. What about adding either of the followings into the doc?\r\n> >      >      >\r\n> >      >      >           Enabling this parameter may incur a noticeable performance penalty.\r\n> >      >      >\r\n> >      >      >     or\r\n> >      >      >\r\n> >      >      >           Enabling this parameter may incur a noticeable performance penalty,\r\n> >      >      >           especially when a fewer kinds of queries are executed on many\r\n> >      >      >           concurrent connections.\r\n> >      >      >\r\n> >      >      >\r\n> >      >      > This second variant looks perfect for this case.\r\n> >      >\r\n> >      >     Ok, so patch attached.\r\n> >      >\r\n> >      >\r\n> >      > +1\r\n> >\r\n> >     Thanks for the review! 
Pushed.\r\n> >\r\n> >     Regards,\r\n> >\r\n> >     --\r\n> >     Fujii Masao\r\n> >     Advanced Computing Technology Center\r\n> >     Research and Development Headquarters\r\n> >     NTT DATA CORPORATION\r\n> >\r\n> >\r\n> >\r\n> > You might also want to update this patch's status in the commitfest:\r\n> > https://commitfest.postgresql.org/29/2634/\r\n> \r\n> The patch added into this CF entry has not been committed yet.\r\n> So I was thinking that there is no need to update the status yet. No?\r\n> \r\n> \r\n> Your previous email suggested that it's been pushed, hence my comment. Checking the git log, I see a commit was pushed on July 6 (321fa6a) with the changes that match the latest patch.\r\n\r\nYes, I pushed the document_overhead_by_track_planning.patch, but this\r\nCF entry is for pgss_lwlock_v1.patch which replaces spinlocks with lwlocks\r\nin pg_stat_statements. The latter patch has not been committed yet.\r\nProbably attaching the different patches in the same thread would cause\r\nthis confusing thing... Anyway, thanks for your comment!\r\n\r\nRegards,\r\n\r\n\r\n-- \r\nFujii Masao\r\nAdvanced Computing Technology Center\r\nResearch and Development Headquarters\r\nNTT DATA CORPORATION\r\n", "msg_date": "Mon, 17 Aug 2020 21:30:31 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "> Yes, I pushed the document_overhead_by_track_planning.patch, but this\n> CF entry is for pgss_lwlock_v1.patch which replaces spinlocks with lwlocks\n> in pg_stat_statements. The latter patch has not been committed yet.\n> Probably attaching the different patches in the same thread would cause\n> this confusing thing... Anyway, thanks for your comment!\n\nTo avoid further confusion, I attached the rebased version of\nthe patch that was registered at CF. 
I'd appreciate it if\nyou review this version.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Wed, 19 Aug 2020 00:43:49 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On Tue, Aug 18, 2020 at 8:43 PM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n>\n> > Yes, I pushed the document_overhead_by_track_planning.patch, but this\n> > CF entry is for pgss_lwlock_v1.patch which replaces spinlocks with lwlocks\n> > in pg_stat_statements. The latter patch has not been committed yet.\n> > Probably attaching the different patches in the same thread would cause\n> > this confusing thing... Anyway, thanks for your comment!\n>\n> To avoid further confusion, I attached the rebased version of\n> the patch that was registered at CF. I'd appreciate it if\n> you review this version.\n>\n\nThank you. 
Reviewing it now.\n\n\n>\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus", "msg_date": "Tue, 18 Aug 2020 20:44:48 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nOverall, the patch works fine. However, I have a few observations:\r\n\r\n(1) Code Comments:\r\n- The code comments should be added for the 2 new macros, in particular for PGSS_NUM_LOCK_PARTITIONS. As you explained in your email, this may be used to limit the number of locks if a very large value for pgss_max is specified.\r\n- From the code I inferred that the number of locks can in future be less than pgss_max (per your email where in future this macro could be used to limit the number of locks). 
I suggest adding some notes here to help future changes in this code area.\r\n\r\n(2) It seems that the \"pgss->lock = &(pgss->base + pgss_max)->lock;\" statement should not use pgss_max directly and should instead use the PGSS_NUM_LOCK_PARTITIONS macro, since once a limit is imposed on the number of locks, this statement will cause an overrun.\r\n\r\n\r\n-- \r\nHighgo Software (Canada/China/Pakistan)\r\nURL : www.highgo.ca\r\nADDR: 10318 WHALLEY BLVD, Surrey, BC\r\nCELL:+923335449950  EMAIL: mailto:hamid.akhtar@highgo.ca\r\nSKYPE: engineeredvirus\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Wed, 19 Aug 2020 12:45:41 +0000", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "Hi,\n\nOn 2020-08-19 00:43, Fujii Masao wrote:\n>> Yes, I pushed the document_overhead_by_track_planning.patch, but this\n>> CF entry is for pgss_lwlock_v1.patch which replaces spinlocks with lwlocks\n>> in pg_stat_statements. The latter patch has not been committed yet.\n>> Probably attaching the different patches in the same thread would cause\n>> this confusing thing... Anyway, thanks for your comment!\n> \n> To avoid further confusion, I attached the rebased version of\n> the patch that was registered at CF. I'd appreciate it if\n> you review this version.\n\n
And I couldn't observe \nperformance\nimprovement in our environment and I'm afraid to say that even worser in \nsome case.\n - Workload1: pgbench select-only mode\n - Workload2: pgbench custom scripts which run \"SELECT 1;\"\n - Workload3: pgbench custom scripts which run 1000 types of different \nsimple queries\n\n- Workload1\nFirst we set the pg_stat_statements.track_planning to on/off and run the \nfully-cached pgbench\nselect-only mode on pg14head which is installed in on-premises \nserver(32CPU, 256GB mem).\nHowever in this enveronment we couldn't reproduce 45% performance drop \ndue to s_lock conflict\n(Tharakan-san mentioned in his post on \n2895b53b033c47ccb22972b589050dd9@EX13D05UWC001.ant.amazon.com).\n\n- Workload2\nThen we adopted pgbench custom script \"SELECT 1;\" which supposed to \nincrease the s_lock and\nmake it easier to reproduce the issue. In this case around 10% of \nperformance decrease\nwhich also shows slightly increase in s_lock (~10%). With this senario, \ndespite a s_lock\nabsence, the patch shows more than 50% performance degradation \nregardless of track_planning.\nAnd also we couldn't see performance improvement in this workload.\n\npgbench:\n initialization: pgbench -i -s 100\n benchmarking : pgbench -j16 -c128 -T180 -r -n -f <script> -h <address> \n-U <user> -p <port> -d <db>\n # VACUUMed and pg_prewarmed manually before run the benchmark\nquery:SELECT 1;\n> pgss_lwlock_v2.patch track_planning TPS decline rate \n> s_lock CPU usage\n> - OFF 810509.4 standard \n> 0.17% 98.8%(sys24.9%,user73.9%)\n> - ON 732823.1 -9.6% \n> 1.94% 95.1%(sys22.8%,user72.3%)\n> + OFF 371035.0 -49.4% - \n> 65.2%(sys20.6%,user44.6%)\n> + ON 193965.2 -47.7% - \n> 41.8%(sys12.1%,user29.7%)\n # \"-\" is showing that s_lock was not reported from the perf.\n\n- Workload3\nNext, there is concern that replacement of LWLock may reduce performance \nin some other workloads.\n(Fujii-san mentioned in his post on 
\n42a13b4c-e60c-c6e7-3475-8eff8050bed4@oss.nttdata.com).\nTo clarify this, we prepared 1000 simple queries which is supposed to \nprevent the conflict of\ns_lock and may expect to see the behavior without s_lock. In this case, \nno performance decline\nwas observed and also we couldn't see any additional memory consumption \nor cpu usage.\n\npgbench:\n initialization: pgbench -n -i -s 100 --partitions=1000 \n--partition-method=range\n benchmarking : command is same as (Workload1)\nquery: SELECT abalance FROM pgbench_accounts_xxxx WHERE aid = :aid + \n(10000 * :random_num - 10000);\n> pgss_lwlock_v2.patch track_planning TPS decline rate CPU \n> usage\n> - OFF 88329.1 standard \n> 82.1%(sys6.5%,user75.6%)\n> - ON 88015.3 -0.36% \n> 82.6%(sys6.5%,user76.1%)\n> + OFF 88177.5 0.18% \n> 82.2%(sys6.5%,user75.7%)\n> + ON 88079.1 -0.11% \n> 82.5%(sys6.5%,user76.0%)\n\n(Environment)\nmachine:\n server/client - 32 CPUs / 256GB # used same machine as server & client\npostgres:\n version: v14 (6eee73e)\n configure: '--prefix=/usr/pgsql-14a' 'CFLAGS=-O2'\nGUC param (changed from defaults):\n shared_preload_libraries = 'pg_stat_statements, pg_prewarm'\n autovacuum = off\n checkpoint = 120min\n max_connections=300\n listen_address='*'\n shared_buffers=64GB\n\n\nRegards,\n\n-- \nHibiki Tanaka\n\n\n", "msg_date": "Fri, 11 Sep 2020 16:23:28 +0900", "msg_from": "bttanakahbk <bttanakahbk@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "\n\nOn 2020/09/11 16:23, bttanakahbk wrote:\n> Hi,\n> \n> On 2020-08-19 00:43, Fujii Masao wrote:\n>>> Yes, I pushed the document_overhead_by_track_planning.patch, but this\n>>> CF entry is for pgss_lwlock_v1.patch which replaces spinlocks with lwlocks\n>>> in pg_stat_statements. The latter patch has not been committed yet.\n>>> Probably attachding the different patches in the same thread would cause\n>>> this confusing thing... 
Anyway, thanks for your comment!\n>>\n>> To avoid further confusion, I attached the rebased version of\n>> the patch that was registered at CF. I'd appreciate it if\n>> you review this version.\n> \n> I tested pgss_lwlock_v2.patch with 3 workloads. And I couldn't observe performance\n> improvement in our environment and I'm afraid to say that even worser in some case.\n>  - Workload1: pgbench select-only mode\n>  - Workload2: pgbench custom scripts which run \"SELECT 1;\"\n>  - Workload3: pgbench custom scripts which run 1000 types of different simple queries\n\nThanks for running the benchmarks!\n\n\n> \n> - Workload1\n> First we set the pg_stat_statements.track_planning to on/off and run the fully-cached pgbench\n> select-only mode on pg14head which is installed in on-premises server(32CPU, 256GB mem).\n> However in this enveronment we couldn't reproduce 45% performance drop due to s_lock conflict\n> (Tharakan-san mentioned in his post on 2895b53b033c47ccb22972b589050dd9@EX13D05UWC001.ant.amazon.com).\n> \n> - Workload2\n> Then we adopted pgbench custom script \"SELECT 1;\" which supposed to increase the s_lock and\n> make it easier to reproduce the issue. In this case around 10% of performance decrease\n> which also shows slightly increase in s_lock (~10%). 
With this scenario, despite the absence of\n> s_lock, the patch shows more than 50% performance degradation regardless of track_planning.\n> We also couldn't see any performance improvement in this workload.\n> \n> pgbench:\n>  initialization: pgbench -i -s 100\n>  benchmarking  : pgbench -j16 -c128 -T180 -r -n -f <script> -h <address> -U <user> -p <port> -d <db>\n>   # VACUUMed and pg_prewarmed manually before running the benchmark\n> query:SELECT 1;\n>>   pgss_lwlock_v2.patch  track_planning  TPS         decline rate  s_lock   CPU usage\n>>   -                     OFF             810509.4    standard      0.17%    98.8%(sys24.9%,user73.9%)\n>>   -                     ON              732823.1    -9.6%         1.94%    95.1%(sys22.8%,user72.3%)\n>>   +                     OFF             371035.0    -49.4%        -        65.2%(sys20.6%,user44.6%)\n>>   +                     ON              193965.2    -47.7%        -        41.8%(sys12.1%,user29.7%)\n>   # \"-\" means that s_lock was not reported by perf.\n\nOk, so my proposed patch degraded the performance in this case :(\nThis means that replacing spinlock with lwlock in pgss is not a proper\napproach for the lock contention issue on pgss...\n\nI proposed to split the spinlock for each pgss entry into two\nto reduce the lock contention, upthread. One is for planner stats,\nand the other is for executor stats. Is it worth working on\nthis approach as an alternative idea? 
Or does anyone have any better idea?\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 11 Sep 2020 23:04:41 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On Fri, Sep 11, 2020 at 4:04 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/09/11 16:23, bttanakahbk wrote:\n> >\n> > pgbench:\n> > initialization: pgbench -i -s 100\n> > benchmarking : pgbench -j16 -c128 -T180 -r -n -f <script> -h <address> -U <user> -p <port> -d <db>\n> > # VACUUMed and pg_prewarmed manually before run the benchmark\n> > query:SELECT 1;\n> >> pgss_lwlock_v2.patch track_planning TPS decline rate s_lock CPU usage\n> >> - OFF 810509.4 standard 0.17% 98.8%(sys24.9%,user73.9%)\n> >> - ON 732823.1 -9.6% 1.94% 95.1%(sys22.8%,user72.3%)\n> >> + OFF 371035.0 -49.4% - 65.2%(sys20.6%,user44.6%)\n> >> + ON 193965.2 -47.7% - 41.8%(sys12.1%,user29.7%)\n> > # \"-\" is showing that s_lock was not reported from the perf.\n>\n> Ok, so my proposed patch degrated the performance in this case :(\n> This means that replacing spinlock with lwlock in pgss is not proper\n> approach for the lock contention issue on pgss...\n>\n> I proposed to split the spinlock for each pgss entry into two\n> to reduce the lock contention, upthread. One is for planner stats,\n> and the other is for executor stats. Is it worth working on\n> this approach as an alternative idea? Or does anyone have any better idea?\n\nFor now only calls and [min|max|mean|total]_time are split between\nplanning and execution, so we'd have to do the same for the rest of\nthe counters to be able to have 2 different spinlocks. 
That'll\nincrease the size of the struct quite a lot, and we'd also have to\nchange the SRF output, which is already quite wide.\n\n\n", "msg_date": "Fri, 11 Sep 2020 21:41:48 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On 2020-Sep-11, Fujii Masao wrote:\n\n> Ok, so my proposed patch degraded the performance in this case :(\n> This means that replacing spinlock with lwlock in pgss is not a proper\n> approach for the lock contention issue on pgss...\n> \n> I proposed to split the spinlock for each pgss entry into two\n> to reduce the lock contention, upthread. One is for planner stats,\n> and the other is for executor stats. Is it worth working on\n> this approach as an alternative idea? Or does anyone have any better idea?\n\nIt does seem that the excl-locked section in pgss_store is rather large.\n(I admit I don't understand why an LWLock would decrease performance.)\n\nAndres suggested in [1] to use atomics for the counters together with a\nsingle lwlock to be used in shared mode only. I didn't quite understand\nwhat the lwlock is *for*, but maybe you do.\n\n[1] https://postgr.es/m/20200629231015.qlej5b3qpfe4uijo@alap3.anarazel.de\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 11 Sep 2020 19:10:05 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "Hi,\n\nOn 2020-09-11 19:10:05 -0300, Alvaro Herrera wrote:\n> Andres suggested in [1] to use atomics for the counters together with a\n> single lwlock to be used in shared mode only. 
I didn't quite understand\n> what the lwlock is *for*, but maybe you do.\n> \n> [1] https://postgr.es/m/20200629231015.qlej5b3qpfe4uijo@alap3.anarazel.de\n\nJust to be clear - I am saying that in the first iteration I would just\nstraight up replace the spinlock with an lwlock, i.e. having many\nlwlocks.\n\nThe piece about a single shared lwlock is/was about protecting the set\nof entries that are currently in-memory - which can't easily be\nimplemented just using atomics (at least without the risk of increasing\nthe counters of an entry since replaced with another query).\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 11 Sep 2020 15:32:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On Fri, Sep 11, 2020 at 03:32:54PM -0700, Andres Freund wrote:\n> The piece about a single shared lwlock is/was about protecting the set\n> of entries that are currently in-memory - which can't easily be\n> implemented just using atomics (at least without the risk of increasing\n> the counters of an entry since replaced with another query).\n\nThis discussion has stalled, and the patch proposed is incorrect, so I\nhave marked it as RwF in the CF app.\n--\nMichael", "msg_date": "Wed, 30 Sep 2020 16:11:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "Reviewing this change which was committed last year as\n321fa6a4a26c9b649a0fbec9fc8b019f19e62289\n\nOn Fri, Jul 03, 2020 at 03:57:38PM +0900, Fujii Masao wrote:\n> On 2020/07/03 13:05, Pavel Stehule wrote:\n> > pá 3. 7. 2020 v 4:39 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com> napsal:\n> > \n> > Maybe there can be documented so enabling this option can have a negative impact on performance.\n> \n> Yes. 
What about adding either of the followings into the doc?\n> \n> Enabling this parameter may incur a noticeable performance penalty.\n> \n> or\n> \n> Enabling this parameter may incur a noticeable performance penalty,\n> especially when a fewer kinds of queries are executed on many\n> concurrent connections.\n\nSomething seems wrong with this sentence, and I'm not sure what it's trying\nto say. Is this right ?\n\n> Enabling this parameter may incur a noticeable performance penalty,\n> especially when a small number of queries are executed on many\n> concurrent connections.\n\n-- \nJustin", "msg_date": "Sun, 18 Apr 2021 18:36:15 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "\n\nOn 2021/04/19 8:36, Justin Pryzby wrote:\n> Reviewing this change which was committed last year as\n> 321fa6a4a26c9b649a0fbec9fc8b019f19e62289\n> \n> On Fri, Jul 03, 2020 at 03:57:38PM +0900, Fujii Masao wrote:\n>> On 2020/07/03 13:05, Pavel Stehule wrote:\n>>> pá 3. 7. 2020 v 4:39 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com> napsal:\n>>>\n>>> Maybe there can be documented so enabling this option can have a negative impact on performance.\n>>\n>> Yes. What about adding either of the followings into the doc?\n>>\n>> Enabling this parameter may incur a noticeable performance penalty.\n>>\n>> or\n>>\n>> Enabling this parameter may incur a noticeable performance penalty,\n>> especially when a fewer kinds of queries are executed on many\n>> concurrent connections.\n> \n> Something seems wrong with this sentence, and I'm not sure what it's trying\n> to say. Is this right ?\n\npg_stat_statements uses a different spinlock for each kind of query.\nSo the fewer kinds of queries many sessions execute, the fewer spinlocks\nthey try to acquire. This may lead to spinlock contention and\nsignificant performance degradation. 
This is what the statement is\ntrying to say.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 19 Apr 2021 23:44:05 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On Mon, Apr 19, 2021 at 11:44:05PM +0900, Fujii Masao wrote:\n> On 2021/04/19 8:36, Justin Pryzby wrote:\n> > Reviewing this change which was committed last year as\n> > 321fa6a4a26c9b649a0fbec9fc8b019f19e62289\n> > \n> > On Fri, Jul 03, 2020 at 03:57:38PM +0900, Fujii Masao wrote:\n> > > On 2020/07/03 13:05, Pavel Stehule wrote:\n> > > > pá 3. 7. 2020 v 4:39 odesílatel Fujii Masao <masao.fujii@oss.nttdata.com> napsal:\n> > > > \n> > > > Maybe there can be documented so enabling this option can have a negative impact on performance.\n> > > \n> > > Yes. What about adding either of the following into the doc?\n> > > \n> > > Enabling this parameter may incur a noticeable performance penalty.\n> > > \n> > > or\n> > > \n> > > Enabling this parameter may incur a noticeable performance penalty,\n> > > especially when a fewer kinds of queries are executed on many\n> > > concurrent connections.\n> > \n> > Something seems wrong with this sentence, and I'm not sure what it's trying\n> > to say. Is this right ?\n> \n> pg_stat_statements uses a different spinlock for each kind of query.\n> So the fewer kinds of queries many sessions execute, the fewer spinlocks\n> they try to acquire. This may lead to spinlock contention and\n> significant performance degradation. This is what the statement is\n> trying to say.\n\nWhat does \"kind\" mean ? 
I think it means a \"normalized\" query or a \"query\nstructure\".\n\n\"a fewer kinds\" is wrong, so I think the docs should say \"a small number of\nqueries\" or maybe:\n\n> > > Enabling this parameter may incur a noticeable performance penalty,\n> > > especially similar queries are run by many concurrent connections and\n> > > compete to update the same pg_stat_statements entry\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 19 Apr 2021 09:55:45 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "\n\nOn 2021/04/19 23:55, Justin Pryzby wrote:\n> What does \"kind\" mean ? I think it means a \"normalized\" query or a \"query\n> structure\".\n> \n> \"a fewer kinds\" is wrong, so I think the docs should say \"a small number of\n> queries\" or maybe:\n\nOkay, I agree to update the description.\n\n>>>> Enabling this parameter may incur a noticeable performance penalty,\n>>>> especially similar queries are run by many concurrent connections and\n>>>> compete to update the same pg_stat_statements entry\n\n\"a small number of\" is better than \"similar\" at the above because\n\"similar\" sounds a bit unclear in this case?\n\nIt's better to use \"entries\" rather than \"entry\" at the above?\n\nRegards,\n \n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 21 Apr 2021 23:38:52 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On Wed, Apr 21, 2021 at 11:38:52PM +0900, Fujii Masao wrote:\n> On 2021/04/19 23:55, Justin Pryzby wrote:\n> > What does \"kind\" mean ? 
I think it means a \"normalized\" query or a \"query\n> > structure\".\n> > \n> > \"a fewer kinds\" is wrong, so I think the docs should say \"a small number of\n> > queries\" or maybe:\n> \n> Okay, I agree to update the description.\n> \n> > > > > Enabling this parameter may incur a noticeable performance penalty,\n> > > > > especially similar queries are run by many concurrent connections and\n> > > > > compete to update the same pg_stat_statements entry\n> \n> \"a small number of\" is better than \"similar\" at the above because\n> \"similar\" sounds a bit unclear in this case?\n> \n> It's better to use \"entries\" rather than \"entry\" at the above?\n\nHow about like this?\n\n Enabling this parameter may incur a noticeable performance penalty,\n- especially when a fewer kinds of queries are executed on many\n+ especially when queries with the same queryid are executed by many\n concurrent connections.\n\nOr:\n\n Enabling this parameter may incur a noticeable performance penalty,\n especially similar queries are executed by many concurrent connections\n and compete to update a small number of pg_stat_statements entries.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 21 Apr 2021 09:53:38 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "\n\nOn 2021/04/21 23:53, Justin Pryzby wrote:\n> Or:\n> \n> Enabling this parameter may incur a noticeable performance penalty,\n> especially similar queries are executed by many concurrent connections\n> and compete to update a small number of pg_stat_statements entries.\n\nI prefer this. 
But what about using \"identical\" instead of \"similar\"\nbecause pg_stat_statements docs already uses \"identical\" in some places?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 22 Apr 2021 00:13:17 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On Thu, Apr 22, 2021 at 12:13:17AM +0900, Fujii Masao wrote:\n> On 2021/04/21 23:53, Justin Pryzby wrote:\n> > Or:\n> > \n> > Enabling this parameter may incur a noticeable performance penalty,\n> > especially similar queries are executed by many concurrent connections\n> > and compete to update a small number of pg_stat_statements entries.\n> \n> I prefer this. But what about using \"identical\" instead of \"similar\"\n> because pg_stat_statements docs already uses \"identical\" in some places?\n\nI also missed \"when\", again...\n\n> > Enabling this parameter may incur a noticeable performance penalty,\n> > especially when queries with identical structure are executed by many concurrent connections\n> > which compete to update a small number of pg_stat_statements entries.\n\n\n", "msg_date": "Wed, 21 Apr 2021 10:40:07 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On Wed, Apr 21, 2021 at 10:40:07AM -0500, Justin Pryzby wrote:\n> On Thu, Apr 22, 2021 at 12:13:17AM +0900, Fujii Masao wrote:\n> > On 2021/04/21 23:53, Justin Pryzby wrote:\n> > > Or:\n> > > \n> > > Enabling this parameter may incur a noticeable performance penalty,\n> > > especially similar queries are executed by many concurrent connections\n> > > and compete to update a small number of pg_stat_statements entries.\n> > \n> > I prefer this. 
But what about using \"identical\" instead of \"similar\"\n> > because pg_stat_statements docs already uses \"identical\" in some places?\n> \n> I also missed \"when\", again...\n> \n> > > Enabling this parameter may incur a noticeable performance penalty,\n> > > especially when queries with identical structure are executed by many concurrent connections\n> > > which compete to update a small number of pg_stat_statements entries.\n\nChecking back - here's the latest patch.\n\ndiff --git a/doc/src/sgml/pgstatstatements.sgml b/doc/src/sgml/pgstatstatements.sgml\nindex 930081c429..9e98472c5c 100644\n--- a/doc/src/sgml/pgstatstatements.sgml\n+++ b/doc/src/sgml/pgstatstatements.sgml\n@@ -696,8 +696,9 @@\n <varname>pg_stat_statements.track_planning</varname> controls whether\n planning operations and duration are tracked by the module.\n Enabling this parameter may incur a noticeable performance penalty,\n- especially when queries with the same queryid are executed on many\n- concurrent connections.\n+ especially when queries with identical structure are executed by many\n+ concurrent connections which compete to update a small number of\n+ pg_stat_statements entries.\n The default value is <literal>off</literal>.\n Only superusers can change this setting.\n </para>\n\n\n", "msg_date": "Mon, 28 Jun 2021 21:09:18 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On Tue, Jun 29, 2021 at 10:09 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> Checking back - here's the latest patch.\n>\n> diff --git a/doc/src/sgml/pgstatstatements.sgml b/doc/src/sgml/pgstatstatements.sgml\n> index 930081c429..9e98472c5c 100644\n> --- a/doc/src/sgml/pgstatstatements.sgml\n> +++ b/doc/src/sgml/pgstatstatements.sgml\n> @@ -696,8 +696,9 @@\n> <varname>pg_stat_statements.track_planning</varname> controls whether\n> planning operations and duration are tracked by the 
module.\n> Enabling this parameter may incur a noticeable performance penalty,\n> - especially when queries with the same queryid are executed on many\n> - concurrent connections.\n> + especially when queries with identical structure are executed by many\n> + concurrent connections which compete to update a small number of\n> + pg_stat_statements entries.\n> The default value is <literal>off</literal>.\n> Only superusers can change this setting.\n> </para>\n\nIs \"identical structure\" really accurate here? For instance a multi\ntenant application could rely on the search_path and only use\nunqualified relation name. So while they have queries with identical\nstructure, those will generate a large number of different query_id.\n\n\n", "msg_date": "Tue, 29 Jun 2021 10:29:43 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On Tue, Jun 29, 2021 at 10:29:43AM +0800, Julien Rouhaud wrote:\n> On Tue, Jun 29, 2021 at 10:09 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> Is \"identical structure\" really accurate here? For instance a multi\n> tenant application could rely on the search_path and only use\n> unqualified relation name. So while they have queries with identical\n> structure, those will generate a large number of different query_id.\n\nWe borrowed that language from the previous text:\n\n| Plannable queries (that is, SELECT, INSERT, UPDATE, and DELETE) are combined into a single pg_stat_statements entry whenever they have identical query structures according to an internal hash calculation\n\nNote that it continues to say:\n|In some cases, queries with visibly different texts might get merged into a single pg_stat_statements entry. Normally this will happen only for semantically equivalent queries, but there is a small chance of hash collisions causing unrelated queries to be merged into one entry. 
(This cannot happen for queries belonging to different users or databases, however.)\n|\n|Since the queryid hash value is computed on the post-parse-analysis representation of the queries, the opposite is also possible: queries with identical texts might appear as separate entries, if they have different meanings as a result of factors such as different search_path settings.\n\nReally, I'm only trying to fix where it currently says \"a fewer kinds\".\n\nIt looks like I'd sent the wrong diff (git diff with a previous patch applied).\n\nI think this is the latest proposal:\n\n Enabling this parameter may incur a noticeable performance penalty,\n- especially when a fewer kinds of queries are executed on many\n- concurrent connections.\n+ especially when queries with identical structure are executed by many \n+ concurrent connections which compete to update a small number of \n+ pg_stat_statements entries. \n\nIt could say \"identical structure\" or \"the same queryid\" or \"identical queryid\".\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 28 Jun 2021 21:45:35 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On Tue, Jun 29, 2021 at 10:45 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> We borrowed that language from the previous text:\n>\n> | Plannable queries (that is, SELECT, INSERT, UPDATE, and DELETE) are combined into a single pg_stat_statements entry whenever they have identical query structures according to an internal hash calculation\n\nYes, but here it's \"identical query structure\", which seems less\nambiguous than \"identical structure\" as I think one could think it\nrefers to internal representation as much as the query text. 
And\nit's also removing any doubt with the final \"internal hash\ncalculation\".\n\n> Really, I'm only trying to fix where it currently says \"a fewer kinds\".\n\nI agree that \"fewer kinds\" should be improved.\n\n> Enabling this parameter may incur a noticeable performance penalty,\n> - especially when a fewer kinds of queries are executed on many\n> - concurrent connections.\n> + especially when queries with identical structure are executed by many\n> + concurrent connections which compete to update a small number of\n> + pg_stat_statements entries.\n>\n> It could say \"identical structure\" or \"the same queryid\" or \"identical queryid\".\n\nI think we should try to reuse the previous formulation. How about\n\"statements with identical query structure\"? Or replace query\nstructure with \"internal representation\", in both places?\n\n\n", "msg_date": "Tue, 29 Jun 2021 23:12:36 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "\n\nOn 2021/06/30 0:12, Julien Rouhaud wrote:\n>> Enabling this parameter may incur a noticeable performance penalty,\n>> - especially when a fewer kinds of queries are executed on many\n>> - concurrent connections.\n>> + especially when queries with identical structure are executed by many\n>> + concurrent connections which compete to update a small number of\n>> + pg_stat_statements entries.\n>>\n>> It could say \"identical structure\" or \"the same queryid\" or \"identical queryid\".\n> \n> I think we should try to reuse the previous formulation. How about\n> \"statements with identical query structure\"?\n\nI'm fine with this. So what about the following diff? 
I added <structname> tag.\n\n <varname>pg_stat_statements.track_planning</varname> controls whether\n planning operations and duration are tracked by the module.\n Enabling this parameter may incur a noticeable performance penalty,\n- especially when a fewer kinds of queries are executed on many\n- concurrent connections.\n+ especially when statements with identical query structure are executed\n+ by many concurrent connections which compete to update a small number of\n+ <structname>pg_stat_statements</structname> entries.\n The default value is <literal>off</literal>.\n Only superusers can change this setting.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 1 Jul 2021 17:28:30 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On Thu, Jul 1, 2021 at 4:28 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> I'm fine with this. So what about the following diff? 
I added <structname> tag.\n>\n> <varname>pg_stat_statements.track_planning</varname> controls whether\n> planning operations and duration are tracked by the module.\n> Enabling this parameter may incur a noticeable performance penalty,\n> - especially when a fewer kinds of queries are executed on many\n> - concurrent connections.\n> + especially when statements with identical query structure are executed\n> + by many concurrent connections which compete to update a small number of\n> + <structname>pg_stat_statements</structname> entries.\n> The default value is <literal>off</literal>.\n> Only superusers can change this setting.\n\nIt seems perfect, thanks!\n\n\n", "msg_date": "Wed, 7 Jul 2021 17:09:21 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "\n\nOn 2021/07/07 18:09, Julien Rouhaud wrote:\n> On Thu, Jul 1, 2021 at 4:28 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> I'm fine with this. So what about the following diff? I added <structname> tag.\n>>\n>> <varname>pg_stat_statements.track_planning</varname> controls whether\n>> planning operations and duration are tracked by the module.\n>> Enabling this parameter may incur a noticeable performance penalty,\n>> - especially when a fewer kinds of queries are executed on many\n>> - concurrent connections.\n>> + especially when statements with identical query structure are executed\n>> + by many concurrent connections which compete to update a small number of\n>> + <structname>pg_stat_statements</structname> entries.\n>> The default value is <literal>off</literal>.\n>> Only superusers can change this setting.\n> \n> It seems perfect, thanks!\n\nPushed. 
Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 7 Jul 2021 21:57:37 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" }, { "msg_contents": "On Wed, Jul 7, 2021 at 8:57 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> Pushed. Thanks!\n\nThanks!\n\n\n", "msg_date": "Wed, 7 Jul 2021 21:05:33 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: track_planning causing performance regression" } ]
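The wording settled on in the thread above warns about the cost of pg_stat_statements.track_planning; as a quick illustration of the setting itself (my sketch, not from the thread — it assumes pg_stat_statements is already loaded via shared_preload_libraries and PostgreSQL 13 or later, where the planning columns exist):

```sql
-- Assumes shared_preload_libraries = 'pg_stat_statements' (restart required).
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Superuser-only setting, as the documentation text above notes.
SET pg_stat_statements.track_planning = on;

-- Once enabled, plans/total_plan_time accumulate alongside the execution
-- counters; the contention discussed above arises when many backends
-- update the same rows of this view concurrently:
SELECT query, plans, total_plan_time, calls, total_exec_time
FROM pg_stat_statements
ORDER BY total_plan_time DESC
LIMIT 5;
```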
[ { "msg_contents": "CREATE TABLE boom (a integer, b integer);\n\n-- index on whole-row expression\nCREATE UNIQUE INDEX ON boom ((boom));\n\nINSERT INTO boom VALUES\n (1, 2),\n (1, 3);\n\nALTER TABLE boom DROP b;\n\nTABLE boom;\n\n a \n---\n 1\n 1\n(2 rows)\n\nREINDEX TABLE boom;\nERROR: could not create unique index \"boom_boom_idx\"\nDETAIL: Key ((boom.*))=((1)) is duplicated.\n\nThe problem here is that there *is* a \"pg_depend\" entry for the\nindex, but it only depends on the whole table, not on specific columns.\n\nI have been thinking what would be the right approach to fix this:\n\n1. Don't fix it, because it is an artificial corner case.\n (But I can imagine someone trying to exclude duplicate rows with\n a unique index.)\n\n2. Add code that checks if there is an index with a whole-row reference\n in the definition before dropping a column.\n That feels like a wart for a special case.\n\n3. Forbid indexes on whole-row expressions.\n After all, you can do the same with an index on all the columns.\n That would open the question what to do about upgrading old databases\n that might have such indexes today.\n\n4. 
Add dependencies on all columns whenever a whole-row expression\n is used in an index.\n That would need special processing for pg_upgrade.\n\nI'd like to hear your opinions.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 29 Jun 2020 12:13:10 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Bug with indexes on whole-row expressions" }, { "msg_contents": "On Mon, Jun 29, 2020 at 3:43 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> CREATE TABLE boom (a integer, b integer);\n>\n> -- index on whole-row expression\n> CREATE UNIQUE INDEX ON boom ((boom));\n>\n> INSERT INTO boom VALUES\n> (1, 2),\n> (1, 3);\n>\n> ALTER TABLE boom DROP b;\n>\n> TABLE boom;\n>\n> a\n> ---\n> 1\n> 1\n> (2 rows)\n>\n> REINDEX TABLE boom;\n> ERROR: could not create unique index \"boom_boom_idx\"\n> DETAIL: Key ((boom.*))=((1)) is duplicated.\n>\n> The problem here is that there *is* a \"pg_depend\" entry for the\n> index, but it only depends on the whole table, not on specific columns.\n>\n> I have been thinking what would be the right approach to fix this:\n>\n> 1. Don't fix it, because it is an artificial corner case.\n> (But I can imagine someone trying to exclude duplicate rows with\n> a unique index.)\n>\n> 2. Add code that checks if there is an index with a whole-row reference\n> in the definition before dropping a column.\n> That feels like a wart for a special case.\n\nDo we need to do something about adding a new column or modifying an\nexisting one? Esp. if the latter changes the uniqueness of the row\nexpression.\n>\n> 3. Forbid indexes on whole-row expressions.\n> After all, you can do the same with an index on all the columns.\n> That would open the question what to do about upgrading old databases\n> that might have such indexes today.\n\nThis would be the best case. However, a whole row expression is not\nnecessarily the same as \"all columns\" at that moment. 
A whole row\nexpression means whatever columns there are at any given point in time.\nSo creating an index on a whole row expression is not the same as an\nindex on all the columns. However, I can't imagine a case where we\nwant an index on \"whole row expression\". An index containing all the\ncolumns would always suffice and will be deterministic as well.\n\nSo this option looks best.\n\n>\n> 4. Add dependencies on all columns whenever a whole-row expression\n> is used in an index.\n> That would need special processing for pg_upgrade.\n\nAgain, that dependency needs to be maintained as and when the columns\nare added and dropped.\n\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 13 Jul 2020 10:12:22 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bug with indexes on whole-row expressions" } ]
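A sketch of the alternative favoured under option 3 above, applied to the boom example (mine, not from the thread): indexing the columns explicitly records per-column dependencies in pg_depend, so, as I understand the dependency machinery, DROP COLUMN takes the index down with it instead of leaving behind an index whose definition has silently changed:

```sql
CREATE TABLE boom (a integer, b integer);

-- Index the columns explicitly instead of the whole-row value.
CREATE UNIQUE INDEX boom_a_b_idx ON boom (a, b);

INSERT INTO boom VALUES
    (1, 2),
    (1, 3);  -- both rows are distinct on (a, b), so both are accepted

-- pg_depend now records a dependency on column b, so dropping the column
-- also drops the index rather than leaving it in an inconsistent state:
ALTER TABLE boom DROP b;

-- No index remains to claim uniqueness, and REINDEX TABLE boom no longer
-- fails the way the whole-row version does.
```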
[ { "msg_contents": "If a database (a) has a default tablespace set,\n\nReproduction:\n\nCREATE TABLESPACE t LOCATION '/tmp/t';\nCREATE DATABASE dumb TABLESPACE t;\n\\c dumb\nSET temp_tablespaces=t;\n\nAt this point if you run a query with a parallel hash join in it, the\ntempfiles go in base/pgsql_tmp instead of the temporary tablespace. For\nexample:\n\ncreate table foo(bar int);\ninsert into foo select * from generate_series(1,1000000);\nset parallel_tuple_cost =0;\nset parallel_setup_cost =0;\nset log_temp_files=0;\nset client_min_messages ='log';\nexplain analyze select foo.bar,count(*) from foo inner join foo foo2 on\nfoo.bar=foo2.bar group by foo.bar;\n\nWill trigger some temp files in the 't' tablespace and some in the\n'pg_default' one.\n\nI think the fix is the attached one (tested on version 11 which is what\n$customer is using). To me it looks like this may have been a copy/paste\nerror all the way back in 98e8b480532 which added default_tablespace back\nin 2004. (And is in itself entirely unrelated to parallel hashjoin, but\nthat's where it got exposed at least in my case)\n\nThoughts?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Mon, 29 Jun 2020 17:02:43 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "Parallell hashjoin sometimes ignores temp_tablespaces" }, { "msg_contents": "> On 29 Jun 2020, at 17:02, Magnus Hagander <magnus@hagander.net> wrote:\n\n> I think the fix is the attached one (tested on version 11 which is what $customer is using). To me it looks like this may have been a copy/paste error all the way back in 98e8b480532 which added default_tablespace back in 2004. (And is in itself entirely unrelated to parallel hashjoin, but that's where it got exposed at least in my case)\n\nRunning through the repro and patch on HEAD I confirm that the attached fixes\nthe issue. 
+1 for the patch and a backpatch of it.\n\nIt would be nice to have a test covering test_tablespaces, but it seems a tad\ncumbersome to create a stable one.\n\ncheers ./daniel\n\n", "msg_date": "Mon, 29 Jun 2020 22:26:23 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Parallell hashjoin sometimes ignores temp_tablespaces" }, { "msg_contents": "On Mon, Jun 29, 2020 at 10:26 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 29 Jun 2020, at 17:02, Magnus Hagander <magnus@hagander.net> wrote:\n>\n> > I think the fix is the attached one (tested on version 11 which is what\n> $customer is using). To me it looks like this may have been a copy/paste\n> error all the way back in 98e8b480532 which added default_tablespace back\n> in 2004. (And is in itself entirely unrelated to parallel hashjoin, but\n> that's where it got exposed at least in my case)\n>\n> Running through the repro and patch on HEAD I confirm that the attached\n> fixes\n> the issue. +1 for the patch and a backpatch of it.\n>\n> It would be nice to have a test covering test_tablespaces, but it seems a\n> tad\n> cumbersome to create a stable one.\n>\n\nThanks. pushed!\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Fri, 3 Jul 2020 15:13:31 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "Re: Parallell hashjoin sometimes ignores temp_tablespaces" }, { "msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> Thanks. pushed!\n\nSorry for not having paid more attention earlier, but this patch is\nquite broken. If it weren't misguided it'd still be wrong, because\nthis isn't the only spot in PrepareTempTablespaces that inserts\nInvalidOid into the output list.\n\nBut, in fact, it's intentional that we represent the DB's default\ntablespace by InvalidOid in that list. Some callers of\nGetNextTempTableSpace need that to be the case, either for\npermissions-checking reasons or because they're going to store the\nresult into a temp table's pg_class.reltablespace, where that\nrepresentation is *required*.\n\nI see that this is perhaps underdocumented, since while\nGetNextTempTableSpace's comment mentions the behavior, there's\nno comment about it with the data structure proper.\n\nIt looks to me like the actual bug here is that whoever added\nGetTempTablespaces() and made sharedfileset.c depend on it\ndid not get the memo about what to do with InvalidOid.\nIt's possible that we could safely make GetTempTablespaces()\ndo the substitution, but that would be making fd.c assume more\nabout the usage of GetTempTablespaces() than I think it ought to.\nI feel like we oughta fix sharedfileset.c, instead.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Jul 2020 10:16:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Parallell hashjoin sometimes ignores temp_tablespaces" }, { "msg_contents": "On Fri, Jul 3, 2020 at 
4:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Magnus Hagander <magnus@hagander.net> writes:\n> > Thanks. pushed!\n>\n> Sorry for not having paid more attention earlier, but this patch is\n> quite broken. If it weren't misguided it'd still be wrong, because\n> this isn't the only spot in PrepareTempTablespaces that inserts\n> InvalidOid into the output list.\n>\n> But, in fact, it's intentional that we represent the DB's default\n> tablespace by InvalidOid in that list. Some callers of\n> GetNextTempTableSpace need that to be the case, either for\n> permissions-checking reasons or because they're going to store the\n> result into a temp table's pg_class.reltablespace, where that\n> representation is *required*.\n>\n\nHmm. I guess I must've been careless in checking other callers.\n\n\nI see that this is perhaps underdocumented, since while\n> GetNextTempTableSpace's comment mentions the behavior, there's\n> no comment about it with the data structure proper.\n>\n\nYeah, it could definitely do with that. It was too many steps of\nindirections away to me to pick that up.\n\n\nIt looks to me like the actual bug here is that whoever added\n> GetTempTablespaces() and made sharedfileset.c depend on it\n> did not get the memo about what to do with InvalidOid.\n> It's possible that we could safely make GetTempTablespaces()\n> do the substitution, but that would be making fd.c assume more\n> about the usage of GetTempTablespaces() than I think it ought to.\n> I feel like we oughta fix sharedfileset.c, instead.\n>\n\nThis seems to be in dc6c4c9dc2a -- adding Thomas.\n\nA quick look -- to do things right, we will need to know the database\ndefault tablespace in this case right? Which I guess isn't there because\nthe shared fileset isn't tied to a database. 
But perhaps it's as easy as\nsomething like the attached, just overwriting the oid?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Fri, 3 Jul 2020 17:03:46 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "Re: Parallell hashjoin sometimes ignores temp_tablespaces" }, { "msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> A quick look -- to do things right, we will need to know the database\n> default tablespace in this case right? Which I guess isn't there because\n> the shared fileset isn't tied to a database. But perhaps it's as easy as\n> something like the attached, just overwriting the oid?\n\nYeah, we just have to pick an appropriate place for making the\nsubstitution. I have no objection to doing it in SharedFileSetInit, as\nlong as we're sure it will only be consulted for placing temp files and\nnot relations.\n\nThe lack of documentation seems to be my fault, so I'm willing to pick\nthis up unless somebody else wants it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Jul 2020 12:12:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Parallell hashjoin sometimes ignores temp_tablespaces" }, { "msg_contents": "On Fri, Jul 3, 2020 at 6:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Magnus Hagander <magnus@hagander.net> writes:\n> > A quick look -- to do things right, we will need to know the database\n> > default tablespace in this case right? Which I guess isn't there because\n> > the shared fileset isn't tied to a database. But perhaps it's as easy as\n> > something like the attached, just overwriting the oid?\n>\n> Yeah, we just have to pick an appropriate place for making the\n> substitution. 
I have no objection to doing it in SharedFileSetInit, as\n> long as we're sure it will only be consulted for placing temp files and\n> not relations.\n>\n\nIt doesn't *now*, and I'm pretty sure it can't be in the future the way it\nis now (a parallel worker can't be creating relations). But it is probably\na good idea to add a comment indicating this as well...\n\n\n>\n> The lack of documentation seems to be my fault, so I'm willing to pick\n> this up unless somebody else wants it.\n>\n\nIf the comments I included in that patch are enough, I can just commit\nthose along with it. Otherwise, please do :)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Fri, 3 Jul 2020 18:17:59 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "Re: Parallell hashjoin sometimes ignores temp_tablespaces" }, { "msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Fri, Jul 3, 2020 at 6:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The lack of documentation seems to be my fault, so I'm willing to pick\n>> this up unless somebody else wants it.\n\n> If the comments I included in that patch are enough, I can just commit\n> those along with it. 
Otherwise, please do :)\n>\n> Being once burned, I had something more like the attached in mind.\n>\n\nThat's a bit more elaborate and yes, I agree, better.\n\n\nBTW, looking at this, I'm kind of wondering about the other code path\n> in SharedFileSetInit:\n>\n> if (fileset->ntablespaces == 0)\n> {\n> fileset->tablespaces[0] = DEFAULTTABLESPACE_OID;\n> fileset->ntablespaces = 1;\n> }\n>\n> Shouldn't that be inserting MyDatabaseTableSpace? I see no other places\n> anywhere that are forcing temp stuff into pg_default like this.\n>\n\nYeah, looking at it again, I think it should. I can't see any reason why it\nshould enforce pg_default.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Fri, 3 Jul 2020 20:54:59 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "Re: Parallell hashjoin sometimes ignores temp_tablespaces" },
{ "msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Fri, Jul 3, 2020 at 7:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Shouldn't that be inserting MyDatabaseTableSpace? I see no other places\n>> anywhere that are forcing temp stuff into pg_default like this.\n\n> Yeah, looking at it again, I think it should. I can't see any reason why it\n> should enforce pg_default.\n\nOK, pushed with that additional correction.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Jul 2020 17:02:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Parallell hashjoin sometimes ignores temp_tablespaces" },
{ "msg_contents": "On Fri, Jul 3, 2020 at 11:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Magnus Hagander <magnus@hagander.net> writes:\n> > On Fri, Jul 3, 2020 at 7:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Shouldn't that be inserting MyDatabaseTableSpace? I see no other places\n> >> anywhere that are forcing temp stuff into pg_default like this.\n>\n> > Yeah, looking at it again, I think it should. I can't see any reason why\n> it\n> > should enforce pg_default.\n>\n> OK, pushed with that additional correction.\n>\n\nLGTM, thanks!\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Sun, 5 Jul 2020 13:28:38 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": true, "msg_subject": "Re: Parallell hashjoin sometimes ignores temp_tablespaces" } ]
[ { "msg_contents": "[Resent on hackers for CF registration, sorry for the noise]\n\nHello Tom,\n\nThe attached patch fixes some of the underlying problems reported by delaying \nthe :var to $1 substitution to the last possible moments, so that what \nvariables are actually defined is known. PREPARE-ing is also delayed to after \nthese substitutions are done.\n\nIt requires a mutex around the commands, I tried to do some windows \nimplementation which may or may not work.\n\nThe attached patch fixes (2) & (3) for extended & prepared.\n\nI have a doubt about fixing (1) because it would be a significant behavioral \nchange and it requires changing the replace routine significantly to check for \nquoting, comments, and so on. This means that currently ':var' is still broken \nunder -M extended & prepared, I could only break it differently by providing a \nnicer error message and also break it under simple whereas it currently works \nthere. I'm not thrilled by spending efforts to do that.\n\nThe patches change the name of \"parseQuery\" to \"makeVariablesParameters\", \nbecause it was not actually parsing any query. Maybe the new name could be \nimproved.\n\nIn passing, there was a bug in how NULL was passed, which I tried to fix\nas well.\n\n>>> I don't often do much with pgbench and variables, but there are a few\n>>> things that surprise me here.\n>>> 1) That pgbench replaces variables within single quotes, and;\n>>> 2) that we still think it's a variable name when it starts with a digit, \n>>> and;\n>>> 3) We replace variables that are undefined.\n> \n>> Also (4) this only happens when in non-simple query mode --- the\n>> example works fine without \"-M prepared\".\n> \n> After looking around in the code, it seems like the core of the issue\n> is that pgbench.c's parseQuery() doesn't check whether a possible\n> variable name is actually defined, unlike assignVariables() which is\n> what does the same job in simple query mode. 
So that explains the\n> behavioral difference.\n\nYes.\n\n> The reason for doing that probably was that parseQuery() is run when\n> the input file is read, so that relevant variables might not be set\n> yet. We could fix that by postponing the work to be done at first\n> execution of the query, as is already the case for PQprepare'ing the\n> query.\n\nYep, done at first execution of the Command, so that variables are known.\n\n> Also, after further thought I realize that (1) absolutely is a bug\n> in the non-simple query modes, whatever you think about it in simple\n> mode. The non-simple modes are trying to pass the variable values\n> as extended-query-protocol parameters, and the backend is not going\n> to recognize $n inside a literal as being a parameter.\n\nYep. See my comments above.\n\n> If we fixed (1) and (3) I think there wouldn't be any great need\n> to tighten up (2).\n\nI did (2) but not (1), for now.\n\n-- \nFabien.", "msg_date": "Mon, 29 Jun 2020 17:43:01 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: pgbench and timestamps (bounced)" }, { "msg_contents": "Attached v2 fixes some errors, per cfbot.\n\n-- \nFabien.", "msg_date": "Thu, 9 Jul 2020 08:41:22 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: pgbench and timestamps (bounced)" }, { "msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> [Resent on hackers for CF registration, sorry for the noise]\n\nFor the record, the original thread is at\n\nhttps://www.postgresql.org/message-id/flat/CAKVUGgQaZVAUi1Ex41H4wrru%3DFU%2BMfwgjG0aM1br6st7sz31Vw%40mail.gmail.com\n\n(I tried but failed to attach that thread to the CF entry, so we'll\nhave to settle for leaving a breadcrumb in this thread.)\n\n> It requires a mutex around the commands, I tried to do some windows \n> implementation which may or may not work.\n\nUgh, I'd really rather not do that. 
Even disregarding the effects\nof a mutex, though, my initial idea for fixing this has a big problem:\nif we postpone PREPARE of the query until first execution, then it's\nhappening during timed execution of the benchmark scenario and thus\ndistorting the timing figures. (Maybe if we'd always done it like\nthat, it'd be okay, but I'm quite against changing the behavior now\nthat it's stood for a long time.)\n\nHowever, perhaps there's more than one way to fix this. Once we've\nscanned all of the script and seen all the \\set commands, we know\n(in principle) the set of all variable names that are in use.\nSo maybe we could fix this by\n\n(1) During the initial scan of the script, make variable-table\nentries for every \\set argument, with the values shown as undefined\nfor the moment. Do not try to parse SQL commands in this scan,\njust collect them.\n\n(2) Make another scan in which we identify variable references\nin the SQL commands and issue PREPAREs (if enabled).\n\n(3) Perform the timed run.\n\nThis avoids any impact of this bug fix on the semantics or timing\nof the benchmark proper. I'm not sure offhand whether this\napproach makes any difference for the concerns you had about\nidentifying/suppressing variable references inside quotes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Sep 2020 20:31:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgbench and timestamps (bounced)" }, { "msg_contents": "\nHello Tom,\n\n>> It requires a mutex around the commands, I tried to do some windows\n>> implementation which may or may not work.\n>\n> Ugh, I'd really rather not do that. Even disregarding the effects\n> of a mutex, though, my initial idea for fixing this has a big problem:\n> if we postpone PREPARE of the query until first execution, then it's\n> happening during timed execution of the benchmark scenario and thus\n> distorting the timing figures. 
(Maybe if we'd always done it like\n> that, it'd be okay, but I'm quite against changing the behavior now\n> that it's stood for a long time.)\n\nHmmm.\n\nPrepare is done *once* per client, ISTM that the impact on any \nstatistically significant benchmark is nul in practice, or it would mean \nthat the benchmark settings are too low.\n\nSecond, the mutex is only used when absolutely necessary, only for the \nsubstitution part of the query (replacing :stuff by ?), because scripts \nare shared between threads. This is just once, in an unlikely case \noccuring at the beginning.\n\n> However, perhaps there's more than one way to fix this. Once we've\n> scanned all of the script and seen all the \\set commands, we know\n> (in principle) the set of all variable names that are in use.\n> So maybe we could fix this by\n>\n> (1) During the initial scan of the script, make variable-table\n> entries for every \\set argument, with the values shown as undefined\n> for the moment. Do not try to parse SQL commands in this scan,\n> just collect them.\n\nThe issue with this approach is\n\n SELECT 1 AS one \\gset pref_\n\nwhich will generate a \"pref_one\" variable, and these names cannot be \nguessed without SQL parsing and possibly execution. That is why the\npreparation is delayed to when the variables are actually known.\n\n> (2) Make another scan in which we identify variable references\n> in the SQL commands and issue PREPAREs (if enabled).\n\n> (3) Perform the timed run.\n>\n> This avoids any impact of this bug fix on the semantics or timing\n> of the benchmark proper. 
I'm not sure offhand whether this\n> approach makes any difference for the concerns you had about\n> identifying/suppressing variable references inside quotes.\n\nI do not think this plan is workable, because of the \\gset issue.\n\nI do not see that the conditional mutex and delayed PREPARE would have any \nsignificant (measurable) impact on an actual (reasonable) benchmark run.\n\nA workable solution would be that each client actually execute each script \nonce before starting the actual benchmark. It would still need a mutex and \nalso a sync barrier (which I'm proposing in some other thread). However \nthis may raise some other issues because then some operations would be \ntrigger out of the benchmarking run, which may or may not be desirable.\n\nSo I'm not to keen to go that way, and I think the proposed solution is \nreasonable from a benchmarking point of view as the impact is minimal, \nalthough not zero.\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 11 Sep 2020 15:59:12 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: pgbench and timestamps (bounced)" }, { "msg_contents": "On 11.09.2020 16:59, Fabien COELHO wrote:\n>\n> Hello Tom,\n>\n>>> It requires a mutex around the commands, I tried to do some windows\n>>> implementation which may or may not work.\n>>\n>> Ugh, I'd really rather not do that.  Even disregarding the effects\n>> of a mutex, though, my initial idea for fixing this has a big problem:\n>> if we postpone PREPARE of the query until first execution, then it's\n>> happening during timed execution of the benchmark scenario and thus\n>> distorting the timing figures.  
(Maybe if we'd always done it like\n>> that, it'd be okay, but I'm quite against changing the behavior now\n>> that it's stood for a long time.)\n>\n> Hmmm.\n>\n> Prepare is done *once* per client, ISTM that the impact on any \n> statistically significant benchmark is nul in practice, or it would \n> mean that the benchmark settings are too low.\n>\n> Second, the mutex is only used when absolutely necessary, only for the \n> substitution part of the query (replacing :stuff by ?), because \n> scripts are shared between threads. This is just once, in an unlikely \n> case occuring at the beginning.\n>\n>> However, perhaps there's more than one way to fix this.  Once we've\n>> scanned all of the script and seen all the \\set commands, we know\n>> (in principle) the set of all variable names that are in use.\n>> So maybe we could fix this by\n>>\n>> (1) During the initial scan of the script, make variable-table\n>> entries for every \\set argument, with the values shown as undefined\n>> for the moment.  Do not try to parse SQL commands in this scan,\n>> just collect them.\n>\n> The issue with this approach is\n>\n>   SELECT 1 AS one \\gset pref_\n>\n> which will generate a \"pref_one\" variable, and these names cannot be \n> guessed without SQL parsing and possibly execution. That is why the\n> preparation is delayed to when the variables are actually known.\n>\n>> (2) Make another scan in which we identify variable references\n>> in the SQL commands and issue PREPAREs (if enabled).\n>\n>> (3) Perform the timed run.\n>>\n>> This avoids any impact of this bug fix on the semantics or timing\n>> of the benchmark proper.  
I'm not sure offhand whether this\n>> approach makes any difference for the concerns you had about\n>> identifying/suppressing variable references inside quotes.\n>\n> I do not think this plan is workable, because of the \\gset issue.\n>\n> I do not see that the conditional mutex and delayed PREPARE would have \n> any significant (measurable) impact on an actual (reasonable) \n> benchmark run.\n>\n> A workable solution would be that each client actually execute each \n> script once before starting the actual benchmark. It would still need \n> a mutex and also a sync barrier (which I'm proposing in some other \n> thread). However this may raise some other issues because then some \n> operations would be trigger out of the benchmarking run, which may or \n> may not be desirable.\n>\n> So I'm not to keen to go that way, and I think the proposed solution \n> is reasonable from a benchmarking point of view as the impact is \n> minimal, although not zero.\n>\nCFM reminder.\n\nHi, this entry is \"Waiting on Author\" and the thread was inactive for a \nwhile. I see this discussion still has some open questions. Are you \ngoing to continue working on it, or should I mark it as \"returned with \nfeedback\" until a better time?\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Tue, 24 Nov 2020 15:11:53 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: pgbench and timestamps (bounced)" }, { "msg_contents": "\n> CFM reminder.\n>\n> Hi, this entry is \"Waiting on Author\" and the thread was inactive for a \n> while. I see this discussion still has some open questions. 
Are you \n> going to continue working on it, or should I mark it as \"returned with \n> feedback\" until a better time?\n\nIMHO the proposed fix is reasonable and addresses the issue.\n\nI have responded to Tom's remarks about it, and it is waiting for his \nanswer to my counter arguments. So ISTM that the patch is currently still \nwaiting for some feedback.\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 27 Nov 2020 00:08:26 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: pgbench and timestamps (bounced)" }, { "msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> Hi, this entry is \"Waiting on Author\" and the thread was inactive for a \n>> while. I see this discussion still has some open questions. Are you \n>> going to continue working on it, or should I mark it as \"returned with \n>> feedback\" until a better time?\n\n> IMHO the proposed fix is reasonable and addresses the issue.\n> I have responded to Tom's remarks about it, and it is waiting for his \n> answer to my counter arguments. So ISTM that the patch is currently still \n> waiting for some feedback.\n\nIt looks like my unhappiness with injecting a pthread dependency into\npgbench is going to be overtaken by events in the \"option delaying\nqueries\" thread [1]. However, by the same token there are some conflicts\nbetween the two patchsets, and also I prefer the other thread's approach\nto portability (i.e. do it honestly, not with a private portability layer\nin pgbench.c). So I'm inclined to put the parts of this patch that\nrequire pthreads on hold till that lands.\n\nWhat remains that we could do now, and perhaps back-patch, is point (2)\ni.e. 
disallow digits as the first character of a pgbench variable name.\nThat would be enough to \"solve\" the original bug report, and it does seem\nlike it could be back-patched, while we're certainly not going to risk\nback-patching anything as portability-fraught as introducing mutexes.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/20200227180100.zyvjwzcpiokfsqm2%40alap3.anarazel.de\n\n\n", "msg_date": "Tue, 12 Jan 2021 17:37:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgbench and timestamps (bounced)" }, { "msg_contents": "\nHello Tom,\n\n>>> Hi, this entry is \"Waiting on Author\" and the thread was inactive for a\n>>> while. I see this discussion still has some open questions. Are you\n>>> going to continue working on it, or should I mark it as \"returned with\n>>> feedback\" until a better time?\n>\n>> IMHO the proposed fix is reasonable and addresses the issue.\n>> I have responded to Tom's remarks about it, and it is waiting for his\n>> answer to my counter arguments. So ISTM that the patch is currently still\n>> waiting for some feedback.\n>\n> It looks like my unhappiness with injecting a pthread dependency into\n> pgbench is going to be overtaken by events in the \"option delaying\n> queries\" thread [1]. However, by the same token there are some conflicts\n> between the two patchsets, and also I prefer the other thread's approach\n> to portability (i.e. do it honestly, not with a private portability layer\n> in pgbench.c). So I'm inclined to put the parts of this patch that\n> require pthreads on hold till that lands.\n\nOk. This is fair enough. Portability is a pain thanks to Windows vs MacOS \nvs Linux approaches of implementing or not a standard.\n\n> What remains that we could do now, and perhaps back-patch, is point (2)\n> i.e. 
disallow digits as the first character of a pgbench variable name.\n\nI'm fine with that.\n\n> That would be enough to \"solve\" the original bug report, and it does seem\n> like it could be back-patched, while we're certainly not going to risk\n> back-patching anything as portability-fraught as introducing mutexes.\n\nSure.\n\nI'm unable to do much pg work at the moment, but this should be eased \nquite soon.\n\n> [1] https://www.postgresql.org/message-id/flat/20200227180100.zyvjwzcpiokfsqm2%40alap3.anarazel.de\n\n-- \nFabien Coelho - CRI, MINES ParisTech\n\n\n", "msg_date": "Wed, 13 Jan 2021 16:53:09 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: pgbench and timestamps (bounced)" }, { "msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> What remains that we could do now, and perhaps back-patch, is point (2)\n>> i.e. disallow digits as the first character of a pgbench variable name.\n\n> I'm fine with that.\n\n>> That would be enough to \"solve\" the original bug report, and it does seem\n>> like it could be back-patched, while we're certainly not going to risk\n>> back-patching anything as portability-fraught as introducing mutexes.\n\n> Sure.\n\nOK. I've pushed a patch that just does that much, and marked the\ncommitfest entry closed. After the other thing lands, please rebase\nand resubmit what remains of this patch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 13 Jan 2021 14:55:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgbench and timestamps (bounced)" } ]
[ { "msg_contents": "Hi,\n\nThis patch removes two temporary files that are not removed. In\nDebian, repeated builds fail. We do not allow builds from modified\nsources.\n\nThe first file was clearly an oversight. It was created separately. I\nam not sure why the loop over @keys did not remove the second.\n\nFor the record, the error message from Debian is below. Lines 25-27\nshow the files that were left behind.\n\nKind regards,\nFelix Lechner\n\nlechner@4bba56c5a8a8:~/postgresql$ debuild\n dpkg-buildpackage -us -uc -ui\ndpkg-buildpackage: info: source package postgresql-13\ndpkg-buildpackage: info: source version 13~beta2-1\ndpkg-buildpackage: info: source distribution experimental\ndpkg-buildpackage: info: source changed by Christoph Berg <myon@debian.org>\n dpkg-source --before-build .\ndpkg-buildpackage: info: host architecture amd64\n fakeroot debian/rules clean\ndh clean\n debian/rules override_dh_auto_clean\nmake[1]: Entering directory '/home/lechner/postgresql'\nrm -rf build\nmake[1]: Leaving directory '/home/lechner/postgresql'\n dh_autoreconf_clean\n dh_clean\n dpkg-source -b .\ndpkg-source: info: using source format '3.0 (quilt)'\ndpkg-source: info: building postgresql-13 using existing\n./postgresql-13_13~beta2.orig.tar.bz2\ndpkg-source: info: using patch list from debian/patches/series\ndpkg-source: warning: ignoring deletion of file configure, use\n--include-removal to override\ndpkg-source: warning: ignoring deletion of file\nsrc/include/pg_config.h.in, use --include-removal to override\ndpkg-source: warning: ignoring deletion of file\ndoc/src/sgml/man-stamp, use --include-removal to override\ndpkg-source: warning: ignoring deletion of file\ndoc/src/sgml/html-stamp, use --include-removal to override\ndpkg-source: info: local changes detected, the modified files are:\n postgresql/src/test/ssl/ssl/client_tmp.key\n postgresql/src/test/ssl/ssl/client_wrongperms_tmp.key\ndpkg-source: error: aborting due to unexpected upstream changes, 
see\n/tmp/postgresql-13_13~beta2-1.diff.gy3ajb\ndpkg-source: info: you can integrate the local changes with dpkg-source --commit\ndpkg-buildpackage: error: dpkg-source -b . subprocess returned exit status 2\ndebuild: fatal error at line 1182:\ndpkg-buildpackage -us -uc -ui failed", "msg_date": "Mon, 29 Jun 2020 08:52:09 -0700", "msg_from": "Felix Lechner <felix.lechner@lease-up.com>", "msg_from_op": true, "msg_subject": "[PATCH] Better cleanup in TLS tests for -13beta2" },
{ "msg_contents": "> On 29 Jun 2020, at 17:52, Felix Lechner <felix.lechner@lease-up.com> wrote:\n\n> This patch removes two temporary files that are not removed. In\n> Debian, repeated builds fail. We do not allow builds from modified\n> sources.\n\nAha, nice catch!\n\n> The first file was clearly an oversight. It was created separately. I\n> am not sure why the loop over @keys did not remove the second.\n\nThat's because it's created in the 002_scram.pl testsuite as well, but not\ncleaned up there.\n\nThe proposed patch admittedly seems a bit like a hack, and the client_tmp.key\nhandling is wrong as mentioned above. I propose that we instead add the key to\nthe @keys array and have clean up handle all files uniformly. The attached\nfixes both of the files.\n\ncheers ./daniel", "msg_date": "Mon, 29 Jun 2020 21:02:44 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Better cleanup in TLS tests for -13beta2" },
{ "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> The proposed patch admittedly seems a bit like a hack, and the client_tmp.key\n> handling is wrong as mentioned above. I propose that we instead add the key to\n> the @keys array and have clean up handle all files uniformly. The attached\n> fixes both of the files.\n\nHmm ... so I guess my reaction to both of these is \"what guarantees\nthat we get to the part of the script that does the unlinks?\".\nI've certainly seen lots of TAP tests fail to complete.
Could we\ndo the cleanup in an END block or the like? (I'm a poor enough\nPerl programmer to be uncertain what's the best way, but I know\nPerl has constructs like that.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Jun 2020 15:27:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Better cleanup in TLS tests for -13beta2" }, { "msg_contents": "> On 29 Jun 2020, at 21:27, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Hmm ... so I guess my reaction to both of these is \"what guarantees\n> that we get to the part of the script that does the unlinks?\".\n> I've certainly seen lots of TAP tests fail to complete. Could we\n> do the cleanup in an END block or the like? (I'm a poor enough\n> Perl programmer to be uncertain what's the best way, but I know\n> Perl has constructs like that.)\n\nIf execution calls die() during testing, then we wont reach the clean up\nportion at the end but we would if we did it as part of END which is (unless my\nmemory is too fogged) guaranteed to be the last code to run before the\ninterpreter exits.\n\nThat being said, we do retain temporary files on such failures on purpose in\nour TestLib since 88802e068017bee8cea7a5502a712794e761c7b5 and a few follow-up\ncommits since, should these be handled differently? They are admittedly less\n\"unknown\" as compared to other files as they are copies, but famous last words\nhave been spoken about bugs that can never happen.\n\ncheers ./daniel\n\n", "msg_date": "Mon, 29 Jun 2020 21:37:54 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Better cleanup in TLS tests for -13beta2" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 29 Jun 2020, at 21:27, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hmm ... 
so I guess my reaction to both of these is \"what guarantees\n>> that we get to the part of the script that does the unlinks?\".\n\n> That being said, we do retain temporary files on such failures on purpose in\n> our TestLib since 88802e068017bee8cea7a5502a712794e761c7b5 and a few follow-up\n> commits since, should these be handled differently? They are admittedly less\n> \"unknown\" as compared to other files as they are copies, but famous last words\n> have been spoken about bugs that can never happen.\n\nOh, good point. Objection withdrawn.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Jun 2020 15:51:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Better cleanup in TLS tests for -13beta2" }, { "msg_contents": "On Mon, Jun 29, 2020 at 03:51:48PM -0400, Tom Lane wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> That being said, we do retain temporary files on such failures on purpose in\n>> our TestLib since 88802e068017bee8cea7a5502a712794e761c7b5 and a few follow-up\n>> commits since, should these be handled differently? They are admittedly less\n>> \"unknown\" as compared to other files as they are copies, but famous last words\n>> have been spoken about bugs that can never happen.\n> \n> Oh, good point. Objection withdrawn.\n\nI looked at the patch, and can confirm that client_wrongperms_tmp.key\nremains around after running 001_ssltests.pl, and client_tmp.key after\nrunning 002_scram.pl. 
The way the patch does its cleanup looks fine\nto me, so I'll apply and backpatch where necessary, if there are no\nobjections of course.\n--\nMichael", "msg_date": "Tue, 30 Jun 2020 13:13:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Better cleanup in TLS tests for -13beta2" }, { "msg_contents": "On Tue, Jun 30, 2020 at 01:13:39PM +0900, Michael Paquier wrote:\n> I looked at the patch, and can confirm that client_wrongperms_tmp.key\n> remains around after running 001_ssltests.pl, and client_tmp.key after\n> running 002_scram.pl. The way the patch does its cleanup looks fine\n> to me, so I'll apply and backpatch where necessary, if there are no\n> objections of course.\n\nI found one problem when testing with parallel jobs once we apply this\npatch (say PROVE_FLAGS=\"-j 4\"): the tests of 001 and 002 had the idea\nto use the same file name client_tmp.key, so it was possible to easily\nfail the tests if for example 002 removes the temporary client key\ncopy that 001 needs, or vice-versa. 001 takes longer than 002, so the\nremoval would likely be done by the latter, not the former. And it\nwas even logically possible to fail in the case where 001 removes the\nfile and 002 needs it, though very unlikely because 002 needs this\nfile for a very short amount of time and one test case. I have fixed\nthis issue by just making 002 use a different file name, as we do in\n001 for the case of the wrong permissions, and applied the patch down\nto 13.\n--\nMichael", "msg_date": "Wed, 1 Jul 2020 10:52:29 +0900", "msg_from": "michael@paquier.xyz", "msg_from_op": false, "msg_subject": "Re: [PATCH] Better cleanup in TLS tests for -13beta2" } ]
[ { "msg_contents": "Hi,\n\nI noticed the incremental sort code makes use of the long datatype a\nfew times, e.g in TuplesortInstrumentation and\nIncrementalSortGroupInfo. (64-bit windows machines have sizeof(long)\n== 4). I understand that the values are in kilobytes and it would\ntake 2TB to cause them to wrap. Never-the-less, I think it would be\nbetter to choose a better-suited type. work_mem is still limited to\n2GB on 64-bit Windows machines, so perhaps there's some argument that\nit does not matter about fields that related to in-memory stuff, but\nthe on-disk fields are wrong. The in-memory fields likely raise the\nbar further for fixing the 2GB work_mem limit on Windows.\n\nMaybe Size would be better for the in-memory fields and uint64 for the\non-disk fields?\n\nDavid\n\n\n", "msg_date": "Tue, 30 Jun 2020 16:13:01 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Use of \"long\" in incremental sort code" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I noticed the incremental sort code makes use of the long datatype a\n> few times, e.g in TuplesortInstrumentation and\n> IncrementalSortGroupInfo. (64-bit windows machines have sizeof(long)\n> == 4). I understand that the values are in kilobytes and it would\n> take 2TB to cause them to wrap. Never-the-less, I think it would be\n> better to choose a better-suited type. work_mem is still limited to\n> 2GB on 64-bit Windows machines, so perhaps there's some argument that\n> it does not matter about fields that related to in-memory stuff, but\n> the on-disk fields are wrong. 
The in-memory fields likely raise the\n> bar further for fixing the 2GB work_mem limit on Windows.\n\n> Maybe Size would be better for the in-memory fields and uint64 for the\n> on-disk fields?\n\nThere is a fairly widespread issue that memory-size-related GUCs and\nsuchlike variables are limited to represent sizes that fit in a \"long\".\nAlthough Win64 is the *only* platform where that's an issue, maybe\nit's worth doing something about. But we shouldn't just fix the sort\ncode, if we do do something.\n\n(IOW, I don't agree with doing a fix that doesn't also fix work_mem.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Jun 2020 00:20:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Use of \"long\" in incremental sort code" }, { "msg_contents": "On Tue, 30 Jun 2020 at 16:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> There is a fairly widespread issue that memory-size-related GUCs and\n> suchlike variables are limited to represent sizes that fit in a \"long\".\n> Although Win64 is the *only* platform where that's an issue, maybe\n> it's worth doing something about. 
But we shouldn't just fix the sort\n> code, if we do do something.\n>\n> (IOW, I don't agree with doing a fix that doesn't also fix work_mem.)\n\nI raised it mostly because this new-to-PG13-code is making the problem worse.\n\nIf we're not going to change the in-memory fields, then shouldn't we\nat least change the ones for disk space tracking?\n\nDavid\n\n\n", "msg_date": "Tue, 30 Jun 2020 16:24:00 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use of \"long\" in incremental sort code" }, { "msg_contents": "On 2020-06-30 06:24, David Rowley wrote:\n> On Tue, 30 Jun 2020 at 16:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> There is a fairly widespread issue that memory-size-related GUCs and\n>> suchlike variables are limited to represent sizes that fit in a \"long\".\n>> Although Win64 is the *only* platform where that's an issue, maybe\n>> it's worth doing something about. But we shouldn't just fix the sort\n>> code, if we do do something.\n>>\n>> (IOW, I don't agree with doing a fix that doesn't also fix work_mem.)\n> \n> I raised it mostly because this new-to-PG13-code is making the problem worse.\n\nYeah, we recently got rid of a bunch of inappropriate use of long, so it \nseems reasonable to make this new code follow that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 30 Jun 2020 13:21:37 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Use of \"long\" in incremental sort code" }, { "msg_contents": "On Tue, Jun 30, 2020 at 7:21 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-06-30 06:24, David Rowley wrote:\n> > On Tue, 30 Jun 2020 at 16:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> There is a fairly widespread issue that memory-size-related GUCs and\n> >> suchlike variables are limited to represent sizes that fit 
in a \"long\".\n> >> Although Win64 is the *only* platform where that's an issue, maybe\n> >> it's worth doing something about. But we shouldn't just fix the sort\n> >> code, if we do do something.\n> >>\n> >> (IOW, I don't agree with doing a fix that doesn't also fix work_mem.)\n> >\n> > I raised it mostly because this new-to-PG13-code is making the problem worse.\n>\n> Yeah, we recently got rid of a bunch of inappropriate use of long, so it\n> seems reasonable to make this new code follow that.\n\nI've attached a patch to make this change but with one tweak: I\ndecided to use unint64 for both memory and disk (rather than Size in\nsome cases) since we aggregated across multiple runs and have shared\ncode that deals with both values.\n\nJames", "msg_date": "Thu, 2 Jul 2020 12:01:21 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use of \"long\" in incremental sort code" }, { "msg_contents": "On Mon, Jun 29, 2020 at 9:13 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I noticed the incremental sort code makes use of the long datatype a\n> few times, e.g in TuplesortInstrumentation and\n> IncrementalSortGroupInfo.\n\nI agree that long is terrible, and should generally be avoided.\n\n> Maybe Size would be better for the in-memory fields and uint64 for the\n> on-disk fields?\n\nFWIW we have to use int64 for the in-memory tuplesort.c fields. This\nis because it must be possible for the fields to have negative values\nin the context of tuplesort. If there is going to be a general rule\nfor in-memory fields, then ISTM that it'll have to be \"use int64\".\n\nlogtape.c uses long for on-disk fields. 
It also relies on negative\nvalues, albeit to a fairly limited degree (it uses -1 as a magic\nvalue).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 2 Jul 2020 10:36:29 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Use of \"long\" in incremental sort code" }, { "msg_contents": "On Thu, Jul 2, 2020 at 1:36 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Jun 29, 2020 at 9:13 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > I noticed the incremental sort code makes use of the long datatype a\n> > few times, e.g in TuplesortInstrumentation and\n> > IncrementalSortGroupInfo.\n>\n> I agree that long is terrible, and should generally be avoided.\n>\n> > Maybe Size would be better for the in-memory fields and uint64 for the\n> > on-disk fields?\n>\n> FWIW we have to use int64 for the in-memory tuplesort.c fields. This\n> is because it must be possible for the fields to have negative values\n> in the context of tuplesort. If there is going to be a general rule\n> for in-memory fields, then ISTM that it'll have to be \"use int64\".\n>\n> logtape.c uses long for on-disk fields. It also relies on negative\n> values, albeit to a fairly limited degree (it uses -1 as a magic\n> value).\n\nDo you think it's reasonable to use int64 across the board for memory\nand disk space numbers then? If so, I can update the patch.\n\nJames\n\n\n", "msg_date": "Thu, 2 Jul 2020 13:53:32 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use of \"long\" in incremental sort code" }, { "msg_contents": "On Thu, Jul 2, 2020 at 10:53 AM James Coleman <jtc331@gmail.com> wrote:\n> Do you think it's reasonable to use int64 across the board for memory\n> and disk space numbers then? If so, I can update the patch.\n\nUsing int64 as a replacement for long is the safest general strategy,\nand so ISTM that it might be worth doing that even in cases where it\nisn't clearly necessary. 
After all, any code that uses long must have\nbeen written with the assumption that that was the same thing as\nint64, at least on most platforms.\n\nThere is nothing wrong with using Size/size_t, and doing so is often\nslightly clearer. But it's no drop-in replacement for long.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 2 Jul 2020 11:07:03 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Use of \"long\" in incremental sort code" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Thu, Jul 2, 2020 at 10:53 AM James Coleman <jtc331@gmail.com> wrote:\n>> Do you think it's reasonable to use int64 across the board for memory\n>> and disk space numbers then? If so, I can update the patch.\n\n> Using int64 as a replacement for long is the safest general strategy,\n\nmumble ssize_t mumble\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Jul 2020 15:39:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Use of \"long\" in incremental sort code" }, { "msg_contents": "On Thu, Jul 2, 2020 at 12:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> mumble ssize_t mumble\n\nThat's from POSIX, though. I imagine MSVC won't be happy (surprise!).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 2 Jul 2020 12:42:48 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Use of \"long\" in incremental sort code" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Thu, Jul 2, 2020 at 12:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> mumble ssize_t mumble\n\n> That's from POSIX, though. 
I imagine MSVC won't be happy (surprise!).\n\nWe've got quite a few uses of it already, so apparently it's fine.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Jul 2020 15:44:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Use of \"long\" in incremental sort code" }, { "msg_contents": "On Thu, Jul 2, 2020 at 3:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > On Thu, Jul 2, 2020 at 10:53 AM James Coleman <jtc331@gmail.com> wrote:\n> >> Do you think it's reasonable to use int64 across the board for memory\n> >> and disk space numbers then? If so, I can update the patch.\n>\n> > Using int64 as a replacement for long is the safest general strategy,\n>\n> mumble ssize_t mumble\n\nBut wouldn't that mean we'd get int on 32-bit systems, and since we're\naccumulating data we could go over that value in both memory and disk?\n\nMy assumption is that it's preferable to have the \"this run value\" and\nthe \"total used across multiple runs\" and both of those for disk and\nmemory to be the same. In that case it seems we want to guarantee\n64-bits.\n\nPatch using int64 attached.\n\nJames", "msg_date": "Thu, 2 Jul 2020 15:47:46 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use of \"long\" in incremental sort code" }, { "msg_contents": "On Thu, Jul 2, 2020 at 12:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > That's from POSIX, though. I imagine MSVC won't be happy (surprise!).\n>\n> We've got quite a few uses of it already, so apparently it's fine.\n\nOh, looks like we have a compatibility hack for MSVC within\nwin32_port.h, where ssize_t is typedef'd to __int64. 
I didn't realize\nthat it was okay to use ssize_t.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 2 Jul 2020 12:49:48 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Use of \"long\" in incremental sort code" }, { "msg_contents": "On Thu, Jul 2, 2020 at 12:47 PM James Coleman <jtc331@gmail.com> wrote:\n> But wouldn't that mean we'd get int on 32-bit systems, and since we're\n> accumulating data we could go over that value in both memory and disk?\n>\n> My assumption is that it's preferable to have the \"this run value\" and\n> the \"total used across multiple runs\" and both of those for disk and\n> memory to be the same. In that case it seems we want to guarantee\n> 64-bits.\n\nI agree. There seems to be little reason to accommodate platform level\nconventions, beyond making sure that everything works on less popular\nor obsolete platforms.\n\nI suppose that it's a little idiosyncratic to use int64 like this. But\nit makes sense, and isn't nearly as ugly as the long thing, so I don't\nthink that it should really matter.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 2 Jul 2020 12:55:02 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Use of \"long\" in incremental sort code" }, { "msg_contents": "James Coleman <jtc331@gmail.com> writes:\n> On Thu, Jul 2, 2020 at 3:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> mumble ssize_t mumble\n\n> But wouldn't that mean we'd get int on 32-bit systems, and since we're\n> accumulating data we could go over that value in both memory and disk?\n\nCertainly, a number that's meant to represent the amount of data *on disk*\nshouldn't use ssize_t. But I think it's appropriate if you want to\nrepresent in-memory quantities while also allowing negative values.\n\nI guess if you're expecting in-memory sizes exceeding 2GB, you might worry\nthat ssize_t could overflow. 
I'm dubious that a 32-bit machine could get\nto that, though, seeing that it's going to have other demands on its\naddress space.\n\n> My assumption is that it's preferable to have the \"this run value\" and\n> the \"total used across multiple runs\" and both of those for disk and\n> memory to be the same. In that case it seems we want to guarantee\n> 64-bits.\n\nIf you're not going to distinguish in-memory from not-in-memory, agreed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Jul 2020 15:55:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Use of \"long\" in incremental sort code" }, { "msg_contents": "On Fri, 3 Jul 2020 at 07:47, James Coleman <jtc331@gmail.com> wrote:\n> Patch using int64 attached.\n\nI added this to the open items list for PG13.\n\nDavid\n\n\n", "msg_date": "Fri, 31 Jul 2020 14:11:46 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use of \"long\" in incremental sort code" }, { "msg_contents": "On Thu, Jul 30, 2020 at 10:12 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Fri, 3 Jul 2020 at 07:47, James Coleman <jtc331@gmail.com> wrote:\n> > Patch using int64 attached.\n>\n> I added this to the open items list for PG13.\n>\n> David\n\nI'd previously attached a patch [1], and there seemed to be agreement\nit was reasonable (lightly so, but I also didn't see any\ndisagreement); would someone be able to either commit the change or\nprovide some additional feedback?\n\nThanks,\nJames\n\n[1]: https://www.postgresql.org/message-id/CAAaqYe_Y5zwCTFCJeso7p34yJgf4khR8EaKeJtGd%3DQPudOad6A%40mail.gmail.com\n\n\n", "msg_date": "Fri, 31 Jul 2020 10:02:16 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use of \"long\" in incremental sort code" }, { "msg_contents": "On Sat, 1 Aug 2020 at 02:02, James Coleman <jtc331@gmail.com> wrote:\n> I'd previously attached a patch [1], and there seemed to be 
agreement\n> it was reasonable (lightly so, but I also didn't see any\n> disagreement); would someone be able to either commit the change or\n> provide some additional feedback?\n\nIt looks fine to me. Pushed.\n\nDavid\n\n> [1]: https://www.postgresql.org/message-id/CAAaqYe_Y5zwCTFCJeso7p34yJgf4khR8EaKeJtGd%3DQPudOad6A%40mail.gmail.com\n\n\n", "msg_date": "Sun, 2 Aug 2020 14:26:25 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use of \"long\" in incremental sort code" } ]
[ { "msg_contents": "Some time ago, there were some discussions about gcc warnings produced \nby -Wcast-function-type [0]. To clarify, while that thread seemed to \nimply that the warnings appear by default in some compiler version, this \nis not the case AFAICT, and the warnings are entirely optional.\n\nSo I took a look at what it would take to fix all the warnings and came \nup with the attached patch.\n\nThere are three subplots:\n\n1. Changing the return type of load_external_function() and \nlookup_external_function() from PGFunction to a generic pointer type, \nwhich is what the discussion in [0] started out about.\n\n2. There is a bit of cheating in dynahash.c. They keycopy field is \ndeclared as a function pointer that returns a pointer to the \ndestination, to match the signature of memcpy(), but then we assign \nstrlcpy() to it, which returns size_t. Even though we never use the \nreturn value, I'm not sure whether this could break if size_t and \npointers are of different sizes, which in turn is very unlikely.\n\n3. Finally, there is some nonsense necessary in plpython, which is \nannoying but otherwise uninteresting.\n\nIs there anything we want to pursue further here?\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/20180206200205.f5kvbyn6jawtzi6s%40alap3.anarazel.de\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 30 Jun 2020 08:47:56 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "warnings for invalid function casts" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> There are three subplots:\n\n> 1. 
Changing the return type of load_external_function() and \n> lookup_external_function() from PGFunction to a generic pointer type, \n> which is what the discussion in [0] started out about.\n\nI feel like what you propose to do here is just shifting the problem\naround: we're still casting from a function pointer that describes one\nconcrete call ABI to a function pointer that describes some other concrete\ncall ABI. That is, \"void (*ptr) (void)\" is *not* disclaiming knowledge\nof the function's signature, in the way that \"void *ptr\" disclaims\nknowledge of what a data pointer points to. So if current gcc fails to\nwarn about that, that's just a random and indeed obviously wrong decision\nthat they might change someday.\n\nRe-reading the original discussion, it seems like what we have to do\nif we want to suppress these warnings is to fully buy into POSIX's\nassertion that casting between data and function pointers is OK:\n\n Note that conversion from a void * pointer to a function pointer as in:\n fptr = (int (*)(int)) dlsym(handle, \"my_function\");\n is not defined by the ISO C standard. This standard requires this\n conversion to work correctly on conforming implementations.\n\nI suggest therefore that a logically cleaner solution is to keep the\nresult type of load_external_function et al as \"void *\", and have\ncallers cast that to the required specific function-pointer type,\nthus avoiding ever casting between two function-pointer types.\n(We could keep most of your patch as-is, but typedef GenericFunctionPtr\nas \"void *\" not a function pointer, with some suitable commentary.)\n\n> 2. There is a bit of cheating in dynahash.c.\n\nIt's slightly annoying that this fix introduces an extra layer of\nfunction-call indirection. Maybe that's not worth worrying about,\nbut I'm tempted to suggest that we could fix it on the same principle\nwith\n\n\thashp->keycopy = (HashCopyFunc) (void *) strlcpy;\n\n> 3. 
Finally, there is some nonsense necessary in plpython, which is \n> annoying but otherwise uninteresting.\n\nAgain, it seems pretty random to me that this suppresses any warnings,\nbut it'd be less so if the intermediate cast were to \"void *\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Jun 2020 10:15:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: warnings for invalid function casts" }, { "msg_contents": "Hi,\n\nOn 2020-06-30 08:47:56 +0200, Peter Eisentraut wrote:\n> Some time ago, there were some discussions about gcc warnings produced by\n> -Wcast-function-type [0]. To clarify, while that thread seemed to imply\n> that the warnings appear by default in some compiler version, this is not\n> the case AFAICT, and the warnings are entirely optional.\n\nWell, it's part of -Wextra. Which I think a fair number of people just\nalways enable...\n\n\n> There are three subplots:\n> \n> 1. Changing the return type of load_external_function() and\n> lookup_external_function() from PGFunction to a generic pointer type, which\n> is what the discussion in [0] started out about.\n\nTo a generic *function pointer type*, right?\n\n\n> 2. There is a bit of cheating in dynahash.c. They keycopy field is declared\n> as a function pointer that returns a pointer to the destination, to match\n> the signature of memcpy(), but then we assign strlcpy() to it, which returns\n> size_t. 
Even though we never use the return value, I'm not sure whether\n> this could break if size_t and pointers are of different sizes, which in\n> turn is very unlikely.\n\nI agree that it's a low risk,\n\n\n> Is there anything we want to pursue further here?\n\nYou mean whether we want to do further changes in the vein of yours, or\nwhether we want to apply your patch?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 30 Jun 2020 12:12:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: warnings for invalid function casts" }, { "msg_contents": "Hi,\n\nOn 2020-06-30 10:15:05 -0400, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > There are three subplots:\n> \n> > 1. Changing the return type of load_external_function() and \n> > lookup_external_function() from PGFunction to a generic pointer type, \n> > which is what the discussion in [0] started out about.\n> \n> I feel like what you propose to do here is just shifting the problem\n> around: we're still casting from a function pointer that describes one\n> concrete call ABI to a function pointer that describes some other concrete\n> call ABI. That is, \"void (*ptr) (void)\" is *not* disclaiming knowledge\n> of the function's signature, in the way that \"void *ptr\" disclaims\n> knowledge of what a data pointer points to. So if current gcc fails to\n> warn about that, that's just a random and indeed obviously wrong decision\n> that they might change someday.\n\nISTM that it's unlikely that they'd warn about casting from one\nsignature to another? That'd basically mean that you're not allowed to\ncast function pointers at all anymore? There's a legitimate reason to\ndistinguish between pointers to functions and pointers to data - but\nwhat'd be the point in forbidding all casts between different function\npointer types?\n\n\n> > 2. 
There is a bit of cheating in dynahash.c.\n> \n> It's slightly annoying that this fix introduces an extra layer of\n> function-call indirection. Maybe that's not worth worrying about,\n> but I'm tempted to suggest that we could fix it on the same principle\n> with\n\nHm. At first I was going to say that every compiler worth its salt\nshould be able to optimize the indirection, but that's probably not\ngenerally true, due to returning dest \"manually\". If the wrapper instead\njust added explicit cast to the return type it'd presumably be ok.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 30 Jun 2020 12:21:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: warnings for invalid function casts" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-06-30 10:15:05 -0400, Tom Lane wrote:\n>> I feel like what you propose to do here is just shifting the problem\n>> around: we're still casting from a function pointer that describes one\n>> concrete call ABI to a function pointer that describes some other concrete\n>> call ABI. That is, \"void (*ptr) (void)\" is *not* disclaiming knowledge\n>> of the function's signature, in the way that \"void *ptr\" disclaims\n>> knowledge of what a data pointer points to. So if current gcc fails to\n>> warn about that, that's just a random and indeed obviously wrong decision\n>> that they might change someday.\n\n> ISTM that it's unlikely that they'd warn about casting from one\n> signature to another?\n\nUh, what? Isn't that *exactly* what this warning class does?\nIf it doesn't do that, what good is it? I mean, I can definitely\nsee the point of warning when you cast a function pointer to some\nother not-ABI-compatible function pointer type, because that might\nbe a mistake, just like assigning \"int *\" to \"double *\" might be.\n\ngcc 8's manual says\n\n'-Wcast-function-type'\n Warn when a function pointer is cast to an incompatible function\n pointer. 
In a cast involving function types with a variable\n argument list only the types of initial arguments that are provided\n are considered. Any parameter of pointer-type matches any other\n pointer-type. Any benign differences in integral types are\n ignored, like 'int' vs. 'long' on ILP32 targets. Likewise type\n qualifiers are ignored. The function type 'void (*) (void)' is\n special and matches everything, which can be used to suppress this\n warning. In a cast involving pointer to member types this warning\n warns whenever the type cast is changing the pointer to member\n type. This warning is enabled by '-Wextra'.\n\nso it seems like they've already mostly crippled the type-safety of the\nwarning with the provision about \"all pointer types are interchangeable\"\n:-(. But they certainly are warning about *some* cases of casting one\nsignature to another.\n\nIn any case, I think the issue here is what is the escape hatch for saying\nthat \"I know this cast is okay, don't warn about it, thanks\". Treating\n\"void (*) (void)\" as special for that purpose is nothing more nor less\nthan a kluge, so another compiler might do it differently. Given the\nPOSIX restriction, I think we could reasonably use \"void *\" instead.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Jun 2020 15:38:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: warnings for invalid function casts" }, { "msg_contents": "On 2020-06-30 21:38, Tom Lane wrote:\n> In any case, I think the issue here is what is the escape hatch for saying\n> that \"I know this cast is okay, don't warn about it, thanks\". Treating\n> \"void (*) (void)\" as special for that purpose is nothing more nor less\n> than a kluge, so another compiler might do it differently. Given the\n> POSIX restriction, I think we could reasonably use \"void *\" instead.\n\nI think gcc had to pick some escape hatch that is valid also outside of \nPOSIX, so they just had to pick something. 
If we're disregarding \nsupport for these Harvard architecture type things, then we might as \nwell use void * for easier notation.\n\nBtw., one of the hunks in my patch was in PL/Python. I have found an \nequivalent change in the core Python code, which does make use of void \n(*) (void): https://github.com/python/cpython/commit/62be74290aca\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 3 Jul 2020 16:04:10 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: warnings for invalid function casts" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-06-30 21:38, Tom Lane wrote:\n>> In any case, I think the issue here is what is the escape hatch for saying\n>> that \"I know this cast is okay, don't warn about it, thanks\". Treating\n>> \"void (*) (void)\" as special for that purpose is nothing more nor less\n>> than a kluge, so another compiler might do it differently. Given the\n>> POSIX restriction, I think we could reasonably use \"void *\" instead.\n\n> I think gcc had to pick some escape hatch that is valid also outside of \n> POSIX, so they just had to pick something. If we're disregarding \n> support for these Harvard architecture type things, then we might as \n> well use void * for easier notation.\n\nAs long as it's behind a typedef, the code will look the same in any\ncase ;-).\n\n> Btw., one of the hunks in my patch was in PL/Python. I have found an \n> equivalent change in the core Python code, which does make use of void \n> (*) (void): https://github.com/python/cpython/commit/62be74290aca\n\nGiven that gcc explicitly documents \"void (*) (void)\" as being what\nto use, they're going to have a hard time changing their minds about\nthat ... and gcc is dominant enough in this space that I suppose\nother compilers would have to be compatible with it. 
So even though\nit's theoretically bogus, I suppose we might as well go along with\nit. The typedef will allow a centralized fix if we ever find a\nbetter answer.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Jul 2020 10:40:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: warnings for invalid function casts" }, { "msg_contents": "On 2020-07-03 16:40, Tom Lane wrote:\n> Given that gcc explicitly documents \"void (*) (void)\" as being what\n> to use, they're going to have a hard time changing their minds about\n> that ... and gcc is dominant enough in this space that I suppose\n> other compilers would have to be compatible with it. So even though\n> it's theoretically bogus, I suppose we might as well go along with\n> it. The typedef will allow a centralized fix if we ever find a\n> better answer.\n\nDo people prefer a typedef or just writing it out, like it's done in the \nPython code?\n\nAttached is a provisional patch that has it written out.\n\nI'm minimally in favor of that, since the Python code would be \nconsistent with the Python core code, and the one other use is quite \nspecial and it might not be worth introducing a globally visible \nworkaround for it. But if we prefer a typedef then I'd propose \nGenericFuncPtr like in the initial patch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 4 Jul 2020 13:36:44 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: warnings for invalid function casts" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Do people prefer a typedef or just writing it out, like it's done in the \n> Python code?\n\nI'm for a typedef. 
There is *nothing* readable about \"(void (*) (void))\",\nand the fact that it's theoretically incorrect for the purpose doesn't\nexactly aid intelligibility either. With a typedef, not only are\nthe uses more readable but there's a place to put a comment explaining\nthat this is notionally wrong but it's what gcc specifies to use\nto suppress thus-and-such warnings.\n\n> But if we prefer a typedef then I'd propose \n> GenericFuncPtr like in the initial patch.\n\nThat name is OK by me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 04 Jul 2020 10:16:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: warnings for invalid function casts" }, { "msg_contents": "On 2020-07-04 16:16, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> Do people prefer a typedef or just writing it out, like it's done in the\n>> Python code?\n> \n> I'm for a typedef. There is *nothing* readable about \"(void (*) (void))\",\n> and the fact that it's theoretically incorrect for the purpose doesn't\n> exactly aid intelligibility either. With a typedef, not only are\n> the uses more readable but there's a place to put a comment explaining\n> that this is notionally wrong but it's what gcc specifies to use\n> to suppress thus-and-such warnings.\n\nMakes sense. New patch here.\n\n>> But if we prefer a typedef then I'd propose\n>> GenericFuncPtr like in the initial patch.\n> \n> That name is OK by me.\n\nI changed that to pg_funcptr_t, to look a bit more like C and less like \nJava. 
;-)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 7 Jul 2020 11:45:41 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: warnings for invalid function casts" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-07-04 16:16, Tom Lane wrote:\n>> I'm for a typedef. There is *nothing* readable about \"(void (*) (void))\",\n>> and the fact that it's theoretically incorrect for the purpose doesn't\n>> exactly aid intelligibility either. With a typedef, not only are\n>> the uses more readable but there's a place to put a comment explaining\n>> that this is notionally wrong but it's what gcc specifies to use\n>> to suppress thus-and-such warnings.\n\n> Makes sense. New patch here.\n\nI don't have a compiler handy that emits these warnings, but this\npasses an eyeball check.\n\n>>> But if we prefer a typedef then I'd propose\n>>> GenericFuncPtr like in the initial patch.\n\n>> That name is OK by me.\n\n> I changed that to pg_funcptr_t, to look a bit more like C and less like \n> Java. ;-)\n\nI liked the first proposal better. Not gonna fight about it though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Jul 2020 12:08:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: warnings for invalid function casts" }, { "msg_contents": "On 2020-07-07 18:08, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> On 2020-07-04 16:16, Tom Lane wrote:\n>>> I'm for a typedef. There is *nothing* readable about \"(void (*) (void))\",\n>>> and the fact that it's theoretically incorrect for the purpose doesn't\n>>> exactly aid intelligibility either. 
With a typedef, not only are\n>>> the uses more readable but there's a place to put a comment explaining\n>>> that this is notionally wrong but it's what gcc specifies to use\n>>> to suppress thus-and-such warnings.\n> \n>> Makes sense. New patch here.\n> \n> I don't have a compiler handy that emits these warnings, but this\n> passes an eyeball check.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 14 Jul 2020 20:58:28 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: warnings for invalid function casts" } ]
[ { "msg_contents": "Hello.\n\nWhile looking at a patch, I found that a standby with archive_mode=always\nfails to archive segments under certain conditions.\n\nA. Walreceiver is gracefully terminated just after a segment is\n finished.\n\nB. Walreceiver is gracefully terminated while receiving filling chunks\n for a segment switch.\n\nThe two above are reproducible (without distinction between the two)\nusing a helper patch. See below.\n\nThere's one more issue here.\n\nC. Standby doesn't archive a segment until walreceiver receives any\n data for the next segment.\n\nI'm not sure whether we regard C as an issue.\n\nThe first attached patch fixes A and B. A side-effect of that is that\nthe standby archives the previous segment of the streaming start\nlocation. Concretely 00..0100..2 gets to be archived in the above case\n(recovery starts at 0/3000000). That behavior doesn't seem to be a\nproblem since the segment is a part of the standby's data anyway.\n\nThe second attached patch fixes all of A to C, but seems somewhat\nredundant.\n\nAny opinions and/or suggestions are welcome.\n\n\nThe attached files are:\n\n1. v1-0001-Make-sure-standby-archives-all-segments.patch:\n Fix for A and B.\n\n2. v1-0001-Make-sure-standby-archives-all-segments-immediate.patch:\n Fix for A, B and C.\n\n3. repro.sh\n The reproducer shell script used below.\n\n4. repro_helper.patch\n Helper patch for repro.sh for master and patch 1 above.\n\n5. repro_helper2.patch\n Helper patch for repro.sh for patch 2 above.\n\n=====\n** REPRODUCER\n\nThe failure is reproducible with some code tweaks.\n\n1. Create a primary server with archive_mode=always, then start it.\n2. Create and start a standby.\n3. touch /tmp/hoge\n\n4. psql -c \"create table t(); drop table t; select pg_switch_wal(); select pg_sleep(1); create table t(); drop table t; select pg_switch_wal();\"\n\n5. 
look into the archive directory of the standby.\n If no missing segments found in archive, repeat from 3.\n\nThe third attached shell script is a reproducer for the problem,\nneeding the aid of the fourth patch attached.\n\n$ mkdir testdir\n$ cd testdir\n$ bash ..../repro.sh\n....\nAfter test 2:\nPrimary location: 0/8000310\nStandby location: 0/8000310\n# primary archive\n000000010000000000000003\n000000010000000000000004\n000000010000000000000005\n000000010000000000000006\n000000010000000000000007\n000000010000000000000008\n# standby archive\n000000010000000000000003\n000000010000000000000005\n000000010000000000000006\n000000010000000000000008\n\nThe segment 4 is skipped by the issue A and 7 is skipped by the issue\nB.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n\n#! /bin/bash\n\nROOT=`pwd`\nLOGFILE=\"repro.log\"\nPGPORT1=15432\nPGPORT2=15433\n\nPGDATA1=$ROOT/reprodata1\nARCHDIR1=$ROOT/reproarc1\nPGDATA2=$ROOT/reprodata2\nARCHDIR2=$ROOT/reproarc2\n\n\nfunction cleanup {\n\techo -n \"Killing servers...\"\n\tpg_ctl -D $PGDATA1 -m i stop\n\tpg_ctl -D $PGDATA2 -m i stop\n\techo \"done.\"\n\texit 1\n}\n\nrm -r $PGDATA1 $PGDATA2 $ARCHDIR1 $ARCHDIR2\nmkdir $ARCHDIR1 $ARCHDIR2\n\n# Create primary\necho \"# Creating primary\"\ninitdb -D $PGDATA1 &>$LOGFILE\ncat >> $PGDATA1/postgresql.conf <<EOF\nwal_keep_segments=10\narchive_mode=always\narchive_command='cp %p $ARCHDIR1/%f'\nEOF\n\n# Start primary\necho \"# Starting primary\"\npg_ctl -D $PGDATA1 -o\"-p $PGPORT1\" start &>>$LOGFILE\n\n# Create standby\necho \"# Creating standby\"\npg_basebackup -D $PGDATA2 -h /tmp -p $PGPORT1 &>>$LOGFILE\ncat >> $PGDATA2/postgresql.conf <<EOF\narchive_command='cp %p $ARCHDIR2/%f'\nprimary_conninfo='host=/tmp port=$PGPORT1'\nEOF\ntouch $PGDATA2/standby.signal\n\ntrap cleanup ERR 2 3 15\n\n# Start primary\necho \"# Starting standby\"\npg_ctl -D $PGDATA2 -o\"-p $PGPORT2\" start &>>$LOGFILE\nsleep 3\n\necho \"Start:\"\necho -n \"Primary location: \"\npsql -tAp 
$PGPORT1 -c \"select pg_current_wal_lsn()\"\necho -n \"Standby location: \"\npsql -tAp $PGPORT2 -c \"select pg_last_wal_receive_lsn()\"\n\n# Delocate from boundary..\npsql -p $PGPORT1 -c \"create table t(); drop table t\" &>>$LOGFILE\nsleep 1\n\n# TEST 1: walreceiver stops just after a segment is completed\necho \"# test 1\" >> $LOGFILE\ntouch /tmp/hoge1\npsql -p $PGPORT1 -c \"create table t(a int); insert into t (select a from generate_series(0, 260000) a); drop table t;\" &>>$LOGFILE\necho \"# test 1 end\" >> $LOGFILE\n\npsql -p $PGPORT1 -c \"create table t(); drop table t; select pg_switch_wal()\" &>>$LOGFILE\nsleep 2\n\necho \"After test 1:\"\necho -n \"Primary location: \"\npsql -tAp $PGPORT1 -c \"select pg_current_wal_lsn()\"\necho -n \"Standby location: \"\npsql -tAp $PGPORT2 -c \"select pg_last_wal_receive_lsn()\"\n\npsql -p $PGPORT1 -c \"create table t(); drop table t; select pg_switch_wal()\" &>>$LOGFILE\npsql -p $PGPORT1 -c \"create table t(); drop table t; select pg_switch_wal()\" &>>$LOGFILE\nsleep 2\n\n# TEST 2: walreceiver stops while receiving filling chunks after a wal switch.\necho \"# test 2\" >> $LOGFILE\ntouch /tmp/hoge2\npsql -p $PGPORT1 -c \"create table t(); drop table t; select pg_switch_wal()\" &>>$LOGFILE\necho \"# test 2 end\" >> $LOGFILE\n\nsleep 2\n\necho \"After test 2:\"\necho -n \"Primary location: \"\npsql -tAp $PGPORT1 -c \"select pg_current_wal_lsn()\"\necho -n \"Standby location: \"\npsql -tAp $PGPORT2 -c \"select pg_last_wal_receive_lsn()\"\n\n# stop servers\npg_ctl -D $PGDATA1 stop &>>$LOGFILE\npg_ctl -D $PGDATA2 stop &>>$LOGFILE\n\n#show last three archived segments\necho \"# primary archive\"\nls $ARCHDIR1 | egrep '[3-9]$'\necho \"# standby archive\"\nls $ARCHDIR2 | egrep '[3-9]$'", "msg_date": "Tue, 30 Jun 2020 16:55:03 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Possible missing segments in archiving on standby" }, { "msg_contents": "On 2020/06/30 16:55, 
Kyotaro Horiguchi wrote:\n> Hello.\n> \n> While looking a patch, I found that a standby with archive_mode=always\n> fails to archive segments under certain conditions.\n\nI encountered this issue, too.\n\n\n> 1. v1-0001-Make-sure-standby-archives-all-segments.patch:\n> Fix for A and B.\n> \n> 2. v1-0001-Make-sure-standby-archives-all-segments-immediate.patch:\n> Fix for A, B and C.\n\nYou proposed two patches, but this patch should be reviewed preferentially\nbecause this addresses all the issues (i.e., A, B and C) that you reported?\n\n\n+\t\t\t * If we are starting streaming at the beginning of a segment,\n+\t\t\t * there may be the case where the previous segment have not been\n+\t\t\t * archived yet. Make sure it is archived.\n\nCould you clarify why the archive notification file of the previous\nWAL segment needs to be checked?\n\n\nAs far as I read the code, the cause of the issue seems to be that\nXLogWalRcvWrite() exits without creating an archive notification file\neven if the current WAL segment is fully written up in the last cycle of\nXLogWalRcvWrite()'s loop. That is, creation of the notification file\nand WAL archiving of that completed segment will be delayed\nuntil any data in the next segment is received and written (by next call\nto XLogWalRcvWrite()). Furthermore, in that case, if walreceiver exits\nbefore receiving such next segment, the completed current segment\nfails to be archived as Horiguchi-san reported.\n\nTherefore, IMO that the simple approach to fix the issue is to create\nan archive notification file if possible at the end of XLogWalRcvWrite().\nI implemented this idea. 
Patch attached.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 31 Aug 2021 01:54:36 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Possible missing segments in archiving on standby" }, { "msg_contents": "At Tue, 31 Aug 2021 01:54:36 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/06/30 16:55, Kyotaro Horiguchi wrote:\n> > Hello.\n> > While looking a patch, I found that a standby with archive_mode=always\n> > fails to archive segments under certain conditions.\n> \n> I encountered this issue, too.\n> \n> \n> > 1. v1-0001-Make-sure-standby-archives-all-segments.patch:\n> > Fix for A and B.\n> > 2. v1-0001-Make-sure-standby-archives-all-segments-immediate.patch:\n> > Fix for A, B and C.\n> \n> You proposed two patches, but this patch should be reviewed\n> preferentially\n> because this addresses all the issues (i.e., A, B and C) that you\n> reported?\n\nMaybe. The point here was whether we regard C as an issue, but now I\nthink it is an issue.\n\n> + * If we are starting streaming at the beginning of a segment,\n> + * there may be the case where the previous segment have not been\n> +\t\t\t * archived yet. Make sure it is archived.\n> \n> Could you clarify why the archive notification file of the previous\n> WAL segment needs to be checked?\n> \n> As far as I read the code, the cause of the issue seems to be that\n> XLogWalRcvWrite() exits without creating an archive notification file\n> even if the current WAL segment is fully written up in the last cycle\n> of\n> XLogWalRcvWrite()'s loop. That is, creation of the notification file\n> and WAL archiving of that completed segment will be delayed\n> until any data in the next segment is received and written (by next\n> call\n> to XLogWalRcvWrite()). 
Furthermore, in that case, if walreceiver exits\n> before receiving such next segment, the completed current segment\n> fails to be archived as Horiguchi-san reported.\n\nRight. Eventually such segments are archived when a future checkpoint\nremoves them. In that sense the patch works just to let archiving\nhappen faster, but on the other hand I came to think we are supposed\nto archive a segment as soon as it is completed. (That is, I think C\nis a problem.)\n\n> Therefore, IMO that the simple approach to fix the issue is to create\n> an archive notification file if possible at the end of\n> XLogWalRcvWrite().\n> I implemented this idea. Patch attached.\n\nI'm not sure which is simpler, but it works except for B, the case of\na long-jump by a segment switch. When a segment switch happens,\nwalsender sends filling zero-pages but even if walreceiver is\nterminated before the segment is completed, walsender restarts from\nthe next segment at the next startup. Concretely like the following.\n\n- pg_switch_wal() invoked at 6003228 (for example)\n- walreceiver terminates at 6500000 (or a bit later).\n- walreceiver restarts from 7000000\n\nIn this case the segment 6 is not notified even with the patch, and my\nold patches work the same way. (In other words, the call to\nXLogWalRcvClose() at the end of XLogWalRcvWrite doesn't work for the\ncase as you might expect.) If we think it OK that we don't notify the\nsegment earlier than a future checkpoint removes it, yours or only the\nlast half of my one is sufficient, but do we really think so?\nFurthermore, your patch or only the last half of my second patch\ndoesn't save the case of a crash unlike the case of a graceful\ntermination.\n\n\nThe attached files are:\n\nv2wip-0001-Make-sure... : a rebased patch of the old second patch\nrepro_helper.diff : reproducer helper patch, used by the script below.\nrepro.sh : reproducer script.\n\n(The second diff conflicts with the first patch. 
Since the second just\ninserts a single code block, it is easily applicable manually:p)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n#! /bin/bash\n\nROOT=`pwd`\nLOGFILE=\"repro.log\"\nPGPORT1=15432\nPGPORT2=15433\n\nPGDATA1=$ROOT/reprodata1\nARCHDIR1=$ROOT/reproarc1\nPGDATA2=$ROOT/reprodata2\nARCHDIR2=$ROOT/reproarc2\n\n\nfunction cleanup {\n\techo -n \"Killing servers...\"\n\tpg_ctl -D $PGDATA1 -m i stop\n\tpg_ctl -D $PGDATA2 -m i stop\n\techo \"done.\"\n\texit 1\n}\n\nrm -r $PGDATA1 $PGDATA2 $ARCHDIR1 $ARCHDIR2\nmkdir $ARCHDIR1 $ARCHDIR2\n\n# Create primary\necho \"# Creating primary\"\ninitdb -D $PGDATA1 &>$LOGFILE\ncat >> $PGDATA1/postgresql.conf <<EOF\nwal_keep_size=160\narchive_mode=always\narchive_command='cp %p $ARCHDIR1/%f'\nEOF\n\n# Start primary\necho \"# Starting primary\"\npg_ctl -D $PGDATA1 -o\"-p $PGPORT1\" start &>>$LOGFILE\n\n# Create standby\necho \"# Creating standby\"\npg_basebackup -D $PGDATA2 -h /tmp -p $PGPORT1 &>>$LOGFILE\ncat >> $PGDATA2/postgresql.conf <<EOF\narchive_command='cp %p $ARCHDIR2/%f'\nprimary_conninfo='host=/tmp port=$PGPORT1'\nEOF\ntouch $PGDATA2/standby.signal\n\ntrap cleanup ERR 2 3 15\n\n# Start primary\necho \"# Starting standby\"\npg_ctl -D $PGDATA2 -o\"-p $PGPORT2\" start &>>$LOGFILE\nsleep 3\n\necho \"Start:\"\necho -n \"Primary location: \"\npsql -tAp $PGPORT1 -c \"select pg_current_wal_lsn()\"\necho -n \"Standby location: \"\npsql -tAp $PGPORT2 -c \"select pg_last_wal_receive_lsn()\"\n\n# Delocate from boundary..\npsql -p $PGPORT1 -c \"create table t(); drop table t\" &>>$LOGFILE\nsleep 1\n\n# TEST 1: walreceiver stops just after a segment is completed\necho \"Before test 1:\"\necho -n \"Primary location: \" \npsql -tAp $PGPORT1 -c \"select pg_current_wal_lsn()\"\necho -n \"Standby location: \"\npsql -tAp $PGPORT2 -c \"select pg_last_wal_receive_lsn()\"\necho \"# test 1\" >> $LOGFILE\ntouch /tmp/hoge1\n# the number of records should be adjusted so that the LSN at \"before\n# end test 
1\" below is locaed in the last page of the segment 3.\npsql -p $PGPORT1 -c \"create table t(a int); insert into t (select a from generate_series(0, 259500) a); drop table t;\" &>>$LOGFILE\necho \"# test 1 end\" >> $LOGFILE\npsql -p $PGPORT1 -c \"create table t(); drop table t\" &>>$LOGFILE\n\necho \"before end test 1:\"\necho -n \"Primary location: \" \npsql -tAp $PGPORT1 -c \"select pg_current_wal_lsn()\"\necho -n \"Standby location: \"\npsql -tAp $PGPORT2 -c \"select pg_last_wal_receive_lsn()\"\n\npsql -p $PGPORT1 -c \"select pg_switch_wal()\" &>>$LOGFILE\nsleep 2\n\necho \"After test 1:\"\necho -n \"Primary location: \"\npsql -tAp $PGPORT1 -c \"select pg_current_wal_lsn()\"\necho -n \"Standby location: \"\npsql -tAp $PGPORT2 -c \"select pg_last_wal_receive_lsn()\"\n\npsql -p $PGPORT1 -c \"create table t(); drop table t; select pg_switch_wal()\" &>>$LOGFILE\npsql -p $PGPORT1 -c \"create table t(); drop table t; select pg_switch_wal()\" &>>$LOGFILE\nsleep 2\n\n# TEST 2: walreceiver stops while receiving filling chunks after a wal switch.\necho \"# test 2\" >> $LOGFILE\ntouch /tmp/hoge2\npsql -p $PGPORT1 -c \"create table t(); drop table t; select pg_switch_wal()\" &>>$LOGFILE\necho \"# test 2 end\" >> $LOGFILE\n\nsleep 2\n\necho \"After test 2:\"\necho -n \"Primary location: \"\npsql -tAp $PGPORT1 -c \"select pg_current_wal_lsn()\"\necho -n \"Standby location: \"\npsql -tAp $PGPORT2 -c \"select pg_last_wal_receive_lsn()\"\n\n# stop servers\npg_ctl -D $PGDATA1 stop &>>$LOGFILE\npg_ctl -D $PGDATA2 stop &>>$LOGFILE\n\n#show last several archived segments\necho \"# primary archive\"\nls $ARCHDIR1 | egrep '[3-9]$'\necho \"# standby archive\"\nls $ARCHDIR2 | egrep '[3-9]$'", "msg_date": "Tue, 31 Aug 2021 16:35:36 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible missing segments in archiving on standby" }, { "msg_contents": "\n\nOn 2021/08/31 16:35, Kyotaro Horiguchi wrote:\n> I'm not sure which 
is simpler, but it works except for B, the case of\n> a long-jump by a segment switch. When a segment switch happens,\n> walsender sends filling zero-pages but even if walreceiver is\n> terminated before the segment is completed, walsender restarts from\n> the next segment at the next startup. Concretely like the following.\n> - pg_switch_wal() invoked at 6003228 (for example)\n> - walreceiver terminates at 6500000 (or a bit later).\n> - walrecever rstarts from 7000000\n> In this case the segment 6 is not notified even with the patch, and my\n> old patches works the same way. (In other words, the call to\n> XLogWalRcvClose() at the end of XLogWalRcvWrite doens't work for the\n> case as you might expect.) If we think it ok that we don't notify the\n> segment earlier than a future checkpoint removes it, yours or only the\n> last half of my one is sufficient, but do we really think so?\n> Furthermore, your patch or only the last half of my second patch\n> doesn't save the case of a crash unlike the case of a graceful\n> termination.\n\nThanks for the clarification!\nPlease let me check my understanding of the issue.\n\nThe issue happens when walreceiver exits after it receives the XLOG_SWITCH record\nbut before it receives the remaining bytes of the segment including that\nXLOG_SWITCH record. In this case, the startup process tries to replay that\n\"half-received\" segment, finds the XLOG_SWITCH record in it, moves to the next\nsegment and then starts a new walreceiver from that next segment. Therefore,\neven with my patch, the segment including that XLOG_SWITCH record is not\narchived soon. Is my understanding right? I agree that we should also address\nthis issue.\n\nISTM, to address the issue, it's simpler and less fragile to make the startup\nprocess call XLogArchiveCheckDone() or something whenever it moves\nto the next segment, rather than make walreceiver do that. 
Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 31 Aug 2021 23:23:27 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Possible missing segments in archiving on standby" }, { "msg_contents": "At Tue, 31 Aug 2021 23:23:27 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2021/08/31 16:35, Kyotaro Horiguchi wrote:\n> > I'm not sure which is simpler, but it works except for B, the case of\n> > a long-jump by a segment switch. When a segment switch happens,\n> > walsender sends filling zero-pages but even if walreceiver is\n> > terminated before the segment is completed, walsender restarts from\n> > the next segment at the next startup. Concretely like the following.\n> > - pg_switch_wal() invoked at 6003228 (for example)\n> > - walreceiver terminates at 6500000 (or a bit later).\n> > - walrecever rstarts from 7000000\n> > In this case the segment 6 is not notified even with the patch, and my\n> > old patches works the same way. (In other words, the call to\n> > XLogWalRcvClose() at the end of XLogWalRcvWrite doens't work for the\n> > case as you might expect.) If we think it ok that we don't notify the\n> > segment earlier than a future checkpoint removes it, yours or only the\n> > last half of my one is sufficient, but do we really think so?\n> > Furthermore, your patch or only the last half of my second patch\n> > doesn't save the case of a crash unlike the case of a graceful\n> > termination.\n> \n> Thanks for the clarification!\n> Please let me check my understanding about the issue.\n> \n> The issue happens when walreceiver exits after it receives XLOG_SWITCH\n> record\n> but before receives the remaining bytes of the segment including that\n> XLOG_SWITCH record. 
In this case, the startup process tries to replay\n> that\n> \"half-received\" segment, finds XLOG_SWITCH record in it, moves to the\n> next\n> segment and then starts new walreceiver from that next\n> segment. Therefore,\n> even with my patch, the segment including that XLOG_SWITCH record is\n> not\n> archived soon. Is my understanding right? I agree that we should\n> address also\n> this issue.\n\nRight.\n\n> ISTM, to address the issue, it's simpler and less fragile to make the\n> startup\n> process call XLogArchiveCheckDone() or something whenever it moves\n> the next segment, rather than make walreceiver do that. Thought?\n\nPutting aside the issue C, it would work as long as recovery is not\npaused or delayed. Although simply doing that means we run an additional\n(and a bit) wasteful XLogArchiveCheckDone() in most cases, it's hard to\nimagine moving the responsibility to notify a finished segment from\nwalsender (writer side) to startup (reader side).\n\nIn the first place A and B happen only at termination or crash of\nwalsender so there's no fragility in checking only the previous\nsegment at start of walsender. After a bit of thought I noticed that we\ndon't need to do that in the wal-writing loop. And I noticed that we\nneed to consider timeline transitions while calculating the previous\nsegment. 
That said, a missing notification at a timeline switch\ndoesn't happen unless walsender is killed hard, for example by a\nSIGKILL or a power cut.\n\nSo the attached is a new version of the patch to fix only A and B.\n\n- Moved the check code out of the replication loop.\n\n- Track timeline transition while calculating the previous segment.\n If we don't do that, we would need another means to avoid notifying\n a non-existent segment instead of the correct one.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 01 Sep 2021 12:12:25 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible missing segments in archiving on standby" }, { "msg_contents": "\n\nOn 2021/09/01 12:12, Kyotaro Horiguchi wrote:\n> Putting aside the issue C, it would work as far as recovery is not\n> paused or delayed. Although simply doing that means we run additional\n> and a bit) wasteful XLogArchiveCheckDone() in most cases, It's hard to\n> imagine moving the responsibility to notify a finished segment from\n> walsender (writer side) to startup (reader side).\n\nYou mean walreceiver, not walsender?\n\nI was thinking to apply my latest patch, to address the issues A and C.\nSo walreceiver is still basically responsible for creating the .ready file.\nAlso regarding the issue B, I was thinking to make the startup process\ncall XLogArchiveCheckDone() or something only when it finds an\nXLOG_SWITCH record. Thought?\n\n\n> In the first place A and B happens only at termination or crash of\n> walsender so there's no fragility in checking only the previous\n> segment at start of walsender. After a bit thought I noticed that we\n> don't need to do that in the wal-writing loop. And I noticed that we\n> need to consider timeline transitions while calculating the previous\n> segment. 
Even though missing-notification at a timeline-switch\n> doesn't happen unless walsender is killed hard for example by a\n> sigkill or a power cut, though.\n\nWhat happens if the server is promoted before that walreceiver is invoked?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 1 Sep 2021 14:37:43 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Possible missing segments in archiving on standby" }, { "msg_contents": "At Wed, 1 Sep 2021 14:37:43 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2021/09/01 12:12, Kyotaro Horiguchi wrote:\n> > Putting aside the issue C, it would work as far as recovery is not\n> > paused or delayed. Although simply doing that means we run additional\n> > and a bit) wasteful XLogArchiveCheckDone() in most cases, It's hard to\n> > imagine moving the responsibility to notify a finished segment from\n> > walsender (writer side) to startup (reader side).\n> \n> You mean walreceiver, not walsender?\n\nSorry, it's walreceiver.\n\n> I was thinking to apply my latest patch, to address the issue A and C.\n> So walreceiver is still basically responsible to create .ready file.\n\nConsidering the following discussion, I don't object to the patch.\n\n> Also regarding the issue B, I was thinking to make the startup process\n> call XLogArchiveCheckDone() or something only when it finds\n> XLOG_SWITCH record. Thought?\n\nSounds workable. I came to agree to the reader-side amendment as\nbelow. But I might prefer to do that at every segment-switch in case\nof a crash.\n\n> What happens if the server is promoted before that walreceiver is\n> invoked?\n\nHmmmmm. 
A partial segment is not created if a server is promoted just at\na segment boundary, and then the previous segment won't get archived until\nthe next checkpoint runs.\n\nOk, I agree that the reader-side needs an amendment.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 02 Sep 2021 10:16:40 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible missing segments in archiving on standby" }, { "msg_contents": "On 2021/09/02 10:16, Kyotaro Horiguchi wrote:\n> Ok, I agree that the reader-side needs an amendment.\n\nThanks for the review! 
Attached is the updated version of the patch.\n> Based on my latest patch, I changed the startup process so that\n> it creates an archive notification file of the streamed WAL segment\n> including XLOG_SWITCH record if the notification file has not been\n> created yet.\n\n+\t\t\t\tif (readSource == XLOG_FROM_STREAM &&\n+\t\t\t\t\trecord->xl_rmid == RM_XLOG_ID &&\n+\t\t\t\t\t(record->xl_info & ~XLR_INFO_MASK) == XLOG_SWITCH)\n\nreadSource is the source at the time startup reads it and it could be\ndifferent from the source at the time the record was written. We\ncannot know where the record came from there, so does the readSource\ncondition work as expected? If we had some trouble streaming just\nbefore, readSource at the time is likely to be XLOG_FROM_PG_WAL.\n\n+\t\t\t\t\t\tif (XLogArchivingAlways())\n+\t\t\t\t\t\t\tXLogArchiveNotify(xlogfilename, true);\n+\t\t\t\t\t\telse\n+\t\t\t\t\t\t\tXLogArchiveForceDone(xlogfilename);\n\nThe path is used both for crash and archive recovery. If we pass there\nwhile crash recovery on a primary with archive_mode=on, the file could\nbe marked .done before actually archived. On the other hand when\narchive_mode=always, the file could be re-marked .ready even after it\nhas been already archived. Why isn't it XLogArchiveCheckDone?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 03 Sep 2021 14:56:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible missing segments in archiving on standby" }, { "msg_contents": "On 2021/09/03 14:56, Kyotaro Horiguchi wrote:\n> +\t\t\t\tif (readSource == XLOG_FROM_STREAM &&\n> +\t\t\t\t\trecord->xl_rmid == RM_XLOG_ID &&\n> +\t\t\t\t\t(record->xl_info & ~XLR_INFO_MASK) == XLOG_SWITCH)\n> \n> readSource is the source at the time startup reads it and it could be\n> different from the source at the time the record was written. 
We\n> cannot know where the record came from there, so does the readSource\n> condition work as expected? If we had some trouble streaming just\n> before, readSource at the time is likely to be XLOG_FROM_PG_WAL.\n\nYes.\n\n\n> +\t\t\t\t\t\tif (XLogArchivingAlways())\n> +\t\t\t\t\t\t\tXLogArchiveNotify(xlogfilename, true);\n> +\t\t\t\t\t\telse\n> +\t\t\t\t\t\t\tXLogArchiveForceDone(xlogfilename);\n> \n> The path is used both for crash and archive recovery. If we pass there\n> while crash recovery on a primary with archive_mode=on, the file could\n> be marked .done before actually archived. On the other hand when\n> archive_mode=always, the file could be re-marked .ready even after it\n> has been already archived. Why isn't it XLogArchiveCheckDone?\n\nYeah, you're right. ISTM what we should do is to just call\nXLogArchiveCheckDone() for the segment including XLOG_SWITCH record,\ni.e., to create a .ready file if the segment has no archive notification file yet\nand archive_mode is set to 'always'. Even if we don't do this when we reach\nXLOG_SWITCH record, subsequent restartpoints eventually will call\nXLogArchiveCheckDone() for such segments.\n\nOne issue of this approach is that the WAL segment including XLOG_SWITCH\nrecord may be archived before its previous segments are. That is,\nthe notification file of the current segment is created when it's replayed\nbecause it includes XLOG_SWITCH, but the notification files of\nits previous segments will be created by subsequent restartpoints\nbecause they don't have XLOG_SWITCH. Probably we should avoid this?\n\nIf yes, one approach for this issue is to call XLogArchiveCheckDone() for\nnot only the segment including XLOG_SWITCH but also all the segments\nolder than that. 
Thought?\n\n\nAnyway, I extracted the changes in walreceiver from the patch,\nbecause it's self-contained and can be applied separately.\nPatch attached.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 7 Sep 2021 17:03:06 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Possible missing segments in archiving on standby" }, { "msg_contents": "At Tue, 7 Sep 2021 17:03:06 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> > + if (XLogArchivingAlways())\n> > + XLogArchiveNotify(xlogfilename, true);\n> > +\t\t\t\t\t\telse\n> > + XLogArchiveForceDone(xlogfilename);\n> > The path is used both for crash and archive recovery. If we pass there\n> > while crash recovery on a primary with archive_mode=on, the file could\n> > be marked .done before actually archived. On the other hand when\n> > archive_mode=always, the file could be re-marked .ready even after it\n> > has been already archived. Why isn't it XLogArchiveCheckDone?\n> \n> Yeah, you're right. ISTM what we should do is to just call\n> XLogArchiveCheckDone() for the segment including XLOG_SWITCH record,\n> i.e., to create .ready file if the segment has no archive notification\n> file yet\n> and archive_mode is set to 'always'. Even if we don't do this when we\n> reach\n> XLOG_SWITCH record, subsequent restartpoints eventually will call\n> XLogArchiveCheckDone() for such segments.\n> \n> One issue of this approach is that the WAL segment including\n> XLOG_SWITCH\n> record may be archived before its previous segments are. That is,\n> the notification file of current segment is created when it's replayed\n> because it includes XLOG_SWIATCH, but the notification files of\n> its previous segments will be created by subsequent restartpoints\n> because they don't have XLOG_SWITCH. 
Probably we should avoid this?\n\nAnyway there's no guarantee on the archive ordering. As discussed in\nthe nearby thread [1], newer segment is often archived earlier. I\nagree that that happens mainly on busy servers, though. The archiver\nis designed to handle such \"gaps\" and/or out-of-order segment\nnotifications. We could impose a strict ordering on archiving but I\nthink we would take total performance than such strictness.\n\nIn short, no.\n\n> If yes, one approach for this issue is to call XLogArchiveCheckDone()\n> for\n> not only the segment including XLOG_SWITCH but also all the segments\n> older than that. Thought?\n\nAt least currently, recovery fetches segments by a single process and\nevery file is marked immediately after being filled-up, so all files\nother than the latest one in pg_wal including history files should\nhave been marked for sure unless file system gets into a trouble. So I\nthink we don't need to do that even if we want the strictness.\n\nAddition to that that takes too long time when many segments reside in\npg_wal so I think we never want to run such a scan at every segment\nend that recovery passes. If I remember correctly, the reason we\ndon't fix archive status at start up but at checkpoint is we avoided\nextra startup time.\n\n> Anyway, I extracted the changes in walreceiver from the patch,\n> because it's self-contained and can be applied separately.\n> Patch attached.\n\nI'm not sure I like that XLogWalRcvClose hides the segment-switch\ncondition. If we do that check in the function, I'd like to name the\nfunction XLogWalRcvCloseIfSwitched or something indicates the\ncondition. I'd like to invert the condition to reduce indentation,\ntoo.\n\nWhy don't we call it just after writing data, as my first proposal\ndid? There's no difference in functionality between doing that and the\npatch. If we do so, recvFile>=0 is always true and that condition can\nbe removed but that would be optional. 
Anyway, by doing that, no\nlonger need to call the function twice or we can even live without the\nnew function.\n\n[1] https://www.postgresql.org/message-id/20210504042755.ehuaoz5blcnjw5yk%40alap3.anarazel.de\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 08 Sep 2021 10:45:26 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible missing segments in archiving on standby" }, { "msg_contents": "On 2021/09/08 10:45, Kyotaro Horiguchi wrote:\n> Anyway there's no guarantee on the archive ordering. As discussed in\n> the nearby thread [1], newer segment is often archived earlier. I\n> agree that that happens mainly on busy servers, though. The archiver\n> is designed to handle such \"gaps\" and/or out-of-order segment\n> notifications. We could impose a strict ordering on archiving but I\n> think we would take total performance than such strictness.\n\nYes, there are other cases causing newer WAL file to be archived earlier.\nThe issue can happen if XLogArchiveNotify() fails to create .ready file,\nfor example. Fixing only the case that we're discussing here is not enough.\nIf *general* fix is discussed at the thread you told, it's better to\ndo nothing here for the issue and to just make the startup process call\nXLogArchiveCheckDone() if it finds the WAL file including XLOG_SWITCH record.\n\n\n> At least currently, recovery fetches segments by a single process and\n> every file is marked immediately after being filled-up, so all files\n> other than the latest one in pg_wal including history files should\n> have been marked for sure unless file system gets into a trouble.\n\nYou can reproduce that situation easily by starting the server with\narchive_mode=off, generating WAL files, sometimes running pg_switch_wal(),\ncausing the server to crash, and then restarting the server with\narchive_mode=on. 
At the beginning of recovery, all the WAL files in pg_wal\ndon't have their archive notification files at all. Then, with the patch,\nonly WAL files including XLOG_SWITCH are notified for WAL archiving\nduring recovery. The other WAL files will be notified at the subsequent\ncheckpoint.\n\n\n> I'm not sure I like that XLogWalRcvClose hides the segment-switch\n> condition. If we do that check in the function, I'd like to name the\n> function XLogWalRcvCloseIfSwitched or something indicates the\n> condition. I'd like to invert the condition to reduce indentation,\n> too.\n\nWe can move the condition-check out of the function like the attached patch.\n\n\n> Why don't we call it just after writing data, as my first proposal\n> did? There's no difference in functionality between doing that and the\n> patch. If we do so, recvFile>=0 is always true and that condition can\n> be removed but that would be optional. Anyway, by doing that, no\n> longer need to call the function twice or we can even live without the\n> new function.\n\nI think that it's better and more robust to confirm that the currently-opened\nWAL file is valid target one to write WAL *before* actually writing any data\ninto it.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Wed, 8 Sep 2021 16:01:22 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Possible missing segments in archiving on standby" }, { "msg_contents": "At Wed, 8 Sep 2021 16:01:22 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2021/09/08 10:45, Kyotaro Horiguchi wrote:\n> > Anyway there's no guarantee on the archive ordering. As discussed in\n> > the nearby thread [1], newer segment is often archived earlier. I\n> > agree that that happens mainly on busy servers, though. 
The archiver\n> > is designed to handle such \"gaps\" and/or out-of-order segment\n> > notifications. We could impose a strict ordering on archiving but I\n> > think we would take total performance than such strictness.\n> \n> Yes, there are other cases causing newer WAL file to be archived\n> eariler.\n> The issue can happen if XLogArchiveNotify() fails to create .ready\n> file,\n> for example. Fixing only the case that we're discussing here is not\n> enough.\n> If *general* fix is discussed at the thread you told, it's better to\n> do nothing here for the issue and to just make the startup process\n> call\n> XLogArchiveCheckDone() if it finds the WAL file including XLOG_SWITCH\n> record.\n\nNo. The discussion taken there is not about permanently missing .ready\nfiles, but about .ready files created out-of-order. So I don't think\nthe outcome from the thread does *fix* this issue.\n\n> > At least currently, recovery fetches segments by a single process and\n> > every file is marked immediately after being filled-up, so all files\n> > other than the latest one in pg_wal including history files should\n> > have been marked for sure unless file system gets into a trouble.\n> \n> You can reproduce that situation easily by starting the server with\n> archive_mode=off, generating WAL files, sometimes running\n> pg_switch_wal(),\n> causing the server to crash, and then restarting the server with\n> archive_mode=on. At the beginning of recovery, all the WAL files in\n> pg_wal\n> don't have their archive notification files at all. Then, with the\n> patch,\n> only WAL files including XLOG_SWITCH are notified for WAL archiving\n> during recovery. The other WAL files will be notified at the\n> subsequent\n> checkpoint.\n\nI don't think we want such extent of perfectness at all for the case\nwhere some archive-related parameters are changed after a\ncrash. Anyway we need to take a backup after that and at least all\nsegments required for the backup will be properly archived. 
The\nsegments up to the XLOG_SWITCH segment are harmless garbage, or a bit\nof food for disk.\n\n> > I'm not sure I like that XLogWalRcvClose hides the segment-switch\n> > condition. If we do that check in the function, I'd like to name the\n> > function XLogWalRcvCloseIfSwitched or something indicates the\n> > condition. I'd like to invert the condition to reduce indentation,\n> > too.\n> \n> We can move the condition-check out of the function like the attached\n> patch.\n\nThanks!\n\n> > Why don't we call it just after writing data, as my first proposal\n> > did? There's no difference in functionality between doing that and the\n> > patch. If we do so, recvFile>=0 is always true and that condition can\n> > be removed but that would be optional. Anyway, by doing that, no\n> > longer need to call the function twice or we can even live without the\n> > new function.\n> \n> I think that it's better and more robust to confirm that the\n> currently-opened\n> WAL file is valid target one to write WAL *before* actually writing\n> any data\n> into it.\n\nSounds convincing. Ok, I agree to that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 08 Sep 2021 16:40:33 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible missing segments in archiving on standby" }, { "msg_contents": "On 2021/09/08 16:40, Kyotaro Horiguchi wrote:\n> No. The discussion taken there is not about permanently missing .ready\n> files, but about .ready files created out-of-order. So I don't think\n> the outcome from the thread does *fix* this issue.\n\nHmm...\n\n> I don't think we want such extent of perfectness at all for the case\n> where some archive-related parameters are changed after a\n> crash. Anyway we need to take a backup after that and at least all\n> segments required for the backup will be properly archived. 
The\n> segments up to the XLOG_SWITCH segment are harmless garbage, or a bit\n> of food for disk.\n\nSo probably we reached the consensus that something like the attached patch\n(XLogArchiveCheckDone_walfile_xlog_switch.patch) is enough for the corner\ncase where walreceiver fails to create .ready file of WAL file including\nXLOG_SWITCH?\n\n> Sounds convincing. Ok, I agree to that.\n\nSo barring any objection, I will commit the patch\nand back-patch it to all supported version.\n\nwalreceiver_notify_archive_soon_v5.patch\nwalreceiver_notify_archive_soon_v5_pg14-13.patch\nwalreceiver_notify_archive_soon_v5_pg12-11.patch\nwalreceiver_notify_archive_soon_v5_pg10.patch\nwalreceiver_notify_archive_soon_v5_pg96.patch\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Wed, 8 Sep 2021 22:41:43 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Possible missing segments in archiving on standby" }, { "msg_contents": "On 2021/09/08 22:41, Fujii Masao wrote:\n> So probably we reached the consensus that something like the attached patch\n> (XLogArchiveCheckDone_walfile_xlog_switch.patch) is enough for the corner\n> case where walreceiver fails to create .ready file of WAL file including\n> XLOG_SWITCH?\n\nI attached the patch again, just in the case.\n\n\n>> Sounds convincing.  Ok, I agree to that.\n> \n> So barring any objection, I will commit the patch\n> and back-patch it to all supported version.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Fri, 10 Sep 2021 00:09:08 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Possible missing segments in archiving on standby" } ]
[ { "msg_contents": "The first commitfest of the v14 cycle, 2020-07 is just around the corner now\nand the trend of growing the list of patches has continued, so there is a lot\nto go through.\n\nIf you have a patch registered in the commitfest, make sure it still applies\nand that the tests pass. Looking at the Patch Tester there are quite a few\npatches which no longer applies that need an updated version:\n\n\thttp://cfbot.cputube.org/\n\ncheers ./daniel\n\n\n", "msg_date": "Tue, 30 Jun 2020 10:49:34 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Commitfest 2020-07" } ]
[ { "msg_contents": "This feature adds RESPECT NULLS and IGNORE NULLS syntax to several\nwindow functions, according to the SQL Standard.\n\nUnlike the last time this was attempted[1], my version does not hardcode\nthe spec's list of functions that this applies to. Instead, it accepts\nit for all true window functions (that is, it does not apply to\naggregates acting as window functions).\n\nThis patch also does not attempt to solve the FROM LAST problem. That\nremains unimplemented.\n\nFor the CREATE FUNCTION syntax, I used TREAT NULLS so as to avoid\ncreating new keywords.\n\nThe second patch adds some new window functions in order to test that\nthe null treatment works correctly for cases that aren't covered by the\nstandard functions but that custom functions might want to use. It is\n*not* intended to be committed; I am only submitting the first patch for\ninclusion in core.\n\nThis is based off of 324435eb14.\n\n[1]\nhttps://www.postgresql.org/message-id/CAGMVOdsbtRwE_4%2Bv8zjH1d9xfovDeQAGLkP_B6k69_VoFEgX-A%40mail.gmail.com\n-- \nVik Fearing", "msg_date": "Tue, 30 Jun 2020 15:54:16 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Implement <null treatment> for window functions" }, { "msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n\n> The second patch adds some new window functions in order to test that\n> the null treatment works correctly for cases that aren't covered by the\n> standard functions but that custom functions might want to use. 
It is\n> *not* intended to be committed; I am only submitting the first patch for\n> inclusion in core.\n\nWould it make sense to add them as a test extension under\nsrc/test/modules/?\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen\n\n\n", "msg_date": "Tue, 30 Jun 2020 17:08:58 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": false, "msg_subject": "Re: Implement <null treatment> for window functions" }, { "msg_contents": "> On 30 Jun 2020, at 15:54, Vik Fearing <vik@postgresfriends.org> wrote:\n\n> This feature adds RESPECT NULLS and IGNORE NULLS syntax to several\n> window functions, according to the SQL Standard.\n\nThis fails compilation due to a compiler warning in WinGetFuncArgInPartition\nand WinGetFuncArgInFrame (same warning in both):\n\nnodeWindowAgg.c: In function ‘WinGetFuncArgInPartition’:\nnodeWindowAgg.c:3274:10: error: ‘step’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n relpos += step;\n ^\nThis was with GCC in the Travis build, the Windows build passed and so does\nclang locally for me.\n\ncheers ./daniel\n\n", "msg_date": "Wed, 1 Jul 2020 14:27:45 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Implement <null treatment> for window functions" }, { "msg_contents": "On Wed, Jul 01, 2020 at 02:27:45PM +0200, Daniel Gustafsson wrote:\n> This was with GCC in the Travis build, the Windows build passed and so does\n> clang locally for me.\n\nThis was two months ago, so this patch has been marked as returned\nwith feedback. Please feel free to resubmit once you have a new\nversion.\n--\nMichael", "msg_date": "Wed, 30 Sep 2020 16:44:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Implement <null treatment> for window functions" } ]
[ { "msg_contents": "Hi,\n\nWhen I talked about max_slot_wal_keep_size as new feature in v13\nat the conference, I received the question like \"Why are the units of\nsetting values in max_slot_wal_keep_size and wal_keep_segments different?\"\nfrom audience. That difference looks confusing for users and\nIMO it's better to use the same unit for them. Thought?\n\nThere seem to be several options to do this.\n\n(1) Rename max_slot_wal_keep_size to max_slot_wal_keep_segments,\n and make users specify the number of WAL segments in it instead of\n WAL size.\n\n(2) Rename wal_keep_segments to wal_keep_size, and make users specify\n the WAL size in it instead of the number of WAL segments.\n\n(3) Don't rename the parameters, and allow users to specify not only\n the number of WAL segments but also the WAL size in wal_keep_segments.\n\nSince we have been moving away from measuring in segments, e.g.,\nmax_wal_size, I don't think (1) is good idea.\n\nFor backward compatibility, (3) is better. But which needs more\n(maybe a bit complicated) code in guc.c. Also the parameter names\nare not consistent yet (i.e., _segments and _size).\n\nSo for now I like (2).\n\nThought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 30 Jun 2020 23:51:40 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "max_slot_wal_keep_size and wal_keep_segments" }, { "msg_contents": "At Tue, 30 Jun 2020 23:51:40 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> Hi,\n> \n> When I talked about max_slot_wal_keep_size as new feature in v13\n> at the conference, I received the question like \"Why are the units of\n> setting values in max_slot_wal_keep_size and wal_keep_segments\n> different?\"\n> from audience. That difference looks confusing for users and\n> IMO it's better to use the same unit for them. 
Thought?\n\nWe are moving the units for amount of WAL from segments to MB. The\nvariable is affected by the movement. I'm not sure wal_keep_segments\nis going to die soon but we may change it to wal_keep_size(_mb) sooner\nor later if it is going to stay alive.\n\n> There seem to be several options to do this.\n> \n> (1) Rename max_slot_wal_keep_size to max_slot_wal_keep_segments,\n> and make users specify the number of WAL segments in it instead of\n> WAL size.\n\nI don't think this is the way.\n\n> (2) Rename wal_keep_segments to wal_keep_size, and make users specify\n> the WAL size in it instead of the number of WAL segments.\n\nYes. I agree to this (as I wrote above before reading this).\n\n> (3) Don't rename the parameters, and allow users to specify not only\n> the number of WAL segments but also the WAL size in wal_keep_segments.\n\nPossible in a short term, but not for a long term.\n\n> Since we have been moving away from measuring in segments, e.g.,\n> max_wal_size, I don't think (1) is good idea.\n> \n> For backward compatibility, (3) is better. But which needs more\n> (maybe a bit complicated) code in guc.c. Also the parameter names\n> are not consistent yet (i.e., _segments and _size).\n> \n> So for now I like (2).\n> \n> Thought?\n\nI agree with you. If someone found that wal_keep_segments is no longer\nusable, the alternative would easily be found by searching the config file\nfor \"wal_keep\". 
Or we could have a default config line like this:\n\nwal_keep_size = 0 # in megabytes: 0 disables (formerly wal_keep_segments)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 01 Jul 2020 09:19:18 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: max_slot_wal_keep_size and wal_keep_segments" }, { "msg_contents": "On 2020-Jun-30, Fujii Masao wrote:\n\n> Hi,\n> \n> When I talked about max_slot_wal_keep_size as new feature in v13\n> at the conference, I received the question like \"Why are the units of\n> setting values in max_slot_wal_keep_size and wal_keep_segments different?\"\n> from audience. That difference looks confusing for users and\n> IMO it's better to use the same unit for them. Thought?\n\nDo we still need wal_keep_segments for anything? Maybe we should\nconsider removing that functionality instead ... and even if we don't\nremove it in 13, then what is the point of renaming it only to remove it\nshortly after?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 30 Jun 2020 23:26:05 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: max_slot_wal_keep_size and wal_keep_segments" }, { "msg_contents": "\n\nOn 2020/07/01 12:26, Alvaro Herrera wrote:\n> On 2020-Jun-30, Fujii Masao wrote:\n> \n>> Hi,\n>>\n>> When I talked about max_slot_wal_keep_size as new feature in v13\n>> at the conference, I received the question like \"Why are the units of\n>> setting values in max_slot_wal_keep_size and wal_keep_segments different?\"\n>> from audience. That difference looks confusing for users and\n>> IMO it's better to use the same unit for them. 
Thought?\n> \n> Do we still need wal_keep_segments for anything?\n\nYeah, personally I like wal_keep_segments because its setting is very\nsimple and no extra operations on replication slots are necessary.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 1 Jul 2020 21:50:56 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: max_slot_wal_keep_size and wal_keep_segments" }, { "msg_contents": "On 2020-Jul-01, Fujii Masao wrote:\n\n> On 2020/07/01 12:26, Alvaro Herrera wrote:\n> > On 2020-Jun-30, Fujii Masao wrote:\n> > \n> > > When I talked about max_slot_wal_keep_size as new feature in v13\n> > > at the conference, I received the question like \"Why are the units of\n> > > setting values in max_slot_wal_keep_size and wal_keep_segments different?\"\n> > > from audience. That difference looks confusing for users and\n> > > IMO it's better to use the same unit for them. Thought?\n> > \n> > Do we still need wal_keep_segments for anything?\n> \n> Yeah, personally I like wal_keep_segments because its setting is very\n> simple and no extra operations on replication slots are necessary.\n\nOkay. 
In that case I +1 the idea of renaming to wal_keep_size.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 1 Jul 2020 10:54:28 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: max_slot_wal_keep_size and wal_keep_segments" }, { "msg_contents": "On 7/1/20 10:54 AM, Alvaro Herrera wrote:\n> On 2020-Jul-01, Fujii Masao wrote:\n> \n>> On 2020/07/01 12:26, Alvaro Herrera wrote:\n>>> On 2020-Jun-30, Fujii Masao wrote:\n>>>\n>>>> When I talked about max_slot_wal_keep_size as new feature in v13\n>>>> at the conference, I received the question like \"Why are the units of\n>>>> setting values in max_slot_wal_keep_size and wal_keep_segments different?\"\n>>>> from audience. That difference looks confusing for users and\n>>>> IMO it's better to use the same unit for them. Thought?\n>>>\n>>> Do we still need wal_keep_segments for anything?\n>>\n>> Yeah, personally I like wal_keep_segments because its setting is very\n>> simple and no extra operations on replication slots are necessary.\n> \n> Okay. 
In that case I +1 the idea of renaming to wal_keep_size.\n\n+1 for renaming to wal_keep_size.\n\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 1 Jul 2020 13:18:06 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: max_slot_wal_keep_size and wal_keep_segments" }, { "msg_contents": "On Wed, Jul 1, 2020 at 01:18:06PM -0400, David Steele wrote:\n> On 7/1/20 10:54 AM, Alvaro Herrera wrote:\n> > On 2020-Jul-01, Fujii Masao wrote:\n> > \n> > > On 2020/07/01 12:26, Alvaro Herrera wrote:\n> > > > On 2020-Jun-30, Fujii Masao wrote:\n> > > > \n> > > > > When I talked about max_slot_wal_keep_size as new feature in v13\n> > > > > at the conference, I received the question like \"Why are the units of\n> > > > > setting values in max_slot_wal_keep_size and wal_keep_segments different?\"\n> > > > > from audience. That difference looks confusing for users and\n> > > > > IMO it's better to use the same unit for them. Thought?\n> > > > \n> > > > Do we still need wal_keep_segments for anything?\n> > > \n> > > Yeah, personally I like wal_keep_segments because its setting is very\n> > > simple and no extra operations on replication slots are necessary.\n> > \n> > Okay. 
In that case I +1 the idea of renaming to wal_keep_size.\n> \n> +1 for renaming to wal_keep_size.\n\nWe have the following wal*size GUC settings:\n\n\tSELECT name FROM pg_settings WHERE name LIKE '%wal%size%';\n\t name\n\t------------------------\n\t max_slot_wal_keep_size\n\t max_wal_size\n\t min_wal_size\n\t wal_block_size\n\t wal_segment_size\n\nDoes wal_keep_size make sense here?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 1 Jul 2020 15:45:08 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: max_slot_wal_keep_size and wal_keep_segments" }, { "msg_contents": "On 2020-Jul-01, Bruce Momjian wrote:\n\n> We have the following wal*size GUC settings:\n> \n> \tSELECT name FROM pg_settings WHERE name LIKE '%wal%size%';\n> \t name\n> \t------------------------\n> \t max_slot_wal_keep_size\n> \t max_wal_size\n> \t min_wal_size\n> \t wal_block_size\n> \t wal_segment_size\n> \n> Does wal_keep_size make sense here?\n\nI think it does. What do you think?\n\nAre you suggesting that \"keep_wal_size\" is better, since it's more in\nline with min/max? I lean towards no.\n\n(I think it's okay to conceptually separate these three options from\nwal_block_size, since that's a compile time option and thus it's an\nintrospective GUC rather than actual configuration, but as I recall that\nargument does not hold for wal_segment_size. 
But at one point, even that\none was an introspective GUC too.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 1 Jul 2020 16:25:35 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: max_slot_wal_keep_size and wal_keep_segments" }, { "msg_contents": "On Wed, Jul 1, 2020 at 04:25:35PM -0400, Alvaro Herrera wrote:\n> On 2020-Jul-01, Bruce Momjian wrote:\n> \n> > We have the following wal*size GUC settings:\n> > \n> > \tSELECT name FROM pg_settings WHERE name LIKE '%wal%size%';\n> > \t name\n> > \t------------------------\n> > \t max_slot_wal_keep_size\n> > \t max_wal_size\n> > \t min_wal_size\n> > \t wal_block_size\n> > \t wal_segment_size\n> > \n> > Does wal_keep_size make sense here?\n> \n> I think it does. What do you think?\n> \n> Are you suggesting that \"keep_wal_size\" is better, since it's more in\n> line with min/max? I lean towards no.\n\nNo, I am more just asking since I saw wal_keep_size as a special version\nof wal_size. I don't have a firm opinion.\n\n> \n> (I think it's okay to conceptually separate these three options from\n> wal_block_size, since that's a compile time option and thus it's an\n> introspective GUC rather than actual configuration, but as I recall that\n> argument does not hold for wal_segment_size. 
But at one point, even that\n> one was an introspective GUC too.)\n\nYep, just asking.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 1 Jul 2020 16:33:54 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: max_slot_wal_keep_size and wal_keep_segments" }, { "msg_contents": "On Thu, 2 Jul 2020 at 02:18, David Steele <david@pgmasters.net> wrote:\n>\n> On 7/1/20 10:54 AM, Alvaro Herrera wrote:\n> > On 2020-Jul-01, Fujii Masao wrote:\n> >\n> >> On 2020/07/01 12:26, Alvaro Herrera wrote:\n> >>> On 2020-Jun-30, Fujii Masao wrote:\n> >>>\n> >>>> When I talked about max_slot_wal_keep_size as new feature in v13\n> >>>> at the conference, I received the question like \"Why are the units of\n> >>>> setting values in max_slot_wal_keep_size and wal_keep_segments different?\"\n> >>>> from audience. That difference looks confusing for users and\n> >>>> IMO it's better to use the same unit for them. Thought?\n> >>>\n> >>> Do we still need wal_keep_segments for anything?\n> >>\n> >> Yeah, personally I like wal_keep_segments because its setting is very\n> >> simple and no extra operations on replication slots are necessary.\n> >\n> > Okay. 
In that case I +1 the idea of renaming to wal_keep_size.\n>\n> +1 for renaming to wal_keep_size.\n>\n\n+1 from me, too.\n\nRegards,\n\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 3 Jul 2020 16:54:40 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: max_slot_wal_keep_size and wal_keep_segments" }, { "msg_contents": "On 2020/07/02 2:18, David Steele wrote:\n> On 7/1/20 10:54 AM, Alvaro Herrera wrote:\n>> On 2020-Jul-01, Fujii Masao wrote:\n>>\n>>> On 2020/07/01 12:26, Alvaro Herrera wrote:\n>>>> On 2020-Jun-30, Fujii Masao wrote:\n>>>>\n>>>>> When I talked about max_slot_wal_keep_size as new feature in v13\n>>>>> at the conference, I received the question like \"Why are the units of\n>>>>> setting values in max_slot_wal_keep_size and wal_keep_segments different?\"\n>>>>> from audience. That difference looks confusing for users and\n>>>>> IMO it's better to use the same unit for them. Thought?\n>>>>\n>>>> Do we still need wal_keep_segments for anything?\n>>>\n>>> Yeah, personally I like wal_keep_segments because its setting is very\n>>> simple and no extra operations on replication slots are necessary.\n>>\n>> Okay.  In that case I +1 the idea of renaming to wal_keep_size.\n> \n> +1 for renaming to wal_keep_size.\n\nI attached the patch that renames wal_keep_segments to wal_keep_size.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 9 Jul 2020 00:37:57 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: max_slot_wal_keep_size and wal_keep_segments" }, { "msg_contents": "On 2020-Jul-09, Fujii Masao wrote:\n\n> I attached the patch that renames wal_keep_segments to wal_keep_size.\n\nLooks good in a quick once-over. 
Just two small wording comments:\n\n> <para>\n> Independently of <varname>max_wal_size</varname>,\n> - <xref linkend=\"guc-wal-keep-segments\"/> + 1 most recent WAL files are\n> + most recent <xref linkend=\"guc-wal-keep-size\"/> megabytes\n> + WAL files plus one WAL file are\n> kept at all times. Also, if WAL archiving is used, old segments can not be\n> removed or recycled until they are archived. If WAL archiving cannot keep up\n> with the pace that WAL is generated, or if <varname>archive_command</varname>\n\nThis reads a little strange to me. Maybe \"the N most recent megabytes\nplus ...\"\n\n> \t\t\t/* determine how many segments slots can be kept by slots ... */\n> -\t\t\tkeepSegs = XLogMBVarToSegs(max_slot_wal_keep_size_mb, wal_segment_size);\n> -\t\t\t/* ... and override by wal_keep_segments as needed */\n> -\t\t\tkeepSegs = Max(keepSegs, wal_keep_segments);\n> +\t\t\tslotKeepSegs = XLogMBVarToSegs(max_slot_wal_keep_size_mb, wal_segment_size);\n> +\t\t\t/* ... and override by wal_keep_size as needed */\n> +\t\t\tkeepSegs = XLogMBVarToSegs(wal_keep_size_mb, wal_segment_size);\n\nSince you change the way these two variables are used, I would not say\n\"override\" in the above comment, nor keep the ellipses; perhaps just\nchange them to \"determine how many segments can be kept by slots\" and\n\"ditto for wal_keep_size\".\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 8 Jul 2020 12:20:31 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: max_slot_wal_keep_size and wal_keep_segments" }, { "msg_contents": "At Thu, 9 Jul 2020 00:37:57 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/07/02 2:18, David Steele wrote:\n> > On 7/1/20 10:54 AM, Alvaro Herrera wrote:\n> >> On 2020-Jul-01, Fujii Masao wrote:\n> >>\n> >>> On 2020/07/01 12:26, Alvaro Herrera wrote:\n> >>>> On 2020-Jun-30, 
Fujii Masao wrote:\n> >>>>\n> >>>>> When I talked about max_slot_wal_keep_size as new feature in v13\n> >>>>> at the conference, I received the question like \"Why are the units of\n> >>>>> setting values in max_slot_wal_keep_size and wal_keep_segments\n> >>>>> different?\"\n> >>>>> from audience. That difference looks confusing for users and\n> >>>>> IMO it's better to use the same unit for them. Thought?\n> >>>>\n> >>>> Do we still need wal_keep_segments for anything?\n> >>>\n> >>> Yeah, personally I like wal_keep_segments because its setting is very\n> >>> simple and no extra operations on replication slots are necessary.\n> >>\n> >> Okay.  In that case I +1 the idea of renaming to wal_keep_size.\n> > +1 for renaming to wal_keep_size.\n> \n> I attached the patch that renames wal_keep_segments to wal_keep_size.\n\nIt fails on 019_replslot_limit.pl for uncertain reason to me..\n\n\n@@ -11323,7 +11329,7 @@ do_pg_stop_backup(char *labelfile, bool waitforarchive, TimeLineID *stoptli_p)\n \t * If archiving is enabled, wait for all the required WAL files to be\n \t * archived before returning. If archiving isn't enabled, the required WAL\n \t * needs to be transported via streaming replication (hopefully with\n-\t * wal_keep_segments set high enough), or some more exotic mechanism like\n+\t * wal_keep_size set high enough), or some more exotic mechanism like\n \t * polling and copying files from pg_wal with script. 
We have no knowledge\n\nIsn't this time a good chance to mention replication slots?\n\n\n-\t\"ALTER SYSTEM SET wal_keep_segments to 8; SELECT pg_reload_conf();\");\n+\t\"ALTER SYSTEM SET wal_keep_size to '128MB'; SELECT pg_reload_conf();\");\n\nwal_segment_size to 1MB here so, that conversion is not correct.\n(However, that test works as long as it is more than\nmax_slot_wal_keep_size so it's practically no problem.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 09 Jul 2020 13:47:43 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: max_slot_wal_keep_size and wal_keep_segments" }, { "msg_contents": "On 2020/07/09 1:20, Alvaro Herrera wrote:\n> On 2020-Jul-09, Fujii Masao wrote:\n> \n>> I attached the patch that renames wal_keep_segments to wal_keep_size.\n> \n> Looks good in a quick once-over. Just two small wording comments:\n\nThanks for review comments!\n\n\n> \n>> <para>\n>> Independently of <varname>max_wal_size</varname>,\n>> - <xref linkend=\"guc-wal-keep-segments\"/> + 1 most recent WAL files are\n>> + most recent <xref linkend=\"guc-wal-keep-size\"/> megabytes\n>> + WAL files plus one WAL file are\n>> kept at all times. Also, if WAL archiving is used, old segments can not be\n>> removed or recycled until they are archived. If WAL archiving cannot keep up\n>> with the pace that WAL is generated, or if <varname>archive_command</varname>\n> \n> This reads a little strange to me. Maybe \"the N most recent megabytes\n> plus ...\"\n\nYes, fixed.\n\n\n> \n>> \t\t\t/* determine how many segments slots can be kept by slots ... */\n>> -\t\t\tkeepSegs = XLogMBVarToSegs(max_slot_wal_keep_size_mb, wal_segment_size);\n>> -\t\t\t/* ... and override by wal_keep_segments as needed */\n>> -\t\t\tkeepSegs = Max(keepSegs, wal_keep_segments);\n>> +\t\t\tslotKeepSegs = XLogMBVarToSegs(max_slot_wal_keep_size_mb, wal_segment_size);\n>> +\t\t\t/* ... 
and override by wal_keep_size as needed */\n>> +\t\t\tkeepSegs = XLogMBVarToSegs(wal_keep_size_mb, wal_segment_size);\n> \n> Since you change the way these two variables are used, I would not say\n> \"override\" in the above comment, nor keep the ellipses; perhaps just\n> change them to \"determine how many segments can be kept by slots\" and\n> \"ditto for wal_keep_size\".\n\nYes, fixed.\n\nAttached is the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Mon, 13 Jul 2020 14:11:05 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: max_slot_wal_keep_size and wal_keep_segments" }, { "msg_contents": "\n\nOn 2020/07/09 13:47, Kyotaro Horiguchi wrote:\n> At Thu, 9 Jul 2020 00:37:57 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>\n>>\n>> On 2020/07/02 2:18, David Steele wrote:\n>>> On 7/1/20 10:54 AM, Alvaro Herrera wrote:\n>>>> On 2020-Jul-01, Fujii Masao wrote:\n>>>>\n>>>>> On 2020/07/01 12:26, Alvaro Herrera wrote:\n>>>>>> On 2020-Jun-30, Fujii Masao wrote:\n>>>>>>\n>>>>>>> When I talked about max_slot_wal_keep_size as new feature in v13\n>>>>>>> at the conference, I received the question like \"Why are the units of\n>>>>>>> setting values in max_slot_wal_keep_size and wal_keep_segments\n>>>>>>> different?\"\n>>>>>>> from audience. That difference looks confusing for users and\n>>>>>>> IMO it's better to use the same unit for them. Thought?\n>>>>>>\n>>>>>> Do we still need wal_keep_segments for anything?\n>>>>>\n>>>>> Yeah, personally I like wal_keep_segments because its setting is very\n>>>>> simple and no extra operations on replication slots are necessary.\n>>>>\n>>>> Okay.  
In that case I +1 the idea of renaming to wal_keep_size.\n>>> +1 for renaming to wal_keep_size.\n>>\n>> I attached the patch that renames wal_keep_segments to wal_keep_size.\n> \n> It fails on 019_replslot_limit.pl for uncertain reason to me..\n\nI could not reproduce this...\n\n\n> \n> \n> @@ -11323,7 +11329,7 @@ do_pg_stop_backup(char *labelfile, bool waitforarchive, TimeLineID *stoptli_p)\n> \t * If archiving is enabled, wait for all the required WAL files to be\n> \t * archived before returning. If archiving isn't enabled, the required WAL\n> \t * needs to be transported via streaming replication (hopefully with\n> -\t * wal_keep_segments set high enough), or some more exotic mechanism like\n> +\t * wal_keep_size set high enough), or some more exotic mechanism like\n> \t * polling and copying files from pg_wal with script. We have no knowledge\n> \n> Isn't this time a good chance to mention replication slots?\n\n+1 to do that. But I found there are other places where replication slots\nneed to be mentioned. So I think it's better to do this as separate patch.\n\n\n> \n> \n> -\t\"ALTER SYSTEM SET wal_keep_segments to 8; SELECT pg_reload_conf();\");\n> +\t\"ALTER SYSTEM SET wal_keep_size to '128MB'; SELECT pg_reload_conf();\");\n> \n> wal_segment_size to 1MB here so, that conversion is not correct.\n> (However, that test works as long as it is more than\n> max_slot_wal_keep_size so it's practically no problem.)\n\nSo I changed 128MB to 8MB. 
Is this OK?\nI attached the updated version of the patch upthread.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 13 Jul 2020 14:14:30 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: max_slot_wal_keep_size and wal_keep_segments" }, { "msg_contents": "At Mon, 13 Jul 2020 14:14:30 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/07/09 13:47, Kyotaro Horiguchi wrote:\n> > At Thu, 9 Jul 2020 00:37:57 +0900, Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote in\n> >>\n> >>\n> >> On 2020/07/02 2:18, David Steele wrote:\n> >>> On 7/1/20 10:54 AM, Alvaro Herrera wrote:\n> >>>> On 2020-Jul-01, Fujii Masao wrote:\n> >>>>\n> >>>>> On 2020/07/01 12:26, Alvaro Herrera wrote:\n> >>>>>> On 2020-Jun-30, Fujii Masao wrote:\n> >>>>>>\n> >>>>>>> When I talked about max_slot_wal_keep_size as new feature in v13\n> >>>>>>> at the conference, I received the question like \"Why are the units of\n> >>>>>>> setting values in max_slot_wal_keep_size and wal_keep_segments\n> >>>>>>> different?\"\n> >>>>>>> from audience. That difference looks confusing for users and\n> >>>>>>> IMO it's better to use the same unit for them. Thought?\n> >>>>>>\n> >>>>>> Do we still need wal_keep_segments for anything?\n> >>>>>\n> >>>>> Yeah, personally I like wal_keep_segments because its setting is very\n> >>>>> simple and no extra operations on replication slots are necessary.\n> >>>>\n> >>>> Okay.  In that case I +1 the idea of renaming to wal_keep_size.\n> >>> +1 for renaming to wal_keep_size.\n> >>\n> >> I attached the patch that renames wal_keep_segments to wal_keep_size.\n> > It fails on 019_replslot_limit.pl for uncertain reason to me..\n> \n> I could not reproduce this...\n\nSorry for the ambiguity. 
The patch didn't applied on the file, and I\nnoticed that the reason is the wording change from master to\nprimary. So no problem in the latest patch.\n\n> > @@ -11323,7 +11329,7 @@ do_pg_stop_backup(char *labelfile, bool\n> > waitforarchive, TimeLineID *stoptli_p)\n> > \t * If archiving is enabled, wait for all the required WAL files to be\n> > \t * archived before returning. If archiving isn't enabled, the required\n> > \t * WAL\n> > \t * needs to be transported via streaming replication (hopefully with\n> > -\t * wal_keep_segments set high enough), or some more exotic mechanism like\n> > + * wal_keep_size set high enough), or some more exotic mechanism like\n> > \t * polling and copying files from pg_wal with script. We have no\n> > \t * knowledge\n> > Isn't this time a good chance to mention replication slots?\n> \n> +1 to do that. But I found there are other places where replication\n> slots\n> need to be mentioned. So I think it's better to do this as separate\n> patch.\n\nAgreed.\n\n> > -\t\"ALTER SYSTEM SET wal_keep_segments to 8; SELECT pg_reload_conf();\");\n> > + \"ALTER SYSTEM SET wal_keep_size to '128MB'; SELECT\n> > pg_reload_conf();\");\n> > wal_segment_size to 1MB here so, that conversion is not correct.\n> > (However, that test works as long as it is more than\n> > max_slot_wal_keep_size so it's practically no problem.)\n> \n> So I changed 128MB to 8MB. 
Is this OK?\n> I attached the updated version of the patch upthread.\n\nThat change looks find by me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 13 Jul 2020 16:01:11 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: max_slot_wal_keep_size and wal_keep_segments" }, { "msg_contents": "On 2020/07/13 16:01, Kyotaro Horiguchi wrote:\n> At Mon, 13 Jul 2020 14:14:30 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>\n>>\n>> On 2020/07/09 13:47, Kyotaro Horiguchi wrote:\n>>> At Thu, 9 Jul 2020 00:37:57 +0900, Fujii Masao\n>>> <masao.fujii@oss.nttdata.com> wrote in\n>>>>\n>>>>\n>>>> On 2020/07/02 2:18, David Steele wrote:\n>>>>> On 7/1/20 10:54 AM, Alvaro Herrera wrote:\n>>>>>> On 2020-Jul-01, Fujii Masao wrote:\n>>>>>>\n>>>>>>> On 2020/07/01 12:26, Alvaro Herrera wrote:\n>>>>>>>> On 2020-Jun-30, Fujii Masao wrote:\n>>>>>>>>\n>>>>>>>>> When I talked about max_slot_wal_keep_size as new feature in v13\n>>>>>>>>> at the conference, I received the question like \"Why are the units of\n>>>>>>>>> setting values in max_slot_wal_keep_size and wal_keep_segments\n>>>>>>>>> different?\"\n>>>>>>>>> from audience. That difference looks confusing for users and\n>>>>>>>>> IMO it's better to use the same unit for them. Thought?\n>>>>>>>>\n>>>>>>>> Do we still need wal_keep_segments for anything?\n>>>>>>>\n>>>>>>> Yeah, personally I like wal_keep_segments because its setting is very\n>>>>>>> simple and no extra operations on replication slots are necessary.\n>>>>>>\n>>>>>> Okay.  In that case I +1 the idea of renaming to wal_keep_size.\n>>>>> +1 for renaming to wal_keep_size.\n>>>>\n>>>> I attached the patch that renames wal_keep_segments to wal_keep_size.\n>>> It fails on 019_replslot_limit.pl for uncertain reason to me..\n>>\n>> I could not reproduce this...\n> \n> Sorry for the ambiguity. 
The patch didn't applied on the file, and I\n> noticed that the reason is the wording change from master to\n> primary. So no problem in the latest patch.\n> \n>>> @@ -11323,7 +11329,7 @@ do_pg_stop_backup(char *labelfile, bool\n>>> waitforarchive, TimeLineID *stoptli_p)\n>>> \t * If archiving is enabled, wait for all the required WAL files to be\n>>> \t * archived before returning. If archiving isn't enabled, the required\n>>> \t * WAL\n>>> \t * needs to be transported via streaming replication (hopefully with\n>>> -\t * wal_keep_segments set high enough), or some more exotic mechanism like\n>>> + * wal_keep_size set high enough), or some more exotic mechanism like\n>>> \t * polling and copying files from pg_wal with script. We have no\n>>> \t * knowledge\n>>> Isn't this time a good chance to mention replication slots?\n>>\n>> +1 to do that. But I found there are other places where replication\n>> slots\n>> need to be mentioned. So I think it's better to do this as separate\n>> patch.\n> \n> Agreed.\n> \n>>> -\t\"ALTER SYSTEM SET wal_keep_segments to 8; SELECT pg_reload_conf();\");\n>>> + \"ALTER SYSTEM SET wal_keep_size to '128MB'; SELECT\n>>> pg_reload_conf();\");\n>>> wal_segment_size to 1MB here so, that conversion is not correct.\n>>> (However, that test works as long as it is more than\n>>> max_slot_wal_keep_size so it's practically no problem.)\n>>\n>> So I changed 128MB to 8MB. Is this OK?\n>> I attached the updated version of the patch upthread.\n> \n> That change looks find by me.\n\nThanks for the review!\n\nThe patch was no longer applied cleanly because of recent commit.\nSo I updated the patch. 
Attached.\n\nBarring any objection, I will commit this patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 14 Jul 2020 13:00:35 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: max_slot_wal_keep_size and wal_keep_segments" }, { "msg_contents": "On 7/14/20 12:00 AM, Fujii Masao wrote:\n> \n> The patch was no longer applied cleanly because of recent commit.\n> So I updated the patch. Attached.\n> \n> Barring any objection, I will commit this patch.\n\nThis doesn't look right:\n\n+ the <xref linkend=\"guc-wal-keep-size\"/> most recent megabytes\n+ WAL files plus one WAL file are\n\nHow about:\n\n+ <xref linkend=\"guc-wal-keep-size\"/> megabytes of\n+ WAL files plus one WAL file are\n\nOther than that, looks good to me.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Tue, 14 Jul 2020 07:30:24 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: max_slot_wal_keep_size and wal_keep_segments" }, { "msg_contents": "\n\nOn 2020/07/14 20:30, David Steele wrote:\n> On 7/14/20 12:00 AM, Fujii Masao wrote:\n>>\n>> The patch was no longer applied cleanly because of recent commit.\n>> So I updated the patch. Attached.\n>>\n>> Barring any objection, I will commit this patch.\n> \n> This doesn't look right:\n> \n> +   the <xref linkend=\"guc-wal-keep-size\"/> most recent megabytes\n> +   WAL files plus one WAL file are\n> \n> How about:\n> \n> +   <xref linkend=\"guc-wal-keep-size\"/> megabytes of\n> +   WAL files plus one WAL file are\n\nThanks for the comment! Isn't it better to keep \"most recent\" part?\nIf so, what about either of the followings?\n\n1. <xref linkend=\"guc-wal-keep-size\"/> megabytes of WAL files plus\n one WAL file that were most recently generated are kept all time.\n\n2. 
<xref linkend=\"guc-wal-keep-size\"/> megabytes + <xref linkend=\"guc-wal-segment-size\"> bytes\n of WAL files that were most recently generated are kept all time.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 17 Jul 2020 18:11:36 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: max_slot_wal_keep_size and wal_keep_segments" }, { "msg_contents": "\nOn 7/17/20 5:11 AM, Fujii Masao wrote:\n> \n> \n> On 2020/07/14 20:30, David Steele wrote:\n>> On 7/14/20 12:00 AM, Fujii Masao wrote:\n>>>\n>>> The patch was no longer applied cleanly because of recent commit.\n>>> So I updated the patch. Attached.\n>>>\n>>> Barring any objection, I will commit this patch.\n>>\n>> This doesn't look right:\n>>\n>> +   the <xref linkend=\"guc-wal-keep-size\"/> most recent megabytes\n>> +   WAL files plus one WAL file are\n>>\n>> How about:\n>>\n>> +   <xref linkend=\"guc-wal-keep-size\"/> megabytes of\n>> +   WAL files plus one WAL file are\n> \n> Thanks for the comment! Isn't it better to keep \"most recent\" part?\n> If so, what about either of the followings?\n> \n> 1. <xref linkend=\"guc-wal-keep-size\"/> megabytes of WAL files plus\n>     one WAL file that were most recently generated are kept all time.\n> \n> 2. 
<xref linkend=\"guc-wal-keep-size\"/> megabytes + <xref \n> linkend=\"guc-wal-segment-size\"> bytes\n>     of WAL files that were most recently generated are kept all time.\n\n\"most recent\" seemed implied to me, but I see your point.\n\nHow about:\n\n+ the most recent <xref linkend=\"guc-wal-keep-size\"/> megabytes of\n+ WAL files plus one additional WAL file are\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Fri, 17 Jul 2020 07:24:07 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: max_slot_wal_keep_size and wal_keep_segments" }, { "msg_contents": "\n\nOn 2020/07/17 20:24, David Steele wrote:\n> \n> On 7/17/20 5:11 AM, Fujii Masao wrote:\n>>\n>>\n>> On 2020/07/14 20:30, David Steele wrote:\n>>> On 7/14/20 12:00 AM, Fujii Masao wrote:\n>>>>\n>>>> The patch was no longer applied cleanly because of recent commit.\n>>>> So I updated the patch. Attached.\n>>>>\n>>>> Barring any objection, I will commit this patch.\n>>>\n>>> This doesn't look right:\n>>>\n>>> +   the <xref linkend=\"guc-wal-keep-size\"/> most recent megabytes\n>>> +   WAL files plus one WAL file are\n>>>\n>>> How about:\n>>>\n>>> +   <xref linkend=\"guc-wal-keep-size\"/> megabytes of\n>>> +   WAL files plus one WAL file are\n>>\n>> Thanks for the comment! Isn't it better to keep \"most recent\" part?\n>> If so, what about either of the followings?\n>>\n>> 1. <xref linkend=\"guc-wal-keep-size\"/> megabytes of WAL files plus\n>>      one WAL file that were most recently generated are kept all time.\n>>\n>> 2. <xref linkend=\"guc-wal-keep-size\"/> megabytes + <xref linkend=\"guc-wal-segment-size\"> bytes\n>>      of WAL files that were most recently generated are kept all time.\n> \n> \"most recent\" seemed implied to me, but I see your point.\n> \n> How about:\n> \n> +   the most recent <xref linkend=\"guc-wal-keep-size\"/> megabytes of\n> +   WAL files plus one additional WAL file are\n\nI adopted this and pushed the patch. 
Thanks!\n\nAlso we need to update the release note for v13. What about adding the following?\n\n------------------------------------\nRename configuration parameter wal_keep_segments to wal_keep_size.\n\nThis allows how much WAL files to retain for the standby server, by bytes instead of the number of files.\nIf you previously used wal_keep_segments, the following formula will give you an approximately equivalent setting:\n\nwal_keep_size = wal_keep_segments * wal_segment_size (typically 16MB)\n------------------------------------\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 20 Jul 2020 13:48:37 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: max_slot_wal_keep_size and wal_keep_segments" }, { "msg_contents": "On 2020/07/20 13:48, Fujii Masao wrote:\n> \n> \n> On 2020/07/17 20:24, David Steele wrote:\n>>\n>> On 7/17/20 5:11 AM, Fujii Masao wrote:\n>>>\n>>>\n>>> On 2020/07/14 20:30, David Steele wrote:\n>>>> On 7/14/20 12:00 AM, Fujii Masao wrote:\n>>>>>\n>>>>> The patch was no longer applied cleanly because of recent commit.\n>>>>> So I updated the patch. Attached.\n>>>>>\n>>>>> Barring any objection, I will commit this patch.\n>>>>\n>>>> This doesn't look right:\n>>>>\n>>>> +   the <xref linkend=\"guc-wal-keep-size\"/> most recent megabytes\n>>>> +   WAL files plus one WAL file are\n>>>>\n>>>> How about:\n>>>>\n>>>> +   <xref linkend=\"guc-wal-keep-size\"/> megabytes of\n>>>> +   WAL files plus one WAL file are\n>>>\n>>> Thanks for the comment! Isn't it better to keep \"most recent\" part?\n>>> If so, what about either of the followings?\n>>>\n>>> 1. <xref linkend=\"guc-wal-keep-size\"/> megabytes of WAL files plus\n>>>      one WAL file that were most recently generated are kept all time.\n>>>\n>>> 2. 
<xref linkend=\"guc-wal-keep-size\"/> megabytes + <xref linkend=\"guc-wal-segment-size\"> bytes\n>>>      of WAL files that were most recently generated are kept all time.\n>>\n>> \"most recent\" seemed implied to me, but I see your point.\n>>\n>> How about:\n>>\n>> +   the most recent <xref linkend=\"guc-wal-keep-size\"/> megabytes of\n>> +   WAL files plus one additional WAL file are\n> \n> I adopted this and pushed the patch. Thanks!\n> \n> Also we need to update the release note for v13. What about adding the following?\n> \n> ------------------------------------\n> Rename configuration parameter wal_keep_segments to wal_keep_size.\n> \n> This allows how much WAL files to retain for the standby server, by bytes instead of the number of files.\n> If you previously used wal_keep_segments, the following formula will give you an approximately equivalent setting:\n> \n> wal_keep_size = wal_keep_segments * wal_segment_size (typically 16MB)\n> ------------------------------------\n\nPatch attached.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Mon, 20 Jul 2020 19:02:50 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: max_slot_wal_keep_size and wal_keep_segments" }, { "msg_contents": "On 7/20/20 6:02 AM, Fujii Masao wrote:\n> \n> \n> On 2020/07/20 13:48, Fujii Masao wrote:\n>>\n>>\n>> On 2020/07/17 20:24, David Steele wrote:\n>>>\n>>> On 7/17/20 5:11 AM, Fujii Masao wrote:\n>>>>\n>>>>\n>>>> On 2020/07/14 20:30, David Steele wrote:\n>>>>> On 7/14/20 12:00 AM, Fujii Masao wrote:\n>>>>>>\n>>>>>> The patch was no longer applied cleanly because of recent commit.\n>>>>>> So I updated the patch. 
Attached.\n>>>>>>\n>>>>>> Barring any objection, I will commit this patch.\n>>>>>\n>>>>> This doesn't look right:\n>>>>>\n>>>>> +   the <xref linkend=\"guc-wal-keep-size\"/> most recent megabytes\n>>>>> +   WAL files plus one WAL file are\n>>>>>\n>>>>> How about:\n>>>>>\n>>>>> +   <xref linkend=\"guc-wal-keep-size\"/> megabytes of\n>>>>> +   WAL files plus one WAL file are\n>>>>\n>>>> Thanks for the comment! Isn't it better to keep \"most recent\" part?\n>>>> If so, what about either of the followings?\n>>>>\n>>>> 1. <xref linkend=\"guc-wal-keep-size\"/> megabytes of WAL files plus\n>>>>      one WAL file that were most recently generated are kept all time.\n>>>>\n>>>> 2. <xref linkend=\"guc-wal-keep-size\"/> megabytes + <xref \n>>>> linkend=\"guc-wal-segment-size\"> bytes\n>>>>      of WAL files that were most recently generated are kept all time.\n>>>\n>>> \"most recent\" seemed implied to me, but I see your point.\n>>>\n>>> How about:\n>>>\n>>> +   the most recent <xref linkend=\"guc-wal-keep-size\"/> megabytes of\n>>> +   WAL files plus one additional WAL file are\n>>\n>> I adopted this and pushed the patch. Thanks!\n>>\n>> Also we need to update the release note for v13. What about adding the \n>> following?\n>>\n>> ------------------------------------\n>> Rename configuration parameter wal_keep_segments to wal_keep_size.\n>>\n>> This allows how much WAL files to retain for the standby server, by \n>> bytes instead of the number of files.\n>> If you previously used wal_keep_segments, the following formula will \n>> give you an approximately equivalent setting:\n>>\n>> wal_keep_size = wal_keep_segments * wal_segment_size (typically 16MB)\n>> ------------------------------------\n\nI would rework that first sentence a bit. 
How about:\n\n+ This determines how much WAL to retain for the standby server,\n+ specified in megabytes rather than number of files.\n\nThe rest looks fine to me.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Mon, 20 Jul 2020 08:21:46 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: max_slot_wal_keep_size and wal_keep_segments" }, { "msg_contents": "\n\nOn 2020/07/20 21:21, David Steele wrote:\n> On 7/20/20 6:02 AM, Fujii Masao wrote:\n>>\n>>\n>> On 2020/07/20 13:48, Fujii Masao wrote:\n>>>\n>>>\n>>> On 2020/07/17 20:24, David Steele wrote:\n>>>>\n>>>> On 7/17/20 5:11 AM, Fujii Masao wrote:\n>>>>>\n>>>>>\n>>>>> On 2020/07/14 20:30, David Steele wrote:\n>>>>>> On 7/14/20 12:00 AM, Fujii Masao wrote:\n>>>>>>>\n>>>>>>> The patch was no longer applied cleanly because of recent commit.\n>>>>>>> So I updated the patch. Attached.\n>>>>>>>\n>>>>>>> Barring any objection, I will commit this patch.\n>>>>>>\n>>>>>> This doesn't look right:\n>>>>>>\n>>>>>> +   the <xref linkend=\"guc-wal-keep-size\"/> most recent megabytes\n>>>>>> +   WAL files plus one WAL file are\n>>>>>>\n>>>>>> How about:\n>>>>>>\n>>>>>> +   <xref linkend=\"guc-wal-keep-size\"/> megabytes of\n>>>>>> +   WAL files plus one WAL file are\n>>>>>\n>>>>> Thanks for the comment! Isn't it better to keep \"most recent\" part?\n>>>>> If so, what about either of the followings?\n>>>>>\n>>>>> 1. <xref linkend=\"guc-wal-keep-size\"/> megabytes of WAL files plus\n>>>>>      one WAL file that were most recently generated are kept all time.\n>>>>>\n>>>>> 2. 
<xref linkend=\"guc-wal-keep-size\"/> megabytes + <xref linkend=\"guc-wal-segment-size\"> bytes\n>>>>>      of WAL files that were most recently generated are kept all time.\n>>>>\n>>>> \"most recent\" seemed implied to me, but I see your point.\n>>>>\n>>>> How about:\n>>>>\n>>>> +   the most recent <xref linkend=\"guc-wal-keep-size\"/> megabytes of\n>>>> +   WAL files plus one additional WAL file are\n>>>\n>>> I adopted this and pushed the patch. Thanks!\n>>>\n>>> Also we need to update the release note for v13. What about adding the following?\n>>>\n>>> ------------------------------------\n>>> Rename configuration parameter wal_keep_segments to wal_keep_size.\n>>>\n>>> This allows how much WAL files to retain for the standby server, by bytes instead of the number of files.\n>>> If you previously used wal_keep_segments, the following formula will give you an approximately equivalent setting:\n>>>\n>>> wal_keep_size = wal_keep_segments * wal_segment_size (typically 16MB)\n>>> ------------------------------------\n> \n> I would rework that first sentence a bit. How about:\n> \n> + This determines how much WAL to retain for the standby server,\n> + specified in megabytes rather than number of files.\n> \n> The rest looks fine to me.\n\nThanks for the review!\nI adopted your suggestion in the updated version of the patch and pushed it.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 28 Jul 2020 11:25:54 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: max_slot_wal_keep_size and wal_keep_segments" } ]
[ { "msg_contents": "Buildfarm member hyrax has shown this failure twice recently:\n\n--- /home/buildfarm/buildroot/HEAD/pgsql.build/src/test/regress/expected/brin.out\t2020-01-23 11:10:05.730014075 -0500\n+++ /home/buildfarm/buildroot/HEAD/pgsql.build/src/test/regress/results/brin.out\t2020-06-30 03:50:23.651196117 -0400\n@@ -490,6 +490,7 @@\n INSERT INTO brintest_2 VALUES (numrange(0, 2^1000::numeric));\n INSERT INTO brintest_2 VALUES ('(-1, 0)');\n SELECT brin_desummarize_range('brinidx', 0);\n+WARNING: leftover placeholder tuple detected in BRIN index \"brinidx\", deleting\n brin_desummarize_range \n ------------------------\n \nThis happened on HEAD:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2020-06-30%2001%3A41%3A50\nand v13:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2020-06-17%2005%3A50%3A46\nand lousyjack has also shown it once on HEAD:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lousyjack&dt=2020-03-02%2013%3A03%3A04\n\nThe warning is from this bit (commit c655899ba):\n\n /*\n * Because of ShareUpdateExclusive lock, this function shouldn't run\n * concurrently with summarization. Placeholder tuples can only exist as\n * leftovers from crashed summarization, so if we detect any, we complain\n * but proceed.\n */\n if (BrinTupleIsPlaceholder(tup))\n ereport(WARNING,\n (errmsg(\"leftover placeholder tuple detected in BRIN index \\\"%s\\\", deleting\",\n RelationGetRelationName(idxrel))));\n\nNow, there was no preceding crash in these tests, so the comment's claim\nis evidently a lie. But what's going wrong? 
The postmaster log provides\na strong hint:\n\n2020-06-30 03:50:14.603 EDT [3699:48] pg_regress/brin LOG: statement: SELECT brin_desummarize_range('brinidx', 0);\n2020-06-30 03:50:15.894 EDT [3699:49] pg_regress/brin LOG: process 3699 still waiting for ShareUpdateExclusiveLock on relation 24795 of database 16384 after 1000.094 ms\n2020-06-30 03:50:15.894 EDT [3699:50] pg_regress/brin DETAIL: Process holding the lock: 7237. Wait queue: 3699.\n2020-06-30 03:50:15.894 EDT [3699:51] pg_regress/brin STATEMENT: SELECT brin_desummarize_range('brinidx', 0);\n2020-06-30 03:50:15.895 EDT [7237:1] ERROR: canceling autovacuum task\n2020-06-30 03:50:15.895 EDT [7237:2] CONTEXT: while cleaning up index \"brinidx\" of relation \"public.brintest\"\n\tautomatic vacuum of table \"regression.public.brintest\"\n2020-06-30 03:50:15.899 EDT [3699:52] pg_regress/brin LOG: process 3699 acquired ShareUpdateExclusiveLock on relation 24795 of database 16384 after 1001.626 ms\n2020-06-30 03:50:15.899 EDT [3699:53] pg_regress/brin STATEMENT: SELECT brin_desummarize_range('brinidx', 0);\n2020-06-30 03:50:16.018 EDT [3699:54] pg_regress/brin WARNING: leftover placeholder tuple detected in BRIN index \"brinidx\", deleting\n2020-06-30 03:50:16.031 EDT [3699:55] pg_regress/brin LOG: statement: SELECT brin_summarize_range('brinidx', 0);\n\nI think the \"crash\" actually was the forced autovac cancellation we see\nhere. 
Thus, the fact that both these animals use CLOBBER_CACHE_ALWAYS is\nnot a direct cause of the failure, though it might contribute to getting\nthe timing right for this to happen.\n\nSo (1) the comment needs to be adjusted to mention that vacuum\ncancellation is enough to create the situation, and (2) probably the\nmessage needs to be downgraded quite a lot, DEBUG2 or even less.\nOr maybe we could just delete the quoted stanza altogether.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Jun 2020 12:24:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Intermittent BRIN failures on hyrax and lousyjack" }, { "msg_contents": "On 2020-Jun-30, Tom Lane wrote:\n\n> SELECT brin_desummarize_range('brinidx', 0);\n> +WARNING: leftover placeholder tuple detected in BRIN index \"brinidx\", deleting\n\n> I think the \"crash\" actually was the forced autovac cancellation we see\n> here. Thus, the fact that both these animals use CLOBBER_CACHE_ALWAYS is\n> not a direct cause of the failure, though it might contribute to getting\n> the timing right for this to happen.\n\nOh, interesting.\n\n> So (1) the comment needs to be adjusted to mention that vacuum\n> cancellation is enough to create the situation, and (2) probably the\n> message needs to be downgraded quite a lot, DEBUG2 or even less.\n> Or maybe we could just delete the quoted stanza altogether.\n\nYeah, maybe deleting the whole thing is a decent answer. 
When I wrote\nthat, I was thinking that it was a very unlikely event, but evidently\nthat's not true.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 30 Jun 2020 23:28:39 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Intermittent BRIN failures on hyrax and lousyjack" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Jun-30, Tom Lane wrote:\n>> SELECT brin_desummarize_range('brinidx', 0);\n>> +WARNING: leftover placeholder tuple detected in BRIN index \"brinidx\", deleting\n\n>> So (1) the comment needs to be adjusted to mention that vacuum\n>> cancellation is enough to create the situation, and (2) probably the\n>> message needs to be downgraded quite a lot, DEBUG2 or even less.\n>> Or maybe we could just delete the quoted stanza altogether.\n\n> Yeah, maybe deleting the whole thing is a decent answer. 
When I wrote\n> that, I was thinking that it was a very unlikely event, but evidently\n> that's not true.\n\ntrilobite (also a CCA animal) just reported one of these, too:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=trilobite&dt=2020-07-09%2008%3A03%3A08\n\nWere you going to fix this, or did you expect me to?\nIf the latter, I lean to just deleting the message.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Jul 2020 18:46:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Intermittent BRIN failures on hyrax and lousyjack" }, { "msg_contents": "On 2020-Jul-09, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2020-Jun-30, Tom Lane wrote:\n> >> SELECT brin_desummarize_range('brinidx', 0);\n> >> +WARNING: leftover placeholder tuple detected in BRIN index \"brinidx\", deleting\n> \n> >> So (1) the comment needs to be adjusted to mention that vacuum\n> >> cancellation is enough to create the situation, and (2) probably the\n> >> message needs to be downgraded quite a lot, DEBUG2 or even less.\n> >> Or maybe we could just delete the quoted stanza altogether.\n> \n> > Yeah, maybe deleting the whole thing is a decent answer. When I wrote\n> > that, I was thinking that it was a very unlikely event, but evidently\n> > that's not true.\n> \n> trilobite (also a CCA animal) just reported one of these, too:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=trilobite&dt=2020-07-09%2008%3A03%3A08\n> \n> Were you going to fix this, or did you expect me to?\n\nI have a moment now, let me have a go at it. I agree with deleting the\nmessage. I think I'll keep the comment, slightly reworded:\n\n\t/*\n\t * Placeholder tuples only appear during unfinished summarization, and we\n\t * hold SUE lock, so this function cannot run concurrently with\n\t * that. 
Any placeholder tuples that exist must be leftovers from a\n\t * crashed or aborted summarization; remove them silently.\n\t */\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Jul 2020 18:55:48 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Intermittent BRIN failures on hyrax and lousyjack" }, { "msg_contents": "On 2020-Jul-09, Alvaro Herrera wrote:\n\n> I have a moment now, let me have a go at it. I agree with deleting the\n> message. I think I'll keep the comment, slightly reworded:\n> \n> \t/*\n> \t * Placeholder tuples only appear during unfinished summarization, and we\n> \t * hold SUE lock, so this function cannot run concurrently with\n> \t * that. Any placeholder tuples that exist must be leftovers from a\n> \t * crashed or aborted summarization; remove them silently.\n> \t */\n\nPushed to all branches.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Jul 2020 20:22:04 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Intermittent BRIN failures on hyrax and lousyjack" } ]
[ { "msg_contents": "This adds support for writing CREATE FUNCTION and CREATE PROCEDURE \nstatements for language SQL with a function body that conforms to the \nSQL standard and is portable to other implementations.\n\nInstead of the PostgreSQL-specific AS $$ string literal $$ syntax,\nthis allows writing out the SQL statements making up the body\nunquoted, either as a single statement:\n\n CREATE FUNCTION add(a integer, b integer) RETURNS integer\n LANGUAGE SQL\n RETURN a + b;\n\nor as a block\n\n CREATE PROCEDURE insert_data(a integer, b integer)\n LANGUAGE SQL\n BEGIN ATOMIC\n INSERT INTO tbl VALUES (a);\n INSERT INTO tbl VALUES (b);\n END;\n\nThe function body is parsed at function definition time and stored as\nexpression nodes in probin. So at run time, no further parsing is\nrequired.\n\nHowever, this form does not support polymorphic arguments, because\nthere is no more parse analysis done at call time.\n\nDependencies between the function and the objects it uses are fully\ntracked.\n\nA new RETURN statement is introduced. This can only be used inside\nfunction bodies. Internally, it is treated much like a SELECT\nstatement.\n\npsql needs some new intelligence to keep track of function body\nboundaries so that it doesn't send off statements when it sees\nsemicolons that are inside a function body.\n\nAlso, per SQL standard, LANGUAGE SQL is the default, so it does not\nneed to be specified anymore.\n\nNote: Some parts of the patch look better under git diff -w (ignoring \nwhitespace changes) because if/else blocks were introduced around \nexisting code.\n\nTODOs and discussion points:\n\n- pg_dump is not yet supported. As a consequence, the pg_upgrade\ntests don't pass yet. I'm thinking about changing pg_dump to use \npg_get_functiondef here instead of coding everything by hand. 
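For illustration only, assuming pg_get_functiondef learns to deparse the\nnew-style body (none of this output format is settled yet), a dumped\ndefinition might come out roughly like:\n\n```sql\nCREATE OR REPLACE FUNCTION public.add(a integer, b integer)\n RETURNS integer\n LANGUAGE sql\nRETURN (a + b);\n```\n\nThat way the body would round-trip through the same parser that handles\nit at definition time, instead of pg_dump reassembling it by hand.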
Some \ninitial experimenting showed that this would be possible with minimal \ntweaking and it would surely be beneficial in the long run.\n\n- The compiled function body is stored in the probin field of pg_proc. \nThis matches the historical split similar to adsrc/adbin, consrc/conbin, \nbut this has now been abandoned. Also, this field should ideally be of \ntype pg_node_tree, so reusing probin for that is probably not good. \nSeems like a new field might be best.\n\n- More test coverage is needed. Surprisingly, there wasn't actually any \ntest AFAICT that just creates an SQL function and runs it. Most of \nthat code is tested incidentally, but there is very little or no \ntargeted testing of this functionality.\n\n- Some of the changes in pg_proc.c, functioncmds.c, and functions.c in \nparticular were jammed in and could use some reorganization after the \nbasic ideas are solidified.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 30 Jun 2020 19:49:04 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "SQL-standard function body" }, { "msg_contents": "On Tue, Jun 30, 2020 at 1:49 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> This adds support for writing CREATE FUNCTION and CREATE PROCEDURE\n> statements for language SQL with a function body that conforms to the\n> SQL standard and is portable to other implementations.\n\nWith what other implementations is it compatible?\n\n> The function body is parsed at function definition time and stored as\n> expression nodes in probin. So at run time, no further parsing is\n> required.\n>\n> However, this form does not support polymorphic arguments, because\n> there is no more parse analysis done at call time.\n>\n> Dependencies between the function and the objects it uses are fully\n> tracked.\n>\n> A new RETURN statement is introduced. 
This can only be used inside\n> function bodies. Internally, it is treated much like a SELECT\n> statement.\n>\n> psql needs some new intelligence to keep track of function body\n> boundaries so that it doesn't send off statements when it sees\n> semicolons that are inside a function body.\n>\n> Also, per SQL standard, LANGUAGE SQL is the default, so it does not\n> need to be specified anymore.\n\nHmm, this all seems like a pretty big semantic change. IIUC, right\nnow, a SQL function can only contain one statement, but it seems like\nwith this patch you can have a block in there with a bunch of\nstatements, sorta like plpgsql. But probably you don't have all of the\nfunctionality of plpgsql available. Also, the fact that you're doing\nparsing earlier means that e.g. creating a table and inserting into it\nwon't work. Maybe that's fine. But it almost seems like you are\ninventing a whole new PL....\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 30 Jun 2020 13:58:26 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> Hmm, this all seems like a pretty big semantic change. 
IIUC, right\n> now, a SQL function can only contain one statement, but it seems like\n> with this patch you can have a block in there with a bunch of\n> statements, sorta like plpgsql.\n\nFrom our docs:\n\nCREATE FUNCTION tf1 (accountno integer, debit numeric) RETURNS numeric AS $$\n UPDATE bank\n SET balance = balance - debit\n WHERE accountno = tf1.accountno;\n SELECT 1;\n$$ LANGUAGE SQL;\n\nhttps://www.postgresql.org/docs/current/xfunc-sql.html\n\nHaven't looked at the patch, tho if it adds support for something the\nSQL standard defines, that generally seems like a positive to me.\n\nThanks,\n\nStephen", "msg_date": "Tue, 30 Jun 2020 14:05:11 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "út 30. 6. 2020 v 19:58 odesílatel Robert Haas <robertmhaas@gmail.com>\nnapsal:\n\n> On Tue, Jun 30, 2020 at 1:49 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> > This adds support for writing CREATE FUNCTION and CREATE PROCEDURE\n> > statements for language SQL with a function body that conforms to the\n> > SQL standard and is portable to other implementations.\n>\n> With what other implementations is it compatible?\n>\n> > The function body is parsed at function definition time and stored as\n> > expression nodes in probin. So at run time, no further parsing is\n> > required.\n> >\n> > However, this form does not support polymorphic arguments, because\n> > there is no more parse analysis done at call time.\n> >\n> > Dependencies between the function and the objects it uses are fully\n> > tracked.\n> >\n> > A new RETURN statement is introduced. This can only be used inside\n> > function bodies. 
Internally, it is treated much like a SELECT\n> > statement.\n> >\n> > psql needs some new intelligence to keep track of function body\n> > boundaries so that it doesn't send off statements when it sees\n> > semicolons that are inside a function body.\n> >\n> > Also, per SQL standard, LANGUAGE SQL is the default, so it does not\n> > need to be specified anymore.\n>\n> Hmm, this all seems like a pretty big semantic change. IIUC, right\n> now, a SQL function can only contain one statement, but it seems like\n> with this patch you can have a block in there with a bunch of\n> statements, sorta like plpgsql. But probably you don't have all of the\n> functionality of plpgsql available. Also, the fact that you're doing\n> parsing earlier means that e.g. creating a table and inserting into it\n> won't work. Maybe that's fine. But it almost seems like you are\n> inventing a whole new PL....\n>\n\nIt is SQL/PSM and can be nice to have it.\n\nI am a little bit afraid about performance - SQL functions doesn't use plan\ncache and simple expressions. Without inlining it can be too slow.\n\nRegards\n\nPavel\n\n\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\n", "msg_date": "Tue, 30 Jun 2020 20:13:39 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jun 30, 2020 at 1:49 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> This adds support for writing CREATE FUNCTION and CREATE PROCEDURE\n>> statements for language SQL with a function body that conforms to the\n>> SQL standard and is portable to other implementations.\n\n> With what other implementations is it compatible?\n\nYeah ... I'm sort of wondering exactly what this really accomplishes.\nI think \"portability\" is a red herring unfortunately.\n\nTracking the dependencies of the function body sounds nice at first\nglance, so it might be a feature. But given our experiences with having\nto use check_function_bodies = off to not have impossible dependency loops\nin dump/restore, I rather wonder whether it'll be a net loss in practice.\nIIUC, this implementation is flat out incapable of doing the equivalent of\ncheck_function_bodies = off, and that sounds like trouble.\n\n> Hmm, this all seems like a pretty big semantic change. IIUC, right\n> now, a SQL function can only contain one statement,\n\nNot true, you can have more. However, it's nonetheless an enormous\nsemantic change, if only because the CREATE FUNCTION-time search_path\nis now relevant instead of the execution-time path. That *will*\nbreak use-cases I've heard of, where the same function is applied\nto different tables by adjusting the path. It'd certainly be useful\nfrom some perspectives (eg better security), but it's ... 
different.\n\nReplicating the creation-time search path will be a big headache for\npg_dump, I bet.\n\n> But it almost seems like you are\n> inventing a whole new PL....\n\nYes. Having this replace the existing SQL PL would be a disaster,\nbecause there are use-cases this simply can't meet (even assuming\nthat we can fix the polymorphism problem, which seems a bit unlikely).\nWe'd need to treat it as a new PL type.\n\nPerhaps this is useful enough to justify all the work involved,\nbut I'm not sure.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Jun 2020 14:24:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "I wrote:\n> Replicating the creation-time search path will be a big headache for\n> pg_dump, I bet.\n\nOn further thought, we probably don't have to. Re-parsing the function\nbody the same way is exactly the same problem as re-parsing a view or\nmatview body the same way. I don't want to claim that that's a 100%\nsolved problem, but I've heard few complaints in that area lately.\n\nThe point remains that exposing the function body's dependencies will\nconstrain restore order far more than we are accustomed to see. It\nmight be possible to build examples that flat out can't be restored,\neven granting that we teach pg_dump how to break dependency loops\nby first creating the function with empty body and later redefining\nit with the real body. (Admittedly, if that's possible then you\nlikely could make it happen with views too. 
But somehow it seems\nmore likely that people would create spaghetti dependencies for\nfunctions than views.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Jun 2020 14:51:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Hi,\n\nOn 2020-06-30 19:49:04 +0200, Peter Eisentraut wrote:\n> The function body is parsed at function definition time and stored as\n> expression nodes in probin. So at run time, no further parsing is\n> required.\n\nAs raw parse tree or as a parse-analysed tree? I assume the latter?\n\nIsn't a consequence of that that we'd get a lot more errors if any DDL\nis done to tables involved in the query? In contrast to other languages\nwe'd not be able to handle column type changes etc, right?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 30 Jun 2020 12:26:06 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-06-30 19:49:04 +0200, Peter Eisentraut wrote:\n>> The function body is parsed at function definition time and stored as\n>> expression nodes in probin. So at run time, no further parsing is\n>> required.\n\n> Isn't a consequence of that that we'd get a lot more errors if any DDL\n> is done to tables involved in the query? 
In contrast to other languages\n> we'd not be able to handle column type changes etc, right?\n\nI suppose it'd act like column references in a view, ie the dependency\nmechanisms would forbid you from changing/dropping any column mentioned\nin one of these functions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Jun 2020 15:43:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Tue, Jun 30, 2020 at 2:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> On further thought, we probably don't have to. Re-parsing the function\n> body the same way is exactly the same problem as re-parsing a view or\n> matview body the same way. I don't want to claim that that's a 100%\n> solved problem, but I've heard few complaints in that area lately.\n>\n> The point remains that exposing the function body's dependencies will\n> constrain restore order far more than we are accustomed to see. It\n> might be possible to build examples that flat out can't be restored,\n> even granting that we teach pg_dump how to break dependency loops\n> by first creating the function with empty body and later redefining\n> it with the real body. (Admittedly, if that's possible then you\n> likely could make it happen with views too. But somehow it seems\n> more likely that people would create spaghetti dependencies for\n> functions than views.)\n\nIn my experience, there's certainly demand for some kind of mode where\nplpgsql functions get checked at function definition time, rather than\nat execution time. The model we have is advantageous not only because\nit simplifies dump and reload, but also because it handles cases where\nthe table is created on the fly properly. However, it also means that\nyou can have silly mistakes in your function definitions that you\ndon't find out about until runtime, and in my experience, people don't\nlike that behavior much at all. 
So I don't think that it's a bad idea\non principle, or anything like that, but the details seem like they\nneed a lot of thought. The dump and restore issues need to be\nconsidered, but also, what about things like IF and WHILE? People are\ngoing to want those constructs with these new semantics, too.\n\nI actually don't have a very clear idea of what the standard has to\nsay about SQL-language functions. Does it just say it's a list of\nstatements, or does it involve variables and control-flow constructs\nand stuff like that, too? If we go that direction with this, then\nwe're actually going to end up with two different implementations of\nwhat's now plpgsql, or something. But if we don't, then I'm not sure\nhow far this takes us. I'm not saying it's bad, but the comment \"I\nlove the early binding but where's my IF statement\" seems like an\ninevitable one.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 1 Jul 2020 09:36:42 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "st 1. 7. 2020 v 15:37 odesílatel Robert Haas <robertmhaas@gmail.com> napsal:\n\n> On Tue, Jun 30, 2020 at 2:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > On further thought, we probably don't have to. Re-parsing the function\n> > body the same way is exactly the same problem as re-parsing a view or\n> > matview body the same way. I don't want to claim that that's a 100%\n> > solved problem, but I've heard few complaints in that area lately.\n> >\n> > The point remains that exposing the function body's dependencies will\n> > constrain restore order far more than we are accustomed to see. 
It\n> > might be possible to build examples that flat out can't be restored,\n> > even granting that we teach pg_dump how to break dependency loops\n> > by first creating the function with empty body and later redefining\n> > it with the real body. (Admittedly, if that's possible then you\n> > likely could make it happen with views too. But somehow it seems\n> > more likely that people would create spaghetti dependencies for\n> > functions than views.)\n>\n> In my experience, there's certainly demand for some kind of mode where\n> plpgsql functions get checked at function definition time, rather than\n> at execution time. The model we have is advantageous not only because\n> it simplifies dump and reload, but also because it handles cases where\n> the table is created on the fly properly. However, it also means that\n> you can have silly mistakes in your function definitions that you\n> don't find out about until runtime, and in my experience, people don't\n> like that behavior much at all. So I don't think that it's a bad idea\n> on principle, or anything like that, but the details seem like they\n> need a lot of thought. The dump and restore issues need to be\n> considered, but also, what about things like IF and WHILE? People are\n> going to want those constructs with these new semantics, too.\n>\n\nplpgsql_check can be integrated to upstream.\n\nhttps://github.com/okbob/plpgsql_check\n\n\n\n> I actually don't have a very clear idea of what the standard has to\n> say about SQL-language functions. Does it just say it's a list of\n> statements, or does it involve variables and control-flow constructs\n> and stuff like that, too? If we go that direction with this, then\n> we're actually going to end up with two different implementations of\n> what's now plpgsql, or something. But if we don't, then I'm not sure\n> how far this takes us. 
I'm not saying it's bad, but the comment \"I\n> love the early binding but where's my IF statement\" seems like an\n> inevitable one.\n>\n\nThe standard SQL/PSM is a full functionality language with variables,\nconditional statements, exception handlings, ..\n\nhttps://postgres.cz/wiki/SQL/PSM_Manual\n\nUnfortunately a basic implementation integrated into the main SQL parser\ncan be pretty hard work. First issue can be SET statement implementation.\n\nRegards\n\nPavel\n\n\n\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\n", "msg_date": "Wed, 1 Jul 2020 16:07:12 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> In my experience, there's certainly demand for some kind of mode where\n> plpgsql functions get checked at function definition time, rather than\n> at execution time.\n\nYeah, absolutely agreed. 
But I'm afraid this proposal takes us too\nfar in the other direction: with this, you *must* have a 100% parseable\nand semantically valid function body, every time all the time.\n\nSo far as plpgsql is concerned, I could see extending the validator\nto run parse analysis (not just raw parsing) on all SQL statements in\nthe body. This wouldn't happen of course with check_function_bodies off,\nso it wouldn't affect dump/reload. But likely there would still be\ndemand for more fine-grained control over it ... or maybe it could\nstop doing analysis as soon as it finds a DDL command?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Jul 2020 10:14:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "st 1. 7. 2020 v 16:14 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > In my experience, there's certainly demand for some kind of mode where\n> > plpgsql functions get checked at function definition time, rather than\n> > at execution time.\n>\n> Yeah, absolutely agreed. But I'm afraid this proposal takes us too\n> far in the other direction: with this, you *must* have a 100% parseable\n> and semantically valid function body, every time all the time.\n>\n> So far as plpgsql is concerned, I could see extending the validator\n> to run parse analysis (not just raw parsing) on all SQL statements in\n> the body. This wouldn't happen of course with check_function_bodies off,\n> so it wouldn't affect dump/reload. But likely there would still be\n> demand for more fine-grained control over it ... or maybe it could\n> stop doing analysis as soon as it finds a DDL command?\n>\n\nThis simple analysis stops on first record type usage. PLpgSQL allows some\ndynamic work that increases the complexity of static analysis.\n\nRegards\n\nPavel\n\n\n> regards, tom lane\n>\n>\n>\n", "msg_date": "Wed, 1 Jul 2020 16:19:50 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Wed, Jul 1, 2020 at 10:14:10AM -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > In my experience, there's certainly demand for some kind of mode where\n> > plpgsql functions get checked at function definition time, rather than\n> > at execution time.\n> \n> Yeah, absolutely agreed. But I'm afraid this proposal takes us too\n> far in the other direction: with this, you *must* have a 100% parseable\n> and semantically valid function body, every time all the time.\n> \n> So far as plpgsql is concerned, I could see extending the validator\n> to run parse analysis (not just raw parsing) on all SQL statements in\n> the body. This wouldn't happen of course with check_function_bodies off,\n> so it wouldn't affect dump/reload. But likely there would still be\n> demand for more fine-grained control over it ... or maybe it could\n> stop doing analysis as soon as it finds a DDL command?\n\nIs the SQL-standard function body verified as preventing function\ninlining? That seems to be a major downside.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 1 Jul 2020 11:42:25 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> Is the SQL-standard function body verified as preventing function\n> inlining? That seems to be a major downside.\n\nI see no reason why that would make any difference. 
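For instance (an illustrative sketch only, using the add example from the\noriginal post), a single-expression function written in the new syntax is\nexactly the shape the inliner handles today:\n\n```sql\nCREATE FUNCTION add(a integer, b integer) RETURNS integer\n    LANGUAGE SQL\n    RETURN a + b;\n\n-- a call like this\nSELECT add(t.x, 1) FROM t;\n-- could still be inlined to something equivalent to\n-- SELECT t.x + 1 FROM t;\n```\n\nwhether the body was stored as source text or as pre-parsed nodes.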
There might\nbe more code to be written than is in the patch, but in principle\ninlining should not care whether the function is pre-parsed or not.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Jul 2020 12:50:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On 7/1/20 3:36 PM, Robert Haas wrote:\n> I actually don't have a very clear idea of what the standard has to\n> say about SQL-language functions. Does it just say it's a list of\n> statements, or does it involve variables and control-flow constructs\n> and stuff like that, too?\n\n\nIt's either a single sql statement, or a collection of them between\n\"begin atomic\" and \"end\". There are no variables or flow control\nconstructs or anything like that, just as there are no such things\noutside of a function.\n\n(There are a few statements that are not allowed, such as COMMIT.)\n-- \nVik Fearing\n\n\n", "msg_date": "Wed, 1 Jul 2020 20:19:25 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "st 1. 7. 2020 v 20:19 odesílatel Vik Fearing <vik@postgresfriends.org>\nnapsal:\n\n> On 7/1/20 3:36 PM, Robert Haas wrote:\n> > I actually don't have a very clear idea of what the standard has to\n> > say about SQL-language functions. Does it just say it's a list of\n> > statements, or does it involve variables and control-flow constructs\n> > and stuff like that, too?\n>\n>\n> It's either a single sql statement, or a collection of them between\n> \"begin atomic\" and \"end\". There are no variables or flow control\n> constructs or anything like that, just as there are no such things\n> outside of a function.\n>\n\nWhat is the source of this comment? 
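(For concreteness, a body of the shape described above, one statement or several between "begin atomic" and "end", would look roughly like the following sketch; the function and table names are hypothetical, not taken from the patch:

```sql
-- Hypothetical sketch of a standard-style SQL function body; unlike a
-- quoted-string body, the statements are parsed when the function is created.
CREATE FUNCTION purge_old_events() RETURNS void
LANGUAGE SQL
BEGIN ATOMIC
  DELETE FROM event_log WHERE created_at < now() - interval '30 days';
  INSERT INTO maintenance_log (note) VALUES ('event_log purged');
END;
```

Note that the body contains only plain SQL statements, with no variables or control flow.)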
Maybe we are speaking (and thinking)\nabout different languages.\n\nI thought the language of SQL functions (ANSI/SQL) is SQL/PSM.\n\nRegards\n\nPavel\n\n\n\n> (There are a few statements that are not allowed, such as COMMIT.)\n> --\n> Vik Fearing\n>\n>\n>\n", "msg_date": "Wed, 1 Jul 2020 21:32:38 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On 7/1/20 9:32 PM, Pavel Stehule wrote:\n> st 1. 7. 2020 v 20:19 odesílatel Vik Fearing <vik@postgresfriends.org>\n> napsal:\n> \n>> On 7/1/20 3:36 PM, Robert Haas wrote:\n>>> I actually don't have a very clear idea of what the standard has to\n>>> say about SQL-language functions. Does it just say it's a list of\n>>> statements, or does it involve variables and control-flow constructs\n>>> and stuff like that, too?\n>>\n>>\n>> It's either a single sql statement, or a collection of them between\n>> \"begin atomic\" and \"end\". 
There are no variables or flow control\n>> constructs or anything like that, just as there are no such things\n>> outside of a function.\n>>\n> \n> What is the source of this comment?\n\n\nThe SQL Standard.\n\n\n> Maybe we are speaking (and thinking)\n> about different languages.\n\n\nI think so, yes.\n\n\n> I thought the language of SQL functions (ANSI/SQL) is SQL/PSM.\n\n\nThat is something else entirely, and not at all what Peter's patch is about.\n-- \nVik Fearing\n\n\n", "msg_date": "Wed, 1 Jul 2020 22:31:26 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "st 1. 7. 2020 v 22:31 odesílatel Vik Fearing <vik@postgresfriends.org>\nnapsal:\n\n> On 7/1/20 9:32 PM, Pavel Stehule wrote:\n> > st 1. 7. 2020 v 20:19 odesílatel Vik Fearing <vik@postgresfriends.org>\n> > napsal:\n> >\n> >> On 7/1/20 3:36 PM, Robert Haas wrote:\n> >>> I actually don't have a very clear idea of what the standard has to\n> >>> say about SQL-language functions. Does it just say it's a list of\n> >>> statements, or does it involve variables and control-flow constructs\n> >>> and stuff like that, too?\n> >>\n> >>\n> >> It's either a single sql statement, or a collection of them between\n> >> \"begin atomic\" and \"end\". There are no variables or flow control\n> >> constructs or anything like that, just as there are no such things\n> >> outside of a function.\n> >>\n> >\n> > What is the source of this comment?\n>\n>\n> The SQL Standard.\n>\n\nThe SQL Standard is really big, and is very possible so I miss this part.\nCan you send me a link?\n\nRegards\n\nPavel\n\n\n>\n> > Maybe we are speaking (and thinking)\n> > about different languages.\n>\n>\n> I think so, yes.\n>\n>\n> > I thought the language of SQL functions (ANSI/SQL) is SQL/PSM.\n>\n>\n> That is something else entirely, and not at all what Peter's patch is\n> about.\n>\n-- \n> Vik Fearing\n>\n", "msg_date": "Wed, 1 Jul 2020 22:34:18 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On 7/1/20 10:34 PM, Pavel Stehule wrote:\n> st 1. 7. 2020 v 22:31 odesílatel Vik Fearing <vik@postgresfriends.org>\n> napsal:\n> \n>> On 7/1/20 9:32 PM, Pavel Stehule wrote:\n>>> st 1. 7. 2020 v 20:19 odesílatel Vik Fearing <vik@postgresfriends.org>\n>>> napsal:\n>>>\n>>>> On 7/1/20 3:36 PM, Robert Haas wrote:\n>>>>> I actually don't have a very clear idea of what the standard has to\n>>>>> say about SQL-language functions. 
Does it just say it's a list of\n>>>>> statements, or does it involve variables and control-flow constructs\n>>>>> and stuff like that, too?\n>>>>\n>>>>\n>>>> It's either a single sql statement, or a collection of them between\n>>>> \"begin atomic\" and \"end\". There are no variables or flow control\n>>>> constructs or anything like that, just as there are no such things\n>>>> outside of a function.\n>>>>\n>>>\n>>> What is the source of this comment?\n>>\n>>\n>> The SQL Standard.\n>>\n> \n> The SQL Standard is really big, and is very possible so I miss this part.\n> Can you send me a link?\n\n\nISO/IEC 9075-2:2016 Section 11.60 <SQL-invoked routine>\n-- \nVik Fearing\n\n\n", "msg_date": "Wed, 1 Jul 2020 22:54:01 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "st 1. 7. 2020 v 22:54 odesílatel Vik Fearing <vik@postgresfriends.org>\nnapsal:\n\n> On 7/1/20 10:34 PM, Pavel Stehule wrote:\n> > st 1. 7. 2020 v 22:31 odesílatel Vik Fearing <vik@postgresfriends.org>\n> > napsal:\n> >\n> >> On 7/1/20 9:32 PM, Pavel Stehule wrote:\n> >>> st 1. 7. 2020 v 20:19 odesílatel Vik Fearing <vik@postgresfriends.org>\n> >>> napsal:\n> >>>\n> >>>> On 7/1/20 3:36 PM, Robert Haas wrote:\n> >>>>> I actually don't have a very clear idea of what the standard has to\n> >>>>> say about SQL-language functions. Does it just say it's a list of\n> >>>>> statements, or does it involve variables and control-flow constructs\n> >>>>> and stuff like that, too?\n> >>>>\n> >>>>\n> >>>> It's either a single sql statement, or a collection of them between\n> >>>> \"begin atomic\" and \"end\". 
There are no variables or flow control\n> >>>> constructs or anything like that, just as there are no such things\n> >>>> outside of a function.\n> >>>>\n> >>>\n> >>> What is the source of this comment?\n> >>\n> >>\n> >> The SQL Standard.\n> >>\n> >\n> > The SQL Standard is really big, and is very possible so I miss this part.\n> > Can you send me a link?\n>\n>\n> ISO/IEC 9075-2:2016 Section 11.60 <SQL-invoked routine>\n>\n\nThank you\n\nPavel\n\n-- \n> Vik Fearing\n>\n", "msg_date": "Wed, 1 Jul 2020 23:04:08 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Wed, Jul 1, 2020 at 5:58 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Jun 30, 2020 at 1:49 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> > This adds support for writing CREATE FUNCTION and CREATE PROCEDURE\n> > statements for language SQL with a function body that conforms to the\n> > SQL standard and is portable to other implementations.\n>\n> With what other implementations is it compatible?\n\nJudging by the Wikipedia article[1], it sounds like at least DB2 and\nMySQL/MariaDB are purposely striving for conformance. 
When I worked\nwith DB2 a few years back I preferred to use their standard-conforming\nPL stuff (as opposed to their be-more-like-Oracle PL/SQL mode), and I\nalways hoped that PostgreSQL would eventually understand the same\nsyntax; admittedly, anyone who has ever worked on large applications\nthat support multiple RDBMSs knows that there's more than just surface\nsyntax to worry about, but it still seems like a pretty solid plan to\nconform to the standard that's in our name, so +1 from me on the\ngeneral direction (I didn't look at the patch).\n\n[1] https://en.wikipedia.org/wiki/SQL/PSM\n\n\n", "msg_date": "Thu, 2 Jul 2020 10:56:24 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Wed, Jul 1, 2020 at 5:58 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> With what other implementations is it compatible?\n\n> Judging by the Wikipedia article[1], it sounds like at least DB2 and\n> MySQL/MariaDB are purposely striving for conformance.\n> [1] https://en.wikipedia.org/wiki/SQL/PSM\n\nbut ... but ... but ... that's about SQL/PSM, which is not this.\n\nHaving said that, I wonder whether this work could be repurposed\nto be the start of a real SQL/PSM implementation. There's other\nstuff in SQL/PSM, but a big part of it is routines that are written\nwith syntax like this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Jul 2020 19:54:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "st 1. 7. 2020 v 22:54 odesílatel Vik Fearing <vik@postgresfriends.org>\nnapsal:\n\n> On 7/1/20 10:34 PM, Pavel Stehule wrote:\n> > st 1. 7. 2020 v 22:31 odesílatel Vik Fearing <vik@postgresfriends.org>\n> > napsal:\n> >\n> >> On 7/1/20 9:32 PM, Pavel Stehule wrote:\n> >>> st 1. 7. 
2020 v 20:19 odesílatel Vik Fearing <vik@postgresfriends.org>\n> >>> napsal:\n> >>>\n> >>>> On 7/1/20 3:36 PM, Robert Haas wrote:\n> >>>>> I actually don't have a very clear idea of what the standard has to\n> >>>>> say about SQL-language functions. Does it just say it's a list of\n> >>>>> statements, or does it involve variables and control-flow constructs\n> >>>>> and stuff like that, too?\n> >>>>\n> >>>>\n> >>>> It's either a single sql statement, or a collection of them between\n> >>>> \"begin atomic\" and \"end\". There are no variables or flow control\n> >>>> constructs or anything like that, just as there are no such things\n> >>>> outside of a function.\n> >>>>\n> >>>\n> >>> What is the source of this comment?\n> >>\n> >>\n> >> The SQL Standard.\n> >>\n> >\n> > The SQL Standard is really big, and is very possible so I miss this part.\n> > Can you send me a link?\n>\n>\n> ISO/IEC 9075-2:2016 Section 11.60 <SQL-invoked routine>\n>\n\nI am looking there, and it looks like a subset of SQL/PSM or better -\nSQL/PSM is extending this. But this part is a little bit strange, because\nit doesn't introduce its own variables, but it is working with the concept\nof host variables and is a little bit messy (for me). Looks like it is\nintroduced for usage in triggers. If we support triggers without trigger\nfunctions, then it has sense. Without it - It is hard for me to imagine a\nuse case for this reduced language.\n\nRegards\n\nPavel\n\n\n\n-- \n> Vik Fearing\n>\n", "msg_date": "Thu, 2 Jul 2020 07:47:48 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Wed, Jul 1, 2020 at 5:49 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> - More test coverage is needed. Surprisingly, there wasn't actually any\n> test AFAICT that just creates and SQL function and runs it. 
Most of\n> that code is tested incidentally, but there is very little or no\n> targeted testing of this functionality.\n\nFYI cfbot showed a sign of some kind of error_context_stack corruption\nwhile running \"DROP TABLE functest3 CASCADE;\".\n\n\n", "msg_date": "Fri, 10 Jul 2020 18:35:40 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Wed, Jul 1, 2020 at 5:49 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> - More test coverage is needed. Surprisingly, there wasn't actually any\n>> test AFAICT that just creates and SQL function and runs it. Most of\n>> that code is tested incidentally, but there is very little or no\n>> targeted testing of this functionality.\n\n> FYI cfbot showed a sign of some kind of error_context_stack corruption\n> while running \"DROP TABLE functest3 CASCADE;\".\n\nBTW, it occurs to me after answering bug #16534 that\ncontrib/earthdistance's SQL functions would be great candidates for this\nnew syntax. Binding their references at creation time is really exactly\nwhat we want.\n\nI still feel that we can't just replace the existing implementation,\nthough, as that would kill too many use-cases where late binding is\nhelpful.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Jul 2020 13:24:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On 2020-06-30 19:49, Peter Eisentraut wrote:\n> This adds support for writing CREATE FUNCTION and CREATE PROCEDURE\n> statements for language SQL with a function body that conforms to the\n> SQL standard and is portable to other implementations.\n\nHere is a new patch. The only significant change is that pg_dump \nsupport is now fixed. 
Per the discussion in [0], I have introduced a \nnew function pg_get_function_sqlbody() that just produces the formatted \nSQL body, not the whole function definition. All the tests now pass. \nAs mentioned before, more tests are probably needed, so if reviewers \njust want to play with this and find things that don't work, those could \nbe put into test cases, for example.\n\nAs a thought, a couple of things could probably be separated from this \npatch and considered separately:\n\n1. making LANGUAGE SQL the default\n\n2. the RETURN statement\n\nIf reviewers think that would be sensible, I can prepare separate \npatches for those.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/9df8a3d3-13d2-116d-26ab-6a273c1ed38c%402ndquadrant.com\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 28 Aug 2020 07:33:39 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Some conflicts have emerged, so here is an updated patch.\n\nI have implemented/fixed the inlining of set-returning functions written \nin the new style, which was previously marked TODO in the patch.\n\n\nOn 2020-08-28 07:33, Peter Eisentraut wrote:\n> On 2020-06-30 19:49, Peter Eisentraut wrote:\n>> This adds support for writing CREATE FUNCTION and CREATE PROCEDURE\n>> statements for language SQL with a function body that conforms to the\n>> SQL standard and is portable to other implementations.\n> \n> Here is a new patch. The only significant change is that pg_dump\n> support is now fixed. Per the discussion in [0], I have introduced a\n> new function pg_get_function_sqlbody() that just produces the formatted\n> SQL body, not the whole function definition. 
All the tests now pass.\n> As mentioned before, more tests are probably needed, so if reviewers\n> just want to play with this and find things that don't work, those could\n> be put into test cases, for example.\n> \n> As a thought, a couple of things could probably be separated from this\n> patch and considered separately:\n> \n> 1. making LANGUAGE SQL the default\n> \n> 2. the RETURN statement\n> \n> If reviewers think that would be sensible, I can prepare separate\n> patches for those.\n> \n> \n> [0]:\n> https://www.postgresql.org/message-id/flat/9df8a3d3-13d2-116d-26ab-6a273c1ed38c%402ndquadrant.com\n> \n\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 7 Sep 2020 08:00:08 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Mon, Sep 07, 2020 at 08:00:08AM +0200, Peter Eisentraut wrote:\n> Some conflicts have emerged, so here is an updated patch.\n> \n> I have implemented/fixed the inlining of set-returning functions written in\n> the new style, which was previously marked TODO in the patch.\n\nThe CF bot is telling that this patch fails to apply. Could you send\na rebase?\n--\nMichael", "msg_date": "Tue, 29 Sep 2020 14:42:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On 2020-09-29 07:42, Michael Paquier wrote:\n> On Mon, Sep 07, 2020 at 08:00:08AM +0200, Peter Eisentraut wrote:\n>> Some conflicts have emerged, so here is an updated patch.\n>>\n>> I have implemented/fixed the inlining of set-returning functions written in\n>> the new style, which was previously marked TODO in the patch.\n> \n> The CF bot is telling that this patch fails to apply. 
Could you send\n> a rebase?\n\nHere is a rebase, no functionality changes.\n\nAs indicated earlier, I'll also send some sub-patches as separate \nsubmissions.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 10 Oct 2020 10:41:09 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Here is another updated patch. I did some merging and some small fixes \nand introduced the pg_proc column prosqlbody to store the parsed \nfunction body separately from probin. Aside from one TODO left it seems \nfeature-complete to me for now.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 27 Oct 2020 14:45:13 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Hi,\r\n\r\nI noticed that this patch fails on the cfbot.\r\nFor this, I changed the status to: 'Waiting on Author'.\r\n\r\nCheers,\r\n//Georgios\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Tue, 10 Nov 2020 15:21:04 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@protonmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On 2020-11-10 16:21, Georgios Kokolatos wrote:\n> Hi,\n> \n> I noticed that this patch fails on the cfbot.\n> For this, I changed the status to: 'Waiting on Author'.\n> \n> Cheers,\n> //Georgios\n> \n> The new status of this patch is: Waiting on Author\n> \n\nHere is an updated patch to get it building again.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/", "msg_date": "Fri, 20 Nov 2020 08:25:13 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": 
true, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On 2020-11-20 08:25, Peter Eisentraut wrote:\n> On 2020-11-10 16:21, Georgios Kokolatos wrote:\n>> Hi,\n>>\n>> I noticed that this patch fails on the cfbot.\n>> For this, I changed the status to: 'Waiting on Author'.\n>>\n>> Cheers,\n>> //Georgios\n>>\n>> The new status of this patch is: Waiting on Author\n>>\n> \n> Here is an updated patch to get it building again.\n\nAnother updated patch to get things building again. I've also fixed the \nlast TODO I had in there in qualifying function arguments as necessary \nin ruleutils.c.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/", "msg_date": "Thu, 11 Feb 2021 09:02:40 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On 11.02.21 09:02, Peter Eisentraut wrote:\n>> Here is an updated patch to get it building again.\n> \n> Another updated patch to get things building again.  I've also fixed the \n> last TODO I had in there in qualifying function arguments as necessary \n> in ruleutils.c.\n\nUpdated patch to resolve merge conflict. No changes in functionality.", "msg_date": "Tue, 2 Mar 2021 18:13:19 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Tue, Mar 2, 2021 at 12:13 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 11.02.21 09:02, Peter Eisentraut wrote:\n> >> Here is an updated patch to get it building again.\n> >\n> > Another updated patch to get things building again. I've also fixed the\n> > last TODO I had in there in qualifying function arguments as necessary\n> > in ruleutils.c.\n>\n> Updated patch to resolve merge conflict. 
No changes in functionality.\n\nHi,\n\nI was making some tests with this patch and found this problem:\n\n\"\"\"\nCREATE OR REPLACE FUNCTION public.make_table()\n RETURNS void\n LANGUAGE sql\nBEGIN ATOMIC\n CREATE TABLE created_table AS SELECT * FROM int8_tbl;\nEND;\nERROR: unrecognized token: \"?\"\nCONTEXT: SQL function \"make_table\"\n\"\"\"\n\nAttached a backtrace from the point the error is thrown.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSYSTEMGUARDS - Consultores de PostgreSQL", "msg_date": "Fri, 5 Mar 2021 00:58:47 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On 05.03.21 06:58, Jaime Casanova wrote:\n> I was making some tests with this patch and found this problem:\n> \n> \"\"\"\n> CREATE OR REPLACE FUNCTION public.make_table()\n> RETURNS void\n> LANGUAGE sql\n> BEGIN ATOMIC\n> CREATE TABLE created_table AS SELECT * FROM int8_tbl;\n> END;\n> ERROR: unrecognized token: \"?\"\n> CONTEXT: SQL function \"make_table\"\n> \"\"\"\n\nI see. The problem is that we don't have serialization and \ndeserialization support for most utility statements. I think I'll need \nto add that eventually. For now, I have added code to prevent utility \nstatements. I think it's still useful that way for now.", "msg_date": "Tue, 9 Mar 2021 13:27:48 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Tue, Mar 9, 2021 at 7:27 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n>\n> I see. The problem is that we don't have serialization and\n> deserialization support for most utility statements. I think I'll need\n> to add that eventually. For now, I have added code to prevent utility\n> statements. I think it's still useful that way for now.\n>\n\nGreat! 
thanks!\n\nI found another problem when using CASE expressions:\n\nCREATE OR REPLACE FUNCTION foo_case()\nRETURNS boolean\nLANGUAGE SQL\nBEGIN ATOMIC\n select case when random() > 0.5 then true else false end;\nEND;\n\napparently the END in the CASE expression is interpreted as the END of\nthe function\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSYSTEMGUARDS - Consultores de PostgreSQL\n\n\n", "msg_date": "Mon, 15 Mar 2021 01:05:11 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Mon, Mar 15, 2021 at 01:05:11AM -0500, Jaime Casanova wrote:\n> I found another problem when using CASE expressions:\n> \n> CREATE OR REPLACE FUNCTION foo_case()\n> RETURNS boolean\n> LANGUAGE SQL\n> BEGIN ATOMIC\n> select case when random() > 0.5 then true else false end;\n> END;\n> \n> apparently the END in the CASE expression is interpreted as the END of\n> the function\n\nI think that it's an issue in psql scanner. If you escape the semicolon or\nforce a single query execution (say with psql -c), it works as expected.\n\n\n", "msg_date": "Mon, 15 Mar 2021 16:03:44 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Mon, Mar 15, 2021 at 04:03:44PM +0800, Julien Rouhaud wrote:\n> On Mon, Mar 15, 2021 at 01:05:11AM -0500, Jaime Casanova wrote:\n> > I found another problem when using CASE expressions:\n> > \n> > CREATE OR REPLACE FUNCTION foo_case()\n> > RETURNS boolean\n> > LANGUAGE SQL\n> > BEGIN ATOMIC\n> > select case when random() > 0.5 then true else false end;\n> > END;\n> > \n> > apparently the END in the CASE expression is interpreted as the END of\n> > the function\n> \n> I think that it's an issue in psql scanner. 
If you escape the semicolon or\n> force a single query execution (say with psql -c), it works as expected.\n\nApplying the following diff (not sending a patch to avoid breaking the cfbot)\nthe issue and doesn't seem to break anything else:\n\ndiff --git a/src/fe_utils/psqlscan.l b/src/fe_utils/psqlscan.l\nindex a492a32416..58026fe90a 100644\n--- a/src/fe_utils/psqlscan.l\n+++ b/src/fe_utils/psqlscan.l\n@@ -871,7 +871,9 @@ other .\n\n {identifier} {\n cur_state->identifier_count++;\n- if (pg_strcasecmp(yytext, \"begin\") == 0)\n+ if ((pg_strcasecmp(yytext, \"begin\") == 0)\n+ || (pg_strcasecmp(yytext, \"case\") == 0)\n+ )\n {\n if (cur_state->identifier_count > 1)\n cur_state->begin_depth++;\n\n\n\n", "msg_date": "Mon, 15 Mar 2021 16:14:48 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On 15.03.21 09:14, Julien Rouhaud wrote:\n> On Mon, Mar 15, 2021 at 04:03:44PM +0800, Julien Rouhaud wrote:\n>> On Mon, Mar 15, 2021 at 01:05:11AM -0500, Jaime Casanova wrote:\n>>> I found another problem when using CASE expressions:\n>>>\n>>> CREATE OR REPLACE FUNCTION foo_case()\n>>> RETURNS boolean\n>>> LANGUAGE SQL\n>>> BEGIN ATOMIC\n>>> select case when random() > 0.5 then true else false end;\n>>> END;\n>>>\n>>> apparently the END in the CASE expression is interpreted as the END of\n>>> the function\n>>\n>> I think that it's an issue in psql scanner. If you escape the semicolon or\n>> force a single query execution (say with psql -c), it works as expected.\n> \n> Applying the following diff (not sending a patch to avoid breaking the cfbot)\n> the issue and doesn't seem to break anything else:\n\nRight. 
Here is a new patch with that fix added and a small conflict \nresolved.", "msg_date": "Fri, 19 Mar 2021 14:49:33 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Fri, Mar 19, 2021 at 8:49 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> Right. Here is a new patch with that fix added and a small conflict\n> resolved.\n\nGreat.\n\nIt seems print_function_sqlbody() is not protected to avoid receiving\na function that hasn't a standard sql body in\nsrc/backend/utils/adt/ruleutils.c:3292, but instead it has an assert\nthat gets hit with something like this:\n\nCREATE FUNCTION foo() RETURNS int LANGUAGE SQL AS $$ SELECT 1 $$;\nSELECT pg_get_function_sqlbody('foo'::regproc);\n\n--\nJaime Casanova\nDirector de Servicios Profesionales\nSYSTEMGUARDS - Consultores de PostgreSQL\n\n\n", "msg_date": "Tue, 23 Mar 2021 23:28:55 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Tue, Mar 23, 2021 at 11:28:55PM -0500, Jaime Casanova wrote:\n> On Fri, Mar 19, 2021 at 8:49 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> >\n> > Right. 
Here is a new patch with that fix added and a small conflict\n> > resolved.\n> \n> Great.\n> \n> It seems print_function_sqlbody() is not protected to avoid receiving\n> a function that hasn't a standard sql body in\n> src/backend/utils/adt/ruleutils.c:3292, but instead it has an assert\n> that gets hit with something like this:\n> \n> CREATE FUNCTION foo() RETURNS int LANGUAGE SQL AS $$ SELECT 1 $$;\n> SELECT pg_get_function_sqlbody('foo'::regproc);\n\nIt would also be good to add a regression test checking that we can't define a\nfunction with both a prosrc and a prosqlbody.\n\n\n\n@@ -76,6 +77,7 @@ ProcedureCreate(const char *procedureName,\n Oid languageValidator,\n const char *prosrc,\n const char *probin,\n+ Node *prosqlbody,\n char prokind,\n bool security_definer,\n bool isLeakProof,\n@@ -119,8 +121,6 @@ ProcedureCreate(const char *procedureName,\n /*\n * sanity checks\n */\n- Assert(PointerIsValid(prosrc));\n-\n parameterCount = parameterTypes->dim1;\n\n\nShouldn't we still assert that we either have a valid procsrc or valid\nprosqlbody?\n\nNo other comments apart from that!\n\n\n", "msg_date": "Wed, 31 Mar 2021 18:12:56 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On 31.03.21 12:12, Julien Rouhaud wrote:\n> On Tue, Mar 23, 2021 at 11:28:55PM -0500, Jaime Casanova wrote:\n>> On Fri, Mar 19, 2021 at 8:49 AM Peter Eisentraut\n>> <peter.eisentraut@2ndquadrant.com> wrote:\n>>>\n>>> Right. 
Here is a new patch with that fix added and a small conflict\n>>> resolved.\n>>\n>> Great.\n>>\n>> It seems print_function_sqlbody() is not protected to avoid receiving\n>> a function that hasn't a standard sql body in\n>> src/backend/utils/adt/ruleutils.c:3292, but instead it has an assert\n>> that gets hit with something like this:\n>>\n>> CREATE FUNCTION foo() RETURNS int LANGUAGE SQL AS $$ SELECT 1 $$;\n>> SELECT pg_get_function_sqlbody('foo'::regproc);\n\nfixed\n\n> It would also be good to add a regression test checking that we can't define a\n> function with both a prosrc and a prosqlbody.\n\ndone\n\n> @@ -76,6 +77,7 @@ ProcedureCreate(const char *procedureName,\n> Oid languageValidator,\n> const char *prosrc,\n> const char *probin,\n> + Node *prosqlbody,\n> char prokind,\n> bool security_definer,\n> bool isLeakProof,\n> @@ -119,8 +121,6 @@ ProcedureCreate(const char *procedureName,\n> /*\n> * sanity checks\n> */\n> - Assert(PointerIsValid(prosrc));\n> -\n> parameterCount = parameterTypes->dim1;\n> \n> \n> Shouldn't we still assert that we either have a valid procsrc or valid\n> prosqlbody?\n\nfixed\n\nNew patch attached.", "msg_date": "Fri, 2 Apr 2021 14:25:15 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Fri, Apr 02, 2021 at 02:25:15PM +0200, Peter Eisentraut wrote:\n> \n> New patch attached.\n\nThanks, it all looks good to me. I just spot a few minor formatting issues:\n\n@@ -2968,6 +2973,13 @@ pg_get_functiondef(PG_FUNCTION_ARGS)\n }\n\n /* And finally the function definition ... 
*/\n+ tmp = SysCacheGetAttr(PROCOID, proctup, Anum_pg_proc_prosqlbody, &isnull);\n+ if (proc->prolang == SQLlanguageId && !isnull)\n+ {\n+ print_function_sqlbody(&buf, proctup);\n+ }\n+ else\n+ {\n appendStringInfoString(&buf, \"AS \");\n\n tmp = SysCacheGetAttr(PROCOID, proctup, Anum_pg_proc_probin, &isnull);\n@@ -2999,6 +3011,7 @@ pg_get_functiondef(PG_FUNCTION_ARGS)\n appendBinaryStringInfo(&buf, dq.data, dq.len);\n appendStringInfoString(&buf, prosrc);\n appendBinaryStringInfo(&buf, dq.data, dq.len);\n+ }\n\nThe curly braces could probably be removed for the if branch, and the code in\nthe else branch isn't properly indented.\n\nOther occurences:\n\n+ else\n+ {\n+ src = TextDatumGetCString(tmp);\n+\n+ callback_arg.prosrc = src;\n+\n /*\n * Set up to handle parameters while parsing the function body. We need a\n * dummy FuncExpr node containing the already-simplified arguments to pass\n@@ -4317,6 +4337,7 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,\n querytree = transformTopLevelStmt(pstate, linitial(raw_parsetree_list));\n\n free_parsestate(pstate);\n+ }\n\nand\n\n+ else\n+ {\n+ char *src;\n+\n+ src = TextDatumGetCString(tmp);\n+\n+ callback_arg.prosrc = src;\n+\n /*\n * Set up to handle parameters while parsing the function body. We can\n * use the FuncExpr just created as the input for\n@@ -4829,18 +4878,6 @@ inline_set_returning_function(PlannerInfo *root, RangeTblEntry *rte)\n (Node *) fexpr,\n fexpr->inputcollid);\n\n- /*\n\nand\n\n@@ -2968,6 +2973,13 @@ pg_get_functiondef(PG_FUNCTION_ARGS)\n }\n\n /* And finally the function definition ... 
*/\n+ tmp = SysCacheGetAttr(PROCOID, proctup, Anum_pg_proc_prosqlbody, &isnull);\n+ if (proc->prolang == SQLlanguageId && !isnull)\n+ {\n+ print_function_sqlbody(&buf, proctup);\n+ }\n+ else\n+ {\n appendStringInfoString(&buf, \"AS \");\n\n tmp = SysCacheGetAttr(PROCOID, proctup, Anum_pg_proc_probin, &isnull);\n@@ -2999,6 +3011,7 @@ pg_get_functiondef(PG_FUNCTION_ARGS)\n appendBinaryStringInfo(&buf, dq.data, dq.len);\n appendStringInfoString(&buf, prosrc);\n appendBinaryStringInfo(&buf, dq.data, dq.len);\n+ }\n\nand\n\n@@ -12290,7 +12309,11 @@ dumpFunc(Archive *fout, const FuncInfo *finfo)\n * versions would set it to \"-\". There are no known cases in which prosrc\n * is unused, so the tests below for \"-\" are probably useless.\n */\n- if (probin[0] != '\\0' && strcmp(probin, \"-\") != 0)\n+ if (prosqlbody)\n+ {\n+ appendPQExpBufferStr(asPart, prosqlbody);\n+ }\n\n\nAre you planning to run pg_indent before committing or would that add too much\nnoise?\n\nAnyway since it's only stylistic issues and the feature freeze is getting\ncloser I'm marking the patch as ready for committer.\n\n\n", "msg_date": "Sat, 3 Apr 2021 11:39:43 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On 03.04.21 05:39, Julien Rouhaud wrote:\n> Are you planning to run pg_indent before committing or would that add too much\n> noise?\n\nYeah, I figured I'd leave that for later, to not bloat the patch so much.\n\n> Anyway since it's only stylistic issues and the feature freeze is getting\n> closer I'm marking the patch as ready for committer.\n\nCommitted. Thanks!\n\n\n", "msg_date": "Wed, 7 Apr 2021 21:55:40 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Committed. 
Thanks!\n\nBuildfarm suggests this has some issues under force_parallel_mode.\nI'm wondering about missed fields in outfuncs/readfuncs, or the like.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 07 Apr 2021 16:22:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Wed, Apr 07, 2021 at 04:22:17PM -0400, Tom Lane wrote:\n> Buildfarm suggests this has some issues under force_parallel_mode.\n> I'm wondering about missed fields in outfuncs/readfuncs, or the like.\n\nThe problem looks a bit more fundamental to me, as there seems to be\nsome confusion with the concept of what should be the query string \nwhen it comes to prosqlbody with a parallel run, as it replaces prosrc\nin some cases where the function uses SQL as language. If the\nbuildfarm cannot be put back to green, could it be possible to revert\nthis patch?\n--\nMichael", "msg_date": "Thu, 8 Apr 2021 12:28:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Apr 07, 2021 at 04:22:17PM -0400, Tom Lane wrote:\n>> Buildfarm suggests this has some issues under force_parallel_mode.\n>> I'm wondering about missed fields in outfuncs/readfuncs, or the like.\n\n> The problem looks a bit more fundamental to me, as there seems to be\n> some confusion with the concept of what should be the query string \n> when it comes to prosqlbody with a parallel run, as it replaces prosrc\n> in some cases where the function uses SQL as language. If the\n> buildfarm cannot be put back to green, could it be possible to revert\n> this patch?\n\nAndres pushed a stopgap fix. We might end up reverting the patch\naltogether for v14, but I don't want to be hasty. 
This should be enough\nto let people take advantage of the last few hours before feature freeze.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 08 Apr 2021 01:16:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Hi,\n\nOn 2021-04-08 01:16:02 -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Wed, Apr 07, 2021 at 04:22:17PM -0400, Tom Lane wrote:\n> >> Buildfarm suggests this has some issues under force_parallel_mode.\n> >> I'm wondering about missed fields in outfuncs/readfuncs, or the like.\n>\n> > The problem looks a bit more fundamental to me, as there seems to be\n> > some confusion with the concept of what should be the query string\n> > when it comes to prosqlbody with a parallel run, as it replaces prosrc\n> > in some cases where the function uses SQL as language. If the\n> > buildfarm cannot be put back to green, could it be possible to revert\n> > this patch?\n>\n> Andres pushed a stopgap fix.\n\nLet's hope that it does fix it on the BF as well. One holdup was that\ncheck-world didn't succeed with force_parallel_mode=regress even after\nthe fix - but that turned out to be the fault of\n\ncommit 5fd9dfa5f50e4906c35133a414ebec5b6d518493 (HEAD)\nAuthor: Bruce Momjian <bruce@momjian.us>\nDate: 2021-04-07 13:06:47 -0400\n\n Move pg_stat_statements query jumbling to core.\n\net al.\n\n\n> We might end up reverting the patch altogether for v14, but I don't\n> want to be hasty. This should be enough to let people take advantage\n> of the last few hours before feature freeze.\n\nYea, I think it'd be good to make that decision after a decent night of\nsleep or two. And an actual look at the issues the patch might (or might\nnot) have.\n\n\nIndependent of this patch, it might be a good idea to have\nExecInitParallelPlan() be robust against NULL querystrings. 
Places like\nexecutor_errposition() are certainly trying to be...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 7 Apr 2021 22:27:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Independent of this patch, it might be a good idea to have\n> ExecInitParallelPlan() be robust against NULL querystrings. Places like\n> executor_errposition() are certainly trying to be...\n\nFWIW, I think the long-term drift of things is definitely that\nwe want to have the querystring available everywhere. Code like\nexecutor_errposition is from an earlier era before we were trying\nto enforce that. In particular, if the querystring is available in\nthe leader and not the workers, then you will get different error\nreporting behavior in parallel query than non-parallel query, which\nis surely a bad thing.\n\nSo IMO what you did here is definitely a short-term thing that\nwe should be looking to revert. The question at hand is why\nPeter's patch broke this in the first place, and how hard it\nwill be to fix it properly. 
I'm entirely on board with reverting\nthe feature if that isn't readily fixable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 08 Apr 2021 01:41:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Wed, Apr 07, 2021 at 10:27:59PM -0700, Andres Freund wrote:\n> \n> One holdup was that\n> check-world didn't succeed with force_parallel_mode=regress even after\n> the fix - but that turned out to be the fault of\n> \n> commit 5fd9dfa5f50e4906c35133a414ebec5b6d518493 (HEAD)\n> Author: Bruce Momjian <bruce@momjian.us>\n> Date: 2021-04-07 13:06:47 -0400\n> \n> Move pg_stat_statements query jumbling to core.\n> \n> et al.\n\nYep, I'm on it!\n\n\n", "msg_date": "Thu, 8 Apr 2021 13:46:26 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Wed, Apr 07, 2021 at 10:27:59PM -0700, Andres Freund wrote:\n>> One holdup was that\n>> check-world didn't succeed with force_parallel_mode=regress even after\n>> the fix - but that turned out to be the fault of\n\n>>>> Move pg_stat_statements query jumbling to core.\n\n> Yep, I'm on it!\n\nSo far the buildfarm seems to be turning green after b3ee4c503 ...\nso I wonder what extra condition is needed to cause the failure\nAndres is seeing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 08 Apr 2021 02:05:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Hi,\n\nOn 2021-04-08 02:05:25 -0400, Tom Lane wrote:\n> So far the buildfarm seems to be turning green after b3ee4c503 ...\n> so I wonder what extra condition is needed to cause the failure\n> Andres is seeing.\n\nNothing special, really. 
Surprised the BF doesn't see it:\n\nandres@awork3:~/build/postgres/dev-assert/vpath$ cat /tmp/test.conf\nforce_parallel_mode=regress\nandres@awork3:~/build/postgres/dev-assert/vpath$ make -j48 -s && EXTRA_REGRESS_OPTS='--temp-config /tmp/test.conf' make -s -C contrib/pg_stat_statements/ check\nAll of PostgreSQL successfully made. Ready to install.\n...\nThe differences that caused some tests to fail can be viewed in the\nfile \"/home/andres/build/postgres/dev-assert/vpath/contrib/pg_stat_statements/regression.diffs\". A copy of the test summary that you see\nabove is saved in the file \"/home/andres/build/postgres/dev-assert/vpath/contrib/pg_stat_statements/regression.out\".\n...\n\nandres@awork3:~/build/postgres/dev-assert/vpath$ head -n 30 /home/andres/build/postgres/dev-assert/vpath/contrib/pg_stat_statements/regression.diffs\ndiff -du10 /home/andres/src/postgresql/contrib/pg_stat_statements/expected/pg_stat_statements.out /home/andres/build/postgres/dev-assert/vpath/contrib/pg_stat_statements/results/pg_stat_statements.out\n--- /home/andres/src/postgresql/contrib/pg_stat_statements/expected/pg_stat_statements.out\t2021-04-06 09:08:42.688697932 -0700\n+++ /home/andres/build/postgres/dev-assert/vpath/contrib/pg_stat_statements/results/pg_stat_statements.out\t2021-04-07 23:30:26.876071024 -0700\n@@ -118,37 +118,38 @@\n ?column? 
| ?column?\n ----------+----------\n 1 | test\n (1 row)\n\n DEALLOCATE pgss_test;\n SELECT query, calls, rows FROM pg_stat_statements ORDER BY query COLLATE \"C\";\n query | calls | rows\n ------------------------------------------------------------------------------+-------+------\n PREPARE pgss_test (int) AS SELECT $1, $2 LIMIT $3 | 1 | 1\n- SELECT $1 +| 4 | 4\n+ PREPARE pgss_test (int) AS SELECT $1, 'test' LIMIT 1; | 1 | 1\n+ SELECT $1 +| 8 | 8\n +| |\n AS \"text\" | |\n- SELECT $1 + $2 | 2 | 2\n- SELECT $1 + $2 + $3 AS \"add\" | 3 | 3\n- SELECT $1 AS \"float\" | 1 | 1\n- SELECT $1 AS \"int\" | 2 | 2\n+ SELECT $1 + $2 | 4 | 4\n+ SELECT $1 + $2 + $3 AS \"add\" | 6 | 6\n+ SELECT $1 AS \"float\" | 2 | 2\n+ SELECT $1 AS \"int\" | 4 | 4\n SELECT $1 AS i UNION SELECT $2 ORDER BY i | 1 | 2\n- SELECT $1 || $2 | 1 | 1\n- SELECT pg_stat_statements_reset() | 1 | 1\n\n\nToo tired to figure out why the BF doesn't see this. Perhaps the extra\nsettings aren't used because it's scripted as an install check?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 7 Apr 2021 23:33:20 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Hi,\n\nOn 2021-04-08 01:41:40 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Independent of this patch, it might be a good idea to have\n> > ExecInitParallelPlan() be robust against NULL querystrings. Places like\n> > executor_errposition() are certainly trying to be...\n> \n> FWIW, I think the long-term drift of things is definitely that\n> we want to have the querystring available everywhere. Code like\n> executor_errposition is from an earlier era before we were trying\n> to enforce that. 
In particular, if the querystring is available in\n> the leader and not the workers, then you will get different error\n> reporting behavior in parallel query than non-parallel query, which\n> is surely a bad thing.\n\nYea, I think it's a sensible direction - but I think we should put the\nline in the sand earlier on / higher up than ExecInitParallelPlan().\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 7 Apr 2021 23:35:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Wed, Apr 07, 2021 at 11:33:20PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2021-04-08 02:05:25 -0400, Tom Lane wrote:\n> > So far the buildfarm seems to be turning green after b3ee4c503 ...\n> > so I wonder what extra condition is needed to cause the failure\n> > Andres is seeing.\n> \n> Nothing special, really. Surprised the BF doesn't see it:\n> \n> andres@awork3:~/build/postgres/dev-assert/vpath$ cat /tmp/test.conf\n> force_parallel_mode=regress\n> andres@awork3:~/build/postgres/dev-assert/vpath$ make -j48 -s && EXTRA_REGRESS_OPTS='--temp-config /tmp/test.conf' make -s -C contrib/pg_stat_statements/ check\n> All of PostgreSQL successfully made. Ready to install.\n> ...\n> The differences that caused some tests to fail can be viewed in the\n> file \"/home/andres/build/postgres/dev-assert/vpath/contrib/pg_stat_statements/regression.diffs\". A copy of the test summary that you see\n> above is saved in the file \"/home/andres/build/postgres/dev-assert/vpath/contrib/pg_stat_statements/regression.out\".\n> ...\n\nI think this is because the buildfarm client is running installcheck for the\ncontribs rather than check, and pg_stat_statements/Makefile has:\n\n# Disabled because these tests require \"shared_preload_libraries=pg_stat_statements\",\n# which typical installcheck users do not have (e.g. 
buildfarm clients).\nNO_INSTALLCHECK = 1\n\n\n\n", "msg_date": "Thu, 8 Apr 2021 14:40:36 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Wed, Apr 07, 2021 at 11:33:20PM -0700, Andres Freund wrote:\n>> Nothing special, really. Surprised the BF doesn't see it:\n\n> I think this is because the buildfarm client is running installcheck for the\n> contribs rather than check, and pg_stat_statements/Makefile has:\n> # Disabled because these tests require \"shared_preload_libraries=pg_stat_statements\",\n> # which typical installcheck users do not have (e.g. buildfarm clients).\n> NO_INSTALLCHECK = 1\n\nNo, because if that were the explanation then we'd be getting no\nbuildfarm coverage at all for pg_stat_statements. Which aside\nfrom being awful contradicts the results at coverage.postgresql.org.\n\nI think Andres has the right idea that there's some more-subtle\nvariation in the test conditions, but (yawn) too tired to look\ninto it right now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 08 Apr 2021 02:58:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Wed, Apr 07, 2021 at 11:35:14PM -0700, Andres Freund wrote:\n> On 2021-04-08 01:41:40 -0400, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>> FWIW, I think the long-term drift of things is definitely that\n>> we want to have the querystring available everywhere. Code like\n>> executor_errposition is from an earlier era before we were trying\n>> to enforce that. 
In particular, if the querystring is available in\n>> the leader and not the workers, then you will get different error\n>> reporting behavior in parallel query than non-parallel query, which\n>> is surely a bad thing.\n> \n> Yea, I think it's a sensible direction - but I think we should put the\n> line in the sand earlier on / higher up than ExecInitParallelPlan().\n\nIndeed, I agree that enforcing the availability of querystring\neverywhere sounds like a sensible thing to do in terms of consistency,\nand that's my impression when I scanned the parallel execution code,\nand I don't really get why SQL function bodies should not bind by this\nrule. Would people object if I add an open item to track that?\n--\nMichael", "msg_date": "Thu, 8 Apr 2021 16:54:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Thu, Apr 08, 2021 at 04:54:56PM +0900, Michael Paquier wrote:\n> On Wed, Apr 07, 2021 at 11:35:14PM -0700, Andres Freund wrote:\n> > On 2021-04-08 01:41:40 -0400, Tom Lane wrote:\n> >> Andres Freund <andres@anarazel.de> writes:\n> >> FWIW, I think the long-term drift of things is definitely that\n> >> we want to have the querystring available everywhere. Code like\n> >> executor_errposition is from an earlier era before we were trying\n> >> to enforce that. 
In particular, if the querystring is available in\n> >> the leader and not the workers, then you will get different error\n> >> reporting behavior in parallel query than non-parallel query, which\n> >> is surely a bad thing.\n> > \n> > Yea, I think it's a sensible direction - but I think we should put the\n> > line in the sand earlier on / higher up than ExecInitParallelPlan().\n> \n> Indeed, I agree that enforcing the availability of querystring\n> everywhere sounds like a sensible thing to do in terms of consistency,\n> and that's my impression when I scanned the parallel execution code,\n> and I don't really get why SQL function bodies should not bind by this\n> rule. Would people object if I add an open item to track that?\n\nIt makes sense, +1 for an open item.\n\n\n", "msg_date": "Thu, 8 Apr 2021 19:11:21 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Thu, Apr 08, 2021 at 02:58:02AM -0400, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Wed, Apr 07, 2021 at 11:33:20PM -0700, Andres Freund wrote:\n> >> Nothing special, really. Surprised the BF doesn't see it:\n> \n> > I think this is because the buildfarm client is running installcheck for the\n> > contribs rather than check, and pg_stat_statements/Makefile has:\n> > # Disabled because these tests require \"shared_preload_libraries=pg_stat_statements\",\n> > # which typical installcheck users do not have (e.g. buildfarm clients).\n> > NO_INSTALLCHECK = 1\n> \n> No, because if that were the explanation then we'd be getting no\n> buildfarm coverage at all for pg_stat_statements. 
Which aside\n> from being awful contradicts the results at coverage.postgresql.org.\n\nIs there any chance that coverage.postgresql.org isn't backed by the buildfarm\nclient but a plain make check-world or something like that?\n\n> I think Andres has the right idea that there's some more-subtle\n> variation in the test conditions, but (yawn) too tired to look\n> into it right now.\n\nI tried to look at some force-parallel-mode animal, like mantid, and I don't\nsee any evidence of pg_stat_statements being run by a \"make check\", and only a\nfew contrib modules seem to have an explicit check phase. However, looking at\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mantid&dt=2021-04-08%2007%3A07%3A05\nI see\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=mantid&dt=2021-04-08%2007%3A07%3A05&stg=contrib-install-check-C:\n\n[...]\nmake -C pg_stat_statements installcheck\nmake[1]: Entering directory `/u1/tac/build-farm-11/buildroot/HEAD/pgsql.build/contrib/pg_stat_statements'\nmake[1]: Nothing to be done for `installcheck'.\n\n\n", "msg_date": "Thu, 8 Apr 2021 19:19:36 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Thu, Apr 08, 2021 at 02:58:02AM -0400, Tom Lane wrote:\n>> No, because if that were the explanation then we'd be getting no\n>> buildfarm coverage at all for for pg_stat_statements. Which aside\n>> from being awful contradicts the results at coverage.postgresql.org.\n\n> Is there any chance that coverage.postgresql.org isn't backed by the buildfarm\n> client but a plain make check-world or something like that?\n\nHmm, I think you're right. Poking around in the log files from one\nof my own buildfarm animals, there's no evidence that pg_stat_statements\nis being tested at all. 
Needless to say, that's just horrid :-(\n\nI see that contrib/test_decoding also sets NO_INSTALLCHECK = 1,\nand the reason it gets tested is that the buildfarm script has\na special module for that. I guess we need to clone that module,\nor maybe better, find a way to generalize it.\n\nThere are also some src/test/modules modules that set NO_INSTALLCHECK,\nbut apparently those do have coverage (modules-check is the step that\nruns their SQL tests, and then the TAP tests if any get broken out\nas separate buildfarm steps).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 08 Apr 2021 12:21:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "\nOn 4/8/21 2:40 AM, Julien Rouhaud wrote:\n> On Wed, Apr 07, 2021 at 11:33:20PM -0700, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2021-04-08 02:05:25 -0400, Tom Lane wrote:\n>>> So far the buildfarm seems to be turning green after b3ee4c503 ...\n>>> so I wonder what extra condition is needed to cause the failure\n>>> Andres is seeing.\n>> Nothing special, really. Surprised the BF doesn't see it:\n>>\n>> andres@awork3:~/build/postgres/dev-assert/vpath$ cat /tmp/test.conf\n>> force_parallel_mode=regress\n>> andres@awork3:~/build/postgres/dev-assert/vpath$ make -j48 -s && EXTRA_REGRESS_OPTS='--temp-config /tmp/test.conf' make -s -C contrib/pg_stat_statements/ check\n>> All of PostgreSQL successfully made. Ready to install.\n>> ...\n>> The differences that caused some tests to fail can be viewed in the\n>> file \"/home/andres/build/postgres/dev-assert/vpath/contrib/pg_stat_statements/regression.diffs\". 
A copy of the test summary that you see\n>> above is saved in the file \"/home/andres/build/postgres/dev-assert/vpath/contrib/pg_stat_statements/regression.out\".\n>> ...\n> Is think this is because the buildfarm client is running installcheck for the\n> contribs rather than check, and pg_stat_statements/Makefile has:\n>\n> # Disabled because these tests require \"shared_preload_libraries=pg_stat_statements\",\n> # which typical installcheck users do not have (e.g. buildfarm clients).\n> NO_INSTALLCHECK = 1\n>\n>\n>\n\n\nYeah, Julien is right, we run \"make check\" for these in src/test/modules\nbut I missed contrib. I have fixed this on crake so we get some\nimmediate coverage and a fix will be pushed to github shortly.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 8 Apr 2021 13:03:12 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On 2021-Apr-08, Julien Rouhaud wrote:\n\n> On Thu, Apr 08, 2021 at 02:58:02AM -0400, Tom Lane wrote:\n\n> > No, because if that were the explanation then we'd be getting no\n> > buildfarm coverage at all for for pg_stat_statements. Which aside\n> > from being awful contradicts the results at coverage.postgresql.org.\n> \n> Is there any chance that coverage.postgresql.org isn't backed by the buildfarm\n> client but a plain make check-world or something like that?\n\nYes, coverage.pg.org runs \"make check-world\".\n\nMaybe it would make sense to change that script, so that it runs the\nbuildfarm's run_build.pl script instead of \"make check-world\". 
That\nwould make coverage.pg.org report what the buildfarm actually tests ...\nit would have made this problem a bit more obvious.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n", "msg_date": "Thu, 8 Apr 2021 13:08:02 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Thu, Apr 08, 2021 at 12:21:05PM -0400, Tom Lane wrote:\n> I see that contrib/test_decoding also sets NO_INSTALLCHECK = 1,\n> and the reason it gets tested is that the buildfarm script has\n> a special module for that. I guess we need to clone that module,\n> or maybe better, find a way to generalize it.\n> \n> There are also some src/test/modules modules that set NO_INSTALLCHECK,\n> but apparently those do have coverage (modules-check is the step that\n> runs their SQL tests, and then the TAP tests if any get broken out\n> as separate buildfarm steps).\n\nFWIW, on Windows any module with NO_INSTALLCHECK does not get tested\nas we rely mostly on an installed server to do all the tests and avoid\nthe performance impact of setting up a new server for each module's\ntest.\n--\nMichael", "msg_date": "Fri, 9 Apr 2021 08:01:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Thu, Apr 08, 2021 at 04:54:56PM +0900, Michael Paquier wrote:\n>> Indeed, I agree that enforcing the availability of querystring\n>> everywhere sounds like a sensible thing to do in terms of consistency,\n>> and that's my impression when I scanned the parallel execution code,\n>> and I don't really get why SQL function bodies should not bind by this\n>> rule. 
Would people object if I add an open item to track that?\n\n> It makes sense, +1 for an open item.\n\nSo here's what I propose to do about this.\n\n0001 attached reverts the patch's change to remove the NOT NULL\nconstraint on pg_proc.prosrc. I think that was an extremely poor\ndecision; it risks breaking non-core PLs, and for that matter I'm not\nsure the core PLs wouldn't crash on null prosrc. It is not any harder\nfor the SQL-language-related code to condition its checks on not-null\nprosqlbody instead of null prosrc. Of course that then requires us to\nput something into prosrc for these newfangled functions, but in 0001\nI just used an empty string. (This patch also adds an Assert to\nstandard_ExecutorStart checking that some source text was provided,\nresponding to Andres' point that we should be checking that upstream\nof parallel query. We should then revert b3ee4c503, but for simplicity\nI didn't include that here.)\n\n0002 addresses a different missing-source-text problem, which is that\nthe patch didn't bother to provide source text while running parse\nanalysis on the SQL function body. That means no error cursors for\nproblems; which might seem cosmetic on the toy example I added to the\nregression tests, but it won't be for people writing functions that\nare dozens or hundreds of lines long.\n\nFinally, 0003 might be a bit controversial: it changes the stored\nprosrc for new-style SQL functions to be the query text of the CREATE\nFUNCTION command. The main argument I can see being made against this\nis that it'll bloat the pg_proc entry. But I think that that's\nnot a terribly reasonable concern, because the source text is going\nto be a good deal smaller than the nodeToString representation in\njust about every case.\n\nThe real value of 0003 of course would be to get an error cursor at\nruntime, but I failed to create an example where that would happen\ntoday. 
Right now there are only three calls of executor_errposition,\nand all of them are for cases that are already rejected by the parser,\nso they're effectively unreachable. A scenario that seems more likely\nto be reachable is a failure reported during function inlining, but\nmost of the reasons I can think of for that also seem unreachable given\nthe already-parse-analyzed nature of the function body in these cases.\nMaybe I'm just under-caffeinated today.\n\nAnother point here is that for any error cursor to appear, we need\nnot only source text at hand but also token locations in the query\ntree nodes. Right now, since readfuncs.c intentionally discards\nthose locations, we won't have that. There is not-normally-compiled\nlogic to reload those location fields, though, and I think before too\nlong we'll want to enable it in some mainstream cases --- notably\nparallel query's shipping of querytrees to workers. However, until\nit gets easier to reach cases where an error-with-location can be\nthrown from the executor, I don't feel a need to do that.\n\nI do have ambitions to make execution-time errors produce cursors\nin more cases, so I think this will come to fruition before long,\nbut not in v14.\n\nOne could make an argument, therefore, for holding off 0003 until\nthere's more support for execution-time error cursors. I don't\nthink we should though, for two reasons:\n1. It'd be better to keep the pg_proc representation of new-style\nSQL functions stable across versions.\n2. Storing the CREATE text means we'll capture comments associated\nwith the function text, which is something that at least some\npeople will complain about the loss of. 
Admittedly we have no way\nto re-integrate the comments into the de-parsed body, but some\nfolks might be satisfied with grabbing the prosrc text.\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 09 Apr 2021 12:09:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "\nOn 4/9/21 12:09 PM, Tom Lane wrote:\n> One could make an argument, therefore, for holding off 0003 until\n> there's more support for execution-time error cursors. I don't\n> think we should though, for two reasons:\n> 1. It'd be better to keep the pg_proc representation of new-style\n> SQL functions stable across versions.\n> 2. Storing the CREATE text means we'll capture comments associated\n> with the function text, which is something that at least some\n> people will complain about the loss of. Admittedly we have no way\n> to re-integrate the comments into the de-parsed body, but some\n> folks might be satisfied with grabbing the prosrc text.\n>\n\n\n+many for storing query text.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 9 Apr 2021 12:32:21 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Fri, Apr 09, 2021 at 12:09:43PM -0400, Tom Lane wrote:\n> Finally, 0003 might be a bit controversial: it changes the stored\n> prosrc for new-style SQL functions to be the query text of the CREATE\n> FUNCTION command. The main argument I can see being made against this\n> is that it'll bloat the pg_proc entry. 
But I think that that's\n> not a terribly reasonable concern\n\nSuch storage cost should be acceptable, but ...\n\n> The real value of 0003 of course would be to get an error cursor at\n> runtime\n\nA key benefit of $SUBJECT is the function body following DDL renames:\n\ncreate table foo ();\ninsert into foo default values;\ncreate function count_it() returns int begin atomic return (select count(*) from foo); end;\nselect count_it();\ninsert into foo default values;\nalter table foo rename to some_new_long_table_name;\nselect count_it(); -- still works\n\nAfter the rename, any stored prosrc is obsolete. To show accurate error\ncursors, deparse prosqlbody and use that in place of prosrc.\n\n\n", "msg_date": "Fri, 9 Apr 2021 20:30:14 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Fri, Apr 09, 2021 at 12:09:43PM -0400, Tom Lane wrote:\n>> The real value of 0003 of course would be to get an error cursor at\n>> runtime\n\n> A key benefit of $SUBJECT is the function body following DDL renames:\n\nAgreed. But ...\n\n> After the rename, any stored prosrc is obsolete. To show accurate error\n> cursors, deparse prosqlbody and use that in place of prosrc.\n\n... I'm not sure this conclusion follows. There are two problems with it:\n\n1. I don't see an acceptably low-overhead way to mechanize it.\nDeparsing prosqlbody is unlikely to be safe in a post-error transaction,\nbut surely we'd not want to expend that cost in advance on every use\nof a SQL function. Even ignoring that, the act of deparsing would not\nin itself tell you what offset to use. Should we deparse and then\nre-parse to get a new node tree with corrected token locations?\n\n2. The reason we can get away with showing a fragment of a large query\n(or function body) in an error message is that the user is supposed to\nbe able to correlate the display with what she wrote. 
That assumption\nfalls to the ground if the display is based on a deconstruction that is\nvirtually certain to have line breaks in different places, not to mention\nthat the details of what is shown may be substantially different from the\noriginal.\n\nStill, I take your point that the original text may be less useful\nfor this purpose than I was supposing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 10 Apr 2021 10:52:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Sat, Apr 10, 2021 at 10:52:15AM -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > On Fri, Apr 09, 2021 at 12:09:43PM -0400, Tom Lane wrote:\n> >> The real value of 0003 of course would be to get an error cursor at\n> >> runtime\n> \n> > A key benefit of $SUBJECT is the function body following DDL renames:\n> \n> Agreed. But ...\n> \n> > After the rename, any stored prosrc is obsolete. To show accurate error\n> > cursors, deparse prosqlbody and use that in place of prosrc.\n> \n> ... I'm not sure this conclusion follows. There are two problems with it:\n> \n> 1. I don't see an acceptably low-overhead way to mechanize it.\n> Deparsing prosqlbody is unlikely to be safe in a post-error transaction,\n> but surely we'd not want to expend that cost in advance on every use\n> of a SQL function. Even ignoring that, the act of deparsing would not\n> in itself tell you what offset to use. Should we deparse and then\n> re-parse to get a new node tree with corrected token locations?\n\nIf you really want those error cursors, yes. (I feel we can continue to live\nwithout them; their absence is no more important than it was ten years ago.)\nOne can envision several ways to cache that high-overhead work. 
Otherwise,\nthe usual PostgreSQL answer would be to omit an error cursor, not show one\nthat reflects an obsolete sense of the function.\n\nIf the original CREATE FUNCTION query text were so valuable, I'd be arguing to\npreserve it across dump/reload.\n\n> 2. The reason we can get away with showing a fragment of a large query\n> (or function body) in an error message is that the user is supposed to\n> be able to correlate the display with what she wrote. That assumption\n> falls to the ground if the display is based on a deconstruction that is\n> virtually certain to have line breaks in different places, not to mention\n> that the details of what is shown may be substantially different from the\n> original.\n\nPreferences on this matter will be situation-dependent. If I do CREATE\nFUNCTION f() ...; SELECT f() all in one sitting, then it's fine for an error\nin the SELECT to show the function I wrote. If I'm calling a function defined\nyears ago, I'm likely to compare the error report to \"\\sf foo\" and not likely\nto compare it to a years-old record of the SQL statement. 
I think it's fine\nto expect users to consult \"\\sf foo\" when the user is in doubt.\n\n> Still, I take your point that the original text may be less useful\n> for this purpose than I was supposing.\n\n\n", "msg_date": "Sat, 10 Apr 2021 13:03:26 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Based on the discussion so far, I've committed 0001 and 0002 but not 0003,\nand marked this open issue as closed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 15 Apr 2021 17:29:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Tue, Jun 30, 2020 at 02:51:38PM -0400, Tom Lane wrote:\n> The point remains that exposing the function body's dependencies will\n> constrain restore order far more than we are accustomed to see. It\n> might be possible to build examples that flat out can't be restored,\n> even granting that we teach pg_dump how to break dependency loops\n> by first creating the function with empty body and later redefining\n> it with the real body. (Admittedly, if that's possible then you\n> likely could make it happen with views too. 
But somehow it seems\n> more likely that people would create spaghetti dependencies for\n> functions than views.)\n\nShould we be okay releasing v14 without support for breaking function\ndependency loops, or does that call for an open item?\n\n-- example\ncreate function f() returns int language sql return 1;\ncreate function g() returns int language sql return f();\ncreate or replace function f() returns int language sql return coalesce(2, g());\n\n-- but when a view can break the cycle, pg_dump still does so\ncreate view v as select 1 as c;\ncreate function f() returns int language sql return coalesce(0, (select count(*) from v));\ncreate or replace view v as select f() as c;\n\n\n", "msg_date": "Sun, 18 Apr 2021 11:55:46 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> Should we be okay releasing v14 without support for breaking function\n> dependency loops, or does that call for an open item?\n\nOh! That should definitely be an open item. It doesn't seem\nthat hard to do something similar to what we do for views,\ni.e. create a dummy function and replace it later.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 18 Apr 2021 15:08:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Sun, Apr 18, 2021 at 03:08:44PM -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > Should we be okay releasing v14 without support for breaking function\n> > dependency loops, or does that call for an open item?\n> \n> Oh! That should definitely be an open item. It doesn't seem\n> that hard to do something similar to what we do for views,\n> i.e. 
create a dummy function and replace it later.\n\nI added\nhttps://wiki.postgresql.org/index.php?title=PostgreSQL_14_Open_Items&type=revision&diff=35926&oldid=35925\n\n\n", "msg_date": "Sun, 18 Apr 2021 16:15:05 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "... BTW, a dependency loop is also possible without using this feature,\nby abusing default-value expressions:\n\ncreate function f1(x int, y int) returns int language sql\nas 'select $1 + $2';\ncreate function f2(x int, y int default f1(1,2)) returns int language sql\nas 'select $1 + $2';\ncreate or replace function f1(x int, y int default f2(11,12)) returns int language sql\nas 'select $1 + $2';\n\nThe actual use-case for that seems pretty thin, so we never bothered\nto worry about it before.  But if we're going to build loop-breaking\nlogic to handle function body dependencies, it should deal with this\ntoo.  I think that all that's required is for the initial dummy\nfunction declaration to omit defaults as well as providing a dummy\nbody.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 18 Apr 2021 17:33:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Wed, Apr 7, 2021 at 3:55 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n>\n> Committed. Thanks!\n>\n>\nThis commit break line continuation prompts for unbalanced parentheses in\nthe psql binary. Skimming through this thread, I don't see that this is\nintentional or has been noticed before.\n\nwith psql -X\n\nBefore:\n\njjanes=# asdf (\njjanes(#\n\nNow:\n\njjanes=# asdf (\njjanes-#\n\nI've looked through the parts of the commit that change psql, but didn't\nsee an obvious culprit.\n\nCheers,\n\nJeff\n\n", "msg_date": "Thu, 22 Apr 2021 16:04:18 -0400", "msg_from": "Jeff Janes <jeff.janes@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Thu, Apr 22, 2021 at 04:04:18PM -0400, Jeff Janes wrote:\n> This commit break line continuation prompts for unbalanced parentheses in\n> the psql binary. Skimming through this thread, I don't see that this is\n> intentional or has been noticed before.\n> \n> with psql -X\n> \n> Before:\n> \n> jjanes=# asdf (\n> jjanes(#\n> \n> Now:\n> \n> jjanes=# asdf (\n> jjanes-#\n> \n> I've looked through the parts of the commit that change psql, but didn't\n> see an obvious culprit.\n\nI haven't studied it in detail, but it probably needs something like this.\n\ndiff --git a/src/fe_utils/psqlscan.l b/src/fe_utils/psqlscan.l\nindex 991b7de0b5..0fab48a382 100644\n--- a/src/fe_utils/psqlscan.l\n+++ b/src/fe_utils/psqlscan.l\n@@ -1098,23 +1098,23 @@ psql_scan(PsqlScanState state,\n \t{\n \t\tcase LEXRES_EOL:\t\t/* end of input */\n \t\t\tswitch (state->start_state)\n \t\t\t{\n \t\t\t\tcase INITIAL:\n \t\t\t\tcase xqs:\t\t/* we treat this like INITIAL */\n \t\t\t\t\tif (state->paren_depth > 0)\n \t\t\t\t\t{\n \t\t\t\t\t\tresult = PSCAN_INCOMPLETE;\n \t\t\t\t\t\t*prompt = PROMPT_PAREN;\n \t\t\t\t\t}\n-\t\t\t\t\tif (state->begin_depth > 0)\n+\t\t\t\t\telse if (state->begin_depth > 0)\n \t\t\t\t\t{\n \t\t\t\t\t\tresult = PSCAN_INCOMPLETE;\n \t\t\t\t\t\t*prompt = PROMPT_CONTINUE;\n \t\t\t\t\t}\n \t\t\t\t\telse if (query_buf->len > 0)\n \t\t\t\t\t{\n \t\t\t\t\t\tresult = PSCAN_EOL;\n \t\t\t\t\t\t*prompt = PROMPT_CONTINUE;\n 
\t\t\t\t\t}\n \t\t\t\t\telse\n \t\t\t\t\t{\n\n\n", "msg_date": "Mon, 26 Apr 2021 21:44:07 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On 18.04.21 23:33, Tom Lane wrote:\n> ... BTW, a dependency loop is also possible without using this feature,\n> by abusing default-value expressions:\n> \n> create function f1(x int, y int) returns int language sql\n> as 'select $1 + $2';\n> create function f2(x int, y int default f1(1,2)) returns int language sql\n> as 'select $1 + $2';\n> create or replace function f1(x int, y int default f2(11,12)) returns int language sql\n> as 'select $1 + $2';\n> \n> The actual use-case for that seems pretty thin, so we never bothered\n> to worry about it before. But if we're going to build loop-breaking\n> logic to handle function body dependencies, it should deal with this\n> too. I think that all that's required is for the initial dummy\n> function declaration to omit defaults as well as providing a dummy\n> body.\n\nI have studied this a bit. I'm not sure where the dummy function \ndeclaration should be created. The current dependency-breaking logic in \npg_dump_sort.c doesn't appear to support injecting additional objects \ninto the set of dumpable objects. So we would need to create it perhaps \nin dumpFunc() and then later set flags that indicate whether it will be \nrequired.\n\nAnother option would be that we disallow this at creation time. It \nseems we could detect dependency loops using findDependentObjects(), so \nthis might not be so difficult. The use case for recursive SQL \nfunctions is probably low, at least with the current limited set of \ncontrol flow options in SQL. 
(And you can always use a quoted body to \nwork around it.)\n\n\n\n", "msg_date": "Tue, 27 Apr 2021 09:47:42 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 18.04.21 23:33, Tom Lane wrote:\n>> The actual use-case for that seems pretty thin, so we never bothered\n>> to worry about it before. But if we're going to build loop-breaking\n>> logic to handle function body dependencies, it should deal with this\n>> too. I think that all that's required is for the initial dummy\n>> function declaration to omit defaults as well as providing a dummy\n>> body.\n\n> I have studied this a bit. I'm not sure where the dummy function \n> declaration should be created. The current dependency-breaking logic in \n> pg_dump_sort.c doesn't appear to support injecting additional objects \n> into the set of dumpable objects. So we would need to create it perhaps \n> in dumpFunc() and then later set flags that indicate whether it will be \n> required.\n\nHmm, good point. The existing code that breaks loops involving views\ndepends on the fact that the view relation and the view's ON SELECT\nrule are already treated as distinct objects within pg_dump. So we\njust need to mark the rule object to indicate whether to emit it or\nnot. To make it work for functions, there would have to be a secondary\nobject representing the function body (and the default expressions,\nI guess).\n\nThat's kind of a lot of complication, and inefficiency, for a corner case\nthat may never arise in practice. We've ignored the risk for default\nexpressions, and AFAIR have yet to receive any field complaints about it.\nSo maybe it's okay to do the same for SQL-style function bodies, at least\nfor now.\n\n> Another option would be that we disallow this at creation time.\n\nDon't like that one much. 
The backend shouldn't be in the business\nof rejecting valid commands just because pg_dump might be unable\nto cope later.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 27 Apr 2021 12:16:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On 27.04.21 04:44, Justin Pryzby wrote:\n> On Thu, Apr 22, 2021 at 04:04:18PM -0400, Jeff Janes wrote:\n>> This commit break line continuation prompts for unbalanced parentheses in\n>> the psql binary. Skimming through this thread, I don't see that this is\n>> intentional or has been noticed before.\n>>\n>> with psql -X\n>>\n>> Before:\n>>\n>> jjanes=# asdf (\n>> jjanes(#\n>>\n>> Now:\n>>\n>> jjanes=# asdf (\n>> jjanes-#\n>>\n>> I've looked through the parts of the commit that change psql, but didn't\n>> see an obvious culprit.\n> \n> I haven't studied it in detail, but it probably needs something like this.\n\nYeah, fixed like that.\n\n\n", "msg_date": "Thu, 29 Apr 2021 09:14:22 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On 27.04.21 18:16, Tom Lane wrote:\n> That's kind of a lot of complication, and inefficiency, for a corner case\n> that may never arise in practice. We've ignored the risk for default\n> expressions, and AFAIR have yet to receive any field complaints about it.\n> So maybe it's okay to do the same for SQL-style function bodies, at least\n> for now.\n> \n>> Another option would be that we disallow this at creation time.\n> \n> Don't like that one much. The backend shouldn't be in the business\n> of rejecting valid commands just because pg_dump might be unable\n> to cope later.\n\nSince this is listed as an open item, I want to clarify that I'm \ncurrently not planning to work on this, based on this discussion. 
\nCertainly something to look into sometime later, but it's not in my \nplans right now.\n\n\n", "msg_date": "Mon, 10 May 2021 16:41:58 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 27.04.21 18:16, Tom Lane wrote:\n>> That's kind of a lot of complication, and inefficiency, for a corner case\n>> that may never arise in practice. We've ignored the risk for default\n>> expressions, and AFAIR have yet to receive any field complaints about it.\n>> So maybe it's okay to do the same for SQL-style function bodies, at least\n>> for now.\n\n>>> Another option would be that we disallow this at creation time.\n\n>> Don't like that one much. The backend shouldn't be in the business\n>> of rejecting valid commands just because pg_dump might be unable\n>> to cope later.\n\n> Since this is listed as an open item, I want to clarify that I'm \n> currently not planning to work on this, based on this discussion. \n> Certainly something to look into sometime later, but it's not in my \n> plans right now.\n\nRight, I concur with moving it to the \"won't fix\" category.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 May 2021 11:09:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Mon, May 10, 2021 at 11:09:43AM -0400, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > On 27.04.21 18:16, Tom Lane wrote:\n> >> That's kind of a lot of complication, and inefficiency, for a corner case\n> >> that may never arise in practice. 
We've ignored the risk for default\n> >> expressions, and AFAIR have yet to receive any field complaints about it.\n> >> So maybe it's okay to do the same for SQL-style function bodies, at least\n> >> for now.\n> \n> >>> Another option would be that we disallow this at creation time.\n> \n> >> Don't like that one much. The backend shouldn't be in the business\n> >> of rejecting valid commands just because pg_dump might be unable\n> >> to cope later.\n> \n> > Since this is listed as an open item, I want to clarify that I'm \n> > currently not planning to work on this, based on this discussion. \n> > Certainly something to look into sometime later, but it's not in my \n> > plans right now.\n> \n> Right, I concur with moving it to the \"won't fix\" category.\n\nWorks for me.\n\n\n", "msg_date": "Tue, 11 May 2021 00:49:13 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Wed, Apr 07, 2021 at 09:55:40PM +0200, Peter Eisentraut wrote:\n> Committed. 
Thanks!\n\nI get a NULL pointer dereference if the function body has a doubled semicolon:\n\n create function f() returns int language sql begin atomic select 1;; end;\n\nProgram received signal SIGSEGV, Segmentation fault.\ntransformStmt (pstate=pstate@entry=0x2623978, parseTree=parseTree@entry=0x0) at analyze.c:297\n297 switch (nodeTag(parseTree))\n#0 transformStmt (pstate=pstate@entry=0x2623978, parseTree=parseTree@entry=0x0) at analyze.c:297\n#1 0x00000000006132a4 in interpret_AS_clause (queryString=<optimized out>, sql_body_out=<synthetic pointer>, probin_str_p=<synthetic pointer>, prosrc_str_p=<synthetic pointer>, inParameterNames=<optimized out>, parameterTypes=<optimized out>,\n sql_body_in=<optimized out>, as=<optimized out>, funcname=<optimized out>, languageName=<optimized out>, languageOid=14) at functioncmds.c:937\n#2 CreateFunction (pstate=pstate@entry=0x26213e0, stmt=stmt@entry=0x25fd048) at functioncmds.c:1227\n#3 0x0000000000813e23 in ProcessUtilitySlow (pstate=pstate@entry=0x26213e0, pstmt=pstmt@entry=0x25fd3b8, queryString=queryString@entry=0x25fc040 \"create function f() returns int language sql begin atomic select 1;; end;\",\n context=context@entry=PROCESS_UTILITY_TOPLEVEL, params=params@entry=0x0, queryEnv=queryEnv@entry=0x0, qc=qc@entry=0x7fff4b715b70, dest=0x25fd4a8) at utility.c:1607\n#4 0x0000000000812944 in standard_ProcessUtility (pstmt=0x25fd3b8, queryString=0x25fc040 \"create function f() returns int language sql begin atomic select 1;; end;\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x25fd4a8,\n qc=0x7fff4b715b70) at utility.c:1034\n#5 0x0000000000810efe in PortalRunUtility (portal=portal@entry=0x265fb60, pstmt=0x25fd3b8, isTopLevel=isTopLevel@entry=true, setHoldSnapshot=setHoldSnapshot@entry=false, dest=0x25fd4a8, qc=0x7fff4b715b70) at pquery.c:1147\n#6 0x0000000000811053 in PortalRunMulti (portal=portal@entry=0x265fb60, isTopLevel=isTopLevel@entry=true, setHoldSnapshot=setHoldSnapshot@entry=false, 
dest=dest@entry=0x25fd4a8, altdest=altdest@entry=0x25fd4a8, qc=qc@entry=0x7fff4b715b70) at pquery.c:1310\n#7 0x00000000008115e4 in PortalRun (portal=portal@entry=0x265fb60, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, dest=dest@entry=0x25fd4a8, altdest=altdest@entry=0x25fd4a8, qc=qc@entry=0x7fff4b715b70)\n at pquery.c:786\n#8 0x000000000080d004 in exec_simple_query (query_string=0x25fc040 \"create function f() returns int language sql begin atomic select 1;; end;\") at postgres.c:1214\n#9 0x000000000080ee1f in PostgresMain (argc=argc@entry=1, argv=argv@entry=0x7fff4b716030, dbname=0x2627788 \"test\", username=<optimized out>) at postgres.c:4486\n#10 0x000000000048bc97 in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4507\n#11 BackendStartup (port=0x261f480) at postmaster.c:4229\n#12 ServerLoop () at postmaster.c:1745\n#13 0x000000000077c278 in PostmasterMain (argc=argc@entry=1, argv=argv@entry=0x25f6a00) at postmaster.c:1417\n#14 0x000000000048d51e in main (argc=1, argv=0x25f6a00) at main.c:209\n(gdb) p parseTree\n$1 = (Node *) 0x0\n\n\n", "msg_date": "Sat, 5 Jun 2021 21:44:18 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Sat, Jun 05, 2021 at 09:44:18PM -0700, Noah Misch wrote:\n> On Wed, Apr 07, 2021 at 09:55:40PM +0200, Peter Eisentraut wrote:\n> > Committed. 
Thanks!\n> \n> I get a NULL pointer dereference if the function body has a doubled semicolon:\n> \n> create function f() returns int language sql begin atomic select 1;; end;\n\nYou don't even need a statements to reproduce the problem, a body containing\nonly semi-colon(s) will behave the same.\n\nAttached patch should fix the problem.", "msg_date": "Sun, 6 Jun 2021 15:32:20 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On 06.06.21 09:32, Julien Rouhaud wrote:\n> On Sat, Jun 05, 2021 at 09:44:18PM -0700, Noah Misch wrote:\n>> I get a NULL pointer dereference if the function body has a doubled semicolon:\n>>\n>> create function f() returns int language sql begin atomic select 1;; end;\n> \n> You don't even need a statements to reproduce the problem, a body containing\n> only semi-colon(s) will behave the same.\n> \n> Attached patch should fix the problem.\n\nYour patch filters out empty statements at the parse transformation \nphase, so they are no longer present when you dump the body back out. \nSo your edits in the test expected files don't fit.\n\nI suggest we just prohibit empty statements at the parse stage. 
I don't \nsee a strong reason to allow them, and if we wanted to, we'd have to do \nmore work, e.g., in ruleutils.c to print them back out correctly.", "msg_date": "Mon, 7 Jun 2021 10:52:10 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Mon, Jun 7, 2021 at 4:52 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> Your patch filters out empty statements at the parse transformation\n> phase, so they are no longer present when you dump the body back out.\n> So your edits in the test expected files don't fit.\n\nOh, somehow the tests aren't failing here, I'm not sure what I did wrong.\n\n> I suggest we just prohibit empty statements at the parse stage. I don't\n> see a strong reason to allow them, and if we wanted to, we'd have to do\n> more work, e.g., in ruleutils.c to print them back out correctly.\n\nI always thought extraneous semicolons were tokens to be ignored,\nwhich happens to be internally implemented as empty statements, so\ndeparsing them is not required, similar to deparsing extraneous\nwhitespaces. 
If the spec says otherwise then I agree it's not worth\nimplementing, but otherwise I'm not sure if it's really helpful to\nerror out.\n\n\n", "msg_date": "Mon, 7 Jun 2021 17:10:50 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Mon, Jun 7, 2021 at 4:52 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> Your patch filters out empty statements at the parse transformation\n>> phase, so they are no longer present when you dump the body back out.\n>> So your edits in the test expected files don't fit.\n\n> Oh, somehow the tests aren't failing here, I'm not sure what I did wrong.\n\nModulo getting the tests right ...\n\n>> I suggest we just prohibit empty statements at the parse stage. I don't\n>> see a strong reason to allow them, and if we wanted to, we'd have to do\n>> more work, e.g., in ruleutils.c to print them back out correctly.\n\n> I always thought extraneous semicolons were tokens to be ignored,\n\n... I tend to agree with Julien's position here. It seems really ugly\nto prohibit empty statements just for implementation convenience.\nHowever, the way I'd handle it is to have the grammar remove them,\nwhich is what it does in other contexts. I don't think there's any\nneed to preserve them in ruleutils output --- there's a lot of other\nnormalization we do on the way to that, and this seems to fit in.\n\nBTW, is it just me, or does SQL:2021 fail to permit multiple\nstatements in a procedure at all? After much searching, I found the\nBEGIN ATOMIC ... END syntax, but it's in <triggered SQL statement>,\nin other words the body of a trigger not a procedure. I cannot find\nany production that connects a <routine body> to that. 
There's an\nexample showing use of BEGIN ATOMIC as a procedure statement, so\nthey clearly *meant* to allow it, but it looks like somebody messed\nup the grammar.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Jun 2021 11:27:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Mon, Jun 7, 2021 at 11:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Mon, Jun 7, 2021 at 4:52 PM Peter Eisentraut\n> > <peter.eisentraut@enterprisedb.com> wrote:\n> >> Your patch filters out empty statements at the parse transformation\n> >> phase, so they are no longer present when you dump the body back out.\n> >> So your edits in the test expected files don't fit.\n>\n> > Oh, somehow the tests aren't failing here, I'm not sure what I did wrong.\n>\n> Modulo getting the tests right ...\n\nI can certainly accept that my patch broke the tests, but I just ran\nanother make check-world and it passed without any problem. What am I\nmissing?\n\n\n", "msg_date": "Mon, 7 Jun 2021 23:56:46 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "I wrote:\n> ... I tend to agree with Julien's position here. It seems really ugly\n> to prohibit empty statements just for implementation convenience.\n> However, the way I'd handle it is to have the grammar remove them,\n> which is what it does in other contexts.\n\nConcretely, I think the right fix is per attached.\n\nLike Julien, I don't see any additional change in regression test outputs.\nMaybe Peter thinks there should be some? 
But I think the reverse-listing\nwe get for functest_S_3a is fine.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 07 Jun 2021 15:24:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "\nOn 07.06.21 17:27, Tom Lane wrote:\n> ... I tend to agree with Julien's position here. It seems really ugly\n> to prohibit empty statements just for implementation convenience.\n> However, the way I'd handle it is to have the grammar remove them,\n> which is what it does in other contexts. I don't think there's any\n> need to preserve them in ruleutils output --- there's a lot of other\n> normalization we do on the way to that, and this seems to fit in.\n\nOk, if that's what people prefer.\n\n> BTW, is it just me, or does SQL:2021 fail to permit multiple\n> statements in a procedure at all? After much searching, I found the\n> BEGIN ATOMIC ... END syntax, but it's in <triggered SQL statement>,\n> in other words the body of a trigger not a procedure. I cannot find\n> any production that connects a <routine body> to that. 
There's an\n> example showing use of BEGIN ATOMIC as a procedure statement, so\n> they clearly*meant* to allow it, but it looks like somebody messed\n> up the grammar.\n\nIt's in the SQL/PSM part.\n\n\n", "msg_date": "Mon, 7 Jun 2021 21:49:59 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "On Mon, Jun 07, 2021 at 03:24:33PM -0400, Tom Lane wrote:\n> \n> Concretely, I think the right fix is per attached.\n\n+1, I agree that this approach is better.\n\n\n", "msg_date": "Tue, 8 Jun 2021 09:51:34 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Mon, Jun 07, 2021 at 03:24:33PM -0400, Tom Lane wrote:\n>> Concretely, I think the right fix is per attached.\n\n> +1, I agree that this approach is better.\n\nPushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 08 Jun 2021 12:00:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SQL-standard function body" } ]
[ { "msg_contents": "Attached is a POC patch that teaches nbtree to delete old duplicate\nversions from unique indexes. The optimization targets non-HOT\nduplicate version bloat. Although the patch is rather rough, it\nnevertheless manages to more or less eliminate a whole class of index\nbloat: Unique index bloat from non-HOT updates in workloads where no\ntransaction lasts for more than a few seconds. For example, it\neliminates index bloat with a custom pgbench workload that uses an\nINCLUDE unique index on pgbench_accounts.aid (with abalance as the\nnon-key attribute), instead of the usual accounts primary key.\nSimilarly, with a standard pgbench_accounts primary key alongside an\nextra non-unique index on abalance, the primary key will never have\nany page splits with the patch applied. It's almost as if the updates\nwere actually HOT updates, at least if you focus on the unique index\n(assuming that there are no long-running transactions).\n\nThe patch repurposes the deduplication infrastructure to delete\nduplicates within unique indexes, provided they're actually safe to\nVACUUM. This is somewhat different to the _bt_unique_check() LP_DEAD\nbit setting stuff, in that we have to access heap pages that we\nprobably would not have to access otherwise -- it's something that we\ngo out of our way to make happen at the point that the page is about\nto split, not something that happens in passing at no extra cost. The\ngeneral idea is to exploit the fact that duplicates in unique indexes\nare usually deadwood.\n\nWe only need to \"stay one step ahead\" of the bloat to avoid all page\nsplits in many important cases. So we usually only have to access a\ncouple of heap pages to avoid a page split in each case. In\ntraditional serial/identity column primary key indexes, any page split\nthat happens that isn't a split of the current rightmost page must be\ncaused by version churn. 
It should be possible to avoid these\n\"unnecessary\" page splits altogether (again, barring long-running\ntransactions).\n\nI would like to get early feedback on high level direction. While the\npatch seems quite promising, I am uncertain about my general approach,\nand how it might fit into some much broader effort to control bloat in\ngeneral.\n\nThere are some clear downsides to my approach. The patch has grotty\nheuristics that try to limit the extra work performed to avoid page\nsplits -- the cost of accessing additional heap pages while a buffer\nlock is held on the leaf page needs to be kept under control. No\ndoubt this regresses some workloads without giving them a clear\nbenefit. Also, the optimization only ever gets used with unique\nindexes, since they're the only case where a duplicate naturally\nsuggests version churn, which can be targeted fairly directly, and\nwithout wasting too many cycles when it doesn't work out.\n\nIt's not at all clear how we could do something like this with\nnon-unique indexes. One related-though-distinct idea that might be\nworth considering occurs to me: teach nbtree to try to set LP_DEAD\nbits in non-unique indexes, in about the same way as it will in\n_bt_check_unique() for unique indexes. Perhaps the executor could hint\nto btinsert()/aminsert() that it's inserting a duplicate caused by a\nnon-HOT update, so it's worth trying to LP_DEAD nearby duplicates --\nespecially if they're on the same heap page as the incoming item.\n\nThere is a wholly separate question about index bloat that is of long\nterm strategic importance to the Postgres project: what should we do\nabout long running transactions? 
I tend to think that we can address\nproblems in that area by indicating that it is safe to delete\n\"intermediate\" versions -- tuples that are not too old to be seen by\nthe oldest transaction, that are nevertheless not needed (they're too\nnew to be interesting to the old transaction's snapshot, but also too\nold to be interesting to any other snapshot). Perhaps this\noptimization could be pursued in a phased fashion, starting with index\nAMs, where it seems less scary.\n\nI recently read a paper that had some ideas about what we could do\nhere [1]. IMV it is past time that we thrashed together a \"remove\nuseless intermediate versions\" design that is compatible with the\ncurrent heapam design.\n\n[1] https://dl.acm.org/doi/pdf/10.1145/3318464.3389714\n-- \nPeter Geoghegan", "msg_date": "Tue, 30 Jun 2020 17:03:25 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Tue, Jun 30, 2020 at 5:03 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is a POC patch that teaches nbtree to delete old duplicate\n> versions from unique indexes. The optimization targets non-HOT\n> duplicate version bloat. Although the patch is rather rough, it\n> nevertheless manages to more or less eliminate a whole class of index\n> bloat: Unique index bloat from non-HOT updates in workloads where no\n> transaction lasts for more than a few seconds.\n\nI'm slightly surprised that this thread didn't generate more interest\nback in June. After all, maintaining the pristine initial state of\n(say) a primary key index even after many high throughput non-HOT\nupdates (i.e. avoiding \"logically unnecessary\" page splits entirely)\nis quite appealing. It arguably goes some way towards addressing long\nheld criticisms of our approach to MVCC. 
Especially if it can be\ngeneralized to all b-tree indexes -- the Uber blog post mentioned\ntables that have several indexes, which presumably means that there\ncan be no HOT updates (the author of the blog post didn't seem to be\naware of HOT at all).\n\nI've been trying to generalize my approach to work with all indexes. I\nthink that I can find a strategy that is largely effective at\npreventing version churn page splits that take place with workloads\nthat have many non-HOT updates, without any serious downsides for\nworkloads that do not benefit. I want to get feedback on that now,\nsince I expect that it will be controversial. Teaching indexes about\nhow tuples are versioned or chaining tuples seems like a non-starter,\nso the trick will be to come up with something that behaves in\napproximately the same way as that in cases where it helps.\n\nThe approach I have in mind is to pass down a hint from the executor\nto btinsert(), which lets nbtree know that the index tuple insert is\nin fact associated with a non-HOT update. This hint is only given when\nthe update did not logically modify the columns that the index covers\n(so presumably the majority of unchanged indexes get the hint, but not\nthe one or two indexes whose columns were modified by our update\nstatement -- or maybe the non-HOT update was caused by not being able\nto fit a new version on the same heap page, in which case all the\nbtinsert() calls/all the indexes on the table get the hint). Of\ncourse, this hint makes it all but certain that the index tuple is the\nsuccessor for some other index tuple located on the same leaf page. We\ndon't actually include information about which other existing tuple it\nis, since it pretty much doesn't matter. 
Even if we did, we definitely\ncannot opportunistically delete it, because it needs to stay in the\nindex at least until our transaction commits (this should be obvious).\nActually, we already try to delete it from within _bt_check_unique()\ntoday for unique indexes -- we just never opportunistically mark it\ndead at that point (as I said, it's needed until the xact commits at\nthe earliest).\n\nHere is the maybe-controversial part: The algorithm initially assumes\nthat all indexes more or less have the same properties as unique\nindexes from a versioning point of view, even though that's clearly\nnot true. That is, it initially assumes that the only reason why there\ncan be duplicates on any leaf page it looks at is because some\nprevious transaction also did a non-HOT update that added a new,\nunchanged index tuple. The new algorithm only runs when the hint is\npassed down from the executor and when the only alternative is to\nsplit the page (or have a deduplication pass), so clearly there is\nsome justification for this assumption -- it really is highly unlikely\nthat this update (which is on the verge of splitting the page) just so\nhappened to be the first such update that affected the page.\n\nIt's extremely likely that there will be at least a couple of other\ntuples like that on the page, and probably quite a few more. And even\nif we only manage to free one or two tuples, that'll still generally\nbe enough to fit the incoming tuple. In general that is usually quite\nvaluable. Even with a busy database, that might buy us minutes or\nhours before the question of splitting the same leaf page arises\nagain. By the time that happens, longer running transactions may have\ncommitted, VACUUM may have run, etc. 
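The hint-passing scheme described above can be sketched as follows. This is a minimal illustration with hypothetical names, not PostgreSQL code (the real interface would be C threading a flag from the executor into btinsert()/aminsert()): on a non-HOT update, only the indexes whose covered columns were not logically modified receive the hint.

```python
# Hypothetical sketch of the executor-to-btinsert() hint routing
# described above. Just the decision rule, modeled in Python.

def indexes_getting_hint(indexes, modified_columns, is_hot_update):
    """indexes maps index name -> set of columns the index covers.

    Returns the indexes whose inserts would carry the "probable
    version churn" hint for this UPDATE."""
    if is_hot_update:
        return []  # HOT update: no new index tuples are inserted at all
    # Non-HOT update: an index whose columns were not logically modified
    # receives a new-but-unchanged tuple, i.e. pure version churn.
    return [name for name, cols in indexes.items()
            if not (cols & modified_columns)]

indexes = {
    "accounts_pkey": {"aid"},
    "accounts_abalance_idx": {"abalance"},
}
# UPDATE pgbench_accounts SET abalance = ... (non-HOT):
print(indexes_getting_hint(indexes, {"abalance"}, is_hot_update=False))
# -> ['accounts_pkey']
```

In the other case mentioned above (the new version would not fit on the heap page), every index's insert would carry the hint, since none of the indexed columns changed.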
Like unique index deduplication,\nthis isn't about freeing space -- it's about buying time.\n\nTo be blunt: It may be controversial that we're accessing multiple\nheap pages while holding an exclusive lock on a leaf page, in the\nhopes that we can avoid a page split, but without any certainty that\nit'll work out.\n\nSometimes (maybe even most of the time), this assumption turns out to\nbe mostly correct, and we benefit in the obvious way -- no\n\"unnecessary\" page splits for affected non-unique indexes. Other times\nit won't work out, usually because the duplicates are in fact logical\nduplicates from distinct logical rows. When the new deletion thing\ndoesn't work out, the situation works itself out in the obvious way:\nwe get a deduplication pass. If that doesn't work out we get a page\nsplit. So we have three legitimate strategies for resolving the\n\"pressure\" against a leaf page: last minute emergency duplicate checks\n+ deletion (the new thing), a deduplication pass, or a page split. The\nstrategies are in competition with each other (though only when we\nhave non-HOT updates).\n\nWe're only willing to access a fixed number of heap pages (heap pages\npointed to by duplicate index tuples) to try to delete some index\ntuples and avoid a split, and only in the specific context of the hint\nbeing received at the point a leaf page fills and it looks like we\nmight have to split it. I think that we should probably avoid doing\nanything with posting list tuples left behind by deduplication, except\nmaybe if there are only 2 or 3 TIDs -- just skip over them. That way,\ncases with duplicates across logical rows (not version churn) tend to\nget made into a posting list early (e.g. during an index build), so we\ndon't even bother trying to delete from them later. Non-posting-list\nduplicates suggest recency, which suggests version churn -- those dups\nmust at least have come after the most recent deduplication pass. 
Plus\nwe have heuristics that maximize the chances of finding versions to\nkill. And we tend to look at the same blocks again and again -- like\nin the patch I posted, we look at the value with the most dups for\nthings to kill first, and so on. So repeated version deletion passes\nwon't end up accessing totally different heap blocks each time, unless\nthey're successful at killing old versions.\n\n(I think that this new feature should be framed as extending the\ndeduplication infrastructure to do deletes -- so it can only be used\non indexes that use deduplication.)\n\nEven if this new mechanism ends up slowing down non-HOT updates\nnoticeably -- which is *not* something that I can see with my\nbenchmarks now -- then that could still be okay. I think that there is\nsomething sensible about imposing a penalty on non-HOT update queries\nthat can cause problems for everybody today. Why shouldn't they have\nto clean up their own mess? I think that it buys us a lot to condition\ncleanup on avoiding page splits, because any possible penalty is only\npaid in cases where there isn't something else that keeps the bloat\nunder control. If something like the kill_prior_tuple mechanism mostly\nkeeps bloat under control already, then we'll resolve the situation\nthat way instead.\n\nAn important point about this technique is that it's just a back stop,\nso it can run very infrequently while still preventing an index from\ngrowing -- an index that can double in size today. If existing\nmechanisms against \"logically unnecessary\" page splits are 99%\neffective today, then they may still almost be useless to users --\nyour index still doubles in size. It just takes a little longer in\nPostgres 13 (with deduplication) compared to Postgres 12. 
So there is\na really huge asymmetry that we still aren't doing enough about, even\nwith deduplication.\n\nDeduplication cannot prevent the first \"wave\" of page splits with\nprimary key style indexes due to the dimensions of the index tuples on\nthe page. The first wave may be all that matters (deduplication does\nhelp more when things get really bad, but I'd like to avoid \"merely\nbad\" performance characteristics, too). Consider the simplest possible\nreal world example. If we start out with 366 items on a leaf page\ninitially (the actual number with default fillfactor + 64-bit\nalignment for the pgbench indexes), we can store another 40 non-HOT\ndups on the same leaf page before the page splits. We only save 4\nbytes by merging a dup into one of the 366 original tuples. It's\nunlikely that many of the 40 dups that go on the page will be\nduplicates of *each other*, and deduplication only really starts to\nsave space when posting list tuples have 4 or 5 TIDs each. So\neventually all of the original leaf pages split when they really\nshouldn't, despite the availability of deduplication.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 7 Oct 2020 16:48:18 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "\n\n> On 8 Oct 2020, at 04:48, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> On Tue, Jun 30, 2020 at 5:03 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> Attached is a POC patch that teaches nbtree to delete old duplicate\n>> versions from unique indexes. The optimization targets non-HOT\n>> duplicate version bloat. 
Although the patch is rather rough, it\n>> nevertheless manages to more or less eliminate a whole class of index\n>> bloat: Unique index bloat from non-HOT updates in workloads where no\n>> transaction lasts for more than a few seconds.\n> \n> I'm slightly surprised that this thread didn't generate more interest\n> back in June.\n\nThe idea looks very interesting.\nIt resembles GiST microvacuum: GiST tries to vacuum a single page before a split.\nI'm curious how the cost of page deduplication compares to the cost of a page split? Should we do deduplication if the page will still remain 99% full?\n\nBest regards, Andrey Borodin.\n\n\n\n", "msg_date": "Mon, 12 Oct 2020 15:47:21 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Mon, Oct 12, 2020 at 3:47 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> The idea looks very interesting.\n> It resembles GiST microvacuum: GiST tries to vacuum a single page before a split.\n\nAFAICT the GiST microvacuum mechanism is based on the one in nbtree,\nwhich is based on setting LP_DEAD bits when index scans find that the\nTIDs are dead-to-all. That's easy to justify, because it is easy and\ncheap to save future index scans the trouble of following the TIDs\njust to discover the same thing for themselves.\n\nThe difference here is that we're simply making an intelligent guess\n-- there have been no index scans, but we're going to do a kind of\nspecial index scan at the last minute to see if we can avoid a page\nsplit. I think that this is okay because in practice we can do it in a\nreasonably targeted way. We will never do it with INSERTs, for example\n(except in unique indexes, though only when we know that there are\nduplicates because we saw them in _bt_check_unique()). 
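The gating conditions just described can be sketched as follows. This is a simplified model with assumed names, not the actual nbtree code (which is C and structured differently): the deletion pass is a last resort, tried only when the alternative is a page split.

```python
# Simplified model of when the speculative deletion pass would even be
# attempted, per the conditions described above (hypothetical names).

def should_attempt_version_deletion(page_would_split,
                                    executor_hint_non_hot_update,
                                    is_unique_index,
                                    saw_duplicates_in_unique_check):
    if not page_would_split:
        return False  # never speculate unless a split is otherwise imminent
    if executor_hint_non_hot_update:
        return True   # executor says: logically-unchanged non-HOT update
    # Plain INSERTs never trigger it -- except in unique indexes, where
    # _bt_check_unique() has already proven that duplicates exist.
    return is_unique_index and saw_duplicates_in_unique_check
```

The point of the first check is that the mechanism pays nothing at all in the common case where the leaf page has plenty of free space.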
In the worst\ncase we make non-HOT updates a bit slower...and I'm not even sure that\nthat's a bad thing when we don't manage to delete anything. After all,\nnon-HOT updates impose a huge cost on the system. They have \"negative\nexternalities\".\n\nAnother way to look at it is that the mechanism I propose to add takes\nadvantage of \"convexity\" [1]. (Actually, several of my indexing ideas\nare based on similar principles -- like the nbtsplitloc.c stuff.)\n\nAttached is v2. It controls the cost of visiting the heap by finding\nthe heap page that has the most TIDs that we might be able to kill\n(among those that are duplicates on a leaf page). It also adds a\nhinting mechanism to the executor to avoid uselessly applying the\noptimization with INSERTs.\n\n> I'm curious how the cost of page deduplication compares to the cost of a page split? Should we do deduplication if the page will still remain 99% full?\n\nIt depends on how you measure it, but in general I would say that the\ncost of traditional Postgres 13 deduplication is much lower.\nEspecially as measured in WAL volume. I also believe that the same is\ntrue for this new delete deduplication mechanism. The way we determine\nwhich heap pages to visit maximizes the chances that we'll get lucky\nwhile minimizing the cost (currently we don't visit more than 2 heap\npages unless we get at least one kill -- I think I could be more\nconservative here without losing much). A page split also has to\nexclusive lock two other pages (the right sibling page and parent\npage), so even the locking is perhaps better.\n\nThe attached patch can completely or almost completely avoid index\nbloat in extreme cases with non-HOT updates. This can easily increase\nthroughput by 2x or more, depending on how extreme you make it (i.e.\nhow many indexes you have). It seems like the main problem caused by\nnon-HOT updates is in fact page splits themselves. 
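The heap-page selection heuristic just described might look like this. This is a rough sketch under stated assumptions, not the patch's actual C code: among the leaf page's duplicates, heap blocks are visited in order of how many duplicate TIDs point at them, and we stop after two blocks unless at least one tuple could be killed.

```python
from collections import Counter

# Sketch of v2's cost-control idea (assumed/simplified): visit the heap
# blocks with the most duplicate TIDs first, capping the number of
# visits unless a kill succeeds.

def choose_heap_blocks(duplicate_tids, kill_succeeded, max_without_kill=2):
    """duplicate_tids: (heap_block, offset) pairs for leaf-page duplicates.
    kill_succeeded: callback heap_block -> bool (did we kill anything?)."""
    by_block = Counter(block for block, _ in duplicate_tids)
    visited, kills = [], 0
    for block, _count in by_block.most_common():
        if kills == 0 and len(visited) >= max_without_kill:
            break  # two fruitless heap page visits: give up, let dedup/split happen
        visited.append(block)
        if kill_succeeded(block):
            kills += 1
    return visited

tids = [(7, 1), (7, 2), (7, 3), (4, 1), (4, 2), (9, 1)]
print(choose_heap_blocks(tids, lambda block: False))  # -> [7, 4]
```

Visiting the block with the most candidate TIDs first is what makes even a single heap page access likely to free enough space to avoid the split.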
It is not so much a\nproblem with dirtying of pages.\n\nYou can test this with a benchmark like the one that was used for WARM\nback in 2017:\n\nhttps://www.postgresql.org/message-id/flat/CABOikdMNy6yowA%2BwTGK9RVd8iw%2BCzqHeQSGpW7Yka_4RSZ_LOQ%40mail.gmail.com\n\nI suspect that it's maybe even more effective than WARM was with this benchmark.\n\n[1] https://fooledbyrandomness.com/ConvexityScience.pdf\n-- \nPeter Geoghegan", "msg_date": "Mon, 12 Oct 2020 14:45:00 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On 08.10.2020 02:48, Peter Geoghegan wrote:\n> On Tue, Jun 30, 2020 at 5:03 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> Attached is a POC patch that teaches nbtree to delete old duplicate\n>> versions from unique indexes. The optimization targets non-HOT\n>> duplicate version bloat. Although the patch is rather rough, it\n>> nevertheless manages to more or less eliminate a whole class of index\n>> bloat: Unique index bloat from non-HOT updates in workloads where no\n>> transaction lasts for more than a few seconds.\n> I'm slightly surprised that this thread didn't generate more interest\n> back in June. After all, maintaining the pristine initial state of\n> (say) a primary key index even after many high throughput non-HOT\n> updates (i.e. avoiding \"logically unnecessary\" page splits entirely)\n> is quite appealing. It arguably goes some way towards addressing long\n> held criticisms of our approach to MVCC. Especially if it can be\n> generalized to all b-tree indexes -- the Uber blog post mentioned\n> tables that have several indexes, which presumably means that there\n> can be no HOT updates (the author of the blog post didn't seem to be\n> aware of HOT at all).\nThe idea seems very promising, especially when extended to handle \nnon-unique indexes too.\n> I've been trying to generalize my approach to work with all indexes. 
I\n> think that I can find a strategy that is largely effective at\n> preventing version churn page splits that take place with workloads\n> that have many non-HOT updates, without any serious downsides for\n> workloads that do not benefit. I want to get feedback on that now,\n> since I expect that it will be controversial. Teaching indexes about\n> how tuples are versioned or chaining tuples seems like a non-starter,\n> so the trick will be to come up with something that behaves in\n> approximately the same way as that in cases where it helps.\n>\n> The approach I have in mind is to pass down a hint from the executor\n> to btinsert(), which lets nbtree know that the index tuple insert is\n> in fact associated with a non-HOT update. This hint is only given when\n> the update did not logically modify the columns that the index covers\nThat's exactly what I wanted to discuss after the first letter. If we \ncould make (non)HOT-updates index specific, I think it could improve \nperformance a lot.\n> Here is the maybe-controversial part: The algorithm initially assumes\n> that all indexes more or less have the same properties as unique\n> indexes from a versioning point of view, even though that's clearly\n> not true. That is, it initially assumes that the only reason why there\n> can be duplicates on any leaf page it looks at is because some\n> previous transaction also did a non-HOT update that added a new,\n> unchanged index tuple. 
The new algorithm only runs when the hint is\n> passed down from the executor and when the only alternative is to\n> split the page (or have a deduplication pass), so clearly there is\n> some justification for this assumption -- it really is highly unlikely\n> that this update (which is on the verge of splitting the page) just so\n> happened to be the first such update that affected the page.\n> To be blunt: It may be controversial that we're accessing multiple\n> heap pages while holding an exclusive lock on a leaf page, in the\n> hopes that we can avoid a page split, but without any certainty that\n> it'll work out.\n>\n> Sometimes (maybe even most of the time), this assumption turns out to\n> be mostly correct, and we benefit in the obvious way -- no\n> \"unnecessary\" page splits for affected non-unique indexes. Other times\n> it won't work out, usually because the duplicates are in fact logical\n> duplicates from distinct logical rows.\nI think that this optimization can affect low cardinality indexes \nnegatively, but it is hard to estimate impact without tests. Maybe it \nwon't be a big deal, given that we attempt to eliminate old copies not \nvery often and that low cardinality b-trees are already not very useful. \nBesides, we can always make this thing optional, so that users could \ntune it to their workload.\n\n\nI wonder, how this new feature will interact with physical replication? \nReplica may have quite different performance profile. For example, there \ncan be long running queries, that now prevent vacuum from removing \nrecently-dead rows. How will we handle same situation with this \noptimized deletion?\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 14 Oct 2020 17:07:54 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Wed, Oct 14, 2020 at 7:07 AM Anastasia Lubennikova\n<a.lubennikova@postgrespro.ru> wrote:\n> The idea seems very promising, especially when extended to handle non-unique indexes too.\n\nThanks!\n\n> That's exactly what I wanted to discuss after the first letter. If we could make (non)HOT-updates index specific, I think it could improve performance a lot.\n\nDo you mean accomplishing the same goal in heapam, by making the\noptimization more intelligent about which indexes need new versions?\nWe did have a patch that did that in 2017, as you may recall -- this\nwas called WARM:\n\nhttps://www.postgresql.org/message-id/flat/CABOikdMNy6yowA%2BwTGK9RVd8iw%2BCzqHeQSGpW7Yka_4RSZ_LOQ%40mail.gmail.com\n\nThis didn't go anywhere. I think that this solution is more pragmatic.\nIt's cheap enough to remove it if a better solution becomes available\nin the future. But this is a pretty good solution by all important\nmeasures.\n\n> I think that this optimization can affect low cardinality indexes negatively, but it is hard to estimate impact without tests. Maybe it won't be a big deal, given that we attempt to eliminate old copies not very often and that low cardinality b-trees are already not very useful. Besides, we can always make this thing optional, so that users could tune it to their workload.\n\nRight. 
The trick is to pay only a fixed low cost (maybe as low as one\nheap page access) when we start out, and ratchet it up only if the\nfirst heap page access looks promising. And to avoid posting list\ntuples. Regular deduplication takes place when this fails. It's useful\nfor the usual reasons, but also because this new mechanism learns not\nto try the posting list TIDs.\n\n> I wonder, how this new feature will interact with physical replication? Replica may have quite different performance profile.\n\nI think of that as equivalent to having a long running transaction on\nthe primary. When I first started working on this patch I thought\nabout having \"long running transaction detection\". But I quickly\nrealized that that isn't a meaningful concept. A transaction is only\ntruly long running relative to the writes that take place that have\nobsolete row versions that cannot be cleaned up. It has to be\nsomething we can deal with, but it cannot be meaningfully\nspecial-cased.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 14 Oct 2020 07:40:14 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Wed, Oct 14, 2020 at 7:40 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Right. 
The trick is to pay only a fixed low cost (maybe as low as one\n> heap page access) when we start out, and ratchet it up only if the\n> first heap page access looks promising.\n\nJust as an example of how the patch can help, consider the following\npgbench variant script:\n\n\\set aid1 random_gaussian(1, 100000 * :scale, 2.0)\n\\set aid2 random_gaussian(1, 100000 * :scale, 2.5)\n\\set aid3 random_gaussian(1, 100000 * :scale, 2.2)\n\\set bid random(1, 1 * :scale)\n\\set tid random(1, 10 * :scale)\n\\set delta random(-5000, 5000)\nBEGIN;\nUPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid1;\nSELECT abalance FROM pgbench_accounts WHERE aid = :aid2;\nSELECT abalance FROM pgbench_accounts WHERE aid = :aid3;\nEND;\n\n(These details you see here are a bit arbitrary; don't worry about the\nspecifics too much.)\n\nBefore running the script with pgbench, I initialized pgbench to scale\n1500, and made some changes to the indexing (in order to actually test\nthe patch). There was no standard pgbench_accounts PK. Instead, I\ncreated a unique index that had an include column, which is enough to\nmake every update a non-HOT update. I also added two more redundant\nnon-unique indexes to create more overhead from non-HOT updates. It\nlooked like this:\n\ncreate unique index aid_pkey_include_abalance on pgbench_accounts\n(aid) include (abalance);\ncreate index one on pgbench_accounts (aid);\ncreate index two on pgbench_accounts (aid);\n\nSo 3 indexes on the accounts table.\n\nI ran the script for two hours and 16 clients with the patch, then for\nanother two hours with master. 
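For intuition about how much contention the gaussian settings in the script above create, here is a rough Python approximation of pgbench's random_gaussian (a simplified model of the documented behavior, not pgbench's exact algorithm; the bounds assume scale 1500, i.e. aid in 1..150,000,000):

```python
import random

def random_gaussian(lb, ub, param):
    # Simplified model of pgbench's random_gaussian(lb, ub, param):
    # draw a standard normal, reject anything beyond +/- param standard
    # deviations, then map the accepted value onto [lb, ub], so draws
    # cluster around the midpoint of the interval.
    while True:
        z = random.gauss(0.0, 1.0)
        if -param <= z <= param:
            break
    return min(ub, lb + int((ub - lb + 1) * (z + param) / (2.0 * param)))

# param values of 2.0-2.5 (as in the script) concentrate the UPDATEs and
# SELECTs on a relatively narrow band of hot aid values near the middle.
draws = [random_gaussian(1, 150_000_000, 2.0) for _ in range(10_000)]
```

The repeated hits on the same narrow band of keys are what make the non-HOT update churn (and the resulting version chains) pile up on the same leaf pages.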
After that time, all 3 indexes were\nexactly the same size with the patch, but had almost doubled in size\non master:\n\naid_pkey_include_abalance: 784,264 pages (or ~5.983 GiB)\none: 769,545 pages (or ~5.871 GiB)\ntwo: 770,295 pages (or ~5.876 GiB)\n\n(With the patch, all three indexes were 100% pristine -- they remained\nprecisely 411,289 pages in size by the end, which is ~3.137 GiB.)\n\nNote that traditional deduplication is used by the indexes I've called\n\"one\" and \"two\" here, but not the include index called\n\"aid_pkey_include_abalance\". But it makes little difference, for\nreasons that will be obvious if you think about what this looks like\nat the leaf page level. Cases that Postgres 13 deduplication does\nbadly with are often the ones that this new mechanism does well with.\nDeduplication by deleting and by merging are truly complementary -- I\nhaven't just structured the code that way because it was convenient to\nuse dedup infrastructure just to get the dups at the start. (Yes, it\n*was* convenient, but there clearly are cases where each mechanism\ncompetes initially, before nbtree converges on the best strategy at\nthe local level. So FWIW this patch is a natural and logical extension\nof the deduplication work in my mind.)\n\nThe TPS/throughput is about what you'd expect for the two hour run:\n\n18,988.762398 TPS for the patch\n11,123.551707 TPS for the master branch.\n\nThis is a ~1.7x improvement, but I can get more than 3x by changing\nthe details at the start -- just add more indexes. I don't think that\nthe precise throughput difference you see here matters. The important\npoint is that we've more or less fixed a pathological set of behaviors\nthat have poorly understood cascading effects. 
Full disclosure: I rate\nlimited pgbench to 20k for this run, which probably wasn't significant\nbecause neither patch nor master hit that limit for long.\n\nBig latency improvements for that same run, too:\n\nPatch:\n\nstatement latencies in milliseconds:\n 0.001 \\set aid1 random_gaussian(1, 100000 * :scale, 2.0)\n 0.000 \\set aid2 random_gaussian(1, 100000 * :scale, 2.5)\n 0.000 \\set aid3 random_gaussian(1, 100000 * :scale, 2.2)\n 0.000 \\set bid random(1, 1 * :scale)\n 0.000 \\set tid random(1, 10 * :scale)\n 0.000 \\set delta random(-5000, 5000)\n 0.057 BEGIN;\n 0.294 UPDATE pgbench_accounts SET abalance = abalance +\n:delta WHERE aid = :aid1;\n 0.204 SELECT abalance FROM pgbench_accounts WHERE aid = :aid2;\n 0.195 SELECT abalance FROM pgbench_accounts WHERE aid = :aid3;\n 0.090 END;\n\nMaster:\n\nstatement latencies in milliseconds:\n 0.002 \\set aid1 random_gaussian(1, 100000 * :scale, 2.0)\n 0.001 \\set aid2 random_gaussian(1, 100000 * :scale, 2.5)\n 0.001 \\set aid3 random_gaussian(1, 100000 * :scale, 2.2)\n 0.001 \\set bid random(1, 1 * :scale)\n 0.000 \\set tid random(1, 10 * :scale)\n 0.001 \\set delta random(-5000, 5000)\n 0.084 BEGIN;\n 0.604 UPDATE pgbench_accounts SET abalance = abalance +\n:delta WHERE aid = :aid1;\n 0.317 SELECT abalance FROM pgbench_accounts WHERE aid = :aid2;\n 0.311 SELECT abalance FROM pgbench_accounts WHERE aid = :aid3;\n 0.120 END;\n\nNote that the mechanism added by the patch naturally limits the number\nof versions that are in the index for each logical row, which seems\nmuch more important than the total amount of garbage tuples cleaned\nup. It's great that the index is half its original size, but even that\nis less important than the effect of more or less bounding the worst\ncase number of heap pages accessed by point index scans. Even though\nthis patch shows big performance improvements (as well as very small\nperformance regressions for small indexes with skew), the patch is\nmostly about stability. 
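As a quick sanity check of the latency figures above (just arithmetic on the numbers quoted; the statement labels are shorthand):

```python
# Per-statement latencies in milliseconds, copied from the two runs above.
latency_patch = {"BEGIN": 0.057, "UPDATE": 0.294, "SELECT aid2": 0.204,
                 "SELECT aid3": 0.195, "END": 0.090}
latency_master = {"BEGIN": 0.084, "UPDATE": 0.604, "SELECT aid2": 0.317,
                  "SELECT aid3": 0.311, "END": 0.120}

# master/patch ratio per statement: the UPDATE is roughly 2x faster with
# the patch, and even the pure index scans are ~1.5x faster.
ratios = {k: latency_master[k] / latency_patch[k] for k in latency_patch}
```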
I believe that Postgres users want greater\nstability and predictability in this area more than anything else.\n\nThe behavior of the system as a whole that we see for the master\nbranch here is not anywhere near linear. Page splits are of course\nexpensive, but they also occur in distinct waves [1] and have lasting\nconsequences. They're very often highly correlated, with clear tipping\npoints, so you see relatively sudden slow downs in the real world.\nWorse still, with skew the number of hot pages that you have can\ndouble in a short period of time. This very probably happens at the\nworst possible time for the user, since the database was likely\nalready organically experiencing very high index writes at the point\nof experiencing the first wave of splits (splits correlated both\nwithin and across indexes on the same table). And, from that point on,\nthe number of FPIs for the same workload also doubles forever (or at\nleast until REINDEX).\n\n[1] https://btw.informatik.uni-rostock.de/download/tagungsband/B2-2.pdf\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 16 Oct 2020 12:12:10 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "пт, 16 окт. 2020 г. в 21:12, Peter Geoghegan <pg@bowt.ie>:\n\n> I ran the script for two hours and 16 clients with the patch, then for\n> another two hours with master. 
After that time, all 3 indexes were\n> exactly the same size with the patch, but had almost doubled in size\n> on master:\n>\n> aid_pkey_include_abalance: 784,264 pages (or ~5.983 GiB)\n> one: 769,545 pages (or ~5.871 GiB)\n> two: 770,295 pages (or ~5.876 GiB)\n>\n> (With the patch, all three indexes were 100% pristine -- they remained\n> precisely 411,289 pages in size by the end, which is ~3.137 GiB.)\n>\n> …\n>\n> The TPS/throughput is about what you'd expect for the two hour run:\n>\n> 18,988.762398 TPS for the patch\n> 11,123.551707 TPS for the master branch.\n>\n\nI really like these results, great work!\n\nI'm also wondering how IO numbers changed due to these improvements,\nshouldn't be difficult to look into.\n\nPeter, according to cfbot the patch no longer compiles.\nCan you send an update, please?\n\n\n-- \nVictor Yegorov", "msg_date": "Fri, 16 Oct 2020 21:59:51 +0200", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Fri, Oct 16, 2020 at 1:00 PM Victor Yegorov <vyegorov@gmail.com> wrote:\n> I really like these results, great work!\n\nThanks Victor!\n\n> I'm also wondering how IO numbers changed due to these improvements, shouldn't be difficult to look into.\n\nHere is the pg_statio_user_indexes for the patch for the same run:\n\n schemaname | relname | indexrelname |\nidx_blks_read | idx_blks_hit\n------------+------------------+---------------------------+---------------+---------------\n public | pgbench_accounts | aid_pkey_include_abalance |\n12,828,736 | 534,819,826\n public | pgbench_accounts | one |\n12,750,275 | 534,486,742\n public | pgbench_accounts | two |\n2,474,893 | 2,216,047,568\n(3 rows)\n\nAnd for master:\n\n schemaname | relname | indexrelname |\nidx_blks_read | idx_blks_hit\n------------+------------------+---------------------------+---------------+---------------\n public | pgbench_accounts | aid_pkey_include_abalance |\n29,526,568 | 292,705,432\n public | 
pgbench_accounts | one |\n28,239,187 | 293,164,160\n public | pgbench_accounts | two |\n6,505,615 | 1,318,164,692\n(3 rows)\n\nHere is pg_statio_user_tables for the patch:\n\n schemaname | relname | heap_blks_read | heap_blks_hit |\nidx_blks_read | idx_blks_hit | toast_blks_read | toast_blks_hit |\ntidx_blks_read | tidx_blks_hit\n------------+------------------+----------------+---------------+---------------+---------------+-----------------+----------------+----------------+---------------\n public | pgbench_accounts | 123,195,496 | 696,805,485 |\n28,053,904 | 3,285,354,136 | | |\n |\n public | pgbench_branches | 11 | 1,553 |\n | | | |\n |\n public | pgbench_history | 0 | 0 |\n | | | |\n |\n public | pgbench_tellers | 86 | 15,416 |\n | | | |\n |\n(4 rows)\n\nAnd the pg_statio_user_tables for master:\n\n schemaname | relname | heap_blks_read | heap_blks_hit |\nidx_blks_read | idx_blks_hit | toast_blks_read | toast_blks_hit |\ntidx_blks_read | tidx_blks_hit\n------------+------------------+----------------+---------------+---------------+---------------+-----------------+----------------+----------------+---------------\n public | pgbench_accounts | 106,502,089 | 334,875,058 |\n64,271,370 | 1,904,034,284 | | |\n |\n public | pgbench_branches | 11 | 1,553 |\n | | | |\n |\n public | pgbench_history | 0 | 0 |\n | | | |\n |\n public | pgbench_tellers | 86 | 15,416 |\n | | | |\n |\n(4 rows)\n\nOf course, it isn't fair to make a direct comparison because we're\ndoing ~1.7x more work with the patch. But even so, the\nidx_blks_read is less than half with the patch.\n\nBTW, the extra heap_blks_hit from the patch are not only due to the\nfact that the system does more directly useful work. It's also due to\nthe extra garbage collection triggered in indexes. The same is\nprobably *not* true with heap_blks_read, though. I minimize the number\nof heap pages accessed by the new cleanup mechanism each time, and\ntemporal locality will help a lot. 
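To double-check the arithmetic in the pg_statio output above (all numbers are copied from the tables; nothing new is measured here):

```python
# idx_blks_read per index, from pg_statio_user_indexes above
patch_reads = {"aid_pkey_include_abalance": 12_828_736,
               "one": 12_750_275,
               "two": 2_474_893}
master_reads = {"aid_pkey_include_abalance": 29_526_568,
                "one": 28_239_187,
                "two": 6_505_615}

# The per-index figures sum to exactly the idx_blks_read values reported
# by pg_statio_user_tables for pgbench_accounts in each run.
patch_total = sum(patch_reads.values())    # 28,053,904
master_total = sum(master_reads.values())  # 64,271,370

# Despite doing ~1.7x more transactions, the patch reads well under half
# as many index blocks from disk.
ratio = patch_total / master_total
```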
I think that we delete\nindex entries pointing to garbage in the heap at pretty predictable\nintervals. Heap pages full of LP_DEAD line pointer garbage only get\nprocessed a few times, close together in time, after which they're\nbound to either get VACUUM'd or get evicted from shared buffers.\n\n> Peter, according to cfbot the patch no longer compiles.\n> Can you send an update, please?\n\nAttached is v3, which is rebased against the master branch as of\ntoday. No real changes, though.\n\n-- \nPeter Geoghegan", "msg_date": "Fri, 16 Oct 2020 13:58:01 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Fri, Oct 16, 2020 at 1:58 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is v3, which is rebased against the master branch as of\n> today. No real changes, though.\n\nAnd now here's v4.\n\nThis version adds handling of posting list tuples, which I was\nskipping over before. Highly contended leaf pages with posting list\ntuples could still sometimes get \"logically unnecessary\" page splits\nin v3, which this seems to fix (barring the most extreme cases of\nversion churn, where the patch cannot and should not prevent page\nsplits). Actually, posting list tuple handling was something that we\nhad in v1 but lost in v2, because v2 changed our general strategy to\nfocus on what is convenient to the heap/tableam, which is the most\nimportant thing by far (the cost of failing must be a low, fixed, well\nunderstood and well targeted cost). The fix was to include TIDs from\nposting lists, while marking them \"non-promising\". Only plain tuples\nthat are duplicates are considered promising. Only promising tuples\ninfluence where we look for tuples to kill most of the time. 
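The TID-scoring heuristic described here might be sketched roughly like this (a toy model for illustration only, not the actual nbtree code; all names are made up):

```python
from collections import defaultdict

def pick_heap_blocks(index_tids, max_blocks=3):
    # index_tids: (heap_block, is_duplicate, from_posting_list) for each
    # TID on the leaf page.  Plain duplicate tuples are "promising";
    # posting-list TIDs and non-duplicates tag along as "non-promising"
    # extras that can be killed for free if their block is visited anyway.
    promising = defaultdict(int)
    total = defaultdict(int)
    for block, is_dup, from_posting in index_tids:
        total[block] += 1
        if is_dup and not from_posting:
            promising[block] += 1
    # Rank blocks by promising-TID count, tiebreak on the total number of
    # TIDs pointing at the block, and give up after a handful of blocks.
    ranked = sorted(total, key=lambda b: (promising[b], total[b]), reverse=True)
    return ranked[:max_blocks]
```

So a heap block with two promising duplicates is visited before one with three posting-list TIDs, but the posting-list block still gets a look if the small fixed budget allows.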
The\nexception is when there is an even number of promising tuples on two\nheap pages, where we tiebreak on the total number of TIDs that point\nto the heap page from the leaf page.\n\nI seem to be able to cap the costs paid when the optimization doesn't\nwork out to the extent that we can get away with visiting only *one*\nheap page before giving up. And, we now never visit more than 3 total\n(2 is the common case when the optimization is really effective). This\nmay sound conservative -- because it is -- but it seems to work in\npractice. I may change my mind about that and decide to be less\nconservative, but so far all of the evidence I've seen suggests that\nit doesn't matter -- the heuristics seem to really work. Might as well\nbe completely conservative. I'm *sure* that we can afford one heap\naccess here -- we currently *always* visit at least one heap tuple in\nroughly the same way during each and every unique index tuple insert\n(not just when the page fills).\n\nPosting list TIDs are not the only kind of TIDs that are marked\nnon-promising. We now also include TIDs from non-duplicate tuples. So\nwe're prepared to kill any of the TIDs on the page, though we only\nreally expect it to happen with the \"promising\" tuples (duplicates).\nBut why not be open to the possibility of killing some extra TIDs in\npassing? We don't have to visit any extra heap pages to do so, so it's\npractically free. Empirical evidence shows that this happens quite\noften.\n\nHere's why this posting list tuple strategy is a good one: we consider\nposting list tuple TIDs non-promising to represent that we think that\nthere are probably actually multiple logical rows involved, or at\nleast to represent that they didn't work -- simple trial and error\nsuggests that they aren't very \"promising\", whatever the true reason\nhappens to be. But why not \"keep an open mind\" about the TIDs not each\nbeing for distinct logical rows? 
If it just so happens that the\nposting list TIDs really were multiple versions of the same logical\nrow all along, then there is a reasonable chance that there'll be even\nmore versions on that same heap page later on. When this happens and\nwhen we end up back on the same B-Tree leaf page to think about dedup\ndeletion once again, it's pretty likely that we'll also naturally end\nup looking into the later additional versions on this same heap page\nfrom earlier. At which point we won't miss the opportunity to check\nthe posting lists TIDs in passing. So we get to delete the posting\nlist after all!\n\n(If you're thinking \"but we probably won't get lucky like that\", then\nconsider that it doesn't have to happen on the next occasion when\ndelete deduplication happens on the same leaf page. Just at some point\nin the future. This is due to the way we visit the heap pages that\nlook most promising first. It might take a couple of rounds of dedup\ndeletion to get back to the same heap page, but that's okay. The\nsimple heuristics proposed in the patch seem to have some really\ninteresting and useful consequences in practice. It's hard to quantify\nhow important these kinds of accidents of locality will be. I haven't\ntargeted this or that effect -- my heuristics are dead simple, and\nbased almost entirely on keeping costs down. You can think of it as\n\"increase the number of TIDs to increase our chances of success\" if\nyou prefer.)\n\nThe life cycle of logical rows/heap tuples seems to naturally lead to\nthese kinds of opportunities. Obviously heapam is naturally very keen\non storing related versions on the same heap block already, without\nany input from this patch. The patch is still evolving, and the\noverall structure and interface certainly still needs work -- I've\nfocussed on the algorithm so far. I could really use some feedback on\nhigh level direction, though. It's a relatively short patch, even with\nall of my README commentary. 
But...it's not an easy concept to digest.\n\nNote: I'm not really sure if it's necessary to provide specialized\nqsort routines for the sorting we do to lists of TIDs, etc. I did some\nexperimentation with that, and it's an open question. So for now\nI rely on the patch that Thomas Munro posted to do that a little while\nback, which is why that's included here. The question of whether this\nis truly needed is unsettled.\n\n--\nPeter Geoghegan", "msg_date": "Mon, 19 Oct 2020 19:37:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Wed, Oct 7, 2020 at 7:48 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> To be blunt: It may be controversial that we're accessing multiple\n> heap pages while holding an exclusive lock on a leaf page, in the\n> hopes that we can avoid a page split, but without any certainty that\n> it'll work out.\n\nThat certainly isn't great. I mean, it might not be too terrible,\nbecause a leaf index page isn't nearly as potentially hot as a VM\npage or a clog page, but it hurts interruptibility and risks hurting\nconcurrency. But if it were possible to arrange to hold only a pin on\nthe page during all this rather than a lock, it would be better. I'm\nnot sure how realistic that is, though.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 21 Oct 2020 11:25:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Wed, Oct 21, 2020 at 8:25 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> That certainly isn't great. 
I mean, it might not be too terrible,\n> because a leaf index page isn't nearly as potentially hot as a VM\n> page or a clog page, but it hurts interruptibility and risks hurting\n> concurrency. But if it were possible to arrange to hold only a pin on\n> the page during all this rather than a lock, it would be better. I'm\n> not sure how realistic that is, though.\n\nI don't think that it's realistic. Well, technically you could do\nsomething like that, but you'd end up with some logically equivalent\nmechanism which would probably be slower. As you know, in nbtree pins\nare generally much less helpful than within heapam (you cannot read a\npage without a shared buffer lock, no matter what). Holding a pin only\nprovides a very weak guarantee about VACUUM and TID recycling that\nusually doesn't come up.\n\nBear in mind that we actually do practically the same thing all the\ntime with the current LP_DEAD setting stuff, where we need to call\ncompute_xid_horizon_for_tuples/heap_compute_xid_horizon_for_tuples\nwith a leaf buffer lock held in almost the same way. That's actually\npotentially far worse if you look at it in isolation, because you\ncould potentially have hundreds of heap pages, whereas this is just\n1-3. 
(BTW, next version will also do that work in passing, so you're\npractically guaranteed to do less with a buffer lock held compared to\nthe typical case of nbtree LP_DEAD setting, even without counting how\nthe LP_DEAD bits get set in the first place.)\n\nI could also point out that something very similar happens in\n_bt_check_unique().\n\nAlso bear in mind that the alternative is pretty much a page split, which means:\n\n* Locking the leaf page\n\n* Then obtaining relation extension lock\n\n* Locking to create new right sibling\n\n* Releasing relation extension lock\n\n* Locking original right sibling page\n\n* Release original right sibling page\n\n* Release new right sibling page\n\n* Lock parent page\n\n* Release original now-split page\n\n* Release parent page\n\n(I will refrain from going into all of the absurd and near-permanent\nsecondary costs that just giving up and splitting the page imposes for\nnow. I didn't even include all of the information about locking --\nthere is one thing that didn't seem worth mentioning.)\n\nThe key concept here is of course asymmetry. The asymmetry here is not\nonly favorable; it's just outrageous. The other key concept is it's\nfundamentally impossible to pay more than a very small fixed cost\nwithout getting a benefit.\n\nThat said, I accept that there is still some uncertainty that all\nworkloads that get a benefit will be happy with the trade-off. I am\nstill fine tuning how this works in cases with high contention. I\nwelcome any help with that part. But note that this doesn't\nnecessarily have much to do with the heap page accesses. It's not\nalways strictly better to never have any bloat at all (it's pretty\nclose to that, but not quite). We saw this with the Postgres 12 work,\nwhere small TPC-C test cases had some queries go slower simply because\na small highly contended index did not get bloated due to a smarter\nsplit algorithm. 
There is no reason to believe that it had anything to\ndo with the cost of making better decisions. It was the decisions\nthemselves.\n\nI don't want to completely prevent \"version driven page splits\"\n(though a person could reasonably imagine that that is in fact my\nprecise goal); rather, I want to make non-hot updates work to prove\nthat it's almost certainly necessary to split the page due to version\nchurn - then and only then should it be accepted. Currently we meekly\nroll over and let non-hot updaters impose negative externalities on\nthe system as a whole. The patch usually clearly benefits even\nworkloads that consist entirely of non-hot updaters. Negative\nexternalities are only good for the individual trying to impose costs\non the collective when they can be a true freeloader. It's always bad\nfor the collective, but it's even bad for the bad actors once they're\nmore than a small minority.\n\nCurrently non-hot updaters are not merely selfish to the extent that\nthey impose a downside on the collective or the system as a whole that\nis roughly proportionate to the upside benefit they get. Not cleaning\nup their mess as they go creates a downside that is a huge multiple of\nany possible upside for them. To me this seems incontrovertible.\nWorrying about the precise extent to which this is true in each\nsituation doesn't seem particularly productive to me. Whatever the\nactual extent of the imbalance is, the solution is that you don't let\nthem do that.\n\nThis patch is not really about overall throughput. It could be\njustified on that basis, but that's not how I like to think of it.\nRather, it's about providing a stabilizing backstop mechanism, which\ntends to bound the amount of index bloat and the number of versions in\neach index for each *logical row* -- that's the most important benefit\nof the patch. There are workloads that will greatly benefit despite\nonly invoking the new mechanism very occasionally, as a backstop. 
And\neven cases with a fair amount of contention don't really use it that\noften (which is why the heap page access cost is pretty much a\nquestion about specific high contention patterns only). The proposed\nnew cleanup mechanism may only be used in certain parts of the key\nspace for certain indexes at certain times, in a bottom-up fashion. We\ndon't have to be eager about cleaning up bloat most of the time, but\nit's also true that there are cases where we ought to work very hard\nat it in a localized way.\n\nThis explanation may sound unlikely, but the existing behaviors taken\ntogether present us with outrageous cost/benefit asymmetry, arguably\nin multiple dimensions.\n\nI think that having this backstop cleanup mechanism (and likely others\nin other areas) will help to make the assumptions underlying\nautovacuum scheduling much more reasonable in realistic settings. Now\nit really is okay that autovacuum doesn't really care about the needs\nof queries, and is largely concerned with macro-level things like free\nspace management. Its top-down approach isn't so bad once it has true\nbottom-up complementary mechanisms. The LP_DEAD microvacuum stuff is\nnice because it marks things as dead in passing, pretty much for free.\nThat's not enough on its own -- it's no backstop. The current LP_DEAD\nstuff appears to work rather well, until one day it suddenly doesn't\nand you curse Postgres for it. I could go on about the non-linear\nnature of the system as a whole, hidden tipping points, and other\nstuff like that. 
But I won't right now.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 21 Oct 2020 12:36:29 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Wed, Oct 21, 2020 at 3:36 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Bear in mind that we actually do practically the same thing all the\n> time with the current LP_DEAD setting stuff, where we need to call\n> compute_xid_horizon_for_tuples/heap_compute_xid_horizon_for_tuples\n> with a leaf buffer lock held in almost the same way. That's actually\n> potentially far worse if you look at it in isolation, because you\n> could potentially have hundreds of heap pages, whereas this is just 1\n> - 3. (BTW, next version will also do that work in passing, so you're\n> practically guaranteed to do less with a buffer lock held compared to\n> the typical case of nbtree LP_DEAD setting, even without counting how\n> the LP_DEAD bits get set in the first place.)\n\nThat's fair. It's not that I'm trying to enforce some absolute coding\nrule, as if I even had the right to do such a thing. But, people have\nsometimes proposed patches which would have caused major regression in\nthis area and I think it's really important that we avoid that.\nNormally, it doesn't matter: I/O requests complete quickly and\neverything is fine. But, on a system where things are malfunctioning,\nit makes a big difference whether you can regain control by hitting\n^C. I expect you've probably at some point had the experience of being\nunable to recover control of a terminal window because some process\nwas stuck in wait state D on the kernel level, and you probably\nthought, \"well, this sucks.\" It's even worse if the kernel's entire\nprocess table fills up with such processes. 
This kind of thing is\nessentially the same issue at the database level, and it's smart to do\nwhat we can to mitigate it.\n\nBut that being said, I'm not trying to derail this patch. It isn't,\nand shouldn't be, the job of this patch to solve that problem. It's\njust better if it doesn't regress things, or maybe even (as you say)\nmakes them a little better. I think the idea you've got here is\nbasically good, and a lot of it comes down to how well it works in\npractice. I completely agree that looking at amortized cost rather\nthan worst-case cost is a reasonable principle here; you can't take\nthat to a ridiculous extreme because people also care about\nconsistently of performance, but it seems pretty clear from your\ndescription that your patch should not create any big problem in that\narea, because the worst-case number of extra buffer accesses for a\nsingle operation is tightly bounded. And, of course, containing index\nbloat is its own reward.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 22 Oct 2020 09:18:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Fri, 16 Oct 2020 at 20:12, Peter Geoghegan <pg@bowt.ie> wrote:\n\n> The TPS/throughput is about what you'd expect for the two hour run:\n>\n> 18,988.762398 TPS for the patch\n> 11,123.551707 TPS for the master branch.\n\nVery good.\n\n> Patch:\n>\n> statement latencies in milliseconds:\n> 0.294 UPDATE pgbench_accounts SET abalance = abalance +\n> :delta WHERE aid = :aid1;\n>\n> Master:\n>\n> statement latencies in milliseconds:\n> 0.604 UPDATE pgbench_accounts SET abalance = abalance +\n> :delta WHERE aid = :aid1;\n\nThe average latency is x2. 
What is the distribution of latencies?\nOccasional very long or all uniformly x2?\n\nI would guess that holding the page locks will also slow down SELECT\nworkload, so I think you should also report that workload as well.\n\nHopefully that will be better in the latest version.\n\nI wonder whether we can put this work into a background process rather\nthan pay the cost in the foreground? Perhaps that might not need us to\nhold page locks??\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 22 Oct 2020 18:11:58 +0100", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Thu, Oct 22, 2020 at 10:12 AM Simon Riggs <simon@2ndquadrant.com> wrote:\n> > 18,988.762398 TPS for the patch\n> > 11,123.551707 TPS for the master branch.\n>\n> Very good.\n\nI'm happy with this result, but as I said it's not really the point. I\ncan probably get up to a 5x or more improvement in TPS if I simply add\nenough indexes.\n\nThe point is that we're preventing pathological behavior. The patch\ndoes not so much add something helpful as subtract something harmful.\nYou can contrive a case that has as much of that harmful element as\nyou like.\n\n> The average latency is x2. What is the distribution of latencies?\n> Occasional very long or all uniformly x2?\n\nThe latency is generally very even with the patch. There is a constant\nhum of cleanup by the new mechanism in the case of the benchmark\nworkload. As opposed to a cascade of page splits, which occur in\nclearly distinct correlated waves.\n\n> I would guess that holding the page locks will also slow down SELECT\n> workload, so I think you should also report that workload as well.\n>\n> Hopefully that will be better in the latest version.\n\nBut the same benchmark that you're asking about here has two SELECT\nstatements and only one UPDATE. It already is read-heavy in that\nsense. 
And we see that the latency is also significantly improved for\nthe SELECT queries.\n\nEven if there was often a performance hit rather than a benefit (which\nis definitely not what we see), it would still probably be worth it.\nUsers create indexes for a reason. I believe that we are obligated to\nmaintain the indexes to a reasonable degree, and not just when it\nhappens to be convenient to do so in passing.\n\n> I wonder whether we can put this work into a background process rather\n> than pay the cost in the foreground? Perhaps that might not need us to\n> hold page locks??\n\nHolding a lock on the leaf page is unavoidable.\n\nThis patch is very effective because it intervenes at precisely the\nright moment in precisely the right place only. We don't really have\nto understand anything about workload characteristics to be sure of\nthis, because it's all based on the enormous asymmetries I've\ndescribed, which are so huge that it just seems impossible that\nanything else could matter. Trying to do any work in a background\nprocess works against this local-first, bottom-up laissez faire\nstrategy. The strength of the design is in how clever it isn't.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 22 Oct 2020 10:42:28 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Thu, Oct 22, 2020 at 6:18 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> But that being said, I'm not trying to derail this patch. It isn't,\n> and shouldn't be, the job of this patch to solve that problem. It's\n> just better if it doesn't regress things, or maybe even (as you say)\n> makes them a little better. I think the idea you've got here is\n> basically good, and a lot of it comes down to how well it works in\n> practice.\n\nThanks. 
I would like to have a more general conversation about how\nsome of the principles embodied by this patch could be generalized to\nheapam in the future. I think that we could do something similar with\npruning, or at least with the truncation of HOT chains. Here are a few\nof the principles I'm thinking of, plus some details of how they might\nbe applied in heapam:\n\n* The most important metric is not the total amount of dead tuples in\nthe whole table. Rather, it's something like the 90th + 99th\npercentile length of version chains for each logical row, or maybe\nonly those logical rows actually accessed by index scans recently.\n(Not saying that we should try to target this in some explicit way -\njust that it should be kept low with real workloads as a natural\nconsequence of the design.)\n\n* Have a variety of strategies for dealing with pruning that compete\nwith each other. Allow them to fight it out organically. Start with\nthe cheapest thing and work your way up. So for example, truncation of\nHOT chains is always preferred, and works in just the same way as it\ndoes today. Then it gets more complicated and more expensive, but in a\nway that is a natural adjunct of the current proven design.\n\nI believe that this creates a bit more pain earlier on, but avoids\nmuch more pain later. There is no point powering ahead if we'll end up\nhopelessly indebted with garbage tuples in the end.\n\nFor heapam we could escalate from regular pruning by using an old\nversion store for versions that were originally part of HOT chains,\nbut were found to not fit on the same page at the point of\nopportunistic pruning. The old version store is for committed versions\nonly -- it isn't really much like UNDO. 
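To make the percentile framing concrete, here is a tiny sketch of the kind of metric meant here. This is illustrative only -- heapam tracks no such statistic today, and the chain lengths below are invented numbers, not measurements:

```python
# Sketch of the proposed health metric: high-percentile version-chain
# length across logical rows. chain_lengths is hypothetical data.
def percentile(values, pct):
    # Nearest-rank percentile on a list of chain lengths.
    ordered = sorted(values)
    idx = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[idx]

# 1000 logical rows: most are cold, a handful are very hot.
chain_lengths = [1] * 850 + [2] * 100 + [5] * 40 + [20] * 10
print("p90:", percentile(chain_lengths, 90))
print("p99:", percentile(chain_lengths, 99))
```

The total amount of garbage here is modest, yet the tail percentiles show exactly where index scans will pay: a few hot rows with long chains hurt far more than the same garbage spread diffusely. Keeping that tail short is what the escalation strategy is for.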
We leave forwarding\ninformation on the original heap page.\n\nWe add old committed versions to the version store at the point when a\nheap page is about to experience an event that is roughly the\nequivalent of a B-Tree page split -- for heapam a \"split\" is being\nunable to keep the same logical row on the same heap page over time in\nthe presence of many updates. This is our last line of defense against\nthis so-called page split situation. It occurs after regular pruning\n(which is an earlier line of defense) fails to resolve the situation\ninexpensively. We can make moving tuples into the old version store\nWAL efficient by making it work like an actual B-Tree page split -- it\ncan describe the changes in a way that's mostly logical, and based on\nthe existing page image. And like an actual page split, we're\namortizing costs in a localized way.\n\nIt is also possible to apply differential compression to whole HOT\nchains rather than storing them in the old version store, for example,\nif that happens to look favorable (think of rows with something like a\npgbench_accounts.filler column -- not uncommon). We have options, we\ncan add new options in the future as new requirements come to light.\nWe allow the best option to present itself to us in a bottom-up\nfashion. Sometimes the best strategy at the local level is actually a\ncombination of two different strategies applied alternately over time.\nFor example, we use differential compression first when we fail to\nprune, then we prune the same page later (a little like merge\ndeduplication in my recent nbtree delete dedup patch).\n\nMaybe each of these two strategies (differential compression +\ntraditional heap HOT chain truncation) get applied alternately against\nthe same heap page over time, in a tick-tock fashion. We naturally\navoid availing of the old version store structure, which is good,\nsince that is a relatively expensive strategy that we should apply\nonly as a last resort. 
This tick-tock behavior is an emergent property\nof the workload rather than something planned or intelligent, and yet\nit kind of appears to be an intelligent strategy. (Maybe it works that\nway permanently in some parts of the heap, or maybe the same heap\nblocks only tick-tock like this on Tuesdays. It may be possible for\nstuff like that to happen sanely with well chosen simple heuristics\nthat exploit asymmetry.)\n\n* Work with (not against) the way that Postgres strongly decouples the\nphysical representation of data from the logical contents of the\ndatabase compared to other DB systems. But at the same time, work hard\nto make the physical representation of the data as close as is\npractically possible to an idealized, imaginary logical version of the\ndata. Do this because it makes queries faster, not because it's\nstrictly necessary for concurrency control/recovery/whatever.\n\nConcretely, this mostly means that we try our best to keep each\nlogical row (i.e. the latest physical version or two of each row)\nlocated on the same physical heap block over time, using the\nescalation strategy I described or something like it. Notice that\nwe're imposing a cost on queries that are arguably responsible for\ncreating garbage, but only when we really can't tolerate more garbage\ncollection debt. But if it doesn't work out that's okay -- we tried. I\nhave a feeling that it will in fact mostly work out. Why shouldn't it\nbe possible to have no more than one or two uncommitted row versions\nin a heap page at any given time, just like with my nbtree patch? (I\nthink that the answer to that question is \"weird workloads\", but I'm\nokay with the idea that they're a little slower or whatever.)\n\nNotice that this makes the visibility map work better in practice. I\nalso think that the FSM needs to be taught that it isn't a good thing\nto reuse a little fragment of space on its own, because it works\nagainst our goal of trying to avoid relocating rows. 
The current logic\nseems focussed on using every little scrap of free space no matter\nwhat, which seems pretty misguided. Penny-wise, pound-foolish.\n\nAlso notice that fighting to keep the same logical row on the same\nblock has a non-linear payoff. We don't need to give up on that goal\nat the first sign of trouble. If it's hard to do a conventional prune\nafter succeeding a thousand times then it's okay to work harder. Only\na sucker gives up at the first sign of trouble. We're playing a long\ngame here. If it becomes so hard that even applying a more aggressive\nstrategy fails, then it's probably also true that it has become\ninherently impossible to sustain the original layout. We graciously\naccept defeat and cut our losses, having only actually wasted a little\neffort to learn that we need to move our incoming successor version to\nsome other heap block (especially because the cost is amortized across\nversions that live on the same heap block).\n\n* Don't try to completely remove VACUUM. Treat its top down approach\nas complementary to the new bottom-up approaches we add.\n\nThere is nothing wrong with taking a long time to clean up garbage\ntuples in heap pages that have very little garbage total. In fact I\nthink that it might be a good thing. Why be in a hurry to dirty the\npage again? If it becomes a real problem in the short term then the\nbottom-up stuff can take care of it. Under this old-but-new paradigm,\nmaybe VACUUM has to visit indexes a lot less (maybe it just decides\nnot to sometimes, based on stats about the indexes, like we see today\nwith vacuum_index_cleanup = off). VACUUM is for making infrequent\n\"clean sweeps\", though it typically leaves most of the work to new\nbottom-up approaches, that are well placed to understand the needs of\nqueries that touch nearby data. 
Autovacuum does not presume to\nunderstand the needs of queries at all.\n\nIt would also be possible for VACUUM to make more regular sweeps over\nthe version store without disturbing the main relation under this\nmodel. The version store isn't that different to a separate heap\nrelation, but naturally doesn't require traditional index cleanup or\nfreezing, and naturally stores things in approximately temporal order.\nSo we recycle space in the version store in a relatively eager,\ncircular fashion, because it naturally contains data that favors such\nan approach. We make up the fixed cost of using the separate old\nversion store structure by reducing deferred costs like this. And by\nonly using it when it is demonstrably helpful.\n\nIt might also make sense for us to prioritize putting heap tuples that\nrepresent versions whose indexed columns were changed by update\n(basically a non-hot update) in the version store -- we work extra\nhard on that (and just leave behind an LP_DEAD line pointer). That way\nVACUUM can do retail index tuple deletion for the index whose columns\nwere modified when it finds them in the version store (presumably this\nnbtree patch of mine works well enough with the logically unchanged\nindex entries for other indexes that we don't need to go out of our\nway).\n\nI'm sure that there are more than a few holes in this sketch of mine.\nIt's not really worked out, but it has certain properties that are\ngenerally under appreciated -- especially the thing about the worst\ncase number of versions per logical row being extremely important, as\nwell as the idea that back pressure can be a good thing when push\ncomes to shove -- provided it is experienced locally, and only by\nqueries that update the same very hot logical rows. Back pressure\nneeds to be proportionate and approximately fair. Another important\npoint that I want to call out again here is that we should try to\nexploit cost/benefit asymmetry opportunistically. 
That seems to have\nworked out extremely well in my recent B-Tree delete dedup patch.\n\nI don't really expect anybody to take this seriously -- especially as\na total blueprint. Maybe some elements of what I've sketched can be\nused as part of some future big effort. You've got to start somewhere.\nIt has to be incremental.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 22 Oct 2020 18:05:28 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "heapam and bottom-up garbage collection, keeping version chains short\n (Was: Deleting older versions in unique indexes to avoid page splits)" }, { "msg_contents": "On Thu, 22 Oct 2020 at 18:42, Peter Geoghegan <pg@bowt.ie> wrote:\n\n> > The average latency is x2. What is the distribution of latencies?\n> > Occasional very long or all uniformly x2?\n>\n> The latency is generally very even with the patch. There is a constant\n> hum of cleanup by the new mechanism in the case of the benchmark\n> workload. As opposed to a cascade of page splits, which occur in\n> clearly distinct correlated waves.\n\nPlease publish details of how long a pre-split cleaning operation\ntakes and what that does to transaction latency. It *might* be true\nthat the cost of avoiding splits is worth it in balance against the\ncost of splitting, but it might not.\n\nYou've shown us a very nice paper analysing the page split waves, but\nwe need a similar detailed analysis so we can understand if what you\npropose is better or not (and in what situations).\n\n> > I would guess that holding the page locks will also slow down SELECT\n> > workload, so I think you should also report that workload as well.\n> >\n> > Hopefully that will be better in the latest version.\n>\n> But the same benchmark that you're asking about here has two SELECT\n> statements and only one UPDATE. It already is read-heavy in that\n> sense. 
And we see that the latency is also significantly improved for\n> the SELECT queries.\n>\n> Even if there was often a performance hit rather than a benefit (which\n> is definitely not what we see), it would still probably be worth it.\n> Users create indexes for a reason. I believe that we are obligated to\n> maintain the indexes to a reasonable degree, and not just when it\n> happens to be convenient to do so in passing.\n\nThe leaf page locks are held for longer, so we need to perform\nsensible tests that show if this has a catastrophic effect on related\nworkloads, or not.\n\nThe SELECT tests proposed need to be aimed at the same table, at the same time.\n\n> The strength of the design is in how clever it isn't.\n\nWhat it doesn't do could be good or bad so we need to review more\ndetails on behavior. Since the whole idea of the patch is to change\nbehavior, that seems a reasonable ask. I don't have any doubt about\nthe validity of the approach or coding.\n\nWhat you've done so far is very good and I am very positive about\nthis, well done.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 23 Oct 2020 17:03:19 +0100", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Fri, Oct 23, 2020 at 9:03 AM Simon Riggs <simon@2ndquadrant.com> wrote:\n> Please publish details of how long a pre-split cleaning operation\n> takes and what that does to transaction latency. It *might* be true\n> that the cost of avoiding splits is worth it in balance against the\n> cost of splitting, but it might not.\n\nI don't think that you understand how the patch works. I cannot very\nwell isolate that cost because the patch is designed to only pay it\nwhen there is a strong chance of getting a much bigger reward, and\nwhen the only alternative is to split the page. 
When it fails the\nquestion naturally doesn't come up again for the same two pages that\nfollow from the page split. As far as I know the only cases that are\nregressed all involve small indexes with lots of contention, which is\nnot surprising. And not necessarily due to the heap page accesses -\nmaking indexes smaller sometimes has that effect, even when it happens\ndue to things like better page split heuristics.\n\nIf anybody finds a serious problem with my patch then it'll be a\nweakness or hole in the argument I just made -- it won't have much to\ndo with how expensive any of these operations are in isolation. It\nusually isn't sensible to talk about page splits as isolated things.\nMost of my work on B-Trees in the past couple of years built on the\nobservation that sometimes successive page splits are related to each\nother in one way or another.\n\nIt is a fallacy of composition to think of the patch as a thing that\nprevents some page splits. The patch is valuable because it more or\nless eliminates *unnecessary* page splits (and makes it so that there\ncannot be very many TIDs for each logical row in affected indexes).\nThe overall effect is not linear. If you added code to artificially\nmake the mechanism fail randomly 10% of the time (at the point where\nit becomes clear that the current operation would otherwise be\nsuccessful) that wouldn't make the resulting code 90% as useful as the\noriginal. It would actually make it approximately 0% as useful. On\nhuman timescales the behavioral difference between this hobbled\nversion of my patch and the master branch would be almost\nimperceptible.\n\nIt's obvious that a page split is more expensive than the delete\noperation (when it works out). It doesn't need a microbenchmark (and I\nreally can't think of one that would make any sense). 
Page splits\ntypically have WAL records that are ~4KB in size, whereas the\nopportunistic delete records are almost always less than 100 bytes,\nand typically close to 64 bytes -- which is the same size as most\nindividual leaf page insert WAL records. Plus you have almost double\nthe FPIs going forward with the page split.\n\n> You've shown us a very nice paper analysing the page split waves, but\n> we need a similar detailed analysis so we can understand if what you\n> propose is better or not (and in what situations).\n\nThat paper was just referenced in passing. It isn't essential to the\nmain argument.\n\n> The leaf page locks are held for longer, so we need to perform\n> sensible tests that show if this has a catastrophic effect on related\n> workloads, or not.\n>\n> The SELECT tests proposed need to be aimed at the same table, at the same time.\n\nBut that's exactly what I did the first time!\n\nI had two SELECT statements against the same table. They use almost\nthe same distribution as the UPDATE, so that they'd hit the same part\nof the key space without it being exactly the same as the UPDATE from\nthe same xact in each case (I thought that if it was exactly the same\npart of the table then that might unfairly favor my patch).\n\n> > The strength of the design is in how clever it isn't.\n>\n> What it doesn't do could be good or bad so we need to review more\n> details on behavior. Since the whole idea of the patch is to change\n> behavior, that seems a reasonable ask. I don't have any doubt about\n> the validity of the approach or coding.\n\nI agree, but the patch isn't the sum of its parts. 
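To put rough numbers on that asymmetry, here is a back-of-envelope sketch using the approximate record sizes quoted above (~4KB per split, ~64 bytes per opportunistic delete). Both sizes and the event count are ballpark assumptions, not measurements:

```python
# Ballpark WAL arithmetic for the record sizes quoted above. The
# sizes and the number of avoided splits are assumptions.
SPLIT_WAL_BYTES = 4096    # ~4KB per page split record
DELETE_WAL_BYTES = 64     # ~64 bytes per opportunistic delete record

avoided_splits = 100_000  # hypothetical count of would-be splits
split_mib = avoided_splits * SPLIT_WAL_BYTES / 1024 ** 2
delete_mib = avoided_splits * DELETE_WAL_BYTES / 1024 ** 2
print(f"splits:  ~{split_mib:.0f} MiB of WAL")
print(f"deletes: ~{delete_mib:.1f} MiB of WAL")
print(f"ratio:   {SPLIT_WAL_BYTES // DELETE_WAL_BYTES}x")
```

That is a ~64x difference in WAL volume per event before even counting the roughly doubled FPIs that follow from having twice as many leaf pages.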
You need to talk\nabout a workload or a set of conditions, and how things develop over\ntime.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 23 Oct 2020 10:13:46 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Fri, 23 Oct 2020 at 18:14, Peter Geoghegan <pg@bowt.ie> wrote:\n\n> It's obvious that a page split is more expensive than the delete\n> operation (when it works out).\n\nThe problem I highlighted is that the average UPDATE latency is x2\nwhat it is on current HEAD. That is not consistent with the reported\nTPS, so it remains an issue and that isn't obvious.\n\n> It doesn't need a microbenchmark (and I\n> really can't think of one that would make any sense).\n\nI'm asking for detailed timings so we can understand the latency\nissue. I didn't ask for a microbenchmark.\n\nI celebrate your results, but we do need to understand the issue, somehow.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sat, 24 Oct 2020 10:55:13 +0100", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Sat, Oct 24, 2020 at 2:55 AM Simon Riggs <simon@2ndquadrant.com> wrote:\n> The problem I highlighted is that the average UPDATE latency is x2\n> what it is on current HEAD. That is not consistent with the reported\n> TPS, so it remains an issue and that isn't obvious.\n\nWhy do you say that? I reported that the UPDATE latency is less than\nhalf for the benchmark.\n\nThere probably are some workloads with worse latency and throughput,\nbut generally only with high contention/small indexes. I'll try to\nfine tune those, but some amount of it is probably inevitable. 
On\naverage query latency is quite a lot lower with the patch (where it is\naffected at all - the mechanism is only used with non-hot updates).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 24 Oct 2020 08:01:06 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Sat, Oct 24, 2020 at 8:01 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> There probably are some workloads with worse latency and throughput,\n> but generally only with high contention/small indexes. I'll try to\n> fine tune those, but some amount of it is probably inevitable. On\n> average query latency is quite a lot lower with the patch (where it is\n> affected at all - the mechanism is only used with non-hot updates).\n\nAttached is v5, which has changes that are focused on two important\nhigh level goals:\n\n1. Keeping costs down generally, especially in high contention cases\nwhere costs are most noticeable.\n\n2. Making indexes on low cardinality columns that naturally really\nbenefit from merge deduplication in Postgres 13 receive largely the\nsame benefits that you've already seen with high cardinality indexes\n(at least outside of the extreme cases where it isn't sensible to\ntry).\n\nCPU costs (especially from sorting temp work arrays) seem to be the\nbig cost overall. It turns out that the costs of accessing heap pages\nwithin the new mechanism is not really noticeable. This isn't all that\nsurprising, though. 
The heap pages are accessed in a way that\nnaturally exploits locality across would-be page splits in different\nindexes.\n\nTo some degree the two goals that I describe conflict with each other.\nIf merge deduplication increases the number of logical rows that \"fit\non each leaf page\" (by increasing the initial number of TIDs on each\nleaf page by over 3x when the index is in the pristine CREATE INDEX\nstate), then naturally the average amount of work required to maintain\nindexes in their pristine state is increased. We cannot expect to pay\nnothing to avoid \"unnecessary page splits\" -- we can only expect to\ncome out ahead over time.\n\nThe main new thing that allowed me to more or less accomplish the\nsecond goal is granular deletion in posting list tuples. That is, v5\nteaches the delete mechanism to do VACUUM-style granular TID deletion\nwithin posting lists. This factor turned out to matter a lot.\n\nHere is an extreme benchmark that I ran this morning for this patch,\nwhich shows both strengths and weaknesses:\n\npgbench scale 1000 (with customizations to indexing that I go into\nbelow), 16 + 32 clients, 30 minutes per run (so 2 hours total\nexcluding initial data loading). 
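As a side note on the workload shape: `random_gaussian` concentrates accesses around the middle of the key space. A rough Python approximation of the documented behavior (not pgbench's exact algorithm -- roughly, a normal centered on the midpoint with stddev about (ub-lb)/(2*param), redrawn when outside the range) shows the skew:

```python
import random

# Rough approximation of pgbench's random_gaussian(lb, ub, param):
# normal centered on the midpoint, stddev ~ (ub - lb) / (2 * param),
# redrawn until the value falls inside [lb, ub]. An approximation
# for illustration, not pgbench's exact algorithm.
def random_gaussian(lb, ub, param, rng=random):
    mean = (lb + ub) / 2.0
    stddev = (ub - lb) / (2.0 * param)
    while True:
        v = rng.gauss(mean, stddev)
        if lb <= v <= ub:
            return int(round(v))

rng = random.Random(1)
sample = [random_gaussian(1, 100_000_000, 4.0, rng) for _ in range(20_000)]
mid = sum(1 for v in sample if abs(v - 50_000_000) <= 12_500_000)
print(f"{mid / len(sample):.0%} of draws within ~one stddev of the midpoint")
```

So with param 4.0 roughly two thirds of the accesses land in the middle eighth of the aid range, which is why all four indexes see the same hot spot.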
Same queries as last time:\n\n\\set aid1 random_gaussian(1, 100000 * :scale, 4.0)\n\\set aid2 random_gaussian(1, 100000 * :scale, 4.5)\n\\set aid3 random_gaussian(1, 100000 * :scale, 4.2)\n\\set bid random(1, 1 * :scale)\n\\set tid random(1, 10 * :scale)\n\\set delta random(-5000, 5000)\nBEGIN;\nUPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid1;\nSELECT abalance FROM pgbench_accounts WHERE aid = :aid2;\nSELECT abalance FROM pgbench_accounts WHERE aid = :aid3;\nEND;\n\nAnd just like last time, we replace the usual pgbench_accounts index\nwith an INCLUDE index to artificially make every UPDATE a non-HOT\nUPDATE:\n\ncreate unique index aid_pkey_include_abalance on pgbench_accounts\n(aid) include(abalance);\n\nUnlike last time we have a variety of useless-to-us *low cardinality*\nindexes. These are much smaller than aid_pkey_include_abalance due to\nmerge deduplication, but are still quite similar to it:\n\ncreate index fiver on pgbench_accounts ((aid - (aid%5)));\ncreate index tenner on pgbench_accounts ((aid - (aid%10)));\ncreate index score on pgbench_accounts ((aid - (aid%20)));\n\nThe silly indexes this time around are designed to have the same skew\nas the PK, but with low cardinality data. So, for example \"score\", has\ntwenty distinct logical rows for each distinct aid value. Which is\npretty extreme as far as the delete mechanism is concerned. That's why\nthis is more of a mixed picture compared to the earlier benchmark. I'm\ntrying to really put the patch through its paces, not make it look\ngood.\n\nFirst the good news. The patch held up perfectly in one important way\n-- the size of the indexes didn't change at all compared to the\noriginal pristine size. 
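To spell out why fiver/tenner/score are "low cardinality": each key of the form `aid - (aid % N)` is shared by N consecutive aid values, so every distinct key carries roughly N heap TIDs -- exactly the shape that merge deduplication compresses into posting lists. A quick sketch, counting keys for the scale-1 table size:

```python
# Duplicates per distinct key in the fiver/tenner/score expression
# indexes: key = aid - (aid % N) groups N consecutive aids together.
def distinct_keys(n_rows, modulus):
    return len({aid - (aid % modulus) for aid in range(1, n_rows + 1)})

N_ROWS = 100_000  # pgbench_accounts rows at scale 1
for name, mod in (("fiver", 5), ("tenner", 10), ("score", 20)):
    keys = distinct_keys(N_ROWS, mod)
    print(f"{name}: {keys} distinct keys, ~{N_ROWS / keys:.0f} TIDs per key")
```

The score index, at ~twenty TIDs per key, is the most extreme case for granular posting list deletion, and the duplicate-heavy keys are why the pristine sizes come out so much smaller than the primary key's.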
That looked like this at the start for both\npatch + master:\n\naid_pkey_include_abalance: 274,194 pages/2142 MB\nfiver: 142,348 pages/1112 MB\ntenner: 115,352 pages/901 MB\nscore: 94,677 pages/740 MB\n\nBy the end, master looked like this:\n\naid_pkey_include_abalance: 428,759 pages (~1.56x original size)\nfiver: 220,114 pages (~1.54x original size)\ntenner: 176,983 pages (~1.53x original size)\nscore: 178,165 pages (~1.88x original size)\n\n(As I said, no change in the size of indexes with the patch -- not\neven one single page split.)\n\nNow for the not-so-good news. The TPS numbers looked like this\n(results in original chronological order of the runs, which I've\ninterleaved):\n\npatch_1_run_16.out: \"tps = 30452.530518 (including connections establishing)\"\nmaster_1_run_16.out: \"tps = 35101.867559 (including connections establishing)\"\npatch_1_run_32.out: \"tps = 26000.991486 (including connections establishing)\"\nmaster_1_run_32.out: \"tps = 32582.129545 (including connections establishing)\"\n\nThe latency numbers aren't great for the patch, either. 
Take the 16 client case:\n\nnumber of transactions actually processed: 54814992\nlatency average = 0.525 ms\nlatency stddev = 0.326 ms\ntps = 30452.530518 (including connections establishing)\ntps = 30452.612796 (excluding connections establishing)\nstatement latencies in milliseconds:\n 0.001 \\set aid1 random_gaussian(1, 100000 * :scale, 4.0)\n 0.000 \\set aid2 random_gaussian(1, 100000 * :scale, 4.5)\n 0.000 \\set aid3 random_gaussian(1, 100000 * :scale, 4.2)\n 0.000 \\set bid random(1, 1 * :scale)\n 0.000 \\set tid random(1, 10 * :scale)\n 0.000 \\set delta random(-5000, 5000)\n 0.046 BEGIN;\n 0.159 UPDATE pgbench_accounts SET abalance = abalance +\n:delta WHERE aid = :aid1;\n 0.153 SELECT abalance FROM pgbench_accounts WHERE aid = :aid2;\n 0.091 SELECT abalance FROM pgbench_accounts WHERE aid = :aid3;\n 0.075 END;\n\nvs master's 16 client case:\n\nnumber of transactions actually processed: 63183870\nlatency average = 0.455 ms\nlatency stddev = 0.307 ms\ntps = 35101.867559 (including connections establishing)\ntps = 35101.914573 (excluding connections establishing)\nstatement latencies in milliseconds:\n 0.001 \\set aid1 random_gaussian(1, 100000 * :scale, 4.0)\n 0.000 \\set aid2 random_gaussian(1, 100000 * :scale, 4.5)\n 0.000 \\set aid3 random_gaussian(1, 100000 * :scale, 4.2)\n 0.000 \\set bid random(1, 1 * :scale)\n 0.000 \\set tid random(1, 10 * :scale)\n 0.000 \\set delta random(-5000, 5000)\n 0.049 BEGIN;\n 0.117 UPDATE pgbench_accounts SET abalance = abalance +\n:delta WHERE aid = :aid1;\n 0.120 SELECT abalance FROM pgbench_accounts WHERE aid = :aid2;\n 0.091 SELECT abalance FROM pgbench_accounts WHERE aid = :aid3;\n 0.077 END;\n\nEarlier variations of this same workload that were run with a 10k TPS\nrate limit had SELECT statements that had somewhat lower latency with\nthe patch, while the UPDATE statements were slower by roughly the same\namount as you see here. Here we see that at least the aid3 SELECT is\njust as fast with the patch. 
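As a quick cross-check that the headline numbers are internally consistent, the transaction counts divided by the 30-minute run length reproduce the reported TPS (the small residual is connection establishment and rounding):

```python
# Sanity check: transactions / run length should reproduce the
# reported TPS for the two 16-client runs quoted above.
RUN_SECONDS = 30 * 60
runs = {
    "patch":  (54_814_992, 30_452.53),
    "master": (63_183_870, 35_101.87),
}
for name, (xacts, reported_tps) in runs.items():
    derived = xacts / RUN_SECONDS
    print(f"{name}: derived {derived:,.0f} tps vs reported {reported_tps:,.0f} tps")
```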
(Don't have a proper example of this\nrate-limited phenomenon close at hand now, since I saw it happen back\nwhen the patch was less well optimized.)\n\nThis benchmark is designed to be extreme, and to really stress the\npatch. For one thing we have absolutely no index scans for all but one\nof the four indexes, so controlling the bloat there isn't going to\ngive you the main expected benefit. Which is that index scans don't\neven have to visit heap pages from old versions, since they're not in\nthe index for very long (unless there aren't very many versions for\nthe logical rows in question, which isn't really a problem for us).\nFor another the new mechanism is constantly needed, which just isn't\nvery realistic. It seems as if the cost is mostly paid by non-HOT\nupdaters, which seems exactly right to me.\n\nI probably could have cheated by making the aid_pkey_include_abalance\nindex a non-unique index, denying the master branch the benefit of\nLP_DEAD setting within _bt_check_unique(). Or I could have cheated\n(perhaps I should just say \"gone for something a bit more\nsympathetic\") by not having skew on all the indexes (say by hashing on\naid in the indexes that use merge deduplication). I also think that\nthe TPS gap would have been smaller if I'd spent more time on each\nrun, but I didn't have time for that today. In the past I've seen it\ntake a couple of hours or more for the advantages of the patch to come\nthrough (it takes that long for reasons that should be obvious).\n\nEven the overhead we see here is pretty tolerable IMV. I believe that\nit will be far more common for the new mechanism to hardly get used at\nall, and yet have a pretty outsized effect on index bloat. To give you\na simple example of how that can happen, consider that if this did\nhappen in a real workload it would probably be caused by a surge in\ndemand -- now we don't have to live with the bloat consequences of an\nisolated event forever (or until the next REINDEX). 
I can make more\nsophisticated arguments than that one, but it doesn't seem useful\nright now so I'll refrain.\n\nThe patch adds a backstop. It seems to me that that's really what we\nneed here. Predictability over time and under a large variety of\ndifferent conditions. Real workloads constantly fluctuate.\n\nEven if people end up not buying my argument that it's worth it for\nworkloads like this, there are various options. And, I bet I can\nfurther improve the high contention cases without losing the valuable\npart -- there are a number of ways in which I can get the CPU costs\ndown further that haven't been fully explored (yes, it really does\nseem to be CPU costs, especially due to TID sorting). Again, this\npatch is all about extreme pathological workloads, system stability,\nand system efficiency over time -- it is not simply about increasing\nsystem throughput. There are some aspects of this design (that come up\nwith extreme workloads) that may in the end come down to value\njudgments. I'm not going to tell somebody that they're wrong for\nprioritizing different things (within reason, at least). In my opinion\nalmost all of the problems we have with VACUUM are ultimately\nstability problems, not performance problems per se. And, I suspect\nthat we do very well with stupid benchmarks like this compared to\nother DB systems precisely because we currently allow non-HOT updaters\nto \"live beyond their means\" (which could in theory be great if you\nframe it a certain way that seems pretty absurd to me). 
This suggests\nwe can \"afford\" to go a bit slower here as far as the competitive\npressures determine what we should do (notice that this is a distinct\nargument to my favorite argument, which is that we cannot afford to\n*not* go a bit slower in certain extreme cases).\n\nI welcome debate about this.\n\n-- \nPeter Geoghegan", "msg_date": "Mon, 26 Oct 2020 14:15:03 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Mon, 26 Oct 2020 at 21:15, Peter Geoghegan <pg@bowt.ie> wrote:\n\n> Now for the not-so-good news. The TPS numbers looked like this\n> (results in original chronological order of the runs, which I've\n> interleaved):\n\nWhile it is important we investigate the worst cases, I don't see this\nis necessarily bad.\n\nHOT was difficult to measure, but on a 2+ hour run on a larger table,\nthe latency graph was what showed it was a winner. Short runs and\nin-memory data masked the benefits in our early analyses.\n\nSo I suggest not looking at the totals and averages but on the peaks\nand the long term trend. Showing that in graphical form is best.\n\n> The patch adds a backstop. It seems to me that that's really what we\n> need here. Predictability over time and under a large variety of\n> different conditions. Real workloads constantly fluctuate.\n\nYeh, agreed. This is looking like a winner now, but lets check.\n\n> Even if people end up not buying my argument that it's worth it for\n> workloads like this, there are various options. And, I bet I can\n> further improve the high contention cases without losing the valuable\n> part -- there are a number of ways in which I can get the CPU costs\n> down further that haven't been fully explored (yes, it really does\n> seem to be CPU costs, especially due to TID sorting). 
Again, this\n> patch is all about extreme pathological workloads, system stability,\n> and system efficiency over time -- it is not simply about increasing\n> system throughput. There are some aspects of this design (that come up\n> with extreme workloads) that may in the end come down to value\n> judgments. I'm not going to tell somebody that they're wrong for\n> prioritizing different things (within reason, at least). In my opinion\n> almost all of the problems we have with VACUUM are ultimately\n> stability problems, not performance problems per se. And, I suspect\n> that we do very well with stupid benchmarks like this compared to\n> other DB systems precisely because we currently allow non-HOT updaters\n> to \"live beyond their means\" (which could in theory be great if you\n> frame it a certain way that seems pretty absurd to me). This suggests\n> we can \"afford\" to go a bit slower here as far as the competitive\n> pressures determine what we should do (notice that this is a distinct\n> argument to my favorite argument, which is that we cannot afford to\n> *not* go a bit slower in certain extreme cases).\n>\n> I welcome debate about this.\n\nAgreed, we can trade initial speed for long term consistency. I guess\nthere are some heuristics there on that tradeoff.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 27 Oct 2020 09:43:57 +0000", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Tue, Oct 27, 2020 at 2:44 AM Simon Riggs <simon@2ndquadrant.com> wrote:\n> While it is important we investigate the worst cases, I don't see this\n> is necessarily bad.\n\nI looked at \"perf top\" a few times when the test from yesterday ran. I\nsaw that the proposed delete mechanism was the top consumer of CPU\ncycles. It seemed as if the mechanism was very expensive. 
However,\nthat's definitely the wrong conclusion about what happens in the\ngeneral case, or even in slightly less extreme cases. It at least\nneeds to be put in context.\n\nI reran exactly the same benchmark overnight, but added a 10k TPS rate\nlimit this time (so about a third of the TPS that's possible without a\nlimit). I also ran it for longer, and saw much improved latency. (More\non the latency aspect below, for now I want to talk about \"perf top\").\n\nThe picture with \"perf top\" changed significantly with a 10k TPS rate\nlimit, even though the workload itself is very similar. Certainly the\nnew mechanism/function is still quite close to the top consumer of CPU\ncycles. But it no longer uses more cycles than the familiar super hot\nfunctions that you expect to see right at the top with pgbench (e.g.\n_bt_compare(), hash_search_with_hash_value()). It's now something like\nthe 4th or 5th hottest function (I think that that means that the cost\nin cycles is more than an order of magnitude lower, but I didn't\ncheck). Just adding this 10k TPS rate limit makes the number of CPU\ncycles consumed by the new mechanism seem quite reasonable. The\nbenefits that the patch brings are not diminished at all compared to\nthe original no-rate-limit variant -- the master branch now only takes\nslightly longer to completely bloat all its indexes with this 10k TPS\nlimit (while the patch avoids even a single page split -- no change\nthere).\n\nAgain, this is because the mechanism is a backstop. It only works as\nhard as needed to avoid unnecessary page splits. When the system is\nworking as hard as possible to add version churn to indexes (which is\nwhat the original/unthrottled test involved), then the mechanism also\nworks quite hard. In this artificial and contrived scenario, any\ncycles we can save from cleaning up bloat (by micro optimizing the\ncode in the patch) go towards adding even more bloat instead...which\nnecessitates doing more cleanup. 
This is why optimizing the code in\nthe patch with this unrealistic scenario in mind is subject to sharp\ndiminishing returns. It's also why you can get a big benefit from the\npatch when the new mechanism is barely ever used. I imagine that if I\nran the same test again but with a 1k TPS limit I would hardly see the\nnew mechanism in \"perf top\" at all....but in the end the bloat\nsituation would be very similar.\n\nI think that you could model this probabilistically if you had the\ninclination. Yes, the more you throttle throughput (by lowering the\npgbench rate limit further), the less chance you have of any given\nleaf page splitting moment to moment (for the master branch). But in\nthe long run every original leaf page splits at least once anyway,\nbecause each leaf page still only has to be unlucky once. It is still\ninevitable that they'll all get split eventually (and probably not\nbefore too long), unless and until you *drastically* throttle pgbench.\n\nI believe that things like opportunistic HOT chain truncation (heap\npruning) and index tuple LP_DEAD bit setting are very effective much\nof the time. The problem is that it's easy to utterly rely on them\nwithout even realizing it, which creates hidden risk that may or may\nnot result in big blow ups down the line. There is nothing inherently\nwrong with being lazy or opportunistic about cleaning up garbage\ntuples -- I think that there are significant advantages, in fact. But\nonly if it isn't allowed to create systemic risk. More concretely,\nbloat cannot be allowed to become concentrated in any one place -- no\nindividual query should have to deal with more than 2 or 3 versions\nfor any given logical row. 
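To make the distinction between average bloat and concentrated bloat concrete, here is a tiny standalone sketch. The numbers are invented purely for illustration (they aren't taken from any benchmark):

```python
# Hypothetical distribution of physical versions per logical row: almost
# every row has exactly one version, but version churn has piled up on a
# handful of hot rows.
version_counts = [1] * 9990 + [50] * 10

avg = sum(version_counts) / len(version_counts)    # looks perfectly healthy
worst = max(version_counts)                        # the actual problem

print(f"average versions per row: {avg:.3f}")
print(f"worst-case versions for a single row: {worst}")
```

The average (about 1.05) suggests there is no problem at all, while any query that happens to hit one of the 10 hot rows must wade through 50 versions.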
If we're focussed on the *average* number\nof physical versions per logical row then we may reach dramatically\nwrong conclusions about what to do (which is a problem in a world\nwhere autovacuum is supposed to do most garbage collection...unless\nyour app happens to look like standard pgbench!).\n\nAnd now back to latency with this 10k TPS limited variant I ran last\nnight. After 16 hours we have performed 8 runs, each lasting 2 hours.\nIn the original chronological order, these runs are:\n\npatch_1_run_16.out: \"tps = 10000.095914 (including connections establishing)\"\nmaster_1_run_16.out: \"tps = 10000.171945 (including connections establishing)\"\npatch_1_run_32.out: \"tps = 10000.082975 (including connections establishing)\"\nmaster_1_run_32.out: \"tps = 10000.533370 (including connections establishing)\"\npatch_2_run_16.out: \"tps = 10000.076865 (including connections establishing)\"\nmaster_2_run_16.out: \"tps = 9997.933215 (including connections establishing)\"\npatch_2_run_32.out: \"tps = 9999.911988 (including connections establishing)\"\nmaster_2_run_32.out: \"tps = 10000.864031 (including connections establishing)\"\n\nHere is what I see at the end of \"patch_2_run_32.out\" (i.e. 
at the end\nof the final 2 hour run for the patch):\n\nnumber of transactions actually processed: 71999872\nlatency average = 0.265 ms\nlatency stddev = 0.110 ms\nrate limit schedule lag: avg 0.046 (max 30.274) ms\ntps = 9999.911988 (including connections establishing)\ntps = 9999.915766 (excluding connections establishing)\nstatement latencies in milliseconds:\n 0.001 \\set aid1 random_gaussian(1, 100000 * :scale, 4.0)\n 0.000 \\set aid2 random_gaussian(1, 100000 * :scale, 4.5)\n 0.000 \\set aid3 random_gaussian(1, 100000 * :scale, 4.2)\n 0.000 \\set bid random(1, 1 * :scale)\n 0.000 \\set tid random(1, 10 * :scale)\n 0.000 \\set delta random(-5000, 5000)\n 0.023 BEGIN;\n 0.099 UPDATE pgbench_accounts SET abalance = abalance +\n:delta WHERE aid = :aid1;\n 0.036 SELECT abalance FROM pgbench_accounts WHERE aid = :aid2;\n 0.034 SELECT abalance FROM pgbench_accounts WHERE aid = :aid3;\n 0.025 END;\n\nHere is what I see at the end of \"master_2_run_32.out\" (i.e. at the\nend of the final run for master):\n\nnumber of transactions actually processed: 72006803\nlatency average = 0.266 ms\nlatency stddev = 2.722 ms\nrate limit schedule lag: avg 0.074 (max 396.853) ms\ntps = 10000.864031 (including connections establishing)\ntps = 10000.868545 (excluding connections establishing)\nstatement latencies in milliseconds:\n 0.001 \\set aid1 random_gaussian(1, 100000 * :scale, 4.0)\n 0.000 \\set aid2 random_gaussian(1, 100000 * :scale, 4.5)\n 0.000 \\set aid3 random_gaussian(1, 100000 * :scale, 4.2)\n 0.000 \\set bid random(1, 1 * :scale)\n 0.000 \\set tid random(1, 10 * :scale)\n 0.000 \\set delta random(-5000, 5000)\n 0.022 BEGIN;\n 0.073 UPDATE pgbench_accounts SET abalance = abalance +\n:delta WHERE aid = :aid1;\n 0.036 SELECT abalance FROM pgbench_accounts WHERE aid = :aid2;\n 0.034 SELECT abalance FROM pgbench_accounts WHERE aid = :aid3;\n 0.025 END;\n\nNotice the following:\n\n1. The overall \"latency average\" for the patch is very slightly lower.\n2. 
The overall \"latency stddev\" for the patch is far, far lower -- over\n20x lower, in fact.\n3. The patch's latency for the UPDATE statement is still somewhat\nhigher, but it's not so bad. We're still visibly paying a price in\nsome sense, but at least we're imposing the new costs squarely on the\nquery that is responsible for all of our problems.\n4. The patch's latency for the SELECT statements is exactly the same\nas on master. We're not imposing any new cost on \"innocent\" SELECT\nstatements that didn't create the problem, even if they didn't quite\nmanage to benefit here. (Without LP_DEAD setting in _bt_check_unique()\nI'm sure that the SELECT latency would be significantly lower for the\npatch.)\n\nThe full results (with lots of details pulled from standard system\nviews after each run) can be downloaded as a .tar.gz archive from:\n\nhttps://drive.google.com/file/d/1Dn8rSifZqT7pOOIgyKstl-tdACWH-hqO/view?usp=sharing\n\n(It's probably not that interesting to drill down any further, but I\nmake the full set of results available just in case. There are loads\nof things included just because I automatically capture them when\nbenchmarking anything at all.)\n\n> HOT was difficult to measure, but on a 2+ hour run on a larger table,\n> the latency graph was what showed it was a winner. Short runs and\n> in-memory data masked the benefits in our early analyses.\n\nYeah, that's what was nice about working on sorting -- almost instant\nfeedback. Who wants to spend at least 2 hours to test out every little\ntheory? :-)\n\n> So I suggest not looking at the totals and averages but on the peaks\n> and the long term trend. Showing that in graphical form is best.\n\nI think that you're right that a graphical representation with an\nX-axis that shows how much time has passed would be very useful. I'll\ntry to find a way of incorporating that into my benchmarking workflow.\n\nThis is especially likely to help when modelling how cases with a long\nrunning xact/snapshot behave.
That isn't a specific goal of mine here,\nbut I expect that it'll help with that a lot too. For now I'm just\nfocussing on downsides and not upsides, for the usual reasons.\n\n> Agreed, we can trade initial speed for long term consistency. I guess\n> there are some heuristics there on that tradeoff.\n\nRight. Another way of looking at it is this: it should be possible for\nreasonably good DBAs to develop good intuitions about how the system\nwill hold up over time, based on past experience and common sense --\nno chaos theory required. Whatever the cost of the mechanism is, at\nleast it's only something that gets shaved off the top minute to\nminute. It seems almost impossible for the cost to cause sudden\nsurprises (except maybe once, after an initial upgrade to Postgres 14,\nthough I doubt it). Whereas it seems very likely to prevent many large\nand unpleasant surprises caused by hidden, latent risk.\n\nI believe that approximately 100% of DBAs would gladly take that\ntrade-off, even if the total cost in cycles was higher. It happens to\nalso be true that they're very likely to use far fewer cycles over\ntime, but that's really just a bonus.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 27 Oct 2020 11:35:01 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "Mon, 26 Oct 2020 at 22:15, Peter Geoghegan <pg@bowt.ie>:\n\n> Attached is v5, which has changes that are focused on two important\n> high level goals:\n>\n\nI've reviewed v5 of the patch and did some testing.\n\nFirst things first, the niceties must be observed:\n\n Patch applies, compiles and passes checks without any issues.\n It has a good amount of comments that describe the changes very well.\n\nNow to its contents.\n\nI now see what you mean by saying that this patch is a natural and logical\nextension of the deduplication v13 work.
I agree with this.\n\nBasically, 2 major deduplication strategies exist now:\n- by merging duplicates into a posting list; suits non-unique indexes\nbetter,\n 'cos actual duplicates come from the logically different tuples. This is\n existing functionality.\n- by deleting dead tuples and reducing need for deduplication at all; suits\n unique indexes mostly. This is a subject of this patch and it (to some\n extent) undoes v13 functionality around unique indexes, making it better.\n\nSome comments on the patch.\n\n1. In the following comment:\n\n+ * table_index_batch_check() is a variant that is specialized to garbage\n+ * collection of dead tuples in index access methods. Duplicates are\n+ * commonly caused by MVCC version churn when an optimization like\n+ * heapam's HOT cannot be applied. It can make sense to opportunistically\n+ * guess that many index tuples are dead versions, particularly in unique\n+ * indexes.\n\nI don't quite like the last sentence. Given that this code is committed,\nI would rather make it:\n\n … cannot be applied. Therefore we opportunistically check for dead tuples\n and reuse the space, delaying leaf page splits.\n\nI understand that \"we\" shouldn't be used here, but I fail to think of a\nproper way to express this.\n\n2. in _bt_dedup_delete_pass() and heap_index_batch_check() you're using some\nconstants, like:\n- expected score of 25\n- nblocksaccessed checks for 1, 2 and 3 blocks\n- maybe more, but the ones above caught my attention.\n\nPerhaps, it'd be better to use #define-s here instead?\n\n3. Do we really need to touch another heap page, if all conditions are met?\n\n+ if (uniqueindex && nblocksaccessed == 1 && score == 0)\n+ break;\n+ if (!uniqueindex && nblocksaccessed == 2 && score == 0)\n+ break;\n+ if (nblocksaccessed == 3)\n+ break;\n\nI was really wondering why to look into 2 heap pages. By not doing it\nstraight away,\nwe just delay the work for the next occasion that'll work on the same page\nwe're\nprocessing. 
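To illustrate points 2 and 3 together, here is how the quoted stopping rule could look with named constants. This is only a Python sketch of the control flow; the real code is C, and the constant names are made up:

```python
UNIQUE_STOP_BLOCKS = 1     # unique index: stop after 1 block when score is 0
NONUNIQUE_STOP_BLOCKS = 2  # non-unique index: stop after 2 blocks when score is 0
MAX_BLOCKS_PER_CALL = 3    # hard cap on heap blocks visited per call

def should_stop(unique_index, nblocksaccessed, score):
    """Mirror of the quoted break conditions (names invented)."""
    if unique_index and nblocksaccessed >= UNIQUE_STOP_BLOCKS and score == 0:
        return True
    if not unique_index and nblocksaccessed >= NONUNIQUE_STOP_BLOCKS and score == 0:
        return True
    return nblocksaccessed >= MAX_BLOCKS_PER_CALL
```

Written this way, the asymmetry between unique and non-unique indexes, and the hard three-block cap, are at least visible at a glance.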
I've modified this piece and included it in my tests (see\nbelow), I reduced\n2nd condition to just 1 block and limited the 3rd case to 2 blocks (just a\nquick hack).\n\nNow for the tests.\n\nI used an i3en.6xlarge EC2 instance with EBS disks attached (24 cores,\n192GB RAM).\nI've employed the same tests Peter described on Oct 16 (right after v2 of\nthe patch).\nThere were some config changes (attached), mostly to produce more logs and\nenable\nproper query monitoring with pg_stat_statements.\n\nThis server is used also for other tests, therefore I am not able to\nutilize all core/RAM.\nI'm interested in doing so though, subject for the next run of tests.\n\nI've used scale factor 10 000, adjusted indexes (resulting in a 189GB size\ndatabase)\nand run the following pgbench:\n\n pgbench -f testcase.pgbench -r -c32 -j8 -T 3600 bench\n\n\nResults (see also attachment):\n\n/* 1, master */\nlatency average = 16.482 ms\ntps = 1941.513487 (excluding connections establishing)\nstatement latencies in milliseconds:\n 4.476 UPDATE pgbench_accounts SET abalance = abalance + :delta\nWHERE aid = :aid1;\n 2.084 SELECT abalance FROM pgbench_accounts WHERE aid = :aid2;\n 2.090 SELECT abalance FROM pgbench_accounts WHERE aid = :aid3;\n/* 2, v5-patch */\nlatency average = 12.509 ms\ntps = 2558.119453 (excluding connections establishing)\nstatement latencies in milliseconds:\n 2.009 UPDATE pgbench_accounts SET abalance = abalance + :delta\nWHERE aid = :aid1;\n 0.868 SELECT abalance FROM pgbench_accounts WHERE aid = :aid2;\n 0.893 SELECT abalance FROM pgbench_accounts WHERE aid = :aid3;\n/* 3, v5-restricted */\nlatency average = 12.338 ms\ntps = 2593.519240 (excluding connections establishing)\nstatement latencies in milliseconds:\n 1.955 UPDATE pgbench_accounts SET abalance = abalance + :delta\nWHERE aid = :aid1;\n 0.846 SELECT abalance FROM pgbench_accounts WHERE aid = :aid2;\n 0.866 SELECT abalance FROM pgbench_accounts WHERE aid = :aid3;\n\nI can see a clear benefit from this patch 
*under specified conditions, YMMV*\n- 32% increase in TPS\n- 24% drop in average latency\n- most important — stable index size!\n\nLooking at the attached graphs (including statement-specific ones):\n- CPU activity, Disk reads (reads, not hits) and Transaction throughput are\n  very stable for the patched version\n- CPU's \"iowait\" is stable and reduced for the patched version (expected)\n- CPU's \"user\" peaks out when master starts to split leaf pages, no such\n  peaks for the patched version\n- there's an expected increase in the amount of \"Disk reads\" for the patched\n  versions, although on master we start at pretty much the same level and by\n  the end of the test we seem to climb up on reads\n- on master, UPDATEs spend 2x more time on average, reading 3x more\n  pages than on the patched versions\n- in fact, \"Average query time\" and \"Query stages\" graphs show a very nice\n  caching effect for patched UPDATEs, a bit clumsy for SELECTs, but still\n  visible\n\nComparing the original and restricted patch versions:\n- there's no visible difference in the amount of \"Disk reads\"\n- on the restricted version UPDATEs behave more gradually, I like this\n  pattern more, as it feels more stable and predictable\n\nIn my opinion, the patch provides clear benefits from an IO reduction and\nindex size control perspective. I really like the stability of operations on\nthe patched version. I would rather stick to the \"restricted\" version of the\npatch, though.\n\nHope this helps. I'm open to doing more tests if necessary.\n\nP.S. I am using automated monitoring for graphs, so I do not have raw\nmetrics around, sorry.\n\n-- \nVictor Yegorov", "msg_date": "Thu, 29 Oct 2020 00:05:01 +0100", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "Mon, 26 Oct 2020
at 22:15, Peter Geoghegan <pg@bowt.ie>:\n\n> Attached is v5, which has changes that are focused on two important\n> high level goals:\n>\n\nAnd some more comments after another round of reading the patch.\n\n1. Looks like UNIQUE_CHECK_NO_WITH_UNCHANGED is used for HOT updates,\n should we use UNIQUE_CHECK_NO_HOT here? It is better understood like\nthis.\n\n2. You're modifying the table_tuple_update() function on line 1311 of\ninclude/access/tableam.h,\n adding modified_attrs_hint. There's a large comment right before it\ndescribing parameters,\n I think there should be a note about the modified_attrs_hint parameter in\nthat comment, 'cos\n it is referenced from other places in tableam.h and also from\nbackend/access/heap/heapam.c\n\n3. Can you elaborate on the scoring model you're using?\n Why do we expect a score of 25, what's the rationale behind this number?\n And should it be #define-d ?\n\n4. heap_compute_xid_horizon_for_tuples contains duplicate logic. Is it\npossible to avoid this?\n\n5. In this comment\n\n+ * heap_index_batch_check() helper function. Sorts deltids array in the\n+ * order needed for useful processing.\n\n perhaps it is better to replace \"useful\" with more details? Or point to\nthe place\n where \"useful processing\" is described.\n\n6. In this snippet in _bt_dedup_delete_pass()\n\n+ else if (_bt_keep_natts_fast(rel, state->base, itup) > nkeyatts &&\n+ _bt_dedup_save_htid(state, itup))\n+ {\n+\n+ }\n\n I would rather add a comment, explaining that the empty body of the\nclause is actually expected.\n\n7. In the _bt_dedup_delete_finish_pending() you're setting ispromising to\nfalse for both\n posting and non-posting tuples. This contradicts comments before the\nfunction.\n\n-- \nVictor Yegorov", "msg_date": "Thu, 29 Oct 2020 23:05:28 +0100", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Wed, Oct 28, 2020 at 4:05 PM Victor Yegorov <vyegorov@gmail.com> wrote:\n> I've reviewed v5 of the patch and did some testing.\n\nThanks!\n\n> I now see what you mean by saying that this patch is a natural and logical\n> extension of the deduplication v13 work.
I agree with this.\n\nI tried the patch out with a long running transaction yesterday. I\nthink that the synergy with the v13 deduplication work really helped.\nIt took a really long time for an old snapshot to lead to pgbench page\nsplits (relative to the master branch, running a benchmark like the\none I talked about recently -- the fiver, tenner, score, etc index\nbenchmark). When the page splits finally started, they seemed much\nmore gradual -- I don't think that I saw the familiar pattern of\ndistinct waves of page splits that are clearly all correlated. I think\nthat the indexes grew at a low steady rate, which looked like the rate\nthat heap relations usually grow at.\n\nWe see a kind of \"tick tock\" pattern with this new mechanism + v13\ndeduplication: even when we don't delete very many TIDs, we still free\na few, and then merge the remaining TIDs to buy more time. Very\npossibly enough time that a long running transaction goes away by the\ntime the question of splitting the page comes up again. Maybe there is\nanother long running transaction by then, but deleting just a few of\nthe TIDs the last time around is enough to not waste time on that\nblock this time around, and therefore to actually succeed despite the\nsecond, newer long running transaction (we can delete older TIDs, just\nnot the now-oldest TIDs that the newer long running xact might still\nneed).\n\nIf this scenario sounds unlikely, bear in mind that \"unnecessary\" page\nsplits (which are all we really care about here) are usually only\nbarely necessary today, if you think about it in a localized/page\nlevel way. What the master branch shows is that most individual\n\"unnecessary\" page splits are in a sense *barely* unnecessary (which\nof course doesn't make the consequences any better). 
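To put a rough number on "barely unnecessary": any single leaf page only sees a trickle of the system-wide churn, so freeing even a handful of dead tuples can defer that page's next split decision for a long time. A back-of-envelope sketch, with all figures invented:

```python
def hours_bought(tuples_freed, new_tuples_per_hour_on_page):
    """Rough time until reclaimed space on one leaf page fills up again."""
    return tuples_freed / new_tuples_per_hour_on_page

# If a given leaf page gains one version-churn tuple every couple of
# hours, then freeing just 5 dead tuples defers the question of
# splitting that page for around 10 hours.
print(hours_bought(5, 0.5))
```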
We could buy many\nhours until the next time the question of splitting a page comes up by\njust freeing a small number of tuples -- even on a very busy database.\n\nI found that the \"fiver\" and \"tenner\" indexes in particular took a\nvery long time to have even one page split with a long running\ntransaction. Another interesting effect was that all page splits\nsuddenly stopped when my one hour 30 minute long transaction/snapshot\nfinally went away -- the indexes stopped growing instantly when I\nkilled the psql session. But on the master branch the cascading\nversion driven page splits took at least several minutes to stop when\nI killed the psql session/snapshot at that same point of the benchmark\n(maybe longer). With the master branch, we can get behind on LP_DEAD\nindex tuple bit setting, and then have no chance of catching up.\nWhereas the patch gives us a second chance for each page.\n\n(I really have only started to think about long running transactions\nthis week, so my understanding is still very incomplete, and based on\nguesses and intuitions.)\n\n> I don't quite like the last sentence. Given that this code is committed,\n> I would rather make it:\n>\n> … cannot be applied. Therefore we opportunistically check for dead tuples\n> and reuse the space, delaying leaf page splits.\n>\n> I understand that \"we\" shouldn't be used here, but I fail to think of a\n> proper way to express this.\n\nMakes sense.\n\n> 2. in _bt_dedup_delete_pass() and heap_index_batch_check() you're using some\n> constants, like:\n> - expected score of 25\n> - nblocksaccessed checks for 1, 2 and 3 blocks\n> - maybe more, but the ones above caught my attention.\n>\n> Perhaps, it'd be better to use #define-s here instead?\n\nYeah. It's still evolving, which is why it's still rough.\n\nIt's not easy to come up with a good interface here. Not because it's\nvery important and subtle. 
It's actually very *unimportant*, in a way.\nnbtree cannot really expect too much from heapam here (though it might\nget much more than expected too, when it happens to be easy for\nheapam). The important thing is always what happens to be possible at\nthe local/page level -- the exact preferences of nbtree are not so\nimportant. Beggars cannot be choosers.\n\nIt only makes sense to have a \"score\" like this because sometimes the\nsituation is so favorable (i.e. there are so many TIDs that can be\nkilled) that we want to avoid vastly exceeding what is likely to be\nuseful to nbtree. Actually, this situation isn't that rare (which\nmaybe means I was wrong to say the score thing was unimportant, but\nhopefully you get the idea).\n\nEasily hitting our target score of 25 on the first heap page probably\nhappens almost all the time when certain kinds of unique indexes use\nthe mechanism, for example. And when that happens it is nice to only\nhave to visit one heap block. We're almost sure that it isn't worth\nvisiting a second, regardless of how many TIDs we're likely to find\nthere.\n\n> 3. Do we really need to touch another heap page, if all conditions are met?\n>\n> + if (uniqueindex && nblocksaccessed == 1 && score == 0)\n> + break;\n> + if (!uniqueindex && nblocksaccessed == 2 && score == 0)\n> + break;\n> + if (nblocksaccessed == 3)\n> + break;\n>\n> I was really wondering why to look into 2 heap pages. By not doing it straight away,\n> we just delay the work for the next occasion that'll work on the same page we're\n> processing. 
I've modified this piece and included it in my tests (see below), I reduced\n> 2nd condition to just 1 block and limited the 3rd case to 2 blocks (just a quick hack).\n\nThe benchmark that you ran involved indexes that are on a column whose\nvalues are already unique, pgbench_accounts.aid (the extra indexes are\nnot actually unique indexes, but they could work as unique indexes).\nIf you actually made them unique indexes then you would have seen the\nsame behavior anyway.\n\nThe 2 heap pages thing is useful with low cardinality indexes. Maybe\nthat could be better targeted - not sure. Things are still moving\nquite fast, and I'm still working on the patch by solving the biggest\nproblem I see on the horizon. So I will probably change this and then\nchange it again in the next week anyway.\n\nI've had further success microoptimizing the sorts in heapam.c in the\npast couple of days. I think that the regression that I reported can\nbe further shrunk. To recap, we saw a ~15% lost of throughput/TPS with\n16 clients, extreme contention (no rate limiting), several low\ncardinality indexes, with everything still fitting in shared_buffers.\nIt now looks like I can get that down to ~7%, which seems acceptable\nto me given the extreme nature of the workload (and given the fact\nthat we still win on efficiency here -- no index growth).\n\n> I've used scale factor 10 000, adjusted indexes (resulting in a 189GB size database)\n> and run the following pgbench:\n>\n> pgbench -f testcase.pgbench -r -c32 -j8 -T 3600 bench\n\n> I can see a clear benefit from this patch *under specified conditions, YMMW*\n> - 32% increase in TPS\n> - 24% drop in average latency\n> - most important — stable index size!\n\nNice. When I did a similar test on October 16th it was on a much\nsmaller database. I think that I saw a bigger improvement because the\ninitial DB size was close to shared_buffers. So not going over\nshared_buffers makes a much bigger difference. 
Whereas here the DB\nsize is several times larger, so there is no question about\nsignificantly exceeding shared_buffers -- it's going to happen for the\nmaster branch as well as the patch. (This is kind of obvious, but\npointing it out just in case.)\n\n> In my opinion, patch provides clear benefits from IO reduction and index size\n> control perspective. I really like the stability of operations on patched\n> version. I would rather stick to the \"restricted\" version of the patch though.\n\nYou're using EBS here, which probably has much higher latency than\nwhat I have here (an NVME SSD). What you have is probably more\nrelevant to the real world, though.\n\n> Hope this helps. I'm open to do more tests if necessary.\n\nIt's great, thanks!\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 29 Oct 2020 16:30:11 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Thu, Oct 29, 2020 at 3:05 PM Victor Yegorov <vyegorov@gmail.com> wrote:\n> And some more comments after another round of reading the patch.\n>\n> 1. Looks like UNIQUE_CHECK_NO_WITH_UNCHANGED is used for HOT updates,\n> should we use UNIQUE_CHECK_NO_HOT here? It is better understood like this.\n\nThis would probably get me arrested by the tableam police, though.\n\nFWIW the way that that works is still kind of a hack. I think that I\nactually need a new boolean flag, rather than overloading the enum\nlike this.\n\n> 2. You're modifying the table_tuple_update() function on 1311 line of include/access/tableam.h,\n> adding modified_attrs_hint. There's a large comment right before it describing parameters,\n> I think there should be a note about modified_attrs_hint parameter in that comment, 'cos\n> it is referenced from other places in tableam.h and also from backend/access/heap/heapam.c\n\nOkay, makes sense.\n\n> 3. 
Can you elaborate on the scoring model you're using?\n> Why do we expect a score of 25, what's the rationale behind this number?\n> And should it be #define-d ?\n\nSee my remarks on this from the earlier e-mail.\n\n> 4. heap_compute_xid_horizon_for_tuples contains duplicate logic. Is it possible to avoid this?\n\nMaybe? I think that duplicating code is sometimes the lesser evil.\nLike in tuplesort.c, for example. I'm not sure if that's true here,\nbut it certainly can be true. This is the kind of thing that I usually\nonly make my mind up about at the last minute. It's a matter of taste.\n\n> 5. In this comment\n>\n> + * heap_index_batch_check() helper function. Sorts deltids array in the\n> + * order needed for useful processing.\n>\n> perhaps it is better to replace \"useful\" with more details? Or point to the place\n> where \"useful processing\" is described.\n\nOkay.\n\n> + else if (_bt_keep_natts_fast(rel, state->base, itup) > nkeyatts &&\n> + _bt_dedup_save_htid(state, itup))\n> + {\n> +\n> + }\n>\n> I would rather add a comment, explaining that the empty body of the clause is actually expected.\n\nOkay. Makes sense.\n\n> 7. In the _bt_dedup_delete_finish_pending() you're setting ispromising to false for both\n> posting and non-posting tuples. This contradicts comments before function.\n\nThe idea is that we can have plain tuples (non-posting list tuples)\nthat are non-promising when they're duplicates. Because why not?\nSomebody might have deleted them (rather than updating them). It is\nfine to have an open mind about this possibility despite the fact that\nit is close to zero (in the general case). Including these TIDs\ndoesn't increase the amount of work we do in heapam. Even when we\ndon't succeed in finding any of the non-dup TIDs as dead (which is\nvery much the common case), telling heapam about their existence could\nhelp indirectly (which is somewhat more common). 
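The way those extra non-promising TIDs can steer the heap pass is easiest to see as a sketch of the block ordering itself. This is only a loose Python analogue of the idea (the real sort lives in heapam.c, and the tuple layout here is invented): heap blocks are ranked by their count of promising tuples, with the total number of TIDs on the block as the tie-breaker.

```python
def order_heap_blocks(blocks):
    """blocks: iterable of (blkno, npromising, ntids) triples."""
    # Most promising tuples first; total TIDs on the block breaks ties;
    # block number last, just to make this sketch deterministic.
    return sorted(blocks, key=lambda b: (-b[1], -b[2], b[0]))

blocks = [(7, 2, 3), (4, 2, 9), (9, 0, 12), (1, 0, 2)]
print(order_heap_blocks(blocks))
```

Block 4 wins the tie against block 7 on total TIDs, and block 9's many TIDs rank it ahead of block 1 even though neither holds a single promising tuple.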
This factor alone\ncould influence which heap pages heapam visits when there is no\nconcentration of promising tuples on heap pages (since the total\nnumber of TIDs on each block is the tie-breaker condition when\ncomparing heap blocks with an equal number of promising tuples during\nthe block group sort in heapam.c). I believe that this approach tends\nto result in heapam going after older TIDs when it wouldn't otherwise,\nat least in some cases.\n\nYou're right, though -- this is still unclear. Actually, I think that\nI should move the handling of promising/duplicate tuples into\n_bt_dedup_delete_finish_pending(), too (move it from\n_bt_dedup_delete_pass()). That would allow me to talk about all of the\nTIDs that get added to the deltids array (promising and non-promising)\nin one central function. I'll do it that way soon.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 29 Oct 2020 16:48:37 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Thu, Oct 29, 2020 at 4:30 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I found that the \"fiver\" and \"tenner\" indexes in particular took a\n> very long time to have even one page split with a long running\n> transaction. Another interesting effect was that all page splits\n> suddenly stopped when my one hour 30 minute long transaction/snapshot\n> finally went away -- the indexes stopped growing instantly when I\n> killed the psql session. But on the master branch the cascading\n> version driven page splits took at least several minutes to stop when\n> I killed the psql session/snapshot at that same point of the benchmark\n> (maybe longer). 
With the master branch, we can get behind on LP_DEAD\n> index tuple bit setting, and then have no chance of catching up.\n> Whereas the patch gives us a second chance for each page.\n\nI forgot to say that this long running xact/snapshot test I ran\nyesterday was standard pgbench (more or less) -- no custom indexes.\nUnlike my other testing, the only possible source of non-HOT updates\nhere was not being able to fit a heap tuple on the same heap page\n(typically because we couldn't truncate HOT chains in time due to a\nlong running xact holding back cleanup).\n\nThe normal state (without a long running xact/snapshot) is no page\nsplits, both with the patch and with master. But when you introduce a\nlong running xact, both master and patch will get page splits. The\ndifference with the patch is that it'll take much longer to start up\ncompared to master, the page splits are more gradual and smoother with\nthe patch, and the patch will stop having page splits just as soon as\nthe xact goes away -- the same second. With the master branch we're\nreliant on LP_DEAD bit setting, and if that gets temporarily held back\nby a long snapshot then we have little chance of catching up after the\nsnapshot goes away but before some pages have unnecessary\nversion-driven page splits.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 29 Oct 2020 17:32:40 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Mon, Oct 26, 2020 at 2:15 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Now for the not-so-good news.\n\n> The latency numbers aren't great for the patch, either. 
Take the 16 client case:\n\nAttached is v6, which more or less totally fixes the problem we saw\nwith this silly \"lots of low cardinality indexes\" benchmark.\n\nI wasn't really expecting this to happen -- the benchmark in question\nis extreme and rather unrealistic -- but I had some new insights that\nmade it possible (more details on the code changes towards the end of\nthis email). Now I don't have to convince anyone that the performance\nhit for extreme cases is worth it in order to realize big benefits for\nother workloads. There pretty much isn't a performance hit to speak of\nnow (or so it would seem). I've managed to take this small loss of\nperformance and turn it into a small gain. And without having to make\nany compromises on the core goal of the patch (\"no unnecessary page\nsplits caused by version churn\").\n\nWith the same indexes (\"score\", \"tenner\", \"fiver\", etc) as before on\npgbench_accounts, and same pgbench-variant queries, we once again see\nlots of index bloat with master, but no index bloat at all with v6 of\nthe patch (no change there). The raw latency numbers are where we see\nnew improvements for v6. 
Summary:\n\n2020-11-02 17:35:29 -0800 - Start of initial data load for run\n\"patch.r1c16\" (DB is also used by later runs)\n2020-11-02 17:40:21 -0800 - End of initial data load for run \"patch.r1c16\"\n2020-11-02 17:40:21 -0800 - Start of pgbench run \"patch.r1c16\"\n2020-11-02 19:40:31 -0800 - End of pgbench run \"patch.r1c16\":\npatch.r1c16.bench.out: \"tps = 9998.129224 (including connections\nestablishing)\" \"latency average = 0.243 ms\" \"latency stddev = 0.088\nms\"\n2020-11-02 19:40:46 -0800 - Start of initial data load for run\n\"master.r1c16\" (DB is also used by later runs)\n2020-11-02 19:45:42 -0800 - End of initial data load for run \"master.r1c16\"\n2020-11-02 19:45:42 -0800 - Start of pgbench run \"master.r1c16\"\n2020-11-02 21:45:52 -0800 - End of pgbench run \"master.r1c16\":\nmaster.r1c16.bench.out: \"tps = 9998.674505 (including connections\nestablishing)\" \"latency average = 0.231 ms\" \"latency stddev = 0.717\nms\"\n2020-11-02 21:46:10 -0800 - Start of pgbench run \"patch.r1c32\"\n2020-11-02 23:46:23 -0800 - End of pgbench run \"patch.r1c32\":\npatch.r1c32.bench.out: \"tps = 9999.968794 (including connections\nestablishing)\" \"latency average = 0.256 ms\" \"latency stddev = 0.104\nms\"\n2020-11-02 23:46:39 -0800 - Start of pgbench run \"master.r1c32\"\n2020-11-03 01:46:54 -0800 - End of pgbench run \"master.r1c32\":\nmaster.r1c32.bench.out: \"tps = 10001.097045 (including connections\nestablishing)\" \"latency average = 0.250 ms\" \"latency stddev = 1.858\nms\"\n2020-11-03 01:47:32 -0800 - Start of pgbench run \"patch.r2c16\"\n2020-11-03 03:47:45 -0800 - End of pgbench run \"patch.r2c16\":\npatch.r2c16.bench.out: \"tps = 9999.290688 (including connections\nestablishing)\" \"latency average = 0.247 ms\" \"latency stddev = 0.103\nms\"\n2020-11-03 03:48:04 -0800 - Start of pgbench run \"master.r2c16\"\n2020-11-03 05:48:18 -0800 - End of pgbench run \"master.r2c16\":\nmaster.r2c16.bench.out: \"tps = 10000.424117 (including 
connections\nestablishing)\" \"latency average = 0.241 ms\" \"latency stddev = 1.587\nms\"\n2020-11-03 05:48:39 -0800 - Start of pgbench run \"patch.r2c32\"\n2020-11-03 07:48:52 -0800 - End of pgbench run \"patch.r2c32\":\npatch.r2c32.bench.out: \"tps = 9999.539730 (including connections\nestablishing)\" \"latency average = 0.258 ms\" \"latency stddev = 0.125\nms\"\n2020-11-03 07:49:11 -0800 - Start of pgbench run \"master.r2c32\"\n2020-11-03 09:49:26 -0800 - End of pgbench run \"master.r2c32\":\nmaster.r2c32.bench.out: \"tps = 10000.833754 (including connections\nestablishing)\" \"latency average = 0.250 ms\" \"latency stddev = 0.997\nms\"\n\nThese are 2 hour runs, 16 and 32 clients -- same as last time (though\nnote the 10k TPS limit). So 4 pairs of runs (each pair of runs is a\npair of patch/master runs) making 8 runs total, lasting 16 hours total\n(not including initial data loading).\n\nNotice that the final pair of runs shows that the master branch\neventually gets to the point of having far higher latency stddev. The\nstddev starts high and gets higher as time goes on. 
Here is the\nlatency breakdown for the final pair of runs:\n\npatch.r2c32.bench.out:\n\nscaling factor: 1000\nquery mode: prepared\nnumber of clients: 32\nnumber of threads: 8\nduration: 7200 s\nnumber of transactions actually processed: 71997119\nlatency average = 0.258 ms\nlatency stddev = 0.125 ms\nrate limit schedule lag: avg 0.046 (max 39.151) ms\ntps = 9999.539730 (including connections establishing)\ntps = 9999.544743 (excluding connections establishing)\nstatement latencies in milliseconds:\n 0.001 \\set aid1 random_gaussian(1, 100000 * :scale, 4.0)\n 0.000 \\set aid2 random_gaussian(1, 100000 * :scale, 4.5)\n 0.000 \\set aid3 random_gaussian(1, 100000 * :scale, 4.2)\n 0.000 \\set bid random(1, 1 * :scale)\n 0.000 \\set tid random(1, 10 * :scale)\n 0.000 \\set delta random(-5000, 5000)\n 0.022 BEGIN;\n 0.091 UPDATE pgbench_accounts SET abalance = abalance +\n:delta WHERE aid = :aid1;\n 0.036 SELECT abalance FROM pgbench_accounts WHERE aid = :aid2;\n 0.034 SELECT abalance FROM pgbench_accounts WHERE aid = :aid3;\n 0.025 END;\n\nmaster.r2c32.bench.out:\n\nquery mode: prepared\nnumber of clients: 32\nnumber of threads: 8\nduration: 7200 s\nnumber of transactions actually processed: 72006667\nlatency average = 0.250 ms\nlatency stddev = 0.997 ms\nrate limit schedule lag: avg 0.053 (max 233.045) ms\ntps = 10000.833754 (including connections establishing)\ntps = 10000.839935 (excluding connections establishing)\nstatement latencies in milliseconds:\n 0.001 \\set aid1 random_gaussian(1, 100000 * :scale, 4.0)\n 0.000 \\set aid2 random_gaussian(1, 100000 * :scale, 4.5)\n 0.000 \\set aid3 random_gaussian(1, 100000 * :scale, 4.2)\n 0.000 \\set bid random(1, 1 * :scale)\n 0.000 \\set tid random(1, 10 * :scale)\n 0.000 \\set delta random(-5000, 5000)\n 0.023 BEGIN;\n 0.075 UPDATE pgbench_accounts SET abalance = abalance +\n:delta WHERE aid = :aid1;\n 0.037 SELECT abalance FROM pgbench_accounts WHERE aid = :aid2;\n 0.035 SELECT abalance FROM pgbench_accounts WHERE aid = 
:aid3;\n 0.026 END;\n\nThere is only one aspect of this that the master branch still wins on\nby the end -- the latency on the UPDATE is still a little lower on\nmaster. This happens for the obvious reason: the UPDATE doesn't clean\nup after itself on the master branch (to the extent that the average\nxact latency is still a tiny bit higher with the patch). But the\nSELECT queries are never actually slower with the patch, even during\nearlier runs -- they're just as fast as the master branch (and faster\nthan the master branch by the end). Only the UPDATEs can ever be made\nslower, so AFAICT any new cost is only experienced by the queries that\ncreate the problem for the system as a whole. We're imposing new costs\nfairly.\n\nPerformance with the patch is consistently much more stable. We don't\nget overwhelmed by FPIs in the way we see with the master branch,\nwhich causes sudden gluts that are occasionally very severe (notice\nthat master.r2c16.bench.out was a big stddev outlier). Maybe the most\nnotable thing is the max rate limit schedule lag. In the last run we\nsee max 39.151 ms for the patch vs max 233.045 ms for master.\n\nNow for a breakdown of the code enhancements behind the improved\nbenchmark numbers. They are:\n\n* The most notable improvement is to the sort order of the heap block\ngroups within heapam.c. We now round up to the next power of two when\nsorting candidate heap blocks (this is the sort used to decide which\nheap blocks to visit, and in what order). The \"number of promising\nTIDs\" field (as well as the \"number of TIDs total\" tiebreaker) is\nrounded up so that we ignore relatively small differences. We now tend\nto process related groups of contiguous pages in relatively big\nbatches -- though only where appropriate.\n\n* A closely related optimization was also added to heapam.c:\n\"favorable blocks\". That is, we recognize groups of related heap\nblocks that are contiguous. 
When we encounter these blocks we make\nheapam.c effectively increase its per-call batch size, so that it\nprocesses more blocks in the short term but does less absolute work in\nthe long run.\n\nThe key insight behind both of these enhancements is that physically\nclose heap blocks are generally also close together in time, and\ngenerally share similar characteristics (mostly LP_DEAD items vs\nmostly live items, already in shared_buffers, etc). So we're focussing\nmore on heap locality when the hint we get from nbtree isn't giving us\na very strong signal about what to do (we need to be very judicious\nbecause we are still only willing to access a small number of heap\nblocks at a time). While in general the best information heapam has to\ngo on comes from nbtree, heapam should not care about noise-level\nvariations in that information -- better to focus on heap locality\n(IOW heapam.c needs to have a sophisticated understanding of the\nlimitations of the hints it receives from nbtree). As a result of\nthese two changes, heapam.c tends to process earlier blocks/records\nfirst, in order, in a way that is correlated across time and across\nindexes -- with more sequential I/O and more confidence in a\nsuccessful outcome when we undertake work. (Victor should note that\nheapam.c no longer has any patience when it encounters even a single\nheap block with no dead TIDs -- he was right about that. The new\nheapam.c stuff works well enough that there is no possible upside to\n\"being patient\", even with indexes on low cardinality data that\nexperience lots of version churn, a workload that my recent\nbenchmarking exemplifies.)\n\n* nbtdedup.c will now give heapam.c an accurate idea about its\nrequirements -- it now expresses those in terms of space freed, which\nheapam.c now cares about directly. 
This matters a lot with low\ncardinality data, where freeing an entire index tuple is a lot more\nvaluable to nbtree than freeing just one TID in a posting list\n(because the former case frees up 20 bytes at a minimum, while the\nlatter case usually only frees up 6 bytes).\n\nI said something to Victor about nbtree's wishes not being important\nhere -- heapam.c is in charge. But that now seems like the wrong\nmental model. After all, how can nbtree not be important if it is\nentitled to call heapam.c as many times as it feels like, without\ncaring about the cost of thrashing (as we saw with low cardinality\ndata prior to v6)? With v6 of the patch I took my own advice about not\nthinking of each operation as an isolated event. So now heapam.c has a\nmore nuanced understanding of the requirements of nbtree, and can be\neither more eager or more lazy according to 1) nbtree requirements,\nand 2.) conditions local to heapam.c.\n\n* Improved handling of low cardinality data in nbtdedup.c -- we now\nalways merge together items with low cardinality data, regardless of\nhow well we do with deleting TIDs. This buys more time before the next\ndelete attempt for the same page.\n\n* Specialized shellsort implementations for heapam.c.\n\nShellsort is sometimes used as a lightweight system qsort in embedded\nsystems. It has many of the same useful properties as a well optimized\nquicksort for smaller datasets, and also has the merit of compiling to\nfar fewer instructions when specialized like this. Specializing on\nqsort (as in v5) results in machine code that seems rather heavyweight\ngiven the heapam.c requirements. Instruction cache matters here\n(although that's easy to miss when profiling).\n\nv6 still needs more polishing -- my focus has still been on the\nalgorithm itself. But I think I'm almost done with that part -- it\nseems unlikely that I'll be able to make any additional significant\nimprovements in that area after v6. 
The new bucketized heap block\nsorting behavior seems to be really effective, especially with low\ncardinality data, and especially over time, as the heap naturally\nbecomes more fragmented. We're now blending together locality from\nnbtree and heapam in an adaptive way.\n\nI'm pretty sure that the performance on sympathetic cases (such as the\ncase Victor tested) will also be a lot better, though I didn't look\ninto that on v6. If Victor is in a position to run further benchmarks\non v6, that would certainly be welcome (independent validation always\nhelps).\n\nI'm not aware of any remaining cases that it would be fair to describe\nas being regressed by the patch -- can anybody else think of any\npossible candidates?\n\nBTW, I will probably rename the mechanism added by the patch to\n\"bottom-up index vacuuming\", or perhaps \"bottom-up index deletion\" --\nthat seems to capture the general nature of what the patch does quite\nwell. Now regular index vacuuming can be thought of as a top-down\ncomplementary mechanism that takes care of remaining diffuse spots of\ngarbage tuples that queries happen to miss (more or less). Also, while\nit is true that there are some ways in which the patch is related to\ndeduplication, it doesn't seem useful to emphasize that part anymore.\nPlus clarifying which kind of deduplication I'm talking about in code\ncomments is starting to become really annoying.\n\n-- \nPeter Geoghegan", "msg_date": "Tue, 3 Nov 2020 12:44:10 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Tue, Nov 3, 2020 at 12:44 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> v6 still needs more polishing -- my focus has still been on the\n> algorithm itself. 
But I think I'm almost done with that part -- it\n> seems unlikely that I'll be able to make any additional significant\n> improvements in that area after v6.\n\nAttached is v7, which tidies everything up. The project is now broken\nup into multiple patches, which can be committed separately. Every\npatch has a descriptive commit message. This should make it a lot\neasier to review.\n\nI've renamed the feature to \"bottom-up index deletion\" in this latest\nrevision. This seems like a better name than \"dedup deletion\". This\nname suggests that the feature complements \"top-down index deletion\"\nby VACUUM. This name is descriptive of what the new mechanism is\nsupposed to do at a high level.\n\nOther changes in v7 include:\n\n* We now fully use the tableam interface -- see the first patch.\n\nThe bottom-up index deletion API has been fully worked out. There is\nnow an optional callback/shim function. The bottom-up index deletion\ncaller (nbtree) is decoupled from the callee (heapam) by the tableam\nshim. This was what allowed me to break the patch into multiple\npieces/patches.\n\n* The executor no longer uses a IndexUniqueCheck-enum-constant as a\nhint to nbtree. Rather, we have a new aminsert() bool argument/flag\nthat hints to the index AM -- see the second patch.\n\nTo recap, the hint tells nbtree that the incoming tuple is a duplicate\nof an existing tuple caused by an UPDATE, without any logical changes\nfor the indexed columns. Bottom-up deletion is effective when there is\na local concentration of these index tuples that become garbage\nquickly.\n\nA dedicated aminsert() argument seems a lot cleaner. Though I wonder\nif this approach can be generalized a bit further, so that we can\nsupport other similar aminsert() hints in the future without adding\neven more arguments. Maybe some new enum instead of a boolean?\n\n* Code cleanup for the nbtdedup.c changes. 
Better explanation of when\nand how posting list TIDs are marked promising, and why.\n\n* Streamlined handling of the various strategies that nbtinsert.c uses\nto avoid a page split (e.g. traditional LP_DEAD deletion,\ndeduplication).\n\nA new unified function in nbtinsert.c was added. This organization is\na lot cleaner -- it greatly simplifies _bt_findinsertloc(), which\nbecame more complicated than it really needed to be due to the changes\nneeded for deduplication in PostgreSQL 13. This change almost seems\nlike an independently useful piece of work.\n\n--\nPeter Geoghegan", "msg_date": "Mon, 9 Nov 2020 09:20:50 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "Mon, 9 Nov 2020 at 18:21, Peter Geoghegan <pg@bowt.ie>:\n\n> On Tue, Nov 3, 2020 at 12:44 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > v6 still needs more polishing -- my focus has still been on the\n> > algorithm itself. But I think I'm almost done with that part -- it\n> > seems unlikely that I'll be able to make any additional significant\n> > improvements in that area after v6.\n>\n> Attached is v7, which tidies everything up. The project is now broken\n> up into multiple patches, which can be committed separately. Every\n> patch has a descriptive commit message. 
This should make it a lot\n> easier to review.\n>\n\nI've looked at the latest (v7) patchset.\nI've decided to use a quite common (in my practice) setup with an indexed\nmtime column over scale 1000 set:\n\nalter table pgbench_accounts add mtime timestamp default now();\ncreate or replace function fill_mtime() returns trigger as $$begin\nNEW.mtime=now(); return NEW; END;$$ language plpgsql;\ncreate trigger t_accounts_mtime before update on pgbench_accounts for each\nrow execute function fill_mtime();\ncreate index accounts_mtime on pgbench_accounts (mtime, aid);\ncreate index tenner on pgbench_accounts ((aid - (aid%10)));\nANALYZE pgbench_accounts;\n\nFor the test, I've used 3 pgbench scripts (started in parallel sessions):\n1. UPDATE + single PK SELECT in a transaction\n2. three PK SELECTs in a transaction\n3. SELECT of all modifications for the last 15 minutes\n\nGiven the size of the set, all data was cached and UPDATEs were fast enough\nto make 3rd query sit on disk-based sorting.\nSome figures follow.\n\nMaster sizes\n------------\n relkind | relname | nrows | blk_before | mb_before |\nblk_after | mb_after\n---------+-----------------------+-----------+------------+-----------+-----------+----------\n r | pgbench_accounts | 100000000 | 1639345 | 12807.4 |\n1677861 | 13182.8\n i | accounts_mtime | 100000000 | 385042 | 3008.1 |\n 424413 | 3565.6\n i | pgbench_accounts_pkey | 100000000 | 274194 | 2142.1 |\n 274194 | 2142.3\n i | tenner | 100000000 | 115352 | 901.2 |\n 128513 | 1402.9\n(4 rows)\n\nPatchset v7 sizes\n-----------------\n relkind | relname | nrows | blk_before | mb_before |\nblk_after | mb_after\n---------+-----------------------+-----------+------------+-----------+-----------+----------\n r | pgbench_accounts | 100000000 | 1639345 | 12807.4 |\n1676887 | 13170.2\n i | accounts_mtime | 100000000 | 385042 | 3008.1 |\n 424521 | 3536.4\n i | pgbench_accounts_pkey | 100000000 | 274194 | 2142.1 |\n 274194 | 2142.1\n i | tenner | 100000000 | 115352 | 901.2 
|\n 115352 | 901.2\n(4 rows)\n\nTPS\n---\n query | Master TPS | Patched TPS\n----------------+------------+-------------\nUPDATE + SELECT | 5150 | 4884\n3 SELECT in txn | 23133 | 23193\n15min SELECT | 0.75 | 0.78\n\n\nWe can see that:\n- unused index is not suffering from not-HOT updates at all, which is the\npoint of the patch\n- we have ordinary queries performing on the same level as on master\n- we have 5,2% slowdown in UPDATE speed\n\nLooking at graphs (attached), I can see that on the patched version we're\ndoing some IO (which is expected) during UPDATEs.\nWe're also reading quite a lot from disks for simple SELECTs, compared to\nthe master version.\n\nI'm not sure if this should be counted as a regression, though, as the graphs\nstay on par pretty much.\nStill, I would like to understand why this combination of indexes and\nqueries slows down UPDATEs.\n\n\nDuring compilation I got one warning for make -C contrib:\n\nblutils.c: In function ‘blhandler’:\nblutils.c:133:22: warning: assignment from incompatible pointer type\n[-Wincompatible-pointer-types]\n amroutine->aminsert = blinsert;\n\nI agree with the rename to \"bottom-up index deletion\", using \"vacuuming\"\ngenerally makes users think\nthat functionality is used only during VACUUM (misleading).\nI haven't looked at the code yet.\n\n\n-- \nVictor Yegorov", "msg_date": "Wed, 11 Nov 2020 15:17:32 +0100", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "Mon, 9 Nov 2020 at 18:21, Peter Geoghegan <pg@bowt.ie>:\n\n> On Tue, Nov 3, 2020 at 12:44 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > v6 still needs more polishing -- my focus has still been on the\n> > algorithm itself. But I think I'm almost done with that part -- it\n> > seems unlikely that I'll be able to make any additional significant\n> > improvements in that area after v6.\n>\n> Attached is v7, which tidies everything up. 
The project is now broken\n> up into multiple patches, which can be committed separately. Every\n> patch has a descriptive commit message. This should make it a lot\n> easier to review.\n>\n\nAnd another test session, this time with scale=2000 and shared_buffers=512MB\n(vs scale=1000 and shared_buffers=16GB previously). The rest of the setup\nis the same:\n- mtime column that tracks update time\n- index on (mtime, aid)\n- tenner low cardinality index from Peter's earlier e-mail\n- 3 pgbench scripts run in parallel on master and on v7 patchset (scripts\nfrom the previous e-mail used here).\n\n(I just realized that the size-after figures in my previous e-mail are off,\n'cos I failed\nto ANALYZE the table after the tests.)\n\nMaster\n------\n relkind | relname | nrows | blk_before | mb_before |\nblk_after | mb_after | Diff\n---------+-----------------------+-----------+------------+-----------+-----------+----------+-------\n r | pgbench_accounts | 200000000 | 3278689 | 25614.8 |\n3314951 | 25898.1 | +1.1%\n i | accounts_mtime | 200000000 | 770080 | 6016.3 |\n 811946 | 6343.3 | +5.4%\n i | pgbench_accounts_pkey | 200000000 | 548383 | 4284.2 |\n 548383 | 4284.2 | 0\n i | tenner | 200000000 | 230701 | 1802.4 |\n 252346 | 1971.5 | +9.4%\n(4 rows)\n\nPatched\n-------\n relkind | relname | nrows | blk_before | mb_before |\nblk_after | mb_after | Diff\n---------+-----------------------+-----------+------------+-----------+-----------+----------+-------\n r | pgbench_accounts | 200000000 | 3278689 | 25614.8 |\n3330788 | 26021.8 | +1.6%\n i | accounts_mtime | 200000000 | 770080 | 6016.3 |\n 806920 | 6304.1 | +4.8%\n i | pgbench_accounts_pkey | 200000000 | 548383 | 4284.2 |\n 548383 | 4284.2 | 0\n i | tenner | 200000000 | 230701 | 1802.4 |\n 230701 | 1802.4 | 0\n(4 rows)\n\nTPS\n---\n query | Master TPS | Patched TPS | Diff\n----------------+------------+-------------+------\nUPDATE + SELECT | 3024 | 2661 | -12%\n3 SELECT in txn | 19073 | 19852 | +4%\n15min SELECT | 2.4 | 3.9 | 
+60%\n\nWe can see that the patched version does far fewer disk writes during\nUPDATEs and simple SELECTs and\neliminates write amplification for the indexes not involved. (I'm really\nexcited to see these figures.)\n\nOn the other hand, there's quite a big drop on the UPDATEs throughput. For\nsure, undersized shared_buffers\ncontribute to this drop. Still, my experience tells me that under\nconditions at hand (disabled HOT due to index\nover update time column) tables will tend to accumulate bloat and produce\nunnecessary IO also from WAL.\n\nPerhaps I need to conduct a longer test session, say 8+ hours to make\nobstacles appear more like in real life.\n\n\n-- \nVictor Yegorov", "msg_date": "Wed, 11 Nov 2020 21:58:06 +0100", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Wed, Nov 11, 2020 at 6:17 AM Victor Yegorov <vyegorov@gmail.com> wrote:\n> I've looked at the latest (v7) patchset.\n> I've decided to use a quite common (in my practice) setup with an indexed mtime column over scale 1000 set:\n\nThanks for testing!\n\n> We can see that:\n> - unused index is not suffering from not-HOT updates at all, which is the point of the patch\n> - we have ordinary queries performing on the same level as on master\n> - we have 5,2% slowdown in UPDATE speed\n\nI think that I made a mistake with v7: I changed the way that we\ndetect low cardinality data during bottom-up deletion, which made us\ndo extra/early deduplication in more cases than we really should. I\nsuspect that this partially explains the slowdown in UPDATE latency\nthat you reported. I will fix this in v8.\n\nI don't think that the solution is to go back to the v6 behavior in\nthis area, though. 
I now believe that this whole \"proactive\ndeduplication for low cardinality data\" thing only made sense as a way\nof compensating for deficiencies in earlier versions of the patch.\nDeficiencies that I've since fixed anyway. The best solution now is to\nsimplify. We can have generic criteria for \"should we dedup the page\nearly after bottom-up deletion finishes without freeing up very much\nspace?\". This seemed to work well during my latest testing. Probably\nbecause heapam.c is now smart about the requirements from nbtree, as\nwell as the cost of accessing heap pages.\n\n> I'm not sure if this should be counted as regression, though, as graphs go on par pretty much.\n> Still, I would like to understand why this combination of indexes and queries slows down UPDATEs.\n\nAnother thing that I'll probably add to v8: Prefetching. This is\nprobably necessary just so I can have parity with the existing\nheapam.c function that the new code is based on,\nheap_compute_xid_horizon_for_tuples(). That will probably help here,\ntoo.\n\n> During compilation I got one warning for make -C contrib:\n\nOops.\n\n> I agree with the rename to \"bottom-up index deletion\", using \"vacuuming\" generally makes users think\n> that functionality is used only during VACUUM (misleading).\n\nYeah. That's kind of a problem already, because sometimes we use the\nword VACUUM when talking about the long established LP_DEAD deletion\nstuff. But I see that as a problem to be fixed. Actually, I would like\nto fix it very soon.\n\n> I haven't looked at the code yet.\n\nIt would be helpful if you could take a look at the nbtree patch --\nparticularly the changes related to deprecating the page-level\nBTP_HAS_GARBAGE flag. I would like to break those parts out into a\nseparate patch, and commit it in the next week or two. It's just\nrefactoring, really. (This commit can also make nbtree only use the\nword VACUUM for things that strictly involve VACUUM. 
For example,\nit'll rename _bt_vacuum_one_page() to _bt_delete_or_dedup_one_page().)\n\nWe almost don't care about the flag already, so there is almost no\nbehavioral change from deprecating BTP_HAS_GARBAGE in this way.\n\nIndexes that use deduplication already don't rely on BTP_HAS_GARBAGE\nbeing set ever since deduplication was added to Postgres 13 (the\ndeduplication code doesn't want to deal with LP_DEAD bits, and cannot\ntrust that no LP_DEAD bits can be set just because BTP_HAS_GARBAGE\nisn't set in the special area). Trusting the BTP_HAS_GARBAGE flag can\ncause us to miss out on deleting items with their LP_DEAD bits set --\nwe're better off \"assuming that BTP_HAS_GARBAGE is always set\", and\nfinding out if there really are LP_DEAD bits set for ourselves each\ntime.\n\nMissing LP_DEAD bits like this can happen when VACUUM unsets the\npage-level flag without actually deleting the items at the same time,\nwhich is expected when the items became garbage (and actually had\ntheir LP_DEAD bits set) after VACUUM began, but before VACUUM reached\nthe leaf pages. That's really wasteful, and doesn't actually have any\nupside -- we're scanning all of the line pointers only when we're\nabout to split (or dedup) the same page anyway, so the extra work is\npractically free.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 12 Nov 2020 14:00:07 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Wed, Nov 11, 2020 at 12:58 PM Victor Yegorov <vyegorov@gmail.com> wrote:\n> On the other hand, there's quite a big drop on the UPDATEs throughput. For sure, undersized shared_buffers\n> contribute to this drop. 
Still, my experience tells me that under conditions at hand (disabled HOT due to index\n> over update time column) tables will tend to accumulate bloat and produce unnecessary IO also from WAL.\n\nI think that the big SELECT statement with an \"ORDER BY mtime ... \"\nwas a good way of demonstrating the advantages of the patch.\n\nAttached is v8, which has the enhancements for low cardinality data\nthat I mentioned earlier today. It also simplifies the logic for\ndealing with posting lists that we need to delete some TIDs from.\nThese posting list simplifications also make the code a bit more\nefficient, which might be noticeable during benchmarking.\n\nPerhaps your \"we have 5,2% slowdown in UPDATE speed\" issue will be at\nleast somewhat fixed by the enhancements to v8?\n\nAnother consideration when testing the patch is the behavioral\ndifferences we see between cases where system throughput is as high as\npossible, versus similar cases where we have a limit in place (i.e.\npgbench --rate=? was used). These two cases are quite different with\nthe patch because the patch can no longer be lazy without a limit --\nwe tend to see noticeably more CPU cycles spent doing bottom-up\ndeletion, though everything else about the profile looks similar (I\ngenerally use \"perf top\" to keep an eye on these things).\n\nIt's possible to sometimes see increases in latency (regressions) when\nrunning without a limit, at least in the short term. These increases\ncan go away when a rate limit is imposed that is perhaps as high as\n50% of the max TPS. In general, I think that it makes sense to focus\non latency when we have some kind of limit in place. A\nnon-rate-limited case is less realistic.\n\n> Perhaps I need to conduct a longer test session, say 8+ hours to make obstacles appear more like in real life.\n\nThat would be ideal. 
It is inconvenient to run longer benchmarks, but\nit's an important part of performance validation.\n\nBTW, another related factor is that the patch takes longer to \"warm\nup\". I notice this \"warm-up\" effect on the second or subsequent runs,\nwhere we have lots of garbage in indexes even with the patch, and even\nin the first 5 seconds of execution. The extra I/Os for heap page\naccesses end up being buffer misses instead of buffer hits, until the\ncache warms. This is not really a problem with fewer longer runs,\nbecause there is relatively little \"special warm-up time\". (We rarely\nexperience heap page misses during ordinary execution because the\nheapam.c code is smart about locality of access.)\n\nI noticed that the pgbench_accounts_pkey index doesn't grow at all on\nthe master branch in 20201111-results-master.txt. But it's always just\na matter of time until that happens without the patch. The PK/unique\nindex takes longer to bloat because it alone benefits from LP_DEAD\nsetting, especially within _bt_check_unique(). But this advantage will\nnaturally erode over time. It'll probably take a couple of hours or\nmore with larger scale factors -- I'm thinking of pgbench scale\nfactors over 2000.\n\nWhen the LP_DEAD bit setting isn't very effective (say it's 50%\neffective), it's only a matter of time until every original page\nsplits. But that's also true when LP_DEAD setting is 99% effective.\nWhile it is much less likely that any individual page will split when\nLP_DEAD bits are almost always set, the fundamental problem remains,\neven with 99% effectiveness. That problem is: each page only has to be\nunlucky once. On a long enough timeline, the difference between 50%\neffective and 99% effective may be very small. And \"long enough\ntimeline\" may not actually be very long, at least to a human.\n\nOf course, the patch still benefits from LP_DEAD bits getting set by\nqueries -- no change there. 
It just doesn't *rely* on LP_DEAD bits\nkeeping up with transactions that create bloat on every leaf page.\nMaybe the patch will behave exactly the same way as the master branch\n-- it's workload dependent. Actually, it behaves in exactly the same\nway for about the first 5 - 15 minutes following pgbench\ninitialization. This is roughly how long it takes before the master\nbranch has even one page split. You could say that the patch \"makes\nthe first 5 minutes last forever\".\n\n(Not sure if any of this is obvious to you by now, just saying.)\n\nThanks!\n--\nPeter Geoghegan", "msg_date": "Thu, 12 Nov 2020 15:00:49 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Thu, Nov 12, 2020 at 3:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is v8, which has the enhancements for low cardinality data\n> that I mentioned earlier today. It also simplifies the logic for\n> dealing with posting lists that we need to delete some TIDs from.\n> These posting list simplifications also make the code a bit more\n> efficient, which might be noticeable during benchmarking.\n\nOne more thing: I repeated a pgbench test that was similar to my\nearlier low cardinality tests -- same indexes (fiver, tenner, score,\naid_pkey_include_abalance). And same queries. But longer runs: 4 hours\neach. Plus a larger DB: scale 2,500. 
Plus a rate-limit of 5000 TPS.\n\nHere is the high level report, with 4 runs -- one pair with 16\nclients, another pair with 32 clients:\n\n2020-11-11 19:03:26 -0800 - Start of initial data load for run\n\"patch.r1c16\" (DB is also used by later runs)\n2020-11-11 19:18:16 -0800 - End of initial data load for run \"patch.r1c16\"\n2020-11-11 19:18:16 -0800 - Start of pgbench run \"patch.r1c16\"\n2020-11-11 23:18:43 -0800 - End of pgbench run \"patch.r1c16\":\npatch.r1c16.bench.out: \"tps = 4999.100006 (including connections\nestablishing)\" \"latency average = 3.355 ms\" \"latency stddev = 58.455\nms\"\n2020-11-11 23:19:12 -0800 - Start of initial data load for run\n\"master.r1c16\" (DB is also used by later runs)\n2020-11-11 23:34:33 -0800 - End of initial data load for run \"master.r1c16\"\n2020-11-11 23:34:33 -0800 - Start of pgbench run \"master.r1c16\"\n2020-11-12 03:35:01 -0800 - End of pgbench run \"master.r1c16\":\nmaster.r1c16.bench.out: \"tps = 5000.061623 (including connections\nestablishing)\" \"latency average = 8.591 ms\" \"latency stddev = 64.851\nms\"\n2020-11-12 03:35:41 -0800 - Start of pgbench run \"patch.r1c32\"\n2020-11-12 07:36:10 -0800 - End of pgbench run \"patch.r1c32\":\npatch.r1c32.bench.out: \"tps = 5000.141420 (including connections\nestablishing)\" \"latency average = 1.253 ms\" \"latency stddev = 9.935\nms\"\n2020-11-12 07:36:40 -0800 - Start of pgbench run \"master.r1c32\"\n2020-11-12 11:37:19 -0800 - End of pgbench run \"master.r1c32\":\nmaster.r1c32.bench.out: \"tps = 5000.542942 (including connections\nestablishing)\" \"latency average = 3.069 ms\" \"latency stddev = 24.640\nms\"\n2020-11-12 11:38:18 -0800 - Start of pgbench run \"patch.r2c16\"\n\nWe see a very significant latency advantage for the patch here. 
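Spelling that out as simple arithmetic over the latency averages quoted above (plain Python, nothing Postgres-specific -- the numbers are copied straight from the run log):

```python
# Average transaction latencies (ms), copied from the pgbench run log above.
runs = {
    "r1c16": {"master": 8.591, "patch": 3.355},
    "r1c32": {"master": 3.069, "patch": 1.253},
}

for name, r in runs.items():
    ratio = r["master"] / r["patch"]
    print(f"{name}: master latency is {ratio:.2f}x the patch's")
# r1c16: master latency is 2.56x the patch's
# r1c32: master latency is 2.45x the patch's
```

So the patch cuts average latency by roughly 2.4x - 2.6x at both client counts, at the same (rate-limited) throughput.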
Here\nis the breakdown on query latency from the final patch run,\npatch.r1c32:\n\nscaling factor: 2500\nquery mode: prepared\nnumber of clients: 32\nnumber of threads: 8\nduration: 14400 s\nnumber of transactions actually processed: 72002280\nlatency average = 1.253 ms\nlatency stddev = 9.935 ms\nrate limit schedule lag: avg 0.406 (max 694.645) ms\ntps = 5000.141420 (including connections establishing)\ntps = 5000.142503 (excluding connections establishing)\nstatement latencies in milliseconds:\n 0.002 \\set aid1 random_gaussian(1, 100000 * :scale, 4.0)\n 0.001 \\set aid2 random_gaussian(1, 100000 * :scale, 4.5)\n 0.001 \\set aid3 random_gaussian(1, 100000 * :scale, 4.2)\n 0.001 \\set bid random(1, 1 * :scale)\n 0.001 \\set tid random(1, 10 * :scale)\n 0.001 \\set delta random(-5000, 5000)\n 0.063 BEGIN;\n 0.361 UPDATE pgbench_accounts SET abalance = abalance +\n:delta WHERE aid = :aid1;\n 0.171 SELECT abalance FROM pgbench_accounts WHERE aid = :aid2;\n 0.172 SELECT abalance FROM pgbench_accounts WHERE aid = :aid3;\n 0.074 END;\n\nHere is the equivalent for master:\n\nscaling factor: 2500\nquery mode: prepared\nnumber of clients: 32\nnumber of threads: 8\nduration: 14400 s\nnumber of transactions actually processed: 72008125\nlatency average = 3.069 ms\nlatency stddev = 24.640 ms\nrate limit schedule lag: avg 1.695 (max 1097.628) ms\ntps = 5000.542942 (including connections establishing)\ntps = 5000.544213 (excluding connections establishing)\nstatement latencies in milliseconds:\n 0.002 \\set aid1 random_gaussian(1, 100000 * :scale, 4.0)\n 0.001 \\set aid2 random_gaussian(1, 100000 * :scale, 4.5)\n 0.001 \\set aid3 random_gaussian(1, 100000 * :scale, 4.2)\n 0.001 \\set bid random(1, 1 * :scale)\n 0.001 \\set tid random(1, 10 * :scale)\n 0.001 \\set delta random(-5000, 5000)\n 0.078 BEGIN;\n 0.560 UPDATE pgbench_accounts SET abalance = abalance +\n:delta WHERE aid = :aid1;\n 0.320 SELECT abalance FROM pgbench_accounts WHERE aid = :aid2;\n 0.308 SELECT abalance 
FROM pgbench_accounts WHERE aid = :aid3;\n 0.102 END;\n\nSo even the UPDATE is much faster here.\n\nThis is also something we see with pg_statio_user_tables, which looked\nlike this by the end for the patch:\n\n-[ RECORD 1 ]---+-----------------\nschemaname | public\nrelname | pgbench_accounts\nheap_blks_read | 117,384,599\nheap_blks_hit | 1,051,175,835\nidx_blks_read | 24,761,513\nidx_blks_hit | 4,024,776,723\n\nFor master:\n\n-[ RECORD 1 ]---+-----------------\nschemaname | public\nrelname | pgbench_accounts\nheap_blks_read | 191,947,522\nheap_blks_hit | 904,536,584\nidx_blks_read | 65,653,885\nidx_blks_hit | 4,002,061,803\n\nNotice that heap_blks_read is down from 191,947,522 on master, to\n117,384,599 with the patch -- so it's ~0.611x with the patch. A huge\nreduction like this is possible with the patch because it effectively\namortizes the cost of accessing heap blocks to find garbage to clean\nup (\"nipping the [index bloat] problem in the bud\" is much cheaper\nthan letting it get out of hand for many reasons; locality in\nshared_buffers is one more reason). The patch accesses garbage tuples\nin heap blocks close together in time for all indexes, at a point in\ntime when the blocks are still likely to be found in shared_buffers.\n\nAlso notice that idx_blks_read is ~0.38x with the patch. That's less\nimportant, but still significant.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 12 Nov 2020 15:18:49 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "пт, 13 нояб. 2020 г. в 00:01, Peter Geoghegan <pg@bowt.ie>:\n\n> On Wed, Nov 11, 2020 at 12:58 PM Victor Yegorov <vyegorov@gmail.com>\n> wrote:\n> > On the other hand, there's quite a big drop on the UPDATEs throughput.\n> For sure, undersized shared_buffers\n> > contribute to this drop. 
Still, my experience tells me that under\n> conditions at hand (disabled HOT due to index\n> > over update time column) tables will tend to accumulate bloat and\n> produce unnecessary IO also from WAL.\n>\n> I think that the big SELECT statement with an \"ORDER BY mtime ... \"\n> was a good way of demonstrating the advantages of the patch.\n>\n> Attached is v8, which has the enhancements for low cardinality data\n> that I mentioned earlier today. It also simplifies the logic for\n> dealing with posting lists that we need to delete some TIDs from.\n> These posting list simplifications also make the code a bit more\n> efficient, which might be noticeable during benchmarking.\n>\n> Perhaps your \"we have 5,2% slowdown in UPDATE speed\" issue will be at\n> least somewhat fixed by the enhancements to v8?\n>\n\nYes, v8 looks very nice!\n\nI've done two 8 hour long sessions with scale=2000 and shared_buffers=512MB\n(previously sent postgresql.auto.conf used here with no changes).\nThe rest of the setup is the same:\n- mtime column that is tracks update time\n- index on (mtime, aid)\n- tenner low cardinality index from Peter's earlier e-mail\n- 3 pgbench scripts run in parallel on master and on v8 patchset (scripts\nfrom the previous e-mail used here).\n\nMaster\n------\n relname | nrows | blk_before | mb_before | blk_after |\nmb_after | diff\n-----------------------+-----------+------------+-----------+-----------+----------+--------\n pgbench_accounts | 300000000 | 4918033 | 38422.1 | 5066589 |\n 39582.7 | +3.0%\n accounts_mtime | 300000000 | 1155119 | 9024.4 | 1422354 |\n 11112.1 | +23.1%\n pgbench_accounts_pkey | 300000000 | 822573 | 6426.4 | 822573 |\n6426.4 | 0\n tenner | 300000000 | 346050 | 2703.5 | 563101 |\n4399.2 | +62.7%\n(4 rows)\n\nDB size: 59.3..64.5 (+5.2GB / +8.8%)\n\nPatched\n-------\n relname | nrows | blk_before | mb_before | blk_after |\nmb_after | diff\n-----------------------+-----------+------------+-----------+-----------+----------+--------\n 
pgbench_accounts | 300000000 | 4918033 | 38422.1 | 5068092 |\n 39594.5 | +3.0%\n accounts_mtime | 300000000 | 1155119 | 9024.4 | 1428972 |\n 11163.8 | +23.7%\n pgbench_accounts_pkey | 300000000 | 822573 | 6426.4 | 822573 |\n6426.4 | 0\n tenner | 300000000 | 346050 | 2703.5 | 346050 |\n2703.5 | 0\n(4 rows)\n\nDB size: 59.3..62.8 (+3.5GB / +5.9%)\n\nTPS\n---\n query | Master TPS | Patched TPS | diff\n----------------+------------+-------------+-------\nUPDATE + SELECT | 2413 | 2473 | +2.5%\n3 SELECT in txn | 19737 | 19545 | -0.9%\n15min SELECT | 0.74 | 1.03 | +39%\n\nBased on the figures and also on the graphs attached, I can tell v8 has no\nvisible regression\nin terms of TPS, IO pattern changes slightly, but the end result is worth\nit.\nIn my view, this patch can be applied from a performance POV.\n\nI wanted to share these before I'll finish with the code review, I'm\nplanning to send it tomorrow.\n\n\n-- \nVictor Yegorov", "msg_date": "Sun, 15 Nov 2020 23:29:08 +0100", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Sun, Nov 15, 2020 at 2:29 PM Victor Yegorov <vyegorov@gmail.com> wrote:\n> TPS\n> ---\n> query | Master TPS | Patched TPS | diff\n> ----------------+------------+-------------+-------\n> UPDATE + SELECT | 2413 | 2473 | +2.5%\n> 3 SELECT in txn | 19737 | 19545 | -0.9%\n> 15min SELECT | 0.74 | 1.03 | +39%\n>\n> Based on the figures and also on the graphs attached, I can tell v8 has no visible regression\n> in terms of TPS, IO pattern changes slightly, but the end result is worth it.\n> In my view, this patch can be applied from a performance POV.\n\nGreat, thanks for testing!\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 16 Nov 2020 20:59:34 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": 
"чт, 12 нояб. 2020 г. в 23:00, Peter Geoghegan <pg@bowt.ie>:\n\n> It would be helpful if you could take a look at the nbtree patch --\n> particularly the changes related to deprecating the page-level\n> BTP_HAS_GARBAGE flag. I would like to break those parts out into a\n> separate patch, and commit it in the next week or two. It's just\n> refactoring, really. (This commit can also make nbtree only use the\n> word VACUUM for things that strictly involve VACUUM. For example,\n> it'll rename _bt_vacuum_one_page() to _bt_delete_or_dedup_one_page().)\n>\n> We almost don't care about the flag already, so there is almost no\n> behavioral change from deprecated BTP_HAS_GARBAGE in this way.\n>\n> Indexes that use deduplication already don't rely on BTP_HAS_GARBAGE\n> being set ever since deduplication was added to Postgres 13 (the\n> deduplication code doesn't want to deal with LP_DEAD bits, and cannot\n> trust that no LP_DEAD bits can be set just because BTP_HAS_GARBAGE\n> isn't set in the special area). Trusting the BTP_HAS_GARBAGE flag can\n> cause us to miss out on deleting items with their LP_DEAD bits set --\n> we're better off \"assuming that BTP_HAS_GARBAGE is always set\", and\n> finding out if there really are LP_DEAD bits set for ourselves each\n> time.\n>\n> Missing LP_DEAD bits like this can happen when VACUUM unsets the\n> page-level flag without actually deleting the items at the same time,\n> which is expected when the items became garbage (and actually had\n> their LP_DEAD bits set) after VACUUM began, but before VACUUM reached\n> the leaf pages. 
That's really wasteful, and doesn't actually have any\n> upside -- we're scanning all of the line pointers only when we're\n> about to split (or dedup) the same page anyway, so the extra work is\n> practically free.\n>\n\nI've looked over the BTP_HAS_GARBAGE modifications, they look sane.\nI've double checked that heapkeyspace indexes don't use this flag (don't\nrely on it),\nwhile pre-v4 ones still use it.\n\nI have a question. This flag is raised in the _bt_check_unique() and\nin _bt_killitems().\nIf we're deprecating this flag, perhaps it'd be good to either avoid\nraising it at least for\n_bt_check_unique(), as it seems to me that conditions are dealing with\npostings, therefore\nwe are speaking of heapkeyspace indexes here.\n\nIf we'll conditionally raise this flag in the functions above, we can get\nrid of blocks that drop it\nin _bt_delitems_delete(), I think.\n\n-- \nVictor Yegorov", "msg_date": "Tue, 17 Nov 2020 16:05:08 +0100", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "чт, 12 нояб. 2020 г. в 23:00, Peter Geoghegan <pg@bowt.ie>:\n\n> Another thing that I'll probably add to v8: Prefetching. This is\n> probably necessary just so I can have parity with the existing\n> heapam.c function that the new code is based on,\n> heap_compute_xid_horizon_for_tuples(). That will probably help here,\n> too.\n>\n\nI don't quite see this part. Do you mean top_block_groups_favorable() here?\n\n\n-- \nVictor Yegorov", "msg_date": "Tue, 17 Nov 2020 16:16:50 +0100", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "пт, 13 нояб. 2020 г. в 00:01, Peter Geoghegan <pg@bowt.ie>:\n\n> Attached is v8, which has the enhancements for low cardinality data\n> that I mentioned earlier today. It also simplifies the logic for\n> dealing with posting lists that we need to delete some TIDs from.\n> These posting list simplifications also make the code a bit more\n> efficient, which might be noticeable during benchmarking.\n>\n\nI've looked through the code and it looks very good from my end:\n- plenty comments, good description of what's going on\n- I found no loose ends in terms of AM integration\n- magic constants replaced with defines\nCode looks good. Still, it'd be good if somebody with more experience could\nlook into this patch.\n\n\nQuestion: why in the comments you're using double spaces after dots?\nIs this a convention of the project?\n\nI am thinking of two more scenarios that require testing:\n- queue in the table, with a high rate of INSERTs+DELETEs and a long\ntransaction.\n Currently I've seen such conditions yield indexes of several GB in size\nwil holding less\n than a thousand of live records.\n- upgraded cluster with !heapkeyspace indexes.\n\n-- \nVictor Yegorov", "msg_date": "Tue, 17 Nov 2020 16:24:43 +0100", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Tue, Nov 17, 2020 at 7:05 AM Victor Yegorov <vyegorov@gmail.com> wrote:\n> I've looked over the BTP_HAS_GARBAGE modifications, they look sane.\n> I've double checked that heapkeyspace indexes don't use this flag (don't rely on it),\n> while pre-v4 ones still use it.\n\nCool.\n\n> I have a question. 
This flag is raised in the _bt_check_unique() and in _bt_killitems().\n> If we're deprecating this flag, perhaps it'd be good to either avoid raising it at least for\n> _bt_check_unique(), as it seems to me that conditions are dealing with postings, therefore\n> we are speaking of heapkeyspace indexes here.\n\nWell, we still want to mark LP_DEAD bits set in all cases, just as\nbefore. The difference is that heapkeyspace indexes won't rely on the\npage-level flag later on.\n\n> If we'll conditionally raise this flag in the functions above, we can get rid of blocks that drop it\n> in _bt_delitems_delete(), I think.\n\nI prefer to continue to maintain the flag in the same way, regardless\nof which B-Tree version is in use (i.e. if it's heapkeyspace or not).\nMaintaining the flag is not expensive, may have some small value for\nforensic or debugging purposes, and saves callers the trouble of\ntelling _bt_delitems_delete() (and code like it) whether or not this\nis a heapkeyspace index.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 17 Nov 2020 08:24:36 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "вт, 17 нояб. 2020 г. в 17:24, Peter Geoghegan <pg@bowt.ie>:\n\n> I prefer to continue to maintain the flag in the same way, regardless\n> of which B-Tree version is in use (i.e. if it's heapkeyspace or not).\n> Maintaining the flag is not expensive, may have some small value for\n> forensic or debugging purposes, and saves callers the trouble of\n> telling _bt_delitems_delete() (and code like it) whether or not this\n> is a heapkeyspace index.\n>\n\nOK. Can you explain what deprecation means here?\nIf this functionality is left as is, it is not really deprecation?..\n\n-- \nVictor Yegorov", "msg_date": "Tue, 17 Nov 2020 18:19:46 +0100", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Tue, Nov 17, 2020 at 9:19 AM Victor Yegorov <vyegorov@gmail.com> wrote:\n> OK. Can you explain what deprecation means here?\n> If this functionality is left as is, it is not really deprecation?..\n\nIt just means that we only keep it around for compatibility purposes.\nWe would like to remove it, but can't right now. If we ever stop\nsupporting version 3 indexes, then we can probably remove it entirely.\nI would like to avoid special cases across B-Tree index versions.\nSimply maintaining the page flag in the same way as we always have is\nthe simplest approach.\n\nPushed the BTP_HAS_GARBAGE patch just now. 
I'll post a rebased version\nof the patch series later on today.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 17 Nov 2020 09:47:02 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Tue, Nov 17, 2020 at 7:24 AM Victor Yegorov <vyegorov@gmail.com> wrote:\n> I've looked through the code and it looks very good from my end:\n> - plenty comments, good description of what's going on\n> - I found no loose ends in terms of AM integration\n> - magic constants replaced with defines\n> Code looks good. Still, it'd be good if somebody with more experience could look into this patch.\n\nGreat, thank you.\n\n> Question: why in the comments you're using double spaces after dots?\n> Is this a convention of the project?\n\nNot really. It's based on my habit of trying to be as consistent as\npossible with existing code.\n\nThere seems to be a weak consensus among English speakers on this\nquestion, which is: the two space convention is antiquated, and only\never made sense in the era of mechanical typewriters. I don't really\ncare either way, and I doubt that any other committer pays much\nattention to these things. You may have noticed that I use only one\nspace in my e-mails.\n\nActually, I probably shouldn't care about it myself. It's just what I\ndecided to do at some point. I find it useful to decide that this or\nthat practice is now a best practice, and then stick to it without\nthinking about it very much (this frees up space in my head to think\nabout more important things). But this particular habit of mine around\nspaces is definitely not something I'd insist on from other\ncontributors. It's just that: a habit.\n\n> I am thinking of two more scenarios that require testing:\n> - queue in the table, with a high rate of INSERTs+DELETEs and a long transaction.\n\nI see your point. 
This is going to be hard to make work outside of\nunique indexes, though. Unique indexes are already not dependent on\nthe executor hint -- they can just use the \"uniquedup\" hint. The code\nfor unique indexes is prepared to notice duplicates in\n_bt_check_unique() in passing, and apply the optimization for that\nreason.\n\nMaybe there is some argument to forgetting about the hint entirely,\nand always assuming that we should try to find tuples to delete at the\npoint that a page is about to be split. I think that that argument is\na lot harder to make, though. And it can be revisited in the future.\nIt would be nice to do better with INSERTs+DELETEs, but that's surely\nnot the big problem for us right now.\n\nI realize that this unique indexes/_bt_check_unique() thing is not\neven really a partial fix to the problem you describe. The indexes\nthat have real problems with such an INSERTs+DELETEs workload will\nnaturally not be unique indexes -- _bt_check_unique() already does a\nfairly good job of controlling bloat without bottom-up deletion.\n\n> - upgraded cluster with !heapkeyspace indexes.\n\nI do have a patch that makes that easy to test, that I used for the\nPostgres 13 deduplication work -- I can rebase it and post it if you\nlike. You will be able to apply the patch, and run the regression\ntests with a !heapkeyspace index. This works with only one or two\ntweaks to the tests (IIRC the amcheck tests need to be tweaked in one\nplace for this to work). 
I don't anticipate that !heapkeyspace indexes\nwill be a problem, because they won't use any of the new stuff anyway,\nand because nothing about the on-disk format is changed by bottom-up\nindex deletion.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 17 Nov 2020 12:45:52 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Tue, Nov 17, 2020 at 7:17 AM Victor Yegorov <vyegorov@gmail.com> wrote:\n> чт, 12 нояб. 2020 г. в 23:00, Peter Geoghegan <pg@bowt.ie>:\n>> Another thing that I'll probably add to v8: Prefetching. This is\n>> probably necessary just so I can have parity with the existing\n>> heapam.c function that the new code is based on,\n>> heap_compute_xid_horizon_for_tuples(). That will probably help here,\n>> too.\n>\n> I don't quite see this part. Do you mean top_block_groups_favorable() here?\n\nI meant to add prefetching to the version of the patch that became v8,\nbut that didn't happen because I ran out of time. I wanted to get out\na version with the low cardinality fix, to see if that helped with the\nregression you talked about last week. (Prefetching seems to make a\nsmall difference when we're I/O bound, so it may not be that\nimportant.)\n\nAttached is v9 of the patch series. This actually has prefetching in\nheapam.c. Prefetching is not just applied to favorable blocks, though\n-- it's applied to all the blocks that we might visit, even though we\noften won't really visit the last few blocks in line. This needs more\ntesting. The specific choices I made around prefetching were\ndefinitely a bit arbitrary. To be honest, it was a bit of a\nbox-ticking thing (parity with similar code for its own sake). But\nmaybe I failed to consider particular scenarios in which prefetching\nreally is important.\n\nMy high level goal for v9 was to do cleanup of v8. 
There isn't very\nmuch that you could call a new enhancement (just the prefetching\nthing).\n\nOther changes in v9 include:\n\n* Much simpler approach to passing down an aminsert() hint from the\nexecutor in v9-0002* patch.\n\nRather than exposing some HOT implementation details from\nheap_update(), we use executor state that tracks updated columns. Now\nall we have to do is tell ExecInsertIndexTuples() \"this round of index\ntuple inserts is for an UPDATE statement\". It then figures out the\nspecific details (whether it passes the hint or not) on an index by\nindex basis. This interface feels much more natural to me.\n\nThis also made it easy to handle expression indexes sensibly. And, we\nget support for the logical replication UPDATE caller to\nExecInsertIndexTuples(). It only has to say \"this is for an UPDATE\",\nin the usual way, without any special effort (actually I need to test\nlogical replication, just to be sure, but I think that it works fine\nin v9).\n\n* New B-Tree sgml documentation in v9-0003* patch. I've added an\nextensive user-facing description of the feature to the end of\n\"Chapter 64. B-Tree Indexes\", near the existing discussion of\ndeduplication.\n\n* New delete_items storage parameter. This makes it possible to\ndisable the optimization. Like deduplicate_items in Postgres 13, it is\nnot expected to be set to \"off\" very often.\n\nI'm not yet 100% sure that a storage parameter is truly necessary -- I\nmight still change my mind and remove it later.\n\nThanks\n--\nPeter Geoghegan", "msg_date": "Tue, 17 Nov 2020 13:38:05 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Tue, Nov 17, 2020 at 12:45 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I am thinking of two more scenarios that require testing:\n> > - queue in the table, with a high rate of INSERTs+DELETEs and a long transaction.\n>\n> I see your point. 
This is going to be hard to make work outside of\n> unique indexes, though. Unique indexes are already not dependent on\n> the executor hint -- they can just use the \"uniquedup\" hint. The code\n> for unique indexes is prepared to notice duplicates in\n> _bt_check_unique() in passing, and apply the optimization for that\n> reason.\n\nI thought about this some more. My first idea was to simply always try\nout bottom-up deletion (i.e. behave as if the hint from the executor\nalways indicates that it's favorable). I couldn't really justify that\napproach, though. It results in many bottom-up deletion passes that\nend up wasting cycles (and unnecessarily accessing heap blocks).\n\nThen I had a much better idea: Make the existing LP_DEAD stuff a\nlittle more like bottom-up index deletion. We usually have to access\nheap blocks that the index tuples point to today, in order to have a\nlatestRemovedXid cutoff (to generate recovery conflicts). It's worth\nscanning the leaf page for index tuples with TIDs whose heap block\nmatches the index tuples that actually have their LP_DEAD bits set.\nThis only consumes a few more CPU cycles. We don't have to access any\nmore heap blocks to try these extra TIDs, so it seems like a very good\nidea to try them out.\n\nI ran the regression tests with an enhanced version of the patch, with\nthis LP_DEAD-deletion-with-extra-TIDs thing. It also had custom\ninstrumentation that showed exactly what happens in each case. We\nmanage to delete at least a small number of extra index tuples in\nalmost all cases -- so we get some benefit in practically all cases.\nAnd in the majority of cases we can delete significantly more. It's\nnot uncommon to increase the number of index tuples deleted. It could\ngo from 1 - 10 or so without the enhancement to LP_DEAD deletion, to\n50 - 250 with the LP_DEAD enhancement. 
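The "extra TIDs for free" scan can be sketched as follows. This is a standalone illustration with invented names, not the patch code: a real implementation would walk the leaf page's line pointer array, and could sort by heap block instead of using this deliberately naive O(n^2) comparison.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified TID: heap block plus line pointer offset (illustration only) */
typedef struct DemoTid
{
    uint32_t    block;          /* heap block number */
    uint16_t    offnum;         /* line pointer offset within that block */
} DemoTid;

/*
 * Mark as deletion candidates all index tuples whose TID points into a heap
 * block that some LP_DEAD-marked tuple on the same leaf page also points
 * into. Every LP_DEAD tuple is itself a candidate, so the result is always
 * >= the number of LP_DEAD items. We visit those heap blocks anyway (for
 * the latestRemovedXid cutoff), so trying the extra TIDs costs only cycles.
 */
static size_t
demo_collect_extra_candidates(const DemoTid *tids, const bool *lpdead,
                              bool *candidate, size_t n)
{
    size_t      ncandidates = 0;

    for (size_t i = 0; i < n; i++)
    {
        candidate[i] = lpdead[i];
        for (size_t j = 0; !candidate[i] && j < n; j++)
        {
            /* Same heap block as a known-dead entry: try it for free */
            if (lpdead[j] && tids[j].block == tids[i].block)
                candidate[i] = true;
        }
        if (candidate[i])
            ncandidates++;
    }
    return ncandidates;
}
```

Whether a candidate actually gets deleted still depends on the heapam check; the point is only that no additional heap blocks are accessed to find out.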
Some individual LP_DEAD\ndeletion calls can free more than 50% of the space on the leaf page.\n\nI believe that this is a lower risk way of doing better when there is\na high rate of INSERTs+DELETEs. Most of the regression test cases I\nlooked at were in the larger system catalog indexes, which often look\nlike that.\n\nWe don't have to be that lucky for a passing index scan to set at\nleast one or two LP_DEAD bits. If there is any kind of\nphysical/logical correlation, then we're bound to also end up deleting\nsome extra index tuples by the time that the page looks like it might\nneed to be split.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 24 Nov 2020 20:35:13 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "Wed, 25 Nov 2020 at 05:35, Peter Geoghegan <pg@bowt.ie>:\n\n> Then I had a much better idea: Make the existing LP_DEAD stuff a\n> little more like bottom-up index deletion. We usually have to access\n> heap blocks that the index tuples point to today, in order to have a\n> latestRemovedXid cutoff (to generate recovery conflicts). It's worth\n> scanning the leaf page for index tuples with TIDs whose heap block\n> matches the index tuples that actually have their LP_DEAD bits set.\n> This only consumes a few more CPU cycles. We don't have to access any\n> more heap blocks to try these extra TIDs, so it seems like a very good\n> idea to try them out.\n>\n\nI don't seem to understand this.\n\nIs it: we're scanning the leaf page for all LP_DEAD tuples that point to\nthe same\nheap block? Which heap block are we talking about here, the one that holds\nthe entry we're about to add (the one that triggered bottom-up deletion due to lack\nof space, I mean)?\n\n> I ran the regression tests with an enhanced version of the patch, with\n> this LP_DEAD-deletion-with-extra-TIDs thing. 
> It also had custom\n> instrumentation that showed exactly what happens in each case. We\n> manage to delete at least a small number of extra index tuples in\n> almost all cases -- so we get some benefit in practically all cases.\n> And in the majority of cases we can delete significantly more. It's\n> not uncommon to increase the number of index tuples deleted. It could\n> go from 1 - 10 or so without the enhancement to LP_DEAD deletion, to\n> 50 - 250 with the LP_DEAD enhancement. Some individual LP_DEAD\n> deletion calls can free more than 50% of the space on the leaf page.\n>\n\nI am missing a general perspective here.\n\nIs it true, that despite the long (vacuum preventing) transaction we can\nre-use space,\nas after the DELETE statements commits, IndexScans are setting LP_DEAD\nhints after\nthey check the state of the corresponding heap tuple?\n\nIf my thinking is correct for both cases — nature of LP_DEAD hint bits and\nthe mechanics of\nsuggested optimization — then I consider this a very promising improvement!\n\nI haven't done any testing so far since sending my last e-mail.\nIf you'll have a chance to send a new v10 version with\nLP_DEAD-deletion-with-extra-TIDs thing,\nI will do some tests (planned).\n\n-- \nVictor Yegorov", "msg_date": "Wed, 25 Nov 2020 13:43:10 +0100", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Wed, Nov 25, 2020 at 4:43 AM Victor Yegorov 
<vyegorov@gmail.com> wrote:\n>> Then I had a much better idea: Make the existing LP_DEAD stuff a\n>> little more like bottom-up index deletion. We usually have to access\n>> heap blocks that the index tuples point to today, in order to have a\n>> latestRemovedXid cutoff (to generate recovery conflicts). It's worth\n>> scanning the leaf page for index tuples with TIDs whose heap block\n>> matches the index tuples that actually have their LP_DEAD bits set.\n>> This only consumes a few more CPU cycles. We don't have to access any\n>> more heap blocks to try these extra TIDs, so it seems like a very good\n>> idea to try them out.\n>\n>\n> I don't seem to understand this.\n>\n> Is it: we're scanning the leaf page for all LP_DEAD tuples that point to the same\n> heap block? Which heap block we're talking about here, the one that holds\n> entry we're about to add (the one that triggered bottom-up-deletion due to lack\n> of space I mean)?\n\nNo, the incoming tuple isn't significant.\n\nAs you know, bottom-up index deletion uses heuristics that are\nconcerned with duplicates on the page, and the \"logically unchanged by\nan UPDATE\" hint that the executor passes to btinsert(). Bottom-up\ndeletion runs when all LP_DEAD bits have been cleared (either because\nthere never were any LP_DEAD bits set, or because they were set and\nthen deleted, which wasn't enough).\n\nBut before bottom-up deletion may run, traditional deletion of LP_DEAD\nindex tuples runs -- this is always our preferred strategy because\nindex tuples with their LP_DEAD bits set are already known to be\ndeletable. We can make this existing process (which has been around\nsince PostgreSQL 8.2) better by applying similar principles.\n\nWe have promising tuples for bottom-up deletion. 
Why not have\n\"promising heap blocks\" for traditional LP_DEAD index tuple deletion?\nOr if you prefer, we can consider index tuples that *don't* have their\nLP_DEAD bits set already but happen to point to the *same heap block*\nas other tuples that *do* have their LP_DEAD bits set promising. (The\ntuples with their LP_DEAD bits set are not just promising -- they're\nalready a sure thing.)\n\nThis means that traditional LP_DEAD deletion is now slightly more\nspeculative in one way (it speculates about what is likely to be true\nusing heuristics). But it's much less speculative than bottom-up index\ndeletion. We are required to visit these heap blocks anyway, since a\ncall to _bt_delitems_delete() for LP_DEAD deletion must already call\ntable_compute_xid_horizon_for_tuples(), which has to access the blocks\nto get a latestRemovedXid for the WAL record.\n\nThe only thing that we have to lose here is a few CPU cycles to find\nextra TIDs to consider. We'll visit exactly the same number of heap\nblocks as before. (Actually, _bt_delitems_delete() does not have to do\nthat in all cases, but it has to do it with a logged table\nwith wal_level >= replica, which is the vast majority of cases in\npractice.)\n\nThis means that traditional LP_DEAD deletion reuses some of the\nbottom-up index deletion infrastructure. So maybe nbtree never calls
So maybe nbtree never calls\ntable_compute_xid_horizon_for_tuples() now, since everything goes\nthrough the new heapam stuff instead (which knows how to check extra\nTIDs that might not be dead at all).\n\n> I am missing a general perspective here.\n>\n> Is it true, that despite the long (vacuum preventing) transaction we can re-use space,\n> as after the DELETE statements commits, IndexScans are setting LP_DEAD hints after\n> they check the state of the corresponding heap tuple?\n\nThe enhancement to traditional LP_DEAD deletion that I just described\ndoes not affect the current restrictions on setting LP_DEAD bits in\nthe presence of a long-running transaction, or anything like that.\nThat seems like an unrelated project. The value of this enhancement is\npurely its ability to delete *extra* index tuples that could have had\ntheir LP_DEAD bits set already (it was possible in principle), but\ndidn't. And only when they are nearby to index tuples that really do\nhave their LP_DEAD bits set.\n\n> I haven't done any testing so far since sending my last e-mail.\n> If you'll have a chance to send a new v10 version with LP_DEAD-deletion-with-extra-TIDs thing,\n> I will do some tests (planned).\n\nThanks! I think that it will be next week. It's a relatively big change.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 25 Nov 2020 10:41:15 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "ср, 25 нояб. 2020 г. в 19:41, Peter Geoghegan <pg@bowt.ie>:\n\n> We have promising tuples for bottom-up deletion. Why not have\n> \"promising heap blocks\" for traditional LP_DEAD index tuple deletion?\n> Or if you prefer, we can consider index tuples that *don't* have their\n> LP_DEAD bits set already but happen to point to the *same heap block*\n> as other tuples that *do* have their LP_DEAD bits set promising. 
(The\n> tuples with their LP_DEAD bits set are not just promising -- they're\n> already a sure thing.)\n>\n\nIn the _bt_delete_or_dedup_one_page() we start with the simple loop over\nitems on the page and\nif there're any LP_DEAD tuples, we're kicking off _bt_delitems_delete().\n\nSo if I understood you right, you plan to make this loop (or a similar one\nsomewhere around)\nto track TIDs of the LP_DEAD tuples and then (perhaps on a second loop over\nthe page) compare all other\ncurrently-not-LP_DEAD tuples and mark those pages, that have at least 2\nTIDs pointing at (one LP_DEAD and other not)\nas a promising one.\n\nLater, should we require to kick deduplication, we'll go visit promising\npages first.\n\nIs my understanding correct?\n\n\n> I am missing a general perspective here.\n> >\n> > Is it true, that despite the long (vacuum preventing) transaction we can\n> re-use space,\n> > as after the DELETE statements commits, IndexScans are setting LP_DEAD\n> hints after\n> > they check the state of the corresponding heap tuple?\n>\n> The enhancement to traditional LP_DEAD deletion that I just described\n> does not affect the current restrictions on setting LP_DEAD bits in\n> the presence of a long-running transaction, or anything like that.\n> That seems like an unrelated project. The value of this enhancement is\n> purely its ability to delete *extra* index tuples that could have had\n> their LP_DEAD bits set already (it was possible in principle), but\n> didn't. And only when they are nearby to index tuples that really do\n> have their LP_DEAD bits set.\n>\n\nI wasn't considering improvements here, that was a general question about\nhow this works\n(trying to clear up gaps in my understanding).\n\nWhat I meant to ask — will LP_DEAD be set by IndexScan in the presence of\nthe long transaction?\n\n\n-- \nVictor Yegorov", "msg_date": "Wed, 25 Nov 2020 22:20:44 +0100", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Wed, Nov 25, 2020 at 1:20 PM Victor Yegorov <vyegorov@gmail.com> wrote:\n> In the _bt_delete_or_dedup_one_page() we start with the simple loop over items on the page and\n> if there're any LP_DEAD tuples, we're kicking off _bt_delitems_delete().\n\nRight.\n\n> So if I understood you right, you plan to make this loop (or a similar one somewhere around)\n> to track TIDs of the LP_DEAD tuples and then (perhaps on a second loop over the page) compare all other\n> currently-not-LP_DEAD tuples and mark those pages, that have at least 2 TIDs pointing at (one LP_DEAD and other not)\n> as a promising one.\n\nYes. We notice extra TIDs that can be included in our heapam test \"for\nfree\". The cost is low, but the benefits are also often quite high: in\npractice there are *natural* correlations that we can exploit.\n\nFor example: maybe there were non-HOT updates, and some but not all of\nthe versions got marked LP_DEAD. We can get them all in one go,\navoiding a true bottom-up index deletion pass for much longer\n(compared to doing LP_DEAD deletion the old way, which is what happens\nin v9 of the patch). 
We're better off doing the deletions all at once.\nIt's cheaper.\n\n(We still really need to have bottom-up deletion passes, of course,\nbecause that covers the important case where there are no LP_DEAD bits\nset at all, which is an important goal of this project.)\n\nMinor note: Technically there aren't any promising tuples involved,\nbecause that only makes sense when we are not going to visit every\npossible heap page (just the \"most promising\" heap pages). But we are\ngoing to visit every possible heap page with the new LP_DEAD bit\ndeletion code (which could occasionally mean visiting 10 or more heap\npages, which is a lot more than bottom-up index deletion will ever\nvisit). All we need to do with the new LP_DEAD deletion logic is to\ninclude all possible matching TIDs (not just those that are marked\nLP_DEAD already).\n\n> What I meant to ask — will LP_DEAD be set by IndexScan in the presence of the long transaction?\n\nThat works in the same way as before, even with the new LP_DEAD\ndeletion code. The new code uses the same information as before (set\nLP_DEAD bits), which is generated in the same way as before. The\ndifference is in how the information is actually used during LP_DEAD\ndeletion -- we can now delete some extra things in certain common\ncases.\n\nIn practice this (and bottom-up deletion) make nbtree more robust\nagainst disruption caused by long running transactions that hold a\nsnapshot open. It's hard to give a simple explanation of why that is,\nbecause it's a second order effect. The patch is going to make it\npossible to recover when LP_DEAD bits suddenly stop being set because\nof an old snapshot -- now we'll have a \"second chance\", and maybe even\na third chance. 
But if the snapshot is held open *forever*, then a\nsecond chance has no value.\n\nHere is a thought experiment that might be helpful:\n\nImagine Postgres just as it is today (without the patch), except that\nVACUUM runs very frequently, and is infinitely fast (this is a magical\nversion of VACUUM). This solves many problems, but does not solve all\nproblems. Magic Postgres will become just as slow as earthly Postgres\nwhen there is a snapshot that is held open for a very long time. That\nwill take longer to happen compared to earthly/mortal Postgres, but\neventually there will be no difference between the two at all. But,\nwhen you don't have such an extreme problem, magic Postgres really is\nmuch faster.\n\nI think that it will be possible to approximate the behavior of magic\nPostgres using techniques like bottom-up deletion, the new LP_DEAD\ndeletion thing we've been talking about today, and maybe other enhancements\nin other areas (like in heap pruning). It doesn't matter that we don't\nphysically remove garbage immediately, as long as we \"logically\"\nremove it immediately. The actual physical removal can occur in a\njust-in-time, incremental fashion, creating the illusion that VACUUM\nreally does run infinitely fast. No magic required.\n\nActually, in a way this isn't new; we have always \"logically\" removed\ngarbage at the earliest opportunity (by which I mean we allow that it\ncan be physically removed according to an oldestXmin style cutoff,\nwhich can be reacquired/updated the second the oldest MVCC snapshot\ngoes away). We don't think of useless old versions as being \"logically\nremoved\" the instant an old snapshot goes away. But maybe we should --\nit's a useful mental model.\n\nIt will also be very helpful to add \"remove useless intermediate\nversions\" logic at some point. This is quite a distinct area from what I\njust described, but it's also important. 
We need both, I think.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 25 Nov 2020 17:00:31 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Wed, Nov 25, 2020 at 10:41 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> We have promising tuples for bottom-up deletion. Why not have\n> \"promising heap blocks\" for traditional LP_DEAD index tuple deletion?\n> Or if you prefer, we can consider index tuples that *don't* have their\n> LP_DEAD bits set already but happen to point to the *same heap block*\n> as other tuples that *do* have their LP_DEAD bits set promising. (The\n> tuples with their LP_DEAD bits set are not just promising -- they're\n> already a sure thing.)\n>\n> This means that traditional LP_DEAD deletion is now slightly more\n> speculative in one way (it speculates about what is likely to be true\n> using heuristics). But it's much less speculative than bottom-up index\n> deletion. We are required to visit these heap blocks anyway, since a\n> call to _bt_delitems_delete() for LP_DEAD deletion must already call\n> table_compute_xid_horizon_for_tuples(), which has to access the blocks\n> to get a latestRemovedXid for the WAL record.\n>\n> The only thing that we have to lose here is a few CPU cycles to find\n> extra TIDs to consider. We'll visit exactly the same number of heap\n> blocks as before. (Actually, _bt_delitems_delete() does not have to do\n> that in all cases, actually, but it has to do it with a logged table\n> with wal_level >= replica, which is the vast majority of cases in\n> practice.)\n>\n> This means that traditional LP_DEAD deletion reuses some of the\n> bottom-up index deletion infrastructure. 
So maybe nbtree never calls\n> table_compute_xid_horizon_for_tuples() now, since everything goes\n> through the new heapam stuff instead (which knows how to check extra\n> TIDs that might not be dead at all).\n\nAttached is v10, which has this LP_DEAD deletion enhancement I\ndescribed. (It also fixed bitrot -- v9 no longer applies.)\n\nThis revision does a little refactoring to make this possible. Now\nthere is less new code in nbtdedup.c, and more code in nbtpage.c,\nbecause some of the logic used by bottom-up deletion has been\ngeneralized (in order to be used by the new-to-v10 LP_DEAD deletion\nenhancement).\n\nOther than that, no big changes between this v10 and v9. Just\npolishing and refactoring. I decided to make it mandatory for tableams\nto support the new interface that heapam implements, since it's hardly\nokay for them to not allow LP_DEAD deletion in nbtree (which is what\nmaking support for the interface optional would imply, given the\nLP_DEAD changes). So now the heapam and tableam changes are included\nin one patch/commit, which is to be applied first among patches in the\nseries.\n\n--\nPeter Geoghegan", "msg_date": "Mon, 30 Nov 2020 11:50:58 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "This is a wholly new concept with a lot of heuristics. It's a lot to \nswallow. But here are a few quick comments after a first read-through:\n\nOn 30/11/2020 21:50, Peter Geoghegan wrote:\n> +/*\n> + * State used when calling table_index_delete_check() to perform \"bottom up\"\n> + * deletion of duplicate index tuples. 
State is initialized by index AM\n> + * caller, while state is finalized by tableam, which modifies state.\n> + */\n> +typedef struct TM_IndexDelete\n> +{\n> +\tItemPointerData tid;\t\t/* table TID from index tuple */\n> +\tint16\t\tid;\t\t\t\t/* Offset into TM_IndexStatus array */\n> +} TM_IndexDelete;\n> +\n> +typedef struct TM_IndexStatus\n> +{\n> +\tOffsetNumber idxoffnum;\t\t/* Index am page offset number */\n> +\tint16\t\ttupsize;\t\t/* Space freed in index if tuple deleted */\n> +\tbool\t\tispromising;\t/* Duplicate in index? */\n> +\tbool\t\tdeleteitup;\t\t/* Known dead-to-all? */\n> +} TM_IndexStatus;\n> ...\n> + * The two arrays are conceptually one single array. Two arrays/structs are\n> + * used for performance reasons. (We really need to keep the TM_IndexDelete\n> + * struct small so that the tableam can do an initial sort by TID as quickly\n> + * as possible.)\n\nIs it really significantly faster to have two arrays? If you had just \none array, each element would be only 12 bytes long. 
That's not much \nsmaller than TM_IndexDeletes, which is 8 bytes.\n\n> +\t/* First sort caller's array by TID */\n> +\theap_tid_shellsort(delstate->deltids, delstate->ndeltids);\n> +\n> +\t/* alltids caller visits all blocks, so make sure that happens */\n> +\tif (delstate->alltids)\n> +\t\treturn delstate->ndeltids;\n> +\n> +\t/* Calculate per-heap-block count of TIDs */\n> +\tblockcounts = palloc(sizeof(IndexDeleteCounts) * delstate->ndeltids);\n> +\tfor (int i = 0; i < delstate->ndeltids; i++)\n> +\t{\n> +\t\tItemPointer deltid = &delstate->deltids[i].tid;\n> +\t\tTM_IndexStatus *dstatus = delstate->status + delstate->deltids[i].id;\n> +\t\tbool\t\tispromising = dstatus->ispromising;\n> +\n> +\t\tif (curblock != ItemPointerGetBlockNumber(deltid))\n> +\t\t{\n> +\t\t\t/* New block group */\n> +\t\t\tnblockgroups++;\n> +\n> +\t\t\tAssert(curblock < ItemPointerGetBlockNumber(deltid) ||\n> +\t\t\t\t !BlockNumberIsValid(curblock));\n> +\n> +\t\t\tcurblock = ItemPointerGetBlockNumber(deltid);\n> +\t\t\tblockcounts[nblockgroups - 1].ideltids = i;\n> +\t\t\tblockcounts[nblockgroups - 1].ntids = 1;\n> +\t\t\tblockcounts[nblockgroups - 1].npromisingtids = 0;\n> +\t\t}\n> +\t\telse\n> +\t\t{\n> +\t\t\tblockcounts[nblockgroups - 1].ntids++;\n> +\t\t}\n> +\n> +\t\tif (ispromising)\n> +\t\t\tblockcounts[nblockgroups - 1].npromisingtids++;\n> +\t}\n\nInstead of sorting the array by TID, wouldn't it be enough to sort by \njust the block numbers?\n\n> \t * While in general the presence of promising tuples (the hint that index\n> +\t * AMs provide) is the best information that we have to go on, it is based\n> +\t * on simple heuristics about duplicates in indexes that are understood to\n> +\t * have specific flaws. We should not let the most promising heap pages\n> +\t * win or lose on the basis of _relatively_ small differences in the total\n> +\t * number of promising tuples. 
Small differences between the most\n> +\t * promising few heap pages are effectively ignored by applying this\n> +\t * power-of-two bucketing scheme.\n> +\t *\n\nWhat are the \"specific flaws\"?\n\nI understand that this is all based on rough heuristics, but is there \nany advantage to rounding/bucketing the numbers like this? Even if small \ndifferences in the total number of promising tuple is essentially noise \nthat can be safely ignored, is there any harm in letting those small \ndifferences guide the choice? Wouldn't it be the essentially the same as \npicking at random, or are those small differences somehow biased?\n\n- Heikki\n\n\n", "msg_date": "Tue, 1 Dec 2020 11:50:40 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Tue, Dec 1, 2020 at 1:50 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> This is a wholly new concept with a lot of heuristics. It's a lot of\n> swallow.\n\nThanks for taking a look! Yes, it's a little unorthodox.\n\nIdeally, you'd find time to grok the patch and help me codify the\ndesign in some general kind of way. If there are general lessons to be\nlearned here (and I suspect that there are), then this should not be\nleft to chance. The same principles can probably be applied in heapam,\nfor example. Even if I'm wrong about the techniques being highly\ngeneralizable, it should still be interesting! (Something to think\nabout a little later.)\n\nSome of the intuitions behind the design might be vaguely familiar to\nyou as the reviewer of my earlier B-Tree work. In particular, the\nwhole way that we reason about how successive related operations (in\nthis case bottom-up deletion passes) affect individual leaf pages over\ntime might give you a feeling of déjà vu. It's a little like the\nnbtsplitloc.c stuff that we worked on together during the Postgres 12\ncycle. 
We can make what might seem like rather bold assumptions about\nwhat's going on, and adapt to the workload. This is okay because we're\nsure that the downside of our being wrong is a fixed, low performance\npenalty. (To a lesser degree it's okay because the empirical evidence\nshows that we're almost always right, because we apply the\noptimization again and again precisely because it worked out last\ntime.)\n\nI like to compare it to the familiar \"rightmost leaf page applies leaf\nfillfactor\" heuristic, which applies an assumption that is wrong in\nthe general case, but nevertheless obviously helps enormously as a\npractical matter. Of course it's still true that anybody reviewing\nthis patch ought to start with a principled skepticism of this claim\n-- that's how you review any patch. I can say for sure that that's the\nidea behind the patch, though. I want to be clear about that up front,\nto save you time -- if this claim is wrong, then I'm wrong about\neverything.\n\nGenerational garbage collection influenced this work, too. It also\napplies pragmatic assumptions about where garbage is likely to appear.\nAssumptions that are based on nothing more than empirical observations\nabout what is likely to happen in the real world, that are validated\nby experience and by benchmarking.\n\n> On 30/11/2020 21:50, Peter Geoghegan wrote:\n> > +} TM_IndexDelete;\n\n> > +} TM_IndexStatus;\n\n> Is it really significantly faster to have two arrays? If you had just\n> one array, each element would be only 12 bytes long. That's not much\n> smaller than TM_IndexDeletes, which is 8 bytes.\n\nYeah, but the swap operations really matter here. At one point I found\nthat to be the case, anyway. That might no longer be true, though. It\nmight be that the code became less performance critical because other\nparts of the design improved, resulting in the code getting called\nless frequently. 
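For reference, the size arithmetic behind the two-array question can be checked with mocked-up definitions. These mirror the shapes of the structs quoted above but are not the real definitions, and the exact sizes assume the usual 2-byte alignment of 16-bit members on mainstream ABIs.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Mock of ItemPointerData: three 16-bit halves (block hi/lo + offset) */
typedef struct MockItemPointerData
{
    uint16_t    bi_hi;
    uint16_t    bi_lo;
    uint16_t    offnum;
} MockItemPointerData;

/* Split layout: the element that actually gets sorted stays at 8 bytes */
typedef struct MockIndexDelete
{
    MockItemPointerData tid;    /* table TID from index tuple */
    int16_t     id;             /* offset into the status array */
} MockIndexDelete;

typedef struct MockIndexStatus
{
    uint16_t    idxoffnum;      /* index page offset number */
    int16_t     tupsize;        /* space freed in index if tuple deleted */
    bool        ispromising;    /* duplicate in index? */
    bool        deleteitup;     /* known dead-to-all? */
} MockIndexStatus;

/* Combined layout: what a single-array design would sort and swap */
typedef struct MockCombined
{
    MockItemPointerData tid;
    uint16_t    idxoffnum;
    int16_t     tupsize;
    bool        ispromising;
    bool        deleteitup;
} MockCombined;
```

With the split layout, each shellsort swap moves 8 bytes; a single combined array would move 12 bytes per swap, a 50% increase in bytes shuffled during the sort.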
But if that is true then it has a lot to do with the\npower-of-two bucketing that you go on to ask about -- that helped\nperformance a lot in certain specific cases (as I go into below).\n\nI will add a TODO item for myself, to look into this again. You may\nwell be right.\n\n> > + /* First sort caller's array by TID */\n> > + heap_tid_shellsort(delstate->deltids, delstate->ndeltids);\n\n> Instead of sorting the array by TID, wouldn't it be enough to sort by\n> just the block numbers?\n\nI don't understand. Yeah, I guess that we could do our initial sort of\nthe deltids array (the heap_tid_shellsort() call) just using\nBlockNumber (not TID). But OTOH there might be some small locality\nbenefit to doing a proper TID sort at the level of each heap page. And\neven if there isn't any such benefit, does it really matter?\n\nIf you are asking about the later sort of the block counts array\n(which helps us sort the deltids array a second time, leaving it in\nits final order for bottom-up deletion heapam.c processing), then the\nanswer is no. This block counts metadata array sort is useful because\nit allows us to leave the deltids array in what I believe to be the\nmost useful order for processing. We'll access heap blocks primarily\nbased on the number of promising tuples (though as I go into below,\nsometimes the number of promising tuples isn't a decisive influence on\nprocessing order).\n\n> > * While in general the presence of promising tuples (the hint that index\n> > + * AMs provide) is the best information that we have to go on, it is based\n> > + * on simple heuristics about duplicates in indexes that are understood to\n> > + * have specific flaws. We should not let the most promising heap pages\n> > + * win or lose on the basis of _relatively_ small differences in the total\n> > + * number of promising tuples. 
Small differences between the most\n> > + * promising few heap pages are effectively ignored by applying this\n> > + * power-of-two bucketing scheme.\n> > + *\n>\n> What are the \"specific flaws\"?\n\nI just meant the obvious: the index AM doesn't actually know for sure\nthat there are any old versions on the leaf page that it will\nultimately be able to delete. This uncertainty needs to be managed,\nincluding inside heapam.c. Feel free to suggest a better way of\nexpressing that sentiment.\n\n> I understand that this is all based on rough heuristics, but is there\n> any advantage to rounding/bucketing the numbers like this? Even if small\n> differences in the total number of promising tuple is essentially noise\n> that can be safely ignored, is there any harm in letting those small\n> differences guide the choice? Wouldn't it be the essentially the same as\n> picking at random, or are those small differences somehow biased?\n\nExcellent question! It actually helps enormously, especially with low\ncardinality data that makes good use of the deduplication optimization\n(where it is especially important to keep the costs and the benefits\nin balance). This has been validated by benchmarking.\n\nThis design naturally allows the heapam.c code to take advantage of\nboth temporal and spatial locality. For example, let's say that you\nhave several indexes all on the same table, which get lots of non-HOT\nUPDATEs, which are skewed. Naturally, the heap TIDs will match across\neach index -- these are index entries that are needed to represent\nsuccessor versions (which are logically unchanged/version duplicate\nindex tuples from the point of view of nbtree/nbtdedup.c). Introducing\na degree of determinism to the order in which heap blocks are\nprocessed naturally takes advantage of the correlated nature of the\nindex bloat. 
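To make the bucketing mechanics concrete, here is a minimal standalone sketch. To be clear, the struct and function names here are made up for illustration -- this is not the code from the patch:

```c
#include <stdint.h>

/*
 * Illustrative sketch (made-up names, not the patch's code): order heap
 * blocks by the power-of-two bucket of their "promising tuple" counts,
 * so that relatively small differences in the raw counts are ignored.
 * Ties within a bucket are broken by block number, which keeps the
 * processing order deterministic -- and therefore correlated across
 * indexes on the same table.
 */
typedef struct BlockCount
{
    uint32_t    block;          /* heap block number */
    uint32_t    npromising;     /* promising tuples on this block */
} BlockCount;

static int
power_of_two_bucket(uint32_t n)
{
    int         bucket = 0;

    while (n > 1)
    {
        n >>= 1;
        bucket++;
    }
    return bucket;              /* floor(log2(n)) for n >= 1 */
}

/* qsort() comparator: larger buckets first, then ascending block number */
static int
blockcount_cmp(const void *a, const void *b)
{
    const BlockCount *ba = (const BlockCount *) a;
    const BlockCount *bb = (const BlockCount *) b;
    int         bucka = power_of_two_bucket(ba->npromising);
    int         buckb = power_of_two_bucket(bb->npromising);

    if (bucka != buckb)
        return (bucka < buckb) ? 1 : -1;
    if (ba->block != bb->block)
        return (ba->block < bb->block) ? -1 : 1;
    return 0;
}
```

With this comparator, blocks with 6 and 7 promising tuples land in the same bucket and end up ordered by block number, while a block with 100 promising tuples still sorts ahead of both.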
It naturally makes it much more likely that the\nalso-correlated bottom-up index deletion passes (that occur across\nindexes on the same table) each process the same heap blocks close\ntogether in time -- with obvious performance benefits.\n\nIn the extreme (but not particularly uncommon) case of non-HOT UPDATEs\nwith many low cardinality indexes, each heapam.c call will end up\ndoing *almost the same thing* across indexes. So we're making the\ncorrelated nature of the bloat (which is currently a big problem) work\nin our favor -- turning it on its head, you could say. Highly\ncorrelated bloat is not the exception -- it's actually the norm in the\ncases we're targeting here. If it wasn't highly correlated then it\nwould already be okay to rely on VACUUM to get around to cleaning it\nlater.\n\nThis power-of-two bucketing design probably also helps when there is\nonly one index. I could go into more detail on that, plus other\nvariations, but perhaps the multiple index example suffices for now. I\nbelieve that there are a few interesting kinds of correlations here --\nI bet you can think of some yourself. (Of course it's also possible\nand even likely that heap block correlation won't be important at all,\nbut my response is \"what specifically is the harm in being open to the\npossibility?\".)\n\nBTW, I tried to make the tableam interface changes more or less\ncompatible with Zedstore, which is notable for not using TIDs in the\nsame way as heapam (or zheap). 
Let me know what you think about that.\nI can go into detail about it if it isn't obvious to you.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 1 Dec 2020 14:18:36 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Tue, Dec 1, 2020 at 2:18 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Tue, Dec 1, 2020 at 1:50 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > This is a wholly new concept with a lot of heuristics. It's a lot of\n> > swallow.\n\nAttached is v11, which cleans everything up around the tableam\ninterface. There are only two patches in v11, since the tableam\nrefactoring made it impossible to split the second patch into a heapam\npatch and nbtree patch (there is no reduction in functionality\ncompared to v10).\n\nMost of the real changes in v11 (compared to v10) are in heapam.c.\nI've completely replaced the table_compute_xid_horizon_for_tuples()\ninterface with a new interface that supports all existing requirements\n(from index deletions that support LP_DEAD deletion), while also\nsupporting these new requirements (e.g. bottom-up index deletion). So\nnow heap_compute_xid_horizon_for_tuples() becomes\nheap_compute_delete_for_tuples(), which has different arguments but\nthe same overall structure. All of the new requirements can now be\nthought of as additive things that we happen to use for nbtree\ncallers, that could easily also be used in other index AMs later on.\nThis means that there is a lot less new code in heapam.c.\n\nPrefetching of heap blocks for the new bottom-up index deletion caller\nnow works in the same way as it has worked in\nheap_compute_xid_horizon_for_tuples() since Postgres 12 (more or\nless). 
This is a significant improvement compared to my original\napproach.\n\nChanging heap_compute_xid_horizon_for_tuples() rather than adding a\nsibling function started to make sense when v10 of the patch taught\nregular nbtree LP_DEAD deletion (the thing that has been around since\nPostgreSQL 8.2) to add \"extra\" TIDs to check in passing, just in case\nwe find that they're also deletable -- why not just have one function?\nIt also means that hash indexes and GiST indexes now use the\nheap_compute_delete_for_tuples() function, though they get the same\nbehavior as before. I imagine that it would be pretty straightforward\nto bring that same benefit to those other index AMs, since their\nimplementations are already derived from the nbtree implementation of\nLP_DEAD deletion (and because adding extra TIDs to check in passing in\nthese other index AMs should be fairly easy).\n\n> > > +} TM_IndexDelete;\n>\n> > > +} TM_IndexStatus;\n>\n> > Is it really significantly faster to have two arrays? If you had just\n> > one array, each element would be only 12 bytes long. That's not much\n> > smaller than TM_IndexDeletes, which is 8 bytes.\n\nI kept this facet of the design in v11, following some deliberation. I\nfound that the TID sort operation appeared quite prominently in\nprofiles, and so the optimizations mostly seemed to still make sense.\nI also kept one shellsort specialization. However, I removed the\nsecond specialized sort implementation, so at least there is only one\nspecialization now (which is small anyway). 
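To show the general shape of what I mean, here is a sketch with made-up names (the real heapam.c code differs in its details): the sort moves only small 8-byte elements, and each element carries an index into a parallel status array that the sort never has to touch:

```c
#include <stdint.h>

/*
 * Illustrative sketch (made-up names, not the actual patch code).  The
 * array being sorted holds only what the sort itself needs, so each
 * swap moves just 8 bytes; per-TID status lives in a separate parallel
 * array, reachable through the 'index' field after sorting.
 */
typedef struct SortableTid
{
    uint32_t    block;          /* heap block number */
    uint16_t    offset;         /* item offset within the block */
    uint16_t    index;          /* offset into parallel status array */
} SortableTid;

static inline int
tid_cmp(const SortableTid *a, const SortableTid *b)
{
    if (a->block != b->block)
        return (a->block < b->block) ? -1 : 1;
    if (a->offset != b->offset)
        return (a->offset < b->offset) ? -1 : 1;
    return 0;
}

/* Shellsort specialized for the element type, using Ciura's gap sequence */
static void
tid_shellsort(SortableTid *a, int n)
{
    static const int gaps[] = {301, 132, 57, 23, 10, 4, 1};

    for (int g = 0; g < 7; g++)
    {
        int         gap = gaps[g];

        for (int i = gap; i < n; i++)
        {
            SortableTid tmp = a[i];
            int         j = i;

            while (j >= gap && tid_cmp(&a[j - gap], &tmp) > 0)
            {
                a[j] = a[j - gap];      /* cheap 8-byte move */
                j -= gap;
            }
            a[j] = tmp;
        }
    }
}
```

After sorting, the status entry for any TID is still reachable through its 'index' field, so the bulkier per-TID bookkeeping never has to move at all.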
I found that the second\nsort specialization (added to heapam.c in v10) really wasn't pulling\nits weight, even in more extreme cases of the kind that justified the\noptimizations in the first place.\n\n-- \nPeter Geoghegan", "msg_date": "Wed, 9 Dec 2020 17:12:40 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "Hi,\nIn v11-0001-Pass-down-logically-unchanged-index-hint.patch :\n\n+ if (hasexpression)\n+ return false;\n+\n+ return true;\n\nThe above can be written as 'return !hasexpression;'\n\nFor +index_unchanged_by_update_var_walker:\n\n+ * Returns true when Var that appears within allUpdatedCols located.\n\nthe sentence seems incomplete.\n\nCurrently the return value of index_unchanged_by_update_var_walker() is the\nreverse of index_unchanged_by_update().\nMaybe it is easier to read the code if their return values have the same\nmeaning.\n\nCheers\n\nOn Wed, Dec 9, 2020 at 5:13 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Tue, Dec 1, 2020 at 2:18 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > On Tue, Dec 1, 2020 at 1:50 AM Heikki Linnakangas <hlinnaka@iki.fi>\n> wrote:\n> > > This is a wholly new concept with a lot of heuristics. It's a lot of\n> > > swallow.\n>\n> Attached is v11, which cleans everything up around the tableam\n> interface. There are only two patches in v11, since the tableam\n> refactoring made it impossible to split the second patch into a heapam\n> patch and nbtree patch (there is no reduction in functionality\n> compared to v10).\n>\n> Most of the real changes in v11 (compared to v10) are in heapam.c.\n> I've completely replaced the table_compute_xid_horizon_for_tuples()\n> interface with a new interface that supports all existing requirements\n> (from index deletions that support LP_DEAD deletion), while also\n> supporting these new requirements (e.g. bottom-up index deletion). 
So\n> now heap_compute_xid_horizon_for_tuples() becomes\n> heap_compute_delete_for_tuples(), which has different arguments but\n> the same overall structure. All of the new requirements can now be\n> thought of as additive things that we happen to use for nbtree\n> callers, that could easily also be used in other index AMs later on.\n> This means that there is a lot less new code in heapam.c.\n>\n> Prefetching of heap blocks for the new bottom-up index deletion caller\n> now works in the same way as it has worked in\n> heap_compute_xid_horizon_for_tuples() since Postgres 12 (more or\n> less). This is a significant improvement compared to my original\n> approach.\n>\n> Chaning heap_compute_xid_horizon_for_tuples() rather than adding a\n> sibling function started to make sense when v10 of the patch taught\n> regular nbtree LP_DEAD deletion (the thing that has been around since\n> PostgreSQL 8.2) to add \"extra\" TIDs to check in passing, just in case\n> we find that they're also deletable -- why not just have one function?\n> It also means that hash indexes and GiST indexes now use the\n> heap_compute_delete_for_tuples() function, though they get the same\n> behavior as before. I imagine that it would be pretty straightforward\n> to bring that same benefit to those other index AMs, since their\n> implementations are already derived from the nbtree implementation of\n> LP_DEAD deletion (and because adding extra TIDs to check in passing in\n> these other index AMs should be fairly easy).\n>\n> > > > +} TM_IndexDelete;\n> >\n> > > > +} TM_IndexStatus;\n> >\n> > > Is it really significantly faster to have two arrays? If you had just\n> > > one array, each element would be only 12 bytes long. That's not much\n> > > smaller than TM_IndexDeletes, which is 8 bytes.\n>\n> I kept this facet of the design in v11, following some deliberation. 
I\n> found that the TID sort operation appeared quite prominently in\n> profiles, and so the optimizations mostly seemed to still make sense.\n> I also kept one shellsort specialization. However, I removed the\n> second specialized sort implementation, so at least there is only one\n> specialization now (which is small anyway). I found that the second\n> sort specialization (added to heapam.c in v10) really wasn't pulling\n> its weight, even in more extreme cases of the kind that justified the\n> optimizations in the first place.\n>\n> --\n> Peter Geoghegan\n>", "msg_date": "Wed, 9 Dec 2020 18:35:39 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Wed, Dec 9, 2020 at 5:12 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Most of the real changes in v11 (compared to v10) are in heapam.c.\n> I've completely replaced the table_compute_xid_horizon_for_tuples()\n> interface with a new interface that supports all existing requirements\n> (from index deletions that support LP_DEAD deletion), while also\n> supporting these new requirements (e.g. bottom-up index deletion).\n\nI came up with a microbenchmark that is designed to give some general\nsense of how helpful it is to include \"extra\" TIDs alongside\nLP_DEAD-in-index TIDs (when they happen to point to the same table\nblock as the LP_DEAD-in-index TIDs, which we'll have to visit anyway).\nThe basic idea is simple: Add custom instrumentation that summarizes\neach B-Tree index deletion in LOG output, and then run the regression\ntests. Attached is the result of this for a \"make installcheck-world\".\n\nIf you're just looking at this thread for the first time in a while:\nnote that what I'm about to go into isn't technically bottom-up\ndeletion (though you will see some of that in the full log output if\nyou look). 
So I'm comparing the current approach to\nsimple deletion of LP_DEAD-marked index tuples to a new enhanced\napproach (that makes it a little more like bottom-up deletion, but\nonly a little).\n\nHere is some sample output (selected almost at random from a text file\nconsisting of 889 lines of similar output):\n\n... exact TIDs deleted 17, LP_DEAD tuples 4, LP_DEAD-related table blocks 2 )\n... exact TIDs deleted 38, LP_DEAD tuples 28, LP_DEAD-related table blocks 1 )\n... exact TIDs deleted 39, LP_DEAD tuples 1, LP_DEAD-related table blocks 1 )\n... exact TIDs deleted 44, LP_DEAD tuples 42, LP_DEAD-related table blocks 3 )\n... exact TIDs deleted 6, LP_DEAD tuples 2, LP_DEAD-related table blocks 2 )\n\n(The initial contents of each line were snipped here, to focus on the\nrelevant metrics.)\n\nHere we see that the actual number of TIDs/index tuples deleted often\n*vastly* exceeds the number of LP_DEAD-marked tuples (which are all we\nwould have been able to delete with the existing approach of just\ndeleting LP_DEAD items). It's pretty rare for us to fail to at least\ndelete a couple of extra TIDs. Clearly this technique is broadly\neffective, because in practice there are significant locality-related\neffects that we can benefit from. It doesn't really matter that it's\nhard to precisely describe all of these locality effects. IMV the\nquestion that really matters is whether or not the cost of trying is\nconsistently very low (relative to the cost of our current approach to\nsimple LP_DEAD deletion). We do need to understand that fully.\n\nIt's tempting to think about this quantitatively (and it also bolsters\nthe case for the patch), but that misses the point. The right way to\nthink about this is as a *qualitative* thing. The presence of LP_DEAD\nbits gives us certain reliable information about the nature of the\nindex tuple (that it is dead-to-all, and therefore safe to delete),\nbut it also *suggests* quite a lot more than that. 
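The selection rule being described is simple to state. Here is a sketch of it (made-up names, illustrative only -- not the patch's actual code): with the caller's TIDs sorted by heap block, every TID on a block that has at least one LP_DEAD-marked TID gets checked, because we have to visit that table block anyway:

```c
#include <stdint.h>

/* Illustrative sketch only -- made-up names, not the patch's code */
typedef struct DelTid
{
    uint32_t    block;          /* heap block the index tuple points to */
    uint16_t    offset;         /* item offset within the block */
    int         lp_dead;        /* was the index tuple LP_DEAD-marked? */
} DelTid;

/*
 * 'tids' must already be sorted by block.  Records (in 'chosen') the
 * array positions of every TID whose heap block contains at least one
 * LP_DEAD TID, and returns how many were chosen.
 */
static int
select_in_passing(const DelTid *tids, int n, int *chosen)
{
    int         nchosen = 0;
    int         i = 0;

    while (i < n)
    {
        int         j = i;
        int         block_has_dead = 0;

        /* scan one heap block's worth of TIDs */
        while (j < n && tids[j].block == tids[i].block)
        {
            if (tids[j].lp_dead)
                block_has_dead = 1;
            j++;
        }

        /* if any TID on the block is LP_DEAD, check all of them */
        if (block_has_dead)
        {
            for (int k = i; k < j; k++)
                chosen[nchosen++] = k;
        }
        i = j;
    }
    return nchosen;
}
```

The extra TIDs (the non-LP_DEAD ones on a block we visit anyway) are what make the "exact TIDs deleted" numbers above so much larger than the LP_DEAD counts.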
In practice bloat\nis usually highly correlated/concentrated, especially when we limit\nour consideration to workloads where bloat is noticeable in any\npractical sense. So we're very well advised to look for nearby\ndeletable index tuples in passing -- especially since it's practically\nfree to do so. (Which is what the patch does here, of course.)\n\nLet's compare this to an extreme variant of my patch that runs the\nsame test, to see what changes. Consider a variant that exhaustively\nchecks every index tuple on the page at the point of a simple LP_DEAD\ndeletion operation, no matter what. Rather than only including those\nTIDs that happen to be on the same heap/table blocks (and thus are\npractically free to check), we include all of them. This design isn't\nacceptable in the real world because it does a lot of unnecessary I/O,\nbut that shouldn't invalidate any comparison I make here. This is\nstill a reasonable approximation of a version of the patch with\nmagical foreknowledge of where to find dead TIDs. It's an Oracle\n(ahem) that we can sensibly compare to the real patch within certain\nconstraints.\n\nThe results of this comparative analysis seem to suggest something\nimportant about the general nature of what's going on. The results\nare: There are only 844 deletion operations total with the Oracle.\nWhile this is less than the actual patch's 889 deletion operations,\nyou would expect a bigger improvement from using what is after all\nsupposed to apply magic! This suggests to me that the precise\nintervention of the patch here (the new LP_DEAD deletion stuff) has an\noutsized impact. The correlations that naturally exist make this\nasymmetrical payoff-to-cost situation possible. 
Simple cheap\nheuristics once again go a surprisingly long way, kind of like\nbottom-up deletion itself.\n\n--\nPeter Geoghegan", "msg_date": "Sat, 19 Dec 2020 15:59:59 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Wed, Dec 9, 2020 at 5:12 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is v11, which cleans everything up around the tableam\n> interface. There are only two patches in v11, since the tableam\n> refactoring made it impossible to split the second patch into a heapam\n> patch and nbtree patch (there is no reduction in functionality\n> compared to v10).\n\nAttached is v12, which fixed bitrot against the master branch. This\nversion has significant comment and documentation revisions. It is\nfunctionally equivalent to v11, though.\n\nI intend to commit the patch in the next couple of weeks. While it\ncertainly would be nice to get a more thorough review, I don't feel\nthat it is strictly necessary. The patch provides very significant\nbenefits with certain workloads that have traditionally been\nconsidered an Achilles' heel for Postgres. Even zheap doesn't provide\na solution to these problems. 
The only thing that I can think of that\nmight reasonably be considered in competition with this design is\nWARM, which hasn't been under active development since 2017 (I assume\nthat it has been abandoned by those involved).\n\nI should also point out that the design doesn't change the on-disk\nformat in any way, and so doesn't commit us to supporting something\nthat might become onerous in the event of somebody else finding a\nbetter way to address at least some of the same problems.\n\n-- \nPeter Geoghegan", "msg_date": "Wed, 30 Dec 2020 18:54:45 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "Thu, 31 Dec 2020 at 03:55, Peter Geoghegan <pg@bowt.ie>:\n\n> Attached is v12, which fixed bitrot against the master branch. This\n> version has significant comment and documentation revisions. It is\n> functionally equivalent to v11, though.\n>\n> I intend to commit the patch in the next couple of weeks. While it\n> certainly would be nice to get a more thorough review, I don't feel\n> that it is strictly necessary. The patch provides very significant\n> benefits with certain workloads that have traditionally been\n> considered an Achilles' heel for Postgres. Even zheap doesn't provide\n> a solution to these problems. The only thing that I can think of that\n> might reasonably be considered in competition with this design is\n> WARM, which hasn't been under active development since 2017 (I assume\n> that it has been abandoned by those involved).\n>\n\nI am planning to look into this patch in the next few days.\n\n-- \nVictor Yegorov", "msg_date": "Thu, 31 Dec 2020 14:50:07 +0100", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "Hi, Peter:\nHappy New Year.\n\nFor v12-0001-Pass-down-logically-unchanged-index-hint.patch\n\n+ if (hasexpression)\n+ return false;\n+\n+ return true;\n\nThe above can be written as return !hasexpression;\nThe negation is due to the return value from\nindex_unchanged_by_update_var_walker() is inverse of index unchanged.\nIf you align the meaning of return value\nfrom index_unchanged_by_update_var_walker() the same way\nas index_unchanged_by_update(), negation is not needed.\n\nFor v12-0002-Add-bottom-up-index-deletion.patch :\n\nFor struct xl_btree_delete:\n\n+ /* DELETED TARGET OFFSET NUMBERS FOLLOW */\n+ /* UPDATED TARGET OFFSET NUMBERS FOLLOW */\n+ /* UPDATED TUPLES METADATA (xl_btree_update) ARRAY FOLLOWS */\n\nI guess the comment is for illustration purposes. 
Maybe you can write the\ncomment in lower case.\n\n+#define BOTTOMUP_FAVORABLE_STRIDE 3\n\nAdding a comment on why the number 3 is chosen would be helpful for people\nto understand the code.\n\nCheers\n\nOn Wed, Dec 30, 2020 at 6:55 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Wed, Dec 9, 2020 at 5:12 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Attached is v11, which cleans everything up around the tableam\n> > interface. There are only two patches in v11, since the tableam\n> > refactoring made it impossible to split the second patch into a heapam\n> > patch and nbtree patch (there is no reduction in functionality\n> > compared to v10).\n>\n> Attached is v12, which fixed bitrot against the master branch. This\n> version has significant comment and documentation revisions. It is\n> functionally equivalent to v11, though.\n>\n> I intend to commit the patch in the next couple of weeks. While it\n> certainly would be nice to get a more thorough review, I don't feel\n> that it is strictly necessary. The patch provides very significant\n> benefits with certain workloads that have traditionally been\n> considered an Achilles' heel for Postgres. Even zheap doesn't provide\n> a solution to these problems. 
The only thing that I can think of that\n> might reasonably be considered in competition with this design is\n> WARM, which hasn't been under active development since 2017 (I assume\n> that it has been abandoned by those involved).\n>\n> I should also point out that the design doesn't change the on-disk\n> format in any way, and so doesn't commit us to supporting something\n> that might become onerous in the event of somebody else finding a\n> better way to address at least some of the same problems.\n>\n> --\n> Peter Geoghegan\n>", "msg_date": "Thu, 31 Dec 2020 11:02:29 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "Thu, 31 Dec 2020 at 20:01, Zhihong Yu <zyu@yugabyte.com>:\n\n> For v12-0001-Pass-down-logically-unchanged-index-hint.patch\n>\n> + if (hasexpression)\n> + return false;\n> +\n> + return true;\n>\n> The above can be written as 'return !hasexpression;'\n>\n\nTo be honest, I prefer the way Peter has it in his patch.\nYes, it's possible to shorten this part. But readability is hurt — for\ncurrent code I just read it, for the suggested change I need to think about\nit.\n\n-- \nVictor Yegorov", "msg_date": "Thu, 31 Dec 2020 20:14:46 +0100", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Thu, Dec 31, 2020 at 11:01 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> Happy New Year.\n\nHappy New Year.\n\n> For v12-0001-Pass-down-logically-unchanged-index-hint.patch\n>\n> + if (hasexpression)\n> + return false;\n> +\n> + return true;\n>\n> The above can be written as return !hasexpression;\n> The negation is due to the return value from index_unchanged_by_update_var_walker() is inverse of index unchanged.\n> If you align the meaning of return value from index_unchanged_by_update_var_walker() the same way as index_unchanged_by_update(), negation is not needed.\n\nI don't think that that represents an improvement. The negation seems\nclearer to me because we're negating the *absence* of something that\nwe search for more or less linearly (a modified column from the\nindex). This happens when determining whether to do an extra thing\n(provide the \"logically unchanged\" hint to this particular\nindex/aminsert() call). To me, the negation reflects that high level\nstructure.\n\n> For struct xl_btree_delete:\n>\n> + /* DELETED TARGET OFFSET NUMBERS FOLLOW */\n> + /* UPDATED TARGET OFFSET NUMBERS FOLLOW */\n> + /* UPDATED TUPLES METADATA (xl_btree_update) ARRAY FOLLOWS */\n>\n> I guess the comment is for illustration purposes. 
Maybe you can write the comment in lower case.\n\nThe comment is written like this (in higher case) to be consistent\nwith an existing convention. If this was a green field situation I\nsuppose I might not write it that way, but that's not how I assess\nthese things. I always try to give the existing convention the benefit\nof the doubt. In this case I don't think that it matters very much,\nand so I'm inclined to stick with the existing style.\n\n> +#define BOTTOMUP_FAVORABLE_STRIDE 3\n>\n> Adding a comment on why the number 3 is chosen would be helpful for people to understand the code.\n\nNoted - I will add something about that to the next revision.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 31 Dec 2020 16:23:38 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On 02/12/2020 00:18, Peter Geoghegan wrote:\n> On Tue, Dec 1, 2020 at 1:50 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> On 30/11/2020 21:50, Peter Geoghegan wrote:\n>>> +} TM_IndexDelete;\n> \n>>> +} TM_IndexStatus;\n> \n>> Is it really significantly faster to have two arrays? If you had just\n>> one array, each element would be only 12 bytes long. That's not much\n>> smaller than TM_IndexDeletes, which is 8 bytes.\n> \n> Yeah, but the swap operations really matter here. At one point I found\n> that to be the case, anyway. That might no longer be true, though. It\n> might be that the code became less performance critical because other\n> parts of the design improved, resulting in the code getting called\n> less frequently. But if that is true then it has a lot to do with the\n> power-of-two bucketing that you go on to ask about -- that helped\n> performance a lot in certain specific cases (as I go into below).\n> \n> I will add a TODO item for myself, to look into this again. 
You may\n> well be right.\n> \n>>> + /* First sort caller's array by TID */\n>>> + heap_tid_shellsort(delstate->deltids, delstate->ndeltids);\n> \n>> Instead of sorting the array by TID, wouldn't it be enough to sort by\n>> just the block numbers?\n> \n> I don't understand. Yeah, I guess that we could do our initial sort of\n> the deltids array (the heap_tid_shellsort() call) just using\n> BlockNumber (not TID). But OTOH there might be some small locality\n> benefit to doing a proper TID sort at the level of each heap page. And\n> even if there isn't any such benefit, does it really matter?\n\nYou said above that heap_tid_shellsort() is very performance critical, \nand that's why you use the two arrays approach. If it's so performance \ncritical that swapping 8 bytes vs 12 byte array elements makes a \ndifference, I would guess that comparing TID vs just the block numbers \nwould also make a difference.\n\nIf you really want to shave cycles, you could also store BlockNumber and \nOffsetNumber in the TM_IndexDelete array, instead of ItemPointerData. \nWhat's the difference, you might ask? ItemPointerData stores the block \nnumber as two 16 bit integers that need to be reassembled into a 32 bit \ninteger in the ItemPointerGetBlockNumber() macro.\n\nMy argument here is two-pronged: If this is performance critical, you \ncould do these additional optimizations. If it's not, then you don't \nneed the two-arrays trick; one array would be simpler.\n\nI'm not sure how performance critical this really is, or how many \ninstructions each of these optimizations might shave off, so of course I \nmight be wrong and the tradeoff you have in the patch now really is the \nbest. However, my intuition would be to use a single array, because \nthat's simpler, and do all the other optimizations instead: store the \ntid in the struct as separate BlockNumber and OffsetNumber fields, and \nsort only by the block number. 
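To make the layout difference concrete, here is a small standalone C sketch. The struct and field names are simplified stand-ins for illustration only (the real ItemPointerData definition differs in detail); the point is just that the packed form pays a reassembly cost on every block-number read, while the split form can be compared directly:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/*
 * Simplified stand-in for a packed TID: the 32-bit block number is
 * stored as two 16-bit halves, so every read of the block number
 * requires reassembly, much like ItemPointerGetBlockNumber() does.
 */
typedef struct PackedTid
{
    uint16_t bi_hi;             /* high half of block number */
    uint16_t bi_lo;             /* low half of block number */
    uint16_t posid;             /* offset number within the block */
} PackedTid;

/* The alternative layout suggested above: block number kept whole. */
typedef struct SplitTid
{
    uint32_t block;             /* directly comparable block number */
    uint16_t posid;
} SplitTid;

static uint32_t
packed_get_block(PackedTid tid)
{
    /* Reassemble the two 16-bit halves into one 32-bit value */
    return ((uint32_t) tid.bi_hi << 16) | (uint32_t) tid.bi_lo;
}

/* qsort comparator that orders by block number only, ignoring offsets */
static int
split_tid_cmp_block(const void *a, const void *b)
{
    uint32_t ba = ((const SplitTid *) a)->block;
    uint32_t bb = ((const SplitTid *) b)->block;

    return (ba > bb) - (ba < bb);
}
```

With the split layout, sorting by block number alone is just a qsort() with the comparator above; no per-comparison bit shifting is needed.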
Those optimizations are very deep in the \nlow level functions and won't add much to the mental effort of \nunderstanding the algorithm at a high level.\n\n>>> * While in general the presence of promising tuples (the hint that index\n>>> + * AMs provide) is the best information that we have to go on, it is based\n>>> + * on simple heuristics about duplicates in indexes that are understood to\n>>> + * have specific flaws. We should not let the most promising heap pages\n>>> + * win or lose on the basis of _relatively_ small differences in the total\n>>> + * number of promising tuples. Small differences between the most\n>>> + * promising few heap pages are effectively ignored by applying this\n>>> + * power-of-two bucketing scheme.\n>>> + *\n>>\n>> What are the \"specific flaws\"?\n> \n> I just meant the obvious: the index AM doesn't actually know for sure\n> that there are any old versions on the leaf page that it will\n> ultimately be able to delete. This uncertainty needs to be managed,\n> including inside heapam.c. Feel free to suggest a better way of\n> expressing that sentiment.\n\nHmm, maybe: \"... is the best information that we have to go on, it is \njust a guess based on simple heuristics about duplicates in indexes\".\n\n>> I understand that this is all based on rough heuristics, but is there\n>> any advantage to rounding/bucketing the numbers like this? Even if small\n>> differences in the total number of promising tuple is essentially noise\n>> that can be safely ignored, is there any harm in letting those small\n>> differences guide the choice? Wouldn't it be the essentially the same as\n>> picking at random, or are those small differences somehow biased?\n> \n> Excellent question! It actually helps enormously, especially with low\n> cardinality data that makes good use of the deduplication optimization\n> (where it is especially important to keep the costs and the benefits\n> in balance). 
This has been validated by benchmarking.\n> \n> This design naturally allows the heapam.c code to take advantage of\n> both temporal and spatial locality. For example, let's say that you\n> have several indexes all on the same table, which get lots of non-HOT\n> UPDATEs, which are skewed. Naturally, the heap TIDs will match across\n> each index -- these are index entries that are needed to represent\n> successor versions (which are logically unchanged/version duplicate\n> index tuples from the point of view of nbtree/nbtdedup.c). Introducing\n> a degree of determinism to the order in which heap blocks are\n> processed naturally takes advantage of the correlated nature of the\n> index bloat. It naturally makes it much more likely that the\n> also-correlated bottom-up index deletion passes (that occur across\n> indexes on the same table) each process the same heap blocks close\n> together in time -- with obvious performance benefits.\n> \n> In the extreme (but not particularly uncommon) case of non-HOT UPDATEs\n> with many low cardinality indexes, each heapam.c call will end up\n> doing *almost the same thing* across indexes. So we're making the\n> correlated nature of the bloat (which is currently a big problem) work\n> in our favor -- turning it on its head, you could say. Highly\n> correlated bloat is not the exception -- it's actually the norm in the\n> cases we're targeting here. If it wasn't highly correlated then it\n> would already be okay to rely on VACUUM to get around to cleaning it\n> later.\n> \n> This power-of-two bucketing design probably also helps when there is\n> only one index. I could go into more detail on that, plus other\n> variations, but perhaps the multiple index example suffices for now. I\n> believe that there are a few interesting kinds of correlations here --\n> I bet you can think of some yourself. 
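The bucketing idea described above can be sketched in a few lines of standalone C. This is only an illustration of the concept (counts collapsing into power-of-two buckets so that near-ties are treated as ties); the real heapam.c heuristics differ in detail:

```c
#include <assert.h>

/*
 * Toy illustration of power-of-two bucketing: two heap pages whose
 * promising-tuple counts fall into the same power-of-two bucket are
 * treated as ties, so relatively small differences between the most
 * promising pages do not decide which page wins.
 */
static int
promising_bucket(int npromising)
{
    int bucket = 0;

    /* Count how many halvings it takes to reach 1 (floor of log2) */
    while (npromising > 1)
    {
        npromising >>= 1;
        bucket++;
    }
    return bucket;
}

/* Compare two heap pages by bucketed promising-tuple count only */
static int
page_promising_cmp(int a_promising, int b_promising)
{
    int ba = promising_bucket(a_promising);
    int bb = promising_bucket(b_promising);

    return (ba > bb) - (ba < bb);
}
```

With this scheme, pages with 5 and 7 promising tuples compare as equal, while 4 versus 16 is still a decisive difference.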
(Of course it's also possible
> and even likely that heap block correlation won't be important at all,
> but my response is \"what specifically is the harm in being open to the
> possibility?\".)

I see. Would be good to explain that pattern with multiple indexes in 
the comment.

- Heikki


", "msg_date": "Mon, 4 Jan 2021 14:08:18 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Thu, Dec 31, 2020 at 03:55, Peter Geoghegan <pg@bowt.ie> wrote:

> Attached is v12, which fixed bitrot against the master branch. This
> version has significant comment and documentation revisions. It is
> functionally equivalent to v11, though.
>
> I intend to commit the patch in the next couple of weeks. While it
> certainly would be nice to get a more thorough review, I don't feel
> that it is strictly necessary. The patch provides very significant
> benefits with certain workloads that have traditionally been
> considered an Achilles' heel for Postgres. Even zheap doesn't provide
> a solution to these problems. The only thing that I can think of that
> might reasonably be considered in competition with this design is
> WARM, which hasn't been under active development since 2017 (I assume
> that it has been abandoned by those involved).
>

I've looked through the v12 patch.

I like the new outline:

- _bt_delete_or_dedup_one_page() is the main entry for the new code
- first, _bt_simpledel_pass() attempts an improved cleanup of LP_DEAD entries
- then (if necessary) _bt_bottomupdel_pass() performs bottom-up deletion
- finally, we perform _bt_dedup_pass() to deduplicate

We split the leaf page only if all the actions above failed to provide
enough space.

Some comments on the code.

v12-0001
--------

1. For the following comment

+ * Only do this for key columns. 
A change to a non-key column within an
+ * INCLUDE index should not be considered because that's just payload to
+ * the index (they're not unlike table TIDs to the index AM).

The last part of it (in the parenthesis) is difficult to grasp due to
the double negation (not unlike). I think it's better to rephrase it.

2. After reading the patch, I also think that the fact that
index_unchanged_by_update()
and index_unchanged_by_update_var_walker() return opposite bool states
(i.e. when the latter returns true, the former returns false) is a bit
misleading.

Although the logic as it is looks fine, maybe
index_unchanged_by_update_var_walker()
can be renamed to avoid this confusion, to something like
index_expression_changed_walker()?

v12-0002
--------

1. Thanks for the comments, they're well made and do help to read the code.

2. I'm not sure the bottomup_delete_items parameter is very helpful. In
order to disable
bottom-up deletion, a DBA needs some way to measure its impact on a
particular index.
Currently I do not see how to achieve this. Not sure if this is overly
important, though, as
you have a similar parameter for the deduplication.

3. It feels like indexUnchanged would be better as indexChanged, with its
usage negated in the code.
 As !indexChanged reads more naturally than !indexUnchanged, at least to
me.

This is all I have. I agree that this code is pretty close to being
committed.

Now for the tests.

First, I ran a 2-hour-long case with the same setup as I used in my e-mail
from the 15th of November.
I found no difference between patch and master whatsoever. Which makes me
think that current
master is quite good at keeping bloat under control (not sure if this is
an effect of
the 4228817449 commit or deduplication).

I created another setup (see attached testcases). Basically, I emulated
queue operations (INSERT at the end and DELETE

-- 
Victor Yegorov
", "msg_date": "Mon, 4 Jan 2021 17:28:11 +0100", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Mon, Jan 4, 2021 at 17:28, Victor Yegorov <vyegorov@gmail.com> wrote:

> I created another setup (see attached testcases). Basically, I emulated
> queue operations (INSERT at the end and DELETE
>

Sorry, hit Send too early.

So, I emulated queue operations (INSERT at the end and DELETE from the
head). And also made 5-minute transactions
appear in the background for the whole duration of the test. 
3 pgbench instances were run in parallel on a scale 3000 bench database
with modifications (attached).

Master
------
 relname               | nrows     | blk_before | mb_before | blk_after | mb_after | diff
-----------------------+-----------+------------+-----------+-----------+----------+--------
 pgbench_accounts      | 300000000 |    4918033 |   38422.1 |   5065575 |  39574.8 |  +3.0%
 accounts_mtime        | 300000000 |    1155119 |    9024.4 |   1287656 |  10059.8 | +11.5%
 fiver                 | 300000000 |     427039 |    3336.2 |    567755 |   4435.6 | +33.0%
 pgbench_accounts_pkey | 300000000 |     822573 |    6426.4 |   1033344 |   8073.0 | +25.6%
 score                 | 300000000 |     284022 |    2218.9 |    458502 |   3582.0 | +61.4%
 tenner                | 300000000 |     346050 |    2703.5 |    417985 |   3265.5 | +20.8%
(6 rows)

DB size: 65.2..72.3 (+7.1GB / +10.9%)
TPS: 2297 / 495

Patched
-------
 relname               | nrows     | blk_before | mb_before | blk_after | mb_after | diff
-----------------------+-----------+------------+-----------+-----------+----------+--------
 pgbench_accounts      | 300000000 |    4918033 |   38422.1 |   5067500 |  39589.8 |  +3.0%
 accounts_mtime        | 300000000 |    1155119 |    9024.4 |   1283441 |  10026.9 | +11.1%
 fiver                 | 300000000 |     427039 |    3336.2 |    429101 |   3352.4 |  +0.5%
 pgbench_accounts_pkey | 300000000 |     822573 |    6426.4 |    826056 |   6453.6 |  +0.4%
 score                 | 300000000 |     284022 |    2218.9 |    285465 |   2230.2 |  +0.5%
 tenner                | 300000000 |     346050 |    2703.5 |    347695 |   2716.4 |  +0.5%
(6 rows)

DB size: 65.2..67.5 (+2.3GB / +3.5%)
TPS: 2216 / 492

As you can see, TPS are very much similar, but the fact that we have no
bloat for the patched version makes me very happy!

On the graphs, you can clearly see extra write activity performed by the
backends of the patched version.


-- 
Victor Yegorov", "msg_date": "Mon, 4 Jan 2021 23:07:13 +0100", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Mon, Jan 4, 2021 at 
4:08 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> You said above that heap_tid_shellsort() is very performance critical,\n> and that's why you use the two arrays approach. If it's so performance\n> critical that swapping 8 bytes vs 12 byte array elements makes a\n> difference, I would guess that comparing TID vs just the block numbers\n> would also make a difference.\n>\n> If you really want to shave cycles, you could also store BlockNumber and\n> OffsetNumber in the TM_IndexDelete array, instead of ItemPointerData.\n> What's the difference, you might ask? ItemPointerData stores the block\n> number as two 16 bit integers that need to be reassembled into a 32 bit\n> integer in the ItemPointerGetBlockNumber() macro.\n>\n> My argument here is two-pronged: If this is performance critical, you\n> could do these additional optimizations. If it's not, then you don't\n> need the two-arrays trick; one array would be simpler.\n\nThat's reasonable. The reason why I haven't been more decisive here is\nbecause the question of whether or not it matters is actually very\ncomplicated, for reasons that have little to do with sorting. The more\neffective the mechanism is each time (the more bytes it allows nbtree\nto free from each leaf page), the less often it is called, and the\nless performance critical the overhead per operation is. On the other\nhand there are a couple of other questions about what we do in\nheapam.c that aren't quite resolved yet (e.g. exact \"favorable\nblocks\"/prefetching behavior in bottom-up case), that probably affect\nhow important the heap_tid_shellsort() microoptimisations are.\n\nI think that it makes sense to make a final decision on this at the\nlast minute, once everything else is resolved, since the implicit\ndependencies make any other approach much too complicated. I agree\nthat this kind of microoptimization is best avoided, but I really\ndon't want to have to worry about regressions in workloads that I now\nunderstand fully. 
I think that the sort became less important on\nperhaps 2 or 3 occasions during development, even though that was\nnever really the goal that I had in mind in each case.\n\nI'll do my best to avoid it.\n\n> Hmm, maybe: \"... is the best information that we have to go on, it is\n> just a guess based on simple heuristics about duplicates in indexes\".\n\nI'll add something like that to the next revision.\n\n> I see. Would be good to explain that pattern with multiple indexes in\n> the comment.\n\nWill do -- it is the single best example of how heap block locality\ncan matter with a real workload, so it makes sense to go with it in\nexplanatory comments.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 5 Jan 2021 13:54:30 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Mon, Jan 4, 2021 at 8:28 AM Victor Yegorov <vyegorov@gmail.com> wrote:\n> I've looked through the v12 patch.\n>\n> I like the new outline:\n>\n> - _bt_delete_or_dedup_one_page() is the main entry for the new code\n> - first we try _bt_simpledel_pass() does improved cleanup of LP_DEAD entries\n> - then (if necessary) _bt_bottomupdel_pass() for bottomup deletion\n> - finally, we perform _bt_dedup_pass() to deduplication\n>\n> We split the leaf page only if all the actions above failed to provide enough space.\n\nI'm thinking of repeating the LP_DEAD enhancement within GiST and hash\nshortly after this, as follow-up work. Their implementation of LP_DEAD\ndeletion was always based on the nbtree original, and I think that it\nmakes sense to keep everything in sync. The simple deletion\nenhancement probably makes just as much sense in these other places.\n\n> + * Only do this for key columns. 
A change to a non-key column within an\n> + * INCLUDE index should not be considered because that's just payload to\n> + * the index (they're not unlike table TIDs to the index AM).\n>\n> The last part of it (in the parenthesis) is difficult to grasp due to\n> the double negation (not unlike). I think it's better to rephrase it.\n\nOkay, will do.\n\n> 2. After reading the patch, I also think, that fact, that index_unchanged_by_update()\n> and index_unchanged_by_update_var_walker() return different bool states\n> (i.e. when the latter returns true, the first one returns false) is a bit misleading.\n\n> Although logic as it is looks fine, maybe index_unchanged_by_update_var_walker()\n> can be renamed to avoid this confusion, to smth like index_expression_changed_walker() ?\n\nI agree. I'll use the name index_expression_changed_walker() in the\nnext revision.\n\n> 2. I'm not sure the bottomup_delete_items parameter is very helpful. In order to disable\n> bottom-up deletion, DBA needs somehow to measure it's impact on a particular index.\n> Currently I do not see how to achieve this. Not sure if this is overly important, though, as\n> you have a similar parameter for the deduplication.\n\nI'll take bottomup_delete_items out, since I think that you may be\nright about that. It can be another \"decision to recheck mid-beta\"\nthing (on the PostgreSQL 14 open items list), so we can delay making a\nfinal decision on it until after it gets tested by the broadest\npossible group of people.\n\n> 3. It feels like indexUnchanged is better to make indexChanged and negate its usage in the code.\n> As !indexChanged reads more natural than !indexUnchanged, at least to me.\n\nI don't want to do that because then we're negating \"indexChanged\" as\npart of our gating condition to the bottom-up deletion pass code. To\nme this feels wrong, since the hint only exists to be used to trigger\nindex tuple deletion. 
This is unlikely to ever change.\n\n> First, I run a 2-hour long case with the same setup as I used in my e-mail from 15 of November.\n> I found no difference between patch and master whatsoever. Which makes me think, that current\n> master is quite good at keeping better bloat control (not sure if this is an effect of\n> 4228817449 commit or deduplication).\n\nCommit 4228817449 can't have improved the performance of the master\nbranch -- that commit was just about providing a *correct*\nlatestRemovedXid value. It cannot affect how many index tuples get\ndeleted per pass, or anything like that.\n\nNot surprised that you didn't see many problems with the master branch\non your first attempt. It's normal for there to be non-linear effects\nwith this stuff, and these are very hard (maybe even impossible) to\nmodel. For example, even with artificial test data you often see\ndistinct \"waves\" of page splits, which is a phenomenon pretty well\ndescribed by the \"Waves of misery after index creation\" paper [1].\nIt's likely that your original stress test case (the 15 November one)\nwould have shown significant bloat if you just kept it up for long\nenough. One goal for this project is to make the performance\ncharacteristics more predictable. The performance characteristics are\nunpredictable precisely because they're pathological. B-Trees will\nalways have non-linear performance characteristics, but they can be\nmade a lot less chaotic in practice.\n\nTo be honest, I was slightly surprised that your more recent stress\ntest (the queue thing) worked out so well for the patch. But not too\nsurprised, because I don't necessarily expect to understand how the\npatch can help for any given workload -- this is dynamic behavior\nthat's part of a complex system. I first thought that maybe the\npresence of the INSERTs and DELETEs would mess things up. But I was\nwrong -- everything still worked well, probably because bottom-up\nindex deletion \"cooperated\" with deduplication. 
The INSERTs could not\ntrigger a bottom-up deletion (which might have been useful for these\nparticular INSERTs), but that didn't actually matter because\ndeduplication was triggered instead, which \"held the line\" for long\nenough for bottom-up deletion to eventually get triggered (by non-HOT\nUPDATEs).\n\nAs I've said before, I believe that the workload should \"figure out\nthe best strategy for handling bloat on its own\". A diversity of\nstrategies are available, which can adapt to many different kinds of\nworkloads organically. That kind of approach is sometimes necessary\nwith a complex system IMV.\n\n[1] https://btw.informatik.uni-rostock.de/download/tagungsband/B2-2.pdf\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 7 Jan 2021 15:07:56 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Thu, Jan 7, 2021 at 3:07 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I agree. I'll use the name index_expression_changed_walker() in the\n> next revision.\n\nAttached is v13, which has this tweak, and other miscellaneous cleanup\nbased on review from both Victor and Heikki. I consider this version\nof the patch to be committable. I intend to commit something close to\nit in the next week, probably no later than Thursday. I still haven't\ngot to the bottom of the shellsort question raised by Heikki. I intend\nto do further performance validation before committing the patch. I\nwill look into the shellsort thing again as part of this final\nperformance validation work -- perhaps I can get rid of the\nspecialized shellsort implementation entirely, simplifying the state\nstructs added to tableam.h. 
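As background, the algorithm family behind the specialized heap_tid_shellsort() can be sketched as follows. This is a generic shellsort over plain ints, not the actual Postgres code; the gap sequence here is a truncated Ciura sequence, whereas a specialized implementation can hardcode gaps tuned to its expected array sizes:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Generic shellsort over an int array: repeated insertion sort with
 * decreasing gaps, finishing with an ordinary gap-1 insertion sort.
 */
static void
int_shellsort(int *a, size_t n)
{
    static const size_t gaps[] = {132, 57, 23, 10, 4, 1};

    for (size_t g = 0; g < sizeof(gaps) / sizeof(gaps[0]); g++)
    {
        size_t gap = gaps[g];

        for (size_t i = gap; i < n; i++)
        {
            int tmp = a[i];
            size_t j = i;

            /* Insertion sort, but comparing elements "gap" apart */
            while (j >= gap && a[j - gap] > tmp)
            {
                a[j] = a[j - gap];
                j -= gap;
            }
            a[j] = tmp;
        }
    }
}
```

Because the final gap is 1, the last pass is a plain insertion sort, which guarantees a correctly sorted result; the earlier gapped passes just leave very little work for it to do.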
(As I said before, it seems best to\naddress this last of all to avoid making performance validation even\nmore complicated.)\n\nThis version of the patch is notable for removing the index storage\nparam, and for having lots of comment updates and documentation\nconsolidation, particularly in heapam.c. Many of the latter changes\nare based on feedback from Heikki. Note that all of the discussion of\nheapam level locality has been consolidated, and is now mostly\nconfined to a fairly large comment block over\nbottomup_nblocksfavorable() in heapam.c. I also cut down on redundancy\namong comments about the design at the whole-patch level. A small\namount of redundancy in design docs/comments is a good thing IMV. It\nwas hard to get the balance exactly right, since bottom-up index\ndeletion is by its very nature a mechanism that requires the index AM\nand the tableam to closely cooperate -- which is a novel thing.\n\nThis isn't 100% housekeeping changes, though. I did add one new minor\noptimization to v13: We now count the heap block of the incoming new\nitem index tuple's TID (the item that won't fit on the leaf page\nas-is) as an LP_DEAD-related block for the purposes of determining\nwhich heap blocks will be visited during simple index tuple deletion.\nThe extra cost of doing this is low: when the new item heap block is\nvisited purely due to this new behavior, we're still practically\nguaranteed to not get a buffer miss to read from the heap page. The\nreason should be obvious: the executor is currently in the process of\nmodifying that same heap page anyway. The benefits are also high\nrelative to the cost. This heap block in particular seems to be very\npromising as a place to look for deletable TIDs (I tested this with\ncustom instrumentation and microbenchmarks). I believe that this\neffect exists because by its very nature garbage is often concentrated\nin recently modified pages. 
This is per the generational hypothesis,
an important part of the theory behind GC algorithms for automated
memory management (GC theory seems to have real practical relevance to
the GC/VACUUM problems in Postgres, at least at a high level).

Of course we still won't do any simple deletion operations unless
there is at least one index tuple with its LP_DEAD bit set in the
first place at the point that it looks like the page will overflow (no
change there). As always, we're just piggy-backing some extra work on
top of an expensive operation that needed to take place anyway. I
couldn't resist adding this new minor optimization at this late stage,
because it is such a bargain.

Thanks
--
Peter Geoghegan", "msg_date": "Sun, 10 Jan 2021 16:06:54 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Mon, Jan 11, 2021 at 01:07, Peter Geoghegan <pg@bowt.ie> wrote:

>
> Attached is v13, which has this tweak, and other miscellaneous cleanup
> based on review from both Victor and Heikki. I consider this version
> of the patch to be committable. I intend to commit something close to
> it in the next week, probably no later than Thursday. I still haven't
> got to the bottom of the shellsort question raised by Heikki. I intend
> to do further performance validation before committing the patch. I
> will look into the shellsort thing again as part of this final
> performance validation work -- perhaps I can get rid of the
> specialized shellsort implementation entirely, simplifying the state
> structs added to tableam.h. (As I said before, it seems best to
> address this last of all to avoid making performance validation even
> more complicated.)
>

I've checked this version quickly. 
It applies and compiles without issues.
`make check` and `make check-world` reported no issues.

But `make installcheck-world` failed on:
…
test explain ... FAILED 22 ms
test event_trigger ... ok 178 ms
test fast_default ... ok 262 ms
test stats ... ok 586 ms

========================
 1 of 202 tests failed.
========================

(see attached diff). It doesn't look like the fault of this patch, though.

I suppose you plan to send another revision before committing this.
Therefore I didn't perform any tests here, will wait for the next version.


-- 
Victor Yegorov", "msg_date": "Mon, 11 Jan 2021 21:19:25 +0100", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Mon, Jan 11, 2021 at 12:19 PM Victor Yegorov <vyegorov@gmail.com> wrote:
> (see attached diff). It doesn't look like the fault of this patch, though.
>
> I suppose you plan to send another revision before committing this.
> Therefore I didn't perform any tests here, will wait for the next version.

I imagine that this happened because you have track_io_timing=on in
postgresql.conf. Doesn't the same failure take place with the current
master branch?

I have my own way of running the locally installed Postgres when I
want \"make installcheck\" to pass that specifically accounts for this
(and a few other similar things).

-- 
Peter Geoghegan", "msg_date": "Mon, 11 Jan 2021 13:09:48 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Mon, Jan 11, 2021 at 22:10, Peter Geoghegan <pg@bowt.ie> wrote:

> I imagine that this happened because you have track_io_timing=on in
> postgresql.conf. 
Doesn't the same failure take place with the current
> master branch?
>
> I have my own way of running the locally installed Postgres when I
> want \"make installcheck\" to pass that specifically accounts for this
> (and a few other similar things).
>

Oh, right, haven't thought of this. Thanks for pointing that out.

Now everything looks good!


-- 
Victor Yegorov", "msg_date": "Mon, 11 Jan 2021 22:26:53 +0100", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Sun, Jan 10, 2021 at 4:06 PM Peter Geoghegan <pg@bowt.ie> wrote:
> Attached is v13, which has this tweak, and other miscellaneous cleanup
> based on review from both Victor and Heikki. I consider this version
> of the patch to be committable. I intend to commit something close to
> it in the next week, probably no later than Thursday. I still haven't
> got to the bottom of the shellsort question raised by Heikki. I intend
> to do further performance validation before committing the patch.

I benchmarked the patch with one array and without the shellsort
specialization using two patches on top of v13, both of which are
attached.

This benchmark was similar to other low cardinality index benchmarks
I've run in the past (with indexes named fiver, tenner, score, plus a
pgbench_accounts INCLUDE unique index instead of the regular primary
key). 
I used pgbench scale 500, for 30 minutes, no rate limit. One run\nwith 16 clients, another with 32 clients.\n\nOriginal v13:\n\npatch.r1c32.bench.out: \"tps = 32709.772257 (including connections\nestablishing)\" \"latency average = 0.974 ms\" \"latency stddev = 0.514\nms\"\npatch.r1c16.bench.out: \"tps = 34670.929998 (including connections\nestablishing)\" \"latency average = 0.461 ms\" \"latency stddev = 0.314\nms\"\n\nv13 + attached simplifying patches:\n\npatch.r1c32.bench.out: \"tps = 31848.632770 (including connections\nestablishing)\" \"latency average = 1.000 ms\" \"latency stddev = 0.548\nms\"\npatch.r1c16.bench.out: \"tps = 33864.099333 (including connections\nestablishing)\" \"latency average = 0.472 ms\" \"latency stddev = 0.399\nms\"\n\nClearly the optimization work still has some value, since we're\nlooking at about a 2% - 3% increase in throughput here. This seems\nlike it makes the cost of retaining the optimizations acceptable.\n\nThe difference is much less visible with a rate-limit, which is rather\nmore realistic. To some extent the sort is hot here because the\npatched version of Postgres updates tuples as fast as it can, and must\ntherefore delete from the index as fast as it can. The sort itself was\nconsistently near the top consumer of CPU cycles according to \"perf\ntop\", even if it didn't get as bad as what I saw in earlier versions\nof the patch months ago.\n\nThere are actually two sorts involved here (not just the heapam.c\nshellsort). We need to sort the deltids array twice -- once in\nheapam.c, and a second time (to restore the original leaf-page-wise\norder) in nbtpage.c, using qsort(). I'm pretty sure that the latter\nsort also matters, though it matters less than the heapam.c shellsort.\n\nI'm going to proceed with committing the original version of the patch\n-- I feel that this settles it. The code would be a bit tidier without\ntwo arrays or the shellsort, but it really doesn't make that much\ndifference. 
Whereas the benefit is quite visible, and will be\nsomething that all varieties of index tuple deletion see a performance\nbenefit from (not just bottom-up deletion).\n\n-- \nPeter Geoghegan", "msg_date": "Mon, 11 Jan 2021 21:26:45 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Mon, Jan 11, 2021 at 9:26 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I'm going to proceed with committing the original version of the patch\n> -- I feel that this settles it.\n\nPushed both patches from the patch series just now.\n\nThanks for the code reviews and benchmarking work!\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 13 Jan 2021 09:45:52 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Wed, Jan 13, 2021 at 11:16 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Jan 11, 2021 at 9:26 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I'm going to proceed with committing the original version of the patch\n> > -- I feel that this settles it.\n>\n> Pushed both patches from the patch series just now.\n>\n\nNice work! I briefly read the commits and have a few questions.\n\nDo we do this optimization (bottom-up deletion) only when the last\nitem which can lead to page split has indexUnchanged set to true? If\nso, what if the last tuple doesn't have indexUnchanged but the\nprevious ones do have?\n\nWhy are we using terminology bottom-up deletion and why not simply\nduplicate version deletion or something on those lines?\n\nThere is the following comment in the code:\n'Index AM/tableam coordination is central to the design of bottom-up\nindex deletion. The index AM provides hints about where to look to\nthe tableam by marking some entries as \"promising\". 
Index AM does\nthis with duplicate index tuples that are strongly suspected to be old\nversions left behind by UPDATEs that did not logically modify indexed\nvalues.'\n\nHow do we distinguish between version duplicate tuples (added due to\nthe reason that some other index column is updated) or duplicate\ntuples (added because a user has inserted a row with duplicate value)\nfor the purpose of bottom-up deletion? I think we need to distinguish\nbetween them because in the earlier case we can remove the old version\nof the tuple in the index but not in later. Or is it the case that\nindexAm doesn't differentiate between them but relies on heapAM to\ngive the correct answer?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sun, 17 Jan 2021 11:27:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Sat, Jan 16, 2021 at 9:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Do we do this optimization (bottom-up deletion) only when the last\n> item which can lead to page split has indexUnchanged set to true? If\n> so, what if the last tuple doesn't have indexUnchanged but the\n> previous ones do have?\n\nUsing the indexUnchanged hint as a triggering condition makes sure\nthat non-HOT updaters are the ones that pay the cost of a bottom-up\ndeletion pass. We create backpressure for non-HOT updaters (in indexes\nthat are logically unchanged) specifically. 
(Of course it's also true\nthat the presence of the indexUnchanged hint is highly suggestive of\nthere being more version churn duplicates on the leaf page already,\nwhich is not actually certain.)\n\nIt's possible that there will be some mixture of inserts and non-hot\nupdates on the same leaf page, and as you say the implementation might\nfail to do a bottom-up pass based on the fact that an incoming item at\nthe point of a would-be page split was a plain insert (and so didn't\nreceive the hint). The possibility of these kinds of \"missed\nopportunities\" are okay because a page split was inevitable either\nway. You can imagine a case where that isn't true (i.e. a missed\nopportunity to avoid a page split), but that's kind of like imagining\na fair coin flip taking place 100 times and coming up heads each time.\nIt just isn't realistic for such an \"mixed tendencies\" leaf page to\nstay in flux (i.e. not ever split) forever, with the smartest\ntriggering condition in the world -- it's too unstable.\n\nAn important part of understanding the design is to imagine things at\nthe leaf page level, while thinking about how what that actually looks\nlike differs from an ideal physical representation of the same leaf\npage that is much closer to the logical contents of the database.\nWe're only interested in leaf pages where the number of logical rows\nis basically fixed over time (when there is version churn). Usually\nthis just means that there are lots of non-HOT updates, but it can\nalso work with lots of queue-like inserts and deletes in a unique\nindex.\n\nUltimately, the thing that determines whether or not the bottom-up\ndeletion optimization is effective for any given leaf page is the fact\nthat it actually prevents the page from splitting despite lots of\nversion churn -- this happens again and again. 
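To make that concrete, here is a toy model of the dynamic (illustrative Python only -- the numbers are made up, and this is not the actual nbtree/heapam code):

```python
# Toy model: a leaf page with a fixed set of logical rows receiving
# non-HOT update churn. Each update leaves behind an obsolete version
# ("garbage") in the index until some cleanup pass removes it.

CAPACITY = 12    # max index tuples that fit on the toy page

def simulate(updates, bottom_up=True):
    live = 6     # logical rows -- fixed over time, as described above
    garbage = 0  # obsolete versions still taking up space on the page
    splits = 0
    for _ in range(updates):
        if live + garbage == CAPACITY:     # no room for the new tuple
            if bottom_up and garbage > 0:
                garbage = 0                # bottom-up pass frees space
            else:
                splits += 1                # the split was unavoidable
                garbage = 0                # model: split frees space too
        garbage += 1                       # old version becomes garbage
    return splits
```

In this toy model the page absorbs unlimited churn without ever splitting (simulate(1000, bottom_up=True) is 0), while without the mechanism it splits over and over (well over a hundred times for the same workload) -- that's the "again and again" effect.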
Another key concept\nhere is that bottom-up deletion passes for the same leaf page are\nrelated in important ways -- it is often a mistake to think of\nindividual bottom-up passes as independent events.\n\n> Why are we using terminology bottom-up deletion and why not simply\n> duplicate version deletion or something on those lines?\n\nWhy is that simpler? Also, it doesn't exactly delete duplicates. See\nmy later remarks.\n\n> How do we distinguish between version duplicate tuples (added due to\n> the reason that some other index column is updated) or duplicate\n> tuples (added because a user has inserted a row with duplicate value)\n> for the purpose of bottom-up deletion? I think we need to distinguish\n> between them because in the earlier case we can remove the old version\n> of the tuple in the index but not in later. Or is it the case that\n> indexAm doesn't differentiate between them but relies on heapAM to\n> give the correct answer?\n\nBottom-up deletion uses the presence of duplicates as a starting point\nfor determining which heap pages to visit, based on the assumption\nthat at least some are obsolete versions. But heapam.c has additional\nheap-level heuristics that help too.\n\nIt is quite possible and even likely that we will delete some\nnon-duplicate tuples in passing, just because they're checked in\npassing -- they might turn out to be deletable, for whatever reason.\nWe're also concerned (on the heapam.c side) about which heap pages\nhave the most TIDs (any kind of TID, not just one marked promising in\nindex AM), so this kind of \"serendipity\" is quite common in practice.\nOften the total number of heap pages that are pointed to by all index\ntuples on the page just isn't that many (8 - 10). 
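As a rough sketch of that heap-block-targeting idea (hypothetical Python, not the heapam.c implementation -- the real logic in bottomup_sort_and_shrink() is considerably more careful):

```python
from collections import Counter

def pick_heap_blocks(deltids, max_blocks=8):
    """deltids: (heap_block, offset) TIDs gathered from one leaf page.
    Visit only the few heap blocks that the most TIDs point to --
    any kind of TID counts here, not just "promising" ones."""
    counts = Counter(block for block, _ in deltids)
    return [block for block, _ in counts.most_common(max_blocks)]

# Three index tuples point into block 1, two into block 2, one into 3:
tids = [(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (3, 1)]
print(pick_heap_blocks(tids, max_blocks=2))  # [1, 2]
```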
And often cases with\nlots of HOT pruning can have hundreds of LP_DEAD item pointers on a\nheap page, which we'll tend to get to before too long anyway (with or\nwithout many duplicates).\n\nThe nbtree/caller side makes inferences about what is likely to be\ntrue about the \"logical contents\" of the physical leaf page, as a\nstarting point for the heuristic-driven search for deletable index\ntuples. There are various ways in which each inference (considered\nindividually) might be wrong, including the one that you pointed out:\ninserts of duplicates will look like update version churn. But that\nparticular issue won't matter if the inserted duplicates are on\nmostly-distinct heap blocks (which is likely), because then they'll\nonly make a noise level contribution to the behavior in heapam.c.\nAlso, we can fall back on deduplication when bottom-up deletion fails,\nwhich will merge together the duplicates-from-insert, so now any\nfuture bottom-up deletion pass over the same leaf page won't have the\nsame problem.\n\nBear in mind that heapam.c is only looking for a select few heap\nblocks, and doesn't even need to get the order exactly right. We're\nonly concerned about extremes, which are actually what we see in cases\nthat we're interested in helping. We only have to be very\napproximately right, or right on average. Sure, there might be some\ntiny cost in marginal cases, but that's not something that I've been\nable to quantify in any practical way. Because once we fail we fail\nfor good -- the page splits and the question of doing a bottom-up\ndeletion pass for that same leaf page ends.\n\nAnother important concept is the cost asymmetry -- the asymmetry here\nis huge. Leaf page splits driven by version churn are very expensive\nin the short term and in the long term. Our previous behavior was to\nassume that they were necessary. 
Now we're initially assuming that\nthey're unnecessary, and requiring non-HOT updaters to do some work\nthat shows (with some margin of error) that such a split is in fact\nnecessary. This is a form of backpressure.\n\nBottom-up deletion doesn't intervene unless and until that happens. It\nis only interested in pages where version churn is concentrated -- it\nis absolutely fine to leave it up to VACUUM to get to any \"floating\ngarbage\" tuples later. This is a pathological condition, and as such\nisn't hard to spot, regardless of workload conditions.\n\nIf you or anyone else can think of a gap in my reasoning, or a\nworkload in which the heuristics either fail to prevent page splits\nwhere that might be expected or impose too high a cost, do let me\nknow. I admit that my design is unorthodox, but the results speak for\nthemselves.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 17 Jan 2021 11:12:43 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Mon, Jan 18, 2021 at 12:43 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Sat, Jan 16, 2021 at 9:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Do we do this optimization (bottom-up deletion) only when the last\n> > item which can lead to page split has indexUnchanged set to true? If\n> > so, what if the last tuple doesn't have indexUnchanged but the\n> > previous ones do have?\n>\n> Using the indexUnchanged hint as a triggering condition makes sure\n> that non-HOT updaters are the ones that pay the cost of a bottom-up\n> deletion pass. We create backpressure for non-HOT updaters (in indexes\n> that are logically unchanged) specifically. 
(Of course it's also true\n> that the presence of the indexUnchanged hint is highly suggestive of\n> there being more version churn duplicates on the leaf page already,\n> which is not actually certain.)\n>\n> It's possible that there will be some mixture of inserts and non-hot\n> updates on the same leaf page, and as you say the implementation might\n> fail to do a bottom-up pass based on the fact that an incoming item at\n> the point of a would-be page split was a plain insert (and so didn't\n> receive the hint).\n>\n\nor it would do the scan when that is the only time for this leaf page\nto receive such a hint.\n\n> The possibility of these kinds of \"missed\n> opportunities\" are okay because a page split was inevitable either\n> way. You can imagine a case where that isn't true (i.e. a missed\n> opportunity to avoid a page split), but that's kind of like imagining\n> a fair coin flip taking place 100 times and coming up heads each time.\n> It just isn't realistic for such an \"mixed tendencies\" leaf page to\n> stay in flux (i.e. not ever split) forever, with the smartest\n> triggering condition in the world -- it's too unstable.\n>\n\nBut even if we don't want to avoid it forever delaying it will also\nhave advantages depending upon the workload. Let's try to see by some\nexample, say the item and page size are such that it would require 12\nitems to fill the page completely. 
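In rough, purely illustrative pseudo-code (the function names are mine, not from the patch), the two triggering policies compared below are:

```python
def should_try_bottomup_case1(incoming_has_hint, page_ever_hinted):
    # Case-1: trigger only if the incoming (last) item carries the
    # indexUnchanged hint.
    return incoming_has_hint

def should_try_bottomup_case2(incoming_has_hint, page_ever_hinted):
    # Case-2: trigger if any item ever inserted on this leaf page
    # carried the hint (requires remembering it, e.g. a page flag).
    return incoming_has_hint or page_ever_hinted

# A plain insert arrives at a full page that saw hinted items earlier:
print(should_try_bottomup_case1(False, True))  # False -> split
print(should_try_bottomup_case2(False, True))  # True  -> try cleanup
```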
Case-1, where we decide based on\nthe hint received in the last item, and Case-2 where we decide based\non whether we ever received such a hint for the leaf page.\n\nCase-1:\n========\n12 new items (6 inserts 6 non-HOT updates)\nPage-1: 12 items - no split would be required because we received the\nhint (indexUnchanged) with the last item, so page-1 will have 6 items\nafter clean up.\n\n6 new items (3 inserts, 3 non-HOT updates)\nPage-1: 12 items (9 inserts, 3 non-HOT updates) lead to split because\nwe received the last item without a hint.\n\nThe SPLIT-1 happens after 18 operations (6 after the initial 12).\n\nAfter this split (SPLIT-1), we will have 2 pages with the below state:\nPage-1: 6 items (4 inserts, 2 non-HOT updates)\nPage-2: 6 items (5 inserts, 1 non-HOT updates)\n\n6 new items (3 inserts, 3 non-HOT updates)\nPage-1 got 3 new items 1 insert and 2 non-HOT updates; Page-1: 9 items\n(5 inserts, 4 non-HOT updates)\nPage-2 got 3 new items 2 inserts and 1 non-HOT update; Page-2: 9 items\n(7 inserts, 2 non-HOT updates)\n\n6 new items (3 inserts, 3 non-HOT updates)\nPage-1 got 3 new items 1 insert and 2 non-HOT updates; Page-1: 12\nitems (6 inserts, 6 non-HOT updates) doesn't lead to split\nPage-2 got 3 new items 2 inserts and 1 non-HOT update; Page-2: 9 items\n(9 inserts, 3 non-HOT updates) lead to split (split-2)\n\nThe SPLIT-2 happens after 30 operations (12 new operations after the\nprevious split).\n\nCase-2:\n========\n12 new items (6 inserts 6 non-HOT updates)\nPage-1: 12 items - no split would be required because we received the\nhint (indexUnchanged) with at least one of the item, so page-1 will\nhave 6 items after clean up.\n\n6 new items (3 inserts, 3 non-HOT updates)\nPage-1: 12 items (9 inserts, 3 non-HOT updates), cleanup happens and\nPage-1 will have 9 items.\n\n6 new items (3 inserts, 3 non-HOT updates), at this stage in whichever\norder the new items are received one split can't be avoided.\n\nThe SPLIT-1 happens after 24 new operations (12 new 
ops after initial 12).\nPage-1: 6 items (6 inserts)\nPage-2: 6 items (6 inserts)\n\n6 new items (3 inserts, 3 non-HOT updates)\nPage-1 got 3 new items 1 insert and 2 non-HOT updates; Page-1: 9 items\n(7 inserts, 2 non-HOT updates)\nPage-2 got 3 new items 2 inserts and 1 non-HOT update; Page-2: 9 items\n(8 inserts, 1 non-HOT updates)\n\n6 new items (3 inserts, 3 non-HOT updates)\nPage-1 got 3 new items 1 insert and 2 non-HOT updates; Page-1: 12\nitems (8 inserts, 4 non-HOT updates) clean up happens and page-1 will\nhave 8 items.\nPage-2 got 3 new items 2 inserts and 1 non-HOT update; Page-2: 9 items\n(10 inserts, 2 non-HOT updates) clean up happens and page-2 will have\n10 items.\n\n6 new items (3 inserts, 3 non-HOT updates)\nPage-1 got 3 new items, 1 insert and 2 non-HOT updates; Page-1: 11\nitems (9 inserts, 2 non-HOT updates)\nPage-2 got 3 new items, 2 inserts and 1 non-HOT update; Page-2: 12\nitems (12 inserts, 0 non-HOT updates) cleanup happens for one of the\nnon-HOT updates\n\n6 new items (3 inserts, 3 non-HOT updates)\nPage-1 got 3 new items, 1 insert, and 2 non-HOT updates; Page-1: 12\nitems (12 inserts, 0 non-HOT updates) cleanup happens for one of the\nnon-HOT updates\nPage-2 got 3 new items, 2 inserts, and 1 non-HOT update; Page-2: split happens\n\nAfter split\nPage-1: 12 items (12 inserts)\nPage-2: 6 items (6 inserts)\nPage-3: 6 items (6 inserts)\n\nThe SPLIT-2 happens after 48 operations (24 new operations)\n\nThe summary of the above is that with Case-1 (clean-up based on hint\nreceived with the last item on the page) it takes fewer operations to\ncause a page split as compared to Case-2 (clean-up based on hint\nreceived with at least of the items on the page) for a mixed workload.\nHow can we say that it doesn't matter?\n\n> An important part of understanding the design is to imagine things at\n> the leaf page level, while thinking about how what that actually looks\n> like differs from an ideal physical representation of the same leaf\n> page that 
is much closer to the logical contents of the database.\n> We're only interested in leaf pages where the number of logical rows\n> is basically fixed over time (when there is version churn).\n>\n\nWith the above example, it seems like it would also help when this is not true.\n\n> Usually\n> this just means that there are lots of non-HOT updates, but it can\n> also work with lots of queue-like inserts and deletes in a unique\n> index.\n>\n> Ultimately, the thing that determines whether or not the bottom-up\n> deletion optimization is effective for any given leaf page is the fact\n> that it actually prevents the page from splitting despite lots of\n> version churn -- this happens again and again. Another key concept\n> here is that bottom-up deletion passes for the same leaf page are\n> related in important ways -- it is often a mistake to think of\n> individual bottom-up passes as independent events.\n>\n> > Why are we using terminology bottom-up deletion and why not simply\n> > duplicate version deletion or something on those lines?\n>\n> Why is that simpler?\n>\n\nI am not sure what I proposed fits here but the bottom-up sounds like\nwe are starting from the leaf level and move upwards to root level\nwhich I think is not true here.\n\n> Also, it doesn't exactly delete duplicates. See\n> my later remarks.\n>\n> > How do we distinguish between version duplicate tuples (added due to\n> > the reason that some other index column is updated) or duplicate\n> > tuples (added because a user has inserted a row with duplicate value)\n> > for the purpose of bottom-up deletion? I think we need to distinguish\n> > between them because in the earlier case we can remove the old version\n> > of the tuple in the index but not in later. 
Or is it the case that\n> > indexAm doesn't differentiate between them but relies on heapAM to\n> > give the correct answer?\n>\n> Bottom-up deletion uses the presence of duplicates as a starting point\n> for determining which heap pages to visit, based on the assumption\n> that at least some are obsolete versions. But heapam.c has additional\n> heap-level heuristics that help too.\n>\n> It is quite possible and even likely that we will delete some\n> non-duplicate tuples in passing, just because they're checked in\n> passing -- they might turn out to be deletable, for whatever reason.\n>\n\nHow is that working? Can heapam.c somehow indicate additional\ntuples (extra to what has been marked/sent by IndexAM as promising) as\ndeletable? I see in heap_index_delete_tuples that we mark the status\nof the passed tuples as deletable (by setting the knowndeletable flag for\nthem).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 18 Jan 2021 12:16:33 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "пн, 18 янв. 2021 г. 
в 07:44, Amit Kapila <amit.kapila16@gmail.com>:\n\n> The summary of the above is that with Case-1 (clean-up based on hint\n> received with the last item on the page) it takes fewer operations to\n> cause a page split as compared to Case-2 (clean-up based on hint\n> received with at least of the items on the page) for a mixed workload.\n> How can we say that it doesn't matter?\n>\n\nI cannot understand this, perhaps there's a word missing in the brackets?..\n\n\nThinking of your proposal to undertake bottom-up deletion also on the\nlast-to-fit tuple insertion,\nI'd like to start with my understanding of the design:\n\n* we try to avoid index bloat, but we do it in the “lazy” way (for a\nreason, see below)\n\n* it means, that if there is enough space on the leaf page to fit new index\ntuple,\n we just fit it there\n\n* if there's not enough space, we first look at the presence of LP_DEAD\ntuples,\n and if they do exits, we scan the whole (index) page to re-check all\ntuples in order\n to find others, not-yet-marked-but-actually-being-LP_DEAD tuples and\nclean all those up.\n\n* if still not enough space, only now we try bottom-up-deletion. This is\nheavy operation and\n can cause extra IO (tests do show this), therefore this operation is\nundertaken at the point,\n when we can justify extra IO against leaf page split.\n\n* if no space also after bottom-up-deletion, we perform deduplication (if\npossible)\n\n* finally, we split the page.\n\nShould we try bottom-up-deletion pass in a situation where we're inserting\nthe last possible tuple\ninto the page (enough space for *current* insert, but likely no space for\nthe next),\nthen (among others) exists the following possibilities:\n\n- no new tuples ever comes to this page anymore (happy accident), which\nmeans that\n we've wasted IO cycles\n\n- we undertake bottom-up-deletion pass without success and we're asked to\ninsert new tuple in this\n fully packed index page. 
This can unroll to:\n  - due to the delay, we've managed to find some space either due to\nLP_DEAD or bottom-up-cleanup.\n    which means we've wasted IO cycles on the previous iteration (too early\nattempt).\n  - we still failed to find any space and are forced to split the page.\n    in this case we've wasted IO cycles twice.\n\nIn my view these cases when we generated wasted IO cycles (producing no\nbenefit) should be avoided.\nAnd this is main reason for current approach.\n\nAgain, this is my understanding and I hope I got it right.\n\n\n-- \nVictor Yegorov", "msg_date": "Mon, 18 Jan 2021 12:41:09 +0100", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Mon, Jan 18, 2021 at 5:11 PM Victor Yegorov <vyegorov@gmail.com> wrote:\n>\n> пн, 18 янв. 2021 г. 
в 07:44, Amit Kapila <amit.kapila16@gmail.com>:\n>>\n>> The summary of the above is that with Case-1 (clean-up based on hint\n>> received with the last item on the page) it takes fewer operations to\n>> cause a page split as compared to Case-2 (clean-up based on hint\n>> received with at least of the items on the page) for a mixed workload.\n>> How can we say that it doesn't matter?\n>\n>\n> I cannot understand this, perhaps there's a word missing in the brackets?..\n>\n\nThere is a missing word (one) in Case-2, let me write it again to\navoid any confusion. Case-2 (clean-up based on hint received with at\nleast one of the items on the page).\n\n>\n> Thinking of your proposal to undertake bottom-up deletion also on the last-to-fit tuple insertion,\n>\n\nI think there is some misunderstanding in what I am trying to say and\nyour conclusion of the same. See below.\n\n> I'd like to start with my understanding of the design:\n>\n> * we try to avoid index bloat, but we do it in the “lazy” way (for a reason, see below)\n>\n> * it means, that if there is enough space on the leaf page to fit new index tuple,\n> we just fit it there\n>\n> * if there's not enough space, we first look at the presence of LP_DEAD tuples,\n> and if they do exits, we scan the whole (index) page to re-check all tuples in order\n> to find others, not-yet-marked-but-actually-being-LP_DEAD tuples and clean all those up.\n>\n> * if still not enough space, only now we try bottom-up-deletion. 
This is heavy operation and\n> can cause extra IO (tests do show this), therefore this operation is undertaken at the point,\n> when we can justify extra IO against leaf page split.\n>\n> * if no space also after bottom-up-deletion, we perform deduplication (if possible)\n>\n> * finally, we split the page.\n>\n> Should we try bottom-up-deletion pass in a situation where we're inserting the last possible tuple\n> into the page (enough space for *current* insert, but likely no space for the next),\n>\n\nI am saying that we try bottom-up deletion when the new insert item\ndidn't find enough space on the page and there was previously some\nindexUnchanged tuple(s) inserted into that page. Of course, like now\nit will be attempted after an attempt to remove LP_DEAD items.\n\n> then (among others) exists the following possibilities:\n>\n> - no new tuples ever comes to this page anymore (happy accident), which means that\n> we've wasted IO cycles\n>\n> - we undertake bottom-up-deletion pass without success and we're asked to insert new tuple in this\n> fully packed index page. This can unroll to:\n> - due to the delay, we've managed to find some space either due to LP_DEAD or bottom-up-cleanup.\n> which means we've wasted IO cycles on the previous iteration (too early attempt).\n> - we still failed to find any space and are forced to split the page.\n> in this case we've wasted IO cycles twice.\n>\n> In my view these cases when we generated wasted IO cycles (producing no benefit) should be avoided.\n>\n\nI don't think any of these can happen in what I am actually saying. Do\nyou still have the same feeling after reading this email? 
Off-hand, I\ndon't see any downsides as compared to the current approach and it\nwill have fewer splits in some other workloads like when there is a\nmix of inserts and non-HOT updates (that doesn't logically change the\nindex).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 18 Jan 2021 18:15:10 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "пн, 18 янв. 2021 г. в 13:42, Amit Kapila <amit.kapila16@gmail.com>:\n\n> I don't think any of these can happen in what I am actually saying. Do\n> you still have the same feeling after reading this email? Off-hand, I\n> don't see any downsides as compared to the current approach and it\n> will have fewer splits in some other workloads like when there is a\n> mix of inserts and non-HOT updates (that doesn't logically change the\n> index).\n>\n\nIf I understand you correctly, you suggest to track _all_ the hints that\ncame\nfrom the executor for possible non-HOT logical duplicates somewhere on\nthe page. And when we hit the no-space case, we'll check not only the last\nitem being hinted, but all items on the page, which makes it more probable\nto kick in and do something.\n\nSounds interesting. And judging on the Peter's tests of extra LP_DEAD tuples\nfound on the page (almost always being more, than actually flagged), this\ncan\nhave some positive effects.\n\nAlso, I'm not sure where to put it. We've deprecated the BTP_HAS_GARBAGE\nflag, maybe it can be reused for this purpose?\n\n\n-- \nVictor Yegorov", "msg_date": "Mon, 18 Jan 2021 15:11:23 +0100", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Sun, Jan 17, 2021 at 10:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> With the above example, it seems like it would also help when this is not true.\n\nI'll respond to your remarks here separately, in a later email.\n\n> I am not sure what I proposed fits here but the bottom-up sounds like\n> we are starting from the leaf level and move upwards to root level\n> which I think is not true here.\n\nI guess that that's understandable, because it is true that B-Trees\nare maintained in a bottom-up fashion. However, it's also true that\nyou can have top-down and bottom-up approaches in query optimizers,\nand many other things (it's even something that is used to describe\ngovernance models). The whole point of the term \"bottom-up\" is to\nsuggest that bottom-up deletion complements top-down cleanup by\nVACUUM. 
I think that this design embodies certain principles that can\nbe generalized to other areas, such as heap pruning.\n\nI recall that Robert Haas hated the name deduplication. I'm not about\nto argue that my choice of \"deduplication\" was objectively better than\nwhatever he would have preferred. Same thing applies here - I more or\nless chose a novel name because the concept is itself novel (unlike\ndeduplication). Reasonable people can disagree about what exact name\nmight have been better, but it's not like we're going to arrive at\nsomething that everybody can be happy with. And whatever we arrive at\nprobably won't matter that much - the vast majority of users will\nnever need to know what either thing is. They may be important things,\nbut that doesn't mean that many people will ever think about them (in\nfact I specifically hope that they don't ever need to think about\nthem).\n\n> How is that working? Can heapam.c somehow indicate additional\n> tuples (extra to what has been marked/sent by IndexAM as promising) as\n> deletable? I see in heap_index_delete_tuples that we mark the status\n> of the passed tuples as deletable (by setting the knowndeletable flag for\n> them).\n\nThe bottom-up nbtree caller to\nheap_index_delete_tuples()/table_index_delete_tuple() (not to be\nconfused with the simple/LP_DEAD heap_index_delete_tuples() caller)\nalways provides heapam.c with a complete picture of the index page, in\nthe sense that it exhaustively has a delstate.deltids entry for each\nand every TID on the page, no matter what. This is the case even\nthough in practice there is usually no possible way to check even 20%\nof the deltids entries within heapam.c.\n\nIn general, the goal during a bottom-up pass is *not* to maximize\nexpected utility (i.e. the number of deleted index tuples/space\nfreed/whatever), exactly. The goal is to lower the variance across\nrelated calls, so that we'll typically manage to free a fair number of\nindex tuples when we need to.
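The shape of that interface can be pictured with a toy model. This is only an illustration of the exhaustive-deltids idea described above -- `DelTid`, `build_deltids` and `heap_check` are invented stand-ins, not the actual `delstate.deltids`/heapam.c structures:

```python
# Illustrative-only model of the bottom-up deletion interface discussed
# above.  The real structures live in heapam.c/nbtree; everything here is
# a simplified, invented stand-in.
from dataclasses import dataclass

@dataclass
class DelTid:
    heap_block: int               # heap page the index tuple points at
    offset: int                   # line pointer offset within that page
    promising: bool               # nbtree's hint: likely an old duplicate
    knowndeletable: bool = False  # filled in by the heap side

def build_deltids(all_tids, promising_tids):
    # The nbtree caller lists *every* TID on the leaf page, no matter
    # what, flagging only some of them as promising.
    return [DelTid(blk, off, (blk, off) in promising_tids)
            for (blk, off) in all_tids]

def heap_check(deltids, deletable_in_heap):
    # The heap side decides what is actually deletable; note that it is
    # free to confirm TIDs that were never marked promising.
    for d in deltids:
        d.knowndeletable = (d.heap_block, d.offset) in deletable_in_heap
    return [d for d in deltids if d.knowndeletable]
```

The point the sketch makes is the "complete picture" property: every TID on the page gets an entry, and deletability is decided independently of the promising hint.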
In general it is much better for\nheapam.c to make its decisions based on 2 or 3 good reasons rather\nthan just 1 excellent reason. And so heapam.c applies a power-of-two\nbucketing scheme, never truly giving too much weight to what nbtree\ntells it about duplicates. See comments above\nbottomup_nblocksfavorable(), and bottomup_sort_and_shrink() comments\n(both are from heapam.c).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 18 Jan 2021 11:44:58 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Mon, Jan 18, 2021 at 6:11 AM Victor Yegorov <vyegorov@gmail.com> wrote:\n> If I understand you correctly, you suggest to track _all_ the hints that came\n> from the executor for possible non-HOT logical duplicates somewhere on\n> the page. And when we hit the no-space case, we'll check not only the last\n> item being hinted, but all items on the page, which makes it more probable\n> to kick in and do something.\n\n> Also, I'm not sure where to put it. We've deprecated the BTP_HAS_GARBAGE\n> flag, maybe it can be reused for this purpose?\n\nThere actually was a similar flag (named BTP_UNCHANGED_UPDATE and\nlater BTP_HAS_DUPS) that appeared in earlier versions of the patch\n(several versions in total, up to and including v6). This was never\ndiscussed on the list because I assumed that that wouldn't be helpful\n(I was already writing too many overlong e-mails). It was unsettled in\nmy mind at the time, so it didn't make sense to start discussing. I\nchanged my mind at some point, and so it never came up until now, when\nAmit raised the question.\n\nLooking back on my personal notes, I am reminded that I debated this\nexact question with myself at length. The argument for keeping the\nnbtree flag (i.e. 
what Amit is arguing for now) involved an isolated\nexample that seems very similar to the much more recent example from\nAmit (that he arrived at independently). I am at least sympathetic to\nthis view of things, even now. Let me go into why I changed my mind\nafter a couple of months of on and off deliberation.\n\nIn general, the highly speculative nature of the extra work that\nheapam.c does for index deleters in the bottom-up caller case can only\nbe justified because the cost is paid by non-HOT updaters that are\ndefinitely about to split the page just to fit another version, and\nbecause we only risk wasting one heap page access at any given point\nof the entire process (the latter point about risk/cost is not 100%\ntrue, because you have additional fixed CPU costs and stuff like that,\nbut it's at least true in spirit). We can justify \"gambling\" like this\nonly because the game is rigged in our favor to an *absurd* degree:\nthere are many ways to win big (relative to the likely version churn\npage split baseline case), and we only have to put down relatively\nsmall \"bets\" at any given point -- heapam.c will give up everything\nwhen it encounters one whole heap page that lacks a single deletable\nentry, no matter what the reason is.\n\nThe algorithm *appears* to behave very intelligently when seen from\nafar, but is in fact stupid and opportunistic when you look at it up\nclose. It's possible to be so permissive about the upside benefit by\nalso being very conservative about the downside cost. Almost all of\nour individual inferences can be wrong, and yet we still win in the\nend. And the effect is robust over time. You could say that it is an\norganic approach: it adapts to the workload, rather than trying to\nmake the workload adapt to it. 
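That give-up rule -- abandon the whole pass at the first heap page that yields nothing -- is what caps the downside of each "bet". A minimal sketch, with invented names rather than the actual heapam.c code:

```python
# Toy illustration of the cost cap described above: visit heap blocks in
# whatever favorability order was chosen, and abandon the entire pass the
# first time a visited page has no deletable entry.  The downside of a
# failed pass is therefore capped at roughly one wasted heap page access.
def bottomup_pass(blocks_in_order, deletable_by_block):
    freed = []
    accesses = 0
    for blk in blocks_in_order:
        accesses += 1
        found = deletable_by_block.get(blk, [])
        if not found:   # one fruitless page => give up everything
            break
        freed.extend(found)
    return freed, accesses
```

Most individual inferences can be wrong and the pass still wins often enough, because a losing pass costs so little.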
This is less radical than you'd think\n-- in some sense this is how B-Trees have always worked.\n\nIn the end, I couldn't justify imposing this cost on anything other\nthan a non-HOT updater, which is what the flag proposal would require\nme to do -- then it's not 100% clear that the relative cost of each\n\"bet\" placed in heapam.c really is extremely low (we can no longer say\nfor sure that we have very little to lose, given the certain\nalternative that is a version churn page split). The fact is that\n\"version chains in indexes\" still cannot get very long. Plus there are\nother subtle ways in which it's unlikely to be a problem (e.g. the\nLP_DEAD deletion stuff also got much better, deduplication can combine\nwith bottom-up deletion so that we'll trigger a useful bottom-up\ndeletion pass sooner or later without much downside, and possibly\nother things I haven't even thought of).\n\nIt's possible that I have been too conservative. I admit that my\ndecision on this point is at least a little arbitrary, but I stand by\nit for now. I cannot feel too bad about theoretically leaving some\ngains on the table, given that we're only talking about deleting a\ngroup of related versions a little later than we would otherwise, and\nonly in some circumstances, and in a way that seems likely to be\nimperceptible to users. I reserve the right to change my mind about\nit, but for now it doesn't look like an absurdly good deal, and those\nare the kind of deals that it makes sense to focus on here. I am very\nhappy about the fact that it is relatively easy to understand why the\nworst case for bottom-up index deletion cannot be that bad.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 18 Jan 2021 12:43:41 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "пн, 18 янв. 2021 г. 
в 21:43, Peter Geoghegan <pg@bowt.ie>:\n\n> In the end, I couldn't justify imposing this cost on anything other\n> than a non-HOT updater, which is what the flag proposal would require\n> me to do -- then it's not 100% clear that the relative cost of each\n> \"bet\" placed in heapam.c really is extremely low (we can no longer say\n> for sure that we have very little to lose, given the certain\n> alternative that is a version churn page split).\n\n\nI must admit that it's a bit difficult to understand you here (at least\nfor me).\n\nI assume that by \"bet\" you mean a flagged tuple that we marked as such\n(should we implement the suggested case).\nAs heapam will give up early in case there are no deletable tuples, why do\nyou say\nthat the \"bet\" is no longer low? In the end, we still decide between a page split\n(and\nindex bloat) vs a beneficial space cleanup.\n\n> The fact is that\n> \"version chains in indexes\" still cannot get very long. Plus there are\n> other subtle ways in which it's unlikely to be a problem (e.g. the\n> LP_DEAD deletion stuff also got much better, deduplication can combine\n> with bottom-up deletion so that we'll trigger a useful bottom-up\n> deletion pass sooner or later without much downside, and possibly\n> other things I haven't even thought of).\n>\n\nI agree with this, except for \"version chains\" not being long. It all\nreally depends\non the data distribution. It's perfectly common to find indexes supporting\nFK constraints\non highly skewed sets, with 80% of the index belonging to a single value\n(say, a huge business\ncustomer vs several thousands of one-time buyers).\n\n\n-- \nVictor Yegorov\n", "msg_date": "Mon, 18 Jan 2021 22:10:08 +0100", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Mon, Jan 18, 2021 at 1:10 PM Victor Yegorov <vyegorov@gmail.com> wrote:\n> I must admit that it's a bit difficult to understand you here (at least for me).\n>\n> I assume that by \"bet\" you mean a flagged tuple that we marked as such\n> (should we implement the suggested case).\n> As heapam will give up early in case there are no deletable tuples, why do\n> you say that the \"bet\" is no longer low? In the end, we still decide between a page split (and\n> index bloat) vs a beneficial space cleanup.\n\nWell, as I said, there are various ways in which our inferences (say\nthe ones in nbtdedup.c) are likely to be wrong. You understand this\nalready. For example, obviously if there are two duplicate index\ntuples pointing to the same heap page then it's unlikely that both\nwill be deletable, and there is even a fair chance that neither will\nbe (for many reasons). I think that it's important to justify why we\nuse stuff like that to drive our decisions -- the reasoning really\nmatters. It's very much not like the usual optimization problem thing.\nIt's a tricky thing to discuss.\n\nI don't assume that I understand all workloads, or how I might\nintroduce regressions. It follows that I should be extremely\nconservative about imposing new costs here. It's good that we\ncurrently know of no workloads that the patch is likely to regress,\nbut absence of evidence isn't evidence of absence. I personally will\nnever vote for a theoretical risk with only a theoretical benefit.
And\nright now that's what the idea of doing bottom-up deletions in more\nmarginal cases (the page flag thing) looks like.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 18 Jan 2021 13:32:52 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Tue, Jan 19, 2021 at 3:03 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Jan 18, 2021 at 1:10 PM Victor Yegorov <vyegorov@gmail.com> wrote:\n> > I must admit, that it's a bit difficult to understand you here (at least for me).\n> >\n> > I assume that by \"bet\" you mean flagged tuple, that we marked as such\n> > (should we implement the suggested case).\n> > As heapam will give up early in case there are no deletable tuples, why do say,\n> > that \"bet\" is no longer low? At the end, we still decide between page split (and\n> > index bloat) vs a beneficial space cleanup.\n>\n> Well, as I said, there are various ways in which our inferences (say\n> the ones in nbtdedup.c) are likely to be wrong. You understand this\n> already. For example, obviously if there are two duplicate index\n> tuples pointing to the same heap page then it's unlikely that both\n> will be deletable, and there is even a fair chance that neither will\n> be (for many reasons). I think that it's important to justify why we\n> use stuff like that to drive our decisions -- the reasoning really\n> matters. It's very much not like the usual optimization problem thing.\n> It's a tricky thing to discuss.\n>\n> I don't assume that I understand all workloads, or how I might\n> introduce regressions. It follows that I should be extremely\n> conservative about imposing new costs here. 
It's good that we\n> currently know of no workloads that the patch is likely to regress,\n> but absence of evidence isn't evidence of absence.\n>\n\nThe worst cases could be (a) when there is just one such duplicate\n(indexval logically unchanged) on the page and that happens to be the\nlast item and others are new insertions, (b) same as (a) and along\nwith it lets say there is an open transaction due to which we can't\nremove even that duplicate version. Have we checked the overhead or\nresults by simulating such workloads?\n\nI feel unlike LP_DEAD optimization this new bottom-up scheme can cost\nus extra CPU and I/O because there seems to be not much consideration\ngiven to the fact that we might not be able to delete any item (or\nvery few) due to long-standing open transactions except that we limit\nourselves when we are not able to remove even one tuple from any\nparticular heap page. Now, say due to open transactions, we are able\nto remove very few tuples (for the sake of argument say there is only\n'one' such tuple) from the heap page then we will keep on accessing\nthe heap pages without much benefit. I feel extending the deletion\nmechanism based on the number of LP_DEAD items sounds more favorable\nthan giving preference to duplicate items. Sure, it will give equally\ngood or better results if there are no long-standing open\ntransactions.\n\n> I personally will\n> never vote for a theoretical risk with only a theoretical benefit. 
And\n> right now that's what the idea of doing bottom-up deletions in more\n> marginal cases (the page flag thing) looks like.\n>\n\nI don't think we can say that it is purely theoretical because I have\ndome shown some basic computation where it can lead to fewer splits.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 20 Jan 2021 09:24:35 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "Hi,\n\nOn 2021-01-20 09:24:35 +0530, Amit Kapila wrote:\n> I feel extending the deletion mechanism based on the number of LP_DEAD\n> items sounds more favorable than giving preference to duplicate\n> items. Sure, it will give equally good or better results if there are\n> no long-standing open transactions.\n\nThere's a lot of workloads that never set LP_DEAD because all scans are\nbitmap index scans. And there's no obvious way to address that. So I\ndon't think it's wise to purely rely on LP_DEAD.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 19 Jan 2021 21:20:21 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Tue, Jan 19, 2021 at 7:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> The worst cases could be (a) when there is just one such duplicate\n> (indexval logically unchanged) on the page and that happens to be the\n> last item and others are new insertions, (b) same as (a) and along\n> with it lets say there is an open transaction due to which we can't\n> remove even that duplicate version. Have we checked the overhead or\n> results by simulating such workloads?\n\nThere is no such thing as a workload that has page splits caused by\nnon-HOT updaters, but almost no actual version churn from the same\nnon-HOT updaters. 
It's possible that a small number of individual page\nsplits will work out like that, of course, but they'll be extremely\nrare, and impossible to see in any kind of consistent way.\n\nThat just leaves long running transactions. Of course it's true that\neventually a long-running transaction will make it impossible to\nperform any cleanup, for the usual reasons. And at that point this\nmechanism is bound to fail (which costs additional cycles -- the\nwasted access to a single heap page, some CPU cycles). But it's still\na bargain to try. Even with a long running transactions there will be\na great many bottom-up deletion passes that still succeed earlier on\n(because at least some of the dups are deletable, and we can still\ndelete those that became garbage right before the long running\nsnapshot was acquired).\n\nVictor independently came up with a benchmark that ran over several\nhours, with cleanup consistently held back by ~5 minutes by a long\nrunning transaction:\n\nhttps://www.postgresql.org/message-id/CAGnEbogATZS1mWMVX8FzZHMXzuDEcb10AnVwwhCtXtiBpg3XLQ@mail.gmail.com\n\nThis was actually one of the most favorable cases of all for the patch\n-- the patch prevented logically unchanged indexes from growing (this\nis a mix of inserts, updates, and deletes, not just updates, so it was\nless than perfect -- we did see the indexes grow by a half of one\npercent). 
Whereas without the patch each of the same 3 indexes grew by\n30% - 60%.\n\nSo yes, I did think about long running transactions, and no, the\npossibility of wasting one heap block access here and there when the\ndatabase is melting down anyway doesn't seem like a big deal to me.\n\n> I feel unlike LP_DEAD optimization this new bottom-up scheme can cost\n> us extra CPU and I/O because there seems to be not much consideration\n> given to the fact that we might not be able to delete any item (or\n> very few) due to long-standing open transactions except that we limit\n> ourselves when we are not able to remove even one tuple from any\n> particular heap page.\n\nThere was plenty of consideration given to that. It was literally\ncentral to the design, and something I poured over at length. Why\ndon't you go read some of that now? Or, why don't you demonstrate an\nactual regression using a tool like pgbench?\n\nI do not appreciate being accused of having acted carelessly. You\ndon't have a single shred of evidence.\n\nThe design is counter-intuitive. I think that you simply don't understand it.\n\n> Now, say due to open transactions, we are able\n> to remove very few tuples (for the sake of argument say there is only\n> 'one' such tuple) from the heap page then we will keep on accessing\n> the heap pages without much benefit. I feel extending the deletion\n> mechanism based on the number of LP_DEAD items sounds more favorable\n> than giving preference to duplicate items. Sure, it will give equally\n> good or better results if there are no long-standing open\n> transactions.\n\nAs Andres says, LP_DEAD bits need to be set by index scans. Otherwise\nnothing happens. The simple deletion case can do nothing without that\nhappening. It's good that it's possible to reuse work from index scans\nopportunistically, but it's not reliable.\n\n> > I personally will\n> > never vote for a theoretical risk with only a theoretical benefit. 
And\n> > right now that's what the idea of doing bottom-up deletions in more\n> > marginal cases (the page flag thing) looks like.\n> >\n>\n> I don't think we can say that it is purely theoretical because I have\n> dome shown some basic computation where it can lead to fewer splits.\n\nI'm confused. You realize that this makes it *more* likely that\nbottom-up deletion passes will take place, right? It sounds like\nyou're arguing both sides of the issue at the same time.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 19 Jan 2021 21:28:36 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Wed, Jan 20, 2021 at 10:58 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Jan 19, 2021 at 7:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > The worst cases could be (a) when there is just one such duplicate\n> > (indexval logically unchanged) on the page and that happens to be the\n> > last item and others are new insertions, (b) same as (a) and along\n> > with it lets say there is an open transaction due to which we can't\n> > remove even that duplicate version. Have we checked the overhead or\n> > results by simulating such workloads?\n>\n> There is no such thing as a workload that has page splits caused by\n> non-HOT updaters, but almost no actual version churn from the same\n> non-HOT updaters. It's possible that a small number of individual page\n> splits will work out like that, of course, but they'll be extremely\n> rare, and impossible to see in any kind of consistent way.\n>\n> That just leaves long running transactions. Of course it's true that\n> eventually a long-running transaction will make it impossible to\n> perform any cleanup, for the usual reasons. And at that point this\n> mechanism is bound to fail (which costs additional cycles -- the\n> wasted access to a single heap page, some CPU cycles). 
But it's still\n> a bargain to try. Even with a long running transactions there will be\n> a great many bottom-up deletion passes that still succeed earlier on\n> (because at least some of the dups are deletable, and we can still\n> delete those that became garbage right before the long running\n> snapshot was acquired).\n>\n\nHow many of the dups are deletable till there is an open long-running\ntransaction in the system before the transaction that has performed an\nupdate? I tried a simple test to check this.\n\ncreate table t1(c1 int, c2 int, c3 char(10));\ncreate index idx_t1 on t1(c1);\ncreate index idx_t2 on t1(c2);\n\ninsert into t1 values(generate_series(1,5000),1,'aaaaaa');\nupdate t1 set c2 = 2;\n\nThe update will try to remove the tuples via bottom-up cleanup\nmechanism for index 'idx_t1' and won't be able to remove any tuple\nbecause the duplicates are from the same transaction.\n\n> Victor independently came up with a benchmark that ran over several\n> hours, with cleanup consistently held back by ~5 minutes by a long\n> running transaction:\n>\n\nAFAICS, the long-running transaction used in the test is below:\nSELECT abalance, pg_sleep(300) FROM pgbench_accounts WHERE mtime >\nnow() - INTERVAL '15min' ORDER BY aid LIMIT 1;\n\nThis shouldn't generate a transaction id so would it be sufficient to\nhold back the clean-up?\n\n\n> https://www.postgresql.org/message-id/CAGnEbogATZS1mWMVX8FzZHMXzuDEcb10AnVwwhCtXtiBpg3XLQ@mail.gmail.com\n>\n> This was actually one of the most favorable cases of all for the patch\n> -- the patch prevented logically unchanged indexes from growing (this\n> is a mix of inserts, updates, and deletes, not just updates, so it was\n> less than perfect -- we did see the indexes grow by a half of one\n> percent). 
Whereas without the patch each of the same 3 indexes grew by\n> 30% - 60%.\n>\n> So yes, I did think about long running transactions, and no, the\n> possibility of wasting one heap block access here and there when the\n> database is melting down anyway doesn't seem like a big deal to me.\n>\n\nFirst, it is not clear to me if that has properly simulated the\nlong-running test but even if it is what I intend to say was to have\nan open long-running transaction possibly for the entire duration of\nthe test? If we do that, we will come to know if there is any overhead\nand if so how much?\n\n\n> > I feel unlike LP_DEAD optimization this new bottom-up scheme can cost\n> > us extra CPU and I/O because there seems to be not much consideration\n> > given to the fact that we might not be able to delete any item (or\n> > very few) due to long-standing open transactions except that we limit\n> > ourselves when we are not able to remove even one tuple from any\n> > particular heap page.\n>\n> There was plenty of consideration given to that. It was literally\n> central to the design, and something I poured over at length. Why\n> don't you go read some of that now? Or, why don't you demonstrate an\n> actual regression using a tool like pgbench?\n>\n> I do not appreciate being accused of having acted carelessly. You\n> don't have a single shred of evidence.\n>\n\nI think you have done a good job and I am just trying to see if there\nare any loose ends which we can tighten-up. 
Anyway, here are results\nfrom some simple performance tests:\n\nTest with 2 un-modified indexes\n===============================\ncreate table t1(c1 int, c2 int, c3 int, c4 char(10));\ncreate index idx_t1 on t1(c1);\ncreate index idx_t2 on t1(c2);\ncreate index idx_t3 on t1(c3);\n\ninsert into t1 values(generate_series(1,5000000), 1, 10, 'aaaaaa');\nupdate t1 set c2 = 2;\n\nWithout nbtree mod (without commit d168b66682)\n===================================================\npostgres=# update t1 set c2 = 2;\nUPDATE 5000000\nTime: 46533.530 ms (00:46.534)\n\nWith HEAD\n==========\npostgres=# update t1 set c2 = 2;\nUPDATE 5000000\nTime: 52529.839 ms (00:52.530)\n\n\nI have dropped and recreated the table after each update in the test.\nSome non-default configurations:\nautovacuum = off; checkpoint_timeout = 35min; shared_buffers = 10GB;\nmin_wal_size = 10GB; max_wal_size = 20GB;\n\nThere seems to be a 12-13% regression in the above test and I think we\ncan reproduce a similar or higher regression with a long-running open\ntransaction. At this moment, I don't have access to any performance\nmachine so I did these tests on a CentOS VM. The results could vary but I\nhave repeated these enough times to reduce such a possibility.\n\n> The design is counter-intuitive. I think that you simply don't understand it.\n>\n\nI have read your patch and have some decent understanding but\nobviously, you and Victor will have a better idea. I am not sure what\nI wrote in my previous email that made you say so. Anyway, I hope I\nhave made my point clear this time.\n\n>\n> > > I personally will\n> > > never vote for a theoretical risk with only a theoretical benefit. And\n> > > right now that's what the idea of doing bottom-up deletions in more\n> > > marginal cases (the page flag thing) looks like.\n> > >\n> >\n> > I don't think we can say that it is purely theoretical because I have\n> > dome shown some basic computation where it can lead to fewer splits.\n>\n> I'm confused.
You realize that this makes it *more* likely that\n> bottom-up deletion passes will take place, right?\n>\n\nYes.\n\n> It sounds like\n> you're arguing both sides of the issue at the same time.\n>\n\nNo, I am sure the bottom-up deletion is a good technique to get rid of\nbloat and just trying to see if there are more cases where we can take\nits advantage and also try to avoid regression if there is any.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 20 Jan 2021 19:03:23 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Wed, Jan 20, 2021 at 7:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jan 20, 2021 at 10:58 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > On Tue, Jan 19, 2021 at 7:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > The worst cases could be (a) when there is just one such duplicate\n> > > (indexval logically unchanged) on the page and that happens to be the\n> > > last item and others are new insertions, (b) same as (a) and along\n> > > with it lets say there is an open transaction due to which we can't\n> > > remove even that duplicate version. Have we checked the overhead or\n> > > results by simulating such workloads?\n> >\n> > There is no such thing as a workload that has page splits caused by\n> > non-HOT updaters, but almost no actual version churn from the same\n> > non-HOT updaters. It's possible that a small number of individual page\n> > splits will work out like that, of course, but they'll be extremely\n> > rare, and impossible to see in any kind of consistent way.\n> >\n> > That just leaves long running transactions. Of course it's true that\n> > eventually a long-running transaction will make it impossible to\n> > perform any cleanup, for the usual reasons. 
And at that point this\n> > mechanism is bound to fail (which costs additional cycles -- the\n> > wasted access to a single heap page, some CPU cycles). But it's still\n> > a bargain to try. Even with a long running transactions there will be\n> > a great many bottom-up deletion passes that still succeed earlier on\n> > (because at least some of the dups are deletable, and we can still\n> > delete those that became garbage right before the long running\n> > snapshot was acquired).\n> >\n>\n> How many ...\n>\n\nTypo. /many/any\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 20 Jan 2021 19:05:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Wed, Jan 20, 2021 at 10:50 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-01-20 09:24:35 +0530, Amit Kapila wrote:\n> > I feel extending the deletion mechanism based on the number of LP_DEAD\n> > items sounds more favorable than giving preference to duplicate\n> > items. Sure, it will give equally good or better results if there are\n> > no long-standing open transactions.\n>\n> There's a lot of workloads that never set LP_DEAD because all scans are\n> bitmap index scans. And there's no obvious way to address that. So I\n> don't think it's wise to purely rely on LP_DEAD.\n>\n\nRight, I understand this point. The point I was trying to make was\nthat in this new technique we might not be able to delete any tuple\n(or maybe very few) if there are long-running open transactions in the\nsystem and still incur a CPU and I/O cost. I am completely in favor of\nthis technique and patch, so don't get me wrong. 
As mentioned in my\nreply to Peter, I am just trying to see if there are more ways we can\nuse this optimization and reduce the chances of regression (if there\nis any).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 20 Jan 2021 19:16:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Wed, Jan 20, 2021 at 5:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Victor independently came up with a benchmark that ran over several\n> > hours, with cleanup consistently held back by ~5 minutes by a long\n> > running transaction:\n> >\n>\n> AFAICS, the long-running transaction used in the test is below:\n> SELECT abalance, pg_sleep(300) FROM pgbench_accounts WHERE mtime >\n> now() - INTERVAL '15min' ORDER BY aid LIMIT 1;\n>\n> This shouldn't generate a transaction id so would it be sufficient to\n> hold back the clean-up?\n\nIt will hold back clean-up because it holds open a snapshot. Whether\nor not the long running transaction has been allocated a true XID (not\njust a VXID) is irrelevant. Victor's test case is perfectly valid.\n\nIn general there are significant benefits for cases with long-running\ntransactions, which will be quite apparent if you do something simple\nlike run pgbench (a script with non-HOT updates) while a REPEATABLE\nREAD transaction runs in psql (and has actually acquired a snapshot by\nrunning a simple query -- the xact snapshot is acquired lazily). I\nunderstand that this will be surprising if you believe that the\nproblem in these cases is strictly that there are too many \"recently\ndead\" versions that still need to be stored (i.e. versions that\ncleanup simply isn't able to remove, given the xmin horizon\ninvariant). But it's now clear that that's not what really happens in\nmost cases with a long running transaction. 
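The parenthetical above (no true XID is needed, and the transaction snapshot is only acquired lazily by the first query) can be sketched as a toy session model; `Session` and `horizon` are invented illustrations, not the real procarray logic:

```python
# Toy model of the point above: a read-only transaction never allocates a
# true XID, yet it holds back the cleanup horizon once it has acquired a
# snapshot -- and that snapshot is only taken when its first query runs.
class Session:
    def __init__(self):
        self.snapshot_xmin = None  # BEGIN alone acquires nothing

    def run_query(self, current_next_xid):
        if self.snapshot_xmin is None:   # lazy snapshot acquisition
            self.snapshot_xmin = current_next_xid

def horizon(sessions, next_xid):
    # Cleanup cannot remove anything newer than the oldest held snapshot.
    xmins = [s.snapshot_xmin for s in sessions if s.snapshot_xmin is not None]
    return min(xmins, default=next_xid)
```

This is why Victor's `pg_sleep(300)` query is a valid way to hold back cleanup even though it assigns no transaction ID.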
What we actually see is\nthat local page-level inefficiencies in cleanup were (and perhaps to\nsome degree still are) a much bigger problem than the inherent\ninability of cleanup to remove even one or two tuples. This is at\nleast true until the bloat problem becomes a full-blown disaster\n(because cleanup really is inherently restricted by the global xmin\nhorizon, and therefore hopelessly behind).\n\nIn reality there are seldom that many individual logical rows that get\nupdated more than a few times in (say) any given one hour period. This\nis true even with skewed updates -- the skew is almost never visible\nat the level of an individual leaf page. The behavior we see now that\nwe have bottom-up index deletion is much closer to the true optimal\nbehavior for the general approach Postgres takes to cleanup of garbage\ntuples, since it tends to make the number of versions for any given\nlogical row as low as possible (within the confines of the global xmin\nhorizon limitations for cleanup).\n\nOf course it would also be helpful to have something like zheap --\nsome mechanism that can store \"recently dead\" versions some place\nwhere they at least don't bloat up the main relation structures. But\nthat's only one part of the big picture for Postgres MVCC garbage. We\nshould not store garbage tuples (i.e. those that are dead rather than\njust recently dead) *anywhere*.\n\n> First, it is not clear to me if that has properly simulated the\n> long-running test but even if it is what I intend to say was to have\n> an open long-running transaction possibly for the entire duration of\n> the test? If we do that, we will come to know if there is any overhead\n> and if so how much?\n\nI am confident that you are completely wrong about regressing cases\nwith long-running transactions, except perhaps in some very narrow\nsense that is of little practical relevance. 
Victor's test case did\nresult in a small loss of throughput, for example, but that's a small\nprice to pay to not have your indexes explode (note also that most of\nthe indexes weren't even used for reads, so in the real world it would\nprobably also improve throughput even in the short term). FWIW the\nsmall drop in TPS probably had little to do with the cost of visiting\nthe heap for visibility information. Workloads must be made to \"live\nwithin their means\". You can call that a regression if you like, but I\nthink that almost anybody else would take issue with that\ncharacterization.\n\nSlowing down non-HOT updaters in these extreme cases may actually be a\ngood thing, even when bottom-up deletion finally becomes ineffective.\nIt can be thought of as backpressure. I am not worried about slowing\ndown something that is already hopelessly inefficient and\nunsustainable. I'd go even further than that, in fact -- I now wonder\nif we should *deliberately* slow them down some more!\n\n> Test with 2 un-modified indexes\n> ===============================\n> create table t1(c1 int, c2 int, c3 int, c4 char(10));\n> create index idx_t1 on t1(c1);\n> create index idx_t2 on t1(c2);\n> create index idx_t3 on t1(c3);\n>\n> insert into t1 values(generate_series(1,5000000), 1, 10, 'aaaaaa');\n> update t1 set c2 = 2;\n\n> postgres=# update t1 set c2 = 2;\n> UPDATE 5000000\n> Time: 46533.530 ms (00:46.534)\n>\n> With HEAD\n> ==========\n> postgres=# update t1 set c2 = 2;\n> UPDATE 5000000\n> Time: 52529.839 ms (00:52.530)\n>\n> I have dropped and recreated the table after each update in the test.\n\nGood thing that you remembered to drop and recreate the table, since\notherwise bottom-up index deletion would look really good!\n\nBesides, this test case is just ludicrous. 
I bet that Postgres was\nalways faster than other RDBMSs here, because Postgres is relatively\nunconcerned about making updates like this sustainable.\n\n> I have read your patch and have some decent understanding but\n> obviously, you and Victor will have a better idea. I am not sure what\n> I wrote in my previous email which made you say so. Anyway, I hope I\n> have made my point clear this time.\n\nI don't think that you fully understand the insights behind the patch.\nUnderstanding how the patch works mechanistically is not enough.\n\nThis patch is unusual in that you really need to think about emergent\nbehaviors to understand it. That is certainly a difficult thing to do,\nand it's understandable that even an expert might not grok it without\nconsidering it carefully. What annoys me here is that you didn't seem\nto seriously consider the *possibility* that something like that\n*might* be true, even after I pointed it out several times. If I was\nlooking at a project that you'd worked on just after it was committed,\nand something seemed *obviously* wrong, I know that I would think long\nand hard about the possibility that my understanding was faulty in\nsome subtle though important way. I try to always do this when the\napparent problem is too simple and obvious -- I know it's unlikely\nthat a respected colleague would make such a basic error (which is not\nto say that there cannot still be some error).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 20 Jan 2021 10:53:17 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Wed, Jan 20, 2021 at 10:53 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> This patch is unusual in that you really need to think about emergent\n> behaviors to understand it. 
That is certainly a difficult thing to do,\n> and it's understandable that even an expert might not grok it without\n> considering it carefully.\n\nI happened to stumble upon a recent blog post that seems like a light,\napproachable introduction to some of the key concepts here:\n\nhttps://jessitron.com/2021/01/18/when-costs-are-nonlinear-keep-it-small/\n\nBottom-up index deletion enhances a complex system whose maintenance\ncosts are *dramatically* nonlinear, at least in many important cases.\nIf you apply linear thinking to such a system then you'll probably end\nup with a bad design.\n\nThe system as a whole is made efficient by making sure that we're lazy\nwhen that makes sense, while also making sure that we're eager when\nthat makes sense. So it almost *has* to be structured as a bottom-up,\nreactive mechanism -- no other approach is able to ramp up or down in\nexactly the right way. Talking about small cost differences (things\nthat can easily be empirically measured, perhaps with a\nmicrobenchmark) is almost irrelevant to the big picture. It's even\nirrelevant to the \"medium picture\".\n\nWhat's more, it's basically a mistake to think of heap page accesses\nthat don't yield any deletable index tuples as wasted effort (even\nthough that's how I describe them myself!). Here's why: we have to\naccess the heap page to learn that it has nothing for us in the first\nplace! If we somehow knew ahead of time that some useless-to-us\nheap block was useless, then the whole system wouldn't be bottom-up\n(by definition). In other words, failing to get any index tuple\ndeletes from an entire heap page *is itself a form of feedback* at the\nlocal level -- it guides the entire system's behavior over time. Why\nshould we expect to get that information at zero cost?\n\nThis is somehow both simple and complicated, which creates huge\npotential for miscommunications. I tried to describe this in various\nways at various points.
Perhaps I could have done better with that.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 20 Jan 2021 13:06:55 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Thu, Jan 21, 2021 at 12:23 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Jan 20, 2021 at 5:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Victor independently came up with a benchmark that ran over several\n> > > hours, with cleanup consistently held back by ~5 minutes by a long\n> > > running transaction:\n> > >\n> >\n> > AFAICS, the long-running transaction used in the test is below:\n> > SELECT abalance, pg_sleep(300) FROM pgbench_accounts WHERE mtime >\n> > now() - INTERVAL '15min' ORDER BY aid LIMIT 1;\n> >\n> > This shouldn't generate a transaction id so would it be sufficient to\n> > hold back the clean-up?\n>\n> It will hold back clean-up because it holds open a snapshot.\n>\n\nOkay, that makes sense. It skipped from my mind.\n\n>\n> Slowing down non-HOT updaters in these extreme cases may actually be a\n> good thing, even when bottom-up deletion finally becomes ineffective.\n> It can be thought of as backpressure. I am not worried about slowing\n> down something that is already hopelessly inefficient and\n> unsustainable. 
I'd go even further than that, in fact -- I now wonder\n> if we should *deliberately* slow them down some more!\n>\n\nDo you have something specific in mind for this?\n\n> > Test with 2 un-modified indexes\n> > ===============================\n> > create table t1(c1 int, c2 int, c3 int, c4 char(10));\n> > create index idx_t1 on t1(c1);\n> > create index idx_t2 on t1(c2);\n> > create index idx_t3 on t1(c3);\n> >\n> > insert into t1 values(generate_series(1,5000000), 1, 10, 'aaaaaa');\n> > update t1 set c2 = 2;\n>\n> > postgres=# update t1 set c2 = 2;\n> > UPDATE 5000000\n> > Time: 46533.530 ms (00:46.534)\n> >\n> > With HEAD\n> > ==========\n> > postgres=# update t1 set c2 = 2;\n> > UPDATE 5000000\n> > Time: 52529.839 ms (00:52.530)\n> >\n> > I have dropped and recreated the table after each update in the test.\n>\n> Good thing that you remembered to drop and recreate the table, since\n> otherwise bottom-up index deletion would look really good!\n>\n\nI have briefly tried that but numbers were not consistent probably\nbecause at that time autovacuum was also 'on'. So, I tried switching\noff autovacuum and dropping/recreating the tables.\n\n> Besides, this test case is just ludicrous.\n>\n\nI think it might be okay to say that in such cases we can expect\nregression especially because we see benefits in many other cases so\npaying some cost in such cases is acceptable or such scenarios are\nless common or probably such cases are already not efficient so it\ndoesn't matter much but I am not sure if we can say they are\ncompletely unreasonable. 
I think this test case depicts the behavior\nwith bulk updates.\n\nI am not saying that we need to definitely do anything but\nacknowledging that we can regress in some cases without actually\nremoving bloat is not necessarily a bad thing because till now none of\nthe tests done has shown any such behavior (where we are not able to\nhelp with bloat but still the performance is reduced).\n\n> > I have read your patch and have some decent understanding but\n> > obviously, you and Victor will have a better idea. I am not sure what\n> > I wrote in my previous email which made you say so. Anyway, I hope I\n> > have made my point clear this time.\n>\n> I don't think that you fully understand the insights behind the patch.\n> Understanding how the patch works mechanistically is not enough.\n>\n> This patch is unusual in that you really need to think about emergent\n> behaviors to understand it. That is certainly a difficult thing to do,\n> and it's understandable that even an expert might not grok it without\n> considering it carefully. What annoys me here is that you didn't seem\n> to seriously consider the *possibility* that something like that\n> *might* be true, even after I pointed it out several times.\n>\n\nI am not denying that I could be missing your point but OTOH you are\nalso completely refuting the points raised even though I have shown\nthem by test and by sharing an example. For example, I understand that\nyou want to be conservative in triggering the bottom-up clean up so\nyou choose to do it in fewer cases but we might still want to add a\n'Note' in the code (or README) suggesting something like we have\nconsidered the alternative for page-level-flag (to be aggressive in\ntriggering this optimization) but not pursued with that for so-and-so\nreasons. I think this can help future developers to carefully think\nabout it even if they want to attempt something like that. 
You have\nconsidered it during the early development phase and then the same\nthing occurred to Victor and me as an interesting optimization to\nexplore so the same can occur to someone else as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 22 Jan 2021 10:53:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Thu, Jan 21, 2021 at 9:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Slowing down non-HOT updaters in these extreme cases may actually be a\n> > good thing, even when bottom-up deletion finally becomes ineffective.\n> > It can be thought of as backpressure. I am not worried about slowing\n> > down something that is already hopelessly inefficient and\n> > unsustainable. I'd go even further than that, in fact -- I now wonder\n> > if we should *deliberately* slow them down some more!\n> >\n>\n> Do you have something specific in mind for this?\n\nMaybe something a bit like the VACUUM cost delay stuff could be\napplied at the point that we realize that a given bottom-up deletion\npass is entirely effective purely due to a long running transaction,\nthat gets applied by nbtree caller once it splits the page.\n\nThis isn't something that I plan to work on anytime soon. My point was\nmostly that it really would make sense to deliberately throttle\nnon-hot updates at the point that they trigger page splits that are\nbelieved to be more or less caused by a long running transaction.\nThey're so incredibly harmful to the general responsiveness of the\nsystem that having a last line of defense like that\n(backpressure/throttling) really does make sense.\n\n> I have briefly tried that but numbers were not consistent probably\n> because at that time autovacuum was also 'on'. 
So, I tried switching\n> off autovacuum and dropping/recreating the tables.\n\nIt's not at all surprising that they weren't consistent. Clearly\nbottom-up deletion wastes cycles on the first execution (it is wasted\neffort in at least one sense) -- you showed that already. Subsequent\nexecutions will actually manage to delete some tuples (probably a\ngreat many tuples), and so will have totally different performance\nprofiles/characteristics. Isn't that obvious?\n\n> > Besides, this test case is just ludicrous.\n> >\n>\n> I think it might be okay to say that in such cases we can expect\n> regression especially because we see benefits in many other cases so\n> paying some cost in such cases is acceptable or such scenarios are\n> less common or probably such cases are already not efficient so it\n> doesn't matter much but I am not sure if we can say they are\n> completely unreasonable. I think this test case depicts the behavior\n> with bulk updates.\n\nSincere question: What do you want me to do about it?\n\nYou're asking me about two separate though very closely related issues\nat the same time here (this bulk update regression thing, plus the\nquestion of doing bottom-up passes when the incoming item isn't from a\nHOT updater). While your positions on these closely related issues are\nnot incompatible, exactly, it's difficult for me to get to any\nunderlying substantive point. In effect, you are pulling the patch in\ntwo separate directions at the same time. In practical terms, it's\nvery likely that I cannot move the bottom-up deletion code closer to\none of your ideals without simultaneously moving it further away from\nthe other ideal.\n\nI will say the following about your bulk update example now, just in\ncase you feel that I gave you the impression of never having taken it\nseriously:\n\n1. I accept that the effect that you've described is real. 
It is a\npretty narrow effect in practice, though, and will be of minimal\nconcern to real applications (especially relative to the benefits they\nreceive).\n\nI happen to believe that the kind of bulk update that you showed is\nnaturally very rare, and will naturally cause horrible problems with\nany RDBMS -- and that's why I'm not too worried (about this specific\nsequence that you showed, or one like it that somehow artfully avoids\nreceiving any performance benefit). I just cannot buy the idea that\nany real world user will do a bulk update like that exactly once, and\nbe frustrated by the fact that it's ~12% slower in Postgres 14. If\nthey do it more than once the story changes, of course (the technique\nstarts to win). You have to do something more than once to notice a\nchange in its performance in the first place, of course -- so it just\ndoesn't seem plausible to me. Of course it's still possible to imagine\na way that that could happen. This is a risk that I am willing to live\nwith, given the benefits.\n\n2. If there was some simple way of avoiding the relative loss of\nperformance without hurting other cases I would certainly take it --\nin general I prefer to not have to rely on anybody's opinion of what\nis or is not a reasonable price to pay for something.\n\nI strongly doubt that I can do anything about your first/bulk update\ncomplaint (without causing much greater harm elsewhere), and so I\nwon't be pursuing it. I did not deny the existence of cases like this\nat any point. In fact, the entire discussion was ~50% me agonizing\nover regressions (though never this precise case). Discussion of\npossible regressions happened over many months and many dense emails.\nSo I refuse to be lectured about my supposed indifference to\nregressions -- not by you, and not by anyone else.\n\nIn general I consistently *bend over backwards* to avoid regressions,\nand never assume that I didn't miss something. 
This came up recently,\nin fact:\n\nhttps://smalldatum.blogspot.com/2021/01/insert-benchmark-postgres-is-still.html\n\nSee the \"Updates\" section of this recent blog post. No regressions\ncould be attributed to any of the nbtree projects I was involved with\nin the past few years. There was a tiny (IMV quite acceptable)\nregression attributed to insert-driven autovacuum in Postgres 13.\nDeduplication didn't lead to any appreciable loss of performance, even\nthough most of the benchmarks were rather unsympathetic towards it (no\nlow cardinality data).\n\nI specifically asked Mark Callaghan to isolate the small regression\nthat he thought might be attributable to the deduplication feature\n(via deduplicate_items storage param) -- even though at the time I\nmyself imagined that that would *confirm* his original suspicion that\nnbtree deduplication was behind the issue. I don't claim to be\nobjective when it comes to my own work, but I have at least been very\nconscientious.\n\n> I am not denying that I could be missing your point but OTOH you are\n> also completely refuting the points raised even though I have shown\n> them by test and by sharing an example.\n\nActually, I quite specifically and directly said that I was\n*sympathetic* to your second point (the one about doing bottom-up\ndeletion in more marginal cases not directly involving non-HOT\nupdaters). I also said that I struggled with it myself for a long\ntime. 
I just don't think that it is worth pursuing at this time -- but\nthat shouldn't stop anyone else that's interested in it.\n\n> 'Note' in the code (or README) suggesting something like we have\n> considered the alternative for page-level-flag (to be aggressive in\n> triggering this optimization) but not pursued with that for so-and-so\n> reasons.\n\nGood news, then: I pretty much *did* document it in the nbtree README!\n\nThe following aside concerns this *exact* theoretical limitation, and\nappears in parenthesis as part of commentary on the wider issue of how\nbottom-up passes are triggered:\n\n\"(in theory a small amount of version churn could\nmake a page split occur earlier than strictly necessary, but that's pretty\nharmless)\"\n\nThe \"breadcrumbs\" *are* there. You'd notice that if you actually\nlooked for them.\n\n> I think this can help future developers to carefully think\n> about it even if they want to attempt something like that. You have\n> considered it during the early development phase and then the same\n> thing occurred to Victor and me as an interesting optimization to\n> explore so the same can occur to someone else as well.\n\nI would not document the idea in the README unless perhaps I had high\nconfidence that it would work out. I have an open mind about that as a\npossibility, but that alone doesn't seem like a good enough reason to\ndo it. Though I am now prepared to say that this does not seem like an\namazing opportunity to make the feature much better. That's why it's\nnot a priority for me right now. 
There actually *are* things that I\nwould describe that way (Sawada-san's complementary work on VACUUM).\nAnd so that's what I'll be focussing on in the weeks ahead.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 22 Jan 2021 15:57:27 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Sat, Jan 23, 2021 at 5:27 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, Jan 21, 2021 at 9:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Slowing down non-HOT updaters in these extreme cases may actually be a\n> > > good thing, even when bottom-up deletion finally becomes ineffective.\n> > > It can be thought of as backpressure. I am not worried about slowing\n> > > down something that is already hopelessly inefficient and\n> > > unsustainable. I'd go even further than that, in fact -- I now wonder\n> > > if we should *deliberately* slow them down some more!\n> > >\n> >\n> > Do you have something specific in mind for this?\n>\n> Maybe something a bit like the VACUUM cost delay stuff could be\n> applied at the point that we realize that a given bottom-up deletion\n> pass is entirely effective purely due to a long running transaction,\n> that gets applied by nbtree caller once it splits the page.\n>\n> This isn't something that I plan to work on anytime soon. My point was\n> mostly that it really would make sense to deliberately throttle\n> non-hot updates at the point that they trigger page splits that are\n> believed to be more or less caused by a long running transaction.\n> They're so incredibly harmful to the general responsiveness of the\n> system that having a last line of defense like that\n> (backpressure/throttling) really does make sense.\n>\n> > I have briefly tried that but numbers were not consistent probably\n> > because at that time autovacuum was also 'on'. 
So, I tried switching\n> > off autovacuum and dropping/recreating the tables.\n>\n> It's not at all surprising that they weren't consistent. Clearly\n> bottom-up deletion wastes cycles on the first execution (it is wasted\n> effort in at least one sense) -- you showed that already. Subsequent\n> executions will actually manage to delete some tuples (probably a\n> great many tuples), and so will have totally different performance\n> profiles/characteristics. Isn't that obvious?\n>\n\nYeah, that sounds obvious but what I remembered happening was that at\nsome point during/before the second update, the autovacuum kicks in\nand removes the bloat incurred by the previous update. In few cases,\nthe autovacuum seems to clean up the bloat and still we seem to be\ntaking additional time maybe because of some non-helpful cycles by\nbottom-up clean-up in the new pass (like second bulk-update for which\nwe can't clean up anything). Now, this is more of speculation based on\nthe few runs so I don't expect any response or any action based on it.\nI need to spend more time on benchmarking to study the behavior and I\nthink without that it would be difficult to make a conclusion in this\nregard. So, let's not consider any action on this front till I spend\nmore time to find the details.\n\nI agree with the other points mentioned by you in the email.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 26 Jan 2021 12:18:11 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" }, { "msg_contents": "On Mon, Jan 25, 2021 at 10:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I need to spend more time on benchmarking to study the behavior and I\n> think without that it would be difficult to make a conclusion in this\n> regard. 
So, let's not consider any action on this front till I spend\n> more time to find the details.\n\nIt is true that I committed the patch without thorough review, which\nwas less than ideal. I welcome additional review from you now.\n\nI will say one more thing about it for now: Start with a workload, not\nwith the code. Without bottom-up deletion (e.g. when using Postgres\n13) with a simple though extreme workload that will experience version\nchurn in indexes after a while, it still takes quite a few minutes for\nthe first page to split (when the table is at least a few GB in size\nto begin with). When I was testing the patch I would notice that it\ncould take 10 or 15 minutes for the deletion mechanism to kick in for\nthe first time -- the patch really didn't do anything at all until\nperhaps 15 minutes into the benchmark, despite helping *enormously* by\nthe 60 minute mark. And this is with significant skew, so presumably\nthe first page that would split (in the absence of the bottom-up\ndeletion feature) was approximately the page with the most skew --\nmost individual pages might have taken 30 minutes or more to split\nwithout the intervention of bottom-up deletion.\n\nRelatively rare events (in this case would-be page splits) can have\nvery significant long term consequences for the sustainability of a\nworkload, so relatively simple targeted interventions can make all the\ndifference. The idea behind bottom-up deletion is to allow the\nworkload to figure out the best way of fixing its bloat problems\n*naturally*. The heuristics must be simple precisely because workloads\nare so varied and complicated. We must be willing to pay small fixed\ncosts for negative feedback -- it has to be okay for the mechanism to\noccasionally fail in order to learn what works. I freely admit that I\ndon't understand all workloads. But I don't think anybody can. 
This\nholistic/organic approach has a lot of advantages, especially given\nthe general uncertainty about workload characteristics. Your suspicion\nof the simple nature of the heuristics actually makes a lot of sense\nto me. I do get it.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 26 Jan 2021 10:40:31 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Deleting older versions in unique indexes to avoid page splits" } ]
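[Editor's note between threads: the snapshot-holding behavior discussed above can be reproduced with a short two-session psql sketch. The session labels and the pgbench_accounts table are assumptions for illustration only; this sketch is not part of the original exchange.]

```sql
-- Session 1: a REPEATABLE READ transaction takes its snapshot lazily,
-- on its first query.  A read-only query assigns no transaction ID
-- (XID), but the open snapshot alone holds back the xmin horizon, so
-- cleanup cannot remove row versions that died after it was taken.
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM pgbench_accounts;  -- snapshot is now acquired

-- Session 2: non-HOT updates (an indexed column changes) now create
-- "recently dead" versions that VACUUM and bottom-up index deletion
-- must retain; versions that were already dead before session 1's
-- snapshot was acquired remain deletable.
UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid <= 1000;

-- Session 1: committing (or rolling back) releases the snapshot,
-- letting the xmin horizon advance so cleanup can catch up.
COMMIT;
```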
[ { "msg_contents": "Hi,\n\nWhile working on 40efbf870 I noticed that when performing a Hash Join\nthat we always start out by setting nbatch to 1. That seems\nreasonable as it's hard to imagine being able to complete any non-zero\namount of work in fewer than 1 batch.\n\nIn the HashAgg case, since 40efbf870, we'll display:\n\n\"HashAgg Batches\": 0,\n\nif you do something like: explain(analyze, format json) select\ndistinct oid from pg_class;\n\nI'd rather this said that the number of batches was 1.\n\nDoes anyone have any objections to that being changed?\n\nDavid\n\n\n", "msg_date": "Wed, 1 Jul 2020 14:03:46 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "HashAgg's batching counter starts at 0, but Hash's starts at 1." }, { "msg_contents": "On Tue, Jun 30, 2020, 7:04 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> Does anyone have any objections to that being changed?\n>\n\nThat's OK with me. By the way, I'm on vacation and will catch up on these\nHashAgg threads next week.\n\nRegards,\n Jeff Davis", "msg_date": "Tue, 30 Jun 2020 23:46:02 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: HashAgg's batching counter starts at 0, but Hash's starts at 1." }, { "msg_contents": "On Wed, 1 Jul 2020 at 18:46, Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Tue, Jun 30, 2020, 7:04 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>>\n>> Does anyone have any objections to that being changed?\n>\n> That's OK with me.
By the way, I'm on vacation and will catch up on these HashAgg threads next week.\n\n(Adding Justin as I know he's expressed interest in the EXPLAIN output\nof HashAgg before)\n\nI've written a patch to bring the HashAgg EXPLAIN ANALYZE output to be\nmore aligned to the Hash Join output.\n\nCouple of things I observed about Hash Join EXPLAIN ANALYZE:\n1. The number of batches starts at 1.\n2. We always display the number of batches.\n3. We write \"Batches\" for text format and \"Hash Batches\" for non-text formats.\n4. We write \"Memory Usage\" for text format and \"Peak Memory Usage\" for\nnon-text formats.\n5. \"Batches\" comes before memory usage.\n\nBefore this patch, HashAgg EXPLAIN ANALYZE output would:\n1. Start the number of batches at 0.\n2. Only display \"Hash Batches\" when batches > 0.\n3. Used the words \"HashAgg Batches\" for text and non-text formats.\n4. Used the words \"Peak Memory Usage\" for text and non-text formats.\n5. \"Hash Batches\" was written after memory usage.\n\nIn the attached patch I've changed HashAgg to be aligned to Hash Join\non each of the points above.\n\ne.g.\n\nBefore:\n\npostgres=# explain analyze select c.relname,count(*) from pg_class c\ninner join pg_Attribute a on c.oid = a.attrelid group by c.relname;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=138.37..142.23 rows=386 width=72) (actual\ntime=3.121..3.201 rows=427 loops=1)\n Group Key: c.relname\n Peak Memory Usage: 109kB\n -> Hash Join (cost=21.68..124.10 rows=2855 width=64) (actual\ntime=0.298..1.768 rows=3153 loops=1)\n Hash Cond: (a.attrelid = c.oid)\n -> Seq Scan on pg_attribute a (cost=0.00..93.95 rows=3195\nwidth=4) (actual time=0.011..0.353 rows=3153 loops=1)\n -> Hash (cost=16.86..16.86 rows=386 width=68) (actual\ntime=0.279..0.279 rows=427 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 50kB\n -> Seq Scan on pg_class c (cost=0.00..16.86 
rows=386\nwidth=68) (actual time=0.007..0.112 rows=427 loops=1)\n Planning Time: 0.421 ms\n Execution Time: 3.294 ms\n(11 rows)\n\nAfter:\n\npostgres=# explain analyze select c.relname,count(*) from pg_class c\ninner join pg_Attribute a on c.oid = a.attrelid group by c.relname;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=566.03..580.00 rows=1397 width=72) (actual\ntime=13.097..13.430 rows=1397 loops=1)\n Group Key: c.relname\n Batches: 1 Memory Usage: 321kB\n -> Hash Join (cost=64.43..496.10 rows=13985 width=64) (actual\ntime=0.838..7.546 rows=13985 loops=1)\n Hash Cond: (a.attrelid = c.oid)\n -> Seq Scan on pg_attribute a (cost=0.00..394.85 rows=13985\nwidth=4) (actual time=0.010..1.462 rows=13985 loops=1)\n -> Hash (cost=46.97..46.97 rows=1397 width=68) (actual\ntime=0.820..0.821 rows=1397 loops=1)\n Buckets: 2048 Batches: 1 Memory Usage: 153kB\n -> Seq Scan on pg_class c (cost=0.00..46.97 rows=1397\nwidth=68) (actual time=0.009..0.362 rows=1397 loops=1)\n Planning Time: 0.440 ms\n Execution Time: 13.634 ms\n(11 rows)\n\n(ignore the change in memory consumption. That was due to adding\nrecords for testing)\n\nAny objections to this change?\n\nDavid", "msg_date": "Mon, 27 Jul 2020 10:48:45 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: HashAgg's batching counter starts at 0, but Hash's starts at 1." }, { "msg_contents": "On Mon, Jul 27, 2020 at 10:48:45AM +1200, David Rowley wrote:\n> On Wed, 1 Jul 2020 at 18:46, Jeff Davis <pgsql@j-davis.com> wrote:\n> >\n> > On Tue, Jun 30, 2020, 7:04 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> >>\n> >> Does anyone have any objections to that being changed?\n> >\n> > That's OK with me. 
By the way, I'm on vacation and will catch up on these HashAgg threads next week.\n> \n> (Adding Justin as I know he's expressed interest in the EXPLAIN output\n> of HashAgg before)\n\nThanks.\n\nIt's unrelated to hashAgg vs hashJoin, but I also noticed that this is shown\nonly conditionally:\n\n if (es->format != EXPLAIN_FORMAT_TEXT)\n {\n if (es->costs && aggstate->hash_planned_partitions > 0)\n {\n ExplainPropertyInteger(\"Planned Partitions\", NULL,\n aggstate->hash_planned_partitions, es);\n\nThat was conditional since it was introduced at 1f39bce02:\n\n if (es->costs && aggstate->hash_planned_partitions > 0)\n {\n ExplainPropertyInteger(\"Planned Partitions\", NULL,\n aggstate->hash_planned_partitions, es);\n }\n\nI think 40efbf870 should've handled this, too.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 26 Jul 2020 21:54:02 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: HashAgg's batching counter starts at 0, but Hash's starts at 1." }, { "msg_contents": "On Mon, 27 Jul 2020 at 14:54, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> It's unrelated to hashAgg vs hashJoin, but I also noticed that this is shown\n> only conditionally:\n>\n> if (es->format != EXPLAIN_FORMAT_TEXT)\n> {\n> if (es->costs && aggstate->hash_planned_partitions > 0)\n> {\n> ExplainPropertyInteger(\"Planned Partitions\", NULL,\n> aggstate->hash_planned_partitions, es);\n>\n> That was conditional since it was introduced at 1f39bce02:\n>\n> if (es->costs && aggstate->hash_planned_partitions > 0)\n> {\n> ExplainPropertyInteger(\"Planned Partitions\", NULL,\n> aggstate->hash_planned_partitions, es);\n> }\n>\n> I think 40efbf870 should've handled this, too.\n\nhmm. I'm not sure. I think this should follow the same logic as what\n\"Disk Usage\" follows, and right now we don't show Disk Usage unless we\nspill. 
Since we only use partitions when spilling, I don't think it\nmakes sense to show the estimated partitions when we don't plan on\nspilling.\n\nI think if we change this then we should change Disk Usage too.\nHowever, I don't think we should as Sort will only show \"Disk\" if the\nsort spills. I think Hash Agg should follow that.\n\nFor the patch I posted yesterday, I'll go ahead and push it in about 24\nhours unless there are any objections.\n\nDavid\n\n\n", "msg_date": "Tue, 28 Jul 2020 12:54:35 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: HashAgg's batching counter starts at 0, but Hash's starts at 1." }, { "msg_contents": "On Tue, Jul 28, 2020 at 12:54:35PM +1200, David Rowley wrote:\n> On Mon, 27 Jul 2020 at 14:54, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > It's unrelated to hashAgg vs hashJoin, but I also noticed that this is shown\n> > only conditionally:\n> >\n> > if (es->format != EXPLAIN_FORMAT_TEXT)\n> > {\n> > if (es->costs && aggstate->hash_planned_partitions > 0)\n> > {\n> > ExplainPropertyInteger(\"Planned Partitions\", NULL,\n> > aggstate->hash_planned_partitions, es);\n> >\n> > That was conditional since it was introduced at 1f39bce02:\n> >\n> > if (es->costs && aggstate->hash_planned_partitions > 0)\n> > {\n> > ExplainPropertyInteger(\"Planned Partitions\", NULL,\n> > aggstate->hash_planned_partitions, es);\n> > }\n> >\n> > I think 40efbf870 should've handled this, too.\n> \n> hmm. I'm not sure. I think this should follow the same logic as what\n> \"Disk Usage\" follows, and right now we don't show Disk Usage unless we\n> spill.\n\nHuh ? 
I'm referring to non-text format, which is what you changed, on the\nreasoning that the same plan *could* spill:\n\n40efbf8706cdd96e06bc4d1754272e46d9857875\n if (es->format != EXPLAIN_FORMAT_TEXT)\n {\n\n if (es->costs && aggstate->hash_planned_partitions > 0)\n {\n ExplainPropertyInteger(\"Planned Partitions\", NULL,\n aggstate->hash_planned_partitions, es);\n }\n...\n /* EXPLAIN ANALYZE */\n ExplainPropertyInteger(\"Peak Memory Usage\", \"kB\", memPeakKb, es);\n- if (aggstate->hash_batches_used > 0)\n- {\n ExplainPropertyInteger(\"Disk Usage\", \"kB\", \n aggstate->hash_disk_used, es);\n ExplainPropertyInteger(\"HashAgg Batches\", NULL,\n aggstate->hash_batches_used, es);\n\n> Since we only use partitions when spilling, I don't think it\n> makes sense to show the estimated partitions when we don't plan on\n> spilling.\n\nIn any case, my thinking is that we *should* show PlannedPartitions=0,\nspecifically to indicate *that* we didn't plan to spill.\n\n> I think if we change this then we should change Disk Usage too.\n> However, I don't think we should as Sort will only show \"Disk\" if the\n> sort spills. I think Hash Agg should follow that.\n> \n> For the patch I posted yesterday, I'll go ahead in push it in about 24\n> hours unless there are any objections.\n\n\n", "msg_date": "Mon, 27 Jul 2020 22:01:45 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: HashAgg's batching counter starts at 0, but Hash's starts at 1." }, { "msg_contents": "On Mon, Jul 27, 2020 at 5:54 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> hmm. I'm not sure. I think this should follow the same logic as what\n> \"Disk Usage\" follows, and right now we don't show Disk Usage unless we\n> spill. 
Since we only use partitions when spilling, I don't think it\n> makes sense to show the estimated partitions when we don't plan on\n> spilling.\n\nI'm confused about what the guiding principles for EXPLAIN ANALYZE\noutput (text or otherwise) are.\n\n> I think if we change this then we should change Disk Usage too.\n> However, I don't think we should as Sort will only show \"Disk\" if the\n> sort spills. I think Hash Agg should follow that.\n\nI don't follow your remarks here.\n\nSeparately, I wonder what your opinion is about what should happen for\nthe partial sort related EXPLAIN ANALYZE format open item, described\nhere:\n\nhttps://www.postgresql.org/message-id/flat/20200619040358.GZ17995%40telsasoft.com#b20bd205851a0390220964f7c31b23d1\n\nISTM that EXPLAIN ANALYZE for incremental sort manages to show the\nsame information as the sort case, aggregated across each tuplesort in\na fairly sensible way.\n\n(No activity over on the incremental sort thread, so I thought I'd ask\nagain here, while I was reminded of that issue.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 27 Jul 2020 20:20:45 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: HashAgg's batching counter starts at 0, but Hash's starts at 1." }, { "msg_contents": "On Mon, Jul 27, 2020 at 08:20:45PM -0700, Peter Geoghegan wrote:\n> On Mon, Jul 27, 2020 at 5:54 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > hmm. I'm not sure. I think this should follow the same logic as what\n> > \"Disk Usage\" follows, and right now we don't show Disk Usage unless we\n> > spill. 
Since we only use partitions when spilling, I don't think it\n> > makes sense to show the estimated partitions when we don't plan on\n> > spilling.\n> \n> I'm confused about what the guiding principles for EXPLAIN ANALYZE\n> output (text or otherwise) are.\n\nI don't know of a guideline for text format, but the issues I've raised for\nHashAgg and IncrSort are based on previous commits which change to \"show field\neven if its value is zero\" for non-text format:\n\n7d91b604d9b5d6ec8c19c57a9ffd2f27129cdd94\n8ebb69f85445177575684a0ba5cfedda8d840a91\n3ec20c7091e97a554e7447ac2b7f4ed795631395\n\n> > I think if we change this then we should change Disk Usage too.\n> > However, I don't think we should as Sort will only show \"Disk\" if the\n> > sort spills. I think Hash Agg should follow that.\n> \n> I don't follow your remarks here.\n> \n> Separately, I wonder what your opinion is about what should happen for\n> the partial sort related EXPLAIN ANALYZE format open item, described\n> here:\n> \n> https://www.postgresql.org/message-id/flat/20200619040358.GZ17995%40telsasoft.com#b20bd205851a0390220964f7c31b23d1\n> \n> ISTM that EXPLAIN ANALYZE for incremental sort manages to show the\n> same information as the sort case, aggregated across each tuplesort in\n> a fairly sensible way.\n> \n> (No activity over on the incremental sort thread, so I thought I'd ask\n> again here, while I was reminded of that issue.)\n\nNote that I did mail recently, on this 2nd thread:\n\nhttps://www.postgresql.org/message-id/20200723141454.GF4286@telsasoft.com\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 27 Jul 2020 22:36:22 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: HashAgg's batching counter starts at 0, but Hash's starts at 1." 
}, { "msg_contents": "On Tue, 28 Jul 2020 at 15:01, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Jul 28, 2020 at 12:54:35PM +1200, David Rowley wrote:\n> > On Mon, 27 Jul 2020 at 14:54, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > It's unrelated to hashAgg vs hashJoin, but I also noticed that this is shown\n> > > only conditionally:\n> > >\n> > > if (es->format != EXPLAIN_FORMAT_TEXT)\n> > > {\n> > > if (es->costs && aggstate->hash_planned_partitions > 0)\n> > > {\n> > > ExplainPropertyInteger(\"Planned Partitions\", NULL,\n> > > aggstate->hash_planned_partitions, es);\n> > >\n> > > That was conditional since it was introduced at 1f39bce02:\n> > >\n> > > if (es->costs && aggstate->hash_planned_partitions > 0)\n> > > {\n> > > ExplainPropertyInteger(\"Planned Partitions\", NULL,\n> > > aggstate->hash_planned_partitions, es);\n> > > }\n> > >\n> > > I think 40efbf870 should've handled this, too.\n> >\n> > hmm. I'm not sure. I think this should follow the same logic as what\n> > \"Disk Usage\" follows, and right now we don't show Disk Usage unless we\n> > spill.\n>\n> Huh ? I'm referring to non-text format, which is what you changed, on the\n> reasoning that the same plan *could* spill:\n\nOh, right. ... (Sudden bout of confusion due to lack of sleep)\n\nLooks like it'll just need this line:\n\nif (es->costs && aggstate->hash_planned_partitions > 0)\n\nchanged to:\n\nif (es->costs)\n\nI think we'll likely need to maintain not showing that property with\nexplain (costs off) as it'll be a bit more difficult to write\nregression tests if we display it regardless of that option.\n\nDavid\n\n\n", "msg_date": "Tue, 28 Jul 2020 16:02:38 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: HashAgg's batching counter starts at 0, but Hash's starts at 1." 
}, { "msg_contents": "On Tue, 28 Jul 2020 at 15:21, Peter Geoghegan <pg@bowt.ie> wrote:\n> Separately, I wonder what your opinion is about what should happen for\n> the partial sort related EXPLAIN ANALYZE format open item, described\n> here:\n>\n> https://www.postgresql.org/message-id/flat/20200619040358.GZ17995%40telsasoft.com#b20bd205851a0390220964f7c31b23d1\n>\n> ISTM that EXPLAIN ANALYZE for incremental sort manages to show the\n> same information as the sort case, aggregated across each tuplesort in\n> a fairly sensible way.\n>\n> (No activity over on the incremental sort thread, so I thought I'd ask\n> again here, while I was reminded of that issue.)\n\nTBH, I've not really looked at that.\n\nTom did mention his view on this in [1]. I think that's a pretty good\npolicy. However, I've not looked at the incremental sort EXPLAIN\noutput enough to know how it'll best apply there.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/2276865.1593102811%40sss.pgh.pa.us\n\n\n", "msg_date": "Tue, 28 Jul 2020 16:08:34 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: HashAgg's batching counter starts at 0, but Hash's starts at 1." }, { "msg_contents": "On Mon, 27 Jul 2020 at 14:54, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Jul 27, 2020 at 10:48:45AM +1200, David Rowley wrote:\n> > On Wed, 1 Jul 2020 at 18:46, Jeff Davis <pgsql@j-davis.com> wrote:\n> > >\n> > > On Tue, Jun 30, 2020, 7:04 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > >>\n> > >> Does anyone have any objections to that being changed?\n> > >\n> > > That's OK with me. 
By the way, I'm on vacation and will catch up on these HashAgg threads next week.\n> >\n> > (Adding Justin as I know he's expressed interest in the EXPLAIN output\n> > of HashAgg before)\n>\n> Thanks.\n>\n> It's unrelated to hashAgg vs hashJoin, but I also noticed that this is shown\n> only conditionally:\n>\n> if (es->format != EXPLAIN_FORMAT_TEXT)\n> {\n> if (es->costs && aggstate->hash_planned_partitions > 0)\n> {\n> ExplainPropertyInteger(\"Planned Partitions\", NULL,\n> aggstate->hash_planned_partitions, es);\n>\n> That was conditional since it was introduced at 1f39bce02:\n>\n> if (es->costs && aggstate->hash_planned_partitions > 0)\n> {\n> ExplainPropertyInteger(\"Planned Partitions\", NULL,\n> aggstate->hash_planned_partitions, es);\n> }\n>\n> I think 40efbf870 should've handled this, too.\n\nI pushed that change along with all the other changes mentioned to the\nEXPLAIN ANALYZE format.\n\nDavid\n\n\n", "msg_date": "Wed, 29 Jul 2020 11:44:52 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: HashAgg's batching counter starts at 0, but Hash's starts at 1." }, { "msg_contents": "On Mon, Jul 27, 2020 at 8:36 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I don't know of a guideline for text format, but the issues I've raised for\n> HashAgg and IncrSort are based on previous commits which change to \"show field\n> even if its value is zero\" for non-text format:\n\nBut the non-text format for IncrSort shows about the same information\nas sort, broken out by group. What's missing if you assume that sort\nis the gold standard?\n\nThe objection to your argument from James (which could just as easily\napply to regular sort from earlier releases) is that accurate\ninformation just isn't available as a practical matter. This is due to\ntuplesort implementation limitations that cannot be fixed now. See the\ncomment block in tuplesort_get_stats() for an explanation. 
The hard\npart is showing memory used by external sorts.\n\nIt's true that \"Disk\" is specifically shown by sort nodes output in\ntext explain format, but you're talking about non-text formats so\nthat's not really relevant\n\nAFAICT sort (and IncrSort) don't fail to show a field value if it is\nzero. Rather, they consistently show \"space used\" (in non-text\nformat), which can be either memory or disk space. Technically they\ndon't violate the letter of the law that you cite. (Though admittedly\nthis is a legalistic loophole -- the \"space\" value presumably has to\nbe interpreted according to different rules for programs that consume\nnon-text EXPLAIN output.)\n\nHave I missed something?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 29 Jul 2020 20:35:08 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: HashAgg's batching counter starts at 0, but Hash's starts at 1." }, { "msg_contents": "On Wed, Jul 29, 2020 at 08:35:08PM -0700, Peter Geoghegan wrote:\n> AFAICT sort (and IncrSort) don't fail to show a field value if it is\n> zero. Rather, they consistently show \"space used\" (in non-text\n> format), which can be either memory or disk space. Technically they\n> don't violate the letter of the law that you cite. 
(Though admittedly\n> this is a legalistic loophole -- the \"space\" value presumably has to\n> be interpreted according to different rules for programs that consume\n> non-text EXPLAIN output.)\n\nSort shows this:\n Sort Method: \"external merge\" +\n Sort Space Used: 19520 +\n Sort Space Type: \"Disk\" +\n\nIncremental sort shows this:\n Sort Methods Used: +\n - \"external merge\" +\n Sort Space Disk: +\n Average Sort Space Used: 128+\n Peak Sort Space Used: 128 +\n\nSo my 2ndary suggestion is to conditionalize based on the method rather than\nvalue of space used.\n\n--- a/src/backend/commands/explain.c\n+++ b/src/backend/commands/explain.c\n@@ -2830 +2830 @@ show_incremental_sort_group_info(IncrementalSortGroupInfo *groupInfo,\n- if (groupInfo->maxMemorySpaceUsed > 0)\n+ if (groupInfo->sortMethods & SORT_TYPE_QUICKSORT)\n@@ -2847 +2847 @@ show_incremental_sort_group_info(IncrementalSortGroupInfo *groupInfo,\n- if (groupInfo->maxDiskSpaceUsed > 0)\n+ if (groupInfo->sortMethods & SORT_TYPE_EXTERNAL_SORT)\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 29 Jul 2020 23:05:02 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: HashAgg's batching counter starts at 0, but Hash's starts at 1." }, { "msg_contents": "On Wed, Jul 29, 2020 at 9:05 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> So my 2ndary suggestion is to conditionalize based on the method rather than\n> value of space used.\n\nWhat's the advantage of doing it that way?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 29 Jul 2020 21:18:44 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: HashAgg's batching counter starts at 0, but Hash's starts at 1." 
}, { "msg_contents": "On Wed, Jul 29, 2020 at 09:18:44PM -0700, Peter Geoghegan wrote:\n> On Wed, Jul 29, 2020 at 9:05 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > So my 2ndary suggestion is to conditionalize based on the method rather than\n> > value of space used.\n> \n> What's the advantage of doing it that way?\n\nBecause filtering out zero values is exactly what's intended to be avoided for\nnontext output.\n\nI think checking whether the method was used should result in the same output,\nwithout the literal check for zero value (which itself sets a bad example).\n\n--- a/src/backend/commands/explain.c\n+++ b/src/backend/commands/explain.c\n@@ -2824,13 +2824,13 @@ show_incremental_sort_group_info(IncrementalSortGroupInfo *groupInfo,\n appendStringInfo(&groupName, \"%s Groups\", groupLabel);\n ExplainOpenGroup(\"Incremental Sort Groups\", groupName.data, true, es);\n ExplainPropertyInteger(\"Group Count\", NULL, groupInfo->groupCount, es);\n \n ExplainPropertyList(\"Sort Methods Used\", methodNames, es);\n \n- if (groupInfo->maxMemorySpaceUsed > 0)\n+ if (groupInfo->sortMethods & SORT_TYPE_QUICKSORT)\n {\n long avgSpace = groupInfo->totalMemorySpaceUsed / groupInfo->groupCount;\n const char *spaceTypeName;\n StringInfoData memoryName;\n \n spaceTypeName = tuplesort_space_type_name(SORT_SPACE_TYPE_MEMORY);\n@@ -2841,13 +2841,13 @@ show_incremental_sort_group_info(IncrementalSortGroupInfo *groupInfo,\n ExplainPropertyInteger(\"Average Sort Space Used\", \"kB\", avgSpace, es);\n ExplainPropertyInteger(\"Peak Sort Space Used\", \"kB\",\n groupInfo->maxMemorySpaceUsed, es);\n \n ExplainCloseGroup(\"Sort Spaces\", memoryName.data, true, es);\n }\n- if (groupInfo->maxDiskSpaceUsed > 0)\n+ if (groupInfo->sortMethods & SORT_TYPE_EXTERNAL_SORT)\n {\n long avgSpace = groupInfo->totalDiskSpaceUsed / groupInfo->groupCount;\n const char *spaceTypeName;\n StringInfoData diskName;\n \n spaceTypeName = tuplesort_space_type_name(SORT_SPACE_TYPE_DISK);\n\n\n", "msg_date": 
"Thu, 30 Jul 2020 19:22:45 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: HashAgg's batching counter starts at 0, but Hash's starts at 1.\n (now: incremental sort)" }, { "msg_contents": "On Thu, Jul 30, 2020 at 5:22 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Because filtering out zero values is exactly what's intended to be avoided for\n> nontext output.\n>\n> I think checking whether the method was used should result in the same output,\n> without the literal check for zero value (which itself sets a bad example).\n\nIt seems fine to me as-is. What about SORT_TYPE_TOP_N_HEAPSORT? Or any\nother sort methods we add in the future?\n\nThe way that we flatten maxDiskSpaceUsed and maxMemorySpaceUsed into\n\"space used\" on output might be kind of questionable, but it's\nsomething that we have to live with for the foreseeable future. I\ndon't think that this is a bad example -- we don't output\nmaxDiskSpaceUsed or maxMemorySpaceUsed at the conceptual level.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 30 Jul 2020 18:33:32 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: HashAgg's batching counter starts at 0, but Hash's starts at 1.\n (now: incremental sort)" }, { "msg_contents": "On Thu, Jul 30, 2020 at 8:22 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Wed, Jul 29, 2020 at 09:18:44PM -0700, Peter Geoghegan wrote:\n> > On Wed, Jul 29, 2020 at 9:05 PM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n> > > So my 2ndary suggestion is to conditionalize based on the method\n> rather than\n> > > value of space used.\n> >\n> > What's the advantage of doing it that way?\n>\n> Because filtering out zero values is exactly what's intended to be avoided\n> for\n> nontext output.\n>\n> I think checking whether the method was used should result in the same\n> output,\n> without the literal check for zero value (which itself sets a bad example).\n>\n> --- 
a/src/backend/commands/explain.c\n> +++ b/src/backend/commands/explain.c\n> @@ -2824,13 +2824,13 @@\n> show_incremental_sort_group_info(IncrementalSortGroupInfo *groupInfo,\n> appendStringInfo(&groupName, \"%s Groups\", groupLabel);\n> ExplainOpenGroup(\"Incremental Sort Groups\",\n> groupName.data, true, es);\n> ExplainPropertyInteger(\"Group Count\", NULL,\n> groupInfo->groupCount, es);\n>\n> ExplainPropertyList(\"Sort Methods Used\", methodNames, es);\n>\n> - if (groupInfo->maxMemorySpaceUsed > 0)\n> + if (groupInfo->sortMethods & SORT_TYPE_QUICKSORT)\n> {\n> long avgSpace =\n> groupInfo->totalMemorySpaceUsed / groupInfo->groupCount;\n> const char *spaceTypeName;\n> StringInfoData memoryName;\n>\n> spaceTypeName =\n> tuplesort_space_type_name(SORT_SPACE_TYPE_MEMORY);\n> @@ -2841,13 +2841,13 @@\n> show_incremental_sort_group_info(IncrementalSortGroupInfo *groupInfo,\n> ExplainPropertyInteger(\"Average Sort Space Used\",\n> \"kB\", avgSpace, es);\n> ExplainPropertyInteger(\"Peak Sort Space Used\",\n> \"kB\",\n>\n> groupInfo->maxMemorySpaceUsed, es);\n>\n> ExplainCloseGroup(\"Sort Spaces\", memoryName.data,\n> true, es);\n> }\n> - if (groupInfo->maxDiskSpaceUsed > 0)\n> + if (groupInfo->sortMethods & SORT_TYPE_EXTERNAL_SORT)\n> {\n> long avgSpace =\n> groupInfo->totalDiskSpaceUsed / groupInfo->groupCount;\n> const char *spaceTypeName;\n> StringInfoData diskName;\n>\n> spaceTypeName =\n> tuplesort_space_type_name(SORT_SPACE_TYPE_DISK);\n>\n\nI very much do not like this approach, and I think it's actually\nfundamentally wrong, at least for the memory check. Quicksort is not the\nonly option that uses memory. For now, there's only one option that spills\nto disk (external merge sort), but there's no reason it has to remain that\nway. 
And in the future we might accurately report memory consumed even when\nwe've eventually spilled to disk also, so memory used would be relevant\npotentially even if no in-memory sort was ever performed.\n\nSo I'm pretty confident checking the space used is the correct way to do\nthis.\n\nJames\n\n\n", "msg_date": "Thu, 30 Jul 2020 21:39:35 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: HashAgg's batching counter starts at 0, but Hash's starts at 1.\n (now: incremental sort)" }, { "msg_contents": "On Thu, Jul 30, 2020 at 06:33:32PM -0700, Peter Geoghegan wrote:\n> On Thu, Jul 30, 2020 at 5:22 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Because filtering out zero values is exactly what's intended to be avoided for\n> > nontext output.\n> >\n> > I think checking whether the method was used should result in the same output,\n> > without the literal check for zero value (which itself sets a bad example).\n> \n> It seems fine to me as-is. What about SORT_TYPE_TOP_N_HEAPSORT? Or any\n> other sort methods we add in the future?\n\nFeel free to close it out. I'm satisfied that we've had a discussion about it.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 30 Jul 2020 20:40:27 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: HashAgg's batching counter starts at 0, but Hash's starts at 1.\n (now: incremental sort)" }, { "msg_contents": "On Thu, Jul 30, 2020 at 6:39 PM James Coleman <jtc331@gmail.com> wrote:\n> I very much do not like this approach, and I think it's actually fundamentally wrong, at least for the memory check. Quicksort is not the only option that uses memory. For now, there's only one option that spills to disk (external merge sort), but there's no reason it has to remain that way.\n\nI wouldn't be surprised if it was possible to get\nSORT_TYPE_EXTERNAL_SORT even today (though I'm not sure if that's\ntruly possible). 
That will happen for a regular sort node if we\nrequire randomAccess to the sort, and it happens to spill -- we can\nrandomly access the final tape, but cannot do a final on-the-fly\nmerge. Say for a merge join.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 30 Jul 2020 18:43:55 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: HashAgg's batching counter starts at 0, but Hash's starts at 1.\n (now: incremental sort)" }, { "msg_contents": "On Thu, Jul 30, 2020 at 6:40 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Feel free to close it out. I'm satisfied that we've had a discussion about it.\n\nClosed it out.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 30 Jul 2020 18:52:10 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: HashAgg's batching counter starts at 0, but Hash's starts at 1.\n (now: incremental sort)" } ]
[ { "msg_contents": "Greetings,\n\nAmong the changes made to PG's recovery in v12 was to set\nrecovery_target_timeline to be 'latest' by default. That's handy when\nyou're flipping back and forth between replicas and want to have\neveryone follow that game, but it's made doing some basic things like\nrestoring from a backup problematic.\n\nSpecifically, if you take a backup off a primary and, while that backup\nis going on, some replica is promoted and drops a .history file into the\nWAL repo, that backup is no longer able to be restored with the new\nrecovery_target_timeline default. What happens is that the restore\nprocess will happily follow the timeline change- even though it happened\nbefore we reached consistency, and then it'll never find the needed\nend-of-backup WAL point that would allow us to reach consistency.\n\nNaturally, a primary isn't ever going to do a TL switch, and we already\nthrow an error during an online backup from a replica if that replica\ndid a TL switch during the backup, to indicate that the backup isn't\nvalid.\n\nAttached is an initial draft of a patch to at least give a somewhat\nclearer error message when we detect that the user has asked us to\nfollow a timeline switch to a new timeline before we've reached\nconsistency (though I had to hack in a check to see if pg_rewind is\nbeing used, since apparently it actually depends on PG following a\ntimeline switch before reaching consistency...).\n\nThoughts?\n\nThanks,\n\nStephen", "msg_date": "Wed, 1 Jul 2020 00:12:14 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": true, "msg_subject": "v12 and TimeLine switches and backups/restores" }, { "msg_contents": "On Wed, Jul 1, 2020 at 12:12 AM Stephen Frost <sfrost@snowman.net> wrote:\n> Among the changes made to PG's recovery in v12 was to set\n> recovery_target_timeline to be 'latest' by default. 
That's handy when\n> you're flipping back and forth between replicas and want to have\n> everyone follow that game, but it's made doing some basic things like\n> restoring from a backup problematic.\n>\n> Specifically, if you take a backup off a primary and, while that backup\n> is going on, some replica is promoted and drops a .history file into the\n> WAL repo, that backup is no longer able to be restored with the new\n> recovery_target_timeline default. What happens is that the restore\n> process will happily follow the timeline change- even though it happened\n> before we reached consistency, and then it'll never find the needed\n> end-of-backup WAL point that would allow us to reach consistency.\n\nOuch. Should we revert that change rather than doing this? Seems like\nthis might create a lot of problems for people, and they might be\nproblems that happen rarely enough that it looks like it's working\nuntil it doesn't. What's the fix, if you hit the error? Add\nrecovery_target_timeline=<the correct timeline> to\npostgresql.auto.conf?\n\nTypo: similairly.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 1 Jul 2020 15:51:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: v12 and TimeLine switches and backups/restores" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, Jul 1, 2020 at 12:12 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > Among the changes made to PG's recovery in v12 was to set\n> > recovery_target_timeline to be 'latest' by default. 
That's handy when\n> > you're flipping back and forth between replicas and want to have\n> > everyone follow that game, but it's made doing some basic things like\n> > restoring from a backup problematic.\n> >\n> > Specifically, if you take a backup off a primary and, while that backup\n> > is going on, some replica is promoted and drops a .history file into the\n> > WAL repo, that backup is no longer able to be restored with the new\n> > recovery_target_timeline default. What happens is that the restore\n> > process will happily follow the timeline change- even though it happened\n> > before we reached consistency, and then it'll never find the needed\n> > end-of-backup WAL point that would allow us to reach consistency.\n> \n> Ouch. Should we revert that change rather than doing this? Seems like\n> this might create a lot of problems for people, and they might be\n> problems that happen rarely enough that it looks like it's working\n> until it doesn't. What's the fix, if you hit the error? Add\n> recovery_target_timeline=<the correct timeline> to\n> postgresql.auto.conf?\n\nI don't really think reverting the change to make following the latest\ntimeline would end up being terribly helpful- an awful lot of systems\nare going to be running with that anyway for HA and such, so it seems\nlike something we just need to deal with. As such, it seems like this\nis also something that would need to be back-patched, though I've not\nlooked at how much effort that'll be (yet), since it probably makes\nsense to get agreement on if this approach is the best first.\n\nThere's two solutions, really- first would be, as you suggest, configure\nPG to stay on the timeline that the backup was taken on, but I suspect\nthat's often *not* what the user actually wants- what they really want\nis to restore an earlier backup (one taken before the TL switch) and\nthen have PG follow the timeline switch when it comes across it. 
We're\nlooking at having pgbackrest automatically pick the correct backup to be\nable to make that happen when someone requests timeline-latest (pretty\nhandy having a repo full of backups that allow us to pick the right one\nbased on what the user's request is).\n\nThere's another option here, though I rejected it, which is that we\ncould possibly force the restore to ignore a TL switch before reaching\nconsistency, but if we do that then, sure, we'll finish the restore but\nwe won't be on the TL that the user asked us to be, and we wouldn't be\nable to follow a primary that's on that TL, so ultimately the restore\nwouldn't actually be what the user wanted. There's really not an option\nto do what the user wanted except to find an earlier backup to restore,\nso that's why I'm proposing that if we hit this situation we just PANIC.\n\n> Typo: similairly.\n\nFixed locally.\n\nThanks!\n\nStephen", "msg_date": "Wed, 1 Jul 2020 16:02:18 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": true, "msg_subject": "Re: v12 and TimeLine switches and backups/restores" }, { "msg_contents": "On Wed, Jul 1, 2020 at 4:02 PM Stephen Frost <sfrost@snowman.net> wrote:\n> There's two solutions, really- first would be, as you suggest, configure\n> PG to stay on the timeline that the backup was taken on, but I suspect\n> that's often *not* what the user actually wants- what they really want\n> is to restore an earlier backup (one taken before the TL switch) and\n> then have PG follow the timeline switch when it comes across it.\n\nIt seems, though, that if it IS what the user actually wants, they're\nnow going to get the wrong behavior by default, and that seems pretty\nundesirable.\n\n> There's another option here, though I rejected it, which is that we\n> could possibly force the restore to ignore a TL switch before reaching\n> consistency, but if we do that then, sure, we'll finish the restore but\n> we won't be on the TL that the user asked us to be, and we 
wouldn't be\n> able to follow a primary that's on that TL, so ultimately the restore\n> wouldn't actually be what the user wanted. There's really not an option\n> to do what the user wanted except to find an earlier backup to restore,\n> so that's why I'm proposing that if we hit this situation we just PANIC.\n\nI'm not sure I really believe this. If someone tries to configure a\nbackup without inserting a non-default setting of\nrecovery_target_timeline, is it more likely that they want backup\nrestoration to fail, or that they want to recover from the timeline\nthat will let backup restoration succeed? You're arguing for the\nformer, but my instinct was the latter. Perhaps we need to hear some\nother opinions.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 1 Jul 2020 16:08:21 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: v12 and TimeLine switches and backups/restores" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, Jul 1, 2020 at 4:02 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > There's two solutions, really- first would be, as you suggest, configure\n> > PG to stay on the timeline that the backup was taken on, but I suspect\n> > that's often *not* what the user actually wants- what they really want\n> > is to restore an earlier backup (one taken before the TL switch) and\n> > then have PG follow the timeline switch when it comes across it.\n> \n> It seems, though, that if it IS what the user actually wants, they're\n> now going to get the wrong behavior by default, and that seems pretty\n> undesirable.\n\nWell, even if we revert the change to the default of target_timeline, it\nseems like we should still add the check that I'm proposing, to address\nthe case where someone explicitly asks for the latest timeline.\n\n> > There's another option here, though I rejected it, which is that 
we\n> > could possibly force the restore to ignore a TL switch before reaching\n> > consistency, but if we do that then, sure, we'll finish the restore but\n> > we won't be on the TL that the user asked us to be, and we wouldn't be\n> > able to follow a primary that's on that TL, so ultimately the restore\n> > wouldn't actually be what the user wanted. There's really not an option\n> > to do what the user wanted except to find an earlier backup to restore,\n> > so that's why I'm proposing that if we hit this situation we just PANIC.\n> \n> I'm not sure I really believe this. If someone tries to configure a\n> backup without inserting a non-default setting of\n> recovery_target_timeline, is it more likely that they want backup\n> restoration to fail, or that they want to recover from the timeline\n> that will let backup restoration succeed? You're arguing for the\n> former, but my instinct was the latter. Perhaps we need to hear some\n> other opinions.\n\nUltimately depends on if the user is knowledgable regarding what the\ndefault is, or not. 
I'm going off the expectation that they know what\nthe default value is and the other argument is that they have no idea\nwhat the default is and just expect the restore to work- which isn't a\nwrong position to take, but the entire situation is only going to\nhappen if there's been a promotion involving a replica in the first\nplace, and that newly-promoted-replica pushed a .history file into the\nsame WAL repo that this server is following the WAL from, and if you're\nrunning with replicas and you promote them, you probably do want to be\nusing a target timeline of 'latest' or your replicas won't follow those\ntimeline switches.\n\nChanging the default now in a back-patch would actively break such\nsetups that are working now in a very non-obvious way too, only to be\ndiscovered when a replica is promoted and another replica stops keeping\nup because it keeps on its current timeline.\n\nIn the above situation, the restore will fail either way from what I've\nseen- if we hit end-of-WAL before reaching consistency then we'll PANIC,\nor if we come across a SHUTDOWN record, we'll also PANIC, so it's not\nlike the user is going to get a successful restore that's just\ncorrupted, thankfully. Catching this earlier with a clearer error\nmessage, as I'm proposing here, seems like it would generally be helpful\nthough (perhaps with an added HINT: use an earlier backup to restore\nfrom...).\n\nThanks,\n\nStephen", "msg_date": "Wed, 1 Jul 2020 16:19:27 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": true, "msg_subject": "Re: v12 and TimeLine switches and backups/restores" }, { "msg_contents": "Hi,\n\nOn Wed, Jul 01, 2020 at 12:12:14AM -0400, Stephen Frost wrote:\n> Specifically, if you take a backup off a primary and, while that backup\n> is going on, some replica is promoted and drops a .history file into the\n> WAL repo, that backup is no longer able to be restored with the new\n> recovery_target_timeline default. 
\n\nQuick question to grasp the magnitude of this:\n\nIf a user takes a backup with pg_basebackup in streaming mode, would\nthat still be a problem? Or is this \"only\" a problem for base backups\nwhich go through a wal archive common between primary and standby?\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n", "msg_date": "Wed, 1 Jul 2020 22:48:04 +0200", "msg_from": "Michael Banck <michael.banck@credativ.de>", "msg_from_op": false, "msg_subject": "Re: v12 and TimeLine switches and backups/restores" }, { "msg_contents": "Greetings,\n\n* Michael Banck (michael.banck@credativ.de) wrote:\n> On Wed, Jul 01, 2020 at 12:12:14AM -0400, Stephen Frost wrote:\n> > Specifically, if you take a backup off a primary and, while that backup\n> > is going on, some replica is promoted and drops a .history file into the\n> > WAL repo, that backup is no longer able to be restored with the new\n> > recovery_target_timeline default. \n> \n> Quick question to grasp the magnitude of this:\n> \n> If a user takes a backup with pg_basebackup in streaming mode, would\n> that still be a problem? Or is this \"only\" a problem for base backups\n> which go through a wal archive common between primary and standby?\n\nThat's a bit complicated to answer.\n\n1) If the pg_basebackup is taken off of the primary, or some\nnon-promoted replica, and the user fetches WAL during the backup and\ndoes *not* configure a restore_command, then the backup should restore\njust fine using the WAL that was fetched/streamed from the primary,\nalong the original timeline.
Of course, that system won't be then able\nto follow the new primary that was promoted during the backup.\n\n2) If the pg_basebackup is taken off of the primary, or some other\nreplica, and the user *does* configure a restore_command, and a\npromotion happens during the backup and that former-replica then pushes\na .history file into the repo that the restore_command is configured to\nuse, then I'm pretty sure this issue would be hit during the restore\n(though I haven't specifically tested that, but we do go out and look\nfor timelines pretty early on).\n\n3) If the pg_basebackup is taken off of the replica that's promoted, the\npg_basebackup will actually fail and error and there won't be a valid\nbackup in the first place.\n\nThanks,\n\nStephen", "msg_date": "Wed, 1 Jul 2020 16:57:28 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": true, "msg_subject": "Re: v12 and TimeLine switches and backups/restores" } ]
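The failure mode in the thread above reduces to one precondition: a restore may only follow a timeline switch if the branch point lies after the backup's consistency point; before that, the end-of-backup record can never be found on the new timeline. The standalone C sketch below models just that decision in isolation. It is illustrative only — `decide_timeline_switch` and its arguments are invented for this example and are not PostgreSQL's actual recovery code (which, as Stephen notes, also has to special-case pg_rewind, and would report the failure via ereport rather than a return value):

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned int TimeLineID;

/* Possible outcomes when recovery sees a newer timeline advertised. */
typedef enum
{
    STAY_ON_TIMELINE,   /* no switch requested or needed */
    FOLLOW_SWITCH,      /* safe to follow the new timeline */
    FAIL_RESTORE        /* switch predates consistency: error out */
} TliDecision;

/*
 * Toy model of the check proposed in this thread: only follow a
 * timeline switch once recovery has reached consistency.  Before that
 * point, following the switch would recover forever without ever
 * finding the end-of-backup WAL record, so fail fast instead.
 */
TliDecision
decide_timeline_switch(bool target_is_latest,
                       bool reached_consistency,
                       TimeLineID current_tli,
                       TimeLineID new_tli)
{
    if (!target_is_latest || new_tli <= current_tli)
        return STAY_ON_TIMELINE;
    if (!reached_consistency)
        return FAIL_RESTORE;
    return FOLLOW_SWITCH;
}
```

In the real system the failing branch would carry the clearer error message (and perhaps a HINT to restore an earlier backup) that the patch under discussion aims for.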
[ { "msg_contents": "Hi,\n\nAttached patch makes an adjustment to ipc.c code to be in the 80-column\nwindow.\n\nRegards,\nAmul", "msg_date": "Wed, 1 Jul 2020 12:30:29 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Cleanup - adjust the code crossing 80-column window limit" }, { "msg_contents": "changes look good to me.\n\none comment: instead of having block variables onexit, in the while\nloops in shmem_exit, can we have a single local variable defined at\nthe start of the shmem_exit function\nand reuse them in the while loops? same comment for onexit block\nvariable in proc_exit_prepare() function.\n\nPatch applies successfully on commit - 4315e8c23b9a897e12fcf91de7bfd734621096bf\n\nmake check and make check-world runs are clean.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\nOn Wed, Jul 1, 2020 at 12:31 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> Hi,\n>\n> Attached patch makes an adjustment to ipc.c code to be in the 80-column window.\n>\n> Regards,\n> Amul\n>\n\n\n", "msg_date": "Wed, 1 Jul 2020 16:29:34 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleanup - adjust the code crossing 80-column window limit" }, { "msg_contents": "On Wed, Jul 1, 2020 at 4:29 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> changes look good to me.\n\nThanks for looking at the patch.\n\n>\n> one comment: instead of having block variables onexit, in the while\n> loops in shmem_exit, can we have a single local variable defined at\n> the start of the shmem_exit function\n> and reuse them in the while loops? 
same comment for onexit block\n> variable in proc_exit_prepare() function.\n>\n\nIf you are worried about the declaration and initialization of the variable will\nhappen with every loop cycle then you shouldn't because that happens only\nonce before the loop-block is entered.\n\nRegards,\nAmul\n\n\n", "msg_date": "Wed, 1 Jul 2020 18:22:25 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Cleanup - adjust the code crossing 80-column window limit" }, { "msg_contents": "> >\n> > one comment: instead of having block variables onexit, in the while\n> > loops in shmem_exit, can we have a single local variable defined at\n> > the start of the shmem_exit function\n> > and reuse them in the while loops? same comment for onexit block\n> > variable in proc_exit_prepare() function.\n> >\n>\n> If you are worried about the declaration and initialization of the variable will\n> happen with every loop cycle then you shouldn't because that happens only\n> once before the loop-block is entered.\n>\n\nthanks. 
understood (just for info [1]) .\n\nIs there a test case covering this part of the code(I'm not sure if\none exists in the regression test suite)?\nIf no, can we add one?\n\n[1] - https://stackoverflow.com/questions/29785789/why-do-the-objects-created-in-a-loop-have-the-same-address/29785868\n\n\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Jul 2020 11:08:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleanup - adjust the code crossing 80-column window limit" }, { "msg_contents": "On 2020-07-01 09:00, Amul Sul wrote:\n> Attached patch makes an adjustment to ipc.c code to be in the 80-column \n> window.\n\nI can see an argument that this makes the code a bit easier to read, but \nmaking code fit into 80 columns doesn't have to be a goal by itself.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 3 Jul 2020 10:01:57 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Cleanup - adjust the code crossing 80-column window limit" }, { "msg_contents": "On Fri, Jul 3, 2020 at 1:32 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-07-01 09:00, Amul Sul wrote:\n> > Attached patch makes an adjustment to ipc.c code to be in the 80-column\n> > window.\n>\n> I can see an argument that this makes the code a bit easier to read, but\n> making code fit into 80 columns doesn't have to be a goal by itself.\n>\nI wouldn't disagree with that. 
I believe the 80 column rule has been documented\nfor the code readability.\n\nRegards,\nAmul\n\n\n", "msg_date": "Mon, 6 Jul 2020 09:10:26 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Cleanup - adjust the code crossing 80-column window limit" }, { "msg_contents": ">\n> Is there a test case covering this part of the code(I'm not sure if\n> one exists in the regression test suite)?\n> If no, can we add one?\n>\n\nI observed that the code areas this patch is trying to modify are\npretty much generic and are being called from many places.\nThe code basically handles exit callbacks on proc exits, on or before\nshared memory exits which is very generic and common code.\nI'm sure these parts are covered with the existing regression test suites.\n\nSince I have previously run the regression tests, now finally, +1 for\nthe patch from my end.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Jul 2020 09:57:31 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleanup - adjust the code crossing 80-column window limit" } ]
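The question Bharath raises — whether hoisting the block-local `onexit` variables out of the while loops would save anything — can be answered empirically: a variable declared inside a loop body occupies a stack slot reserved once for the enclosing function, and each iteration simply reuses it. The small self-contained program below demonstrates this. It is not PostgreSQL code (`block_local_reuses_slot` is an invented name, and the local is called `onexit` only to echo the ipc.c variables), and strictly speaking the C standard does not promise slot reuse, though mainstream compilers all behave this way:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Returns true when the block-scoped variable declared inside the loop
 * had the same address on every iteration, i.e. the compiler reserved
 * its storage once rather than "re-creating" it per cycle.
 */
bool
block_local_reuses_slot(int iterations)
{
    uintptr_t first = 0;

    for (int i = 0; i < iterations; i++)
    {
        int onexit = i;             /* block-scoped, like the ipc.c locals */
        uintptr_t here = (uintptr_t) &onexit;

        if (first == 0)
            first = here;           /* remember the first iteration's slot */
        else if (here != first)
            return false;           /* a fresh slot would show up here */
    }
    return true;
}
```

Only the initialization (`int onexit = i`) runs per iteration — and that cost is identical whether the declaration sits inside or outside the loop.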
[ { "msg_contents": "Hi,\n\nWhile reviewing copy from I identified few improvements for copy from\nthat can be done :\na) copy from stdin copies lesser amount of data to buffer even though\nspace is available in buffer because minread was passed as 1 to\nCopyGetData, Hence it only reads until the data read from libpq is\nless than minread. This can be fixed by passing the actual space\navailable in buffer, this reduces the unnecessary frequent calls to\nCopyGetData.\nb) CopyMultiInsertInfoNextFreeSlot had an unused function parameter\nthat is not being used, it can be removed.\nc) Copy from reads header line and do nothing for the header line, we\nneed not clear EOL & need not convert to server encoding for the\nheader line.\n\nAttached patch has the changes for the same.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 1 Jul 2020 18:16:01 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Improvements in Copy From" }, { "msg_contents": "On Wed, Jul 1, 2020 at 6:16 PM vignesh C <vignesh21@gmail.com> wrote:\n> Attached patch has the changes for the same.\n> Thoughts?\n>\n\nAdded a commitfest entry for this:\nhttps://commitfest.postgresql.org/29/2642/\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 14 Jul 2020 10:36:58 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improvements in Copy From" }, { "msg_contents": "On Thu, 2 Jul 2020 at 00:46, vignesh C <vignesh21@gmail.com> wrote:\n> b) CopyMultiInsertInfoNextFreeSlot had an unused function parameter\n> that is not being used, it can be removed.\n\nThis was raised in [1].
We decided not to remove it.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/flat/CAKJS1f-A5aYvPHe10Wy9LjC4RzLsBrya8b2gfuQHFabhwZT_NQ%40mail.gmail.com#3bae9a84be253c527b0e621add0fbaef\n\n\n", "msg_date": "Tue, 14 Jul 2020 17:22:19 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements in Copy From" }, { "msg_contents": "On Tue, 14 Jul 2020 at 17:22, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 2 Jul 2020 at 00:46, vignesh C <vignesh21@gmail.com> wrote:\n> > b) CopyMultiInsertInfoNextFreeSlot had an unused function parameter\n> > that is not being used, it can be removed.\n>\n> This was raised in [1]. We decided not to remove it.\n\nI just added a comment to the function to mention why we want to keep\nthe parameter. I hope that will save any wasted time proposing its\nremoval in the future.\n\nFWIW, the function is inlined. Removing it will gain us nothing\nperformance-wise anyway.\n\nDavid\n\n> [1] https://www.postgresql.org/message-id/flat/CAKJS1f-A5aYvPHe10Wy9LjC4RzLsBrya8b2gfuQHFabhwZT_NQ%40mail.gmail.com#3bae9a84be253c527b0e621add0fbaef\n\n\n", "msg_date": "Tue, 14 Jul 2020 17:43:41 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements in Copy From" }, { "msg_contents": "On Tue, Jul 14, 2020 at 11:13 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 14 Jul 2020 at 17:22, David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > On Thu, 2 Jul 2020 at 00:46, vignesh C <vignesh21@gmail.com> wrote:\n> > > b) CopyMultiInsertInfoNextFreeSlot had an unused function parameter\n> > > that is not being used, it can be removed.\n> >\n> > This was raised in [1]. We decided not to remove it.\n>\n> I just added a comment to the function to mention why we want to keep\n> the parameter. I hope that will save any wasted time proposing its\n> removal in the future.\n>\n> FWIW, the function is inlined. 
Removing it will gain us nothing\n> performance-wise anyway.\n>\n> David\n>\n> > [1] https://www.postgresql.org/message-id/flat/CAKJS1f-A5aYvPHe10Wy9LjC4RzLsBrya8b2gfuQHFabhwZT_NQ%40mail.gmail.com#3bae9a84be253c527b0e621add0fbaef\n\nThanks David for pointing it out, as this has been discussed and\nconcluded no point in discussing the same thing again. This patch has\na couple of other improvements which can still be taken forward. I\nwill remove this change and post a new patch to retain the other\nissues that were fixed.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Jul 2020 12:17:49 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improvements in Copy From" }, { "msg_contents": "On Tue, Jul 14, 2020 at 12:17 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Jul 14, 2020 at 11:13 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > On Tue, 14 Jul 2020 at 17:22, David Rowley <dgrowleyml@gmail.com> wrote:\n> > >\n> > > On Thu, 2 Jul 2020 at 00:46, vignesh C <vignesh21@gmail.com> wrote:\n> > > > b) CopyMultiInsertInfoNextFreeSlot had an unused function parameter\n> > > > that is not being used, it can be removed.\n> > >\n> > > This was raised in [1]. We decided not to remove it.\n> >\n> > I just added a comment to the function to mention why we want to keep\n> > the parameter. I hope that will save any wasted time proposing its\n> > removal in the future.\n> >\n> > FWIW, the function is inlined. Removing it will gain us nothing\n> > performance-wise anyway.\n> >\n> > David\n> >\n> > > [1] https://www.postgresql.org/message-id/flat/CAKJS1f-A5aYvPHe10Wy9LjC4RzLsBrya8b2gfuQHFabhwZT_NQ%40mail.gmail.com#3bae9a84be253c527b0e621add0fbaef\n>\n> Thanks David for pointing it out, as this has been discussed and\n> concluded no point in discussing the same thing again. This patch has\n> a couple of other improvements which can still be taken forward. 
I\n> will remove this change and post a new patch to retain the other\n> issues that were fixed.\n>\n\nI have removed the changes that david had pointed out and retained the\nremaining changes. Attaching the patch for the same.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 14 Jul 2020 19:02:09 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improvements in Copy From" }, { "msg_contents": "Hello.\n\nFYI - that patch has conflicts when applied.\n\nKind Regards\nPeter Smith\nFujitsu Australia.\n\nOn Thu, Aug 27, 2020 at 3:11 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Jul 14, 2020 at 12:17 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Tue, Jul 14, 2020 at 11:13 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > >\n> > > On Tue, 14 Jul 2020 at 17:22, David Rowley <dgrowleyml@gmail.com> wrote:\n> > > >\n> > > > On Thu, 2 Jul 2020 at 00:46, vignesh C <vignesh21@gmail.com> wrote:\n> > > > > b) CopyMultiInsertInfoNextFreeSlot had an unused function parameter\n> > > > > that is not being used, it can be removed.\n> > > >\n> > > > This was raised in [1]. We decided not to remove it.\n> > >\n> > > I just added a comment to the function to mention why we want to keep\n> > > the parameter. I hope that will save any wasted time proposing its\n> > > removal in the future.\n> > >\n> > > FWIW, the function is inlined. Removing it will gain us nothing\n> > > performance-wise anyway.\n> > >\n> > > David\n> > >\n> > > > [1] https://www.postgresql.org/message-id/flat/CAKJS1f-A5aYvPHe10Wy9LjC4RzLsBrya8b2gfuQHFabhwZT_NQ%40mail.gmail.com#3bae9a84be253c527b0e621add0fbaef\n> >\n> > Thanks David for pointing it out, as this has been discussed and\n> > concluded no point in discussing the same thing again. This patch has\n> > a couple of other improvements which can still be taken forward. 
I\n> > will remove this change and post a new patch to retain the other\n> > issues that were fixed.\n> >\n>\n> I have removed the changes that david had pointed out and retained the\n> remaining changes. Attaching the patch for the same.\n> Thoughts?\n>\n> Regards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 27 Aug 2020 15:32:11 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements in Copy From" }, { "msg_contents": "On Thu, Aug 27, 2020 at 11:02 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hello.\n>\n> FYI - that patch has conflicts when applied.\n>\n\nThanks for letting me know. Attached new patch which is rebased on top of head.\n\nRegards,\nVIgnesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sun, 30 Aug 2020 12:55:39 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improvements in Copy From" }, { "msg_contents": "Hi Vignesh\n\nOn Wed, Jul 1, 2020 at 3:46 PM vignesh C <vignesh21@gmail.com> wrote:\n\n> Hi,\n>\n> While reviewing copy from I identified few improvements for copy from\n> that can be done :\n> a) copy from stdin copies lesser amount of data to buffer even though\n> space is available in buffer because minread was passed as 1 to\n> CopyGetData, Hence it only reads until the data read from libpq is\n> less than minread. This can be fixed by passing the actual space\n> available in buffer, this reduces the unnecessary frequent calls to\n> CopyGetData.\n>\n\nwhy not applying the same optimization on file read ?\n\n\n> c) Copy from reads header line and do nothing for the header line, we\n> need not clear EOL & need not convert to server encoding for the\n> header line.\n>\n\nWe have a patch for column matching feature [1] that may need a header line\nto be further processed. 
Even without that I think it is preferable to\nprocess the header line for nothing than adding those checks to the loop,\nperformance-wise.\n\n[1].\nhttps://www.postgresql.org/message-id/flat/CAF1-J-0PtCWMeLtswwGV2M70U26n4g33gpe1rcKQqe6wVQDrFA@mail.gmail.com\n\nregards\n\nSurafel", "msg_date": "Mon, 7 Sep 2020 10:49:31 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements in Copy From" }, { "msg_contents": "My basic understanding of first part of your patch is that by\nadjusting the \"minread\" it now allows it to loop multiple times\ninternally within the CopyGetData rather than calling CopyLoadRawBuf\nfor every N lines.
There doesn't seem to be much change to what other\ncode gets executed so the saving is essentially whatever is the cost\nof making 2 x function calls (CopyLoadRawBuff + CopyGetData) x N. Is\nthat understanding correct?\n\nBut with that change there seems to be opportunity for yet another\ntiny saving possible. IIUC, now you are processing a lot more data\nwithin the CopyGetData so it is now very likely that you will also\ngobble the COPY_NEW_FE's 'c' marker. So cstate->reached_eof will be\nset. So this means the calling code of CopyReadLineText may no longer\nneed to call the CopyLoadRawBuf one last time just to discover there\nare no more bytes to read - something that it already knows if\ncstate->reached_eof == true.\n\nFor example, with your change can't you also modify CopyReadLineText like below:\n\nBEFORE\n if (!CopyLoadRawBuf(cstate))\n hit_eof = true;\n\nAFTER\n if (cstate->reached_eof)\n {\n cstate->raw_buf[0] = '\\0';\n cstate->raw_buf_index = cstate->raw_buf_len = 0;\n hit_eof = true;\n }\n else if (!CopyLoadRawBuf(cstate))\n {\n hit_eof = true;\n }\n\nWhether such a micro-optimisation is worth doing is another question.\n\n---\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\nOn Sun, Aug 30, 2020 at 5:25 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, Aug 27, 2020 at 11:02 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Hello.\n> >\n> > FYI - that patch has conflicts when applied.\n> >\n>\n> Thanks for letting me know. 
Attached new patch which is rebased on top of head.\n>\n> Regards,\n> VIgnesh\n> EnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Sep 2020 16:53:31 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements in Copy From" }, { "msg_contents": "On Mon, Sep 7, 2020 at 1:19 PM Surafel Temesgen <surafel3000@gmail.com> wrote:\n>\n>\n> Hi Vignesh\n>\n> On Wed, Jul 1, 2020 at 3:46 PM vignesh C <vignesh21@gmail.com> wrote:\n>>\n>> Hi,\n>>\n>> While reviewing copy from I identified few improvements for copy from\n>> that can be done :\n>> a) copy from stdin copies lesser amount of data to buffer even though\n>> space is available in buffer because minread was passed as 1 to\n>> CopyGetData, Hence it only reads until the data read from libpq is\n>> less than minread. This can be fixed by passing the actual space\n>> available in buffer, this reduces the unnecessary frequent calls to\n>> CopyGetData.\n>\n>\n> why not applying the same optimization on file read ?\n\nFor file read this is already taken care as you can see from below code:\nbytesread = fread(databuf, 1, maxread, cstate->copy_file);\nif (ferror(cstate->copy_file))\nereport(ERROR,\n(errcode_for_file_access(),\nerrmsg(\"could not read from COPY file: %m\")));\nif (bytesread == 0)\ncstate->reached_eof = true;\nbreak;\n\nWe do not have any condition to break unlike the case of stdin, we\nread 1 * maxread size of data, So no need to do anything for it.\n\n>\n>>\n>> c) Copy from reads header line and do nothing for the header line, we\n>> need not clear EOL & need not convert to server encoding for the\n>> header line.\n>\n>\n> We have a patch for column matching feature [1] that may need a header line to be further processed. 
Even without that I think it is preferable to process the header line for nothing than adding those checks to the loop, performance-wise.\n\nI had seen that patch, I feel that change to match the header if the\nheader is specified can be addressed in this patch if that patch gets\ncommitted first or vice versa. We are doing a lot of processing for\nthe data which we need not do anything. Shouldn't this be skipped if\nnot required. Similar check is present in NextCopyFromRawFields also\nto skip header.\n\nThoughts?\n\nRegards,\nVIgnesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Sep 2020 15:46:58 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improvements in Copy From" }, { "msg_contents": "On Wed, Sep 9, 2020 at 12:24 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> My basic understanding of first part of your patch is that by\n> adjusting the \"minread\" it now allows it to loop multiple times\n> internally within the CopyGetData rather than calling CopyLoadRawBuf\n> for every N lines. There doesn't seem to be much change to what other\n> code gets executed so the saving is essentially whatever is the cost\n> of making 2 x function calls (CopyLoadRawBuff + CopyGetData) x N. Is\n> that understanding correct?\n\nYes you are right, we will avoid the function calls and try to get as\nmany records as possible from the buffer & insert it to the relation.\n\n> But with that change there seems to be opportunity for yet another\n> tiny saving possible. IIUC, now you are processing a lot more data\n> within the CopyGetData so it is now very likely that you will also\n> gobble the COPY_NEW_FE's 'c' marker. So cstate->reached_eof will be\n> set. 
So this means the calling code of CopyReadLineText may no longer\n> need to call the CopyLoadRawBuf one last time just to discover there\n> are no more bytes to read - something that it already knows if\n> cstate->reached_eof == true.\n>\n> For example, with your change can't you also modify CopyReadLineText like below:\n>\n> BEFORE\n> if (!CopyLoadRawBuf(cstate))\n> hit_eof = true;\n>\n> AFTER\n> if (cstate->reached_eof)\n> {\n> cstate->raw_buf[0] = '\\0';\n> cstate->raw_buf_index = cstate->raw_buf_len = 0;\n> hit_eof = true;\n> }\n> else if (!CopyLoadRawBuf(cstate))\n> {\n> hit_eof = true;\n> }\n>\n> Whether such a micro-optimisation is worth doing is another question.\nYes, what you suggested can also be done, but even I have the same\nquestion as you. Because we will reduce just one function call, the\neof check is present immediately in the function, Should we include\nthis or not?\n\nRegards,\nVIgnesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Sep 2020 16:51:22 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improvements in Copy From" }, { "msg_contents": "On Thu, Sep 10, 2020 at 1:17 PM vignesh C <vignesh21@gmail.com> wrote:\n\n>\n> >\n> > We have a patch for column matching feature [1] that may need a header\n> line to be further processed. Even without that I think it is preferable to\n> process the header line for nothing than adding those checks to the loop,\n> performance-wise.\n>\n> I had seen that patch, I feel that change to match the header if the\n> header is specified can be addressed in this patch if that patch gets\n> committed first or vice versa. We are doing a lot of processing for\n> the data which we need not do anything. Shouldn't this be skipped if\n> not required. Similar check is present in NextCopyFromRawFields also\n> to skip header.\n>\n\nThe existing check is unavoidable but we can live better without the checks\nadded by the patch. 
For very large files the loop may iterate millions of\ntimes if it is not in billion and I am sure doing the check that many times\nwill incur noticeable performance degradation than further processing a\nsingle line.\n\nregards\n\nSurafel", "msg_date": "Thu, 10 Sep 2020 21:55:27 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements in Copy From" }, { "msg_contents": "At Thu, 10 Sep 2020 21:55:27 +0300, Surafel Temesgen <surafel3000@gmail.com> wrote in \n> On Thu, Sep 10, 2020 at 1:17 PM vignesh C <vignesh21@gmail.com> wrote:\n> \n> >\n> > >\n> > > We have a patch for column matching feature [1] that may need a header\n> > line to be further processed. 
Even without that I think it is preferable to\n> > process the header line for nothing than adding those checks to the loop,\n> > performance-wise.\n> >\n> > I had seen that patch, I feel that change to match the header if the\n> > header is specified can be addressed in this patch if that patch gets\n> > committed first or vice versa. We are doing a lot of processing for\n> > the data which we need not do anything. Shouldn't this be skipped if\n> > not required. Similar check is present in NextCopyFromRawFields also\n> > to skip header.\n> >\n> \n> The existing check is unavoidable but we can live better without the checks\n> added by the patch. For very large files the loop may iterate millions of\n> times if it is not in billion and I am sure doing the check that many times\n> will incur noticeable performance degradation than further processing a\n> single line.\n\nFWIW, I thought the same thing seeing the additional if-conditions. It\ngives more loss than gain.\n\nFor the first part, the patch reveals COPY_NEW_FE, which I don't think\nto be a knowledge for the function, to CopyGetData. Considering that\nthat doesn't seem to offer noticeable performance gain, I don't think\nwe should do that. On the contrary, if incoming data were\nintermittently delayed for some reasons (heavy load of client or\nin-between network), this patch would make things worse by waiting for\ndelayed bits before processing already received bits.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 11 Sep 2020 15:58:04 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements in Copy From" }, { "msg_contents": "On Thu, Sep 10, 2020 at 9:21 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Whether such a micro-optimisation is worth doing is another question.\n> Yes, what you suggested can also be done, but even I have the same\n> question as you. 
Because we will reduce just one function call, the\n> eof check is present immediately in the function, Should we include\n> this or not?\n\nI expect the difference from my suggestion is too small to be measured.\n\nProbably it is not worth changing the already complicated code unless\nthose changes can achieve something observable.\n\n~~\n\nFYI, I ran a few performance tests BEFORE/AFTER applying your patch.\n\nPerf results for \\COPY 5GB CSV file to UNLOGGED table.\n\nperf -a –g <pid>\npsql -d test -c \"\\copy tbl from '/my/path/data_5GB.csv' with (format csv);”\nperf report –g\n\nBEFORE\n#1 CopyReadLineText = 12.70%, CopyLoadRawBuf = 0.81%\n#2 CopyReadLineText = 12.54%, CopyLoadRawBuf = 0.81%\n#3 CopyReadLineText = 12.52%, CopyLoadRawBuf = 0.81%\n\nAFTER\n#1 CopyReadLineText = 12.55%, CopyLoadRawBuf = 1.20%\n#2 CopyReadLineText = 12.15%, CopyLoadRawBuf = 1.10%\n#3 CopyReadLineText = 13.11%, CopyLoadRawBuf = 1.24%\n#4 CopyReadLineText = 12.86%, CopyLoadRawBuf = 1.18%\n\nI didn't quite know how to interpret those results. It was opposite\nwhat I expected. Perhaps the original excessive CopyLoadRawBuf calls\nwere so brief they could often avoid being sampled? 
Anyway, I hope you\nhave a better understanding of perf than I do and can explain it.\n\nI then repeated/times same tests but without perf\n\nBEFORE\n#1 4min.36s\n#2 4min.45s\n#3 4min.43s\n#4 4min.34s\n\nAFTER\n#1 4min.41s\n#2 4min.37s\n#3 4min.34s\n\nAs you can see, unfortunately, the patch gave no observable benefit\nfor my test case.\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 11 Sep 2020 18:44:13 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements in Copy From" }, { "msg_contents": "At Fri, 11 Sep 2020 18:44:13 +1000, Peter Smith <smithpb2250@gmail.com> wrote in \r\n> On Thu, Sep 10, 2020 at 9:21 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> > > Whether such a micro-optimisation is worth doing is another question.\r\n> > Yes, what you suggested can also be done, but even I have the same\r\n> > question as you. Because we will reduce just one function call, the\r\n> > eof check is present immediately in the function, Should we include\r\n> > this or not?\r\n> \r\n> I expect the difference from my suggestion is too small to be measured.\r\n> \r\n> Probably it is not worth changing the already complicated code unless\r\n> those changes can achieve something observable.\r\n> \r\n> ~~\r\n> \r\n> FYI, I ran a few performance tests BEFORE/AFTER applying your patch.\r\n> \r\n> Perf results for \\COPY 5GB CSV file to UNLOGGED table.\r\n> \r\n> perf -a –g <pid>\r\n> psql -d test -c \"\\copy tbl from '/my/path/data_5GB.csv' with (format csv);”\r\n> perf report –g\r\n> \r\n> BEFORE\r\n> #1 CopyReadLineText = 12.70%, CopyLoadRawBuf = 0.81%\r\n> #2 CopyReadLineText = 12.54%, CopyLoadRawBuf = 0.81%\r\n> #3 CopyReadLineText = 12.52%, CopyLoadRawBuf = 0.81%\r\n> \r\n> AFTER\r\n> #1 CopyReadLineText = 12.55%, CopyLoadRawBuf = 1.20%\r\n> #2 CopyReadLineText = 12.15%, CopyLoadRawBuf = 1.10%\r\n> #3 CopyReadLineText = 13.11%, CopyLoadRawBuf = 1.24%\r\n> #4 CopyReadLineText = 12.86%, 
CopyLoadRawBuf = 1.18%\r\n> \r\n> I didn't quite know how to interpret those results. It was opposite\r\n> what I expected. Perhaps the original excessive CopyLoadRawBuf calls\r\n> were so brief they could often avoid being sampled? Anyway, I hope you\r\n> have a better understanding of perf than I do and can explain it.\r\n> \r\n> I then repeated/times same tests but without perf\r\n> \r\n> BEFORE\r\n> #1 4min.36s\r\n> #2 4min.45s\r\n> #3 4min.43s\r\n> #4 4min.34s\r\n> \r\n> AFTER\r\n> #1 4min.41s\r\n> #2 4min.37s\r\n> #3 4min.34s\r\n> \r\n> As you can see, unfortunately, the patch gave no observable benefit\r\n> for my test case.\r\n\r\nThat observation agrees with my assumption.\r\n\r\nAt Fri, 11 Sep 2020 15:58:04 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \r\nme> we should do that. On the contrary, if incoming data were\r\nme> intermittently delayed for some reasons (heavy load of client or\r\nme> in-between network), this patch would make things worse by waiting for\r\nme> delayed bits before processing already received bits.\r\n\r\nIt seems that a slow network is enough to cause that behavior even\r\nwithout any trouble,\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Fri, 11 Sep 2020 18:04:01 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements in Copy From" } ]
[ { "msg_contents": "Outline-atomics is a gcc compilation flag that adds runtime detection of whether or not the cpu supports atomic instructions. CPUs that don't support atomic instructions will use the old load-exclusive/store-exclusive instructions. If a different compilation flag defined an architecture that unconditionally supports atomic instructions (e.g. -march=armv8.2), the outline-atomics flag will have no effect.\r\n\r\nThe patch was tested to improve pgbench simple-update by 10% and sysbench write-only by 3% on a 64-core armv8.2 machine (AWS m6g.16xlarge). Select-only and read-only benchmarks were not significantly affected, and neither was performance on a 16-core armv8.0 machine that does not support atomic instructions (AWS a1.4xlarge).\r\n\r\nThe patch uses an existing configure.in macro to detect compiler support of the flag. Checking for aarch64 machine is not strictly necessary, but was added for readability.\r\n\r\nThank you!\r\nTsahi", "msg_date": "Wed, 1 Jul 2020 15:40:38 +0000", "msg_from": "\"Zidenberg, Tsahi\" <tsahee@amazon.com>", "msg_from_op": true, "msg_subject": "[PATCH] audo-detect and use -moutline-atomics compilation flag for\n aarch64" }, { "msg_contents": "On 01/07/2020, 18:40, \"Zidenberg, Tsahi\" <tsahee@amazon.com> wrote:\r\n\r\n> Outline-atomics is a gcc compilation flag that adds runtime detection of whether or not the cpu\r\n> supports atomic instructions. CPUs that don't support atomic instructions will use the old \r\n> load-exclusive/store-exclusive instructions. If a different compilation flag defined an architecture\r\n> that unconditionally supports atomic instructions (e.g. -march=armv8.2), the outline-atomics flag\r\n> will have no effect.\r\n>\r\n> The patch was tested to improve pgbench simple-update by 10% and sysbench write-only by 3%\r\n> on a 64-core armv8.2 machine (AWS m6g.16xlarge). 
Select-only and read-only benchmarks were\r\n> not significantly affected, and neither was performance on a 16-core armv8.0 machine that does\r\n> not support atomic instructions (AWS a1.4xlarge).\r\n>\r\n> The patch uses an existing configure.in macro to detect compiler support of the flag. Checking for\r\n> aarch64 machine is not strictly necessary, but was added for readability.\r\n\r\nAdded a commitfest entry:\r\nhttps://commitfest.postgresql.org/29/2637/\r\n\r\nThank you!\r\nTsahi\r\n\r\n\r\n\r\n\r\n", "msg_date": "Tue, 7 Jul 2020 15:28:02 +0000", "msg_from": "\"Zidenberg, Tsahi\" <tsahee@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] audo-detect and use -moutline-atomics compilation flag\n for aarch64" }, { "msg_contents": "Hi,\n\nOn 2020-07-01 15:40:38 +0000, Zidenberg, Tsahi wrote:\n> Outline-atomics is a gcc compilation flag that adds runtime detection\n> of weather or not the cpu supports atomic instructions. CPUs that\n> don't support atomic instructions will use the old\n> load-exclusive/store-exclusive instructions. If a different\n> compilation flag defined an architecture that unconditionally supports\n> atomic instructions (e.g. -march=armv8.2), the outline-atomic flag\n> will have no effect.\n\nSounds attractive.\n\n\n> The patch was tested to improve pgbench simple-update by 10% and\n> sysbench write-only by 3% on a 64-core armv8.2 machine (AWS\n> m6g.16xlarge). Select-only and read-only benchmarks were not\n> significantly affected, and neither was performance on a 16-core\n> armv8.0 machine that does not support atomic instructions (AWS\n> a1.4xlarge).\n\nWhat does \"not significantly affected\" exactly mean? Could you post the\nraw numbers? 
I'm a bit concerned that the additional conditional\nbranches on platforms without non ll/sc atomics could hurt noticably.\n\nI'm surprised that read-only didn't benefit - with ll/sc that ought to\nhave pretty high contention on a few lwlocks.\n\nCould you post the numbers?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 23 Jul 2020 18:17:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] audo-detect and use -moutline-atomics compilation flag\n for aarch64" }, { "msg_contents": "Hello!\r\n\r\nFirst, I apologize for taking so long to answer. This e-mail regretfully got lost in my inbox.\r\n\r\nOn 24/07/2020, 4:17, \"Andres Freund\" <andres@anarazel.de> wrote:\r\n\r\n > What does \"not significantly affected\" exactly mean? Could you post the\r\n > raw numbers?\r\n\r\nThe following tests show benchmark behavior on m6g.8xl instance (32-core with LSE support)\r\nand a1.4xlarge (16-core, no LSE support) with and without the patch, based on postgresql 12.4.\r\nTests are pgbench select-only/simple-update, and sysbench read-only/write only.\r\n\r\n. select-only. simple-update. read-only. write-only\r\nm6g.8xlarge/vanila. 482130. 56275. 273327. 33364\r\nm6g.8xlarge/patch. 493748. 59681. 262702. 33024\r\na1.4xlarge/vanila. 82437. 13978. 62489. 2928\r\na1.4xlarge/patch. 79499. 13932. 62796. 2945\r\n\r\nResults obviously change with OS / parameters /etc. I have attempted ensure a fair comparison,\r\nBut I don't think these numbers should be taken as absolute.\r\nAs reference points, m6g instance compiled with -march=native flag, and m5g (x86) instances:\r\n\r\nm6g.8xlarge/native. 522771. 60354. 261366. 33582\r\nm5.8xlarge. 362908. 58732. 147730. 
32750\r\n\r\n > I'm a bit concerned that the additional conditional\r\n > branches on platforms without non ll/sc atomics could hurt noticably.\r\n\r\nAs can be seen in a1 results - the difference for CPUSs with no LSE atomic support is low.\r\nLocks have one branch added, which is always taken the same way and thus easy to predict.\r\n\r\n > I'm surprised that read-only didn't benefit - with ll/sc that ought to\r\n > have pretty high contention on a few lwlocks.\r\n\r\nThese results show only about 6% performance increase in simple-update, and very close\r\nperformance in other results, most of which could be attributed to benchmark result jitter.\r\nThese results from \"well behaved\" benchmarks do not show the full importance of using \r\noutline-atomics. I have observed in some experiments with other values and larger systems\r\na crush of performance including read-only tests, which was caused by continuously failing to\r\ncommit strx instructions. In such cases, outline-atomics improved performance by more\r\nthan 2x factor. These cases are not always easy to replicate.\r\n\r\nThank you!\r\nand sorry again for the delay\r\nTsahi Zidenberg\r\n\r\n", "msg_date": "Sun, 6 Sep 2020 21:00:02 +0000", "msg_from": "\"Zidenberg, Tsahi\" <tsahee@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] audo-detect and use -moutline-atomics compilation flag\n for\n aarch64" }, { "msg_contents": "On Sun, Sep 06, 2020 at 09:00:02PM +0000, Zidenberg, Tsahi wrote:\n> These results show only about 6% performance increase in simple-update, and very close\n> performance in other results, most of which could be attributed to benchmark result jitter.\n> These results from \"well behaved\" benchmarks do not show the full importance of using \n> outline-atomics. I have observed in some experiments with other values and larger systems\n> a crush of performance including read-only tests, which was caused by continuously failing to\n> commit strx instructions. 
In such cases, outline-atomics improved performance by more\n> than 2x factor. These cases are not always easy to replicate.\n\nInteresting stuff. ARM-related otimizations is not something you see\na lot around here. Please note that your latest patch fails to apply,\ncould you provide a rebase? I am marking the patch as waiting on\nauthor for the time being.\n--\nMichael", "msg_date": "Mon, 7 Sep 2020 12:10:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] audo-detect and use -moutline-atomics compilation flag\n for aarch64" }, { "msg_contents": "\"Zidenberg, Tsahi\" <tsahee@amazon.com> writes:\n> Outline-atomics is a gcc compilation flag that adds runtime detection of weather or not the cpu supports atomic instructions. CPUs that don't support atomic instructions will use the old load-exclusive/store-exclusive instructions. If a different compilation flag defined an architecture that unconditionally supports atomic instructions (e.g. -march=armv8.2), the outline-atomic flag will have no effect.\n\nI wonder what version of gcc you intend this for. AFAICS, older\ngcc versions lack this flag at all, while newer ones have it on\nby default. Docs I can find on the net suggest that it would only\nhelp to supply the flag when using gcc 10.0.x. Is there a sufficient\npopulation of production systems using such gcc releases to make it\nworth expending configure cycles on? (That's sort of a trick question,\nbecause the GCC docs make it look like 10.0.x was never considered\nto be production ready.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Sep 2020 18:00:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] audo-detect and use -moutline-atomics compilation flag\n for aarch64" }, { "msg_contents": "On 07/09/2020, 6:11, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n > Interesting stuff. 
ARM-related otimizations is not something you see\r\n > a lot around here.\r\n\r\nLet's hope that will change :)\r\n\r\n > could you provide a rebase? I am marking the patch as waiting on\r\n > author for the time being.\r\n\r\nOf course. Attached.\r\n\r\n--\r\nThank you!\r\nTsahi.", "msg_date": "Tue, 8 Sep 2020 15:34:18 +0000", "msg_from": "\"Zidenberg, Tsahi\" <tsahee@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] audo-detect and use -moutline-atomics compilation flag\n for\n aarch64" }, { "msg_contents": "On 08/09/2020, 1:01, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n\r\n > I wonder what version of gcc you intend this for. AFAICS, older\r\n > gcc versions lack this flag at all, while newer ones have it on\r\n > by default.\r\n\r\n\r\n(previously sent private reply, sorry)\r\n\r\nThe moutline-atomics flag showed substantial enough improvements\r\nthat it has been backported to GCC 9, 8 and there is a gcc-7 branch in\r\nthe works.\r\nUbuntu has integrated this in 20.04, Amazon Linux 2 supports it,\r\nwith other distributions including Ubuntu 18.04 and Debian on the way.\r\nall distributions, including the upcoming Ubuntu with GCC-10, have\r\nmoutline-atomics turned off by default.\r\n\r\n", "msg_date": "Thu, 10 Sep 2020 06:37:37 +0000", "msg_from": "\"Zidenberg, Tsahi\" <tsahee@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] audo-detect and use -moutline-atomics compilation flag\n for\n aarch64" }, { "msg_contents": "On 10/09/2020 09:37, Zidenberg, Tsahi wrote:\n> On 08/09/2020, 1:01, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\n> \n> > I wonder what version of gcc you intend this for. 
AFAICS, older\n> > gcc versions lack this flag at all, while newer ones have it on\n> > by default.\n> \n> \n> (previously sent private reply, sorry)\n> \n> The moutline-atomics flag showed substantial enough improvements\n> that it has been backported to GCC 9, 8 and there is a gcc-7 branch in\n> the works.\n> Ubuntu has integrated this in 20.04, Amazon Linux 2 supports it,\n> with other distributions including Ubuntu 18.04 and Debian on the way.\n> all distributions, including the upcoming Ubuntu with GCC-10, have\n> moutline-atomics turned off by default.\n\nIf it's a good idea to use -moutline-atomics, I would expect the \ncompiler or distribution to enable it by default. And as you pointed \nout, many have. For the others, there are probably reasons they haven't, \nlike being conservative in general. Whatever the reasons, IMHO we should \nnot second-guess them.\n\nI'm marking this as Rejected in the commitfest. But thanks for the \nbenchmarking, that is valuable information nevertheless.\n\n- Heikki\n\n\n", "msg_date": "Tue, 29 Sep 2020 10:21:01 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: [PATCH] audo-detect and use -moutline-atomics compilation flag\n for aarch64" }, { "msg_contents": "\r\n\r\nOn 29/09/2020, 10:21, \"Heikki Linnakangas\" <hlinnaka@iki.fi> wrote:\r\n > If it's a good idea to use -moutline-atomics, I would expect the\r\n > compiler or distribution to enable it by default. And as you pointed\r\n > out, many have.\r\n\r\n-moutline-atomics is only enabled by default on the gcc-10 branch where\r\nit was originally developed. It was important enough to be backported\r\nto previous versions and picked up by e.g. ubuntu and amazon-linux.\r\nHowever, outline-atomics is not enabled by default in any backports that\r\nI'm aware of. 
Ubuntu 20.04 even turned it off by default for gcc-10,\r\nwhich seems like a compatibility step with the main gcc-9 compiler.\r\nAlways-enabled outline-atomic is, sadly, many years in the\r\nfuture for release systems.\r\n\r\n > For the others, there are probably reasons they haven't,\r\n > like begin conservative in general. Whatever the reasons, IMHO we should\r\n > not second-guess them.\r\n\r\nI assume GCC chose conservatively not to add code by default that\r\nwon't help old CPUs when increasing minor versions (although I see\r\nno performance degradation in real software).\r\nOn the other hand, the feature was important enough to be\r\nback-ported to allow software to take advantage of it.\r\nPostgresql should be the most advanced open source database.\r\nAs I understand it, it should be able to handle as well as possible\r\nlarge workloads on large modern machines like Graviton2, and\r\nthat means using LSE.\r\n\r\n > I'm marking this as Rejected in the commitfest. But thanks for the\r\n > benchmarking, that is valuable information nevertheless.\r\n\r\nCould additional data change your mind?\r\n\r\n\r\n\r\n", "msg_date": "Wed, 30 Sep 2020 16:04:49 +0000", "msg_from": "\"Zidenberg, Tsahi\" <tsahee@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] audo-detect and use -moutline-atomics compilation flag\n for\n aarch64" }, { "msg_contents": "On 30/09/2020 19:04, Zidenberg, Tsahi wrote:\n> Ubuntu 20.04 even turned it off by default for gcc-10, which seems\n> like a compatibility step with the main gcc-9 compiler.\nOk, I definitely don't want to override that decision.\n\n>> I'm marking this as Rejected in the commitfest. But thanks for the\n>> benchmarking, that is valuable information nevertheless.\n> \n> Could additional data change your mind?\n\nI doubt it. IMO we shouldn't second-guess decisions made by compiler and \ndistribution vendors, and more performance data won't change that \nprinciple. 
For comparison, we also don't set -O or -funroll-loops or any \nother flags to enable/disable specific optimizations. (Except for a few \nspecific source files, like checksum.c, but those are exceptions the rule.)\n\nIf some other committer thinks differently, I won't object, but I'm not \ngoing to commit this. Maybe you should speak to the distribution vendors \nor the folk packaging PostgreSQL for those distributions, instead.\n\n- Heikki\n\n\n", "msg_date": "Wed, 30 Sep 2020 21:02:14 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: [PATCH] audo-detect and use -moutline-atomics compilation flag\n for aarch64" }, { "msg_contents": "\"Zidenberg, Tsahi\" <tsahee@amazon.com> writes:\n> On 29/09/2020, 10:21, \"Heikki Linnakangas\" <hlinnaka@iki.fi> wrote:\n>>>>>> If it's a good idea to use -moutline-atomics, I would expect the\n>>>>>> compiler or distribution to enable it by default. And as you pointed\n>>>>>> out, many have.\n\n> -moutline-atomics is only enabled by default on the gcc-10 branch where\n> it was originally developed. It was important enough to be backported\n> to previous versions and picked up by e.g. ubuntu and amazon-linux.\n> However, outline-atomics is not enabled by default in any backports that\n> I'm aware of. Ubuntu 20.04 even turned it off by default for gcc-10,\n> which seems like a compatibility step with the main gcc-9 compiler.\n> Always-enabled outline-atomic is, sadly, many years in the\n> future for release systems.\n\nI don't find this argument terribly convincing. I agree that it'll be\na couple years before gcc 10 is in use in \"stable production\" systems.\nBut it seems to me that big-iron aarch64 is also some way off from\nappearing in stable production systems. 
By the time this actually\nmatters to any measurable fraction of our users, distros will have\nconverged on reasonable default settings for this option.\n\nIn the meantime, you are asking that we more or less permanently expend\nconfigure cycles on checking for an option that seems to have a pretty\nshort expected useful life, even on the small minority of builds where\nit'll do anything at all. The cost/benefit ratio doesn't seem very\nattractive.\n\nNone of this prevents somebody from applying the switch in their own\nbuilds, of course. But I concur with Heikki's reasoning that it's\nprobably not a great idea for us to, by default, override the distro's\ndefault on this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 30 Sep 2020 14:08:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] audo-detect and use -moutline-atomics compilation flag\n for aarch64" }, { "msg_contents": "Hi!\n\nOn Mon, Sep 7, 2020 at 1:12 AM Zidenberg, Tsahi <tsahee@amazon.com> wrote:\n> First, I apologize for taking so long to answer. This e-mail regretfully got lost in my inbox.\n>\n> On 24/07/2020, 4:17, \"Andres Freund\" <andres@anarazel.de> wrote:\n>\n> > What does \"not significantly affected\" exactly mean? Could you post the\n> > raw numbers?\n>\n> The following tests show benchmark behavior on m6g.8xl instance (32-core with LSE support)\n> and a1.4xlarge (16-core, no LSE support) with and without the patch, based on postgresql 12.4.\n> Tests are pgbench select-only/simple-update, and sysbench read-only/write only.\n>\n> . select-only. simple-update. read-only. write-only\n> m6g.8xlarge/vanila. 482130. 56275. 273327. 33364\n> m6g.8xlarge/patch. 493748. 59681. 262702. 33024\n> a1.4xlarge/vanila. 82437. 13978. 62489. 2928\n> a1.4xlarge/patch. 79499. 13932. 62796. 2945\n>\n> Results obviously change with OS / parameters /etc. 
I have attempted ensure a fair comparison,\n> But I don't think these numbers should be taken as absolute.\n> As reference points, m6g instance compiled with -march=native flag, and m5g (x86) instances:\n>\n> m6g.8xlarge/native. 522771. 60354. 261366. 33582\n> m5.8xlarge. 362908. 58732. 147730. 32750\n\nI'd like to resurrect this thread, if there is still interest from\nyour side. What number of clients and jobs did you use with pgbench?\n\nI've noticed far more dramatic effect on high number of clients.\nCould you verify that?\nhttps://akorotkov.github.io/blog/2021/04/30/arm/\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 23 Jun 2022 12:56:44 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] audo-detect and use -moutline-atomics compilation flag\n for aarch64" }, { "msg_contents": "Hi,\n\nFYI, people interested in this thread might be interested in\npgsql-bugs #18610. There are two related issues here:\n\n1. Some people want to use LSE on modern ARM servers so they want to\nuse -moutline-atomics, which IIUC adds auto-detection logic with\nfallback code so it can still run on the first generation of aarch64\nARMv8 (without .1) hardware. That was being discussed as a feature\nproposal for master. (People could already access that if they\ncompile from source by using -march=something_modern, but the big\ndistributions are in charge of what they target and AFAIK mostly still\nchoose ARMv8, so this outline atomics idea is a nice workaround to\nmake everyone happy, I haven't studied exactly how it works.)\n\n2. Clang has started assuming -moutline-atomics in some version, so\nit's already compiling .bc files that way, so it breaks if our JIT\nsystem decides to inline SQL-callable functions, so we'll need to\ndecide what to do about that and back-patch something. 
Conservative\nchoice would be to stop it from doing that with -mno-outline-atomics,\nuntil this thread makes progress, but perhaps people closer to the\nsubject have another idea...\n\n[1] https://www.postgresql.org/message-id/flat/18610-37bf303f904fede3%40postgresql.org\n\n\n", "msg_date": "Fri, 13 Sep 2024 10:55:31 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] audo-detect and use -moutline-atomics compilation flag\n for aarch64" } ]
[ { "msg_contents": "Hi hackers,\r\n\r\n\r\n\r\nCurrently, the COPY TO api does not support callback functions, while the COPY FROM api does. The COPY TO code does, however, include placeholders for supporting callbacks in the future.\r\n\r\n\r\n\r\nRounding out the support of callback functions to both could be very beneficial for extension development. In particular, supporting callbacks for COPY TO will allow developers to utilize the preexisting command in order to create tools that give users more support for moving data for storage, backup, analytics, etc.\r\n\r\n\r\n\r\nWe are aiming to get the support in core PostgreSQL and add COPY TO callback support in the next commitfest. The attached patch contains a change to COPY TO api to support callbacks.\r\n\r\n\r\n\r\nBest,\r\n\r\nBilva", "msg_date": "Wed, 1 Jul 2020 21:41:12 +0000", "msg_from": "\"Sanaba, Bilva\" <bilvas@amazon.com>", "msg_from_op": true, "msg_subject": "Adding Support for Copy callback functionality on COPY TO api " }, { "msg_contents": "Hi Bilva,\n\nThank you for registering this patch!\n\nI had a few suggestions:\n\n1. Please run pg_indent[1] on your code. Make sure you add\ncopy_data_destination_cb to src/tools/pgindent/typedefs.list. Please\nrun pg_indent on only the files you changed (it will take files as\ncommand line args)\n\n2. For features such as this, it is often helpful to find a use case\nwithin backend/utility/extension code that demonstrate thes callback and\nto include the code to exercise it with the patch. Refer how\ncopy_read_data() is used as copy_data_source_cb, to copy the data from\nthe query results from the WAL receiver (Refer: copy_table()). Finding\na similar use case in the source tree will make a stronger case\nfor this patch.\n\n3. Wouldn't we want to return the number of bytes written from\ncopy_data_destination_cb? (Similar to copy_data_source_cb) We should\nalso think about how to represent failure. 
Looking at CopySendEndOfRow(),\nwe should error out like we do for the other copy_dests after checking the\nreturn value for the callback invocation.\n\n4.\n> bool pipe = (cstate->filename == NULL) && (cstate->data_destination_cb == NULL);\n\nI think a similar change should also be applied to BeginCopyFrom() and\nCopyFrom(). Or even better, such code could be refactored to have a\nseparate destination type COPY_PIPE. This of course, will be a separate\npatch. I think the above line is okay for this patch.\n\nRegards,\nSoumyadeep\n\n\n", "msg_date": "Mon, 14 Sep 2020 16:28:12 -0700", "msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding Support for Copy callback functionality on COPY TO api" }, { "msg_contents": "On Mon, Sep 14, 2020 at 04:28:12PM -0700, Soumyadeep Chakraborty wrote:\n> I think a similar change should also be applied to BeginCopyFrom() and\n> CopyFrom(). Or even better, such code could be refactored to have a\n> separate destination type COPY_PIPE. This of course, will be a separate\n> patch. I think the above line is okay for this patch.\n\nThis feedback has not been answered after two weeks, so I have marked\nthe patch as returned with feedback.\n--\nMichael", "msg_date": "Wed, 30 Sep 2020 16:41:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Adding Support for Copy callback functionality on COPY TO api" }, { "msg_contents": "On 7/2/20 2:41 AM, Sanaba, Bilva wrote:\n> Hi hackers,\n> \n> Currently, the COPY TO api does not support callback functions, while \n> the COPY FROM api does. The COPY TO code does, however, include \n> placeholders for supporting callbacks in the future.\n> \n> Rounding out the support of callback functions to both could be very \n> beneficial for extension development. 
In particular, supporting \n> callbacks for COPY TO will allow developers to utilize the preexisting \n> command in order to create tools that give users more support for moving \n> data for storage, backup, analytics, etc.\n> \n> We are aiming to get the support in core PostgreSQL and add COPY TO \n> callback support in the next commitfest.The attached patch contains a \n> change to COPY TO api to support callbacks.\n> \nYour code almost exactly the same as proposed in [1] as part of 'Fast \nCOPY FROM' command. But it seems there are differences.\n\n[1] \nhttps://www.postgresql.org/message-id/flat/3d0909dc-3691-a576-208a-90986e55489f%40postgrespro.ru\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Wed, 30 Sep 2020 13:48:12 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Adding Support for Copy callback functionality on COPY TO api" }, { "msg_contents": "On Wed, Sep 30, 2020 at 04:41:51PM +0900, Michael Paquier wrote:\n> This feedback has not been answered after two weeks, so I have marked\n> the patch as returned with feedback.\n\nI've rebased this patch and will register it in the next commitfest\nshortly.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 2 Aug 2022 16:49:19 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding Support for Copy callback functionality on COPY TO api" }, { "msg_contents": "On Tue, Aug 02, 2022 at 04:49:19PM -0700, Nathan Bossart wrote:\n> I've rebased this patch and will register it in the next commitfest\n> shortly.\n\nPerhaps there should be a module in src/test/modules/ to provide a\nshort, still useful, example of what this could achieve?\n--\nMichael", "msg_date": "Fri, 7 Oct 2022 15:49:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Adding Support for Copy callback 
functionality on COPY TO api" }, { "msg_contents": "On Fri, Oct 07, 2022 at 03:49:31PM +0900, Michael Paquier wrote:\n> Perhaps there should be a module in src/test/modules/ to provide a\n> short, still useful, example of what this could achieve?\n\nHere is an attempt at adding such a test module.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 7 Oct 2022 14:48:24 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding Support for Copy callback functionality on COPY TO api" }, { "msg_contents": "On Fri, Oct 07, 2022 at 02:48:24PM -0700, Nathan Bossart wrote:\n> Here is an attempt at adding such a test module.\n\nUsing an ereport(NOTICE) to show the data reported in the callback is\nfine by me. How about making the module a bit more modular, by\npassing as argument a regclass and building a list of arguments with\nit? You may want to hold the ShareAccessLock on the relation until\nthe end of the transaction in this example.\n--\nMichael", "msg_date": "Sat, 8 Oct 2022 14:11:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Adding Support for Copy callback functionality on COPY TO api" }, { "msg_contents": "On Sat, Oct 08, 2022 at 02:11:38PM +0900, Michael Paquier wrote:\n> Using an ereport(NOTICE) to show the data reported in the callback is\n> fine by me. How about making the module a bit more modular, by\n> passing as argument a regclass and building a list of arguments with\n> it? You may want to hold the ShareAccessLock on the relation until\n> the end of the transaction in this example.\n\nYeah, that makes more sense. 
It actually simplifies things a bit, too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 8 Oct 2022 10:37:41 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding Support for Copy callback functionality on COPY TO api" }, { "msg_contents": "On Sat, Oct 08, 2022 at 10:37:41AM -0700, Nathan Bossart wrote:\n> Yeah, that makes more sense. It actually simplifies things a bit, too.\n\nSorry for the noise. There was an extra #include in v4 that I've removed\nin v5.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 8 Oct 2022 14:14:04 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding Support for Copy callback functionality on COPY TO api" }, { "msg_contents": "On Sun, Oct 9, 2022 at 2:44 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> Sorry for the noise. There was an extra #include in v4 that I've removed\n> in v5.\n\nIIUC, COPY TO callback helps move a table's data out of postgres\nserver. Just wondering, how is it different from existing solutions\nlike COPY TO ... PROGRAM/FILE, logical replication, pg_dump etc. that\ncan move a table's data out? I understandb that the COPY FROM callback\nwas needed for logical replication 7c4f52409. Mentioning a concrete\nuse-case helps here.\n\nI'm not quite sure if we need a separate module to just tell how to\nuse this new callback. I strongly feel that it's not necessary. It\nunnecessarily creates extra code (actual code is 25 LOC with v1 patch\nbut 150 LOC with v5 patch) and can cause maintenance burden. These\ncallback APIs are simple enough to understand for those who know\nBeginCopyTo() or BeginCopyFrom() and especially for those who know how\nto write extensions. These are not APIs that an end-user uses. 
The\nbest would be to document both COPY FROM and COPY TO callbacks,\nperhaps with a pseudo code specifying just the essence [1], and their\npossible usages somewhere here\nhttps://www.postgresql.org/docs/devel/sql-copy.html.\n\nThe order of below NOTICE messages isn't guaranteed and it can change\ndepending on platforms. Previously, we've had to suppress such\nmessages in the test output 6adc5376d.\n\n+SELECT test_copy_to_callback('public.test'::pg_catalog.regclass);\n+NOTICE: COPY TO callback called with data \"1 2 3\" and length 5\n+NOTICE: COPY TO callback called with data \"12 34 56\" and length 8\n+NOTICE: COPY TO callback called with data \"123 456 789\" and length 11\n+ test_copy_to_callback\n\n[1]\n+ Relation rel = table_open(PG_GETARG_OID(0), AccessShareLock);\n+ CopyToState cstate;\n+\n+ cstate = BeginCopyTo(NULL, rel, NULL, RelationGetRelid(rel), NULL, NULL,\n+ to_cb, NIL, NIL);\n+ (void) DoCopyTo(cstate);\n+ EndCopyTo(cstate);\n+\n+ table_close(rel, AccessShareLock);\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 10 Oct 2022 12:41:40 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding Support for Copy callback functionality on COPY TO api" }, { "msg_contents": "On Mon, Oct 10, 2022 at 12:41:40PM +0530, Bharath Rupireddy wrote:\n> IIUC, COPY TO callback helps move a table's data out of postgres\n> server. Just wondering, how is it different from existing solutions\n> like COPY TO ... PROGRAM/FILE, logical replication, pg_dump etc. that\n> can move a table's data out? I understandb that the COPY FROM callback\n> was needed for logical replication 7c4f52409. Mentioning a concrete\n> use-case helps here.\n\nThis new callback allows the use of COPY TO's machinery in extensions. 
A\ncouple of generic use-cases are listed upthread [0], and one concrete\nuse-case is the aws_s3 extension [1].\n\n> I'm not quite sure if we need a separate module to just tell how to\n> use this new callback. I strongly feel that it's not necessary. It\n> unnecessarily creates extra code (actual code is 25 LOC with v1 patch\n> but 150 LOC with v5 patch) and can cause maintenance burden. These\n> callback APIs are simple enough to understand for those who know\n> BeginCopyTo() or BeginCopyFrom() and especially for those who know how\n> to write extensions. These are not APIs that an end-user uses. The\n> best would be to document both COPY FROM and COPY TO callbacks,\n> perhaps with a pseudo code specifying just the essence [1], and their\n> possible usages somewhere here\n> https://www.postgresql.org/docs/devel/sql-copy.html.\n> \n> The order of below NOTICE messages isn't guaranteed and it can change\n> depending on platforms. Previously, we've had to suppress such\n> messages in the test output 6adc5376d.\n\nI really doubt that this small test case is going to cause anything\napproaching undue maintenance burden. I think it's important to ensure\nthis functionality continues to work as expected long into the future.\n\n[0] https://postgr.es/m/253C21D1-FCEB-41D9-A2AF-E6517015B7D7%40amazon.com\n[1] https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/postgresql-s3-export.html#aws_s3.export_query_to_s3\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 10 Oct 2022 09:38:59 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding Support for Copy callback functionality on COPY TO api" }, { "msg_contents": "On Mon, Oct 10, 2022 at 09:38:59AM -0700, Nathan Bossart wrote:\n> This new callback allows the use of COPY TO's machinery in extensions. 
A\n> couple of generic use-cases are listed upthread [0], and one concrete\n> use-case is the aws_s3 extension [1].\n\nFWIW, I understand that the proposal is to have an easier control of\nhow, what and where to the data is processed. COPY TO PROGRAM\nprovides that with exactly the same kind of interface (data input, its\nlength) once you have a program able to process the data piped out the\nsame way. However, it is in the shape of an external process that\nreceives the data through a pipe hence it provides a much wider attack\nsurface which is something that all cloud provider care about. The\nthing is that this allows extension developers to avoid arbitrary\ncommands on the backend as the OS user running the Postgres instance,\nwhile still being able to process the data the way they want\n(auditing, analytics, whatever) within the strict context of the\nprocess running an extension code. I'd say that this is a very cheap\nchange to allow people to have more fun with the backend engine\n(similar to the recent changes with archive libraries for\narchive_command, but much less complex):\n src/backend/commands/copy.c | 2 +-\n src/backend/commands/copyto.c | 18 +++++++++++++++---\n 2 files changed, 16 insertions(+), 4 deletions(-)\n\n(Not to mention that we've had our share of CVEs regarding COPY\nPROGRAM even if it is superuser-only).\n\n> I really doubt that this small test case is going to cause anything\n> approaching undue maintenance burden. I think it's important to ensure\n> this functionality continues to work as expected long into the future.\n\nI like these toy modules, they provide test coverage while acting as a\ntemplate for new developers. 
I am wondering whether it should have\nsomething for the copy from callback, actually, as it is named\n\"test_copy_callbacks\" but I see no need to extend the module more than\nnecessary in the context of this thread (logical decoding uses it,\nanyway).\n--\nMichael", "msg_date": "Tue, 11 Oct 2022 09:01:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Adding Support for Copy callback functionality on COPY TO api" }, { "msg_contents": "On Tue, Oct 11, 2022 at 09:01:41AM +0900, Michael Paquier wrote:\n> I like these toy modules, they provide test coverage while acting as a\n> template for new developers. I am wondering whether it should have\n> something for the copy from callback, actually, as it is named\n> \"test_copy_callbacks\" but I see no need to extend the module more than\n> necessary in the context of this thread (logical decoding uses it,\n> anyway).\n\nYeah, I named it that way because I figured we might want a test for the\nCOPY FROM callback someday.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 10 Oct 2022 17:06:39 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding Support for Copy callback functionality on COPY TO api" }, { "msg_contents": "On Wed, Sep 30, 2020 at 01:48:12PM +0500, Andrey V. Lepikhov wrote:\n> Your code almost exactly the same as proposed in [1] as part of 'Fast COPY\n> FROM' command. But it seems there are differences.\n> \n> [1] https://www.postgresql.org/message-id/flat/3d0909dc-3691-a576-208a-90986e55489f%40postgrespro.ru\n\nI have been looking at what you have here while reviewing the contents\nof this thread, and it seems to me that you should basically be able\nto achieve the row-level control that your patch is doing with the\ncallback to do the per-row processing posted here. 
The main\ndifference, though, is that you want to have more control at the\nbeginning and the end of the COPY TO processing which explains the\nsplit of DoCopyTo(). I am a bit surprised to see this much footprint\nin the backend code once there are two FDW callbacks to control the\nbeginning and the end of the COPY TO, to be honest, sacrifying a lot\nthe existing symmetry between the COPY TO and COPY FROM code paths\nwhere there is currently a strict control on the pre-row and post-row\nprocessing like the per-row memory context.\n--\nMichael", "msg_date": "Tue, 11 Oct 2022 11:31:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Adding Support for Copy callback functionality on COPY TO api" }, { "msg_contents": "On Mon, Oct 10, 2022 at 05:06:39PM -0700, Nathan Bossart wrote:\n> Yeah, I named it that way because I figured we might want a test for the\n> COPY FROM callback someday.\n\nOkay. So, I have reviewed the whole thing, added a description of all\nthe fields of BeginCopyTo() in its top comment, tweaked a few things\nand added in the module an extra NOTICE with the number of processed\nrows. The result seemed fine, so applied.\n--\nMichael", "msg_date": "Tue, 11 Oct 2022 11:52:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Adding Support for Copy callback functionality on COPY TO api" }, { "msg_contents": "On Tue, Oct 11, 2022 at 11:52:03AM +0900, Michael Paquier wrote:\n> Okay. So, I have reviewed the whole thing, added a description of all\n> the fields of BeginCopyTo() in its top comment, tweaked a few things\n> and added in the module an extra NOTICE with the number of processed\n> rows. 
The result seemed fine, so applied.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 10 Oct 2022 19:55:07 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding Support for Copy callback functionality on COPY TO api" } ]
[ { "msg_contents": "Hi,\n\nFor an extended query that needs to get parameter types before sending\nthem, is there a difference in doing:\n\nParse, Describe statement, Flush, Bind, Execute, Sync\nvs\nParse, Describe statement, Sync, Bind, Execute, Sync\n\nOf course, there will be an additional ReadyForQuery in the latter case,\nbut other than that.\n\nThanks!\n\nJaka\n\n", "msg_date": "Thu, 2 Jul 2020 12:30:51 -0400", "msg_from": "=?UTF-8?B?SmFrYSBKYW7EjWFy?= <jaka@kubje.org>", "msg_from_op": true, "msg_subject": "Sync vs Flush" },
{ "msg_contents": "=?UTF-8?B?SmFrYSBKYW7EjWFy?= <jaka@kubje.org> writes:\n> For an extended query that needs to get parameter types before sending\n> them, is there a difference in doing:\n\n> Parse, Describe statement, Flush, Bind, Execute, Sync\n> vs\n> Parse, Describe statement, Sync, Bind, Execute, Sync\n\nSync is a resync point after an error, so the real question is what\nyou want to have happen if you get some kind of error during the Parse.\nIf you expect that the app wouldn't proceed with issuing Bind/Execute\nthen you want to do it the second way.\n\nI suppose you could do\n\n\tSend Parse/Describe/Flush\n\tRead results\n\tIf OK:\n\t Send Bind/Execute/Sync\n\telse:\n\t Send Sync # needed to get back to normal state\n\nbut that doesn't sound all that convenient.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Jul 2020 12:41:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sync vs Flush" },
{ "msg_contents": "Hehe, that's exactly what I am doing, which is why I thought of just\nsending two Syncs. Good to hear it's OK.\n\n From reading the Extended query protocol docs, I somehow got the impression\nthat you need to do everything within one cycle, and send Sync only at the\nend of the cycle:\n\n - \"The extended query protocol breaks down the above-described simple\nquery protocol into multiple steps.\"\n - \"[Only] At completion of each series of extended-query messages, the\nfrontend should issue a Sync message.\"\n - \"A Flush [and not Sync] must be sent [...] if the frontend wishes to\nexamine the results of that command before issuing more commands.\"\n - \"The simple Query message is approximately equivalent to the series\nParse, Bind, portal Describe, Execute, Close, Sync.\"\n\nWhat is a common situation for using Flush instead of Sync?\nWhen would you need and wait for the output, get an error, yet still\nproceed to send further messages that you would want the server to ignore?\n\nJaka\n\nOn Thu, Jul 2, 2020 at 12:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> =?UTF-8?B?SmFrYSBKYW7EjWFy?= <jaka@kubje.org> writes:\n> > For an extended query that needs to get parameter types before sending\n> > them, is there a difference in doing:\n>\n> > Parse, Describe statement, Flush, Bind, Execute, Sync\n> > vs\n> > Parse, Describe statement, Sync, Bind, Execute, Sync\n>\n> Sync is a resync point after an error, so the real question is what\n> you want to have happen if you get some kind of error during the Parse.\n> If you expect that the app wouldn't proceed with issuing Bind/Execute\n> then you want to do it the second way.\n>\n> I suppose you could do\n>\n> Send Parse/Describe/Flush\n> Read results\n> If OK:\n> Send Bind/Execute/Sync\n> else:\n> Send Sync # needed to get back to normal state\n>\n> but that doesn't sound all that convenient.\n>\n> regards, tom lane\n>\n\n", "msg_date": "Thu, 2 Jul 2020 13:18:25 -0400", "msg_from": "=?UTF-8?B?SmFrYSBKYW7EjWFy?= <jaka@kubje.org>", "msg_from_op": true, "msg_subject": "Re: Sync vs Flush" },
{ "msg_contents": "=?UTF-8?B?SmFrYSBKYW7EjWFy?= <jaka@kubje.org> writes:\n> What is a common situation for using Flush instead of Sync?\n> When would you need and wait for the output, get an error, yet still\n> proceed to send further messages that you would want the server to ignore?\n\nThe only case I can think of offhand is bursting some time-consuming\nqueries to the server, that is sending this all at once:\n\n Execute, Flush, Execute, Flush, Execute, Flush, Execute, Sync\n\nThis presumes that, if an earlier query fails, you want the rest\nto be abandoned; else you'd use Syncs instead. But if you leave\nout the Flushes then you won't see the tail end of (or indeed\nmaybe none of) the output of an earlier query until a later query\nfills the server's output buffer. So if you're hoping to overlap\nthe client's processing with the server's you want the extra flushes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Jul 2020 15:29:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sync vs Flush" },
{ "msg_contents": "Makes sense, thanks!\n\nOn Thu, Jul 2, 2020 at 15:29 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> =?UTF-8?B?SmFrYSBKYW7EjWFy?= <jaka@kubje.org> writes:\n> > What is a common situation for using Flush instead of Sync?\n> > When would you need and wait for the output, get an error, yet still\n> > proceed to send further messages that you would want the server to\n> ignore?\n>\n> The only case I can think of offhand is bursting some time-consuming\n> queries to the server, that is sending this all at once:\n>\n> Execute, Flush, Execute, Flush, Execute, Flush, Execute, Sync\n>\n> This presumes that, if an earlier query fails, you want the rest\n> to be abandoned; else you'd use Syncs instead. But if you leave\n> out the Flushes then you won't see the tail end of (or indeed\n> maybe none of) the output of an earlier query until a later query\n> fills the server's output buffer. So if you're hoping to overlap\n> the client's processing with the server's you want the extra flushes.\n>\n> regards, tom lane\n>\n\n", "msg_date": "Thu, 2 Jul 2020 16:03:00 -0400", "msg_from": "=?UTF-8?B?SmFrYSBKYW7EjWFy?= <jaka@kubje.org>", "msg_from_op": true, "msg_subject": "Re: Sync vs Flush" } ]
[ { "msg_contents": "This seems pretty strange:\n\ncreate publication pub1 for all tables;\n\n WARNING: wal_level is insufficient to publish logical changes\nHINT: Set wal_level to logical before creating subscriptions.\n\nDave Cramer\n\nThis seems pretty strange:create publication pub1 for all tables;                                                                                                              WARNING:  wal_level is insufficient to publish logical changesHINT:  Set wal_level to logical before creating subscriptions.Dave Cramer", "msg_date": "Thu, 2 Jul 2020 12:37:29 -0400", "msg_from": "Dave Cramer <davecramer@gmail.com>", "msg_from_op": true, "msg_subject": "why do we allow people to create a publication before setting\n wal_leve" }, { "msg_contents": "On Thu, Jul 02, 2020 at 12:37:29PM -0400, Dave Cramer wrote:\n>This seems pretty strange:\n>\n>create publication pub1 for all tables;\n>\n> WARNING: wal_level is insufficient to publish logical changes\n>HINT: Set wal_level to logical before creating subscriptions.\n>\n\npg_dump restoring a database with publications would fail unnecessarily.\n\nThere's a more detailed explanation in the thread that ultimately added\nthe warning in 2019:\n\nhttps://www.postgresql.org/message-id/flat/CAPjy-57rn5Y9g4e5u--eSOP-7P4QrE9uOZmT2ZcUebF8qxsYhg%40mail.gmail.com\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Fri, 3 Jul 2020 01:58:22 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: why do we allow people to create a publication before setting\n wal_leve" } ]
[ { "msg_contents": "Hi!\n\n(Sorry if this was already discussed, it looks pretty obvious, but I\ncould not find anything.)\n\nI was thinking and reading about how to design the schema to keep\nrecords of all changes which happen to the table, at row granularity,\nwhen I realized that all this is already done for me by PostgreSQL\nMVCC. All rows (tuples) are already stored, with an internal version\nfield as well.\n\nSo I wonder, how could I hack PostgreSQL to disable vacuuming a table,\nso that all tuples persist forever, and how could I make those\ninternal columns visible so that I could make queries asking for\nresults at the particular historical version of table state? My\nunderstanding is that indices are already indexing over those internal\ncolumns as well, so those queries over historical versions would be\nefficient as well. Am I missing something which would make this not\npossible?\n\nIs this something I would have to run a custom version of PostgreSQL\nor is this possible through an extension of sort?\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n\n", "msg_date": "Thu, 2 Jul 2020 11:55:38 -0700", "msg_from": "Mitar <mmitar@gmail.com>", "msg_from_op": true, "msg_subject": "Persist MVCC forever - retain history" }, { "msg_contents": "On Thursday, July 2, 2020, Mitar <mmitar@gmail.com> wrote:\n\n\n> make queries asking for\n> results at the particular historical version of table state?\n\n\nEven for a single table how would you go about specifying this in a\nuser-friendly way? Then consider joins.\n\n\n> Is this something I would have to run a custom version of PostgreSQL\n> or is this possible through an extension of sort?\n>\n\n If by “this” you mean leveraging MVCC you don’t; it isn’t suitable for\npersistent temporal data.\n\nThe fundamental missing piece is that there is no concept of timestamp in\nMVCC. 
Plus, wrap-around and freezing aren’t just nice-to-have features.\n\nDavid J.\n\nOn Thursday, July 2, 2020, Mitar <mmitar@gmail.com> wrote: make queries asking for\nresults at the particular historical version of table state?Even for a single table how would you go about specifying this in a user-friendly way?  Then consider joins. \nIs this something I would have to run a custom version of PostgreSQL\nor is this possible through an extension of sort?\n If by “this” you mean leveraging MVCC you don’t; it isn’t suitable for persistent temporal data.The fundamental missing piece is that there is no concept of timestamp in MVCC. Plus, wrap-around and freezing aren’t just nice-to-have features.David J.", "msg_date": "Thu, 2 Jul 2020 12:12:50 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Persist MVCC forever - retain history" }, { "msg_contents": "On Fri, Jul 3, 2020 at 6:56 AM Mitar <mmitar@gmail.com> wrote:\n> I was thinking and reading about how to design the schema to keep\n> records of all changes which happen to the table, at row granularity,\n> when I realized that all this is already done for me by PostgreSQL\n> MVCC. All rows (tuples) are already stored, with an internal version\n> field as well.\n\nThis was a research topic in ancient times (somewhere I read that in\nsome ancient version, VACUUM didn't originally remove tuples, it moved\nthem to permanent write-only storage). Even after the open source\nproject began, there was a \"time travel\" feature, but it was removed\nin 6.2:\n\nhttps://www.postgresql.org/docs/6.3/c0503.htm\n\n> So I wonder, how could I hack PostgreSQL to disable vacuuming a table,\n> so that all tuples persist forever, and how could I make those\n> internal columns visible so that I could make queries asking for\n> results at the particular historical version of table state? 
My\n> understanding is that indices are already indexing over those internal\n> columns as well, so those queries over historical versions would be\n> efficient as well. Am I missing something which would make this not\n> possible?\n\nThere aren't indexes on those things.\n\nIf you want to keep track of all changes in a way that lets you query\nthings as of historical times, including joins, and possibly including\nmultiple time dimensions (\"on the 2nd of Feb, what address did we\nthink Fred lived at on the 1st of Jan?\") you might want to read\n\"Developing Time-Oriented Database Applications in SQL\" about this,\nfreely available as a PDF[1]. There's also a bunch of temporal\nsupport in more recent SQL standards, not supported by PostgreSQL, and\nit was designed by the author of that book. There are people working\non trying to implement parts of the standard support for PostgreSQL.\n\n> Is this something I would have to run a custom version of PostgreSQL\n> or is this possible through an extension of sort?\n\nThere are some extensions that offer some temporal support inspired by\nthe standard (I haven't used any of them so I can't comment on them).\n\n[1] http://www2.cs.arizona.edu/~rts/publications.html\n\n\n", "msg_date": "Fri, 3 Jul 2020 07:16:21 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Persist MVCC forever - retain history" }, { "msg_contents": "Hi!\n\nOn Thu, Jul 2, 2020 at 12:12 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> Even for a single table how would you go about specifying this in a user-friendly way? Then consider joins.\n\nOne general answer: you use query rewriting. But what is user-friendly\ndepends on the use case. For me, the main motivation for this is that\nI would like to sync database and client state, including all\nrevisions of data. So it is pretty easy to then query based on this\nrow revision for which rows are newer and sync them over.
And then I\ncan show diffs of changes through time for that particular row.\n\nI agree that reconstructing joins at one particular moment in time in\nthe past requires more information. But that information also other\nsolutions (like copying all changes to a separate table in triggers)\nrequire: adding timestamp column and so on. So I can just have a\ntimestamp column in my original (and only) table and have a BEFORE\ntrigger which populates it with a timestamp. Then at a later time,\nwhen I have in one table all revisions of a row, I can also query\nbased on timestamp, but PostgreSQL revision column help me to address\nthe issue of two changes happening at the same timestamp.\n\nI still gain that a) I do not have to copy rows to another table b) I\ndo not have to vacuum. The only downside is that I have to rewrite\nqueries for the latest state to operate only on the latest state (or\nmaybe PostgreSQL could continue to do this for me like now, just allow\nme to also access old versions).\n\n> If by “this” you mean leveraging MVCC you don’t; it isn’t suitable for persistent temporal data.\n\nWhy not?\n\n> The fundamental missing piece is that there is no concept of timestamp in MVCC.\n\nThat can be added using BEFORE trigger.\n\n> Plus, wrap-around and freezing aren’t just nice-to-have features.\n\nOh, I forgot about that. ctid is still just 32 bits? So then for such\ntable with permanent MVCC this would have to be increased, to like 64\nbits or something.
Then one would not have to do wrap-around\nprotection, no?\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n\n", "msg_date": "Thu, 2 Jul 2020 17:58:22 -0700", "msg_from": "Mitar <mmitar@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Persist MVCC forever - retain history" }, { "msg_contents": "On Thu, Jul 2, 2020 at 2:56 PM Mitar <mmitar@gmail.com> wrote:\n\n> Hi!\n>\n> (Sorry if this was already discussed, it looks pretty obvious, but I\n> could not find anything.)\n\n\nThere have been a couple timetravel extensions done, each with their own\nlimitations. I don’t believe a perfect implementation could be done without\nreading the functionality to core (which would be new functionality given\nall the changes.) I’d say start with the extensions and go from there.\n\n-- \nJonah H. Harris", "msg_date": "Thu, 2 Jul 2020 22:43:42 -0400", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Persist MVCC forever - retain history" }, { "msg_contents": "\n\n> On Jul 2, 2020, at 5:58 PM, Mitar <mmitar@gmail.com> wrote:\n> \n>> Plus, wrap-around and freezing aren’t just nice-to-have features.\n> \n> Oh, I forgot about that. ctid is still just 32 bits? So then for such\n> table with permanent MVCC this would have to be increased, to like 64\n> bits or something.
Then one would not have to do wrap-around\n> protection, no?\n\nI think what you propose is a huge undertaking, and would likely result in a fork of postgres not compatible with the public sources. I do not recommend the project. But in answer to your question....\n\nYes, the values stored in the tuple header are 32 bits. Take a look in access/htup_details.h. You'll notice that HeapTupleHeaderData has a union:\n\n union\n {\n HeapTupleFields t_heap;\n DatumTupleFields t_datum;\n } t_choice;\n\nIf you check, HeapTupleFields and DatumTupleFields are the same size, each having three 32 bit values, though they mean different things. You may need to expand types TransactionId, CommandId, and Oid to 64 bits, expand varlena headers to 64 bits, and typemods to 64 bits. You may find that it is harder to just expand a subset of those, given the way these fields overlay in these unions. There will be lot of busy work going through the code to adjust everything else to match. Just updating printf style formatting in error messages may take a long time.\n\nIf you do choose to expand only some of the types, say just TransactionId and CommandId, you'll have to deal with the size mismatch between HeapTupleFields and DatumTupleFields.\n\nAborted transactions leave dead rows in your tables, and you may want to deal with that for performance reasons. Even if you don't intend to remove deleted rows, because you are just going to keep them around for time travel purposes, you might still want to use vacuum to remove dead rows, those that never committed.\n\nYou'll need to think about how to manage the growing clog if you don't intend to truncate it periodically.
Or if you do intend to truncate clog periodically, you'll need to think about the fact that you have TransactionIds in your tables older than what clog knows about.\n\nYou may want to think about how keeping dead rows around affects index performance.\n\nI expect these issues to be less than half what you would need to resolve, though much of the rest of it is less clear to me.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 2 Jul 2020 19:51:51 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Persist MVCC forever - retain history" }, { "msg_contents": "Hi!\n\nOn Thu, Jul 2, 2020 at 12:16 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> This was a research topic in ancient times (somewhere I read that in\n> some ancient version, VACUUM didn't originally remove tuples, it moved\n> them to permanent write-only storage). Even after the open source\n> project began, there was a \"time travel\" feature, but it was removed\n> in 6.2:\n\nVery interesting. Thanks for sharing.\n\n> There aren't indexes on those things.\n\nOh. My information is based on what I read in [1]. This is where I\nrealized that if PostgreSQL maintains those extra columns and indices,\nthen there is no point in replicating that by copying all that to\nanother table. So this is not true? Or not true anymore?\n\n> If you want to keep track of all changes in a way that lets you query\n> things as of historical times, including joins, and possibly including\n> multiple time dimensions (\"on the 2nd of Feb, what address did we\n> think Fred lived at on the 1st of Jan?\") you might want to read\n> \"Developing Time-Oriented Database Applications in SQL\" about this,\n\nInteresting. I checked it out a bit. I think this is not exactly what\nI am searching for.
My main motivation is reactive web applications,\nwhere I can push changes of (sub)state of the database to the web app,\nwhen that (sub)state changes. And if the web app is offline for some\ntime, that it can come and resync also all missed changes. Moreover,\nchanges themselves are important (not just the last state) because it\nallows one to merge with a potentially changed local state in the web\napp while it was offline. So in a way it is logical replication and\nreplay, but just at database - client level.\n\n[1] https://eng.uber.com/postgres-to-mysql-migration/\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n\n", "msg_date": "Thu, 2 Jul 2020 20:32:51 -0700", "msg_from": "Mitar <mmitar@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Persist MVCC forever - retain history" }, { "msg_contents": "Hi!\n\nOn Thu, Jul 2, 2020 at 7:51 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> I expect these issues to be less than half what you would need to resolve, though much of the rest of it is less clear to me.\n\nThank you for this insightful input. I will think it over.\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n\n", "msg_date": "Thu, 2 Jul 2020 20:42:39 -0700", "msg_from": "Mitar <mmitar@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Persist MVCC forever - retain history" }, { "msg_contents": "\n\nOn 02.07.2020 21:55, Mitar wrote:\n> Hi!\n>\n> (Sorry if this was already discussed, it looks pretty obvious, but I\n> could not find anything.)\n>\n> I was thinking and reading about how to design the schema to keep\n> records of all changes which happen to the table, at row granularity,\n> when I realized that all this is already done for me by PostgreSQL\n> MVCC.
All rows (tuples) are already stored, with an internal version\n> field as well.\n>\n> So I wonder, how could I hack PostgreSQL to disable vacuuming a table,\n> so that all tuples persist forever, and how could I make those\n> internal columns visible so that I could make queries asking for\n> results at the particular historical version of table state? My\n> understanding is that indices are already indexing over those internal\n> columns as well, so those queries over historical versions would be\n> efficient as well. Am I missing something which would make this not\n> possible?\n>\n> Is this something I would have to run a custom version of PostgreSQL\n> or is this possible through an extension of sort?\n>\n>\n> Mitar\n>\nDid you read this thread:\nhttps://www.postgresql.org/message-id/flat/78aadf6b-86d4-21b9-9c2a-51f1efb8a499%40postgrespro.ru\nI have proposed a patch for supporting time travel (AS OF) queries.\nBut I didn't fill a big interest to it from community.\n\n\n\n", "msg_date": "Fri, 3 Jul 2020 10:29:17 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Persist MVCC forever - retain history" }, { "msg_contents": "> But I didn't fill a big interest to it from community.\nJust fyi, it is something that I use in my database design now (just hacked\ntogether using ranges / exclusion constraints) and\nwould love for a well supported solution.\n\nI've chimed in a couple times as this feature has popped up in discussion\nover the years, as I have seen others with similar needs do as well.\nJust sometimes feels like spam to chime in just saying \"i'd find this\nfeature useful\" so I try and not do that too much.
I'd rather not step on\nthe\ncommunity's toes.\n\n-Adam", "msg_date": "Fri, 3 Jul 2020 09:57:55 -0400", "msg_from": "Adam Brusselback <adambrusselback@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Persist MVCC forever - retain history" }, { "msg_contents": "Hi!\n\nOn Fri, Jul 3, 2020 at 12:29 AM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> Did you read this thread:\n> https://www.postgresql.org/message-id/flat/78aadf6b-86d4-21b9-9c2a-51f1efb8a499%40postgrespro.ru\n> I have proposed a patch for supporting time travel (AS OF) queries.\n> But I didn't fill a big interest to it from community.\n\nOh, you went much further than me in this thinking. Awesome!\n\nI am surprised that you are saying you didn't feel big interest. My\nreading of the thread is the opposite, that there was quite some\ninterest, but that there are technical challenges to overcome.
So you\ngave up on that work?\n\n\nMitar\n\n-- \nhttp://mitar.tnode.com/\nhttps://twitter.com/mitar_m\n\n\n", "msg_date": "Sat, 4 Jul 2020 22:48:43 -0700", "msg_from": "Mitar <mmitar@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Persist MVCC forever - retain history" }, { "msg_contents": "\n\nOn 05.07.2020 08:48, Mitar wrote:\n> Hi!\n>\n> On Fri, Jul 3, 2020 at 12:29 AM Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru> wrote:\n>> Did you read this thread:\n>> https://www.postgresql.org/message-id/flat/78aadf6b-86d4-21b9-9c2a-51f1efb8a499%40postgrespro.ru\n>> I have proposed a patch for supporting time travel (AS OF) queries.\n>> But I didn't fill a big interest to it from community.\n> Oh, you went much further than me in this thinking. Awesome!\n>\n> I am surprised that you are saying you didn't feel big interest. My\n> reading of the thread is the opposite, that there was quite some\n> interest, but that there are technical challenges to overcome. So you\n> gave up on that work?\nNo, I have not gave up.\nBut...\nThere are well known problems of proposed approach:\n1. Not supporting schema changes\n2. Not compatible with DROP/TRUNCATE\n3. Presence of large number of aborted transaction can slow down data \naccess.\n4. Semantic of join of tables with different timestamp is obscure.\n\nI do not know how to address this issues.
I am not sure how critical all \nthis issues are and do them made this approach unusable.\nAlso there is quite common opinion that time travel should be don at \napplication level and we do not need to support it at database kernel level.\n\nI will be glad to continue work in this direction if there is some \ninterest to this topic and somebody is going to try/review this feature.\nIt is very difficult to find some motivation for developing new features \nif you are absolutely sure that it will be never accepted by community.\n\n\n\n\n", "msg_date": "Sun, 5 Jul 2020 20:31:26 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Persist MVCC forever - retain history" }, { "msg_contents": "Konstantin Knizhnik schrieb am 05.07.2020 um 19:31:\n>> I am surprised that you are saying you didn't feel big interest. My\n>> reading of the thread is the opposite, that there was quite some\n>> interest, but that there are technical challenges to overcome. So you\n>> gave up on that work?\n> No, I have not gave up.\n> But...\n> There are well known problems of proposed approach:\n> 1. Not supporting schema changes\n> 2. Not compatible with DROP/TRUNCATE\n> 3. Presence of large number of aborted transaction can slow down data access.\n> 4. Semantic of join of tables with different timestamp is obscure.\n\nOracle partially solved this (at least 1,3 and 4 - don't know about 3) by storing the old versions in a separate table that is automatically managed if you enable the feature. If a query uses the AS OF to go \"back in time\", it's rewritten to access the history table.\n\nThomas\n\n\n\n", "msg_date": "Sun, 5 Jul 2020 21:08:07 +0200", "msg_from": "Thomas Kellerer <shammat@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Persist MVCC forever - retain history" } ]
[ { "msg_contents": "Hi,\n\nWhile checking through the code I found that some of the function\nparameters in reorderbuffer & vacuumlazy are not used. I felt this\ncould be removed. I'm not sure if it is kept for future use or not.\nAttached patch contains the changes for the same.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 3 Jul 2020 13:37:30 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Cleanup - Removed unused function parameter in reorder buffer &\n parallel vacuum" }, { "msg_contents": "On Fri, 3 Jul 2020 at 09:07, vignesh C <vignesh21@gmail.com> wrote:\n\n\n> While checking through the code I found that some of the function\n> parameters in reorderbuffer & vacuumlazy are not used. I felt this\n> could be removed. I'm not sure if it is kept for future use or not.\n> Attached patch contains the changes for the same.\n> Thoughts?\n>\n\nUnused? To confirm that, everybody that has a logical decoding plugin needs\nto check their code so we are certain this is sensible.\n\nSeems like a change with low utility.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nMission Critical Databases\n\nOn Fri, 3 Jul 2020 at 09:07, vignesh C <vignesh21@gmail.com> wrote: While checking through the code I found that  some of the function\nparameters in reorderbuffer & vacuumlazy are not used. I felt this\ncould be removed. I'm not sure if it is kept for future use or not.\nAttached patch contains the changes for the same.\nThoughts?Unused? 
To confirm that, everybody that has a logical decoding plugin needs to check their code so we are certain this is sensible.Seems like a change with low utility.-- Simon Riggs                http://www.2ndQuadrant.com/Mission Critical Databases", "msg_date": "Fri, 3 Jul 2020 09:36:27 +0100", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Cleanup - Removed unused function parameter in reorder buffer &\n parallel vacuum" }, { "msg_contents": "On Fri, 3 Jul 2020 at 17:07, vignesh C <vignesh21@gmail.com> wrote:\n>\n> Hi,\n>\n> While checking through the code I found that some of the function\n> parameters in reorderbuffer & vacuumlazy are not used. I felt this\n> could be removed. I'm not sure if it is kept for future use or not.\n> Attached patch contains the changes for the same.\n> Thoughts?\n>\n\nFor the part of parallel vacuum change, it looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 3 Jul 2020 20:48:00 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Cleanup - Removed unused function parameter in reorder buffer &\n parallel vacuum" }, { "msg_contents": "On Fri, Jul 3, 2020 at 2:06 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n>\n> On Fri, 3 Jul 2020 at 09:07, vignesh C <vignesh21@gmail.com> wrote:\n>\n>>\n>> While checking through the code I found that some of the function\n>> parameters in reorderbuffer & vacuumlazy are not used. I felt this\n>> could be removed. I'm not sure if it is kept for future use or not.\n>> Attached patch contains the changes for the same.\n>> Thoughts?\n>\n>\n> Unused? 
To confirm that, everybody that has a logical decoding plugin needs to check their code so we are certain this is sensible.\n>\n\nThe changes proposed by Vignesh are in ReorderBuffer APIs and some of\nthem are static functions, so not sure if decoding plugin comes into\nthe picture.\n\n> Seems like a change with low utility.\n>\n\nYeah, all or most of the ReorderBuffer APIs seem to take the\n\"ReorderBuffer *\" parameter, so not sure if removing from some of them\nis useful or not. At least in the current form, they look consistent.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Jul 2020 18:04:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleanup - Removed unused function parameter in reorder buffer &\n parallel vacuum" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Fri, Jul 3, 2020 at 2:06 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n>> Seems like a change with low utility.\n\n> Yeah, all or most of the ReorderBuffer APIs seem to take the\n> \"ReorderBuffer *\" parameter, so not sure if removing from some of them\n> is useful or not. At least in the current form, they look consistent.\n\nYeah, I agree with that. This makes things less consistent and it seems\nlike it's not buying much. Are any of these code paths sufficiently hot\nthat saving a couple of instructions would matter?\n\nIn the long run, it seems like the fact that any of these functions\nare not using these parameters is an implementation artifact that\ncould change at any time. So we might just be putting them back\nsomeday, with nothing except code churn and back-patch hazards to\nshow for our trouble.
Or, if you want to argue that a \"ReorderBufferXXX\"\nfunction is inherently never going to use the ReorderBuffer, why is it\nin that module with that name to begin with?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Jul 2020 09:52:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Cleanup - Removed unused function parameter in reorder buffer &\n parallel vacuum" }, { "msg_contents": "On Fri, Jul 3, 2020 at 5:18 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Fri, 3 Jul 2020 at 17:07, vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > While checking through the code I found that some of the function\n> > parameters in reorderbuffer & vacuumlazy are not used. I felt this\n> > could be removed. I'm not sure if it is kept for future use or not.\n> > Attached patch contains the changes for the same.\n> > Thoughts?\n> >\n>\n> For the part of parallel vacuum change, it looks good to me.\n>\n\nUnlike ReorderBuffer, this change looks fine to me as well. This is a\nquite recent (PG13) change and it would be good to remove it now. So,\nI will push this part of change unless I hear any objection in a day\nor so.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 4 Jul 2020 12:32:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cleanup - Removed unused function parameter in reorder buffer &\n parallel vacuum" }, { "msg_contents": "On Sat, Jul 4, 2020 at 12:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 3, 2020 at 5:18 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Fri, 3 Jul 2020 at 17:07, vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > While checking through the code I found that some of the function\n> > > parameters in reorderbuffer & vacuumlazy are not used. I felt this\n> > > could be removed.
I'm not sure if it is kept for future use or not.\n> > > Attached patch contains the changes for the same.\n> > > Thoughts?\n> > >\n> >\n> > For the part of parallel vacuum change, it looks good to me.\n> >\n>\n> Unlike ReorderBuffer, this change looks fine to me as well. This is a\n> quite recent (PG13) change and it would be good to remove it now. So,\n> I will push this part of change unless I hear any objection in a day\n> or so.\n\nThanks all for your comments, attached patch has the changes that\nexcludes the changes made in reorderbuffer.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sun, 5 Jul 2020 06:29:04 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Cleanup - Removed unused function parameter in reorder buffer &\n parallel vacuum" } ]
[ { "msg_contents": "Hi,\n\nParallel worker hangs while handling errors.\n\nAnalysis:\nWhen there is an error in the parallel worker process, we will call\nereport/elog with the error message. Worker will then jump from\nerrfinish to setjmp in StartBackgroundWorker function which was set\nearlier. Then the worker process will then send the error message\nthrough the shared memory to the leader process. Shared memory size is\nok 16K, if the error message is less than 16K it works fine. If there\nis a bigger error message, the worker process will wait for the leader\nprocess to read the message, free up some memory in shared memory and\nset the latch. The worker will be waiting at the below back trace:\n#4 0x000000000090480c in WaitLatch (latch=0x7f2b39f6b454,\nwakeEvents=33, timeout=0, wait_event_info=134217753) at latch.c:368\n#5 0x0000000000787c7f in mq_putmessage (msgtype=69 'E', s=0x2f24350\n\"SERROR\", len=230015) at pqmq.c:171\n#6 0x000000000078712e in pq_endmessage (buf=0x7ffe721c4370) at pqformat.c:301\n#7 0x0000000000ac1749 in send_message_to_frontend (edata=0xfe91a0\n<errordata>) at elog.c:3327\n#8 0x0000000000abdf5b in EmitErrorReport () at elog.c:1460\n\nLeader process then identifies that there are some messages that need\nto be processed, it copies the messages and sets the latch so that the\nworker process can copy the remaining message from the below function:\nshm_mq_inc_bytes_read -> SetLatch(&sender->procLatch);, Worker is not\nable to receive any signal at this point of time & hangs infinitely\nWorker hangs in this case because when the worker is started the\nsignals will be masked using sigprocmask. Unblocking of signals is\ndone by calling BackgroundWorkerUnblockSignals in ParallelWorkerMain.\nNow due to error handling the worker has jumped to setjmp in\nStartBackgroundWorker function. 
Here the signals are in a blocked\nstate, hence the signal is not received by the worker process.\n\nOne of the fixes could be to call BackgroundWorkerUnblockSignals just\nafter sigsetjmp. I'm not sure if this is the best solution.\nRobert & myself had a discussion about the problem yesterday. We felt\nthis is a genuine problem with the parallel worker error handling and\nneed to be fixed.\nI could reproduce this issue when there is an error during copy of\ntoast data using parallel copy, this project is an in-progress\nproject. I don't have a test case to reproduce on the head. Any\nsuggestions for a test case on head?\nThe Attached patch has the fix for the same.\n\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 3 Jul 2020 14:40:56 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Parallel worker hangs while handling errors." }, { "msg_contents": "On Fri, Jul 3, 2020 at 2:40 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> The Attached patch has the fix for the same.\n>\nI have added a commitfest entry for this bug:\nhttps://commitfest.postgresql.org/29/2636/\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 7 Jul 2020 06:56:29 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel worker hangs while handling errors." }, { "msg_contents": ">\n> Parallel worker hangs while handling errors.\n>\n> When there is an error in the parallel worker process, we will call\n> ereport/elog with the error message. Worker will then jump from\n> errfinish to setjmp in StartBackgroundWorker function which was set\n> earlier.
Then the worker process will then send the error message\n> through the shared memory to the leader process. Shared memory size is\n> ok 16K, if the error message is less than 16K it works fine.\n\nI reproduced the hang issue with the parallel copy patches[1]. The use\ncase is as follows - one of the parallel workers tries to report error\nto the leader process and as part of the error context it also tries\nto send the entire row/tuple data(which is a lot more than 16KB).\n\nThe fix provided here solves the above problem, i.e. no hang occurs,\nand the entire tuple/row data in the error from worker to leader gets\ntransferred, see the attachment \"testcase.text\" for the output.\n\nApart from that, I also executed the regression tests (make check and\nmake check-world) on the patch, no issues are observed.\n\n[1] - https://www.postgresql.org/message-id/CALDaNm2-wMYO68vtDuuWO5h4FQCsfm4Pcg5XrzEPtRty1bEM7w%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 9 Jul 2020 15:12:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel worker hangs while handling errors." }, { "msg_contents": ">\n> Leader process then identifies that there are some messages that need\n> to be processed, it copies the messages and sets the latch so that the\n> worker process can copy the remaining message from the below function:\n> shm_mq_inc_bytes_read -> SetLatch(&sender->procLatch);, Worker is not\n> able to receive any signal at this point of time & hangs infinitely\n> Worker hangs in this case because when the worker is started the\n> signals will be masked using sigprocmask. Unblocking of signals is\n> done by calling BackgroundWorkerUnblockSignals in ParallelWorkerMain.\n> Now due to error handling the worker has jumped to setjmp in\n> StartBackgroundWorker function.
Here the signals are in a blocked\n> state, hence the signal is not received by the worker process.\n>\n\nYour analysis looks fine to me.\n\nAdding some more info:\n\nThe worker uses SIGUSR1 (with a special shared memory flag\nPROCSIG_PARALLEL_MESSAGE) both for error message sharing(from\nmq_putmessage) and for parallel worker shutdown(from\nParallelWorkerShutdown).\n\nAnd yes, the postmaster blocks SIGUSR1 before forking bgworker(in\nPostmasterMain with pqinitmask() and PG_SETMASK(&BlockSig)), bgworker\nreceives the same blocked signal mask for itself and enters\nStartBackgroundWorker(), uses sigsetjmp for error handling, and then\ngoes to ParallelWorkerMain() where few of the signal handers are set\nand then BackgroundWorkerUnblockSignals() is called to not block any\nsignals.\n\nBut observe when we did sigsetjmp the state of the signal mask is that\nof we received from postmaster which is all signals blocked.\n\nSo, now in error cases when the control is jumped to sigsetjmp we\nstill have the signals blocked(along with SIGUSR1) mask and in the\ncode path of EmitErrorReport, we do send SIGUSR1 with flag\nPROCSIG_PARALLEL_MESSAGE to the leader/backend and wait for the latch\nto be set, this happens only if the worker is able to receive back\nSIGUSR1 from the leader/backend.\n\nIn this reported issue, since SIGUSR1 is blocked at sigsetjmp in\nStartBackgroundWorker(), there is no way that the worker process\nreceiving it from the leader and the latch cannot be set and hence the\nhang occurs.\n\nThe same hang issue can occur(though I'm not able to back it up with a\nuse case), in the cases from wherever the EmitErrorReport() is called\nfrom \"if (sigsetjmp(local_sigjmp_buf, 1) != 0)\" block, such as\nautovacuum.c, bgwriter.c, bgworker.c, checkpointer.c, walwriter.c, and\npostgres.c.\n\n>\n> One of the fixes could be to call BackgroundWorkerUnblockSignals just\n> after sigsetjmp.
I'm not sure if this is the best solution.\n> Robert & myself had a discussion about the problem yesterday. We felt\n> this is a genuine problem with the parallel worker error handling and\n> need to be fixed.\n>\n\nNote that, in all sigsetjmp blocks, we intentionally\nHOLD_INTERRUPTS(), to not cause any issues while performing error\nhandling, I'm concerned here that now, if we directly call\nBackgroundWorkerUnblockSignals() which will open up all the signals\nand our main intention of holding interrupts or block signals may go\naway.\n\nSince the main problem for this hang issue is because of blocking\nSIGUSR1, in sigsetjmp, can't we just only unblock only the SIGUSR1,\ninstead of unblocking all signals? I tried this with parallel copy\nhang, the issue is resolved.\n\nSomething like below -\n\n if (sigsetjmp(local_sigjmp_buf, 1) != 0)\n {\n sigset_t x;\n sigemptyset (&x);\n sigaddset(&x, SIGUSR1);\n sigprocmask(SIG_UNBLOCK, &x, NULL);\n\n /* Since not using PG_TRY, must reset error stack by hand */\n error_context_stack = NULL;\n\n /* Prevent interrupts while cleaning up */\n HOLD_INTERRUPTS();\n\nIf okay, with the above approach, we can put the above\nsigprocmask(SIG_UNBLOCK,..) piece of code(of course generically to\nunblock any given signal) in a macro similar to PG_SETMASK() and use\nthat in all the places wherever EmitErrorReport() is called from\nsigsetjmp. We should mind the portability of sigprocmask as well.\n\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 Jul 2020 13:21:26 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel worker hangs while handling errors." 
}, { "msg_contents": "Thanks for reviewing and adding your thoughts, My comments are inline.\n\nOn Fri, Jul 17, 2020 at 1:21 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> The same hang issue can occur(though I'm not able to back it up with a\n> use case), in the cases from wherever the EmitErrorReport() is called\n> from \"if (sigsetjmp(local_sigjmp_buf, 1) != 0)\" block, such as\n> autovacuum.c, bgwriter.c, bgworker.c, checkpointer.c, walwriter.c, and\n> postgres.c.\n>\n\nI'm not sure if this can occur in other cases.\n\n> >\n> > One of the fixes could be to call BackgroundWorkerUnblockSignals just\n> > after sigsetjmp. I'm not sure if this is the best solution.\n> > Robert & myself had a discussion about the problem yesterday. We felt\n> > this is a genuine problem with the parallel worker error handling and\n> > need to be fixed.\n> >\n>\n> Note that, in all sigsetjmp blocks, we intentionally\n> HOLD_INTERRUPTS(), to not cause any issues while performing error\n> handling, I'm concerned here that now, if we directly call\n> BackgroundWorkerUnblockSignals() which will open up all the signals\n> and our main intention of holding interrupts or block signals may go\n> away.\n>\n> Since the main problem for this hang issue is because of blocking\n> SIGUSR1, in sigsetjmp, can't we just only unblock only the SIGUSR1,\n> instead of unblocking all signals? I tried this with parallel copy\n> hang, the issue is resolved.\n>\n\nOn putting further thoughts on this, I feel just unblocking SIGUSR1\nwould be the right approach in this case. I'm attaching a new patch\nwhich unblocks SIGUSR1 signal. I have verified that the original issue\nwith WIP parallel copy patch gets fixed. I have made changes only in\nbgworker.c as we require the parallel worker to receive this signal\nand continue processing. 
I have not included the changes for other\nprocesses as I'm not sure if this scenario is applicable for other\nprocesses.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 23 Jul 2020 12:54:05 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel worker hangs while handling errors." }, { "msg_contents": "On Thu, Jul 23, 2020 at 12:54 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Thanks for reviewing and adding your thoughts, My comments are inline.\n>\n> On Fri, Jul 17, 2020 at 1:21 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > The same hang issue can occur(though I'm not able to back it up with a\n> > use case), in the cases from wherever the EmitErrorReport() is called\n> > from \"if (sigsetjmp(local_sigjmp_buf, 1) != 0)\" block, such as\n> > autovacuum.c, bgwriter.c, bgworker.c, checkpointer.c, walwriter.c, and\n> > postgres.c.\n> >\n>\n> I'm not sure if this can occur in other cases.\n>\n\nI checked further on this point: Yes, it can't occur for the other\ncases, as mq_putmessage() gets only used for parallel\nworkers(ParallelWorkerMain() --> pq_redirect_to_shm_mq()).\n\n>\n> > Note that, in all sigsetjmp blocks, we intentionally\n> > HOLD_INTERRUPTS(), to not cause any issues while performing error\n> > handling, I'm concerned here that now, if we directly call\n> > BackgroundWorkerUnblockSignals() which will open up all the signals\n> > and our main intention of holding interrupts or block signals may go\n> > away.\n> >\n> > Since the main problem for this hang issue is because of blocking\n> > SIGUSR1, in sigsetjmp, can't we just only unblock only the SIGUSR1,\n> > instead of unblocking all signals? I tried this with parallel copy\n> > hang, the issue is resolved.\n> >\n>\n> On putting further thoughts on this, I feel just unblocking SIGUSR1\n> would be the right approach in this case. 
I'm attaching a new patch\n> which unblocks the SIGUSR1 signal. I have verified that the original issue\n> with the WIP parallel copy patch gets fixed. I have made changes only in\n> bgworker.c as we require the parallel worker to receive this signal\n> and continue processing. I have not included the changes for other\n> processes as I'm not sure if this scenario is applicable for other\n> processes.\n>\n\nSince we are sure that this hang issue can occur only for parallel\nworkers, the change in StartBackgroundWorker's sigsetjmp block\nshould only be made for the parallel worker case. Also, there can be a\nlot of other callback execution and processing in proc_exit() for\nwhich we might not need SIGUSR1 unblocked. So, let's undo the\nunblocking right after EmitErrorReport() to not cause any new issues.\n\nAttaching a modified v2 patch: it has the unblocking only for\nparallel workers, undoing it after EmitErrorReport(), and some\nadjustments in the comment.\n\nI verified this fix for the parallel copy hang issue. Both make\ncheck and make check-world pass.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 24 Jul 2020 12:40:52 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel worker hangs while handling errors." 
}, { "msg_contents": "On Fri, Jul 24, 2020 at 12:41 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Jul 23, 2020 at 12:54 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Thanks for reviewing and adding your thoughts, My comments are inline.\n> >\n> > On Fri, Jul 17, 2020 at 1:21 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > The same hang issue can occur(though I'm not able to back it up with a\n> > > use case), in the cases from wherever the EmitErrorReport() is called\n> > > from \"if (sigsetjmp(local_sigjmp_buf, 1) != 0)\" block, such as\n> > > autovacuum.c, bgwriter.c, bgworker.c, checkpointer.c, walwriter.c, and\n> > > postgres.c.\n> > >\n> >\n> > I'm not sure if this can occur in other cases.\n> >\n>\n> I checked further on this point: Yes, it can't occur for the other\n> cases, as mq_putmessage() gets only used for parallel\n> workers(ParallelWorkerMain() --> pq_redirect_to_shm_mq()).\n>\n> >\n> > > Note that, in all sigsetjmp blocks, we intentionally\n> > > HOLD_INTERRUPTS(), to not cause any issues while performing error\n> > > handling, I'm concerned here that now, if we directly call\n> > > BackgroundWorkerUnblockSignals() which will open up all the signals\n> > > and our main intention of holding interrupts or block signals may go\n> > > away.\n> > >\n> > > Since the main problem for this hang issue is because of blocking\n> > > SIGUSR1, in sigsetjmp, can't we just only unblock only the SIGUSR1,\n> > > instead of unblocking all signals? I tried this with parallel copy\n> > > hang, the issue is resolved.\n> > >\n> >\n> > On putting further thoughts on this, I feel just unblocking SIGUSR1\n> > would be the right approach in this case. I'm attaching a new patch\n> > which unblocks SIGUSR1 signal. I have verified that the original issue\n> > with WIP parallel copy patch gets fixed. 
I have made changes only in\n> > bgworker.c as we require the parallel worker to receive this signal\n> > and continue processing. I have not included the changes for other\n> > processes as I'm not sure if this scenario is applicable for other\n> > processes.\n> >\n>\n> Since we are sure that this hang issue can occur only for parallel\n> workers, and the change in StartBackgroundWorker's sigsetjmp's block\n> should only be made for parallel worker cases. And also there can be a\n> lot of other callbacks execution and processing in proc_exit() for\n> which we might not need the SIGUSR1 unblocked. So, let's undo the\n> unblocking right after EmitErrorReport() to not cause any new issues.\n>\n> Attaching a modified v2 patch: it has the unblocking for only for\n> parallel workers, undoing it after EmitErrorReport(), and some\n> adjustments in the comment.\n>\n\nI have made slight changes on top of the patch to remove duplicate\ncode, attached v3 patch for the same.\nThe parallel worker hang issue gets resolved, make check & make\ncheck-world passes.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 25 Jul 2020 07:02:32 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel worker hangs while handling errors." }, { "msg_contents": "On Sat, Jul 25, 2020 at 7:02 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> I have made slight changes on top of the patch to remove duplicate\n> code, attached v3 patch for the same.\n> The parallel worker hang issue gets resolved, make check & make\n> check-world passes.\n>\n\nHaving a function to unblock selective signals for a bg worker looks good to me.\n\nFew comments:\n1. Do we need \"worker\" as a function argument in\nupdate_parallel_worker_sigmask(BackgroundWorker *worker,.... ? Since\nMyBgworkerEntry is a global variable, can't we have a local variable\ninstead?\n2. 
Instead of update_parallel_worker_sigmask() serving only for\nparallel workers, can we make it generic, so that for any bgworker,\ngiven a signal it unblocks it, although there's no current use case\nfor a bg worker unblocking a single signal other than a parallel\nworker doing it for SIGUSR1 for this hang issue. Please note that we\nhave BackgroundWorkerBlockSignals() and\nBackgroundWorkerUnblockSignals().\nI slightly modified your function, something like below?\n\nvoid\nBackgroundWorkerUpdateSignalMask(int signum, bool toblock)\n{\n if (toblock)\n sigaddset(&BlockSig, signum);\n else\n sigdelset(&BlockSig, signum);\n\n PG_SETMASK(&BlockSig);\n}\n\n/*to unblock SIGUSR1*/\nif ((worker->bgw_flags & BGWORKER_CLASS_PARALLEL) != 0)\n BackgroundWorkerUpdateSignalMask(SIGUSR1, false);\n\n/*to block SIGUSR1*/\nif ((worker->bgw_flags & BGWORKER_CLASS_PARALLEL) != 0)\n BackgroundWorkerUpdateSignalMask(SIGUSR1, true);\n\nIf okay, with the BackgroundWorkerUpdateSignalMask() function, please\nnote that we might have to add it in bgworker.sgml as well.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 27 Jul 2020 10:13:43 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel worker hangs while handling errors." }, { "msg_contents": "Thanks for your comments Bharath.\n\nOn Mon, Jul 27, 2020 at 10:13 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> 1. Do we need \"worker\" as a function argument in\n> update_parallel_worker_sigmask(BackgroundWorker *worker,.... ? Since\n> MyBgworkerEntry is a global variable, can't we have a local variable\n> instead?\n\nFixed, I have moved the worker check to the caller function.\n\n> 2. 
Instead of update_parallel_worker_sigmask() serving only for\n> parallel workers, can we make it generic, so that for any bgworker,\n> given a signal it unblocks it, although there's no current use case\n> for a bg worker unblocking a single signal other than a parallel\n> worker doing it for SIGUSR1 for this hang issue. Please note that we\n> have BackgroundWorkerBlockSignals() and\n> BackgroundWorkerUnblockSignals().\n\nFixed. I have slightly modified the changes to break into\nBackgroundWorkerRemoveBlockSignal & BackgroundWorkerAddBlockSignal.\nThis maintains the consistency similar to\nBackgroundWorkerBlockSignals() and BackgroundWorkerUnblockSignals().\n\n> If okay, with the BackgroundWorkerUpdateSignalMask() function, please\n> note that we might have to add it in bgworker.sgml as well.\n\nIncluded the documentation.\n\nAttached the updated patch for the same.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 28 Jul 2020 11:05:05 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel worker hangs while handling errors." }, { "msg_contents": "On Tue, Jul 28, 2020 at 11:05 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Thanks for your comments Bharath.\n>\n> On Mon, Jul 27, 2020 at 10:13 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > 1. Do we need \"worker\" as a function argument in\n> > update_parallel_worker_sigmask(BackgroundWorker *worker,.... ? Since\n> > MyBgworkerEntry is a global variable, can't we have a local variable\n> > instead?\n>\n> Fixed, I have moved the worker check to the caller function.\n>\n> > 2. Instead of update_parallel_worker_sigmask() serving only for\n> > parallel workers, can we make it generic, so that for any bgworker,\n> > given a signal it unblocks it, although there's no current use case\n> > for a bg worker unblocking a single signal other than a parallel\n> > worker doing it for SIGUSR1 for this hang issue. 
Please note that we\n> > have BackgroundWorkerBlockSignals() and\n> > BackgroundWorkerUnblockSignals().\n>\n> Fixed. I have slightly modified the changes to break into\n> BackgroundWorkerRemoveBlockSignal & BackgroundWorkerAddBlockSignal.\n> This maintains the consistency similar to\n> BackgroundWorkerBlockSignals() and BackgroundWorkerUnblockSignals().\n>\n> > If okay, with the BackgroundWorkerUpdateSignalMask() function, please\n> > note that we might have to add it in bgworker.sgml as well.\n>\n> Included the documentation.\n>\n> Attached the updated patch for the same.\n>\n\nThe v4 patch looks good to me. Hang is not seen, make check and make\ncheck-world passes. I moved this to the committer for further review\nin https://commitfest.postgresql.org/29/2636/.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 28 Jul 2020 15:04:51 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel worker hangs while handling errors." }, { "msg_contents": "On Tue, Jul 28, 2020 at 5:35 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> The v4 patch looks good to me. Hang is not seen, make check and make\n> check-world passes. I moved this to the committer for further review\n> in https://commitfest.postgresql.org/29/2636/.\n\nI don't think I agree with this approach. In particular, I don't\nunderstand the rationale for unblocking only SIGUSR1. Above, Vignesh\nsays that he feels that unblocking only that signal would be the right\napproach, but no reason is given. I have two reasons why I suspect\nit's not the right approach. 
One, it doesn't seem to be what we do\nelsewhere; the only existing cases where we have special handling for\nparticular signals are SIGQUIT and SIGPIPE, and those places have\ncomments explaining the reason why they are handled in a special way.\nTwo, SIGUSR1 is used for a LOT of things: look at all the different\ncases procsignal_sigusr1_handler() checks. If the intention is to only\nallow the things we know are safe, rather than all the signals there\nare, I think this coding utterly fails to achieve that - and for\nreasons that I don't think are really fixable.\n\nMy first idea about how to fix this was just to call\nBackgroundWorkerUnblockSignals() before sigsetjmp(), but that doesn't\nreally work, because ParallelWorkerMain() needs to set the handler for\nSIGTERM before unblocking signals. When you really look at it, the\ncode that does sigsetjmp() in StartBackgroundWorker() is entirely\nbogus. The comment says \"See notes in postgres.c about the design of\nthis coding,\" but if you go read that comment, it says that the point\nof using sigsetjmp() is to make sure that signals are unblocked within\nthe if-block that follows, but the use in bgworker.c actually achieves\nexactly the opposite, because signals have not yet been unblocked at\nthis point. So, whereas the postgres.c code unblocks signals if they\nare blocked, this code blocks signals if they are unblocked. Given\nthat, maybe the right thing to do is to just start the if-block with a\ncall to BackgroundWorkerUnblockSignals(). Perhaps there's some reason\nthat would be unsafe, if the failure occurs too early: postgres.c\ndoesn't unblock signals until after BaseInit() and InitProcess() have\nbeen called, but here an error in those functions would unblock\nsignals while it's being handled. Off-hand, I don't see why that would\nmatter, though. 
In the postgres.c case, there wouldn't be a\nPG_exception_stack yet, so we'd end up in the long part of\npg_re_throw(), which basically promotes the ERROR to FATAL but\notherwise does pretty similar things to what this handler does anyway.\nSo I'm not really sure there's any reason not to just go this way.\n\nAnother approach would be to establish a new PG_exception_stack() with\na free sigsetjmp() call and fresh local buffer, inside\nParallelWorkerMain(). It would do the same thing as the existing\nhandler, but because it would be established after unblocking signals,\nsigsetjmp() would behave as desired. This doesn't seem quite as good\nto me because I think this pattern might end up getting copied into\nmany background workers, and it's a bunch of extra code for which I\ndon't really see a clear need. So at the moment I think a one line fix\nin StartBackgroundWorker(), to just unblock signals while handling\nerrors, looks better.\n\nAdding Alvaro to the CC line, since I think he wrote this code\noriginally. Not sure if he or anyone else might have an opinion.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 6 Aug 2020 16:03:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel worker hangs while handling errors." }, { "msg_contents": "On Fri, Aug 7, 2020 at 1:34 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Jul 28, 2020 at 5:35 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > The v4 patch looks good to me. Hang is not seen, make check and make\n> > check-world passes. I moved this to the committer for further review\n> > in https://commitfest.postgresql.org/29/2636/.\n>\n> I don't think I agree with this approach. In particular, I don't\n> understand the rationale for unblocking only SIGUSR1. 
Above, Vignesh\n> says that he feels that unblocking only that signal would be the right\n> approach, but no reason is given. I have two reasons why I suspect\n> it's not the right approach. One, it doesn't seem to be what we do\n> elsewhere; the only existing cases where we have special handling for\n> particular signals are SIGQUIT and SIGPIPE, and those places have\n> comments explaining the reason why they are handled in a special way.\n>\n\nWe intend to unblock SIGQUIT before sigsetjmp() in places like\nbgwriter, checkpointer, walwriter and walreceiver, but we only call\nsigdelset(&BlockSig, SIGQUIT);. Without PG_SETMASK(&BlockSig);, it seems\nwe are not actually unblocking SIGQUIT, and quickdie() will never\nget called in these processes' if (sigsetjmp(local_sigjmp_buf, 1) !=\n0){....} blocks. If the postmaster sends a SIGQUIT while these processes are\ndoing cleanup tasks in the sigsetjmp() block, it will not be received, and the\npostmaster later sends SIGKILL to kill them, from the code below.\n\n    /*\n     * If we already sent SIGQUIT to children and they are slow to shut\n     * down, it's time to send them SIGKILL. This doesn't happen\n     * normally, but under certain conditions backends can get stuck while\n     * shutting down. This is a last measure to get them unwedged.\n     *\n     * Note we also do this during recovery from a process crash.\n     */\n    if ((Shutdown >= ImmediateShutdown || (FatalError && !SendStop)) &&\n        AbortStartTime != 0 &&\n        (now - AbortStartTime) >= SIGKILL_CHILDREN_AFTER_SECS)\n    {\n        /* We were gentle with them before. 
Not anymore */\n TerminateChildren(SIGKILL);\n /* reset flag so we don't SIGKILL again */\n AbortStartTime = 0;\n }\n\nShouldn't we call PG_SETMASK(&BlockSig); to make it effective?\n\nAm I missing anything here?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Aug 2020 14:34:39 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel worker hangs while handling errors." }, { "msg_contents": "On Fri, Aug 7, 2020 at 5:05 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> We intend to unblock SIGQUIT before sigsetjmp() in places like\n> bgwriter, checkpointer, walwriter and walreceiver, but we only call\n> sigdelset(&BlockSig, SIGQUIT);, Without PG_SETMASK(&BlockSig);, seems\n> like we are not actually unblocking SIQUIT and quickdie() will never\n> get called in these processes if (sigsetjmp(local_sigjmp_buf, 1) !=\n> 0){....}\n\nYeah, maybe so. This code has been around for a long time and I'm not\nsure what the thought process behind it was, but I don't see a flaw in\nyour analysis here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 Aug 2020 09:52:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel worker hangs while handling errors." 
}, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Aug 7, 2020 at 5:05 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> We intend to unblock SIGQUIT before sigsetjmp() in places like\n>> bgwriter, checkpointer, walwriter and walreceiver, but we only call\n>> sigdelset(&BlockSig, SIGQUIT);, Without PG_SETMASK(&BlockSig);, seems\n>> like we are not actually unblocking SIQUIT and quickdie() will never\n>> get called in these processes if (sigsetjmp(local_sigjmp_buf, 1) !=\n>> 0){....}\n\n> Yeah, maybe so. This code has been around for a long time and I'm not\n> sure what the thought process behind it was, but I don't see a flaw in\n> your analysis here.\n\nI think that code is the way it is intentionally: the idea is to not\naccept any signals until we reach the explicit \"PG_SETMASK(&UnBlockSig);\"\ncall further down, between the sigsetjmp stanza and the main loop.\nThe sigdelset call, just like the adjacent pqsignal calls, is\npreparatory setup; it does not intend to allow anything to happen\nimmediately.\n\nIn general, you don't want to accept signals in that area because the\nprocess state may not be fully set up yet. You could argue that the\nSIGQUIT handler has no state dependencies, making it safe to accept\nSIGQUIT earlier during startup of one of these processes, and likewise\nfor them to accept SIGQUIT during error recovery. But barring actual\nevidence of a problem with slow SIGQUIT response in these areas I'm more\ninclined to leave well enough alone. Changing this would add hazards,\ne.g. if somebody ever changes the behavior of the SIGQUIT handler, so\nI'd want some concrete evidence of a benefit. It seems fairly\nirrelevant to the problem at hand with bgworkers, anyway.\n\nAs for said problem, I concur with Robert that the v4 patch seems pretty\ndubious; it's adding a lot of barely-thought-out mechanism for no\nconvincing gain in safety. 
I think the v1 patch was more nearly the right\nthing, except that the unblock needs to happen a tad further down, as\nattached (this is untested but certainly it should pass any test that v1\npassed). I didn't do anything about rewriting the bogus comment just\nabove the sigsetjmp call, but I agree that that should happen too.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 07 Aug 2020 11:36:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Parallel worker hangs while handling errors." }, { "msg_contents": "On Fri, Aug 7, 2020 at 11:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think that code is the way it is intentionally: the idea is to not\n> accept any signals until we reach the explicit \"PG_SETMASK(&UnBlockSig);\"\n> call further down, between the sigsetjmp stanza and the main loop.\n> The sigdelset call, just like the adjacent pqsignal calls, is\n> preparatory setup; it does not intend to allow anything to happen\n> immediately.\n\nI don't think that your analysis here is correct. The sigdelset call\nis manipulating BlockSig, and the subsequent PG_SETMASK call is\nworking with UnblockSig, so it doesn't make sense to view one as a\npreparatory step for the other. It could be correct to interpret the\nsigdelset call as preparatory work for a future call to\nPG_SETMASK(&BlockSig), but AFAICS there are no such calls in the\nprocesses where this incantation exists, so really it just seems to be\na no-op. Furthermore, the comment says that the point of the\nsigdelset() is to allow SIGQUIT \"at all times,\" which doesn't square\nwell with your suggestion that we intended it to take effect only\nlater.\n\n> In general, you don't want to accept signals in that area because the\n> process state may not be fully set up yet. 
You could argue that the\n> SIGQUIT handler has no state dependencies, making it safe to accept\n> SIGQUIT earlier during startup of one of these processes, and likewise\n> for them to accept SIGQUIT during error recovery. But barring actual\n> evidence of a problem with slow SIGQUIT response in these areas I'm more\n> inclined to leave well enough alone. Changing this would add hazards,\n> e.g. if somebody ever changes the behavior of the SIGQUIT handler, so\n> I'd want some concrete evidence of a benefit. It seems fairly\n> irrelevant to the problem at hand with bgworkers, anyway.\n\nThe SIGQUIT handler in question contains nothing than a call to\n_exit(2) and a long comment explaining why we don't do anything else,\nso I think the argument that it has no state dependencies is pretty\nwell water-tight. Whether or not we've got a problem with timely\nSIGQUIT acceptance is much less clear. So it seems to me that the\nsafer thing to do here would be to unblock the signal. It might gain\nsomething, and it can't really lose anything. Now it's true that the\ncalculus might change if someone were to modify the behavior of the\nSIGQUIT handler in the future, but if they do then it's their job to\nthink about this stuff. It doesn't seem especially likely for that to\nchange, anyway. The only reason that the handler for regular backends\ndoes anything other than _exit(2) is that we want to try to let the\nclient know what happened before we croak, and that concern is\nirrelevant for background workers. Doing any other cleanup here is\nunsafe and unnecessary.\n\n> As for said problem, I concur with Robert that the v4 patch seems pretty\n> dubious; it's adding a lot of barely-thought-out mechanism for no\n> convincing gain in safety. I think the v1 patch was more nearly the right\n> thing, except that the unblock needs to happen a tad further down, as\n> attached (this is untested but certainly it should pass any test that v1\n> passed). 
I didn't do anything about rewriting the bogus comment just\n> above the sigsetjmp call, but I agree that that should happen too.\n\nI am not sure whether the difference between this and v1 matters,\nbecause in postgres.c it's effectively happening inside sigsetjmp, so\nthe earlier unblock must not be that bad. But I don't mind putting it\nin the place you suggest.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 Aug 2020 12:13:55 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel worker hangs while handling errors." }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Aug 7, 2020 at 11:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The sigdelset call, just like the adjacent pqsignal calls, is\n>> preparatory setup; it does not intend to allow anything to happen\n>> immediately.\n\n> I don't think that your analysis here is correct. The sigdelset call\n> is manipulating BlockSig, and the subsequent PG_SETMASK call is\n> working with UnblockSig, so it doesn't make sense to view one as a\n> preparatory step for the other.\n\nThat SETMASK call will certainly unblock SIGQUIT, so I don't see what\nyour point is. Anyway, the bottom line is that that code's been like\nthat for a decade or two without complaints, so I'm disinclined to\nmess with it on the strength of nothing much.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Aug 2020 12:56:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Parallel worker hangs while handling errors." }, { "msg_contents": "On Fri, Aug 7, 2020 at 12:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I don't think that your analysis here is correct. 
The sigdelset call\n> > is manipulating BlockSig, and the subsequent PG_SETMASK call is\n> > working with UnblockSig, so it doesn't make sense to view one as a\n> > preparatory step for the other.\n>\n> That SETMASK call will certainly unblock SIGQUIT, so I don't see what\n> your point is.\n\nI can't figure out if you're trolling me here or what. It's true that\nthe PG_SETMASK() call will certainly unblock SIGQUIT, but that would\nalso be true if the sigdelset() call were absent.\n\n> Anyway, the bottom line is that that code's been like\n> that for a decade or two without complaints, so I'm disinclined to\n> mess with it on the strength of nothing much.\n\nReally? Have you reversed your policy of wanting the comments to\naccurately describe what the code does?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 Aug 2020 13:37:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel worker hangs while handling errors." }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Aug 7, 2020 at 12:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> That SETMASK call will certainly unblock SIGQUIT, so I don't see what\n>> your point is.\n\n> I can't figure out if you're trolling me here or what. It's true that\n> the PG_SETMASK() call will certainly unblock SIGQUIT, but that would\n> also be true if the sigdelset() call were absent.\n\nThe point of the sigdelset is that if somewhere later on, we install\nthe BlockSig mask, then SIGQUIT will remain unblocked. You asserted\nupthread that noplace in these processes ever does so; maybe that's\ntrue today, or maybe not, but the intent of this code is that *once\nwe get through initialization* SIGQUIT will remain unblocked.\n\nI'll concede that it's not 100% clear whether or not these processes\nneed to re-block SIGQUIT during error recovery. 
I repeat, though,\nthat I'm disinclined to change that without some evidence that there's\nactually a problem with the way it works now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Aug 2020 14:00:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Parallel worker hangs while handling errors." }, { "msg_contents": "On Fri, Aug 7, 2020 at 2:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The point of the sigdelset is that if somewhere later on, we install\n> the BlockSig mask, then SIGQUIT will remain unblocked.\n\nI mean, we're just repeating the same points here, but that's not what\nthe comment says.\n\n> You asserted\n> upthread that noplace in these processes ever does so; maybe that's\n> true today, or maybe not,\n\nIt's easily checked using 'git grep'.\n\n> but the intent of this code is that *once\n> we get through initialization* SIGQUIT will remain unblocked.\n\nI can't speak to the intent, but I can speak to what the comment says.\n\n> I'll concede that it's not 100% clear whether or not these processes\n> need to re-block SIGQUIT during error recovery.\n\nI think it's entirely clear that they do not, and I have explained my\nreasoning already.\n\n> I repeat, though,\n> that I'm disinclined to change that without some evidence that there's\n> actually a problem with the way it works now.\n\nI've also already explained why I don't agree with this perspective.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 Aug 2020 14:15:35 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel worker hangs while handling errors." 
}, { "msg_contents": "On Fri, Aug 7, 2020 at 11:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Fri, Aug 7, 2020 at 12:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> That SETMASK call will certainly unblock SIGQUIT, so I don't see what\n> >> your point is.\n>\n> > I can't figure out if you're trolling me here or what. It's true that\n> > the PG_SETMASK() call will certainly unblock SIGQUIT, but that would\n> > also be true if the sigdelset() call were absent.\n>\n> The point of the sigdelset is that if somewhere later on, we install\n> the BlockSig mask, then SIGQUIT will remain unblocked. You asserted\n> upthread that noplace in these processes ever does so; maybe that's\n> true today, or maybe not, but the intent of this code is that *once\n> we get through initialization* SIGQUIT will remain unblocked.\n>\n> I'll concede that it's not 100% clear whether or not these processes\n> need to re-block SIGQUIT during error recovery. I repeat, though,\n> that I'm disinclined to change that without some evidence that there's\n> actually a problem with the way it works now.\n>\n\nI think the main point that needs to be thought is that: will any of\nthe bgwriter, checkpointer, walwriter and walreceiver processes need\nto unblock SIGQUIT during their error recovery code paths i.e. in\ntheir respective if (sigsetjmp(local_sigjmp_buf, 1) != 0){....}\nstanzas? Currently, SIGQUIT is blocked in the sigsetjmp() stanza.\n\nIf the answer is yes: then we must do PG_SETMASK(&BlockSig); :either\nright after sigdelset(&BlockSig, SIGQUIT); to allow quickdie() even\nbefore the sigsetjmp() stanza and also in the sigsetjmp() stanza or do\nPG_SETMASK(&BlockSig); only inside the sigsetjmp() stanza. 
The\npostmaster sends SIGQUIT in immediate shutdown mode and it gives\nchildren a chance to exit safely, but if the children take longer\ntime, then it anyways kills them with SIGKILL(note that SIGKILL can\nnot be handled or ignored by any process).\n\nIf the answer is no: let these processes perform clean ups in their\nrespective sigsetjmp() stanzas, until the postmaster sends SIGKILL if\nthe clean ups take time. We could have some elaborated comments before\nsigdelset(&BlockSig, SIGQUIT); instead of \"/* We allow SIGQUIT\n(quickdie) at all times */\" to avoid confusion.\n\nWe must not worry about blocking or unblocking SIGQUIT in these\nprocesses after the sigsetjmp() stanza, as it anyways gets unblocked\nby PG_SETMASK(&UnBlockSig); and also no problem if somebody does\nPG_SETMASK(&BlockSig); in future as we have already done\nsigdelset(&BlockSig, SIGQUIT);.\n\nCan we start a separate thread to discuss this SIGQUIT point to not\nsidetrack the main issue \"Parallel worker hangs while handling\nerrors.\"?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 10 Aug 2020 16:24:07 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel worker hangs while handling errors." }, { "msg_contents": "On Fri, Aug 7, 2020 at 1:34 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Jul 28, 2020 at 5:35 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > The v4 patch looks good to me. Hang is not seen, make check and make\n> > check-world passes. I moved this to the committer for further review\n> > in https://commitfest.postgresql.org/29/2636/.\n>\n> I don't think I agree with this approach. In particular, I don't\n> understand the rationale for unblocking only SIGUSR1. Above, Vignesh\n> says that he feels that unblocking only that signal would be the right\n> approach, but no reason is given. 
I have two reasons why I suspect\n> it's not the right approach. One, it doesn't seem to be what we do\n> elsewhere; the only existing cases where we have special handling for\n> particular signals are SIGQUIT and SIGPIPE, and those places have\n> comments explaining the reason why they are handled in a special way.\n> Two, SIGUSR1 is used for a LOT of things: look at all the different\n> cases procsignal_sigusr1_handler() checks. If the intention is to only\n> allow the things we know are safe, rather than all the signals there\n> are, I think this coding utterly fails to achieve that - and for\n> reasons that I don't think are really fixable.\n>\n\nMy intention of blocking only SIGUSR1 over unblocking all signals\nmainly because we are already in the error path and we are about to\nexit after emitting the error report. I was not sure if we intended to\nreceive any other signal just before exiting.\nThe Solution Robert & Tom are suggesting by Calling\nBackgroundWorkerUnblockSignals fixes the actual problem.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 11 Aug 2020 20:08:11 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel worker hangs while handling errors." }, { "msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n> The Solution Robert & Tom are suggesting by Calling\n> BackgroundWorkerUnblockSignals fixes the actual problem.\n\nI've gone ahead and pushed the bgworker fix, since everyone seems\nto agree that that's okay, and it is provably fixing a problem.\n\nAs for the question of SIGQUIT handling, I see that postgres.c\ndoes \"PG_SETMASK(&BlockSig)\" immediately after applying the sigdelset\nchange, so there probably isn't any harm in having the background\nprocesses do likewise. I wonder though why bgworkers are not\napplying the same policy. 
(I remain of the opinion that any\nchanges in this area should not be back-patched without evidence\nof a concrete problem; it's at least as likely that we'll introduce\na problem as fix one.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 03 Sep 2020 17:01:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Parallel worker hangs while handling errors." }, { "msg_contents": "On 2020-Sep-03, Tom Lane wrote:\n\n> As for the question of SIGQUIT handling, I see that postgres.c\n> does \"PG_SETMASK(&BlockSig)\" immediately after applying the sigdelset\n> change, so there probably isn't any harm in having the background\n> processes do likewise. I wonder though why bgworkers are not\n> applying the same policy.\n\nIt's quite likely that it's the way it is more by accident than because\nI was thinking extremely carefully about signal handling when originally\nwriting that code. Some parts of that code I was copying from others'\npatches, and I could easily have missed a detail like this. (I didn't\n\"git blame\" to verify that these parts are mine, though).\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 3 Sep 2020 17:07:10 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Parallel worker hangs while handling errors." }, { "msg_contents": "I wrote:\n> As for the question of SIGQUIT handling, I see that postgres.c\n> does \"PG_SETMASK(&BlockSig)\" immediately after applying the sigdelset\n> change, so there probably isn't any harm in having the background\n> processes do likewise.\n\nConcretely, something about like this (I just did the bgwriter, but\nwe'd want the same in all the background processes). 
I tried to\nrespond to Robert's complaint about the inaccurate comment just above\nsigsetjmp, too.\n\nThis passes check-world, for what little that's worth.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 03 Sep 2020 17:29:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Parallel worker hangs while handling errors." }, { "msg_contents": "On Thu, Sep 3, 2020 at 5:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Concretely, something about like this (I just did the bgwriter, but\n> we'd want the same in all the background processes). I tried to\n> respond to Robert's complaint about the inaccurate comment just above\n> sigsetjmp, too.\n>\n> This passes check-world, for what little that's worth.\n\nSeems totally reasonable from here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 8 Sep 2020 07:54:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel worker hangs while handling errors." }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Sep 3, 2020 at 5:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Concretely, something about like this (I just did the bgwriter, but\n>> we'd want the same in all the background processes). I tried to\n>> respond to Robert's complaint about the inaccurate comment just above\n>> sigsetjmp, too.\n>> This passes check-world, for what little that's worth.\n\n> Seems totally reasonable from here.\n\nOK, I did the same in other relevant places and pushed it.\n\nIt's not clear to me whether we want to institute the \"accepting SIGQUIT\nis always okay\" rule in processes that didn't already have code to change\nBlockSig. The relevant processes are pgarch.c, startup.c, bgworker.c,\nautovacuum.c (launcher and workers both), and walsender.c. 
In the first\ntwo of these I doubt it matters, because I don't think they'll ever block\nsignals again anyway -- they certainly don't have outer sigsetjmp blocks.\nAnd I'm a bit hesitant to mess with bgworker given that we seem to expect\nthat to be heavily used by extension code, and we're exposing code to\nallow extensions to mess with the signal blocking state. On the other\nhand, as long as SIGQUIT is pointing at SignalHandlerForCrashExit, it's\nhard to see a reason why holding it off could be necessary. So maybe\nhaving a uniform rule would be good.\n\nAny thoughts about that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Sep 2020 16:20:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Parallel worker hangs while handling errors." }, { "msg_contents": "On Fri, Sep 11, 2020 at 4:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> It's not clear to me whether we want to institute the \"accepting SIGQUIT\n> is always okay\" rule in processes that didn't already have code to change\n> BlockSig. The relevant processes are pgarch.c, startup.c, bgworker.c,\n> autovacuum.c (launcher and workers both), and walsender.c. In the first\n> two of these I doubt it matters, because I don't think they'll ever block\n> signals again anyway -- they certainly don't have outer sigsetjmp blocks.\n> And I'm a bit hesitant to mess with bgworker given that we seem to expect\n> that to be heavily used by extension code, and we're exposing code to\n> allow extensions to mess with the signal blocking state. On the other\n> hand, as long as SIGQUIT is pointing at SignalHandlerForCrashExit, it's\n> hard to see a reason why holding it off could be necessary. So maybe\n> having a uniform rule would be good.\n>\n> Any thoughts about that?\n\nI think a backend process that isn't timely handling SIGQUIT is by\nthat very fact buggy. 
\"pg_ctl stop -mi\" isn't a friendly suggestion.\nSo I favor trying to make it uniform.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 11 Sep 2020 16:23:21 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel worker hangs while handling errors." }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Sep 11, 2020 at 4:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> It's not clear to me whether we want to institute the \"accepting SIGQUIT\n>> is always okay\" rule in processes that didn't already have code to change\n>> BlockSig.\n\n> I think a backend process that isn't timely handling SIGQUIT is by\n> that very fact buggy. \"pg_ctl stop -mi\" isn't a friendly suggestion.\n> So I favor trying to make it uniform.\n\nWell, if we want to take a hard line about that, we should centralize\nthe setup of SIGQUIT. The attached makes InitPostmasterChild do it,\nand removes the duplicative code from elsewhere.\n\nI also flipped autovacuum and walsender from using quickdie to using\nSignalHandlerForCrashExit. Whatever you think about the safety or\nunsafety of quickdie, there seems no reason for autovacuum to be trying\nto tell its nonexistent client about a shutdown. I don't think it's\nterribly useful for a walsender either, though maybe somebody has a\ndifferent opinion about that?\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 12 Sep 2020 13:57:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Parallel worker hangs while handling errors." } ]
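The mask manipulation debated in the thread above — `sigdelset(&BlockSig, SIGQUIT)` followed by installing the mask with `PG_SETMASK()` — can be sketched using the equivalent POSIX semantics exposed by Python's `signal` module. This is only an illustrative sketch of the mechanism, not PostgreSQL code; the name `block_sig` is invented here to stand in for PostgreSQL's `BlockSig`.

```python
import signal

# Stand-in for PostgreSQL's BlockSig: a mask that blocks (nearly) every signal.
# (The kernel silently refuses to block SIGKILL and SIGSTOP.)
block_sig = set(signal.valid_signals())

# The sigdelset(&BlockSig, SIGQUIT) step: drop SIGQUIT from the mask, so any
# later installation of this mask keeps SIGQUIT deliverable.
block_sig.discard(signal.SIGQUIT)

# The PG_SETMASK(&BlockSig) step: install the mask for the current thread.
signal.pthread_sigmask(signal.SIG_SETMASK, block_sig)

# Read the mask back; SIG_BLOCK with an empty set changes nothing and
# returns the previous (i.e. current) mask.
current = signal.pthread_sigmask(signal.SIG_BLOCK, set())

print(signal.SIGQUIT in current)  # False: SIGQUIT stays unblocked
print(signal.SIGTERM in current)  # True: other catchable signals stay blocked

# Undo the demo so nothing remains blocked afterwards.
signal.pthread_sigmask(signal.SIG_SETMASK, set())
```

The point made in the thread holds in this sketch too: once SIGQUIT has been removed from the saved mask, any later re-installation of that mask leaves SIGQUIT deliverable.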
[ { "msg_contents": "Hi\n\nPL/pgSQL originated as a simple implementation of a language strongly\ninspired by PL/SQL, so it may be interesting to read something about the\nhistory of PL/SQL:\n\nhttp://oracle-internals.com/blog/2020/04/29/a-not-so-brief-but-very-accurate-history-of-pl-sql/\n\nPavel", "msg_date": "Fri, 3 Jul 2020 21:16:35 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "pro pametniky -historie vzniku PL/SQL" }, { "msg_contents": "Hi\n\nI am sorry, wrong mailing list.\n\nRegards\n\nPavel\n\nOn Fri, Jul 3, 2020 at 21:16, Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> PL/pgSQL originated as a simple implementation of a language strongly\n> inspired by PL/SQL, so it may be interesting to read something about the\n> history of PL/SQL:\n>\n>\n> http://oracle-internals.com/blog/2020/04/29/a-not-so-brief-but-very-accurate-history-of-pl-sql/\n>\n> Pavel\n>", "msg_date": "Fri, 3 Jul 2020 21:19:20 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pro pametniky -historie vzniku PL/SQL" }, { "msg_contents": "On Fri, Jul 3, 2020 at 3:20 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> I am sorry, wrong mailing list.\n>\n\nThanks for reading/sharing my blog post, regardless of the mailing list :)\n\n-- \nJonah H. Harris", "msg_date": "Fri, 3 Jul 2020 16:27:00 -0400", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pro pametniky -historie vzniku PL/SQL" } ]
[ { "msg_contents": "(resending to the list)\n\nHi All\n\nI started looking into Konstantin's 30 month old thread/patch:\n|Re: [HACKERS] Secondary index access optimizations\nhttps://www.postgresql.org/message-id/27516421-5afa-203c-e22a-8407e9187327%40postgrespro.ru\n\n..to which David directed me 12 months ago:\n|Subject: Re: scans on table fail to be excluded by partition bounds\nhttps://www.postgresql.org/message-id/CAKJS1f_iOmCW11dFzifpDGUgSLAoSTDOjw2tcec%3D7Cgq%2BsR80Q%40mail.gmail.com\n\nMy complaint at the time was for a query plan like:\n\nCREATE TABLE p (i int, j int) PARTITION BY RANGE(i);\nSELECT format('CREATE TABLE p%s PARTITION OF p FOR VALUES FROM(%s)TO(%s)', i, 10*(i-1), 10*i) FROM generate_series(1,10)i; \\gexec\nINSERT INTO p SELECT i%99, i%9 FROM generate_series(1,99999)i;\nVACUUM ANALYZE p;\nCREATE INDEX ON p(i);\nCREATE INDEX ON p(j);\n\npostgres=# explain analyze SELECT * FROM p WHERE (i=10 OR i=20 OR i=30) AND j<2;\nAppend (cost=28.51..283.25 rows=546 width=12) (actual time=0.100..1.364 rows=546 loops=1)\n -> Bitmap Heap Scan on p2 (cost=28.51..93.51 rows=182 width=12) (actual time=0.099..0.452 rows=182 loops=1)\n Recheck Cond: ((i = 10) OR (i = 20) OR (i = 30))\n Filter: (j < 2)\n Rows Removed by Filter: 818\n Heap Blocks: exact=45\n -> BitmapOr (cost=28.51..28.51 rows=1000 width=0) (actual time=0.083..0.083 rows=0 loops=1)\n -> Bitmap Index Scan on p2_i_idx (cost=0.00..19.79 rows=1000 width=0) (actual time=0.074..0.074 rows=1000 loops=1)\n Index Cond: (i = 10)\n -> Bitmap Index Scan on p2_i_idx (cost=0.00..4.29 rows=1 width=0) (actual time=0.008..0.008 rows=0 loops=1)\n Index Cond: (i = 20)\n -> Bitmap Index Scan on p2_i_idx (cost=0.00..4.29 rows=1 width=0) (actual time=0.001..0.001 rows=0 loops=1)\n Index Cond: (i = 30)\n...\n\nThis 2nd and 3rd index scan on p2_i_idx are useless, and benign here, but\nharmful if we have a large OR list.\n\nI tried rebasing Konstantin's patch, but it didn't handle the case of\n\"refuting\" inconsistent arms 
of an \"OR\" list, so I came up with this. This\ncurrently depends on the earlier patch only to call RelationGetPartitionQual,\nso appears to be mostly a separate issue.\n\nI believe the current behavior of \"OR\" lists is also causing another issue I\nreported, which a customer hit again last week:\nhttps://www.postgresql.org/message-id/20191216184906.GA2082@telsasoft.com\n|ERROR: could not resize shared memory segment...No space left on device\n\nWhen I looked into it, their explain(format text) was 50MB, due to a list of\n~500 \"OR\" conditions, *each* of which was causing an index scan for each of\n~500 partitions, where only one index scan per partition was needed or useful,\nall the others being inconsistent with the partition constraint. Thus the\nquery ultimately errors when it exceeds a resource limit (maybe no surprise\nwith 8500 index scans).\n\nHere, I was trying to create a test case reproducing that error to see if this\nresolves it, but so far hasn't failed. Tomas has a reproducer with a different\n(much simpler) plan, though.\n\nCREATE TABLE p (i int, j int) PARTITION BY RANGE(i);\n\\pset pager off\nSELECT format('CREATE TABLE p%s PARTITION OF p FOR VALUES FROM(%s)TO(%s)', i, 10*(i-1), 10*i) FROM generate_series(1,500)i;\n\\timing off\n\\set quiet\n\\set echo always\n\\gexec\n\\timing on\nINSERT INTO p SELECT i%5000, i%500 FROM generate_series(1,9999999)i;\nVACUUM ANALYZE p;\nCREATE INDEX ON p(i);\nCREATE INDEX ON p(j);\nSELECT format('explain analyze SELECT * FROM p WHERE %s', array_to_string(array_agg('i='||(i*10)::text),' OR ')) FROM generate_series(1,500)i;\n\n-- \nJustin", "msg_date": "Fri, 3 Jul 2020 19:45:03 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "avoid bitmapOR-ing indexes with scan condition inconsistent with\n partition constraint" }, { "msg_contents": "Added here:\nhttps://commitfest.postgresql.org/29/2644/\n\nAnd updated tests to pass following:\n|commit 
689696c7110f148ede8004aae50d7543d05b5587\n|Author: Tom Lane <tgl@sss.pgh.pa.us>\n|Date: Tue Jul 14 18:56:49 2020 -0400\n|\n| Fix bitmap AND/OR scans on the inside of a nestloop partition-wise join.", "msg_date": "Tue, 14 Jul 2020 19:17:50 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: avoid bitmapOR-ing indexes with scan condition inconsistent with\n partition constraint" }, { "msg_contents": "Rebased and updated for tests added in 13838740f.\n\n-- \nJustin Pryzby\nSystem Administrator\nTelsasoft\n+1-952-707-8581", "msg_date": "Mon, 3 Aug 2020 13:12:23 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: avoid bitmapOR-ing indexes with scan condition inconsistent with\n partition constraint" }, { "msg_contents": "Hi Justin,\n\nAttached is a minimal patch that is rebased and removes the\ndependency on Konstantin's earlier patch[1], making it enough to remove\nthe extraneous index scans as Justin brought up. Is this the direction we\nwant to head in?\n\nTagging Konstantin in the email to hear his input on his old patch.\nSince Justin's attached patch [1] does not include the work that was done\non the operator_predicate_proof() and as discussed here in [2], that\nwork is important to see real benefits? 
Just wanted to check before\nreviewing [1].\n\nRegards,\nSoumyadeep (VMware)\n\n[1] https://www.postgresql.org/message-id/attachment/112074/0001-Secondary-index-access-optimizations.patch\n[2] https://www.postgresql.org/message-id/5A006016.1010009%40postgrespro.ru", "msg_date": "Wed, 30 Sep 2020 16:52:02 -0700", "msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: avoid bitmapOR-ing indexes with scan condition inconsistent with\n partition constraint" }, { "msg_contents": "On Wed, Sep 30, 2020 at 04:52:02PM -0700, Soumyadeep Chakraborty wrote:\n> Hi Justin,\n> \n> Attached is a minimal patch that is rebased and removes the\n> dependency on Konstantin's earlier patch[1], making it enough to remove\n> the extraneous index scans as Justin brought up. Is this the direction we\n> want to head in?\n\nYes, thanks for doing that. I hadn't dug into it yet to figure out what to put\nwhere to separate the patches. It seems like my patch handles a different goal\nthan Konstantin's, but they both depend on having the constraints populated.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 13 Oct 2020 15:20:31 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: avoid bitmapOR-ing indexes with scan condition inconsistent with\n partition constraint" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nI think that work on improving operator_predicate_proof should really be done in separate patch.\r\nAnd this minimal patch is doing it's work well.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Wed, 11 Nov 2020 13:36:02 +0000", "msg_from": "Konstantin Knizhnik <knizhnik@garret.ru>", "msg_from_op": false, "msg_subject": "Re: avoid bitmapOR-ing indexes with scan condition inconsistent 
with\n partition constraint" }, { "msg_contents": "I started looking through this patch. I really quite dislike solving\nthis via a kluge in indxpath.c. There are multiple disadvantages\nto that:\n\n* It only helps for the very specific problem of redundant bitmap\nindex scans, whereas the problem of applying redundant qual checks\nin partition scans seems pretty general.\n\n* It's not unlikely that this will end up trying to make the same\nproof multiple times (and the lack of any way to turn that off,\nthrough constraint_exclusion or some other knob, isn't too cool).\n\n* It does nothing to fix rowcount estimates in the light of the\nknowledge that some of the restriction clauses are no-ops. Now,\nif we have up-to-date stats we'll probably manage to come out with\nan appropriate 0 or 1 selectivity anyway, but we might not have those.\nIn any case, spending significant effort to estimate a selectivity\nwhen some other part of the code has taken the trouble to *prove* the\nclause true or false seems very undesirable.\n\n* I'm not even convinced that the logic is correct, specifically that\nit's okay to just \"continue\" if we refute the OR clause. That seems\nlikely to break generate_bitmap_or_paths' surrounding loop logic about\n\"We must be able to match at least one index to each of the arms of\nthe OR\". At least, if that still works it requires more than zero\ncommentary about why.\n\n\nSo I like much better the idea of Konstantin's old patch, that we modify\nthe rel's baserestrictinfo list by removing quals that we can prove\ntrue. We could extend that to solve the bitmapscan problem by removing\nOR arms that we can prove false. 
So I started to review that patch more\ncarefully, and after awhile realized that it has a really fundamental\nproblem: it is trying to use CHECK predicates to prove WHERE clauses.\nBut we don't know that CHECK predicates are true, only that they are\nnot-false, and there is no proof mode in predtest.c that will allow\nproving some clauses true based only on other ones being not-false.\n\nWe can salvage something by restricting the input quals to be only\npartition quals, since those are built to be guaranteed-true-or-false;\nwe can assume they don't yield NULL. There's a hole in that for\nhashing, as I noted elsewhere, but we'll fail to prove anything anyway\nfrom a satisfies_hash_partition() qual. (In principle we could also use\nattnotnull quals, which also have that property. But I'm dubious that\nthat will help often enough to be worth the extra cycles for predtest.c\nto process them.)\n\nSo after a bit of coding I had the attached. This follows Konstantin's\noriginal patch in letting relation_excluded_by_constraints() change\nthe baserestrictinfo list. I read the comments in the older thread\nabout people not liking that, and I can see the point. But I'm not\nconvinced that the later iterations of the patch were an improvement,\nbecause (a) the call locations for\nremove_restrictions_implied_by_constraints() seemed pretty random, and\n(b) it seems necessary to have relation_excluded_by_constraints() and\nremove_restrictions_implied_by_constraints() pretty much in bed with\neach other if we don't want to duplicate constraint-fetching work.\nNote the comment on get_relation_constraints() that it's called at\nmost once per relation; that's not something I particularly desire\nto give up, because a relcache open isn't terribly cheap. Also\n(c) I think it's important that there be a way to suppress this\noverhead when it's not useful. 
In the patch as attached, turning off\nconstraint_exclusion does that since relation_excluded_by_constraints()\nfalls out before getting to the new code. If we make\nremove_restrictions_implied_by_constraints() independent then it\nwill need some possibly-quite-duplicative logic to check\nconstraint_exclusion. (Of course, if we'd rather condition this\non some other GUC then that argument falls down. But I think we\nneed something.) So, I'm not dead set on this code structure, but\nI haven't seen one I like better.\n\nAnyway, this seems to work, and if the regression test changes are\nany guide then it may fire often enough in the real world to be useful.\nNonetheless, I'm concerned about performance, because predtest.c is a\npretty expensive thing and there will be a lot of cases where the work\nis useless. I did a quick check using pgbench's option to partition\nthe tables, and observed that the -S (select only) test case seemed to\nget about 2.5% slower with the patch than without. That's not far\noutside the noise floor, so maybe it's not real, but if it is real then\nit seems pretty disastrous. Perhaps we could avoid that problem by\nremoving the \"predicate_implied_by\" cases and only trying the\n\"predicate_refuted_by\" case, so that no significant time is added\nunless you've got an OR restriction clause on a partitioned table.\nThat seems like it'd lose a lot of the benefit though :-(.\n\nSo I'm not sure where to go from here. Thoughts? Anyone else\ncare to run some performance tests?\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 12 Nov 2020 15:14:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: avoid bitmapOR-ing indexes with scan condition inconsistent with\n partition constraint" }, { "msg_contents": "Hi,\n\nOn Fri, Nov 13, 2020 at 5:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I started looking through this patch. I really quite dislike solving\n> this via a kluge in indxpath.c. 
There are multiple disadvantages\n> to that:\n>\n> * It only helps for the very specific problem of redundant bitmap\n> index scans, whereas the problem of applying redundant qual checks\n> in partition scans seems pretty general.\n>\n> * It's not unlikely that this will end up trying to make the same\n> proof multiple times (and the lack of any way to turn that off,\n> through constraint_exclusion or some other knob, isn't too cool).\n>\n> * It does nothing to fix rowcount estimates in the light of the\n> knowledge that some of the restriction clauses are no-ops. Now,\n> if we have up-to-date stats we'll probably manage to come out with\n> an appropriate 0 or 1 selectivity anyway, but we might not have those.\n> In any case, spending significant effort to estimate a selectivity\n> when some other part of the code has taken the trouble to *prove* the\n> clause true or false seems very undesirable.\n>\n> * I'm not even convinced that the logic is correct, specifically that\n> it's okay to just \"continue\" if we refute the OR clause. That seems\n> likely to break generate_bitmap_or_paths' surrounding loop logic about\n> \"We must be able to match at least one index to each of the arms of\n> the OR\". At least, if that still works it requires more than zero\n> commentary about why.\n>\n>\n> So I like much better the idea of Konstantin's old patch, that we modify\n> the rel's baserestrictinfo list by removing quals that we can prove\n> true. We could extend that to solve the bitmapscan problem by removing\n> OR arms that we can prove false. 
So I started to review that patch more\n> carefully, and after awhile realized that it has a really fundamental\n> problem: it is trying to use CHECK predicates to prove WHERE clauses.\n> But we don't know that CHECK predicates are true, only that they are\n> not-false, and there is no proof mode in predtest.c that will allow\n> proving some clauses true based only on other ones being not-false.\n>\n> We can salvage something by restricting the input quals to be only\n> partition quals, since those are built to be guaranteed-true-or-false;\n> we can assume they don't yield NULL. There's a hole in that for\n> hashing, as I noted elsewhere, but we'll fail to prove anything anyway\n> from a satisfies_hash_partition() qual. (In principle we could also use\n> attnotnull quals, which also have that property. But I'm dubious that\n> that will help often enough to be worth the extra cycles for predtest.c\n> to process them.)\n>\n> So after a bit of coding I had the attached. This follows Konstantin's\n> original patch in letting relation_excluded_by_constraints() change\n> the baserestrictinfo list. I read the comments in the older thread\n> about people not liking that, and I can see the point. But I'm not\n> convinced that the later iterations of the patch were an improvement,\n> because (a) the call locations for\n> remove_restrictions_implied_by_constraints() seemed pretty random, and\n> (b) it seems necessary to have relation_excluded_by_constraints() and\n> remove_restrictions_implied_by_constraints() pretty much in bed with\n> each other if we don't want to duplicate constraint-fetching work.\n> Note the comment on get_relation_constraints() that it's called at\n> most once per relation; that's not something I particularly desire\n> to give up, because a relcache open isn't terribly cheap. Also\n> (c) I think it's important that there be a way to suppress this\n> overhead when it's not useful. 
In the patch as attached, turning off\n> constraint_exclusion does that since relation_excluded_by_constraints()\n> falls out before getting to the new code. If we make\n> remove_restrictions_implied_by_constraints() independent then it\n> will need some possibly-quite-duplicative logic to check\n> constraint_exclusion. (Of course, if we'd rather condition this\n> on some other GUC then that argument falls down. But I think we\n> need something.) So, I'm not dead set on this code structure, but\n> I haven't seen one I like better.\n>\n> Anyway, this seems to work, and if the regression test changes are\n> any guide then it may fire often enough in the real world to be useful.\n> Nonetheless, I'm concerned about performance, because predtest.c is a\n> pretty expensive thing and there will be a lot of cases where the work\n> is useless. I did a quick check using pgbench's option to partition\n> the tables, and observed that the -S (select only) test case seemed to\n> get about 2.5% slower with the patch than without. That's not far\n> outside the noise floor, so maybe it's not real, but if it is real then\n> it seems pretty disastrous. Perhaps we could avoid that problem by\n> removing the \"predicate_implied_by\" cases and only trying the\n> \"predicate_refuted_by\" case, so that no significant time is added\n> unless you've got an OR restriction clause on a partitioned table.\n> That seems like it'd lose a lot of the benefit though :-(.\n>\n> So I'm not sure where to go from here. Thoughts? Anyone else\n> care to run some performance tests?\n\nStatus update for a commitfest entry.\n\nReading through the discussion, several patches have been proposed and\nit has been inactive for almost 2 months. Does anyone listed as the\nauthor plan to work on this patch? It looks like we're waiting for\nsome reviews on the patch including from the performance perspective\nbut this patch entry has been set to \"Waiting on Author\" since\n2021-01-12. 
If no one works on this and it's really waiting on the\nauthor, I'm going to set it to \"Returned with Feedback\", barring\nobjections.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 1 Feb 2021 12:09:12 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: avoid bitmapOR-ing indexes with scan condition inconsistent with\n partition constraint" }, { "msg_contents": "On Mon, Feb 1, 2021 at 12:09 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi,\n>\n> On Fri, Nov 13, 2020 at 5:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > I started looking through this patch. I really quite dislike solving\n> > this via a kluge in indxpath.c. There are multiple disadvantages\n> > to that:\n> >\n> > * It only helps for the very specific problem of redundant bitmap\n> > index scans, whereas the problem of applying redundant qual checks\n> > in partition scans seems pretty general.\n> >\n> > * It's not unlikely that this will end up trying to make the same\n> > proof multiple times (and the lack of any way to turn that off,\n> > through constraint_exclusion or some other knob, isn't too cool).\n> >\n> > * It does nothing to fix rowcount estimates in the light of the\n> > knowledge that some of the restriction clauses are no-ops. Now,\n> > if we have up-to-date stats we'll probably manage to come out with\n> > an appropriate 0 or 1 selectivity anyway, but we might not have those.\n> > In any case, spending significant effort to estimate a selectivity\n> > when some other part of the code has taken the trouble to *prove* the\n> > clause true or false seems very undesirable.\n> >\n> > * I'm not even convinced that the logic is correct, specifically that\n> > it's okay to just \"continue\" if we refute the OR clause. 
That seems\n> > likely to break generate_bitmap_or_paths' surrounding loop logic about\n> > \"We must be able to match at least one index to each of the arms of\n> > the OR\". At least, if that still works it requires more than zero\n> > commentary about why.\n> >\n> >\n> > So I like much better the idea of Konstantin's old patch, that we modify\n> > the rel's baserestrictinfo list by removing quals that we can prove\n> > true. We could extend that to solve the bitmapscan problem by removing\n> > OR arms that we can prove false. So I started to review that patch more\n> > carefully, and after awhile realized that it has a really fundamental\n> > problem: it is trying to use CHECK predicates to prove WHERE clauses.\n> > But we don't know that CHECK predicates are true, only that they are\n> > not-false, and there is no proof mode in predtest.c that will allow\n> > proving some clauses true based only on other ones being not-false.\n> >\n> > We can salvage something by restricting the input quals to be only\n> > partition quals, since those are built to be guaranteed-true-or-false;\n> > we can assume they don't yield NULL. There's a hole in that for\n> > hashing, as I noted elsewhere, but we'll fail to prove anything anyway\n> > from a satisfies_hash_partition() qual. (In principle we could also use\n> > attnotnull quals, which also have that property. But I'm dubious that\n> > that will help often enough to be worth the extra cycles for predtest.c\n> > to process them.)\n> >\n> > So after a bit of coding I had the attached. This follows Konstantin's\n> > original patch in letting relation_excluded_by_constraints() change\n> > the baserestrictinfo list. I read the comments in the older thread\n> > about people not liking that, and I can see the point. 
But I'm not\n> > convinced that the later iterations of the patch were an improvement,\n> > because (a) the call locations for\n> > remove_restrictions_implied_by_constraints() seemed pretty random, and\n> > (b) it seems necessary to have relation_excluded_by_constraints() and\n> > remove_restrictions_implied_by_constraints() pretty much in bed with\n> > each other if we don't want to duplicate constraint-fetching work.\n> > Note the comment on get_relation_constraints() that it's called at\n> > most once per relation; that's not something I particularly desire\n> > to give up, because a relcache open isn't terribly cheap. Also\n> > (c) I think it's important that there be a way to suppress this\n> > overhead when it's not useful. In the patch as attached, turning off\n> > constraint_exclusion does that since relation_excluded_by_constraints()\n> > falls out before getting to the new code. If we make\n> > remove_restrictions_implied_by_constraints() independent then it\n> > will need some possibly-quite-duplicative logic to check\n> > constraint_exclusion. (Of course, if we'd rather condition this\n> > on some other GUC then that argument falls down. But I think we\n> > need something.) So, I'm not dead set on this code structure, but\n> > I haven't seen one I like better.\n> >\n> > Anyway, this seems to work, and if the regression test changes are\n> > any guide then it may fire often enough in the real world to be useful.\n> > Nonetheless, I'm concerned about performance, because predtest.c is a\n> > pretty expensive thing and there will be a lot of cases where the work\n> > is useless. I did a quick check using pgbench's option to partition\n> > the tables, and observed that the -S (select only) test case seemed to\n> > get about 2.5% slower with the patch than without. That's not far\n> > outside the noise floor, so maybe it's not real, but if it is real then\n> > it seems pretty disastrous. 
Perhaps we could avoid that problem by\n> > removing the \"predicate_implied_by\" cases and only trying the\n> > \"predicate_refuted_by\" case, so that no significant time is added\n> > unless you've got an OR restriction clause on a partitioned table.\n> > That seems like it'd lose a lot of the benefit though :-(.\n> >\n> > So I'm not sure where to go from here. Thoughts? Anyone else\n> > care to run some performance tests?\n>\n> Status update for a commitfest entry.\n>\n> Reading through the discussion, several patches have been proposed and\n> it has been inactive for almost 2 months. Does anyone listed as the\n> author plan to work on this patch? It looks like we're waiting for\n> some reviews on the patch including from the performance perspective\n> but this patch entry has been set to \"Waiting on Author\" since\n> 2021-01-12. If no one works on this and it's really waiting on the\n> author, I'm going to set it to \"Returned with Feedback\", barring\n> objections.\n\nI've moved this patch to \"Returned with Feedback\". Depending on\ntiming, this may be reversable, so let us know if there are\nextenuating circumstances. In any case, you are welcome to address\nthe feedback you have received, and resubmit the patch to the next CommitFest.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 1 Feb 2021 22:39:35 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: avoid bitmapOR-ing indexes with scan condition inconsistent with\n partition constraint" } ]
[ { "msg_contents": "In <1116564.1593813043@sss.pgh.pa.us> I wrote:\n> I wonder whether someday we ought to invent a new API that's more\n> suited to postgres_fdw's needs than EXPLAIN is. It's not like the\n> remote planner doesn't know the number we want; it just fails to\n> include it in EXPLAIN.\n\nI've been thinking about this a little more, and I'd like to get some\nideas down on electrons before they vanish.\n\nThe current method for postgres_fdw to obtain remote estimates is to\nissue an EXPLAIN command to the remote server and then decipher the\nresult. This has just one big advantage, which is that it works\nagainst existing, even very old, remote PG versions. In every other\nway it's pretty awful: it involves a lot of cycles on the far end\nto create output details we don't really care about, it requires a\nfair amount of logic to parse that output, and we can't get some\ndetails that we *do* care about (such as the total size of the foreign\ntable, as per the other discussion).\n\nWe can do better. I don't propose removing the existing logic, because\nbeing able to work against old remote PG versions seems pretty useful.\nBut we could probe at connection start for whether the remote server\nhas support for a better way, and then use that way if available.\n\nWhat should the better way look like? I suggest the following:\n\n* Rather than adding a core-server feature, the remote-end part of the new\nAPI should be a function installed by an extension (either postgres_fdw\nitself, or a new extension \"postgres_fdw_remote\" or the like). One\nattraction of this approach is that it'd be conceivable to back-port the\nnew code into existing PG releases by updating the extension. Also\nthere'd be room for multiple versions of the support. 
The\nconnection-start probe could be of the form \"does this function exist\nin pg_proc?\".\n\n* I'm imagining the function being of the form\n\n function pg_catalog.postgres_fdw_support(query text) returns something\n\nwhere the input is still the text of a query we're considering issuing,\nand the output is some structure that contains the items of EXPLAIN-like\ndata we need, but not the items we don't. The implementation of the\nfunction would run the query through parse/plan, then pick out the\ndata we want and return that.\n\n* We could do a lot worse than to have the \"structure\" be JSON.\nThis'd allow structured, labeled data to be returned; it would not be\ntoo difficult to construct, even in PG server versions predating the\naddition of JSON logic to the core; and the receiving postgres_fdw\nextension could use the core's JSON logic to parse the data.\n\n* The contents of the structure need to be designed with forethought\nfor extensibility, but this doesn't seem hard if it's all a collection\nof labeled fields. We can just say that the recipient must ignore\nfields it doesn't recognize. Once a given field has been defined, we\ncan't change its contents, but we can introduce new fields as needed.\nNote that I would not be in favor of putting an overall version number\nwithin the structure; that's way too coarse-grained.\n\nI'm not planning to do anything about these ideas myself, at least\nnot in the short term. But perhaps somebody else would like to\nrun with them.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Jul 2020 23:08:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> In <1116564.1593813043@sss.pgh.pa.us> I wrote:\n> > I wonder whether someday we ought to invent a new API that's more\n> > suited to postgres_fdw's needs than EXPLAIN is. 
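As one way to picture the proposal above: the support function's reply could be a small JSON object, and the recipient would keep the fields it knows and ignore the rest, which is the extensibility rule being proposed. The function name comes from the proposal itself; the field names, the probe query, and the Python sketch below are assumptions for illustration only, not an existing API.

```python
import json

# What the remote support function might return for one shipped query.
# Unknown fields must be ignored by the recipient, so new fields can be
# added later without breaking older clients.
remote_reply = json.dumps({
    "startup_cost": 0.29,
    "total_cost": 8.31,
    "rows": 1,
    "width": 244,
    "some_future_field": "ignored by older clients",
})

KNOWN_FIELDS = ("startup_cost", "total_cost", "rows", "width")

def parse_support_reply(payload):
    obj = json.loads(payload)
    # Take only the fields we recognize; silently skip the rest.
    return {k: obj[k] for k in KNOWN_FIELDS if k in obj}

print(parse_support_reply(remote_reply))
# {'startup_cost': 0.29, 'total_cost': 8.31, 'rows': 1, 'width': 244}
```

At connection start the local side might probe with something like SELECT 1 FROM pg_proc WHERE proname = 'postgres_fdw_support' and fall back to the existing EXPLAIN-based path when the function is absent; again, a sketch of the idea rather than a settled design.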
It's not like the\n> > remote planner doesn't know the number we want; it just fails to\n> > include it in EXPLAIN.\n> \n> I've been thinking about this a little more, and I'd like to get some\n> ideas down on electrons before they vanish.\n> \n> The current method for postgres_fdw to obtain remote estimates is to\n> issue an EXPLAIN command to the remote server and then decipher the\n> result. This has just one big advantage, which is that it works\n> against existing, even very old, remote PG versions. In every other\n> way it's pretty awful: it involves a lot of cycles on the far end\n> to create output details we don't really care about, it requires a\n> fair amount of logic to parse that output, and we can't get some\n> details that we *do* care about (such as the total size of the foreign\n> table, as per the other discussion).\n> \n> We can do better. I don't propose removing the existing logic, because\n> being able to work against old remote PG versions seems pretty useful.\n> But we could probe at connection start for whether the remote server\n> has support for a better way, and then use that way if available.\n\nI agree we can, and should, try to do better here.\n\n> What should the better way look like? I suggest the following:\n> \n> * Rather than adding a core-server feature, the remote-end part of the new\n> API should be a function installed by an extension (either postgres_fdw\n> itself, or a new extension \"postgres_fdw_remote\" or the like). One\n> attraction of this approach is that it'd be conceivable to back-port the\n> new code into existing PG releases by updating the extension. Also\n> there'd be room for multiple versions of the support. 
The\n> connection-start probe could be of the form \"does this function exist\n> in pg_proc?\".\n> \n> * I'm imagining the function being of the form\n> \n> function pg_catalog.postgres_fdw_support(query text) returns something\n> \n> where the input is still the text of a query we're considering issuing,\n> and the output is some structure that contains the items of EXPLAIN-like\n> data we need, but not the items we don't. The implementation of the\n> function would run the query through parse/plan, then pick out the\n> data we want and return that.\n> \n> * We could do a lot worse than to have the \"structure\" be JSON.\n> This'd allow structured, labeled data to be returned; it would not be\n> too difficult to construct, even in PG server versions predating the\n> addition of JSON logic to the core; and the receiving postgres_fdw\n> extension could use the core's JSON logic to parse the data.\n\nI also tend to agree with using JSON for this.\n\n> * The contents of the structure need to be designed with forethought\n> for extensibility, but this doesn't seem hard if it's all a collection\n> of labeled fields. We can just say that the recipient must ignore\n> fields it doesn't recognize. Once a given field has been defined, we\n> can't change its contents, but we can introduce new fields as needed.\n> Note that I would not be in favor of putting an overall version number\n> within the structure; that's way too coarse-grained.\n\nThis also makes sense to me.\n\n> I'm not planning to do anything about these ideas myself, at least\n> not in the short term. 
But perhaps somebody else would like to\n> run with them.\n\nI'm trying to figure out why it makes more sense to use\n'postgres_fdw_support(query text)', which would still do parse/plan and\nreturn EXPLAIN-like data, rather than having:\n\nEXPLAIN (FORMAT JSON, FDW true) query ...\n\n(Or, perhaps better, individual boolean options for whatever stuff we\nwant to ask for, or to exclude if we don't want it, so that other tools\ncould use this...).\n\nThanks,\n\nStephen", "msg_date": "Sun, 5 Jul 2020 13:06:01 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> * Rather than adding a core-server feature, the remote-end part of the new\n>> API should be a function installed by an extension (either postgres_fdw\n>> itself, or a new extension \"postgres_fdw_remote\" or the like).\n\n> I'm trying to figure out why it makes more sense to use\n> 'postgres_fdw_support(query text)', which would still do parse/plan and\n> return EXPLAIN-like data, rather than having:\n\n> EXPLAIN (FORMAT JSON, FDW true) query ...\n\nI see a couple of reasons not to do it like that:\n\n1. This is specific to postgres_fdw. Some other extension might want some\nother data, and different versions of postgres_fdw might want different\ndata. So putting it into core seems like the wrong thing.\n\n2. Wedging this into EXPLAIN would be quite ugly, because (at least\nas I envision it) the output would have just about nothing to do with\nany existing EXPLAIN output.\n\n3. We surely would not back-patch a core change like this. OTOH, if\nthe added infrastructure is in an extension, somebody might want to\nback-patch that (even if unofficially). 
This argument falls to the\nground of course if we're forced to make any core changes to be able\nto get at the data we need; but I'm not sure that will be needed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 Jul 2020 13:36:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> * Rather than adding a core-server feature, the remote-end part of the new\n> >> API should be a function installed by an extension (either postgres_fdw\n> >> itself, or a new extension \"postgres_fdw_remote\" or the like).\n> \n> > I'm trying to figure out why it makes more sense to use\n> > 'postgres_fdw_support(query text)', which would still do parse/plan and\n> > return EXPLAIN-like data, rather than having:\n> \n> > EXPLAIN (FORMAT JSON, FDW true) query ...\n> \n> I see a couple of reasons not to do it like that:\n> \n> 1. This is specific to postgres_fdw. Some other extension might want some\n> other data, and different versions of postgres_fdw might want different\n> data. So putting it into core seems like the wrong thing.\n\nAnother extension or use-case might want exactly the same information\ntoo though. In a way, we'd be 'hiding' that information from other\npotential users unless they want to install their own extension, which\nis a pretty big leap. Are we sure this information wouldn't be at all\ninteresting to pgAdmin4 or explain.depesz.com?\n\n> 2. 
Wedging this into EXPLAIN would be quite ugly, because (at least\n> as I envision it) the output would have just about nothing to do with\n> any existing EXPLAIN output.\n\nThis is a better argument for not making it part of EXPLAIN, though I\ndon't really feel like I've got a decent idea of what you are suggesting\nthe output *would* look like, so it's difficult for me to agree (or\ndisagree) about this particular point.\n\n> 3. We surely would not back-patch a core change like this. OTOH, if\n> the added infrastructure is in an extension, somebody might want to\n> back-patch that (even if unofficially). This argument falls to the\n> ground of course if we're forced to make any core changes to be able\n> to get at the data we need; but I'm not sure that will be needed.\n\nSince postgres_fdw is part of core and core's release cycle, and the\npackagers manage the extensions from core in a way that they have to\nmatch up, this argument doesn't hold any weight with me. For this to be\na viable argument, we would need to segregate extensions from core and\ngive them their own release cycle and clear indication of which\nextension versions work with which PG major versions, etc. I'm actually\ngenerally in support of *that* idea- and with that, would agree with\nyour point 3 above, but that's not the reality of today.\n\nThanks,\n\nStephen", "msg_date": "Sun, 5 Jul 2020 13:48:17 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> 2. 
Wedging this into EXPLAIN would be quite ugly, because (at least\n>> as I envision it) the output would have just about nothing to do with\n>> any existing EXPLAIN output.\n\n> This is a better argument for not making it part of EXPLAIN, though I\n> don't really feel like I've got a decent idea of what you are suggesting\n> the output *would* look like, so it's difficult for me to agree (or\n> disagree) about this particular point.\n\nPer postgres_fdw's get_remote_estimate(), the only data we use right now\nis the startup_cost, total_cost, rows and width estimates from the\ntop-level Plan node. That's available immediately from the Plan tree,\nmeaning that basically *nothing* of the substantial display effort\nexpended by explain.c and ruleutils.c is of any value. So the level-zero\nimplementation of this would be to run the parser and planner, format\nthose four numbers into a JSON object (which would require little more\ninfrastructure than sprintf), and return that. Sure, we could make that\ninto some kind of early-exit path in explain.c, but I think it'd be a\npretty substantial wart, especially since it'd mean that none of the\nother EXPLAIN options are sensible in combination with this one.\n\nFurther down the road, we might want to rethink the whole idea of\ncompletely constructing a concrete Plan. We could get the data we need\nat the list-of-Paths stage. Even more interesting, we could (with very\nlittle more work) return data about multiple Paths, so that the client\ncould find out, for example, the costs of sorted and unsorted output\nwithout paying two network round trips to discover that. That'd\ndefinitely require changes in the core planner, since it has no API to\nstop at that point. And it's even less within the charter of EXPLAIN.\n\nI grant your point that there might be other users for this besides\npostgres_fdw, but that doesn't mean it must be a core feature.\n\n>> 3. We surely would not back-patch a core change like this. 
OTOH, if\n>> the added infrastructure is in an extension, somebody might want to\n>> back-patch that (even if unofficially).\n\n> Since postgres_fdw is part of core and core's release cycle, and the\n> packagers manage the extensions from core in a way that they have to\n> match up, this argument doesn't hold any weight with me.\n\nCertainly only v14 (or whenever) and later postgres_fdw would be able\nto *use* this data. The scenario I'm imagining is that somebody wants\nto be able to use that client against an older remote server, and is\nwilling to install some simple extension on the remote server to do so.\nPerhaps this scenario is not worth troubling over, but I don't think\nit's entirely far-fetched.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 Jul 2020 16:25:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> 2. Wedging this into EXPLAIN would be quite ugly, because (at least\n> >> as I envision it) the output would have just about nothing to do with\n> >> any existing EXPLAIN output.\n> \n> > This is a better argument for not making it part of EXPLAIN, though I\n> > don't really feel like I've got a decent idea of what you are suggesting\n> > the output *would* look like, so it's difficult for me to agree (or\n> > disagree) about this particular point.\n> \n> Per postgres_fdw's get_remote_estimate(), the only data we use right now\n> is the startup_cost, total_cost, rows and width estimates from the\n> top-level Plan node. That's available immediately from the Plan tree,\n> meaning that basically *nothing* of the substantial display effort\n> expended by explain.c and ruleutils.c is of any value. 
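For reference, the four numbers named in the passage quoted above are scraped today from the first row of plain EXPLAIN text output. A rough Python sketch of that deciphering step (the sample plan line and the regex are illustrative; the C code does something equivalent without a regex):

```python
import re

# First row of EXPLAIN text output for a shipped query.
line = "Index Scan using tenk1_unique1 on tenk1  (cost=0.29..8.31 rows=1 width=244)"

m = re.search(r"\(cost=(\d+\.\d+)\.\.(\d+\.\d+) rows=(\d+) width=(\d+)\)", line)
startup_cost, total_cost = float(m.group(1)), float(m.group(2))
rows, width = float(m.group(3)), int(m.group(4))

print(startup_cost, total_cost, rows, width)  # 0.29 8.31 1.0 244
```

Everything else the remote server spent cycles formatting is discarded, which is the waste the thread is pointing at.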
So the level-zero\n\nThe 'display effort' you're referring to, when using JSON format with\nexplain, is basically to format the results into JSON and return them-\nwhich is what you're suggesting this mode would do anyway, no..?\n\nIf the remote side 'table' is actually a view that's complicated then\nhaving a way to get just the top-level information (and excluding the\nrest) sounds like it'd be useful and perhaps excluding that other info\ndoesn't really fit into EXPLAIN's mandate, but that's also much less\ncommon.\n\n> implementation of this would be to run the parser and planner, format\n> those four numbers into a JSON object (which would require little more\n> infrastructure than sprintf), and return that. Sure, we could make that\n> into some kind of early-exit path in explain.c, but I think it'd be a\n> pretty substantial wart, especially since it'd mean that none of the\n> other EXPLAIN options are sensible in combination with this one.\n\nThat EXPLAIN has options that only make sense in combination with\ncertain other options isn't anything new- BUFFERS makes no sense without\nANALYZE, etc.\n\n> Further down the road, we might want to rethink the whole idea of\n> completely constructing a concrete Plan. We could get the data we need\n> at the list-of-Paths stage. Even more interesting, we could (with very\n> little more work) return data about multiple Paths, so that the client\n> could find out, for example, the costs of sorted and unsorted output\n> without paying two network round trips to discover that. That'd\n> definitely require changes in the core planner, since it has no API to\n> stop at that point. And it's even less within the charter of EXPLAIN.\n\nI have to admit that I'm not really sure how we could make it work, but\nhaving a way to get multiple paths returned by EXPLAIN would certainly\nbe interesting to a lot of users. 
Certainly it's easier to see how we\ncould get at that info in a postgres_fdw-specific function, and be able\nto understand how to deal with it there and what could be done, but once\nit's there I wonder if other tools might see that and possibly even\nbuild on it because it'd be the only way to get that kind of info, which\ncertainly wouldn't be ideal.\n\n> I grant your point that there might be other users for this besides\n> postgres_fdw, but that doesn't mean it must be a core feature.\n\nThat postgres_fdw is an extension is almost as much of a wart as\nanything being discussed here and suggesting that things added to\npostgres_fdw aren't 'core features' seems akin to ignoring the forest\nfor the trees- consider that, today, there isn't even an option to\ninstall only the core server from the PGDG repos (at least for Debian /\nUbuntu, not sure if the RPMs have caught up to that yet, but they\nprobably should). The 'postgresql-12' .deb includes all the extensions\nthat are part of the core git repo, because they're released and\nmaintained just the same as the core server and, from a practical\nperspective, to run a decent PG system you really should have them\ninstalled, so why bother having a separate package?\n\n> >> 3. We surely would not back-patch a core change like this. OTOH, if\n> >> the added infrastructure is in an extension, somebody might want to\n> >> back-patch that (even if unofficially).\n> \n> > Since postgres_fdw is part of core and core's release cycle, and the\n> > packagers manage the extensions from core in a way that they have to\n> > match up, this argument doesn't hold any weight with me.\n> \n> Certainly only v14 (or whenever) and later postgres_fdw would be able\n> to *use* this data. 
The scenario I'm imagining is that somebody wants\n> to be able to use that client against an older remote server, and is\n> willing to install some simple extension on the remote server to do so.\n> Perhaps this scenario is not worth troubling over, but I don't think\n> it's entirely far-fetched.\n\nI definitely don't think that such an extension should be maintained\noutside of core, and I seriously doubt any of our packagers would be\nanxious to build an indepedent package for this to be usable in older\nservers. Sure, it's possible someone will care about this enough to\nspend the effort to try and build it for an older version and use it but\nI definitely don't think we should be considering that a serious design\ngoal or a reason to put this capability in a separate extension.\n\nThanks,\n\nStephen", "msg_date": "Mon, 6 Jul 2020 10:11:14 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> Per postgres_fdw's get_remote_estimate(), the only data we use right now\n>> is the startup_cost, total_cost, rows and width estimates from the\n>> top-level Plan node. That's available immediately from the Plan tree,\n>> meaning that basically *nothing* of the substantial display effort\n>> expended by explain.c and ruleutils.c is of any value. So the level-zero\n\n> The 'display effort' you're referring to, when using JSON format with\n> explain, is basically to format the results into JSON and return them-\n> which is what you're suggesting this mode would do anyway, no..?\n\nNot hardly. Spend some time studying ruleutils.c sometime ---\nreverse-compiling a plan is *expensive*. 
For instance, we have to\nlook up the names of all the operators used in the query quals,\ndecide what needs quoting, decide what needs parenthesization, etc.\nThere's also a fun little bit that assigns unique aliases to each\ntable appearing in the query, which from memory is at least O(N^2)\nand maybe worse. (Admittedly, shipped queries are usually not so\ncomplicated that N would be large.) And by the way, we're also\nstarting up the executor, even if you didn't say ANALYZE.\n\nA little bit of fooling with \"perf\" suggests that when explaining\na pretty simple bitmapscan query --- I used\n\tEXPLAIN SELECT * FROM tenk1 WHERE unique1 > 9995\nwhich ought to be somewhat representative of what postgres_fdw needs\n--- only about half of the runtime is spent within pg_plan_query, and\nthe other half is spent on explain.c + ruleutils.c formatting work.\nSo while getting rid of that overhead wouldn't be an earthshattering\nimprovement, I think it'd be worthwhile.\n\n>> Further down the road, we might want to rethink the whole idea of\n>> completely constructing a concrete Plan. We could get the data we need\n>> at the list-of-Paths stage. Even more interesting, we could (with very\n>> little more work) return data about multiple Paths, so that the client\n>> could find out, for example, the costs of sorted and unsorted output\n>> without paying two network round trips to discover that.\n\n> I have to admit that I'm not really sure how we could make it work, but\n> having a way to get multiple paths returned by EXPLAIN would certainly\n> be interesting to a lot of users. 
Certainly it's easier to see how we\n> could get at that info in a postgres_fdw-specific function, and be able\n> to understand how to deal with it there and what could be done, but once\n> it's there I wonder if other tools might see that and possibly even\n> build on it because it'd be the only way to get that kind of info, which\n> certainly wouldn't be ideal.\n\nYeah, thinking about it as a function that inspects partial planner\nresults, it might be useful for other purposes besides postgres_fdw.\nAs I said before, I don't think this necessarily has to be bundled as\npart of postgres_fdw. That still doesn't make it part of EXPLAIN.\n\n> That postgres_fdw is an extension is almost as much of a wart as\n> anything being discussed here and suggesting that things added to\n> postgres_fdw aren't 'core features' seems akin to ignoring the forest\n> for the trees-\n\nI think we just had this discussion in another thread. The fact that\npostgres_fdw is an extension is a feature, not a bug, because (a) it\nmeans that somebody could implement their own version if they wanted\nit to act differently; and (b) it keeps us honest about whether the\nAPIs needed by an FDW are accessible from outside core. I think moving\npostgres_fdw into core would be a large step backwards.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Jul 2020 11:05:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> Per postgres_fdw's get_remote_estimate(), the only data we use right now\n> >> is the startup_cost, total_cost, rows and width estimates from the\n> >> top-level Plan node. 
That's available immediately from the Plan tree,\n> >> meaning that basically *nothing* of the substantial display effort\n> >> expended by explain.c and ruleutils.c is of any value. So the level-zero\n> \n> > The 'display effort' you're referring to, when using JSON format with\n> > explain, is basically to format the results into JSON and return them-\n> > which is what you're suggesting this mode would do anyway, no..?\n> \n> Not hardly. Spend some time studying ruleutils.c sometime ---\n> reverse-compiling a plan is *expensive*. For instance, we have to\n> look up the names of all the operators used in the query quals,\n> decide what needs quoting, decide what needs parenthesization, etc.\n\nAh, alright, that makes more sense then.\n\n> There's also a fun little bit that assigns unique aliases to each\n> table appearing in the query, which from memory is at least O(N^2)\n> and maybe worse. (Admittedly, shipped queries are usually not so\n> complicated that N would be large.) And by the way, we're also\n> starting up the executor, even if you didn't say ANALYZE.\n> \n> A little bit of fooling with \"perf\" suggests that when explaining\n> a pretty simple bitmapscan query --- I used\n> \tEXPLAIN SELECT * FROM tenk1 WHERE unique1 > 9995\n> which ought to be somewhat representative of what postgres_fdw needs\n> --- only about half of the runtime is spent within pg_plan_query, and\n> the other half is spent on explain.c + ruleutils.c formatting work.\n> So while getting rid of that overhead wouldn't be an earthshattering\n> improvement, I think it'd be worthwhile.\n\nSure.\n\n> >> Further down the road, we might want to rethink the whole idea of\n> >> completely constructing a concrete Plan. We could get the data we need\n> >> at the list-of-Paths stage. 
Even more interesting, we could (with very\n> >> little more work) return data about multiple Paths, so that the client\n> >> could find out, for example, the costs of sorted and unsorted output\n> >> without paying two network round trips to discover that.\n> \n> > I have to admit that I'm not really sure how we could make it work, but\n> > having a way to get multiple paths returned by EXPLAIN would certainly\n> > be interesting to a lot of users. Certainly it's easier to see how we\n> > could get at that info in a postgres_fdw-specific function, and be able\n> > to understand how to deal with it there and what could be done, but once\n> > it's there I wonder if other tools might see that and possibly even\n> > build on it because it'd be the only way to get that kind of info, which\n> > certainly wouldn't be ideal.\n> \n> Yeah, thinking about it as a function that inspects partial planner\n> results, it might be useful for other purposes besides postgres_fdw.\n> As I said before, I don't think this necessarily has to be bundled as\n> part of postgres_fdw. That still doesn't make it part of EXPLAIN.\n\nProviding it as a function rather than through EXPLAIN does make a bit\nmore sense if we're going to skip things like the lookups you mention\nabove. I'm still inclined to have it be a part of core rather than\nhaving it as postgres_fdw though. I'm not completely against it being\npart of postgres_fdw... but I would think that would really be\nappropriate if it's actually using something in postgres_fdw, but if\neverything that it's doing is part of core and nothing related\nspecifically to the postgres FDW, then having it as part of core makes\nmore sense to me. 
Also, having it as part of core would make it more\nappropriate for other tools to look at and adding that kind of\ninspection capability for partial planner results could be very\ninteresting for tools like pgAdmin and such.\n\n> > That postgres_fdw is an extension is almost as much of a wart as\n> > anything being discussed here and suggesting that things added to\n> > postgres_fdw aren't 'core features' seems akin to ignoring the forest\n> > for the trees-\n> \n> I think we just had this discussion in another thread. The fact that\n> postgres_fdw is an extension is a feature, not a bug, because (a) it\n> means that somebody could implement their own version if they wanted\n> it to act differently; and (b) it keeps us honest about whether the\n> APIs needed by an FDW are accessible from outside core. I think moving\n> postgres_fdw into core would be a large step backwards.\n\nI'm not looking to change it today, as that ship has sailed, but while\nhaving FDWs as a general capability that can be implemented by\nextensions is certainly great and I'd love to see more of that (even\nbetter would be more of those that are well maintained and cared for by\nthis community of folks), requiring users to install an extension into\nevery database where they want to query another PG server from isn't a\nfeature.\n\nThanks,\n\nStephen", "msg_date": "Mon, 6 Jul 2020 11:28:28 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "On Mon, Jul 6, 2020 at 11:28:28AM -0400, Stephen Frost wrote:\n> > Yeah, thinking about it as a function that inspects partial planner\n> > results, it might be useful for other purposes besides postgres_fdw.\n> > As I said before, I don't think this necessarily has to be bundled as\n> > part of postgres_fdw. 
That still doesn't make it part of EXPLAIN.\n> \n> Providing it as a function rather than through EXPLAIN does make a bit\n> more sense if we're going to skip things like the lookups you mention\n> above. I'm still inclined to have it be a part of core rather than\n> having it as postgres_fdw though. I'm not completely against it being\n> part of postgres_fdw... but I would think that would really be\n> appropriate if it's actually using something in postgres_fdw, but if\n> everything that it's doing is part of core and nothing related\n> specifically to the postgres FDW, then having it as part of core makes\n> more sense to me. Also, having it as part of core would make it more\n> appropriate for other tools to look at and adding that kind of\n> inspection capability for partial planner results could be very\n> interesting for tools like pgAdmin and such.\n\nI agree the statistics extraction should probably be part of core. \nThere is the goal of FDWs returning data, and returning the data\nquickly. I think we can require all-new FDW servers to get improved\nperformance. 
Getting multiple costs for a query goes in that direction.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 13 Jul 2020 21:32:19 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "On 7/14/20 6:32 AM, Bruce Momjian wrote:\n> On Mon, Jul 6, 2020 at 11:28:28AM -0400, Stephen Frost wrote:\n>>> Yeah, thinking about it as a function that inspects partial planner\n>>> results, it might be useful for other purposes besides postgres_fdw.\n>>> As I said before, I don't think this necessarily has to be bundled as\n>>> part of postgres_fdw. That still doesn't make it part of EXPLAIN.\n>>\n>> Providing it as a function rather than through EXPLAIN does make a bit\n>> more sense if we're going to skip things like the lookups you mention\n>> above. I'm still inclined to have it be a part of core rather than\n>> having it as postgres_fdw though. I'm not completely against it being\n>> part of postgres_fdw... but I would think that would really be\n>> appropriate if it's actually using something in postgres_fdw, but if\n>> everything that it's doing is part of core and nothing related\n>> specifically to the postgres FDW, then having it as part of core makes\n>> more sense to me. Also, having it as part of core would make it more\n>> appropriate for other tools to look at and adding that kind of\n>> inspection capability for partial planner results could be very\n>> interesting for tools like pgAdmin and such.\n> \n> I agree the statistics extraction should probably be part of core.\n> There is the goal if FDWs returning data, and returning the data\n> quickly. I think we can require all-new FDW servers to get improved\n> performance. 
I am not even clear if we have a full understanding of the\n> performance characteristics of FDWs yet. I know Tomas did some research\n> on its DML behavior, but other than that, I haven't seen much.\n> \n> On a related note, I have wished to be able to see all the costs\n> associated with plans not chosen, and I think others would like that as\n> well. Getting multiple costs for a query goes in that direction.\n> \n\nDuring the implementation of sharding-related improvements I noticed \nthat if we use a lot of foreign partitions, we have bad plans because \nvacuum doesn't update statistics of foreign tables. This is done by the \nANALYZE command, but it is a very expensive operation for a foreign \ntable. The problem with statistics is demonstrated by the TAP test in \nthe first attached patch.\n\nI implemented some FDW + pg core machinery to reduce the weight of the \nproblem. The ANALYZE command on a foreign table executes a query on the \nforeign server that extracts the statistics tuple, serializes it into a \njson-formatted string and returns it to the caller. The caller \ndeserializes this string, generates statistics for the foreign table \nand updates it. The second patch is a proof-of-concept.\n\nThis patch speeds up the ANALYZE command and keeps the statistics of a \nforeign table relevant after an autovacuum operation. 
Its effectiveness depends on \nrelevance of statistics on the remote server, but still.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Sat, 29 Aug 2020 10:38:56 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "Greetings,\n\n* Andrey Lepikhov (a.lepikhov@postgrespro.ru) wrote:\n> During the implementation of sharding related improvements i noticed that if\n> we use a lot of foreign partitions, we have bad plans because of vacuum\n> don't update statistics of foreign tables.This is done by the ANALYZE\n> command, but it is very expensive operation for foreign table.\n> Problem with statistics demonstrates with TAP-test from the first patch in\n> attachment.\n\nYes, the way we handle ANALYZE today for FDWs is pretty terrible, since\nwe stream the entire table across to do it.\n\n> I implemented some FDW + pg core machinery to reduce weight of the problem.\n> The ANALYZE command on foreign table executes query on foreign server that\n> extracts statistics tuple, serializes it into json-formatted string and\n> returns to the caller. The caller deserializes this string, generates\n> statistics for this foreign table and update it. The second patch is a\n> proof-of-concept.\n\nIsn't this going to create a version dependency that we'll need to deal\nwith..? What if a newer major version has some kind of improved ANALYZE\ncommand, in terms of what it looks at or stores, and it's talking to an\nolder server?\n\nWhen I was considering the issue with ANALYZE and FDWs, I had been\nthinking it'd make sense to just change the query that's built in\ndeparseAnalyzeSql() to have a TABLESAMPLE clause, but otherwise run in\nmore-or-less the same manner as today. 
If we don't like the available\nTABLESAMPLE methods then we could add a new one that's explicitly the\n'right' sample for an ANALYZE call and use that when it's available on\nthe remote side. Not sure if it'd make any sense for ANALYZE itself to\nstart using that same TABLESAMPLE code, but maybe? Not that I think\nit'd be much of an issue if it's independent either, with appropriate\ncomments to note that we should probably try to make them match up for\nthe sake of FDWs.\n\n> This patch speedup analyze command and provides statistics relevance on a\n> foreign table after autovacuum operation. Its effectiveness depends on\n> relevance of statistics on the remote server, but still.\n\nIf we do decide to go down this route, wouldn't it mean we'd have to\nsolve the problem of what to do when it's a 9.6 foreign server being\nqueried from a v12 server and dealing with any difference in the\nstatistics structures of the two?\n\nSeems like we would... in which case I would say that we should pull\nthat bit out and make it general, and use it for pg_upgrade too, which\nwould benefit a great deal from having the ability to upgrade stats\nbetween major versions also. That's a much bigger piece to take on, of\ncourse, but seems to be what's implied with this approach for the FDW.\n\nThanks,\n\nStephen", "msg_date": "Sat, 29 Aug 2020 12:22:31 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> Isn't this going to create a version dependency that we'll need to deal\n> with..? 
What if a newer major version has some kind of improved ANALYZE\n> command, in terms of what it looks at or stores, and it's talking to an\n> older server?\n\nYeah, this proposal is a nonstarter unless it can deal with the remote\nserver being a different PG version with different stats.\n\nYears ago (when I was still at Salesforce, IIRC, so ~5 years) we had\nsome discussions about making it possible for pg_dump and/or pg_upgrade\nto propagate stats data forward to the new database. There is at least\none POC patch in the archives for doing that by dumping the stats data\nwrapped in a function call, where the target database's version of the\nfunction would be responsible for adapting the data if necessary, or\nmaybe just discarding it if it couldn't adapt. We seem to have lost\ninterest but it still seems like something worth pursuing. I'd guess\nthat if such infrastructure existed it could be helpful for this.\n\n> When I was considering the issue with ANALYZE and FDWs, I had been\n> thinking it'd make sense to just change the query that's built in\n> deparseAnalyzeSql() to have a TABLESAMPLE clause, but otherwise run in\n> more-or-less the same manner as today.\n\n+1, that seems like something worth doing in any case, since even if\nwe do get somewhere with the present idea it would only work for new\nremote servers. TABLESAMPLE would work pretty far back (9.5,\nlooks like).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 29 Aug 2020 12:50:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "\n\nOn 8/29/20 9:22 PM, Stephen Frost wrote:\n> \n>> I implemented some FDW + pg core machinery to reduce weight of the problem.\n>> The ANALYZE command on foreign table executes query on foreign server that\n>> extracts statistics tuple, serializes it into json-formatted string and\n>> returns to the caller. 
The caller deserializes this string, generates\n>> statistics for this foreign table and update it. The second patch is a\n>> proof-of-concept.\n> \n> Isn't this going to create a version dependency that we'll need to deal\n> with..? What if a newer major version has some kind of improved ANALYZE\n> command, in terms of what it looks at or stores, and it's talking to an\n> older server?\n> \n> When I was considering the issue with ANALYZE and FDWs, I had been\n> thinking it'd make sense to just change the query that's built in\n> deparseAnalyzeSql() to have a TABLESAMPLE clause, but otherwise run in\n> more-or-less the same manner as today. If we don't like the available\n> TABLESAMPLE methods then we could add a new one that's explicitly the\n> 'right' sample for an ANALYZE call and use that when it's available on\n> the remote side. Not sure if it'd make any sense for ANALYZE itself to\n> start using that same TABLESAMPLE code, but maybe? Not that I think\n> it'd be much of an issue if it's independent either, with appropriate\n> comments to note that we should probably try to make them match up for\n> the sake of FDWs.\nThis approach does not contradict your idea. This is a lightweight \nopportunity to reduce the cost of analysis if we have a set of servers \nwith actual versions of system catalog and fdw.\n> \n>> This patch speedup analyze command and provides statistics relevance on a\n>> foreign table after autovacuum operation. Its effectiveness depends on\n>> relevance of statistics on the remote server, but still.\n> \n> If we do decide to go down this route, wouldn't it mean we'd have to\n> solve the problem of what to do when it's a 9.6 foreign server being\n> queried from a v12 server and dealing with any difference in the\n> statistics structures of the two?\n> \n> Seems like we would... 
in which case I would say that we should pull\n> that bit out and make it general, and use it for pg_upgrade too, which\n> would benefit a great deal from having the ability to upgrade stats\n> between major versions also. That's a much bigger piece to take on, of\n> course, but seems to be what's implied with this approach for the FDW.\n> \n\nThank you for this use case.\n\nWe can add a \"version\" field to the statistics string (btree uses \nversioning too). As you can see, in this patch we are only trying to get \nstatistics. If for some reason this does not work out, then we take the \ndifficult route.\n\nMoreover, I believe this strategy should only work if we analyze a \nrelation implicitly. If the user executes the analysis explicitly with \nthe command \"ANALYZE <relname>\", we need to perform a fair analysis of \nthe table.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Sat, 29 Aug 2020 22:00:18 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "On 8/29/20 9:50 PM, Tom Lane wrote:\n> Years ago (when I was still at Salesforce, IIRC, so ~5 years) we had\n> some discussions about making it possible for pg_dump and/or pg_upgrade\n> to propagate stats data forward to the new database. There is at least\n> one POC patch in the archives for doing that by dumping the stats data\n> wrapped in a function call, where the target database's version of the\n> function would be responsible for adapting the data if necessary, or\n> maybe just discarding it if it couldn't adapt. We seem to have lost\n> interest but it still seems like something worth pursuing. 
I'd guess\n> that if such infrastructure existed it could be helpful for this.\n\nThanks for this helpful feedback.\n\nI found several threads related to the problem [1-3].\nI agree that this task requires implementing an API for \nserialization/deserialization of statistics:\npg_load_relation_statistics(json_string text);\npg_get_relation_statistics(relname text);\nWe can use a version number for resolving conflicts with different \nstatistics implementations.\nThe \"Load\" function will validate the values[] anyarray while deserializing \nthe input json string to the datatype of the relation column.\n\nMaybe I haven't grasped all the problems of this task yet?\n\n1. https://www.postgresql.org/message-id/flat/724322880.K8vzik8zPz%40abook\n2. \nhttps://www.postgresql.org/message-id/flat/CAAZKuFaWdLkK8eozSAooZBets9y_mfo2HS6urPAKXEPbd-JLCA%40mail.gmail.com\n3. \nhttps://www.postgresql.org/message-id/flat/GNELIHDDFBOCMGBFGEFOOEOPCBAA.chriskl%40familyhealth.com.au\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Mon, 31 Aug 2020 15:06:09 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "On Mon, Aug 31, 2020 at 3:36 PM Andrey V. Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n>\n> Thanks for this helpful feedback.\n>\n> I found several threads related to the problem [1-3].\n> I agreed that this task needs to implement an API for\n> serialization/deserialization of statistics:\n> pg_load_relation_statistics(json_string text);\n> pg_get_relation_statistics(relname text);\n> We can use a version number for resolving conflicts with different\n> statistics implementations.\n> \"Load\" function will validate the values[] anyarray while deserializing\n> the input json string to the datatype of the relation column.\n>\n\nThis is a valuable feature. 
Analysing a foreign table by fetching rows\nfrom the foreign server isn't very efficient. In fact the current FDW\nAPI for doing that forces that inefficiency by requiring the FDW to\nreturn a sample of rows that will be analysed by the core. That's why\nI see that your patch introduces a new API to get foreign rel stat. I\ndon't think there's any point in maintaining these two APIs just for\nanalysing a table. Instead we should have only one FDW API which will do\nwhatever it wants and return statistics that can be understood by the\ncore and let core install it in the catalogs. I believe that's doable.\n\nIn case of PostgreSQL it could get the stats available as is from the\nforeign server, convert them into a form that the core understands and\nreturn them. The patch introduces a new function postgres_fdw_stat() which\nwill be available only from version 14 onwards. Can we use\nrow_to_json(), which is available in all the supported versions,\ninstead?\n\nIn case of some other foreign server, an FDW will be responsible for\nreturning statistics in a form that the core will understand. It may\nfetch rows from the foreign server or be a bit smart and fetch the\nstatistics and convert them.\n\nThis also means that FDWs will have to deal with the statistics format\nthat the core understands and thus will need changes in their code\nwith every version in the worst case. But AFAIR, PostgreSQL supports\ndifferent forms of statistics so the problem may not remain that\nsevere if FDWs and core agree on some bare minimum format that the\ncore supports for long.\n\nI think the patch has some other problems: it works only for\nregular tables on the foreign server, but a foreign table can point\nto any relation, like a materialized view, partitioned table or a\nforeign table on the foreign server, all of which have statistics\nassociated with them. 
I didn't look closely but it does not consider\nthat the foreign table may not have all the columns from the relation\non the foreign server or may have different names. But I think those\nproblems are kind of secondary. We have to agree on the design first.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 31 Aug 2020 18:49:21 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "On Sat, Aug 29, 2020 at 12:50:59PM -0400, Tom Lane wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > Isn't this going to create a version dependency that we'll need to deal\n> > with..? What if a newer major version has some kind of improved ANALYZE\n> > command, in terms of what it looks at or stores, and it's talking to an\n> > older server?\n> \n> Yeah, this proposal is a nonstarter unless it can deal with the remote\n> server being a different PG version with different stats.\n> \n> Years ago (when I was still at Salesforce, IIRC, so ~5 years) we had\n> some discussions about making it possible for pg_dump and/or pg_upgrade\n> to propagate stats data forward to the new database. There is at least\n> one POC patch in the archives for doing that by dumping the stats data\n> wrapped in a function call, where the target database's version of the\n> function would be responsible for adapting the data if necessary, or\n> maybe just discarding it if it couldn't adapt. We seem to have lost\n> interest but it still seems like something worth pursuing. 
I'd guess\n> that if such infrastructure existed it could be helpful for this.\n\nI don't think there was enough value to do statistics migration just for\npg_upgrade, but doing it for pg_upgrade and FDWs seems like it might\nhave enough demand to justify the required work and maintenance.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 31 Aug 2020 12:14:55 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Sat, Aug 29, 2020 at 12:50:59PM -0400, Tom Lane wrote:\n> > Stephen Frost <sfrost@snowman.net> writes:\n> > > Isn't this going to create a version dependency that we'll need to deal\n> > > with..? What if a newer major version has some kind of improved ANALYZE\n> > > command, in terms of what it looks at or stores, and it's talking to an\n> > > older server?\n> > \n> > Yeah, this proposal is a nonstarter unless it can deal with the remote\n> > server being a different PG version with different stats.\n> > \n> > Years ago (when I was still at Salesforce, IIRC, so ~5 years) we had\n> > some discussions about making it possible for pg_dump and/or pg_upgrade\n> > to propagate stats data forward to the new database. There is at least\n> > one POC patch in the archives for doing that by dumping the stats data\n> > wrapped in a function call, where the target database's version of the\n> > function would be responsible for adapting the data if necessary, or\n> > maybe just discarding it if it couldn't adapt. We seem to have lost\n> > interest but it still seems like something worth pursuing. 
I'd guess\n> > that if such infrastructure existed it could be helpful for this.\n> \n> I don't think there was enough value to do statistics migration just for\n> pg_upgrade, but doing it for pg_upgrade and FDWs seems like it might\n> have enough demand to justify the required work and maintenance.\n\nNot sure that it really matters much, but I disagree with the assessment\nthat there wasn't enough value to do it for pg_upgrade; I feel that it\njust hasn't been something that's had enough people interested in\nworking on it, which isn't the same thing.\n\nIf a good patch showed up tomorrow, with someone willing to spend time\non it, I definitely think it'd be something we should include even if\nit's just for pg_upgrade. A solution that works for both pg_upgrade and\nthe postgres FDW would be even better, of course.\n\nThanks,\n\nStephen", "msg_date": "Mon, 31 Aug 2020 12:19:52 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "On Mon, Aug 31, 2020 at 12:19:52PM -0400, Stephen Frost wrote:\n> * Bruce Momjian (bruce@momjian.us) wrote:\n> > I don't think there was enough value to do statistics migration just for\n> > pg_upgrade, but doing it for pg_upgrade and FDWs seems like it might\n> > have enough demand to justify the required work and maintenance.\n> \n> Not sure that it really matters much, but I disagree with the assessment\n> that there wasn't enough value to do it for pg_upgrade; I feel that it\n> just hasn't been something that's had enough people interested in\n> working on it, which isn't the same thing.\n\nI am not sure what point you are trying to make, but if it had enough\nvalue, wouldn't people work on it, or are you saying that it had enough\nvalue, but people didn't realize it, so didn't work on it? I guess I\ncan see that. 
For me, it was the maintenance burden that always scared\nme from getting involved since it would be the rare case where\npg_upgrade would have to be modified for perhaps every major release.\n\n> If a good patch showed up tomorrow, with someone willing to spend time\n> on it, I definitely think it'd be something we should include even if\n> it's just for pg_upgrade. A solution that works for both pg_upgrade and\n> the postgres FDW would be even better, of course.\n\nYep, see above. The problem isn't mostly the initial patch, but someone\nwho is going to work on it and test it for every major release,\npotentially forever. Frankly, this is a pg_dump feature, rather than\nsomething pg_upgrade should be doing, because not having to run ANALYZE\nafter restoring a dump is also a needed feature.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 31 Aug 2020 12:47:25 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Mon, Aug 31, 2020 at 12:19:52PM -0400, Stephen Frost wrote:\n> > * Bruce Momjian (bruce@momjian.us) wrote:\n> > > I don't think there was enough value to do statistics migration just for\n> > > pg_upgrade, but doing it for pg_upgrade and FDWs seems like it might\n> > > have enough demand to justify the required work and maintenance.\n> > \n> > Not sure that it really matters much, but I disagree with the assessment\n> > that there wasn't enough value to do it for pg_upgrade; I feel that it\n> > just hasn't been something that's had enough people interested in\n> > working on it, which isn't the same thing.\n> \n> I am not sure what point you are trying to make, but if it had enough\n> value, wouldn't people work on it, or are you saying 
that it had enough\n> value, but people didn't realize it, so didn't work on it? I guess I\n> can see that. For me, it was the maintenance burden that always scared\n> me from getting involved since it would be the rare case where\n> pg_upgrade would have to be modified for perhaps every major release.\n\nThe point I was making was that it has value and people did realize it\nbut there's only so many resources to go around when it comes to hacking\non PG and therefore it simply hasn't been done yet.\n\nThere's a big difference between \"yes, we all agree that would be good\nto have, but no one has had time to work on it\" and \"we don't think this\nis worth having because of the maintenance work it'd require.\" The\nlatter shuts down anyone thinking of working on it, which is why I said\nanything.\n\n> > If a good patch showed up tomorrow, with someone willing to spend time\n> > on it, I definitely think it'd be something we should include even if\n> > it's just for pg_upgrade. A solution that works for both pg_upgrade and\n> > the postgres FDW would be even better, of course.\n> \n> Yep, see above. The problem isn't mostly the initial patch, but someone\n> who is going to work on it and test it for every major release,\n> potentially forever. Frankly, this is a pg_dump feature, rather than\n> something pg_upgrade should be doing, because not having to run ANALYZE\n> after restoring a dump is also a needed feature.\n\nI tend to agree with it being more of a pg_dump issue- but that also\nshows that your assessment above doesn't actually fit, because we\ndefinitely change pg_dump every release. 
Consider that if someone wants\nto add some new option to CREATE TABLE, which gets remembered in the\ncatalog, they have to make sure that pg_dump support is added for that.\nIf we added the statistics export/import to pg_dump, someone changing\nthose parts of the system would also be expected to update pg_dump to\nmanage those changes, including working with older versions of PG.\n\nThanks,\n\nStephen", "msg_date": "Mon, 31 Aug 2020 12:56:21 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "On Mon, Aug 31, 2020 at 12:56:21PM -0400, Stephen Frost wrote:\n> Greetings,\n> * Bruce Momjian (bruce@momjian.us) wrote:\n> The point I was making was that it has value and people did realize it\n> but there's only so many resources to go around when it comes to hacking\n> on PG and therefore it simply hasn't been done yet.\n> \n> There's a big difference between \"yes, we all agree that would be good\n> to have, but no one has had time to work on it\" and \"we don't think this\n> is worth having because of the maintenance work it'd require.\" The\n> latter shuts down anyone thinking of working on it, which is why I said\n> anything.\n\nI actually don't know which statement above is correct, because of the\n\"forever\" maintenance.\n\n> I tend to agree with it being more of a pg_dump issue- but that also\n> shows that your assessment above doesn't actually fit, because we\n> definitely change pg_dump every release. 
Consider that if someone wants\n> to add some new option to CREATE TABLE, which gets remembered in the\n> catalog, they have to make sure that pg_dump support is added for that.\n> If we added the statistics export/import to pg_dump, someone changing\n> those parts of the system would also be expected to update pg_dump to\n> manage those changes, including working with older versions of PG.\n\nYes, very true, but technically any change in any aspect of the\nstatistics system would require modification of the statistics dump,\nwhich usually isn't required for most feature changes.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 31 Aug 2020 13:08:17 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Mon, Aug 31, 2020 at 12:56:21PM -0400, Stephen Frost wrote:\n> > The point I was making was that it has value and people did realize it\n> > but there's only so many resources to go around when it comes to hacking\n> > on PG and therefore it simply hasn't been done yet.\n> > \n> > There's a big difference between \"yes, we all agree that would be good\n> > to have, but no one has had time to work on it\" and \"we don't think this\n> > is worth having because of the maintenance work it'd require.\" The\n> > latter shuts down anyone thinking of working on it, which is why I said\n> > anything.\n> \n> I actually don't know which statement above is correct, because of the\n> \"forever\" maintenance.\n\nI can understand not being sure which is correct, and we can all have\ndifferent points of view on it too, but that's a much softer stance than\nwhat I, at least, understood from your up-thread comment which was-\n\n> I don't think there was enough 
value to do statistics migration just\n> for pg_upgrade [...]\n\nThat statement came across to me as saying the latter statement above.\nPerhaps that wasn't what you intended it to, in which case it's good to\nhave the discussion and clarify it, for others who might be following\nthis thread and wondering if they should consider working on this area\nof the code or not.\n\n> Yes, very true, but technically any change in any aspect of the\n> statistics system would require modification of the statistics dump,\n> which usually isn't required for most feature changes.\n\nFeature work either requires changes to pg_dump, or not. I agree that\nfeatures which don't require pg_dump changes are definitionally less\nwork than features which do (presuming the rest of the feature is the\nsame in both cases) but that isn't a justification to not have pg_dump\nsupport in cases where it's expected- we just don't currently expect it\nfor statistics (which is a rather odd exception when you consider that\nnearly everything else that ends up in the catalog tables is included).\n\nFor my part, at least, I'd like to see us change that expectation, for a\nnumber of reasons:\n\n- pg_upgrade could leverage it and reduce downtime and/or confusion for\n users who are upgrading and dealing with poor statistics or no\n statistics for however long after the upgrade\n\n- Tables restored wouldn't require an ANALYZE to get reasonable queries\n against them\n\n- Debugging query plans would be a lot less guess-work or having to ask\n the user to export the statistics by hand from the catalog and then\n having to hand-hack them in to try and reproduce what's happening,\n particularly when re-running an analyze ends up giving different\n results, which isn't uncommon for edge cases\n\n- The postgres_fdw would be able to leverage this, as discussed earlier\n on in this thread\n\n- Logical replication could potentially leverage the existing stats and\n not require ANALYZE to be done after an import, 
leading to more\n predictable query plans on the replica\n\nI suspect there's probably other benefits than the ones above, but these\nall seem pretty valuable to me.\n\nThanks,\n\nStephen", "msg_date": "Mon, 31 Aug 2020 13:26:59 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> Feature work either requires changes to pg_dump, or not. I agree that\n> features which don't require pg_dump changes are definitionally less\n> work than features which do (presuming the rest of the feature is the\n> same in both cases) but that isn't a justification to not have pg_dump\n> support in cases where it's expected- we just don't currently expect it\n> for statistics (which is a rather odd exception when you consider that\n> nearly everything else that ends up in the catalog tables is included).\n\n> For my part, at least, I'd like to see us change that expectation, for a\n> number of reasons:\n\nYeah. I think that originally we expected that the definition of the\nstats might change fast enough that porting them cross-version would be\nproblematic. Subsequent experience has shown that they don't actually\nchange any faster than any other aspect of the catalogs. 
So, while\nI do think we must have a plan for how to cope when/if the definition\nchanges, I don't buy Bruce's argument that it's going to require more\nmaintenance effort than any other part of the system does.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 31 Aug 2020 13:53:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "On Mon, Aug 31, 2020 at 01:26:59PM -0400, Stephen Frost wrote:\n> * Bruce Momjian (bruce@momjian.us) wrote:\n> > I actually don't know which statement above is correct, because of the\n> > \"forever\" maintenance.\n> \n> I can understand not being sure which is correct, and we can all have\n> different points of view on it too, but that's a much softer stance than\n> what I, at least, understood from your up-thread comment which was-\n> \n> > I don't think there was enough value to do statistics migration just\n> > for pg_upgrade [...]\n> \n> That statement came across to me as saying the latter statement above.\n> Perhaps that wasn't what you intended it to, in which case it's good to\n> have the discussion and clarify it, for others who might be following\n> this thread and wondering if they should consider working on this area\n> of the code or not.\n\nI concluded that based on the fact that pg_upgrade has been used for\nyears and there has been almost no work on statistics upgrades.\n\n> > Yes, very true, but technically any change in any aspect of the\n> > statistics system would require modification of the statistics dump,\n> > which usually isn't required for most feature changes.\n> \n> Feature work either requires changes to pg_dump, or not. 
I agree that\n> features which don't require pg_dump changes are definitionally less\n> work than features which do (presuming the rest of the feature is the\n> same in both cases) but that isn't a justification to not have pg_dump\n> support in cases where it's expected- we just don't currently expect it\n> for statistics (which is a rather odd exception when you consider that\n> nearly everything else that ends up in the catalog tables is included).\n\nAgreed, but the big difference is that you can change most SQL commands\neasily, e.g. system catalog changes, without any pg_dump changes, unless\nyou change the SQL API, while any statistics storage change would\npotentially need pg_dump adjustments. And once you start doing it, you\nhad better keep doing it for every major release or there will be major\ncomplaints.\n\n> For my part, at least, I'd like to see us change that expectation, for a\n> number of reasons:\n\nYes, there are certainly more uses now than we used to have.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 31 Aug 2020 17:45:05 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "On Mon, Aug 31, 2020 at 01:53:01PM -0400, Tom Lane wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > Feature work either requires changes to pg_dump, or not.
I agree that\n> > features which don't require pg_dump changes are definitionally less\n> > work than features which do (presuming the rest of the feature is the\n> > same in both cases) but that isn't a justification to not have pg_dump\n> > support in cases where it's expected- we just don't currently expect it\n> > for statistics (which is a rather odd exception when you consider that\n> > nearly everything else that ends up in the catalog tables is included).\n> \n> > For my part, at least, I'd like to see us change that expectation, for a\n> > number of reasons:\n> \n> Yeah. I think that originally we expected that the definition of the\n> stats might change fast enough that porting them cross-version would be\n> problematic. Subsequent experience has shown that they don't actually\n> change any faster than any other aspect of the catalogs. So, while\n> I do think we must have a plan for how to cope when/if the definition\n> changes, I don't buy Bruce's argument that it's going to require more\n> maintenance effort than any other part of the system does.\n\nWell, my point is that even bucket/calculation/data text representation\nchanges could affect dumping statistics, and that is kind of rare for\nother changes we make.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 31 Aug 2020 17:46:22 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "On Mon, Aug 31, 2020 at 05:46:22PM -0400, Bruce Momjian wrote:\n> On Mon, Aug 31, 2020 at 01:53:01PM -0400, Tom Lane wrote:\n> > Stephen Frost <sfrost@snowman.net> writes:\n> > > Feature work either requires changes to pg_dump, or not.
I agree that\n> > > features which don't require pg_dump changes are definitionally less\n> > > work than features which do (presuming the rest of the feature is the\n> > > same in both cases) but that isn't a justification to not have pg_dump\n> > > support in cases where it's expected- we just don't currently expect it\n> > > for statistics (which is a rather odd exception when you consider that\n> > > nearly everything else that ends up in the catalog tables is included).\n> > \n> > > For my part, at least, I'd like to see us change that expectation, for a\n> > > number of reasons:\n> > \n> > Yeah. I think that originally we expected that the definition of the\n> > stats might change fast enough that porting them cross-version would be\n> > problematic. Subsequent experience has shown that they don't actually\n> > change any faster than any other aspect of the catalogs. So, while\n> > I do think we must have a plan for how to cope when/if the definition\n> > changes, I don't buy Bruce's argument that it's going to require more\n> > maintenance effort than any other part of the system does.\n> \n> Well, my point is that even bucket/calculation/data text representation\n> changes could affect dumping statistics, and that is kind of rare for\n> other changes we make.\n\nAnd I have been hoping someone would prove me wrong all these years, but\nit hasn't happened yet. It is possible we have hit a tipping point\nwhere the work is worth it, and I hope that is the case.
I am just\nexplaining why I think it has not happened yet.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 31 Aug 2020 17:47:41 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "On 8/31/20 6:19 PM, Ashutosh Bapat wrote:\n> On Mon, Aug 31, 2020 at 3:36 PM Andrey V. Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>> I agreed that this task needs to implement an API for\n>> serialization/deserialization of statistics:\n>> pg_load_relation_statistics(json_string text);\n>> pg_get_relation_statistics(relname text);\n>> We can use a version number for resolving conflicts with different\n>> statistics implementations.\n>> \"Load\" function will validate the values[] anyarray while deserializing\n>> the input json string to the datatype of the relation column.\n> \n> This is a valuable feature. Analysing a foreign table by fetching rows\n> from the foreign server isn't very efficient. In fact the current FDW\n> API for doing that forces that inefficiency by requiring the FDW to\n> return a sample of rows that will be analysed by the core. That's why\n> I see that your patch introduces a new API to get foreign rel stat. I\n> don't think there's any point in maintaining these two APIs just for\n> ANALYSING a table. Instead we should have only one FDW API which will do\n> whatever it wants and return statistics that can be understood by the\n> core and let core install it in the catalogs. I believe that's doable.\nI think the same.\n> \n> In case of PostgreSQL it could get the stats available as is from the\n> foreign server, convert it into a form that the core understands and\n> returns. The patch introduces a new function postgres_fdw_stat() which\n> will be available only from version 14 onwards.
Can we use\n> row_to_json(), which is available in all the supported versions,\n> instead?\nI started from here. But we need to convert starelid, staop[], stacoll[]\nOIDs into a portable format. Also we need to explicitly specify the type\nof each values[] array. And no one guaranteed that anyarray values[]\ncan't contain an array of complex-type values, containing OIDs, that\ncan't be correctly converted to database objects on another server...\nThese considerations required me to add a new postgres_fdw_stat() routine\nthat can be moved into the core.\n> \n> In case of some other foreign server, an FDW will be responsible to\n> return statistics in a form that the core will understand. It may\n> fetch rows from the foreign server or be a bit smart and fetch the\n> statistics and convert.\nI don't think I fully understood your idea. Please explain in more\ndetail if possible.\n> \n> This also means that FDWs will have to deal with the statistics format\n> that the core understands and thus will need changes in their code\n> with every version in the worst case. But AFAIR, PostgreSQL supports\n> different forms of statistics so the problem may not remain that\n> severe if FDWs and core agree on some bare minimum format that the\n> core supports for long.\nI don't think the FDW needs to know anything about the internals of\nstatistics. It only needs to execute a query like\n\"SELECT extract_statistics(namespace.relation);\"\nand apply the text representation with a function call like this:\nstore_statistics(const char *stat);\nAll validation and pg_statistic update operations will be performed in\nthe core.\n> \n> I think the patch has some other problems like it works only for\n> regular tables on foreign server but a foreign table can be pointing\n> to any relation like a materialized view, partitioned table or a\n> foreign table on the foreign server all of which have statistics\n> associated with them.\nOk.
It was implemented for discussion and testing, and as a base for\ndevelopment.\n\n> I didn't look closely but it does not consider\n> that the foreign table may not have all the columns from the relation\n> on the foreign server or may have different names.\nHere we get the full statistics from the remote server and extract\nstatistics only for the columns included in the tuple descriptor of the\nforeign table.\n\n> But I think those\n> problems are kind of secondary. We have to agree on the design first.\n+1.\nI only want to point out the following. In previous threads statistics\nwere converted row-by-row. I suggest serializing all statistics tuples\nfor the relation into a single JSON string. At apply time we can filter\nout unneeded attributes.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Tue, 1 Sep 2020 09:47:46 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "On 8/31/20 6:19 PM, Ashutosh Bapat wrote:\n> On Mon, Aug 31, 2020 at 3:36 PM Andrey V. Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>>\n>> Thanks for this helpful feedback.\n> I think the patch has some other problems like it works only for\n> regular tables on foreign server but a foreign table can be pointing\n> to any relation like a materialized view, partitioned table or a\n> foreign table on the foreign server all of which have statistics\n> associated with them. I didn't look closely but it does not consider\n> that the foreign table may not have all the columns from the relation\n> on the foreign server or may have different names. But I think those\n> problems are kind of secondary. We have to agree on the design first.\n> \nIn accordance with the discussion, I made some changes in the patch:\n1. The statistics extraction routine was moved into the core.\n2. The serialized stats contain a 'version' field to indicate the format\nof the statistics received.\n3.
ANALYZE and VACUUM ANALYZE use this approach only in the case of\nimplicit analysis of the relation.\n\nI am currently keeping the limitation of using the approach for regular\nrelations only, because I haven't studied the specifics of other types\nof relations.\nBut I don't know any reason to keep this limit in the future.\n\nThe attached patch is very raw. I am publishing it for further\nsubstantive discussion.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Thu, 3 Sep 2020 10:14:41 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "On Thu, 3 Sep 2020 at 10:44, Andrey V. Lepikhov <a.lepikhov@postgrespro.ru>\nwrote:\n\n> On 8/31/20 6:19 PM, Ashutosh Bapat wrote:\n> > On Mon, Aug 31, 2020 at 3:36 PM Andrey V. Lepikhov\n> > <a.lepikhov@postgrespro.ru> wrote:\n> >>\n> >> Thanks for this helpful feedback.\n> > I think the patch has some other problems like it works only for\n> > regular tables on foreign server but a foreign table can be pointing\n> > to any relation like a materialized view, partitioned table or a\n> > foreign table on the foreign server all of which have statistics\n> > associated with them. I didn't look closely but it does not consider\n> > that the foreign table may not have all the columns from the relation\n> > on the foreign server or may have different names. But I think those\n> > problems are kind of secondary. We have to agree on the design first.\n> >\n> In accordance with discussion i made some changes in the patch:\n> 1. The extract statistic routine moved into the core.\n>\n\nThe bulk of the patch implements the statistics conversion to and from\nJSON format. I am still not sure whether we need all of that code here.\nCan we re-use the pg_stats view? That is converting some of the OIDs to\nnames.
I agree\nwith anyarray but if that's a problem here it's also a problem for pg_stats\nview, isn't it? If we can reduce the stats handling code to a minimum or\nuse it for some other purpose as well e.g. pg_stats enhancement, the code\nchanges required will be far less compared to the value that this patch\nprovides.\n\n-- \nBest Wishes,\nAshutosh", "msg_date": "Fri, 4 Sep 2020 18:53:37 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "On Thu, Sep 03, 2020 at 10:14:41AM +0500, Andrey V. Lepikhov wrote:\n>On 8/31/20 6:19 PM, Ashutosh Bapat wrote:\n>>On Mon, Aug 31, 2020 at 3:36 PM Andrey V. Lepikhov\n>><a.lepikhov@postgrespro.ru> wrote:\n>>>\n>>>Thanks for this helpful feedback.\n>>I think the patch has some other problems like it works only for\n>>regular tables on foreign server but a foreign table can be pointing\n>>to any relation like a materialized view, partitioned table or a\n>>foreign table on the foreign server all of which have statistics\n>>associated with them. I didn't look closely but it does not consider\n>>that the foreign table may not have all the columns from the relation\n>>on the foreign server or may have different names. But I think those\n>>problems are kind of secondary. We have to agree on the design first.\n>>\n>In accordance with discussion i made some changes in the patch:\n>1. The extract statistic routine moved into the core.\n>2. Serialized stat contains 'version' field to indicate format of\n>statistic received.\n>3. ANALYZE and VACUUM ANALYZE uses this approach only in the case of\n>implicit analysis of the relation.\n>\n>I am currently keeping limitation of using the approach for regular\n>relations only, because i haven't studied the specifics of another\n>types of relations.\n>But I don't know any reason to keep this limit in the future.\n>\n>The patch in attachment is very raw. I publish for further substantive\n>discussion.\n>\n\nThanks for working on this.
I briefly looked at the patch today, and I\nhave some comments/feedback:\n\n1) I wonder why deparseGetStatSql looks so different from e.g.\ndeparseAnalyzeSizeSql - no deparseStringLiteral on relname, no cast to\npg_catalog.regclass, function name not qualified with pg_catalog, ...\n\n\n2) I'm a bit annoyed by the amount of code added to analyze.c only to\nsupport output/input in JSON format. I'm no expert, but I don't recall\nexplain needing this much new stuff (OTOH it just produces json, it does\nnot need to read it). Maybe we also need to process wider range of data\ntypes here. But the code is almost perfectly undocumented :-(\n\n\n3) Why do we need to change vacuum_rel this way?\n\n\n4) I wonder if we actually want/need to simply output pg_statistic data\nverbatim like this. Is postgres_fdw actually going to benefit from it? I\nkinda doubt that, and my assumption was that we'd return only a small\nsubset of the data, needed by get_remote_estimate.\n\nThis has a couple of issues. Firstly, it requires the knowledge of what\nthe stakind constants in pg_statistic mean and how to interpret it - but\nOK, it's true that does not change very often (or at all). Secondly, it\nentirely ignores extended statistics - OK, we might extract those too,\nbut it's going to be much more complex. And finally it entirely ignores\ncosting on the remote node. Surely we can't just apply local\nrandom_page_cost or whatever, because those may be entirely different.\nAnd we don't know if the remote is going to use index etc.\n\nSo is extracting data from pg_statistic the right approach?\n\n\n5) I doubt it's enough to support relnames - we also need to estimate\njoins, so this needs to support plain queries I think. 
At least that's\nwhat Tom envisioned in his postgres_fdw_support(query text) proposal.\n\n\n6) I see you've included a version number in the data - why not to just\ncheck \n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 4 Sep 2020 16:57:51 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "On 9/4/20 6:23 PM, Ashutosh Bapat wrote:\n> \n> \n> On Thu, 3 Sep 2020 at 10:44, Andrey V. Lepikhov\n> <a.lepikhov@postgrespro.ru <mailto:a.lepikhov@postgrespro.ru>> wrote:\n> \n> On 8/31/20 6:19 PM, Ashutosh Bapat wrote:\n> > On Mon, Aug 31, 2020 at 3:36 PM Andrey V. Lepikhov\n> > <a.lepikhov@postgrespro.ru <mailto:a.lepikhov@postgrespro.ru>> wrote:\n> >>\n> >> Thanks for this helpful feedback.\n> > I think the patch has some other problems like it works only for\n> > regular tables on foreign server but a foreign table can be pointing\n> > to any relation like a materialized view, partitioned table or a\n> > foreign table on the foreign server all of which have statistics\n> > associated with them. I didn't look closely but it does not consider\n> > that the foreign table may not have all the columns from the relation\n> > on the foreign server or may have different names. But I think those\n> > problems are kind of secondary. We have to agree on the design first.\n> >\n> In accordance with discussion i made some changes in the patch:\n> 1. The extract statistic routine moved into the core.\n> \n> \n> Bulk of the patch implements the statistics conversion to and fro json\n> format. I am still not sure whether we need all of that code here.\nYes, I'm sure we'll replace it with something.\n\nRight now, I want to discuss the format of the statistics dump.
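To make that discussion concrete, here is a minimal standalone sketch (Python, standard library only) of what a versioned, single-string serialization of one pg_statistic row could look like. The field names mirror pg_statistic columns, but this exact layout and both function names are hypothetical illustrations for the thread, not the patch's actual format:

```python
import json

def serialize_stat_tuple(staattnum, stanullfrac, stawidth, stadistinct, slots):
    """Pack one per-column statistics tuple into a versioned JSON string.

    Each entry of ``slots`` is a (stakind, stanumbers, stavalues) triple;
    stavalues are carried as text so the receiving server can re-parse
    them with the column type's input function, as discussed upthread.
    """
    return json.dumps({
        "version": 1,                 # format version, for cross-release checks
        "staattnum": staattnum,       # attribute number of the column
        "stanullfrac": stanullfrac,   # fraction of NULL values
        "stawidth": stawidth,         # average stored width, in bytes
        "stadistinct": stadistinct,   # n_distinct estimate
        "slots": [
            {"stakind": kind, "stanumbers": nums, "stavalues": vals}
            for kind, nums, vals in slots
        ],
    })

def deserialize_stat_tuple(payload):
    """Validate the version field before accepting a serialized tuple."""
    stat = json.loads(payload)
    if stat.get("version") != 1:
        raise ValueError("unsupported statistics serialization version")
    return stat
```

The "version" key is the compatibility hook mentioned earlier in the thread: a loader can refuse, or adapt to, a payload produced by a different major release.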
Remember that a statistics dump is needed not only for the FDW; it is\nalso needed for pg_dump. The dump will then contain something like this:\n'SELECT store_relation_statistics(rel, serialized_stat)'\n\nMy reasons for using JSON:\n* it has conversion infrastructure like json_build_object()\n* it is a flexible, readable format that can be useful in text dumps of\nrelations.\n\n> Can we re-use pg_stats view? That is converting some of the OIDs to names. I\n> agree with anyarray but if that's a problem here it's also a problem for\n> pg_stats view, isn't it?\nRight now, I don't know if it is possible to unambiguously convert the\npg_stats information to a pg_statistic tuple.\n\n> If we can reduce the stats handling code to a\n> minimum or use it for some other purpose as well e.g. pg_stats\n> enhancement, the code changes required will be far less compared to the\n> value that this patch provides.\n+1\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Mon, 7 Sep 2020 16:37:00 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "On Fri, 4 Sep 2020 at 20:27, Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote\n\n>\n>\n> 4) I wonder if we actually want/need to simply output pg_statistic data\n> verbatim like this. Is postgres_fdw actually going to benefit from it? I\n> kinda doubt that, and my assumption was that we'd return only a small\n> subset of the data, needed by get_remote_estimate.\n>\n> This has a couple of issues. Firstly, it requires the knowledge of what\n> the stakind constants in pg_statistic mean and how to interpret it - but\n> OK, it's true that does not change very often (or at all). Secondly, it\n> entirely ignores extended statistics - OK, we might extract those too,\n> but it's going to be much more complex. And finally it entirely ignores\n> costing on the remote node.
Surely we can't just apply local\n> random_page_cost or whatever, because those may be entirely different.\n> And we don't know if the remote is going to use index etc.\n>\n> So is extracting data from pg_statistic the right approach?\n>\n>\nThere are two different problems, which ultimately might converge.\n1. If use_remote_estimates = false, more generally if querying costs from\nforeign server for costing paths is impractical, we want to use local\nestimates and try to come up with costs. For that purpose we keep some\nstatistics locally and user is expected to refresh it periodically by\nrunning ANALYZE on the foreign table. This patch is about a. doing this\nefficiently without requiring to fetch every row from the foreign server b.\nthrough autovacuum automatically without user firing ANALYZE. I think this\nalso answers your question about vacuum_rel() above.\n\n2. How to efficiently extract costs from an EXPLAIN plan when\nuse_remote_estimates is true. That's the subject of some nearby thread. I\nthink you are referring to that problem here. Hence your next point.\n\nUsing EXPLAIN to get costs from the foreign server isn't efficient. It\nincreases planning time a lot; sometimes planning time exceeds execution\ntime. If usage of foreign tables becomes more and more common, this isn't\nideal. I think we should move towards a model in which the optimizer can\ndecide whether a subtree involving a foreign server should be evaluated\nlocally or on the foreign server without the help of foreign server. One\nway to do it (I am not saying that this is the only or the best way) is to\nestimate the cost of foreign query locally based on the information\navailable locally about the foreign server and foreign table. This might\nmean that we have to get that information from the foreign server and cache\nit locally and use it several times, including the indexes on foreign\ntable, values of various costs etc.
Though this approach doesn't solve all\nof those problems it's one step forward + it makes the current scenario\nalso efficient.\n\nI agree that the patch needs some work though, esp the code dealing with\nserialization and deserialization of statistics.\n-- \nBest Wishes,\nAshutosh", "msg_date": "Tue, 8 Sep 2020 17:55:09 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "On Tue, Sep 08, 2020 at 05:55:09PM +0530, Ashutosh Bapat wrote:\n>On Fri, 4 Sep 2020 at 20:27, Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>wrote\n>\n>>\n>>\n>> 4) I wonder if we actually want/need to simply output pg_statistic data\n>> verbatim like this. Is postgres_fdw actually going to benefit from it? I\n>> kinda doubt that, and my assumption was that we'd return only a small\n>> subset of the data, needed by get_remote_estimate.\n>>\n>> This has a couple of issues.
Firstly, it requires the knowledge of what\n>> the stakind constants in pg_statistic mean and how to interpret it - but\n>> OK, it's true that does not change very often (or at all). Secondly, it\n>> entirely ignores extended statistics - OK, we might extract those too,\n>> but it's going to be much more complex. And finally it entirely ignores\n>> costing on the remote node. Surely we can't just apply local\n>> random_page_cost or whatever, because those may be entirely different.\n>> And we don't know if the remote is going to use index etc.\n>>\n>> So is extracting data from pg_statistic the right approach?\n>>\n>>\n>There are two different problems, which ultimately might converge.\n>1. If use_remote_estimates = false, more generally if querying costs from\n>foreign server for costing paths is impractical, we want to use local\n>estimates and try to come up with costs. For that purpose we keep some\n>statistics locally and user is expected to refresh it periodically by\n>running ANALYZE on the foreign table. This patch is about a. doing this\n>efficiently without requiring to fetch every row from the foreign server b.\n>through autovacuum automatically without user firing ANALYZE. I think this\n>also answers your question about vacuum_rel() above.\n>\n>2. How to efficiently extract costs from an EXPLAIN plan when\n>use_remote_eestimates is true. That's the subject of some nearby thread. I\n>think you are referring to that problem here. Hence your next point.\n>\n\nI think that was the topic of *this* thread as started by Tom, but I now\nrealize Andrey steered it in the direction to allow re-using remote\nstats. Which seems useful too, but it confused me a bit.\n\n>Using EXPLAIN to get costs from the foreign server isn't efficient. It\n>increases planning time a lot; sometimes planning time exceeds execution\n>time. If usage of foreign tables becomes more and more common, this isn't\n>ideal. 
I think we should move towards a model in which the optimizer can\n>decide whether a subtree involving a foreign server should be evaluated\n>locally or on the foreign server without the help of foreign server. One\n>way to do it (I am not saying that this is the only or the best way) is to\n>estimate the cost of foreign query locally based on the information\n>available locally about the foreign server and foreign table. This might\n>mean that we have to get that information from the foreign server and cache\n>it locally and use it several times, including the indexes on foreign\n>table, values of various costs etc. Though this approach doesn't solve all\n>of those problems it's one step forward + it makes the current scenario\n>also efficient.\n>\n\nTrue, but that ptoject is way more ambitious than providing a simple API\nfor postgres_fdw to obtain the estimates more efficiently.\n\n>I agree that the patch needs some work though, esp the code dealing with\n>serialization and deserialization of statistics.\n\nI think there's a bunch of open questions, e.g. what to do with extended\nstatistics - for example what should happen when the extended statistics\nobject is defined only on local/remote server, or when the definitions\ndon't match? What should happen when the definitions don't match? 
This\nprobably is not an issue for \"regular\" stats, because that seems pretty\nstable, but for extended stats there are differences between versions.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Tue, 8 Sep 2020 23:05:30 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" }, { "msg_contents": "On Wed, 9 Sep 2020 at 02:35, Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote\n\n>\n> I think that was the topic of *this* thread as started by Tom, but I now\n> realize Andrey steered it in the direction to allow re-using remote\n> stats. Which seems useful too, but it confused me a bit.\n>\n\nI didn't realize that the nearby thread I am mentioning is actually this\nthread :). Sorry.\n\n\n>\n> >Using EXPLAIN to get costs from the foreign server isn't efficient. It\n> >increases planning time a lot; sometimes planning time exceeds execution\n> >time. If usage of foreign tables becomes more and more common, this isn't\n> >ideal. I think we should move towards a model in which the optimizer can\n> >decide whether a subtree involving a foreign server should be evaluated\n> >locally or on the foreign server without the help of foreign server. One\n> >way to do it (I am not saying that this is the only or the best way) is to\n> >estimate the cost of foreign query locally based on the information\n> >available locally about the foreign server and foreign table. This might\n> >mean that we have to get that information from the foreign server and\n> cache\n> >it locally and use it several times, including the indexes on foreign\n> >table, values of various costs etc. 
Though this approach doesn't solve all\n> >of those problems it's one step forward + it makes the current scenario\n> >also efficient.\n> >\n>\n> True, but that ptoject is way more ambitious than providing a simple API\n> for postgres_fdw to obtain the estimates more efficiently.\n>\n\nDoing all of that is a big project. But what this patch aims at is a small\nsubset which makes statistics collection efficient and automatic. So, just\nfor that, we should consider it.\n\n\n>\n> >I agree that the patch needs some work though, esp the code dealing with\n> >serialization and deserialization of statistics.\n>\n> I think there's a bunch of open questions, e.g. what to do with extended\n> statistics - for example what should happen when the extended statistics\n> object is defined only on local/remote server, or when the definitions\n> don't match? What should happen when the definitions don't match? This\n> probably is not an issue for \"regular\" stats, because that seems pretty\n> stable, but for extended stats there are differences between versions.\n\n\nIf it is defined on the foreign server but not the local server, there is\nno need to fetch it from the foreign server. The other way round case is\ntricky. We could mark the extended statistics object invalid if it's not\ndefined on the foreign server or the definition is different. We have to\ndocument it that way. I think that should serve most of the cases.\n\n-- \nBest Wishes,\nAshutosh", "msg_date": "Wed, 9 Sep 2020 10:06:17 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Ideas about a better API for postgres_fdw remote estimates" } ]
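The JSON serialization discussed in the thread above can be sketched with stock SQL. This is a hypothetical illustration only — the store_relation_statistics() function named in the thread does not exist in PostgreSQL, and the field layout here is an assumption, not the patch's actual format:

```sql
-- Hypothetical sketch: one JSON document per column, built from the
-- pg_stats view.  The ::text casts are needed because anyarray columns
-- such as most_common_vals have no direct JSON conversion -- which is
-- exactly the "unambiguously convert pg_stats back to a pg_statistic
-- tuple" problem raised in the thread.
SELECT json_build_object(
         'version',           1,          -- format version, as discussed
         'schema',            schemaname,
         'table',             tablename,
         'column',            attname,
         'null_frac',         null_frac,
         'avg_width',         avg_width,
         'n_distinct',        n_distinct,
         'most_common_vals',  most_common_vals::text,
         'most_common_freqs', most_common_freqs,
         'histogram_bounds',  histogram_bounds::text,
         'correlation',       correlation)
FROM pg_stats
WHERE schemaname = 'public' AND tablename = 'target_tbl';
```

The reverse direction is the hard part: once most_common_vals has been flattened to text, the array's element type has to be reconstructed from the column's declared type before a pg_statistic tuple can be rebuilt.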
[ { "msg_contents": "Hello, hackers!\n\nI'd like to propose a feature for changing a constraint's index. The\nprovided patch allows to do it for EXCLUDE, UNIQUE, PRIMARY KEY and\nFOREIGN KEY constraints.\n\nFeature description:\nALTER TABLE ... ALTER CONSTRAINT ... USING INDEX ...\nReplace a constraint's index with another sufficiently similar index.\n\nUse cases:\n - Removing index bloat [1] (now also achieved by REINDEX \nCONCURRENTLY)\n - Swapping a normal index for an index with INCLUDED columns, or vice \nversa\n\nExample of use:\nCREATE TABLE target_tbl (\nid integer PRIMARY KEY,\ninfo text\n);\nCREATE TABLE referencing_tbl (\nid_ref integer REFERENCES target_tbl (id)\n);\n-- Swapping primary key's index for an equivalent index,\n-- but with INCLUDE-d attributes.\nCREATE UNIQUE INDEX new_idx ON target_tbl (id) INCLUDE (info);\nALTER TABLE target_tbl ALTER CONSTRAINT target_tbl_pkey USING INDEX\nnew_idx;\nALTER TABLE referencing_tbl ALTER CONSTRAINT referencing_tbl_id_ref_fkey\nUSING INDEX new_idx;\nDROP INDEX target_tbl_pkey;\n\nI'd like to hear your feedback on this feature.\nAlso, some questions:\n1) If the index supporting a UNIQUE or PRIMARY KEY constraint is\nchanged, should foreign keys also automatically switch to the new index?\nOr should the user switch it manually, by using ALTER CONSTRAINT USING\nINDEX on the foreign key?\n2) Whose name should change to fit the other - constraint's or index's?\n\n[1] \nhttps://www.postgresql.org/message-id/flat/CABwTF4UxTg%2BkERo1Nd4dt%2BH2miJoLPcASMFecS1-XHijABOpPg%40mail.gmail.com\n\nP.S. I apologize for resending the email, the previous one was sent as a \nresponse to another thread by mistake.\n\n-- \nAnna Akenteva\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sun, 05 Jul 2020 17:12:07 +0300", "msg_from": "Anna Akenteva <a.akenteva@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Change a constraint's index - ALTER TABLE ... 
ALTER CONSTRAINT ...\n USING INDEX ..." }, { "msg_contents": "On 2020-Jul-05, Anna Akenteva wrote:\n\n> -- Swapping primary key's index for an equivalent index,\n> -- but with INCLUDE-d attributes.\n> CREATE UNIQUE INDEX new_idx ON target_tbl (id) INCLUDE (info);\n> ALTER TABLE target_tbl ALTER CONSTRAINT target_tbl_pkey USING INDEX\n> new_idx;\n> ALTER TABLE referencing_tbl ALTER CONSTRAINT referencing_tbl_id_ref_fkey\n> USING INDEX new_idx;\n\nHow is this state represented by pg_dump?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 6 Jul 2020 17:47:35 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Change a constraint's index - ALTER TABLE ... ALTER CONSTRAINT\n ... USING INDEX ..." }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Jul-05, Anna Akenteva wrote:\n>> -- Swapping primary key's index for an equivalent index,\n>> -- but with INCLUDE-d attributes.\n>> CREATE UNIQUE INDEX new_idx ON target_tbl (id) INCLUDE (info);\n>> ALTER TABLE target_tbl ALTER CONSTRAINT target_tbl_pkey USING INDEX\n>> new_idx;\n>> ALTER TABLE referencing_tbl ALTER CONSTRAINT referencing_tbl_id_ref_fkey\n>> USING INDEX new_idx;\n\n> How is this state represented by pg_dump?\n\nEven if it's possible to represent, I think we should flat out reject\nthis \"feature\". Primary keys that aren't primary keys don't seem like\na good idea. For one thing, it won't be possible to describe the\nconstraint accurately in the information_schema.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Jul 2020 18:08:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Change a constraint's index - ALTER TABLE ... ALTER CONSTRAINT\n ... USING INDEX ..."
}, { "msg_contents": "On 2020-07-07 01:08, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> On 2020-Jul-05, Anna Akenteva wrote:\n>>> -- Swapping primary key's index for an equivalent index,\n>>> -- but with INCLUDE-d attributes.\n>>> CREATE UNIQUE INDEX new_idx ON target_tbl (id) INCLUDE (info);\n>>> ALTER TABLE target_tbl ALTER CONSTRAINT target_tbl_pkey USING INDEX\n>>> new_idx;\n>>> ALTER TABLE referencing_tbl ALTER CONSTRAINT \n>>> referencing_tbl_id_ref_fkey\n>>> USING INDEX new_idx;\n> \n>> How is this state represented by pg_dump?\n> \n> Even if it's possible to represent, I think we should flat out reject\n> this \"feature\". Primary keys that aren't primary keys don't seem like\n> a good idea. For one thing, it won't be possible to describe the\n> constraint accurately in the information_schema.\n\nDo you think it could still be a good idea if we only swap the \nrelfilenodes of indexes, as it was suggested in [1]? The original use \ncase was getting rid of index bloat, which is now solved by REINDEX \nCONCURRENTLY, but this feature still has its own use case of adding \nINCLUDE-d columns to constraint indexes.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CABwTF4UxTg%2BkERo1Nd4dt%2BH2miJoLPcASMFecS1-XHijABOpPg%40mail.gmail.com\n\n-- \nAnna Akenteva\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Mon, 10 Aug 2020 09:29:31 +0300", "msg_from": "Anna Akenteva <a.akenteva@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Change a constraint's index - ALTER TABLE ... ALTER CONSTRAINT\n ... USING INDEX ..." 
}, { "msg_contents": "On Mon, 2020-08-10 at 09:29 +0300, Anna Akenteva wrote:\n> On 2020-07-07 01:08, Tom Lane wrote:\n> \n> > Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > > On 2020-Jul-05, Anna Akenteva wrote:\n> > > > -- Swapping primary key's index for an equivalent index,\n> > > > -- but with INCLUDE-d attributes.\n> > > > CREATE UNIQUE INDEX new_idx ON target_tbl (id) INCLUDE (info);\n> > > > ALTER TABLE target_tbl ALTER CONSTRAINT target_tbl_pkey USING INDEX\n> > > > new_idx;\n> > > > ALTER TABLE referencing_tbl ALTER CONSTRAINT \n> > > > referencing_tbl_id_ref_fkey\n> > > > USING INDEX new_idx;\n> > > How is this state represented by pg_dump?\n> > Even if it's possible to represent, I think we should flat out reject\n> > this \"feature\". Primary keys that aren't primary keys don't seem like\n> > a good idea. For one thing, it won't be possible to describe the\n> > constraint accurately in the information_schema.\n> \n> \n> Do you think it could still be a good idea if we only swap the \n> relfilenodes of indexes, as it was suggested in [1]? 
The original use \n> case was getting rid of index bloat, which is now solved by REINDEX \n> CONCURRENTLY, but this feature still has its own use case of adding \n> INCLUDE-d columns to constraint indexes.\n\nHow can you just swap the filenodes if \"indnatts\" and \"indkey\" is\ndifferent, since one index has an INCLUDE clause?\n\nI think that the original proposal is better, except that foreign key\ndependencies should be changed along with the primary or unique index,\nso that everything is consistent once the command is done.\n\nThen the ALTER CONSTRAINT from that replaces the index referenced\nby a foreign key becomes unnecessary and should be removed.\n\nThe value I see in this is:\n- replacing a primary key index\n- replacing the index behind a constraint targeted by a foreign key\n\nSome code comments:\n\n+ <varlistentry>\n+ <term><literal>ALTER CONSTRAINT</literal> <replaceable class=\"parameter\">constraint_name</replaceable> [USING INDEX <replaceable class=\"para>\n+ <listitem>\n+ <para>\n+ For uniqueness, primary key, and exclusion constraints, this form\n+ replaces the original index and renames the constraint accordingly.\n\nYou forgot to mention foreign keys.\n\n+ /* This function might need modificatoins if pg_index gets new fields */\n+ Assert(Natts_pg_index == 20);\n\nTypo.\n\n+ if (!equal(RelationGetIndexExpressions(oldIndex),\n+ RelationGetIndexExpressions(newIndex)))\n+ return \"Indexes must have the same non-column attributes\";\n\nCorrect me if I am wrong, but constraint indexes can never use\nexpressions. So this should be covered by comparing the key\nattributes above (they would be 0 for an expression).\n\n+ if (!equal(oldPredicate, newPredicate))\n+ {\n+ if (oldPredicate && newPredicate)\n+ return \"Indexes must have the same partial index predicates\";\n+ else\n+ return \"Either none or both indexes must have partial index predicates\";\n+ }\n\nA constraint index can never have predicates. 
Only the new index would\nhave to be checked.\n\n+/*\n+ * ALTER TABLE ALTER CONSTRAINT USING INDEX\n+ *\n+ * Replace an index of a constraint.\n+ *\n+ * Currently only works for UNIQUE, EXCLUSION and PRIMARY constraints.\n\nYou forgot foreign key constraints (although I think they should not be allowed).\n\n\nI'll set the commitfest entry to \"waiting for author\".\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Fri, 04 Sep 2020 13:59:47 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Change a constraint's index - ALTER TABLE ... ALTER CONSTRAINT\n ... USING INDEX ..." }, { "msg_contents": "On 2020-Sep-04, Laurenz Albe wrote:\n\n> The value I see in this is:\n> - replacing a primary key index\n> - replacing the index behind a constraint targeted by a foreign key\n\nBut why is this better than using REINDEX CONCURRENTLY?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 4 Sep 2020 10:41:40 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Change a constraint's index - ALTER TABLE ... ALTER CONSTRAINT\n ... USING INDEX ..."
}, { "msg_contents": "On Fri, 2020-09-04 at 10:41 -0400, Alvaro Herrera wrote:\n> > The value I see in this is:\n> > - replacing a primary key index\n> > - replacing the index behind a constraint targeted by a foreign key\n> \n> But why is this better than using REINDEX CONCURRENTLY?\n\nIt is not better, but it can be used to replace a constraint index\nwith an index with a different INCLUDE clause, which is something\nthat cannot easily be done otherwise.\n\nFor exclusion constraints it is pretty useless, and for unique\nconstraints it can be worked around with CREATE UNIQUE INDEX CONCURRENTLY.\n\nAdmitted, the use case is pretty narrow, and I am not sure if it is\nworth adding code and SQL syntax for that.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Fri, 04 Sep 2020 17:37:17 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Change a constraint's index - ALTER TABLE ... ALTER CONSTRAINT\n ... USING INDEX ..." }, { "msg_contents": "On 2020-Sep-04, Laurenz Albe wrote:\n\n> On Fri, 2020-09-04 at 10:41 -0400, Alvaro Herrera wrote:\n> > > The value I see in this is:\n> > > - replacing a primary key index\n> > > - replacing the index behind a constraint targeted by a foreign key\n> > \n> > But why is this better than using REINDEX CONCURRENTLY?\n> \n> It is not better, but it can be used to replace a constraint index\n> with an index with a different INCLUDE clause, which is something\n> that cannot easily be done otherwise.\n\nI can see that there is value in having an index that serves both a\nuniqueness constraint and coverage purposes. But this seems a pretty\nroundabout way to get that -- I think you should have to do \"CREATE\nUNIQUE INDEX ... INCLUDING ...\" instead. 
That way, the fact that this\nis a Postgres extension remains clear.\n\n55432 14devel 24138=# create table foo (a int not null, b int not null, c int);\nCREATE TABLE\nDuraci�n: 1,775 ms\n55432 14devel 24138=# create unique index on foo (a, b) include (c);\nCREATE INDEX\nDuraci�n: 1,481 ms\n55432 14devel 24138=# create table bar (a int not null, b int not null, foreign key (a, b) references foo (a, b)); \nCREATE TABLE\nDuraci�n: 2,559 ms\n\nNow you have a normal index that you can reindex in the normal way, if you need\nit.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 4 Sep 2020 13:31:27 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Change a constraint's index - ALTER TABLE ... ALTER CONSTRAINT\n ... USING INDEX ..." }, { "msg_contents": "On Fri, 2020-09-04 at 13:31 -0400, Alvaro Herrera wrote:\n> On 2020-Sep-04, Laurenz Albe wrote:\n> > On Fri, 2020-09-04 at 10:41 -0400, Alvaro Herrera wrote:\n> > > > The value I see in this is:\n> > > > - replacing a primary key index\n> > > > - replacing the index behind a constraint targeted by a foreign key\n> > > But why is this better than using REINDEX CONCURRENTLY?\n> > It is not better, but it can be used to replace a constraint index\n> > with an index with a different INCLUDE clause, which is something\n> > that cannot easily be done otherwise.\n> \n> \n> I can see that there is value in having an index that serves both a\n> uniqueness constraint and coverage purposes. But this seems a pretty\n> roundabout way to get that -- I think you should have to do \"CREATE\n> UNIQUE INDEX ... INCLUDING ...\" instead. 
That way, the fact that this\n> is a Postgres extension remains clear.\n> \n> 55432 14devel 24138=# create table foo (a int not null, b int not null, c int);\n> CREATE TABLE\n> Duración: 1,775 ms\n> 55432 14devel 24138=# create unique index on foo (a, b) include (c);\n> CREATE INDEX\n> Duración: 1,481 ms\n> 55432 14devel 24138=# create table bar (a int not null, b int not null, foreign key (a, b) references foo (a, b)); \n> CREATE TABLE\n> Duración: 2,559 ms\n> \n> Now you have a normal index that you can reindex in the normal way, if you need\n> it.\n\nYes, that is true.\n\nBut what if you have done\n\n CREATE TABLE a (id bigint CONSTRAINT a_pkey PRIMARY KEY, val integer);\n CREATE TABLE b (id bigint CONSTRAINT b_fkey REFERENCES a);\n\nand later you figure out later that it would actually be better to have\nan index ON mytab (id) INCLUDE (val), and you don't want to maintain\ntwo indexes.\n\nYes, you could do\n\n CREATE UNIQUE INDEX CONCURRENTLY ind ON a (id) INCLUDE (val);\n ALTER TABLE a ADD UNIQUE USING INDEX ind;\n ALTER TABLE a DROP CONSTRAINT a_pkey CASCADE;\n ALTER TABLE b ADD FOREIGN KEY (id) REFERENCES a(id);\n\nbut then you don't have a primary key, and you have to live without\nthe foreign key for a while.\n\nAdding a primary key to a large table is very painful, because it\nlocks the table exclusively for a long time.\n\n\nThis patch would provide a more convenient way to do that.\n\nAgain, I am not sure if that justifies the effort.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 07 Sep 2020 16:28:34 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Change a constraint's index - ALTER TABLE ... ALTER CONSTRAINT\n ... USING INDEX ..." 
}, { "msg_contents": "On 2020-Sep-07, Laurenz Albe wrote:\n\n> This patch would provide a more convenient way to do that.\n> \n> Again, I am not sure if that justifies the effort.\n\nI have to admit I've seen cases where it'd be useful to have included\ncolumns in primary keys.\n\nTBH I think if we really wanted the feature of primary keys with\nincluded columns, we'd have to add it to the PRIMARY KEY syntax rather\nthan having an ad-hoc ALTER TABLE ALTER CONSTRAINT USING INDEX command\nto replace the index underneath. Then things like pg_dump would work\nnormally.\n\n(I have an answer for the information_schema question Tom posed; I'd\nlike to know what's yours.)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 7 Sep 2020 11:42:29 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Change a constraint's index - ALTER TABLE ... ALTER CONSTRAINT\n ... USING INDEX ..." }, { "msg_contents": "On Mon, 2020-09-07 at 11:42 -0300, Alvaro Herrera wrote:\n> > This patch would provide a more convenient way to do that.\n> > Again, I am not sure if that justifies the effort.\n> \n> I have to admit I've seen cases where it'd be useful to have included\n> columns in primary keys.\n> \n> TBH I think if we really wanted the feature of primary keys with\n> included columns, we'd have to add it to the PRIMARY KEY syntax rather\n> than having an ad-hoc ALTER TABLE ALTER CONSTRAINT USING INDEX command\n> to replace the index underneath. Then things like pg_dump would work\n> normally.\n> \n> (I have an answer for the information_schema question Tom posed; I'd\n> like to know what's yours.)\n\nGah, now I see my mistake. 
I was under the impression that a\nprimary key can have an INCLUDE clause today, which is not true.\n\nSo this would introduce that feature in a weird way.\nI agree that that is undesirable.\n\nWe should at least have\n\n ALTER TABLE ... ADD PRIMARY KEY (id) INCLUDE (val);\n\nor something before we consider this patch.\n\nAs to the information_schema, that could pretend that the INCLUDE\ncolumns just don't exist.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 08 Sep 2020 16:50:44 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Change a constraint's index - ALTER TABLE ... ALTER CONSTRAINT\n ... USING INDEX ..." }, { "msg_contents": "On 2020-Sep-08, Laurenz Albe wrote:\n\n> We should at least have\n> \n> ALTER TABLE ... ADD PRIMARY KEY (id) INCLUDE (val);\n> \n> or something before we consider this patch.\n\nAgreed.\n\nNow the trick in this new command is to let the user change the included\ncolumns afterwards, which remains useful (since it's clearly reasonable\nto change minds after applications using the constraint start to take\nshape).\n\n> As to the information_schema, that could pretend that the INCLUDE\n> columns just don't exist.\n\nYeah, that's what I was thinking too, since for all intents and\npurposes, from the standard's POV the constraint works the same\nregardless of included columns.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 8 Sep 2020 11:56:18 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Change a constraint's index - ALTER TABLE ... ALTER CONSTRAINT\n ... USING INDEX ..." 
}, { "msg_contents": "On Fri, Sep 04, 2020 at 01:59:47PM +0200, Laurenz Albe wrote:\n> I'll set the commitfest entry to \"waiting for author\".\n\nThis review, as well as any of the follow-up emails, have not been\nanswered by the author, so I have marked the patch as returned with\nfeedback.\n--\nMichael", "msg_date": "Wed, 30 Sep 2020 16:27:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Change a constraint's index - ALTER TABLE ... ALTER CONSTRAINT\n ... USING INDEX ..." }, { "msg_contents": "On Wed, 2020-09-30 at 16:27 +0900, Michael Paquier wrote:\n> On Fri, Sep 04, 2020 at 01:59:47PM +0200, Laurenz Albe wrote:\n> > I'll set the commitfest entry to \"waiting for author\".\n> \n> This review, as well as any of the follow-up emails, have not been\n> answered by the author, so I have marked the patch as returned with\n> feedback.\n\nI had the impression that \"rejected\" would be more appropriate.\n\nIt doesn't make sense to introduce a feature whose only remaining\nuse case is modifying a constraint index in a way that is not\nsupported currently.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 30 Sep 2020 13:33:30 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Change a constraint's index - ALTER TABLE ... ALTER CONSTRAINT\n ... USING INDEX ..." } ]
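As a concrete reference for the syntax discussed in this thread, a sketch of the two commands involved follows. Per the thread itself, neither statement is implemented at this point: the INCLUDE clause on ADD PRIMARY KEY is the prerequisite Laurenz sketches, and ALTER CONSTRAINT ... USING INDEX is the proposal of the returned patch. Table and index names are hypothetical.

```sql
-- Hypothetical/proposed syntax only; per the thread, neither statement
-- exists at the time of this discussion.

-- Laurenz's suggested prerequisite: included (covering) columns declared
-- directly in the PRIMARY KEY syntax.
ALTER TABLE t ADD PRIMARY KEY (id) INCLUDE (val);

-- The patch under discussion: swap the index backing an existing
-- constraint for another, compatible index.
CREATE UNIQUE INDEX t_pkey_new ON t (id) INCLUDE (val, extra);
ALTER TABLE t ALTER CONSTRAINT t_pkey USING INDEX t_pkey_new;
```

Alvaro's objection above is that only the first form would let tools like pg_dump reproduce the constraint normally, which is why the second was not accepted on its own.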
[ { "msg_contents": "I'm writing a small extension, and I'm trying to use C++ constructs. I'm\nnot actually doing anything that needs C++, but I *really* like declaring\nvariables when I first initialize them (for example), and I also *really*\nlike warning-free compiles.\n\nThe C++ compiler is mangling the names so they aren't visible to the\nextension mechanism. The following page suggests using extern C (it doesn't\nspecify that this means extern \"C\", but I suppose anybody who actually knew\nwhat they were doing would have known that immediately):\n\nhttps://www.postgresql.org/docs/current/xfunc-c.html\n\nSo I'm writing my functions to start:\n\nextern \"C\" Datum ...\n\nThe problem is that PG_FUNCTION_INFO_V1 generates its own function\ndeclaration with a conflicting extern specification (just plain extern,\nwhich I guess means \"C++\" in the context of C++ code).\n\nI also tried wrapping everything - both the functions and\nthe PG_FUNCTION_INFO_V1 invocations - in extern \"C\" { ... }, but now a\nvariable declaration from fmgr.h conflicts with one from the macros.\n\nIs there a simple fix I'm missing? Any hints much appreciated.", "msg_date": "Sun, 5 Jul 2020 16:53:49 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Can I use extern \"C\" in an extension so I can use C++?" }, { "msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n> I'm writing a small extension, and I'm trying to use C++ constructs. 
I'm\n> not actually doing anything that needs C++, but I *really* like declaring\n> variables when I first initialize them (for example), and I also *really*\n> like warning-free compiles.\n\nWell, you could get that with -Wno-declaration-after-statement ...\nbut yeah, this is supposed to work, modulo all the caveats on the\npage you already found.\n\n> The C++ compiler is mangling the names so they aren't visible to the\n> extension mechanism.\n\nSomething like the attached works for me; what problem are you having\n*exactly*?\n\n$ g++ -Wall -fno-strict-aliasing -fwrapv -g -O2 -D_GNU_SOURCE -c -I/home/postgres/pgsql/src/include -o test.o test.cpp\n$ nm --ext --def test.o\n0000000000000000 T Pg_magic_func\n0000000000000010 T pg_finfo_silly_func\n0000000000000020 T silly_func\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 05 Jul 2020 18:07:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can I use extern \"C\" in an extension so I can use C++?" }, { "msg_contents": "On Sun, 5 Jul 2020 at 18:07, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Isaac Morland <isaac.morland@gmail.com> writes:\n> > I'm writing a small extension, and I'm trying to use C++ constructs. I'm\n> > not actually doing anything that needs C++, but I *really* like declaring\n> > variables when I first initialize them (for example), and I also *really*\n> > like warning-free compiles.\n>\n> Well, you could get that with -Wno-declaration-after-statement ...\n> but yeah, this is supposed to work, modulo all the caveats on the\n> page you already found.\n>\n> > The C++ compiler is mangling the names so they aren't visible to the\n> > extension mechanism.\n>\n> Something like the attached works for me; what problem are you having\n> *exactly*?\n>\n\nI've attached a .cpp file. 
When I run \"make\", I get:\n\ng++ -Wall -Wpointer-arith -Wendif-labels -Wmissing-format-attribute\n-Wformat-security -fno-strict-aliasing -fwrapv -g -g -O2\n-fstack-protector-strong -Wformat -Werror=format-security -I. -I./\n-I/usr/include/postgresql/12/server -I/usr/include/postgresql/internal\n -Wdate-time -D_FORTIFY_SOURCE=2 -D_GNU_SOURCE -I/usr/include/libxml2\n -I/usr/include/mit-krb5 -c -o hashblob.o hashblob.cpp\nhashblob.cpp:9:18: error: conflicting declaration of ‘Datum\nhashblob_touch(FunctionCallInfo)’ with ‘C’ linkage\n 9 | extern \"C\" Datum hashblob_touch (PG_FUNCTION_ARGS) {\n | ^~~~~~~~~~~~~~\nIn file included from hashblob.cpp:2:\nhashblob.cpp:6:21: note: previous declaration with ‘C++’ linkage\n 6 | PG_FUNCTION_INFO_V1(hashblob_touch);\n | ^~~~~~~~~~~~~~\n/usr/include/postgresql/12/server/fmgr.h:405:14: note: in definition of\nmacro ‘PG_FUNCTION_INFO_V1’\n 405 | extern Datum funcname(PG_FUNCTION_ARGS); \\\n | ^~~~~~~~\n[... and then the same set of 3 errors again for the other function ...]\n\nWithout the extern \"C\" stuff it compiles fine but the names are mangled;\nrenaming to .c makes it compile fine with non-mangled names. 
It also\ncompiles if I get rid of the PG_FUNCTION_INFO_V1 invocations; but the same\ndocumentation page is pretty clear that is supposed to be needed; and then\nI think it C++-mangles the names of functions that get called behind the\nscenes (\"undefined symbol: _Z23pg_detoast_datum_packedP7varlena\" when I try\nto CREATE EXTENSION).\n\nI should add I'm using a Makefile I've also attached; it uses PGXS with\nnothing special, but on the other hand it means I don't really understand\neverything that is going on (in particular, I didn't pick whether to say\ng++ or gcc, nor did I explicitly choose any of the arguments to the\ncommand).\n\n$ g++ -Wall -fno-strict-aliasing -fwrapv -g -O2 -D_GNU_SOURCE -c\n> -I/home/postgres/pgsql/src/include -o test.o test.cpp\n> $ nm --ext --def test.o\n> 0000000000000000 T Pg_magic_func\n> 0000000000000010 T pg_finfo_silly_func\n> 0000000000000020 T silly_func\n>", "msg_date": "Sun, 5 Jul 2020 18:41:47 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can I use extern \"C\" in an extension so I can use C++?" }, { "msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n> On Sun, 5 Jul 2020 at 18:07, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Something like the attached works for me; what problem are you having\n>> *exactly*?\n\n> I've attached a .cpp file.\n\nMy example wrapped the Postgres #include's, the PG_MODULE_MAGIC call,\nand the PG_FUNCTION_INFO_V1 call(s) in extern \"C\" { ... }. I'm pretty\nsure you need to do all three of those things to get a working result\nwithout mangled external function names.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 Jul 2020 18:49:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can I use extern \"C\" in an extension so I can use C++?" 
}, { "msg_contents": "On Sun, 5 Jul 2020 at 18:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> My example wrapped the Postgres #include's, the PG_MODULE_MAGIC call,\n> and the PG_FUNCTION_INFO_V1 call(s) in extern \"C\" { ... }. I'm pretty\n> sure you need to do all three of those things to get a working result\n> without mangled external function names.\n>\n\nI wrapped my entire file - #includes and all - in extern \"C\" { ... } and it\nworked perfectly. Thanks very much for your assistance and patience.", "msg_date": "Sun, 5 Jul 2020 20:47:08 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can I use extern \"C\" in an extension so I can use C++?" } ]
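Tom's attached test file is not preserved in this archive. For reference, a minimal sketch of the arrangement the thread converges on — wrapping the PostgreSQL #include's, the PG_MODULE_MAGIC call, and the PG_FUNCTION_INFO_V1 call in extern "C" — might look like this. The function name silly_func is taken from Tom's nm output above; compiling requires the PostgreSQL server headers, as provided by the PGXS Makefile mentioned in the thread.

```cpp
/* Give everything the fmgr interface touches C linkage, so the symbols
 * the extension loader looks up are not C++-mangled. */
extern "C"
{
#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(silly_func);
}

/* The definition inherits C linkage from the declaration above; being
 * explicit keeps the intent obvious. */
extern "C" Datum
silly_func(PG_FUNCTION_ARGS)
{
    /* C++ declare-at-first-use is fine inside the function body. */
    int32 arg = PG_GETARG_INT32(0);

    PG_RETURN_INT32(arg + 1);
}
```

Isaac's simpler variant — wrapping the entire file, function definitions included, in one extern "C" { ... } block — also works, per the final message of the thread.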
[ { "msg_contents": "I hit this issue intermittently (roughly half the time) while working with a\npatch David submitted, and finally found a recipe to reproduce it on an\nunpatched v12 instance.\n\nI was surprised to see pg_restore -j2 is restoring ACLs in pre-data in\nparallel. Note different session IDs and PIDs:\n\n2020-07-05 23:31:27.448 CDT,\"pryzbyj\",\"secondary_dump\",24037,\"[local]\",5f02a91f.5de5,70,,LOG,00000,\"statement: REVOKE SELECT ON TABLE pg_catalog.pg_proc FROM PUBLIC; \",,,,,,,,,\"pg_restore\",\"client backend\"\n2020-07-05 23:31:27.448 CDT,\"pryzbyj\",\"secondary_dump\",24036,\"[local]\",5f02a91f.5de4,78,,LOG,00000,\"statement: GRANT SELECT(tableoid) ON TABLE pg_catalog.pg_proc TO PUBLIC; \",,,,,,,,,\"pg_restore\",\"client backend\"\n2020-07-05 23:31:27.450 CDT,\"pryzbyj\",\"secondary_dump\",24036,\"[local]\",5f02a91f.5de4,79,,LOG,00000,\"statement: GRANT SELECT(oid) ON TABLE pg_catalog.pg_proc TO PUBLIC; \",,,,,,,,,\"pg_restore\",\"client backend\"\n2020-07-05 23:31:27.450 CDT,\"pryzbyj\",\"secondary_dump\",24037,\"[local]\",5f02a91f.5de5,71,,ERROR,XX000,\"tuple concurrently updated\",,,,,,\"REVOKE SELECT ON TABLE pg_catalog.pg_proc FROM PUBLIC;\n\npostgres=# CREATE DATABASE pryzbyj;\npostgres=# \\c pryzbyj \npryzbyj=# REVOKE ALL ON pg_proc FROM postgres;\npryzbyj=# GRANT SELECT (tableoid, oid, proname) ON pg_proc TO public;\npryzbyj=# \\dp+ pg_catalog.pg_proc\n Schema | Name | Type | Access privileges | Column privileges | Policies \n------------+---------+-------+-------------------+-------------------+----------\n pg_catalog | pg_proc | table | =r/postgres | tableoid: +| \n | | | | =r/postgres +| \n | | | | oid: +| \n | | | | =r/postgres +| \n | | | | proname: +| \n | | | | =r/postgres | \n\n[pryzbyj@database ~]$ pg_dump pryzbyj -Fc -f pg_dump.out\n[pryzbyj@database ~]$ pg_restore pg_dump.out -j2 -d pryzbyj --clean -v\n...\npg_restore: entering main parallel loop\npg_restore: launching item 3744 ACL TABLE pg_proc\npg_restore: launching 
item 3745 ACL COLUMN pg_proc.proname\npg_restore: creating ACL \"pg_catalog.TABLE pg_proc\"\npg_restore: creating ACL \"pg_catalog.COLUMN pg_proc.proname\"\npg_restore:pg_restore: while PROCESSING TOC:\nfinished item 3745 ACL COLUMN pg_proc.proname\npg_restore: from TOC entry 3744; 0 0 ACL TABLE pg_proc postgres\npg_restore: error: could not execute query: ERROR: tuple concurrently updated\nCommand was: REVOKE ALL ON TABLE pg_catalog.pg_proc FROM postgres;\n\n\n", "msg_date": "Mon, 6 Jul 2020 00:01:29 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "\"tuple concurrently updated\" in pg_restore --jobs" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> I hit this issue intermittently (roughly half the time) while working with a\n> patch David submitted, and finally found a recipe to reproduce it on an\n> unpatched v12 instance.\n\n> I was surprised to see pg_restore -j2 is restoring ACLs in pre-data in\n> parallel.\n\nIt's not pre-data. But it's true that pg_restore figures it can restore\nACLs in parallel during the ACL-restoring pass, on the theory that pg_dump\nwill not emit two different ACL entries for the same object, so that we\ncan do all the catalog updates in parallel without conflicts.\n\nThis works about 99% of the time, in fact. It falls down in the --clean\ncase if we have to revoke existing table permissions, because in that case\nthe REVOKE at table level is required to clear the table's per-column ACLs\nas well, so that that ACL entry involves touching the same catalog rows\nthat the per-column ACLs want to touch.\n\nI think the right fix is to give the per-column ACL entries dependencies\non the per-table ACL, if there is one. 
This will not fix the problem\nfor the case of restoring from an existing pg_dump archive that lacks\nsuch dependency links --- but given the lack of field complaints, I'm\nokay with that.\n\nThis looks straightforward, if somewhat tedious because we'll have to\nchange the API of pg_dump's dumpACL() function, which is called by\na lot of places. Barring objections, I'll go do that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Jul 2020 16:54:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \"tuple concurrently updated\" in pg_restore --jobs" }, { "msg_contents": "On Fri, Jul 10, 2020 at 04:54:40PM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > I hit this issue intermittently (roughly half the time) while working with a\n> > patch David submitted, and finally found a recipe to reproduce it on an\n> > unpatched v12 instance.\n> \n> > I was surprised to see pg_restore -j2 is restoring ACLs in pre-data in\n> > parallel.\n> \n> It's not pre-data. But it's true that pg_restore figures it can restore\n> ACLs in parallel during the ACL-restoring pass, on the theory that pg_dump\n> will not emit two different ACL entries for the same object, so that we\n> can do all the catalog updates in parallel without conflicts.\n> \n> This works about 99% of the time, in fact. 
It falls down in the --clean\n\nNote that this fails for me (sometimes) even without --clean.\n\n$ pg_restore pg_dump.out -j2 -d pryzbyj -v --section pre-data\n\npg_restore: entering main parallel loop\npg_restore: launching item 3395 ACL TABLE pg_proc\npg_restore: launching item 3396 ACL COLUMN pg_proc.proname\npg_restore: creating ACL \"pg_catalog.TABLE pg_proc\"\npg_restore: creating ACL \"pg_catalog.COLUMN pg_proc.proname\"\npg_restore: finished item 3395 ACL TABLE pg_proc\npg_restore: launching item 3397 ACL COLUMN pg_proc.pronamespace\npg_restore: creating ACL \"pg_catalog.COLUMN pg_proc.pronamespace\"\npg_restore: while PROCESSING TOC:\npg_restore: from TOC entry 3396; 0 0 ACL COLUMN pg_proc.proname postgres\npg_restore: error: could not execute query: ERROR: tuple concurrently updated\nCommand was: GRANT SELECT(proname) ON TABLE pg_catalog.pg_proc TO PUBLIC;\n\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 10 Jul 2020 16:06:07 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: \"tuple concurrently updated\" in pg_restore --jobs" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Fri, Jul 10, 2020 at 04:54:40PM -0400, Tom Lane wrote:\n>> This works about 99% of the time, in fact. It falls down in the --clean\n\n> Note that this fails for me (sometimes) even without --clean.\n\nOh, I was thinking that REVOKE would only be issued in the --clean\ncase, but apparently that's not so. Doesn't really affect the fix\nproposal though. 
I just finished a patch for HEAD, as attached.\n\n(I flushed the \"CatalogId objCatId\" argument of dumpACL, which was\nnot used.)\n\nI'm not sure how far to back-patch it -- I think the parallel restore\nof ACLs behavior is not very old, but we might want to teach older\npg_dump versions to insert the extra dependency anyway, for safety.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 10 Jul 2020 17:36:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \"tuple concurrently updated\" in pg_restore --jobs" }, { "msg_contents": "On Fri, Jul 10, 2020 at 05:36:28PM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Fri, Jul 10, 2020 at 04:54:40PM -0400, Tom Lane wrote:\n> >> This works about 99% of the time, in fact. It falls down in the --clean\n> \n> > Note that this fails for me (sometimes) even without --clean.\n> \n> Oh, I was thinking that REVOKE would only be issued in the --clean\n> case, but apparently that's not so. Doesn't really affect the fix\n> proposal though. I just finished a patch for HEAD, as attached.\n> \n> (I flushed the \"CatalogId objCatId\" argument of dumpACL, which was\n> not used.)\n> \n> I'm not sure how far to back-patch it -- I think the parallel restore\n> of ACLs behavior is not very old, but we might want to teach older\n> pg_dump versions to insert the extra dependency anyway, for safety.\n\nYes, and the test case in David's patch on other thread [0] can't be\nbackpatched further than this patch is. 
A variant on his test case could just\nas well be included in this patch (with pg_dump writing to a seekable FD) and\nthen amended later to also test writing to an unseekable FD.\n\n[0] https://commitfest.postgresql.org/28/2568/\n\n\n", "msg_date": "Fri, 10 Jul 2020 16:45:21 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: \"tuple concurrently updated\" in pg_restore --jobs" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Fri, Jul 10, 2020 at 05:36:28PM -0400, Tom Lane wrote:\n>> I'm not sure how far to back-patch it -- I think the parallel restore\n>> of ACLs behavior is not very old, but we might want to teach older\n>> pg_dump versions to insert the extra dependency anyway, for safety.\n\n> Yes, and the test case in David's patch on other thread [0] can't be\n> backpatched further than this patch is.\n\nActually, the answer seems to be that we'd better back-patch all the way,\nbecause this is a live bug much further back than I'd guessed. pg_restore\nis willing to run these ACL restores in parallel in all active branches.\nThe given test case only shows a failure back to 9.6, because older\nversions don't dump ACLs on system catalogs; but of course you can just\ntry it with a user table instead.\n\nOddly, I could not get the \"tuple concurrently updated\" syndrome to\nappear on 9.5. Not sure why not; the GRANT/REVOKE code looks the\nsame as in 9.6. What I *could* demonstrate in 9.5 is that sometimes\nthe post-restore state is flat out wrong: the column-level grants go\nmissing, presumably as a result of the table-level REVOKE executing\nafter the column-level GRANTs. 
Probably that syndrome occurs sometimes\nin later branches too, depending on timing; but I didn't look.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 11 Jul 2020 13:11:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \"tuple concurrently updated\" in pg_restore --jobs" } ]
[ { "msg_contents": "Hello\n\nI would like to ask about the conditions under which partition pruning is performed.\nIn PostgreSQL 12, when I executed the following SQL, partition pruning is not performed.\n\npostgres=# explain select * from a where (c1, c2) < (99, 99);\n QUERY PLAN\n----------------------------------------------------------------\n Append (cost=0.00..60.00 rows=800 width=40)\n -> Seq Scan on a1 a_1 (cost=0.00..28.00 rows=400 width=40)\n Filter: (ROW(c1, c2) < ROW(99, 99))\n -> Seq Scan on a2 a_2 (cost=0.00..28.00 rows=400 width=40)\n Filter: (ROW(c1, c2) < ROW(99, 99))\n(5 rows)\n\nHowever, pruning is performed when I changed the SQL as follows.\n\npostgres=# explain select * from a where c1 < 99 and c2 < 99;\n QUERY PLAN\n--------------------------------------------------------\n Seq Scan on a1 a (cost=0.00..28.00 rows=133 width=40)\n Filter: ((c1 < 99) AND (c2 < 99))\n(2 rows)\n\nThese tables are defined as follows.\n\ncreate table a( c1 int, c2 int, c3 varchar) partition by range(c1, c2);\ncreate table a1 partition of a for values from(0, 0) to (100, 100);\ncreate table a2 partition of a for values from(100, 100) to (200, 200);\n\n\nLooking at the code, \"(c1, c2) < (99, 99)\" is recognized as RowCompExpr and \"c1 < 99 and c2 < 99\" is recognized as a combination of OpExpr.\n\nCurrently, pruning is not performed for RowCompExpr, is this correct?\nAlso, at the end of match_clause_to_partition_key(), there is a comment like the following.\n\n\"Since the qual didn't match up to any of the other qual types supported here, then trying to match it against any other partition key is a waste of time, so just return PARTCLAUSE_UNSUPPORTED.\"\n\nBecause it would take a long time to parse all Expr nodes, does match_clause_to_partition_key() return PARTCLAUSE_UNSUPPORTED when such an Expr node is passed?\n\nIf the number of args in RowCompExpr is small, I would think that expanding it would improve performance.\n\nregards,\nsho kato\n\n\n", "msg_date": "Mon, 6 Jul 2020 
08:25:37 +0000", "msg_from": "\"kato-sho@fujitsu.com\" <kato-sho@fujitsu.com>", "msg_from_op": true, "msg_subject": "Performing partition pruning using row value" }, { "msg_contents": "Kato-san,\n\nOn Mon, Jul 6, 2020 at 5:25 PM kato-sho@fujitsu.com\n<kato-sho@fujitsu.com> wrote:\n> I would like to ask about the conditions under which partition pruning is performed.\n> In PostgreSQL 12, when I executed following SQL, partition pruning is not performed.\n>\n> postgres=# explain select * from a where (c1, c2) < (99, 99);\n> QUERY PLAN\n> ----------------------------------------------------------------\n> Append (cost=0.00..60.00 rows=800 width=40)\n> -> Seq Scan on a1 a_1 (cost=0.00..28.00 rows=400 width=40)\n> Filter: (ROW(c1, c2) < ROW(99, 99))\n> -> Seq Scan on a2 a_2 (cost=0.00..28.00 rows=400 width=40)\n> Filter: (ROW(c1, c2) < ROW(99, 99))\n> (5 rows)\n>\n> However, pruning is performed when I changed the SQL as follows.\n>\n> postgres=# explain select * from a where c1 < 99 and c2 < 99;\n> QUERY PLAN\n> --------------------------------------------------------\n> Seq Scan on a1 a (cost=0.00..28.00 rows=133 width=40)\n> Filter: ((c1 < 99) AND (c2 < 99))\n> (2 rows)\n\nJust to be clear, the condition (c1, c2) < (99, 99) is not equivalent\nto the condition c1 < 99 and c2 < 99 (see the documentation note in\n[1]).\n\n> Looking at the code, \"(c1, c2) < (99, 99)\" is recognized as RowCompExpr and \"c1 < 99 and c2 < 99\" is recognized combination of OpExpr.\n>\n> Currently, pruning is not performed for RowCompExpr, is this correct?\n\nYeah, I think so.\n\n> Because it would take a long time to parse all Expr nodes, does match_cluause_to_partition_key() return PART_CLAUSE_UNSUPPORTED when such Expr node is passed?\n\nI don't know the reason why that function doesn't support row-wise\ncomparison, but I don't think the main reason for that is that it\ntakes time to parse expressions.\n\n> If the number of args in RowCompExpr is small, I would think that expanding it 
would improve performance.\n\nYeah, I think it's great to support row-wise comparison not only with\nthe small number of args but with the large number of them.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/docs/current/functions-comparisons.html#ROW-WISE-COMPARISON\n\n\n", "msg_date": "Tue, 7 Jul 2020 18:30:51 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Performing partition pruning using row value" }, { "msg_contents": "Fujita san\r\n\r\nOn Tuesday, July 7, 2020 6:31 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\r\n> Just to be clear, the condition (c1, c2) < (99, 99) is not equivalent to the\r\n> condition c1 < 99 and c2 < 99 (see the documentation note in [1]).\r\n\r\nThanks for sharing this document. I have understood.\r\n\r\n> but I don't think the main reason for that is that it takes time to parse\r\n> expressions.\r\n> Yeah, I think it's great to support row-wise comparison not only with the small\r\n> number of args but with the large number of them.\r\n\r\nThese comments are very helpful.\r\nOk, I try to make POC that allows row-wise comparison with partition-pruning.\r\n\r\nRegards, \r\nsho kato\r\n> -----Original Message-----\r\n> From: Etsuro Fujita <etsuro.fujita@gmail.com>\r\n> Sent: Tuesday, July 7, 2020 6:31 PM\r\n> To: Kato, Sho/加藤 翔 <kato-sho@fujitsu.com>\r\n> Cc: PostgreSQL-development <pgsql-hackers@postgresql.org>\r\n> Subject: Re: Performing partition pruning using row value\r\n> \r\n> Kato-san,\r\n> \r\n> On Mon, Jul 6, 2020 at 5:25 PM kato-sho@fujitsu.com <kato-sho@fujitsu.com>\r\n> wrote:\r\n> > I would like to ask about the conditions under which partition pruning is\r\n> performed.\r\n> > In PostgreSQL 12, when I executed following SQL, partition pruning is not\r\n> performed.\r\n> >\r\n> > postgres=# explain select * from a where (c1, c2) < (99, 99);\r\n> > QUERY PLAN\r\n> > ----------------------------------------------------------------\r\n> > Append 
(cost=0.00..60.00 rows=800 width=40)\r\n> > -> Seq Scan on a1 a_1 (cost=0.00..28.00 rows=400 width=40)\r\n> > Filter: (ROW(c1, c2) < ROW(99, 99))\r\n> > -> Seq Scan on a2 a_2 (cost=0.00..28.00 rows=400 width=40)\r\n> > Filter: (ROW(c1, c2) < ROW(99, 99))\r\n> > (5 rows)\r\n> >\r\n> > However, pruning is performed when I changed the SQL as follows.\r\n> >\r\n> > postgres=# explain select * from a where c1 < 99 and c2 < 99;\r\n> > QUERY PLAN\r\n> > --------------------------------------------------------\r\n> > Seq Scan on a1 a (cost=0.00..28.00 rows=133 width=40)\r\n> > Filter: ((c1 < 99) AND (c2 < 99))\r\n> > (2 rows)\r\n> \r\n> Just to be clear, the condition (c1, c2) < (99, 99) is not equivalent to the\r\n> condition c1 < 99 and c2 < 99 (see the documentation note in [1]).\r\n> \r\n> > Looking at the code, \"(c1, c2) < (99, 99)\" is recognized as RowCompExpr and\r\n> \"c1 < 99 and c2 < 99\" is recognized combination of OpExpr.\r\n> >\r\n> > Currently, pruning is not performed for RowCompExpr, is this correct?\r\n> \r\n> Yeah, I think so.\r\n> \r\n> > Because it would take a long time to parse all Expr nodes, does\r\n> match_cluause_to_partition_key() return PART_CLAUSE_UNSUPPORTED\r\n> when such Expr node is passed?\r\n> \r\n> I don't know the reason why that function doesn't support row-wise comparison,\r\n> but I don't think the main reason for that is that it takes time to parse\r\n> expressions.\r\n> \r\n> > If the number of args in RowCompExpr is small, I would think that expanding\r\n> it would improve performance.\r\n> \r\n> Yeah, I think it's great to support row-wise comparison not only with the small\r\n> number of args but with the large number of them.\r\n> \r\n> Best regards,\r\n> Etsuro Fujita\r\n> \r\n> [1]\r\n> https://www.postgresql.org/docs/current/functions-comparisons.html#ROW-\r\n> WISE-COMPARISON\r\n", "msg_date": "Wed, 8 Jul 2020 01:32:40 +0000", "msg_from": "\"kato-sho@fujitsu.com\" <kato-sho@fujitsu.com>", "msg_from_op": true, 
"msg_subject": "RE: Performing partition pruning using row value" }, { "msg_contents": "Kato-san,\n\nOn Wed, Jul 8, 2020 at 10:32 AM kato-sho@fujitsu.com\n<kato-sho@fujitsu.com> wrote:\n> On Tuesday, July 7, 2020 6:31 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > Just to be clear, the condition (c1, c2) < (99, 99) is not equivalent to the\n> > condition c1 < 99 and c2 < 99 (see the documentation note in [1]).\n>\n> Thanks for sharing this document. I have understood.\n>\n> > but I don't think the main reason for that is that it takes time to parse\n> > expressions.\n\nI think the only reason that this is not supported is that I hadn't\ntested such a query when developing partition pruning, nor did anyone\nelse suggest doing so. :)\n\n> > Yeah, I think it's great to support row-wise comparison not only with the small\n> > number of args but with the large number of them.\n\n+1\n\n> These comments are very helpful.\n> Ok, I try to make POC that allows row-wise comparison with partition-pruning.\n\nThat would be great, thank you.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Jul 2020 11:53:00 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Performing partition pruning using row value" }, { "msg_contents": "Amit-san\r\n\r\nOn Wednesday, July 8, 2020 11:53 AM, Amit Langote <amitlangote09@gmail.com>:\r\n> I think the only reason that this is not supported is that I hadn't tested such a\r\n> query when developing partition pruning, nor did anyone else suggest doing\r\n> so. :)\r\n\r\nThanks for the information. 
I'm relieved to hear this reason.\r\n\r\nRegards, \r\nSho kato\r\n> -----Original Message-----\r\n> From: Amit Langote <amitlangote09@gmail.com>\r\n> Sent: Wednesday, July 8, 2020 11:53 AM\r\n> To: Kato, Sho/加藤 翔 <kato-sho@fujitsu.com>\r\n> Cc: Etsuro Fujita <etsuro.fujita@gmail.com>; PostgreSQL-development\r\n> <pgsql-hackers@postgresql.org>\r\n> Subject: Re: Performing partition pruning using row value\r\n> \r\n> Kato-san,\r\n> \r\n> On Wed, Jul 8, 2020 at 10:32 AM kato-sho@fujitsu.com\r\n> <kato-sho@fujitsu.com> wrote:\r\n> > On Tuesday, July 7, 2020 6:31 PM Etsuro Fujita <etsuro.fujita@gmail.com>\r\n> wrote:\r\n> > > Just to be clear, the condition (c1, c2) < (99, 99) is not\r\n> > > equivalent to the condition c1 < 99 and c2 < 99 (see the documentation\r\n> note in [1]).\r\n> >\r\n> > Thanks for sharing this document. I have understood.\r\n> >\r\n> > > but I don't think the main reason for that is that it takes time to\r\n> > > parse expressions.\r\n> \r\n> I think the only reason that this is not supported is that I hadn't tested such a\r\n> query when developing partition pruning, nor did anyone else suggest doing\r\n> so. :)\r\n> \r\n> > > Yeah, I think it's great to support row-wise comparison not only\r\n> > > with the small number of args but with the large number of them.\r\n> \r\n> +1\r\n> \r\n> > These comments are very helpful.\r\n> > Ok, I try to make POC that allows row-wise comparison with\r\n> partition-pruning.\r\n> \r\n> That would be great, thank you.\r\n> \r\n> --\r\n> Amit Langote\r\n> EnterpriseDB: http://www.enterprisedb.com\r\n", "msg_date": "Wed, 8 Jul 2020 04:25:16 +0000", "msg_from": "\"kato-sho@fujitsu.com\" <kato-sho@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Performing partition pruning using row value" }, { "msg_contents": "\n\nOn 2020/07/08 13:25, kato-sho@fujitsu.com wrote:\n> Amit-san\n> \n> On Wednesday, July 8, 2020 11:53 AM, Amit Langote <amitlangote09@gmail.com>:\n>> I think the only reason that this is not supported is that I hadn't tested such a\n>> query when developing partition pruning, nor did anyone else suggest doing\n>> so. :)\n\nSeems we can do partition pruning even in Kato-san's case by doing\n\ncreate type hoge as (c1 int, c2 int);\ncreate table a( c1 int, c2 int, c3 varchar) partition by range(((c1, c2)::hoge));\ncreate table a1 partition of a for values from((0, 0)) to ((100, 100));\ncreate table a2 partition of a for values from((100, 100)) to ((200, 200));\nexplain select * from a where (c1, c2)::hoge < (99, 99)::hoge;\n\nI'm not sure if this method is officially supported or not, though...\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 8 Jul 2020 15:19:34 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Performing partition pruning using row value" 
:)\r\n> \r\n> > > Yeah, I think it's great to support row-wise comparison not only\r\n> > > with the small number of args but with the large number of them.\r\n> \r\n> +1\r\n> \r\n> > These comments are very helpful.\r\n> > Ok, I try to make POC that allows row-wise comparison with\r\n> partition-pruning.\r\n> \r\n> That would be great, thank you.\r\n> \r\n> --\r\n> Amit Langote\r\n> EnterpriseDB: http://www.enterprisedb.com\r\n", "msg_date": "Wed, 8 Jul 2020 04:25:16 +0000", "msg_from": "\"kato-sho@fujitsu.com\" <kato-sho@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Performing partition pruning using row value" }, { "msg_contents": "\n\nOn 2020/07/08 13:25, kato-sho@fujitsu.com wrote:\n> Amit-san\n> \n> On Wednesday, July 8, 2020 11:53 AM, Amit Langote <amitlangote09@gmail.com>:\n>> I think the only reason that this is not supported is that I hadn't tested such a\n>> query when developing partition pruning, nor did anyone else suggest doing\n>> so. :)\n\nSeems we can do partition pruning even in Kato-san's case by dong\n\ncreate type hoge as (c1 int, c2 int);\ncreate table a( c1 int, c2 int, c3 varchar) partition by range(((c1, c2)::hoge));\ncreate table a1 partition of a for values from((0, 0)) to ((100, 100));\ncreate table a2 partition of a for values from((100, 100)) to ((200, 200));\nexplain select * from a where (c1, c2)::hoge < (99, 99)::hoge;\n\nI'm not sure if this method is officially supported or not, though...\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 8 Jul 2020 15:19:34 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Performing partition pruning using row value" }, { "msg_contents": "Fujii-san\r\n\r\nWednesday, July 8, 2020 3:20 PM, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\r\n> Seems we can do partition pruning even in Kato-san's case by dong\r\n> \r\n> create 
type hoge as (c1 int, c2 int);\r\n> create table a( c1 int, c2 int, c3 varchar) partition by range(((c1, c2)::hoge));\r\n> create table a1 partition of a for values from((0, 0)) to ((100, 100)); create table\r\n> a2 partition of a for values from((100, 100)) to ((200, 200)); explain select * from\r\n> a where (c1, c2)::hoge < (99, 99)::hoge;\r\n\r\nI hadn't thought of it that way. Thanks.\r\n\r\nRegards, \r\nSho kato\r\n> -----Original Message-----\r\n> From: Fujii Masao <masao.fujii@oss.nttdata.com>\r\n> Sent: Wednesday, July 8, 2020 3:20 PM\r\n> To: Kato, Sho/加藤 翔 <kato-sho@fujitsu.com>; 'Amit Langote'\r\n> <amitlangote09@gmail.com>\r\n> Cc: Etsuro Fujita <etsuro.fujita@gmail.com>; PostgreSQL-development\r\n> <pgsql-hackers@postgresql.org>\r\n> Subject: Re: Performing partition pruning using row value\r\n> \r\n> \r\n> \r\n> On 2020/07/08 13:25, kato-sho@fujitsu.com wrote:\r\n> > Amit-san\r\n> >\r\n> > On Wednesday, July 8, 2020 11:53 AM, Amit Langote\r\n> <amitlangote09@gmail.com>:\r\n> >> I think the only reason that this is not supported is that I hadn't\r\n> >> tested such a query when developing partition pruning, nor did anyone\r\n> >> else suggest doing so. 
:)\r\n> \r\n> Seems we can do partition pruning even in Kato-san's case by doing\r\n> \r\n> create type hoge as (c1 int, c2 int);\r\n> create table a( c1 int, c2 int, c3 varchar) partition by range(((c1, c2)::hoge));\r\n> create table a1 partition of a for values from((0, 0)) to ((100, 100)); create table\r\n> a2 partition of a for values from((100, 100)) to ((200, 200)); explain select * from\r\n> a where (c1, c2)::hoge < (99, 99)::hoge;\r\n> \r\n> I'm not sure if this method is officially supported or not, though...\r\n> \r\n> Regards,\r\n> \r\n> --\r\n> Fujii Masao\r\n> Advanced Computing Technology Center\r\n> Research and Development Headquarters\r\n> NTT DATA CORPORATION\r\n", "msg_date": "Wed, 8 Jul 2020 06:35:50 +0000", "msg_from": "\"kato-sho@fujitsu.com\" <kato-sho@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Performing partition pruning using row value" }, { "msg_contents": "Hi,\r\n\r\nI made a patch that enables partition pruning using row-wise comparison.\r\nPlease review and comment on this patch.\r\n\r\nregards, \r\nsho kato\r\n> -----Original Message-----\r\n> From: kato-sho@fujitsu.com <kato-sho@fujitsu.com>\r\n> Sent: Wednesday, July 8, 2020 10:33 AM\r\n> To: 'Etsuro Fujita' <etsuro.fujita@gmail.com>\r\n> Cc: PostgreSQL-development <pgsql-hackers@postgresql.org>\r\n> Subject: RE: Performing partition pruning using row value\r\n> \r\n> Fujita san\r\n> \r\n> On Tuesday, July 7, 2020 6:31 PM Etsuro Fujita <etsuro.fujita@gmail.com>\r\n> wrote:\r\n> > Just to be clear, the condition (c1, c2) < (99, 99) is not equivalent\r\n> > to the condition c1 < 99 and c2 < 99 (see the documentation note in [1]).\r\n> \r\n> Thanks for sharing this document. 
I have understood.\r\n> \r\n> > but I don't think the main reason for that is that it takes time to\r\n> > parse expressions.\r\n> > Yeah, I think it's great to support row-wise comparison not only with\r\n> > the small number of args but with the large number of them.\r\n> \r\n> These comments are very helpful.\r\n> Ok, I try to make POC that allows row-wise comparison with partition-pruning.\r\n> \r\n> Regards,\r\n> sho kato\r\n> > -----Original Message-----\r\n> > From: Etsuro Fujita <etsuro.fujita@gmail.com>\r\n> > Sent: Tuesday, July 7, 2020 6:31 PM\r\n> > To: Kato, Sho/加藤 翔 <kato-sho@fujitsu.com>\r\n> > Cc: PostgreSQL-development <pgsql-hackers@postgresql.org>\r\n> > Subject: Re: Performing partition pruning using row value\r\n> >\r\n> > Kato-san,\r\n> >\r\n> > On Mon, Jul 6, 2020 at 5:25 PM kato-sho@fujitsu.com\r\n> > <kato-sho@fujitsu.com>\r\n> > wrote:\r\n> > > I would like to ask about the conditions under which partition\r\n> > > pruning is\r\n> > performed.\r\n> > > In PostgreSQL 12, when I executed following SQL, partition pruning\r\n> > > is not\r\n> > performed.\r\n> > >\r\n> > > postgres=# explain select * from a where (c1, c2) < (99, 99);\r\n> > > QUERY PLAN\r\n> > > ----------------------------------------------------------------\r\n> > > Append (cost=0.00..60.00 rows=800 width=40)\r\n> > > -> Seq Scan on a1 a_1 (cost=0.00..28.00 rows=400 width=40)\r\n> > > Filter: (ROW(c1, c2) < ROW(99, 99))\r\n> > > -> Seq Scan on a2 a_2 (cost=0.00..28.00 rows=400 width=40)\r\n> > > Filter: (ROW(c1, c2) < ROW(99, 99))\r\n> > > (5 rows)\r\n> > >\r\n> > > However, pruning is performed when I changed the SQL as follows.\r\n> > >\r\n> > > postgres=# explain select * from a where c1 < 99 and c2 < 99;\r\n> > > QUERY PLAN\r\n> > > --------------------------------------------------------\r\n> > > Seq Scan on a1 a (cost=0.00..28.00 rows=133 width=40)\r\n> > > Filter: ((c1 < 99) AND (c2 < 99))\r\n> > > (2 rows)\r\n> >\r\n> > Just to be clear, the condition (c1, c2) < (99, 
99) is not equivalent\r\n> > to the condition c1 < 99 and c2 < 99 (see the documentation note in [1]).\r\n> >\r\n> > > Looking at the code, \"(c1, c2) < (99, 99)\" is recognized as\r\n> > > RowCompExpr and\r\n> \"c1 < 99 and c2 < 99\" is recognized combination of OpExpr.\r\n> > >\r\n> > > Currently, pruning is not performed for RowCompExpr, is this correct?\r\n> >\r\n> > Yeah, I think so.\r\n> >\r\n> > > Because it would take a long time to parse all Expr nodes, does\r\n> > match_clause_to_partition_key() return PART_CLAUSE_UNSUPPORTED\r\n> when\r\n> > such Expr node is passed?\r\n> >\r\n> > I don't know the reason why that function doesn't support row-wise\r\n> > comparison, but I don't think the main reason for that is that it\r\n> > takes time to parse expressions.\r\n> >\r\n> > > If the number of args in RowCompExpr is small, I would think that\r\n> > > expanding\r\n> > it would improve performance.\r\n> >\r\n> > Yeah, I think it's great to support row-wise comparison not only with\r\n> > the small number of args but with the large number of them.\r\n> >\r\n> > Best regards,\r\n> > Etsuro Fujita\r\n> >\r\n> > [1]\r\n> > https://www.postgresql.org/docs/current/functions-comparisons.html#ROW\r\n> > -\r\n> > WISE-COMPARISON", "msg_date": "Thu, 9 Jul 2020 08:43:06 +0000", "msg_from": "\"kato-sho@fujitsu.com\" <kato-sho@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Performing partition pruning using row value" }, { "msg_contents": "Kato-san,\n\nOn Thu, Jul 9, 2020 at 5:43 PM kato-sho@fujitsu.com\n<kato-sho@fujitsu.com> wrote:\n> I made a patch that enable partition pruning using row-wise comparison.\n> Please review and comment on this patch.\n\nPlease add the patch to the next CF so that it does not get lost.\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 9 Jul 2020 19:45:58 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Performing partition pruning using row value" }, { 
"msg_contents": "\n\nOn 2020/07/09 19:45, Etsuro Fujita wrote:\n> Kato-san,\n> \n> On Thu, Jul 9, 2020 at 5:43 PM kato-sho@fujitsu.com\n> <kato-sho@fujitsu.com> wrote:\n>> I made a patch that enable partition pruning using row-wise comparison.\n>> Please review and comment on this patch.\n\nThanks for the patch!\n\n\n> Please add the patch to the next CF so that it does not get lost.\n\nIs this a bug rather than new feature?\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 9 Jul 2020 19:57:22 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Performing partition pruning using row value" }, { "msg_contents": "Fujii-san,\n\nOn Thu, Jul 9, 2020 at 7:57 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2020/07/09 19:45, Etsuro Fujita wrote:\n> > Please add the patch to the next CF so that it does not get lost.\n>\n> Is this a bug rather than new feature?\n\nI think it's a limitation rather than a bug that partition pruning\ndoesn't support row-wise comparison, so I think the patch is a new\nfeature.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 10 Jul 2020 09:35:53 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Performing partition pruning using row value" }, { "msg_contents": "On Fri, Jul 10, 2020 at 9:35 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Thu, Jul 9, 2020 at 7:57 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > On 2020/07/09 19:45, Etsuro Fujita wrote:\n> > > Please add the patch to the next CF so that it does not get lost.\n> >\n> > Is this a bug rather than new feature?\n>\n> I think it's a limitation rather than a bug that partition pruning\n> doesn't support row-wise comparison, so I think the patch is a new\n> feature.\n\nI tend to think so too. 
IMO, partition pruning, like any other\noptimization, works on a best-effort basis. If the result it produces\nis wrong, now that would be a bug, but I don't think that's the case\nhere. However, I do think it was a bit unfortunate that we failed to\nconsider RowCompare expressions when developing partition pruning\ngiven, that index scans are already able to match them.\n\nSpeaking of which, I hope that Kato-san has looked at functions\nmatch_rowcompare_to_indexcol(), expand_indexqual_rowcompare(), etc. in\nindxpath.c as starting points for the code to match RowCompares to\npartition keys.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 10 Jul 2020 10:00:17 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Performing partition pruning using row value" }, { "msg_contents": "Amit-san\r\nFriday, July 10, 2020 10:00 AM, Amit Langote <amitlangote09@gmail.com> wrote:\r\n>Speaking of which, I hope that Kato-san has looked at functions match_rowcompare_to_indexcol(), expand_indexqual_rowcompare(), etc. in indxpath.c as starting points >for the code to match RowCompares to partition keys.\r\n\r\nHmm, I did not look at these functions. 
So, after looking at these functions and modifying this patch, I would like to add this patch to the next CF.\r\nthanks for providing this information.\r\n\r\nregards, \r\nsho kato\r\n", "msg_date": "Fri, 10 Jul 2020 02:07:50 +0000", "msg_from": "\"kato-sho@fujitsu.com\" <kato-sho@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Performing partition pruning using row value" }, { "msg_contents": ">So, after looking at these functions and modifying this patch, I would like to add this patch to the next\r\n\r\nI updated this patch and registered for the next CF .\r\n\r\nhttps://commitfest.postgresql.org/29/2654/\r\n\r\nregards, \r\nsho kato", "msg_date": "Tue, 21 Jul 2020 08:24:49 +0000", "msg_from": "\"kato-sho@fujitsu.com\" <kato-sho@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Performing partition pruning using row value" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nI have performed testing of the patch with row comparison partition pruning scenarios, it is working well. I didn't code review hence not changing the status.", "msg_date": "Wed, 19 Aug 2020 15:12:27 +0000", "msg_from": "ahsan hadi <ahsan.hadi@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Performing partition pruning using row value" }, { "msg_contents": "On 21.07.2020 11:24, kato-sho@fujitsu.com wrote:\n>> So, after looking at these functions and modifying this patch, I would like to add this patch to the next\n> I updated this patch and registered for the next CF .\n>\n> https://commitfest.postgresql.org/29/2654/\n>\n> regards,\n> sho kato\n\nThank you for working on this improvement. 
I took a look at the code.\n\n1) This piece of code is unneeded:\n\n             switch (get_op_opfamily_strategy(opno, partopfamily))\n             {\n                 case BTLessStrategyNumber:\n                 case BTLessEqualStrategyNumber:\n                 case BTGreaterEqualStrategyNumber:\n                 case BTGreaterStrategyNumber:\n\nSee the comment for RowCompareExpr, which states that \"A RowCompareExpr \nnode is only generated for the < <= > >= cases\".\n\n2) It's worth to add a regression test for this feature.\n\nOther than that, the patch looks good to me.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Tue, 16 Feb 2021 17:07:28 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Performing partition pruning using row value" }, { "msg_contents": "On 2/16/21 9:07 AM, Anastasia Lubennikova wrote:\n> On 21.07.2020 11:24, kato-sho@fujitsu.com wrote:\n>>> So, after looking at these functions and modifying this patch, I \n>>> would like to add this patch to the next\n>> I updated this patch and registered for the next CF .\n>>\n>> https://commitfest.postgresql.org/29/2654/\n>>\n>> regards,\n>> sho kato\n> \n> Thank you for working on this improvement. 
I took a look at the code.\n> \n> 1) This piece of code is unneeded:\n> \n>             switch (get_op_opfamily_strategy(opno, partopfamily))\n>             {\n>                 case BTLessStrategyNumber:\n>                 case BTLessEqualStrategyNumber:\n>                 case BTGreaterEqualStrategyNumber:\n>                 case BTGreaterStrategyNumber:\n> \n> See the comment for RowCompareExpr, which states that \"A RowCompareExpr \n> node is only generated for the < <= > >= cases\".\n> \n> 2) It's worth to add a regression test for this feature.\n> \n> Other than that, the patch looks good to me.\n\nThis patch has been Waiting on Author for several months, so marking \nReturned with Feedback.\n\nPlease resubmit to the next CF when you have a new patch.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Thu, 8 Apr 2021 11:04:28 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Performing partition pruning using row value" } ]
[ { "msg_contents": "The previous discussion of automatic partition creation [1] has \naddressed static and dynamic creation of partitions and ended up with \nseveral syntax proposals.\nIn this thread, I want to continue this work.\n\nAttached is PoC for static partition creation. The patch core is quite \nstraightforward. It adds one more transform clause to convert given \npartitioning specification into several CREATE TABLE statements.\n\nThe patch implements following syntax:\n\nCREATE TABLE ... PARTITION BY partition_method (list_of_columns)\npartition_auto_create_clause\n\nwhere partition_auto_create_clause is\n\nCONFIGURATION [IMMEDIATE| DEFERRED] USING partition_bound_spec\n\nand partition_bound_spec is:\n\nMODULUS integer | VALUES IN (expr [,...]) [, ....] |  INTERVAL \nrange_step FROM range_start TO range_end\n\nFor more examples check auto_partitions.sql in the patch.\n\nTODO:\n\n- CONFIGURATION is just an existing keyword, that I picked as a stub.\n  Ideas on better wording are welcome.\n\n- IMMEDIATE| DEFERRED is optional, DEFERRED is not implemented yet\nI wonder, is it worth placing a stub for dynamic partitioning, or we can \nrather add these keywords later.\n\n- HASH and LIST static partitioning works as expected.\nTesting and feedback are welcome.\n\n- RANGE partitioning is not really implemented in this patch.\nNow it only accepts interval data type as 'interval' and respectively \ndate types as range_start and range_end expressions.\nOnly one partition is created. I found it difficult to implement the \ngeneration of bounds using internal functions and data types.\nBoth existing solutions (pg_pathman and pg_partman) rely on SQL level \nroutines [2].\nI am going to implement this via SPI, which allow to simplify checks and \ncalculations. Do you see any pitfalls in this approach?\n\n- Partition naming. Now partition names for all methods look like \n$tablename_$partnum\nDo we want more intelligence here? 
Now we have \nRunObjectPostCreateHook(), which allows to rename the table.\nTo make it more user-friendly, we can later implement pl/pgsql function \nthat sets the callback, as it is done in pg_pathman set_init_callback() [3].\n\n- Current design doesn't allow to create default partition \nautomatically. Do we need this functionality?\n\n- Do you see any restrictions for future extensibility (dynamic \npartitioning, init_callback, etc.) in the proposed design ?\n\nI expect this to be a long discussion, so here is the wiki page [4] to \nfix important questions and final agreements.\n\n[1] \nhttps://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1907150711080.22273%40lancre\n[2] \nhttps://github.com/postgrespro/pg_pathman/blob/dbcbd02e411e6acea6d97f572234746007979538/range.sql#L99\n[3] https://github.com/postgrespro/pg_pathman#additional-parameters\n[4] https://wiki.postgresql.org/wiki/Declarative_partitioning_improvements\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 6 Jul 2020 13:45:52 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Proposal: Automatic partition creation" }, { "msg_contents": "On Mon, Jul 06, 2020 at 01:45:52PM +0300, Anastasia Lubennikova wrote:\n> The previous discussion of automatic partition creation [1] has addressed\n> static and dynamic creation of partitions and ended up with several syntax\n> proposals.\n...\n> where partition_auto_create_clause is\n> \n> CONFIGURATION [IMMEDIATE| DEFERRED] USING partition_bound_spec\n\n> - IMMEDIATE| DEFERRED is optional, DEFERRED is not implemented yet\n> I wonder, is it worth placing a stub for dynamic partitioning, or we can\n> rather add these keywords later.\n\nI understand by \"deferred\" you mean that the partition isn't created at the\ntime CREATE TABLE is run but rather deferred until needed by INSERT.\n\nFor deferred, range partitioned tables, 
I think maybe what you'd want to\nspecify (and store) is the INTERVAL. If the table is partitioned by day, then\nwe'd date_trunc('day', time) and dynamically create that day. But if it was\npartitioned by month, we'd create the month. I think you'd want to have an\nALTER command for that (we would use that to change tables between\ndaily/monthly based on their current size). That should also support setting\nthe MODULUS of a HASH partitioned table, to allow changing the size of its\npartitions (currently, the user would have to more or less recreate the table\nand move all its data into different partitions, but that's not ideal).\n\nI don't know if it's important for anyone, but it would be interesting to think\nabout supporting sub-partitioning: partitions which are themselves partitioned.\nLike something => something_YYYY => something_YYYY_MM => something_YYYY_MM_DD.\nYou'd need to specify how to partition each layer of the hierarchy. In the\nmost general case, it could be a different partition strategy.\n\nIf you have a callback function for partition renaming, I think you'd want to\npass it not just the current name of the partition, but also the \"VALUES\" used\nin partition creation. Like (2020-04-05)TO(2020-05-06). Maybe instead, we'd\nallow setting a \"format\" to use to construct the partition name. Like\n\"child.foo_bar_%Y_%m_%d\". Ideally, the formats would be fixed-length\n(zero-padded, etc), so failures with length can happen at \"parse\" time of the\nstatement and not at \"run\" time of the creation. You'd still have to handle\nthe case that the name already exists but isn't a partition (or is a partition\nthat doesn't handle the incoming tuple for some reason).\n\nAlso, maybe your \"configuration\" syntax would allow specifying other values.\nMaybe including a retention period (as an INTERVAL for RANGE tables). 
That's\nuseful if you had a command to PRUNE the oldest partitions, like ALTER..PRUNE.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 6 Jul 2020 09:59:47 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Automatic partition creation" }, { "msg_contents": "On Mon, Jul 6, 2020 at 6:46 AM Anastasia Lubennikova\n<a.lubennikova@postgrespro.ru> wrote:\n> CREATE TABLE ... PARTITION BY partition_method (list_of_columns)\n> partition_auto_create_clause\n>\n> where partition_auto_create_clause is\n>\n> CONFIGURATION [IMMEDIATE| DEFERRED] USING partition_bound_spec\n>\n> and partition_bound_spec is:\n>\n> MODULUS integer | VALUES IN (expr [,...]) [, ....] | INTERVAL\n> range_step FROM range_start TO range_end\n\nMight be good to compare this to what other databases support.\n\n> - IMMEDIATE| DEFERRED is optional, DEFERRED is not implemented yet\n> I wonder, is it worth placing a stub for dynamic partitioning, or we can\n> rather add these keywords later.\n\nI think we should not add any keywords we don't need immediately - and\nshould seek to minimize the number of new keywords that we need to\nadd, though compatibility with other implementations might be a good\nreason for accepting some new ones.\n\n> - HASH and LIST static partitioning works as expected.\n> Testing and feedback are welcome.\n>\n> - RANGE partitioning is not really implemented in this patch.\n> Now it only accepts interval data type as 'interval' and respectively\n> date types as range_start and range_end expressions.\n> Only one partition is created. I found it difficult to implement the\n> generation of bounds using internal functions and data types.\n> Both existing solutions (pg_pathman and pg_partman) rely on SQL level\n> routines [2].\n> I am going to implement this via SPI, which allow to simplify checks and\n> calculations. Do you see any pitfalls in this approach?\n\nI don't really see why we need SPI here. 
Why can't we just try to\nevaluate the expression and see if we get a constant of the right\ntype, then use that?\n\nI think the big problem here is identifying the operator to use. We\nhave no way of identifying the \"plus\" or \"minus\" operator associated\nwith a datatype; indeed, that constant doesn't exist. So either we (a)\nlimit this to a short list of data types and hard-code the operators\nto be used (which is kind of sad given how extensible our type system\nis) or we (b) invent some new mechanism for identifying the +/-\noperators that should be used for a datatype, which was also proposed\nin the context of some previous discussion of window framing options,\nbut which I don't think ever went anywhere (which is a lot of work) or\nwe (c) just look for operators called '+' and/or '-' by operator name\n(which will probably make Tom throw up in his mouth a little).\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 6 Jul 2020 11:45:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Automatic partition creation" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jul 6, 2020 at 6:46 AM Anastasia Lubennikova\n> <a.lubennikova@postgrespro.ru> wrote:\n>> I am going to implement this via SPI, which allow to simplify checks and\n>> calculations. Do you see any pitfalls in this approach?\n\n> I don't really see why we need SPI here.\n\nI would vote against any core facility that is implemented via SPI\nqueries. It is just too darn hard to control the semantics completely in\nthe face of fun stuff like varying search_path. Look at what a mess the\nqueries generated by the RI triggers are --- and they only have a very\nsmall set of behaviors to worry about. 
I'm still only about 95% confident\nthey don't have security issues, too.\n\nIf you're using SPI to try to look up appropriate operators, I think\nthe chances of being vulnerable to security problems are 100%.\n\n> I think the big problem here is identifying the operator to use. We\n> have no way of identifying the \"plus\" or \"minus\" operator associated\n> with a datatype; indeed, that constant doesn't exist.\n\nWe did indeed solve this in connection with window functions, cf\n0a459cec9. I may be misunderstanding what the problem is here,\nbut I think trying to reuse that infrastructure might help.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Jul 2020 12:10:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Automatic partition creation" }, { "msg_contents": "On Mon, Jul 6, 2020 at 12:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> We did indeed solve this in connection with window functions, cf\n> 0a459cec9. I may be misunderstanding what the problem is here,\n> but I think trying to reuse that infrastructure might help.\n\nAh, nice. I didn't realize that we'd added that. But I'm not sure that\nit helps here, because I think we need to compute the end of the\nrange, not just test whether something is in a range. Like, if someone\nwants monthly range partitions starting on 2020-01-01, we need to be\nable to figure out that the subsequent months start on 2020-02-01,\n2020-03-01, 2020-04-01, etc. 
Is there a way to use in_range to achieve\nthat?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 7 Jul 2020 10:22:47 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Automatic partition creation" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jul 6, 2020 at 12:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> We did indeed solve this in connection with window functions, cf\n>> 0a459cec9. I may be misunderstanding what the problem is here,\n>> but I think trying to reuse that infrastructure might help.\n\n> Ah, nice. I didn't realize that we'd added that. But I'm not sure that\n> it helps here, because I think we need to compute the end of the\n> range, not just test whether something is in a range.\n\nYeah, I was thinking about that later, and I agree that the in_range\nsupport function doesn't quite do the job. But we could expand on the\nprinciple, and register addition (and subtraction?) 
functions as btree\nsupport functions under the same rules as for in_range functions.\n\nThe reason in_range isn't just addition is that we wanted it to be able\nto give correct answers even in cases where addition would overflow.\nThat's still valid for that use-case, but it doesn't apply here.\n\nSo it'd be something like \"btree support function 4, registered under\namproclefttype x and amprocrighttype y, must have the signature\n\tplus(x, y) returns x\nand it gives results compatible with the opfamily's ordering of type x\".\nSimilarly for subtraction if we think we need that.\n\nI'm not sure if we need a formal notion of what \"compatible results\"\nmeans, but it probably would be something like \"if x < z according to the\nopfamily sort ordering, then plus(x, y) < plus(z, y) for any given y\".\nNow this falls to the ground when y is a weird value like Inf or NaN,\nbut we'd want to exclude those as partitioning values anyway. Do we\nalso need some datatype-independent way of identifying such \"weird\nvalues\"?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Jul 2020 11:09:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Automatic partition creation" }, { "msg_contents": "Hello Anastasia,\n\nMy 0.02 €:\n\n> The patch implements following syntax:\n>\n> CREATE TABLE ... PARTITION BY partition_method (list_of_columns)\n> partition_auto_create_clause\n>\n> where partition_auto_create_clause is\n>\n> CONFIGURATION [IMMEDIATE| DEFERRED] USING partition_bound_spec\n>\n> and partition_bound_spec is:\n>\n> MODULUS integer | VALUES IN (expr [,...]) [, ....] 
|  INTERVAL range_step \n> FROM range_start TO range_end\n\nISTM That we should avoid new specific syntaxes when possible, and prefer \nfree keyword option style, like it is being discussed for some other \ncommands, because it reduces the impact on the parser.\n\nThat would suggest a more versatile partition_bound_spec which could look \nlike (<keyword> <constant-or-maybe-even-expr>[, …]):\n\nFor modulus, looks easy:\n\n (MODULUS 8)\n\nFor interval, maybe something like:\n\n (STEP ..., FROM/START ..., TO/END ...)\n\nThe key point is that for dynamic partitioning there would be no need for \nboundaries, so that it could just set a point and an interval\n\n (START/INIT/FROM??? ..., STEP ...)\n\nFor lists of values, probably it would make little sense to have dynamic \npartitioning? Or maybe yes, if we could partition on a column \nvalue/expression?! eg \"MOD(id, 8)\"??\n\nWhat about pg_dump? Should it be able to regenerate the initial create?\n\n> [4] https://wiki.postgresql.org/wiki/Declarative_partitioning_improvements\n\nGood point, a wiki is better than a thread for that type of things. I'll \nlook at this page.\n\n-- \nFabien.", "msg_date": "Wed, 8 Jul 2020 06:53:52 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Proposal: Automatic partition creation" }, { "msg_contents": "On Wed, Jul 8, 2020 at 10:24 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Anastasia,\n>\n> My 0.02 €:\n>\n> > The patch implements following syntax:\n> >\n> > CREATE TABLE ... PARTITION BY partition_method (list_of_columns)\n> > partition_auto_create_clause\n> >\n> > where partition_auto_create_clause is\n> >\n> > CONFIGURATION [IMMEDIATE| DEFERRED] USING partition_bound_spec\n> >\n> > and partition_bound_spec is:\n> >\n> > MODULUS integer | VALUES IN (expr [,...]) [, ....] 
| INTERVAL range_step\n> > FROM range_start TO range_end\n>\n> ISTM That we should avoid new specific syntaxes when possible, and prefer\n> free keyword option style, like it is being discussed for some other\n> commands, because it reduces the impact on the parser.\n>\n> That would suggest a more versatile partition_bound_spec which could look\n> like (<keyword> <constant-or-maybe-even-expr>[, …]):\n>\n> For modulus, looks easy:\n>\n> (MODULUS 8)\n>\n> For interval, maybe something like:\n>\n> (STEP ..., FROM/START ..., TO/END ...)\n>\n> The key point is that for dynamic partitioning there would be no need for\n> boundaries, so that it could just set a point and an interval\n>\n> (START/INIT/FROM??? ..., STEP ...)\n>\n> For lists of values, probably it would make little sense to have dynamic\n> partitioning? Or maybe yes, if we could partition on a column\n> value/expression?! eg \"MOD(id, 8)\"??\n>\n> What about pg_dump? Should it be able to regenerate the initial create?\n>\nI don't think this is needed for the proposed \"Automatic partitioning (static)\"\nwhich generates a bunch of CREATE TABLE statements, IIUC. Might be needed later\nfor \"Automatic partitioning (dynamic)\" where dynamic specifications need to be\nstored.\n\n> > [4] https://wiki.postgresql.org/wiki/Declarative_partitioning_improvements\n>\n> Good point, a wiki is better than a thread for that type of things. I'll\n> look at this page.\n+1\n\nRegards,\nAmul\n\n\n", "msg_date": "Wed, 8 Jul 2020 11:14:40 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: Automatic partition creation" }, { "msg_contents": "On 06.07.2020 19:10, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> On Mon, Jul 6, 2020 at 6:46 AM Anastasia Lubennikova\n>> <a.lubennikova@postgrespro.ru> wrote:\n>>> I am going to implement this via SPI, which allow to simplify checks and\n>>> calculations. 
Do you see any pitfalls in this approach?\n>> I don't really see why we need SPI here.\n> I would vote against any core facility that is implemented via SPI\n> queries. It is just too darn hard to control the semantics completely in\n> the face of fun stuff like varying search_path. Look at what a mess the\n> queries generated by the RI triggers are --- and they only have a very\n> small set of behaviors to worry about. I'm still only about 95% confident\n> they don't have security issues, too.\n>\n> If you're using SPI to try to look up appropriate operators, I think\n> the chances of being vulnerable to security problems are 100%.\nGood to know, thank you for that. I had doubts about the internal usage \nof SPI,\nbut didn't know what exactly can go wrong.\n\n>\n>> I think the big problem here is identifying the operator to use. We\n>> have no way of identifying the \"plus\" or \"minus\" operator associated\n>> with a datatype; indeed, that constant doesn't exist.\n> We did indeed solve this in connection with window functions, cf\n> 0a459cec9. I may be misunderstanding what the problem is here,\n> but I think trying to reuse that infrastructure might help.\n\nDo we need to introduce a new support function? Is there a reason why we \ncan\nnot rely on '+' operator? I understand that the addition operator may \nlack or\nbe overloaded for some complex datatypes, but I haven't found any \nexamples that\nare useful for range partitioning. 
Both pg_pathman and pg_partman also \nuse '+'\nto generate bounds.\n\nI explored the code a bit more and came up with this function, which is \nvery\nsimilar to generate_series_* functions, but it doesn't use SPI and looks \nfor\nthe function that implements the '+' operator, instead of direct call:\n\n// almost pseudocode\n\nstatic Const *\ngenerate_next_bound(Const *start, Const *interval)\n{\n     ObjectWithArgs *sum_oper_object = makeNode(ObjectWithArgs);\n\n     sum_oper_object->type = OBJECT_OPERATOR;\n     /* hardcode '+' operator for addition */\n     sum_oper_object->objname = list_make1(makeString(\"+\"));\n\n     ltype = makeTypeNameFromOid(start->consttype, start->consttypmod);\n     rtype = makeTypeNameFromOid(interval->consttype, \ninterval->consttypmod);\n\n     sum_oper_object->objargs = list_make2(ltype, rtype);\n\n     sum_oper_oid = LookupOperWithArgs(sum_oper_object, false);\n     oprcode = get_opcode(sum_oper_oid);\n     fmgr_info(oprcode, &opproc);\n\nnext_bound->constvalue = FunctionCall2(&opproc,\n                              start->constvalue,\n                              interval->constvalue);\n}\n\nThoughts?\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company
", "msg_date": "Mon, 13 Jul 2020 21:01:28 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Proposal: Automatic partition creation" }, { "msg_contents": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru> writes:\n> On 06.07.2020 19:10, Tom Lane wrote:\n>> Robert Haas <robertmhaas@gmail.com> writes:\n>>> I think the big problem here is identifying the operator to use. We\n>>> have no way of identifying the \"plus\" or \"minus\" operator associated\n>>> with a datatype; indeed, that constant doesn't exist.\n\n>> We did indeed solve this in connection with window functions, cf\n>> 0a459cec9.
I may be misunderstanding what the problem is here,\n>> but I think trying to reuse that infrastructure might help.\n\n> Do we need to introduce a new support function? Is there a reason why we \n> can not rely on '+' operator?\n\n(1) the appropriate operator might not be named '+'\n(2) even if it is, it might not be in your search_path\n(3) you're vulnerable to security problems from someone capturing the\n '+' operator with a better match; since you aren't writing the\n operator explicitly, you can't fix that by qualifying it\n(4) if the interval constant is written as an undecorated string\n literal, the parser may have trouble resolving a match at all\n\n> I understand that the addition operator may lack or be overloaded for\n> some complex datatypes, but I haven't found any examples that are useful\n> for range partitioning.\n\n\"It works for all the built-in data types\" isn't really a satisfactory\nanswer. But even just in the built-in types, consider \"date\":\n\n# select oid::regoperator from pg_operator where oprname ='+' and oprleft = 'date'::regtype;\n oid \n--------------------------------\n +(date,interval)\n +(date,integer)\n +(date,time without time zone)\n +(date,time with time zone)\n(4 rows)\n\nIt's not that immediately obvious which of these would make sense to use.\n\nBut the short answer here is that we did not accept relying on '+' being\nthe right thing for window function ranges, and I don't see why it is more\nacceptable for partitioning ranges. 
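The ambiguity can be illustrated outside the backend. Below is a toy model in plain Python (not PostgreSQL internals): a catalog consulted by operator name and left operand type alone returns several candidates for date '+', mirroring the pg_operator query output above, while a lookup keyed by (name, left type, right type) -- which is what LookupOperWithArgs does for an explicitly spelled operator -- has a single answer. The catalog entries and procedure names here are illustrative assumptions.

```python
# Toy model of operator lookup -- plain Python, not PostgreSQL internals.
# Entries mirror the four '+' operators listed above for the built-in
# date type; the procedure names are illustrative.
CATALOG = {
    ("+", "date", "interval"): "date_pl_interval",
    ("+", "date", "integer"): "date_pli",
    ("+", "date", "time"): "datetime_pl",
    ("+", "date", "timetz"): "datetimetz_pl",
}

def candidates_by_name(name, left):
    """Name-only lookup: may return several candidate procedures."""
    return sorted(proc for (op, l, _r), proc in CATALOG.items()
                  if op == name and l == left)

def lookup_with_args(name, left, right):
    """Exact lookup by operand types: unambiguous, or an error."""
    try:
        return CATALOG[(name, left, right)]
    except KeyError:
        raise LookupError(f"operator does not exist: {left} {name} {right}")

print(len(candidates_by_name("+", "date")))       # 4 ambiguous candidates
print(lookup_with_args("+", "date", "interval"))  # date_pl_interval
```

The exact lookup either resolves to one procedure or fails loudly, which is the behavior one wants from bound generation.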
The existing places where our parser\nrelies on implicit operator names are, without exception, problematic [1].\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/ffefc172-a487-aa87-a0e7-472bf29735c8%40gmail.com\n\n\n", "msg_date": "Mon, 13 Jul 2020 15:01:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: Automatic partition creation" }, { "msg_contents": "On 06.07.2020 13:45, Anastasia Lubennikova wrote:\n> The previous discussion of automatic partition creation [1] has \n> addressed static and dynamic creation of partitions and ended up with \n> several syntax proposals.\n> In this thread, I want to continue this work.\n>\n> ...\n> [1] \n> https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1907150711080.22273%40lancre\n\nSyntax proposal v2, that takes into account received feedback.\n\nI compared the syntax of other databases. You can find an overview here \n[1]. It\nseems that there is no industry standard, so every DBMS has its own\nimplementation. I decided to rely on a Greenplum syntax, as the most \nsimilar to\nthe original PostgreSQL syntax.\n\nNew proposal is:\n\nCREATE TABLE numbers(int number)\nPARTITION BY partition_method (list_of_columns)\nUSING (partition_desc)\n\nwhere partition_desc is:\n\nMODULUS n\n| VALUES IN (value_list), [DEFAULT PARTITION part_name]\n| START ([datatype] 'start_value')\n END ([datatype] 'end_value')\n EVERY (partition_step), [DEFAULT PARTITION part_name]\n\nwhere partition_step is:\n[datatype] [number | INTERVAL] 'interval_value'\n \nexample:\n\nCREATE TABLE years(int year)\nPARTITION BY RANGE (year)\nUSING\n(START (2006) END (2016) EVERY (1),\nDEFAULT PARTITION other_years);\n\nIt is less wordy than the previous version. It uses a free keyword option\nstyle. 
It covers static partitioning for all methods, default partition for\nlist and range methods, and can be extended to implement dynamic \npartitioning\nfor range partitions.\n\n[1] \nhttps://wiki.postgresql.org/wiki/Declarative_partitioning_improvements#Other_DBMS\n[2] \nhttps://wiki.postgresql.org/wiki/Declarative_partitioning_improvements#Proposal_.28is_subject_to_change.29\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company
", "msg_date": "Tue, 14 Jul 2020 00:11:56 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Proposal: Automatic partition creation" }, { "msg_contents": "On 06.07.2020 17:59, Justin Pryzby wrote:\n> I think you'd want to have an\n> ALTER command for that (we would use that to change tables between\n> daily/monthly based on their current size). That should also support setting\n> the MODULUS of a HASH partitioned table, to allow changing the size of its\n> partitions (currently, the user would have to more or less recreate the table\n> and move all its data into different partitions, but that's not ideal).\nNew syntax fits to the ALTER command as well.\n\nALTER TABLE tbl\nPARTITION BY HASH (number)\nUSING (partition_desc)\n\nIn simple cases (i.e. range partitioning granularity), it will simply \nupdate\nthe rule of bound generation, saved in the catalog. More complex hash\npartitions will require some rebalancing. Though, the syntax is pretty\nstraightforward for all cases.
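The rebalancing cost can be sketched in a few lines (an illustrative sketch, not the patch's code): any row whose remainder differs under the new modulus must move to a different partition. Real PostgreSQL routes on a hash of the partition key rather than the raw value; plain modulus is used here only to keep the example self-contained.

```python
# Why changing MODULUS on an existing hash-partitioned table forces
# rebalancing. Plain modulus on the raw key stands in for the real
# hash-then-remainder routing, purely for illustration.

def route(key, modulus):
    """Partition number a row with this (pre-hashed) key lands in."""
    return key % modulus

def rows_to_move(keys, old_modulus, new_modulus):
    """Keys whose partition assignment changes when the modulus changes."""
    return [k for k in keys
            if route(k, old_modulus) != route(k, new_modulus)]

moved = rows_to_move(range(100), 4, 8)
print(len(moved))  # 48 of 100 rows land in a different partition
```

Going from MODULUS 4 to MODULUS 8 here moves roughly half the rows, which is why such an ALTER cannot be a pure catalog update.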
In the next versions, we can also add a\nCONCURRENTLY keyword to cover partitioning of an existing \nnon-partitioned table\nwith data.\n\n> I don't know if it's important for anyone, but it would be interesting to think\n> about supporting sub-partitioning: partitions which are themselvese partitioned.\n> Like something => something_YYYY => something_YYYY_MM => something_YYYY_MM_DD.\n> You'd need to specify how to partition each layer of the heirarchy. In the\n> most general case, it could be different partition strategy.\n\nI suppose it will be a natural extension of this work. Now we need to \nensure\nthat the proposed syntax is extensible. Greenplum syntax, which I choose \nas an\nexample, provides subpartition syntax as well.\n\n> If you have a callback function for partition renaming, I think you'd want to\n> pass it not just the current name of the partition, but also the \"VALUES\" used\n> in partition creation. Like (2020-04-05)TO(2020-05-06). Maybe instead, we'd\n> allow setting a \"format\" to use to construct the partition name. Like\n> \"child.foo_bar_%Y_%m_%d\". Ideally, the formats would be fixed-length\n> (zero-padded, etc), so failures with length can happen at \"parse\" time of the\n> statement and not at \"run\" time of the creation. You'd still have to handle\n> the case that the name already exists but isn't a partition (or is a partition\n> by doesn't handle the incoming tuple for some reason).\n\nIn callback design, I want to use the best from pg_pathman's \nset_init_callback().\nThe function accepts jsonb argument, which contains all the data about the\nparent table, bounds, and so on. This information can be used to \nconstruct name\nfor the partition and generate RENAME statement.\n\n> Also, maybe your \"configuration\" syntax would allow specifying other values.\n> Maybe including a retention period (as an INTERVAL for RANGE tables). 
That's\n> useful if you had a command to PRUNE the oldest partitions, like ALTER..PRUNE.\nIn this version, I got rid of the 'configuration' keyword. Speaking of\nretention, I think that it would be hard to cover all use-cases with a\ndeclarative syntax. While it is relatively easy to implement deletion \nwithin a\ncallback function. See rotation_callback example in pg_pathman [1].\n\n[1] \nhttps://github.com/postgrespro/pg_pathman/blob/79e11d94a147095f6e131e980033018c449f8e2e/sql/pathman_callbacks.sql#L107 \n\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company
", "msg_date": "Tue, 14 Jul 2020 00:14:54 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Proposal: Automatic partition creation" }, { "msg_contents": "On 14.07.2020 00:11, Anastasia Lubennikova wrote:\n> On 06.07.2020 13:45, Anastasia Lubennikova wrote:\n>> The previous discussion of automatic partition creation [1] has \n>> addressed static and dynamic creation of partitions and ended up with \n>> several syntax proposals.\n>> In this thread, I want to continue this work.\n>>\n>> ...\n>> [1] \n>> https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1907150711080.22273%40lancre\n>\n> Syntax proposal v2, that takes into account received feedback.\n>\n> CREATE TABLE numbers(int number)\n> PARTITION BY partition_method (list_of_columns)\n> USING (partition_desc)\n>\n> where partition_desc is:\n>\n> MODULUS n\n> | VALUES IN (value_list), [DEFAULT PARTITION part_name]\n> | START ([datatype] 'start_value')\n> END ([datatype] 'end_value')\n> EVERY (partition_step), [DEFAULT PARTITION part_name]\n>\n> where partition_step is:\n> [datatype] [number | INTERVAL] 'interval_value'\n>\n> It is less wordy than the previous version. It uses a free keyword option\n> style.
It covers static partitioning for all methods, default \n> partition for\n> list and range methods, and can be extended to implement dynamic \n> partitioning\n> for range partitions.\n>\n> [1] \n> https://wiki.postgresql.org/wiki/Declarative_partitioning_improvements#Other_DBMS\n> [2] \n> https://wiki.postgresql.org/wiki/Declarative_partitioning_improvements#Proposal_.28is_subject_to_change.29\n>\nHere is the patch for automated HASH and LIST partitioning, that \nimplements proposed syntax.\n\nRange partitioning is more complicated. It will require new support \nfunction to calculate bounds, new catalog attribute to store them and so \non. So I want to start small and implement automated range partitioning \nin a separate patch later.\n\n1) Syntax\n\nNew syntax is heavily based on Greenplum syntax for automated \npartitioning with one change. Keyword \"USING\", that was suggested above, \ncauses shift/reduce conflict with \"USING method\" syntax of a table \naccess method. It seems that Greenplum folks will face this problem later.\n\nI stick to CONFIGURATION as an existing keyword that makes sense in this \ncontext.\nAny better ideas are welcome.\n\nThus, current version is:\n\nCREATE TABLE table_name (attrs)\nPARTITION BY partition_method (list_of_columns)\nCONFIGURATION (partition_desc)\n\nwhere partition_desc is:\n\nMODULUS n\n| VALUES IN (value_list) [DEFAULT PARTITION part_name]\n\nThis syntax can be easily extended for range partitioning as well.\n\n2) Implementation\n\nPartitionBoundAutoSpec is a new part of PartitionSpec, that contains \ninformation needed to generate partition bounds.\n\nFor HASH and LIST automatic partition creation, transformation happens \nduring parse analysis of CREATE TABLE statement.\ntransformPartitionAutoCreate() calculates bounds and generates \nstatements to create partition tables.\n\nPartitions are named in a format: $tablename_$partnum. 
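For illustration, the naming scheme can be mimicked in a few lines (a hedged sketch, not the patch's implementation; the zero-based numbering and character-wise truncation are assumptions -- PostgreSQL truncates identifiers to NAMEDATALEN - 1, i.e. 63, bytes):

```python
# Minimal sketch of the $tablename_$partnum naming scheme described above.
NAMEDATALEN = 64  # PostgreSQL limits identifiers to NAMEDATALEN - 1 bytes

def partition_name(tablename, partnum):
    name = f"{tablename}_{partnum}"
    # PostgreSQL silently truncates over-long identifiers to 63 bytes;
    # truncating by character here is a simplification.
    return name[:NAMEDATALEN - 1]

print([partition_name("numbers", i) for i in range(3)])
# ['numbers_0', 'numbers_1', 'numbers_2']
```

The truncation case also hints at why a rename hook matters: generated names for long table names can collide after truncation.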
One can use post \ncreate hook to rename relations.\n\nFor LIST partition one can also define a default partition.\n\n3) TODO\n\nThe patch lacks documentation, because I expect some details may change \nduring discussion. Other than that, the feature is ready for review.\n\n\nRegards\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 25 Aug 2020 13:14:29 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "[PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "> The patch lacks documentation, because I expect some details may change\n> during discussion. Other than that, the feature is ready for review.\n>\nHi, hackers!\n\n From what I've read I see there is much interest in automatic partitions\ncreation. (Overall discussion on the topic is partitioned into two threads:\n(1)\nhttps://www.postgresql.org/message-id/alpine.DEB.2.21.1907150711080.22273%40lancre\nand\n(2)\nhttps://www.postgresql.org/message-id/flat/7fec3abb-c663-c0d2-8452-a46141be6d4a@postgrespro.ru\n(current thread) )\n\nThere were many syntax proposals and finally, there is a patch realizing\none of them. So I'd like to review it.\n\nThe syntax proposed in the patch seems good enough for me and is in\naccordance with one of the proposals in the discussion. 
Maybe I'd prefer\nusing the word AUTOMATICALLY/AUTO instead of CONFIGURATION with explicit\nmeaning that using this syntax we'd get already (automatically) created\npartitions and don't need to create them manually, as in the existing state\nof postgresql declarative partitioning.\n\nCREATE TABLE tbl (i int) PARTITION BY HASH (i) AUTOMATICALLY (MODULUS\n3); (partitions are created automatically)\n\nvs\n\nCREATE TABLE tbl (i int) PARTITION BY HASH (i); (partitions should be\ncreated manually by use of PARTITION OF)\n\n\nCREATE TABLE tbl (i char) PARTITION BY LIST (i) AUTOMATICALLY (VALUES\nIN ('a', 'b'), ('c', 'd'), ('e','f') DEFAULT PARTITION tbl_default);\n\nvs\n\nCREATE TABLE tbl (i char) PARTITION BY LIST (i); (partitions should be\ncreated manually by use of PARTITION OF)\n\n\nI think this syntax can also be extended later with adding automatic\ncreation of RANGE partitions, with IMMEDIATE/DEFERRED for dynamic/on-demand\nautomatic partition creation, and with SUBPARTITION possibility.\n\nBut I don't have a strong preference for the word AUTOMATICALLY, moreover I\nsaw opposition to using AUTO at the top of the discussion. I suppose we can\ngo with the existing CONFIGURATION word.\n\nIf compare with existing declarative partitions, I think automatic creation\nsimplifies the process for the end-user and I'd vote for its committing\ninto Postgres. The patch is short and clean in code style. It has enough\ncomments Tests covering the new functionality are included. Yet it doesn't\nhave documentation and I'd suppose it's worth adding it. Even if there will\nbe syntax changes, I hope they will not be more than the replacement of\nseveral words. 
Current syntax is described in the text of a patch.\n\nThe patch applies cleanly and installcheck-world is passed.\n\nSome minor things:\n\nI've got a compiler warning:\nparse_utilcmd.c:4280:15: warning: unused variable 'lc' [-Wunused-variable]\n\nWhen the number of partitions is over the maximum value of int32 the output\nshows a generic syntax error. I don't think it is very important as it is\nnot the case someone will make deliberately, but maybe it's better to\noutput something like \"Partitions number is more than the maximum supported\nvalue\"\ncreate table test (i int, t text) partition by hash (i) configuration\n(modulus 888888888888);\nERROR: syntax error at or near \"888888888888\"\n\nOne more piece of nitpicking. Probably we can go just with a mention in\ndocumentation.\ncreate table test (i int, t text) partition by hash (i) configuration\n(modulus 8888);\nERROR: out of shared memory\nHINT: You might need to increase max_locks_per_transaction.\n\nTypo:\n+ /* Add statemets to create each partition after we create parent table */\n\nOverall I see the patch almost ready for commit and I'd like to meet this\nfunctionality in v14.\n\nTested it and see this feature very cool and much simpler to use compared\nto declarative partitioning to date.\n\nThanks!\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>
", "msg_date": "Tue, 8 Sep 2020 18:03:39 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "On 08.09.2020 17:03, Pavel Borisov wrote:\n>\n> The patch lacks documentation, because I expect some details may\n> change during discussion. Other than that, the feature is ready\n> for review.\n>\n> Hi, hackers!\n>\n> From what I've read I see there is much interest in automatic \n> partitions creation.
(Overall discussion on the topic is partitioned \n> into two threads: (1) \n> https://www.postgresql.org/message-id/alpine.DEB.2.21.1907150711080.22273%40lancre and \n> (2) \n> https://www.postgresql.org/message-id/flat/7fec3abb-c663-c0d2-8452-a46141be6d4a@postgrespro.ru \n> (current thread) )\n>\n> There were many syntax proposals and finally, there is a patch \n> realizing one of them. So I'd like to review it.\n>\n> The syntax proposed in the patch seems good enough for me and is in \n> accordance with one of the proposals in the discussion. Maybe I'd \n> prefer using the word AUTOMATICALLY/AUTO instead of CONFIGURATION with \n> explicit meaning that using this syntax we'd get already \n> (automatically) created partitions and don't need to create them \n> manually, as in the existing state of postgresql declarative \n> partitioning.\n>\n> CREATE TABLE tbl (iint) PARTITION BY HASH (i) AUTOMATICALLY (MODULUS 3); (partitions are created automatically)\n> vs\n> CREATE TABLE tbl (iint) PARTITION BY HASH (i); (partitions should be created manually by use of PARTITION OF)\n> CREATE TABLE tbl (i char) PARTITION BY LIST (i) AUTOMATICALLY (VALUES \n> IN ('a', 'b'), ('c', 'd'), ('e','f') DEFAULTPARTITION tbl_default);\n> vs\n> CREATE TABLE tbl (ichar) PARTITION BY LIST (i); (partitions should be created manually by use of PARTITION OF)\n>\n> I think this syntax can also be extended later with adding automatic \n> creation of RANGE partitions, with IMMEDIATE/DEFERRED for \n> dynamic/on-demand automatic partition creation, and with SUBPARTITION \n> possibility.\n>\n> But I don't have a strong preference for the word AUTOMATICALLY, \n> moreover I saw opposition to using AUTO at the top of the discussion. \n> I suppose we can go with the existing CONFIGURATION word.\n\nI agree that 'AUTOMATICALLY' keyword is more specific and probably less \nconfusing for users. I've picked 'CONFIGURATION' simply because it is an \nalready existing keyword. 
I would like to hear other opinions on that.\n\n\n> If compare with existing declarative partitions, I think automatic \n> creation simplifies the process for the end-user and I'd vote for its \n> committing into Postgres. The patch is short and clean in code style. \n> It has enough comments Tests covering the new functionality are \n> included. Yet it doesn't have documentation and I'd suppose it's worth \n> adding it. Even if there will be syntax changes, I hope they will not \n> be more than the replacement of several words. Current syntax is \n> described in the text of a patch.\n>\n\nFair enough. The new patch contains a documentation draft. While writing it, \nI also noticed that the syntax introduced in the patch differs from the \nGreenplum one. For now, the list partitioning clause doesn't support the \n'PARTITION name' part that is supported in Greenplum. I don't think \nthat we aim for 100% compatibility here. Still, the ability to provide \ntable names is probably a good optional feature, especially for list \npartitions.\n\nWhat do you think?\n\n> The patch applies cleanly and installcheck-world is passed.\n>\n> Some minor things:\n>\n> I've got a compiler warning:\n> parse_utilcmd.c:4280:15: warning: unused variable 'lc' [-Wunused-variable]\n\nFixed. This was also caught by cfbot. This version should pass it clean.\n\n>\n> When the number of partitions is over the maximum value of int32 the \n> output shows a generic syntax error. I don't think it is very \n> important as it is not the case someone will make deliberately, but \n> maybe it's better to output something like \"Partitions number is more \n> than the maximum supported value\"\n> create table test (i int, t text) partition by hash (i) configuration \n> (modulus 888888888888);\n> ERROR:  syntax error at or near \"888888888888\"\n\nThis value is not a valid int32 number, so the parser throws the error \nbefore we have a chance to handle it more gracefully.\n\n>\n> One more piece of nitpicking.
Probably we can go just with a mention \n> in documentation.\n> create table test (i int, t text) partition by hash (i) configuration \n> (modulus 8888);\n> ERROR:  out of shared memory\n> HINT:  You might need to increase max_locks_per_transaction.\n>\nWell, it looks like a legit error, when we try to lock a lot of objects \nin one transaction. I will double check if we don't release a lock \nsomewhere.\n\nDo we need to restrict the number of partitions, that can be created by \nthis statement? With what number?  As far as I see, there is no such \nrestriction for now, just a recommendation about performance issues. \nWith automatic creation it becomes easier to mess with it.\n\nProbably, it's enough to mention it in documentation and rely on users \ncommon sense.\n\n> Typo:\n> + /* Add statemets to create each partition after we create parent \n> table */\n>\nFixed.\n\n> Overall I see the patch almost ready for commit and I'd like to meet \n> this functionality in v14.\nI also hope that this patch will make it to v14, but for now, I don't \nsee a consensus on the syntax and some details, so I wouldn't rush.\n\nBesides, it definitely needs more testing. I haven't thoroughly tested \nfollowing cases yet:\n- how triggers and constraints are propagated to partitions;\n- how does it handle some tricky clauses in list partitioning expr_list;\nand so on.\n\nAlso, there is an open question about partition naming. Currently, the \npatch implements dummy %tbl_%partnum name generation, which is far from \nuser-friendly. 
I think we must provide some hook or trigger function to \nrename partitions after they were created.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 14 Sep 2020 14:38:56 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "On Mon, Sep 14, 2020 at 02:38:56PM +0300, Anastasia Lubennikova wrote:\n> Fixed. This was also caught by cfbot. This version should pass it clean.\n\nPlease note that regression tests are failing, because of 6b2c4e59.\n--\nMichael", "msg_date": "Thu, 24 Sep 2020 12:27:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "On 24.09.2020 06:27, Michael Paquier wrote:\n> On Mon, Sep 14, 2020 at 02:38:56PM +0300, Anastasia Lubennikova wrote:\n>> Fixed. This was also caught by cfbot. This version should pass it clean.\n> Please note that regression tests are failing, because of 6b2c4e59.\n> --\n> Michael\n\nThank you. Updated patch is attached.\n\nOpen issues for review:\n- new syntax;\n- generation of partition names;\n- overall patch review and testing, especially with complex partitioning \nclauses.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Thu, 24 Sep 2020 23:40:46 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "Hi Anastasia,\n\nI tested the syntax with some basic commands and it works fine, regression\ntests also pass.\n\nCouple of comments:\n1. The syntax used omits the { IMMEDIATE | DEFERRED} keywords suggested in\nthe earlier discussions. 
I think it is intuitive to include IMMEDIATE with\nthe current implementation\nso that the syntax can be extended with a DEFERRED clause in future for\ndynamic partitions.\n\n> CREATE TABLE tbl_lst (i int) PARTITION BY LIST (i)\n> CONFIGURATION (values in (1, 2), (3, 4) DEFAULT PARTITION tbl_default);\n\n\n2. One suggestion for generation of partition names is to append a unique\nid to\navoid conflicts.\n\n3. Probably, here you mean to write list and hash instead of range and list\nas\nper the current state.\n\n <para>\n> Range and list partitioning also support automatic creation of\n> partitions\n> with an optional <literal>CONFIGURATION</literal> clause.\n> </para>\n\n\n4. Typo in default_part_name\n\n+VALUES IN ( <replaceable\n> class=\"parameter\">partition_bound_expr</replaceable> [, ...] ), [(\n> <replaceable class=\"parameter\">partition_bound_expr</replaceable> [, ...]\n> )] [, ...] [DEFAULT PARTITION <replaceable\n> class=\"parameter\">defailt_part_name</replaceable>]\n> +MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>\n\n\n\nThank you,\nRahila Syed", "msg_date": "Thu, 1 Oct 2020 01:28:28 +0530", "msg_from": "Rahila Syed <rahilasyed90@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "On 30.09.2020 22:58, Rahila Syed wrote:\n> Hi Anastasia,\n>\n> I tested the syntax with some basic commands and it works fine, \n> regression tests also pass.\n>\nThank you for your review.\n> Couple of comments:\n> 1. The syntax used omits the { IMMEDIATE | DEFERRED} keywords \n> suggested in\n> the earlier discussions. I think it is intuitive to include IMMEDIATE \n> with the current implementation\n> so that the syntax can be extended with a  DEFERRED clause in future \n> for dynamic partitions.\n>\n>   CREATE TABLE tbl_lst (i int) PARTITION BY LIST (i)\n>  CONFIGURATION (values in (1, 2), (3, 4) DEFAULT PARTITION\n> tbl_default);\n>\nAfter some consideration, I decided that we don't actually need to \nintroduce the IMMEDIATE | DEFERRED keyword. For hash and list partitions it \nwill always be immediate, as the number of partitions cannot change \nafter we initially set it. For range partitions, on the contrary, it \ndoesn't make much sense to make partitions immediately, because in many \nuse-cases one bound will be open.\n\n> 2. One suggestion for generation of partition names is to append a \n> unique id to\n> avoid conflicts.\n\nCan you please give an example of such a conflict? I agree that current \nnaming scheme is far from perfect, but I think that 'tablename'_partnum \nprovides unique name for each partition.\n\n>\n> 3. 
Probably, here you mean to write list and hash instead of range and \n> list as\n> per the current state.\n>\n>      <para>\n>      Range and list partitioning also support automatic creation of\n> partitions\n>       with an optional <literal>CONFIGURATION</literal> clause.\n>     </para>\n>\n> 4. Typo in default_part_name\n>\n> +VALUES IN ( <replaceable\n> class=\"parameter\">partition_bound_expr</replaceable> [, ...] ), [(\n> <replaceable class=\"parameter\">partition_bound_expr</replaceable>\n> [, ...] )] [, ...] [DEFAULT PARTITION <replaceable\n> class=\"parameter\">defailt_part_name</replaceable>]\n> +MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>\n>\n>\nYes, you're right. I will fix these typos in the next version of the patch.\n>\n> Thank you,\n> Rahila Syed\n\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Thu, 1 Oct 2020 19:02:04 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "Hi,\n\nCouple of comments:\n> 1. The syntax used omits the { IMMEDIATE | DEFERRED} keywords suggested in\n> the earlier discussions.
I think it is intuitive to include IMMEDIATE with\n> the current implementation\n> so that the syntax can be extended with a DEFERRED clause in future for\n> dynamic partitions.\n>\n>> CREATE TABLE tbl_lst (i int) PARTITION BY LIST (i)\n>> CONFIGURATION (values in (1, 2), (3, 4) DEFAULT PARTITION tbl_default);\n>\n>\n>\n> After some consideration, I decided that we don't actually need to\n> introduce IMMEDIATE | DEFERRED keyword. For hash and list partitions it\n> will always be immediate, as the number of partitions cannot change after\n> we initially set it. For range partitions, on the contrary, it doesn't make\n> much sense to make partitions immediately, because in many use-cases one\n> bound will be open.\n>\n>\nAs per discussions on this thread:\nhttps://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1907150711080.22273%40lancre\nDEFERRED clause refers to creating partitions on the fly, while the data is\nbeing inserted.\nThe number of partitions and partition bounds can be the same as specified\ninitially\nduring partitioned table creation, but the actual creation of\npartitions can be deferred.\nThis seems like a potential extension to statically created partitions even\nin the case of\nhash and list partitions, as it won't involve moving any existing data.\n\n 2. One suggestion for generation of partition names is to append a\n> unique id to\n\navoid conflicts.\n>\n> Can you please give an example of such a conflict? 
I agree that current\n> naming scheme is far from perfect, but I think that 'tablename'_partnum\n> provides unique name for each partition.\n>\n>\nSorry for not being clear earlier, I mean the partition name\n'tablename_partnum' can conflict with any existing table name.\nAs per the current implementation, if I do the following it results in a table\nname conflict.\n\npostgres=# create table tbl_test_5_1(i int);\nCREATE TABLE\npostgres=# CREATE TABLE tbl_test_5 (i int) PARTITION BY LIST((tbl_test_5))\n\n CONFIGURATION (values in\n('(1)'::tbl_test_5), ('(3)'::tbl_test_5) default partition tbl_default_5);\nERROR: relation \"tbl_test_5_1\" already exists\n\nThank you,\nRahila Syed", "msg_date": "Mon, 5 Oct 2020 12:06:49 +0530", "msg_from": "Rahila Syed <rahilasyed90@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": ">\n> Sorry for not being clear earlier, I mean the partition name\n> 'tablename_partnum' can conflict with any existing table name.\n> As per the current implementation, if I do the following it results in a table\n> name conflict.\n>\n> postgres=# create table tbl_test_5_1(i int);\n> CREATE TABLE\n> postgres=# CREATE TABLE tbl_test_5 (i int) PARTITION BY LIST((tbl_test_5))\n>\n> CONFIGURATION (values in\n> ('(1)'::tbl_test_5), ('(3)'::tbl_test_5) default partition tbl_default_5);\n> ERROR: relation \"tbl_test_5_1\" already exists\n>\n\nBasically, it's the same thing when you try to create two tables with the\nsame name.
It is not specific to partition creation and is common for every\ncase where any defaults are used: they can conflict with something existing.\nAnd in this case the conflict is explicitly processed, as I see from the output\nmessage.\n\nIn fact, in PG there are other places where names are generated in a default way,\ne.g. in the aggregates regression test it is no surprise to find in PG13:\n\nexplain (costs off)\n select min(f1), max(f1) from minmaxtest;\n QUERY PLAN\n---------------------------------------------------------------------------------------------\n Result\n InitPlan 1 (returns $0)\n -> Limit\n -> Merge Append\n Sort Key: minmaxtest.f1\n -> Index Only Scan using minmaxtesti on minmaxtest\nminmaxtest_1\n Index Cond: (f1 IS NOT NULL)\n -> Index Only Scan using minmaxtest1i on minmaxtest1\nminmaxtest_2\n Index Cond: (f1 IS NOT NULL)\n -> Index Only Scan Backward using minmaxtest2i on\nminmaxtest2 minmaxtest_3\n Index Cond: (f1 IS NOT NULL)\n -> Index Only Scan using minmaxtest3i on minmaxtest3\nminmaxtest_4\n InitPlan 2 (returns $1)\n -> Limit\n -> Merge Append\n Sort Key: minmaxtest_5.f1 DESC\n -> Index Only Scan Backward using minmaxtesti on\nminmaxtest minmaxtest_6\n Index Cond: (f1 IS NOT NULL)\n -> Index Only Scan Backward using minmaxtest1i on\nminmaxtest1 minmaxtest_7\n Index Cond: (f1 IS NOT NULL)\n -> Index Only Scan using minmaxtest2i on minmaxtest2\nminmaxtest_8\n Index Cond: (f1 IS NOT NULL)\n -> Index Only Scan Backward using minmaxtest3i on\nminmaxtest3 minmaxtest_9\n\nwhere minmaxtest_<number> are the temporary relations\nand minmaxtest<number> are real partition names (the last naming is unrelated\nto the first)\n\nOverall I don't see much trouble in any form of automatic naming.
But there\nmay be a convenience in providing a fixed user-specified prefix for partition\nnames.\n\nThank you,\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Mon, 5 Oct 2020 11:53:27 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "Hi, hackers!\nI added some extra tests for different cases of use of automatic partition\ncreation.\nv3-0002 can be applied on top of the original v2 patch for correct work\nwith some corner cases with constraints included in this test.\n\nAs for immediate/deferred, I think that the only option available now is immediate,\nso using the word IMMEDIATE seems a little bit redundant to me. We may\nintroduce this word together with adding the DEFERRED option. However, my\nopinion is not in strong opposition to either option. Other opinions are\nvery much welcome!\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Tue, 6 Oct 2020 01:21:01 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "On 06.10.2020 00:21, Pavel Borisov wrote:\n> Hi, hackers!\n> I added some extra tests for different cases of use of automatic \n> partition creation.\n> v3-0002 can be applied on top of the original v2 patch for correct \n> work with some corner cases with constraints included in this test.\n>\nThank you for the tests. I've added them and the fix into the patch.\n\nI also noticed that some table parameters, such as persistence, were not \npromoted to auto generated partitions. This is fixed now.
The test cases \nfor temp and unlogged auto partitioned tables are updated respectively.\nBesides, I slightly refactored the code and fixed documentation typos \nthat were reported by Rahila.\n\nWith my recent changes, one test statement that you've added as \nfailing, works.\n\nCREATE TABLE list_parted_fail (a int) PARTITION BY LIST (a) CONFIGURATION\n(VALUES IN ('1' collate \"POSIX\"));\n\nIt simply ignores the collate POSIX part and creates a table with the following \nstructure:\n\n\n                        Partitioned table \"public.list_parted_fail\"\n  Column |  Type   | Collation | Nullable | Default | Storage | Stats \ntarget | Description\n--------+---------+-----------+----------+---------+---------+--------------+-------------\n  a      | integer |           |          |         | plain \n|              |\nPartition key: LIST (a)\nPartitions: list_parted_fail_0 FOR VALUES IN (1)\n\nDo you think that it is a bug? For now, I removed this statement from \ntests just to calm down the CI.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 6 Oct 2020 20:25:48 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": ">\n> Do you think that it is a bug? For now, I removed this statement from\n> tests just to calm down the CI.\n>\n\nIt is in accordance with changes in tests for vanilla\ndeclarative partitioning as per\n\ncommit 2dfa3fea88bc951d0812a18649d801f07964c9b9\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Mon Sep 28 13:44:01 2020 -0400\n Remove complaints about COLLATE clauses in partition bound values.\n\nwhich my test does for the automatic way in the same style. So I consider your\nremoval completely correct.\n\nThank you!", "msg_date": "Wed, 7 Oct 2020 11:30:01 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "On 05.10.2020 09:36, Rahila Syed wrote:\n>\n> Hi,\n>\n>> Couple of comments:\n>> 1. The syntax used omits the { IMMEDIATE | DEFERRED} keywords\n>> suggested in\n>> the earlier discussions. I think it is intuitive to\n>> include IMMEDIATE with the current implementation\n>> so that the syntax can be extended with a DEFERRED clause in\n>> future for dynamic partitions.\n>>\n>>   CREATE TABLE tbl_lst (i int) PARTITION BY LIST (i)\n>>  CONFIGURATION (values in (1, 2), (3, 4) DEFAULT PARTITION\n>> tbl_default);\n>>\n> After some consideration, I decided that we don't actually need to\n> introduce IMMEDIATE | DEFERRED keyword. For hash and list\n> partitions it will always be immediate, as the number of\n> partitions cannot change after we initially set it. For range
For range\n> partitions, on the contrary, it doesn't make much sense to make\n> partitions immediately, because in many use-cases one bound will\n> be open.\n>\n>\n> As per discussions on this thread: \n> https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1907150711080.22273%40lancre\n> DEFERRED clause refers to creating partitions on the fly, while the \n> data is being inserted.\n> The number of partitions and partition bounds can be the same as \n> specified initially\n> during partitioned table creation, but the actual creation of \n> partitions can be deferred.\n> This seems like a potential extension to statically created partitions \n> even in the case of\n> hash and list partitions, as it won't involve moving any existing data.\n\nOh, now I see what you mean. The case with already existing tables will \nrequire changes to ALTER TABLE syntax. And that's where we may want to \nchoose between immediate (i.e. locking) and deferred (i.e. concurrent) \ncreation of partitions. I think we should try to implement it with \nexisting keywords, maybe use 'CONCURRENTLY' keyword and it will look like:\n\nALTER TABLE tbl PARTITION BY ... CONFIGURATION (....) [CONCURRENTLY];\n\nAnyway, the task of handling existing data is much more complicated, \nespecially the 'concurrent' case and to be honest, I haven't put much \nthought into it yet.\n\nThe current patch only implements the simplest case of creating a new \npartitioned table. And I don't see if CREATE TABLE needs this \nimmediate|deferred clause or if it will need it in the future.\n\nThoughts?\n\n>\n>      2. One suggestion for generation of partition names is to\n> append a unique id to\n>\n>> avoid conflicts.\n>\n> Can you please give an example of such a conflict? 
I agree that\n> current naming scheme is far from perfect, but I think that\n> 'tablename'_partnum provides unique name for each partition.\n>\n>>\n> Sorry for not being clear earlier, I mean the partition name \n> 'tablename_partnum' can conflict with any existing table name.\n> As per current impemetation, if I do the following it results in the \n> table name conflict.\n>\n> postgres=# create table tbl_test_5_1(i int);\n> CREATE TABLE\n> postgres=# CREATE TABLE tbl_test_5 (i int) PARTITION BY \n> LIST((tbl_test_5)) CONFIGURATION (values in ('(1)'::tbl_test_5), \n> ('(3)'::tbl_test_5) default partition tbl_default_5);\n> ERROR:relation \"tbl_test_5_1\" already exists\n\n\nI don't mind adding some specific suffix for generated partitions, \nalthough it still may conflict with existing table names. The main \ndisadvantage of this idea, is that it reduces number of symbols \navailable for table name, which can lead to something like this:\n\nCREATE TABLE \nparteddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd (a \ntext, b int NOT NULL DEFAULT 0,  CONSTRAINT check_aa CHECK (length(a) > 0))\nPARTITION BY LIST (a) CONFIGURATION (VALUES IN ('a','b'),('c','d') \nDEFAULT PARTITION parted_def) ;;\nNOTICE:  identifier \n\"parteddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd\" \nwill be truncated to \n\"partedddddddddddddddddddddddddddddddddddddddddddddddddddddddddd\"\nERROR:  relation \n\"partedddddddddddddddddddddddddddddddddddddddddddddddddddddddddd\" \nalready exists\n\nThe error message here is a bit confusing, as relation \n'partedddddddddddddddddddddddddddddddddddddddddddddddddddddddddd' \nhaven't existed before and this is a conflict between partitioned and \ngenerated partition table name. For now, I don't know if we can handle \nit more gracefully. 
Probably, we could truncate tablename to a shorter \nsize, but it doesn't provide a complete solution, because partition \nnumber can contain several digits.\n\nSee also pg_partman documentation on the same issue: \nhttps://github.com/pgpartman/pg_partman/blob/master/doc/pg_partman.md#naming-length-limits\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\nOn 05.10.2020 09:36, Rahila Syed wrote:\n\n\n\n\n\n\n Hi, \n \n\n\n\n\n\n\n\nCouple of comments: \n\n1. The syntax used omits the { IMMEDIATE |\n DEFERRED} keywords suggested in \nthe earlier discussions. I think it is\n intuitive to include IMMEDIATE with the\n current implementation\nso that the syntax can be extended with a \n DEFERRED clause in future for dynamic\n partitions.\n  CREATE\n TABLE tbl_lst (i int) PARTITION BY LIST (i)\n  CONFIGURATION (values in (1, 2), (3, 4)\n DEFAULT PARTITION tbl_default);\n \n\n\n\n After some consideration, I decided that we don't\n actually need to introduce IMMEDIATE | DEFERRED\n keyword. For hash and list partitions it will always\n be immediate, as the number of partitions cannot\n change after we initially set it. For range\n partitions, on the contrary, it doesn't make much\n sense to make partitions immediately, because in many\n use-cases one bound will be open. \n\n\n\n\nAs per discussions on this thread:   https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1907150711080.22273%40lancre\nDEFERRED clause refers to creating partitions on\n the fly, while the data is being inserted. \nThe number of partitions and partition bounds can\n be the same as specified initially \nduring partitioned table creation, but the actual\n creation of partitions can be deferred. \nThis seems like a potential extension to\n statically created partitions even in the case of \nhash and list partitions, as it won't involve\n moving any existing data.\n\n\n\n\n\n\nOh, now I see what you mean. 
The case with already existing\n tables will require changes to ALTER TABLE syntax. And that's\n where we may want to choose between immediate (i.e. locking) and\n deferred (i.e. concurrent) creation of partitions. I think we\n should try to implement it with existing keywords, maybe use\n 'CONCURRENTLY' keyword and it will look like:\n\n ALTER TABLE tbl PARTITION BY ... CONFIGURATION (....)\n [CONCURRENTLY];\n\n Anyway, the task of handling existing data is much more\n complicated, especially the 'concurrent' case and to be honest, I\n haven't put much thought into it yet.\n\n The current patch only implements the simplest case of creating a\n new partitioned table. And I don't see if CREATE TABLE needs this\n immediate|deferred clause or if it will need it in the future.\n\n Thoughts?\n\n\n\n\n\n\n\n\n\n\n     2. One\n suggestion for generation of partition names is to\n append a unique id to\n\n\n\n\n\navoid conflicts.\n\n\n\nCan you please give an example of such a\n conflict? I agree that current naming scheme is\n far from perfect, but I think that\n 'tablename'_partnum provides unique name for each\n partition.\n\n\n\n\n\n\n\n\n\n\n\nSorry for not being clear earlier, I mean the\n partition name 'tablename_partnum' can conflict with\n any existing table name. \nAs per current impemetation, if I do the following\n it results in the table name conflict.\n\n\npostgres=#\n create table tbl_test_5_1(i int);\nCREATE\n TABLE\npostgres=#\n CREATE TABLE tbl_test_5 (i int) PARTITION BY\n LIST((tbl_test_5))  \n                                                    \n                                                    \n     CONFIGURATION\n (values in ('(1)'::tbl_test_5), ('(3)'::tbl_test_5)\n default partition tbl_default_5);\nERROR: \n relation\n \"tbl_test_5_1\" already exists \n\n \n\n\n\n\n\n\n\n\n I don't mind adding some specific suffix for generated partitions,\n although it still may conflict with existing table names. 
The main disadvantage of this idea is that it reduces the number of\ncharacters available for the table name, which can lead to something like\nthis:\n\nCREATE TABLE parteddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd\n(a text, b int NOT NULL DEFAULT 0, CONSTRAINT check_aa CHECK (length(a) > 0))\nPARTITION BY LIST (a) CONFIGURATION (VALUES IN ('a','b'),('c','d')\nDEFAULT PARTITION parted_def);\nNOTICE: identifier \"parteddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd\"\nwill be truncated to \"partedddddddddddddddddddddddddddddddddddddddddddddddddddddddddd\"\nERROR: relation \"partedddddddddddddddddddddddddddddddddddddddddddddddddddddddddd\"\nalready exists\n\nThe error message here is a bit confusing, as the relation\n\"partedddddddddddddddddddddddddddddddddddddddddddddddddddddddddd\" hasn't\nexisted before, and this is a conflict between the partitioned table name\nand a generated partition table name. For now, I don't know if we can\nhandle it more gracefully. Probably, we could truncate the tablename to a\nshorter size, but it doesn't provide a complete solution, because the\npartition number can contain several digits.\n\nSee also the pg_partman documentation on the same issue:\nhttps://github.com/pgpartman/pg_partman/blob/master/doc/pg_partman.md#naming-length-limits\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 7 Oct 2020 16:05:08 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": ">\n>\n> 2. One suggestion for generation of partition names is to append a\n>> unique id to\n\n> avoid conflicts.\n>>\n>> Can you please give an example of such a conflict? 
I agree that the current\n>> naming scheme is far from perfect, but I think that 'tablename'_partnum\n>> provides a unique name for each partition.\n>>\n>>\n>> Sorry for not being clear earlier, I mean the partition name\n> 'tablename_partnum' can conflict with any existing table name.\n> As per the current implementation, if I do the following it results in a\n> table name conflict.\n>\n> postgres=# create table tbl_test_5_1(i int);\n> CREATE TABLE\n> postgres=# CREATE TABLE tbl_test_5 (i int) PARTITION BY LIST((tbl_test_5))\n>\n> CONFIGURATION (values in\n> ('(1)'::tbl_test_5), ('(3)'::tbl_test_5) default partition tbl_default_5);\n> ERROR: relation \"tbl_test_5_1\" already exists\n>\n>\n> I don't mind adding some specific suffix for generated partitions,\n> although it still may conflict with existing table names. The main\n> disadvantage of this idea is that it reduces the number of characters\n> available for the table name, which can lead to something like this:\n>\n> CREATE TABLE\n> parteddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd (a\n> text, b int NOT NULL DEFAULT 0, CONSTRAINT check_aa CHECK (length(a) > 0))\n> PARTITION BY LIST (a) CONFIGURATION (VALUES IN ('a','b'),('c','d') DEFAULT\n> PARTITION parted_def);\n> NOTICE: identifier\n> \"parteddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd\" will\n> be truncated to\n> \"partedddddddddddddddddddddddddddddddddddddddddddddddddddddddddd\"\n> ERROR: relation\n> \"partedddddddddddddddddddddddddddddddddddddddddddddddddddddddddd\" already\n> exists\n> doc/pg_partman.md#naming-length-limits\n> <https://github.com/pgpartman/pg_partman/blob/master/doc/pg_partman.md#naming-length-limits>\n>\nIt seems to me that a workable idea is to add a prefix to partition names\nand give users the possibility to specify it themselves. 
So the user will be able to\nchoose an appropriate and not very long suffix to avoid conflicts.\nMaybe like this:\nCREATE TABLE city (a text) PARTITION BY LIST (a) CONFIGURATION (VALUES IN\n('a','b'),('c','d') DEFAULT PARTITION city_other PREFIX _prt) ;\n\nResult:\n---\ncity_prt1\ncity_prt2\n...\ncity_other\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Wed, 7 Oct 2020 20:30:32 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "Again I've checked the v3 patch. In the discussion, there are several other\nideas on its further development, so I consider the patch as the first step\ntoward later progress. Though now the patch is fully self-sufficient in\nfunctionality and has enough tests etc. I suppose it is ready to be\ncommitted.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Tue, 3 Nov 2020 12:09:41 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "I've realized one strange effect in the current grammar parsing: if I do\n\nCREATE TABLE foo (a int) PARTITION BY LIST (a) CONFIGURATION (a 1);\nERROR: unrecognized auto partition bound specification \"a\"\n\nI consulted the patch code and realized that, in fact, the patch considers\nit (invalid) HASH bounds (it doesn't find the word 'modulus') unless it is\nspecified to be (still invalid) LIST. This is due to the fact that the\ngrammar parser is not context-aware, and in the patch we tried to avoid the\nnew parser keyword MODULUS. The effect is that inside the CONFIGURATION\nparentheses, in the case of HASH bounds, we don't have a single keyword for\nthe parser to determine that it is really a HASH case.\n\nIt doesn't make the patch work wrongly, but it checks the validity of\nall types of bounds in the HASH case even when the partitioning is not\nHASH. I find this slightly bogus. This is because the parser cannot\ndetermine the type of partitioning inside the configuration clause, and this\nmakes adding new syntax (e.g. 
adding RANGE partitioning\nconfiguration inside the CONFIGURATION parentheses) complicated.\n\nSo I have one more syntax proposal: to have separate keywords\ninside the CONFIGURATION parentheses for each partitioning type.\nE.g.:\nCREATE TABLE foo(a int) PARTITION BY LIST(a) CONFIGURATION (FOR VALUES IN\n(1,2),(3,4) DEFAULT PARTITION foo_def);\nCREATE TABLE foo(a int) PARTITION BY HASH(a) CONFIGURATION (FOR VALUES WITH\nMODULUS 3);\nCREATE TABLE foo(a int) PARTITION BY RANGE(a) CONFIGURATION (FOR VALUES FROM\n1 TO 1000 INTERVAL 10 DEFAULT PARTITION foo_def);\n\nThis proposal is in accordance with the current syntax of declarative\npartitioning: CREATE TABLE foo_1 PARTITION OF foo FOR VALUES ...\n\nSome more facultative proposals, incremental to the abovementioned:\n1. Omit CONFIGURATION, with or without parentheses. This makes the syntax\ncloser to the (non-automatic) declarative partitioning syntax, but the\nclause seems less legible (in my opinion).\n2. Omit just FOR VALUES. This makes the clause short, but adds a difference\nto the (non-automatic) declarative partitioning syntax.\n\nI'm planning also to add RANGE partitioning syntax to this in the future,\nand I will be happy if all three types of the syntax could come along\neasily.\nI very much appreciate your views on this, especially since changes can\nstill be made easily because the patch is not committed yet.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Fri, 18 Dec 2020 22:54:54 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "On 2020-12-18 21:54, Pavel Borisov wrote:\n> I've realized one strange effect in the current grammar parsing: if I do\n> \n> CREATE TABLE foo (a int) PARTITION BY LIST (a) CONFIGURATION (a 1);\n> ERROR: unrecognized auto partition bound specification \"a\"\n> \n> I consulted the patch code and realized that, in fact, the patch\n> considers it (invalid) HASH bounds (it doesn't find the word 'modulus')\n> unless it is specified to be (still invalid) LIST. This is due to the\n> fact that the grammar parser is not context-aware, and in the patch we\n> tried to avoid the new parser keyword MODULUS. The effect is that\n> inside the CONFIGURATION parentheses, in the case of HASH bounds, we don't\n> have a single keyword for the parser to determine that it is really a HASH\n> case.\n> \n> It doesn't make the patch work wrongly, but it checks the validity\n> of all types of bounds in the HASH case even when the partitioning is\n> not HASH. I find this slightly bogus. This is because the parser cannot\n> determine the type of partitioning inside the configuration clause\n> and this makes adding new syntax (e.g. 
adding RANGE partitioning\n> configuration inside the CONFIGURATION parentheses) complicated.\n> \n> So I have one more syntax proposal: to have separate keywords inside\n> the CONFIGURATION parentheses for each partitioning type.\n> E.g.:\n> CREATE TABLE foo(a int) PARTITION BY LIST(a) CONFIGURATION (FOR VALUES\n> IN (1,2),(3,4) DEFAULT PARTITION foo_def);\n> CREATE TABLE foo(a int) PARTITION BY HASH(a) CONFIGURATION (FOR VALUES\n> WITH MODULUS 3);\n> CREATE TABLE foo(a int) PARTITION BY RANGE(a) CONFIGURATION (FOR VALUES\n> FROM 1 TO 1000 INTERVAL 10 DEFAULT PARTITION foo_def);\n> \n> This proposal is in accordance with the current syntax of declarative\n> partitioning: CREATE TABLE foo_1 PARTITION OF foo FOR VALUES ...\n> \n> Some more facultative proposals, incremental to the abovementioned:\n> 1. Omit CONFIGURATION, with or without parentheses. This makes the syntax\n> closer to the (non-automatic) declarative partitioning syntax, but the\n> clause seems less legible (in my opinion).\n> 2. Omit just FOR VALUES. This makes the clause short, but adds a\n> difference to the (non-automatic) declarative partitioning syntax.\n> \n> I'm planning also to add RANGE partitioning syntax to this in the\n> future, and I will be happy if all three types of the syntax could come\n> along easily.\n> \n> I very much appreciate your views on this, especially since changes can\n> still be made easily because the patch is not committed yet.\n> \n> --\n> \n> Best regards,\n> Pavel Borisov\n> \n> Postgres Professional: http://postgrespro.com [1]\n> \n> \n> Links:\n> ------\n> [1] http://www.postgrespro.com\n\nIn my view, the expressions below are the golden middle ground here. On the\none hand, they are not far from the original non-automatic declarative\npartitioning syntax; on the other hand, omitting the CONFIGURATION keyword\n(which is redundant here in terms of grammar parsing) would make these\nexpressions less comprehensible for a human.\n\nCREATE TABLE foo(a int) PARTITION BY LIST(a) CONFIGURATION (FOR VALUES \nIN (1,2),(3,4) DEFAULT PARTITION foo_def);\nCREATE TABLE foo(a int) PARTITION BY HASH(a) CONFIGURATION (FOR VALUES \nWITH MODULUS 3);\nCREATE TABLE foo(a int) PARTITION BY RANGE(a) CONFIGURATION (FOR VALUES \nFROM 1 TO 1000 INTERVAL 10 DEFAULT PARTITION foo_def);\n\nIn addition to that, adding RANGE PARTITION support would be much simpler,\nsince we would have specific \"branches\" in the grammar instead of using a\ncontext-sensitive grammar and dealing with it in C code.\n---\nBest regards,\nMaxim Orlov.", "msg_date": "Mon, 21 Dec 2020 13:49:25 +0300", "msg_from": "Maxim Orlov <m.orlov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "\n> CREATE TABLE foo(a int) PARTITION BY LIST(a) CONFIGURATION (FOR VALUES IN \n> (1,2),(3,4) DEFAULT PARTITION foo_def);\n\nI would like to disagree with this syntactic approach because it would be \nvery specific to each partition method. IMHO the syntax should be as \ngeneric as possible. I'd suggest (probably again) a keyword/value list \nwhich would allow it to be quite adaptable without inducing any pressure on \nthe parser.\n\n-- \nFabien.", "msg_date": "Tue, 22 Dec 2020 10:29:56 -0400 (AST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": ">\n>\n> > CREATE TABLE foo(a int) PARTITION BY LIST(a) CONFIGURATION (FOR VALUES\n> IN\n> > (1,2),(3,4) DEFAULT PARTITION foo_def);\n>\n> I would like to disagree with this syntactic approach because it would be\n> very specific to each partition method. 
IMHO the syntax should be as\n> generic as possible. I'd suggest (probably again) a keyword/value list\n> which would allow it to be quite adaptable without inducing any pressure on\n> the parser.\n>\nIf I remember your proposal correctly, it is something like\nCREATE TABLE foo(...) PARTITION BY HASH AUTOMATIC (MODULUS 10);\n\nIt is still possible, but there are some caveats:\n1. We'll need to add the keyword MODULUS (and probably AUTOMATIC) to the\nparser's list. I am not against this, but as far as I've heard there is some\nopposition in the PG community against new keywords. Maybe I am wrong.\n2. The existing syntax for declarative partitioning is different from your\nproposal. It is still not a big problem, and your proposal makes the query\nshorter by several words. I'd just like to see some consensus on the\nsyntax. Now I must admit there are too many contradictions in opinions,\nwhich makes progress slow. Also, I think it is important to have a really\nconvenient syntax.\n2a. Maybe all of us who participated in the thread can vote for some variant?\n2b. Maybe the existing syntax for declarative partitioning should be given\nsome priority, as it is already committed into CREATE TABLE ... PARTITION OF\n... FOR VALUES IN .. etc.\n\nI'd be happy if everyone would join some version of the syntax proposed in\nthis thread and in the previous discussion [1]. If we have a variant with\nmore than one supporter, sure we can develop a patch based on it.\nThank you very much,\nand Merry Christmas!\n\n[1]\nhttps://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1907150711080.22273%40lancre\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Tue, 22 Dec 2020 19:03:05 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "Hello.\n\n>>> CREATE TABLE foo(a int) PARTITION BY LIST(a) CONFIGURATION (FOR VALUES\n>> IN\n>>> (1,2),(3,4) DEFAULT PARTITION foo_def);\n>>\n>> I would like to disagree with this syntactic approach because it would be\n>> very specific to each partition method. IMHO the syntax should be as\n>> generic as possible. I'd suggest (probably again) a keyword/value list\n>> which would allow it to be quite adaptable without inducing any pressure on\n>> the parser.\n>>\n> If I remember your proposal correctly, it is something like\n> CREATE TABLE foo(...) PARTITION BY HASH AUTOMATIC (MODULUS 10);\n\nYep, that would be the spirit.\n\n> It is still possible, but there are some caveats: 1. We'll need to add the \n> keyword MODULUS (and probably AUTOMATIC) to the parser's list.\n\nWhy? We could accept anything in the list? i.e.:\n\n (ident =? value[, ident =? value]*)\n\n> I am not against this, but as far as I've heard there is some\n> opposition in the PG community against new keywords. Maybe I am wrong.\n\nThe ident is a keyword that can be interpreted later on, not a \"reserved \nkeyword\" from a parser perspective, which is the only real issue?\n\nThe parser does not need to know about it, only the command interpreter \nwhich will have to interpret it. AUTOMATIC is a nice parser cue to \nintroduce such an ident-value list.\n\n> 2. The existing syntax for declarative partitioning is different from your\n> proposal.\n\nYep. 
I think that it was not such a good design choice from a \nlanguage/extensibility perspective.\n\n> It is still not a big problem, and your proposal makes the query\n> shorter by several words. I'd just like to see some consensus on the\n> syntax. Now I must admit there are too many contradictions in opinions,\n> which makes progress slow. Also, I think it is important to have a really\n> convenient syntax.\n\n> 2a. Maybe all of us who participated in the thread can vote for some variant?\n> 2b. Maybe the existing syntax for declarative partitioning should be given\n> some priority, as it is already committed into CREATE TABLE ... PARTITION OF\n> ... FOR VALUES IN .. etc.\n\n> I'd be happy if everyone would join some version of the syntax proposed in\n> this thread and in the previous discussion [1]. If we have a variant with\n> more than one supporter, sure we can develop a patch based on it.\n> Thank you very much,\n> and Merry Christmas!\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1907150711080.22273%40lancre\n>\n>\n\n-- \nFabien.", "msg_date": "Tue, 22 Dec 2020 11:38:30 -0400 (AST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": ">\n> Why? We could accept anything in the list? i.e.:\n>\n> (ident =? value[, ident =? value]*)\n>\n> > I am not against this, but as far as I've heard there is some\n> > opposition in the PG community against new keywords. Maybe I am wrong.\n>\n> The ident is a keyword that can be interpreted later on, not a \"reserved\n> keyword\" from a parser perspective, which is the only real issue?\n>\n> The parser does not need to know about it, only the command interpreter\n> which will have to interpret it. AUTOMATIC is a nice parser cue to\n> introduce such an ident-value list.\n>\n> > 2. The existing syntax for declarative partitioning is different from your\n> > proposal.\n>\n> Yep. 
I think that it was not such a good design choice from a\n> language/extensibility perspective.\n>\nThank you very much, Fabien. It is clear enough.\nBTW, could you tell me a couple of words about the pros and cons of C-code\nsyntax parsing compared to parsing using gram.y trees? I think both are\npossible, but my predisposition was that we'd better use the latter if\npossible.\n\nBest regards,\nPavel Borisov", "msg_date": "Tue, 22 Dec 2020 19:58:13 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "\n> BTW, could you tell me a couple of words about the pros and cons of C-code\n> syntax parsing compared to parsing using gram.y trees?\n\nI'd rather use an automatic tool (lexer/parser) if possible instead of \ndoing it by hand if I can. 
If you want a really nice syntax with clever \ntricks, then you may need to switch to manual though, but pg/sql is not in \nthat class.\n\n> I think both are possible, but my predisposition was that we'd better use \n> the latter if possible.\n\nI agree.\n\n-- \nFabien.", "msg_date": "Tue, 22 Dec 2020 16:50:36 -0400 (AST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": ">\n>\n> > BTW, could you tell me a couple of words about the pros and cons of C-code\n> > syntax parsing compared to parsing using gram.y trees?\n>\n> I'd rather use an automatic tool (lexer/parser) if possible instead of\n> doing it by hand if I can. If you want a really nice syntax with clever\n> tricks, then you may need to switch to manual though, but pg/sql is not in\n> that class.\n>\n> > I think both are possible, but my predisposition was that we'd better use\n> > the latter if possible.\n>\n> I agree.\n>\nThank you!\n\nFabien, do you consider it possible to change the syntax of declarative\npartitioning too? 
It is problematic as it is already committed, but it is also\nvery tempting to have the same type of syntax both in automatic\npartitioning and in manual (PARTITION OF...)\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Wed, 23 Dec 2020 14:59:51 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "> Fabien, do you consider it possible to change the syntax of declarative\n> partitioning too?\n\nMy 0.02 €: What I think does not matter much, what committers think is the \nway to pass something. However, I do not think that such an idea would \npass a committer:-)\n\n> It is problematic as it is already committed, but it is also very tempting \n> to have the same type of syntax both in automatic partitioning and in \n> manual (PARTITION OF...)\n\nI think that if a \"common\" syntax, for a given meaning of common, can be \nthought of, and without breaking backward compatibility, then there may be \nan argument to provide such a syntax, but I would not put too much energy \ninto that if I were you.\n\nI see 3 cases:\n\n - partition declaration but no actual table generated, the current\n   version.\n\n - partition declaration with actual sub-tables generated, eg for hash\n   where it is pretty straightforward to know what would be needed, or for\n   a bounded range.\n\n - partition declaration without generated tables, but they are generated\n   on demand, when needed; for a range one may want weekly or monthly\n   without creating tables in advance, esp. if it is unbounded.\n\nISTM that the syntax should be clear and if possible homogeneous for all \nthree use cases, even if they are not implemented yet. 
It should also \nallow easy extensibility, hence something without a strong syntax, \nkey/value pairs to be interpreted later.\n\n-- \nFabien.", "msg_date": "Wed, 23 Dec 2020 09:49:15 -0400 (AST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": ">\n> My 0.02 €: What I think does not matter much, what committers think is the\n> way to pass something. However, I do not think that such an idea would\n> pass a committer:-)\n>\n\nThe same idea was the reason for my proposal to make the automatic\npartitioning clauses accord with the existing declarative syntax (even if\nit seems a little bit long to write the words \"configuration (for values\"):\n\nCREATE TABLE foo(a int) PARTITION BY LIST(a) CONFIGURATION (FOR VALUES\nIN (1,2),(3,4) DEFAULT PARTITION foo_def);\nCREATE TABLE foo(a int) PARTITION BY HASH(a) CONFIGURATION (FOR VALUES\nWITH MODULUS 3);\nCREATE TABLE foo(a int) PARTITION BY RANGE(a) CONFIGURATION (FOR VALUES\nFROM 1 TO 1000 INTERVAL 10 DEFAULT PARTITION foo_def);\n\nIf we want a generic (ident = value, ...) list, then we need to introduce\nsyntax different from what is already committed for manual partitioning,\nwhich I consider worse than my proposal above. Still, other opinions are\nhighly valued.\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Wed, 23 Dec 2020 18:03:28 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "On Wed, Oct 7, 2020 at 6:26 AM Anastasia Lubennikova\n<a.lubennikova@postgrespro.ru> wrote:\n> Do you think that it is a bug? 
For now, I removed this statement from\n> tests just to calm down the CI.\n\nI don't think we can use \\d+ on a temporary table here, because the\nbackend ID appears in the namespace, which is causing a failure on one\nof the CI OSes due to nondeterminism:\n\nCREATE TEMP TABLE temp_parted (a char) PARTITION BY LIST (a)\nCONFIGURATION (VALUES IN ('a') DEFAULT PARTITION temp_parted_default);\n\\d+ temp_parted\n- Partitioned table \"pg_temp_3.temp_parted\"\n+ Partitioned table \"pg_temp_4.temp_parted\"\n\n\n", "msg_date": "Mon, 11 Jan 2021 10:22:48 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": ">\n> I don't think we can use \\d+ on a temporary table here, because the\n> backend ID appears in the namespace, which is causing a failure on one\n> of the CI OSes due to nondeterminism:\n>\n> CREATE TEMP TABLE temp_parted (a char) PARTITION BY LIST (a)\n> CONFIGURATION (VALUES IN ('a') DEFAULT PARTITION temp_parted_default);\n> \\d+ temp_parted\n> - Partitioned table \"pg_temp_3.temp_parted\"\n> + Partitioned table \"pg_temp_4.temp_parted\"\n>\n\nI've updated the tests accordingly. PFA version 4.\nAs none of the recent proposals to modify the syntax were seconded by\nanyone, I return the previous Ready-for-committer CF status.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Mon, 25 Jan 2021 16:32:31 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "https://commitfest.postgresql.org/32/2694/\n\nI don't know what committers will say, but I think that \"ALTER TABLE\" might be\nthe essential thing for this patch to support, not \"CREATE\". 
(This is similar\nto ALTER..SET STATISTICS, which is not allowed in CREATE.)\n\nThe reason is that ALTER is what's important for RANGE partitions, which need\nto be created dynamically (for example, to support time-series data\ncontinuously inserting data around 'now'). I assume it's sometimes also\nimportant for LIST. I think this patch should handle those cases better before\nbeing committed, or else we risk implementing a grammar and other user-facing interface\nthat fails to handle what's needed in the future (or that's non-essential).\nEven if dynamic creation isn't implemented yet, it seems important to at least\nimplement the foundation for setting the configuration to *allow* that in the\nfuture, in a manner that's consistent with the initial implementation for\n\"static\" partitions.\n\nALTER also supports other ideas I mentioned here:\nhttps://www.postgresql.org/message-id/20200706145947.GX4107%40telsasoft.com\n\n - ALTER .. SET interval (for dynamic/deferred RANGE partitioning)\n - ALTER .. SET modulus, for HASH partitioning; in the initial implementation,\n this would allow CREATING partitions, but wouldn't attempt to handle moving\n data if an overlapping table already exists:\n - Could also set the table-name, maybe by format string;\n - Could set \"retention interval\" for range partitioning;\n - Could set if the partitions are themselves partitioned(??)\n\nI think once you allow setting configuration parameters like this, then you\nmight have an ALTER command to \"effect\" them, which would create any static\ntables required by the configuration. Maybe that'd be automatic, but if\nthere's an \"ALTER .. 
APPLY PARTITIONS\" command (or whatever), maybe in the\nfuture, the command could also be used to \"repartition\" existing table data\ninto partitions with more fine/course granularity (modulus, or daily vs monthly\nrange, etc).\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 2 Mar 2021 14:26:17 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "I have reviewed the v4 patch. The patch does not get applied on the latest\nsource. Kindly rebase.\nHowever I have found few comments.\n\n1.\n> +-- must fail because of wrong configuration\n> +CREATE TABLE tbl_hash_fail (i int) PARTITION BY HASH (i)\n> +CONFIGURATION (values in (1, 2), (3, 4) default partition tbl_default);\n\nHere some of the keywords are mentioned in UPPER CASE (Ex: CREATE TABLE,\nCONFIGURATION, etc) and some are mentioned in lower case (Ex: values in,\ndefault partition, etc). Kindly make it common. I feel making it to UPPER\nCASE is better. Please take care of this in all the cases.\n\n2. It is better to separate the failure cases and success cases in\n/src/test/regress/sql/create_table.sql for better readability. All the\nfailure cases can be listed first and then the success cases.\n\n3.\n> + char *part_relname;\n> +\n> + /*\n> + * Generate partition name in the format:\n> + * $relname_$partnum\n> + * All checks of name validity will be made afterwards in\nDefineRelation()\n> + */\n> + part_relname = psprintf(\"%s_%d\", cxt->relation->relname, i);\n\nThe assignment can be done directly while declaring the variable.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Wed, Mar 3, 2021 at 1:56 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> https://commitfest.postgresql.org/32/2694/\n>\n> I don't know what committers will say, but I think that \"ALTER TABLE\"\n> might be\n> the essential thing for this patch to support, not \"CREATE\". 
(This is\n> similar\n> to ALTER..SET STATISTICS, which is not allowed in CREATE.)\n>\n> The reason is that ALTER is what's important for RANGE partitions, which\n> need\n> to be created dynamically (for example, to support time-series data\n> continuously inserting data around 'now'). I assume it's sometimes also\n> important for LIST. I think this patch should handle those cases better\n> before\n> being commited, or else we risk implementing grammar and other user-facing\n> interface\n> that fails to handle what's needed into the future (or that's\n> non-essential).\n> Even if dynamic creation isn't implemented yet, it seems important to at\n> least\n> implement the foundation for setting the configuration to *allow* that in\n> the\n> future, in a manner that's consistent with the initial implementation for\n> \"static\" partitions.\n>\n> ALTER also supports other ideas I mentioned here:\n> https://www.postgresql.org/message-id/20200706145947.GX4107%40telsasoft.com\n>\n> - ALTER .. SET interval (for dynamic/deferred RANGE partitioning)\n> - ALTER .. SET modulus, for HASH partitioning, in the initial\n> implementation,\n> this would allow CREATING paritions, but wouldn't attempt to handle\n> moving\n> data if overlapping table already exists:\n> - Could also set the table-name, maybe by format string;\n> - Could set \"retention interval\" for range partitioning;\n> - Could set if the partitions are themselves partitioned(??)\n>\n> I think once you allow setting configuration parameters like this, then you\n> might have an ALTER command to \"effect\" them, which would create any static\n> tables required by the configuration. maybe that'd be automatic, but if\n> there's an \"ALTER .. 
APPLY PARTITIONS\" command (or whatever), maybe in the\n> future, the command could also be used to \"repartition\" existing table data\n> into partitions with more fine/course granularity (modulus, or daily vs\n> monthly\n> range, etc).\n>\n> --\n> Justin\n>\n>\n>\n\nI have reviewed the v4 patch. The patch does not get applied on the latest source. Kindly rebase. However I have found few comments.1.> +-- must fail because of wrong configuration> +CREATE TABLE tbl_hash_fail (i int) PARTITION BY HASH (i)> +CONFIGURATION (values in (1, 2), (3, 4) default partition tbl_default);Here some of the keywords are mentioned in UPPER CASE (Ex: CREATE TABLE, CONFIGURATION, etc) and some are mentioned in lower case (Ex: values in, default partition, etc). Kindly make it common. I feel making it to UPPER CASE is better. Please take care of this in all the cases.2. It is better to separate the failure cases and success cases in /src/test/regress/sql/create_table.sql for better readability. All the failure cases can be listed first and then the success cases.3. > +           char *part_relname;> +> +           /*> +            * Generate partition name in the format:> +            * $relname_$partnum> +            * All checks of name validity will be made afterwards in DefineRelation()> +            */> +           part_relname = psprintf(\"%s_%d\", cxt->relation->relname, i);The assignment can be done directly while declaring the variable.Thanks & Regards,Nitin JadhavOn Wed, Mar 3, 2021 at 1:56 AM Justin Pryzby <pryzby@telsasoft.com> wrote:https://commitfest.postgresql.org/32/2694/\n\nI don't know what committers will say, but I think that \"ALTER TABLE\" might be\nthe essential thing for this patch to support, not \"CREATE\".  
(This is similar\nto ALTER..SET STATISTICS, which is not allowed in CREATE.)\n\nThe reason is that ALTER is what's important for RANGE partitions, which need\nto be created dynamically (for example, to support time-series data\ncontinuously inserting data around 'now').  I assume it's sometimes also\nimportant for LIST.  I think this patch should handle those cases better before\nbeing commited, or else we risk implementing grammar and other user-facing interface\nthat fails to handle what's needed into the future (or that's non-essential).\nEven if dynamic creation isn't implemented yet, it seems important to at least\nimplement the foundation for setting the configuration to *allow* that in the\nfuture, in a manner that's consistent with the initial implementation for\n\"static\" partitions.\n\nALTER also supports other ideas I mentioned here:\nhttps://www.postgresql.org/message-id/20200706145947.GX4107%40telsasoft.com\n\n  - ALTER .. SET interval (for dynamic/deferred RANGE partitioning)\n  - ALTER .. SET modulus, for HASH partitioning, in the initial implementation,\n    this would allow CREATING paritions, but wouldn't attempt to handle moving\n    data if overlapping table already exists:\n  - Could also set the table-name, maybe by format string;\n  - Could set \"retention interval\" for range partitioning;\n  - Could set if the partitions are themselves partitioned(??)\n\nI think once you allow setting configuration parameters like this, then you\nmight have an ALTER command to \"effect\" them, which would create any static\ntables required by the configuration.  maybe that'd be automatic, but if\nthere's an \"ALTER .. 
APPLY PARTITIONS\" command (or whatever), maybe in the\nfuture, the command could also be used to \"repartition\" existing table data\ninto partitions with more fine/course granularity (modulus, or daily vs monthly\nrange, etc).\n\n-- \nJustin", "msg_date": "Sun, 25 Apr 2021 19:49:47 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": ">\n> I have reviewed the v4 patch. The patch does not get applied on the latest\n> source. Kindly rebase.\n> However I have found few comments.\n>\n> 1.\n> > +-- must fail because of wrong configuration\n> > +CREATE TABLE tbl_hash_fail (i int) PARTITION BY HASH (i)\n> > +CONFIGURATION (values in (1, 2), (3, 4) default partition tbl_default);\n>\n> Here some of the keywords are mentioned in UPPER CASE (Ex: CREATE TABLE,\n> CONFIGURATION, etc) and some are mentioned in lower case (Ex: values in,\n> default partition, etc). Kindly make it common. I feel making it to UPPER\n> CASE is better. Please take care of this in all the cases.\n>\n> 2. It is better to separate the failure cases and success cases in\n> /src/test/regress/sql/create_table.sql for better readability. 
All the\n> failure cases can be listed first and then the success cases.\n>\n> 3.\n> > + char *part_relname;\n> > +\n> > + /*\n> > + * Generate partition name in the format:\n> > + * $relname_$partnum\n> > + * All checks of name validity will be made afterwards in\n> DefineRelation()\n> > + */\n> > + part_relname = psprintf(\"%s_%d\", cxt->relation->relname, i);\n>\n> The assignment can be done directly while declaring the variable.\n>\nThank you for your review!\nI've rebased the patch and made the changes mentioned.\nPFA v5.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Fri, 9 Jul 2021 14:29:49 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "On Fri, Jul 9, 2021 at 6:30 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n\n> Thank you for your review!\n> I've rebased the patch and made the changes mentioned.\n> PFA v5.\n\nI've set this back to \"needs review\" in CF.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 9 Jul 2021 07:05:13 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": ">\n> > Thank you for your review!\n> > I've rebased the patch and made the changes mentioned.\n> > PFA v5.\n>\n> I've set this back to \"needs review\" in CF.\n>\nThanks for the attention! I did the review of this patch, and the changes\nI've introduced in v5 are purely cosmetic. 
So I'd suppose the\nready-for-committer status should not have been changed,\nso I'd like to return it to ready-for-committer. If you object to this,\nplease mention it. The opinion of Nitin, a second reviewer, is also very much\nappreciated.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Fri, 9 Jul 2021 15:19:02 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "On Tue, Mar 2, 2021 at 3:26 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I don't know what committers will say, but I think that \"ALTER TABLE\" might be\n> the essential thing for this patch to support, not \"CREATE\". (This is similar\n> to ALTER..SET STATISTICS, which is not allowed in CREATE.)\n>\n> The reason is that ALTER is what's important for RANGE partitions, which need\n> to be created dynamically (for example, to support time-series data\n> continuously inserting data around 'now'). I assume it's sometimes also\n> important for LIST. 
I think this patch should handle those cases better before\n> being commited, or else we risk implementing grammar and other user-facing interface\n> that fails to handle what's needed into the future (or that's non-essential).\n> Even if dynamic creation isn't implemented yet, it seems important to at least\n> implement the foundation for setting the configuration to *allow* that in the\n> future, in a manner that's consistent with the initial implementation for\n> \"static\" partitions.\n\nI don't think it's a hard requirement, but it's an interesting point.\nMy initial reactions to the patch are:\n\n- I don't think it's a very good idea to support LIST and HASH but not\nRANGE. We need a design that can work for all three partitioning\nstrategies, even if we don't have support for all of them in the\ninitial patch. If they CAN all be in the same patch, so much the\nbetter.\n\n- I am not very impressed with the syntax. CONFIGURATION is an odd\nword that seems too generic for what we're talking about here. It\nwould be tempting to use a connecting word like WITH or USING except\nthat both would be ambiguous here, so we can't. MySQL and Oracle use\nthe keyword PARTITIONS -- which I realize isn't a keyword at all in\nPostgreSQL right now -- to introduce the partition specification. DB2\nuses no keyword at all; it seems you just say PARTITION BY\n(mypartitioncol) (...partition specifications go here...). I think\neither approach could work for us. Avoiding the extra keyword is a\nplus, especially since I doubt we're likely to support the exact\nsyntax that Oracle and MySQL offer anyway - though if we do, then I'd\nbe in favor of inserting the PARTITIONS keyword so that people's SQL\ncan work without modification.\n\n- We need to think a little bit about exactly what we're trying to do.\nThe simplest imaginable thing here would be to just give people a\nplace to put a bunch of partition specifications. 
So you can imagine\nletting someone say PARTITION BY HASH (FOR VALUES WITH (MODULUS 2,\nREMAINDER 0), FOR VALUES WITH (MODULUS 2, REMAINDER 1)). However, the\npatch quite rightly rejects that approach in favor of the theory that,\nat CREATE TABLE time, you're just going to want to give a modulus and\nhave the system create one partition for every possible remainder. But\nthat could be expressed even more compactly than what the patch does.\nInstead of saying PARTITION BY HASH CONFIGURATION (MODULUS 4) we could\njust let people say PARTITION BY HASH (4) or probably even PARTITION\nBY HASH 4.\n\n- For list partitioning, the patch falls back to just letting you put\na bunch of VALUES IN clauses in the CREATE TABLE statement. I don't\nfind something like PARTITION BY LIST CONFIGURATION (VALUES IN (1, 2),\n(1, 3)) to be particularly readable. What are all the extra keywords\nadding? We could just say PARTITION BY LIST ((1, 2), (1, 3)). I think\nI would find that easier to remember; not sure what other people\nthink. As an alternative, PARTITION BY LIST VALUES IN (1, 2), (1, 3)\nlooks workable, too.\n\n- What about range partitioning? This is an interesting case because\nwhile in theory you could leave gaps between range partitions, in\npractice people probably don't want to do that very often, and it\nmight be better to have a simpler syntax that caters to the common\ncase, since people can always create partitions individually if they\nhappen to want gaps. So you can imagine making something like\nPARTITION BY RANGE ((MINVALUE), (42), (163)) mean create two\npartitions, one from (MINVALUE) to (42) and the other from (42) to\n(163). I think that would be pretty useful.\n\n- Another possible separating keyword here would be INITIALLY, which\nis already a parser keyword. 
So then you could have stuff like\nPARTITION BY HASH INITIALLY 4, PARTITION BY LIST INITIALLY ((1, 2),\n(1, 3)), PARTITION BY RANGE INITIALLY ((MINVALUE), (42), (163)).\n\n- The patch doesn't document the naming convention for the\nautomatically created partitions, and it is worth thinking a bit about\nhow that is going to work. Do people want to be able to specify the\nname of the partitioned table when they are using this syntax, or are\nthey happy with automatically generated names? If the latter, are they\nhappy with THESE automatically generated names? I guess for HASH\nappending _%d where %d is the modulus is fine, but it is not necessary\nso great for LIST. If I said CREATE TABLE foo ... PARTITION BY LIST\n(('en'), ('ru'), ('jp')) I think I'd be hoping to end up with\npartitions named foo_en, foo_ru, and foo_jp rather than foo_0, foo_1,\nfoo_2. Or maybe I'd rather say PARTITION BY LIST (foo_en ('en'),\nfoo_ru ('ru'), foo_jp ('jp')) or something like that to be explicit\nabout it. Not sure. But it's worth some thought. I think this comes\ninto focus even more clearly for range partitions, where you probably\nwant the partitions to follow a convention like basetablename_yyyy_mm.\n\n- The documentation for the CONFIGURATION option doesn't match the\ngrammar. The documentation makes it an independent clause, so\nCONFIGURATION could be specified even if PARTITION BY is not. But the\nimplementation makes the better choice to treat CONFIGURATION as a\nfurther specification of PARTITION BY.\n\n- I don't think this patch is really all that close to being ready for\ncommitter. Beyond the design issues which seem to need more thought,\nthere's stuff in the patch like:\n\n+ elog(DEBUG1,\"stransformPartitionAutoCreate HASH i %d MODULUS %d \\n %s\\n\",\n+ i, bound->modulus, nodeToString(part));\n\nNow, on the one hand, debugging elogs like this have little business\nin a final patch. 
And, on the other hand, if we were going to include\nthem in the final patch, we'd probably want to at least spell the\nfunction name correctly. Similarly, it's evident that this test case\nhas not been carefully reviewed by anyone, including the author:\n\n+REATE TABLE fail_parted (a int) PARTITION BY HASH (a) CONFIGURATION\n+(MODULUS 10 DEFAULT PARTITION hash_default);\n\nNot too surprisingly, the system isn't familiar with the REATE command.\n\n- There's some questionable error-reporting behavior in here, too, particularly:\n\n+CREATE TABLE fail_parted (a int) PARTITION BY LIST (a) CONFIGURATION\n+(values in (1, 2), (1, 3));\n+ERROR: partition \"fail_parted_1\" would overlap partition \"fail_parted_0\"\n+LINE 2: (values in (1, 2), (1, 3));\n\nSince the user hasn't provided the names fail_parted_0 or\nfail_parted_1, it kind of stinks to use them in an error report. The\nerror cursor is good, but I wonder if we need to do better. One option\nwould be to go to a syntax where the user specifies the partition\nnames explicitly, which then justifies using that name in an error\nreport. Another possibility would be to give a different message in\nthis case, like:\n\nERROR: partition for values in (1, 2) would overlap partition for\nvalues in (1, 3)\n\nNow, that would require a pretty substantial redesign of the patch,\nand I'm not sure it's worth the effort. But I'm also not sure that it\nisn't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 9 Jul 2021 09:31:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": ">\n> - I don't think it's a very good idea to support LIST and HASH but not\n> RANGE. We need a design that can work for all three partitioning\n> strategies, even if we don't have support for all of them in the\n> initial patch. 
If they CAN all be in the same patch, so much the\n> better.\n>\n> - I am not very impressed with the syntax. CONFIGURATION is an odd\n> word that seems too generic for what we're talking about here. It\n> would be tempting to use a connecting word like WITH or USING except\n> that both would be ambiguous here, so we can't. MySQL and Oracle use\n> the keyword PARTITIONS -- which I realize isn't a keyword at all in\n> PostgreSQL right now -- to introduce the partition specification. DB2\n> uses no keyword at all; it seems you just say PARTITION BY\n> (mypartitioncol) (...partition specifications go here...). I think\n> either approach could work for us. Avoiding the extra keyword is a\n> plus, especially since I doubt we're likely to support the exact\n> syntax that Oracle and MySQL offer anyway - though if we do, then I'd\n> be in favor of inserting the PARTITIONS keyword so that people's SQL\n> can work without modification.\n>\n> - We need to think a little bit about exactly what we're trying to do.\n> The simplest imaginable thing here would be to just give people a\n> place to put a bunch of partition specifications. So you can imagine\n> letting someone say PARTITION BY HASH (FOR VALUES WITH (MODULUS 2,\n> REMAINDER 0), FOR VALUES WITH (MODULUS 2, REMAINDER 1)). However, the\n> patch quite rightly rejects that approach in favor of the theory that,\n> at CREATE TABLE time, you're just going to want to give a modulus and\n> have the system create one partition for every possible remainder. But\n> that could be expressed even more compactly than what the patch does.\n> Instead of saying PARTITION BY HASH CONFIGURATION (MODULUS 4) we could\n> just let people say PARTITION BY HASH (4) or probably even PARTITION\n> BY HASH 4.\n>\n> - For list partitioning, the patch falls back to just letting you put\n> a bunch of VALUES IN clauses in the CREATE TABLE statement. 
I don't\n> find something like PARTITION BY LIST CONFIGURATION (VALUES IN (1, 2),\n> (1, 3)) to be particularly readable. What are all the extra keywords\n> adding? We could just say PARTITION BY LIST ((1, 2), (1, 3)). I think\n> I would find that easier to remember; not sure what other people\n> think. As an alternative, PARTITION BY LIST VALUES IN (1, 2), (1, 3)\n> looks workable, too.\n>\n> - What about range partitioning? This is an interesting case because\n> while in theory you could leave gaps between range partitions, in\n> practice people probably don't want to do that very often, and it\n> might be better to have a simpler syntax that caters to the common\n> case, since people can always create partitions individually if they\n> happen to want gaps. So you can imagine making something like\n> PARTITION BY RANGE ((MINVALUE), (42), (163)) mean create two\n> partitions, one from (MINVALUE) to (42) and the other from (42) to\n> (163). I think that would be pretty useful.\n>\n> - Another possible separating keyword here would be INITIALLY, which\n> is already a parser keyword. So then you could have stuff like\n> PARTITION BY HASH INITIALLY 4, PARTITION BY LIST INITIALLY ((1, 2),\n> (1, 3)), PARTITION BY RANGE INITIALLY ((MINVALUE), (42), (163)).\n>\n\nRobert, I've read your considerations and I have a proposal to change the\nsyntax to make it like:\n\nCREATE TABLE foo (bar text) PARTITION BY LIST (bar) PARTITIONS (('US'),\n('UK', 'RU'));\nCREATE TABLE foo (bar text) PARTITION BY LIST (bar) PARTITIONS\n(foo_us('US'), foo_uk_ru('UK', 'RU'), { DEFAULT foo_dflt | AUTOMATIC });\n\nCREATE TABLE foo (bar int) PARTITION BY HASH (bar) PARTITIONS (5);\n\nCREATE TABLE foo (bar int) PARTITION BY RANGE (bar) PARTITIONS (FROM 1 TO\n10 INTERVAL 2, { DEFAULT foo_dflt | AUTOMATIC });\n\n- I think using partitions syntax without any keyword at all, is quite\ndifferent from the existing pseudo-english PostgreSQL syntax. 
Also, it will\nneed two consecutive brackets divided by nothing (<partitioning\nkey>)(<partitions configuration>). So I think it's better to use the\nkeyword PARTITIONS\n\n- from the current patch it seems like a 'syntactic sugar' only but I don't\nthink it is being so. From a new syntaх proposal it's seen that it can\nenable three options\n(1) create a fixed set of partitions with everything else comes to the\ndefault partition\n(2) create a fixed set of partitions with everything else invokes error on\ninsert\n(3) create a set of partitions with everything else invokes a new partition\ncreation based on a partition key (AUTOMATIC word). Like someone will be\nable to do:\nCREATE TABLE foo (a varchar) PARTITION BY LIST (SUBSTRING (a, 1, 1))\nPARTITIONS (('a'),('b'),('c'));\nINSERT INTO foo VALUES (\"doctor\"); // will automatically create partition\nfor 'd'\nINSERT INTO foo VALUES (\"dam\"); // will come into partition 'd'\n\nOption (3) is not yet implemented and sure it needs much care from DBA to\nnot end up with the each-row-separate-partition.\n\n- Also with option (3) and AUTOMATIC word someone will be able to do:\nCREATE TABLE foo (a timestamp, t text) PARTITION BY LIST(EXTRACT (YEAR FROM\na)) PARTITIONS (('1982'),('1983'),('1984'));\nINSERT INTO foo VALUES (TIMESTAMP '1986-01-01 13:30:03', 'Orwell'); //\ncreates '1986' partition and inserts into it\nI think this option will be very useful as partitioning based on regular\nintervals of time I think is quite natural and often used. And to do it we\ndon't need to implement arbitrary intervals (partition by range). 
But I\nthink it's also worth implementing (the proposed syntax for RANGE is above);\n\n- As for the naming of partitions, I've seen what is done in Oracle:\npartition names can be provided when you create the initial set, and when a\npartition is created automatically on insert it will get some illegible\nname chosen by the system (it doesn't even include the parent table prefix).\nI'd propose to implement:\n(1) If a partition name is not specified, it has the format\n<parent_table_name>_<value_of_partition_key>,\nwhere <value_of_partition_key> is the remainder in the HASH case, the first element of\nthe partition's list of values in the LIST case, and the left range bound in\nthe RANGE case.\n(2) If it is specified (not possible when a partition is created on\ninsert), it is <parent_table_name>_<specified_name>.\nThough we'll probably need some abbreviation rules, as a\npartition name should not exceed the relation name length limit. I think\nnaming partitions with plain _numbers in the existing patch is meant to\nincrease the relation name length as little as possible, to avoid\nimplementing abbreviation.\n\nDo you think the described approach will lead to a useful patch?\nShould it be done as a whole, or is it possible to commit it in smaller\nsteps? (E.g. first the part without the AUTOMATIC capability, then add the AUTOMATIC\ncapability, or some other order of features.)\n\nMy own view is that if some implementation of the syntax is solidly decided, it\nwill promote work on the more complicated logic of the patch, implementing all\nparts one by one so the feature finally becomes really usable (not just\nhelping to squash several SQL commands into one, as this patch does). 
I see\nthe existing patch as the starting point of the whole work and given some\ndecisions on syntax I can try to rework and extend it accordingly.\n\nOverall I consider this useful for PostgreSQL.\n\nWhat do you think about it?\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>
Or with some other order of features implementation)My own view is that if some implementation of syntax is solidly decided, it will promote work on more complicated logic of the patch and implement all parts one-by-one for the feature finally become really usable (not just helping to squash several SQL commands into one as this patch does). I see the existing patch as the starting point of the whole work and given some decisions on syntax I can try to rework and extend it accordingly. Overall I consider this useful for PostgreSQL.What do you think about it?-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com", "msg_date": "Wed, 14 Jul 2021 15:28:05 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "On Wed, Jul 14, 2021 at 7:28 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> What do you think will the described approach lead to a useful patch? Should it be done as a whole or it's possible to commit it in smaller steps? (E.g. first part without AUTOMATIC capability, then add AUTOMATIC capability. Or with some other order of features implementation)\n\nI would suggest that you consider on-the-fly partition creation to be\na completely separate feature from initial partition creation as part\nof CREATE TABLE. I think you can have either without the other, and I\nthink the latter is a lot easier than the former. I doubt that\non-the-fly partition creation makes any sense at all for hash\npartitions; there seems to be no reason not to pre-create all the\npartitions. It's pretty straightforward to see how it should work for\nLIST, but RANGE needs an interval or something to be stored in the\nsystem catalogs so you can figure out where to put the boundaries, and\nsomehow you've got to identify a + operator for the relevant data\ntype. 
Tom Lane probably won't be thrilled if you suggest looking it up\nbased on the operator NAME. The bigger issue IMHO with on-the-fly\npartition creation is avoiding deadlocks in the presence of current\ninserters; I submit that without at least some kind of attempt to\navoid deadlocks and spurious errors there, it's not really a usable\nscheme, and that seems hard.\n\nOn the other hand, modulo syntax details, creating partitions at\nCREATE TABLE time seems relatively simple and, especially in the case\nof hash partitioning, useful.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 20 Jul 2021 14:42:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "On Tue, Jul 20, 2021 at 02:42:16PM -0400, Robert Haas wrote:\n> The bigger issue IMHO with on-the-fly\n> partition creation is avoiding deadlocks in the presence of current\n> inserters; I submit that without at least some kind of attempt to\n> avoid deadlocks and spurious errors there, it's not really a usable\n> scheme, and that seems hard.\n\nI was thinking that for dynamic creation, there would be a DDL command to\ncreate the necessary partitions:\n\n-- Creates 2021-01-02, unless the month already exists:\nALTER TABLE bydate SET GRANULARITY='1day';\nALTER TABLE bydate CREATE PARTITION FOR VALUE ('2021-01-02');\n\nI'd want it to support changing the granularity of the range partitions:\n\n-- Creates 2021-01 unless the month already exists.\n-- Errors if a day partition already exists which would overlap?\nALTER TABLE bydate SET granularity='1month';\nALTER TABLE bydate CREATE PARTITION FOR VALUE ('2021-01-03');\n\nIt could support creating ranges, which might create multiple partitions,\ndepending on the granularity:\n\nALTER TABLE bydate CREATE PARTITION FOR VALUES ('2021-01-01') TO ('2021-02-01')\n\nOr the catalog could include not only granularity, but also 
endpoints:\n\nALTER TABLE bydate SET ENDPOINTS ('2012-01-01') ('2022-01-01')\nALTER TABLE bydate CREATE PARTITIONS; --create anything needed to fill from a->b\nALTER TABLE bydate PRUNE PARTITIONS; --drop anything outside of [a,b]\n\nI would use this to set \"fine\" granularity for large tables, and \"course\"\ngranularity for tables that were previously set to \"fine\" granularity, but its\npartitions are no longer large enough to justify it. This logic currently\nexists in our application - we create partitions dynamically immediately before\ninserting. But it'd be nicer if it were created asynchronously. It may create\ntables which were never inserted into, which is fine - they'd be course\ngranularity tables (one per month).\n\nI think this might elegantly allow both 1) subpartitioning; 2) repartitioning\nto a different granularity (for which I currently have my own tool).\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 20 Jul 2021 14:13:34 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "On Tue, Jul 20, 2021 at 3:13 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Tue, Jul 20, 2021 at 02:42:16PM -0400, Robert Haas wrote:\n> > The bigger issue IMHO with on-the-fly\n> > partition creation is avoiding deadlocks in the presence of current\n> > inserters; I submit that without at least some kind of attempt to\n> > avoid deadlocks and spurious errors there, it's not really a usable\n> > scheme, and that seems hard.\n>\n> I was thinking that for dynamic creation, there would be a DDL command to\n> create the necessary partitions:\n>\n> -- Creates 2021-01-02, unless the month already exists:\n> ALTER TABLE bydate SET GRANULARITY='1day';\n> ALTER TABLE bydate CREATE PARTITION FOR VALUE ('2021-01-02');\n\nWell, that dodges the deadlock issue with doing it implicitly, but it\nalso doesn't seem to offer a lot of value over just creating 
the\npartitions in a fully manual way. I mean you could just say:\n\nCREATE TABLE bydate_2021_02_02 PARTITION OF bydate FOR VALUES FROM\n('2021-01-02') TO ('2021-02-03');\n\nIt's longer, but it's not really that bad.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 20 Jul 2021 15:34:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "This thread has stalled since July with review comments unanswered, I'm marking\nthe patch Returned with Feedback. Please feel free to resubmit when/if a new\npatch is available.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 2 Dec 2021 12:20:16 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" }, { "msg_contents": "Hi,\nI found that thread (and the patch), but it seems to be pretty dead.\nPatch didn't apply, due to gen_node_support.pl\nCan I hope for a rebirth ?\n\nI've made a rebased patch,in case of no response...\nIt's just the patch from\nhttps://www.postgresql.org/message-id/CALT9ZEG9oKz9-dv9YYZaeeXNpZp0+teLFSz7QST28AcmERVpiw@mail.gmail.com\nrebased on 17dev\n\nPerhaps it's too early for a commit ; automatic range partitioning is still\nmissing and, according to\nhttps://wiki.postgresql.org/wiki/Declarative_partitioning_improvements,\nsyntax is arguable.\n\nIf 'USING' it out of option (already a keyword for CREATE TABLE) and\n'CONFIGURATION()' is not what we want, we should reach for a final decision\nfirst.\nI suggest OVER that is a keyword but unused in CREATE TABLE (nor ALTER\nTABLE). 
Whatever...\n\nFor RANGE partitioning I think of four syntaxes (inspired by pg_partman)\nPARTITION BY RANGE(stamp) CONFIGURATION (SPAN interval CENTER datetime BACK\ninteger AHEAD integer [DEFAULT [PARTITION] [defname]])\nPARTITION BY RANGE(stamp) CONFIGURATION (SPAN interval\nSTART firstfrombound END lasttobound [DEFAULT [PARTITION] [defname]])\nPARTITION BY RANGE(region_id) CONFIGURATION (STEP integer START integer END\ninteger [DEFAULT [PARTITION] [defname]])\nPARTITION BY RANGE(name) CONFIGURATION (BOUNDS (boundlist) [START\nfirstfrombound] [END lasttobound] [DEFAULT [PARTITION] [defname]])\n\nLast one should solve the addition operator problem with non numeric non\ntimedate range.\nPlus, it allows non uniform range (thinking about an \"encyclopedia volume\"\npartitioning, you know 'A', 'B-CL', 'CL-D'...)\n\nCREATE table (LIKE other INCLUDING PARTITIONS) should create 'table'\npartitioned the same as 'other'\nand\nCREATE table (LIKE other INCLUDING PARTITIONS) PARTITION BY partspec\nCONFIGURATION(), should create 'table' partitioned by partspec and sub\npartitioned as 'other'.\n\nThen CREATE could accept multiple PARTITION BY CONFIGURATION().\n\nFor ALTER TABLE (and automatic maintenance) to be usable, we will need\nSPLIT and MERGE CONCURRENTLY (pg_pathman ?) enhanced by CREATE TABLE LIKE\nto handle subpartitioning. But that's another story.\n\nStéphane.\n\n\nLe jeu. 2 déc. 2021 à 12:20, Daniel Gustafsson <daniel@yesql.se> a écrit :\n\n> This thread has stalled since July with review comments unanswered, I'm\n> marking\n> the patch Returned with Feedback. 
Please feel free to resubmit when/if a\n> new\n> patch is available.\n>\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n>\n>\n>\n\n-- \n\"Où se posaient les hirondelles avant l'invention du téléphone ?\"\n -- Grégoire Lacroix", "msg_date": "Mon, 17 Jul 2023 16:26:14 +0200", "msg_from": "=?UTF-8?Q?St=C3=A9phane_Tachoires?= <stephane.tachoires@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Automatic HASH and LIST partition creation" } ]
[ { "msg_contents": "Hi\n\nI checking state of https://commitfest.postgresql.org/28/2176/\n\nIt should be committed, but I don't see a commit?\n\nRegards\n\nPavel\n\nHiI checking state of https://commitfest.postgresql.org/28/2176/It should be committed, but I don't see a commit?RegardsPavel", "msg_date": "Mon, 6 Jul 2020 17:23:22 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "bad status of FETCH PERCENT in commitfest application" }, { "msg_contents": "On 2020-Jul-06, Pavel Stehule wrote:\n\n> I checking state of https://commitfest.postgresql.org/28/2176/\n> \n> It should be committed, but I don't see a commit?\n\nI reverted it to needs-review. It was marked as committed by me, but\nthe one I did commit by the same author in the same area is FETCH WITH\nTIES. The confusion is understandable.\n\nThanks for pointing it out\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 6 Jul 2020 11:27:21 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: bad status of FETCH PERCENT in commitfest application" }, { "msg_contents": "po 6. 7. 2020 v 17:27 odesílatel Alvaro Herrera <alvherre@2ndquadrant.com>\nnapsal:\n\n> On 2020-Jul-06, Pavel Stehule wrote:\n>\n> > I checking state of https://commitfest.postgresql.org/28/2176/\n> >\n> > It should be committed, but I don't see a commit?\n>\n> I reverted it to needs-review. It was marked as committed by me, but\n> the one I did commit by the same author in the same area is FETCH WITH\n> TIES. The confusion is understandable.\n>\n\nok, thank you for info\n\nPavel\n\n\n> Thanks for pointing it out\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\npo 6. 7. 
2020 v 17:27 odesílatel Alvaro Herrera <alvherre@2ndquadrant.com> napsal:On 2020-Jul-06, Pavel Stehule wrote:\n\n> I checking state of https://commitfest.postgresql.org/28/2176/\n> \n> It should be committed, but I don't see a commit?\n\nI reverted it to needs-review.  It was marked as committed by me, but\nthe one I did commit by the same author in the same area is FETCH WITH\nTIES.  The confusion is understandable.ok, thank you for infoPavel\n\nThanks for pointing it out\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 6 Jul 2020 17:28:26 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: bad status of FETCH PERCENT in commitfest application" } ]
[ { "msg_contents": "Hi,\n\nAt the moment, only single-byte characters in identifiers are\ncase-folded, and multi-byte characters are not.\n\nFor example, abĉDĚF is case-folded to \"abĉdĚf\". This can be referred\nto as \"abĉdĚf\" or \"ABĉDĚF\", but not \"abĉděf\" or \"ABĈDĚF\".\n\ndowncase_identifier() has the following comment:\n\n /*\n * SQL99 specifies Unicode-aware case normalization, which we don't yet\n * have the infrastructure for. Instead we use tolower() to provide a\n * locale-aware translation. However, there are some locales where this\n * is not right either (eg, Turkish may do strange things with 'i' and\n * 'I'). Our current compromise is to use tolower() for characters with\n * the high bit set, as long as they aren't part of a multi-byte\n * character, and use an ASCII-only downcasing for 7-bit characters.\n */\n\nSo my question is, do we yet have the infrastructure to make\ncase-folding consistent across all character widths?\n\nThanks\n\nThom\n\n\n", "msg_date": "Mon, 6 Jul 2020 18:35:10 +0100", "msg_from": "Thom Brown <thom@linux.com>", "msg_from_op": true, "msg_subject": "Multi-byte character case-folding" }, { "msg_contents": "Thom Brown <thom@linux.com> writes:\n> At the moment, only single-byte characters in identifiers are\n> case-folded, and multi-byte characters are not.\n> ...\n> So my question is, do we yet have the infrastructure to make\n> case-folding consistent across all character widths?\n\nWe still lack any built-in knowledge about this, and would have to rely\non libc, which means the results would likely be platform-dependent\nand probably LC_CTYPE-dependent.\n\nMore generally, I'd be mighty hesitant to change this behavior after\nit's stood for so many years. 
I suspect more people would complain\nthat we broke their application than would be happy about it.\n\nHaving said that, we are already relying on towlower() in places,\nand could do similarly here if we didn't care about the above issues.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Jul 2020 16:33:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multi-byte character case-folding" }, { "msg_contents": "On 2020-Jul-06, Tom Lane wrote:\n\n> More generally, I'd be mighty hesitant to change this behavior after\n> it's stood for so many years. I suspect more people would complain\n> that we broke their application than would be happy about it.\n> \n> Having said that, we are already relying on towlower() in places,\n> and could do similarly here if we didn't care about the above issues.\n\nI think the fact that identifiers fail to follow language-specific case\nfolding rules is more a known gotcha than a desired property, but on\nprinciple I tend to agree that Turkish people would not be happy about\nthe prospect of us changing the downcasing rule in a major release -- it\nwould mean having to edit any affected application code as part of a\npg_upgrade process, which is not great.\n\nNow you could say that this can be fixed by adding a GUC that preserves\nthe old behavior, but generally we don't like that too much.\n\nThe counter argument is that there are more future users than there are\ncurrent users.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 6 Jul 2020 18:46:23 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Multi-byte character case-folding" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Jul-06, Tom Lane wrote:\n>> More generally, I'd be mighty hesitant to change this behavior after\n>> it's stood for so many 
years. I suspect more people would complain\n>> that we broke their application than would be happy about it.\n\n> I think the fact that identifiers fail to follow language-specific case\n> folding rules is more a known gotcha than a desired property, but on\n> principle I tend to agree that Turkish people would not be happy about\n> the prospect of us changing the downcasing rule in a major release -- it\n> would mean having to edit any affected application code as part of a\n> pg_upgrade process, which is not great.\n\nIt's not just the Turks. As near as I can tell, we'd likely break *every*\napp that's using such identifiers. For example, supposing I do\n\ntest=# create table MYÉCLASS (f1 text);\nCREATE TABLE\ntest=# \\dt\n List of relations\n Schema | Name | Type | Owner \n--------+----------+-------+----------\n public | myÉclass | table | postgres\n(1 row)\n\npg_dump will render this as\n\nCREATE TABLE public.\"myÉclass\" (\n f1 text\n);\n\nIf we start to case-fold É, then the only way to access this table will\nbe by double-quoting its name, which the application probably is not\nexpecting (else it would have double-quoted in the original CREATE TABLE).\n\n> Now you could say that this can be fixed by adding a GUC that preserves\n> the old behavior, but generally we don't like that too much.\n\nYes, a GUC changing this would be a headache. It would be just as much of\na compatibility and security hazard as standard_conforming_strings (which\nindeed I've been thinking of proposing that we get rid of; it's hung\naround long enough).\n\n> The counter argument is that there are more future users than there are\n> current users.\n\nEspecially if we drive away the current users :-(. 
In practice, we've\nheard very very few complaints about this, so my gut says to leave\nit alone.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Jul 2020 20:32:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multi-byte character case-folding" }, { "msg_contents": "\tTom Lane wrote:\n\n> CREATE TABLE public.\"myÉclass\" (\n> f1 text\n> );\n> \n> If we start to case-fold É, then the only way to access this table will\n> be by double-quoting its name, which the application probably is not\n> expecting (else it would have double-quoted in the original CREATE TABLE).\n\nThis problem already exists when migrating from a mono-byte database\nto a multi-byte database, since downcase_identifier() does use\ntolower() for mono-byte databases.\n\ndb9=# show server_encoding ;\n server_encoding \n-----------------\n LATIN9\n(1 row)\n\ndb9=# create table MYÉCLASS (f1 text);\nCREATE TABLE\n\ndb9=# \\d\n\t List of relations\n Schema | Name | Type | Owner \n--------+----------+-------+----------\n public | myéclass | table | postgres\n(1 row)\n\ndb9=# select * from MYÉCLASS;\n f1 \n----\n(0 rows)\n\npg_dump will dump this as\n\nCREATE TABLE public.\"myéclass\" (\n f1 text\n);\n\nSo far so good. But after importing this into an UTF-8 database,\nthe same \"select * from MYÉCLASS\" that used to work now fails:\n\nu8=# show server_encoding ;\n server_encoding \n-----------------\n UTF8\n(1 row)\n\nu8=# select * from MYÉCLASS;\nERROR:\trelation \"myÉclass\" does not exist\n\n\nThe compromise that is mentioned in downcase_identifier() justifying\nthis inconsistency is not very convincing, because the issues in case\nfolding due to linguistic differences exist both in mono-byte and\nmulti-byte encodings. 
For instance, if it's fine to trust the locale\nto downcase 'İ' in a LATIN5 db, it should be okay in a UTF-8 db too.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: https://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n", "msg_date": "Tue, 07 Jul 2020 13:33:16 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: Multi-byte character case-folding" }, { "msg_contents": "\"Daniel Verite\" <daniel@manitou-mail.org> writes:\n> \tTom Lane wrote:\n>> If we start to case-fold É, then the only way to access this table will\n>> be by double-quoting its name, which the application probably is not\n>> expecting (else it would have double-quoted in the original CREATE TABLE).\n\n> This problem already exists when migrating from a mono-byte database\n> to a multi-byte database, since downcase_identifier() does use\n> tolower() for mono-byte databases.\n\nSure, but that's a tiny minority of use-cases. In particular it would\nnot bite you after a straight upgrade to a new PG version.\n\n[ thinks... ] Wait, actually the described case would occur if you\nmigrated *from* UTF8 (no folding) to LATINn (with folding). That's\ngotta be an even tinier minority. Migration to UTF8 would show\ndifferent, though perhaps just as annoying, symptoms.\n\nAnyway, I freely concede that I'm ill-equipped to judge how annoying\nthis is, since I don't program in any languages where it'd make a\ndifference. But we mustn't fool ourselves: changing this would be\njust as dangerous as the standard_conforming_strings changeover was.\nI'm not really convinced it's worth it. In particular, I don't find\nthe \"it's required by the standard\" argument convincing. The standard\nrequires us to fold to upper case, too, but we've long ago decided to\njust say no to that. (Which reminds me: there are extensive threads in\nthe archives analyzing whether it's practical to support more than one\nfolding behavior. 
Those discussions would likely be relevant here.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Jul 2020 09:01:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multi-byte character case-folding" }, { "msg_contents": "On Mon, Jul 6, 2020 at 8:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> test=# create table MYÉCLASS (f1 text);\n> CREATE TABLE\n> test=# \\dt\n> List of relations\n> Schema | Name | Type | Owner\n> --------+----------+-------+----------\n> public | myÉclass | table | postgres\n> (1 row)\n>\n> pg_dump will render this as\n>\n> CREATE TABLE public.\"myÉclass\" (\n> f1 text\n> );\n>\n> If we start to case-fold É, then the only way to access this table will\n> be by double-quoting its name, which the application probably is not\n> expecting (else it would have double-quoted in the original CREATE TABLE).\n\nWhile this is true, it's also pretty hard to imagine a user being\nsatisfied with a table that ends up with this kind of mixed-case name.\n\nThat's not to say that I have any good idea what to do about this. I\njust disagree with labelling the above case as a success.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 8 Jul 2020 09:26:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Multi-byte character case-folding" }, { "msg_contents": "On 2020-Jul-08, Robert Haas wrote:\n\n> That's not to say that I have any good idea what to do about this. 
I\n> just disagree with labelling the above case as a success.\n\nYeah, particularly since it works differently in single-char encodings.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 8 Jul 2020 12:33:41 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Multi-byte character case-folding" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> That's not to say that I have any good idea what to do about this. I\n> just disagree with labelling the above case as a success.\n\nI can't say that I like it either. But I'm afraid that changing it now\nwill create many more problems than it solves.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Jul 2020 12:49:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multi-byte character case-folding" }, { "msg_contents": "On Mon, Jul 6, 2020 at 08:32:22PM -0400, Tom Lane wrote:\n> Yes, a GUC changing this would be a headache. It would be just as much of\n> a compatibility and security hazard as standard_conforming_strings (which\n> indeed I've been thinking of proposing that we get rid of; it's hung\n> around long enough).\n\n+1\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 14 Jul 2020 16:08:11 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Multi-byte character case-folding" } ]
[ { "msg_contents": "Hi Postgres team,\n\nI would like to know if PostgreSQL can be installed and used without any\nissues on Amazon Linux EC2 machines.\n\nhttps://www.postgresql.org/docs/11/supported-platforms.html\n\nI was going through the documentation and couldn't find very specific\ndetails related to support.\n\nAny input will be much helpful.\n\nWarm regards,\nAjay\n\nHi Postgres team,I would like to know if PostgreSQL can be installed and used without any issues on Amazon Linux EC2 machines.https://www.postgresql.org/docs/11/supported-platforms.htmlI was going through the documentation and couldn't find very specific details related to support.Any input will be much helpful.Warm regards,Ajay", "msg_date": "Mon, 6 Jul 2020 15:54:50 -0400", "msg_from": "Ajay Patel <mailajaypatel@gmail.com>", "msg_from_op": true, "msg_subject": "Question: PostgreSQL on Amazon linux EC2" }, { "msg_contents": "Em seg., 6 de jul. de 2020 às 21:55, Ajay Patel <mailajaypatel@gmail.com>\nescreveu:\n\n> Hi Postgres team,\n>\n> I would like to know if PostgreSQL can be installed and used without any\n> issues on Amazon Linux EC2 machines.\n>\n\nYes you can, but not with the repositories at yum.postgresql.org. There's a\ndependency of a package that only exists on RHEL or CentOS and fail.\n\n\n>\n> https://www.postgresql.org/docs/11/supported-platforms.html\n>\n> I was going through the documentation and couldn't find very specific\n> details related to support.\n>\n> Any input will be much helpful.\n>\n\nYou'll be able to :\n- compile PostgreSQL\n- download and install rpm packages by hand\n- use a Amazon provided repo that installs more recent Postgres versions -\nactually not up-to-date.\n\nAfter struggling with the points above I decided just to use CentOS on EC2.\nIt works perfectly and from CentOS 7 and up it is supported by AWS on all\ninstance types and their exquisite hardware like network interfaces for EBS\nperformance.\n\nFlavio Gurgel\n\nEm seg., 6 de jul. 
de 2020 às 21:55, Ajay Patel <mailajaypatel@gmail.com> escreveu:Hi Postgres team,I would like to know if PostgreSQL can be installed and used without any issues on Amazon Linux EC2 machines.Yes you can, but not with the repositories at yum.postgresql.org. There's a dependency of a package that only exists on RHEL or CentOS and fail. https://www.postgresql.org/docs/11/supported-platforms.htmlI was going through the documentation and couldn't find very specific details related to support.Any input will be much helpful.You'll be able to :- compile PostgreSQL- download and install rpm packages by hand-  use a Amazon provided repo that installs more recent Postgres versions - actually not up-to-date.After struggling with the points above I decided just to use CentOS on EC2. It works perfectly and from CentOS 7 and up it is supported by AWS on all instance types and their exquisite hardware like network interfaces for EBS performance.Flavio Gurgel", "msg_date": "Mon, 6 Jul 2020 22:45:13 +0200", "msg_from": "Flavio Henrique Araque Gurgel <fhagur@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Question: PostgreSQL on Amazon linux EC2" }, { "msg_contents": "Thank you Flavio, this is helpful.\n\nHave you faced any other challenges or any other problems after installing\nPostgres?\n\nWarm regards,\nAjay\n\nOn Mon, Jul 6, 2020 at 4:45 PM Flavio Henrique Araque Gurgel <\nfhagur@gmail.com> wrote:\n\n>\n> Em seg., 6 de jul. de 2020 às 21:55, Ajay Patel <mailajaypatel@gmail.com>\n> escreveu:\n>\n>> Hi Postgres team,\n>>\n>> I would like to know if PostgreSQL can be installed and used without any\n>> issues on Amazon Linux EC2 machines.\n>>\n>\n> Yes you can, but not with the repositories at yum.postgresql.org. 
There's\n> a dependency on a package that only exists on RHEL or CentOS, so it fails.\n>\n>\n>>\n>> https://www.postgresql.org/docs/11/supported-platforms.html\n>>\n>> I was going through the documentation and couldn't find very specific\n>> details related to support.\n>>\n>> Any input will be much helpful.\n>>\n>\n> You'll be able to:\n> - compile PostgreSQL\n> - download and install rpm packages by hand\n> - use an Amazon-provided repo that installs more recent Postgres versions,\n> though not fully up to date.\n>\n> After struggling with the points above I decided just to use CentOS on\n> EC2. It works perfectly and from CentOS 7 and up it is supported by AWS on\n> all instance types and their exquisite hardware like network interfaces for\n> EBS performance.\n>\n> Flavio Gurgel\n>\n>\n>", "msg_date": "Mon, 6 Jul 2020 17:30:10 -0400", "msg_from": "Ajay Patel <mailajaypatel@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Question: PostgreSQL on Amazon linux EC2" }, { "msg_contents": "> Thank you Flavio, this is helpful.\n>\n> Have you faced any other challenges or any other problems after installing\n> Postgres?\n>\n\nNo problem related to software installation whatsoever, you'll need to take\ncare of the same points as any Postgres usage and some cloud specific care\nlike replication between geographically separated instances (what they call\navailability zones) and disk performance.\nI suggest you try it and read a bit, you'll find a lot of feedback from\npeople that already did it on the internet and AWS documentation is key\nwhen dealing with their hardware specifics. I only learned how to do some\nstuff their way when I tried, even with AWS premium support at hand,\nbecause every database is different.\nQuestions on this list are better handled when you have a more specific\nquestion with a problem you're experiencing (like your first one) and the\nhackers list is more aimed at Postgres development; post user questions on\npgsql-general.\n\nFlavio Gurgel", "msg_date": "Tue, 7 Jul 2020 09:20:10 +0200", "msg_from": "Flavio Henrique Araque Gurgel <fhagur@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Question: PostgreSQL on Amazon linux EC2" }, { "msg_contents": "Thanks Flavio.\n\nI believe trying is the best way forward. Thank you for the guidance.\n\nWarm regards,\n\nAjay Patel\n\n\n\n> On Jul 7, 2020, at 3:20 AM, Flavio Henrique Araque Gurgel <fhagur@gmail.com> wrote:\n> \n> \n> \n>> Thank you Flavio, this is helpful.\n>> \n>> Have you faced any other challenges or any other problems after installing Postgres?\n> \n> No problem related to software installation whatsoever, you'll need to take care of the same points as any Postgres usage and some cloud specific care like replication between geographically separated instances (what they call availability zones) and disk performance.\n> I suggest you try it and read a bit, you'll find a lot of feedback from people that already did it on the internet and AWS documentation is key when dealing with their hardware specifics. I only learned how to do some stuff their way when I tried, even with AWS premium support at hand, because every database is different.\n> Questions on this list are better handled when you have a more specific question with a problem you're experiencing (like your first one) and the hackers list is more aimed at Postgres development; post user questions on pgsql-general.\n> \n> Flavio Gurgel\n> ", "msg_date": "Tue, 7 Jul 2020 07:41:15 -0400", "msg_from": "mailajaypatel@gmail.com", "msg_from_op": false, "msg_subject": "Re: Question: PostgreSQL on Amazon linux EC2" }, { "msg_contents": "On Mon, 6 Jul 2020 at 15:55, Ajay Patel <mailajaypatel@gmail.com> wrote:\n\n> Hi Postgres team,\n>\n> I would like to know if PostgreSQL can be installed and used without any\n> issues on Amazon Linux EC2 machines.\n>\n> https://www.postgresql.org/docs/11/supported-platforms.html\n>\n> I was going through the documentation and couldn't find very specific\n> details related to support.\n>\n> Any input will be much helpful.\n>\n\n In a way, this is not a whole lot different from asking,\n\n\"I would like to know if PostgreSQL can be installed and used without any\nissues on Dell server machines.\"\n\nIn that case, there could be questions about whether there are good drivers\nfor disk controllers that 
would vary from model to model, and some things\nlike that. But there are few up-front answers the way there used to be for\nhow to handle (say) different versions of AIX.\n\nAmazon EC2 provides virtualized \"gear\" that simulates x86-64 hardware\nreasonably decently; there can certainly be performance issues relating to\nhow fast their simulated disk is, and how fast their simulated network is.\n\nBut there are no highly-specific-to-EC2 details related to hardware\nsupport, as you noticed on that web page.\n\nIf you do not have performance or load requirements that are so high that\nthey point at edge cases where the EC2 virtualized environment starts to\nbreak down, then it's probably mostly smooth sailing.\n\nYou need to be aware that they do not promise super-high-availability, so\nyou should be sure to keep good backups lest your server gets dropped on\nthe floor and you lose all your data. I'm not sure there's good stats just\nyet as to how often that happens. But it isn't difficult to provision a\npgbackrest server that will capture backups into S3 cloud storage to help\nprotect from that.\n-- \nWhen confronted by a difficult problem, solve it by reducing it to the\nquestion, \"How would the Lone Ranger handle this?\"", "msg_date": "Tue, 7 Jul 2020 15:08:38 -0400", "msg_from": "Christopher Browne <cbbrowne@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Question: PostgreSQL on Amazon linux EC2" } ]
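The install options discussed in the thread above (an Amazon-provided repo, hand-installed RPMs, or a source build) can be sketched as a small probe script. This is a minimal sketch, not an official procedure: the extras topic name `postgresql11` and the package name `postgresql15-server` are assumptions; verify them on the actual instance with `amazon-linux-extras list` or `dnf search postgresql`.

```shell
#!/bin/sh
# Suggest an install path for PostgreSQL depending on which package
# tooling this host has. Package/topic names below are assumptions;
# check them against the instance's own repositories.
suggest_install() {
    if command -v amazon-linux-extras >/dev/null 2>&1; then
        # Amazon Linux 2 ships PostgreSQL via an "extras" topic.
        echo "suggest: sudo amazon-linux-extras install postgresql11"
    elif command -v dnf >/dev/null 2>&1; then
        # Amazon Linux 2023 (and other dnf-based systems).
        echo "suggest: sudo dnf install postgresql15-server"
    elif command -v yum >/dev/null 2>&1; then
        # Hand-downloaded RPMs remain an option when no repo fits.
        echo "suggest: download RPMs and run 'sudo yum localinstall <rpms>'"
    else
        # Last resort mentioned in the thread: build from source.
        echo "suggest: ./configure && make && sudo make install"
    fi
}

suggest_install
```

Whatever branch fires, the post-install care Flavio and Christopher describe (cross-AZ replication, EBS performance, backups to S3, e.g. via pgbackrest) applies unchanged.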
[ { "msg_contents": "Hi,\n\nI found an issue while executing a backup use case(please see [1] for\nqueries) on postgres version 12.\n\nFirstly, pg_start_backup registers nonexclusive_base_backup_cleanup as\non_shmem_exit call back which will\nadd this function to the before_shmem_exit_list array which is\nsupposed to be removed on pg_stop_backup\nso that we can do the pending cleanup and issue a warning for each\npg_start_backup for which we did not call\nthe pg_stop backup. Now, I executed a query for which JIT is picked\nup, then the the llvm compiler inserts it's\nown exit callback i.e. llvm_shutdown at the end of\nbefore_shmem_exit_list array. Now, I executed pg_stop_backup\nand call to cancel_before_shmem_exit() is made with the expectation\nthat the nonexclusive_base_backup_cleanup\ncallback is removed from before_shmem_exit_list array.\n\nSince the cancel_before_shmem_exit() only checks the last entry in the\nbefore_shmem_exit_list array, which is\nllvm compiler's exit callback, so we exit the\ncancel_before_shmem_exit() without removing the intended\nnonexclusive_base_backup_cleanup callback which remains still the\nbefore_shmem_exit_list and gets executed\nduring the session exit throwing a warning \"aborting backup due to\nbackend exiting before pg_stop_backup was called\",\nwhich is unintended.\n\nAttached is the patch that fixes the above problem by making\ncancel_before_shmem_exit() to look for the\ngiven function(and for given args) in the entire\nbefore_shmem_exit_list array, not just the last entry, starting\nfrom the last entry.\n\nRequest the community take this patch for review for v12.\n\nHaving said that, abovementioned problem for backup use case does not\noccur for v13 and latest versions of\npostgres (please below description[2]), but these kinds of issues can\ncome, if the cancel_before_shmem_exit()\nis left to just look at the last array entry while removing a\nregistered callback.\n\nThere's also a comment in cancel_before_shmem_exit() 
function\ndescription \"For simplicity, only the latest entry\ncan be removed. (We could work harder but there is no need for current uses.)\n\nSince we start to find use cases now, there is a need to enhance\ncancel_before_shmem_exit(), so I also propose\nto have the same attached patch for v13 and latest versions.\n\nThoughts?\n\n[1]\nCREATE TABLE t1 (id SERIAL);\nINSERT INTO t1 (id) SELECT * FROM generate_series(1, 20000000);\nSELECT * FROM pg_start_backup('label', false, false);\n/*JIT is enabled in my session and it is being picked by below query*/\nEXPLAIN (ANALYZE, VERBOSE, BUFFERS) SELECT COUNT(*) FROM t1;\nSELECT * FROM pg_stop_backup(false, true);\n\n[2]\nfor v13 and latest versions, start_backup first registers do_pg_abort_backup,\nand then pg_start_backup_callback, performs startup backup operations\nand unregisters only pg_start_backup_callback from before_shmem_exit_list,\nretaining do_pg_abort_backup still in the list, which is to be called\non session's exit\nJIT compiler inserts it's own exit call back at the end of\nbefore_shmem_exit_list array.\nstop_backup registers pg_stop_backup_callback, performs stop operations,\nunregisters pg_stop_backup_callback from before_shmem_exit_list, and sets\nthe sessionBackupState = SESSION_BACKUP_NONE, note that the callback\ndo_pg_abort_backup registered by start_backup command still exists in the\nbefore_shmem_exit_list and will not be removed by stop_backup. 
On session exit,\ndo_pg_abort_backup gets called but returns without performing any operations(not\neven throwing a warning), by checking sessionBackupState which was set to\nSESSION_BACKUP_NONE by stop_backup.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 7 Jul 2020 12:05:31 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> Firstly, pg_start_backup registers nonexclusive_base_backup_cleanup as\n> on_shmem_exit call back which will\n> add this function to the before_shmem_exit_list array which is\n> supposed to be removed on pg_stop_backup\n> so that we can do the pending cleanup and issue a warning for each\n> pg_start_backup for which we did not call\n> the pg_stop backup. Now, I executed a query for which JIT is picked\n> up, then the the llvm compiler inserts it's\n> own exit callback i.e. llvm_shutdown at the end of\n> before_shmem_exit_list array. Now, I executed pg_stop_backup\n> and call to cancel_before_shmem_exit() is made with the expectation\n> that the nonexclusive_base_backup_cleanup\n> callback is removed from before_shmem_exit_list array.\n\nI'm of the opinion that the JIT code is abusing this mechanism, and the\nright thing to do is fix that. The restriction you propose to remove is\nnot just there out of laziness, it's an expectation about what safe use of\nthis mechanism would involve. 
Un-ordered removal of callbacks seems\npretty risky: it would mean that whatever cleanup is needed is not going\nto be done in LIFO order.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Jul 2020 09:44:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "Hi,\n\nOn 2020-07-07 09:44:41 -0400, Tom Lane wrote:\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > Firstly, pg_start_backup registers nonexclusive_base_backup_cleanup as\n> > on_shmem_exit call back which will\n> > add this function to the before_shmem_exit_list array which is\n> > supposed to be removed on pg_stop_backup\n> > so that we can do the pending cleanup and issue a warning for each\n> > pg_start_backup for which we did not call\n> > the pg_stop backup. Now, I executed a query for which JIT is picked\n> > up, then the the llvm compiler inserts it's\n> > own exit callback i.e. llvm_shutdown at the end of\n> > before_shmem_exit_list array. Now, I executed pg_stop_backup\n> > and call to cancel_before_shmem_exit() is made with the expectation\n> > that the nonexclusive_base_backup_cleanup\n> > callback is removed from before_shmem_exit_list array.\n> \n> I'm of the opinion that the JIT code is abusing this mechanism, and the\n> right thing to do is fix that.\n\nWhat are you proposing? For now we could easily enough work around this\nby just making it a on_proc_exit() callback, but that doesn't really\nchange the fundamental issue imo.\n\n\n> The restriction you propose to remove is not just there out of\n> laziness, it's an expectation about what safe use of this mechanism\n> would involve. 
Un-ordered removal of callbacks seems pretty risky: it\n> would mean that whatever cleanup is needed is not going to be done in\n> LIFO order.\n\nMaybe I am confused, but isn't it <13's pg_start_backup() that's\nviolating the protocol much more clearly than the JIT code? Given that\nit relies on there not being any callbacks registered between two SQL\nfunction calls? I mean, what it does is basically:\n\n1) before_shmem_exit(nonexclusive_base_backup_cleanup...\n2) arbitrary code executed for a long time\n3) cancel_before_shmem_exit(nonexclusive_base_backup_cleanup...\n\nwhich pretty obviously can't at all deal with any other\nbefore_shmem_exit callbacks being registered in 2). Won't this be a\nproblem for every other before_shmem_exit callback that we register\non-demand? Say Async_UnlistenOnExit, RemoveTempRelationsCallback?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 7 Jul 2020 09:54:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "On Tue, Jul 7, 2020 at 10:24 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2020-07-07 09:44:41 -0400, Tom Lane wrote:\n> > Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > > Firstly, pg_start_backup registers nonexclusive_base_backup_cleanup as\n> > > on_shmem_exit call back which will\n> > > add this function to the before_shmem_exit_list array which is\n> > > supposed to be removed on pg_stop_backup\n> > > so that we can do the pending cleanup and issue a warning for each\n> > > pg_start_backup for which we did not call\n> > > the pg_stop backup. Now, I executed a query for which JIT is picked\n> > > up, then the the llvm compiler inserts it's\n> > > own exit callback i.e. llvm_shutdown at the end of\n> > > before_shmem_exit_list array. 
Now, I executed pg_stop_backup\n> > > and call to cancel_before_shmem_exit() is made with the expectation\n> > > that the nonexclusive_base_backup_cleanup\n> > > callback is removed from before_shmem_exit_list array.\n> >\n> > I'm of the opinion that the JIT code is abusing this mechanism, and the\n> > right thing to do is fix that.\n>\n> What are you proposing? For now we could easily enough work around this\n> by just making it a on_proc_exit() callback, but that doesn't really\n> change the fundamental issue imo.\n>\n> > The restriction you propose to remove is not just there out of\n> > laziness, it's an expectation about what safe use of this mechanism\n> > would involve. Un-ordered removal of callbacks seems pretty risky: it\n> > would mean that whatever cleanup is needed is not going to be done in\n> > LIFO order.\n>\n\nI quickly searched(in HEAD) what all the callbacks are getting\nregistered to before_shmem_exit_list, with the intention to see if\nthey also call corresponding cancel_before_shmem_exit() after their\nintended usage is done.\n\nFor few of the callbacks there is no cancel_before_shmem_exit(). This\nseems expected; those callbacks ought to be executed before shmem\nexit. 
These callbacks are(let say SET 1): ShutdownPostgres,\nlogicalrep_worker_onexit, llvm_shutdown, Async_UnlistenOnExit,\nRemoveTempRelationsCallback, ShutdownAuxiliaryProcess,\ndo_pg_abort_backup in xlog.c (this callback exist only in v13 or\nlater), AtProcExit_Twophase.\n\nWhich means, once they are into the before_shmem_exit_list array, in\nsome order, they are never going to be removed from it as they don't\nhave corresponding cancel_before_shmem_exit() and the relative order\nof execution remains the same.\n\nAnd there are other callbacks that are getting registered to\nbefore_shmem_exit_list array(let say SET 2): apw_detach_shmem,\n_bt_end_vacuum_callback, pg_start_backup_callback,\npg_stop_backup_callback, createdb_failure_callback,\nmovedb_failure_callback, do_pg_abort_backup(in basebackup.c). They all\nhave corresponding cancel_before_shmem_exit() to unregister/remove the\ncallbacks from before_shmem_exit_list array.\n\nI think the callbacks that have no cancel_before_shmem_exit()(SET 1)\nmay have to be executed in the LIFO order: it makes sense to execute\nShutdownPostgres at the end after let's say other callbacks in SET 1.\n\nAnd the SET 2 callbacks have cancel_before_shmem_exit() with the only\nintention that there's no need to call the callbacks on the\nbefore_shmem_exit(), since they are not needed, and try to remove from\nthe before_shmem_exit_list array and may fail, if any other callback\ngets registered in between.\n\nIf I'm not wrong with the above points, we must enhance\ncancel_before_shmem_exit() or have cancel_before_shmem_exit_v2() (as\nmentioned in my below response).\n\n>\n> Maybe I am confused, but isn't it <13's pg_start_backup() that's\n> violating the protocol much more clearly than the JIT code? Given that\n> it relies on there not being any callbacks registered between two SQL\n> function calls? 
I mean, what it does is basically:\n>\n> 1) before_shmem_exit(nonexclusive_base_backup_cleanup...\n> 2) arbitrary code executed for a long time\n> 3) cancel_before_shmem_exit(nonexclusive_base_backup_cleanup...\n>\n> which pretty obviously can't at all deal with any other\n> before_shmem_exit callbacks being registered in 2). Won't this be a\n> problem for every other before_shmem_exit callback that we register\n> on-demand? Say Async_UnlistenOnExit, RemoveTempRelationsCallback?\n>\n\nYes, for versions <13's, clearly pg_start_backup causes the problem\nand the issue can also be reproduced with Async_UnlistenOnExit,\nRemoveTempRelationsCallback coming in between pg_start_backup and\npg_stop_backup.\n\nWe can have it fixed in a few ways: 1) enhance\ncancel_before_shmem_exit() as attached in the original patch. 2) have\nexisting cancel_before_shmem_exit(), whenever called for\nnonexclusive_base_backup_cleanup(), we can look for the entire array\ninstead of just the last entry. 3) have a separate function, say,\ncancel_before_shmem_exit_v2(), that searches for the entire\nbefore_shmem_exit_list array(the logic proposed in this patch) so that\nit will not disturb the existing cancel_before_shmem_exit(). 4) or try\nto have the pg_start_backup code that exists in after > 13 versions.\n\nIf okay to have cancel_before_shmem_exit_v2() for versions < 13's,\nwith the searching for the entire array instead of just the last\nelement to fix the abort issue, maybe we can have this function in\nversion 13 and latest as well(at least with disable mode, something\nlike #if 0 ... #endif), so that in case if any of similar issues arise\nwe could just quickly reuse.\n\nIf the before_shmem_exit_list array is to be used in LIFO order, do we\nhave some comment/usage guideline mentioned in the ipc.c/.h? 
I didn't\nfind one.\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Jul 2020 14:11:29 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "On Tue, Jul 7, 2020 at 12:55 PM Andres Freund <andres@anarazel.de> wrote:\n> What are you proposing? For now we could easily enough work around this\n> by just making it a on_proc_exit() callback, but that doesn't really\n> change the fundamental issue imo.\n\nI think it would be more correct for it to be an on_proc_exit()\ncallback, because before_shmem_exit() callbacks can and do perform\nactions which rely on an awful lot of the system being still in a\nworking state. RemoveTempRelationsCallback() is a good example: it\nthinks it can start and end transactions and make a bunch of catalog\nchanges. I don't know that any of that could use JIT, but moving the\nJIT cleanup to the on_shmem_exit() stage seems better. At that point,\nthere shouldn't be anybody doing anything that relies on being able to\nperform logical changes to the database; we're just shutting down\nlow-level subsystems at that point, and thus presumably not doing\nanything that could possibly need JIT.\n\nBut I also agree that what pg_start_backup() was doing before v13 was\nwrong; that's why I committed\n303640199d0436c5e7acdf50b837a027b5726594. The only reason I didn't\nback-patch it is because the consequences are so minor I didn't think\nit was worth worrying about. We could, though. 
I'd be somewhat\ninclined to both do that and also change LLVM to use on_proc_exit() in\nmaster, but I don't feel super-strongly about it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 20 Jul 2020 15:47:00 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "On Tue, Jul 21, 2020 at 1:17 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Jul 7, 2020 at 12:55 PM Andres Freund <andres@anarazel.de> wrote:\n> > What are you proposing? For now we could easily enough work around this\n> > by just making it a on_proc_exit() callback, but that doesn't really\n> > change the fundamental issue imo.\n>\n> I think it would be more correct for it to be an on_proc_exit()\n> callback, because before_shmem_exit() callbacks can and do perform\n> actions which rely on an awful lot of the system being still in a\n> working state. RemoveTempRelationsCallback() is a good example: it\n> thinks it can start and end transactions and make a bunch of catalog\n> changes. I don't know that any of that could use JIT, but moving the\n> JIT cleanup to the on_shmem_exit() stage seems better. At that point,\n> there shouldn't be anybody doing anything that relies on being able to\n> perform logical changes to the database; we're just shutting down\n> low-level subsystems at that point, and thus presumably not doing\n> anything that could possibly need JIT.\n>\n\nI looked at what actually llvm_shutdown() does? 
It frees up JIT stacks,\nalso if exists perf related resource, using LLVMOrcDisposeInstance() and\nLLVMOrcUnregisterPerf(), that were dynamically allocated in\nllvm_session_initialize through a JIT library function\nLLVMOrcCreateInstance() [1].\n\nIt looks like there is no problem in moving llvm_shutdown to either\non_shmem_exit() or on_proc_exit().\n\n[1] - https://llvm.org/doxygen/OrcCBindings_8cpp_source.html\n\n>\n> But I also agree that what pg_start_backup() was doing before v13 was\n> wrong; that's why I committed\n> 303640199d0436c5e7acdf50b837a027b5726594. The only reason I didn't\n> back-patch it is because the consequences are so minor I didn't think\n> it was worth worrying about. We could, though. I'd be somewhat\n> inclined to both do that and also change LLVM to use on_proc_exit() in\n> master, but I don't feel super-strongly about it.\n>\n\nPatch: v1-0001-Move-llvm_shutdown-to-on_proc_exit-list-from-befo.patch\nMoved llvm_shutdown to on_proc_exit() call back list. Request to consider\nthis change for master, if possible <=13 versions. Basic JIT use cases and\nregression tests are working fine with the patch.\n\nPatches: PG11-0001-Fix-minor-problems-with-non-exclusive-backup-clea.patch\nand PG12-0001-Fix-minor-problems-with-non-exclusive-backup-cleanup.patch\nRequest to consider the commit\n303640199d0436c5e7acdf50b837a027b5726594(above two patches are for this\ncommit) to versions < 13, to fix the abort issue. 
Please note that the\nabove two patches have no difference in the code, just I made it applicable\non PG11.\n\nPatch: v1-0001-Modify-cancel_before_shmem_exit-comments.patch\nThis patch, modifies cancel_before_shmem_exit() function comment to reflect\nthe safe usage of before_shmem_exit_list callback mechanism and also\nremoves the point \"For simplicity, only the latest entry can be\nremoved*********\" as this gives a meaning that there is still scope for\nimprovement in cancel_before_shmem_exit() search mechanism.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 24 Jul 2020 16:39:52 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "On Fri, Jul 24, 2020 at 7:10 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> I looked at what actually llvm_shutdown() does? It frees up JIT stacks, also if exists perf related resource, using LLVMOrcDisposeInstance() and LLVMOrcUnregisterPerf(), that were dynamically allocated in llvm_session_initialize through a JIT library function LLVMOrcCreateInstance() [1].\n>\n> It looks like there is no problem in moving llvm_shutdown to either on_shmem_exit() or on_proc_exit().\n\nIf it doesn't involve shared memory, I guess it can be on_proc_exit()\nrather than on_shmem_exit().\n\nI guess the other question is why we're doing it at all. 
What\nresources are being allocated that wouldn't be freed up by process\nexit anyway?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 24 Jul 2020 10:37:40 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "On Fri, Jul 24, 2020 at 8:07 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Jul 24, 2020 at 7:10 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > I looked at what actually llvm_shutdown() does? It frees up JIT stacks,\nalso if exists perf related resource, using LLVMOrcDisposeInstance() and\nLLVMOrcUnregisterPerf(), that were dynamically allocated in\nllvm_session_initialize through a JIT library function\nLLVMOrcCreateInstance() [1].\n> >\n> > It looks like there is no problem in moving llvm_shutdown to either\non_shmem_exit() or on_proc_exit().\n>\n> If it doesn't involve shared memory, I guess it can be on_proc_exit()\n> rather than on_shmem_exit().\n>\n> I guess the other question is why we're doing it at all. What\n> resources are being allocated that wouldn't be freed up by process\n> exit anyway?\n>\n\nLLVMOrcCreateInstance() and LLVMOrcDisposeInstance() are doing new and\ndelete respectively, I just found these functions from the link [1]. But I\ndon't exactly know whether there are any other resources being allocated\nthat can't be freed up by proc_exit(). 
Tagging @Andres Freund for inputs\non whether we have any problem making llvm_shutdown() a on_proc_exit()\ncallback instead of before_shmem_exit() callback.\n\nAnd as suggested in the previous mails, we wanted to make it on_proc_exit()\nto avoid the abort issue reported in this mail chain, however if we take\nthe abort issue fix commit # 303640199d0436c5e7acdf50b837a027b5726594 as\nmentioned in the previous response[2], then it may not be necessary, right\nnow, but just to be safer and to avoid any of these similar kind of issues\nin future, we can consider this change as well.\n\n[1] - https://llvm.org/doxygen/OrcCBindings_8cpp_source.html\n\n LLVMOrcJITStackRef LLVMOrcCreateInstance(LLVMTargetMachineRef TM) {\n TargetMachine *TM2(unwrap(TM));\n Triple T(TM2->getTargetTriple());\n auto IndirectStubsMgrBuilder =\n orc::createLocalIndirectStubsManagerBuilder(T);\n OrcCBindingsStack *JITStack =\n new OrcCBindingsStack(*TM2, std::move(IndirectStubsMgrBuilder));\n return wrap(JITStack);\n }\n\nLLVMErrorRef LLVMOrcDisposeInstance(LLVMOrcJITStackRef JITStack) {\n auto *J = unwrap(JITStack);\n auto Err = J->shutdown();\n delete J;\n return wrap(std::move(Err));\n }\n\n[2] -\nhttps://www.postgresql.org/message-id/CALj2ACVwOKZ8qYUsZrU2y2efnYZOLRxPC6k52FQcB3oriH9Kcg%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 27 Jul 2020 12:29:28 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, {
"msg_contents": "On Mon, Jul 20, 2020 at 3:47 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> But I also agree that what pg_start_backup() was doing before v13 was\n> wrong; that's why I committed\n> 303640199d0436c5e7acdf50b837a027b5726594. The only reason I didn't\n> back-patch it is because the consequences are so minor I didn't think\n> it was worth worrying about. We could, though. I'd be somewhat\n> inclined to both do that and also change LLVM to use on_proc_exit() in\n> master, but I don't feel super-strongly about it.\n\nUnless somebody complains pretty soon, I'm going to go ahead and do\nwhat is described above.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 30 Jul 2020 08:11:19 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "On Thu, Jul 30, 2020 at 8:11 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Unless somebody complains pretty soon, I'm going to go ahead and do\n> what is described above.\n\nDone.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 6 Aug 2020 14:21:01 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, {
"msg_contents": "On Thu, Aug 6, 2020 at 11:51 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jul 30, 2020 at 8:11 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Unless somebody complains pretty soon, I'm going to go ahead and do\n> > what is described above.\n>\n> Done.\n>\n\nThanks!\n\nI have one more request to make: since we are of the opinion to not\nchange the way cancel_before_shmem_exit() searches\nbefore_shmem_exit_list array, wouldn't it be good to adjust comments\nbefore the function cancel_before_shmem_exit()?\n\nI sent the patch previously[1], but attaching here again, modifies\ncancel_before_shmem_exit() function comment to reflect the safe usage\nof before_shmem_exit_list callback mechanism and also removes the\npoint \"For simplicity, only the latest entry can be removed*********\"\nas this gives a meaning that there is still scope for improvement in\ncancel_before_shmem_exit() search mechanism.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/CALj2ACVwOKZ8qYUsZrU2y2efnYZOLRxPC6k52FQcB3oriH9Kcg%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 7 Aug 2020 09:16:19 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "On Thu, Aug 6, 2020 at 11:46 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> I sent the patch previously[1], but attaching here again, modifies\n> cancel_before_shmem_exit() function comment to reflect the safe usage\n> of before_shmem_exit_list callback mechanism and also removes the\n> point \"For simplicity, only the latest entry can be removed*********\"\n> as this gives a meaning that there is still scope for improvement in\n> cancel_before_shmem_exit() search mechanism.\n>\n> Thoughts?\n\nI think that the first part of the 
comment change you suggest is a\ngood idea and would avoid developer confusion, but I think that the\nstatement about unordered removal of comments being risky doesn't add\nmuch. It's too vague to help anybody and I don't think I believe it,\neither. So I suggest something more like:\n\n- * callback. For simplicity, only the latest entry can be\n- * removed. (We could work harder but there is no need for\n- * current uses.)\n+ * callback. We only look at the latest entry for removal, as we\n+ * expect the caller to use before_shmem_exit callback mechanism\n+ * in the LIFO order.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 Aug 2020 12:29:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> ... So I suggest something more like:\n\n> - * callback. For simplicity, only the latest entry can be\n> - * removed. (We could work harder but there is no need for\n> - * current uses.)\n> + * callback. We only look at the latest entry for removal, as we\n> + * expect the caller to use before_shmem_exit callback mechanism\n> + * in the LIFO order.\n\nThat's a meaningless statement for any one caller. So it needs to be more\nlike \"we expect callers to add and remove temporary before_shmem_exit\ncallbacks in strict LIFO order\".\n\nI wonder whether we ought to change the function to complain if the\nlast list entry doesn't match. 
We'd have caught this bug sooner\nif it did, and it's not very clear why silently doing nothing is\na good idea when there's no match.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Aug 2020 13:12:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "On Fri, Aug 7, 2020 at 1:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> That's a meaningless statement for any one caller. So it needs to be more\n> like \"we expect callers to add and remove temporary before_shmem_exit\n> callbacks in strict LIFO order\".\n\nSure, that seems fine.\n\n> I wonder whether we ought to change the function to complain if the\n> last list entry doesn't match. We'd have caught this bug sooner\n> if it did, and it's not very clear why silently doing nothing is\n> a good idea when there's no match.\n\n+1.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 Aug 2020 13:39:21 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "Hi,\n\nOn 2020-08-07 12:29:03 -0400, Robert Haas wrote:\n> On Thu, Aug 6, 2020 at 11:46 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > I sent the patch previously[1], but attaching here again, modifies\n> > cancel_before_shmem_exit() function comment to reflect the safe usage\n> > of before_shmem_exit_list callback mechanism and also removes the\n> > point \"For simplicity, only the latest entry can be removed*********\"\n> > as this gives a meaning that there is still scope for improvement in\n> > cancel_before_shmem_exit() search mechanism.\n> >\n> > Thoughts?\n> \n> I think that the first part of the comment change you 
suggest is a\n> good idea and would avoid developer confusion, but I think that the\n> statement about unordered removal of comments being risky doesn't add\n> much. It's too vague to help anybody and I don't think I believe it,\n> either. So I suggest something more like:\n> \n> - * callback. For simplicity, only the latest entry can be\n> - * removed. (We could work harder but there is no need for\n> - * current uses.)\n> + * callback. We only look at the latest entry for removal, as we\n> + * expect the caller to use before_shmem_exit callback mechanism\n> + * in the LIFO order.\n\nIn which situations is the removal actually useful *and* safe, with\nthese constraints? You'd have to have a very narrow set of functions\nthat are called while the exit hook is present, i.e. basically this\nwould only be usable for PG_ENSURE_ERROR_CLEANUP and nothing else. And\neven there it seems like it's pretty easy to get into a situation where\nit's not safe.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 7 Aug 2020 14:20:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "On Fri, Aug 7, 2020 at 11:09 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Aug 7, 2020 at 1:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > That's a meaningless statement for any one caller. So it needs to be more\n> > like \"we expect callers to add and remove temporary before_shmem_exit\n> > callbacks in strict LIFO order\".\n>\n> Sure, that seems fine.\n>\n\nv2 patch has the comments modified.\n\n>\n> > I wonder whether we ought to change the function to complain if the\n> > last list entry doesn't match. We'd have caught this bug sooner\n> > if it did, and it's not very clear why silently doing nothing is\n> > a good idea when there's no match.\n>\n> +1.\n>\n\nThis is a good idea. 
v3 patch has both the modified comments(from v2)\nas well as a DEBUG3 (DEBUG3 level, because the other\nnon-error/non-fatal logs in ipc.c are using the same level) log to\nreport when the latest entry for removal is not matched with the one\nthe caller cancel_before_shmem_exit() is looking for and a hint on how\nto safely use temporary before_shmem_exit() callbacks. In v3 patch,\nfunction pointer is being printed, I'm not sure how much it is helpful\nto have function pointers in the logs though there are some other\nplaces printing pointers into the logs, I wish I could print function\nnames. (Is there a way we could get function names from function\npointers?).\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 10 Aug 2020 16:22:21 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "On Fri, Aug 7, 2020 at 5:20 PM Andres Freund <andres@anarazel.de> wrote:\n> In which situations is the removal actually useful *and* safe, with\n> these constraints? You'd have to have a very narrow set of functions\n> that are called while the exit hook is present, i.e. basically this\n> would only be usable for PG_ENSURE_ERROR_CLEANUP and nothing else. And\n> even there it seems like it's pretty easy to get into a situation where\n> it's not safe.\n\nWell, I don't really care whether or not we change this function to\niterate over the callback list or whether we add a warning that you\nneed to use it in LIFO order, but I think we should do one or the\nother, because this same confusion has come up multiple times. I\nthought that Tom was opposed to making it iterate over the callback\nlist (for reasons I don't really understand, honestly) so adding a\ncomment and a cross-check seemed like the practical option. 
Now I also\nthink it's fine to iterate over the callback list: this function\ndoesn't get used so much that it's likely to be a performance problem,\nand I don't think this is the first bug that would have become a\nnon-bug had we done that years and years ago whenever it was first\nproposed. In fact, I'd go so far as to say that the latter is a\nslightly better option. However, doing nothing is clearly worst.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 10 Aug 2020 10:29:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Well, I don't really care whether or not we change this function to\n> iterate over the callback list or whether we add a warning that you\n> need to use it in LIFO order, but I think we should do one or the\n> other, because this same confusion has come up multiple times. I\n> thought that Tom was opposed to making it iterate over the callback\n> list (for reasons I don't really understand, honestly) so adding a\n> comment and a cross-check seemed like the practical option. Now I also\n> think it's fine to iterate over the callback list: this function\n> doesn't get used so much that it's likely to be a performance problem,\n> and I don't think this is the first bug that would have become a\n> non-bug had we done that years and years ago whenever it was first\n> proposed. In fact, I'd go so far as to say that the latter is a\n> slightly better option. However, doing nothing is clearly worst.\n\nI agree that doing nothing seems like a bad idea. My concern about\nallowing non-LIFO callback removal is that it seems to me that such\nusage patterns have likely got *other* bugs, so we should discourage\nthat. 
These callbacks don't exist in a vacuum: they reflect that\nthe mainline code path has set up, or torn down, important state.\nNon-LIFO usage requires very strong assumptions that the states in\nquestion are not interdependent, and that's something I'd rather not\nrely on if we don't have to.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 Aug 2020 10:44:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "On Mon, Aug 10, 2020 at 10:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I agree that doing nothing seems like a bad idea. My concern about\n> allowing non-LIFO callback removal is that it seems to me that such\n> usage patterns have likely got *other* bugs, so we should discourage\n> that. These callbacks don't exist in a vacuum: they reflect that\n> the mainline code path has set up, or torn down, important state.\n> Non-LIFO usage requires very strong assumptions that the states in\n> question are not interdependent, and that's something I'd rather not\n> rely on if we don't have to.\n\nI have mixed feelings about this. On the one hand, I think saying that\nit requires \"very strong assumptions\" about interdependence makes it\nsound scarier than it is -- not that your statement is wrong, but that\nin most cases we kinda know whether that's true or not, and the idea\nthat you shouldn't remove a callback upon which some later callback\nmight be depending is not too difficult for anybody to understand. On\nthe other hand, I think that in most cases we ought to be discouraging\npeople who are trying to do per-subsystem cleanup from using\nbefore_shmem_exit() at all. I think such callbacks ought to be done\nusing on_shmem_exit() if they involve shared memory or on_proc_exit()\nif they do not. 
Those functions don't have cancel_blah_exit()\nvariants, and I don't think they should: the right way to code those\nthings is not to remove the callbacks from the stack when they're no\nlonger needed, but rather to code the callbacks so that they will do\nnothing if no work is required, leaving them permanently registered.\n\nAnd the main reason why I think that such callbacks should be\nregistered using on_shmem_exit() or on_proc_exit() rather than\nbefore_shmem_exit() is because of (1) the interdependency issue you\nraise and (2) the fact that cancel_before_shmem_exit doesn't do what\npeople tend to think it does. For an example of (1), look at\nShutdownPostgres() and RemoveTempRelationsCallback(). The latter\naborts out of any transaction and then starts a new one to drop your\ntemp schema. But that might fail, so ShutdownPostgres() also needs to\nbe prepared to AbortOutOfAnyTransaction(). If you inserted more\ncleanup steps that were thematically similar to those, each one of\nthem would also need to begin with AbortOutOfAnyTransaction(). That\nsuggests that this whole thing is a bit under-engineered. Some of the\nother before_shmem_exit() callbacks don't start with that incantation,\nbut that's because they are per-subsystem callbacks that likely ought\nto be using on_shmem_exit() rather than actually being the same sort\nof thing.\n\nPerhaps we really have four categories here:\n(1) Temporary handlers for PG_ENSURE_ERROR_CLEANUP().\n(2) High-level cleanup that needs to run after aborting out of the\ncurrent transaction.\n(3) Per-subsystem shutdown for shared memory stuff.\n(4) Per-subsystem shutdown for backend-private stuff.\n\nRight now we blend (1), (2), and some of (3) together, but we could\ntry to create a cleaner line. We could redefine before_shmem_exit() as\nbeing exactly #2, and abort out of any transaction before calling each\nstep, and document that you shouldn't use it unless you need that\nbehavior. 
And we could have a separate stack for #1 that is explicitly\nLIFO and not intended for any other use. But then again maybe that's\noverkill. What I do think we should do, after thinking about it more,\nis discourage the casual use of before_shmem_exit() for things where\non_shmem_exit() or on_proc_exit() would be just as good. I think\nthat's what would avoid the most problems here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 10 Aug 2020 15:33:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Perhaps we really have four categories here:\n> (1) Temporary handlers for PG_ENSURE_ERROR_CLEANUP().\n> (2) High-level cleanup that needs to run after aborting out of the\n> current transaction.\n> (3) Per-subsystem shutdown for shared memory stuff.\n> (4) Per-subsystem shutdown for backend-private stuff.\n\nHmm, I don't think we actually have any of (2) do we? Or at least\nwe aren't using ipc.c callbacks for them.\n\n> What I do think we should do, after thinking about it more,\n> is discourage the casual use of before_shmem_exit() for things where\n> on_shmem_exit() or on_proc_exit() would be just as good. I think\n> that's what would avoid the most problems here.\n\nI think we're mostly in violent agreement here. 
The interesting\nquestion seems to be Andres' one about whether before_shmem_exit\nactually has any safe use except for PG_ENSURE_ERROR_CLEANUP.\nIt may not, in which case perhaps we oughta rename it?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 Aug 2020 15:41:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "On Mon, Aug 10, 2020 at 3:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Perhaps we really have four categories here:\n> > (1) Temporary handlers for PG_ENSURE_ERROR_CLEANUP().\n> > (2) High-level cleanup that needs to run after aborting out of the\n> > current transaction.\n> > (3) Per-subsystem shutdown for shared memory stuff.\n> > (4) Per-subsystem shutdown for backend-private stuff.\n>\n> Hmm, I don't think we actually have any of (2) do we? Or at least\n> we aren't using ipc.c callbacks for them.\n\nWell, I was thinking about the place where ShutdownPostgres() does\nLockReleaseAll(), and also the stuff in RemoveTempRelationsCallback().\nThose are pretty high-level operations that need to happen before we\nstart shutting down subsystems. Especially the removal of temp\nrelations.\n\n> > What I do think we should do, after thinking about it more,\n> > is discourage the casual use of before_shmem_exit() for things where\n> > on_shmem_exit() or on_proc_exit() would be just as good. I think\n> > that's what would avoid the most problems here.\n>\n> I think we're mostly in violent agreement here. The interesting\n> question seems to be Andres' one about whether before_shmem_exit\n> actually has any safe use except for PG_ENSURE_ERROR_CLEANUP.\n> It may not, in which case perhaps we oughta rename it?\n\nIf we could eliminate the other places where it's used, that'd be\ngreat. 
That's not too clear to me, though, because of the above two\ncases.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 10 Aug 2020 15:50:19 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "Hi,\n\nOn 2020-08-10 15:50:19 -0400, Robert Haas wrote:\n> On Mon, Aug 10, 2020 at 3:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > What I do think we should do, after thinking about it more,\n> > > is discourage the casual use of before_shmem_exit() for things where\n> > > on_shmem_exit() or on_proc_exit() would be just as good. I think\n> > > that's what would avoid the most problems here.\n> >\n> > I think we're mostly in violent agreement here. The interesting\n> > question seems to be Andres' one about whether before_shmem_exit\n> > actually has any safe use except for PG_ENSURE_ERROR_CLEANUP.\n> > It may not, in which case perhaps we oughta rename it?\n\n> If we could eliminate the other places where it's used, that'd be\n> great. That's not too clear to me, though, because of the above two\n> cases.\n\nI think there's two different aspects here: Having before_shmem_exit(),\nand having cancel_before_shmem_exit(). We could just not have the\nlatter, and instead use a separate list for PG_ENSURE_ERROR_CLEANUP\ninternally. With the callback for PG_ENSURE_ERROR_CLEANUP calling those\nfrom its private list. 
There's no other uses of\ncancel_before_shmem_exit afaict.\n\nI guess alternatively we at some point might just need a more complex\ncallback system, where one can specify where in relation to another\ncallback a callback needs to be registered etc.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 10 Aug 2020 17:11:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I think there's two different aspects here: Having before_shmem_exit(),\n> and having cancel_before_shmem_exit(). We could just not have the\n> latter, and instead use a separate list for PG_ENSURE_ERROR_CLEANUP\n> internally. With the callback for PG_ENSURE_ERROR_CLEANUP calling those\n> from its private list. There's no other uses of\n> cancel_before_shmem_exit afaict.\n\nIt's certainly arguable that PG_ENSURE_ERROR_CLEANUP is a special\nsnowflake and needs to use a separate mechanism. What is not real clear\nto me is why there are any other callers that must use before_shmem_exit\nrather than on_shmem_exit --- IOW, except for P_E_E_C's use, I have never\nbeen persuaded that the former callback list should exist at all. The\nexpectation for on_shmem_exit is that callbacks correspond to system\nservice modules that are initialized in a particular order, and can safely\nbe torn down in the reverse order. 
Why can't the existing callers just\nmake even-later entries into that same callback list?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 Aug 2020 20:46:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "On Mon, Aug 10, 2020 at 8:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> It's certainly arguable that PG_ENSURE_ERROR_CLEANUP is a special\n> snowflake and needs to use a separate mechanism. What is not real clear\n> to me is why there are any other callers that must use before_shmem_exit\n> rather than on_shmem_exit --- IOW, except for P_E_E_C's use, I have never\n> been persuaded that the former callback list should exist at all. The\n> expectation for on_shmem_exit is that callbacks correspond to system\n> service modules that are initialized in a particular order, and can safely\n> be torn down in the reverse order. 
Why can't the existing callers just\n> make even-later entries into that same callback list?\n\nThat split dates to the parallel query work, and there are some\ncomments in shmem_exit() about it; see in particular the explanation\nin the middle where it says \"Call dynamic shared memory callbacks.\" It\nseemed to me that I needed the re-entrancy behavior that is described\nthere, but for a set of callbacks that needed to run before some of\nthe existing callbacks and after others.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 10 Aug 2020 22:10:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "On Mon, Aug 10, 2020 at 10:10:08PM -0400, Robert Haas wrote:\n> That split dates to the parallel query work, and there are some\n> comments in shmem_exit() about it; see in particular the explanation\n> in the middle where it says \"Call dynamic shared memory callbacks.\" It\n> seemed to me that I needed the re-entrancy behavior that is described\n> there, but for a set of callbacks that needed to run before some of\n> the existing callbacks and after others.\n\nWe still have a CF entry here:\nhttps://commitfest.postgresql.org/29/2649/\n\nIs there still something that needs to absolutely be done here knowing\nthat we have bab1500 that got rid of the root issue? 
Can the CF entry\nbe marked as committed?\n\n(FWIW, I would move any discussion about improving more stuff related\nto shared memory cleanup code at proc exit into a new thread, as that\nlooks like a new topic.)\n--\nMichael", "msg_date": "Mon, 7 Sep 2020 16:40:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Is there still something that needs to absolutely be done here knowing\n> that we have bab1500 that got rid of the root issue? Can the CF entry\n> be marked as committed?\n\nI think there is agreement that we're not going to change\ncancel_before_shmem_exit's restriction to only allow LIFO popping.\nSo we should improve its comment to explain why. The other thing\nthat seems legitimately on-the-table for this CF entry is whether\nwe should change cancel_before_shmem_exit to complain, rather than\nsilently do nothing, if it fails to pop the stack. Bharath's\nlast patchset proposed to add an elog(DEBUG3) complaint, which\nseems to me to be just about entirely useless. I'd make it an\nERROR, or maybe an Assert.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Sep 2020 11:20:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "On Mon, Sep 7, 2020 at 8:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I think there is agreement that we're not going to change\n> cancel_before_shmem_exit's restriction to only allow LIFO popping.\n> So we should improve its comment to explain why. 
The other thing\n> that seems legitimately on-the-table for this CF entry is whether\n> we should change cancel_before_shmem_exit to complain, rather than\n> silently do nothing, if it fails to pop the stack. Bharath's\n> last patchset proposed to add an elog(DEBUG3) complaint, which\n> seems to me to be just about entirely useless. I'd make it an\n> ERROR, or maybe an Assert.\n>\n\nAttaching a patch with both the comments modification and changing\nDEBUG3 to ERROR. make check and make world-check passes on this patch.\n\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 8 Sep 2020 18:06:36 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> Attaching a patch with both the comments modification and changing\n> DEBUG3 to ERROR. make check and make world-check passes on this patch.\n\nI pushed this after simplifying the ereport down to an elog. I see\nno reason to consider this a user-facing error, so there's no need\nto make translators deal with the message.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 08 Sep 2020 15:56:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" }, { "msg_contents": "> On 30 Jul 2020, at 14:11, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Mon, Jul 20, 2020 at 3:47 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> But I also agree that what pg_start_backup() was doing before v13 was\n>> wrong; that's why I committed\n>> 303640199d0436c5e7acdf50b837a027b5726594. 
The only reason I didn't\n>> back-patch it is because the consequences are so minor I didn't think\n>> it was worth worrying about. We could, though. I'd be somewhat\n>> inclined to both do that and also change LLVM to use on_proc_exit() in\n>> master, but I don't feel super-strongly about it.\n> \n> Unless somebody complains pretty soon, I'm going to go ahead and do\n> what is described above.\n\nWhen backpatching 9dce22033d5d I ran into this in v13 and below, since it needs\nllvm_shutdown to happen via on_proc_exit in order for all llvm_release_context\ncalls to have finished. Unless anyone objects I will backpatch bab150045bd97\nto v12 and v13 as part of my backpatch.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 16 Nov 2023 22:05:19 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Issue with cancel_before_shmem_exit while searching to remove a\n particular registered exit callbacks" } ]
[ { "msg_contents": "Hi,\n\nHere is a quick issue I found on the BRIN documentation. I'm not a 100%\nsure I'm right but it looks like a failed copy/paste from the GIN\ndocumentation.\n\nCheers.\n\n\n-- \nGuillaume.", "msg_date": "Tue, 7 Jul 2020 09:17:15 +0200", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Quick doc patch" }, { "msg_contents": "> On 7 Jul 2020, at 09:17, Guillaume Lelarge <guillaume@lelarge.info> wrote:\n\n> Here is a quick issue I found on the BRIN documentation. I'm not a 100% sure I'm right but it looks like a failed copy/paste from the GIN documentation.\n\nI agree, it looks like a copy-pasteo in 15cb2bd2700 which introduced the\nparagraph for both GIN and BRIN. LGTM. Adding Alexander who committed in on\ncc.\n\ncheers ./daniel\n\n", "msg_date": "Tue, 7 Jul 2020 09:58:59 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Quick doc patch" }, { "msg_contents": "On Tue, Jul 07, 2020 at 09:58:59AM +0200, Daniel Gustafsson wrote:\n> I agree, it looks like a copy-pasteo in 15cb2bd2700 which introduced the\n> paragraph for both GIN and BRIN. LGTM. Adding Alexander who committed in on\n> cc.\n\n+1.\n--\nMichael", "msg_date": "Tue, 7 Jul 2020 18:36:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Quick doc patch" }, { "msg_contents": "On Tue, Jul 07, 2020 at 06:36:10PM +0900, Michael Paquier wrote:\n> On Tue, Jul 07, 2020 at 09:58:59AM +0200, Daniel Gustafsson wrote:\n>> I agree, it looks like a copy-pasteo in 15cb2bd2700 which introduced the\n>> paragraph for both GIN and BRIN. LGTM. 
Adding Alexander who committed in on\n>> cc.\n> \n> +1.\n\nAlexander does not seem to be around, so I have just applied the fix.\nThere were more inconsistencies in gin.sgml and spgist.sgml missed in\n14903f2, making the docs of GIN/SP-GiST less in line with the BRIN\nequivalent, so I have fixed both while on it.\n--\nMichael", "msg_date": "Wed, 8 Jul 2020 10:43:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Quick doc patch" }, { "msg_contents": "Hi!\n\nOn Wed, Jul 8, 2020 at 4:43 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Jul 07, 2020 at 06:36:10PM +0900, Michael Paquier wrote:\n> > On Tue, Jul 07, 2020 at 09:58:59AM +0200, Daniel Gustafsson wrote:\n> >> I agree, it looks like a copy-pasteo in 15cb2bd2700 which introduced the\n> >> paragraph for both GIN and BRIN. LGTM. Adding Alexander who committed in on\n> >> cc.\n> >\n> > +1.\n>\n> Alexander does not seem to be around, so I have just applied the fix.\n> There were more inconsistencies in gin.sgml and spgist.sgml missed in\n> 14903f2, making the docs of GIN/SP-GiST less in line with the BRIN\n> equivalent, so I have fixed both while on it.\n\nI just read this thread.\nThank you for fixing this!\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Wed, 8 Jul 2020 14:00:31 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Quick doc patch" } ]
[ { "msg_contents": "Hi,\n\nThe manual describes the size of pg_stat_activity.query\nas below:\n\n | By default the query text is truncated at 1024 characters;\n\nWhen considering multibyte characters, it seems more\naccurate to change the unit from \"characters\" to \"bytes\".\n\nI also searched other \"[0-9] characters\" in the manual.\nI may overlook something, but apparently it seems ok\nbecause of their contexts which are limited to ASCII\ncharacter or other reasons.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Wed, 08 Jul 2020 10:54:42 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "[doc] modifying unit from characters to bytes" }, { "msg_contents": "\n\nOn 2020/07/08 10:54, torikoshia wrote:\n> Hi,\n> \n> The manual describes the size of pg_stat_activity.query\n> as below:\n> \n> | By default the query text is truncated at 1024 characters;\n> \n> When considering multibyte characters, it seems more\n> accurate to change the unit from \"characters\" to \"bytes\".\n\nAgreed. 
Barring any objection, I will commit this patch.\n\nFor record, this change derived from the discussion about other patch [1].\n\nRegards,\n\n[1]\nhttps://postgr.es/m/cd0e961fd42e5708fdea70f7420bf214@oss.nttdata.com\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 8 Jul 2020 11:25:26 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [doc] modifying unit from characters to bytes" }, { "msg_contents": "> On 8 Jul 2020, at 04:25, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> \n> On 2020/07/08 10:54, torikoshia wrote:\n>> Hi,\n>> The manual describes the size of pg_stat_activity.query\n>> as below:\n>> | By default the query text is truncated at 1024 characters;\n>> When considering multibyte characters, it seems more\n>> accurate to change the unit from \"characters\" to \"bytes\".\n> \n> Agreed. Barring any objection, I will commit this patch.\n\n+1 to commit this patch, following the link to track_activity_query_size it's\neven specified to be bytes there. IIRC the NULL terminator is also included in\nthe 1024 bytes which prevents it from being 1024 characters even for\nnon-multibyte.\n\ncheers ./daniel\n\n\n\n", "msg_date": "Wed, 8 Jul 2020 09:17:45 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [doc] modifying unit from characters to bytes" }, { "msg_contents": "\n\nOn 2020/07/08 16:17, Daniel Gustafsson wrote:\n>> On 8 Jul 2020, at 04:25, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> On 2020/07/08 10:54, torikoshia wrote:\n>>> Hi,\n>>> The manual describes the size of pg_stat_activity.query\n>>> as below:\n>>> | By default the query text is truncated at 1024 characters;\n>>> When considering multibyte characters, it seems more\n>>> accurate to change the unit from \"characters\" to \"bytes\".\n>>\n>> Agreed. 
Barring any objection, I will commit this patch.\n> \n> +1 to commit this patch, following the link to track_activity_query_size it's\n> even specified to be bytes there. IIRC the NULL terminator is also included in\n> the 1024 bytes which prevents it from being 1024 characters even for\n> non-multibyte.\n\nYes, so we should document \"truncated at 1023 bytes\" for accuracy, instead?\nThis might be more confusing for users, though....\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 8 Jul 2020 17:05:47 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [doc] modifying unit from characters to bytes" }, { "msg_contents": "> On 8 Jul 2020, at 10:05, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> \n> On 2020/07/08 16:17, Daniel Gustafsson wrote:\n>>> On 8 Jul 2020, at 04:25, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>> \n>>> On 2020/07/08 10:54, torikoshia wrote:\n>>>> Hi,\n>>>> The manual describes the size of pg_stat_activity.query\n>>>> as below:\n>>>> | By default the query text is truncated at 1024 characters;\n>>>> When considering multibyte characters, it seems more\n>>>> accurate to change the unit from \"characters\" to \"bytes\".\n>>> \n>>> Agreed. Barring any objection, I will commit this patch.\n>> +1 to commit this patch, following the link to track_activity_query_size it's\n>> even specified to be bytes there. 
IIRC the NULL terminator is also included in\n>> the 1024 bytes which prevents it from being 1024 characters even for\n>> non-multibyte.\n> \n> Yes, so we should document \"truncated at 1023 bytes\" for accuracy, instead?\n> This might be more confusing for users, though....\n\nI think that's overcomplicating things, since we do (will) specify bytes and\nnot characters.\n\ncheers ./daniel\n\n", "msg_date": "Wed, 8 Jul 2020 10:12:18 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [doc] modifying unit from characters to bytes" }, { "msg_contents": "\n\nOn 2020/07/08 17:12, Daniel Gustafsson wrote:\n>> On 8 Jul 2020, at 10:05, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> On 2020/07/08 16:17, Daniel Gustafsson wrote:\n>>>> On 8 Jul 2020, at 04:25, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>\n>>>> On 2020/07/08 10:54, torikoshia wrote:\n>>>>> Hi,\n>>>>> The manual describes the size of pg_stat_activity.query\n>>>>> as below:\n>>>>> | By default the query text is truncated at 1024 characters;\n>>>>> When considering multibyte characters, it seems more\n>>>>> accurate to change the unit from \"characters\" to \"bytes\".\n>>>>\n>>>> Agreed. Barring any objection, I will commit this patch.\n>>> +1 to commit this patch, following the link to track_activity_query_size it's\n>>> even specified to be bytes there. IIRC the NULL terminator is also included in\n>>> the 1024 bytes which prevents it from being 1024 characters even for\n>>> non-multibyte.\n>>\n>> Yes, so we should document \"truncated at 1023 bytes\" for accuracy, instead?\n>> This might be more confusing for users, though....\n> \n> I think that's overcomplicating things, since we do (will) specify bytes and\n> not characters.\n\nAgreed. So I pushed the proposed patch. 
Thanks!\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 9 Jul 2020 13:47:27 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [doc] modifying unit from characters to bytes" }, { "msg_contents": "On 2020-07-09 13:47, Fujii Masao wrote:\n> On 2020/07/08 17:12, Daniel Gustafsson wrote:\n>>> On 8 Jul 2020, at 10:05, Fujii Masao <masao.fujii@oss.nttdata.com> \n>>> wrote:\n>>> \n>>> On 2020/07/08 16:17, Daniel Gustafsson wrote:\n>>>>> On 8 Jul 2020, at 04:25, Fujii Masao <masao.fujii@oss.nttdata.com> \n>>>>> wrote:\n>>>>> \n>>>>> On 2020/07/08 10:54, torikoshia wrote:\n>>>>>> Hi,\n>>>>>> The manual describes the size of pg_stat_activity.query\n>>>>>> as below:\n>>>>>> | By default the query text is truncated at 1024 characters;\n>>>>>> When considering multibyte characters, it seems more\n>>>>>> accurate to change the unit from \"characters\" to \"bytes\".\n>>>>> \n>>>>> Agreed. Barring any objection, I will commit this patch.\n>>>> +1 to commit this patch, following the link to \n>>>> track_activity_query_size it's\n>>>> even specified to be bytes there. IIRC the NULL terminator is also \n>>>> included in\n>>>> the 1024 bytes which prevents it from being 1024 characters even for\n>>>> non-multibyte.\n>>> \n>>> Yes, so we should document \"truncated at 1023 bytes\" for accuracy, \n>>> instead?\n>>> This might be more confusing for users, though....\n>> \n>> I think that's overcomplicating things, since we do (will) specify \n>> bytes and\n>> not characters.\n> \n> Agreed. So I pushed the proposed patch. Thanks!\n\nThanks for applying!\n\nRegards,\n\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 09 Jul 2020 21:59:32 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: [doc] modifying unit from characters to bytes" } ]
[ { "msg_contents": "Hi,\n\nCurrently, slot_keep_segs is defined as \"XLogRecPtr\" in KeepLogSeg(),\nbut it seems that should be \"XLogSegNo\" because this variable is\nsegment number.\n\nHow do you think?\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Wed, 08 Jul 2020 11:02:17 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Modifying data type of slot_keep_segs from XLogRecPtr to XLogSegNo" }, { "msg_contents": "\n\nOn 2020/07/08 11:02, torikoshia wrote:\n> Hi,\n> \n> Currently, slot_keep_segs is defined as \"XLogRecPtr\" in KeepLogSeg(),\n> but it seems that should be \"XLogSegNo\" because this variable is\n> segment number.\n> \n> How do you think?\n\nI agree that using XLogRecPtr for slot_keep_segs is incorrect.\nBut this variable indicates the number of segments rather than\nsegment no, uint64 seems better. Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 8 Jul 2020 11:15:48 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Modifying data type of slot_keep_segs from XLogRecPtr to\n XLogSegNo" }, { "msg_contents": "On 2020-07-08 11:15, Fujii Masao wrote:\n> On 2020/07/08 11:02, torikoshia wrote:\n>> Hi,\n>> \n>> Currently, slot_keep_segs is defined as \"XLogRecPtr\" in KeepLogSeg(),\n>> but it seems that should be \"XLogSegNo\" because this variable is\n>> segment number.\n>> \n>> How do you think?\n> \n> I agree that using XLogRecPtr for slot_keep_segs is incorrect.\n> But this variable indicates the number of segments rather than\n> segment no, uint64 seems better. 
Thought?\n\nThat makes sense.\nThe number of segments and segment number are different.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 08 Jul 2020 11:55:56 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Modifying data type of slot_keep_segs from XLogRecPtr to\n XLogSegNo" }, { "msg_contents": "On 2020/07/08 11:55, torikoshia wrote:\n> On 2020-07-08 11:15, Fujii Masao wrote:\n>> On 2020/07/08 11:02, torikoshia wrote:\n>>> Hi,\n>>>\n>>> Currently, slot_keep_segs is defined as \"XLogRecPtr\" in KeepLogSeg(),\n>>> but it seems that should be \"XLogSegNo\" because this variable is\n>>> segment number.\n>>>\n>>> How do you think?\n>>\n>> I agree that using XLogRecPtr for slot_keep_segs is incorrect.\n>> But this variable indicates the number of segments rather than\n>> segment no, uint64 seems better. Thought?\n> \n> That makes sense.\n> The number of segments and segment number are different.\n\nYes, so patch attached. I will commit it later.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Wed, 8 Jul 2020 15:22:36 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Modifying data type of slot_keep_segs from XLogRecPtr to\n XLogSegNo" }, { "msg_contents": "\n\nOn 2020/07/08 15:22, Fujii Masao wrote:\n> \n> \n> On 2020/07/08 11:55, torikoshia wrote:\n>> On 2020-07-08 11:15, Fujii Masao wrote:\n>>> On 2020/07/08 11:02, torikoshia wrote:\n>>>> Hi,\n>>>>\n>>>> Currently, slot_keep_segs is defined as \"XLogRecPtr\" in KeepLogSeg(),\n>>>> but it seems that should be \"XLogSegNo\" because this variable is\n>>>> segment number.\n>>>>\n>>>> How do you think?\n>>>\n>>> I agree that using XLogRecPtr for slot_keep_segs is incorrect.\n>>> But this variable indicates the number of segments rather than\n>>> segment no, uint64 seems better. 
Thought?\n>>\n>> That makes sense.\n>> The number of segments and segment number are different.\n> \n> Yes, so patch attached. I will commit it later.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 8 Jul 2020 21:27:04 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Modifying data type of slot_keep_segs from XLogRecPtr to\n XLogSegNo" }, { "msg_contents": "At Wed, 8 Jul 2020 21:27:04 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/07/08 15:22, Fujii Masao wrote:\n> > On 2020/07/08 11:55, torikoshia wrote:\n> >> On 2020-07-08 11:15, Fujii Masao wrote:\n> >>> On 2020/07/08 11:02, torikoshia wrote:\n> >>>> Hi,\n> >>>>\n> >>>> Currently, slot_keep_segs is defined as \"XLogRecPtr\" in KeepLogSeg(),\n> >>>> but it seems that should be \"XLogSegNo\" because this variable is\n> >>>> segment number.\n> >>>>\n> >>>> How do you think?\n\nYeah, that's my mistake while made bouncing back and forth between\nsegments and LSN in the code. I noticed that once but forgotten until\nnow. Thanks for finding it.\n\n> >>> I agree that using XLogRecPtr for slot_keep_segs is incorrect.\n> >>> But this variable indicates the number of segments rather than\n> >>> segment no, uint64 seems better. Thought?\n> >>\n> >> That makes sense.\n> >> The number of segments and segment number are different.\n> > Yes, so patch attached. I will commit it later.\n> \n> Pushed. Thanks!\n\nThanks!\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 09 Jul 2020 13:12:49 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Modifying data type of slot_keep_segs from XLogRecPtr to\n XLogSegNo" } ]
[ { "msg_contents": "Over on [1] someone was asking about chained window paths making use\nof already partially sorted input. (The thread is on -general, so I\nguessed they're not using PG13.)\n\nHowever, On checking PG13 to see if incremental sort would help their\ncase, I saw it didn't. Looking at the code I saw that\ncreate_window_paths() and create_one_window_path() don't make any use\nof incremental sort paths.\n\nI quickly put together the attached. It's only about 15 mins of work,\nbut it seems worth looking at a bit more for some future commitfest.\nYeah, I'll need to add some tests as I see nothing failed by changing\nthis.\n\nI'll just park this here until then so I don't forget.\n\nDavid", "msg_date": "Wed, 8 Jul 2020 16:57:21 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Use incremental sort paths for window functions" }, { "msg_contents": "> On 8 Jul 2020, at 06:57, David Rowley <dgrowleyml@gmail.com> wrote:\n> \n> Over on [1] someone was asking about chained window paths making use\n> of already partially sorted input. (The thread is on -general, so I\n> guessed they're not using PG13.)\n\nThe [1] reference wasn't qualified, do you remember which thread it was?\n\n> However, On checking PG13 to see if incremental sort would help their\n> case, I saw it didn't. Looking at the code I saw that\n> create_window_paths() and create_one_window_path() don't make any use\n> of incremental sort paths.\n\nCommit 728202b63cdcd7f counteracts this optimization in part since it orders\nthe windows such that the longest common prefix is executed first to allow\nsubsequent windows to skip sorting entirely.\n\nThat being said, it's only in part and when the stars don't align with sub-\nsequently shorter common prefixes then incremental sort can help. 
A synthetic\nunscientific test with three windows over 10M rows, where no common prefix\nexists, shows consistent speedups (for worst cases) well past what can be\nattributed to background noise.\n\n> I quickly put together the attached. It's only about 15 mins of work,\n> but it seems worth looking at a bit more for some future commitfest.\n> Yeah, I'll need to add some tests as I see nothing failed by changing\n> this.\n\nA few comments on the patch: there is no check for enable_incremental_sort, and\nit lacks tests (as already mentioned) for the resulting plan.\n\ncheers ./daniel\n\n", "msg_date": "Mon, 14 Sep 2020 14:02:10 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Use incremental sort paths for window functions" }, { "msg_contents": "On Wed, Jul 08, 2020 at 04:57:21PM +1200, David Rowley wrote:\n>Over on [1] someone was asking about chained window paths making use\n>of already partially sorted input. (The thread is on -general, so I\n>guessed they're not using PG13.)\n>\n>However, On checking PG13 to see if incremental sort would help their\n>case, I saw it didn't. Looking at the code I saw that\n>create_window_paths() and create_one_window_path() don't make any use\n>of incremental sort paths.\n>\n>I quickly put together the attached. It's only about 15 mins of work,\n>but it seems worth looking at a bit more for some future commitfest.\n>Yeah, I'll need to add some tests as I see nothing failed by changing\n>this.\n>\n\nYeah, I'm sure there are a couple other places that might benefit from\nincremental sort but were not included in the PG13 commit. The patch\nseems correct - did it help in the reported thread? How much?\n\nI suppose this might benefit from an optimization similar to the GROUP\nBY reordering discussed in [1]. For example, with\n\n max(a) over (partition by b,c)\n\nI think we could use index on (c) and consider incremental sort by c,b,\ni.e. with the inverted pathkeys. 
But that's a completely independent\ntopic, I believe.\n\n[1] https://www.postgresql.org/message-id/7c79e6a5-8597-74e8-0671-1c39d124c9d6%40sigaev.ru\n\n>I'll just park this here until then so I don't forget.\n>\n\nOK, thanks for looking into this!\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 14 Sep 2020 19:18:56 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Use incremental sort paths for window functions" }, { "msg_contents": "On Tue, 15 Sep 2020 at 00:02, Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 8 Jul 2020, at 06:57, David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > Over on [1] someone was asking about chained window paths making use\n> > of already partially sorted input. (The thread is on -general, so I\n> > guessed they're not using PG13.)\n>\n> The [1] reference wasn't qualified, do you remember which thread it was?\n\nThat was sloppy of me. It's\nhttps://www.postgresql.org/message-id/CADd42iFZWwYNsXjEM_3HWK3QnfiCrMNmpOkZqyBQCabnVxOPtw%40mail.gmail.com\n\n> > However, On checking PG13 to see if incremental sort would help their\n> > case, I saw it didn't. Looking at the code I saw that\n> > create_window_paths() and create_one_window_path() don't make any use\n> > of incremental sort paths.\n>\n> Commit 728202b63cdcd7f counteracts this optimization in part since it orders\n> the windows such that the longest common prefix is executed first to allow\n> subsequent windows to skip sorting entirely.\n\nThis would have been clearer if I'd remembered to include the link to\nthe thread. The thread talks about sorting requirements like c1, c3\nthen c1, c4. So it can make use of the common prefix and do\nincremental sorts.\n\nIt sounds like you're talking about cases like: wfunc() over (order by\na), wfunc2() over (order by a,b). 
Where we can just sort on a,b and\nhave that order work for the first wfunc(). That's a good optimisation\nbut does not work for the above case.\n\n> That being said, it's only in part and when the stars don't align with sub-\n> sequently shorter common prefixes then incremental sort can help. A synthetic\n> unscientific test with three windows over 10M rows, where no common prefix\n> exists, shows consistent speedups (for worst cases) well past what can be\n> attributed to background noise.\n>\n> > I quickly put together the attached. It's only about 15 mins of work,\n> > but it seems worth looking at a bit more for some future commitfest.\n> > Yeah, I'll need to add some tests as I see nothing failed by changing\n> > this.\n>\n> A few comments on the patch: there is no check for enable_incremental_sort, and\n> it lacks tests (as already mentioned) for the resulting plan.\n\nYeah, it should be making sure enable_incremental_sort is on for sure.\nI've attached another version with a few tests added too.\n\nDavid", "msg_date": "Tue, 15 Sep 2020 11:17:24 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use incremental sort paths for window functions" }, { "msg_contents": "> On 15 Sep 2020, at 01:17, David Rowley <dgrowleyml@gmail.com> wrote:\n> \n> On Tue, 15 Sep 2020 at 00:02, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>>> On 8 Jul 2020, at 06:57, David Rowley <dgrowleyml@gmail.com> wrote:\n>>> \n>>> Over on [1] someone was asking about chained window paths making use\n>>> of already partially sorted input. (The thread is on -general, so I\n>>> guessed they're not using PG13.)\n>> \n>> The [1] reference wasn't qualified, do you remember which thread it was?\n> \n> That was sloppy of me. 
It's\n> https://www.postgresql.org/message-id/CADd42iFZWwYNsXjEM_3HWK3QnfiCrMNmpOkZqyBQCabnVxOPtw%40mail.gmail.com\n\nThanks!\n\n>>> However, On checking PG13 to see if incremental sort would help their\n>>> case, I saw it didn't. Looking at the code I saw that\n>>> create_window_paths() and create_one_window_path() don't make any use\n>>> of incremental sort paths.\n>> \n>> Commit 728202b63cdcd7f counteracts this optimization in part since it orders\n>> the windows such that the longest common prefix is executed first to allow\n>> subsequent windows to skip sorting entirely.\n> \n> This would have been clearer if I'd remembered to include the link to\n> the thread. The thread talks about sorting requirements like c1, c3\n> then c1, c4. So it can make use of the common prefix and do\n> incremental sorts.\n> \n> It sounds like you're talking about cases like: wfunc() over (order by\n> a), wfunc2() over (order by a,b). Where we can just sort on a,b and\n> have that order work for the first wfunc(). That's a good optimisation\n> but does not work for the above case.\n\nRight, the combination of these two optimizations will however work well\ntogether for quite a few cases.\n\nOn that note, assume we have the below scenario:\n\n wfunc .. (order by a), .. (order by a,b), .. (order by a,b,c)\n\nCurrently the windows will be ordered such that a,b,c is sorted first, with a,b\nand a not having to sort. I wonder if there is a good heuristic to find cases\nwhere sorting a, then a,b incrementally and finally a,b,c incrementally is\ncheaper than a big sort of a,b,c? If a,b,c would spill but subsequent\nincremental sorts won't then perhaps that could be a case? 
Not sure if it's\nworth the planner time, just thinking out loud.\n\n>> A few comments on the patch: there is no check for enable_incremental_sort, and\n>> it lacks tests (as already mentioned) for the resulting plan.\n> \n> Yeah, it should be making sure enable_incremental_sort is on for sure.\n> I've attached another version with a few tests added too.\n\nNo comments on this version, LGTM.\n\ncheers ./daniel\n\n", "msg_date": "Tue, 15 Sep 2020 10:12:54 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Use incremental sort paths for window functions" }, { "msg_contents": "On Tue, 15 Sep 2020 at 05:19, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Wed, Jul 08, 2020 at 04:57:21PM +1200, David Rowley wrote:\n> >Over on [1] someone was asking about chained window paths making use\n> >of already partially sorted input. (The thread is on -general, so I\n> >guessed they're not using PG13.)\n> >\n> >However, On checking PG13 to see if incremental sort would help their\n> >case, I saw it didn't. Looking at the code I saw that\n> >create_window_paths() and create_one_window_path() don't make any use\n> >of incremental sort paths.\n> >\n> >I quickly put together the attached. It's only about 15 mins of work,\n> >but it seems worth looking at a bit more for some future commitfest.\n> >Yeah, I'll need to add some tests as I see nothing failed by changing\n> >this.\n> >\n>\n> Yeah, I'm sure there are a couple other places that might benefit from\n> incremental sort but were not included in the PG13 commit. The patch\n> seems correct - did it help in the reported thread? How much?\n\nLooks like I didn't mention the idea on the thread. I must have felt\nit was just too many steps away from being very useful to mention it\nin the -general thread.\n\nI suppose it'll help similar to any use case for incremental sort;\nlots in some and less so in others. It'll mostly depend on how big\neach incremental sort is. 
e.g order by a,b when there's only an index\non (a) will be pretty good if a is unique. Each sort will be over\nquite fast. If there are a million rows for each value of a then\nincremental sort would be less favourable\n\n> I suppose this might benefit from an optimization similar to the GROUP\n> BY reordering discussed in [1]. For example, with\n>\n> max(a) over (partition by b,c)\n>\n> I think we could use index on (c) and consider incremental sort by c,b,\n> i.e. with the inverted pathkeys. But that's a completely independent\n> topic, I believe.\n\nI've only vaguely followed that. Sounds like interesting work, but I\nagree that it's not related to this.\n\nThanks for having a look at this.\n\nDavid\n\n\n", "msg_date": "Tue, 15 Sep 2020 20:34:21 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use incremental sort paths for window functions" }, { "msg_contents": "On Tue, 15 Sep 2020 at 20:12, Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> On that note, assume we have the below scenario:\n>\n> wfunc .. (order by a), .. (order by a,b), .. (order by a,b,c)\n>\n> Currently the windows will be ordered such that a,b,c is sorted first, with a,b\n> and a not having to sort. I wonder if there is a good heuristic to find cases\n> where sorting a, then a,b incrementally and finally a,b,c incrementally is\n> cheaper than a big sort of a,b,c? If a,b,c would spill but subsequent\n> incremental sorts won't then perhaps that could be a case? Not sure if it's\n> worth the planner time, just thinking out loud.\n\nIt's a worthy cause, but unfortunately, I don't think there's any very\nrealistic thing that can be done about that. The problem is that\nyou're deciding the \"most sorted\" window clause and putting that first\nin the parameters to the query_planner()'s callback function. 
If you\nwanted to try some alternative orders then it means calling\nquery_planner() again with some other order for\nqp_extra.activeWindows.\n\nPerhaps there's some other way of doing it so that the planner does\nsome sort of preliminary investigation about the best order to\nevaluate the windows in. Currently, standard_qp_callback just takes\nthe first window and has the planner perform the join order search\nbased on that. Performing the join order search multiple times is\njust not realistic, so it could only be done by some sort of\npre-checks. e.g, is there an index that's likely to help me obtain\nthis specific order. Then we'd just have to hope that through the\njoin search that the planner actually managed to produce a more\noptimal plan than it would have if we'd left the window evaluation\norder alone. It sounds pretty tricky to make cheap and good enough at\nthe same time.\n\n> No comments on this version, LGTM.\n\nCool. Many thanks for having a look.\n\nDavid\n\n\n", "msg_date": "Tue, 15 Sep 2020 23:21:31 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use incremental sort paths for window functions" }, { "msg_contents": "On Tue, 15 Sep 2020 at 23:21, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 15 Sep 2020 at 20:12, Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> > No comments on this version, LGTM.\n>\n> Cool. Many thanks for having a look.\n\nPushed. 62e221e1c\n\nDavid\n\n\n", "msg_date": "Tue, 15 Sep 2020 23:46:47 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use incremental sort paths for window functions" } ]
[ { "msg_contents": "Spotted a small typo in pgstat.c this morning, attached patch fixes this.\n\ncheers ./daniel", "msg_date": "Wed, 8 Jul 2020 10:04:55 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Typo in pgstat.c" }, { "msg_contents": "On Wed, Jul 8, 2020 at 10:05 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> Spotted a small typo in pgstat.c this morning, attached patch fixes this.\n>\n\nThanks, applied.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Wed, 8 Jul 2020 10:12:24 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Typo in pgstat.c" } ]
[ { "msg_contents": "Here is a patch that adds the following to pg_stat_database:\n- number of connections\n- number of sessions that were not disconnected regularly\n- total time spent in database sessions\n- total time spent executing queries\n- total idle in transaction time\n\nThis is useful to check if connection pooling is working.\nIt also helps to estimate the size of the connection pool\nrequired to keep the database busy, which depends on the\npercentage of the transaction time that is spent idling.\n\nYours,\nLaurenz Albe", "msg_date": "Wed, 08 Jul 2020 13:17:37 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Add session statistics to pg_stat_database" }, { "msg_contents": "On Wed, Jul 8, 2020 at 4:17 PM Laurenz Albe <laurenz.albe@cybertec.at>\nwrote:\n\n> Here is a patch that adds the following to pg_stat_database:\n> - number of connections\n>\n\nIs it expected behaviour to not count idle connections? The connection is\nincluded after it is aborted but not while it was idle.\n\n\n> - number of sessions that were not disconnected regularly\n> - total time spent in database sessions\n> - total time spent executing queries\n> - total idle in transaction time\n>\n> This is useful to check if connection pooling is working.\n> It also helps to estimate the size of the connection pool\n> required to keep the database busy, which depends on the\n> percentage of the transaction time that is spent idling.\n>\n> Yours,\n> Laurenz Albe\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : http://www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nEMAIL: mailto: ahsan.hadi@highgo.ca", "msg_date": "Thu, 23 Jul 2020 18:16:02 +0500", "msg_from": "Ahsan Hadi <ahsan.hadi@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On Thu, 2020-07-23 at 18:16 +0500, Ahsan Hadi wrote:\n> On Wed, Jul 8, 2020 at 4:17 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > Here is a patch that adds the following to pg_stat_database:\n> > - number of connections\n> \n> Is it expected behaviour to not count idle connections? The connection is included after it is aborted but not while it was idle.\n\nThanks for looking.\n\nCurrently, the patch counts connections when they close.\nI could change the behavior that they are counted immediately.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 11 Aug 2020 13:53:42 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On Tue, 2020-08-11 at 13:53 +0200, I wrote:\n> On Thu, 2020-07-23 at 18:16 +0500, Ahsan Hadi wrote:\n> \n> > On Wed, Jul 8, 2020 at 4:17 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > > Here is a patch that adds the following to pg_stat_database:\n> > > - number of connections\n> >\n> > Is it expected behaviour to not count idle connections? 

The connection is included after it is aborted but not while it was idle.\n> \n> Currently, the patch counts connections when they close.\n> \n> I could change the behavior that they are counted immediately.\n\nI have changed the code so that connections are counted immediately.\n\nAttached is a new version.\n\nYours,\nLaurenz Albe", "msg_date": "Fri, 04 Sep 2020 17:50:55 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "Hello Laurenz,\n\nThanks for submitting this! Please find my feedback below.\n\n* Are we trying to capture ONLY client initiated disconnects in\nm_aborted (we are not handling other disconnects by not accounting for\nEOF..like if psql was killed)? If yes, why?\n\n* pgstat_send_connstats(): How about renaming the \"force\" argument to\n\"disconnected\"?\n\n*\n> static TimestampTz pgStatActiveStart = DT_NOBEGIN;\n> static PgStat_Counter pgStatActiveTime = 0;\n> static TimestampTz pgStatTransactionIdleStart = DT_NOBEGIN;\n> static PgStat_Counter pgStatTransactionIdleTime = 0;\n> static bool pgStatSessionReported = false;\n> bool pgStatSessionDisconnected = false;\n\nI think we can house all of these globals inside PgBackendStatus and can\nfollow the protocol for reading/writing fields in PgBackendStatus.\nRefer: PGSTAT_{BEGIN|END}_WRITE_ACTIVITY\n\nAlso, some of these fields are not required:\n\nI don't think we need pgStatActiveStart and pgStatTransactionIdleStart -\ninstead of these two we could use\nPgBackendStatus.st_state_start_timestamp which marks the beginning TS of\nthe backend's current state (st_state). We can look at that field along\nwith the current and to-be-transitioned-to states inside\npgstat_report_activity() when there is a transition away from\nSTATE_RUNNING, STATE_IDLEINTRANSACTION or\nSTATE_IDLEINTRANSACTION_ABORTED, in order to update pgStatActiveTime and\npgStatTransactionIdleTime. 
We would also need to update those counters\non disconnect/PGSTAT_STAT_INTERVAL timeout if the backend's current\nstate was STATE_RUNNING, STATE_IDLEINTRANSACTION or\nSTATE_IDLEINTRANSACTION_ABORTED (in pgstat_send_connstats())\n\npgStatSessionDisconnected is not required as it can be determined if a\nsession has been disconnected by looking at the force argument to\npgstat_report_stat() [unless we would want to distinguish between\nclient-initiated disconnects, which I am not sure why, as I have\nbrought up above].\n\npgStatSessionReported is not required. We can glean this information by\nchecking if the function local static last_report in\npgstat_report_stat() is 0 and passing this on as another param\n\"first_report\" to pgstat_send_connstats().\n\n\n* PGSTAT_FILE_FORMAT_ID needs to be updated when a stats collector data\nstructure changes and we had a change in PgStat_StatDBEntry.\n\n* We can directly use PgBackendStatus.st_proc_start_timestamp for\ncalculating m_session_time. We can also choose to report session uptime\neven when the report is for the not-disconnect case\n(PGSTAT_STAT_INTERVAL elapsed). No reason why not. Then we would need to\npass in the value of last_report to pgstat_send_connstats() -> calculate\nm_session_time to be number of time units from\nPgBackendStatus.st_proc_start_timestamp for the first report and then\nnumber of time units from the last_report for all subsequent reports.\n\n* We would need to bump the catalog version since we have made\nchanges to system views. Refer: #define CATALOG_VERSION_NO\n\n\nRegards,\nSoumyadeep (VMware)\n\n\n", "msg_date": "Thu, 24 Sep 2020 14:38:43 -0700", "msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On Thu, 2020-09-24 at 14:38 -0700, Soumyadeep Chakraborty wrote:\n> Thanks for submitting this! 
Please find my feedback below.\n\nThanks for the thorough review.\n\nBefore I update the patch, I have a few comments and questions.\n\n> * Are we trying to capture ONLY client initiated disconnects in\n> m_aborted (we are not handling other disconnects by not accounting for\n> EOF..like if psql was killed)? If yes, why?\n\nI thought it was interesting to know how many database sessions are\nended regularly as opposed to ones that get killed or end by unexpected\nclient death.\n\n> * pgstat_send_connstats(): How about renaming the \"force\" argument to\n> \"disconnected\"?\n\nYes, that might be better. I'll do that.\n\n> *\n> > static TimestampTz pgStatActiveStart = DT_NOBEGIN;\n> > static PgStat_Counter pgStatActiveTime = 0;\n> > static TimestampTz pgStatTransactionIdleStart = DT_NOBEGIN;\n> > static PgStat_Counter pgStatTransactionIdleTime = 0;\n> > static bool pgStatSessionReported = false;\n> > bool pgStatSessionDisconnected = false;\n> \n> I think we can house all of these globals inside PgBackendStatus and can\n> follow the protocol for reading/writing fields in PgBackendStatus.\n> Refer: PGSTAT_{BEGIN|END}_WRITE_ACTIVITY\n\nAre you sure that is the right way to go?\n\nCorrect me if I am wrong, but isn't PgBackendStatus for relevant status\ninformation that other processes can access?\nI'd assume that it is not the correct place to store backend-private data\nthat are not relevant to others.\nBesides, if data is written to this structure more often, readers would\nhave deal with more contention, which could affect performance.\n\nBut I agree with the following:\n\n> Also, some of these fields are not required:\n> \n> I don't think we need pgStatActiveStart and pgStatTransactionIdleStart -\n> instead of these two we could use\n> PgBackendStatus.st_state_start_timestamp which marks the beginning TS of\n> the backend's current state (st_state). 
We can look at that field along\n> with the current and to-be-transitioned-to states inside\n> pgstat_report_activity() when there is a transition away from\n> STATE_RUNNING, STATE_IDLEINTRANSACTION or\n> STATE_IDLEINTRANSACTION_ABORTED, in order to update pgStatActiveTime and\n> pgStatTransactionIdleTime. We would also need to update those counters\n> on disconnect/PGSTAT_STAT_INTERVAL timeout if the backend's current\n> state was STATE_RUNNING, STATE_IDLEINTRANSACTION or\n> STATE_IDLEINTRANSACTION_ABORTED (in pgstat_send_connstats())\n\nYes, that would be better.\n\n> pgStatSessionDisconnected is not required as it can be determined if a\n> session has been disconnected by looking at the force argument to\n> pgstat_report_stat() [unless we would want to distinguish between\n> client-initiated disconnects, which I am not sure why, as I have\n> brought up above].\n\nBut wouldn't that mean that we count *every* end of a session as regular\ndisconnection, even if the backend was killed?\n\nI personally would want all my database connections to be closed by\nthe client, unless something unexpected happens.\n\n> pgStatSessionReported is not required. We can glean this information by\n> checking if the function local static last_report in\n> pgstat_report_stat() is 0 and passing this on as another param\n> \"first_report\" to pgstat_send_connstats().\n\nYes, that is better.\n\n> * PGSTAT_FILE_FORMAT_ID needs to be updated when a stats collector data\n> structure changes and we had a change in PgStat_StatDBEntry.\n\nI think that should be left to the committer.\n\n> * We can directly use PgBackendStatus.st_proc_start_timestamp for\n> calculating m_session_time. We can also choose to report session uptime\n> even when the report is for the not-disconnect case\n> (PGSTAT_STAT_INTERVAL elapsed). No reason why not. 
Then we would need to\n> pass in the value of last_report to pgstat_send_connstats() -> calculate\n> m_session_time to be number of time units from\n> PgBackendStatus.st_proc_start_timestamp for the first report and then\n> number of time units from the last_report for all subsequent reports.\n\nYes, that would make for better statistics, since client connections\ncan last quite long.\n\n> * We would need to bump the catalog version since we have made\n> changes to system views. Refer: #define CATALOG_VERSION_NO\n\nAgain, I think that's up to the committer.\n\nThanks again!\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 29 Sep 2020 11:44:13 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On Tue, Sep 29, 2020 at 2:44 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n\n\n> > * Are we trying to capture ONLY client initiated disconnects in\n> > m_aborted (we are not handling other disconnects by not accounting for\n> > EOF..like if psql was killed)? If yes, why?\n>\n> I thought it was interesting to know how many database sessions are\n> ended regularly as opposed to ones that get killed or end by unexpected\n> client death.\n\nIt may very well be. It would also be interesting to find out how many\nconnections are still open on the database (something we could easily\nglean if we had the number of all disconnects, client-initiated or\nunnatural). 
Maybe we could have both?\n\nm_sessions_disconnected;\nm_sessions_killed;\n\n>\n>\n> > *\n> > > static TimestampTz pgStatActiveStart = DT_NOBEGIN;\n> > > static PgStat_Counter pgStatActiveTime = 0;\n> > > static TimestampTz pgStatTransactionIdleStart = DT_NOBEGIN;\n> > > static PgStat_Counter pgStatTransactionIdleTime = 0;\n> > > static bool pgStatSessionReported = false;\n> > > bool pgStatSessionDisconnected = false;\n> >\n> > I think we can house all of these globals inside PgBackendStatus and can\n> > follow the protocol for reading/writing fields in PgBackendStatus.\n> > Refer: PGSTAT_{BEGIN|END}_WRITE_ACTIVITY\n>\n> Are you sure that is the right way to go?\n>\n> Correct me if I am wrong, but isn't PgBackendStatus for relevant status\n> information that other processes can access?\n> I'd assume that it is not the correct place to store backend-private data\n> that are not relevant to others.\n> Besides, if data is written to this structure more often, readers would\n> have deal with more contention, which could affect performance.\n\nYou are absolutely right! PgBackendStatus is not the place for any of\nthese fields. We could place them in LocalPgBackendStatus perhaps. But\nI don't feel too strongly about that now, having looked at similar fields\nsuch as pgStatXactCommit, pgStatXactRollback etc. 
If we decide to stick\nwith the globals, let's isolate and decorate them with a comment such as\nthis example from the source:\n\n/*\n * Updated by pgstat_count_buffer_*_time macros\n */\nextern PgStat_Counter pgStatBlockReadTime;\nextern PgStat_Counter pgStatBlockWriteTime;\n\n> > pgStatSessionDisconnected is not required as it can be determined if a\n> > session has been disconnected by looking at the force argument to\n> > pgstat_report_stat() [unless we would want to distinguish between\n> > client-initiated disconnects, which I am not sure why, as I have\n> > brought up above].\n>\n> But wouldn't that mean that we count *every* end of a session as regular\n> disconnection, even if the backend was killed?\n\nSee my comment above about client-initiated and unnatural disconnects.\n\n>\n> > * PGSTAT_FILE_FORMAT_ID needs to be updated when a stats collector data\n> > structure changes and we had a change in PgStat_StatDBEntry.\n>\n> I think that should be left to the committer.\n\nFair.\n\n> > * We would need to bump the catalog version since we have made\n> > changes to system views. 
Refer: #define CATALOG_VERSION_NO\n>\n> Again, I think that's up to the committer.\n\nFair.\n\n\nRegards,\nSoumyadeep (VMware)\n\n\n", "msg_date": "Fri, 2 Oct 2020 15:10:26 -0700", "msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On 2020-09-05 00:50, Laurenz Albe wrote:\n> I have changed the code so that connections are counted immediately.\n> Attached is a new version.\n\nThanks for making a patch.\nI'm interested in this feature.\n\nI think adding the number of login failures is good for security.\nAlthough we can see the event from log files, it's useful to know the \noverview\nof whether the database may be under attack or not.\n\nBy the way, could you rebase the patch since the latest patches\nfailed to be applied to the master branch?\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 06 Oct 2020 18:29:45 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On Fri, 2020-10-02 at 15:10 -0700, Soumyadeep Chakraborty wrote:\n> On Tue, Sep 29, 2020 at 2:44 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > > * Are we trying to capture ONLY client initiated disconnects in\n> > > m_aborted (we are not handling other disconnects by not accounting for\n> > > EOF..like if psql was killed)? If yes, why?\n> > \n> > I thought it was interesting to know how many database sessions are\n> > ended regularly as opposed to ones that get killed or end by unexpected\n> > client death.\n> \n> It may very well be. It would also be interesting to find out how many\n> connections are still open on the database (something we could easily\n> glean if we had the number of all disconnects, client-initiated or\n> unnatural). 

Maybe we could have both?\n> \n> m_sessions_disconnected;\n> m_sessions_killed;\n\nWe already have \"numbackends\" in \"pg_stat_database\", so we know the number\nof active connections, right?\n\n> You are absolutely right! PgBackendStatus is not the place for any of\n> these fields. We could place them in LocalPgBackendStatus perhaps. But\n> I don't feel too strongly about that now, having looked at similar fields\n> such as pgStatXactCommit, pgStatXactRollback etc. If we decide to stick\n> with the globals, let's isolate and decorate them with a comment such as\n> this example from the source:\n> \n> /*\n> * Updated by pgstat_count_buffer_*_time macros\n> */\n> extern PgStat_Counter pgStatBlockReadTime;\n> extern PgStat_Counter pgStatBlockWriteTime;\n\nI have reduced the number of variables with my latest patch; I think\nthe rewrite based on your review is definitely an improvement.\n\nThe comment you quote is from \"pgstat.h\", and my only global variable\nhas a comment there.\n\n> > > pgStatSessionDisconnected is not required as it can be determined if a\n> > > session has been disconnected by looking at the force argument to\n> > > pgstat_report_stat() [unless we would want to distinguish between\n> > > client-initiated disconnects, which I am not sure why, as I have\n> > > brought up above].\n> > \n> > But wouldn't that mean that we count *every* end of a session as regular\n> > disconnection, even if the backend was killed?\n> \n> See my comment above about client-initiated and unnatural disconnects.\n\nI decided to leave the functionality as it is; I think it is interesting\ninformation to know if your clients disconnect cleanly or not.\n\n\nMasahiro Ikeda wrote:\n> I think to add the number of login failures is good for security.\n> Although we can see the event from log files, it's useful to know the \n> overview if the database may be attached or not.\n\nI don't think login failures can be reasonably reported in\n\"pg_stat_database\", since 
authentication happens before the session is\nattached to a database.\n\nWhat if somebody attempts to connect to a non-existing database?\n\nI agree that this is interesting information, but I don't think it\nbelongs into this patch.\n\n> By the way, could you rebase the patch since the latest patches\n> failed to be applied to the master branch?\n\nYes, the patch has bit-rotted.\n\nAttached is v3 with improvements.\n\nYours,\nLaurenz Albe", "msg_date": "Tue, 13 Oct 2020 13:44:41 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On Tue, Oct 13, 2020 at 01:44:41PM +0200, Laurenz Albe wrote:\n> Attached is v3 with improvements.\n\n+ <para>\n+ Time spent in database sessions in this database, in milliseconds.\n+ </para></entry>\n\nShould say \"Total time spent *by* DB sessions...\" ?\n\nI think these counters are only accurate as of the last state change, right?\nSo a session which has been idle for 1hr, that 1hr is not included. I think\nthe documentation should explain that, or (ideally) the implementation would be\nmore precise. Maybe the timestamps should only be updated after a session\nterminates (and the docs should say so).\n\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>connections</structfield> <type>bigint</type>\n+ </para>\n+ <para>\n+ Number of connections established to this database.\n\n*Total* number of connections established, otherwise it sounds like it might\nmean \"the number of sessions [currently] established\".\n\n+ Number of database sessions to this database that did not end\n+ with a regular client disconnection.\n\nDoes that mean \"sessions which ended irregularly\" ? Or does it also include\n\"sessions which have not ended\" ?\n\n+ msg.m_aborted = (!disconnect || pgStatSessionDisconnected) ? 
0 : 1;\n\nI think this can be just:\nmsg.m_aborted = (bool) (disconnect && !pgStatSessionDisconnected);\n\n+ if ((dbentry = pgstat_fetch_stat_dbentry(dbid)) == NULL)\n+ result = 0;\n+ else\n+ result = ((double) dbentry->n_session_time) / 1000.0;\n\nI think these can say:\n|double result = 0;\n|if ((dbentry=..) != NULL)\n| result = (double) ..;\n\nThat not only uses fewer LOC, but also the assignment to zero is (known to be)\ndone at compile time (BSS) rather than runtime.\n\n\n", "msg_date": "Tue, 13 Oct 2020 17:55:48 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "Thanks for the --- as always --- valuable review!\n\nOn Tue, 2020-10-13 at 17:55 -0500, Justin Pryzby wrote:\n> On Tue, Oct 13, 2020 at 01:44:41PM +0200, Laurenz Albe wrote:\n> > Attached is v3 with improvements.\n> \n> + <para>\n> + Time spent in database sessions in this database, in milliseconds.\n> + </para></entry>\n> \n> Should say \"Total time spent *by* DB sessions...\" ?\n\nThat is indeed better. Fixed.\n\n> I think these counters are only accurate as of the last state change, right?\n> So a session which has been idle for 1hr, that 1hr is not included. I think\n> the documentation should explain that, or (ideally) the implementation would be\n> more precise. 
Maybe the timestamps should only be updated after a session\n> terminates (and the docs should say so).\n\nI agree, and I have added an explanation that the value doesn't include\nthe duration of the current state.\n\nOf course it would be nice to have totally accurate values, but I think\nthat the statistics are by nature inaccurate (datagrams can get lost),\nand more frequent statistics updates increase the work load.\nI don't think that is worth the effort.\n\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>connections</structfield> <type>bigint</type>\n> + </para>\n> + <para>\n> + Number of connections established to this database.\n> \n> *Total* number of connections established, otherwise it sounds like it might\n> mean \"the number of sessions [currently] established\".\n\nFixed like that.\n\n> + Number of database sessions to this database that did not end\n> + with a regular client disconnection.\n> \n> Does that mean \"sessions which ended irregularly\" ? Or does it also include\n> \"sessions which have not ended\" ?\n\nI have added an explanation for that.\n\n> + msg.m_aborted = (!disconnect || pgStatSessionDisconnected) ? 0 : 1;\n> \n> I think this can be just:\n> msg.m_aborted = (bool) (disconnect && !pgStatSessionDisconnected);\n\nI mulled over this and finally decided to leave it as it is.\n\nSince \"m_aborted\" gets added to the total counter, I'd prefer to\nhave it be an \"int\".\n\nYour proposed code works (the cast is actually not necessary, right?).\nBut I think that my version is more readable if you think of\n\"m_aborted\" as a counter rather than a flag.\n\n> + if ((dbentry = pgstat_fetch_stat_dbentry(dbid)) == NULL)\n> + result = 0;\n> + else\n> + result = ((double) dbentry->n_session_time) / 1000.0;\n> \n> I think these can say:\n> > double result = 0;\n> > if ((dbentry=..) 
!= NULL)\n> > result = (double) ..;\n> \n> That not only uses fewer LOC, but also the assignment to zero is (known to be)\n> done at compile time (BSS) rather than runtime.\n\nI didn't know about the performance difference.\nConcise code (if readable) is good, so I changed the code like you propose.\n\nThe code pattern is actually copied from neighboring functions,\nwhich then should also be changed like this, but that is outside\nthe scope of this patch.\n\nAttached is v4 of the patch.\n\nYours,\nLaurenz Albe", "msg_date": "Wed, 14 Oct 2020 11:28:36 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "Hi Laurenz,\n\nI have applied the latest patch on master, all the regression test cases\nare passing and the implemented functionality is also looking fine. The\npoint that I raised about idle connection not included is also addressed.\n\nthanks,\nAhsan\n\nOn Wed, Oct 14, 2020 at 2:28 PM Laurenz Albe <laurenz.albe@cybertec.at>\nwrote:\n\n> Thanks for the --- as always --- valuable review!\n>\n> On Tue, 2020-10-13 at 17:55 -0500, Justin Pryzby wrote:\n> > On Tue, Oct 13, 2020 at 01:44:41PM +0200, Laurenz Albe wrote:\n> > > Attached is v3 with improvements.\n> >\n> > + <para>\n> > + Time spent in database sessions in this database, in\n> milliseconds.\n> > + </para></entry>\n> >\n> > Should say \"Total time spent *by* DB sessions...\" ?\n>\n> That is indeed better. Fixed.\n>\n> > I think these counters are only accurate as of the last state change,\n> right?\n> > So a session which has been idle for 1hr, that 1hr is not included. I\n> think\n> > the documentation should explain that, or (ideally) the implementation\n> would be\n> > more precise. 
Maybe the timestamps should only be updated after a\n> session\n> > terminates (and the docs should say so).\n>\n> I agree, and I have added an explanation that the value doesn't include\n> the duration of the current state.\n>\n> Of course it would be nice to have totally accurate values, but I think\n> that the statistics are by nature inaccurate (datagrams can get lost),\n> and more frequent statistics updates increase the work load.\n> I don't think that is worth the effort.\n>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>connections</structfield> <type>bigint</type>\n> > + </para>\n> > + <para>\n> > + Number of connections established to this database.\n> >\n> > *Total* number of connections established, otherwise it sounds like it\n> might\n> > mean \"the number of sessions [currently] established\".\n>\n> Fixed like that.\n>\n> > + Number of database sessions to this database that did not end\n> > + with a regular client disconnection.\n> >\n> > Does that mean \"sessions which ended irregularly\" ? Or does it also\n> include\n> > \"sessions which have not ended\" ?\n>\n> I have added an explanation for that.\n>\n> > + msg.m_aborted = (!disconnect || pgStatSessionDisconnected) ? 0 :\n> 1;\n> >\n> > I think this can be just:\n> > msg.m_aborted = (bool) (disconnect && !pgStatSessionDisconnected);\n>\n> I mulled over this and finally decided to leave it as it is.\n>\n> Since \"m_aborted\" gets added to the total counter, I'd prefer to\n> have it be an \"int\".\n>\n> Your proposed code works (the cast is actually not necessary, right?).\n> But I think that my version is more readable if you think of\n> \"m_aborted\" as a counter rather than a flag.\n>\n> > + if ((dbentry = pgstat_fetch_stat_dbentry(dbid)) == NULL)\n> > + result = 0;\n> > + else\n> > + result = ((double) dbentry->n_session_time) / 1000.0;\n> >\n> > I think these can say:\n> > > double result = 0;\n> > > if ((dbentry=..) 
!= NULL)\n> > > result = (double) ..;\n> >\n> > That not only uses fewer LOC, but also the assignment to zero is (known\n> to be)\n> > done at compile time (BSS) rather than runtime.\n>\n> I didn't know about the performance difference.\n> Concise code (if readable) is good, so I changed the code like you propose.\n>\n> The code pattern is actually copied from neighboring functions,\n> which then should also be changed like this, but that is outside\n> the scope of this patch.\n>\n> Attached is v4 of the patch.\n>\n> Yours,\n> Laurenz Albe\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : http://www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nEMAIL: mailto: ahsan.hadi@highgo.ca", "msg_date": "Fri, 16 Oct 2020 16:24:56 +0500", "msg_from": "Ahsan Hadi <ahsan.hadi@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "Hi,\r\n\r\nI noticed that the cfbot fails for this patch.\r\nFor this, I am setting the status to: 'Waiting on Author'.\r\n\r\nCheers,\r\n//Georgios\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Tue, 10 Nov 2020 15:03:28 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@protonmail.com>", "msg_from_op": false, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On Tue, 2020-11-10 at 15:03 +0000, Georgios Kokolatos wrote:\n> I noticed that the cfbot fails for this patch.\n> \n> For this, I am setting the status to: 'Waiting on Author'.\n\nThanks for noticing, it was only the documentation build.\n\nVersion 5 attached, status changed back to \"waiting for review\".\n\nYours,\nLaurenz Albe", "msg_date": "Wed, 11 Nov 2020 20:17:04 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "I wrote:\n> On Tue, 2020-11-10 at 15:03 +0000, Georgios Kokolatos wrote:\n> > I noticed that the cfbot fails for this patch.\n> > \n> > For this, I am setting the 

status to: 'Waiting on Author'.\n> \n> Thanks for noticing, it was only the documentation build.\n> \n> Version 5 attached, status changed back to \"waiting for review\".\n\nThe patch is still failing, so I looked again:\n\n make[3]: Entering directory '/home/travis/build/postgresql-cfbot/postgresql/doc/src/sgml'\n { \\\n echo \"<!ENTITY version \\\"14devel\\\">\"; \\\n echo \"<!ENTITY majorversion \\\"14\\\">\"; \\\n } > version.sgml\n '/usr/bin/perl' ./mk_feature_tables.pl YES ../../../src/backend/catalog/sql_feature_packages.txt ../../../src/backend/catalog/sql_features.txt > features-supported.sgml\n '/usr/bin/perl' ./mk_feature_tables.pl NO ../../../src/backend/catalog/sql_feature_packages.txt ../../../src/backend/catalog/sql_features.txt > features-unsupported.sgml\n '/usr/bin/perl' ./generate-errcodes-table.pl ../../../src/backend/utils/errcodes.txt > errcodes-table.sgml\n '/usr/bin/perl' ./generate-keywords-table.pl . > keywords-table.sgml\n /usr/bin/xmllint --path . --noout --valid postgres.sgml\n error : Unknown IO error\n postgres.sgml:21: /usr/bin/bison -Wno-deprecated -d -o gram.c gram.y\n warning: failed to load external entity \"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd\"\n ]>\n ^\n postgres.sgml:23: element book: validity error : No declaration for attribute id of element book\n <book id=\"postgres\">\n ^\n postgres.sgml:24: element title: validity error : No declaration for element title\n <title>PostgreSQL &version; Documentation</title>\n\nI have the impression that this is not the fault of my patch, something seems to be\nwrong with the cfbot.\n\nI see that other patches are failing with the same error.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Thu, 12 Nov 2020 09:31:20 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Thursday, November 12, 2020 9:31 AM, Laurenz Albe 
<laurenz.albe@cybertec.at> wrote:\n\n> I wrote:\n>\n> > On Tue, 2020-11-10 at 15:03 +0000, Georgios Kokolatos wrote:\n> >\n> > > I noticed that the cfbot fails for this patch.\n> > > For this, I am setting the status to: 'Waiting on Author'.\n> >\n> > Thanks for noticing, it was only the documentation build.\n> > Version 5 attached, status changed back to \"waiting for review\".\n>\n> The patch is still failing, so I looked again:\n>\n> make[3]: Entering directory '/home/travis/build/postgresql-cfbot/postgresql/doc/src/sgml'\n> { \\\n> echo \"<!ENTITY version \\\"14devel\\\">\"; \\\n>\n> echo \"<!ENTITY majorversion \\\\\"14\\\\\">\"; \\\\\n>\n>\n> } > version.sgml\n> '/usr/bin/perl' ./mk_feature_tables.pl YES ../../../src/backend/catalog/sql_feature_packages.txt ../../../src/backend/catalog/sql_features.txt > features-supported.sgml\n> '/usr/bin/perl' ./mk_feature_tables.pl NO ../../../src/backend/catalog/sql_feature_packages.txt ../../../src/backend/catalog/sql_features.txt > features-unsupported.sgml\n> '/usr/bin/perl' ./generate-errcodes-table.pl ../../../src/backend/utils/errcodes.txt > errcodes-table.sgml\n> '/usr/bin/perl' ./generate-keywords-table.pl . > keywords-table.sgml\n> /usr/bin/xmllint --path . --noout --valid postgres.sgml\n> error : Unknown IO error\n> postgres.sgml:21: /usr/bin/bison -Wno-deprecated -d -o gram.c gram.y\n> warning: failed to load external entity \"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd\"\n> ]>\n>\n> ^\n>\n>\n> postgres.sgml:23: element book: validity error : No declaration for attribute id of element book\n> <book id=\"postgres\">\n>\n> ^\n>\n>\n> postgres.sgml:24: element title: validity error : No declaration for element title\n> <title>PostgreSQL &version; Documentation</title>\n>\n> I have the impression that this is not the fault of my patch, something seems to be\n> wrong with the cfbot.\n>\n> I see that other patches are failing with the same error.\n\nYou are indeed correct. 
Unfortunately the cfbot is a bit unstable due\nto some issues related to the documentation. I alerted a contributor\nand he was quick to try to address the issue in pgsql-www [1].\n\nThank you very much for looking and apologies for the chatter.\n\n>\n> Yours,\n> Laurenz Albe\n\n[1] https://www.postgresql.org/message-id/E2EE6B76-2D96-408A-B961-CAE47D1A86F0%40yesql.se\n\n\n", "msg_date": "Thu, 12 Nov 2020 08:44:09 +0000", "msg_from": "Georgios <gkokolatos@protonmail.com>", "msg_from_op": false, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On Fri, 2020-10-16 at 16:24 +0500, Ahsan Hadi wrote:\n> I have applied the latest patch on master, all the regression test cases are passing\n> and the implemented functionality is also looking fine. The point that I raised about\n> idle connection not included is also addressed.\n\nIf you think that the patch is ready to go, you could mark it as\n\"ready for committer\" in the commitfest app.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 17 Nov 2020 16:22:33 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On Tue, Nov 17, 2020 at 4:22 PM Laurenz Albe <laurenz.albe@cybertec.at>\nwrote:\n\n> On Fri, 2020-10-16 at 16:24 +0500, Ahsan Hadi wrote:\n> > I have applied the latest patch on master, all the regression test cases\n> are passing\n> > and the implemented functionality is also looking fine. The point that\n> I raised about\n> > idle connection not included is also addressed.\n>\n> If you think that the patch is ready to go, you could mark it as\n> \"ready for committer\" in the commitfest app.\n>\n\nI've taken a look as well, and here are a few short notes:\n\n* It talks about \"number of connections\" but \"number of aborted sessions\".\nWe should probably be consistent about talking either about connections or\nsessions? 
In particular, connections seems wrong in this case, because it\nonly starts counting after authentication is complete (since otherwise we\nsend no stats)? (This goes for both docs and actual function names)\n\n* Is there a reason we're counting active and idle in transaction\n(including aborted), but not fastpath? In particular, we seem to ignore\nfastpath -- if we don't want to single it out specifically, it should\nprobably be included in active?\n\n* pgstat_send_connstat() but pgstat_recv_connection(). Let's call both\nconnstat or both connection (I'd vote connstat)?\n\n* Is this actually a fix that's independent of the new stats? It seems in\ngeneral to be changing the behaviour of \"force\", which is more generic?\n-               !have_function_stats)\n+               !have_function_stats && !force)\n\n* in pgstat_send_connstat() you pass the parameter \"force\" in as\n\"disconnect\". That behaviour at least requires a comment saying why, I\nthink. My understanding is it relies on that \"force\" means this is\na \"backend is shutting down\", but that is not actually documented anywhere.\nMaybe the \"force\" parameter should actually be renamed to indicate this is\nreally what it means, to avoid a future mistake in the area? But even with\nthat, how does that turn into disconnect?\n\n* Maybe rename pgStatSessionDisconnected\nto pgStatSessionNormalDisconnected? To avoid having to go back to the\nsetting point and look it up in a comment.\n\nI wonder if there would also be a way to count \"sessions that crashed\" as\nwell. That is, the ones that failed in a way that caused the postmaster to\nrestart the system. But that's information we'd have to send from the\npostmaster, but I'm actually unsure if we're \"allowed\" to send things to\nthe stats collector from the postmaster. But I think it could be quite\nuseful information to have. 
Maybe we can find some way to piggyback on the\nfact that we're restarting the stats collector as a result?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Tue, 17 Nov 2020 17:33:05 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On Tue, 2020-11-17 at 17:33 +0100, Magnus Hagander wrote:\n> I've taken a look as well, and here are a few short notes:\n\nMuch appreciated!\n\n> * It talks about \"number of connections\" but \"number of aborted sessions\". We should probably\n>   be consistent about talking either about connections or sessions? In particular, connections\n>   seems wrong in this case, because it only starts counting after authentication is complete\n>   (since otherwise we send no stats)? (This goes for both docs and actual function names)\n\nYes, that is true. 
I have changed \"connections\" to \"sessions\" and renamed the new\ncolumn \"connections\" to \"session_count\".\n\nI think that most people will understand a session as started after a successful\nconnection.\n\n> * Is there a reason we're counting active and idle in transaction (including aborted),\n> but not fastpath? In particular, we seem to ignore fastpath -- if we don't want to single\n> it out specifically, it should probably be included in active?\n\nThe only reason is that I didn't think of it. Fixed.\n\n> * pgstat_send_connstat() but pgstat_recv_connection(). Let's call both connstat or both\n> connection (I'd vote connstat)?\n\nAgreed, done.\n\n> * Is this actually a fix that's independent of the new stats? It seems in general to be\n> changing the behaviour of \"force\", which is more generic?\n> - !have_function_stats)\n> + !have_function_stats && !force)\n\nThe comment right above that reads:\n/* Don't expend a clock check if nothing to do */\nSo it is just a quick exit if there is nothing to do.\n\nBut with that patch we have something to do if \"force\" (see below) is true:\nReport the remaining session duration and if the session was closed normally.\n\nThus the additional check.\n\n> * in pgstat_send_connstat() you pass the parameter \"force\" in as \"disconnect\".\n> That behaviour at least requires a comment saying why, I think. My understanding is\n> it relies on that \"force\" means this is a \"backend is shutting down\", but that is not\n> actually documented anywhere. 
Maybe the \"force\" parameter should actually be renamed\n> to indicate this is really what it means, to avoid a future mistake in the area?\n> But even with that, how does that turn into disconnect?\n\n\"pgstat_report_stat(true)\" is only called from \"pgstat_beshutdown_hook()\", so\nit is currently only called when the backend is about to exit.\n\nAccording to the comments the flag means that \"caller wants to force stats out\".\nI guess that the author thought that there may arise other reasons to force sending\nstatistics in the future (commit 641912b4d from 2007).\n\nHowever, since that has not happened, I have renamed the flag to \"disconnect\" and\nadapted the documentation. This doesn't change the current behavior, but establishes\na new rule.\n\n> * Maybe rename pgStatSessionDisconnected to pgStatSessionNormalDisconnected?\n> To avoid having to go back to the setting point and look it up in a comment.\n\nLong, descriptive names are a good thing.\nI have decided to use \"pgStatSessionDisconnectedNormally\", since that is even longer\nand seems to fit the \"yes or no\" category better.\n \n> I wonder if there would also be a way to count \"sessions that crashed\" as well.\n> That is,the ones that failed in a way that caused the postmaster to restart the system.\n> But that's information we'd have to send from the postmaster, but I'm actually unsure\n> if we're \"allowed\" to send things to the stats collector from the postmaster.\n> But I think it could be quite useful information to have. Maybe we can find some way\n> to piggyback on the fact that we're restarting the stats collector as a result?\n\nSure, a crash count would be useful. 
I don't know if it is easy for the stats collector\nto tell the difference between a start after a backend crash and - say - starting from\na base backup.\n\nPatch v6 attached.\n\nI think that that would be material for another patch, and I don't think it should go\nto \"pg_stat_database\", because a) it might be hard to tell to which database the crashed\nbackend was attached, b) it might be a background process that doesn't belong to a database\nand c) if the crash were caused by - say - corruption in a shared catalog, it would be\nmisleading.\n\nYours,\nLaurenz Albe", "msg_date": "Fri, 20 Nov 2020 15:41:05 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On Fri, Nov 20, 2020 at 3:41 PM Laurenz Albe <laurenz.albe@cybertec.at>\nwrote:\n\n> On Tue, 2020-11-17 at 17:33 +0100, Magnus Hagander wrote:\n> > I've taken a look as well, and here are a few short notes:\n>\n> Much appreciated!\n>\n\nSorry about the delay in getting back to you on this one. FYI, while the\npatch has been bumped to the next CF by now, I do intend to continue\nworking on it before that starts.\n\n\n> * It talks about \"number of connections\" but \"number of aborted\n> sessions\". We should probably\n> > be consistent about talking either about connections or sessions? In\n> particular, connections\n> > seems wrong in this case, because it only starts counting after\n> authentication is complete\n> > (since otherwise we send no stats)? (This goes for both docs and\n> actual function names)\n>\n> Yes, that is true. 
I have changed \"connections\" to \"sessions\" and renamed\n> the new\n> column \"connections\" to \"session_count\".\n>\n> I think that most people will understand a session as started after a\n> successful\n> connection.\n>\n\nYeah, I agree, and as long as it's consistent we don't need more\nexplanations than that.\n\nFurther in the views, it's a bit strange to have session_count and\naborted_session, but I'm not sure what to suggest. \"aborted_session_count\"\nseems too long. Maybe just \"sessions\" instead of \"session_count\" -- no\nother counters actually have the \"_count\" suffix.\n\n\n> > * Is this actually a fix that's independent of the new stats? It seems in\n> > general to be\n> > changing the behaviour of \"force\", which is more generic?\n> > - !have_function_stats)\n> > + !have_function_stats && !force)\n>\n> The comment right above that reads:\n> /* Don't expend a clock check if nothing to do */\n> So it is just a quick exit if there is nothing to do.\n>\n> But with that patch we have something to do if \"force\" (see below) is true:\n> Report the remaining session duration and if the session was closed\n> normally.\n>\n> Thus the additional check.\n>\n\nAh yeah, makes sense. It becomes more clear with the rename.\n\n\n> * in pgstat_send_connstat() you pass the parameter \"force\" in as\n> \"disconnect\".\n> > That behaviour at least requires a comment saying why, I think. My\n> understanding is\n> > it relies on that \"force\" means this is a \"backend is shutting down\",\n> but that is not\n> > actually documented anywhere. 
Maybe the \"force\" parameter should\n> actually be renamed\n> > to indicate this is really what it means, to avoid a future mistake in\n> the area?\n> > But even with that, how does that turn into disconnect?\n>\n> \"pgstat_report_stat(true)\" is only called from \"pgstat_beshutdown_hook()\",\n> so\n> it is currently only called when the backend is about to exit.\n>\n> According the the comments the flag means that \"caller wants to force\n> stats out\".\n> I guess that the author thought that there may arise other reasons to\n> force sending\n> statistics in the future (commit 641912b4d from 2007).\n>\n> However, since that has not happened, I have renamed the flag to\n> \"disconnect\" and\n> adapted the documentation. This doesn't change the current behavior, but\n> establishes\n> a new rule.\n>\n\nThat makes it a lot more clear. And I agree, if nobody came up with a\nreason since 2007, then we are free to repurpose it :)\n\n\n\n> * Maybe rename pgStatSessionDisconnected to\n> pgStatSessionNormalDisconnected?\n> > To avoid having to go back to the setting point and look it up in a\n> comment.\n>\n> Long, descriptive names are a good thing.\n> I have decided to use \"pgStatSessionDisconnectedNormally\", since that is\n> even longer\n> and seems to fit the \"yes or no\" category better.\n>\n\nWFM.\n\n\n> I wonder if there would also be a way to count \"sessions that crashed\" as\n> well.\n> > That is,the ones that failed in a way that caused the postmaster to\n> restart the system.\n> > But that's information we'd have to send from the postmaster, but I'm\n> actually unsure\n> > if we're \"allowed\" to send things to the stats collector from the\n> postmaster.\n> > But I think it could be quite useful information to have. Maybe we can\n> find some way\n> > to piggyback on the fact that we're restarting the stats collector as a\n> result?\n>\n> Sure, a crash count would be useful. 
I don't know if it is easy for the\n> stats collector\n> to tell the difference between a start after a backend crash and - say -\n> starting from\n> a base backup.\n>\n> Patch v6 attached.\n>\n> I think that that would be material for another patch, and I don't think\n> it should go\n> to \"pg_stat_database\", because a) it might be hard to tell to which\n> database the crashed\n> backend was attached, b) it might be a background process that doesn't\n> belong to a database\n> and c) if the crash were caused by - say - corruption in a shared catalog,\n> it would be\n> misleading\n\n\nI'm not sure it is outside the scope of this patch, because I think it\nmight be easier to do than I (and I think you) first thought. We don't need\nto track which database crashed -- if we track all *other* ways a database\nexits, then crashes are all that remains.\n\nSo in fact, we *almost* have all the data we need already. We have the\nnumber of sessions started. We have the number of sessions \"aborted\". if we\nalso had the number of sessions that were closed normally, then whatever is\n\"left\" would be the number of sessions crashed. And we do already, in your\npatch, send the message in the case of both aborted and non-aborted\nsessions. So we just need to keep track of both in the statsfile (which we\ndon't now), and we'd more or less have it, wouldn't we?\n\nHowever, some thinking around that also leads me to another question which\nis very much in scope for this patch regardless, which is what about\nshutdown and admin termination. Right now, when you do a \"pg_ctl stop\" on\nthe database, all sessions count as aborted. Same thing for a\npg_terminate_backend(). I wonder if this is also a case that would be\nuseful to track as a separate thing? One could argue that the docs in your\npatch say aborted means \"terminated by something else than a regular client\ndisconnection\". 
But that's true for a \"shutdown\", but not for a crash, so\nwhichever way we go with crashes it's slightly incorrect.\n\nBut thinking from a usability perspective, wouldn't what we want more be\nsomething like <closed by correct disconnect>, <closed by abnormal\ndisconnect>, <closed by admin>, <crash>?\n\nWhat do you think of adapting it to that?\n\nBasically, that would change pgStatSessionDisconnectedNormally into instead\nbeing an enum of reasons, which could be normal disconnect, abnormal\ndisconnect and admin. And we'd track all those three as separate numbers in\nthe stats file, meaning we could then calculate the crash by subtracting\nall three from the total number of sessions?\n\n(Let me know if you think the idea could work and would prefer it if I\nworked up a complete suggestion based on it rather than just spitting ideas)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Tue, 1 Dec 2020 17:32:21 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On Tue, 2020-12-01 at 17:32 +0100, Magnus Hagander wrote:\n> > I have changed \"connections\" to \"sessions\" and renamed the new\n> > column \"connections\" to \"session_count\".\n> > \n> > I think that most people will understand a session as started after a successful\n> > connection.\n> \n> Yeah, I agree, and as long as it's consistent we don't need more explanations than that.\n> \n> Further int he views, it's a bit strange to have session_count and aborted_session, but I'm not\n> sure what to suggest. \"aborted_session_count\" seems too long. 
Maybe just \"sessions\" instead\n> of \"session_count\" -- no other counters actually have the \"_count\" suffix.\n\n\"sessions\" is fine, I think; I changed the name.\n\n> > > I wonder if there would also be a way to count \"sessions that crashed\" as well.\n> > > That is,the ones that failed in a way that caused the postmaster to restart the system.\n> >\n> > Sure, a crash count would be useful. I don't know if it is easy for the stats collector\n> > to tell the difference between a start after a backend crash and - say - starting from\n> > a base backup.\n> > \n> > I think that that would be material for another patch, and I don't think it should go\n> > to \"pg_stat_database\", because a) it might be hard to tell to which database the crashed\n> > backend was attached, b) it might be a background process that doesn't belong to a database\n> > and c) if the crash were caused by - say - corruption in a shared catalog, it would be\n> > misleading\n> \n> I'm not sure it is outside the scope of this patch, because I think it might be easier to\n> do than I (and I think you) first thought. We don't need to track which database crashed --\n> if we track all *other* ways a database exits, then crashes are all that remains.\n> \n> So in fact, we *almost* have all the data we need already. We have the number of sessions\n> started. We have the number of sessions \"aborted\". if we also had the number of sessions\n> that were closed normally, then whatever is \"left\" would be the number of sessions crashed.\n> And we do already, in your patch, send the message in the case of both aborted and\n> non-aborted sessions. So we just need to keep track of both in the statsfile\n> (which we don't now), and we'd more or less have it, wouldn't we?\n\nThere is one problem with that: the statistics collector is not guaranteed to get all\nmessages, right? 
If a disconnection statistics UDP datagram doesn't reach the statistics\ncollector, that connection\nwould end up being reported as crashed.\nThat would alarm people unnecessarily and make the crash statistics misleading.\n\n> However, some thinking around that also leads me to another question which is very much\n> in scope for this patch regardless, which is what about shutdown and admin termination.\n> Right now, when you do a \"pg_ctl stop\" on the database, all sessions count as aborted.\n> Same thing for a pg_terminate_backend(). I wonder if this is also a case that would be\n> useful to track as a separate thing? One could argue that the docs in your patch say\n> aborted means \"terminated by something else than a regular client disconnection\".\n> But that's true for a \"shutdown\", but not for a crash, so whichever way we go with crashes\n> it's slightly incorrect.\n\n> But thinking from a usability perspective, wouldn't what we want more be something\n> like <closed by correct disconnect>, <closed by abnormal disconnect>, <closed by admin>,\n> <crash>?\n> \n> What do you think of adapting it to that?\n> \n> Basically, that would change pgStatSessionDisconnectedNormally into instead being an\n> enum of reasons, which could be normal disconnect, abnormal disconnect and admin.\n> And we'd track all those three as separate numbers in the stats file, meaning we could\n> then calculate the crash by subtracting all three from the total number of sessions?\n\nI think at least \"closed by admin\" might be interesting; I'll have a look.\nI don't think we have to specifically count \"closed by normal disconnect\", because\nthat should be the rule and could be more or less deduced from the other numbers\n(with the uncertainty mentioned above).\n\n> (Let me know if you think the idea could work and would prefer it if I worked up a\n> complete suggestion based on it rather than just spitting ideas)\n\nThanks for the offer, and I'll get back to it if I get stuck.\nBut 
I'm ready to do the grunt work, so that you can spend your precious\ncommitter cycles elsewhere :^)\n\nI'll have a go at \"closed by admin\", meanwhile here is patch v7 with the renaming\n\"session_count -> sessions\".\n\nYours,\nLaurenz Albe", "msg_date": "Thu, 03 Dec 2020 13:22:35 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On Thu, 2020-12-03 at 13:22 +0100, Laurenz Albe wrote:\n> > Basically, that would change pgStatSessionDisconnectedNormally into instead being an\n> > enum of reasons, which could be normal disconnect, abnormal disconnect and admin.\n> > And we'd track all those three as separate numbers in the stats file, meaning we could\n> > then calculate the crash by subtracting all three from the total number of sessions?\n> \n> I think at least \"closed by admin\" might be interesting; I'll have a look.\n> I don't think we have to specifically count \"closed by normal disconnect\", because\n> that should be the rule and could be more or less deduced from the other numbers\n> (with the uncertainty mentioned above).\n> \n> > (Let me know if you think the idea could work and would prefer it if I worked up a\n> > complete suggestion based on it rather than just spitting ideas)\n> \n> Thanks for the offer, and I'll get back to it if I get stuck.\n\nOk, I could use a pointer.\n\nI am considering the cases\n\n1) client just went away (currently \"aborted\")\n2) death by FATAL error\n3) killed by the administrator (or shutdown)\n\nWhat is a good place in the code to tell 2) or 3)\nso that I can set the state accordingly?\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Fri, 04 Dec 2020 16:55:52 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On Fri, 2020-12-04 at 16:55 +0100, I wrote:\n> > > Basically, that would change 
pgStatSessionDisconnectedNormally into instead being an\n> > > enum of reasons, which could be normal disconnect, abnormal disconnect and admin.\n> > > And we'd track all those three as separate numbers in the stats file, meaning we could\n> > > then calculate the crash by subtracting all three from the total number of sessions?\n> > \n> > I think at least \"closed by admin\" might be interesting; I'll have a look.\n> > I don't think we have to specifically count \"closed by normal disconnect\", because\n> > that should be the rule and could be more or less deduced from the other numbers\n> > (with the uncertainty mentioned above).\n> \n> I am considering the cases\n> \n> 1) client just went away (currently \"aborted\")\n> 2) death by FATAL error\n> 3) killed by the administrator (or shutdown)\n\nI think I figured it out. Here is a patch along these lines.\n\nI named the three counters \"sessions_client_eof\", \"sessions_fatal\" and\n\"sessions_killed\", but I am not wedded to these bike shed colors.\n\nYours,\nLaurenz Albe", "msg_date": "Sat, 05 Dec 2020 13:04:13 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On Sat, Dec 5, 2020 at 1:04 PM Laurenz Albe <laurenz.albe@cybertec.at>\nwrote:\n\n> On Fri, 2020-12-04 at 16:55 +0100, I wrote:\n> > > > Basically, that would change pgStatSessionDisconnectedNormally into\n> instead being an\n> > > > enum of reasons, which could be normal disconnect, abnormal\n> disconnect and admin.\n> > > > And we'd track all those three as separate numbers in the stats\n> file, meaning we could\n> > > > then calculate the crash by subtracting all three from the total\n> number of sessions?\n> > >\n> > > I think at least \"closed by admin\" might be interesting; I'll have a\n> look.\n> > > I don't think we have to specifically count \"closed by normal\n> disconnect\", because\n> > > that should be the rule and could be 
more or less deduced from the\n> other numbers\n> > > (with the uncertainty mentioned above).\n> >\n> > I am considering the cases\n> >\n> > 1) client just went away (currently \"aborted\")\n> > 2) death by FATAL error\n> > 3) killed by the administrator (or shutdown)\n>\n> I think I figured it out. Here is a patch along these lines.\n>\n> I named the three counters \"sessions_client_eof\", \"sessions_fatal\" and\n> \"sessions_killed\", but I am not wedded to these bike shed colors.\n>\n\n\nMaybe we should, in honor of the bikeshed, we should call them\nsessions_blue, sessions_green etc :)\n\nIn true bikeshedding mode, I'm not entirely happy with sessions_client_eof,\nbut I'm also not sure I have a better suggestion. Maybe just\n\"sessions_lost\" or \"sessions_connlost\", which is basically the terminology\nthat the documentation uses? Maybe it's just me, but I don't really like\nthe eof terminology here.\n\nWhat do you think about that? Or does somebody else have an opinion here?\n\nAside from that bikeshedding, I think this version looks very good!\n\nIn today's dept of small things I noticed:\n\n+ if (disconnect)\n+ msg.m_disconnect = pgStatSessionEndCause;\n\nin the non-disconnect state, that variable is left uninitialized, isn't\nit? 
It does end up getting ignored later, but to be more future proof the\nenum should probably have a value specifically for \"not disconnected yet\"?\n\n+ case DISCONNECT_CLIENT_EOF:\n+ ++(dbentry->n_sessions_client_eof);\n+ break;\n\nThe normal syntax we'd use for that would be\n dbentry->n_sessions_client_eof++;\n\n+ typedef enum sessionEndType {\n\nTo be consistent with the other enums in the same place, seems this should\nbe SessionEndType.\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Sun, 13 Dec 2020 17:49:58 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On Sun, 2020-12-13 at 17:49 +0100, Magnus Hagander wrote:\n> > > I am considering the cases\n> > > \n> > > 1) client just went away (currently \"aborted\")\n> > > 2) death by FATAL error\n> > > 3) killed by the administrator (or shutdown)\n> > \n> > I named the three counters \"sessions_client_eof\", \"sessions_fatal\" and\n> > 
\"sessions_killed\", but I am not wedded to these bike shed colors.\n> \n> In true bikeshedding mode, I'm not entirely happy with sessions_client_eof,\n> but I'm also not sure I have a better suggestion. Maybe just \"sessions_lost\"\n> or \"sessions_connlost\", which is basically the terminology that the documentation uses?\n> Maybe it's just me, but I don't really like the eof terminology here.\n> \n> What do you think about that? Or does somebody else have an opinion here?\n\nI slept over it, and came up with \"sessions_abandoned\".\n\n> In today's dept of small things I noticed:\n> \n> + if (disconnect)\n> + msg.m_disconnect = pgStatSessionEndCause;\n> \n> in the non-disconnect state, that variable is left uninitialized, isn't it?\n> It does end up getting ignored later, but to be more future proof the enum should probably\n> have a value specifically for \"not disconnected yet\"?\n\nYes. I named it DISCONNECT_NOT_YET.\n\n> + case DISCONNECT_CLIENT_EOF:\n> + ++(dbentry->n_sessions_client_eof);\n> + break;\n> \n> The normal syntax we'd use for that would be\n> dbentry->n_sessions_client_eof++;\n\nOk, changed.\n\n> + typedef enum sessionEndType {\n> \n> To be consistent with the other enums in the same place, seems this should be SessionEndType.\n\nTrue. 
I have renamed the type.\n\nAttached is patch version 9.\nAdded goodie: I ran pgindent on it.\n\nYours,\nLaurenz Albe", "msg_date": "Tue, 15 Dec 2020 13:53:11 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On Tue, 2020-12-15 at 13:53 +0100, Laurenz Albe wrote:\n> Attached is patch version 9.\n\nAah, I forgot the ++.\nVersion 10 attached.\n\nYours,\nLaurenz Albe", "msg_date": "Tue, 15 Dec 2020 13:55:51 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "Hi,\n\nAs a user, I want this feature to know whether\nclients' session activities are as expected.\n\nI have some comments about the patch.\n\n\n1. pg_proc.dat\n\nThe unit of \"session time\" and so on says \"in seconds\".\nBut, is \"in milliseconds\" right?\n\n\n2. monitoring.sgml\n\nIIUC, \"active_time\" includes the time executes a fast-path function and\n\"idle in transaction\" includes \"idle in transaction(aborted)\" time.\n\nWhy don't you reference pg_stat_activity's \"state\" column and\n\"active_time\" is the total time when the state is \"active\" and \"fast \npath\"?\n\"idle in transaction\" is as same too.\n\n\n3. 
pgstat.h\n\nThe comment of PgStat_MsgConn says \"Sent by pgstat_connection\".\nI thought \"pgstat_connection\" is a function, but it doesn't exist.\n\nIs \"Sent by the backend\" right?\n\nAlthough this is a trivial thing, the following row has too many tabs.\nOther structs have only one space.\n// }<tab><tab><tab>Pgstat_MsgConn;\n\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 25 Dec 2020 20:28:06 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On Fri, 2020-12-25 at 20:28 +0900, Masahiro Ikeda wrote:\n> As a user, I want this feature to know whether\n> clients' session activities are as expected.\n> \n> I have some comments about the patch.\n> \n> 1. pg_proc.dat\n> \n> The unit of \"session time\" and so on says \"in seconds\".\n> But, is \"in milliseconds\" right?\n> \n> 2. monitoring.sgml\n> \n> IIUC, \"active_time\" includes the time executes a fast-path function and\n> \"idle in transaction\" includes \"idle in transaction(aborted)\" time.\n> \n> Why don't you reference pg_stat_activity's \"state\" column and\n> \"active_time\" is the total time when the state is \"active\" and \"fast \n> path\"?\n> \"idle in transaction\" is as same too.\n> \n> 3. 
pgstat.h\n> \n> The comment of PgStat_MsgConn says \"Sent by pgstat_connection\".\n> I thought \"pgstat_connection\" is a function, but it doesn't exist.\n> \n> Is \"Sent by the backend\" right?\n> \n> Although this is a trivial thing, the following row has too many tabs.\n> Other structs have only one space.\n> // }<tab><tab><tab>Pgstat_MsgConn;\n\nThanks for the feedback.\n\nI am currently on vacations and will take a look after January 7.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Sun, 27 Dec 2020 16:16:03 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On Fri, 2020-12-25 at 20:28 +0900, Masahiro Ikeda wrote:\n> As a user, I want this feature to know whether\n> clients' session activities are as expected.\n> \n> I have some comments about the patch.\n\nThanks you for the thorough review!\n\n> 1. pg_proc.dat\n> \n> The unit of \"session time\" and so on says \"in seconds\".\n> But, is \"in milliseconds\" right?\n\nYou are right. Fixed.\n\n> 2. monitoring.sgml\n> \n> IIUC, \"active_time\" includes the time executes a fast-path function and\n> \"idle in transaction\" includes \"idle in transaction(aborted)\" time.\n>\n> Why don't you reference pg_stat_activity's \"state\" column and\n> \"active_time\" is the total time when the state is \"active\" and \"fast \n> path\"?\n> \"idle in transaction\" is as same too.\n\nGood idea; I have expanded the documentation like that.\n\n> 3. 
pgstat.h\n> \n> The comment of PgStat_MsgConn says \"Sent by pgstat_connection\".\n> I thought \"pgstat_connection\" is a function, but it doesn't exist.\n>\n> Is \"Sent by the backend\" right?\n\nThe function was renamed and is now called \"pgstat_send_connstats\".\n\nBut you are right, I might as well match the surrounding code and\nwrite \"Sent by the backend\".\n\n> Although this is a trivial thing, the following row has too many tabs.\n> \n> Other structs have only one space.\n> \n> // }<tab><tab><tab>Pgstat_MsgConn;\n\nYes, I messed that up during the pgindent run. Fixed.\n\nPatch version 11 is attached.\n\nYours,\nLaurenz Albe", "msg_date": "Thu, 07 Jan 2021 16:47:05 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On 2021-01-08 00:47, Laurenz Albe wrote:\n> On Fri, 2020-12-25 at 20:28 +0900, Masahiro Ikeda wrote:\n>> As a user, I want this feature to know whether\n>> clients' session activities are as expected.\n>> \n>> I have some comments about the patch.\n> \n> Thanks you for the thorough review!\n\nThanks for updating the patch!\n\n>> 1. pg_proc.dat\n>> \n>> The unit of \"session time\" and so on says \"in seconds\".\n>> But, is \"in milliseconds\" right?\n> \n> You are right. Fixed.\n> \n>> 2. monitoring.sgml\n>> \n>> IIUC, \"active_time\" includes the time executes a fast-path function \n>> and\n>> \"idle in transaction\" includes \"idle in transaction(aborted)\" time.\n>> \n>> Why don't you reference pg_stat_activity's \"state\" column and\n>> \"active_time\" is the total time when the state is \"active\" and \"fast\n>> path\"?\n>> \"idle in transaction\" is as same too.\n> \n> Good idea; I have expanded the documentation like that.\n\nBTW, is there any reason to merge the above statistics?\nIIUC, to separate statistics' cons is that two columns increase, and\nthere is no performance penalty. 
So, I wonder that there is a way to \nseparate them\ncorresponding to the state column of pg_stat_activity.\n\n>> 3. pgstat.h\n>> \n>> The comment of PgStat_MsgConn says \"Sent by pgstat_connection\".\n>> I thought \"pgstat_connection\" is a function, but it doesn't exist.\n>> \n>> Is \"Sent by the backend\" right?\n> \n> The function was renamed and is now called \"pgstat_send_connstats\".\n> \n> But you are right, I might as well match the surrounding code and\n> write \"Sent by the backend\".\n> \n>> Although this is a trivial thing, the following row has too many tabs.\n>> \n>> Other structs have only one space.\n>> \n>> // }<tab><tab><tab>Pgstat_MsgConn;\n> \n> Yes, I messed that up during the pgindent run. Fixed.\n> \n> Patch version 11 is attached.\n\nThere are some following codes in pgstatfuncs.c.\nint64 result = 0.0;\n\nBut, I think the following is better.\nint64 result = 0;\n\nAlthough now pg_stat_get_db_session_time is initialize \"result\" to zero \nwhen it is declared,\nanother pg_stat_XXX function didn't initialize. Is it better to change \nit?\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 08 Jan 2021 12:00:10 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On Fri, 2021-01-08 at 12:00 +0900, Masahiro Ikeda wrote:\n> 2. 
monitoring.sgml\n> \n> > > IIUC, \"active_time\" includes the time executes a fast-path function \n> > > and\n> > > \"idle in transaction\" includes \"idle in transaction(aborted)\" time.\n> > > Why don't you reference pg_stat_activity's \"state\" column and\n> > > \"active_time\" is the total time when the state is \"active\" and \"fast\n> > > path\"?\n> > > \"idle in transaction\" is as same too.\n> >\n> > Good idea; I have expanded the documentation like that.\n> \n> BTW, is there any reason to merge the above statistics?\n> IIUC, to separate statistics' cons is that two columns increase, and\n> there is no performance penalty. So, I wonder that there is a way to \n> separate them\n> corresponding to the state column of pg_stat_activity.\n\nSure, that could be done.\n\nI decided to do it like this because I thought that few people would\nbe interested in \"time spend doing fast-path function calls\"; my guess\nwas that the more interesting value is \"time where the database was\nbusy calculating results\".\n\nI tried to keep the balance between providing reasonable detail\nwhile not creating more additional columns to \"pg_stat_database\"\nthan necessary.\n\nThis is of course a matter of taste, and it is good to hear different\nopinions. If more people share your opinion, I'll change the code.\n\n> There are some following codes in pgstatfuncs.c.\n> int64 result = 0.0;\n> \n> But, I think the following is better.\n> int64 result = 0;\n\nYou are right. That was a silly copy-and-paste error. Fixed.\n\n> Although now pg_stat_get_db_session_time is initialize \"result\" to zero \n> when it is declared,\n> another pg_stat_XXX function didn't initialize. Is it better to change \n> it?\n\nI looked at other similar functions, and the ones I saw returned\nNULL if there were no data. 
In that case, it makes sense to write\n\n char *result;\n\n if ((result = get_stats_data()) == NULL)\n PG_RETURN_NULL();\n\n PG_RETURN_TEXT_P(cstring_to_text(result));\n\nBut I want to return 0 for the session time if there are no data yet,\nso I think initializing the result to 0 in the declaration makes sense.\n\nThere are some functions that do it like this:\n\n int32 result;\n\n result = 0;\n for (...)\n {\n if (...)\n result++;\n }\n\n PG_RETURN_INT32(result);\n\nAgain, it is a matter of taste, and I didn't detect a clear pattern\nin the existing code that I feel I should follow in this question.\n\nVersion 12 of the patch is attached.\n\nYours,\nLaurenz Albe", "msg_date": "Fri, 08 Jan 2021 10:34:24 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On 2021-01-08 18:34, Laurenz Albe wrote:\n> On Fri, 2021-01-08 at 12:00 +0900, Masahiro Ikeda wrote:\n>> 2. monitoring.sgml\n>> \n>> > > IIUC, \"active_time\" includes the time executes a fast-path function\n>> > > and\n>> > > \"idle in transaction\" includes \"idle in transaction(aborted)\" time.\n>> > > Why don't you reference pg_stat_activity's \"state\" column and\n>> > > \"active_time\" is the total time when the state is \"active\" and \"fast\n>> > > path\"?\n>> > > \"idle in transaction\" is as same too.\n>> >\n>> > Good idea; I have expanded the documentation like that.\n>> \n>> BTW, is there any reason to merge the above statistics?\n>> IIUC, to separate statistics' cons is that two columns increase, and\n>> there is no performance penalty. 
So, I wonder that there is a way to\n>> separate them\n>> corresponding to the state column of pg_stat_activity.\n> \n> Sure, that could be done.\n> \n> I decided to do it like this because I thought that few people would\n> be interested in \"time spend doing fast-path function calls\"; my guess\n> was that the more interesting value is \"time where the database was\n> busy calculating results\".\n> \n> I tried to keep the balance between providing reasonable detail\n> while not creating more additional columns to \"pg_stat_database\"\n> than necessary.\n> \n> This is of course a matter of taste, and it is good to hear different\n> opinions. If more people share your opinion, I'll change the code.\n\nOK, I understood.\nI don't have any strong opinions to add them.\n\n>> There are some following codes in pgstatfuncs.c.\n>> int64 result = 0.0;\n>> \n>> But, I think the following is better.\n>> int64 result = 0;\n> \n> You are right. That was a silly copy-and-paste error. Fixed.\n\nThanks.\n\n>> Although now pg_stat_get_db_session_time is initialize \"result\" to \n>> zero\n>> when it is declared,\n>> another pg_stat_XXX function didn't initialize. Is it better to change\n>> it?\n> \n> I looked at other similar functions, and the ones I saw returned\n> NULL if there were no data. 
In that case, it makes sense to write\n> \n> char *result;\n> \n> if ((result = get_stats_data()) == NULL)\n> PG_RETURN_NULL();\n> \n> PG_RETURN_TEXT_P(cstring_to_text(result));\n> \n> But I want to return 0 for the session time if there are no data yet,\n> so I think initializing the result to 0 in the declaration makes sense.\n> \n> There are some functions that do it like this:\n> \n> int32 result;\n> \n> result = 0;\n> for (...)\n> {\n> if (...)\n> result++;\n> }\n> \n> PG_RETURN_INT32(result);\n> \n> Again, it is a matter of taste, and I didn't detect a clear pattern\n> in the existing code that I feel I should follow in this question.\n\nThanks, I understood.\n\nI checked my comments are fixed.\nThis patch looks good to me for monitoring session statistics.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 08 Jan 2021 21:44:59 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On Fri, Jan 8, 2021 at 10:34 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Fri, 2021-01-08 at 12:00 +0900, Masahiro Ikeda wrote:\n> > 2. monitoring.sgml\n> >\n> > > > IIUC, \"active_time\" includes the time executes a fast-path function\n> > > > and\n> > > > \"idle in transaction\" includes \"idle in transaction(aborted)\" time.\n> > > > Why don't you reference pg_stat_activity's \"state\" column and\n> > > > \"active_time\" is the total time when the state is \"active\" and \"fast\n> > > > path\"?\n> > > > \"idle in transaction\" is as same too.\n> > >\n> > > Good idea; I have expanded the documentation like that.\n> >\n> > BTW, is there any reason to merge the above statistics?\n> > IIUC, to separate statistics' cons is that two columns increase, and\n> > there is no performance penalty. 
So, I wonder that there is a way to\n> > separate them\n> > corresponding to the state column of pg_stat_activity.\n>\n> Sure, that could be done.\n>\n> I decided to do it like this because I thought that few people would\n> be interested in \"time spend doing fast-path function calls\"; my guess\n> was that the more interesting value is \"time where the database was\n> busy calculating results\".\n>\n> I tried to keep the balance between providing reasonable detail\n> while not creating more additional columns to \"pg_stat_database\"\n> than necessary.\n>\n> This is of course a matter of taste, and it is good to hear different\n> opinions. If more people share your opinion, I'll change the code.\n>\n> > There are some following codes in pgstatfuncs.c.\n> > int64 result = 0.0;\n> >\n> > But, I think the following is better.\n> > int64 result = 0;\n>\n> You are right. That was a silly copy-and-paste error. Fixed.\n>\n> > Although now pg_stat_get_db_session_time is initialize \"result\" to zero\n> > when it is declared,\n> > another pg_stat_XXX function didn't initialize. Is it better to change\n> > it?\n>\n> I looked at other similar functions, and the ones I saw returned\n> NULL if there were no data. In that case, it makes sense to write\n>\n> char *result;\n>\n> if ((result = get_stats_data()) == NULL)\n> PG_RETURN_NULL();\n>\n> PG_RETURN_TEXT_P(cstring_to_text(result));\n>\n> But I want to return 0 for the session time if there are no data yet,\n> so I think initializing the result to 0 in the declaration makes sense.\n>\n> There are some functions that do it like this:\n>\n> int32 result;\n>\n> result = 0;\n> for (...)\n> {\n> if (...)\n> result++;\n> }\n>\n> PG_RETURN_INT32(result);\n>\n> Again, it is a matter of taste, and I didn't detect a clear pattern\n> in the existing code that I feel I should follow in this question.\n>\n> Version 12 of the patch is attached.\n\nThanks! 
I have applied this version, with some minor changes:\n\n* I renamed the n_<x>_time members in the struct to just\ntotal_<x>_time. The n_ indicates \"number of\" and is thus wrong for\ntime parameters.\n\n* Some very minor wording changes.\n\n* catversion bump (for once I didn't forget it!)\n\n\n--\n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Sun, 17 Jan 2021 14:07:07 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On Sun, 2021-01-17 at 14:07 +0100, Magnus Hagander wrote:\n> I have applied this version, with some minor changes:\n> \n> * I renamed the n_<x>_time members in the struct to just\n> total_<x>_time. The n_ indicates \"number of\" and is thus wrong for\n> time parameters.\n\nRight.\n\n> * Some very minor wording changes.\n>\n> * catversion bump (for once I didn't forget it!)\n\nThank you!\n\nYou included the catversion bump, but shouldn't PGSTAT_FILE_FORMAT_ID\nin \"include/pgstat.h\" be updated as well?\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 18 Jan 2021 17:11:08 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Add session statistics to pg_stat_database" }, { "msg_contents": "On Mon, Jan 18, 2021 at 5:11 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Sun, 2021-01-17 at 14:07 +0100, Magnus Hagander wrote:\n> > I have applied this version, with some minor changes:\n> >\n> > * I renamed the n_<x>_time members in the struct to just\n> > total_<x>_time. 
The n_ indicates \"number of\" and is thus wrong for\n> > time parameters.\n>\n> Right.\n>\n> > * Some very minor wording changes.\n> >\n> > * catversion bump (for once I didn't forget it!)\n>\n> Thank you!\n>\n> You included the catversion bump, but shouldn't PGSTAT_FILE_FORMAT_ID\n> in \"include/pgstat.h\" be updated as well?\n\nYup, you are absolutely correct. Will fix.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 18 Jan 2021 17:52:47 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Add session statistics to pg_stat_database" } ]
[ { "msg_contents": "Hi,\n\nCurrently with the postgres_fdw remote connections cached in the local\nbackend, the queries that use the cached connections from local\nbackend will not check whether the remote backend is killed or gone\naway, and it goes ahead and tries to submit the query and fails if the remote\nbackend is killed.\n\nThis problem was found during the discussions made in [1].\n\nOne way, we could solve the above problem is that, upon firing the new\nforeign query from local backend using the cached connection,\n(assuming the remote backend that was cached in the local backend got\nkilled for some reason), instead of failing the query in the local\nbackend, upon detecting error from the remote backend, we could just\ndelete the cached old entry and try getting another connection to\nremote backend, cache it and proceed to submit the query. This has to\nhappen only at the beginning of remote xact.\n\nThis way, instead of the query failing, the query succeeds if\nthe local backend is able to get a new remote backend connection.\n\nAttaching the patch that implements the above retry mechanism.\n\nThe way I tested the patch:\n1. select * from foreign_tbl; /*from local backend - this results in a\nremote connection being cached in the postgres_fdw connection cache\nand a remote backend is opened.*/\n2. (intentionally) kill the remote backend, just to simulate the scenario.\n3. 
select * from foreign_tbl; /*from local backend - without patch\nthis throws error \"ERROR: server closed the connection unexpectedly\".\nwith patch - try to use the cached connection at the beginning of\nremote xact, upon receiving error from remote postgres server, instead\nof aborting the query, delete the cached entry, try to get a new\nconnection, if it gets, caches it and uses that for executing the\nquery, query succeeds.*/\n\nI couldn't think of adding a test case to the existing postgres_fdw\nregression test suite with an automated scenario of the remote backend\ngetting killed.\n\nI would like to thank Ashutosh Bapat (ashutosh.bapat.oss@gmail.com)\nfor the suggestion to fix this and the review of my initial patch\nattached in [2]. I tried to address the review comments provided on my\ninitial patch [3].\n\nFor one of Ashutosh's review comments from [3] \"the fact that the\nsame connection may be used by multiple plan nodes\", I tried to have a\nfew use cases where there exist joins on two foreign tables on the\nsame remote server, in a single query, so essentially, the same\nconnection was used for multiple plan nodes. In this case we avoid\nretrying for the second GetConnection() request for the second foreign\ntable, with the check entry->xact_depth <= 0; xact_depth after the\nfirst GetConnection() and the first remote query will become 1 and we\ndon't hit the retry logic, and it seems like we are safe here. 
Please correct me if I'm missing something here.\n\nRequest the community to consider the patch for further review if the\noverall idea seems beneficial.\n\n[1] https://www.postgresql.org/message-id/CAExHW5t21B_XPQy_hownm1Qq%3DhMrgOhX%2B8gDj3YEKFvpk%3DVRgw%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CALj2ACXp6DQ3iLGx5g%2BLgVtGwC4F6K9WzKQJpyR4FfdydQzC_g%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/CAExHW5u3Gyv6Q1BEr6zMg0t%2B59e8c4KMfKVrV3Z%3D4UKKjJ19nQ%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 8 Jul 2020 18:09:54 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "Hi,\n\nOn Wed, Jul 8, 2020 at 9:40 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> One way, we could solve the above problem is that, upon firing the new\n> foreign query from local backend using the cached connection,\n> (assuming the remote backend that was cached in the local backed got\n> killed for some reasons), instead of failing the query in the local\n> backend, upon detecting error from the remote backend, we could just\n> delete the cached old entry and try getting another connection to\n> remote backend, cache it and proceed to submit the query. This has to\n> happen only at the beginning of remote xact.\n+1.\n\nI think this is a very useful feature.\nIn an environment with connection pooling for local, if a remote\nserver has a failover or switchover,\nthis feature would prevent unexpected errors of local queries after\nrecovery of the remote server.\n\nI haven't looked at the code in detail yet, some comments here.\n\n1. 
To keep the previous behavior (and performance), how about allowing\nthe user to specify\n whether or not to retry as a GUC parameter or in the FOREIGN SERVER OPTION?\n\n2. How about logging a LOG message when a retry succeeds, to let us know\n that the retry feature worked, or how often retries worked?\n\n> I couldn't think of adding a test case to the existing postgres_fdw\n> regression test suite with an automated scenario of the remote backend\n> getting killed.\n\nCouldn't you confirm this by adding a test case like the following?\n===================================================\nBEGIN;\n-- Generate a connection to remote\nSELECT * FROM ft1 LIMIT 1;\n\n-- retrieve pid of postgres_fdw and kill it\n-- could use another unique identifier (not postgres_fdw but\nfdw_retry_check, etc.) for application name\nSELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE\nbackend_type = 'client backend' AND application_name = 'postgres_fdw';\n\n-- COMMIT, so the next query should succeed if connection-retry works\nCOMMIT;\nSELECT * FROM ft1 LIMIT 1;\n===================================================\n\nBest regards,\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com\n\n\n", "msg_date": "Fri, 10 Jul 2020 16:55:41 +0900", "msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "Thanks for the comments. 
Attaching the v2 patch.\n\n>\n> > One way, we could solve the above problem is that, upon firing the new\n> > foreign query from local backend using the cached connection,\n> > (assuming the remote backend that was cached in the local backed got\n> > killed for some reasons), instead of failing the query in the local\n> > backend, upon detecting error from the remote backend, we could just\n> > delete the cached old entry and try getting another connection to\n> > remote backend, cache it and proceed to submit the query. This has to\n> > happen only at the beginning of remote xact.\n> +1.\n>\n> I think this is a very useful feature.\n> In an environment with connection pooling for local, if a remote\n> server has a failover or switchover,\n> this feature would prevent unexpected errors of local queries after\n> recovery of the remote server.\n\nThanks for backing this feature.\n\n>\n> I haven't looked at the code in detail yet, some comments here.\n>\n\nThanks for the comments. Please feel free to review more of the\nattached v2 patch.\n\n>\n> 1. To keep the previous behavior (and performance), how about allowing\n> the user to specify\n> whether or not to retry as a GUC parameter or in the FOREIGN SERVER\nOPTION?\n>\n\nDo we actually need this? We don't incur much performance overhead with this\nconnection retry, as\nwe do it only at the beginning of the remote xact, i.e. once per\nremote session; if we are\nable to establish the connection, well and good; otherwise, the query is bound to fail.\n\nIf at all, we need one (if there exists a strong reason to have the\noption), then the question is\nGUC or the SERVER OPTION?\n\nThere's a similar discussion going on having GUC at the core level vs\nSERVER OPTION for\npostgres_fdw in [1].\n\n>\n> 2. 
How about logging a LOG message when retry was success to let us know\n> the retry feature worked or how often the retries worked ?\n>\n\nIn the v1 patch I added the logging messages, but in v2 patch\n\"postgres_fdw connection retry is successful\" is added. Please note that\nall the\nnew logs are added at level \"DEBUG3\" as all the existing logs are also at\nthe same\nlevel.\n\n>\n> > I couldn't think of adding a test case to the existing postgres_fdw\n> > regression test suite with an automated scenario of the remote backend\n> > getting killed.\n>\n> Couldn't you confirm this by adding a test case like the following?\n> ===================================================\n> BEGIN;\n> -- Generate a connection to remote\n> SELECT * FROM ft1 LIMIT 1;\n>\n> -- retrieve pid of postgres_fdw and kill it\n> -- could use the other unique identifier (not postgres_fdw but\n> fdw_retry_check, etc ) for application name\n> SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE\n> backend_type = 'client backend' AND application_name = 'postgres_fdw'\n>\n> -- COMMIT, so next query will should success if connection-retry works\n> COMMIT;\n> SELECT * FROM ft1 LIMIT 1;\n> ===================================================\n>\n\nYes, this way it works. Thanks for the suggestion. I added the test\ncase to the postgres_fdw regression test suite. 
v2 patch has these\nchanges also.\n\n[1] -\nhttps://www.postgresql.org/message-id/CALj2ACVvrp5%3DAVp2PupEm%2BnAC8S4buqR3fJMmaCoc7ftT0aD2A%40mail.gmail.com\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 11 Jul 2020 19:28:08 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "On Wed, Jul 8, 2020 at 6:10 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> I couldn't think of adding a test case to the existing postgres_fdw\n> regression test suite with an automated scenario of the remote backend\n> getting killed.\n\nYou could get a backend's PID using PQbackendPID and then kill it by\ncalling pg_terminate_backend() to kill the remote backend to automate\nscenario of remote backend being killed.\n\n>\n> I would like to thank Ashutosh Bapat (ashutosh.bapat.oss@gmail.com)\n> for the suggestion to fix this and the review of my initial patch\n> attached in [2]. I tried to address the review comments provided on my\n> initial patch [3].\n>\n> For, one of the Ashutosh's review comments from [3] \"the fact that the\n> same connection may be used by multiple plan nodes\", I tried to have\n> few use cases where there exist joins on two foreign tables on the\n> same remote server, in a single query, so essentially, the same\n> connection was used for multiple plan nodes. In this case we avoid\n> retrying for the second GetConnection() request for the second foreign\n> table, with the check entry->xact_depth <= 0 , xact_depth after the\n> first GetConnection() and the first remote query will become 1 and we\n> don't hit the retry logic and seems like we are safe here. 
Please add\n> If I'm missing something here.\n>\n> Request the community to consider the patch for further review if the\n> overall idea seems beneficial.\n\nI think this idea will be generally useful if your work on dropping\nstale connection uses idle_connection_timeout or something like that\non the remote server.\n\nAbout the patch. It seems we could just catch the error from\nbegin_remote_xact() in GetConnection() and retry connection if the\nerror is \"bad connection\". Retrying using PQreset() might be better\nthan calling PQConnect* always.\n\n\n>\n> [1] https://www.postgresql.org/message-id/CAExHW5t21B_XPQy_hownm1Qq%3DhMrgOhX%2B8gDj3YEKFvpk%3DVRgw%40mail.gmail.com\n> [2] https://www.postgresql.org/message-id/CALj2ACXp6DQ3iLGx5g%2BLgVtGwC4F6K9WzKQJpyR4FfdydQzC_g%40mail.gmail.com\n> [3] https://www.postgresql.org/message-id/CAExHW5u3Gyv6Q1BEr6zMg0t%2B59e8c4KMfKVrV3Z%3D4UKKjJ19nQ%40mail.gmail.com\n>\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n\n\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 13 Jul 2020 10:13:19 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": ">\n> You could get a backend's PID using PQbackendPID and then kill it by\n> calling pg_terminate_backend() to kill the remote backend to automate\n> scenario of remote backend being killed.\n>\n\nI already added the test case in v2 patch itself(added one more test\ncase in v3 patch), using the similar approach.\n\n>\n> > For, one of the Ashutosh's review comments from [3] \"the fact that the\n> > same connection may be used by multiple plan nodes\", I tried to have\n> > few use cases where there exist joins on two foreign tables on the\n> > same remote server, in a single query, so essentially, the same\n> > connection was used for multiple plan nodes. 
In this case we avoid\n> > retrying for the second GetConnection() request for the second foreign\n> > table, with the check entry->xact_depth <= 0 , xact_depth after the\n> > first GetConnection() and the first remote query will become 1 and we\n> > don't hit the retry logic and seems like we are safe here. Please add\n> > If I'm missing something here.\n> >\n> > Request the community to consider the patch for further review if the\n> > overall idea seems beneficial.\n>\n> I think this idea will be generally useful if your work on dropping\n> stale connection uses idle_connection_timeout or something like that\n> on the remote server.\n\nAssuming we use idle_connection_timeout or some other means (as it is\nnot yet finalized, I will try to respond in that mail chain) to drop\nstale/idle connections from the local backend, I think we have two\noptions 1) deleting that cached entry from the connection cache\nentirely using disconnect_pg_server() and hash table remove. This\nfrees up some space and we don't have to deal with the connection\ninvalidations and don't have to bother about resetting cached entry's\nother parameters such as xact_depth, have_prep_stmt etc. 2) or we\ncould just drop the connections using disconnect_pg_server(), retain\nthe hash entry, reset other parameters, and deal with the\ninvalidations. This way, we maintain unnecessary info in the\ncached entry, where we actually don't have a connection at all and\nkeep holding some space for the cached entry.\n\nIMO, if we go with option 1, then it will be good.\n\nAnyways, this connection retry feature will not have any dependency on\nthe idle_connection_timeout or dropping stale connection feature,\nbecause there can be many reasons where remote backends go away/get\nkilled.\n\nIf I'm not sidetracking - if we use something like\nidle_session_timeout [1] on the remote server, this retry feature will\nbe very useful.\n\n>\n> About the patch. 
It seems we could just catch the error from\n> begin_remote_xact() in GetConnection() and retry connection if the\n> error is \"bad connection\". Retrying using PQreset() might be better\n> than calling PQConnect* always.\n>\n\nThanks for the comment, it made life easier. Added the patch with the\nchanges. Please take a look at the v3 patch and let me know if still\nsomething needs to be done.\n\n[1] -\nhttps://www.postgresql.org/message-id/763A0689-F189-459E-946F-F0EC4458980B%40hotmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 14 Jul 2020 07:27:10 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "Has this been added to CF, possibly next CF?\n\nOn Tue, Jul 14, 2020 at 7:27 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> >\n> > You could get a backend's PID using PQbackendPID and then kill it by\n> > calling pg_terminate_backend() to kill the remote backend to automate\n> > scenario of remote backend being killed.\n> >\n>\n> I already added the test case in v2 patch itself(added one more test\n> case in v3 patch), using the similar approach.\n>\n> >\n> > > For, one of the Ashutosh's review comments from [3] \"the fact that the\n> > > same connection may be used by multiple plan nodes\", I tried to have\n> > > few use cases where there exist joins on two foreign tables on the\n> > > same remote server, in a single query, so essentially, the same\n> > > connection was used for multiple plan nodes. 
In this case we avoid\n> > > retrying for the second GetConnection() request for the second foreign\n> > > table, with the check entry->xact_depth <= 0 , xact_depth after the\n> > > first GetConnection() and the first remote query will become 1 and we\n> > > don't hit the retry logic and seems like we are safe here. Please add\n> > > If I'm missing something here.\n> > >\n> > > Request the community to consider the patch for further review if the\n> > > overall idea seems beneficial.\n> >\n> > I think this idea will be generally useful if your work on dropping\n> > stale connection uses idle_connection_timeout or something like that\n> > on the remote server.\n>\n> Assuming we use idle_connection_timeout or some other means(as it is\n> not yet finalized, I will try to respond in that mail chain) to drop\n> stale/idle connections from the local backend, I think we have two\n> options 1) deleting that cached entry from the connection cache\n> entirely using disconnect_pg_server() and hash table remove. This\n> frees up some space and we don't have to deal with the connection\n> invalidations and don't have to bother on resetting cached entry's\n> other parameters such as xact_depth, have_prep_stmt etc. 2) or we\n> could just drop the connections using disconnect_pg_server(), retain\n> the hash entry, reset other parameters, and deal with the\n> invalidations. 
This is like, we maintain unnecessary info in the\n> cached entry, where we actually don't have a connection at all and\n> keep holding some space for cached entry.\n>\n> IMO, if we go with option 1, then it will be good.\n>\n> Anyways, this connection retry feature will not have any dependency on\n> the idle_connection_timeout or dropping stale connection feature,\n> because there can be many reasons where remote backends go away/get\n> killed.\n>\n> If I'm not sidetracking - if we use something like\n> idle_session_timeout [1] on the remote server, this retry feature will\n> be very useful.\n>\n> >\n> > About the patch. It seems we could just catch the error from\n> > begin_remote_xact() in GetConnection() and retry connection if the\n> > error is \"bad connection\". Retrying using PQreset() might be better\n> > than calling PQConnect* always.\n> >\n>\n> Thanks for the comment, it made life easier. Added the patch with the\n> changes. Please take a look at the v3 patch and let me know if still\n> something needs to be done.\n>\n> [1] - https://www.postgresql.org/message-id/763A0689-F189-459E-946F-F0EC4458980B%40hotmail.com\n>\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 14 Jul 2020 18:13:09 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "On Tue, Jul 14, 2020 at 6:13 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Has this been added to CF, possibly next CF?\n>\n\nI have not added yet. Request to get it done in this CF, as the final\npatch for review(v3 patch) is ready and shared. 
We can target it to\nthe next CF if there are major issues with the patch.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Jul 2020 18:40:01 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": ">\n> On Tue, Jul 14, 2020 at 6:13 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > Has this been added to CF, possibly next CF?\n> >\n>\n> I have not added yet. Request to get it done in this CF, as the final\n> patch for review(v3 patch) is ready and shared. We can target it to\n> the next CF if there are major issues with the patch.\n>\n\nI added this feature to the next CF - https://commitfest.postgresql.org/29/2651/\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 Jul 2020 17:32:37 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "\n\nOn 2020/07/17 21:02, Bharath Rupireddy wrote:\n>>\n>> On Tue, Jul 14, 2020 at 6:13 PM Ashutosh Bapat\n>> <ashutosh.bapat.oss@gmail.com> wrote:\n>>>\n>>> Has this been added to CF, possibly next CF?\n>>>\n>>\n>> I have not added yet. Request to get it done in this CF, as the final\n>> patch for review(v3 patch) is ready and shared. We can target it to\n>> the next CF if there are major issues with the patch.\n>>\n> \n> I added this feature to the next CF - https://commitfest.postgresql.org/29/2651/\n\nThanks for the patch! 
Here are some comments from me.\n\n+\t\t\tPQreset(entry->conn);\n\nIsn't using PQreset() to reconnect to the foreign server unsafe?\nWhen new connection is established, some SET commands seem\nto need to be issued like configure_remote_session() does.\nBut PQreset() doesn't do that at all.\n\n\nOriginally when GetConnection() establishes new connection,\nit resets all transient state fields, to be sure all are clean (as the\ncomment says). With the patch, even when reconnecting\nthe remote server, shouldn't we do the same, for safe?\n\n\n+\t\t\tPGresult *res = NULL;\n+\t\t\tres = PQgetResult(entry->conn);\n+\t\t\tPQclear(res);\n\nAre these really necessary? I was just thinking that's not because\npgfdw_get_result() and pgfdw_report_error() seem to do that\nalready in do_sql_command().\n\n\n+\t\t/* Start a new transaction or subtransaction if needed. */\n+\t\tbegin_remote_xact(entry);\n\nEven when there is no cached connection and new connection is made,\nthen if begin_remote_xact() reports an error, another new connection\ntries to be made again. This behavior looks a bit strange.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 15 Sep 2020 18:19:17 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "Thanks for the review comments. I will post a new patch soon\naddressing all the comments.\n\nOn Tue, Sep 15, 2020 at 2:49 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> + PQreset(entry->conn);\n>\n> Isn't using PQreset() to reconnect to the foreign server unsafe?\n> When new connection is established, some SET commands seem\n> to need to be issued like configure_remote_session() does.\n> But PQreset() doesn't do that at all.\n>\n\nThis is a good catch. Thanks, I missed this point. 
Indeed we need to\nset the session params. We can do this in two ways: 1. use\nconfigure_remote_session() after PQreset(). 2. use connect_pg_server()\ninstead of PQreset() and configure_remote_session(). One problem I see\nwith the 2nd way is that we will be doing the checks that are being\nperformed in connect_pg_server() twice, which we would have done for\nthe first time before retrying. The required parameters such as\nkeywords, values, are still in entry->conn structure from the first\nattempt, which can safely be used by PQreset(). So, I will go with the\n1st way. Thoughts?\n\n>\n> Originally when GetConnection() establishes new connection,\n> it resets all transient state fields, to be sure all are clean (as the\n> comment says). With the patch, even when reconnecting\n> the remote server, shouldn't we do the same, for safe?\n>\n\nI guess there is no need for us to reset all the transient state\nbefore we do begin_remote_xact() in the 2nd attempt. We retry the\nconnection only when entry->xact_depth <= 0 i.e. beginning of the\nremote txn and the begin_remote_xact() doesn't modify any transient\nstate if entry->xact_depth <= 0, except for entry->changing_xact_state\n= true; all other transient state is intact in entry structure. In the\nerror case, we will not reach the code after do_sql_command in\nbegin_remote_xact(). If needed, we can only set\nentry->changing_xact_state to false which is set to true before\ndo_sql_command().\n\n entry->changing_xact_state = true;\n do_sql_command(entry->conn, sql);\n entry->xact_depth = 1;\n entry->changing_xact_state = false;\n\n>\n> + PGresult *res = NULL;\n> + res = PQgetResult(entry->conn);\n> + PQclear(res);\n> Are these really necessary? 
I was just thinking that's not because\n> pgfdw_get_result() and pgfdw_report_error() seem to do that\n> already in do_sql_command().\n>\n\nIf an error occurs in the first attempt, we return from\npgfdw_get_result()'s if (!PQconsumeInput(conn)) to the catch block we\nadded and pgfdw_report_error() will never get called. And the below\npart of the code is reached only in scenarios as mentioned in the\ncomments. Removing this might have problems if we receive errors other\nthan CONNECTION_BAD or for subtxns. We could clear any result, if\npresent, and just rethrow the error upstream. I think there is no\nproblem having this code here.\n\n else\n {\n /*\n * We are here because either some error other than CONNECTION_BAD\n * occurred or the connection may have broken during start of a\n * subtransaction. Just clear off any result and rethrow the\n * error, so that it will be caught appropriately.\n */\n PGresult *res = NULL;\n res = PQgetResult(entry->conn);\n PQclear(res);\n PG_RE_THROW();\n }\n\n>\n> + /* Start a new transaction or subtransaction if needed. */\n> + begin_remote_xact(entry);\n>\n> Even when there is no cached connection and new connection is made,\n> then if begin_remote_xact() reports an error, another new connection\n> tries to be made again. This behavior looks a bit strange.\n>\n\nWhen there is no cached connection, we try to acquire one; if we\ncan't, then no error will be thrown to the user, we just retry one\nmore time. 
I think there\nis no problem here.\n\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Sep 2020 12:14:20 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "\n\nOn 2020/09/17 15:44, Bharath Rupireddy wrote:\n> Thanks for the review comments. I will post a new patch soon\n> addressing all the comments.\n\nThanks a lot!\n\n\n> \n> On Tue, Sep 15, 2020 at 2:49 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> + PQreset(entry->conn);\n>>\n>> Isn't using PQreset() to reconnect to the foreign server unsafe?\n>> When new connection is established, some SET commands seem\n>> to need to be issued like configure_remote_session() does.\n>> But PQreset() doesn't do that at all.\n>>\n> \n> This is a good catch. Thanks, I missed this point. Indeed we need to\n> set the session params. We can do this in two ways: 1. use\n> configure_remote_session() after PQreset(). 2. use connect_pg_server()\n> instead of PQreset() and configure_remote_session(). One problem I see\n> with the 2nd way is that we will be doing the checks that are being\n> performed in connect_pg_server() twice, which we would have done for\n> the first time before retrying. The required parameters such as\n> keywords, values, are still in entry->conn structure from the first\n> attempt, which can safely be used by PQreset(). So, I will go with the\n> 1st way. Thoughts?\n\n\nIn 1st way, you may also need to call ReleaseExternalFD() when new connection fails\nto be made, as connect_pg_server() does. Also we need to check that\nnon-superuser has used password to make new connection,\nas connect_pg_server() does? 
I'm concerned about the case where\npg_hba.conf is changed accidentally so that no password is necessary\nat the remote server and the existing connection is terminated. In this case,\nif we connect to the local server as non-superuser, connection to\nthe remote server should fail because the remote server doesn't\nrequire password. But with your patch, we can successfully reconnect\nto the remote server.\n\nTherefore I like the 2nd option. Also maybe disconnect_pg_server() needs to\nbe called before that. I'm not sure how useful the 1st option is.\n\n\n> \n>>\n>> Originally when GetConnection() establishes new connection,\n>> it resets all transient state fields, to be sure all are clean (as the\n>> comment says). With the patch, even when reconnecting\n>> the remote server, shouldn't we do the same, for safe?\n>>\n> \n> I guess there is no need for us to reset all the transient state\n> before we do begin_remote_xact() in the 2nd attempt. We retry the\n> connection only when entry->xact_depth <= 0 i.e. beginning of the\n> remote txn and the begin_remote_xact() doesn't modify any transient\n> state if entry->xact_depth <= 0, except for entry->changing_xact_state\n> = true; all other transient state is intact in entry structure. In the\n> error case, we will not reach the code after do_sql_command in\n> begin_remote_xact(). If needed, we can only set\n> entry->changing_xact_state to false which is set to true before\n> do_sql_command().\n\nWhat if 2nd attempt happens with have_prep_stmt=true? I'm not sure\nif this case really happens, though. But if that can, it's strange to start\nnew connection with have_prep_stmt=true even when the caller of\nGetConnection() doesn't intend to create any prepared statements.\n\nI think it's safer to do 2nd attempt in the same way as 1st one. 
Maybe\nwe can simplify the code by making them into common code block\nor function.\n\n\n> \n> entry->changing_xact_state = true;\n> do_sql_command(entry->conn, sql);\n> entry->xact_depth = 1;\n> entry->changing_xact_state = false;\n> \n>>\n>> + PGresult *res = NULL;\n>> + res = PQgetResult(entry->conn);\n>> + PQclear(res);\n>> Are these really necessary? I was just thinking that's not because\n>> pgfdw_get_result() and pgfdw_report_error() seem to do that\n>> already in do_sql_command().\n>>\n> \n> If an error occurs in the first attempt, we return from\n> pgfdw_get_result()'s if (!PQconsumeInput(conn)) to the catch block we\n> added and pgfdw_report_error() will never get called. And the below\n> part of the code is reached only in scenarios as mentioned in the\n> comments. Removing this might have problems if we receive errors other\n> than CONNECTION_BAD or for subtxns. We could clear if any result and\n> just rethrow the error upstream. I think no problem having this code\n> here.\n\nBut the original code works without this?\nOr do you mean that the original code has the bug?\n\n\n> \n> else\n> {\n> /*\n> * We are here, due to either some error other than CONNECTION_BAD\n> * occurred or connection may have broken during start of a\n> * subtransacation. Just, clear off any result, try rethrowing the\n> * error, so that it will be caught appropriately.\n> */\n> PGresult *res = NULL;\n> res = PQgetResult(entry->conn);\n> PQclear(res);\n> PG_RE_THROW();\n> }\n> \n>>\n>> + /* Start a new transaction or subtransaction if needed. */\n>> + begin_remote_xact(entry);\n>>\n>> Even when there is no cached connection and new connection is made,\n>> then if begin_remote_xact() reports an error, another new connection\n>> tries to be made again. This behavior looks a bit strange.\n>>\n> \n> When there is no cached connection, we try to acquire one, if we\n> can't, then no error will be thrown to the user, just we retry one\n> more time. 
If we get in the 2nd attempt, fine, if not, we will throw\n> the error to the user. Assume in the 1st attempt the remote server is\n> unreachable, we may hope to connect in the 2nd attempt. I think there\n> is no problem here.\n> \n> \n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n> \n> \n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 18 Sep 2020 01:50:00 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "On Thu, Sep 17, 2020 at 10:20 PM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n>\n> In 1st way, you may also need to call ReleaseExternalFD() when new\nconnection fails\n> to be made, as connect_pg_server() does. Also we need to check that\n> non-superuser has used password to make new connection,\n> as connect_pg_server() does? I'm concerned about the case where\n> pg_hba.conf is changed accidentally so that no password is necessary\n> at the remote server and the existing connection is terminated. In this\ncase,\n> if we connect to the local server as non-superuser, connection to\n> the remote server should fail because the remote server doesn't\n> require password. But with your patch, we can successfully reconnect\n> to the remote server.\n>\n> Therefore I like 2nd option. Also maybe disconnect_ps_server() needs to\n> be called before that. I'm not sure how much useful 1st option is.\n>\n\nThanks. Above points look sensible. +1 for the 2nd option i.e. instead of\nPQreset(entry->conn);, let's try to disconnect_pg_server() and\nconnect_pg_server().\n\n>\n> What if 2nd attempt happens with have_prep_stmt=true? I'm not sure\n> if this case really happens, though. 
But if that can, it's strange to\nstart\n> new connection with have_prep_stmt=true even when the caller of\n> GetConnection() doesn't intend to create any prepared statements.\n>\n> I think it's safer to do 2nd attempt in the same way as 1st one. Maybe\n> we can simplify the code by making them into common code block\n> or function.\n>\n\nI don't think the have_prep_stmt will be set by the time we make 2nd\nattempt because entry->have_prep_stmt |= will_prep_stmt; gets hit only\nafter we are successful in either 1st attempt or 2nd attempt. I think we\ndon't need to set all transient state. No other transient state except\nchanging_xact_state changes from 1st attempt to 2nd attempt(see below), so\nlet's set only entry->changing_xact_state to false before 2nd attempt.\n\n1st attempt:\n(gdb) p *entry\n$3 = {key = 16389, conn = 0x55a896199990, xact_depth = 0, have_prep_stmt =\nfalse,\n have_error = false, changing_xact_state = false, invalidated = false,\n server_hashvalue = 3905865521, mapping_hashvalue = 2617776010}\n\n2nd attempt i.e. in retry block:\n(gdb) p *entry\n$4 = {key = 16389, conn = 0x55a896199990, xact_depth = 0, have_prep_stmt =\nfalse,\n have_error = false, changing_xact_state = true, invalidated = false,\n server_hashvalue = 3905865521, mapping_hashvalue = 2617776010}\n\n>>\n> > If an error occurs in the first attempt, we return from\n> > pgfdw_get_result()'s if (!PQconsumeInput(conn)) to the catch block we\n> > added and pgfdw_report_error() will never get called. And the below\n> > part of the code is reached only in scenarios as mentioned in the\n> > comments. Removing this might have problems if we receive errors other\n> > than CONNECTION_BAD or for subtxns. We could clear if any result and\n> > just rethrow the error upstream. I think no problem having this code\n> > here.\n>\n> But the orignal code works without this?\n> Or you mean that the original code has the bug?\n>\n\nThere's no bug in the original code. 
Sorry, I was wrong in saying\npgfdw_report_error() will never get called with this patch. Indeed it gets\ncalled even when 1's attempt connection is failed. Since we added an extra\ntry-catch block, we will not be throwing the error to the user, instead we\nmake a 2nd attempt from the catch block.\n\nI'm okay to remove below part of the code\n\n> >> +                       PGresult *res = NULL;\n> >> +                       res = PQgetResult(entry->conn);\n> >> +                       PQclear(res);\n> >> Are these really necessary? I was just thinking that's not because\n> >> pgfdw_get_result() and pgfdw_report_error() seem to do that\n> >> already in do_sql_command().\n\nPlease let me know if okay with the above agreed points, I will work on the\nnew patch.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n", "msg_date": "Mon, 21 Sep 2020 09:14:30 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "\n\nOn 2020/09/21 12:44, Bharath Rupireddy wrote:\n> On Thu, Sep 17, 2020 at 10:20 PM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> >\n> > In 1st way, you may also need to call ReleaseExternalFD() when new connection fails\n> > to be made, as connect_pg_server() does. Also we need to check that\n> > non-superuser has used password to make new connection,\n> > as connect_pg_server() does? I'm concerned about the case where\n> > pg_hba.conf is changed accidentally so that no password is necessary\n> > at the remote server and the existing connection is terminated. In this case,\n> > if we connect to the local server as non-superuser, connection to\n> > the remote server should fail because the remote server doesn't\n> > require password. But with your patch, we can successfully reconnect\n> > to the remote server.\n> >\n> > Therefore I like 2nd option. Also maybe disconnect_ps_server() needs to\n> > be called before that. 
I'm not sure how much useful 1st option is.\n> >\n> \n> Thanks. Above points look sensible. +1 for the 2nd option i.e. instead of PQreset(entry->conn);, let's try to disconnect_pg_server() and connect_pg_server().\n> \n> >\n> > What if 2nd attempt happens with have_prep_stmt=true? I'm not sure\n> > if this case really happens, though. But if that can, it's strange to start\n> > new connection with have_prep_stmt=true even when the caller of\n> > GetConnection() doesn't intend to create any prepared statements.\n> >\n> > I think it's safer to do 2nd attempt in the same way as 1st one. Maybe\n> > we can simplify the code by making them into common code block\n> > or function.\n> >\n> \n> I don't think the have_prep_stmt will be set by the time we make 2nd attempt because entry->have_prep_stmt |= will_prep_stmt; gets hit only after we are successful in either 1st attempt or 2nd attempt. I think we don't need to set all transient state. No other transient state except changing_xact_state changes from 1st attempt to 2nd attempt(see below), so let's set only entry->changing_xact_state to false before 2nd attempt.\n> \n> 1st attempt:\n> (gdb) p *entry\n> $3 = {key = 16389, conn = 0x55a896199990, xact_depth = 0, have_prep_stmt = false,\n>   have_error = false, changing_xact_state = false, invalidated = false,\n>   server_hashvalue = 3905865521, mapping_hashvalue = 2617776010}\n> \n> 2nd attempt i.e. in retry block:\n> (gdb) p *entry\n> $4 = {key = 16389, conn = 0x55a896199990, xact_depth = 0, have_prep_stmt = false,\n>   have_error = false, changing_xact_state = true, invalidated = false,\n>   server_hashvalue = 3905865521, mapping_hashvalue = 2617776010}\n> \n> >>\n> > > If an error occurs in the first attempt, we return from\n> > > pgfdw_get_result()'s  if (!PQconsumeInput(conn)) to the catch block we\n> > > added and pgfdw_report_error() will never get called. And the below\n> > > part of the code is reached only in scenarios as mentioned in the\n> > > comments. 
Removing this might have problems if we receive errors other\n> > > than CONNECTION_BAD or for subtxns. We could clear if any result and\n> > > just rethrow the error upstream. I think no problem having this code\n> > > here.\n> >\n> > But the orignal code works without this?\n> > Or you mean that the original code has the bug?\n> >\n> \n> There's no bug in the original code. Sorry, I was wrong in saying pgfdw_report_error() will never get called with this patch. Indeed it gets called even when 1's attempt connection is failed. Since we added an extra try-catch block, we will not be throwing the error to the user, instead we make a 2nd attempt from the catch block.\n> \n> I'm okay to remove below part of the code\n> \n> > >> +                       PGresult *res = NULL;\n> > >> +                       res = PQgetResult(entry->conn);\n> > >> +                       PQclear(res);\n> > >> Are these really necessary? I was just thinking that's not because\n> > >> pgfdw_get_result() and pgfdw_report_error() seem to do that\n> > >> already in do_sql_command().\n> \n> Please let me know if okay with the above agreed points, I will work on the new patch.\n\nYes, please work on the patch! Thanks! I may revisit the above points later, though ;)\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 23 Sep 2020 23:49:09 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "On Wed, Sep 23, 2020 at 8:19 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> > Please let me know if okay with the above agreed points, I will work on the new patch.\n>\n> Yes, please work on the patch! Thanks! I may revisit the above points later, though ;)\n>\n\nThanks, attaching v4 patch. 
Please consider this for further review.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 25 Sep 2020 10:26:40 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "On 2020/09/25 13:56, Bharath Rupireddy wrote:\n> On Wed, Sep 23, 2020 at 8:19 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>> Please let me know if okay with the above agreed points, I will work on the new patch.\n>>\n>> Yes, please work on the patch! Thanks! I may revisit the above points later, though ;)\n>>\n> \n> Thanks, attaching v4 patch. Please consider this for further review.\n\nThanks for updating the patch!\n\nIn the orignal code, disconnect_pg_server() is called when invalidation\noccurs and connect_pg_server() is called when no valid connection exists.\nI think that we can simplify the code by merging the connection-retry\ncode into them, like the attached very WIP patch (based on yours) does.\nBasically I'd like to avoid duplicating disconnect_pg_server(),\nconnect_pg_server() and begin_remote_xact() if possible. Thought?\n\nYour patch adds several codes into PG_CATCH() section, but it's better\n to keep that section simple enough (as the source comment for\nPG_CATCH() explains). So what about moving some codes out of PG_CATCH()\nsection?\n\n+\t\t\telse\n+\t\t\t\tereport(ERROR,\n+\t\t\t\t\t(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),\n+\t\t\t\t\t errmsg(\"could not connect to server \\\"%s\\\"\",\n+\t\t\t\t\t\t\tserver->servername),\n+\t\t\t\t\t errdetail_internal(\"%s\", pchomp(PQerrorMessage(entry->conn)))));\n\nThe above is not necessary? If this error occurs, connect_pg_server()\nreports an error, before the above code is reached. 
Right?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Fri, 25 Sep 2020 18:51:33 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "On Fri, Sep 25, 2020 at 3:21 PM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n>\n> I think that we can simplify the code by merging the connection-retry\n> code into them, like the attached very WIP patch (based on yours) does.\n>\n\n+1.\n\n>\n> + else\n> + ereport(ERROR,\n> +\n(errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),\n> + errmsg(\"could not connect to\nserver \\\"%s\\\"\",\n> +\nserver->servername),\n> + errdetail_internal(\"%s\",\npchomp(PQerrorMessage(entry->conn)))));\n>\n> The above is not necessary? If this error occurs, connect_pg_server()\n> reports an error, before the above code is reached. Right?\n>\n\nRemoved.\n\nThanks for the comments.\n\nI think we need to have a volatile qualifier for need_reset_conn, because\nof the sigsetjmp.\nInstead of \"need_reset_conn\", \"retry_conn\" would be more meaningful and\nalso instead of goto label name \"reset;\", \"retry:\".\nI changed \"closing connection %p to reestablish connection\" to \"closing\nconnection %p to reestablish a new one\"\nI also adjusted the comments to be under the 80char limit.\nI feel, when we are about to close an existing connection and reestablish a\nnew connection, it will be good to have a debug3 message saying that we\n\"could not connect to postgres_fdw connection %p\"[1].\n\nAttaching v5 patch that has the above changes. Both make check and make\ncheck-world regression tests passes. 
Please review.\n\n[1] This would tell the user that we are not able to connect to the\nconnection.\npostgres=# SELECT * FROM foreign_tbl;\nDEBUG: starting remote transaction on connection 0x55ab0e416830\nDEBUG: could not connect to postgres_fdw connection 0x55ab0e416830\nDEBUG: closing connection 0x55ab0e416830 to reestablish a new one\nDEBUG: new postgres_fdw connection 0x55ab0e416830 for server\n\"foreign_server\" (user mapping oid 16407, userid 10)\nDEBUG: starting remote transaction on connection 0x55ab0e416830\nDEBUG: closing remote transaction on connection 0x55ab0e416830\n a1 | b1\n-----+-----\n 100 | 200\n\nWithout the above message, it would look like we are starting remote txn,\nand closing connection without any reason.\n\npostgres=# SELECT * FROM foreign_tbl;\nDEBUG: starting remote transaction on connection 0x55ab0e4c0d50\nDEBUG: closing connection 0x55ab0e4c0d50 to reestablish a new one\nDEBUG: new postgres_fdw connection 0x55ab0e4c0d50 for server\n\"foreign_server\" (user mapping oid 16389, userid 10)\nDEBUG: starting remote transaction on connection 0x55ab0e4c0d50\nDEBUG: closing remote transaction on connection 0x55ab0e4c0d50\n a1 | b1\n-----+-----\n 100 | 200\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 25 Sep 2020 17:49:50 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "\n\nOn 2020/09/25 21:19, Bharath Rupireddy wrote:\n> On Fri, Sep 25, 2020 at 3:21 PM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> >\n> > I think that we can simplify the code by merging the connection-retry\n> > code into them, like the attached very WIP patch (based on yours) does.\n> >\n> \n> +1.\n> \n> >\n> > +                       else\n> > +                               
ereport(ERROR,\n> > +                                       (errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),\n> > +                                        errmsg(\"could not connect to server \\\"%s\\\"\",\n> > +                                                       server->servername),\n> > +                                        errdetail_internal(\"%s\", pchomp(PQerrorMessage(entry->conn)))));\n> >\n> > The above is not necessary? If this error occurs, connect_pg_server()\n> > reports an error, before the above code is reached. Right?\n> >\n> \n> Removed.\n> \n> Thanks for the comments.\n> \n> I think we need to have a volatile qualifier for need_reset_conn, because of the sigsetjmp.\n\nYes.\n\n> Instead of \"need_reset_conn\", \"retry_conn\" would be more meaningful and also instead of goto label name \"reset;\", \"retry:\".\n\nSounds good.\n\n> I changed \"closing connection %p to reestablish connection\" to  \"closing connection %p to reestablish a new one\"\n\nOK.\n\n> I also adjusted the comments to be under the 80char limit.\n> I feel, when we are about to close an existing connection and reestablish a new connection, it will be good to have a debug3 message saying that we \"could not connect to postgres_fdw connection %p\"[1].\n\n+1 to add debug3 message there. But this message doesn't seem to\nmatch with what the error actually happened. What about something like\n\"could not start remote transaction on connection %p\", instead?\nAlso maybe it's better to append PQerrorMessage(entry->conn)?\n\n> \n> Attaching v5 patch that has the above changes. Both make check and make check-world regression tests passes. Please review.\n\nThanks for updating the patch!\n\n+-- Generate a connection to remote. Local backend will cache it.\n+SELECT * FROM ft1 LIMIT 1;\n\nThe result of this query would not be stable. 
Probably you need to,\nfor example, add ORDER BY or replace * with 1, etc.\n\n\n+-- Retrieve pid of remote backend with application name fdw_retry_check\n+-- and kill it intentionally here. Note that, local backend still has\n+-- the remote connection/backend info in it's cache.\n+SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE\n+backend_type = 'client backend' AND application_name = 'fdw_retry_check';\n\nIsn't this test fragile because there is no gurantee that the target backend\nhas exited just after calling pg_terminate_backend()?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 29 Sep 2020 23:00:50 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "Thanks for the comments.\n\nOn Tue, Sep 29, 2020 at 7:30 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> +1 to add debug3 message there. But this message doesn't seem to\n> match with what the error actually happened. What about something like\n> \"could not start remote transaction on connection %p\", instead?\n>\n\nLooks better. Changed.\n\n>\n> Also maybe it's better to append PQerrorMessage(entry->conn)?\n>\n\nAdded. Now the log looks like [1].\n\n>\n> +-- Generate a connection to remote. Local backend will cache it.\n> +SELECT * FROM ft1 LIMIT 1;\n>\n> The result of this query would not be stable. Probably you need to,\n> for example, add ORDER BY or replace * with 1, etc.\n>\n\nChanged to SELECT 1 FROM ft1 LIMIT 1;\n\n>\n> +-- Retrieve pid of remote backend with application name fdw_retry_check\n> +-- and kill it intentionally here. 
Note that, local backend still has\n> +-- the remote connection/backend info in it's cache.\n> +SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE\n> +backend_type = 'client backend' AND application_name = 'fdw_retry_check';\n>\n> Isn't this test fragile because there is no gurantee that the target backend\n> has exited just after calling pg_terminate_backend()?\n>\n\nI think this is okay, because pg_terminate_backend() sends SIGTERM to\nthe backend, and upon receiving SIGTERM the signal handler die() will\nbe called and since there is no query being executed on the backend by\nthe time SIGTERM is received, it will exit immediately. Thoughts?\n\npqsignal(SIGTERM, die); /* cancel current query and exit */\n\nAnd also, pg_terminate_backend() returns true in case the backend is\nkilled successfully, otherwise it returns false. PG_RETURN_BOOL(r\n== SIGNAL_BACKEND_SUCCESS);\n\nAttaching v6 patch, please review it further.\n\n[1]\nDEBUG: starting remote transaction on connection 0x55cd393a66e0\nDEBUG: could not start remote transaction on connection 0x55cd393a66e0\nDETAIL: server closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nDEBUG: closing connection 0x55cd393a66e0 to reestablish a new one\nDEBUG: new postgres_fdw connection 0x55cd393a66e0 for server\n\"foreign_server\" (user mapping oid 16398, userid 10)\nDEBUG: starting remote transaction on connection 0x55cd393a66e0\nDEBUG: closing remote transaction on connection 0x55cd393a66e0\n\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 29 Sep 2020 21:20:48 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "\n\nOn 2020/09/30 0:50, Bharath Rupireddy wrote:\n> Thanks for the comments.\n> \n> On 
Tue, Sep 29, 2020 at 7:30 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> +1 to add debug3 message there. But this message doesn't seem to\n>> match with what the error actually happened. What about something like\n>> \"could not start remote transaction on connection %p\", instead?\n>>\n> \n> Looks better. Changed.\n> \n>>\n>> Also maybe it's better to append PQerrorMessage(entry->conn)?\n>>\n> \n> Added. Now the log looks like [1].\n> \n>>\n>> +-- Generate a connection to remote. Local backend will cache it.\n>> +SELECT * FROM ft1 LIMIT 1;\n>>\n>> The result of this query would not be stable. Probably you need to,\n>> for example, add ORDER BY or replace * with 1, etc.\n>>\n> \n> Changed to SELECT 1 FROM ft1 LIMIT 1;\n> \n>>\n>> +-- Retrieve pid of remote backend with application name fdw_retry_check\n>> +-- and kill it intentionally here. Note that, local backend still has\n>> +-- the remote connection/backend info in it's cache.\n>> +SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE\n>> +backend_type = 'client backend' AND application_name = 'fdw_retry_check';\n>>\n>> Isn't this test fragile because there is no gurantee that the target backend\n>> has exited just after calling pg_terminate_backend()?\n>>\n> \n> I think this is okay, because pg_terminate_backend() sends SIGTERM to\n> the backend, and upon receiving SIGTERM the signal handler die() will\n> be called and since there is no query being executed on the backend by\n> the time SIGTERM is received, it will exit immediately. Thoughts?\n\nYeah, basically you're right. But that backend *can* still be running\nwhen the subsequent test query starts. 
I'm wondering if wait_pid()\n(please see regress.c and sql/dblink.sql) should be used to ensure\nthe target backend disappeared.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 30 Sep 2020 01:31:11 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "On Tue, Sep 29, 2020 at 10:01 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n>\n> > I think this is okay, because pg_terminate_backend() sends SIGTERM to\n> > the backend, and upon receiving SIGTERM the signal handler die() will\n> > be called and since there is no query being executed on the backend by\n> > the time SIGTERM is received, it will exit immediately. Thoughts?\n>\n> Yeah, basically you're right. But that backend *can* still be running\n> when the subsequent test query starts. I'm wondering if wait_pid()\n> (please see regress.c and sql/dblink.sql) should be used to ensure\n> the target backend disappeared.\n>\n\nI think wait_pid() is not a generic function, and I'm unable to use\nthat inside postgres_fdw.sql. I think I need to recreate that function\nfor postgres_fdw.sql. For dblink, it's being created as part of\npaths.source. Could you help me in doing so?\n\nAnd another way, if we don't want to use wait_pid() is to have a\nplpgsql stored procedure, that in a loop keeps on checking for the\nbacked pid from pg_stat_activity, exit when pid is 0. and then proceed\nto issue the next foreign table query. 
Thoughts?\n\nmypid = -1;\nwhile (mypid != 0)\nSELECT pid INTO mypid FROM pg_stat_activity WHERE backend_type =\n'client backend' AND application_name = 'fdw_retry_check';\n\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 30 Sep 2020 17:32:07 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "\n\nOn 2020/09/30 21:02, Bharath Rupireddy wrote:\n> On Tue, Sep 29, 2020 at 10:01 PM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>> I think this is okay, because pg_terminate_backend() sends SIGTERM to\n>>> the backend, and upon receiving SIGTERM the signal handler die() will\n>>> be called and since there is no query being executed on the backend by\n>>> the time SIGTERM is received, it will exit immediately. Thoughts?\n>>\n>> Yeah, basically you're right. But that backend *can* still be running\n>> when the subsequent test query starts. I'm wondering if wait_pid()\n>> (please see regress.c and sql/dblink.sql) should be used to ensure\n>> the target backend disappeared.\n>>\n> \n> I think wait_pid() is not a generic function, and I'm unable to use\n> that inside postgres_fdw.sql. I think I need to recreate that function\n> for postgres_fdw.sql. For dblink, it's being created as part of\n> paths.source. Could you help me in doing so?\n> \n> And another way, if we don't want to use wait_pid() is to have a\n> plpgsql stored procedure, that in a loop keeps on checking for the\n> backed pid from pg_stat_activity, exit when pid is 0. and then proceed\n> to issue the next foreign table query. Thoughts?\n\n+1 for this approach! 
We can use plpgsql or DO command.\n\n\n> \n> mypid = -1;\n> while (mypid != 0)\n> SELECT pid INTO mypid FROM pg_stat_activity WHERE backend_type =\n> 'client backend' AND application_name = 'fdw_retry_check';\n\nOr we can just wait for the number of processes with\nappname='fdw_retry_check' to be zero rather than checking the pid.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 1 Oct 2020 03:02:47 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "On Wed, Sep 30, 2020 at 11:32 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n>\n> > And another way, if we don't want to use wait_pid() is to have a\n> > plpgsql stored procedure, that in a loop keeps on checking for the\n> > backed pid from pg_stat_activity, exit when pid is 0. and then proceed\n> > to issue the next foreign table query. Thoughts?\n>\n> +1 for this approach! We can use plpgsql or DO command.\n>\n\nUsed plpgsql procedure as we have to use the procedure 2 times.\n\n>\n> >\n> > mypid = -1;\n> > while (mypid != 0)\n> > SELECT pid INTO mypid FROM pg_stat_activity WHERE backend_type =\n> > 'client backend' AND application_name = 'fdw_retry_check';\n>\n> Or we can just wait for the number of processes with\n> appname='fdw_retry_check' to be zero rather than checking the pid.\n>\n\nDone.\n\nAttaching v7 patch with above changes. 
Please review it.\n\n\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 1 Oct 2020 17:44:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "\n\nOn 2020/10/01 21:14, Bharath Rupireddy wrote:\n> On Wed, Sep 30, 2020 at 11:32 PM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>> And another way, if we don't want to use wait_pid() is to have a\n>>> plpgsql stored procedure, that in a loop keeps on checking for the\n>>> backed pid from pg_stat_activity, exit when pid is 0. and then proceed\n>>> to issue the next foreign table query. Thoughts?\n>>\n>> +1 for this approach! We can use plpgsql or DO command.\n>>\n> \n> Used plpgsql procedure as we have to use the procedure 2 times.\n> \n>>\n>>>\n>>> mypid = -1;\n>>> while (mypid != 0)\n>>> SELECT pid INTO mypid FROM pg_stat_activity WHERE backend_type =\n>>> 'client backend' AND application_name = 'fdw_retry_check';\n>>\n>> Or we can just wait for the number of processes with\n>> appname='fdw_retry_check' to be zero rather than checking the pid.\n>>\n> \n> Done.\n> \n> Attaching v7 patch with above changes. Please review it.\n\nThanks for updating the patch!\n\n+-- committed the txn. 
The entry of the terminated backend from pg_stat_activity\n+-- would be removed only after the txn commit.\n\npg_stat_clear_snapshot() can be used to reset the entry.\n\n+\t\tEXIT WHEN proccnt = 0;\n+ END LOOP;\n\nIsn't it better to sleep here, to avoid the busy loop?\n\nSo what I thought was something like\n\nCREATE OR REPLACE PROCEDURE wait_for_backend_termination()\nLANGUAGE plpgsql\nAS $$\nBEGIN\n LOOP\n PERFORM * FROM pg_stat_activity WHERE application_name = 'fdw_retry_check';\n EXIT WHEN NOT FOUND;\n PERFORM pg_sleep(1), pg_stat_clear_snapshot();\n END LOOP;\nEND;\n$$;\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 1 Oct 2020 23:39:57 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "On Thu, Oct 1, 2020 at 8:10 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> pg_stat_clear_snapshot() can be used to reset the entry.\n>\n\nThanks. I wasn't knowing it.\n\n>\n> + EXIT WHEN proccnt = 0;\n> + END LOOP;\n>\n> Isn't it better to sleep here, to avoid the busy loop?\n>\n\n+1.\n\n>\n> So what I thought was something like\n>\n> CREATE OR REPLACE PROCEDURE wait_for_backend_termination()\n> LANGUAGE plpgsql\n> AS $$\n> BEGIN\n> LOOP\n> PERFORM * FROM pg_stat_activity WHERE application_name = 'fdw_retry_check';\n> EXIT WHEN NOT FOUND;\n> PERFORM pg_sleep(1), pg_stat_clear_snapshot();\n> END LOOP;\n> END;\n> $$;\n>\n\nChanged.\n\nAttaching v8 patch, please review it. 
Both make check and make\ncheck-world passes on v8.\n\nI have another question not related to this patch: though we have\nwait_pid() function, we are not able to use it like\npg_terminate_backend() in other modules, wouldn't it be nice if we can\nmake it generic under the name pg_wait_pid() and usable across all pg\nmodules?\n\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 1 Oct 2020 21:16:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "On 2020/10/02 0:46, Bharath Rupireddy wrote:\n> On Thu, Oct 1, 2020 at 8:10 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> pg_stat_clear_snapshot() can be used to reset the entry.\n>>\n> \n> Thanks. I wasn't knowing it.\n> \n>>\n>> + EXIT WHEN proccnt = 0;\n>> + END LOOP;\n>>\n>> Isn't it better to sleep here, to avoid the busy loop?\n>>\n> \n> +1.\n> \n>>\n>> So what I thought was something like\n>>\n>> CREATE OR REPLACE PROCEDURE wait_for_backend_termination()\n>> LANGUAGE plpgsql\n>> AS $$\n>> BEGIN\n>> LOOP\n>> PERFORM * FROM pg_stat_activity WHERE application_name = 'fdw_retry_check';\n>> EXIT WHEN NOT FOUND;\n>> PERFORM pg_sleep(1), pg_stat_clear_snapshot();\n>> END LOOP;\n>> END;\n>> $$;\n>>\n> \n> Changed.\n> \n> Attaching v8 patch, please review it. Both make check and make\n> check-world passes on v8.\n\nThanks for updating the patch! It basically looks good to me.\nI tweaked the patch as follows.\n\n+\t\tif (!entry->conn ||\n+\t\t\tPQstatus(entry->conn) != CONNECTION_BAD ||\n\nWith the above change, if entry->conn is NULL, an error is thrown and no new\nconnection is reestablished. But why? IMO it's more natural to reestablish\nnew connection in that case. 
So I removed \"!entry->conn\" from the above\ncondition.\n\n+\t\tereport(DEBUG3,\n+\t\t\t\t(errmsg(\"could not start remote transaction on connection %p\",\n+\t\t\t\t entry->conn)),\n\nI replaced errmsg() with errmsg_internal() because the translation of\nthis debug message is not necessary.\n\n\n+SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE\n+backend_type = 'client backend' AND application_name = 'fdw_retry_check';\n+CALL wait_for_backend_termination();\n\nSince we always use pg_terminate_backend() and wait_for_backend_termination()\ntogether, I merged them into one function.\n\nI simplied the comments on the regression test.\n\n Attached is the updated version of the patch. If this patch is ok,\n I'd like to mark it as ready for committer.\n\n\n> I have another question not related to this patch: though we have\n> wait_pid() function, we are not able to use it like\n> pg_terminate_backend() in other modules, wouldn't be nice if we can\n> make it generic under the name pg_wait_pid() and usable across all pg\n> modules?\n\nI thought that, too. But I could not come up with good idea for *real* use case\nof that function. At least that's useful for the regression test, though.\nAnyway, IMO it's worth proposing that and hearing more opinions about that\nfrom other hackers.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Sat, 3 Oct 2020 03:00:38 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "On Fri, Oct 2, 2020 at 11:30 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> > Attaching v8 patch, please review it.. Both make check and make\n> > check-world passes on v8.\n>\n> Thanks for updating the patch! 
It basically looks good to me.\n> I tweaked the patch as follows.\n>\n> + if (!entry->conn ||\n> + PQstatus(entry->conn) != CONNECTION_BAD ||\n>\n> With the above change, if entry->conn is NULL, an error is thrown and no new\n> connection is reestablished. But why? IMO it's more natural to reestablish\n> new connection in that case. So I removed \"!entry->conn\" from the above\n> condition.\n>\n\nYeah, that makes sense.\n\n>\n> + ereport(DEBUG3,\n> + (errmsg(\"could not start remote transaction on connection %p\",\n> + entry->conn)),\n>\n> I replaced errmsg() with errmsg_internal() because the translation of\n> this debug message is not necessary.\n>\n\nI'm okay with this as we don't have any specific strings that need\ntranslation in the debug message. But, should we also try to have\nerrmsg_internal in a few other places in connection.c?\n\n errmsg(\"could not obtain message string for remote error\"),\n errmsg(\"cannot PREPARE a transaction that has\noperated on postgres_fdw foreign tables\")));\n errmsg(\"password is required\"),\n\nI see the errmsg() with plain texts in other places in the code base\nas well. Is it that we look at the error message and if it is a plain\ntext(without database objects or table data), we decide to have no\ntranslation? Or is there any other policy?\n\n>\n> +SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE\n> +backend_type = 'client backend' AND application_name = 'fdw_retry_check';\n> +CALL wait_for_backend_termination();\n>\n> Since we always use pg_terminate_backend() and wait_for_backend_termination()\n> together, I merged them into one function.\n>\n> I simplified the comments on the regression test.\n>\n\n+1. I slightly adjusted comments in connection.c and ran pg_indent to\nkeep them up to the 80 char limit.\n\n>\n> Attached is the updated version of the patch. If this patch is ok,\n> I'd like to mark it as ready for committer.\n>\n\nThanks. Attaching v10 patch. 
Please have a look.\n\n>\n> > I have another question not related to this patch: though we have\n> > wait_pid() function, we are not able to use it like\n> > pg_terminate_backend() in other modules, wouldn't be nice if we can\n> > make it generic under the name pg_wait_pid() and usable across all pg\n> > modules?\n>\n> I thought that, too. But I could not come up with good idea for *real* use case\n> of that function. At least that's useful for the regression test, though.\n> Anyway, IMO it's worth proposing that and hearing more opinions about that\n> from other hackers.\n>\n\nYes it will be useful for testing when coupled with\npg_terminate_backend(). I will post the idea in a separate thread soon\nfor more thoughts.\n\n\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 3 Oct 2020 17:10:26 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "\n\nOn 2020/10/03 20:40, Bharath Rupireddy wrote:\n> On Fri, Oct 2, 2020 at 11:30 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>> Attaching v8 patch, please review it.. Both make check and make\n>>> check-world passes on v8.\n>>\n>> Thanks for updating the patch! It basically looks good to me.\n>> I tweaked the patch as follows.\n>>\n>> + if (!entry->conn ||\n>> + PQstatus(entry->conn) != CONNECTION_BAD ||\n>>\n>> With the above change, if entry->conn is NULL, an error is thrown and no new\n>> connection is reestablished. But why? IMO it's more natural to reestablish\n>> new connection in that case. 
So I removed \"!entry->conn\" from the above\n>> condition.\n>>\n> \n> Yeah, that makes sense.\n> \n>>\n>> + ereport(DEBUG3,\n>> + (errmsg(\"could not start remote transaction on connection %p\",\n>> + entry->conn)),\n>>\n>> I replaced errmsg() with errmsg_internal() because the translation of\n>> this debug message is not necessary.\n>>\n> \n> I'm okay with this as we don't have any specific strings that need\n> translation in the debug message. But, should we also try to have\n> errmsg_internal in a few other places in connection.c?\n> \n> errmsg(\"could not obtain message string for remote error\"),\n> errmsg(\"cannot PREPARE a transaction that has\n> operated on postgres_fdw foreign tables\")));\n> errmsg(\"password is required\"),\n> \n> I see the errmsg() with plain texts in other places in the code base\n> as well. Is it that we look at the error message and if it is a plain\n> text(without database objects or table data), we decide to have no\n> translation? Or is there any other policy?\n\nI was thinking that elog() basically should be used to report this\ndebug message, instead, but you used ereport() because maybe\nyou'd like to add detail message about connection error. Is this\nunderstanding right? elog() uses errmsg_internal(). So if ereport()\nis used as an aternative of elog() for some reasons,\nIMO errmsg_internal() should be used. Thought?\n\nOTOH, the messages you mentioned are not debug ones,\nso basically ereport() and errmsg() should be used, I think.\n\n\n> \n>>\n>> +SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE\n>> +backend_type = 'client backend' AND application_name = 'fdw_retry_check';\n>> +CALL wait_for_backend_termination();\n>>\n>> Since we always use pg_terminate_backend() and wait_for_backend_termination()\n>> together, I merged them into one function.\n>>\n>> I simplied the comments on the regression test.\n>>\n> \n> +1. 
I slightly adjusted comments in connection.c and ran pg_indent to\n> keep them up to the 80 char limit.\n> \n>>\n>> Attached is the updated version of the patch. If this patch is ok,\n>> I'd like to mark it as ready for committer.\n>>\n> \n> Thanks. Attaching v10 patch. Please have a look.\n\nThanks for updating the patch! I will mark the patch as ready for committer in CF.\nBarring any objections, I will commit that.\n\n> \n>>\n>>> I have another question not related to this patch: though we have\n>>> wait_pid() function, we are not able to use it like\n>>> pg_terminate_backend() in other modules, wouldn't be nice if we can\n>>> make it generic under the name pg_wait_pid() and usable across all pg\n>>> modules?\n>>\n>> I thought that, too. But I could not come up with good idea for *real* use case\n>> of that function. At least that's useful for the regression test, though.\n>> Anyway, IMO it's worth proposing that and hearing more opinions about that\n>> from other hackers.\n>>\n> \n> Yes it will be useful for testing when coupled with\n> pg_terminate_backend(). I will post the idea in a separate thread soon\n> for more thoughts.\n\nSounds good!\nISTM that the function should at least check the target process is PostgreSQL one.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 5 Oct 2020 13:15:19 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "On Mon, Oct 5, 2020 at 9:45 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> > I see the errmsg() with plain texts in other places in the code base\n> > as well. Is it that we look at the error message and if it is a plain\n> > text(without database objects or table data), we decide to have no\n> > translation? 
Or is there any other policy?\n>\n> I was thinking that elog() basically should be used to report this\n> debug message, instead, but you used ereport() because maybe\n> you'd like to add detail message about connection error. Is this\n> understanding right? elog() uses errmsg_internal().\n>\n\nYes that's correct.\n\n>\n> So if ereport() is used as an aternative of elog() for some reasons,\n> IMO errmsg_internal() should be used. Thought?\n>\n\nYes, this is apt for our case.\n\n>\n> OTOH, the messages you mentioned are not debug ones,\n> so basically ereport() and errmsg() should be used, I think.\n>\n\nIn connection.c file, yes they are of ERROR type. Looks like it's not\na standard to use errmsg_internal for DEBUG messages that require no\ntranslation with ereport\n\n(errmsg(\"wrote block details for %d blocks\", num_blocks)));\n(errmsg(\"MultiXact member stop limit is now %u based on MultiXact %u\n(errmsg(\"oldest MultiXactId member is at offset %u\",\n\nHowever, there are few other places, where errmsg_internal is used for\nDEBUG purposes.\n\n(errmsg_internal(\"finished verifying presence of \"\n(errmsg_internal(\"%s(%d) name: %s; blockState:\n\nHaving said that, IMHO it's better to keep the way it is currently in\nthe code base.\n\n>\n> > Thanks. Attaching v10 patch. Please have a look.\n>\n> Thanks for updating the patch! I will mark the patch as ready for committer in CF.\n> Barring any objections, I will commit that.\n>\n\nThanks a lot for the review comments.\n\n> >>\n> >>> I have another question not related to this patch: though we have\n> >>> wait_pid() function, we are not able to use it like\n> >>> pg_terminate_backend() in other modules, wouldn't be nice if we can\n> >>> make it generic under the name pg_wait_pid() and usable across all pg\n> >>> modules?\n> >>\n> >> I thought that, too. But I could not come up with good idea for *real* use case\n> >> of that function. 
At least that's useful for the regression test, though.\n> >> Anyway, IMO it's worth proposing that and hearing more opinions about that\n> >> from other hackers.\n> >>\n> >\n> > Yes it will be useful for testing when coupled with\n> > pg_terminate_backend(). I will post the idea in a separate thread soon\n> > for more thoughts.\n>\n> Sounds good!\n> ISTM that he function should at least check the target process is PostgreSQL one.\n>\n\nThanks. I will take care of this point.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 5 Oct 2020 17:02:25 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" }, { "msg_contents": "\n\nOn 2020/10/05 20:32, Bharath Rupireddy wrote:\n> On Mon, Oct 5, 2020 at 9:45 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>> I see the errmsg() with plain texts in other places in the code base\n>>> as well. Is it that we look at the error message and if it is a plain\n>>> text(without database objects or table data), we decide to have no\n>>> translation? Or is there any other policy?\n>>\n>> I was thinking that elog() basically should be used to report this\n>> debug message, instead, but you used ereport() because maybe\n>> you'd like to add detail message about connection error. Is this\n>> understanding right? elog() uses errmsg_internal().\n>>\n> \n> Yes that's correct.\n> \n>>\n>> So if ereport() is used as an aternative of elog() for some reasons,\n>> IMO errmsg_internal() should be used. Thought?\n>>\n> \n> Yes, this is apt for our case.\n> \n>>\n>> OTOH, the messages you mentioned are not debug ones,\n>> so basically ereport() and errmsg() should be used, I think.\n>>\n> \n> In connection.c file, yes they are of ERROR type. 
Looks like it's not\n> a standard to use errmsg_internal for DEBUG messages that require no\n> translation with ereport\n> \n> (errmsg(\"wrote block details for %d blocks\", num_blocks)));\n> (errmsg(\"MultiXact member stop limit is now %u based on MultiXact %u\n> (errmsg(\"oldest MultiXactId member is at offset %u\",\n> \n> However, there are few other places, where errmsg_internal is used for\n> DEBUG purposes.\n> \n> (errmsg_internal(\"finished verifying presence of \"\n> (errmsg_internal(\"%s(%d) name: %s; blockState:\n> \n> Having said that, IMHO it's better to keep the way it is currently in\n> the code base.\n\nAgreed.\n\n\n> \n>>\n>>> Thanks. Attaching v10 patch. Please have a look.\n>>\n>> Thanks for updating the patch! I will mark the patch as ready for committer in CF.\n>> Barring any objections, I will commit that.\n>>\n> \n> Thanks a lot for the review comments.\n\nI pushed the patch. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 6 Oct 2020 10:54:27 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Retry Cached Remote Connections for postgres_fdw in case remote\n backend gets killed/goes away" } ]
[ { "msg_contents": "Hello,\n\nI noticed the following strange output when running Postgres 12.3 (not psql) on Windows\n\n postgres=# select pg_current_logfile();\n pg_current_logfile\n ------------------------------------\n pg_log/postgresql-2020-07-08.log\\r\n (1 row)\n\nNote the \"\\r\" at the end of the file name.\n\nThis does not happen when running Postgres on Linux.\n\nIs this intended for some strange reason?\nOr a bug or a technical limitation?\n\nRegards\nThomas\n\n\n", "msg_date": "Wed, 8 Jul 2020 15:05:25 +0200", "msg_from": "Thomas Kellerer <shammat@gmx.net>", "msg_from_op": true, "msg_subject": "Is this a bug in pg_current_logfile() on Windows?" }, { "msg_contents": "On 7/8/20 6:05 AM, Thomas Kellerer wrote:\n> Hello,\n> \n> I noticed the following strange output when running Postgres 12.3 (not \n> psql) on Windows\n> \n>      postgres=# select pg_current_logfile();\n>               pg_current_logfile\n>      ------------------------------------\n>       pg_log/postgresql-2020-07-08.log\\r\n>      (1 row)\n> \n> Note the \"\\r\" at the end of the file name.\n> \n> This does not happen when running Postgres on Linux.\n> \n> Is this intended for some strange reason?\n> Or a bug or a technical limitation?\n\nI'm guessing the difference between Unix line ending:\n\n\\n\n\nand Windows:\n\n\\r\\n\n\n> \n> Regards\n> Thomas\n> \n> \n\n\n-- \nAdrian Klaver\nadrian.klaver@aklaver.com\n\n\n", "msg_date": "Wed, 8 Jul 2020 06:45:12 -0700", "msg_from": "Adrian Klaver <adrian.klaver@aklaver.com>", "msg_from_op": false, "msg_subject": "Re: Is this a bug in pg_current_logfile() on Windows?" 
}, { "msg_contents": "On 7/8/20 6:45 AM, Adrian Klaver wrote:\n> On 7/8/20 6:05 AM, Thomas Kellerer wrote:\n>> Hello,\n>>\n>> I noticed the following strage output when running Postgres 12.3 (not \n>> psql) on Windows\n>>\n>>      postgres=# select pg_current_logfile();\n>>               pg_current_logfile\n>>      ------------------------------------\n>>       pg_log/postgresql-2020-07-08.log\\r\n>>      (1 row)\n>>\n>> Note the \"\\r\" at the end of the file name.\n>>\n>> This does not happen when running Postgres on Linux.\n>>\n>> Is this intended for some strange reason?\n>> Or a bug or a technical limitation?\n> \n> I'm guessing the difference between Unix line ending:\n> \n> \\n\n> \n> and Windows:\n> \n> \\r\\n\n> \n\n From source(backend/utils/adt/misc.c):\n\nnlpos = strchr(log_filepath, '\\n');\nif (nlpos == NULL)\n{\n /* Uh oh. No newline found, so file content is corrupted. */\n elog(ERROR,\n \"missing newline character in \\\"%s\\\"\", \nLOG_METAINFO_DATAFILE);\n break;\n}\n*nlpos = '\\0';\n\nif (logfmt == NULL || strcmp(logfmt, log_format) == 0)\n{\n FreeFile(fd);\n PG_RETURN_TEXT_P(cstring_to_text(log_filepath));\n}\n\n>>\n>> Regards\n>> Thomas\n>>\n>>\n> \n> \n\n\n-- \nAdrian Klaver\nadrian.klaver@aklaver.com\n\n\n", "msg_date": "Wed, 8 Jul 2020 07:16:00 -0700", "msg_from": "Adrian Klaver <adrian.klaver@aklaver.com>", "msg_from_op": false, "msg_subject": "Re: Is this a bug in pg_current_logfile() on Windows?" }, { "msg_contents": "Thomas Kellerer <shammat@gmx.net> writes:\n> I noticed the following strage output when running Postgres 12.3 (not psql) on Windows\n\n> postgres=# select pg_current_logfile();\n> pg_current_logfile\n> ------------------------------------\n> pg_log/postgresql-2020-07-08.log\\r\n> (1 row)\n\n> Note the \"\\r\" at the end of the file name.\n\nYeah, that seems like a bug. 
I think the reason is that syslogger.c\ndoes this when writing the log metafile:\n\n\tfh = fopen(LOG_METAINFO_DATAFILE_TMP, \"w\");\n...\n#ifdef WIN32\n\t\t/* use CRLF line endings on Windows */\n\t\t_setmode(_fileno(fh), _O_TEXT);\n#endif\n\nwhile misc.c only does this when reading the file:\n\n\tfd = AllocateFile(LOG_METAINFO_DATAFILE, \"r\");\n\nSomehow, the reading file is being left in binary mode and thus it's\nfailing to convert \\r\\n back to plain \\n.\n\nNow the weird thing about that is I'd have expected \"r\" and \"w\" modes\nto imply Windows text mode already, so that I'd have figured that\n_setmode call to be a useless no-op. Apparently on some Windows libc\nimplementations, it's not. How was your installation built exactly?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Jul 2020 12:41:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is this a bug in pg_current_logfile() on Windows?" }, { "msg_contents": "Tom Lane schrieb am 08.07.2020 um 18:41:\n> Somehow, the reading file is being left in binary mode and thus it's\n> failing to convert \\r\\n back to plain \\n.\n>\n> Now the weird thing about that is I'd have expected \"r\" and \"w\" modes\n> to imply Windows text mode already, so that I'd have figured that\n> _setmode call to be a useless no-op. Apparently on some Windows libc\n> implementations, it's not. How was your installation built exactly?\n\nThat's the build from EnterpriseDB\n\nhttps://www.enterprisedb.com/download-postgresql-binaries\n\n\n\n\n", "msg_date": "Wed, 8 Jul 2020 19:07:20 +0200", "msg_from": "Thomas Kellerer <shammat@gmx.net>", "msg_from_op": true, "msg_subject": "Re: Is this a bug in pg_current_logfile() on Windows?" 
}, { "msg_contents": "[ redirecting to pghackers ]\n\nThomas Kellerer <shammat@gmx.net> writes:\n> Tom Lane schrieb am 08.07.2020 um 18:41:\n>> Somehow, the reading file is being left in binary mode and thus it's\n>> failing to convert \\r\\n back to plain \\n.\n>> Now the weird thing about that is I'd have expected \"r\" and \"w\" modes\n>> to imply Windows text mode already, so that I'd have figured that\n>> _setmode call to be a useless no-op. Apparently on some Windows libc\n>> implementations, it's not. How was your installation built exactly?\n\n> That's the build from EnterpriseDB\n\nWhat I'd momentarily forgotten is that we don't use Windows' native\nfopen(). On that platform, we use pgwin32_fopen which defaults to\nbinary mode (because _open_osfhandle does). So the _setmode calls in\nsyslogger.c are *not* no-ops, and the failure in pg_current_logfile()\nis clearly explained by the fact that it's doing nothing to strip\ncarriage returns.\n\nHowever ... I put in a test case to try to expose this failure, and\nour Windows buildfarm critters remain perfectly happy. So what's up\nwith that? After some digging around, I believe the reason is that\nPostgresNode::psql is stripping the \\r from pg_current_logfile()'s\nresult, here:\n\n\t\t$$stdout =~ s/\\r//g if $TestLib::windows_os;\n\nI'm slightly tempted to extend the test case by verifying on the\nserver side that the result ends in \".log\" with no extra characters.\nMore generally, I wonder if the above behavior is really a good idea.\nIt seems to have been added in commit 33f3bbc6d as a hack to avoid\nhaving to think too hard about mingw's behavior, but now I wonder if\nit isn't masking other bugs too. At the very least I think we ought\nto tighten the coding to\n\n\t\t$$stdout =~ s/\\r\\n/\\n/g if $TestLib::windows_os;\n\nso that it won't strip carriage returns at random.\n\nMeanwhile, back at the ranch, how shall we fix pg_current_logfile()?\nI see two credible alternatives:\n\n1. 
Insert\n#ifdef WIN32\n\t_setmode(_fileno(fd), _O_TEXT);\n#endif\nto make this function match the coding in syslogger.c.\n\n2. Manually strip '\\r' if present, independent of platform.\n\nThe second alternative would conform to the policy we established in\ncommit b654714f9, that newline-chomping code should uniformly drop \\r.\nHowever, that policy is mainly intended to allow non-Windows builds\nto cope with text files that might have been made with a Windows text\neditor. Surely we don't need to worry about a cross-platform source\nfor the log metafile. So I'm leaning a bit to the first alternative,\nso as not to add useless overhead and complexity on non-Windows builds.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Jul 2020 17:26:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is this a bug in pg_current_logfile() on Windows?" }, { "msg_contents": "\nOn 7/8/20 5:26 PM, Tom Lane wrote:\n>\n> However ... I put in a test case to try to expose this failure, and\n> our Windows buildfarm critters remain perfectly happy. So what's up\n> with that? After some digging around, I believe the reason is that\n> PostgresNode::psql is stripping the \\r from pg_current_logfile()'s\n> result, here:\n>\n> \t\t$$stdout =~ s/\\r//g if $TestLib::windows_os;\n>\n> I'm slightly tempted to extend the test case by verifying on the\n> server side that the result ends in \".log\" with no extra characters.\n> More generally, I wonder if the above behavior is really a good idea.\n> It seems to have been added in commit 33f3bbc6d as a hack to avoid\n> having to think too hard about mingw's behavior, but now I wonder if\n> it isn't masking other bugs too. At the very least I think we ought\n> to tighten the coding to\n>\n> \t\t$$stdout =~ s/\\r\\n/\\n/g if $TestLib::windows_os;\n>\n> so that it won't strip carriage returns at random.\n>\n\nSeems reasonable. 
If we rip it out completely we'll have to find all the\nplaces it breaks and fix them. And we'll almost certainly get new\nbreakage. If it's hiding a real bug we'll have to do that, but I'd be\nreluctant unless there's actual proof.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Wed, 8 Jul 2020 19:12:40 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Is this a bug in pg_current_logfile() on Windows?" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 7/8/20 5:26 PM, Tom Lane wrote:\n>> However ... I put in a test case to try to expose this failure, and\n>> our Windows buildfarm critters remain perfectly happy. So what's up\n>> with that? After some digging around, I believe the reason is that\n>> PostgresNode::psql is stripping the \\r from pg_current_logfile()'s\n>> result, here:\n>> \t$$stdout =~ s/\\r//g if $TestLib::windows_os;\n>> I'm slightly tempted to extend the test case by verifying on the\n>> server side that the result ends in \".log\" with no extra characters.\n>> More generally, I wonder if the above behavior is really a good idea.\n>> It seems to have been added in commit 33f3bbc6d as a hack to avoid\n>> having to think too hard about mingw's behavior, but now I wonder if\n>> it isn't masking other bugs too. At the very least I think we ought\n>> to tighten the coding to\n>> \t$$stdout =~ s/\\r\\n/\\n/g if $TestLib::windows_os;\n>> so that it won't strip carriage returns at random.\n\n> Seems reasonable. If we rip it out completely we'll have to find all the\n> places it breaks and fix them. And we'll almost certainly get new\n> breakage. If it's hiding a real bug we'll have to do that, but I'd be\n> reluctant unless there's actual proof.\n\nHard to tell. 
What I propose to do right now is change the \\r filters\nas shown above, and see if the test I added in 004_logrotate.pl starts\nto show failures on Windows. If it does, and no other place does,\nI'm willing to be satisfied with that. If we see *other* failures then\nthat'd prove that the problem is real, no?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Jul 2020 19:29:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is this a bug in pg_current_logfile() on Windows?" }, { "msg_contents": "I wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> Seems reasonable. If we rip it out completely we'll have to find all the\n>> places it breaks and fix them. And we'll almost certainly get new\n>> breakage. If it's hiding a real bug we'll have to do that, but I'd be\n>> reluctant unless there's actual proof.\n\n> Hard to tell. What I propose to do right now is change the \\r filters\n> as shown above, and see if the test I added in 004_logrotate.pl starts\n> to show failures on Windows. If it does, and no other place does,\n> I'm willing to be satisfied with that. If we see *other* failures then\n> that'd prove that the problem is real, no?\n\nSo I did that, and the first report is from bowerbird and it's still\ngreen. Unless I'm completely misinterpreting what's happening (always\na possibility), that means we're still managing to remove \"data\"\noccurrences of \\r.\n\nThe most likely theory about that, I think, is that IPC::Run::run already\ntranslated any \\r\\n occurrences in the psql command's output to plain \\n.\nThen, the \\r generated by pg_current_logfile() would butt up against the\nline-ending \\n, allowing the \"fix\" in sub psql to remove valid data.\n\nOne thing I noticed while making 91bdf499b is that some of these\nsubstitutions are conditioned on \"if $TestLib::windows_os\" while others\nare conditioned on \"if $Config{osname} eq 'msys'\". What is the reason\nfor this difference? 
Is it possible that we only really need to do it\nin the latter case?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Jul 2020 22:40:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is this a bug in pg_current_logfile() on Windows?" }, { "msg_contents": "On 7/8/20 10:40 PM, Tom Lane wrote:\n> I wrote:\n>> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>>> Seems reasonable. If we rip it out completely we'll have to find all the\n>>> places it breaks and fix them. And we'll almost certainly get new\n>>> breakage. If it's hiding a real bug we'll have to do that, but I'd be\n>>> reluctant unless there's actual proof.\n>> Hard to tell. What I propose to do right now is change the \\r filters\n>> as shown above, and see if the test I added in 004_logrotate.pl starts\n>> to show failures on Windows. If it does, and no other place does,\n>> I'm willing to be satisfied with that. If we see *other* failures then\n>> that'd prove that the problem is real, no?\n> So I did that, and the first report is from bowerbird and it's still\n> green. Unless I'm completely misinterpreting what's happening (always\n> a possibility), that means we're still managing to remove \"data\"\n> occurrences of \\r.\n>\n> The most likely theory about that, I think, is that IPC::Run::run already\n> translated any \\r\\n occurrences in the psql command's output to plain \\n.\n> Then, the \\r generated by pg_current_logfile() would butt up against the\n> line-ending \\n, allowing the \"fix\" in sub psql to remove valid data.\n\n\nIt's possible. I do see some mangling of that kind in IPC::Run's\nWin32IO.pm and Win32Pump.pm.\n\n\nAttached for reference is the IPC::Run package I usually use on Windows.\n\n\n>\n> One thing I noticed while making 91bdf499b is that some of these\n> substitutions are conditioned on \"if $TestLib::windows_os\" while others\n> are conditioned on \"if $Config{osname} eq 'msys'\". 
What is the reason\n> for this difference? Is it possible that we only really need to do it\n> in the latter case?\n>\n> \t\t\t\n\n\nIn general I make the condition for such hacks as restrictive as\npossible. I don't guarantee that I have been perfectly consistent about\nthat, though.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 9 Jul 2020 09:43:53 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Is this a bug in pg_current_logfile() on Windows?" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 7/8/20 10:40 PM, Tom Lane wrote:\n>> So I did that, and the first report is from bowerbird and it's still\n>> green. Unless I'm completely misinterpreting what's happening (always\n>> a possibility), that means we're still managing to remove \"data\"\n>> occurrences of \\r.\n>> The most likely theory about that, I think, is that IPC::Run::run already\n>> translated any \\r\\n occurrences in the psql command's output to plain \\n.\n>> Then, the \\r generated by pg_current_logfile() would butt up against the\n>> line-ending \\n, allowing the \"fix\" in sub psql to remove valid data.\n\n> It's possible. I do see some mangling of that kind in IPC::Run's\n> Win32IO.pm and Win32Pump.pm.\n\nThe plot thickens: as of this morning, fairywren and jacana are showing\nthe failure I expected, while drongo and bowerbird are not. (Our other\nWindows animals are not running the TAP tests, so they're no help here.)\n\nIt's not hard to believe that the latter two are using a different libc\nimplementation, but how would that affect the behavior of the TAP\ninfrastructure? Are they also using different Perls? 
(By hypothesis,\nthe pg_current_logfile bug exists across all Windows builds, so we have\nto explain why the TAP tests only reveal it on some of these animals.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Jul 2020 10:44:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is this a bug in pg_current_logfile() on Windows?" }, { "msg_contents": "\nOn 7/9/20 10:44 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 7/8/20 10:40 PM, Tom Lane wrote:\n>>> So I did that, and the first report is from bowerbird and it's still\n>>> green. Unless I'm completely misinterpreting what's happening (always\n>>> a possibility), that means we're still managing to remove \"data\"\n>>> occurrences of \\r.\n>>> The most likely theory about that, I think, is that IPC::Run::run already\n>>> translated any \\r\\n occurrences in the psql command's output to plain \\n.\n>>> Then, the \\r generated by pg_current_logfile() would butt up against the\n>>> line-ending \\n, allowing the \"fix\" in sub psql to remove valid data.\n>> It's possible. I do see some mangling of that kind in IPC::Run's\n>> Win32IO.pm and Win32Pump.pm.\n> The plot thickens: as of this morning, fairywren and jacana are showing\n> the failure I expected, while drongo and bowerbird are not. (Our other\n> Windows animals are not running the TAP tests, so they're no help here.)\n>\n> It's not hard to believe that the latter two are using a different libc\n> implementation, but how would that affect the behavior of the TAP\n> infrastructure? Are they also using different Perls? 
(By hypothesis,\n> the pg_current_logfile bug exists across all Windows builds, so we have\n> to explain why the TAP tests only reveal it on some of these animals.)\n>\n> \t\t\t\n\n\n\nThey should use the same libc implementation (msvcrt.dll).\n\n\nBut the perls they are using are indeed different - msys animals have to\nuse msys' perl for TAP tests because native perl doesn't understand msys\nfile paths. Conversely, MSVC animals must use native perl (AS or\nStrawberry) to run TAP tests. So jacana and fairywren, the two msys\nanimals, are doing what you expected and drongo and bowerbird, the two\nMSVC animals, are not.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 9 Jul 2020 10:54:13 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Is this a bug in pg_current_logfile() on Windows?" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 7/9/20 10:44 AM, Tom Lane wrote:\n>> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>>> On 7/8/20 10:40 PM, Tom Lane wrote:\n>>>> The most likely theory about that, I think, is that IPC::Run::run already\n>>>> translated any \\r\\n occurrences in the psql command's output to plain \\n.\n\n>> It's not hard to believe that the latter two are using a different libc\n>> implementation, but how would that affect the behavior of the TAP\n>> infrastructure? Are they also using different Perls? (By hypothesis,\n>> the pg_current_logfile bug exists across all Windows builds, so we have\n>> to explain why the TAP tests only reveal it on some of these animals.)\n\n> But the perls they are using are indeed different - msys animals have to\n> use msys' perl for TAP tests because native perl doesn't understand msys\n> file paths. Conversely, MSVC animals must use native perl (AS or\n> Strawberry) to run TAP tests. 
So jacana and fairywren, the two msys\n> animals, are doing what you expected and drongo and bowerbird, the two\n> MSVC animals, are not.\n\nAh-hah. So this leads to the conclusion that in native perl, IPC::Run\nis doing \\r\\n conversion for us while in msys perl it is not.\n\nTherefore, we either should figure out how to get msys perl to do\nthat conversion (and remove it from our code altogether), or make the\nconversions conditional on \"is it msys perl?\". I am not quite sure\nif the existing tests \"if $Config{osname} eq 'msys'\" are a legitimate\nimplementation of that condition or not --- it seems like nominally\nthey are checking the OS not the Perl, but maybe it's close enough.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Jul 2020 11:04:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is this a bug in pg_current_logfile() on Windows?" }, { "msg_contents": "\nOn 7/9/20 11:04 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 7/9/20 10:44 AM, Tom Lane wrote:\n>>> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>>>> On 7/8/20 10:40 PM, Tom Lane wrote:\n>>>>> The most likely theory about that, I think, is that IPC::Run::run already\n>>>>> translated any \\r\\n occurrences in the psql command's output to plain \\n.\n>>> It's not hard to believe that the latter two are using a different libc\n>>> implementation, but how would that affect the behavior of the TAP\n>>> infrastructure? Are they also using different Perls? (By hypothesis,\n>>> the pg_current_logfile bug exists across all Windows builds, so we have\n>>> to explain why the TAP tests only reveal it on some of these animals.)\n>> But the perls they are using are indeed different - msys animals have to\n>> use msys' perl for TAP tests because native perl doesn't understand msys\n>> file paths. Conversely, MSVC animals must use native perl (AS or\n>> Strawberry) to run TAP tests. 
So jacana and fairywren, the two msys\n>> animals, are doing what you expected and drongo and bowerbird, the two\n>> MSVC animals, are not.\n> Ah-hah. So this leads to the conclusion that in native perl, IPC::Run\n> is doing \\r\\n conversion for us while in msys perl it is not.\n>\n> Therefore, we either should figure out how to get msys perl to do\n> that conversion (and remove it from our code altogether), or make the\n> conversions conditional on \"is it msys perl?\". I am not quite sure\n> if the existing tests \"if $Config{osname} eq 'msys'\" are a legitimate\n> implementation of that condition or not --- it seems like nominally\n> they are checking the OS not the Perl, but maybe it's close enough.\n>\n> \t\t\t\n\n\n\nIf the reported OS is msys (it's a pseudo OS in effect) then the perl\nmust be msys' perl. Even when called from msys, native perl reports the\nOS as MSWin32. So yes, close enough.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 9 Jul 2020 11:22:43 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Is this a bug in pg_current_logfile() on Windows?" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 7/9/20 11:04 AM, Tom Lane wrote:\n>> Therefore, we either should figure out how to get msys perl to do\n>> that conversion (and remove it from our code altogether), or make the\n>> conversions conditional on \"is it msys perl?\". I am not quite sure\n>> if the existing tests \"if $Config{osname} eq 'msys'\" are a legitimate\n>> implementation of that condition or not --- it seems like nominally\n>> they are checking the OS not the Perl, but maybe it's close enough.\n\n> If the reported OS is msys (it's a pseudo OS in effect) then the perl\n> must be msys' perl. Even when called from msys, native perl reports the\n> OS as MSWin32. 
So yes, close enough.\n\nCool, I'll go try changing all those conditions to use the msys test.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Jul 2020 11:24:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is this a bug in pg_current_logfile() on Windows?" }, { "msg_contents": "I wrote:\n> Cool, I'll go try changing all those conditions to use the msys test.\n\nOK, that worked: all four relevant buildfarm members are now showing\nthe expected test failure. So I'll go fix the original bug.\n\nShould we consider back-patching the CRLF filtering changes, ie\n91bdf499b + ffb4cee43? It's not really necessary perhaps, but\nI dislike situations where the \"same\" test on different branches is\ntesting different things. Seems like a recipe for future surprises.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Jul 2020 15:36:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is this a bug in pg_current_logfile() on Windows?" }, { "msg_contents": "\nOn 7/9/20 3:36 PM, Tom Lane wrote:\n> I wrote:\n>> Cool, I'll go try changing all those conditions to use the msys test.\n> OK, that worked: all four relevant buildfarm members are now showing\n> the expected test failure. So I'll go fix the original bug.\n>\n> Should we consider back-patching the CRLF filtering changes, ie\n> 91bdf499b + ffb4cee43? It's not really necessary perhaps, but\n> I dislike situations where the \"same\" test on different branches is\n> testing different things. Seems like a recipe for future surprises.\n\n\nYes please.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 9 Jul 2020 16:11:08 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Is this a bug in pg_current_logfile() on Windows?" 
}, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 7/9/20 3:36 PM, Tom Lane wrote:\n>> Should we consider back-patching the CRLF filtering changes, ie\n>> 91bdf499b + ffb4cee43? It's not really necessary perhaps, but\n>> I dislike situations where the \"same\" test on different branches is\n>> testing different things. Seems like a recipe for future surprises.\n\n> Yes please.\n\nDone.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Jul 2020 17:39:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is this a bug in pg_current_logfile() on Windows?" } ]
[ { "msg_contents": "Last week as I was working on adaptive hash join [1] and trying to get\nparallel adaptive hash join batch 0 to spill correctly, I noticed what\nseemed like a problem with the code to repartition batch 0.\n\nIf we run out of space while inserting tuples into the hashtable during\nthe build phase of parallel hash join and proceed to increase the number\nof batches, we need to repartition all of the tuples from the old\ngeneration (when nbatch was x) and move them to their new homes in the\nnew generation (when nbatch is 2x). Before we do this repartitioning we\ndisable growth in the number of batches.\n\nThen we repartition the tuples from the hashtable, inserting them either\nback into the hashtable or into a batch file. While inserting them into\nthe hashtable, we call ExecParallelHashTupleAlloc(), and, if there is no\nspace for the current tuple in the current chunk and growth in the\nnumber of batches is disabled, we go ahead and allocate a new chunk of\nmemory -- regardless of whether or not we will exceed the space limit.\n\nBelow, I've included a test case, which, on master, results in an error\nwhile trying to allocate shared memory. I use a custom data type whose\nhash function ensures that the tuples will go to batch 0. With my\nattached patch, this test case no longer errors out.\n\nI discussed with Thomas Munro, and it seems this is not the desired\nbehavior.\n\nWe discussed how abandoning a repartitioning effort once we know it is\ndoomed is an optimization anyway.\n\nTo start with, I've attached a patch which bails out of the\nExecParallelHashRepartitionFirst() attempt when allocating a new chunk\nof memory would exceed the space limit. 
We skip\nExecParallelHashRepartitionRest() and engage in the deciding phase as\nbefore.\n\nThis means that we will disable growth in the number of batches\nif all of the tuples that we attempted to load back into the hashtable\nfrom the evicted tuple queue would stay resident in the hashtable.\nOtherwise, we will set growth to indicate we need to try increasing the\nnumber of batches and return, eventually returning NULL to the original\nallocation function call and indicating we need to retry repartitioning.\n\nIt's important to note that if we disable growth in the deciding phase\ndue to skew, batch 0, and subsequent batches that had too many tuples to\nfit in the space allowed, will simply exceed the space limit while\nbuilding the hashtable. This patch does not fix that.\n\nThomas and I also discussed the potential optimization of bailing out of\nrepartitioning during repartitioning of all of the other batches (after\nbatch 0) in ExecParallelHashRepartitionRest(). This would be a good\noptimization, however, it isn't addressing a \"bug\" in the same way that\nbailing out in ExecParallelHashRepartitionFirst() is. Also, I hacked on\na few versions of this optimization and it requires more thought. I\nwould like to propose that as a separate patch and thread.\n\nOne note about the code of the attached patch, I added a variable to the\nParallelHashJoinState structure indicating that repartitioning should be\nabandoned. Workers only need to check it before allocating a new chunk of\nmemory during repartitioning. 
I thought about whether or not it would be\nbetter to make it a ParallelHashGrowth stage, but I wasn't sure whether\nor not that made sense.\n\n--------------------------------\nTest Case\n--------------------------------\n\nDROP TYPE stub CASCADE;\nCREATE TYPE stub AS (value CHAR(8098));\n\nCREATE FUNCTION stub_hash(item stub)\nRETURNS INTEGER AS $$\nBEGIN\n RETURN 0;\nEND; $$ LANGUAGE plpgsql IMMUTABLE LEAKPROOF STRICT PARALLEL SAFE;\n\nCREATE FUNCTION stub_eq(item1 stub, item2 stub)\nRETURNS BOOLEAN AS $$\nBEGIN\n RETURN item1.value = item2.value;\nEND; $$ LANGUAGE plpgsql IMMUTABLE LEAKPROOF STRICT PARALLEL SAFE;\n\nCREATE OPERATOR = (\n FUNCTION = stub_eq,\n LEFTARG = stub,\n RIGHTARG = stub,\n COMMUTATOR = =,\n HASHES, MERGES\n);\n\nCREATE OPERATOR CLASS stub_hash_ops\nDEFAULT FOR TYPE stub USING hash AS\n OPERATOR 1 =(stub, stub),\n FUNCTION 1 stub_hash(stub);\n\nDROP TABLE IF EXISTS probeside_batch0;\nCREATE TABLE probeside_batch0(a stub);\nALTER TABLE probeside_batch0 ALTER COLUMN a SET STORAGE PLAIN;\nINSERT INTO probeside_batch0 SELECT '(\"\")' FROM generate_series(1, 13);\n\nDROP TABLE IF EXISTS hashside_wide_batch0;\nCREATE TABLE hashside_wide_batch0(a stub, id int);\nALTER TABLE hashside_wide_batch0 ALTER COLUMN a SET STORAGE PLAIN;\nINSERT INTO hashside_wide_batch0 SELECT '(\"\")', 22 FROM generate_series(1,\n200);\nANALYZE probeside_batch0, hashside_wide_batch0;\n\nset min_parallel_table_scan_size = 0;\nset parallel_setup_cost = 0;\nset enable_hashjoin = on;\n\nset max_parallel_workers_per_gather = 1;\nset enable_parallel_hash = on;\nset work_mem = '64kB';\n\nexplain (analyze, costs off)\nSELECT TRIM((probeside_batch0.a).value),\n hashside_wide_batch0.id,\n hashside_wide_batch0.ctid as innerctid,\n TRIM((hashside_wide_batch0.a).value), probeside_batch0.ctid as outerctid\nFROM probeside_batch0\nLEFT OUTER JOIN hashside_wide_batch0 USING 
(a);\n\n---------------------------------\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKGJvYFCcF8vTHFSQQB_F8oGRsBp3JdZAPWbORZgfAPk5Sw%40mail.gmail.com#1156516651bb2587da3909cf1db29952\n\n-- \nMelanie Plageman", "msg_date": "Wed, 8 Jul 2020 13:16:54 -0700", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Reigning in ExecParallelHashRepartitionFirst" }, { "msg_contents": "s/reign/rein/ in $subject\nhttps://www.merriam-webster.com/words-at-play/do-you-rein-in-or-reign-in-something", "msg_date": "Wed, 8 Jul 2020 17:57:30 -0700", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reigning in ExecParallelHashRepartitionFirst" }, { "msg_contents": "On Thu, Jul 9, 2020 at 8:17 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> Last week as I was working on adaptive hash join [1] and trying to get\n> parallel adaptive hash join batch 0 to spill correctly, I noticed what\n> seemed like a problem with the code to repartition batch 0.\n>\n> If we run out of space while inserting tuples into the hashtable during\n> the build phase of parallel hash join and proceed to increase the number\n> of batches, we need to repartition all of the tuples from the old\n> generation (when nbatch was x) and move them to their new homes in the\n> new generation (when nbatch is 2x). Before we do this repartitioning we\n> disable growth in the number of batches.\n>\n> Then we repartition the tuples from the hashtable, inserting them either\n> back into the hashtable or into a batch file. 
While inserting them into\n> the hashtable, we call ExecParallelHashTupleAlloc(), and, if there is no\n> space for the current tuple in the current chunk and growth in the\n> number of batches is disabled, we go ahead and allocate a new chunk of\n> memory -- regardless of whether or not we will exceed the space limit.\n\nHmm. It shouldn't really be possible for\nExecParallelHashRepartitionFirst() to run out of memory anyway,\nconsidering that the input of that operation previously fit (just... I\nmean we started repartitioning because one more chunk would have\npushed us over the edge, but the tuples so far fit, and we'll insert\nthem in the same order for each input chunk, possibly filtering some\nout). Perhaps you reached this condition because\nbatches[0].shared->size finishes up accounting for the memory used by\nthe bucket array in PHJ_GROW_BUCKETS_ELECTING, but didn't originally\naccount for it in generation 0, so what previously appeared to fit no\nlonger does :-(. I'll look into that.\n\n\n", "msg_date": "Mon, 27 Jul 2020 18:52:33 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reigning in ExecParallelHashRepartitionFirst" } ]
[ { "msg_contents": "I want to explain one bad situation we have encountered with one of our \ncustomers.\nThere are ~5000 tables in their database. And what is worse - most of \nthem are actively used.\nThen several flaws of Postgres make their system almost stuck.\n\nAutovacuum is periodically processing all this 5k relations (because \nthem are actively updated).\nAnd as far as most of this tables are small enough autovacuum complete \nprocessing of them almost in the same time.\nAs a result autovacuum workers produce ~5k invalidation messages in \nshort period of time.\n\nThere are several thousand clients, most of which are executing complex \nqueries.\nSo them are not able to process all this invalidation messages and their \ninvalidation message buffer is overflown.\nSize of this buffer is hardcoded (MAXNUMMESSAGES = 4096) and can not be \nchanged without recompilation of Postgres.\nThis is problem N1.\n\nAs a result resetState is set to true, forcing backends to invalidate \ntheir caches.\nSo most of backends loose there cached metadata and have to access \nsystem catalog trying to reload it.\nBut then we come to the next show stopper: NUM_LOCK_PARTITIONS.\nIt is also hardcoded and can't be changed without recompilation:\n\n#define LOG2_NUM_LOCK_PARTITIONS  4\n#define NUM_LOCK_PARTITIONS  (1 << LOG2_NUM_LOCK_PARTITIONS)\n\nHaving just 16 LW-Locks greatly increase conflict probability (taken in \naccount that there are 5k tables and totally about 25k relations).\nIt cause huge lw-lock acquisition time for heap_open and planning stage \nof some queries is increased from milliseconds to several minutes!\nKoda!\n\nThis is problem number 2. But there is one more flaw we have faced with. \nWe have increased LOG2_NUM_LOCK_PARTITIONS to 8\nand ... 
clients start to report \"too many LWLocks taken\" error.\nThere is yet another hardcoded constant MAX_SIMUL_LWLOCKS = 200\nwhich relation with NUM_LOCK_PARTITIONS  was not mentioned anywhere.\n\nBut there are several places in Postgres where it tries to hold all \npartition locks (for example in deadlock detector).\nDefinitely if NUM_LOCK_PARTITIONS > MAX_SIMUL_LWLOCKS we get this error.\n\nSo looks like NUM_LOCK_PARTITIONS and MAXNUMMESSAGES  constants have to \nbe replaced with GUCs.\nTo avoid division, we can specify log2 of this values, so shift can be \nused instead.\nAnd MAX_SIMUL_LWLOCKS should be defined as NUM_LOCK_PARTITIONS + \nNUM_INDIVIDUAL_LWLOCKS + NAMED_LWLOCK_RESERVE.\n\n\n\n\n\n\n", "msg_date": "Wed, 8 Jul 2020 23:41:01 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Postgres is not able to handle more than 4k tables!?" }, { "msg_contents": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru> writes:\n> There are several thousand clients, most of which are executing complex \n> queries.\n\nSo, that's really the core of your problem. We don't promise that\nyou can run several thousand backends at once. Usually it's recommended\nthat you stick a connection pooler in front of a server with (at most)\na few hundred backends.\n\n> So them are not able to process all this invalidation messages and their \n> invalidation message buffer is overflown.\n> Size of this buffer is hardcoded (MAXNUMMESSAGES = 4096) and can not be \n> changed without recompilation of Postgres.\n> This is problem N1.\n\nNo, this isn't a problem. Or at least you haven't shown a reason to\nthink it is. 
Sinval overruns are somewhat routine, and we certainly\ntest that code path (see CLOBBER_CACHE_ALWAYS buildfarm animals).\n\n> But then we come to the next show stopper: NUM_LOCK_PARTITIONS.\n> It is also hardcoded and can't be changed without recompilation:\n\n> #define LOG2_NUM_LOCK_PARTITIONS  4\n> #define NUM_LOCK_PARTITIONS  (1 << LOG2_NUM_LOCK_PARTITIONS)\n\n> Having just 16 LW-Locks greatly increase conflict probability (taken in \n> account that there are 5k tables and totally about 25k relations).\n\n> It cause huge lw-lock acquisition time for heap_open and planning stage \n> of some queries is increased from milliseconds to several minutes!\n\nReally?\n\n> This is problem number 2. But there is one more flaw we have faced with. \n> We have increased LOG2_NUM_LOCK_PARTITIONS to 8\n> and ... clients start to report \"too many LWLocks taken\" error.\n> There is yet another hardcoded constant MAX_SIMUL_LWLOCKS = 200\n> which relation with NUM_LOCK_PARTITIONS  was not mentioned anywhere.\n\nSeems like self-inflicted damage. I certainly don't recall anyplace\nin the docs where we suggest that you can alter that constant without\nworrying about consequences.\n\n> So looks like NUM_LOCK_PARTITIONS and MAXNUMMESSAGES  constants have to \n> be replaced with GUCs.\n\nI seriously doubt we'd do that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Jul 2020 17:35:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres is not able to handle more than 4k tables!?" 
}, { "msg_contents": "From: Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\r\n> Autovacuum is periodically processing all this 5k relations (because\r\n> them are actively updated).\r\n> And as far as most of this tables are small enough autovacuum complete\r\n> processing of them almost in the same time.\r\n> As a result autovacuum workers produce ~5k invalidation messages in\r\n> short period of time.\r\n\r\nHow about trying CREATE/ALTER TABLE WITH (vacuum_truncate = off)? It's available since PG 12. It causes autovacuum to not truncate the relation. It's the relation truncation what produces those shared invalidation messages.\r\n\r\n\r\n> But then we come to the next show stopper: NUM_LOCK_PARTITIONS.\r\n> It is also hardcoded and can't be changed without recompilation:\r\n> \r\n> #define LOG2_NUM_LOCK_PARTITIONS 4\r\n> #define NUM_LOCK_PARTITIONS (1 << LOG2_NUM_LOCK_PARTITIONS)\r\n> \r\n> Having just 16 LW-Locks greatly increase conflict probability (taken in\r\n> account that there are 5k tables and totally about 25k relations).\r\n> It cause huge lw-lock acquisition time for heap_open and planning stage\r\n> of some queries is increased from milliseconds to several minutes!\r\n> Koda!\r\n\r\nThe vacuum's relation truncation is also the culprit here, and it can be eliminated by the above storage parameter. It acquires Access Exclusive lock on the relation. Without the strong Access Exclusive lock, just running DML statements use the fast path locking, which doesn't acquire the lock manager partition lock.\r\n\r\nThe long lwlock wait is a sad story. The victim is probably exclusive lockers. When someone holds a shared lock on a lwlock, the exclusive locker has to wait. That's OK. However, if another share locker comes later, it acquires the lwlock even though there're waiting exclusive lockers. 
That's unfair, but this is the community decision.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n\r\n", "msg_date": "Thu, 9 Jul 2020 00:49:21 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Postgres is not able to handle more than 4k tables!?" }, { "msg_contents": "\n\nOn 09.07.2020 03:49, tsunakawa.takay@fujitsu.com wrote:\n> From: Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\n>> Autovacuum is periodically processing all this 5k relations (because\n>> them are actively updated).\n>> And as far as most of this tables are small enough autovacuum complete\n>> processing of them almost in the same time.\n>> As a result autovacuum workers produce ~5k invalidation messages in\n>> short period of time.\n> How about trying CREATE/ALTER TABLE WITH (vacuum_truncate = off)? It's available since PG 12. It causes autovacuum to not truncate the relation. It's the relation truncation what produces those shared invalidation messages.\n\nInvalidation messages are also caused by statistic update:\n\n#0  0x000055a85f4f5fd6 in RegisterCatcacheInvalidation (cacheId=49, \nhashValue=715727843, dbId=12443)\n     at inval.c:483\n#1  0x000055a85f4f4dc2 in PrepareToInvalidateCacheTuple \n(relation=0x7f45a34ce5a0, tuple=0x7ffc75bebc70,\n     newtuple=0x7f4598e75ef8, function=0x55a85f4f5fc0 \n<RegisterCatcacheInvalidation>) at catcache.c:1830\n#2  0x000055a85f4f6b21 in CacheInvalidateHeapTuple \n(relation=0x7f45a34ce5a0, tuple=0x7ffc75bebc70,\n     newtuple=0x7f4598e75ef8) at inval.c:1149\n#3  0x000055a85f016372 in heap_update (relation=0x7f45a34ce5a0, \notid=0x7f4598e75efc,\n     newtup=0x7f4598e75ef8, cid=0, crosscheck=0x0, wait=1 '\\001', \nhufd=0x7ffc75bebcf0,\n     lockmode=0x7ffc75bebce8) at heapam.c:4245\n#4  0x000055a85f016f98 in simple_heap_update (relation=0x7f45a34ce5a0, \notid=0x7f4598e75efc,\n     tup=0x7f4598e75ef8) at heapam.c:4490\n#5  0x000055a85f153ec5 in update_attstats (relid=16384, 
inh=0 '\\000', \nnatts=1, vacattrstats=0x55a860f0fba0)\n     at analyze.c:1619\n#6  0x000055a85f151898 in do_analyze_rel (onerel=0x7f45a3480080, \noptions=98, params=0x55a860f0f028,\n     va_cols=0x0, acquirefunc=0x55a85f15264e <acquire_sample_rows>, \nrelpages=26549, inh=0 '\\000',\n     in_outer_xact=0 '\\000', elevel=13) at analyze.c:562\n#7  0x000055a85f150be1 in analyze_rel (relid=16384, \nrelation=0x7ffc75bec370, options=98,\n     params=0x55a860f0f028, va_cols=0x0, in_outer_xact=0 '\\000', \nbstrategy=0x55a860f0f0b8) at analyze.c:257\n#8  0x000055a85f1e1589 in vacuum (options=98, relation=0x7ffc75bec370, \nrelid=16384, params=0x55a860f0f028,\n     va_cols=0x0, bstrategy=0x55a860f0f0b8, isTopLevel=1 '\\001') at \nvacuum.c:320\n#9  0x000055a85f2fd92a in autovacuum_do_vac_analyze (tab=0x55a860f0f020, \nbstrategy=0x55a860f0f0b8)\n     at autovacuum.c:2874\n#10 0x000055a85f2fcccb in do_autovacuum () at autovacuum.c:2374\n>\n>> But then we come to the next show stopper: NUM_LOCK_PARTITIONS.\n>> It is also hardcoded and can't be changed without recompilation:\n>>\n>> #define LOG2_NUM_LOCK_PARTITIONS 4\n>> #define NUM_LOCK_PARTITIONS (1 << LOG2_NUM_LOCK_PARTITIONS)\n>>\n>> Having just 16 LW-Locks greatly increase conflict probability (taken in\n>> account that there are 5k tables and totally about 25k relations).\n>> It cause huge lw-lock acquisition time for heap_open and planning stage\n>> of some queries is increased from milliseconds to several minutes!\n>> Koda!\n> The vacuum's relation truncation is also the culprit here, and it can be eliminated by the above storage parameter. It acquires Access Exclusive lock on the relation. 
Without the strong Access Exclusive lock, just running DML statements use the fast path locking, which doesn't acquire the lock manager partition lock.\n\nLooks like it is not true (at lest for PG9.6):\n\n#0  0x00007fa6d30da087 in semop () from /lib64/libc.so.6\n#1  0x0000000000682241 in PGSemaphoreLock \n(sema=sema@entry=0x7fa66f5655d8) at pg_sema.c:387\n#2  0x00000000006ec6eb in LWLockAcquire (lock=lock@entry=0x7f23b544f800, \nmode=mode@entry=LW_EXCLUSIVE) at lwlock.c:1338\n#3  0x00000000006e5560 in LockAcquireExtended \n(locktag=locktag@entry=0x7ffd94883fa0, lockmode=lockmode@entry=1, \nsessionLock=sessionLock@entry=0 '\\000', dontWait=dontWait@entry=0 \n'\\000', reportMemoryError=reportMemoryError@entry=1 '\\001', \nlocallockp=locallockp@entry=0x7ffd94883f98) at lock.c:962\n#4  0x00000000006e29f6 in LockRelationOid (relid=87103837, lockmode=1) \nat lmgr.c:113\n#5  0x00000000004a9f55 in relation_open (relationId=87103837, \nlockmode=lockmode@entry=1) at heapam.c:1131\n#6  0x00000000004bdc66 in index_open (relationId=<optimized out>, \nlockmode=lockmode@entry=1) at indexam.c:151\n#7  0x000000000067be58 in get_relation_info (root=root@entry=0x3a1a758, \nrelationObjectId=72079078, inhparent=<optimized out>, \nrel=rel@entry=0x3a2d460) at plancat.c:183\n#8  0x000000000067ef45 in build_simple_rel (root=root@entry=0x3a1a758, \nrelid=2, reloptkind=reloptkind@entry=RELOPT_BASEREL) at relnode.c:148\n\nPlease notice  lockmode=1 (AccessShareLock)\n\n>\n> The long lwlock wait is a sad story. The victim is probably exclusive lockers. When someone holds a shared lock on a lwlock, the exclusive locker has to wait. That's OK. However, if another share locker comes later, it acquires the lwlock even though there're waiting exclusive lockers. 
That's unfair, but this is the community decision.\nYes, I also think that it is the reason for the problem.\nAlexander Korotkov has implemented fair LW-Locks which eliminate this \nkind of problem in some scenarios.\nMaybe it can also help here.\n\n\n\n", "msg_date": "Thu, 9 Jul 2020 09:31:06 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Postgres is not able to handle more than 4k tables!?" }, { "msg_contents": "\n\nOn 09.07.2020 00:35, Tom Lane wrote:\n> Konstantin Knizhnik <k.knizhnik@postgrespro.ru> writes:\n>> There are several thousand clients, most of which are executing complex\n>> queries.\n> So, that's really the core of your problem. We don't promise that\n> you can run several thousand backends at once. Usually it's recommended\n> that you stick a connection pooler in front of a server with (at most)\n> a few hundred backends.\nIt is not my problem - it is the customer's problem.\nCertainly the advice to use a connection pooler is the first thing we \nproposed to the customer when we saw such a large number of active backends.\nUnfortunately it is not always possible (a connection pooler does not \npreserve session semantics).\nThis is why I have proposed a builtin connection pooler for Postgres.\nBut that is a different story.\n\n\n>> So them are not able to process all this invalidation messages and their\n>> invalidation message buffer is overflown.\n>> Size of this buffer is hardcoded (MAXNUMMESSAGES = 4096) and can not be\n>> changed without recompilation of Postgres.\n>> This is problem N1.\n> No, this isn't a problem. Or at least you haven't shown a reason to\n> think it is. 
Sinval overruns are somewhat routine, and we certainly\n> test that code path (see CLOBBER_CACHE_ALWAYS buildfarm animals).\nCertainly a cache overrun is not fatal.\nBut if most backends are blocked in heap_open of the pg_attribute \nrelation, then something is not OK with Postgres, isn't it?\n\n> It causes huge lw-lock acquisition time for heap_open and planning stage\n>> of some queries is increased from milliseconds to several minutes!\n> Really?\n\nPlanning time: 75698.602 ms\nExecution time: 0.861 ms\n\n>> This is problem number 2. But there is one more flaw we have faced with.\n>> We have increased LOG2_NUM_LOCK_PARTITIONS to 8\n>> and ... clients start to report \"too many LWLocks taken\" error.\n>> There is yet another hardcoded constant MAX_SIMUL_LWLOCKS = 200\n>> which relation with NUM_LOCK_PARTITIONS  was not mentioned anywhere.\n> Seems like self-inflicted damage. I certainly don't recall anyplace\n> in the docs where we suggest that you can alter that constant without\n> worrying about consequences.\n\nIt looks like you are trying to convince me that such a practice of hardcoding \nconstants in code, without\ntaking the relations between them into account, is a good design pattern?\n>> So looks like NUM_LOCK_PARTITIONS and MAXNUMMESSAGES  constants have to\n>> be replaced with GUCs.\n> I seriously doubt we'd do that.\nIt's a pity, because such an attitude is one of the reasons why Postgres \nis a pgbench-oriented database showing good results on notebooks\nbut not on real systems running on powerful servers (NUMA, SSD, huge amount \nof memory, large number of cores,...).\n\n\n\n\n", "msg_date": "Thu, 9 Jul 2020 10:07:38 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Postgres is not able to handle more than 4k tables!?" 
}, { "msg_contents": "From: Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\r\n> Looks like it is not true (at lest for PG9.6):\r\n> \r\n> #0 0x00007fa6d30da087 in semop () from /lib64/libc.so.6\r\n> #1 0x0000000000682241 in PGSemaphoreLock\r\n> (sema=sema@entry=0x7fa66f5655d8) at pg_sema.c:387\r\n> #2 0x00000000006ec6eb in LWLockAcquire\r\n> (lock=lock@entry=0x7f23b544f800,\r\n> mode=mode@entry=LW_EXCLUSIVE) at lwlock.c:1338\r\n> #3 0x00000000006e5560 in LockAcquireExtended\r\n> (locktag=locktag@entry=0x7ffd94883fa0, lockmode=lockmode@entry=1,\r\n> sessionLock=sessionLock@entry=0 '\\000', dontWait=dontWait@entry=0\r\n> '\\000', reportMemoryError=reportMemoryError@entry=1 '\\001',\r\n> locallockp=locallockp@entry=0x7ffd94883f98) at lock.c:962\r\n> #4 0x00000000006e29f6 in LockRelationOid (relid=87103837, lockmode=1)\r\n> at lmgr.c:113\r\n> #5 0x00000000004a9f55 in relation_open (relationId=87103837,\r\n> lockmode=lockmode@entry=1) at heapam.c:1131\r\n> #6 0x00000000004bdc66 in index_open (relationId=<optimized out>,\r\n> lockmode=lockmode@entry=1) at indexam.c:151\r\n> #7 0x000000000067be58 in get_relation_info (root=root@entry=0x3a1a758,\r\n> relationObjectId=72079078, inhparent=<optimized out>,\r\n> rel=rel@entry=0x3a2d460) at plancat.c:183\r\n> #8 0x000000000067ef45 in build_simple_rel (root=root@entry=0x3a1a758,\r\n> relid=2, reloptkind=reloptkind@entry=RELOPT_BASEREL) at relnode.c:148\r\n> \r\n> Please notice lockmode=1 (AccessShareLock)\r\n\r\nOuch, there exists another sad hardcoded value: the number of maximum locks that can be acquired by the fast-path mechanism.\r\n\r\n[LockAcquireExtended]\r\n /*\r\n * Attempt to take lock via fast path, if eligible. But if we remember\r\n * having filled up the fast path array, we don't attempt to make any\r\n * further use of it until we release some locks. 
It's possible that some\r\n * other backend has transferred some of those locks to the shared hash\r\n * table, leaving space free, but it's not worth acquiring the LWLock just\r\n * to check. It's also possible that we're acquiring a second or third\r\n * lock type on a relation we have already locked using the fast-path, but\r\n * for now we don't worry about that case either.\r\n */\r\n if (EligibleForRelationFastPath(locktag, lockmode) &&\r\n FastPathLocalUseCount < FP_LOCK_SLOTS_PER_BACKEND)\r\n {\r\n\r\n/*\r\n * We allow a small number of \"weak\" relation locks (AccessShareLock,\r\n * RowShareLock, RowExclusiveLock) to be recorded in the PGPROC structure\r\n * rather than the main lock table. This eases contention on the lock\r\n * manager LWLocks. See storage/lmgr/README for additional details.\r\n */\r\n#define FP_LOCK_SLOTS_PER_BACKEND 16\r\n\r\n\r\n16 looks easily exceeded even in a not-long OLTP transaction... especially if the table is partitioned. I wonder if we're caught in the hell of lock manager partition lock contention without knowing it. I'm afraid other pitfalls are lurking when there are many relations.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Thu, 9 Jul 2020 07:17:02 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Postgres is not able to handle more than 4k tables!?" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Konstantin Knizhnik <k.knizhnik@postgrespro.ru> writes:\n> > There are several thousand clients, most of which are executing complex \n> > queries.\n> \n> So, that's really the core of your problem. We don't promise that\n> you can run several thousand backends at once. 
Usually it's recommended\n> that you stick a connection pooler in front of a server with (at most)\n> a few hundred backends.\n\nSure, but that doesn't mean things should completely fall over when we\ndo get up to larger numbers of backends, which is definitely pretty\ncommon in larger systems. I'm pretty sure we all agree that using a\nconnection pooler is recommended, but if there's things we can do to\nmake the system work at least a bit better when folks do use lots of\nconnections, provided we don't materially damage other cases, that's\nprobably worthwhile.\n\n> > So them are not able to process all this invalidation messages and their \n> > invalidation message buffer is overflown.\n> > Size of this buffer is hardcoded (MAXNUMMESSAGES = 4096) and can not be \n> > changed without recompilation of Postgres.\n> > This is problem N1.\n> \n> No, this isn't a problem. Or at least you haven't shown a reason to\n> think it is. Sinval overruns are somewhat routine, and we certainly\n> test that code path (see CLOBBER_CACHE_ALWAYS buildfarm animals).\n\nTesting that it doesn't outright break and having it be decently\nperformant are two rather different things. I think we're talking more\nabout performance and not so much about if the system is outright broken\nin this case.\n\n> > But then we come to the next show stopper: NUM_LOCK_PARTITIONS.\n> > It is also hardcoded and can't be changed without recompilation:\n> \n> > #define LOG2_NUM_LOCK_PARTITIONS  4\n> > #define NUM_LOCK_PARTITIONS  (1 << LOG2_NUM_LOCK_PARTITIONS)\n> \n> > Having just 16 LW-Locks greatly increase conflict probability (taken in \n> > account that there are 5k tables and totally about 25k relations).\n> \n> > It cause huge lw-lock acquisition time for heap_open and planning stage \n> > of some queries is increased from milliseconds to several minutes!\n> \n> Really?\n\nApparently, given the response down-thread.\n\n> > This is problem number 2. But there is one more flaw we have faced with. 
\n> > We have increased LOG2_NUM_LOCK_PARTITIONS to 8\n> > and ... clients start to report \"too many LWLocks taken\" error.\n> > There is yet another hardcoded constant MAX_SIMUL_LWLOCKS = 200\n> > which relation with NUM_LOCK_PARTITIONS  was not mentioned anywhere.\n> \n> Seems like self-inflicted damage. I certainly don't recall anyplace\n> in the docs where we suggest that you can alter that constant without\n> worrying about consequences.\n\nPerhaps not in the docs, but would be good to make note of it somewhere,\nas I don't think it's really appropriate to assume these constants won't\never change and whomever contemplates changing them would appreciate\nknowing about other related values..\n\n> > So looks like NUM_LOCK_PARTITIONS and MAXNUMMESSAGES  constants have to \n> > be replaced with GUCs.\n> \n> I seriously doubt we'd do that.\n\nMaking them GUCs does seem like it's a few steps too far... but it'd be\nnice if we could arrange to have values that don't result in the system\nfalling over with large numbers of backends and large numbers of tables.\nTo get a lot of backends, you'd have to set max_connections up pretty\nhigh to begin with- perhaps we should contemplate allowing these values\nto vary based on what max_connections is set to?\n\nThanks,\n\nStephen", "msg_date": "Thu, 9 Jul 2020 10:57:00 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Postgres is not able to handle more than 4k tables!?" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> So, that's really the core of your problem. We don't promise that\n>> you can run several thousand backends at once. 
Usually it's recommended\n>> that you stick a connection pooler in front of a server with (at most)\n>> a few hundred backends.\n\n> Sure, but that doesn't mean things should completely fall over when we\n> do get up to larger numbers of backends, which is definitely pretty\n> common in larger systems.\n\nAs I understood the report, it was not \"things completely fall over\",\nit was \"performance gets bad\". But let's get real. Unless the OP\nhas a machine with thousands of CPUs, trying to run this way is\ncounterproductive.\n\nPerhaps in a decade or two such machines will be common enough that\nit'll make sense to try to tune Postgres to run well on them. Right\nnow I feel no hesitation about saying \"if it hurts, don't do that\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Jul 2020 11:14:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres is not able to handle more than 4k tables!?" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> So, that's really the core of your problem. We don't promise that\n> >> you can run several thousand backends at once. Usually it's recommended\n> >> that you stick a connection pooler in front of a server with (at most)\n> >> a few hundred backends.\n> \n> > Sure, but that doesn't mean things should completely fall over when we\n> > do get up to larger numbers of backends, which is definitely pretty\n> > common in larger systems.\n> \n> As I understood the report, it was not \"things completely fall over\",\n> it was \"performance gets bad\". But let's get real. 
Unless the OP\n> has a machine with thousands of CPUs, trying to run this way is\n> counterproductive.\n\nRight, the issue is that performance gets bad (or, really, more like\nterrible...), and regardless of if it's ideal or not, lots of folks\nactually do run PG with thousands of connections, and we know that at\nstart-up time because they've set max_connections to a sufficiently high\nvalue to support doing exactly that.\n\n> Perhaps in a decade or two such machines will be common enough that\n> it'll make sense to try to tune Postgres to run well on them. Right\n> now I feel no hesitation about saying \"if it hurts, don't do that\".\n\nI disagree that we should completely ignore these use-cases.\n\nThanks,\n\nStephen", "msg_date": "Thu, 9 Jul 2020 11:29:24 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Postgres is not able to handle more than 4k tables!?" }, { "msg_contents": "Hi Stephen,\n\nThank you for supporting the opinion that these are problems not only of \nthe client system design (I agree it is not such a good\nidea to have thousands of tables and thousands of active backends) but also of \nPostgres.\n\nWe have investigated further and found one more problem in \nPostgres causing the \"invalidation storm\".\nThere are some long-living transactions which prevent autovacuum from doing its \nwork and removing dead tuples.\nSo autovacuum is started again and again, and each time it makes no \nprogress but updates statistics and so sends invalidation messages.\nautovacuum_naptime was set to 30 seconds, so every 30 seconds autovacuum \nprocessed a huge number of tables and initiated a large number of invalidation \nmessages, which quite soon caused overflow of the invalidation message buffers \nfor backends performing long OLAP queries.\n\nIt makes me think about two possible optimizations:\n\n1. 
Provide separate invalidation messages for relation metadata and its \nstatistics.\nSo an update of statistics should not invalidate the relation cache.\nThe main problem with this proposal is that pg_class contains relpages \nand reltuples columns which conceptually are part of relation statistics\nbut are stored in the relation cache. If relation statistics are updated, then \nmost likely these fields are also changed. So we have to remove this relation\nfrom the relation cache in any case.\n\n2. Remember in the relation info the XID of the oldest active transaction at the \nmoment of the last autovacuum.\nAt the next autovacuum iteration we first of all compare this stored XID \nwith the current oldest active transaction XID\nand bypass vacuuming this relation if the XID has not changed.\n\nThoughts?\n\n>> So, that's really the core of your problem. We don't promise that\n>> you can run several thousand backends at once. Usually it's recommended\n>> that you stick a connection pooler in front of a server with (at most)\n>> a few hundred backends.\n> Sure, but that doesn't mean things should completely fall over when we\n> do get up to larger numbers of backends, which is definitely pretty\n> common in larger systems. I'm pretty sure we all agree that using a\n> connection pooler is recommended, but if there's things we can do to\n> make the system work at least a bit better when folks do use lots of\n> connections, provided we don't materially damage other cases, that's\n> probably worthwhile.\n\nI also think that Postgres performance should degrade gradually with \nan increasing number\nof active backends. Actually, further investigation of this particular \ncase shows that such a large number of\ndatabase connections was caused by ... 
Postgres slowdown.\nDuring normal workflow the number of active backends is a few hundred.\nBut the \"invalidation storm\" caused queries to hang, so the user application \nhad to initiate more and more new connections to perform the required actions.\nYes, this may not be the best behavior of the application in this case. At \nleast it should first terminate the current connection using \npg_terminate_backend. I just want to notice that the large number of \nbackends was not the core of the problem.\n\n>\n> Making them GUCs does seem like it's a few steps too far... but it'd be\n> nice if we could arrange to have values that don't result in the system\n> falling over with large numbers of backends and large numbers of tables.\n> To get a lot of backends, you'd have to set max_connections up pretty\n> high to begin with- perhaps we should contemplate allowing these values\n> to vary based on what max_connections is set to?\n\nI think that the optimal number of lock partitions should depend \nnot on the number of connections\nbut on the number of available CPU cores, and so on the expected level of concurrency.\nIt is hard to propose some portable way to obtain this number.\nThis is why I think that a GUC is a better solution.\nCertainly I realize that it is a very dangerous parameter which should be \nchanged with special care.\nNot only because of MAX_SIMUL_LWLOCKS.\n\nThere are a few places in Postgres where it tries to lock all partitions \n(deadlock detector, logical replication,...).\nIf there are thousands of partitions, then such a lock will be too \nexpensive, and we get yet another\npopular Postgres problem: the \"deadlock detection storm\", when due to high \ncontention between backends a lock can not be obtained \nwithin deadlock_timeout and so initiates deadlock detection. 
Simultaneous \ndeadlock detection performed by all backends\n(each of which tries to take ALL partition locks) paralyzes the system (TPS \nfalls down to 0).\nThe proposed patch for this problem was also rejected (once again, the problem \ncan be reproduced only on a powerful server with a large number of cores).\n\n\n", "msg_date": "Thu, 9 Jul 2020 18:57:46 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Postgres is not able to handle more than 4k tables!?" }, { "msg_contents": "\n\nOn 09.07.2020 18:14, Tom Lane wrote:\n>\n> As I understood the report, it was not \"things completely fall over\",\n> it was \"performance gets bad\". But let's get real. Unless the OP\n> has a machine with thousands of CPUs, trying to run this way is\n> counterproductive.\nSorry that I was not clear. It is actually a case where \"things completely \nfall over\".\nIf query planning takes several minutes, so that user response time \nincreases from seconds to hours,\nthen the system becomes unusable, doesn't it?\n\n> Perhaps in a decade or two such machines will be common enough that\n> it'll make sense to try to tune Postgres to run well on them. Right\n> now I feel no hesitation about saying \"if it hurts, don't do that\".\n\nUnfortunately we do not have to wait a decade or two.\nPostgres is already faced with multiple problems on existing multiprocessor \nsystems (64, 96,.. cores).\nAnd it is not even necessary to initiate thousands of connections: it is \nenough to load all these cores and let them compete for some\nresource (LW-lock, buffer,...). 
Even standard pgbench/YCSB benchmarks \nwith zipfian distribution can illustrate these problems.\n\nThere were many proposed patches which help to improve this situation.\nBut since these patches increase performance only on huge servers \nwith a large number of cores, and show almost no\nimprovement (or even some degradation) on standard 4-core desktops, \nalmost none of them were committed.\nConsequently our customers have a lot of trouble trying to replace \nOracle with Postgres and provide the same performance on the same\n(quite good and expensive) hardware.\n\n\n\n", "msg_date": "Thu, 9 Jul 2020 19:16:26 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Postgres is not able to handle more than 4k tables!?" }, { "msg_contents": "Hi Konstantin, a silly question: do you consider the workload you have as\nwell-optimized? Can it be optimized further? Reading this thread I have a\nstrong feeling that a very basic set of regular optimization actions is\nmissing here (or not explained): query analysis and optimization based on\npg_stat_statements (and, maybe, pg_stat_kcache), some method to analyze the\nstate of the server in general, resource consumption, etc.\n\nDo you have some monitoring that covers pg_stat_statements?\n\nBefore looking under the hood, I would use multiple pg_stat_statements\nsnapshots (can be analyzed using, say, postgres-checkup or pgCenter) to\nunderstand the workload and identify the heaviest queries -- first of all,\nin terms of total_time, calls, shared buffers reads/hits, temporary files\ngeneration. Which query groups are Top-N in each category, have you looked\nat it?\n\nYou mentioned some crazy numbers for the planning time, but why not\nanalyze the picture holistically and see the overall numbers? For those queries\nthat have increased planning time, what is their share of total_time in the\noverall picture, in %? 
(Unfortunately, we cannot see Top-N by planning time\nin pg_stat_statements till PG13, but it doesn't mean that we cannot have\nsome good understanding of overall picture today, it just requires more\nwork).\n\nIf workload analysis & optimization was done holistically already, or not\npossible due to some reason — pardon me. But if not and if your primary\ngoal is to improve this particular setup ASAP, then the topic could be\nstarted in the -performance mailing list first, discussing the workload and\nits aspects, and only after it's done, raised in -hackers. No?\n\nOn Thu, Jul 9, 2020 at 8:57 AM Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> wrote:\n\n> Hi Stephen,\n>\n> Thank you for supporting an opinion that it is the problems not only of\n> client system design (I agree it is not so good\n> idea to have thousands tables and thousands active backends) but also of\n> Postgres.\n>\n> We have made more investigation and found out one more problem in\n> Postgres causing \"invalidation storm\".\n> There are some log living transactions which prevent autovacuum to do it\n> work and remove dead tuples.\n> So autovacuum is started once and once again and each time did no\n> progress but updated statistics and so sent invalidation messages.\n> autovacuum_naptime was set to 30 seconds, so each 30 seconds autovacuum\n> proceed huge number of tables and initiated large number of invalidation\n> messages which quite soon cause overflow of validation message buffers\n> for backends performing long OLAP queries.\n>\n> It makes me think about two possible optimizations:\n>\n> 1. Provide separate invalidation messages for relation metadata and its\n> statistic.\n> So update of statistic should not invalidate relation cache.\n> The main problem with this proposal is that pg_class contains relpages\n> and reltuples columns which conceptually are \\ part of relation statistic\n> but stored in relation cache. 
If relation statistic is updated, then\n> most likely this fields are also changed. So we have to remove this\n> relation\n> from relation cache in any case.\n>\n> 2. Remember in relation info XID of oldest active transaction at the\n> moment of last autovacuum.\n> At next autovacuum iteration we first of all compare this stored XID\n> with current oldest active transaction XID\n> and bypass vacuuming this relation if XID is not changed.\n>\n> Thoughts?\n>\n> >> So, that's really the core of your problem. We don't promise that\n> >> you can run several thousand backends at once. Usually it's recommended\n> >> that you stick a connection pooler in front of a server with (at most)\n> >> a few hundred backends.\n> > Sure, but that doesn't mean things should completely fall over when we\n> > do get up to larger numbers of backends, which is definitely pretty\n> > common in larger systems. I'm pretty sure we all agree that using a\n> > connection pooler is recommended, but if there's things we can do to\n> > make the system work at least a bit better when folks do use lots of\n> > connections, provided we don't materially damage other cases, that's\n> > probably worthwhile.\n>\n> I also think that Postgres performance should degrade gradually with\n> increasing number\n> of active backends. Actually further investigations of this particular\n> case shows that such large number of\n> database connections was caused by ... Postgres slowdown.\n> During normal workflow number of active backends is few hundreds.\n> But \"invalidation storm\" cause hangout of queries, so user application\n> has to initiate more and more new connections to perform required actions.\n> Yes, this may be not the best behavior of application in this case. At\n> least it should first terminate current connection using\n> pg_terminate_backend. 
I just want to notice that large number of\n> backends was not the core of the problem.\n>\n> >\n> > Making them GUCs does seem like it's a few steps too far... but it'd be\n> > nice if we could arrange to have values that don't result in the system\n> > falling over with large numbers of backends and large numbers of tables.\n> > To get a lot of backends, you'd have to set max_connections up pretty\n> > high to begin with- perhaps we should contemplate allowing these values\n> > to vary based on what max_connections is set to?\n>\n> I think that optimal value of number of lock partitions should depend\n> not on number of connections\n> but on number of available CPU cores and so expected level on concurrency.\n> It is hard to propose some portable way to obtain this number.\n> This is why I think that GUCs is better solution.\n> Certainly I realize that it is very dangerous parameter which should be\n> changed with special care.\n> Not only because of MAX_SIMUL_LWLOCKS.\n>\n> There are few places in Postgres when it tries to lock all partitions\n> (deadlock detector, logical replication,...).\n> If there very thousands of partitions, then such lock will be too\n> expensive and we get yet another\n> popular Postgres program: \"deadlock detection storm\" when due to high\n> contention between backends lock can not be obtained\n> in deadlock timeout and so initiate deadlock detection. Simultaneous\n> deadlock detection performed by all backends\n> (which tries to take ALL partitions locks) paralyze the system (TPS\n> falls down to 0).\n> Proposed patch for this problem was also rejected (once again - problem\n> can be reproduced only of powerful server with large number of cores).\n>\n>\n>\n", "msg_date": "Thu, 9 Jul 2020 09:19:34 -0700", "msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres is not able to handle more than 4k tables!?" }, { "msg_contents": "On Thu, Jul 9, 2020 at 6:57 PM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> 2. 
Remember in relation info XID of oldest active transaction at the\n> moment of last autovacuum.\n> At next autovacuum iteration we first of all compare this stored XID\n> with current oldest active transaction XID\n> and bypass vacuuming this relation if XID is not changed.\n\n\nThis option looks good to me independently of the use case under\nconsideration. Long-running transactions are an old and well-known\nproblem. If we can skip some useless work here, let's do this.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 9 Jul 2020 19:37:46 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres is not able to handle more than 4k tables!?" }, { "msg_contents": "Greetings,\n\n* Konstantin Knizhnik (k.knizhnik@postgrespro.ru) wrote:\n> It makes me think about two possible optimizations:\n> \n> 1. Provide separate invalidation messages for relation metadata and its\n> statistic.\n> So update of statistic should not invalidate relation cache.\n> The main problem with this proposal is that pg_class contains relpages and\n> reltuples columns which conceptually are part of relation statistic\n> but stored in relation cache. If relation statistic is updated, then most\n> likely this fields are also changed. So we have to remove this relation\n> from relation cache in any case.\n\nI realize this is likely to go over like a lead balloon, but the churn\nin pg_class from updating reltuples/relpages has never seemed all that\ngreat to me when just about everything else is so rarely changed, and\nonly through some user DDL action- and I agree that it seems like those\nparticular columns are more 'statistics' type of info and less info\nabout the definition of the relation. Other columns that do get changed\nregularly are relfrozenxid and relminmxid.
I wonder if it's possible to\nmove all of those elsewhere- perhaps some to the statistics tables as\nyou seem to be alluding to, and the others to $somewhereelse that is\ndedicated to tracking that information which VACUUM is primarily\nconcerned with.\n\n> 2. Remember in relation info XID of oldest active transaction at the moment\n> of last autovacuum.\n> At next autovacuum iteration we first of all compare this stored XID with\n> current oldest active transaction XID\n> and bypass vacuuming this relation if XID is not changed.\n> \n> Thoughts?\n\nThat sounds like an interesting optimization and I agree it'd be nice to\navoid the re-run of autovacuum when we can tell that there's not going\nto be anything more we can do. As noted above, for my part, I think\nit'd be nice to move that kind of ongoing maintenance/updates out of\npg_class, but just in general I agree with the idea to store that info\nsomewhere and wait until there's actually been progress in the global\nxmin before re-running a vacuum on a table. If we can do that somewhere\noutside of pg_class, I think that'd be better, but if no one is up for\nthat kind of a shake-up, then maybe we just put it in pg_class and deal\nwith the churn there.\n\n> >>So, that's really the core of your problem. We don't promise that\n> >>you can run several thousand backends at once. Usually it's recommended\n> >>that you stick a connection pooler in front of a server with (at most)\n> >>a few hundred backends.\n> >Sure, but that doesn't mean things should completely fall over when we\n> >do get up to larger numbers of backends, which is definitely pretty\n> >common in larger systems. 
I'm pretty sure we all agree that using a\n> >connection pooler is recommended, but if there's things we can do to\n> >make the system work at least a bit better when folks do use lots of\n> >connections, provided we don't materially damage other cases, that's\n> >probably worthwhile.\n> \n> I also think that Postgres performance should degrade gradually with\n> increasing number\n> of active backends. Actually further investigations of this particular case\n> shows that such large number of\n> database connections was caused by ... Postgres slowdown.\n> During normal workflow number of active backends is few hundreds.\n> But \"invalidation storm\" cause hangout of queries, so user application has\n> to initiate more and more new connections to perform required actions.\n> Yes, this may be not the best behavior of application in this case. At least\n> it should first terminate current connection using pg_terminate_backend. I\n> just want to notice that large number of backends was not the core of the\n> problem.\n\nYeah, this is all getting back to the fact that we don't have an\nacceptance criteria or anything like that, where we'd actually hold off\non new connections/queries being allowed in while other things are\nhappening. Of course, a connection pooler would address this (and you\ncould use one and have it still look exactly like PG, if you use, say,\npgbouncer in session-pooling mode, but then you need to have the\napplication drop/reconnect and not do its own connection pooling..), but\nit'd be nice to have something in core for this.\n\n> >Making them GUCs does seem like it's a few steps too far... 
but it'd be\n> >nice if we could arrange to have values that don't result in the system\n> >falling over with large numbers of backends and large numbers of tables.\n> >To get a lot of backends, you'd have to set max_connections up pretty\n> >high to begin with- perhaps we should contemplate allowing these values\n> >to vary based on what max_connections is set to?\n> \n> I think that optimal value of number of lock partitions should depend not on\n> number of connections\n> but on number of available CPU cores and so expected level on concurrency.\n> It is hard to propose some portable way to obtain this number.\n> This is why I think that GUCs is better solution.\n\nA GUC for 'number of CPUs' doesn't seem like a bad option to have. How\nto make that work well may be challenging though.\n\n> Certainly I realize that it is very dangerous parameter which should be\n> changed with special care.\n> Not only because of  MAX_SIMUL_LWLOCKS.\n\nSure.\n\n> There are few places in Postgres when it tries to lock all partitions\n> (deadlock detector, logical replication,...).\n> If there very thousands of partitions, then such lock will be too expensive\n> and we get yet another\n> popular Postgres program: \"deadlock detection storm\" when due to high\n> contention between backends lock can not be obtained\n> in deadlock timeout and so initiate deadlock detection. 
Simultaneous\n> deadlock detection performed by all backends\n> (which tries to take ALL partitions locks) paralyze the system (TPS falls\n> down to 0).\n> Proposed patch for this problem was also rejected (once again - problem can\n> be reproduced only of powerful server with large number of cores).\n\nThat does sound like something that would be good to improve on, though\nI haven't looked at the proposed patch or read the associated thread, so\nI'm not sure I can really comment on its rejection specifically.\n\nThanks,\n\nStephen", "msg_date": "Thu, 9 Jul 2020 12:47:35 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Postgres is not able to handle more than 4k tables!?" }, { "msg_contents": "\n\nOn 09.07.2020 19:19, Nikolay Samokhvalov wrote:\n> Hi Konstantin, a silly question: do you consider the workload you have \n> as well-optimized? Can it be optimized further? Reading this thread I \n> have a strong feeling that a very basic set of regular optimization \n> actions is missing here (or not explained): query analysis and \n> optimization based on pg_stat_statements (and, maybe pg_stat_kcache), \n> some method to analyze the state of the server in general, resource \n> consumption, etc.\n>\n> Do you have some monitoring that covers pg_stat_statements?\n>\n> Before looking under the hood, I would use multiple pg_stat_statements \n> snapshots (can be analyzed using, say, postgres-checkup or pgCenter) \n> to understand the workload and identify the heaviest queries -- first \n> of all, in terms of total_time, calls, shared buffers reads/hits, \n> temporary files generation. Which query groups are Top-N in each \n> category, have you looked at it?\n>\n> You mentioned some crazy numbers for the planning time, but why not to \n> analyze the picture holistically and see the overall numbers? Those \n> queries that have increased planning time, what their part of \n> total_time, on the overall picture, in %? 
(Unfortunately, we cannot \n> see Top-N by planning time in pg_stat_statements till PG13, but it \n> doesn't mean that we cannot have some good understanding of overall \n> picture today, it just requires more work).\n>\n> If workload analysis & optimization was done holistically already, or \n> not possible due to some reason — pardon me. But if not and if your \n> primary goal is to improve this particular setup ASAP, then the topic \n> could be started in the -performance mailing list first, discussing \n> the workload and its aspects, and only after it's done, raised in \n> -hackers. No?\n\nCertainly, both we and the customer have done workload analysis & optimization.\nIt is not a problem of particular queries, bad plans, resource \nexhaustion,...\n\nUnfortunately there are many scenarios where Postgres demonstrates not a \ngradual degradation of performance with increasing workload,\nbut a \"snow avalanche\", where negative feedback causes very fast paralysis of \nthe system.\n\nThis case is just one of these scenarios. It is hard to say for sure what \ntriggers the avalanche... A long-living transaction, a huge number of tables,\naggressive autovacuum settings... But there is a cascade of negative \nevents which causes a system that normally functions for months to stop \nworking at all.\n\nIn this particular case we have the following chain:\n\n- a long-living transaction causes autovacuum to send a lot of invalidation \nmessages\n- these messages cause overflow of the invalidation message queues, forcing \nbackends to invalidate their caches and reload from the catalog.\n- the too-small fastpath lock cache causes many concurrent accesses \nto the shared lock hash\n- contention for the LW-locks, caused by the small number of lock partitions, causes \nstarvation\n\n\n\n", "msg_date": "Thu, 9 Jul 2020 19:49:26 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Postgres is not able to handle more than 4k tables!?"
}, { "msg_contents": "Great idea.\n\nIn addition to this, it would be good to consider another optimization for\nthe default transaction isolation level: making autovacuum clean dead\ntuples in relations that are not currently used in any transaction and when\nthere are no IN_PROGRESS transactions running at RR or S level (which is a\nvery common case because RC is the default level and this is what is\nactively used in heavily loaded OLTP systems which often suffer from\nlong-running transactions). I don't know the details of how easy it would\nbe to implement, but it has always puzzled me that autovacuum has a global XID\n\"horizon\".\n\nWith such an optimization, the \"hot_standby_feedback=on\" mode could be\nimplemented also more gracefully, reporting \"min(xid)\" for ongoing\ntransactions on standbys separately for RC and RR/S levels.\n\nWithout this, we cannot have good performance for HTAP use cases for\nPostgres – the presence of just a small number of long-running\ntransactions, indeed, is known to kill the performance of OLTP workloads\nquickly. And it leads to much faster bloat growth than necessary.\n\nHowever, maybe I'm wrong in these considerations, or it's impossible / too\ndifficult to implement.\n\nOn Thu, Jul 9, 2020 at 9:38 AM Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> On Thu, Jul 9, 2020 at 6:57 PM Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru> wrote:\n> > 2. Remember in relation info XID of oldest active transaction at the\n> > moment of last autovacuum.\n> > At next autovacuum iteration we first of all compare this stored XID\n> > with current oldest active transaction XID\n> > and bypass vacuuming this relation if XID is not changed.\n>\n>\n> This option looks good for me independently of the use case under\n> consideration. Long-running transactions are an old and well-known\n> problem.
If we can skip some useless work here, let's do this.\n>\n> ------\n> Regards,\n> Alexander Korotkov\n>\n>\n>", "msg_date": "Thu, 9 Jul 2020 12:00:02 -0700", "msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres is not able to handle more than 4k tables!?" }, { "msg_contents": "Greetings,\n\nWe generally prefer that you don't top-post on these lists.\n\n* Nikolay Samokhvalov (samokhvalov@gmail.com) wrote:\n> In addition to this, it would be good to consider another optimization for\n> the default transaction isolation level: making autovacuum to clean dead\n> tuples in relations that are not currently used in any transaction and when\n> there are no IN_PROGRESS transactions running at RR or S level (which is a\n> very common case because RC is the default level and this is what is\n> actively used in heavily loaded OLTP systems which often suffer from\n> long-running transactions). I don't know the details of how easy it would\n> be to implement, but it always wondered that autovacuum has the global XID\n> \"horizon\".\n\nYeah, I've had thoughts along the same lines, though I had some ideas\nthat we could actually manage to support it even with RR (at least...\nnot sure about serializable) by considering what tuples the transactions\nin the system could actually see (eg: even with RR, a tuple created\nafter that transaction started and was then deleted wouldn't ever be\nable to be seen and therefore could be cleaned up..).
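To make that a bit more concrete, here is a toy sketch of the difference between today's single-horizon test and the per-snapshot test I'm hand-waving about. All of the names and the one-cutoff snapshot model are invented for illustration- this is not the actual PostgreSQL visibility code, which also has to handle XID wraparound, in-progress XID arrays, subtransactions, hint bits, and so on:

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t ToyXid;            /* no wraparound in this toy model */
#define TOY_INVALID_XID 0

/* A snapshot reduced to a single cutoff: XIDs below xmax are assumed
 * committed and visible, XIDs at or above it are invisible. */
typedef struct ToySnapshot
{
    ToyXid      xmax;
} ToySnapshot;

/* A heap tuple reduced to its inserting and deleting XIDs. */
typedef struct ToyTuple
{
    ToyXid      xmin;               /* inserting XID (assumed committed) */
    ToyXid      xmax;               /* deleting XID, TOY_INVALID_XID if live */
} ToyTuple;

bool
visible_to(const ToyTuple *tup, const ToySnapshot *snap)
{
    if (tup->xmin >= snap->xmax)
        return false;               /* inserted after the snapshot was taken */
    if (tup->xmax != TOY_INVALID_XID && tup->xmax < snap->xmax)
        return false;               /* the deletion is visible to the snapshot */
    return true;
}

/* What VACUUM effectively does today: removable only if the deletion
 * happened before the oldest snapshot's horizon. */
bool
dead_by_global_horizon(const ToyTuple *tup, const ToySnapshot *snaps, int n)
{
    ToyXid      oldest = snaps[0].xmax;

    for (int i = 1; i < n; i++)
        if (snaps[i].xmax < oldest)
            oldest = snaps[i].xmax;
    return tup->xmax != TOY_INVALID_XID && tup->xmax < oldest;
}

/* The per-snapshot idea: removable if no live snapshot can see it. */
bool
dead_to_every_snapshot(const ToyTuple *tup, const ToySnapshot *snaps, int n)
{
    if (tup->xmax == TOY_INVALID_XID)
        return false;               /* still live */
    for (int i = 0; i < n; i++)
        if (visible_to(tup, &snaps[i]))
            return false;
    return true;
}
```

In this model, a tuple inserted and then deleted entirely after an old repeatable-read snapshot was taken fails the global-horizon test (its xmax is newer than the oldest horizon) yet is invisible to every snapshot, so only the per-snapshot test would let VACUUM reclaim it.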
A further thought\non that was to only spend that kind of effort once a tuple had aged a\ncertain amount, though it depends a great deal on exactly what would\nneed to be done for this.\n\nUnfortunately, anything in this area is likely to carry a good bit of\nrisk associated with it as VACUUM doing the wrong thing would be quite\nbad.\n\nThanks,\n\nStephen", "msg_date": "Thu, 9 Jul 2020 15:07:10 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Postgres is not able to handle more than 4k tables!?" }, { "msg_contents": "\nOn 7/8/20 11:41 PM, Konstantin Knizhnik wrote:\n>\n> So looks like NUM_LOCK_PARTITIONS and MAXNUMMESSAGES constants have \n> to be replaced with GUCs.\n> To avoid division, we can specify log2 of this values, so shift can be \n> used instead.\n> And MAX_SIMUL_LWLOCKS should be defined as NUM_LOCK_PARTITIONS + \n> NUM_INDIVIDUAL_LWLOCKS + NAMED_LWLOCK_RESERVE.\n>\nBecause I was involved in this particular case and don't want it to \nbecome a habit, I'm volunteering to test whatever patch this discussion \nwill produce.\n\n-- \nGrigory Smolkin\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Thu, 9 Jul 2020 22:16:37 +0300", "msg_from": "Grigory Smolkin <g.smolkin@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Postgres is not able to handle more than 4k tables!?" }, { "msg_contents": "On Thu, Jul 9, 2020 at 10:00 PM Nikolay Samokhvalov\n<samokhvalov@gmail.com> wrote:\n> In addition to this, it would be good to consider another optimization for the default transaction isolation level: making autovacuum to clean dead tuples in relations that are not currently used in any transaction and when there are no IN_PROGRESS transactions running at RR or S level (which is a very common case because RC is the default level and this is what is actively used in heavily loaded OLTP systems which often suffer from long-running transactions.
I don't know the details of how easy it would be to implement, but it always wondered that autovacuum has the global XID \"horizon\".\n>\n> With such an optimization, the \"hot_standby_feedback=on\" mode could be implemented also more gracefully, reporting \"min(xid)\" for ongoing transactions on standbys separately for RC and RR/S levels.\n\nYes, the current way of calculation of dead tuples is lossy, because\nwe only rely on the oldest xid. However, if we would keep the oldest\nsnapshot instead of oldest xmin, long-running transactions wouldn't be\nsuch a disaster. I don't think this is feasible with the current\nsnapshot model, because keeping full snapshots instead of just xmins\nwould bloat shared-memory structs and complicate computations. But\nCSN can certainly support this optimization.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Fri, 10 Jul 2020 00:38:07 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres is not able to handle more than 4k tables!?" }, { "msg_contents": "From: Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\r\n> Unfortunately we have not to wait for decade or two.\r\n> Postgres is faced with multiple problems at existed multiprocessor\r\n> systems (64, 96,.. cores).\r\n> And it is not even necessary to initiate thousands of connections: just\r\n> enough to load all this cores and let them compete for some\r\n> resource (LW-lock, buffer,...). Even standard pgbench/YCSB benchmarks\r\n> with zipfian distribution may illustrate this problems.\r\n\r\nI concur with you. VMs and bare metal machines with 100~200 CPU cores and TBs of RAM are already available even on public clouds. The users easily set max_connections to a high value like 10,000, create thousands or tens of thousands of relations, and expect it to go smoothly. 
Although it may be a horror for PG developers who know the internals well, Postgres has grown into a great database to be relied upon.\r\n\r\nBesides, I don't want people to think like \"Postgres cannot scale up on one machine, so we need scale-out.\" I understand some form of scale-out is necessary for large-scale analytics and web-scale multitenant OLTP, but it would be desirable to be able to cover the OLTP workloads for one organization/region with the advances in hardware and Postgres leveraging those advances, without something like Oracle RAC.\r\n\r\n\r\n> There were many proposed patches which help to improve this situation.\r\n> But as far as this patches increase performance only at huge servers\r\n> with large number of cores and show almost no\r\n> improvement (or even some degradation) at standard 4-cores desktops,\r\n> almost none of them were committed.\r\n> Consequently our customers have a lot of troubles trying to replace\r\n> Oracle with Postgres and provide the same performance at same\r\n> (quite good and expensive) hardware.\r\n\r\nYeah, it's a pity that the shiny-looking patches from Postgres Pro (mostly from Konstantin san?) -- autoprepare, built-in connection pooling, fair lwlock, and revolutionary multi-threaded backend -- haven't gained much attention.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n", "msg_date": "Fri, 10 Jul 2020 02:10:20 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Postgres is not able to handle more than 4k tables!?"
}, { "msg_contents": "On 09.07.2020 22:16, Grigory Smolkin wrote:\n>\n> On 7/8/20 11:41 PM, Konstantin Knizhnik wrote:\n>>\n>> So looks like NUM_LOCK_PARTITIONS and MAXNUMMESSAGES  constants have \n>> to be replaced with GUCs.\n>> To avoid division, we can specify log2 of this values, so shift can \n>> be used instead.\n>> And MAX_SIMUL_LWLOCKS should be defined as NUM_LOCK_PARTITIONS + \n>> NUM_INDIVIDUAL_LWLOCKS + NAMED_LWLOCK_RESERVE.\n>>\n> Because I was involved in this particular case and don`t want it to \n> became a habit, I`m volunteering to test whatever patch this \n> discussion will produce.\n>\nYou are welcome:)", "msg_date": "Fri, 10 Jul 2020 10:18:36 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Postgres is not able to handle more than 4k tables!?" }, { "msg_contents": "On Thu, 2020-07-09 at 12:47 -0400, Stephen Frost wrote:\n> I realize this is likely to go over like a lead balloon, but the churn\n> in pg_class from updating reltuples/relpages has never seemed all that\n> great to me when just about everything else is so rarely changed, and\n> only through some user DDL action- and I agree that it seems like those\n> particular columns are more 'statistics' type of info and less info\n> about the definition of the relation. Other columns that do get changed\n> regularly are relfrozenxid and relminmxid. 
I wonder if it's possible to\n> move all of those elsewhere- perhaps some to the statistics tables as\n> you seem to be alluding to, and the others to $somewhereelse that is\n> dedicated to tracking that information which VACUUM is primarily\n> concerned with.\n\nPerhaps we could create pg_class with a fillfactor less than 100\nso we get HOT updates there.\nThat would be less invasive.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Fri, 10 Jul 2020 09:24:25 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Postgres is not able to handle more than 4k tables!?" }, { "msg_contents": "On Fri, Jul 10, 2020 at 02:10:20AM +0000, tsunakawa.takay@fujitsu.com\nwrote:\n> > There were many proposed patches which help to improve this\n> > situation. But as far as this patches increase performance only\n> > at huge servers with large number of cores and show almost no\n> > improvement (or even some degradation) at standard 4-cores desktops,\n> > almost none of them were committed. Consequently our customers have\n> > a lot of troubles trying to replace Oracle with Postgres and provide\n> > the same performance at same (quite good and expensive) hardware.\n>\n> Yeah, it's a pity that the shiny-looking patches from Postgres Pro\n> (mostly from Konstantin san?) -- autoprepare, built-in connection\n> pooling, fair lwlock, and revolutionary multi-threaded backend --\n> haven't gained much attention.\n\nYeah, it is probably time for us to get access to a current large-scale\nmachine again and really find the bottlenecks. We seem to need this\nevery few years.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 14 Jul 2020 18:59:03 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Postgres is not able to handle more than 4k tables!?"
}, { "msg_contents": "On 2020-Jul-10, Konstantin Knizhnik wrote:\n\n> @@ -3076,6 +3080,10 @@ relation_needs_vacanalyze(Oid relid,\n> \t\tinstuples = tabentry->inserts_since_vacuum;\n> \t\tanltuples = tabentry->changes_since_analyze;\n> \n> +\t\trel = RelationIdGetRelation(relid);\n> +\t\toldestXmin = TransactionIdLimitedForOldSnapshots(GetOldestXmin(rel, PROCARRAY_FLAGS_VACUUM), rel);\n> +\t\tRelationClose(rel);\n\n*cough*\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 14 Jul 2020 19:17:10 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Postgres is not able to handle more than 4k tables!?" }, { "msg_contents": "\n\nOn 15.07.2020 02:17, Alvaro Herrera wrote:\n> On 2020-Jul-10, Konstantin Knizhnik wrote:\n>\n>> @@ -3076,6 +3080,10 @@ relation_needs_vacanalyze(Oid relid,\n>> \t\tinstuples = tabentry->inserts_since_vacuum;\n>> \t\tanltuples = tabentry->changes_since_analyze;\n>> \n>> +\t\trel = RelationIdGetRelation(relid);\n>> +\t\toldestXmin = TransactionIdLimitedForOldSnapshots(GetOldestXmin(rel, PROCARRAY_FLAGS_VACUUM), rel);\n>> +\t\tRelationClose(rel);\n> *cough*\n>\nSorry, Alvaro.\nCan you explain this *cough*?\nYou didn't like that the relation is opened just to call GetOldestXmin?\nBut this function requires a Relation.
Do you suggest rewriting it so \nthat it is possible to pass just the Oid of the relation?\n\nOr do you think that such calculation of oldestXmin is obscure and \nat least requires some comment?\nActually, I have copied it from vacuum.c and there is a large comment \nexplaining why it is calculated in this way.\nMaybe it is enough to add a reference to vacuum.c?\nOr maybe create some special function for it?\nI just need oldestXmin to be calculated in the same way in vacuum.c and \nautovacuum.c\n\n\n\n", "msg_date": "Wed, 15 Jul 2020 17:28:07 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Postgres is not able to handle more than 4k tables!?" }, { "msg_contents": "On 2020-Jul-15, Konstantin Knizhnik wrote:\n\n> \n> \n> On 15.07.2020 02:17, Alvaro Herrera wrote:\n> > On 2020-Jul-10, Konstantin Knizhnik wrote:\n> > \n> > > @@ -3076,6 +3080,10 @@ relation_needs_vacanalyze(Oid relid,\n> > > \t\tinstuples = tabentry->inserts_since_vacuum;\n> > > \t\tanltuples = tabentry->changes_since_analyze;\n> > > +\t\trel = RelationIdGetRelation(relid);\n> > > +\t\toldestXmin = TransactionIdLimitedForOldSnapshots(GetOldestXmin(rel, PROCARRAY_FLAGS_VACUUM), rel);\n> > > +\t\tRelationClose(rel);\n> > *cough*\n> > \n> Sorry, Alvaro.\n> Can you explain this *cough*\n> You didn't like that relation is opened just to call GetOldestXmin?\n> But this functions requires Relation. Do you suggest to rewrite it so that\n> it is possible to pass just Oid of relation?\n\nAt that point of autovacuum, you don't have a lock on the relation; the\nonly thing you have is a pg_class tuple (and we do it that way on\npurpose as I recall). I think asking relcache for it is dangerous, and\nmoreover requesting relcache for it directly goes counter to our normal\ncoding pattern.
At the very least you should have a comment explaining\nwhy you do it and why it's okay to do it, and also handle the case when\nRelationIdGetRelation returns null.\n\nHowever, looking at the bigger picture I wonder if it would be better to\ntest the getoldestxmin much later in the process to avoid this whole\nissue. Just carry forward the relation until the point where vacuum is\ncalled ... that may be cleaner? And the extra cost is not that much.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 15 Jul 2020 11:03:33 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Postgres is not able to handle more than 4k tables!?" }, { "msg_contents": "On 15.07.2020 18:03, Alvaro Herrera wrote:\n> On 2020-Jul-15, Konstantin Knizhnik wrote:\n>\n>>\n>> On 15.07.2020 02:17, Alvaro Herrera wrote:\n>>> On 2020-Jul-10, Konstantin Knizhnik wrote:\n>>>\n>>>> @@ -3076,6 +3080,10 @@ relation_needs_vacanalyze(Oid relid,\n>>>> \t\tinstuples = tabentry->inserts_since_vacuum;\n>>>> \t\tanltuples = tabentry->changes_since_analyze;\n>>>> +\t\trel = RelationIdGetRelation(relid);\n>>>> +\t\toldestXmin = TransactionIdLimitedForOldSnapshots(GetOldestXmin(rel, PROCARRAY_FLAGS_VACUUM), rel);\n>>>> +\t\tRelationClose(rel);\n>>> *cough*\n>>>\n>> Sorry, Alvaro.\n>> Can you explain this *cough*\n>> You didn't like that relation is opened just to call GetOldestXmin?\n>> But this functions requires Relation. Do you suggest to rewrite it so that\n>> it is possible to pass just Oid of relation?\n> At that point of autovacuum, you don't have a lock on the relation; the\n> only thing you have is a pg_class tuple (and we do it that way on\n> purpose as I recall). I think asking relcache for it is dangerous, and\n> moreover requesting relcache for it directly goes counter our normal\n> coding pattern.
At the very least you should have a comment explaining\n> why you do it and why it's okay to do it, and also handle the case when\n> RelationIdGetRelation returns null.\n>\n> However, looking at the bigger picture I wonder if it would be better to\n> test the getoldestxmin much later in the process to avoid this whole\n> issue. Just carry forward the relation until the point where vacuum is\n> called ... that may be cleaner? And the extra cost is not that much.\n>\nThank you for the explanation.\nI have prepared a new version of the patch which opens the relation with care.\nConcerning your suggestion to perform this check later (in vacuum_rel() \nI guess?),\nI tried to implement it but ran into another problem.\n\nRight now information about autovacuum_oldest_xmin for a relation is stored \nin the statistics (PgStat_StatTabEntry),\ntogether with other autovacuum-related fields like \nautovac_vacuum_timestamp, autovac_analyze_timestamp, ...\nI am not sure that it is the right place for storing autovacuum_oldest_xmin, \nbut I didn't want to organize yet another hash table in shared memory \njust for keeping this field. Maybe you can suggest something better...\nBut right now it is stored here when vacuum is completed.\n\nPgStat_StatTabEntry is obtained by get_pgstat_tabentry_relid() which \nalso needs pgstat_fetch_stat_dbentry(MyDatabaseId) and \npgstat_fetch_stat_dbentry(InvalidOid). I do not want to copy all this \nstuff to vacuum.c.\nIt seems to me to be easier to open the relation in \nrelation_needs_vacanalyze(), isn't it?", "msg_date": "Wed, 15 Jul 2020 20:10:06 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Postgres is not able to handle more than 4k tables!?" } ]
[ { "msg_contents": "Hello, hackers.\n\n\nI'd like to suggest\na performance-oriented feature for COPY FROM.\nIt's an UNLOGGED clause, which means data loading that skips WAL generation.\n\nIt would work as follows.\n1. Acquire ACCESS EXCLUSIVE locks on the target table and its indexes.\n2. Mark those relations 'unrecoverable' in pg_class/pg_index.\n3. Issue one WAL record to indicate when COPY UNLOGGED was executed.\n4. Execute the data loading, bypassing WAL generation for the data.\n5. Sync the data to disk by performing a checkpoint.\n\nDuring recovery,\nI'd like to make postgres recognize both the 'unrecoverable' flags set in the second step\nand the WAL record issued in the third step,\nin order to recover the data that the target table had before the execution of COPY UNLOGGED.\n\nOracle's SQL*Loader has\nsuch a feature, called UNRECOVERABLE, to boost loading speed\nfor workloads with severe time limits.\n\n\nBest,\n\tTakamichi Osumi\n\n\n", "msg_date": "Thu, 9 Jul 2020 02:36:36 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": true, "msg_subject": "Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "\n\nOn 2020/07/09 11:36, osumi.takamichi@fujitsu.com wrote:\n> Hello, hackers.\n> \n> \n> I'd like to suggest\n> a performance-oriented feature for COPY FROM.\n> It's an UNLOGGED clause, which means data loading that skips WAL generation.\n\nCan this feature work safely with wal_level=replica or logical?\nOr can it work only with wal_level=minimal? If yes, what is the main\ndifference between this method and wal_skip_threshold?\n\n> \n> It would work as follows.\n> 1. Acquire ACCESS EXCLUSIVE locks on the target table and its indexes.\n> 2. Mark those relations 'unrecoverable' in pg_class/pg_index.\n> 3. Issue one WAL record to indicate when COPY UNLOGGED was executed.\n> 4. Execute the data loading, bypassing WAL generation for the data.\n> 5.
Sync the data to disk by performing checkpoint.\n\nWhat happens if the server crashes before #5? Since no WAL for\ndata-loading can be replayed, the target table should be truncated?\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 9 Jul 2020 12:21:24 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "On Wednesday, July 8, 2020, osumi.takamichi@fujitsu.com <\nosumi.takamichi@fujitsu.com> wrote:\n>\n> 5. Sync the data to disk by performing checkpoint.\n>\n\nThis step seems to invalidate the idea outright. The checkpoint command is\nsuperuser only and isn’t table specific. This seems to require both those\nthings to be changed.\n\nAside from that, though, how does this improve upon the existing capability\nto copy into an unlogged temporary table?\n\nDavid J.", "msg_date": "Wed, 8 Jul 2020 22:07:46 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "Fujii-san\n\nThank you for your interest in this idea.\n\n> This feature can work safely with wal_level=replica or logical?\n> Or it can work only with wal_level=minimal?\n>If yes, what is the main difference\n> between this method and wal_skip_threshold?\nI'm thinking this feature can be used\nwhen you set any parameters of wal_level.\nBesides that, data loading into a table *with some data*\nshould be allowed. This means I don't want to limit \nthe usage of this feature only for initial load \nfor empty table or under condition of 'minimal' wal_level in other words.\n\nLet me explain more detail of the background.\n\nI got a report that one of my customers says that \nmultiple COPY from multiple sessions have a bottleneck to write WAL.\nHer use case was DWH system using postgres mainly to load dozens of GB (or more) log data\nfrom multiple data sources to execute night batch processing everyday.\n\nHer scenario included both initial load to empty table\nand load to table that already has records.\n\nIn passing, she also used our company's product of parallel loader,\nto load data with dozens of, nearly 100, BGWs at the same time.\nThrough investigation of iostat,\nthey found the same problem that CPU worked for WAL write intensively.\n\nThis could happen after the implementation \nof Parallel copy that is really hotly discussed and reviewed in the mailing lists now.\nSo I thought it's good to discuss this in advance.\n\n> > 4. Execute the data loading, bypassing WAL generation for data.\n> > 5. Sync the data to disk by performing checkpoint.\n> \n> What happens if the server crashes before #5? 
Since no WAL for data-loading can\n> be replayed, the target table should be truncated?\nMy answer for this is just to load that COPY data again.\nIt's because the application itself knows what kind of data was loaded\nfrom the command.\n\nLastly, let me add some functional specifications of this clause.\nThe syntax is \"COPY tbl FROM ‘/path/to/input/file’ UNLOGGED\".\n\nIn terms of streaming replication,\nI'd like to ask for advice of other members in this community.\nNow, I think this feature requires to re-create standby\nimmediately after the COPY UNLOGGED like Oracle's clause\nbut I wanna make postgres more attractive than Oracle to users.\nDoes someone have any ideas ?\n\n\nRegards,\n\tTakamichi Osumi\n\n\n", "msg_date": "Thu, 9 Jul 2020 06:17:12 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "From: David G. Johnston <david.g.johnston@gmail.com>\r\n> This step seems to invalidate the idea outright. The checkpoint command is superuser only and isn’t table specific. This seems to require both those things to be changed.\r\n\r\nPerhaps FlushRelationBuffers() followed by smgrsync() can be used instead. Or, depending on the assumed use case (e.g. the DBA adds data regularly for analytics), we may allow COPY UNLOGGED to be used only by superusers and some special pg_ roles, and COPY UNLOGGED performs checkpoints. Anyway, I kind of feel that COPY UNLOGGED needs some special privileges, because it renders the table unrecoverable and not being replicated to the standby.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa", "msg_date": "Thu, 9 Jul 2020 06:21:43 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "On Thu, Jul 9, 2020 at 11:47 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n>\n> In terms of streaming replication,\n> I'd like to ask for advice of other members in this community.\n> Now, I think this feature requires to re-create standby\n> immediately after the COPY UNLOGGED like Oracle's clause\n>\n\nThis seems quite limiting to me and I think the same will be true for\nsubscribers that get data via logical replication, right? 
I suspect\nthat the user will perform such an operation from time-to-time and\neach time creating replica again could be really time-consuming and\nmaybe more than it will save by making this operation unlogged.\n\nI wonder do they really need to replicate such a table and its data\nbecause each time creating a replica from scratch after an operation\non one table doesn't sound advisable to me?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Jul 2020 17:50:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "\n\nOn 2020/07/09 15:17, osumi.takamichi@fujitsu.com wrote:\n> Fujii-san\n> \n> Thank you for your interest in this idea.\n> \n>> This feature can work safely with wal_level=replica or logical?\n>> Or it can work only with wal_level=minimal?\n>> If yes, what is the main difference\n>> between this method and wal_skip_threshold?\n> I'm thinking this feature can be used\n> when you set any parameters of wal_level.\n> Besides that, data loading into a table *with some data*\n> should be allowed. 
This means I don't want to limit\n> the usage of this feature only for initial load\n> for empty table or under condition of 'minimal' wal_level in other words.\n> \n> Let me explain more detail of the background.\n> \n> I got a report that one of my customers says that\n> multiple COPY from multiple sessions have a bottleneck to write WAL.\n> Her use case was DWH system using postgres mainly to load dozens of GB (or more) log data\n> from multiple data sources to execute night batch processing everyday.\n> \n> Her scenario included both initial load to empty table\n> and load to table that already has records.\n\nYes, I understand this use case.\n\n\n> \n> In passing, she also used our company's product of parallel loader,\n> to load data with dozens of, nearly 100, BGWs at the same time.\n> Through investigation of iostat,\n> they found the same problem that CPU worked for WAL write intensively.\n> \n> This could happen after the implementation\n> of Parallel copy that is really hotly discussed and reviewed in the mailing lists now.\n> So I thought it's good to discuss this in advance.\n> \n>>> 4. Execute the data loading, bypassing WAL generation for data.\n>>> 5. Sync the data to disk by performing checkpoint.\n>>\n>> What happens if the server crashes before #5? Since no WAL for data-loading can\n>> be replayed, the target table should be truncated?\n> My answer for this is just to load that COPY data again.\n> It's because the application itself knows what kind of data was loaded\n> from the command.\n\nWhen the server crashes before #5, some table and index pages that #4 loaded\nthe data into might have been partially synced to the disk because of bgwriter\nor shared buffer replacement. So ISTM that the target table needs to be\ntruncated to empty during recovery and users need to load whole the data into\nthe table again. Is my understanding right? 
If yes, isn't this the same feature\nas that UNLOGGED table provides?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 9 Jul 2020 22:03:10 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "Hi David Johnston\r\n\r\nThank you for your comment.\r\nAside from that, though, how does this improve upon the existing capability to copy into an unlogged temporary table?\r\n\r\n[>] An unlogged temporary table can’t be inherited over sessions, first of all.\r\nAnd an unlogged table needs to be recreated due to startup truncation of the table’s content\r\nwhen the server crashes.\r\nIf you hold massive data in an unlogged table,\r\nyou’d be forced to spend much time to recover it. This isn’t good.\r\n\r\nSo I’m thinking that COPY UNLOGGED would provide a more flexible way of keeping data\r\nfor COPY FROM users.\r\n\r\nI’m considering that the feature gives them a choice: during ordinary operation\r\nyou can keep WAL logging for a target table, and when you need high-speed loading\r\nyou can bypass WAL generation for it.\r\n\r\nTo achieve this, we have to\r\nconsider a new idea: loaded data would be added\r\nat the end of all other pages, and those would be detached\r\nif the server crashes during the UNLOGGED loading processing, for example.\r\n\r\nBy the way, “ALTER TABLE tbl SET UNLOGGED” is supported by postgres.\r\nYou may think it’s OK to change a LOGGED table to an UNLOGGED table by this command.\r\nBut, it copies the whole relation once actually. 
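A quick way to see the rewrite described above is to watch pg_class.relfilenode change across the status switch. This is an illustrative sketch for a scratch database (the table name "foo" is made up):

```sql
-- Each SET UNLOGGED / SET LOGGED rewrites the table's storage,
-- which shows up as a new relfilenode in pg_class.
CREATE TABLE foo (id int);
SELECT relfilenode FROM pg_class WHERE relname = 'foo';  -- note the value
ALTER TABLE foo SET UNLOGGED;
SELECT relfilenode FROM pg_class WHERE relname = 'foo';  -- changed: rewritten
ALTER TABLE foo SET LOGGED;
SELECT relfilenode FROM pg_class WHERE relname = 'foo';  -- changed again
```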
(This isn’t written in the manual.)\r\nSo this command becomes slow if the table the command is applied to contains a lot of data.\r\nThus changing the table’s status of UNLOGGED/LOGGED also requires cost at the moment and I think this copy is an obstacle for switching that table’s status.\r\n\r\nThe discussion of the reason is written in the url below.\r\nhttps://www.postgresql.org/message-id/flat/CAFcNs%2Bpeg3VPG2%3Dv6Lu3vfCDP8mt7cs6-RMMXxjxWNLREgSRVQ%40mail.gmail.com\r\n\r\nBest\r\n Takamichi Osumi", "msg_date": "Fri, 10 Jul 2020 13:38:40 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com> writes:\n>> Aside from that, though, how does this improve upon the existing capability to copy into an unlogged temporary table?\n\n> [>] unlogged temporary table can’t be inherited over sessions first of all.\n\nUnlogged tables don't have to be temporary.\n\n> And unlogged table needs to be recreated due to startup truncation of the table’s content\n> when the server crashes.\n\nIndeed, and your proposed feature would extend that un-safety to tables\nthat are NOT marked unlogged, which is not good.\n\nAFAICS, we can already accomplish basically the same thing as what you\nwant to do like this:\n\nalter table foo set unlogged;\ncopy foo from ...;\nalter table foo set logged;\n\nThe mechanics of that are already well worked out. It's expensive,\nno doubt about that, but I think you're just fooling yourself to\nimagine that any shortcuts are possible. A mix of unlogged and logged\ndata is never going to work safely.\n\n> To achieve this, we have to\n> consider a new idea like loaded data’d be added\n> at the end of the all other pages and detach those\n> if the server crashes during the UNLOGGED loading processing for example.\n\nYou keep on ignoring the indexes... 
not to mention replication.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Jul 2020 10:01:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "Hi,\r\n\r\n> AFAICS, we can already accomplish basically the same thing as what you want to\r\n> do like this:\r\n> \r\n> alter table foo set unlogged;\r\n> copy foo from ...;\r\n> alter table foo set logged;\r\nThis didn't satisfy what I wanted.\r\nIn case that 'foo' has huge amount of rows at the beginning,\r\nthis example would spend much time to copy\r\nthe contents of 'foo' twice to swap relfilenodes atomically.\r\nWhen that loaded data by COPY is big too, its execution time becomes much longer.\r\n\r\n> You keep on ignoring the indexes... not to mention replication.\r\nSorry for having made you think like this.\r\n\r\nWhen the server crash occurs during data loading of COPY UNLOGGED,\r\nit's a must to keep index consistent of course.\r\nI'm thinking that to rebuild the indexes on the target table would work.\r\n\r\nIn my opinion, UNLOGGED clause must be designed to guarantee that \r\nwhere the data loaded by this clause is written starts from the end of all other data blocks.\r\nPlus, those blocks needs to be protected by any write of other transactions during the copy.\r\nApart from that, the server must be aware of which block is the first block,\r\nor the range about where it started or ended in preparation for the crash.\r\n\r\nDuring the crash recovery, those points are helpful to recognize and detach such blocks\r\nin order to solve a situation that the loaded data is partially synced to the disk and the rest isn't.\r\nIn terms of index, we can recreate index based on the relation's data\r\nprotected by this mechanism above.\r\n\r\nAnother idea of index crash recovery was to\r\ncopy the indexes on the target table as a backup just before loading and\r\nwrite new added indexes from loaded data into 
this temporary index files\r\nin order to localize the new indexes. But, my purpose is to accelerate speed\r\nof data loading under the condition that target table has huge amount of data initially.\r\nTaking this purpose into an evaluation criterion, the initial copy of indexes\r\nwould make the execution slow down. Thus, I choose rebuilding index.\r\n\r\nAnother point I need to add for recovery would be\r\nhow the startup postgres knows the condition of COPY UNLOGGED clause.\r\nMy current idea is to utilize any other system file or\r\ncreate a new system file in the cluster for this clause.\r\nAt least, it would become necessary for postgres to identify\r\nwhich blocks should be detached at the beginning when the command is executed.\r\nTherefore, we need add information for it.\r\n\r\nLastly, I have to admit that \r\nthe status of target table where data is loaded by COPY UNLOGGED would be marked\r\nas invalid and notified to standbys under replication environment\r\nfrom the point in time when the operation takes place.\r\nBut, I'm going to allow users with special privileges (like DBA) to use this clause\r\nand this kind of tables would be judged by them not to replicate.\r\nOf course, I'm thinking better idea but now what I can say is like this for replication.\r\n\r\nBest,\r\n\tTakamichi Osumi\r\n", "msg_date": "Fri, 17 Jul 2020 03:04:25 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "On Fri, Jul 17, 2020 at 9:53 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> Lastly, I have to admit that\n> the status of target table where data is loaded by COPY UNLOGGED would be marked\n> as invalid and notified to standbys under replication environment\n> from the point in time when the operation takes place.\n> But, I'm going to allow users with special privileges (like DBA) to use this clause\n> and this 
kind of tables would be judged by them not to replicate.\n> Of course, I'm thinking better idea but now what I can say is like this for replication.\n>\n\nIf you are going to suggest users not to replicate such tables then\nwhy can't you suggest them to create such tables as UNLOGGED in the\nfirst place? Another idea could be that you create an 'unlogged'\ntable, copy the data to it. Then perform Alter Table .. SET Logged\nand attach it to the main table. I think for this you need the main\ntable to be partitioned but I think if that is possible then it might\nbe better than all the hacking you are proposing to do in the server\nfor this special operation.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 Jul 2020 15:33:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "Hi. Amit-san\r\n\r\n\r\n> If you are going to suggest users not to replicate such tables then why can't you\r\n> suggest them to create such tables as UNLOGGED in the first place? Another\r\n> idea could be that you create an 'unlogged'\r\n> table, copy the data to it. Then perform Alter Table .. SET Logged and attach it to\r\n> the main table. I think for this you need the main table to be partitioned but I\r\n> think if that is possible then it might be better than all the hacking you are\r\n> proposing to do in the server for this special operation.\r\nThank you for your comment.\r\n\r\nAt the beginning, I should have mentioned this function was\r\nfor data warehouse, where you need to load large amounts of data\r\nin the shortest amount of time. \r\nSorry for my bad explanation. \r\n\r\nBased on the fact that data warehouse cannot be separated from\r\nusage of applications like B.I. 
tool in general,\r\nwe cannot define unlogged table at the beginning easily.\r\nBasically, such tools don't support to define unlogged table as far as I know.\r\n\r\nAnd if you want to do so, you need *modification or fix of existing application*\r\nwhich is implemented by a third party and commercially available for data analytics.\r\nIn other words, to make CREATE UNLOGGED TABLE available in that application,\r\nyou must revise the product's source code of the application directly,\r\nwhich is an act to invalidate the warranty from the software company of B.I. tool.\r\nIn my opinion, it would be like unrealistic for everyone to do so.\r\n\r\nBest,\r\n\tTakamichi Osumi\r\n", "msg_date": "Wed, 22 Jul 2020 05:41:15 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "On Wed, Jul 22, 2020 at 11:11 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> > If you are going to suggest users not to replicate such tables then why can't you\n> > suggest them to create such tables as UNLOGGED in the first place? Another\n> > idea could be that you create an 'unlogged'\n> > table, copy the data to it. Then perform Alter Table .. SET Logged and attach it to\n> > the main table. I think for this you need the main table to be partitioned but I\n> > think if that is possible then it might be better than all the hacking you are\n> > proposing to do in the server for this special operation.\n> Thank you for your comment.\n>\n> At the beginning, I should have mentioned this function was\n> for data warehouse, where you need to load large amounts of data\n> in the shortest amount of time.\n> Sorry for my bad explanation.\n>\n> Based on the fact that data warehouse cannot be separated from\n> usage of applications like B.I. 
tool in general,\n> we cannot define unlogged table at the beginning easily.\n> Basically, such tools don't support to define unlogged table as far as I know.\n>\n> And if you want to do so, you need *modification or fix of existing application*\n> which is implemented by a third party and commercially available for data analytics.\n> In other words, to make CREATE UNLOGGED TABLE available in that application,\n> you must revise the product's source code of the application directly,\n> which is an act to invalidate the warranty from the software company of B.I. tool.\n> In my opinion, it would be like unrealistic for everyone to do so.\n>\n\nSo, does this mean that the need for data warehouse application can be\nsatisfied if the table would have been an 'UNLOGGED'? However, you\nstill need 'COPY UNLOGGED ..' syntax because they don't have control\nover table definition. I think if this is the case, then the\napplication user should find a way for this. BTW, if the application\nis anyway going to execute a PG native syntax like 'COPY UNLOGGED ..'\nthen why can't they simply set tables as UNLOGGED by using Alter\nTable?\n\nIIUC, I think here the case is that the applications are not allowed\nto change the standard definitions of tables owned/created by B.I tool\nand instead they use some Copy Unlogged sort of syntax provided by\nother databases to load the data at a faster speed and now they expect\nPG to also have a similar alternative. 
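The unlogged-staging route floated earlier in the thread (load into an unlogged table, SET LOGGED once, attach it as a partition) can be sketched roughly as follows. This is a hedged sketch assuming the parent table is range-partitioned; all names, paths, and bounds are illustrative:

```sql
-- Load into an unlogged staging table (its data pages are not WAL-logged),
-- then make it durable once and attach it to the partitioned parent.
CREATE TABLE sales (d date, amount numeric) PARTITION BY RANGE (d);
CREATE UNLOGGED TABLE sales_202007 (d date, amount numeric);
COPY sales_202007 FROM '/path/to/input/file';
ALTER TABLE sales_202007 SET LOGGED;  -- one WAL-logged rewrite
ALTER TABLE sales ATTACH PARTITION sales_202007
    FOR VALUES FROM ('2020-07-01') TO ('2020-08-01');
```

On a crash before SET LOGGED, only the staging table is truncated at startup; the parent and any already-attached partitions stay intact.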
If that is true, then it is\npossible that those database doesn't have 'Create Unlogged Table ...'\ntypes of syntax which PG has and that is why they have provided such\nan alternative for Copy kind of commands.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 23 Jul 2020 09:54:11 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "On Fri, 17 Jul 2020 at 13:23, osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> Hi,\n>\n> > AFAICS, we can already accomplish basically the same thing as what you want to\n> > do like this:\n> >\n> > alter table foo set unlogged;\n> > copy foo from ...;\n> > alter table foo set logged;\n> This didn't satisfy what I wanted.\n> In case that 'foo' has huge amount of rows at the beginning,\n> this example would spend much time to copy\n> the contents of 'foo' twice to swap relfilenodes atomically.\n> When that loaded data by COPY is big too, its execution time becomes much longer.\n>\n> > You keep on ignoring the indexes... 
not to mention replication.\n> Sorry for having made you think like this.\n>\n> When the server crash occurs during data loading of COPY UNLOGGED,\n> it's a must to keep index consistent of course.\n> I'm thinking that to rebuild the indexes on the target table would work.\n>\n> In my opinion, UNLOGGED clause must be designed to guarantee that\n> where the data loaded by this clause is written starts from the end of all other data blocks.\n> Plus, those blocks needs to be protected by any write of other transactions during the copy.\n> Apart from that, the server must be aware of which block is the first block,\n> or the range about where it started or ended in preparation for the crash.\n>\n> During the crash recovery, those points are helpful to recognize and detach such blocks\n> in order to solve a situation that the loaded data is partially synced to the disk and the rest isn't.\n\nHow do online backup and archive recovery work?\n\nSuppose that the user executes pg_basebackup during COPY UNLOGGED\nrunning, the physical backup might have the portion of tuples loaded\nby COPY UNLOGGED but these data are not recovered. It might not be a\nproblem because the operation is performed without WAL records. But\nwhat if an insertion happens after COPY UNLOGGED but before\npg_stop_backup()? I think that a new tuple could be inserted at the\nend of the table, following the data loaded by COPY UNLOGGED. 
With\nyour approach described above, the newly inserted tuple will be\nrecovered during archive recovery, but it either will be removed if we\nreplay the insertion WAL then truncate the table or won’t be inserted\ndue to missing block if we truncate the table then replay the\ninsertion WAL, resulting in losing the tuple although the user got\nsuccessful of insertion.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 24 Jul 2020 17:14:42 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "Hello.\r\n\r\nApologies for the delay.\r\n> > When the server crash occurs during data loading of COPY UNLOGGED,\r\n> > it's a must to keep index consistent of course.\r\n> > I'm thinking that to rebuild the indexes on the target table would work.\r\n> >\r\n> > In my opinion, UNLOGGED clause must be designed to guarantee that\r\n> > where the data loaded by this clause is written starts from the end of all other\r\n> data blocks.\r\n> > Plus, those blocks needs to be protected by any write of other transactions\r\n> during the copy.\r\n> > Apart from that, the server must be aware of which block is the first\r\n> > block, or the range about where it started or ended in preparation for the crash.\r\n> >\r\n> > During the crash recovery, those points are helpful to recognize and\r\n> > detach such blocks in order to solve a situation that the loaded data is partially\r\n> synced to the disk and the rest isn't.\r\n> \r\n> How do online backup and archive recovery work ?\r\n> \r\n> Suppose that the user executes pg_basebackup during COPY UNLOGGED running,\r\n> the physical backup might have the portion of tuples loaded by COPY UNLOGGED\r\n> but these data are not recovered. 
It might not be a problem because the operation\r\n> is performed without WAL records. But what if an insertion happens after COPY\r\n> UNLOGGED but before pg_stop_backup()? I think that a new tuple could be\r\n> inserted at the end of the table, following the data loaded by COPY UNLOGGED.\r\n> With your approach described above, the newly inserted tuple will be recovered\r\n> during archive recovery, but it either will be removed if we replay the insertion\r\n> WAL then truncate the table or won’t be inserted due to missing block if we\r\n> truncate the table then replay the insertion WAL, resulting in losing the tuple\r\n> although the user got successful of insertion.\r\nI consider that from the point in time when COPY UNLOGGED is executed,\r\nany subsequent operations to the data which comes from UNLOGGED operation\r\nalso cannot be recovered even if those issued WAL.\r\n\r\nThis is basically inevitable because subsequent operations \r\nafter COPY UNLOGGED depend on blocks of loaded data without WAL,\r\nwhich means we cannot replay exact operations.\r\n\r\nTherefore, all I can do is to guarantee that \r\nwhen one recovery process ends, the target table returns to the state\r\nimmediately before the COPY UNLOGGED is executed.\r\nThis could be achieved by issuing and notifying the server of an invalidation WAL,\r\nan indicator to stop WAL application toward one specific table after this new type of WAL.\r\nI think I need to implement this mechanism as well for this feature.\r\nThus, I'll take a measure against your concern of confusing data loss.\r\n\r\nFor recovery of the loaded data itself, the user of this clause,\r\nlike DBA or administrator of data warehouse for instance, \r\nwould need to make a backup just after the data loading.\r\nFor some developers, this behavior would seem incomplete because of the heavy user's burden.\r\n\r\nOn the other hand, I'm aware of a fact that Oracle Database has a feature of UNRECOVERABLE clause,\r\nwhich is equivalent to 
what I'm suggesting now in this thread.\r\n\r\nThis data loading without REDO log by the clause is more convenient than what I said above,\r\nbecause it's supported by a tool named Recovery Manager which enables users to make an incremental backup.\r\nThis works to back up only the changed blocks since the previous backup and\r\nremove the manual burden from the user like above.\r\nHere, I have to admit that I cannot design and implement \r\nthis kind of synergistic pair of all features at once for data warehousing.\r\nSo I'd like to make COPY UNLOGGED as the first step.\r\n\r\nThis is the URL of how Oracle database for data warehouse achieves the backup of no log operation while acquiring high speed of data loading.\r\nhttps://docs.oracle.com/database/121/VLDBG/GUID-42825ED1-C4C5-449B-870F-D2C8627CBF86.htm#VLDBG1578\r\n\r\nBest,\r\n\tTakamichi Osumi\r\n", "msg_date": "Thu, 20 Aug 2020 00:18:52 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "At Thu, 20 Aug 2020 00:18:52 +0000, \"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com> wrote in \r\n> Hello.\r\n> \r\n> Apologies for the delay.\r\n> > > When the server crash occurs during data loading of COPY UNLOGGED,\r\n> > > it's a must to keep index consistent of course.\r\n> > > I'm thinking that to rebuild the indexes on the target table would work.\r\n> > >\r\n> > > In my opinion, UNLOGGED clause must be designed to guarantee that\r\n> > > where the data loaded by this clause is written starts from the end of all other\r\n> > data blocks.\r\n> > > Plus, those blocks needs to be protected by any write of other transactions\r\n> > during the copy.\r\n> > > Apart from that, the server must be aware of which block is the first\r\n> > > block, or the range about where it started or ended in preparation for the crash.\r\n> > >\r\n> > > During the crash recovery, those points 
are helpful to recognize and\r\n> > > detach such blocks in order to solve a situation that the loaded data is partially\r\n> > synced to the disk and the rest isn't.\r\n> > \r\n> > How do online backup and archive recovery work ?\r\n> > \r\n> > Suppose that the user executes pg_basebackup during COPY UNLOGGED running,\r\n> > the physical backup might have the portion of tuples loaded by COPY UNLOGGED\r\n> > but these data are not recovered. It might not be a problem because the operation\r\n> > is performed without WAL records. But what if an insertion happens after COPY\r\n> > UNLOGGED but before pg_stop_backup()? I think that a new tuple could be\r\n> > inserted at the end of the table, following the data loaded by COPY UNLOGGED.\r\n> > With your approach described above, the newly inserted tuple will be recovered\r\n> > during archive recovery, but it either will be removed if we replay the insertion\r\n> > WAL then truncate the table or won’t be inserted due to missing block if we\r\n> > truncate the table then replay the insertion WAL, resulting in losing the tuple\r\n> > although the user got successful of insertion.\r\n> I consider that from the point in time when COPY UNLOGGED is executed,\r\n> any subsequent operations to the data which comes from UNLOGGED operation\r\n> also cannot be recovered even if those issued WAL.\r\n> \r\n> This is basically inevitable because subsequent operations \r\n> after COPY UNLOGGED depend on blocks of loaded data without WAL,\r\n> which means we cannot replay exact operations.\r\n> \r\n> Therefore, all I can do is to guarantee that \r\n> when one recovery process ends, the target table returns to the state\r\n> immediately before the COPY UNLOGGED is executed.\r\n> This could be achieved by issuing and notifying the server of an invalidation WAL,\r\n> an indicator to stop WAL application toward one specific table after this new type of WAL.\r\n> I think I need to implement this mechanism as well for this feature.\r\n> 
Thus, I'll take a measure against your concern of confusing data loss.\r\n> \r\n> For recovery of the loaded data itself, the user of this clause,\r\n> like DBA or administrator of data warehouse for instance, \r\n> would need to make a backup just after the data loading.\r\n> For some developers, this behavior would seem incomplete because of the heavy user's burden.\r\n> \r\n> On the other hand, I'm aware of a fact that Oracle Database has a feature of UNRECOVERABLE clause,\r\n> which is equivalent to what I'm suggesting now in this thread.\r\n> \r\n> This data loading without REDO log by the clause is more convenient than what I said above,\r\n> because it's supported by a tool named Recovery Manager which enables users to make an incremental backup.\r\n> This works to back up only the changed blocks since the previous backup and\r\n> remove the manual burden from the user like above.\r\n> Here, I have to admit that I cannot design and implement \r\n> this kind of synergistic pair of all features at once for data warehousing.\r\n> So I'd like to make COPY UNLOGGED as the first step.\r\n> \r\n> This is the URL of how Oracle database for data warehouse achieves the backup of no log operation while acquiring high speed of data loading.\r\n> https://docs.oracle.com/database/121/VLDBG/GUID-42825ED1-C4C5-449B-870F-D2C8627CBF86.htm#VLDBG1578\r\n\r\nAnyway, if the target table is turned back to LOGGED, the succeeding\r\nWAL stream can be polluted by the logs on the table. So any operations\r\nrelying on WAL records are assumed not to be continuable. Not only\r\nuseless, it is harmful. I think we don't accept emitting an\r\ninconsistent WAL stream intentionally while wal_level > minimal.\r\n\r\nYou assert that we could prevent WAL from being redone by the \"invalidation\"\r\nbut it is equivalent to turning the table into UNLOGGED. Why do you\r\ninsist that a table should be labeled as \"LOGGED\" whereas it is\r\nvirtually UNLOGGED? 
That costs nothing (if the table is almost empty).\r\n\r\nIf you want to get the table back to LOGGED without emitting WAL,\r\nwal_level=minimal works. That requires restart twice, and table\r\ncopying, though. It seems like that we can skip copying in this case\r\nbut I'm not sure.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Thu, 20 Aug 2020 15:24:57 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "On Thu, Aug 20, 2020 at 5:49 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> Hello.\n> > > During the crash recovery, those points are helpful to recognize and\n> > > detach such blocks in order to solve a situation that the loaded data is partially\n> > synced to the disk and the rest isn't.\n> >\n> > How do online backup and archive recovery work ?\n> >\n> > Suppose that the user executes pg_basebackup during COPY UNLOGGED running,\n> > the physical backup might have the portion of tuples loaded by COPY UNLOGGED\n> > but these data are not recovered. It might not be a problem because the operation\n> > is performed without WAL records. But what if an insertion happens after COPY\n> > UNLOGGED but before pg_stop_backup()? 
I think that a new tuple could be\n> > inserted at the end of the table, following the data loaded by COPY UNLOGGED.\n> > With your approach described above, the newly inserted tuple will be recovered\n> > during archive recovery, but it either will be removed if we replay the insertion\n> > WAL then truncate the table or won’t be inserted due to missing block if we\n> > truncate the table then replay the insertion WAL, resulting in losing the tuple\n> > although the user got successful of insertion.\n> I consider that from the point in time when COPY UNLOGGED is executed,\n> any subsequent operations to the data which comes from UNLOGGED operation\n> also cannot be recovered even if those issued WAL.\n>\n> This is basically inevitable because subsequent operations\n> after COPY UNLOGGED depend on blocks of loaded data without WAL,\n> which means we cannot replay exact operations.\n>\n> Therefore, all I can do is to guarantee that\n> when one recovery process ends, the target table returns to the state\n> immediately before the COPY UNLOGGED is executed.\n> This could be achieved by issuing and notifying the server of an invalidation WAL,\n> an indicator to stop WAL application toward one specific table after this new type of WAL.\n>\n\nI don't think we can achieve what you want by one special invalidation\nWAL. Consider a case where an update has happened on the page which\nexists before 'Copy Unlogged' operation and while writing that page to\ndisk, the system crashed and the page is half-written. Without the\nspecial WAL mechanism you are proposing to introduce, during recovery,\nwe can replay the full-page-image from WAL of such a page and then\nperform the required update, so after recovery, the page won't be torn\nanymore.\n\nBasically, the idea is that to protect from such torn-writes\n(half-written pages), we have a concept called full-page writes which\nprotects the data from such writes after recovery. 
Before writing to\nany page after a checkpoint, we write its full-page-image in WAL which\nhelps us in recovering from such situations but with your proposed\nmechanism it won't work.\n\nAnother concern I have with this idea is that you want to keep writing\nWAL for such a relation but don't want to replay it in recovery, which\ndoesn't sound like a good idea.\n\nThe idea to keep part of the table as logged and the other as unlogged\nsounds scary to me. Now, IIUC, you are trying to come up with these\nideas because to use Alter Table .. Set Unlogged, one has to rewrite\nthe entire table and if such a table is large, it will be a very\ntime-consuming operation. You might want to explore whether we can\navoid rewriting the table for such an operation but I don't think that\nis easy either. The two problems I could see immediately are (a) we\nhave to change BM_PERMANENT marking of existing buffers of such a\nrelation which again can be a time-consuming operation especially for\na large value of shared_buffers, (b) create an _init fork of such a\nrelation in-sync with the commit. You might want to read the archives\nto see why in the first place we decided to re-write the table\nfor SET UNLOGGED operation, see email [1].\n\n[1] - https://www.postgresql.org/message-id/CAFcNs%2Bpeg3VPG2%3Dv6Lu3vfCDP8mt7cs6-RMMXxjxWNLREgSRVQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 22 Aug 2020 12:23:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "Hello,\r\n\r\n\r\nWhile I think it's worth thinking about a sophisticated feature like Oracle's UNRECOVERABLE data loading (SQL Server's BCP load utility also has such a feature, but only for an empty table), how about an easier approach like MySQL's? I expect this won't complicate Postgres code much.\r\n\r\nThe customer is using Oracle RAC for high availability of a data warehouse. 
Then, I think they can use the traditional shared disk-based HA clustering, not the streaming replication when they migrate to Postgres.\r\n\r\nThey load data into the data warehouse with the nightly ETL or ELT. The loading window is limited, so they run multiple concurrent loading sessions, with the transaction logging off. They probably use all resources for the data loading during that period.\r\n\r\nThen, you might think \"How about turning fsync and full_page_writes to off?\" But the customer doesn't like to be worried about the massive amount of WAL generated during the loading.\r\n\r\n\r\nOTOH, the latest MySQL 8.0.21 introduced the following feature. This is for the initial data loading into a new database instance, though.\r\n\r\n\r\nhttps://dev.mysql.com/doc/refman/8.0/en/innodb-redo-log.html#innodb-disable-redo-logging\r\n--------------------------------------------------\r\nDisabling Redo Logging\r\nAs of MySQL 8.0.21, you can disable redo logging using the ALTER INSTANCE DISABLE INNODB REDO_LOG statement. This functionality is intended for loading data into a new MySQL instance. Disabling redo logging speeds up data loading by avoiding redo log writes and doublewrite buffering.\r\n\r\nWarning\r\nThis feature is intended only for loading data into a new MySQL instance. Do not disable redo logging on a production system. It is permitted to shutdown and restart the server while redo logging is disabled, but an unexpected server stoppage while redo logging is disabled can cause data loss and instance corruption.\r\n\r\nAttempting to restart the server after an unexpected server stoppage while redo logging is disabled is refused with the following error:\r\n\r\n[ERROR] [MY-013578] [InnoDB] Server was killed when Innodb Redo \r\nlogging was disabled. Data files could be corrupt. 
You can try \r\nto restart the database with innodb_force_recovery=6\r\nIn this case, initialize a new MySQL instance and start the data loading procedure again.\r\n--------------------------------------------------\r\n\r\n\r\nFollowing this idea, what do you think about adding a new value \"none\" to wal_level, where no WAL is generated? The setting of wal_level is recorded in pg_control. The startup process can see the value and reject recovery after abnormal shutdown, emitting a message similar to MySQL's.\r\n\r\nJust a quick idea. I hope no devil will appear in the details.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Wed, 26 Aug 2020 07:24:43 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "On Wed, Aug 26, 2020 at 12:54 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n>\n> Following this idea, what do you think about adding a new value \"none\" to wal_level, where no WAL is generated? The setting of wal_level is recorded in pg_control. The startup process can see the value and reject recovery after abnormal shutdown, emitting a message similar to MySQL's.\n>\n\nSo you want your users to shutdown and restart the server before Copy\nbecause that would be required if you want to change the wal_level.\nHowever, even if we do that, users who are running the server\npreviously with wal_level as 'replica' won't be happy after doing this\nchange. 
Because if they change the wal_level to 'none' for certain\noperations like bulk load and then again change back the mode to\n'replica' they need to back up the database again to set up 'replica'\nas they can't continue replication from the previous point (consider\nupdate on a page for which previously WAL was not written).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 26 Aug 2020 16:20:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "From: Amit Kapila <amit.kapila16@gmail.com>\r\n> So you want your users to shutdown and restart the server before Copy\r\n> because that would be required if you want to change the wal_level.\r\n\r\nYes. They seem to be fine with it, as far as I heard from a person who is involved in the system design.\r\n\r\n\r\n> However, even if we do that, users who are running the server\r\n> previously with wal_level as 'replica' won't be happy after doing this\r\n> change. Because if they change the wal_level to 'none' for certain\r\n> operations like bulk load and then again change back the mode to\r\n> 'replica' they need to back up the database again to setup 'replica'\r\n> as they can't continue replication from the previous point (consider\r\n> update on a page for which previously WAL was not written).\r\n\r\nYes, it requires the database backup. The database backup should be a daily task anyway, so I expect it wouldn't impose an extra maintenance burden on the user. Plus, not all users use the streaming replication for HA. 
I think it's helpful for the maturing Postgres to provide some kind of solution for some context so that user can scratch their itches.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Thu, 27 Aug 2020 01:34:26 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "On Thu, Aug 27, 2020 at 7:04 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n>\n> From: Amit Kapila <amit.kapila16@gmail.com>\n> > So you want your users to shutdown and restart the server before Copy\n> > because that would be required if you want to change the wal_level.\n>\n> Yes. They seem to be fine with it, as far as I heard from a person who is involved in the system design.\n>\n>\n> > However, even if we do that, users who are running the server\n> > previously with wal_level as 'replica' won't be happy after doing this\n> > change. Because if they change the wal_level to 'none' for certain\n> > operations like bulk load and then again change back the mode to\n> > 'replica' they need to back up the database again to setup 'replica'\n> > as they can't continue replication from the previous point (consider\n> > update on a page for which previously WAL was not written).\n>\n> Yes, it requires the database backup. The database backup should be a daily task anyway, so I expect it wouldn't impose extra maintenance burdon on the user.\n>\n\nSure, but on a daily basis, one requires only incremental WAL to\ncomplete the backup but in this case, it would require the entire\ndatabase back up unless we have some form of block-level incremental\nbackup method. OTOH, I don't deny that there is some use case with\nwal_level = 'none' for initial data loading as MySQL provides but I\nthink that is a separate feature than what is proposed here (Copy\nUnlogged). 
It might be better if you can start a separate thread for\nthat with some more details on the implementation side as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 27 Aug 2020 09:40:15 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "From: Amit Kapila <amit.kapila16@gmail.com>\r\n> Sure, but on a daily basis, one requires only incremental WAL to\r\n> complete the backup but in this case, it would require the entire\r\n> database back up unless we have some form of block-level incremental\r\n> backup method. \r\n\r\nRegarding the backup time, I think users can shorten it by using the storage device's snapshoting (or split mirroring?), filesystem's snapshot feature.\r\n\r\n\r\n> OTOH, I don't deny that there is some use case with\r\n> wal_level = 'none' for initial data loading as MySQL provides but I\r\n> think that is a separate feature than what is proposed here (Copy\r\n> Unlogged). It might be better if you can start a separate thread for\r\n> that with some more details on the implementation side as well.\r\n\r\nYeah, the feature doesn't match the title of this thread and could confuse other readers that join later. Nevertheless, I think MySQL's feature could be used for additional data loading as well if the user understands what he/she is doing. 
So, we can discuss it in another thread just in case the ongoing discussion gets stuck.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n\r\n", "msg_date": "Thu, 27 Aug 2020 04:41:58 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "Hi.\n\nI expect I have some basic misunderstanding because IMO now this\nthread seems to have come full circle.\n\nEarlier, Osumi-san was rejecting the idea of using ALTER TABLE tbl SET\nUNLOGGED on basis that it is too time consuming for large data to\nswitch the table modes [1].\n\nNow the latest idea is to introduce a wal_level=none. But now\napparently full daily backups are OK, and daily restarting the server\nbefore the copies is also OK [2].\n\n~\n\nDoesn't wal_level=none essentially just behave as if every table was\nUNLOGGED; not just the ones we are loading?\n\nDoesn't wal_level=none come with all the same limitations/requirements\n(full daily backups/restarts etc) that the UNLOGGED TABLE would also\nhave?\n\nSo I don't recognise the difference?\n\nIf wal_level=none is judged OK as a fast loading solution, then why\nwasn't an initially UNLOGGED table also judged OK by the same\ncriteria? 
And if there is no real difference, then why is it necessary\nto introduce wal_level=none (instead of using the existing UNLOGGED\nfeature) in the first place?\n\nOr, if all this problem is simply due to a quirk that the BI tool\nreferred to does not support the CREATE UNLOGGED TABLE syntax [3],\nthen surely some other workaround could be written to handle\nthat.\n\nWhat part am I missing?\n\n--\n[1] - https://www.postgresql.org/message-id/OSBPR01MB48884832932F93DAA953CEB9ED650%40OSBPR01MB4888.jpnprd01.prod.outlook.com\n[2] - https://www.postgresql.org/message-id/TYAPR01MB299005FC543C43348A4993FDFE550%40TYAPR01MB2990.jpnprd01.prod.outlook.com\n[3] - https://www.postgresql.org/message-id/OSBPR01MB4888CBD08DDF73721C18D2C0ED790%40OSBPR01MB4888.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 10 Sep 2020 16:48:55 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "From: Peter Smith <smithpb2250@gmail.com>\r\n> Earlier, Osumi-san was rejecting the idea of using ALTER TABLE tbl SET\r\n> UNLOGGED on basis that it is too time consuming for large data to\r\n> switch the table modes [1].\r\n\r\n> Doesn't wal_level=none essentially just behave as if every table was\r\n> UNLOGGED; not just the ones we are loading?\r\n> \r\n> Doesn't wal_level=none come with all the same limitations/requirements\r\n> (full daily backups/restarts etc) that the UNLOGGED TABLE would also\r\n> have?\r\n\r\nALTER TABLE takes a long time proportional to the amount of existing data, while wal_level = none doesn't.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Thu, 10 Sep 2020 09:16:23 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "On Thu, Sep 10, 2020 at 7:16 PM 
tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n\n> ALTER TABLE takes long time proportional to the amount of existing data, while wal_level = none doesn't.\n\nRight, but if wal_level=none is considered OK for that table with\nexisting data, then why not just create the table UNLOGGED in the\nfirst place? (or ALTER it to set UNLOGGED just one time and then leave\nit as UNLOGGED).\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 11 Sep 2020 11:43:34 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "From: Peter Smith <smithpb2250@gmail.com>\r\nOn Thu, Sep 10, 2020 at 7:16 PM tsunakawa.takay@fujitsu.com\r\n> <tsunakawa.takay@fujitsu.com> wrote:\r\n> > ALTER TABLE takes long time proportional to the amount of existing data,\r\n> while wal_level = none doesn't.\r\n> \r\n> Right, but if wal_level=none is considered OK for that table with\r\n> existing data, then why not just create the table UNLOGGED in the\r\n> first place? (or ALTER it to set UNLOGGED just one time and then leave\r\n> it as UNLOGGED).\r\n\r\nThe target tables sometimes receive updates (for data maintenance and/or correction). They don't want those updates to be lost due to the database server crash. 
Unlogged tables lose their entire contents during crash recovery.\r\n\r\nPlease think like this: logging is the norm, and unlogged operations are exceptions/hacks for some requirement of which the user wants to minimize the use.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n\r\n\r\n", "msg_date": "Fri, 11 Sep 2020 05:15:32 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "At Fri, 11 Sep 2020 05:15:32 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \n> From: Peter Smith <smithpb2250@gmail.com>\n> On Thu, Sep 10, 2020 at 7:16 PM tsunakawa.takay@fujitsu.com\n> > <tsunakawa.takay@fujitsu.com> wrote:\n> > > ALTER TABLE takes long time proportional to the amount of existing data,\n> > while wal_level = none doesn't.\n> > \n> > Right, but if wal_level=none is considered OK for that table with\n> > existing data, then why not just create the table UNLOGGED in the\n> > first place? (or ALTER it to set UNLOGGED just one time and then leave\n> > it as UNLOGGED).\n> \n> The target tables sometimes receive updates (for data maintenance and/or correction). They don't want those updates to be lost due to the database server crash. Unlogged tables lose their entire contents during crash recovery.\n> \n> Please think like this: logging is is the norm, and unlogged operations are exceptions/hacks for some requirement of which the user wants to minimize the use.\n\nI suspect that wal_level=none is a bit too toxic.\n\n\"ALTER TABLE SET UNLOGGED\" doesn't dump a large amount of WAL so I don't\nthink it can be a problem. \"ALTER TABLE SET LOGGED\" also doesn't issue\nWAL while wal_level=minimal but runs a table copy. 
I think the only\nproblem of the UNLOGGED table method is that table copy.\n\nIf we can skip the table-copy when ALTER TABLE SET LOGGED on\nwal_level=minimal, is your objective achieved?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 11 Sep 2020 17:36:19 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "At Fri, 11 Sep 2020 17:36:19 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Fri, 11 Sep 2020 05:15:32 +0000, \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> wrote in \n> > From: Peter Smith <smithpb2250@gmail.com>\n> > On Thu, Sep 10, 2020 at 7:16 PM tsunakawa.takay@fujitsu.com\n> > > <tsunakawa.takay@fujitsu.com> wrote:\n> > > > ALTER TABLE takes long time proportional to the amount of existing data,\n> > > while wal_level = none doesn't.\n> > > \n> > > Right, but if wal_level=none is considered OK for that table with\n> > > existing data, then why not just create the table UNLOGGED in the\n> > > first place? (or ALTER it to set UNLOGGED just one time and then leave\n> > > it as UNLOGGED).\n> > \n> > The target tables sometimes receive updates (for data maintenance and/or correction). They don't want those updates to be lost due to the database server crash. Unlogged tables lose their entire contents during crash recovery.\n> > \n> > Please think like this: logging is is the norm, and unlogged operations are exceptions/hacks for some requirement of which the user wants to minimize the use.\n> \n> I suspect that wal_level=none is a bit too toxic.\n> \n> \"ALTER TABLE SET UNLOGGED\" doesn't dump large amount of WAL so I don't\n> think it can be a problem. \"ALTER TABLE SET LOGGED\" also doesn't issue\n\n(Oops! this runs a table copy)\n\n> WAL while wal_level=minimal but runs a table copy. 
I think the only\n> problem of the UNLOGGED table method is that table copy.\n> \n> If we can skip the table-copy when ALTER TABLE SET LOGGED on\n> wal_level=minimal, is your objective achived?\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 11 Sep 2020 17:57:05 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Implement UNLOGGED clause for COPY FROM" }, { "msg_contents": "From: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> If we can skip the table-copy when ALTER TABLE SET LOGGED on\n> wal_level=minimal, is your objective achived?\n\nI expect so, if we can skip the table copy during ALTER TABLE SET LOGGED/UNLOGGED. On the other hand, both approaches have different pros and cons. It's nice that ALTER TABLE doesn't require database restart, but the user has to specify tables. wal_level = none is vice versa. Anyway, wal_level = none would be useful for initial data loading after creating a new database cluster, as MySQL suggests.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n", "msg_date": "Fri, 11 Sep 2020 09:39:12 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Implement UNLOGGED clause for COPY FROM" } ]
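Stepping back from the exchange above: the UNLOGGED round trip that the participants weigh against a wal_level change can be sketched in plain SQL. This is only an illustrative sketch (table and file names are hypothetical, and it needs a running PostgreSQL server), not what any patch in this thread implements:

```sql
-- Bulk-load path using the existing UNLOGGED machinery discussed above.
-- Hypothetical table/file names; requires a running PostgreSQL server.

CREATE UNLOGGED TABLE sales_fact (id bigint, amount numeric);

-- The load itself writes no WAL for the heap data.
COPY sales_fact FROM '/data/sales.csv' (FORMAT csv);

-- Switching persistence back is the expensive step the thread debates:
-- it rewrites the whole table, and with wal_level above 'minimal' the
-- rewrite is WAL-logged as well.
ALTER TABLE sales_fact SET LOGGED;

-- Trade-off driving the whole discussion: while the table is UNLOGGED,
-- a crash truncates it and it is never replicated to standbys.
```

Horiguchi-san's suggestion of skipping the table copy in SET LOGGED under wal_level=minimal would, if feasible, remove the rewrite cost from the last step.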
[ { "msg_contents": "It seems I cannot access to pgsql-hackers archives.\nhttps://www.postgresql.org/list/pgsql-hackers/2020-07/\n\nError 503 Backend fetch failed\n\nBackend fetch failed\nGuru Meditation:\n\nXID: 68609318\n\nVarnish cache server\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Thu, 09 Jul 2020 12:14:53 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "pgsql-hackers archive broken?" }, { "msg_contents": "On Thu, 9 Jul 2020 at 15:15, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>\n> It seems I cannot access to pgsql-hackers archives.\n> https://www.postgresql.org/list/pgsql-hackers/2020-07/\n>\n> Error 503 Backend fetch failed\n\nI just hit this too. Cross-posting to www.\n\nDavid\n\n\n", "msg_date": "Thu, 9 Jul 2020 15:57:48 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql-hackers archive broken?" }, { "msg_contents": "On Thu, Jul 9, 2020 at 9:28 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 9 Jul 2020 at 15:15, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> >\n> > It seems I cannot access to pgsql-hackers archives.\n> > https://www.postgresql.org/list/pgsql-hackers/2020-07/\n> >\n> > Error 503 Backend fetch failed\n>\n> I just hit this too. Cross-posting to www.\n>\n\nI'm also facing this problem:\nError 503 Backend fetch failed\nBackend fetch failed\nGuru Meditation:\nXID: 67771345\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Jul 2020 09:36:40 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql-hackers archive broken?" 
}, { "msg_contents": "On 2020-Jul-09, vignesh C wrote:\n\n> On Thu, Jul 9, 2020 at 9:28 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > On Thu, 9 Jul 2020 at 15:15, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> > >\n> > > It seems I cannot access to pgsql-hackers archives.\n> > > https://www.postgresql.org/list/pgsql-hackers/2020-07/\n> > >\n> > > Error 503 Backend fetch failed\n\nShould be fixed now.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Jul 2020 00:54:02 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql-hackers archive broken?" } ]
[ { "msg_contents": "Hi,\n\nI had an ALTER TABLE dependency problem reported to me. Here's a\nsimplified version of it:\n\nCREATE TABLE t (a INT, PRIMARY KEY(a));\nALTER TABLE t ADD CONSTRAINT t_fkey FOREIGN KEY (a) REFERENCES t(a) NOT VALID;\nALTER TABLE t VALIDATE CONSTRAINT t_fkey, ALTER a TYPE BIGINT;\n\nWhich results in:\n\nERROR: could not read block 0 in file \"base/12854/16411\": read only 0\nof 8192 bytes\nCONTEXT: SQL statement \"SELECT fk.\"a\" FROM ONLY \"public\".\"t\" fk LEFT\nOUTER JOIN ONLY \"public\".\"t\" pk ON ( pk.\"a\" OPERATOR(pg_catalog.=)\nfk.\"a\") WHERE pk.\"a\" IS NULL AND (fk.\"a\" IS NOT NULL)\"\n\nWhat's going on here is that due to the ALTER TYPE, a table rewrite is\npending. The primary key index of the table is also due to be\nrewritten which ATExecAddIndex() delays due to the pending table\nrewrite. When we process AT_PASS_MISC level changes and attempt to\nvalidate the foreign key constraint, the table is still pending a\nrewrite and the new index still does not exist.\nvalidateForeignKeyConstraint() executes regardless of the pending\nrewrite and bumps into the above error during the SPI call while\ntrying to check the _bt_getrootheight() in get_relation_info().\n\nI think the fix is just to delay the foreign key validation when\nthere's a rewrite pending until the rewrite is complete.\n\nI also considered that we could just delay all foreign key validations\nuntil phase 3, but I ended up just doing them only when a rewrite is\npending.\n\nDavid", "msg_date": "Thu, 9 Jul 2020 15:54:01 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "ALTER TABLE validate foreign key dependency problem" }, { "msg_contents": "On Thu, 9 Jul 2020 at 15:54, David Rowley <dgrowleyml@gmail.com> wrote:\n> I think the fix is just to delay the foreign key validation when\n> there's a rewrite pending until the rewrite is complete.\n\nI looked over this again and only slightly reworded a comment. 
The\nproblem exists as far back as 9.5 so I've attached 3 patches that,\npending any objections, I plan to push about 24 hours from now.\n\n> I also considered that we could just delay all foreign key validations\n> until phase 3, but I ended up just doing then only when a rewrite is\n> pending.\n\nI still wonder if it's best to delay the validation of the foreign key\nregardless of if there's a pending table rewrite, but the patch as it\nis now only delays if there's a pending rewrite.\n\nDavid", "msg_date": "Sun, 12 Jul 2020 16:50:46 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ALTER TABLE validate foreign key dependency problem" }, { "msg_contents": "On Sun, 12 Jul 2020 at 05:51, David Rowley <dgrowleyml@gmail.com> wrote:\n\n\n> > I also considered that we could just delay all foreign key validations\n> > until phase 3, but I ended up just doing then only when a rewrite is\n> > pending.\n>\n> I still wonder if it's best to delay the validation of the foreign key\n> regardless of if there's a pending table rewrite, but the patch as it\n> is now only delays if there's a pending rewrite.\n>\n\nConsistency seems the better choice, so I agree we should validate later in\nall cases. Does changing that have any other effects?\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nMission Critical Databases", "msg_date": "Sun, 12 Jul 2020 21:12:52 +0100", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE validate foreign key dependency problem" }, { "msg_contents": "On Mon, 13 Jul 2020 at 08:13, Simon Riggs <simon@2ndquadrant.com> wrote:\n>\n> On Sun, 12 Jul 2020 at 05:51, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n>>\n>> > I also considered that we could just delay all foreign key validations\n>> > until phase 3, but I ended up just doing then only when a rewrite is\n>> > pending.\n>>\n>> I still wonder if it's best to delay the validation of the foreign key\n>> regardless of if there's a pending table rewrite, but the patch as it\n>> is now only delays if there's a pending rewrite.\n>\n>\n> Consistency seems the better choice, so I agree we should validate later in all cases. Does changing that have any other effects?\n\nThanks for having a look here.\n\nI looked at this again and noticed it wasn't just FOREIGN KEY\nconstraints. CHECK constraints were being validated at the wrong time\ntoo.\n\nI did end up going with unconditionally moving the validation until\nphase 3. I've pushed fixes back to 9.5\n\nDavid\n\n\n", "msg_date": "Tue, 14 Jul 2020 17:10:25 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ALTER TABLE validate foreign key dependency problem" } ]
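For reference, the failing sequence David reported can be paired with a workaround for servers that don't carry the fix. The split into separate statements is my inference from his analysis above (each statement then completes its rewrite before the next validation scan runs), not something the thread states explicitly:

```sql
-- Repro from David's report: combining VALIDATE CONSTRAINT with a
-- rewriting sub-command in one ALTER TABLE fails, because the
-- validation scan runs while the table rewrite (and the rebuilt
-- primary-key index) is still pending.
CREATE TABLE t (a INT, PRIMARY KEY(a));
ALTER TABLE t ADD CONSTRAINT t_fkey FOREIGN KEY (a) REFERENCES t(a) NOT VALID;
ALTER TABLE t VALIDATE CONSTRAINT t_fkey, ALTER a TYPE BIGINT;
-- ERROR:  could not read block 0 in file ... (on unpatched servers)

-- Assumed workaround for unpatched servers: run the sub-commands as
-- separate statements, so the validation sees a complete index and the
-- rewrite happens on its own afterwards.
ALTER TABLE t VALIDATE CONSTRAINT t_fkey;
ALTER TABLE t ALTER a TYPE BIGINT;
```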